16.322 Stochastic Estimation and Control, Fall 2004
Prof. Vander Velde
Page 7 of 8

With N measurements of equal variance \sigma_n^2, the variance of the estimate is

\[
\sigma_{\hat{x}}^2 = \frac{1}{\sum_{k=1}^{N} \frac{1}{\sigma_n^2}} = \frac{\sigma_n^2}{N},
\qquad
\sigma_{\hat{x}} = \frac{\sigma_n}{\sqrt{N}}
\]

The standard deviation of the estimate (the average) goes down with the square root of the number of observations. This estimator for x can be shown to be the optimum linear estimate of x in the mean-squared-error sense for arbitrary distributions of the n_k, if x is treated as an arbitrary constant. That is, any other linear combination of the z_k will yield a larger mean squared difference between \hat{x} and x if x is an arbitrary constant. For normal noise the minimum-variance linear estimate is the minimum-variance estimate.

An important factor which should bear on the estimation of x, and which has not yet been mentioned, is the possibility of some a priori information about x.
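As a quick numerical check of the 1/√N behavior (a sketch, not part of the notes; the function name `estimate_std` and all sample values are illustrative assumptions), a Monte Carlo experiment can estimate the spread of the sample mean directly:

```python
import random
import statistics

def estimate_std(N, sigma_n=2.0, x_true=5.0, trials=20000, seed=0):
    """Monte Carlo estimate of the standard deviation of the sample-mean
    estimator x_hat = (1/N) * sum(z_k), where z_k = x + n_k."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(trials):
        z = [x_true + rng.gauss(0.0, sigma_n) for _ in range(N)]
        estimates.append(sum(z) / N)  # the average is the linear estimate
    return statistics.pstdev(estimates)

# Theory predicts sigma_xhat = sigma_n / sqrt(N); compare empirically:
for N in (4, 16, 64):
    print(N, round(estimate_std(N), 3), 2.0 / N**0.5)
```

Quadrupling N should roughly halve the observed standard deviation, matching \sigma_n/\sqrt{N}.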
Clearly, if we already had a reasonably accurate notion of the value of x and then took some additional data points – say of poor quality – we certainly would not want to derive an estimate based simply on the new data and ignore the a priori information.

Example: Supplemental measurements

Take N_1 measurements starting with no prior information:

\[
\hat{x}_1 = \frac{\sum_{k=1}^{N_1} \frac{z_k}{\sigma_k^2}}{\sum_{k=1}^{N_1} \frac{1}{\sigma_k^2}}
\]

Later, we take more measurements, for a total of N:

\[
\hat{x} = \frac{\sum_{k=1}^{N} \frac{z_k}{\sigma_k^2}}{\sum_{k=1}^{N} \frac{1}{\sigma_k^2}}
        = \frac{\sum_{k=1}^{N_1} \frac{z_k}{\sigma_k^2} + \sum_{k=N_1+1}^{N} \frac{z_k}{\sigma_k^2}}
               {\sum_{k=1}^{N_1} \frac{1}{\sigma_k^2} + \sum_{k=N_1+1}^{N} \frac{1}{\sigma_k^2}}
\]

The sums over the first N_1 measurements are available from the original estimate, since

\[
\sum_{k=1}^{N_1} \frac{z_k}{\sigma_k^2} = \hat{x}_1 \sum_{k=1}^{N_1} \frac{1}{\sigma_k^2}
\]
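The identity above means the old data need not be kept: the batch estimate over all N points equals the update of \hat{x}_1 with the new measurements. A minimal sketch (function and variable names, and all measurement values, are illustrative assumptions, not from the notes):

```python
def weighted_sums(z, sigma):
    """Return the numerator sum(z_k / sigma_k^2) and the information
    sum(1 / sigma_k^2) of the minimum-variance linear estimate."""
    num = sum(zk / s**2 for zk, s in zip(z, sigma))
    info = sum(1.0 / s**2 for s in sigma)
    return num, info

# First N_1 measurements, no prior information
z1, s1 = [4.9, 5.2, 5.1], [1.0, 1.0, 2.0]
num1, info1 = weighted_sums(z1, s1)
x_hat_1 = num1 / info1

# Later measurements (poorer quality), bringing the total to N
z2, s2 = [5.8, 4.4], [3.0, 3.0]
num2, info2 = weighted_sums(z2, s2)

# Batch estimate over all N points
x_hat_batch = (num1 + num2) / (info1 + info2)

# Supplemental form: x_hat_1 * info1 replaces the first sum exactly,
# so only x_hat_1 and info1 must be remembered from the first batch
x_hat_update = (x_hat_1 * info1 + num2) / (info1 + info2)

print(x_hat_batch, x_hat_update)
```

Because the new points are noisier (larger \sigma_k), they carry less weight, and the combined estimate stays close to \hat{x}_1 rather than being dominated by the poor-quality data.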