16.322 Stochastic Estimation and Control, Fall 2004, Prof. Vander Velde (Page 2 of 8)

• More computation unless you know each conditional density is going to be normal
• Must provide $f(x)$, the a priori distribution. This is both the advantage and the disadvantage of this method.

Other estimators include the effect of a priori information directly. Several estimators are based on the conditional probability distribution of $x$ given the values of the observations. In this approach, we think of $x$ as a random variable having some distribution. This troubles some people, since we know $x$ is in fact fixed at some value throughout the experiment. However, the fact that we do not know what that value is is expressed in terms of a distribution of possible values for $x$. The extent of our a priori knowledge is reflected in the variance of the a priori distribution we assign.

Having an a priori distribution for $x$, and the values of the observations, we can in principle, and often in fact, calculate the conditional distribution of $x$ given the observations. This is the a posteriori distribution, $f(x \mid z_1, \ldots, z_N)$. This distribution expresses the probability density for various values of $x$ given the values of the observations and the a priori distribution. Having this distribution, one can define a number of reasonable estimates.

One is the minimum variance estimate: that value $\hat{x}$ which minimizes the error variance

$$\mathrm{Var} = \int_{-\infty}^{\infty} (\hat{x} - x)^2 f(x \mid z_1, \ldots, z_N)\, dx$$

But a derivative of this variance with respect to $\hat{x}$ shows the minimizing value of $\hat{x}$ to be

$$\hat{x} = \int_{-\infty}^{\infty} x\, f(x \mid z_1, \ldots, z_N)\, dx$$

which is the conditional mean: the mean of the conditional distribution of $x$.

Another reasonable estimate of $x$ based on this conditional distribution is the value at the maximum probability density.
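The derivative step above can be written out in full. This is a short sketch, assuming the conditional density is normalized to unit area and that differentiation under the integral sign is valid:

```latex
\begin{align}
\frac{d\,\mathrm{Var}}{d\hat{x}}
  &= \frac{d}{d\hat{x}} \int_{-\infty}^{\infty} (\hat{x}-x)^2 f(x \mid z_1,\ldots,z_N)\,dx \\
  &= 2\int_{-\infty}^{\infty} (\hat{x}-x)\, f(x \mid z_1,\ldots,z_N)\,dx \\
  &= 2\hat{x}\underbrace{\int_{-\infty}^{\infty} f(x \mid z_1,\ldots,z_N)\,dx}_{=\,1}
     \;-\; 2\int_{-\infty}^{\infty} x\, f(x \mid z_1,\ldots,z_N)\,dx
\end{align}
```

Setting this to zero gives $\hat{x} = \int_{-\infty}^{\infty} x\, f(x \mid z_1,\ldots,z_N)\,dx$, and the second derivative is $2 > 0$, so this stationary point is indeed the minimum.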
This can perfectly well be called the "maximum likelihood" estimate, though it is not necessarily the same as the maximum likelihood estimate we have just derived. Schweppe calls it the MAP (maximum a posteriori probability) estimator. The two are related as follows: the first is the $x$ which maximizes

$$f(x \mid z_1, \ldots, z_N) = \frac{f(z_1, \ldots, z_N, x)}{f(z_1, \ldots, z_N)} = \frac{f(z_1, \ldots, z_N \mid x)\, f(x)}{f(z_1, \ldots, z_N)}$$
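Both estimates can be computed numerically from the a posteriori density. Below is a minimal sketch with made-up numbers (the prior parameters, noise level, and observed values are all illustrative assumptions, not from the notes): a scalar $x$ with a Gaussian a priori distribution, measurements $z_i = x + v_i$ with Gaussian noise, and the posterior evaluated on a grid via Bayes' rule.

```python
import numpy as np

# Illustrative (assumed) numbers: x ~ N(m0, s0^2) a priori,
# measurements z_i = x + v_i with v_i ~ N(0, sv^2).
m0, s0 = 0.0, 2.0
sv = 1.0
z = np.array([1.2, 0.8, 1.5])

# Evaluate the a posteriori density f(x | z1,...,zN) on a grid using
# Bayes' rule: f(x | z) is proportional to f(z | x) * f(x).
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
log_post = -0.5 * ((x - m0) / s0) ** 2        # log prior, up to a constant
for zi in z:
    log_post += -0.5 * ((zi - x) / sv) ** 2   # log likelihood of each z_i
post = np.exp(log_post)
post /= post.sum() * dx                       # normalize to unit area

x_mv = (x * post).sum() * dx   # conditional mean: minimum-variance estimate
x_map = x[np.argmax(post)]     # MAP estimate: peak of the posterior

print(x_mv, x_map)
```

With a Gaussian prior and Gaussian noise the posterior is itself Gaussian, so the conditional mean and the posterior peak coincide; for a skewed or multimodal posterior the two estimates generally differ, which is the point of the distinction drawn above.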