16.2 Parameter Estimation

Stella N. Batalama and Dimitri Kazakos

Parameter estimation is the operation of assigning a value in a continuum of alternatives to an unknown parameter based on a set of observations involving some function of the parameter. The estimate is the value assigned to the parameter, and the estimator is the function of the observations that yields the estimate.

The basic elements in parameter estimation are a vector parameter $\theta^m$, a vector space $\Theta^m$ in which $\theta^m$ takes its values, a stochastic process $X(t)$ parameterized by $\theta^m$, and a performance criterion or cost function. The estimate $\hat{\theta}^m(x^n)$, based on the observation vector $x^n = [x_1, x_2, \ldots, x_n]$, is a solution of some optimization problem according to the performance criterion. In the following, the function $f(x^n \mid \theta^m)$ will denote the conditional joint probability density function of the random variables $x_1, \ldots, x_n$.

There are several parameter estimation schemes. If the process $X(t)$ is parametrically known, i.e., if its conditional joint probability density functions are known for each fixed value $\theta^m$ of the vector parameter, then the corresponding parameter estimation scheme is called parametric. If the statistics of the process $X(t)$ are nonparametrically described, i.e., if, given $\theta^m \in \Theta^m$, any joint probability density function of the process is a member of some nonparametric class of probability density functions, then nonparametric estimation schemes arise.

Let $\Gamma^n$ denote the n-dimensional observation space. An estimator $\hat{\theta}^m(x^n)$ of a vector parameter $\theta^m$ is then a function from the observation space, $\Gamma^n$, to the parameter space, $\Theta^m$. Since it is a function of random variables, it is itself a random variable (or random vector).

There are certain stochastic properties of estimators that quantify their quality. In this sense, an estimator is said to be unbiased if its expected value is the true parameter value, i.e., if

$$E_\theta\{\hat{\theta}^m(x^n)\} = \theta^m$$

where the subscript $\theta$ on the expectation symbol denotes that the expectation is taken according to the probability density function $f(x^n \mid \theta^m)$. In the case where the observation space is $\mathbb{R}^n$ and the parameter is a scalar, this expectation is

$$E_\theta\{\hat{\theta}(x^n)\} = \int_{\mathbb{R}^n} \hat{\theta}(x^n)\, f(x^n \mid \theta)\, dx^n$$
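The unbiasedness property is easy to check numerically. The following Python sketch (not from the original text; the Gaussian model, the true parameter value, the sample size, and the seed are illustrative assumptions) draws many observation vectors $x^n$ from $f(x^n \mid \theta)$, applies the sample-mean estimator to each, and averages the results to approximate $E_\theta\{\hat{\theta}(x^n)\}$; for an unbiased estimator this average should converge to $\theta$ itself.

```python
import numpy as np

rng = np.random.default_rng(0)

theta = 2.5        # assumed true scalar parameter: the mean of the Gaussian
n = 20             # observations per trial
trials = 100_000   # Monte Carlo repetitions

# Each row is one observation vector x^n drawn from f(x^n | theta),
# here N(theta, 1) i.i.d. samples.
x = rng.normal(loc=theta, scale=1.0, size=(trials, n))

# Estimator: the sample mean, a function of the observations only.
theta_hat = x.mean(axis=1)

# E_theta{theta_hat(x^n)} approximated by averaging over trials;
# for an unbiased estimator this is close to theta itself.
print(f"true theta       : {theta}")
print(f"mean of estimates: {theta_hat.mean():.4f}")               # ~2.5
print(f"empirical bias   : {abs(theta - theta_hat.mean()):.4f}")  # ~0
```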
The bias of the estimate is the Euclidean norm $\|\theta^m - E_\theta\{\hat{\theta}^m(x^n)\}\|$. Thus, the bias measures the distance between the expected value of the estimate and the true value of the parameter. Clearly, the estimator is unbiased when the bias is zero.

Usually it is also of interest to know the conditional variance of an unbiased estimate. The bias of the estimate $\hat{\theta}^m(x^n)$ and the conditional variance

$$E_\theta\{\|\hat{\theta}^m(x^n) - E_\theta\{\hat{\theta}^m(x^n)\}\|^2 \mid \theta^m\}$$

generally represent a trade-off. Indeed, an unbiased estimate may induce a relatively large variance; on the other hand, the introduction of some low-level bias may then result in a significant reduction of the induced variance.
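A minimal sketch of this trade-off, under assumptions not taken from the text (Gaussian data with a chosen true variance and sample size): the maximum-likelihood variance estimator divides by $n$ and is biased low, while dividing by $n - 1$ removes the bias. The biased version trades that small bias for a smaller variance.

```python
import numpy as np

rng = np.random.default_rng(1)

sigma2 = 4.0       # assumed true parameter: the variance of the Gaussian
n = 10             # observations per trial
trials = 200_000   # Monte Carlo repetitions

x = rng.normal(loc=0.0, scale=np.sqrt(sigma2), size=(trials, n))

# Unbiased estimator (divides by n - 1) vs. the biased ML estimator
# (divides by n), which shrinks the estimate slightly toward zero.
s2_unbiased = x.var(axis=1, ddof=1)
s2_biased = x.var(axis=1, ddof=0)

for name, est in [("unbiased (n-1)", s2_unbiased), ("biased (n)  ", s2_biased)]:
    bias = est.mean() - sigma2
    var = est.var()
    mse = np.mean((est - sigma2) ** 2)   # mse = bias^2 + variance
    print(f"{name}  bias={bias:+.3f}  var={var:.3f}  mse={mse:.3f}")
```

For a small Gaussian sample like this, the divide-by-$n$ estimator typically shows both the smaller variance and the smaller mean-squared error, which is exactly the low-level-bias-for-lower-variance exchange described above.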