(b). For each $\theta \in \Theta$ the derivatives $\partial^{i} \log f(x; \theta)/\partial\theta^{i}$, $i = 1, 2, 3$, exist for all $x \in \mathcal{X}$;

(c). $0 < E\left[ (\partial/\partial\theta) \log f(x; \theta) \right]^{2} < \infty$ for all $\theta \in \Theta$.

In the case of unbiased estimators the inequality takes the form
\[
Var(\theta^{*}) \geq \left[ E\left( \frac{\partial \log f(x; \theta)}{\partial \theta} \right)^{2} \right]^{-1};
\]
the inverse of the lower bound is called Fisher's information number and is denoted by $I_n(\theta)$.\footnote{It must be borne in mind that the information matrix is a function of the sample size $n$.}

Definition 10 (multi-parameter Cramér-Rao Theorem): An unbiased estimator $\hat{\theta}$ of $\theta$ is said to be fully efficient if
\[
Var(\hat{\theta}) = \left\{ E\left[ \frac{\partial \log f(x; \theta)}{\partial \theta} \left( \frac{\partial \log f(x; \theta)}{\partial \theta} \right)' \right] \right\}^{-1} = \left\{ E\left[ -\frac{\partial^{2} \log f(x; \theta)}{\partial \theta \, \partial \theta'} \right] \right\}^{-1},
\]
where
\[
I_n(\theta) = E\left[ \frac{\partial \log f(x; \theta)}{\partial \theta} \left( \frac{\partial \log f(x; \theta)}{\partial \theta} \right)' \right] = E\left[ -\frac{\partial^{2} \log f(x; \theta)}{\partial \theta \, \partial \theta'} \right]
\]
is called the sample information matrix.

Proof (for the case that $\theta$ is $1 \times 1$): Given that $f(x_1, x_2, \ldots, x_n; \theta)$ is the joint density function of the sample, it possesses the property that
\[
\int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f(x_1, x_2, \ldots, x_n; \theta) \, dx_1 \cdots dx_n = 1,
\]
or, more compactly,
\[
\int_{-\infty}^{\infty} f(\mathbf{x}; \theta) \, d\mathbf{x} = 1.
\]
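As a concrete illustration of the scalar bound (the normal example below is added here and is not part of the original notes), consider a random sample $x_1, \ldots, x_n$ from $N(\mu, \sigma^{2})$ with $\sigma^{2}$ known. The score and the information number are
\[
\frac{\partial \log f(\mathbf{x}; \mu)}{\partial \mu} = \frac{1}{\sigma^{2}} \sum_{i=1}^{n} (x_i - \mu),
\qquad
I_n(\mu) = E\left[ -\frac{\partial^{2} \log f(\mathbf{x}; \mu)}{\partial \mu^{2}} \right] = \frac{n}{\sigma^{2}},
\]
so the unbiased estimator $\bar{x}_n$ has $Var(\bar{x}_n) = \sigma^{2}/n = I_n(\mu)^{-1}$; it attains the Cramér-Rao lower bound and is therefore fully efficient in the sense of Definition 10. Note also that $I_n(\mu) = n \, I_1(\mu)$, which illustrates the footnote's remark that the information number is a function of the sample size $n$.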