Dayu Wu Applied Statistics Lecture Notes

Estimation: MLE

1. Assumption: $\epsilon_i \sim N(0, \sigma^2)$
2. Probability Density Function: $f_X(x) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{1}{2\sigma^2}(x-\mu)^2}$
3. Likelihood Function: $L(\beta_0, \beta_1, \sigma^2) = \prod f_{y_i} = (2\pi\sigma^2)^{-\frac{n}{2}} \exp\{-\frac{1}{2\sigma^2}\sum(y_i - \beta_0 - \beta_1 x_i)^2\}$
4. Log-Likelihood Function: $\log L(\beta_0, \beta_1, \sigma^2) = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum(y_i - \beta_0 - \beta_1 x_i)^2$
5. First-Order Condition: $\frac{\partial \log L}{\partial \sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}\sum(y_i - \beta_0 - \beta_1 x_i)^2 = 0$
6. $\hat{\sigma}^2 = \frac{1}{n}\sum(y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i)^2 = \frac{1}{n}\sum e_i^2$

Estimation: $\hat{\beta}_1$

1. $\hat{\beta}_1$ is a linear combination of the $y_i$: $\hat{\beta}_1 = \frac{\sum(x_i-\bar{x})y_i}{\sum(x_i-\bar{x})^2} = \sum \frac{x_i-\bar{x}}{\sum_j(x_j-\bar{x})^2}\, y_i$
2. Unbiased: $E\hat{\beta}_1 = \sum \frac{x_i-\bar{x}}{\sum_j(x_j-\bar{x})^2}\, E y_i = \sum \frac{x_i-\bar{x}}{\sum_j(x_j-\bar{x})^2}\,(\beta_0 + \beta_1 x_i) = \beta_1$
3. $\mathrm{Var}(\hat{\beta}_1) = \sum \left(\frac{x_i-\bar{x}}{\sum_j(x_j-\bar{x})^2}\right)^2 \mathrm{Var}(y_i) = \frac{\sigma^2}{\sum(x_i-\bar{x})^2} = \frac{\sigma^2}{L_{xx}}$
4. $\hat{\beta}_1 \sim N\left(\beta_1, \frac{\sigma^2}{L_{xx}}\right)$

Estimation: $\hat{\beta}_0$

1. $\hat{\beta}_0$ is a linear combination of the $y_i$: $\hat{\beta}_0 = \bar{y} - \hat{\beta}_1\bar{x} = \frac{1}{n}\sum y_i - \bar{x}\sum \frac{x_i-\bar{x}}{L_{xx}}\, y_i$
2. Unbiased: $E\hat{\beta}_0 = E(\bar{y} - \hat{\beta}_1\bar{x}) = \beta_0$
3. $\mathrm{Cov}(\bar{y}, \hat{\beta}_1) = \mathrm{Cov}\left(\frac{1}{n}\sum y_i, \sum \frac{x_i-\bar{x}}{L_{xx}}\, y_i\right) = \frac{1}{n}\sum \frac{x_i-\bar{x}}{L_{xx}}\,\mathrm{Var}(y_i) = 0$
4. since $\mathrm{Cov}(y_i, y_j) = \mathrm{Cov}(\epsilon_i, \epsilon_j) = 0$ for $i \neq j$
5. $\mathrm{Var}(\hat{\beta}_0) = \mathrm{Var}(\bar{y} - \hat{\beta}_1\bar{x}) = \mathrm{Var}(\bar{y}) + \bar{x}^2\,\mathrm{Var}(\hat{\beta}_1) - 2\bar{x}\,\mathrm{Cov}(\bar{y}, \hat{\beta}_1) = \left(\frac{1}{n} + \frac{\bar{x}^2}{L_{xx}}\right)\sigma^2$
6. $\hat{\beta}_0 \sim N\left(\beta_0, \left(\frac{1}{n} + \frac{\bar{x}^2}{L_{xx}}\right)\sigma^2\right)$
7. $\mathrm{Cov}(\hat{\beta}_0, \hat{\beta}_1) = \mathrm{Cov}(\bar{y} - \bar{x}\hat{\beta}_1, \hat{\beta}_1) = \mathrm{Cov}(\bar{y}, \hat{\beta}_1) - \bar{x}\,\mathrm{Var}(\hat{\beta}_1) = -\frac{\bar{x}}{L_{xx}}\sigma^2$
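The closed-form estimators above can be checked numerically. The sketch below (all data and parameter values are assumed for illustration) computes $\hat{\beta}_1$, $\hat{\beta}_0$, and the MLE $\hat{\sigma}^2$ directly from the formulas, then cross-checks the slope and intercept against `numpy.polyfit`:

```python
import numpy as np

# Simulated data for the model y_i = beta0 + beta1*x_i + eps_i,
# eps_i ~ N(0, sigma^2). All parameter values here are assumptions.
rng = np.random.default_rng(42)
n = 100
beta0_true, beta1_true, sigma = 2.0, 0.5, 1.0
x = rng.uniform(0, 10, n)
y = beta0_true + beta1_true * x + rng.normal(0, sigma, n)

xbar, ybar = x.mean(), y.mean()
Lxx = np.sum((x - xbar) ** 2)

# beta1_hat is a linear combination of the y_i with weights (x_i - xbar)/Lxx
beta1_hat = np.sum((x - xbar) * y) / Lxx
beta0_hat = ybar - beta1_hat * xbar

# The MLE of sigma^2 divides the residual sum of squares by n (not n - 2)
resid = y - beta0_hat - beta1_hat * x
sigma2_mle = np.sum(resid ** 2) / n

# Cross-check against numpy's least-squares fit (coefficients: slope, intercept)
b1_np, b0_np = np.polyfit(x, y, 1)
print(beta1_hat, b1_np)  # the two slopes agree to floating-point precision
```

Because the normal-likelihood MLE of $(\beta_0, \beta_1)$ coincides with ordinary least squares, both routes give identical coefficients; only the variance estimate differs from the usual unbiased version by the factor $n/(n-2)$.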
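The sampling-distribution results for $\hat{\beta}_0$ and $\hat{\beta}_1$ can also be verified by Monte Carlo: holding the design $x$ fixed and redrawing the errors, the empirical variances and covariance of the estimates should match $\sigma^2/L_{xx}$, $(1/n + \bar{x}^2/L_{xx})\sigma^2$, and $-(\bar{x}/L_{xx})\sigma^2$. A sketch under assumed parameter values:

```python
import numpy as np

# Monte Carlo check of Var(beta1_hat), Var(beta0_hat), Cov(beta0_hat, beta1_hat).
# Parameter values and the design x are assumptions chosen for illustration.
rng = np.random.default_rng(0)
n, reps = 30, 20000
beta0, beta1, sigma = 1.0, 2.0, 1.5
x = rng.uniform(0, 5, n)          # fixed design, held constant across replications
xbar = x.mean()
Lxx = np.sum((x - xbar) ** 2)

b0s, b1s = np.empty(reps), np.empty(reps)
for r in range(reps):
    y = beta0 + beta1 * x + rng.normal(0, sigma, n)
    b1s[r] = np.sum((x - xbar) * y) / Lxx      # slope estimate
    b0s[r] = y.mean() - b1s[r] * xbar          # intercept estimate

# Theoretical values from the derivation in the notes
var_b1_theory = sigma**2 / Lxx
var_b0_theory = (1 / n + xbar**2 / Lxx) * sigma**2
cov_theory = -(xbar / Lxx) * sigma**2

print(np.var(b1s), var_b1_theory)
print(np.var(b0s), var_b0_theory)
print(np.cov(b0s, b1s)[0, 1], cov_theory)
```

With 20,000 replications the empirical moments typically land within a few percent of the theoretical values; the negative covariance reflects that (for $\bar{x} > 0$) a steeper fitted slope forces a lower fitted intercept.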