iii. Monte Carlo EM

The $t$th iteration of the E-step can be modified by replacing $Q(\theta; \theta^{(t)})$ with a Monte Carlo estimate obtained as follows:

(a) Draw samples of missing values $y_1^{(t)}, \ldots, y_N^{(t)}$ from $f_2(y_{\text{mis}} \mid y_{\text{obs}}; \theta^{(t)})$. Each sample $y_j^{(t)}$ is a vector of the missing values needed to obtain a complete-data vector $y_j = (y_{\text{obs}}, y_j^{(t)})$, for $j = 1, \ldots, N$.

(b) Calculate
$$\hat{Q}(\theta; \theta^{(t)}) = \frac{1}{N} \sum_{j=1}^{N} \log f(y_j; \theta),$$
where $\log f(y_j; \theta)$ is the complete-data log likelihood evaluated at $y_j$.

It is recommended that $N$ be chosen small during the early EM iterations and increased as the iterations progress. Thus the MCEM estimate will move around initially and then converge as $N$ grows in the later iterations.
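To make steps (a) and (b) concrete, here is a minimal MCEM sketch in Python for right-censored exponential data. The model, the censoring point, the schedule for growing $N$, and all variable names are illustrative assumptions, not part of the text; the analytic M-step uses the fact that the averaged complete-data log likelihood $n \log\theta - \theta \cdot \overline{\sum_i y_i}$ is maximized at $\theta = n / \overline{\sum_i y_i}$.

```python
# A minimal MCEM sketch for right-censored exponential data.
# Model, censoring point, and N-schedule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Simulate data: y ~ Exp(rate = 2), right-censored at c = 1.0.
true_rate, c, n = 2.0, 1.0, 200
y = rng.exponential(1.0 / true_rate, size=n)
obs = np.minimum(y, c)        # observed part y_obs
censored = y > c              # which entries have missing tails (y_mis)

theta = 1.0                   # initial guess theta^(0)
for t in range(50):
    # Monte Carlo E-step: draw N imputations of the missing tails.
    # By memorylessness, y_mis | y > c, theta ~ c + Exp(theta).
    N = 10 * (t + 1)          # small N early, larger N later
    totals = np.empty(N)
    for j in range(N):
        y_full = obs.copy()
        y_full[censored] = c + rng.exponential(1.0 / theta, size=censored.sum())
        totals[j] = y_full.sum()   # sufficient statistic of complete data
    # M-step: maximize Qhat(theta) = n*log(theta) - theta*mean(totals).
    theta = n / totals.mean()

print(f"MCEM estimate of the rate: {theta:.3f} (truth {true_rate})")
```

Letting $N$ grow with $t$ mirrors the recommendation above: cheap, noisy E-steps early, and precise ones as the iterates settle down.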
iv. ECM Algorithm

In this modification the M-step is replaced by computationally simpler conditional maximization (CM) steps. Each CM step solves a simple optimization problem that may have an analytical solution or an elementary numerical solution (e.g., a univariate optimization problem). The collection of CM steps that follow the $t$th E-step is called a CM cycle. In each CM step, the conditional expectation of the complete-data log likelihood, $Q(\theta; \theta^{(t)})$, computed in the preceding E-step is maximized subject to a constraint on $\Theta$. The union of all such constraints on $\Theta$ is such that the CM cycle results in maximization over the entire parameter space $\Theta$.

Formally, let $S$ denote the total number of CM-steps in each cycle. For $s = 1, \ldots, S$, the $s$th CM-step in the $t$th cycle requires the maximization of $Q(\theta; \theta^{(t)})$ constrained by some function of $\theta^{(t+(s-1)/S)}$, the maximizer from the $(s-1)$th CM-step in the current cycle. That is, in the $s$th CM-step, find $\theta^{(t+s/S)} \in \Theta_s$ such that
$$Q(\theta^{(t+s/S)}; \theta^{(t)}) \geq Q(\theta; \theta^{(t)}) \quad \text{for all } \theta \in \Theta_s,$$
where $\Theta_s = \{\theta \in \Theta : g_s(\theta) = g_s(\theta^{(t+(s-1)/S)})\}$. At the end of the $S$ CM-steps, $\theta^{(t+1)}$ is set equal to the value obtained in the maximization of the last step, $\theta^{(t+S/S)}$.

To understand this procedure, assume that $\theta$ is partitioned into subvectors $(\theta_1, \ldots, \theta_S)$. Then at each step $s$ of the CM cycle, $Q(\theta; \theta^{(t)})$ is maximized with respect to $\theta_s$ with the other components of $\theta$ held fixed at their values from the previous CM-steps. Thus, in this case the constraint is induced by fixing $(\theta_1, \ldots, \theta_{s-1}, \theta_{s+1}, \ldots, \theta_S)$ at their current values.
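In the partitioned case, a CM cycle is block-coordinate ascent on $Q$. The structural sketch below runs one such cycle per iteration over a two-block partition; the concave quadratic standing in for $Q(\theta; \theta^{(t)})$, the use of SciPy's univariate optimizer, and all names are assumptions for illustration, and a full ECM would refresh $Q$ with an E-step between cycles.

```python
# A structural sketch of the CM cycle in ECM over a partitioned theta.
# Q below is an illustrative stand-in for the E-step surrogate Q(theta; theta^(t)).
import numpy as np
from scipy.optimize import minimize_scalar

def Q(theta):
    # Toy concave quadratic with coupled components: the joint maximizer is
    # not obvious, but each conditional problem is an easy 1-D maximization.
    t1, t2 = theta
    return -(t1 - 1.0) ** 2 - (t2 + 2.0) ** 2 - 0.5 * t1 * t2

def cm_cycle(theta_t):
    """One CM cycle: maximize Q over each subvector theta_s in turn, holding
    the other components fixed (the constraint g_s(theta) = g_s(theta^(t+(s-1)/S)))."""
    theta = np.asarray(theta_t, dtype=float).copy()
    for s in range(len(theta)):            # S = number of CM-steps
        def conditional(x, s=s):
            trial = theta.copy()
            trial[s] = x                   # vary only block s
            return -Q(trial)               # minimize the negative of Q
        theta[s] = minimize_scalar(conditional).x
    return theta                           # plays the role of theta^(t+1)

theta = np.zeros(2)                        # theta^(0)
for t in range(20):
    theta = cm_cycle(theta)                # full ECM: E-step would precede this
print("ECM-style fixed point:", theta.round(3))
```

Each pass maximizes $Q$ within $\Theta_s$, so $Q$ is nondecreasing across the cycle; for this toy surrogate the iterates settle at the joint maximizer $(1.6, -2.4)$, illustrating why the union of the constrained steps can recover maximization over all of $\Theta$.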