to construction of chains with faster convergence, and to the work of Hobert and Marchev (2008), who give precise constructions and theorems to show how parameter expansion can uniformly improve over the original chain.

4.2 Advances in MCMC Applications

The real reason for the explosion of MCMC methods was the fact that an enormous number of problems that were deemed to be computational nightmares now cracked open like eggs. As an example, consider this very simple random effects model from Gelfand and Smith (1990). Observe

Yij = θi + εij ,    i = 1, . . . , K,  j = 1, . . . , J,    (2)

where

θi ∼ N(µ, σθ²),    εij ∼ N(0, σε²), independent of θi.

Estimation of the variance components can be difficult for a frequentist (REML is typically preferred), but it was indeed a nightmare for a Bayesian, as the integrals were intractable. However, with the usual priors on µ, σθ², and σε², the full conditionals are trivial to sample from and the problem is easily solved via Gibbs sampling. Moreover, we can increase the number of variance components and the Gibbs solution remains easy to implement.
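To make "the full conditionals are trivial" concrete, here is a minimal sketch of a Gibbs sampler for model (2). The specific priors (a flat prior on µ and inverse-gamma priors on the two variance components) and all hyperparameter values are illustrative assumptions, not choices taken from Gelfand and Smith (1990); the point is only that each conditional draw is a one-line simulation from a standard distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate data from model (2): Y_ij = theta_i + eps_ij.
K, J = 20, 5
mu_true, s2_th_true, s2_eps_true = 2.0, 1.0, 0.5
theta_true = rng.normal(mu_true, np.sqrt(s2_th_true), K)
Y = theta_true[:, None] + rng.normal(0.0, np.sqrt(s2_eps_true), (K, J))

# Assumed priors (for illustration only): mu flat,
# sigma^2_theta ~ IG(a_th, b_th), sigma^2_eps ~ IG(a_eps, b_eps).
a_th, b_th = 2.0, 1.0
a_eps, b_eps = 2.0, 1.0

def gibbs(Y, n_iter=5000):
    K, J = Y.shape
    ybar = Y.mean(axis=1)                 # group means
    mu, s2_th, s2_eps = ybar.mean(), 1.0, 1.0
    draws = np.empty((n_iter, 3))         # store (mu, s2_theta, s2_eps)
    for t in range(n_iter):
        # theta_i | rest: normal, precision-weighted blend of ybar_i and mu.
        prec = J / s2_eps + 1.0 / s2_th
        mean = (J * ybar / s2_eps + mu / s2_th) / prec
        theta = rng.normal(mean, np.sqrt(1.0 / prec))
        # mu | rest (flat prior): N(mean of theta, s2_theta / K).
        mu = rng.normal(theta.mean(), np.sqrt(s2_th / K))
        # s2_theta | rest: inverse-gamma (draw Gamma, invert; scale = 1/rate).
        s2_th = 1.0 / rng.gamma(a_th + K / 2.0,
                                1.0 / (b_th + 0.5 * np.sum((theta - mu) ** 2)))
        # s2_eps | rest: inverse-gamma update from the residuals.
        s2_eps = 1.0 / rng.gamma(a_eps + K * J / 2.0,
                                 1.0 / (b_eps + 0.5 * np.sum((Y - theta[:, None]) ** 2)))
        draws[t] = (mu, s2_th, s2_eps)
    return draws

draws = gibbs(Y)
post = draws[1000:].mean(axis=0)          # discard burn-in
print(post)  # posterior means of (mu, sigma^2_theta, sigma^2_eps)
```

No integral is ever evaluated: the marginal posteriors of the variance components, intractable analytically, come out as histograms of the stored draws.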
During the early 1990s, researchers found that Gibbs, or Metropolis-Hastings, algorithms would crack almost any problem that they looked at, and there was a veritable flood of papers applying MCMC to previously intractable models, and getting good solutions. For example, building on (2), it was quickly realized that Gibbs sampling was an easy route to getting estimates in linear mixed models (Wang et al. 1993, 1994), and even generalized linear mixed models (Zeger and Karim 1991). Demarginalization (the introduction of latent variables) arguments made it possible to analyze probit models using a latent variable approach in a linear mixed model (Albert and Chib 1993), and demarginalization was also a route to estimation in mixture models with Gibbs sampling (see, for example, Robert 1996). It progressively dawned on the community that latent variables could be artificially introduced to run the Gibbs sampler in about every situation, as eventually published in Damien et al. (1999), the main example being the slice sampler (Neal 2003). A (very incomplete) list of some other applications includes changepoint analysis (Carlin et al. 1992, Stephens 1994); genomics (Lawrence et al. 1993, Stephens and Smith 1993, Churchill 1995); capture-recapture (George and Robert 1992, Dupuis 1995); variable selection in re-