of the examples now processed by MCMC algorithms could not have been treated previously, even though the hundreds of dimensions processed in Metropolis et al. (1953) were quite formidable. However, by the mid-1980s the pieces were all in place.

After Peskun, MCMC in the statistical world was dormant for about 10 years, and then several papers appeared that highlighted its usefulness in specific settings (see, for example, Geman and Geman 1984, Tanner and Wong 1987, Besag 1989). In particular, Geman and Geman (1984), building on Metropolis et al. (1953), Hastings (1970), and Peskun (1973), influenced Gelfand and Smith (1990) to write a paper that is the genuine starting point for an intensive use of MCMC methods by the statistical community. It sparked new interest in Bayesian methods, statistical computing, algorithms, and stochastic processes through the use of computing algorithms such as the Gibbs sampler and the Metropolis–Hastings algorithm. (See Casella and George 1992 for an elementary introduction to the Gibbs sampler.⁷)

Interestingly, the earlier Tanner and Wong (1987) has essentially the same ingredients as Gelfand and Smith (1990), namely the fact that simulating from the conditional distributions is sufficient to simulate (in the limiting sense) from the joint. This paper was considered important enough to be a discussion paper in the Journal of the American Statistical Association, but its impact was somewhat limited compared with that of Gelfand and Smith (1990). There are several reasons for this: one is that the method seemed to apply only to missing-data problems (hence the name data augmentation rather than Gibbs sampling), and another is that the authors were more focused on approximating the posterior distribution. They suggested a (Markov chain) Monte Carlo approximation to the target π(θ|x) at each iteration of the sampler, based on

$$\frac{1}{m}\sum_{k=1}^{m}\pi(\theta\mid x, z_{t,k}),\qquad z_{t,k}\sim\hat\pi_{t-1}(z\mid x),$$

that is, by replicating the simulations from the current approximation π̂_{t−1}(z|x) of the marginal posterior distribution of the missing data m times. This focus on estimation of the posterior distribution connects the original Data Augmentation algorithm to EM, as pointed out by Dempster in the discussion. Although the discussion by Morris gets very close to the two-stage Gibbs sampler for hierarchical models, he is still concerned about doing m iterations, and worries about how costly that would be. Tanner and Wong mention

⁷ On a humorous note, the original Technical Report of this paper was called Gibbs for Kids, which was changed because a referee did not appreciate the humor. However, our colleague Dan Gianola, an Animal Breeder at Wisconsin, liked the title. In using Gibbs sampling in his work, he gave a presentation in 1993 at the 44th Annual Meeting of the European Association for Animal Production, Århus, Denmark. The title: Gibbs for Pigs.
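To make the recursion above concrete, here is a minimal Python sketch of Tanner and Wong's chained data augmentation, run on the classic genetic-linkage multinomial model that is a standard illustration in this literature (the data values, the conditional distributions, and all variable names are textbook conventions supplied here for concreteness, not details taken from this excerpt). Each sweep draws m imputations z_{t,k} from the current approximation π̂_{t−1}(z|x) and the new approximation of π(θ|x) is the m-component mixture of the displayed formula.

```python
import numpy as np

rng = np.random.default_rng(0)

# Genetic-linkage counts commonly used to illustrate data augmentation
# (assumed example data, not from this paper).
x1, x2, x3, x4 = 125, 18, 20, 34

m = 100        # replicates per sweep: the 'm' of the displayed formula
n_iter = 50    # data-augmentation sweeps

# Conditionals for this model:
#   z | theta, x ~ Binomial(x1, (theta/4) / (1/2 + theta/4))
#   theta | z, x ~ Beta(z + x4 + 1, x2 + x3 + 1)
# pi_hat_t(theta|x) is the mixture (1/m) sum_k Beta(z_{t,k}+x4+1, x2+x3+1),
# so we represent it by the pool of imputed z values.
z_pool = rng.integers(0, x1 + 1, size=m)   # crude initial imputations

for t in range(n_iter):
    new_z = np.empty(m, dtype=np.int64)
    for k in range(m):
        # draw theta from the current mixture pi_hat_{t-1}(theta|x):
        # pick a component at random, then sample its Beta distribution
        z = z_pool[rng.integers(m)]
        theta = rng.beta(z + x4 + 1, x2 + x3 + 1)
        # re-impute the latent split of the first multinomial cell
        p = (theta / 4) / (0.5 + theta / 4)
        new_z[k] = rng.binomial(x1, p)
    z_pool = new_z

# draws from the final mixture approximation of pi(theta | x)
theta_draws = rng.beta(z_pool + x4 + 1, x2 + x3 + 1)
print(f"approx. posterior mean of theta: {theta_draws.mean():.3f}")
```

Setting m = 1 in this sketch collapses the recursion to the two-stage Gibbs sampler alluded to in Morris's discussion, which is precisely why the m replications he worried about are not needed when the goal is simulation rather than pointwise estimation of the posterior density.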