the technical aspects. A comprehensive entry with further references can also be found in Robert and Casella (2004). We will distinguish between the introduction of Metropolis-Hastings based algorithms and those related to Gibbs sampling, since they stem from radically different origins, even though their mathematical justification via Markov chain theory is the same. Tracing the development of Monte Carlo methods, we will also briefly mention what we might call the “second-generation MCMC revolution”. Starting in the mid-to-late 1990s, this includes the development of particle filters, reversible jump and perfect sampling, and concludes with more current work on population or sequential Monte Carlo, regeneration, and the computing of “honest” standard errors. (But is it still history?!)

As mentioned above, the realization that Markov chains could be used in a wide variety of situations only came (to mainstream statisticians) with Gelfand and Smith (1990), despite earlier publications in the statistical literature like Hastings (1970), Geman and Geman (1984) and Tanner and Wong (1987). Several reasons can be advanced: lack of computing machinery (think of the computers of 1970!), lack of background on Markov chains, lack of trust in the practicality of the method... It thus required visionary researchers like Alan Gelfand and Adrian Smith to spread the good news, backed up with a collection of papers that demonstrated, through a series of applications, that the method was easy to understand, easy to implement and practical (Gelfand et al. 1990, 1992, Smith and Gelfand 1992, Wakefield et al. 1994). The rapid emergence of the dedicated BUGS (Bayesian inference Using Gibbs Sampling) software as early as 1991 (when a paper on BUGS was presented at the Valencia meeting) was another compelling argument for adopting MCMC algorithms at large.¹

2 Before the Revolution

Monte Carlo methods were born in Los Alamos, New Mexico during World War II, eventually resulting in the Metropolis algorithm in the early 1950s. While Monte Carlo methods were in use by that time, MCMC was brought closer to statistical practicality by the work of Hastings in the 1970s.

What can be reasonably seen as the first MCMC algorithm is the Metropolis algorithm, published by Metropolis et al. (1953). It emanates from the same group of scientists who produced the Monte Carlo method, namely the research scientists of Los Alamos, mostly

¹ Historically speaking, the development of BUGS initiated from Geman and Geman (1984) and Pearl (1987), in tune with the developments in the artificial intelligence community, and it pre-dates Gelfand and Smith (1990).