discretization). The remainder of the paper is concerned with the specific problem of the rigid-sphere collision model. The number of iterations of the Metropolis algorithm seems to be limited: 16 steps for burn-in and 48 to 64 subsequent iterations (that still required four to five hours on the Los Alamos MANIAC).

An interesting variation of (1) is the Simulated Annealing algorithm, developed by Kirkpatrick et al. (1983), who connected optimization with annealing, the cooling of a metal. Their variation is to allow T of (1) to change as the algorithm runs, according to a "cooling schedule", and the Simulated Annealing algorithm can be shown to find the global maximum with probability 1, although the analysis is quite complex due to the fact that, with varying T, the algorithm is no longer a time-homogeneous Markov chain.
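To make the cooling-schedule idea concrete, here is a minimal Python sketch of simulated annealing built on a Metropolis-type acceptance step, written for minimizing an energy function (equivalently, maximizing its negative). The energy function, the Gaussian random-walk proposal, and the geometric cooling schedule are illustrative assumptions, not the specific choices of Kirkpatrick et al. (1983).

    import math
    import random

    def simulated_annealing(energy, x0, n_iter=10_000, t0=1.0, cooling=0.999):
        """Minimize `energy` with a Metropolis step whose temperature T shrinks
        according to a geometric cooling schedule (an illustrative choice)."""
        x, e = x0, energy(x0)
        best_x, best_e = x, e
        T = t0
        for _ in range(n_iter):
            # Random-walk proposal (illustrative Gaussian step).
            y = x + random.gauss(0.0, 1.0)
            e_y = energy(y)
            # Metropolis acceptance at the current temperature T, as in (1).
            if e_y <= e or random.random() < math.exp(-(e_y - e) / T):
                x, e = y, e_y
                if e < best_e:
                    best_x, best_e = x, e
            T *= cooling  # the "cooling schedule": T decreases as the run proceeds
        return best_x, best_e

    # Example: a multimodal one-dimensional energy with global minimum near x = 2.
    if __name__ == "__main__":
        f = lambda x: (x - 2.0) ** 2 + 2.0 * math.sin(5.0 * x)
        print(simulated_annealing(f, x0=-5.0))

Because T shrinks over the run, the chain is no longer time-homogeneous, which is the source of the technical difficulty mentioned above.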
2.2 The Hastings (1970) paper

The Metropolis algorithm was later generalized by Hastings (1970) and Peskun (1973, 1981) as a statistical simulation tool that could overcome the curse of dimensionality met by regular Monte Carlo methods (already emphasized in Metropolis et al. 1953).⁵

In his Biometrika paper,⁶ Hastings (1970) also defines his methodology on finite and reversible Markov chains, treating the continuous case by using a discretization analogy. The generic probability of acceptance for a move from state i to state j is
\[
\alpha_{ij} = \frac{s_{ij}}{1 + \dfrac{\pi_i}{\pi_j}\,\dfrac{q_{ij}}{q_{ji}}},
\]
where s_{ij} is a symmetric function. This generic form of probability encompasses the forms of both Metropolis et al. (1953) and Barker (1965). At this stage, Peskun's ordering is not yet discovered and Hastings thus mentions that little is known about the relative merits of those two choices (even though) Metropolis's method may be preferable. He also warns against high rejection rates as indicative of a poor choice of transition matrix, but does not mention the opposite pitfall of low rejection rates, associated with a slow exploration of the target.

The examples given in the paper are a Poisson target with a ±1 random walk proposal, a normal target with a uniform random walk proposal mixed with its reflection (i.e. centered at −X(t) rather than X(t)), and then a multivariate target where Hastings introduces a Gibbs sampling strategy, updating one component at a time and defining the composed transition as satisfying the stationary condition because each component does leave the target distribution invariant.

⁵In fact, Hastings starts by mentioning a decomposition of the target distribution into a product of one-dimensional conditional distributions but this falls short of an early Gibbs sampler!
⁶Hastings (1970) is one of the ten papers reproduced in the Biometrika 100th anniversary volume by Titterington and Cox (2001).
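To make the generic acceptance rule concrete, the following Python sketch (a minimal illustration, not Hastings' original implementation) runs his first example, a Poisson target with a ±1 random walk proposal, under both the Metropolis and the Barker choices of the symmetric function s_{ij}; the Poisson mean, the chain length, and the function names are assumptions made for the illustration.

    import math
    import random

    def poisson_pmf(k, lam):
        """Poisson target probability; zero for negative states."""
        return math.exp(-lam) * lam**k / math.factorial(k) if k >= 0 else 0.0

    def hastings_chain(lam=3.0, n_iter=10_000, rule="metropolis", x0=0):
        """Sample a Poisson(lam) target with a +/-1 random walk proposal,
        using Hastings' generic acceptance with either the Metropolis or
        the Barker choice of the symmetric function s_ij."""
        x = x0
        chain = []
        for _ in range(n_iter):
            y = x + random.choice((-1, 1))    # symmetric proposal: q_ij = q_ji
            pi_x, pi_y = poisson_pmf(x, lam), poisson_pmf(y, lam)
            if pi_y == 0.0:                   # moves to negative states are rejected
                alpha = 0.0
            else:
                ratio = pi_x / pi_y           # (pi_i/pi_j)(q_ij/q_ji), q-ratio = 1 here
                if rule == "metropolis":      # equivalent to s_ij = 1 + min(ratio, 1/ratio)
                    alpha = min(1.0, 1.0 / ratio)
                else:                         # Barker: s_ij = 1, so alpha = pi_j/(pi_i + pi_j)
                    alpha = 1.0 / (1.0 + ratio)
            if random.random() < alpha:
                x = y
            chain.append(x)
        return chain

    # Both rules target the same Poisson(3) distribution; Metropolis accepts more often.
    if __name__ == "__main__":
        for rule in ("metropolis", "barker"):
            draws = hastings_chain(rule=rule)
            print(rule, sum(draws) / len(draws))  # sample mean should be near 3

Both choices leave the Poisson target invariant; Peskun's later ordering shows why the Metropolis choice, with its higher acceptance probability, dominates Barker's.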