7.8 Adaptive and Recursive Monte Carlo Methods

with $\lambda$ a Lagrange multiplier. Note that the middle term does not depend on $p$. The variation (which comes inside the integrals) gives

\[
0 = -\frac{f^2}{p^2} + \lambda
\qquad \text{or} \qquad
p = \frac{|f|}{\sqrt{\lambda}} = \frac{|f|}{\int |f| \, dV}
\tag{7.8.6}
\]

where $\lambda$ has been chosen to enforce the constraint (7.8.2).

If $f$ has one sign in the region of integration, then we get the obvious result that the optimal choice of $p$ (if one can figure out a practical way of effecting the sampling) is that it be proportional to $|f|$. Then the variance is reduced to zero. Not so obvious, but seen to be true, is the fact that $p \propto |f|$ is optimal even if $f$ takes on both signs. In that case the variance per sample point (from equations 7.8.4 and 7.8.6) is

\[
S = S_{\text{optimal}} = \left[ \int |f| \, dV \right]^2 - \left[ \int f \, dV \right]^2
\tag{7.8.7}
\]

One curiosity is that one can add a constant to the integrand to make it all of one sign, since this changes the integral by a known amount, constant $\times V$. Then the optimal choice of $p$ always gives zero variance, that is, a perfectly accurate integral! The resolution of this seeming paradox (already mentioned at the end of §7.6) is that perfect knowledge of $p$ in equation (7.8.6) requires perfect knowledge of $\int |f| \, dV$, which is tantamount to already knowing the integral you are trying to compute!

If your function $f$ takes on a known constant value in most of the volume $V$, it is certainly a good idea to add a constant so as to make that value zero. Having done that, the accuracy attainable by importance sampling depends in practice not on how small equation (7.8.7) is, but rather on how small equation (7.8.4) is for an implementable $p$, likely only a crude approximation to the ideal.
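The short C program below is a minimal sketch (it is not code from the Numerical Recipes library, and the function, interval, sample size, and random-number generator are our own choices) illustrating this last point numerically. It estimates $\int_0^1 x^2 \, dx = 1/3$ twice: once by plain uniform sampling, and once by importance sampling with the easily drawn density $p(x) = 2x$, which is only a crude approximation to the ideal $p \propto |f|$ but still reduces the variance per sample point from $4/45 \approx 0.089$ to $1/72 \approx 0.014$.

```c
/* A minimal sketch (not Numerical Recipes code): importance sampling versus
   plain uniform sampling for the integral of f(x) = x^2 on [0,1], whose exact
   value is 1/3.  The sampling density p(x) = 2x is easy to draw from and is
   only a crude approximation to the ideal p proportional to |f|, as the text
   warns is typical in practice. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

static double f(double x) { return x * x; }

/* Uniform deviate in (0,1); rand() is crude but adequate for a demonstration. */
static double uni(void) { return (rand() + 0.5) / ((double)RAND_MAX + 1.0); }

int main(void)
{
    const long N = 100000;
    double su = 0.0, su2 = 0.0;   /* accumulators, uniform sampling    */
    double si = 0.0, si2 = 0.0;   /* accumulators, importance sampling */
    long i;

    srand(12345);
    for (i = 0; i < N; i++) {
        /* Plain Monte Carlo: average f at uniformly distributed points. */
        double fu = f(uni());
        su += fu;  su2 += fu * fu;

        /* Importance sampling: draw x with density p(x) = 2x by inverting the
           cumulative distribution (x = sqrt(u)), then average f(x)/p(x).     */
        double x = sqrt(uni());
        double w = f(x) / (2.0 * x);
        si += w;  si2 += w * w;
    }

    double mu = su / N, varu = su2 / N - mu * mu;
    double mi = si / N, vari = si2 / N - mi * mi;

    printf("uniform    : %.6f +/- %.6f\n", mu, sqrt(varu / N));
    printf("importance : %.6f +/- %.6f\n", mi, sqrt(vari / N));
    return 0;
}
```

Both estimates converge to 1/3, but the importance-sampled one carries a noticeably smaller statistical error at the same $N$, since the weighted samples $f(x)/p(x) = x/2$ fluctuate far less than $f$ itself.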
Stratified Sampling

The idea of stratified sampling is quite different from importance sampling. Let us expand our notation slightly and let $\langle\langle f \rangle\rangle$ denote the true average of the function $f$ over the volume $V$ (namely the integral divided by $V$), while $\langle f \rangle$ denotes as before the simplest (uniformly sampled) Monte Carlo estimator of that average:

\[
\langle\langle f \rangle\rangle \equiv \frac{1}{V} \int f \, dV
\qquad\qquad
\langle f \rangle \equiv \frac{1}{N} \sum_{i} f(x_i)
\tag{7.8.8}
\]

The variance of the estimator, $\mathrm{Var}(\langle f \rangle)$, which measures the square of the error of the Monte Carlo integration, is asymptotically related to the variance of the function, $\mathrm{Var}(f) \equiv \langle\langle f^2 \rangle\rangle - \langle\langle f \rangle\rangle^2$, by the relation

\[
\mathrm{Var}(\langle f \rangle) = \frac{\mathrm{Var}(f)}{N}
\tag{7.8.9}
\]

(compare equation 7.6.1).

Suppose we divide the volume $V$ into two equal, disjoint subvolumes, denoted $a$ and $b$, and sample $N/2$ points in each subvolume. Then another estimator for $\langle\langle f \rangle\rangle$, different from equation (7.8.8), which we denote $\langle f \rangle'$, is

\[
\langle f \rangle' \equiv \tfrac{1}{2} \left( \langle f \rangle_a + \langle f \rangle_b \right)
\tag{7.8.10}
\]

in other words, the mean of the sample averages in the two half-regions. The variance of estimator (7.8.10) is given by

\[
\mathrm{Var}\left( \langle f \rangle' \right)
= \tfrac{1}{4} \left[ \mathrm{Var}\left( \langle f \rangle_a \right) + \mathrm{Var}\left( \langle f \rangle_b \right) \right]
= \tfrac{1}{4} \left[ \frac{\mathrm{Var}_a(f)}{N/2} + \frac{\mathrm{Var}_b(f)}{N/2} \right]
= \frac{1}{2N} \left[ \mathrm{Var}_a(f) + \mathrm{Var}_b(f) \right]
\tag{7.8.11}
\]
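As a concrete illustration (again a sketch with assumptions of our own, here $f(x) = x^2$ on $[0,1]$ with the halves $a = [0, \tfrac{1}{2})$ and $b = [\tfrac{1}{2}, 1)$, not code from the book), the following program forms both the plain estimator of equation (7.8.8) and the two-subvolume stratified estimator of equation (7.8.10) from the same total number of points.

```c
/* A minimal sketch (not Numerical Recipes code): the two-subvolume stratified
   estimator of equation (7.8.10) for f(x) = x^2 on [0,1], compared with the
   plain uniform estimator of equation (7.8.8).  Exact answer: 1/3. */
#include <stdio.h>
#include <stdlib.h>

static double f(double x) { return x * x; }

/* Uniform deviate in (0,1). */
static double uni(void) { return (rand() + 0.5) / ((double)RAND_MAX + 1.0); }

int main(void)
{
    const long N = 100000;          /* total number of sample points        */
    double plain = 0.0;             /* <f>   : plain estimator, eq. (7.8.8) */
    double fa = 0.0, fb = 0.0;      /* <f>_a, <f>_b : subregion averages    */
    long i;

    srand(54321);

    /* Plain Monte Carlo: N points uniform over the whole interval. */
    for (i = 0; i < N; i++) plain += f(uni());
    plain /= N;

    /* Stratified sampling: N/2 points in [0,1/2), N/2 points in [1/2,1). */
    for (i = 0; i < N / 2; i++) {
        fa += f(0.5 * uni());        /* subvolume a */
        fb += f(0.5 + 0.5 * uni());  /* subvolume b */
    }
    fa /= (N / 2);
    fb /= (N / 2);

    /* Equation (7.8.10): the mean of the two sample averages. */
    double strat = 0.5 * (fa + fb);

    printf("plain      <f>  = %.6f\n", plain);
    printf("stratified <f>' = %.6f\n", strat);
    return 0;
}
```

Comparing equations (7.8.9) and (7.8.11), the stratified estimator has the smaller error whenever the average of $\mathrm{Var}_a(f)$ and $\mathrm{Var}_b(f)$ is less than $\mathrm{Var}(f)$, which is the case in this example because $f$ varies less within each half-interval than over the whole.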