Digital Signal Processing teaching reference: Numerical Recipes in C, The Art of Scientific Computing, Second Edition. Chapter 7, Random Numbers; Section 7.8, Adaptive and Recursive Monte Carlo Methods.



7.8 Adaptive and Recursive Monte Carlo Methods

This section discusses more advanced techniques of Monte Carlo integration. As examples of the use of these techniques, we include two rather different, fairly sophisticated, multidimensional Monte Carlo codes: vegas [1,2], and miser [3]. The techniques that we discuss all fall under the general rubric of reduction of variance (§7.6), but are otherwise quite distinct.

Importance Sampling

The use of importance sampling was already implicit in equations (7.6.6) and (7.6.7). We now return to it in a slightly more formal way. Suppose that an integrand f can be written as the product of a function h that is almost constant times another, positive, function g. Then its integral over a multidimensional volume V is

    \int f \, dV = \int (f/g) \, g \, dV = \int h \, g \, dV        (7.8.1)

In equation (7.6.7) we interpreted equation (7.8.1) as suggesting a change of variable to G, the indefinite integral of g. That made g\,dV a perfect differential. We then proceeded to use the basic theorem of Monte Carlo integration, equation (7.6.1). A more general interpretation of equation (7.8.1) is that we can integrate f by instead sampling h — not, however, with uniform probability density dV, but rather with nonuniform density g\,dV. In this second interpretation, the first interpretation follows as the special case, where the means of generating the nonuniform sampling of g\,dV is via the transformation method, using the indefinite integral G (see §7.2).

More directly, one can go back and generalize the basic theorem (7.6.1) to the case of nonuniform sampling: Suppose that points x_i are chosen within the volume V with a probability density p satisfying

    \int p \, dV = 1        (7.8.2)

The generalized fundamental theorem is that the integral of any function f is estimated, using N sample points x_1, \ldots, x_N, by

    I \equiv \int f \, dV = \int \frac{f}{p} \, p \, dV \approx \left\langle \frac{f}{p} \right\rangle \pm \sqrt{\frac{\left\langle f^2/p^2 \right\rangle - \left\langle f/p \right\rangle^2}{N}}        (7.8.3)

where angle brackets denote arithmetic means over the N points, exactly as in equation (7.6.2). As in equation (7.6.1), the "plus-or-minus" term is a one standard deviation error estimate. Notice that equation (7.6.1) is in fact the special case of equation (7.8.3), with p = constant = 1/V.
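To make equation (7.8.3) concrete, here is a minimal one-dimensional sketch, not part of the original text. The integrand f, the normalized density p, and the sampler sample_p are hypothetical placeholders; sample_p is assumed to return points already distributed with density p.

#include <math.h>

/* Generalized Monte Carlo estimator of equation (7.8.3):
   I ~ <f/p> +/- sqrt((<f^2/p^2> - <f/p>^2)/N). */
double mc_importance(double (*f)(double), double (*p)(double),
                     double (*sample_p)(void), unsigned long N, double *err)
{
    double sum = 0.0, sum2 = 0.0;
    unsigned long i;
    for (i = 0; i < N; i++) {
        double x = sample_p();        /* point drawn with density p */
        double w = f(x) / p(x);       /* sampled value of f/p */
        sum += w;
        sum2 += w * w;
    }
    sum /= N;                         /* <f/p> */
    sum2 /= N;                        /* <f^2/p^2> */
    *err = sqrt((sum2 - sum * sum) / N);  /* one standard deviation */
    return sum;                       /* estimate of I */
}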


What is the best choice for the sampling density p? Intuitively, we have already seen that the idea is to make h = f/p as close to constant as possible. We can be more rigorous by focusing on the numerator inside the square root in equation (7.8.3), which is the variance per sample point. Both angle brackets are themselves Monte Carlo estimators of integrals, so we can write

    S \equiv \left\langle \frac{f^2}{p^2} \right\rangle - \left\langle \frac{f}{p} \right\rangle^2 \approx \int \frac{f^2}{p^2} \, p \, dV - \left[ \int \frac{f}{p} \, p \, dV \right]^2 = \int \frac{f^2}{p} \, dV - \left( \int f \, dV \right)^2        (7.8.4)

We now find the optimal p subject to the constraint equation (7.8.2) by the functional variation

    0 = \frac{\delta}{\delta p} \left[ \int \frac{f^2}{p} \, dV - \left( \int f \, dV \right)^2 + \lambda \int p \, dV \right]        (7.8.5)

with \lambda a Lagrange multiplier. Note that the middle term does not depend on p. The variation (which comes inside the integrals) gives 0 = -f^2/p^2 + \lambda, or

    p = \frac{|f|}{\sqrt{\lambda}} = \frac{|f|}{\int |f| \, dV}        (7.8.6)

where \lambda has been chosen to enforce the constraint (7.8.2).

If f has one sign in the region of integration, then we get the obvious result that the optimal choice of p — if one can figure out a practical way of effecting the sampling — is that it be proportional to |f|. Then the variance is reduced to zero. Not so obvious, but seen to be true, is the fact that p \propto |f| is optimal even if f takes on both signs. In that case the variance per sample point (from equations 7.8.4 and 7.8.6) is

    S = S_{optimal} = \left( \int |f| \, dV \right)^2 - \left( \int f \, dV \right)^2        (7.8.7)

One curiosity is that one can add a constant to the integrand to make it all of one sign, since this changes the integral by a known amount, constant × V. Then, the optimal choice of p always gives zero variance, that is, a perfectly accurate integral! The resolution of this seeming paradox (already mentioned at the end of §7.6) is that perfect knowledge of p in equation (7.8.6) requires perfect knowledge of \int |f| \, dV, which is tantamount to already knowing the integral you are trying to compute!

If your function f takes on a known constant value in most of the volume V, it is certainly a good idea to add a constant so as to make that value zero. Having done that, the accuracy attainable by importance sampling depends in practice not on how small equation (7.8.7) is, but rather on how small is equation (7.8.4) for an implementable p, likely only a crude approximation to the ideal.
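The transformation-method route to importance sampling mentioned above is easy to make concrete in one dimension. The following sketch, an illustration rather than anything from the book, integrates a hypothetical f on [0,1] with density g(x) = 2x, whose indefinite integral G(x) = x^2 can be inverted analytically: if u is uniform in (0,1), then x = sqrt(u) has density 2x. rand() stands in for a better generator such as the book's ran2.

#include <math.h>
#include <stdlib.h>

/* Placeholder integrand, roughly proportional to x near the origin. */
static double f(double x) { return x * exp(-x); }

/* Importance sampling of I = \int_0^1 f(x) dx with p(x) = 2x. */
double mc_transform(unsigned long N)
{
    double sum = 0.0;
    unsigned long i;
    for (i = 0; i < N; i++) {
        double u = (rand() + 0.5) / ((double)RAND_MAX + 1.0); /* uniform (0,1) */
        double x = sqrt(u);           /* distributed with density p(x) = 2x */
        sum += f(x) / (2.0 * x);      /* accumulate f/p, as in eq. (7.8.3) */
    }
    return sum / N;                   /* estimate of the integral */
}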


Stratified Sampling

The idea of stratified sampling is quite different from importance sampling. Let us expand our notation slightly and let \langle\langle f \rangle\rangle denote the true average of the function f over the volume V (namely the integral divided by V), while \langle f \rangle denotes as before the simplest (uniformly sampled) Monte Carlo estimator of that average:

    \langle\langle f \rangle\rangle \equiv \frac{1}{V} \int f \, dV \qquad \langle f \rangle \equiv \frac{1}{N} \sum_i f(x_i)        (7.8.8)

The variance of the estimator, Var(\langle f \rangle), which measures the square of the error of the Monte Carlo integration, is asymptotically related to the variance of the function, Var(f) \equiv \langle\langle f^2 \rangle\rangle - \langle\langle f \rangle\rangle^2, by the relation

    \operatorname{Var}(\langle f \rangle) = \frac{\operatorname{Var}(f)}{N}        (7.8.9)

(compare equation 7.6.1).

Suppose we divide the volume V into two equal, disjoint subvolumes, denoted a and b, and sample N/2 points in each subvolume. Then another estimator for \langle\langle f \rangle\rangle, different from equation (7.8.8), which we denote \langle f \rangle', is

    \langle f \rangle' \equiv \frac{1}{2} \left( \langle f \rangle_a + \langle f \rangle_b \right)        (7.8.10)

in other words, the mean of the sample averages in the two half-regions. The variance of estimator (7.8.10) is given by

    \operatorname{Var}(\langle f \rangle') = \frac{1}{4} \left[ \operatorname{Var}(\langle f \rangle_a) + \operatorname{Var}(\langle f \rangle_b) \right] = \frac{1}{4} \left[ \frac{\operatorname{Var}_a(f)}{N/2} + \frac{\operatorname{Var}_b(f)}{N/2} \right] = \frac{1}{2N} \left[ \operatorname{Var}_a(f) + \operatorname{Var}_b(f) \right]        (7.8.11)

Here \operatorname{Var}_a(f) denotes the variance of f in subregion a, that is, \langle\langle f^2 \rangle\rangle_a - \langle\langle f \rangle\rangle_a^2, and correspondingly for b.

From the definitions already given, it is not difficult to prove the relation

    \operatorname{Var}(f) = \frac{1}{2} \left[ \operatorname{Var}_a(f) + \operatorname{Var}_b(f) \right] + \frac{1}{4} \left( \langle\langle f \rangle\rangle_a - \langle\langle f \rangle\rangle_b \right)^2        (7.8.12)

(In physics, this formula for combining second moments is the "parallel axis theorem.") Comparing equations (7.8.9), (7.8.11), and (7.8.12), one sees that the stratified (into two subvolumes) sampling gives a variance that is never larger than the simple Monte Carlo case — and smaller whenever the means of the stratified samples, \langle\langle f \rangle\rangle_a and \langle\langle f \rangle\rangle_b, are different.

We have not yet exploited the possibility of sampling the two subvolumes with different numbers of points, say N_a in subregion a and N_b \equiv N - N_a in subregion b. Let us do so now. Then the variance of the estimator is

    \operatorname{Var}(\langle f \rangle') = \frac{1}{4} \left[ \frac{\operatorname{Var}_a(f)}{N_a} + \frac{\operatorname{Var}_b(f)}{N - N_a} \right]        (7.8.13)

which is minimized (one can easily verify) when

    \frac{N_a}{N} = \frac{\sigma_a}{\sigma_a + \sigma_b}        (7.8.14)

Here we have adopted the shorthand notation \sigma_a \equiv [\operatorname{Var}_a(f)]^{1/2}, and correspondingly for b.

If N_a satisfies equation (7.8.14), then equation (7.8.13) reduces to

    \operatorname{Var}(\langle f \rangle') = \frac{(\sigma_a + \sigma_b)^2}{4N}        (7.8.15)

Equation (7.8.15) reduces to equation (7.8.9) if Var(f) = \operatorname{Var}_a(f) = \operatorname{Var}_b(f), in which case stratifying the sample makes no difference.

A standard way to generalize the above result is to consider the volume V divided into more than two equal subregions. One can readily obtain the result that the optimal allocation of sample points among the regions is to have the number of points in each region j proportional to \sigma_j (that is, the square root of the variance of the function f in that subregion). In spaces of high dimensionality (say d \gtrsim 4) this is not in practice very useful, however. Dividing a volume into K segments along each dimension implies K^d subvolumes, typically much too large a number when one contemplates estimating all the corresponding \sigma_j's.
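Equations (7.8.13)-(7.8.15) are easy to exercise in one dimension. This sketch, not from the book, spends a pilot fraction of the budget estimating \sigma_a and \sigma_b in the two halves of [0,1], splits the remainder according to equation (7.8.14), and combines the two sample means as in equation (7.8.10). The integrand f and the rand()-based urand are placeholder assumptions.

#include <math.h>
#include <stdlib.h>

static double urand(void) { return (rand() + 0.5) / ((double)RAND_MAX + 1.0); }

/* Mean of f over n uniform points in [lo,hi]; *var gets the sample variance. */
static double region_mean(double (*f)(double), double lo, double hi,
                          unsigned long n, double *var)
{
    double s = 0.0, s2 = 0.0;
    unsigned long i;
    for (i = 0; i < n; i++) {
        double y = f(lo + (hi - lo) * urand());
        s += y; s2 += y * y;
    }
    s /= n;
    *var = s2 / n - s * s;
    return s;
}

/* Two-subvolume stratified estimate of <<f>> on [0,1]. */
double stratified2(double (*f)(double), unsigned long N)
{
    double va, vb, dummy;
    unsigned long npilot = N / 10, Na, Nrest = N - 2 * npilot;
    region_mean(f, 0.0, 0.5, npilot, &va);      /* pilot phase: estimate */
    region_mean(f, 0.5, 1.0, npilot, &vb);      /* the two variances     */
    double sa = sqrt(va > 0.0 ? va : 0.0), sb = sqrt(vb > 0.0 ? vb : 0.0);
    Na = (sa + sb > 0.0) ? (unsigned long)(Nrest * sa / (sa + sb)) : Nrest / 2;
    if (Na < 1) Na = 1;                          /* keep both regions sampled */
    if (Na > Nrest - 1) Na = Nrest - 1;
    double ma = region_mean(f, 0.0, 0.5, Na, &dummy);
    double mb = region_mean(f, 0.5, 1.0, Nrest - Na, &dummy);
    return 0.5 * (ma + mb);                      /* eq. (7.8.10) */
}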
Mixed Strategies

Importance sampling and stratified sampling seem, at first sight, inconsistent with each other. The former concentrates sample points where the magnitude of the integrand |f| is largest, the latter where the variance of f is largest. How can both be right?

The answer is that (like so much else in life) it all depends on what you know and how well you know it. Importance sampling depends on already knowing some approximation to your integral, so that you are able to generate random points x_i with the desired probability density p. To the extent that your p is not ideal, you are left with an error that decreases only as N^{-1/2}. Things are particularly bad if your p is far from ideal in a region where the integrand f is changing rapidly, since then the sampled function h = f/p will have a large variance. Importance sampling works by smoothing the values of the sampled function h, and is effective only to the extent that you succeed in this.

Stratified sampling, by contrast, does not necessarily require that you know anything about f. Stratified sampling works by smoothing out the fluctuations of the number of points in subregions, not by smoothing the values of the points. The simplest stratified strategy, dividing V into N equal subregions and choosing one point randomly in each subregion, already gives a method whose error decreases asymptotically as N^{-1}, much faster than N^{-1/2}. (Note that quasi-random numbers, §7.7, are another way of smoothing fluctuations in the density of points, giving nearly as good a result as the "blind" stratification strategy.)

However, "asymptotically" is an important caveat: For example, if the integrand is negligible in all but a single subregion, then the resulting one-sample integration is all but useless. Information, even very crude, allowing importance sampling to put many points in the active subregion would be much better than blind stratified sampling.
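The "one point per subregion" strategy mentioned above takes only a few lines in one dimension. This sketch, again illustrative rather than from the book, places one uniformly distributed point in each of N equal subintervals of [0,1]; for smooth integrands its error falls faster than the N^{-1/2} of simple Monte Carlo.

#include <stdlib.h>

/* "Blind" stratification on [0,1]: one uniformly placed point in each of N
   equal subintervals. f is a placeholder integrand; rand() stands in for a
   better generator. */
double stratified_blind(double (*f)(double), unsigned long N)
{
    double sum = 0.0;
    unsigned long i;
    for (i = 0; i < N; i++) {
        double u = (rand() + 0.5) / ((double)RAND_MAX + 1.0);
        sum += f((i + u) / N);   /* one random point in the i-th cell */
    }
    return sum / N;              /* estimate of the average <<f>> */
}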


Stratified sampling really comes into its own if you have some way of estimating the variances, so that you can put unequal numbers of points in different subregions, according to (7.8.14) or its generalizations, and if you can find a way of dividing a region into a practical number of subregions (notably not K^d with large dimension d), while yet significantly reducing the variance of the function in each subregion compared to its variance in the full volume. Doing this requires a lot of knowledge about f, though different knowledge from what is required for importance sampling.

In practice, importance sampling and stratified sampling are not incompatible. In many, if not most, cases of interest, the integrand f is small everywhere in V except for a small fractional volume of "active regions." In these regions the magnitude of |f| and the standard deviation \sigma = [\operatorname{Var}(f)]^{1/2} are comparable in size, so both techniques will give about the same concentration of points. In more sophisticated implementations, it is also possible to "nest" the two techniques, so that (e.g.) importance sampling on a crude grid is followed by stratification within each grid cell.

Adaptive Monte Carlo: VEGAS

The VEGAS algorithm, invented by Peter Lepage [1,2], is widely used for multidimensional integrals that occur in elementary particle physics. VEGAS is primarily based on importance sampling, but it also does some stratified sampling if the dimension d is small enough to avoid K^d explosion (specifically, if (K/2)^d < N/2, with N the number of sample points). The basic technique for importance sampling in VEGAS is to construct, adaptively, a multidimensional weight function g that is separable,

    p \propto g(x, y, z, \ldots) = g_x(x)\,g_y(y)\,g_z(z) \cdots        (7.8.16)

Such a function avoids the K^d explosion in two ways: (i) It can be stored in the computer as d separate one-dimensional functions, each defined by K tabulated values, say — so that K × d replaces K^d. (ii) It can be sampled as a probability density by consecutively sampling the d one-dimensional functions to obtain coordinate vector components (x, y, z, \ldots).

The optimal separable weight function can be shown to be [1]

    g_x(x) \propto \left[ \int dy \int dz \cdots \frac{f^2(x, y, z, \ldots)}{g_y(y)\,g_z(z) \cdots} \right]^{1/2}        (7.8.17)

(and correspondingly for y, z, \ldots). Notice that this reduces to g \propto |f| (7.8.6) in one dimension. Equation (7.8.17) immediately suggests VEGAS' adaptive strategy: Given a set of g-functions (initially all constant, say), one samples the function f, accumulating not only the overall estimator of the integral, but also the K × d estimators (K subdivisions of the independent variable in each of d dimensions) of the right-hand side of equation (7.8.17). These then determine improved g functions for the next iteration.
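The separable form (7.8.16) is what makes both the storage and the sampling affordable: to draw one point, each axis is sampled independently from a K-bin tabulated density. The following sketch, illustrative and not the book's implementation, draws one coordinate from a piecewise-constant one-dimensional density given by bin probabilities prob[0..K-1] on a uniform grid over [0,1]; applying it once per axis yields a point with density g_x g_y g_z...

/* Draw one coordinate in [0,1] from a piecewise-constant density with K
   equal-width bins of probability prob[0..K-1] (summing to 1), given a
   uniform deviate u. The importance weight for eq. (7.8.3) would be
   1/(K*prob[k]) per axis. prob[] is an assumed input. */
double sample_axis(const double prob[], int K, double u)
{
    int k = 0;
    double cum = prob[0];
    while (u > cum && k < K - 1) cum += prob[++k];   /* locate the bin */
    /* Invert the CDF linearly within bin k: */
    double frac = (prob[k] > 0.0) ? (u - (cum - prob[k])) / prob[k] : 0.5;
    if (frac > 1.0) frac = 1.0;                      /* guard against roundoff */
    return (k + frac) / K;
}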
When the integrand f is concentrated in one, or at most a few, regions in d-space, then the weight function g's quickly become large at coordinate values that are the projections of these regions onto the coordinate axes. The accuracy of the Monte Carlo integration is then enormously enhanced over what simple Monte Carlo would give.

The weakness of VEGAS is the obvious one: To the extent that the projection of the function f onto individual coordinate directions is uniform, VEGAS gives no concentration of sample points in those dimensions. The worst case for VEGAS, e.g., is an integrand that is concentrated close to a body diagonal line, e.g., one from (0, 0, 0, ...) to (1, 1, 1, ...). Since this geometry is completely nonseparable, VEGAS can give no advantage at all. More generally, VEGAS may not do well when the integrand is concentrated in one-dimensional (or higher) curved trajectories (or hypersurfaces), unless these happen to be oriented close to the coordinate directions.

The routine vegas that follows is essentially Lepage's standard version, minimally modified to conform to our conventions. (We thank Lepage for permission to reproduce the program here.)


320 Chapter 7. Random Numbers Permission is granted for internet users to make one paper copy for their own personal use. Further reproduction, or any copyin Copyright (C) 1988-1992 by Cambridge University Press. Programs Copyright (C) 1988-1992 by Numerical Recipes Software. Sample page from NUMERICAL RECIPES IN C: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43108-5) g of machine￾readable files (including this one) to any server computer, is strictly prohibited. To order Numerical Recipes books or CDROMs, visit website http://www.nr.com or call 1-800-872-7423 (North America only), or send email to directcustserv@cambridge.org (outside North America). we have preserved original variable names. The parameter NDMX is what we have called K, the maximum number of increments along each axis; MXDIM is the maximum value of d; some other parameters are explained in the comments. The vegas routine performs m = itmx statistically independent evaluations of the desired integral, each with N = ncall function evaluations. While statistically independent, these iterations do assist each other, since each one is used to refine the sampling grid for the next one. The results of all iterations are combined into a single best answer, and its estimated error, by the relations Ibest = m i=1 Ii σ2 i  m i=1 1 σ2 i σbest =  m i=1 1 σ2 i −1/2 (7.8.18) Also returned is the quantity χ2 /m ≡ 1 m − 1 m i=1 (Ii − Ibest) 2 σ2 i (7.8.19) If this is significantly larger than 1, then the results of the iterations are statistically inconsistent, and the answers are suspect. The input flag init can be used to advantage. One might have a call with init=0, ncall=1000, itmx=5 immediately followed by a call with init=1, ncall=100000, itmx=1. The effect would be to develop a sampling grid over 5 iterations of a small number of samples, then to do a single high accuracy integration on the optimized grid. Note that the user-supplied integrand function, fxn, has an argument wgt in addition to the expected evaluation point x. In most applications you ignore wgt inside the function. Occasionally, however, you may want to integrate some additional function or functions along with the principal function f. The integral of any such function g can be estimated by Ig = i wig(x) (7.8.20) where the wi’s and x’s are the arguments wgt and x, respectively. It is straightforward to accumulate this sum inside your function fxn, and to pass the answer back to your main program via global variables. Of course, g(x) had better resemble the principal function f to some degree, since the sampling will be optimized for f. #include #include #include "nrutil.h" #define ALPH 1.5 #define NDMX 50 #define MXDIM 10 #define TINY 1.0e-30 extern long idum; For random number initialization in main. void vegas(float regn[], int ndim, float (*fxn)(float [], float), int init, unsigned long ncall, int itmx, int nprn, float *tgral, float *sd, float *chi2a) Performs Monte Carlo integration of a user-supplied ndim-dimensional function fxn over a rectangular volume specified by regn[1..2*ndim], a vector consisting of ndim “lower left” coordinates of the region followed by ndim “upper right” coordinates. The integration consists of itmx iterations, each with approximately ncall calls to the function. After each iteration the grid is refined; more than 5 or 10 iterations are rarely useful. The input flag init signals whether this call is a new start, or a subsequent call for additional iterations (see comments below). 
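Before the listing, here is a sketch of the calling pattern just described. The integrand is a hypothetical peaked function on the unit square, invented for illustration; the sketch also accumulates an auxiliary integral through the wgt argument, per equation (7.8.20). Note that in this listing the diagnostic printout is guarded by nprn >= 0, so a negative nprn suppresses it.

#include <stdio.h>
#include <math.h>

long idum = -1;            /* vegas declares this extern; a negative value
                              initializes ran2 */

void vegas(float regn[], int ndim, float (*fxn)(float [], float), int init,
    unsigned long ncall, int itmx, int nprn, float *tgral, float *sd,
    float *chi2a);

static double gsum = 0.0;  /* accumulates eq. (7.8.20) for a second function */

/* Hypothetical 2-dimensional integrand, peaked near the centre of the unit
   square; alongside f it accumulates I_g for g(x) = x1*f(x). */
float myfxn(float x[], float wgt)
{
    float f = expf(-10.0f*((x[1]-0.5f)*(x[1]-0.5f)+(x[2]-0.5f)*(x[2]-0.5f)));
    gsum += wgt*x[1]*f;    /* eq. (7.8.20): sum of w_i g(x_i) */
    return f;
}

void demo_vegas(void)
{
    float regn[5], tgral, sd, chi2a;
    regn[1]=regn[2]=0.0;   /* "lower left" corner of the unit square */
    regn[3]=regn[4]=1.0;   /* "upper right" corner */
    /* Grid-development phase: cheap iterations, cold start (init=0). */
    vegas(regn,2,myfxn,0,1000,5,-1,&tgral,&sd,&chi2a);
    /* Single high-accuracy pass on the now-optimized grid (init=1). */
    gsum=0.0;
    vegas(regn,2,myfxn,1,100000,1,-1,&tgral,&sd,&chi2a);
    printf("integral = %g +/- %g (chi2/dof = %g), I_g = %g\n",
        tgral,sd,chi2a,gsum);
}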
#include <stdio.h>
#include <math.h>
#include "nrutil.h"
#define ALPH 1.5
#define NDMX 50
#define MXDIM 10
#define TINY 1.0e-30

extern long idum;	/* For random number initialization in main. */

void vegas(float regn[], int ndim, float (*fxn)(float [], float), int init,
	unsigned long ncall, int itmx, int nprn, float *tgral, float *sd,
	float *chi2a)
/* Performs Monte Carlo integration of a user-supplied ndim-dimensional function
fxn over a rectangular volume specified by regn[1..2*ndim], a vector consisting of
ndim "lower left" coordinates of the region followed by ndim "upper right"
coordinates. The integration consists of itmx iterations, each with approximately
ncall calls to the function. After each iteration the grid is refined; more than 5
or 10 iterations are rarely useful. The input flag init signals whether this call is
a new start, or a subsequent call for additional iterations (see comments below).
The input flag nprn (normally 0) controls the amount of diagnostic output. Returned
answers are tgral (the best estimate of the integral), sd (its standard deviation),
and chi2a (chi-square per degree of freedom, an indicator of whether consistent
results are being obtained). See text for further details. */
{
	float ran2(long *idum);


	void rebin(float rc, int nd, float r[], float xin[], float xi[]);
	static int i,it,j,k,mds,nd,ndo,ng,npg,ia[MXDIM+1],kg[MXDIM+1];
	static float calls,dv2g,dxg,f,f2,f2b,fb,rc,ti,tsi,wgt,xjac,xn,xnd,xo;
	static float d[NDMX+1][MXDIM+1],di[NDMX+1][MXDIM+1],dt[MXDIM+1],
		dx[MXDIM+1],r[NDMX+1],x[MXDIM+1],xi[MXDIM+1][NDMX+1],xin[NDMX+1];
	static double schi,si,swgt;
	/* Best make everything static, allowing restarts. */

	if (init <= 0) {	/* Normal entry. Enter here on a cold start. */
		mds=ndo=1;	/* Change mds to 0 to disable stratified sampling. */
		for (j=1;j<=ndim;j++) xi[j][1]=1.0;
	}
	if (init <= 1) si=swgt=schi=0.0;
		/* Enter here to inherit the grid from a previous call, but not its answers. */
	if (init <= 2) {	/* Enter here to inherit the previous grid and its answers. */
		nd=NDMX;
		ng=1;
		if (mds) {	/* Set up for stratification. */
			ng=(int)pow(ncall/2.0+0.25,1.0/ndim);
			mds=1;
			if ((2*ng-NDMX) >= 0) {
				mds = -1;
				npg=ng/NDMX+1;
				nd=ng/npg;
				ng=npg*nd;
			}
		}
		for (k=1,i=1;i<=ndim;i++) k *= ng;
		npg=IMAX(ncall/k,2);
		calls=(float)npg * (float)k;
		dxg=1.0/ng;
		for (dv2g=1,i=1;i<=ndim;i++) dv2g *= dxg;
		dv2g=SQR(calls*dv2g)/npg/npg/(npg-1.0);
		xnd=nd;
		dxg *= xnd;
		xjac=1.0/calls;
		for (j=1;j<=ndim;j++) {
			dx[j]=regn[j+ndim]-regn[j];
			xjac *= dx[j];
		}
		if (nd != ndo) {	/* Do binning if necessary. */
			for (i=1;i<=IMAX(nd,ndo);i++) r[i]=1.0;
			for (j=1;j<=ndim;j++) rebin(ndo/xnd,nd,r,xin,xi[j]);
			ndo=nd;
		}
		if (nprn >= 0) {
			printf("%s:  ndim= %3d  ncall= %8.0f\n",
				" Input parameters for vegas",ndim,calls);
			printf("%28s  it=%5d  itmx=%5d\n"," ",it,itmx);
			printf("%28s  nprn=%3d  ALPH=%5.2f\n"," ",nprn,ALPH);
			printf("%28s  mds=%3d  nd=%4d\n"," ",mds,nd);
			for (j=1;j<=ndim;j++) {
				printf("%30s xl[%2d]= %11.4g xu[%2d]= %11.4g\n",
					" ",j,regn[j],j,regn[j+ndim]);
			}
		}
	}
	for (it=1;it<=itmx;it++) {
		/* Main iteration loop. Can enter here (init >= 3) to do an additional
		itmx iterations with all other parameters unchanged. */
		ti=tsi=0.0;
		for (j=1;j<=ndim;j++) {
			kg[j]=1;
			for (i=1;i<=nd;i++) d[i][j]=di[i][j]=0.0;
		}


		for (;;) {
			fb=f2b=0.0;
			for (k=1;k<=npg;k++) {
				wgt=xjac;
				for (j=1;j<=ndim;j++) {
					xn=(kg[j]-ran2(&idum))*dxg+1.0;
					ia[j]=IMAX(IMIN((int)(xn),NDMX),1);
					if (ia[j] > 1) {
						xo=xi[j][ia[j]]-xi[j][ia[j]-1];
						rc=xi[j][ia[j]-1]+(xn-ia[j])*xo;
					} else {
						xo=xi[j][ia[j]];
						rc=(xn-ia[j])*xo;
					}
					x[j]=regn[j]+rc*dx[j];
					wgt *= xo*xnd;
				}
				f=wgt*(*fxn)(x,wgt);
				f2=f*f;
				fb += f;
				f2b += f2;
				for (j=1;j<=ndim;j++) {
					di[ia[j]][j] += f;
					if (mds >= 0) d[ia[j]][j] += f2;
				}
			}
			f2b=sqrt(f2b*npg);
			f2b=(f2b-fb)*(f2b+fb);
			if (f2b <= 0.0) f2b=TINY;
			ti += fb;
			tsi += f2b;
			if (mds < 0) {	/* Use stratified sampling. */
				for (j=1;j<=ndim;j++) d[ia[j]][j] += f2b;
			}
			for (k=ndim;k>=1;k--) {
				kg[k] %= ng;
				if (++kg[k] != 1) break;
			}
			if (k < 1) break;
		}
		tsi *= dv2g;	/* Compute final results for this iteration. */
		wgt=1.0/tsi;
		si += wgt*ti;
		schi += wgt*ti*ti;
		swgt += wgt;
		*tgral=si/swgt;
		*chi2a=(schi-si*(*tgral))/(it-0.9999);
		if (*chi2a < 0.0) *chi2a=0.0;
		*sd=sqrt(1.0/swgt);
		tsi=sqrt(tsi);
		if (nprn >= 0) {
			printf("%s %3d : integral = %14.7g +/- %9.2g\n",
				" iteration no.",it,ti,tsi);
			printf("%s integral =%14.7g+/-%9.2g chi**2/IT n = %9.2g\n",
				" all iterations: ",*tgral,*sd,*chi2a);
			if (nprn) {
				for (j=1;j<=ndim;j++) {
					printf(" DATA FOR axis  %2d\n",j);
					printf("%6s%13s%11s%13s%11s%13s\n",
						"X","delta i","X","delta i","X","delta i");
					for (i=1+nprn/2;i<=nd;i += nprn+2) {
						printf("%8.5f%12.4g%12.5f%12.4g%12.5f%12.4g\n",
							xi[j][i],di[i][j],xi[j][i+1],


							di[i+1][j],xi[j][i+2],di[i+2][j]);
					}
				}
			}
		}
		for (j=1;j<=ndim;j++) {
			/* Refine the grid. The refinement is damped, to avoid rapid,
			destabilizing changes, and also compressed in range by the
			exponent ALPH. */
			xo=d[1][j];
			xn=d[2][j];
			d[1][j]=(xo+xn)/2.0;
			dt[j]=d[1][j];
			for (i=2;i<nd;i++) {
				rc=xo+xn;
				xo=xn;
				xn=d[i+1][j];
				d[i][j]=(rc+xn)/3.0;
				dt[j] += d[i][j];
			}
			d[nd][j]=(xo+xn)/2.0;
			dt[j] += d[nd][j];
		}
		for (j=1;j<=ndim;j++) {
			rc=0.0;
			for (i=1;i<=nd;i++) {
				if (d[i][j] < TINY) d[i][j]=TINY;
				r[i]=pow((1.0-d[i][j]/dt[j])/
					(log(dt[j])-log(d[i][j])),ALPH);
				rc += r[i];
			}
			rebin(rc/xnd,nd,r,xin,xi[j]);
		}
	}
}

void rebin(float rc, int nd, float r[], float xin[], float xi[])
/* Utility routine used by vegas, to rebin a vector of densities xi into
new bins defined by a vector r. */
{
	int i,k=0;
	float dr=0.0,xn=0.0,xo=0.0;

	for (i=1;i<nd;i++) {
		while (rc > dr)
			dr += r[++k];
		if (k > 1) xo=xi[k-1];
		xn=xi[k];
		dr -= rc;
		xin[i]=xn-(xn-xo)*dr/r[k];
	}
	for (i=1;i<nd;i++) xi[i]=xin[i];
	xi[nd]=1.0;
}

Recursive Stratified Sampling

The problem with stratified sampling, we have seen, is that it may not avoid the K^d explosion inherent in the obvious, Cartesian, tessellation of a d-dimensional volume. A technique called recursive stratified sampling [3] attempts to do this by successive bisections of a volume, not along all d dimensions, but rather along only one dimension at a time.


The starting points are equations (7.8.10) and (7.8.13), applied to bisections of successively smaller subregions.

Suppose that we have a quota of N evaluations of the function f, and want to evaluate \langle f \rangle' in the rectangular parallelepiped region R = (x_a, x_b). (We denote such a region by the two coordinate vectors of its diagonally opposite corners.) First, we allocate a fraction p of N towards exploring the variance of f in R: We sample pN function values uniformly in R and accumulate the sums that will give the d different pairs of variances corresponding to the d different coordinate directions along which R can be bisected. In other words, in pN samples, we estimate Var(f) in each of the regions resulting from a possible bisection of R,

    R_{ai} \equiv \left( x_a, \; x_b - \tfrac{1}{2} e_i \cdot (x_b - x_a)\, e_i \right)
    R_{bi} \equiv \left( x_a + \tfrac{1}{2} e_i \cdot (x_b - x_a)\, e_i, \; x_b \right)        (7.8.21)

Here e_i is the unit vector in the ith coordinate direction, i = 1, 2, \ldots, d.

Second, we inspect the variances to find the most favorable dimension i to bisect. By equation (7.8.15), we could, for example, choose that i for which the sum of the square roots of the variance estimators in regions R_{ai} and R_{bi} is minimized. (Actually, as we will explain, we do something slightly different.)

Third, we allocate the remaining (1 - p)N function evaluations between the regions R_{ai} and R_{bi}. If we used equation (7.8.15) to choose i, we should do this allocation according to equation (7.8.14).

We now have two parallelepipeds each with its own allocation of function evaluations for estimating the mean of f. Our "RSS" algorithm now shows itself to be recursive: To evaluate the mean in each region, we go back to the sentence beginning "First,..." in the paragraph above equation (7.8.21). (Of course, when the allocation of points to a region falls below some number, we resort to simple Monte Carlo rather than continue with the recursion.)

Finally, we combine the means, and also estimated variances of the two subvolumes, using equation (7.8.10) and the first line of equation (7.8.11).

This completes the RSS algorithm in its simplest form.
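Here is a compact sketch of the recursion just described. It illustrates the scheme under simplifying assumptions and is not the book's miser routine (which follows below with many refinements): a pilot tenth of the budget explores the variances, the bisection dimension minimizes the summed subregion standard deviations as suggested by equation (7.8.15), the remainder is allocated by equation (7.8.14), and the recursion bottoms out in plain Monte Carlo. The names rss, plainmc, urand and the limits DMAX and NMIN are inventions of this sketch.

#include <math.h>
#include <stdlib.h>

#define DMAX 10    /* maximum dimensionality; an assumption of this sketch */
#define NMIN 64    /* budgets below this fall back to plain Monte Carlo */

static double urand(void) { return (rand()+0.5)/((double)RAND_MAX+1.0); }

/* Plain Monte Carlo mean of f over the box [lo,hi] in d dimensions;
   *var receives the variance of that mean estimate. */
static double plainmc(double (*f)(const double *), int d, const double lo[],
                      const double hi[], unsigned long n, double *var)
{
    double x[DMAX], s=0.0, s2=0.0;
    unsigned long i;
    int j;
    for (i=0;i<n;i++) {
        for (j=0;j<d;j++) x[j]=lo[j]+(hi[j]-lo[j])*urand();
        double y=f(x);
        s += y; s2 += y*y;
    }
    s /= n;
    *var=(s2/n - s*s)/n;
    return s;
}

/* Recursive stratified sampling, simplest form (eqs. 7.8.10, 7.8.11, 7.8.14). */
double rss(double (*f)(const double *), int d, double lo[], double hi[],
           unsigned long n, double *var)
{
    double x[DMAX], suml[DMAX]={0.0}, sum2l[DMAX]={0.0};
    double sumr[DMAX]={0.0}, sum2r[DMAX]={0.0};
    unsigned long nl[DMAX]={0}, npre=n/10, i;
    int j, jb=-1;
    double best=-1.0, sa=0.0, sb=0.0;

    if (n < NMIN) return plainmc(f,d,lo,hi,n,var);

    /* Explore: one pilot sample serves all d candidate bisections. */
    for (i=0;i<npre;i++) {
        for (j=0;j<d;j++) x[j]=lo[j]+(hi[j]-lo[j])*urand();
        double y=f(x);
        for (j=0;j<d;j++) {
            if (x[j] < 0.5*(lo[j]+hi[j])) { suml[j]+=y; sum2l[j]+=y*y; nl[j]++; }
            else { sumr[j]+=y; sum2r[j]+=y*y; }
        }
    }
    /* Choose the dimension minimizing sigma_a + sigma_b (cf. eq. 7.8.15). */
    for (j=0;j<d;j++) {
        unsigned long na=nl[j], nb=npre-nl[j];
        if (na < 2 || nb < 2) continue;   /* all pilot points on one side */
        double ma=suml[j]/na, mb=sumr[j]/nb;
        double va=sum2l[j]/na-ma*ma, vb=sum2r[j]/nb-mb*mb;
        double s=sqrt(va>0.0?va:0.0)+sqrt(vb>0.0?vb:0.0);
        if (best < 0.0 || s < best) {
            best=s; jb=j;
            sa=sqrt(va>0.0?va:0.0); sb=sqrt(vb>0.0?vb:0.0);
        }
    }
    if (jb < 0) return plainmc(f,d,lo,hi,n-npre,var);

    /* Allocate the remaining points by eq. (7.8.14) and recurse. */
    unsigned long nrest=n-npre;
    unsigned long na=(sa+sb>0.0) ? (unsigned long)(nrest*sa/(sa+sb)) : nrest/2;
    if (na < 1) na=1;
    if (na > nrest-1) na=nrest-1;
    double mid=0.5*(lo[jb]+hi[jb]), save, vara, varb;
    save=hi[jb]; hi[jb]=mid;
    double meana=rss(f,d,lo,hi,na,&vara);        /* lower half along jb */
    hi[jb]=save;
    save=lo[jb]; lo[jb]=mid;
    double meanb=rss(f,d,lo,hi,nrest-na,&varb);  /* upper half along jb */
    lo[jb]=save;
    *var=0.25*(vara+varb);       /* first line of eq. (7.8.11) */
    return 0.5*(meana+meanb);    /* eq. (7.8.10) */
}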
Before we describe some additional tricks under the general rubric of "implementation details," we need to return briefly to equations (7.8.13)-(7.8.15) and derive the equations that we actually use instead of these. The right-hand side of equation (7.8.13) applies the familiar scaling law of equation (7.8.9) twice, once to a and again to b. This would be correct if the estimates \langle f \rangle_a and \langle f \rangle_b were each made by simple Monte Carlo, with uniformly random sample points. However, the two estimates of the mean are in fact made recursively. Thus, there is no reason to expect equation (7.8.9) to hold. Rather, we might substitute for equation (7.8.13) the relation,

    \operatorname{Var}(\langle f \rangle') = \frac{1}{4} \left[ \frac{\operatorname{Var}_a(f)}{N_a^\alpha} + \frac{\operatorname{Var}_b(f)}{(N - N_a)^\alpha} \right]        (7.8.22)

where \alpha is an unknown constant \geq 1 (the case of equality corresponding to simple Monte Carlo). In that case, a short calculation shows that \operatorname{Var}(\langle f \rangle') is minimized when

    \frac{N_a}{N} = \frac{\operatorname{Var}_a(f)^{1/(1+\alpha)}}{\operatorname{Var}_a(f)^{1/(1+\alpha)} + \operatorname{Var}_b(f)^{1/(1+\alpha)}}        (7.8.23)

and that its minimum value is

    \operatorname{Var}(\langle f \rangle') \propto \left[ \operatorname{Var}_a(f)^{1/(1+\alpha)} + \operatorname{Var}_b(f)^{1/(1+\alpha)} \right]^{1+\alpha}        (7.8.24)

Equations (7.8.22)-(7.8.24) reduce to equations (7.8.13)-(7.8.15) when \alpha = 1. Numerical experiments to find a self-consistent value for \alpha find that \alpha \approx 2. That is, when equation (7.8.23) with \alpha = 2 is used recursively to allocate sample opportunities, the observed variance of the RSS algorithm goes approximately as N^{-2}, while any other value of \alpha in equation (7.8.23) gives a poorer fall-off. (The sensitivity to \alpha is, however, not very great; it is not known whether \alpha = 2 is an analytically justifiable result, or only a useful heuristic.)
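In code, the \alpha = 2 allocation rule of equation (7.8.23) is a one-liner; a hypothetical helper (not from the book) might look like this, the cube roots arising because 1/(1+\alpha) = 1/3:

#include <math.h>

/* Fraction of the remaining budget given to subregion a, per eq. (7.8.23)
   with alpha = 2. vara and varb are the two (nonnegative) variance estimates. */
double alloc_frac_a(double vara, double varb)
{
    double wa = pow(vara, 1.0/3.0), wb = pow(varb, 1.0/3.0);
    return (wa + wb > 0.0) ? wa / (wa + wb) : 0.5;  /* even split if both vanish */
}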


The principal difference between miser's implementation and the algorithm as described thus far lies in how the variances on the right-hand side of equation (7.8.23) are estimated. We find empirically that it is somewhat more robust to use the square of the difference of maximum and minimum sampled function values, instead of the genuine second moment of the samples. This estimator is of course increasingly biased with increasing sample size; however, equation (7.8.23) uses it only to compare two subvolumes (a and b) having approximately equal numbers of samples. The "max minus min" estimator proves its worth when the preliminary sampling yields only a single point, or small number of points, in active regions of the integrand. In many realistic cases, these are indicators of nearby regions of even greater importance, and it is useful to let them attract the greater sampling weight that "max minus min" provides.

A second modification embodied in the code is the introduction of a "dithering parameter," dith, whose nonzero value causes subvolumes to be divided not exactly down the middle, but rather into fractions 0.5 ± dith, with the sign of the ± randomly chosen by a built-in random number routine. Normally dith can be set to zero. However, there is a large advantage in taking dith to be nonzero if some special symmetry of the integrand puts the active region exactly at the midpoint of the region, or at the center of some power-of-two submultiple of the region. One wants to avoid the extreme case of the active region being evenly divided into 2^d abutting corners of a d-dimensional space. A typical nonzero value of dith, on those occasions when it is useful, might be 0.1. Of course, when the dithering parameter is nonzero, we must take the differing sizes of the subvolumes into account; the code does this through the variable fracl.

One final feature in the code deserves mention. The RSS algorithm uses a single set of sample points to evaluate equation (7.8.23) in all d directions. At bottom levels of the recursion, the number of sample points can be quite small. Although rare, it can happen that in one direction all the samples are in one half of the volume; in that case, that direction is ignored as a candidate for bifurcation. Even more rare is the possibility that all of the samples are in one half of the volume in all directions. In this case, a random direction is chosen. If this happens too often in your application, then you should increase MNPT (see line if (!jb)... in the code).

Note that miser, as given, returns as ave an estimate of the average function value \langle\langle f \rangle\rangle, not the integral of f over the region. The routine vegas, adopting the other convention, returns as tgral the integral. The two conventions are of course trivially related, by equation (7.8.8), since the volume V of the rectangular region is known.
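As with vegas, a short calling sketch may help before the listing. The integrand and region here are hypothetical, invented for illustration; the conversion from the returned average to the integral follows equation (7.8.8).

#include <stdio.h>
#include <math.h>

void miser(float (*func)(float []), float regn[], int ndim, unsigned long npts,
    float dith, float *ave, float *var);

/* Hypothetical integrand: a Gaussian peak at the centre of the unit cube. */
float peakfn(float x[])
{
    int j;
    float r2 = 0.0;
    for (j=1;j<=3;j++) r2 += (x[j]-0.5f)*(x[j]-0.5f);
    return expf(-50.0f*r2);
}

void demo_miser(void)
{
    float regn[7], ave, var, vol = 1.0f;   /* volume of the unit cube */
    int j;
    for (j=1;j<=3;j++) { regn[j]=0.0f; regn[j+3]=1.0f; }
    miser(peakfn,regn,3,100000L,0.0f,&ave,&var);
    /* miser returns the average <<f>>; eq. (7.8.8) converts to the integral. */
    printf("integral = %g +/- %g\n", vol*ave, vol*sqrtf(var));
}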
#include <math.h>
#include <stdlib.h>
#include "nrutil.h"
#define PFAC 0.1
#define MNPT 15
#define MNBS 60
#define TINY 1.0e-30
#define BIG 1.0e30
/* Here PFAC is the fraction of remaining function evaluations used at each stage
to explore the variance of func. At least MNPT function evaluations are performed
in any terminal subregion; a subregion is further bisected only if at least MNBS
function evaluations are available. We take MNBS = 4*MNPT. */

static long iran=0;

void miser(float (*func)(float []), float regn[], int ndim, unsigned long npts,
	float dith, float *ave, float *var)
/* Monte Carlo samples a user-supplied ndim-dimensional function func in a
rectangular volume specified by regn[1..2*ndim], a vector consisting of ndim
"lower-left" coordinates of the region followed by ndim "upper-right" coordinates.
The function is sampled a total of npts times, at locations determined by the
method of recursive stratified sampling. The mean value of the function in the
region is returned as ave; an estimate of the statistical uncertainty of ave
(square of standard deviation) is returned as var. The input parameter dith should
normally be set to zero, but can be set to (e.g.) 0.1 if func's active region falls
on the boundary of a power-of-two subdivision of region. */
{
	void ranpt(float pt[], float regn[], int n);
	float *regn_temp;
