The starting points are equations (7.8.10) and (7.8.13), applied to bisections of successively smaller subregions.

Suppose that we have a quota of $N$ evaluations of the function $f$, and want to evaluate $\langle f \rangle'$ in the rectangular parallelepiped region $R = (\mathbf{x}_a, \mathbf{x}_b)$. (We denote such a region by the two coordinate vectors of its diagonally opposite corners.) First, we allocate a fraction $p$ of $N$ towards exploring the variance of $f$ in $R$: We sample $pN$ function values uniformly in $R$ and accumulate the sums that will give the $d$ different pairs of variances corresponding to the $d$ different coordinate directions along which $R$ can be bisected. In other words, in $pN$ samples, we estimate $\mathrm{Var}(f)$ in each of the regions resulting from a possible bisection of $R$,

$$
\begin{aligned}
R_{ai} &\equiv \bigl(\mathbf{x}_a,\; \mathbf{x}_b - \tfrac{1}{2}\,\mathbf{e}_i \cdot (\mathbf{x}_b - \mathbf{x}_a)\,\mathbf{e}_i\bigr)\\
R_{bi} &\equiv \bigl(\mathbf{x}_a + \tfrac{1}{2}\,\mathbf{e}_i \cdot (\mathbf{x}_b - \mathbf{x}_a)\,\mathbf{e}_i,\; \mathbf{x}_b\bigr)
\end{aligned}
\tag{7.8.21}
$$

Here $\mathbf{e}_i$ is the unit vector in the $i$th coordinate direction, $i = 1, 2, \ldots, d$.

Second, we inspect the variances to find the most favorable dimension $i$ to bisect. By equation (7.8.15), we could, for example, choose that $i$ for which the sum of the square roots of the variance estimators in regions $R_{ai}$ and $R_{bi}$ is minimized. (Actually, as we will explain, we do something slightly different.)

Third, we allocate the remaining $(1-p)N$ function evaluations between the regions $R_{ai}$ and $R_{bi}$. If we used equation (7.8.15) to choose $i$, we should do this allocation according to equation (7.8.14).

We now have two parallelepipeds, each with its own allocation of function evaluations for estimating the mean of $f$. Our "RSS" algorithm now shows itself to be recursive: To evaluate the mean in each region, we go back to the sentence beginning "First, ..." in the paragraph above equation (7.8.21). (Of course, when the allocation of points to a region falls below some number, we resort to simple Monte Carlo rather than continue with the recursion.)

Finally, we combine the means, and also the estimated variances, of the two subvolumes, using equation (7.8.10) and the first line of equation (7.8.11).

This completes the RSS algorithm in its simplest form. Before we describe some additional tricks under the general rubric of "implementation details," we need to return briefly to equations (7.8.13)–(7.8.15) and derive the equations that we actually use instead of these.

The right-hand side of equation (7.8.13) applies the familiar scaling law of equation (7.8.9) twice, once to $a$ and again to $b$. This would be correct if the estimates $\langle f \rangle_a$ and $\langle f \rangle_b$ were each made by simple Monte Carlo, with uniformly random sample points. However, the two estimates of the mean are in fact made recursively. Thus, there is no reason to expect equation (7.8.9) to hold. Rather, we might substitute for equation (7.8.13) the relation

$$
\mathrm{Var}\bigl(\langle f \rangle'\bigr) = \frac{1}{4}\left[\frac{\mathrm{Var}_a(f)}{N_a^{\alpha}} + \frac{\mathrm{Var}_b(f)}{(N - N_a)^{\alpha}}\right]
\tag{7.8.22}
$$

where $\alpha$ is an unknown constant $\geq 1$ (the case of equality corresponding to simple Monte Carlo). In that case, a short calculation shows that $\mathrm{Var}(\langle f \rangle')$ is minimized when

$$
\frac{N_a}{N} = \frac{\mathrm{Var}_a(f)^{1/(1+\alpha)}}{\mathrm{Var}_a(f)^{1/(1+\alpha)} + \mathrm{Var}_b(f)^{1/(1+\alpha)}}
\tag{7.8.23}
$$

and that its minimum value is

$$
\mathrm{Var}\bigl(\langle f \rangle'\bigr) \propto \left[\mathrm{Var}_a(f)^{1/(1+\alpha)} + \mathrm{Var}_b(f)^{1/(1+\alpha)}\right]^{1+\alpha}
\tag{7.8.24}
$$

Equations (7.8.22)–(7.8.24) reduce to equations (7.8.13)–(7.8.15) when $\alpha = 1$. Numerical experiments to find a self-consistent value for $\alpha$ find that $\alpha \approx 2$. That is, when equation (7.8.23) with $\alpha = 2$ is used recursively to allocate sample opportunities, the observed variance of the RSS algorithm goes approximately as $N^{-2}$, while any other value of $\alpha$ in equation (7.8.23) gives a poorer fall-off. (The sensitivity to $\alpha$ is, however, not very great; it is not known whether $\alpha = 2$ is an analytically justifiable result, or only a useful heuristic.)
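For reference, the "short calculation" leading from equation (7.8.22) to equation (7.8.23) is an elementary minimization; the following sketch treats $N_a$ as a continuous variable. Setting the derivative of the bracketed sum in equation (7.8.22) with respect to $N_a$ to zero gives

$$
\frac{\alpha\,\mathrm{Var}_a(f)}{N_a^{\alpha+1}} = \frac{\alpha\,\mathrm{Var}_b(f)}{(N-N_a)^{\alpha+1}}
\quad\Longrightarrow\quad
\frac{N_a}{N-N_a} = \left[\frac{\mathrm{Var}_a(f)}{\mathrm{Var}_b(f)}\right]^{1/(1+\alpha)}
$$

which rearranges directly to equation (7.8.23). Substituting that allocation back into equation (7.8.22) collapses the bracket to $\bigl[\mathrm{Var}_a(f)^{1/(1+\alpha)} + \mathrm{Var}_b(f)^{1/(1+\alpha)}\bigr]^{1+\alpha}/N^{\alpha}$, which is the proportionality (7.8.24).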
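To make the recursion concrete, here is a minimal C sketch of the RSS procedure just described. It is not the miser routine itself: the names (rss, urand), the constants (MNPT, PFRAC, ALPHA, MAXDIM), the rand()-based sampler, and the use of the plain sample variance in the exploration step are all illustrative assumptions.

```c
#include <stdlib.h>
#include <math.h>

#define MNPT   60     /* below this budget, fall back to plain Monte Carlo */
#define PFRAC  0.1    /* exploration fraction p of the point budget        */
#define ALPHA  2.0    /* exponent alpha of equations (7.8.22)-(7.8.24)     */
#define MAXDIM 20     /* caller must have d <= MAXDIM                      */

static double urand(double a, double b)   /* uniform deviate in [a,b) */
{
    return a + (b - a) * (rand() / (RAND_MAX + 1.0));
}

/* Estimate the mean of f over the box (lo,hi) using a budget of n samples.
   Returns the mean estimate; *var receives its estimated variance. */
double rss(double (*f)(const double *), const double *lo, const double *hi,
           int d, long n, double *var)
{
    double x[MAXDIM], mid[MAXDIM], loc[MAXDIM], hic[MAXDIM];
    double suma[MAXDIM], sumb[MAXDIM], sqa[MAXDIM], sqb[MAXDIM];
    long cnta[MAXDIM], cntb[MAXDIM], j, npre, na;
    double fv, vara, varb, sa, sb, best, trial, ma, mb, va, vb;
    int i, ibis;

    if (n < 2) n = 2;                  /* always draw at least two samples */
    if (n < MNPT) {                    /* terminal node: simple Monte Carlo */
        double s = 0.0, s2 = 0.0;
        for (j = 0; j < n; j++) {
            for (i = 0; i < d; i++) x[i] = urand(lo[i], hi[i]);
            fv = f(x);
            s += fv;  s2 += fv * fv;
        }
        s /= n;
        *var = (s2 / n - s * s) / n;   /* Var(<f>) ~ Var(f)/N, eq. (7.8.9) */
        return s;
    }

    /* First: spend pN points estimating Var(f) in the half-regions of
       equation (7.8.21); each sample serves every candidate dimension. */
    npre = (long)(PFRAC * n);
    if (npre < 2L * d + 2) npre = 2L * d + 2;
    for (i = 0; i < d; i++) {
        mid[i] = 0.5 * (lo[i] + hi[i]);
        suma[i] = sumb[i] = sqa[i] = sqb[i] = 0.0;
        cnta[i] = cntb[i] = 0;
    }
    for (j = 0; j < npre; j++) {
        for (i = 0; i < d; i++) x[i] = urand(lo[i], hi[i]);
        fv = f(x);
        for (i = 0; i < d; i++) {
            if (x[i] < mid[i]) { suma[i] += fv; sqa[i] += fv * fv; cnta[i]++; }
            else               { sumb[i] += fv; sqb[i] += fv * fv; cntb[i]++; }
        }
    }

    /* Second: bisect the dimension that minimizes the bracketed sum of
       equation (7.8.24), Var_a^(1/(1+alpha)) + Var_b^(1/(1+alpha)). */
    ibis = 0;  best = -1.0;  vara = varb = 0.0;
    for (i = 0; i < d; i++) {
        double va_i = 0.0, vb_i = 0.0, m;
        if (cnta[i] > 0) { m = suma[i] / cnta[i]; va_i = sqa[i] / cnta[i] - m * m; }
        if (cntb[i] > 0) { m = sumb[i] / cntb[i]; vb_i = sqb[i] / cntb[i] - m * m; }
        if (va_i < 0.0) va_i = 0.0;    /* guard against roundoff */
        if (vb_i < 0.0) vb_i = 0.0;
        trial = pow(va_i, 1.0 / (1.0 + ALPHA)) + pow(vb_i, 1.0 / (1.0 + ALPHA));
        if (best < 0.0 || trial < best) {
            best = trial;  ibis = i;  vara = va_i;  varb = vb_i;
        }
    }

    /* Third: allocate the remaining (1-p)N points by equation (7.8.23). */
    sa = pow(vara, 1.0 / (1.0 + ALPHA));
    sb = pow(varb, 1.0 / (1.0 + ALPHA));
    if (sa + sb == 0.0) sa = sb = 1.0; /* flat exploration: split evenly */
    na = (long)((n - npre) * sa / (sa + sb));

    /* Recurse into the two halves, then combine with equation (7.8.10)
       and the first line of equation (7.8.11). */
    for (i = 0; i < d; i++) { loc[i] = lo[i]; hic[i] = hi[i]; }
    hic[ibis] = mid[ibis];
    ma = rss(f, loc, hic, d, na, &va);
    hic[ibis] = hi[ibis];  loc[ibis] = mid[ibis];
    mb = rss(f, loc, hic, d, (n - npre) - na, &vb);

    *var = 0.25 * (va + vb);
    return 0.5 * (ma + mb);
}
```

With lo and hi holding the corner vectors $\mathbf{x}_a$ and $\mathbf{x}_b$, a call such as rss(f, lo, hi, d, 100000L, &var) returns the stratified estimate of the mean of $f$ over the box; multiplying by the box's volume converts the mean into an estimate of the integral.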
The principal difference between miser's implementation and the algorithm as described thus far lies in how the variances on the right-hand side of equation (7.8.23) are estimated.