350	Chapter 9.  Root Finding and Nonlinear Sets of Equations

Sample page from NUMERICAL RECIPES IN C: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43108-5). Copyright (C) 1988-1992 by Cambridge University Press. Programs Copyright (C) 1988-1992 by Numerical Recipes Software. Permission is granted for internet users to make one paper copy for their own personal use. Further reproduction, or any copying of machine-readable files (including this one) to any server computer, is strictly prohibited. To order Numerical Recipes books or CDROMs, visit website http://www.nr.com or call 1-800-872-7423 (North America only), or send email to directcustserv@cambridge.org (outside North America).

        for (i=1;i<=ISCR;i++) printf("%c",scr[i][1]);
        printf("\n");
        printf("%8s %10.3f %44s %10.3f\n"," ",x1," ",x2);
    }
}

CITED REFERENCES AND FURTHER READING:

Stoer, J., and Bulirsch, R. 1980, Introduction to Numerical Analysis (New York: Springer-Verlag), Chapter 5.

Acton, F.S. 1970, Numerical Methods That Work; 1990, corrected edition (Washington: Mathematical Association of America), Chapters 2, 7, and 14.

Ralston, A., and Rabinowitz, P. 1978, A First Course in Numerical Analysis, 2nd ed. (New York: McGraw-Hill), Chapter 8.

Householder, A.S. 1970, The Numerical Treatment of a Single Nonlinear Equation (New York: McGraw-Hill).

9.1 Bracketing and Bisection

We will say that a root is bracketed in the interval (a, b) if f(a) and f(b) have opposite signs. If the function is continuous, then at least one root must lie in that interval (the intermediate value theorem). If the function is discontinuous, but bounded, then instead of a root there might be a step discontinuity which crosses zero (see Figure 9.1.1). For numerical purposes, that might as well be a root, since the behavior is indistinguishable from the case of a continuous function whose zero crossing occurs in between two "adjacent" floating-point numbers in a machine's finite-precision representation.
Only for functions with singularities is there the possibility that a bracketed root is not really there, as for example

    f(x) = 1/(x − c)    (9.1.1)

Some root-finding algorithms (e.g., bisection in this section) will readily converge to c in (9.1.1). Luckily there is not much possibility of your mistaking c, or any number x close to it, for a root, since mere evaluation of |f(x)| will give a very large, rather than a very small, result.

If you are given a function in a black box, there is no sure way of bracketing its roots, or of even determining that it has roots. If you like pathological examples, think about the problem of locating the two real roots of equation (3.0.1), which dips below zero only in the ridiculously small interval of about x = π ± 10^(−667).

In the next chapter we will deal with the related problem of bracketing a function's minimum. There it is possible to give a procedure that always succeeds; in essence, "Go downhill, taking steps of increasing size, until your function starts back uphill." There is no analogous procedure for roots. The procedure "go downhill until your function changes sign" can be foiled by a function that has a simple extremum. Nevertheless, if you are prepared to deal with a "failure" outcome, this procedure is often a good first start; success is usual if your function has opposite signs in the limit x → ±∞.
[Figure 9.1.1: four panels, (a)-(d).]

Figure 9.1.1. Some situations encountered while root finding: (a) shows an isolated root x1 bracketed by two points a and b at which the function has opposite signs; (b) illustrates that there is not necessarily a sign change in the function near a double root (in fact, there is not necessarily a root!); (c) is a pathological function with many roots; in (d) the function has opposite signs at points a and b, but the points bracket a singularity, not a root.
#include <math.h>
#define FACTOR 1.6
#define NTRY 50

int zbrac(float (*func)(float), float *x1, float *x2)
Given a function func and an initial guessed range x1 to x2, the routine expands the range geometrically until a root is bracketed by the returned values x1 and x2 (in which case zbrac returns 1) or until the range becomes unacceptably large (in which case zbrac returns 0).
{
    void nrerror(char error_text[]);
    int j;
    float f1,f2;

    if (*x1 == *x2) nrerror("Bad initial range in zbrac");
    f1=(*func)(*x1);
    f2=(*func)(*x2);
    for (j=1;j<=NTRY;j++) {
        if (f1*f2 < 0.0) return 1;
        if (fabs(f1) < fabs(f2))
            f1=(*func)(*x1 += FACTOR*(*x1-*x2));
        else
            f2=(*func)(*x2 += FACTOR*(*x2-*x1));
    }
    return 0;
}

Alternatively, you might want to "look inward" on an initial interval, rather than "look outward" from it, asking if there are any roots of the function f(x) in the interval from x1 to x2 when a search is carried out by subdivision into n equal intervals. The following function calculates brackets for up to nb distinct intervals which each contain one or more roots.

void zbrak(float (*fx)(float), float x1, float x2, int n, float xb1[],
    float xb2[], int *nb)
Given a function fx defined on the interval from x1 to x2, subdivide the interval into n equally spaced segments, and search for zero crossings of the function.
nb is input as the maximum number of roots sought, and is reset to the number of bracketing pairs xb1[1..nb], xb2[1..nb] that are found.
{
    int nbb,i;
    float x,fp,fc,dx;

    nbb=0;
    dx=(x2-x1)/n;                       Determine the spacing appropriate to the mesh.
    fp=(*fx)(x=x1);
    for (i=1;i<=n;i++) {                Loop over all intervals
        fc=(*fx)(x += dx);
        if (fc*fp <= 0.0) {             If a sign change occurs then record values for the
            xb1[++nbb]=x-dx;            bounds.
            xb2[nbb]=x;
            if(*nb == nbb) return;
        }
        fp=fc;
    }
    *nb = nbb;
}
Bisection Method

Once we know that an interval contains a root, several classical procedures are available to refine it. These proceed with varying degrees of speed and sureness towards the answer. Unfortunately, the methods that are guaranteed to converge plod along most slowly, while those that rush to the solution in the best cases can also dash rapidly to infinity without warning if measures are not taken to avoid such behavior.

The bisection method is one that cannot fail. It is thus not to be sneered at as a method for otherwise badly behaved problems. The idea is simple. Over some interval the function is known to pass through zero because it changes sign. Evaluate the function at the interval's midpoint and examine its sign. Use the midpoint to replace whichever limit has the same sign. After each iteration the bounds containing the root decrease by a factor of two. If after n iterations the root is known to be within an interval of size ε_n, then after the next iteration it will be bracketed within an interval of size

    ε_(n+1) = ε_n / 2    (9.1.2)

neither more nor less. Thus, we know in advance the number of iterations required to achieve a given tolerance in the solution,

    n = log2(ε_0 / ε)    (9.1.3)

where ε_0 is the size of the initially bracketing interval, and ε is the desired ending tolerance.

Bisection must succeed.
If the interval happens to contain two or more roots, bisection will find one of them. If the interval contains no roots and merely straddles a singularity, it will converge on the singularity.

When a method converges as a factor (less than 1) times the previous uncertainty to the first power (as is the case for bisection), it is said to converge linearly. Methods that converge as a higher power,

    ε_(n+1) = constant × (ε_n)^m,   m > 1    (9.1.4)

are said to converge superlinearly. In other contexts "linear" convergence would be termed "exponential," or "geometrical." That is not too bad at all: Linear convergence means that successive significant figures are won linearly with computational effort.

It remains to discuss practical criteria for convergence. It is crucial to keep in mind that computers use a fixed number of binary digits to represent floating-point numbers. While your function might analytically pass through zero, it is possible that its computed value is never zero, for any floating-point argument. One must decide what accuracy on the root is attainable: Convergence to within 10^(−6) in absolute value is reasonable when the root lies near 1, but certainly unachievable if the root lies near 10^26. One might thus think to specify convergence by a relative (fractional) criterion, but this becomes unworkable for roots near zero. To be most general, the routines below will require you to specify an absolute tolerance, such that iterations continue until the interval becomes smaller than this tolerance in absolute units. Usually you may wish to take the tolerance to be ε(|x1| + |x2|)/2, where ε is the machine precision and x1 and x2 are the initial brackets. When the root lies near zero you ought to consider carefully what reasonable tolerance means for your function. The following routine quits after 40 bisections in any event, with 2^(−40) ≈ 10^(−12).
#include <math.h>
#define JMAX 40                                       Maximum allowed number of bisections.

float rtbis(float (*func)(float), float x1, float x2, float xacc)
Using bisection, find the root of a function func known to lie between x1 and x2. The root, returned as rtbis, will be refined until its accuracy is ±xacc.
{
    void nrerror(char error_text[]);
    int j;
    float dx,f,fmid,xmid,rtb;

    f=(*func)(x1);
    fmid=(*func)(x2);
    if (f*fmid >= 0.0) nrerror("Root must be bracketed for bisection in rtbis");
    rtb = f < 0.0 ? (dx=x2-x1,x1) : (dx=x1-x2,x2);    Orient the search so that f>0
    for (j=1;j<=JMAX;j++) {                           lies at x+dx.
        fmid=(*func)(xmid=rtb+(dx *= 0.5));           Bisection loop.
        if (fmid <= 0.0) rtb=xmid;
        if (fabs(dx) < xacc || fmid == 0.0) return rtb;
    }
    nrerror("Too many bisections in rtbis");
    return 0.0;                                       Never get here.
}

9.2 Secant Method, False Position Method, and Ridders' Method

For functions that are smooth near a root, the methods known respectively as false position (or regula falsi) and secant method generally converge faster than bisection. In both of these methods the function is assumed to be approximately linear in the local region of interest, and the next improvement in the root is taken as the point where the approximating line crosses the axis. After each iteration one of the previous boundary points is discarded in favor of the latest estimate of the root.
The only difference between the methods is that secant retains the most recent of the prior estimates (Figure 9.2.1; this requires an arbitrary choice on the first iteration), while false position retains that prior estimate for which the function value has opposite sign from the function value at the current best estimate of the root, so that the two points continue to bracket the root (Figure 9.2.2). Mathematically, the secant method converges more rapidly near a root of a sufficiently continuous function. Its order of convergence can be shown to be the "golden ratio" 1.618 ..., so that

    lim_(k→∞) |ε_(k+1)| ≈ const × |ε_k|^1.618    (9.2.1)

The secant method has, however, the disadvantage that the root does not necessarily remain bracketed. For functions that are not sufficiently continuous, the algorithm can therefore not be guaranteed to converge: Local behavior might send it off towards infinity.