
Digital Signal Processing teaching reference (Numerical Recipes in C: The Art of Scientific Computing, Second Edition), Chapter 9: Root Finding and Nonlinear Sets of Equations, Section 9.4: Newton-Raphson Method Using Derivative



(The listing below concludes the zbrent routine from the preceding section.)

        }
        a=b;                              /* Move last best guess to a. */
        fa=fb;
        if (fabs(d) > tol1)               /* Evaluate new trial root. */
            b += d;
        else
            b += SIGN(tol1,xm);
        fb=(*func)(b);
    }
    nrerror("Maximum number of iterations exceeded in zbrent");
    return 0.0;                           /* Never get here. */
}

CITED REFERENCES AND FURTHER READING:

Brent, R.P. 1973, Algorithms for Minimization without Derivatives (Englewood Cliffs, NJ: Prentice-Hall), Chapters 3, 4. [1]
Forsythe, G.E., Malcolm, M.A., and Moler, C.B. 1977, Computer Methods for Mathematical Computations (Englewood Cliffs, NJ: Prentice-Hall), §7.2.

9.4 Newton-Raphson Method Using Derivative

Perhaps the most celebrated of all one-dimensional root-finding routines is Newton's method, also called the Newton-Raphson method. This method is distinguished from the methods of previous sections by the fact that it requires the evaluation of both the function f(x) and the derivative f'(x) at arbitrary points x. The Newton-Raphson formula consists geometrically of extending the tangent line at a current point x_i until it crosses zero, then setting the next guess x_{i+1} to the abscissa of that zero-crossing (see Figure 9.4.1). Algebraically, the method derives from the familiar Taylor series expansion of a function in the neighborhood of a point,

$$f(x+\delta) \approx f(x) + f'(x)\,\delta + \frac{f''(x)}{2}\,\delta^2 + \cdots. \tag{9.4.1}$$

For small enough values of δ, and for well-behaved functions, the terms beyond linear are unimportant, hence f(x + δ) = 0 implies

$$\delta = -\frac{f(x)}{f'(x)}. \tag{9.4.2}$$

Newton-Raphson is not restricted to one dimension. The method readily generalizes to multiple dimensions, as we shall see in §9.6 and §9.7, below.
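The iteration defined by (9.4.2) takes only a few lines of code. Below is a minimal sketch added for illustration (the test function f(x) = x^2 - 2, the starting guess, and the fixed step count are arbitrary choices, and none of the safeguards of the library routines later in this section are included):

    #include <math.h>
    #include <stdio.h>

    /* Bare Newton-Raphson iteration for the illustrative equation x*x - 2 = 0. */
    int main(void)
    {
        float x = 2.0f;                   /* starting guess, assumed close enough to the root */
        int i;
        for (i = 1; i <= 6; i++) {
            float f = x*x - 2.0f;         /* f(x)  */
            float df = 2.0f*x;            /* f'(x) */
            x -= f/df;                    /* apply the correction delta = -f/f' of (9.4.2) */
            printf("%d  %.7f\n", i, x);
        }
        return 0;
    }

Each printed estimate is closer to sqrt(2) ≈ 1.4142136. The routines rtnewt and rtsafe given later in this section add brackets, an iteration limit, and error handling precisely because this bare loop has no protection against the failure modes discussed next.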


Far from a root, where the higher-order terms in the series are important, the Newton-Raphson formula can give grossly inaccurate, meaningless corrections. For instance, the initial guess for the root might be so far from the true root as to let the search interval include a local maximum or minimum of the function. This can be death to the method (see Figure 9.4.2). If an iteration places a trial guess near such a local extremum, so that the first derivative nearly vanishes, then Newton-Raphson sends its solution off to limbo, with vanishingly small hope of recovery.

[Figure 9.4.1. Newton's method extrapolates the local derivative to find the next estimate of the root. In this example it works well and converges quadratically.]

[Figure 9.4.2. Unfortunate case where Newton's method encounters a local extremum and shoots off to outer space. Here bracketing bounds, as in rtsafe, would save the day.]


[Figure 9.4.3. Unfortunate case where Newton's method enters a nonconvergent cycle. This behavior is often encountered when the function f is obtained, in whole or in part, by table interpolation. With a better initial guess, the method would have succeeded.]

Like most powerful tools, Newton-Raphson can be destructive if used in inappropriate circumstances. Figure 9.4.3 demonstrates another possible pathology.

Why do we call Newton-Raphson powerful? The answer lies in its rate of convergence: Within a small distance ε of x the function and its derivative are approximately

$$f(x+\epsilon) = f(x) + \epsilon f'(x) + \epsilon^2 \frac{f''(x)}{2} + \cdots,
\qquad f'(x+\epsilon) = f'(x) + \epsilon f''(x) + \cdots \tag{9.4.3}$$

By the Newton-Raphson formula,

$$x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}, \tag{9.4.4}$$

so that

$$\epsilon_{i+1} = \epsilon_i - \frac{f(x_i)}{f'(x_i)}. \tag{9.4.5}$$

When a trial solution x_i differs from the true root by ε_i, we can use (9.4.3) to express f(x_i), f'(x_i) in (9.4.4) in terms of ε_i and derivatives at the root itself. The result is a recurrence relation for the deviations of the trial solutions

$$\epsilon_{i+1} = -\epsilon_i^2\,\frac{f''(x)}{2 f'(x)}. \tag{9.4.6}$$
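As a quick numerical illustration of (9.4.6), added here and not part of the original text: take f(x) = x^2 - 2, whose root is sqrt(2), so that |f''/(2f')| is about 0.35 at the root. Starting from x_0 = 1.5, the successive errors are approximately

$$|\epsilon_0| \approx 8.6\times 10^{-2},\quad |\epsilon_1| \approx 2.5\times 10^{-3},\quad |\epsilon_2| \approx 2.1\times 10^{-6},\quad |\epsilon_3| \approx 1.6\times 10^{-12},$$

each close to 0.35 ε_i^2, so the number of correct digits roughly doubles at every step, which is the behavior described next.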


Equation (9.4.6) says that Newton-Raphson converges quadratically (cf. equation 9.2.3). Near a root, the number of significant digits approximately doubles with each step. This very strong convergence property makes Newton-Raphson the method of choice for any function whose derivative can be evaluated efficiently, and whose derivative is continuous and nonzero in the neighborhood of a root.

Even where Newton-Raphson is rejected for the early stages of convergence (because of its poor global convergence properties), it is very common to "polish up" a root with one or two steps of Newton-Raphson, which can multiply by two or four its number of significant figures!

For an efficient realization of Newton-Raphson the user provides a routine that evaluates both f(x) and its first derivative f'(x) at the point x. The Newton-Raphson formula can also be applied using a numerical difference to approximate the true local derivative,

$$f'(x) \approx \frac{f(x+dx) - f(x)}{dx}. \tag{9.4.7}$$

This is not, however, a recommended procedure, for the following reasons: (i) You are doing two function evaluations per step, so at best the superlinear order of convergence will be only √2. (ii) If you take dx too small you will be wiped out by roundoff, while if you take it too large your order of convergence will be only linear, no better than using the initial evaluation f'(x_0) for all subsequent steps. Therefore, Newton-Raphson with numerical derivatives is (in one dimension) always dominated by the secant method of §9.2. (In multidimensions, where there is a paucity of available methods, Newton-Raphson with numerical derivatives must be taken more seriously. See §§9.6–9.7.)

The following function calls a user-supplied function funcd(x,fn,df) which supplies the function value as fn and the derivative as df. We have included input bounds on the root simply to be consistent with previous root-finding routines: Newton does not adjust bounds, and works only on local information at the point x. The bounds are used only to pick the midpoint as the first guess, and to reject the solution if it wanders outside of the bounds.

#include <math.h>
#define JMAX 20                           /* Set to maximum number of iterations. */

float rtnewt(void (*funcd)(float, float *, float *), float x1, float x2,
    float xacc)
/* Using the Newton-Raphson method, find the root of a function known to lie in the
   interval [x1,x2].  The root rtnewt will be refined until its accuracy is known
   within +/- xacc.  funcd is a user-supplied routine that returns both the function
   value and the first derivative of the function at the point x. */
{
    void nrerror(char error_text[]);
    int j;
    float df,dx,f,rtn;

    rtn=0.5*(x1+x2);                      /* Initial guess. */
    for (j=1;j<=JMAX;j++) {
        (*funcd)(rtn,&f,&df);
        dx=f/df;
        rtn -= dx;
        if ((x1-rtn)*(rtn-x2) < 0.0)
            nrerror("Jumped out of brackets in rtnewt");
        if (fabs(dx) < xacc) return rtn;  /* Convergence. */
    }
    nrerror("Maximum number of iterations exceeded in rtnewt");
    return 0.0;                           /* Never get here. */
}
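A usage sketch, added here for illustration: the name my_funcd and the choice of test function f(x) = cos(x) - x are my own assumptions, and the rtnewt listing above (together with the nrerror error handler) is assumed to be compiled in.

    #include <math.h>
    #include <stdio.h>

    /* Illustrative user-supplied routine for f(x) = cos(x) - x:
       returns the function value in *fn and the first derivative in *df. */
    void my_funcd(float x, float *fn, float *df)
    {
        *fn = cos(x) - x;
        *df = -sin(x) - 1.0;
    }

    float rtnewt(void (*funcd)(float, float *, float *), float x1, float x2,
        float xacc);                      /* the listing above */

    int main(void)
    {
        float root = rtnewt(my_funcd, 0.0, 1.0, 1.0e-6);   /* root of cos(x) = x, about 0.739 */
        printf("rtnewt: %f\n", root);
        return 0;
    }

Here the bracket [0,1] is used only to form the first guess 0.5 and to abort if an iterate escapes the interval; the derivative never vanishes on [0,1], so plain Newton-Raphson is safe for this function.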


The routine rtsafe, which follows, guards the Newton-Raphson step with bisection: whenever a Newton step would carry the trial root outside the bracketing interval, or would fail to shrink the interval quickly enough, a bisection step is taken instead, so the root never escapes the original bracket [x1,x2].

#include <math.h>
#define MAXIT 100                         /* Maximum allowed number of iterations. */

float rtsafe(void (*funcd)(float, float *, float *), float x1, float x2,
    float xacc)
/* Using a combination of Newton-Raphson and bisection, find the root of a function
   bracketed between x1 and x2.  The root, returned as the function value rtsafe,
   will be refined until its accuracy is known within +/- xacc.  funcd is a
   user-supplied routine that returns both the function value and the first
   derivative of the function. */
{
    void nrerror(char error_text[]);
    int j;
    float df,dx,dxold,f,fh,fl;
    float temp,xh,xl,rts;

    (*funcd)(x1,&fl,&df);
    (*funcd)(x2,&fh,&df);
    if ((fl > 0.0 && fh > 0.0) || (fl < 0.0 && fh < 0.0))
        nrerror("Root must be bracketed in rtsafe");
    if (fl == 0.0) return x1;
    if (fh == 0.0) return x2;
    if (fl < 0.0) {                       /* Orient the search so that f(xl) < 0. */
        xl=x1;
        xh=x2;
    } else {
        xh=x1;
        xl=x2;
    }
    rts=0.5*(x1+x2);                      /* Initialize the guess for the root, */
    dxold=fabs(x2-x1);                    /* the "stepsize before last," */
    dx=dxold;                             /* and the last step. */
    (*funcd)(rts,&f,&df);
    for (j=1;j<=MAXIT;j++) {              /* Loop over allowed iterations. */
        if ((((rts-xh)*df-f)*((rts-xl)*df-f) > 0.0)   /* Bisect if Newton out of range, */
            || (fabs(2.0*f) > fabs(dxold*df))) {      /* or not decreasing fast enough. */
            dxold=dx;
            dx=0.5*(xh-xl);
            rts=xl+dx;
            if (xl == rts) return rts;    /* Change in root is negligible. */
        } else {                          /* Newton step acceptable.  Take it. */
            dxold=dx;
            dx=f/df;
            temp=rts;
            rts -= dx;
            if (temp == rts) return rts;
        }
        if (fabs(dx) < xacc) return rts;  /* Convergence criterion. */
        (*funcd)(rts,&f,&df);             /* The one new function evaluation per iteration. */
        if (f < 0.0)                      /* Maintain the bracket on the root. */
            xl=rts;
        else
            xh=rts;
    }
    nrerror("Maximum number of iterations exceeded in rtsafe");
    return 0.0;                           /* Never get here. */
}
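The calling sequence is identical to rtnewt's. A sketch (assuming my_funcd from the previous example and the rtsafe listing above are linked in); the practical difference is that the returned root is guaranteed to stay inside [x1,x2], because any unruly Newton step is replaced by a bisection of the current bracket.

    #include <stdio.h>

    void my_funcd(float x, float *fn, float *df);   /* as sketched after rtnewt */
    float rtsafe(void (*funcd)(float, float *, float *), float x1, float x2,
        float xacc);                                /* the listing above */

    int main(void)
    {
        float root = rtsafe(my_funcd, 0.0, 1.0, 1.0e-6);
        printf("rtsafe: %f\n", root);
        return 0;
    }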


For many functions the derivative f'(x) often converges to machine accuracy before the function f(x) itself does. When that is the case one need not subsequently update f'(x). This shortcut is recommended only when you confidently understand the generic behavior of your function, but it speeds computations when the derivative calculation is laborious. (Formally this makes the convergence only linear, but if the derivative isn't changing anyway, you can do no better.)

Newton-Raphson and Fractals

An interesting sidelight to our repeated warnings about Newton-Raphson's unpredictable global convergence properties — its very rapid local convergence notwithstanding — is to investigate, for some particular equation, the set of starting values from which the method does, or doesn't, converge to a root.

Consider the simple equation

$$z^3 - 1 = 0 \tag{9.4.8}$$

whose single real root is z = 1, but which also has complex roots at the other two cube roots of unity, exp(±2πi/3). Newton's method gives the iteration

$$z_{j+1} = z_j - \frac{z_j^3 - 1}{3 z_j^2}. \tag{9.4.9}$$

Up to now, we have applied an iteration like equation (9.4.9) only for real starting values z_0, but in fact all of the equations in this section also apply in the complex plane. We can therefore map out the complex plane into regions from which a starting value z_0, iterated in equation (9.4.9), will, or won't, converge to z = 1.

Naively, we might expect to find a "basin of convergence" somehow surrounding the root z = 1. We surely do not expect the basin of convergence to fill the whole plane, because the plane must also contain regions that converge to each of the two complex roots. In fact, by symmetry, the three regions must have identical shapes. Perhaps they will be three symmetric 120° wedges, with one root centered in each?
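Such a numerical exploration takes only a few lines. The sketch below is an illustrative stand-in added here (not the program behind the book's figure); the 80 by 40 grid, the 50-iteration cap, and the 10^-3 tolerance are arbitrary choices. It iterates (9.4.9) from each point of a grid covering the square with real and imaginary parts in (-2, 2) and prints '#' for the starting points that end up near z = 1:

    #include <complex.h>
    #include <stdio.h>

    /* Map which starting points converge to the real root z = 1 of z^3 - 1 = 0. */
    int main(void)
    {
        int row, col, k;
        for (row = 0; row < 40; row++) {
            for (col = 0; col < 80; col++) {
                /* grid point in the square with real and imaginary parts in (-2, 2) */
                float complex z = (-2.0f + 4.0f*col/79) + (2.0f - 4.0f*row/39)*I;
                for (k = 0; k < 50; k++)
                    z -= (z*z*z - 1.0f)/(3.0f*z*z);   /* the iteration (9.4.9) */
                putchar(cabsf(z - 1.0f) < 1.0e-3f ? '#' : '.');
            }
            putchar('\n');
        }
        return 0;
    }

Even at this crude character resolution, the boundary between the '#' and '.' regions should come out ragged rather than as three straight wedge boundaries, anticipating the fractal structure shown in Figure 9.4.4 and discussed next.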


[Figure 9.4.4. The complex z plane with real and imaginary components in the range (−2, 2). The black region is the set of points from which Newton's method converges to the root z = 1 of the equation z³ − 1 = 0. Its shape is fractal.]

Now take a look at Figure 9.4.4, which shows the result of a numerical exploration. The basin of convergence does indeed cover 1/3 the area of the complex plane, but its boundary is highly irregular — in fact, fractal. (A fractal, so called, has self-similar structure that repeats on all scales of magnification.) How does this fractal emerge from something as simple as Newton's method, and an equation as simple as (9.4.8)? The answer is already implicit in Figure 9.4.2, which showed how, on the real line, a local extremum causes Newton's method to shoot off to infinity. Suppose one is slightly removed from such a point. Then one might be shot off not to infinity, but — by luck — right into the basin of convergence of the desired root. But that means that in the neighborhood of an extremum there must be a tiny, perhaps distorted, copy of the basin of convergence — a kind of "one-bounce away" copy. Similar logic shows that there can be "two-bounce" copies, "three-bounce" copies, and so on. A fractal thus emerges.

Notice that, for equation (9.4.8), almost the whole real axis is in the domain of convergence for the root z = 1. We say "almost" because of the peculiar discrete points on the negative real axis whose convergence is indeterminate (see figure). What happens if you start Newton's method from one of these points? (Try it.)

CITED REFERENCES AND FURTHER READING:

Acton, F.S. 1970, Numerical Methods That Work; 1990, corrected edition (Washington: Mathematical Association of America), Chapter 2.
Ralston, A., and Rabinowitz, P. 1978, A First Course in Numerical Analysis, 2nd ed. (New York: McGraw-Hill), §8.4.
Ortega, J., and Rheinboldt, W. 1970, Iterative Solution of Nonlinear Equations in Several Variables (New York: Academic Press).
Mandelbrot, B.B. 1983, The Fractal Geometry of Nature (San Francisco: W.H. Freeman).
Peitgen, H.-O., and Saupe, D. (eds.) 1988, The Science of Fractal Images (New York: Springer-Verlag).


9.5 Roots of Polynomials

Here we present a few methods for finding roots of polynomials. These will serve for most practical problems involving polynomials of low-to-moderate degree or for well-conditioned polynomials of higher degree. Not as well appreciated as it ought to be is the fact that some polynomials are exceedingly ill-conditioned. The tiniest changes in a polynomial's coefficients can, in the worst case, send its roots sprawling all over the complex plane. (An infamous example due to Wilkinson is detailed by Acton [1].)

Recall that a polynomial of degree n will have n roots. The roots can be real or complex, and they might not be distinct. If the coefficients of the polynomial are real, then complex roots will occur in pairs that are conjugate, i.e., if x_1 = a + bi is a root then x_2 = a − bi will also be a root. When the coefficients are complex, the complex roots need not be related.

Multiple roots, or closely spaced roots, produce the most difficulty for numerical algorithms (see Figure 9.5.1). For example, P(x) = (x − a)² has a double real root at x = a. However, we cannot bracket the root by the usual technique of identifying neighborhoods where the function changes sign, nor will slope-following methods such as Newton-Raphson work well, because both the function and its derivative vanish at a multiple root. Newton-Raphson may work, but slowly, since large roundoff errors can occur. When a root is known in advance to be multiple, then special methods of attack are readily devised. Problems arise when (as is generally the case) we do not know in advance what pathology a root will display.

Deflation of Polynomials

When seeking several or all roots of a polynomial, the total effort can be significantly reduced by the use of deflation. As each root r is found, the polynomial is factored into a product involving the root and a reduced polynomial of degree one less than the original, i.e., P(x) = (x − r)Q(x). Since the roots of Q are exactly the remaining roots of P, the effort of finding additional roots decreases, because we work with polynomials of lower and lower degree as we find successive roots. Even more important, with deflation we can avoid the blunder of having our iterative method converge twice to the same (nonmultiple) root instead of separately to two different roots.

Deflation, which amounts to synthetic division, is a simple operation that acts on the array of polynomial coefficients. The concise code for synthetic division by a monomial factor was given in §5.3 above. You can deflate complex roots either by converting that code to complex data type, or else — in the case of a polynomial with real coefficients but possibly complex roots — by deflating by a quadratic factor,

$$[x - (a+ib)]\,[x - (a-ib)] = x^2 - 2ax + (a^2 + b^2). \tag{9.5.1}$$

The routine poldiv in §5.3 can be used to divide the polynomial by this factor.
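To make the deflation step concrete, here is a hedged sketch of synthetic division by a single real root, added for illustration only: the routine name deflate, the coefficient ordering c[0..n] from the constant term upward, and the example polynomial are my own choices, not a copy of the book's §5.3 routine referred to above.

    #include <stdio.h>

    /* Deflate P(x) = c[0] + c[1]*x + ... + c[n]*x^n by the factor (x - r).
       On return q[0..n-1] hold the coefficients of Q(x), where
       P(x) = (x - r)*Q(x) + remainder; the remainder P(r) is returned. */
    float deflate(float c[], int n, float r, float q[])
    {
        int i;
        float rem = c[n];                 /* Horner's scheme, starting at the top coefficient */
        for (i = n - 1; i >= 0; i--) {
            q[i] = rem;                   /* coefficient of x^i in the quotient */
            rem = c[i] + r*rem;           /* carry the running remainder down */
        }
        return rem;                       /* equals P(r); near zero if r is a good root */
    }

    int main(void)
    {
        /* Example: P(x) = x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3); deflate by the root r = 1. */
        float c[4] = { -6.0f, 11.0f, -6.0f, 1.0f };
        float q[3];
        float rem = deflate(c, 3, 1.0f, q);
        printf("Q(x) = %g + %g x + %g x^2, remainder = %g\n", q[0], q[1], q[2], rem);
        return 0;
    }

Deflating P(x) = (x − 1)(x − 2)(x − 3) by the root r = 1 leaves Q(x) = x² − 5x + 6 and a zero remainder, so the two remaining roots are now roots of a lower-degree polynomial.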
Deflation must, however, be utilized with care. Because each new root is known with only finite accuracy, errors creep into the determination of the coefficients of the successively deflated polynomial. Consequently, the roots can become more and more inaccurate. It matters a lot whether the inaccuracy creeps in stably (plus or
