F, they often succeed where a direct attack via Newton's method alone fails. The next section deals with these methods.

CITED REFERENCES AND FURTHER READING:
Acton, F.S. 1970, Numerical Methods That Work; 1990, corrected edition (Washington: Mathematical Association of America), Chapter 14. [1]
Ostrowski, A.M. 1966, Solutions of Equations and Systems of Equations, 2nd ed. (New York: Academic Press).
Ortega, J., and Rheinboldt, W. 1970, Iterative Solution of Nonlinear Equations in Several Variables (New York: Academic Press).

9.7 Globally Convergent Methods for Nonlinear Systems of Equations

We have seen that Newton's method for solving nonlinear equations has an unfortunate tendency to wander off into the wild blue yonder if the initial guess is not sufficiently close to the root. A global method is one that converges to a solution from almost any starting point. In this section we will develop an algorithm that combines the rapid local convergence of Newton's method with a globally convergent strategy that will guarantee some progress towards the solution at each iteration. The algorithm is closely related to the quasi-Newton method of minimization which we will describe in §10.7.

Recall our discussion of §9.6: the Newton step for the set of equations

    F(x) = 0                    (9.7.1)

is

    x_new = x_old + δx          (9.7.2)

where

    δx = −J⁻¹ · F               (9.7.3)

Here J is the Jacobian matrix. How do we decide whether to accept the Newton step δx? A reasonable strategy is to require that the step decrease |F|² = F · F. This is the same requirement we would impose if we were trying to minimize

    f = ½ F · F                 (9.7.4)

(The ½ is for later convenience.) Every solution to (9.7.1) minimizes (9.7.4), but there may be local minima of (9.7.4) that are not solutions to (9.7.1). Thus, as already mentioned, simply applying one of our minimum-finding algorithms from Chapter 10 to (9.7.4) is not a good idea.
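The accept-or-reject idea can be sketched in a few lines of Python. This is only an illustrative sketch, not the book's C routine: the full algorithm developed below uses a backtracking line search rather than simply rejecting the step, and the example system, function names, and starting point here are invented for demonstration.

```python
import numpy as np

def newton_step_with_check(F, J, x):
    """One Newton step for F(x) = 0, accepted only if it decreases f = 0.5 F.F.

    F : callable returning the residual vector F(x)
    J : callable returning the Jacobian matrix at x
    Returns (x_new, accepted).
    """
    Fx = F(x)
    # Newton step (9.7.3): solve J . dx = -F rather than forming J^{-1}
    dx = np.linalg.solve(J(x), -Fx)
    x_trial = x + dx
    # Require that the step decrease the merit function f = 0.5 F.F  (9.7.4)
    Ft = F(x_trial)
    if 0.5 * (Ft @ Ft) < 0.5 * (Fx @ Fx):
        return x_trial, True
    return x, False

# Hypothetical example system: x^2 + y^2 = 4, x*y = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]],
                        [v[1],       v[0]]])

x = np.array([2.0, 0.5])            # a starting guess near a root
for _ in range(20):
    x, accepted = newton_step_with_check(F, J, x)
print(x, np.linalg.norm(F(x)))
```

From a starting point this close to a root, the full Newton step passes the decrease test at every iteration and converges quadratically; the point of the section is what to do when it does not.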
To develop a better strategy, note that the Newton step (9.7.3) is a descent direction for f:

    ∇f · δx = (F · J) · (−J⁻¹ · F) = −F · F < 0     (9.7.5)
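The identity (9.7.5) is easy to check numerically: since f = ½ F · F, its gradient is ∇f = Jᵀ · F, and its slope along the Newton step collapses to −F · F. A short sketch, using the same kind of invented two-equation system as an assumption, confirms this at an arbitrary point that is not a root:

```python
import numpy as np

# Hypothetical example system: F(x, y) = (x^2 + y^2 - 4, x*y - 1)
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]],
                        [v[1],       v[0]]])

x = np.array([3.0, -1.0])           # an arbitrary point, not a root
Fx, Jx = F(x), J(x)
dx = np.linalg.solve(Jx, -Fx)       # Newton step (9.7.3)
grad_f = Jx.T @ Fx                  # gradient of f = 0.5 F.F
slope = grad_f @ dx                 # directional derivative of f along dx

# (9.7.5): grad f . dx = (F.J) . (-J^{-1}.F) = -F.F < 0
assert np.isclose(slope, -(Fx @ Fx))
assert slope < 0.0
```

Because the slope is strictly negative wherever J is nonsingular and F ≠ 0, a short enough move along δx is guaranteed to decrease f; this is what makes a line search along the Newton direction a sensible globalization strategy.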