
Digital Signal Processing course reference: Numerical Recipes in C, The Art of Scientific Computing, Second Edition. Chapter 9, Root Finding and Nonlinear Sets of Equations; Section 9.6, Newton-Raphson Method for Nonlinear Systems of Equations.


Hence one step of Newton-Raphson, taking a guess x_k into a new guess x_{k+1}, can be written as

    x_{k+1} = x_k - \frac{P(x_k)}{P'(x_k) - P(x_k)\sum_{i=1}^{j}(x_k - x_i)^{-1}}    (9.5.29)

This equation, if used with i ranging over the roots already polished, will prevent a tentative root from spuriously hopping to another one's true root. It is an example of so-called zero suppression as an alternative to true deflation. Muller's method, which was described above, can also be useful at the polishing stage.
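As a concrete illustration of equation (9.5.29), here is a minimal sketch of one zero-suppressed polishing step in C. This routine is not from the book: the function pointers p and dp (evaluating P and P') and the array r[1..j] of already-polished roots are hypothetical conventions chosen for the sketch.

/* One zero-suppressed Newton-Raphson polishing step, per equation (9.5.29).
   p and dp are hypothetical routines evaluating the polynomial P and its
   derivative P'; r[1..j] holds the j roots already polished. */
float polish_step(float xk, float (*p)(float), float (*dp)(float),
                  float r[], int j)
{
    int i;
    float sum=0.0;

    for (i=1;i<=j;i++) sum += 1.0/(xk-r[i]);    /* sum of (xk - xi)^(-1) */
    return xk - (*p)(xk)/((*dp)(xk)-(*p)(xk)*sum);
}

Iterating this step with a suitable stopping criterion polishes one root while the suppression term keeps the iterate away from the roots already found.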

CITED REFERENCES AND FURTHER READING:
Acton, F.S. 1970, Numerical Methods That Work; 1990, corrected edition (Washington: Mathematical Association of America), Chapter 7. [1]
Peters, G., and Wilkinson, J.H. 1971, Journal of the Institute of Mathematics and its Applications, vol. 8, pp. 16-35. [2]
IMSL Math/Library Users Manual (IMSL Inc., 2500 CityWest Boulevard, Houston TX 77042). [3]
Ralston, A., and Rabinowitz, P. 1978, A First Course in Numerical Analysis, 2nd ed. (New York: McGraw-Hill), §8.9-8.13. [4]
Adams, D.A. 1967, Communications of the ACM, vol. 10, pp. 655-658. [5]
Johnson, L.W., and Riess, R.D. 1982, Numerical Analysis, 2nd ed. (Reading, MA: Addison-Wesley), §4.4.3. [6]
Henrici, P. 1974, Applied and Computational Complex Analysis, vol. 1 (New York: Wiley).
Stoer, J., and Bulirsch, R. 1980, Introduction to Numerical Analysis (New York: Springer-Verlag), §§5.5-5.9.

9.6 Newton-Raphson Method for Nonlinear Systems of Equations

We make an extreme, but wholly defensible, statement: There are no good, general methods for solving systems of more than one nonlinear equation. Furthermore, it is not hard to see why (very likely) there never will be any good, general methods: Consider the case of two dimensions, where we want to solve simultaneously

    f(x, y) = 0
    g(x, y) = 0    (9.6.1)

The functions f and g are two arbitrary functions, each of which has zero contour lines that divide the (x, y) plane into regions where their respective function is positive or negative. These zero contour boundaries are of interest to us. The solutions that we seek are those points (if any) that are common to the zero contours of f and g (see Figure 9.6.1). Unfortunately, the functions f and g have, in general, no relation to each other at all! There is nothing special about a common point from either f's point of view, or from g's. In order to find all common points, which are the solutions of our nonlinear equations, we will (in general) have to do neither more nor less than map out the full zero contours of both functions. Note further that the zero contours will (in general) consist of an unknown number of disjoint closed curves. How can we ever hope to know when we have found all such disjoint pieces?

[Figure 9.6.1 omitted. Caption: Solution of two nonlinear equations in two unknowns. Solid curves refer to f(x, y), dashed curves to g(x, y). Each equation divides the (x, y) plane into positive and negative regions, bounded by zero curves. The desired solutions are the intersections of these unrelated zero curves. The number of solutions is a priori unknown.]

For problems in more than two dimensions, we need to find points mutually common to N unrelated zero-contour hypersurfaces, each of dimension N-1. You see that root finding becomes virtually impossible without insight! You will almost always have to use additional information, specific to your particular problem, to answer such basic questions as, "Do I expect a unique solution?" and "Approximately where?" Acton [1] has a good discussion of some of the particular strategies that can be tried.

In this section we will discuss the simplest multidimensional root finding method, Newton-Raphson. This method gives you a very efficient means of converging to a root, if you have a sufficiently good initial guess. It can also spectacularly fail to converge, indicating (though not proving) that your putative root does not exist nearby. In §9.7 we discuss more sophisticated implementations of the Newton-Raphson method, which try to improve on Newton-Raphson's poor global convergence. A multidimensional generalization of the secant method, called Broyden's method, is also discussed in §9.7.

A typical problem gives N functional relations to be zeroed, involving variables x_i, i = 1, 2, ..., N:

    F_i(x_1, x_2, \ldots, x_N) = 0, \qquad i = 1, 2, \ldots, N.    (9.6.2)

We let x denote the entire vector of values x_i and F denote the entire vector of functions F_i.

In the neighborhood of x, each of the functions F_i can be expanded in a Taylor series

    F_i(x + \delta x) = F_i(x) + \sum_{j=1}^{N} \frac{\partial F_i}{\partial x_j}\,\delta x_j + O(\delta x^2).    (9.6.3)

The matrix of partial derivatives appearing in equation (9.6.3) is the Jacobian matrix J:

    J_{ij} \equiv \frac{\partial F_i}{\partial x_j}.    (9.6.4)

In matrix notation equation (9.6.3) is

    F(x + \delta x) = F(x) + J \cdot \delta x + O(\delta x^2).    (9.6.5)

By neglecting terms of order \delta x^2 and higher and by setting F(x + \delta x) = 0, we obtain a set of linear equations for the corrections \delta x that move each function closer to zero simultaneously, namely

    J \cdot \delta x = -F.    (9.6.6)

Matrix equation (9.6.6) can be solved by LU decomposition as described in §2.3. The corrections are then added to the solution vector,

    x_{\rm new} = x_{\rm old} + \delta x,    (9.6.7)

and the process is iterated to convergence. In general it is a good idea to check the degree to which both functions and variables have converged. Once either reaches machine accuracy, the other won't change.

The following routine mnewt performs ntrial iterations starting from an initial guess at the solution vector x[1..n]. Iteration stops if either the sum of the magnitudes of the functions F_i is less than some tolerance tolf, or the sum of the absolute values of the corrections \delta x_i is less than some tolerance tolx. mnewt calls a user supplied function usrfun which must provide the function values F and the Jacobian matrix J. If J is difficult to compute analytically, you can try having usrfun call the routine fdjac of §9.7 to compute the partial derivatives by finite differences. You should not make ntrial too big; rather inspect to see what is happening before continuing for some further iterations.

#include <math.h>
#include "nrutil.h"

void usrfun(float *x, int n, float *fvec, float **fjac);

#define FREERETURN {free_matrix(fjac,1,n,1,n);free_vector(fvec,1,n);\
    free_vector(p,1,n);free_ivector(indx,1,n);return;}

void mnewt(int ntrial, float x[], int n, float tolx, float tolf)
/* Given an initial guess x[1..n] for a root in n dimensions, take ntrial
   Newton-Raphson steps to improve the root. Stop if the root converges in
   either summed absolute variable increments tolx or summed absolute
   function values tolf. */
{
    void lubksb(float **a, int n, int *indx, float b[]);
    void ludcmp(float **a, int n, int *indx, float *d);
    int k,i,*indx;
    float errx,errf,d,*fvec,**fjac,*p;

    indx=ivector(1,n);
    p=vector(1,n);
    fvec=vector(1,n);
    fjac=matrix(1,n,1,n);
    for (k=1;k<=ntrial;k++) {
        usrfun(x,n,fvec,fjac);      /* User function supplies function values
                                       at x in fvec and Jacobian in fjac. */
        errf=0.0;
        for (i=1;i<=n;i++) errf += fabs(fvec[i]);   /* Check function convergence. */
        if (errf <= tolf) FREERETURN
        for (i=1;i<=n;i++) p[i] = -fvec[i];   /* Right-hand side of linear equations. */
        ludcmp(fjac,n,indx,&d);     /* Solve linear equations using LU decomposition. */
        lubksb(fjac,n,indx,p);
        errx=0.0;                   /* Check root convergence. */
        for (i=1;i<=n;i++) {        /* Update solution. */
            errx += fabs(p[i]);
            x[i] += p[i];
        }
        if (errx <= tolx) FREERETURN
    }
    FREERETURN
}
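To make the calling sequence concrete, here is a minimal usage sketch, not from the book. The two-equation system F_1 = x^2 + y^2 - 4 and F_2 = xy - 1 is hypothetical, chosen only because its Jacobian is easy to write down; it has a root near (1.93185, 0.51764). The sketch must be linked against mnewt and the nrutil, ludcmp, and lubksb routines.

#include <stdio.h>
#include "nrutil.h"

void mnewt(int ntrial, float x[], int n, float tolx, float tolf);

/* Hypothetical example system: F1 = x^2 + y^2 - 4, F2 = x*y - 1. */
void usrfun(float *x, int n, float *fvec, float **fjac)
{
    fvec[1]=x[1]*x[1]+x[2]*x[2]-4.0;   /* F1 */
    fvec[2]=x[1]*x[2]-1.0;             /* F2 */
    fjac[1][1]=2.0*x[1];               /* dF1/dx */
    fjac[1][2]=2.0*x[2];               /* dF1/dy */
    fjac[2][1]=x[2];                   /* dF2/dx */
    fjac[2][2]=x[1];                   /* dF2/dy */
}

int main(void)
{
    int n=2;
    float *x=vector(1,n);

    x[1]=2.0;      /* initial guess, reasonably close to the root */
    x[2]=0.5;
    mnewt(10,x,n,1.0e-6,1.0e-6);
    printf("x = %g  y = %g\n",x[1],x[2]);
    free_vector(x,1,n);
    return 0;
}

Working the first step by hand for this system: at (2.0, 0.5) we have F = (0.25, 0) and J has rows (4, 1) and (0.5, 2), so solving (9.6.6) gives \delta x ≈ (-0.0667, 0.0167) and a new guess (1.9333, 0.5167), already within 2 × 10^{-3} of the root. This is the quadratic convergence one expects from a good starting point.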

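The description of mnewt above mentions approximating J by finite differences when analytic derivatives are awkward. The book's routine for this is fdjac of §9.7; the following is only a minimal forward-difference sketch under the same 1-indexed conventions, in which vecfunc is a hypothetical user routine filling f[1..n] with the function values at x, and fvec is assumed to hold F(x) on entry.

#include <math.h>
#include "nrutil.h"

#define EPS 1.0e-4   /* assumed step scale, about the square root of float precision */

/* Forward-difference approximation to the Jacobian: column j is
   (F(x + h e_j) - F(x))/h. On entry fvec must contain F(x). */
void fdjac_sketch(float *x, int n, float *fvec, float **fjac,
                  void (*vecfunc)(float *, int, float *))
{
    int i,j;
    float h,temp,*f=vector(1,n);

    for (j=1;j<=n;j++) {
        temp=x[j];
        h=EPS*fabs(temp);
        if (h == 0.0) h=EPS;
        x[j]=temp+h;              /* trick to reduce roundoff error: */
        h=x[j]-temp;              /* recompute h from the stored value */
        (*vecfunc)(x,n,f);        /* evaluate F(x + h e_j) */
        x[j]=temp;                /* restore x */
        for (i=1;i<=n;i++) fjac[i][j]=(f[i]-fvec[i])/h;
    }
    free_vector(f,1,n);
}

A Jacobian built this way costs n extra function evaluations per iteration, which is often cheaper than deriving and coding N^2 partial derivatives by hand, at some loss of accuracy in the step.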

Newton's Method versus Minimization

In the next chapter, we will find that there are efficient general techniques for finding a minimum of a function of many variables. Why is that task (relatively) easy, while multidimensional root finding is often quite hard? Isn't minimization equivalent to finding a zero of an N-dimensional gradient vector, not so different from zeroing an N-dimensional function? No! The components of a gradient vector are not independent, arbitrary functions. Rather, they obey so-called integrability conditions that are highly restrictive. Put crudely, you can always find a minimum by sliding downhill on a single surface. The test of "downhillness" is thus one-dimensional. There is no analogous conceptual procedure for finding a multidimensional root, where "downhill" must mean simultaneously downhill in N separate function spaces, thus allowing a multitude of trade-offs, as to how much progress in one dimension is worth compared with progress in another.

It might occur to you to carry out multidimensional root finding by collapsing all these dimensions into one: Add up the sums of squares of the individual functions F_i to get a master function F which (i) is positive definite, and (ii) has a global minimum of zero exactly at all solutions of the original set of nonlinear equations. Unfortunately, as you will see in the next chapter, the efficient algorithms for finding minima come to rest on global and local minima indiscriminately. You will often find, to your great dissatisfaction, that your function F has a great number of local minima. In Figure 9.6.1, for example, there is likely to be a local minimum wherever the zero contours of f and g make a close approach to each other. The point labeled M is such a point, and one sees that there are no nearby roots.

However, we will now see that sophisticated strategies for multidimensional root finding can in fact make use of the idea of minimizing a master function F, by combining it with Newton's method applied to the full set of functions F_i. While such methods can still occasionally fail by coming to rest on a local minimum of F, they often succeed where a direct attack via Newton's method alone fails. The next section deals with these methods.


CITED REFERENCES AND FURTHER READING:
Acton, F.S. 1970, Numerical Methods That Work; 1990, corrected edition (Washington: Mathematical Association of America), Chapter 14. [1]
Ostrowski, A.M. 1966, Solutions of Equations and Systems of Equations, 2nd ed. (New York: Academic Press).
Ortega, J., and Rheinboldt, W. 1970, Iterative Solution of Nonlinear Equations in Several Variables (New York: Academic Press).

9.7 Globally Convergent Methods for Nonlinear Systems of Equations

We have seen that Newton's method for solving nonlinear equations has an unfortunate tendency to wander off into the wild blue yonder if the initial guess is not sufficiently close to the root. A global method is one that converges to a solution from almost any starting point. In this section we will develop an algorithm that combines the rapid local convergence of Newton's method with a globally convergent strategy that will guarantee some progress towards the solution at each iteration. The algorithm is closely related to the quasi-Newton method of minimization which we will describe in §10.7.

Recall our discussion of §9.6: the Newton step for the set of equations

    F(x) = 0    (9.7.1)

is

    x_{\rm new} = x_{\rm old} + \delta x,    (9.7.2)

where

    \delta x = -J^{-1} \cdot F.    (9.7.3)

Here J is the Jacobian matrix. How do we decide whether to accept the Newton step \delta x? A reasonable strategy is to require that the step decrease |F|^2 = F \cdot F. This is the same requirement we would impose if we were trying to minimize

    f = \frac{1}{2} F \cdot F.    (9.7.4)

(The \frac{1}{2} is for later convenience.) Every solution to (9.7.1) minimizes (9.7.4), but there may be local minima of (9.7.4) that are not solutions to (9.7.1). Thus, as already mentioned, simply applying one of our minimum finding algorithms from Chapter 10 to (9.7.4) is not a good idea.

To develop a better strategy, note that the Newton step (9.7.3) is a descent direction for f:

    \nabla f \cdot \delta x = (F \cdot J) \cdot (-J^{-1} \cdot F) = -F \cdot F < 0.    (9.7.5)
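The first equality in (9.7.5) uses the fact that the gradient of (9.7.4) is F \cdot J; the text leaves this step implicit, so here it is written out componentwise:

\[
(\nabla f)_j = \frac{\partial}{\partial x_j}\,\frac{1}{2}\sum_{i=1}^{N} F_i^2
             = \sum_{i=1}^{N} F_i \frac{\partial F_i}{\partial x_j}
             = \sum_{i=1}^{N} F_i J_{ij}
             = (F \cdot J)_j .
\]

Hence \nabla f \cdot \delta x = -F \cdot F = -2f, which is strictly negative unless F = 0, that is, unless x is already a root.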
