#include <math.h>
#include "nrutil.h"
#define FREERETURN {free_matrix(fjac,1,n,1,n);free_vector(fvec,1,n);\
    free_vector(p,1,n);free_ivector(indx,1,n);return;}

void mnewt(int ntrial, float x[], int n, float tolx, float tolf)
/* Given an initial guess x[1..n] for a root in n dimensions, take ntrial Newton-Raphson
   steps to improve the root. Stop if the root converges in either summed absolute
   variable increments tolx or summed absolute function values tolf. */
{
    void lubksb(float **a, int n, int *indx, float b[]);
    void ludcmp(float **a, int n, int *indx, float *d);
    void usrfun(float *x, int n, float *fvec, float **fjac);
    int k,i,*indx;
    float errx,errf,d,*fvec,**fjac,*p;

    indx=ivector(1,n);
    p=vector(1,n);
    fvec=vector(1,n);
    fjac=matrix(1,n,1,n);
    for (k=1;k<=ntrial;k++) {
        usrfun(x,n,fvec,fjac);   /* User function supplies function values at x in fvec */
        errf=0.0;                /* and Jacobian matrix in fjac. */
        for (i=1;i<=n;i++) errf += fabs(fvec[i]);   /* Check function convergence. */
        if (errf <= tolf) FREERETURN
        for (i=1;i<=n;i++) p[i] = -fvec[i];   /* Right-hand side of linear equations. */
        ludcmp(fjac,n,indx,&d);  /* Solve linear equations using LU decomposition. */
        lubksb(fjac,n,indx,p);
        errx=0.0;                /* Check root convergence. */
        for (i=1;i<=n;i++) {     /* Update solution. */
            errx += fabs(p[i]);
            x[i] += p[i];
        }
        if (errx <= tolx) FREERETURN
    }
    FREERETURN
}
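As a usage illustration (not part of the book's listing), here is a minimal sketch of a user-supplied usrfun and a driver for the hypothetical test system x1^2 + x2^2 = 2, x1*x2 = 1, whose root is x1 = x2 = 1. It assumes the unit-offset vector/free_vector utilities from nrutil.h that mnewt itself uses; the test system and initial guess are illustrative choices only.

#include <stdio.h>
#include "nrutil.h"

void mnewt(int ntrial, float x[], int n, float tolx, float tolf);

/* Fills fvec[1..n] with the function values F_i(x) and fjac[1..n][1..n] with the
   Jacobian dF_i/dx_j, for the hypothetical test system
       F_1 = x1^2 + x2^2 - 2 = 0,   F_2 = x1*x2 - 1 = 0. */
void usrfun(float *x, int n, float *fvec, float **fjac)
{
    fvec[1] = x[1]*x[1] + x[2]*x[2] - 2.0;
    fvec[2] = x[1]*x[2] - 1.0;
    fjac[1][1] = 2.0*x[1];   fjac[1][2] = 2.0*x[2];
    fjac[2][1] = x[2];       fjac[2][2] = x[1];
}

int main(void)
{
    float *x = vector(1,2);
    x[1] = 2.0;  x[2] = 0.5;            /* initial guess */
    mnewt(20, x, 2, 1.0e-6, 1.0e-6);    /* at most 20 Newton steps */
    printf("approximate root: (%g, %g)\n", x[1], x[2]);
    free_vector(x,1,2);
    return 0;
}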
Newton's Method versus Minimization

In the next chapter, we will find that there are efficient general techniques for finding a minimum of a function of many variables. Why is that task (relatively) easy, while multidimensional root finding is often quite hard? Isn't minimization equivalent to finding a zero of an N-dimensional gradient vector, not so different from zeroing an N-dimensional function? No! The components of a gradient vector are not independent, arbitrary functions. Rather, they obey so-called integrability conditions that are highly restrictive. Put crudely, you can always find a minimum by sliding downhill on a single surface. The test of "downhillness" is thus one-dimensional. There is no analogous conceptual procedure for finding a multidimensional root, where "downhill" must mean simultaneously downhill in N separate function spaces, thus allowing a multitude of trade-offs as to how much progress in one dimension is worth compared with progress in another.

It might occur to you to carry out multidimensional root finding by collapsing all these dimensions into one: Add up the sums of squares of the individual functions Fi to get a master function F which (i) is positive definite, and (ii) has a global minimum of zero exactly at all solutions of the original set of nonlinear equations (a short sketch of this construction follows this section). Unfortunately, as you will see in the next chapter, the efficient algorithms for finding minima come to rest on global and local minima indiscriminately. You will often find, to your great dissatisfaction, that your function F has a great number of local minima. In Figure 9.6.1, for example, there is likely to be a local minimum wherever the zero contours of f and g make a close approach to each other. The point labeled M is such a point, and one sees that there are no nearby roots.

However, we will now see that sophisticated strategies for multidimensional root finding can in fact make use of the idea of minimizing a master function F, by combining it with Newton's method applied to the full set of functions Fi. While such methods can still occasionally fail by coming to rest on a local minimum of F, they often succeed where a direct attack via Newton's method alone fails.
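To make the master-function construction concrete, the following sketch (illustrative, not from the book; the name fmaster is hypothetical) forms F = sum of Fi^2 from the same usrfun that mnewt calls. Handing this F to a general minimizer can stall at a nonzero local minimum such as the point M in Figure 9.6.1, which is exactly the failure mode just described.

void usrfun(float *x, int n, float *fvec, float **fjac);

/* Master function F(x) = sum_i F_i(x)^2: nonnegative everywhere, and zero exactly
   at roots of the original system. A minimizer applied to F may come to rest at a
   nonzero local minimum, so minimizing F alone is not a reliable root finder. */
float fmaster(float *x, int n, float *fvec, float **fjac)
{
    int i;
    float sum = 0.0;

    usrfun(x, n, fvec, fjac);   /* evaluate the F_i at x (fjac is filled but unused here) */
    for (i = 1; i <= n; i++) sum += fvec[i]*fvec[i];
    return sum;
}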