722 Chapter 16. Integration of Ordinary Differential Equations

Sample page from NUMERICAL RECIPES IN C: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43108-5). Copyright (C) 1988-1992 by Cambridge University Press. Programs Copyright (C) 1988-1992 by Numerical Recipes Software. Permission is granted for internet users to make one paper copy for their own personal use. Further reproduction, or any copying of machine-readable files (including this one) to any server computer, is strictly prohibited. To order Numerical Recipes books or CDROMs, visit website http://www.nr.com or call 1-800-872-7423 (North America only), or send email to directcustserv@cambridge.org (outside North America).

        (*rkqs)(y,dydx,nvar,&x,h,eps,yscal,&hdid,&hnext,derivs);
        if (hdid == h) ++(*nok); else ++(*nbad);
        if ((x-x2)*(x2-x1) >= 0.0) {                /* Are we done? */
            for (i=1;i<=nvar;i++) ystart[i]=y[i];
            if (kmax) {
                xp[++kount]=x;                      /* Save final step. */
                for (i=1;i<=nvar;i++) yp[i][kount]=y[i];
            }
            free_vector(dydx,1,nvar);
            free_vector(y,1,nvar);
            free_vector(yscal,1,nvar);
            return;                                 /* Normal exit. */
        }
        if (fabs(hnext) <= hmin) nrerror("Step size too small in odeint");
        h=hnext;
    }
    nrerror("Too many steps in routine odeint");
}

CITED REFERENCES AND FURTHER READING:

Gear, C.W. 1971, Numerical Initial Value Problems in Ordinary Differential Equations (Englewood Cliffs, NJ: Prentice-Hall). [1]
Cash, J.R., and Karp, A.H. 1990, ACM Transactions on Mathematical Software, vol. 16, pp. 201–222. [2]
Shampine, L.F., and Watts, H.A. 1977, in Mathematical Software III, J.R. Rice, ed. (New York: Academic Press), pp. 257–275; 1979, Applied Mathematics and Computation, vol. 5, pp. 93–121.
Forsythe, G.E., Malcolm, M.A., and Moler, C.B. 1977, Computer Methods for Mathematical Computations (Englewood Cliffs, NJ: Prentice-Hall).
16.3 Modified Midpoint Method

This section discusses the modified midpoint method, which advances a vector of dependent variables y(x) from a point x to a point x + H by a sequence of n substeps, each of size h,

    h = H/n                                             (16.3.1)

In principle, one could use the modified midpoint method in its own right as an ODE integrator. In practice, the method finds its most important application as a part of the more powerful Bulirsch-Stoer technique, treated in §16.4. You can therefore consider this section as a preamble to §16.4.

The number of right-hand side evaluations required by the modified midpoint method is n + 1. The formulas for the method are

    z_0 ≡ y(x)
    z_1 = z_0 + h f(x, z_0)
    z_{m+1} = z_{m−1} + 2h f(x + mh, z_m)   for m = 1, 2, ..., n − 1
    y(x + H) ≈ y_n ≡ (1/2)[z_n + z_{n−1} + h f(x + H, z_n)]          (16.3.2)
Here the z's are intermediate approximations which march along in steps of h, while y_n is the final approximation to y(x + H). The method is basically a "centered difference" or "midpoint" method (compare equation 16.1.2), except at the first and last points. Those give the qualifier "modified."

The modified midpoint method is a second-order method, like (16.1.2), but with the advantage of requiring (asymptotically for large n) only one derivative evaluation per step h instead of the two required by second-order Runge-Kutta. Perhaps there are applications where the simplicity of (16.3.2), easily coded in-line in some other program, recommends it. In general, however, use of the modified midpoint method by itself will be dominated by the embedded Runge-Kutta method with adaptive stepsize control, as implemented in the preceding section.

The usefulness of the modified midpoint method to the Bulirsch-Stoer technique (§16.4) derives from a "deep" result about equations (16.3.2), due to Gragg. It turns out that the error of (16.3.2), expressed as a power series in h, the stepsize, contains only even powers of h,

    y_n − y(x + H) = Σ_{i=1}^{∞} α_i h^{2i}             (16.3.3)

where H is held constant, but h changes by varying n in (16.3.1).
The importance of this even power series is that, if we play our usual tricks of combining steps to knock out higher-order error terms, we can gain two orders at a time!

For example, suppose n is even, and let y_{n/2} denote the result of applying (16.3.1) and (16.3.2) with half as many steps, n → n/2. Then the estimate

    y(x + H) ≈ (4 y_n − y_{n/2}) / 3                    (16.3.4)

is fourth-order accurate, the same as fourth-order Runge-Kutta, but requires only about 1.5 derivative evaluations per step h instead of Runge-Kutta's 4 evaluations. Don't be too anxious to implement (16.3.4), since we will soon do even better.

Now would be a good time to look back at the routine qsimp in §4.2, and especially to compare equation (4.2.4) with equation (16.3.4) above. You will see that the transition in Chapter 4 to the idea of Richardson extrapolation, as embodied in Romberg integration of §4.3, is exactly analogous to the transition in going from this section to the next one.

Here is the routine that implements the modified midpoint method, which will be used below.

#include "nrutil.h"

void mmid(float y[], float dydx[], int nvar, float xs, float htot, int nstep,
    float yout[], void (*derivs)(float, float [], float []))
/* Modified midpoint step. At xs, input the dependent variable vector y[1..nvar] and its
   derivative vector dydx[1..nvar]. Also input is htot, the total step to be made, and
   nstep, the number of substeps to be used. The output is returned as yout[1..nvar],
   which need not be a distinct array from y; if it is distinct, however, then y and
   dydx are returned undamaged. */
{
    int n,i;
    float x,swap,h2,h,*ym,*yn;
    ym=vector(1,nvar);
    yn=vector(1,nvar);
    h=htot/nstep;                           /* Stepsize this trip. */
    for (i=1;i<=nvar;i++) {
        ym[i]=y[i];
        yn[i]=y[i]+h*dydx[i];               /* First step. */
    }
    x=xs+h;
    (*derivs)(x,yn,yout);                   /* Will use yout for temporary storage of derivatives. */
    h2=2.0*h;
    for (n=2;n<=nstep;n++) {                /* General step. */
        for (i=1;i<=nvar;i++) {
            swap=ym[i]+h2*yout[i];
            ym[i]=yn[i];
            yn[i]=swap;
        }
        x += h;
        (*derivs)(x,yn,yout);
    }
    for (i=1;i<=nvar;i++)                   /* Last step. */
        yout[i]=0.5*(ym[i]+yn[i]+h*yout[i]);
    free_vector(yn,1,nvar);
    free_vector(ym,1,nvar);
}

CITED REFERENCES AND FURTHER READING:

Gear, C.W. 1971, Numerical Initial Value Problems in Ordinary Differential Equations (Englewood Cliffs, NJ: Prentice-Hall), §6.1.4.
Stoer, J., and Bulirsch, R. 1980, Introduction to Numerical Analysis (New York: Springer-Verlag), §7.2.12.

16.4 Richardson Extrapolation and the Bulirsch-Stoer Method

The techniques described in this section are not for differential equations containing nonsmooth functions. For example, you might have a differential equation whose right-hand side involves a function that is evaluated by table look-up and interpolation. If so, go back to Runge-Kutta with adaptive stepsize choice: That method does an excellent job of feeling its way through rocky or discontinuous terrain.
It is also an excellent choice for quick-and-dirty, low-accuracy solution of a set of equations. A second warning is that the techniques in this section are not particularly good for differential equations that have singular points inside the interval of integration. A regular solution must tiptoe very carefully across such points. Runge-Kutta with adaptive stepsize can sometimes effect this; more generally, there are special techniques available for such problems, beyond our scope here. Apart from those two caveats, we believe that the Bulirsch-Stoer method, discussed in this section, is the best known way to obtain high-accuracy solutions to ordinary differential equations with minimal computational effort. (A possible exception, infrequently encountered in practice, is discussed in §16.7.)