    ym=vector(1,nvar);
    yn=vector(1,nvar);
    h=htot/nstep;                             Stepsize this trip.
    for (i=1;i<=nvar;i++) {
        ym[i]=y[i];
        yn[i]=y[i]+h*dydx[i];                 First step.
    }
    x=xs+h;
    (*derivs)(x,yn,yout);                     Will use yout for temporary storage of derivatives.
    h2=2.0*h;
    for (n=2;n<=nstep;n++) {                  General step.
        for (i=1;i<=nvar;i++) {
            swap=ym[i]+h2*yout[i];
            ym[i]=yn[i];
            yn[i]=swap;
        }
        x += h;
        (*derivs)(x,yn,yout);
    }
    for (i=1;i<=nvar;i++)                     Last step.
        yout[i]=0.5*(ym[i]+yn[i]+h*yout[i]);
    free_vector(yn,1,nvar);
    free_vector(ym,1,nvar);
}

CITED REFERENCES AND FURTHER READING:

Gear, C.W. 1971, Numerical Initial Value Problems in Ordinary Differential Equations (Englewood Cliffs, NJ: Prentice-Hall), §6.1.4.

Stoer, J., and Bulirsch, R. 1980, Introduction to Numerical Analysis (New York: Springer-Verlag), §7.2.12.


16.4 Richardson Extrapolation and the Bulirsch-Stoer Method

The techniques described in this section are not for differential equations containing nonsmooth functions. For example, you might have a differential equation whose right-hand side involves a function that is evaluated by table look-up and interpolation. If so, go back to Runge-Kutta with adaptive stepsize choice: That method does an excellent job of feeling its way through rocky or discontinuous terrain. It is also an excellent choice for quick-and-dirty, low-accuracy solution of a set of equations. A second warning is that the techniques in this section are not particularly good for differential equations that have singular points inside the interval of integration. A regular solution must tiptoe very carefully across such points. Runge-Kutta with adaptive stepsize can sometimes effect this; more generally, there are special techniques available for such problems, beyond our scope here.

Apart from those two caveats, we believe that the Bulirsch-Stoer method, discussed in this section, is the best known way to obtain high-accuracy solutions to ordinary differential equations with minimal computational effort. (A possible exception, infrequently encountered in practice, is discussed in §16.7.)
[Figure 16.4.1 appears here: the interval from x to x + H is crossed by successively finer substep sequences (2, 4, 6 steps), and the results are extrapolated to ∞ steps.]

Figure 16.4.1. Richardson extrapolation as used in the Bulirsch-Stoer method. A large interval H is spanned by different sequences of finer and finer substeps. Their results are extrapolated to an answer that is supposed to correspond to infinitely fine substeps. In the Bulirsch-Stoer method, the integrations are done by the modified midpoint method, and the extrapolation technique is rational function or polynomial extrapolation.

Three key ideas are involved. The first is Richardson's deferred approach to the limit, which we already met in §4.3 on Romberg integration. The idea is to consider the final answer of a numerical calculation as itself being an analytic function (if a complicated one) of an adjustable parameter like the stepsize h. That analytic function can be probed by performing the calculation with various values of h, none of them being necessarily small enough to yield the accuracy that we desire. When we know enough about the function, we fit it to some analytic form, and then evaluate it at that mythical and golden point h = 0 (see Figure 16.4.1). Richardson extrapolation is a method for turning straw into gold! (Lead into gold for alchemist readers.)
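Here is a toy illustration of the deferred approach to the limit, separate from the routines of this chapter (the function and stepsizes are our own choices). A centered difference quotient for f'(x) has an error series in even powers of h, so two estimates, at h and h/2, determine a straight line in the variable h^2; evaluating that line at h = 0 cancels the leading error term:

#include <stdio.h>
#include <math.h>

/* Centered difference estimate of f'(x): error = c1*h^2 + c2*h^4 + ... */
static double dcentral(double (*f)(double), double x, double h)
{
    return (f(x+h)-f(x-h))/(2.0*h);
}

int main(void)
{
    double x=1.0,h=0.4;
    double d1=dcentral(cos,x,h);        /* accurate to O(h^2) */
    double d2=dcentral(cos,x,h/2.0);    /* accurate to O(h^2) */
    double dex=d2+(d2-d1)/3.0;          /* line through (h^2,d1) and (h^2/4,d2),
                                           evaluated at h=0; now O(h^4) */

    printf("estimate at h     : %.10f\n",d1);
    printf("estimate at h/2   : %.10f\n",d2);
    printf("extrapolated to 0 : %.10f\n",dex);
    printf("exact -sin(1)     : %.10f\n",-sin(x));
    return 0;
}

The Bulirsch-Stoer method plays exactly this game, with more tableau columns and with the modified midpoint method supplying the estimates.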
The second idea has to do with what kind of fitting function is used. Bulirsch and Stoer first recognized the strength of rational function extrapolation in Richardson-type applications. That strength is to break the shackles of the power series and its limited radius of convergence, out only to the distance of the first pole in the complex plane. Rational function fits can remain good approximations to analytic functions even after the various terms in powers of h all have comparable magnitudes. In other words, h can be so large as to make the whole notion of the “order” of the method meaningless — and the method can still work superbly. Nevertheless, more recent experience suggests that for smooth problems straightforward polynomial extrapolation is slightly more efficient than rational function extrapolation. We will accordingly adopt polynomial extrapolation as the default, but the routine bsstep below allows easy substitution of one kind of extrapolation for the other. You might wish at this point to review §3.1–§3.2, where polynomial and rational function extrapolation were already discussed.

The third idea was discussed in the section before this one, namely to use a method whose error function is strictly even, allowing the rational function or polynomial approximation to be in terms of the variable h^2 instead of just h.

Put these ideas together and you have the Bulirsch-Stoer method [1]. A single Bulirsch-Stoer step takes us from x to x + H, where H is supposed to be quite a large — not at all infinitesimal — distance. That single step is a grand leap consisting of many (e.g., dozens to hundreds) substeps of the modified midpoint method, which are then extrapolated to zero stepsize.

The sequence of separate attempts to cross the interval H is made with increasing values of n, the number of substeps. Bulirsch and Stoer originally proposed the sequence

    n = 2, 4, 6, 8, 12, 16, 24, 32, 48, 64, 96, ...,  [n_j = 2 n_{j−2}], ...        (16.4.1)

More recent work by Deuflhard [2,3] suggests that the sequence

    n = 2, 4, 6, 8, 10, 12, 14, ...,  [n_j = 2j], ...        (16.4.2)

is usually more efficient. For each step, we do not know in advance how far up this sequence we will go. After each successive n is tried, a polynomial extrapolation is attempted. That extrapolation gives both extrapolated values and error estimates. If the errors are not satisfactory, we go higher in n. If they are satisfactory, we go on to the next step and begin anew with n = 2.

Of course there must be some upper limit, beyond which we conclude that there is some obstacle in our path in the interval H, so that we must reduce H rather than just subdivide it more finely. In the implementations below, the maximum number of n's to be tried is called KMAXX. For reasons described below we usually take this equal to 8; the 8th value of the sequence (16.4.2) is 16, so this is the maximum number of subdivisions of H that we allow.

We enforce error control, as in the Runge-Kutta method, by monitoring internal consistency and adapting the stepsize to match a prescribed bound on the local truncation error. Each new result from the sequence of modified midpoint integrations allows a tableau like that in §3.1 to be extended by one additional set of diagonals. The size of the new correction added at each stage is taken as the (conservative) error estimate.

How should we use this error estimate to adjust the stepsize? The best strategy now known is due to Deuflhard [2,3]. For completeness we describe it here:

Suppose the absolute value of the error estimate returned from the kth column (and hence the (k + 1)st row) of the extrapolation tableau is ε_{k+1,k}. Error control is enforced by requiring

    ε_{k+1,k} < ε        (16.4.3)

as the criterion for accepting the current step, where ε is the required tolerance. For the even sequence (16.4.2) the order of the method is 2k + 1:

    ε_{k+1,k} ∼ H^{2k+1}        (16.4.4)

Thus a simple estimate of a new stepsize H_k to obtain convergence in a fixed column k would be

    H_k = H (ε/ε_{k+1,k})^{1/(2k+1)}        (16.4.5)

Which column k should we aim to achieve convergence in? Let's compare the work required for different k. Suppose A_k is the work to obtain row k of the extrapolation tableau, so A_{k+1} is the work to obtain column k. We will assume the work is dominated by the cost of evaluating the functions defining the right-hand sides of the differential equations. For n_k subdivisions in H, the number of function evaluations can be found from the recurrence

    A_1 = n_1 + 1
    A_{k+1} = A_k + n_{k+1}        (16.4.6)
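These quantities are easy to tabulate. The fragment below is a standalone illustration, not one of this chapter's routines; it lists the sequence (16.4.2) together with the work coefficients A_k. The extra +1 in A_1 counts the single evaluation of dydx at the start of the step, which all the midpoint integrations share.

#include <stdio.h>

#define KMAXX 8    /* same maximum column as in bsstep below */

int main(void)
{
    int k;
    int nseq[KMAXX+2],a[KMAXX+2];

    for (k=1;k<=KMAXX+1;k++) nseq[k]=2*k;     /* sequence (16.4.2): n_j = 2j */
    a[1]=nseq[1]+1;                           /* recurrence (16.4.6) */
    for (k=1;k<=KMAXX;k++) a[k+1]=a[k]+nseq[k+1];
    for (k=1;k<=KMAXX+1;k++)
        printf("k = %d   n_k = %2d   A_k = %3d\n",k,nseq[k],a[k]);
    return 0;
}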
The work per unit step to get column k is A_{k+1}/H_k, which we nondimensionalize with a factor of H and write as

    W_k = (A_{k+1}/H_k) H        (16.4.7)
        = A_{k+1} (ε_{k+1,k}/ε)^{1/(2k+1)}        (16.4.8)

The quantities W_k can be calculated during the integration. The optimal column index q is then defined by

    W_q = min_{k=1,...,k_f} W_k        (16.4.9)

where k_f is the final column, in which the error criterion (16.4.3) was satisfied. The q determined from (16.4.9) defines the stepsize H_q to be used as the next basic stepsize, so that we can expect to get convergence in the optimal column q.

Two important refinements have to be made to the strategy outlined so far:

  • If the current H is “too small,” then k_f will be “too small,” and so q remains “too small.” It may be desirable to increase H and aim for convergence in a column q > k_f.
  • If the current H is “too big,” we may not converge at all on the current step and we will have to decrease H. We would like to detect this by monitoring the quantities ε_{k+1,k} for each k so we can stop the current step as soon as possible.

Deuflhard's prescription for dealing with these two problems uses ideas from communication theory to determine the “average expected convergence behavior” of the extrapolation. His model produces certain correction factors α(k, q) by which H_k is to be multiplied to try to get convergence in column q. The factors α(k, q) depend only on ε and the sequence {n_i} and so can be computed once during initialization:

    α(k, q) = ε^{(A_{k+1} − A_{q+1}) / ((2k+1)(A_{q+1} − A_1 + 1))}   for k < q        (16.4.10)

with α(q, q) = 1. Now suppose convergence occurs in column q = k_f. Then rather than taking H_q for the next step, we might aim for convergence in column q + 1 with the larger stepsize

    H_{q+1} = H_q α(q, q + 1)        (16.4.11)

By equation (16.4.7) this is efficient, i.e., reduces the work per unit step, if

    A_{q+1}/H_q > A_{q+2}/H_{q+1}        (16.4.12)

or

    A_{q+1} α(q, q + 1) > A_{q+2}        (16.4.13)

During initialization, this inequality can be checked for q = 1, 2, ... to determine k_max, the largest allowed column. Then when (16.4.12) is satisfied it will always be efficient to use H_{q+1}. (In practice we limit k_max to 8 even when ε is very small, since there is very little further gain in efficiency whereas roundoff can become a problem.)

The problem of stepsize reduction is handled by computing stepsize estimates

    H̄_k ≡ H_k α(k, q),   k = 1, ..., q − 1        (16.4.14)

during the current step. The H̄'s are estimates of the stepsize to get convergence in the optimal column q. If any H̄_k is “too small,” we abandon the current step and restart using H̄_k. The criterion of being “too small” is taken to be

    H_k α(k, q + 1) < H        (16.4.15)

The α's satisfy α(k, q + 1) > α(k, q).
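Since the α's and the decision (16.4.13) depend only on ε and the sequence {n_i}, they can be tabulated once. The following standalone fragment mirrors the initialization that bsstep (below) performs whenever the tolerance changes; the tolerance value is merely illustrative, and, like the routine, it replaces ε by SAFE1·ε as a safety measure:

#include <stdio.h>
#include <math.h>

#define KMAXX 8
#define SAFE1 0.25     /* same safety factor used by bsstep */

int main(void)
{
    int k,iq,kopt;
    int nseq[KMAXX+2];
    float a[KMAXX+2];
    float alf[KMAXX+1][KMAXX+1];
    float eps=1.0e-6,eps1=SAFE1*eps;   /* illustrative tolerance */

    for (k=1;k<=KMAXX+1;k++) nseq[k]=2*k;          /* sequence (16.4.2) */
    a[1]=nseq[1]+1;                                /* work coefficients (16.4.6) */
    for (k=1;k<=KMAXX;k++) a[k+1]=a[k]+nseq[k+1];
    for (iq=2;iq<=KMAXX;iq++)                      /* alpha(k,q) for k < q, (16.4.10) */
        for (k=1;k<iq;k++)
            alf[k][iq]=pow(eps1,(a[k+1]-a[iq+1])/
                ((a[iq+1]-a[1]+1.0)*(2*k+1)));
    for (kopt=2;kopt<KMAXX;kopt++)                 /* stop when raising the order */
        if (a[kopt+1] > a[kopt]*alf[kopt-1][kopt]) /* no longer wins, per (16.4.13) */
            break;
    printf("for eps = %g the largest efficient column is %d\n",eps,kopt);
    return 0;
}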
During the first step, when we have no information about the solution, the stepsize reduction check is made for all k. Afterwards, we test for convergence and for possible stepsize reduction only in an “order window”

    max(1, q − 1) ≤ k ≤ min(k_max, q + 1)        (16.4.16)

The rationale for the order window is that if convergence appears to occur for k < q − 1, it is often spurious, the result of a fortuitously small error estimate in the extrapolation; while if you need k greater than q + 1 to obtain convergence, the local model of the convergence behavior is evidently poor and the stepsize should be cut.

#include <math.h>
#include "nrutil.h"
#define KMAXX 8            Maximum row number used in the extrapolation.
#define IMAXX (KMAXX+1)
#define SAFE1 0.25         Safety factors.
#define SAFE2 0.7
#define REDMAX 1.0e-5      Maximum factor for stepsize reduction.
#define REDMIN 0.7         Minimum factor for stepsize reduction.
#define TINY 1.0e-30       Prevents division by zero.
#define SCALMX 0.1         1/SCALMX is the maximum factor by which a stepsize can be increased.

float **d,*x;              Pointers to matrix and vector used by pzextr or rzextr.

void bsstep(float y[], float dydx[], int nv, float *xx, float htry, float eps,
    float yscal[], float *hdid, float *hnext,
    void (*derivs)(float, float [], float []))
Bulirsch-Stoer step with monitoring of local truncation error to ensure accuracy and adjust stepsize. Input are the dependent variable vector y[1..nv] and its derivative dydx[1..nv] at the starting value of the independent variable x. Also input are the stepsize to be attempted htry, the required accuracy eps, and the vector yscal[1..nv] against which the error is scaled. On output, y and x are replaced by their new values, hdid is the stepsize that was actually accomplished, and hnext is the estimated next stepsize. derivs is the user-supplied routine that computes the right-hand side derivatives. Be sure to set htry on successive steps
to the value of hnext returned from the previous step, as is the case if the routine is called by odeint.
{
    void mmid(float y[], float dydx[], int nvar, float xs, float htot,
        int nstep, float yout[], void (*derivs)(float, float[], float[]));
    void pzextr(int iest, float xest, float yest[], float yz[], float dy[],
        int nv);
    int i,iq,k,kk,km;
    static int first=1,kmax,kopt;
    static float epsold = -1.0,xnew;
    float eps1,errmax,fact,h,red,scale,work,wrkmin,xest;
    float *err,*yerr,*ysav,*yseq;
    static float a[IMAXX+1];
    static float alf[KMAXX+1][KMAXX+1];
    static int nseq[IMAXX+1]={0,2,4,6,8,10,12,14,16,18};
    int reduct,exitflag=0;

    d=matrix(1,nv,1,KMAXX);
    err=vector(1,KMAXX);
    x=vector(1,KMAXX);
    yerr=vector(1,nv);
    ysav=vector(1,nv);
    yseq=vector(1,nv);
    if (eps != epsold) {                      A new tolerance, so reinitialize.
        *hnext = xnew = -1.0e29;              “Impossible” values.
        eps1=SAFE1*eps;
        a[1]=nseq[1]+1;                       Compute work coefficients A_k.
        for (k=1;k<=KMAXX;k++) a[k+1]=a[k]+nseq[k+1];
        for (iq=2;iq<=KMAXX;iq++) {           Compute α(k,q).
            for (k=1;k<iq;k++)
                alf[k][iq]=pow(eps1,(a[k+1]-a[iq+1])/
                    ((a[iq+1]-a[1]+1.0)*(2*k+1)));
        }
        epsold=eps;
        for (kopt=2;kopt<KMAXX;kopt++)        Determine optimal row number for
            if (a[kopt+1] > a[kopt]*alf[kopt-1][kopt]) break;    convergence.
        kmax=kopt;
    }
    h=htry;
    for (i=1;i<=nv;i++) ysav[i]=y[i];         Save the starting values.
    if (*xx != xnew || h != (*hnext)) {       A new stepsize or a new integration:
        first=1;                              re-establish the order window.
        kopt=kmax;
    }
    reduct=0;
    for (;;) {
        for (k=1;k<=kmax;k++) {               Evaluate the sequence of modified
            xnew=(*xx)+h;                     midpoint integrations.
            if (xnew == (*xx)) nrerror("step size underflow in bsstep");
            mmid(ysav,dydx,nv,*xx,h,nseq[k],yseq,derivs);
            xest=SQR(h/nseq[k]);              Squared, since error series is even.
            pzextr(k,xest,yseq,y,yerr,nv);    Perform extrapolation.
            if (k != 1) {                     Compute normalized error estimate ε(k).
                errmax=TINY;
                for (i=1;i<=nv;i++) errmax=FMAX(errmax,fabs(yerr[i]/yscal[i]));
                errmax /= eps;                Scale error relative to tolerance.
                km=k-1;
                err[km]=pow(errmax/SAFE1,1.0/(2*km+1));
            }
            if (k != 1 && (k >= kopt-1 || first)) {    In order window.
                if (errmax < 1.0) {           Converged.
                    exitflag=1;
                    break;
                }
                if (k == kmax || k == kopt+1) {       Check for possible stepsize
                    red=SAFE2/err[km];                reduction.
                    break;
                }
                else if (k == kopt && alf[kopt-1][kopt] < err[km]) {
                    red=1.0/err[km];
                    break;
                }
                else if (kopt == kmax && alf[km][kmax-1] < err[km]) {
                    red=alf[km][kmax-1]*SAFE2/err[km];
                    break;
                }
                else if (alf[km][kopt] < err[km]) {
                    red=alf[km][kopt-1]/err[km];
                    break;
                }
            }
        }
        if (exitflag) break;
        red=FMIN(red,REDMIN);                 Reduce stepsize by at least REDMIN
        red=FMAX(red,REDMAX);                 and at most REDMAX.
        h *= red;
        reduct=1;
    }                                         Try again.
    *xx=xnew;                                 Successful step taken.
    *hdid=h;
    first=0;
    wrkmin=1.0e35;                            Compute optimal row for convergence
    for (kk=1;kk<=km;kk++) {                  and corresponding stepsize.
        fact=FMAX(err[kk],SCALMX);
        work=fact*a[kk+1];
        if (work < wrkmin) {
            scale=fact;
            wrkmin=work;
            kopt=kk+1;
        }
    }
    *hnext=h/scale;
    if (kopt >= k && kopt != kmax && !reduct) {
        Check for possible order increase, but not if stepsize was just reduced.
        fact=FMAX(scale/alf[kopt-1][kopt],SCALMX);
        if (a[kopt+1]*fact <= wrkmin) {
            *hnext=h/fact;
            kopt++;
        }
    }
    free_vector(yseq,1,nv);
    free_vector(ysav,1,nv);
    free_vector(yerr,1,nv);
    free_vector(x,1,KMAXX);
    free_vector(err,1,KMAXX);
    free_matrix(d,1,nv,1,KMAXX);
}
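Normally bsstep is invoked through the stepper framework of §16.2 (odeint), which chooses yscal and feeds each returned hnext back in as the next htry. For experimentation, though, it can be driven by hand. The sketch below integrates the single equation y' = −y from x = 0 to (approximately) x = 2; the test problem, tolerance, and error-scaling choice are ours for illustration, and mmid, pzextr, and the nrutil routines must be compiled in:

#include <stdio.h>
#include <math.h>
#include "nrutil.h"

#define N 1                              /* number of equations */

void bsstep(float y[], float dydx[], int nv, float *xx, float htry,
    float eps, float yscal[], float *hdid, float *hnext,
    void (*derivs)(float, float [], float []));

void decay(float x, float y[], float dydx[])    /* right-hand side: y' = -y */
{
    dydx[1] = -y[1];
}

int main(void)
{
    int i;
    float xx=0.0,h=0.5,hdid,hnext,eps=1.0e-6;
    float *y,*dydx,*yscal;

    y=vector(1,N);
    dydx=vector(1,N);
    yscal=vector(1,N);
    y[1]=1.0;                            /* initial condition y(0) = 1 */
    while (xx < 2.0) {                   /* a real driver would also trim the */
        decay(xx,y,dydx);                /*   final step to land exactly on 2 */
        for (i=1;i<=N;i++)               /* error scaling, as chosen in odeint */
            yscal[i]=fabs(y[i])+fabs(dydx[i]*h)+1.0e-30;
        bsstep(y,dydx,N,&xx,h,eps,yscal,&hdid,&hnext,decay);
        h=hnext;                         /* feed hnext back in as next htry */
    }
    printf("y(%f) = %f   exact = %f\n",xx,y[1],exp(-xx));
    free_vector(yscal,1,N);
    free_vector(dydx,1,N);
    free_vector(y,1,N);
    return 0;
}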
The polynomial extrapolation routine is based on the same algorithm as polint of §3.1. It is simpler in that it is always extrapolating to zero, rather than to an arbitrary value. However, it is more complicated in that it must individually extrapolate each component of a vector of quantities.

#include "nrutil.h"

extern float **d,*x;                       Defined in bsstep.

void pzextr(int iest, float xest, float yest[], float yz[], float dy[], int nv)
Use polynomial extrapolation to evaluate nv functions at x = 0 by fitting a polynomial to a sequence of estimates with progressively smaller values x = xest, and corresponding function vectors yest[1..nv]. This call is number iest in the sequence of calls. Extrapolated function values are output as yz[1..nv], and their estimated error is output as dy[1..nv].
{
    int k1,j;
    float q,f2,f1,delta,*c;

    c=vector(1,nv);
    x[iest]=xest;                          Save current independent variable.
    for (j=1;j<=nv;j++) dy[j]=yz[j]=yest[j];
    if (iest == 1) {                       Store first estimate in first column.
        for (j=1;j<=nv;j++) d[j][1]=yest[j];
    } else {
        for (j=1;j<=nv;j++) c[j]=yest[j];
        for (k1=1;k1<iest;k1++) {
            delta=1.0/(x[iest-k1]-xest);
            f1=xest*delta;
            f2=x[iest-k1]*delta;
            for (j=1;j<=nv;j++) {          Propagate tableau 1 diagonal more.
                q=d[j][k1];
                d[j][k1]=dy[j];
                delta=c[j]-q;
                dy[j]=f1*delta;
                c[j]=f2*delta;
                yz[j] += dy[j];
            }
        }
        for (j=1;j<=nv;j++) d[j][iest]=dy[j];
    }
    free_vector(c,1,nv);
}

Current wisdom favors polynomial extrapolation over rational function extrapolation in the Bulirsch-Stoer method. However, our feeling is that this view is guided more by the kinds of problems used for tests than by one method being actually “better.” Accordingly, we provide the optional routine rzextr for rational function extrapolation, an exact substitution for pzextr above.

#include "nrutil.h"

extern float **d,*x;                       Defined in bsstep.

void rzextr(int iest, float xest, float yest[], float yz[], float dy[], int nv)
Exact substitute for pzextr, but uses diagonal rational function extrapolation instead of polynomial extrapolation.
{
    int k,j;
    float yy,v,ddy,c,b1,b,*fx;

    fx=vector(1,iest);
    x[iest]=xest;                          Save current independent variable.
    if (iest == 1)
        for (j=1;j<=nv;j++) {
            yz[j]=yest[j];
            d[j][1]=yest[j];
            dy[j]=yest[j];
        }
    else {
        for (k=1;k<iest;k++) fx[k+1]=x[iest-k]/xest;
        for (j=1;j<=nv;j++) {              Evaluate next diagonal in tableau.
            v=d[j][1];
            d[j][1]=yy=c=yest[j];
            for (k=2;k<=iest;k++) {
                b1=fx[k]*v;
                b=b1-c;
                if (b) {
                    b=(c-v)/b;
                    ddy=c*b;
                    c=b1*b;
                } else                     Care needed to avoid division by 0.
                    ddy=v;
                if (k != iest) v=d[j][k];
                d[j][k]=ddy;
                yy += ddy;
            }
            dy[j]=ddy;
            yz[j]=yy;
        }
    }
    free_vector(fx,1,iest);
}

CITED REFERENCES AND FURTHER READING:

Stoer, J., and Bulirsch, R. 1980, Introduction to Numerical Analysis (New York: Springer-Verlag), §7.2.14. [1]

Gear, C.W. 1971, Numerical Initial Value Problems in Ordinary Differential Equations (Englewood Cliffs, NJ: Prentice-Hall), §6.2.

Deuflhard, P. 1983, Numerische Mathematik, vol. 41, pp. 399–422. [2]

Deuflhard, P. 1985, SIAM Review, vol. 27, pp. 505–535. [3]


16.5 Second-Order Conservative Equations

Usually when you have a system of high-order differential equations to solve it is best to reformulate them as a system of first-order equations, as discussed in §16.0. There is a particular class of equations that occurs quite frequently in practice where you can gain about a factor of two in efficiency by differencing the equations directly. The equations are second-order systems where the derivative does not appear on the right-hand side:

    y'' = f(x, y),   y(x_0) = y_0,   y'(x_0) = z_0        (16.5.1)

As usual, y can denote a vector of values.

Stoermer's rule, dating back to 1907, has been a popular method for discretizing such systems. With h = H/m we have

    y_1 = y_0 + h[z_0 + (h/2) f(x_0, y_0)]
    y_{k+1} − 2y_k + y_{k−1} = h^2 f(x_0 + kh, y_k),   k = 1, ..., m − 1        (16.5.2)
    z_m = (y_m − y_{m−1})/h + (h/2) f(x_0 + H, y_m)
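As a concrete check of (16.5.2), the short program below applies Stoermer's rule across a single big step H for the test problem y'' = −y, y(0) = 0, y'(0) = 1, whose solution is y = sin x. The problem and the number of substeps are our own illustrative choices; the rule itself is exactly as displayed above. Note that each substep costs a single evaluation of f.

#include <stdio.h>
#include <math.h>

static float f(float x, float y)        /* right-hand side of y'' = f(x,y) */
{
    return -y;
}

int main(void)
{
    int k,m=100;                        /* number of substeps in H */
    float x0=0.0,H=1.0,h=H/m;
    float y0=0.0,z0=1.0;                /* y(x0) and y'(x0) */
    float ykm1,yk,ykp1,zm;

    ykm1=y0;
    yk=y0+h*(z0+0.5*h*f(x0,y0));                 /* first substep of (16.5.2) */
    for (k=1;k<m;k++) {                          /* y_{k+1}-2y_k+y_{k-1} */
        ykp1=2.0*yk-ykm1+h*h*f(x0+k*h,yk);       /*   = h^2 f(x0+kh,y_k) */
        ykm1=yk;
        yk=ykp1;
    }
    zm=(yk-ykm1)/h+0.5*h*f(x0+H,yk);             /* recover y'(x0+H) */
    printf("y(1)  = %f   exact = %f\n",yk,sin(1.0));
    printf("y'(1) = %f   exact = %f\n",zm,cos(1.0));
    return 0;
}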