Chapter 16. Integration of Ordinary Differential Equations

16.0 Introduction

Problems involving ordinary differential equations (ODEs) can always be reduced to the study of sets of first-order differential equations. For example, the second-order equation

$$\frac{d^2y}{dx^2} + q(x)\,\frac{dy}{dx} = r(x) \qquad (16.0.1)$$

can be rewritten as two first-order equations

$$\frac{dy}{dx} = z(x), \qquad \frac{dz}{dx} = r(x) - q(x)\,z(x) \qquad (16.0.2)$$

where z is a new variable. This exemplifies the procedure for an arbitrary ODE. The usual choice for the new variables is to let them be just derivatives of each other (and of the original variable). Occasionally, it is useful to incorporate into their definition some other factors in the equation, or some powers of the independent variable, for the purpose of mitigating singular behavior that could result in overflows or increased roundoff error. Let common sense be your guide: If you find that the original variables are smooth in a solution, while your auxiliary variables are doing crazy things, then figure out why and choose different auxiliary variables.

The generic problem in ordinary differential equations is thus reduced to the study of a set of N coupled first-order differential equations for the functions $y_i$, $i = 1, 2, \ldots, N$, having the general form

$$\frac{dy_i(x)}{dx} = f_i(x, y_1, \ldots, y_N), \qquad i = 1, \ldots, N \qquad (16.0.3)$$

where the functions $f_i$ on the right-hand side are known.
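In code, the reduction amounts to nothing more than writing a routine that fills in the right-hand sides $f_i$ of (16.0.3). Here is a minimal C sketch for the system (16.0.2); the function name derivs and the placeholder coefficients q and r are hypothetical, with y[0] standing for y and y[1] for z:

#include <math.h>

double q(double x) { return 1.0; }      /* placeholder coefficient q(x)  */
double r(double x) { return sin(x); }   /* placeholder forcing term r(x) */

/* dydx[i] = f_i(x, y_1, ..., y_N) for the two-variable system (16.0.2) */
void derivs(double x, const double y[], double dydx[])
{
    dydx[0] = y[1];                     /* dy/dx = z             */
    dydx[1] = r(x) - q(x) * y[1];       /* dz/dx = r(x) - q(x) z */
}

An integrator then only ever calls such a derivative routine; it need not know that the system came from a second-order equation.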
A problem involving ODEs is not completely specified by its equations. Even more crucial in determining how to attack the problem numerically is the nature of the problem's boundary conditions. Boundary conditions are algebraic conditions on the values of the functions $y_i$ in (16.0.3). In general they can be satisfied at discrete specified points, but do not hold between those points, i.e., are not preserved automatically by the differential equations. Boundary conditions can be as simple as requiring that certain variables have certain numerical values, or as complicated as a set of nonlinear algebraic equations among the variables.

Usually, it is the nature of the boundary conditions that determines which numerical methods will be feasible. Boundary conditions divide into two broad categories.

• In initial value problems all the $y_i$ are given at some starting value $x_s$, and it is desired to find the $y_i$'s at some final point $x_f$, or at some discrete list of points (for example, at tabulated intervals).

• In two-point boundary value problems, on the other hand, boundary conditions are specified at more than one x. Typically, some of the conditions will be specified at $x_s$ and the remainder at $x_f$.

This chapter will consider exclusively the initial value problem, deferring two-point boundary value problems, which are generally more difficult, to Chapter 17.

The underlying idea of any routine for solving the initial value problem is always this: Rewrite the dy's and dx's in (16.0.3) as finite steps $\Delta y$ and $\Delta x$, and multiply the equations by $\Delta x$. This gives algebraic formulas for the change in the functions when the independent variable x is "stepped" by one "stepsize" $\Delta x$. In the limit of making the stepsize very small, a good approximation to the underlying differential equation is achieved. Literal implementation of this procedure results in Euler's method (16.1.1, below), which is, however, not recommended for any practical use. Euler's method is conceptually important, however; one way or another, practical methods all come down to this same idea: Add small increments to your functions corresponding to derivatives (right-hand sides of the equations) multiplied by stepsizes.
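To make the finite-step idea concrete, here is a literal implementation of that procedure, i.e., Euler's method, as a rough sketch only (the warning stands: not for practical use). It assumes a derivs routine like the hypothetical one sketched above, and N equations:

#define N 2   /* number of equations in the system */

/* Crude Euler integrator: from x1 to x2 in nstep equal steps,
   repeatedly adding h times the right-hand sides to y[0..N-1]. */
void euler(double y[], double x1, double x2, int nstep,
           void (*derivs)(double, const double [], double []))
{
    double dydx[N];
    double h = (x2 - x1) / nstep;
    double x = x1;
    for (int k = 0; k < nstep; k++) {
        derivs(x, y, dydx);
        for (int i = 0; i < N; i++)
            y[i] += h * dydx[i];    /* y_{n+1} = y_n + h f(x_n, y_n) */
        x += h;
    }
}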
In this chapter we consider three major types of practical numerical methods for solving initial value problems for ODEs:

• Runge-Kutta methods
• Richardson extrapolation and its particular implementation as the Bulirsch-Stoer method
• predictor-corrector methods.

A brief description of each of these types follows.

1. Runge-Kutta methods propagate a solution over an interval by combining the information from several Euler-style steps (each involving one evaluation of the right-hand f's), and then using the information obtained to match a Taylor series expansion up to some higher order.

2. Richardson extrapolation uses the powerful idea of extrapolating a computed result to the value that would have been obtained if the stepsize had been very much smaller than it actually was. In particular, extrapolation to zero stepsize is the desired goal (a bare-bones numerical illustration follows this list). The first practical ODE integrator that implemented this idea was developed by Bulirsch and Stoer, and so extrapolation methods are often called Bulirsch-Stoer methods.

3. Predictor-corrector methods store the solution along the way, and use those results to extrapolate the solution one step advanced; they then correct the extrapolation using derivative information at the new point. These are best for very smooth functions.
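As promised in item 2, here is a bare-bones illustration of extrapolation to zero stepsize, built on the crude euler routine sketched earlier (all names hypothetical). Euler's global error is $O(h)$, so results computed with stepsizes h and h/2 can be combined to cancel the leading error term; Bulirsch-Stoer applies the same idea far more powerfully, with a higher-order method and a whole sequence of decreasing stepsizes:

/* Estimate y[0] at x2 in the limit h -> 0 by Richardson extrapolation
   of two Euler integrations. For a first-order method,
   A(h) = A0 + c*h + O(h^2), so 2*A(h/2) - A(h) = A0 + O(h^2). */
double extrap_to_zero_h(const double y0[], double x1, double x2, int nstep,
                        void (*derivs)(double, const double [], double []))
{
    double ycoarse[N], yfine[N];
    for (int i = 0; i < N; i++) {
        ycoarse[i] = y0[i];
        yfine[i] = y0[i];
    }
    euler(ycoarse, x1, x2, nstep, derivs);     /* stepsize h   */
    euler(yfine, x1, x2, 2 * nstep, derivs);   /* stepsize h/2 */
    return 2.0 * yfine[0] - ycoarse[0];        /* leading error cancels */
}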
Runge-Kutta is what you use when (i) you don't know any better, or (ii) you have an intransigent problem where Bulirsch-Stoer is failing, or (iii) you have a trivial problem where computational efficiency is of no concern. Runge-Kutta succeeds virtually always; but it is not usually fastest, except when evaluating $f_i$ is cheap and moderate accuracy ($\lesssim 10^{-5}$) is required. Predictor-corrector methods, since they use past information, are somewhat more difficult to start up, but, for many smooth problems, they are computationally more efficient than Runge-Kutta. In recent years Bulirsch-Stoer has been replacing predictor-corrector in many applications, but it is too soon to say that predictor-corrector is dominated in all cases. However, it appears that only rather sophisticated predictor-corrector routines are competitive. Accordingly, we have chosen not to give an implementation of predictor-corrector in this book. We discuss predictor-corrector further in §16.7, so that you can use a canned routine should you encounter a suitable problem. In our experience, the relatively simple Runge-Kutta and Bulirsch-Stoer routines we give are adequate for most problems.

Each of the three types of methods can be organized to monitor internal consistency. This allows numerical errors which are inevitably introduced into the solution to be controlled by automatic, adaptive changing of the fundamental stepsize. We always recommend that adaptive stepsize control be implemented, and we will do so below.

In general, all three types of methods can be applied to any initial value problem. Each comes with its own set of debits and credits that must be understood before it is used.

We have organized the routines in this chapter into three nested levels. The lowest or "nitty-gritty" level is the piece we call the algorithm routine. This implements the basic formulas of the method, starts with dependent variables $y_i$ at x, and calculates new values of the dependent variables at the value x + h. The algorithm routine also yields up some information about the quality of the solution after the step. The routine is dumb, however, and it is unable to make any adaptive decision about whether the solution is of acceptable quality or not.

That quality-control decision we encode in a stepper routine. The stepper routine calls the algorithm routine. It may reject the result, set a smaller stepsize, and call the algorithm routine again, until compatibility with a predetermined accuracy criterion has been achieved. The stepper's fundamental task is to take the largest stepsize consistent with specified performance. Only when this is accomplished does the true power of an algorithm come to light.

Above the stepper is the driver routine, which starts and stops the integration, stores intermediate results, and generally acts as an interface with the user. There is nothing at all canonical about our driver routines. You should consider them to be examples, and you can customize them for your particular application.
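Schematically, the three levels nest as in the following sketch. The names are hypothetical (the book's actual routines are listed next); the algorithm level here is just an Euler step whose error is estimated by comparing one full step against two half steps, the stepper halves h until that estimate meets a tolerance eps, and the driver marches across the interval. It reuses the hypothetical derivs and N of the earlier sketches:

#include <math.h>

/* Algorithm level: advance y[0..N-1] from x by h (dumbly), returning a
   crude error estimate from a full step versus two half steps. */
double algorithm_step(double y[], double x, double h)
{
    double dydx[N], yfull[N], yhalf[N], err = 0.0;
    derivs(x, y, dydx);
    for (int i = 0; i < N; i++) {
        yfull[i] = y[i] + h * dydx[i];           /* one full Euler step */
        yhalf[i] = y[i] + 0.5 * h * dydx[i];     /* first half step     */
    }
    derivs(x + 0.5 * h, yhalf, dydx);
    for (int i = 0; i < N; i++) {
        yhalf[i] += 0.5 * h * dydx[i];           /* second half step    */
        if (fabs(yhalf[i] - yfull[i]) > err)
            err = fabs(yhalf[i] - yfull[i]);
        y[i] = yhalf[i];                         /* keep the better value */
    }
    return err;
}

/* Stepper level: reject and shrink h until the step meets eps;
   returns the stepsize actually used. */
double stepper(double y[], double x, double htry, double eps)
{
    double ytrial[N], h = htry;
    for (;;) {
        for (int i = 0; i < N; i++) ytrial[i] = y[i];   /* trial copy */
        if (algorithm_step(ytrial, x, h) <= eps) break; /* accepted   */
        h *= 0.5;                                       /* rejected   */
    }
    for (int i = 0; i < N; i++) y[i] = ytrial[i];       /* commit     */
    return h;
}

/* Driver level: start, march across the interval, and stop. */
void driver(double y[], double x1, double x2, double htry, double eps)
{
    double x = x1;
    while (x < x2) {
        double h = (x + htry > x2) ? x2 - x : htry;  /* don't overshoot */
        x += stepper(y, x, h, eps);
    }
}

A production stepper would also enlarge h again after easy steps and use a proper per-variable error norm; the routines below do both.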
Of the routines that follow, rk4, rkck, mmid, stoerm, and simpr are algorithm routines; rkqs, bsstep, stiff, and stifbs are steppers; rkdumb and odeint are drivers.

Section 16.6 of this chapter treats the subject of stiff equations, relevant both to ordinary differential equations and also to partial differential equations (Chapter 19).
CITED REFERENCES AND FURTHER READING:

Gear, C.W. 1971, Numerical Initial Value Problems in Ordinary Differential Equations (Englewood Cliffs, NJ: Prentice-Hall).

Acton, F.S. 1970, Numerical Methods That Work; 1990, corrected edition (Washington: Mathematical Association of America), Chapter 5.

Stoer, J., and Bulirsch, R. 1980, Introduction to Numerical Analysis (New York: Springer-Verlag), Chapter 7.

Lambert, J. 1973, Computational Methods in Ordinary Differential Equations (New York: Wiley).

Lapidus, L., and Seinfeld, J. 1971, Numerical Solution of Ordinary Differential Equations (New York: Academic Press).

16.1 Runge-Kutta Method

The formula for the Euler method is

$$y_{n+1} = y_n + h f(x_n, y_n) \qquad (16.1.1)$$

which advances a solution from $x_n$ to $x_{n+1} \equiv x_n + h$. The formula is unsymmetrical: It advances the solution through an interval h, but uses derivative information only at the beginning of that interval (see Figure 16.1.1). That means (and you can verify by expansion in power series) that the step's error is only one power of h smaller than the correction, i.e., $O(h^2)$ added to (16.1.1).

There are several reasons that Euler's method is not recommended for practical use, among them, (i) the method is not very accurate when compared to other, fancier, methods run at the equivalent stepsize, and (ii) neither is it very stable (see §16.6 below).

Consider, however, the use of a step like (16.1.1) to take a "trial" step to the midpoint of the interval. Then use the value of both x and y at that midpoint to compute the "real" step across the whole interval. Figure 16.1.2 illustrates the idea. In equations,

$$k_1 = h f(x_n, y_n)$$
$$k_2 = h f\!\left(x_n + \tfrac{1}{2}h,\; y_n + \tfrac{1}{2}k_1\right)$$
$$y_{n+1} = y_n + k_2 + O(h^3) \qquad (16.1.2)$$

As indicated in the error term, this symmetrization cancels out the first-order error term, making the method second order. [A method is conventionally called nth order if its error term is $O(h^{n+1})$.] In fact, (16.1.2) is called the second-order Runge-Kutta or midpoint method.

We needn't stop there. There are many ways to evaluate the right-hand side f(x, y) that all agree to first order, but that have different coefficients of higher-order error terms. Adding up the right combination of these, we can eliminate the error terms order by order. That is the basic idea of the Runge-Kutta method. Abramowitz and Stegun [1], and Gear [2], give various specific formulas that derive from this basic idea.
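Equation (16.1.2) transcribes directly into code. Here is a minimal sketch, again using the hypothetical derivs routine and N of the earlier examples (the book's production Runge-Kutta routines appear later in this chapter):

/* One midpoint-method (second-order Runge-Kutta) step for the system
   (16.0.3): trial Euler step to the midpoint, then the real step using
   the midpoint derivative, per equation (16.1.2). */
void rk2_step(double y[], double x, double h,
              void (*derivs)(double, const double [], double []))
{
    double dydx[N], ymid[N];
    derivs(x, y, dydx);                      /* k1/h = f(x_n, y_n)       */
    for (int i = 0; i < N; i++)
        ymid[i] = y[i] + 0.5 * h * dydx[i];  /* y_n + k1/2 ("trial")     */
    derivs(x + 0.5 * h, ymid, dydx);         /* k2/h = f at the midpoint */
    for (int i = 0; i < N; i++)
        y[i] += h * dydx[i];                 /* y_{n+1} = y_n + k2       */
}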