748 Chapter 16. Integration of Ordinary Differential Equations

Sample page from NUMERICAL RECIPES IN C: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43108-5). Copyright (C) 1988-1992 by Cambridge University Press. Programs Copyright (C) 1988-1992 by Numerical Recipes Software.
out in the middle. There is possibly only one exceptional case: high-precision solution of very smooth equations with very complicated right-hand sides, as we will describe later.

Nevertheless, these methods have had a long historical run. Textbooks are full of information on them, and there are a lot of standard ODE programs around that are based on predictor-corrector methods. Many capable researchers have a lot of experience with predictor-corrector routines, and they see no reason to make a precipitous change of habit. It is not a bad idea for you to be familiar with the principles involved, and even with the sorts of bookkeeping details that are the bane of these methods. Otherwise there will be a big surprise in store when you first have to fix a problem in a predictor-corrector routine.

Let us first consider the multistep approach. Think about how integrating an ODE is different from finding the integral of a function: For a function, the integrand has a known dependence on the independent variable $x$, and can be evaluated at will. For an ODE, the "integrand" is the right-hand side, which depends both on $x$ and on the dependent variables $y$. Thus to advance the solution of $y' = f(x, y)$ from $x_n$ to $x$, we have

$$y(x) = y_n + \int_{x_n}^{x} f(x', y)\, dx' \eqno(16.7.1)$$

In a single-step method like Runge-Kutta or Bulirsch-Stoer, the value $y_{n+1}$ at $x_{n+1}$ depends only on $y_n$. In a multistep method, we approximate $f(x, y)$ by a polynomial passing through several previous points $x_n, x_{n-1}, \ldots$ and possibly also through $x_{n+1}$. The result of evaluating the integral (16.7.1) at $x = x_{n+1}$ is then of the form

$$y_{n+1} = y_n + h\left(\beta_0 y'_{n+1} + \beta_1 y'_n + \beta_2 y'_{n-1} + \beta_3 y'_{n-2} + \cdots\right) \eqno(16.7.2)$$

where $y'_n$ denotes $f(x_n, y_n)$, and so on. If $\beta_0 = 0$, the method is explicit; otherwise it is implicit.
The order of the method depends on how many previous steps we use to get each new value of $y$.

Consider how we might solve an implicit formula of the form (16.7.2) for $y_{n+1}$. Two methods suggest themselves: functional iteration and Newton's method. In functional iteration, we take some initial guess for $y_{n+1}$, insert it into the right-hand side of (16.7.2) to get an updated value of $y_{n+1}$, insert this updated value back into the right-hand side, and continue iterating. But how are we to get an initial guess for $y_{n+1}$? Easy! Just use some explicit formula of the same form as (16.7.2). This is called the predictor step. In the predictor step we are essentially extrapolating the polynomial fit to the derivative from the previous points to the new point $x_{n+1}$ and then doing the integral (16.7.1) in a Simpson-like manner from $x_n$ to $x_{n+1}$. The subsequent Simpson-like integration, using the prediction step's value of $y_{n+1}$ to interpolate the derivative, is called the corrector step. The difference between the predicted and corrected function values supplies information on the local truncation error that can be used to control accuracy and to adjust stepsize.

If one corrector step is good, aren't many better? Why not use each corrector as an improved predictor and iterate to convergence on each step? Answer: Even if you had a perfect predictor, the step would still be accurate only to the finite order of the corrector. This incurable error term is on the same order as that which your iteration is supposed to cure, so you are at best changing only the coefficient in front