16.7 Multistep, Multivalue, and Predictor-Corrector Methods

Here r will be a fixed vector of numbers, in the same way that B is a fixed matrix.

We fix α by requiring that the differential equation

    y'_{n+1} = f(x_{n+1}, y_{n+1})                                        (16.7.10)

be satisfied. The second of the equations in (16.7.9) is

    h y'_{n+1} = h ỹ'_{n+1} + α r_2                                       (16.7.11)

(the tilde denotes the predicted value B · y_n), and this will be consistent with (16.7.10) provided

    r_2 = 1,    α = h f(x_{n+1}, y_{n+1}) - h ỹ'_{n+1}                    (16.7.12)

The values of r_1, r_3, and r_4 are free for the inventor of a given four-value method to choose. Different choices give different orders of method (i.e., through what order in h the final expression (16.7.9) actually approximates the solution), and different stability properties.
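As a concrete illustration of equations (16.7.9)-(16.7.12), here is a minimal C sketch of one explicit step of a four-value method. It is not code from this book: the routine name fourvalue_step and the raw double[4] state are illustrative, B is assumed to be the usual Pascal-triangle propagation matrix for the Taylor-coefficient vector, r uses the explicit Adams-Bashforth choice {0, 1, 3/4, 1/6} quoted in the next paragraph, and f is a user-supplied right-hand side.

```c
/* Minimal sketch (not library code) of one explicit step of a four-value
 * method.  The state vector is y[0..3] = { y, h y', (h^2/2) y'',
 * (h^3/6) y''' }.  B is assumed to be the usual Pascal-triangle
 * propagation matrix, and r is the explicit Adams-Bashforth choice
 * r = {0, 1, 3/4, 1/6} mentioned in the text (r_2 = 1 always). */
void fourvalue_step(double x, double h, double y[4],
                    double (*f)(double x, double y))
{
    static const double B[4][4] = {
        {1.0, 1.0, 1.0, 1.0},
        {0.0, 1.0, 2.0, 3.0},
        {0.0, 0.0, 1.0, 3.0},
        {0.0, 0.0, 0.0, 1.0}
    };
    static const double r[4] = {0.0, 1.0, 0.75, 1.0/6.0};
    double ypred[4], alpha;
    int i, j;

    for (i = 0; i < 4; i++) {                  /* predictor: y~ = B . y_n */
        ypred[i] = 0.0;
        for (j = 0; j < 4; j++) ypred[i] += B[i][j] * y[j];
    }
    /* Since r[0] = 0 the corrected y_{n+1} equals ypred[0], so alpha in
     * (16.7.12) can be evaluated directly; ypred[1] already carries the
     * factor h, i.e. it is h y~'_{n+1}. */
    alpha = h * f(x + h, ypred[0]) - ypred[1];

    for (i = 0; i < 4; i++)                    /* corrector (16.7.9): y~ + alpha r */
        y[i] = ypred[i] + alpha * r[i];
}
```

For an implicit choice (r_1 ≠ 0, as in Adams-Moulton), y_{n+1} itself depends on α, so the last two steps would be iterated in predictor-corrector fashion, or solved by Newton iteration for stiff systems, rather than evaluated once.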
An interesting result, not obvious from our presentation, is that multivalue and multistep methods are entirely equivalent. In other words, the value y_{n+1} given by a multivalue method with given B and r is exactly the same value given by some multistep method with given β's in equation (16.7.2). For example, it turns out that the Adams-Bashforth formula (16.7.3) corresponds to a four-value method with r_1 = 0, r_3 = 3/4, and r_4 = 1/6. The method is explicit because r_1 = 0. The Adams-Moulton method (16.7.4) corresponds to the implicit four-value method with r_1 = 5/12, r_3 = 3/4, and r_4 = 1/6. Implicit multivalue methods are solved the same way as implicit multistep methods: either by a predictor-corrector approach using an explicit method for the predictor, or by Newton iteration for stiff systems.

Why go to all the trouble of introducing a whole new method that turns out to be equivalent to a method you already knew? The reason is that multivalue methods allow an easy solution to the two difficulties we mentioned above in actually implementing multistep methods.

Consider first the question of stepsize adjustment. To change stepsize from h to h' at some point x_n, simply multiply the components of y_n in (16.7.5) by the appropriate powers of h'/h, and you are ready to continue to x_n + h'.
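A sketch of that rescaling, assuming (as in the four-value vector used above) that the ith component of y_n in (16.7.5) carries a factor h^i; the function name rescale_stepsize is illustrative only.

```c
/* Minimal sketch of the stepsize change described above.  Component i of
 * the four-value vector carries a factor h^i, so switching from h to hnew
 * just multiplies component i by (hnew/h)^i; component 0 (y itself) is
 * untouched. */
void rescale_stepsize(double y[4], double h, double hnew)
{
    double ratio = hnew / h, factor = 1.0;
    int i;

    for (i = 1; i < 4; i++) {
        factor *= ratio;        /* factor = (hnew/h)^i */
        y[i] *= factor;
    }
}
```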
Multivalue methods also allow a relatively easy change in the order of the method: Simply change r. The usual strategy for this is first to determine the new stepsize with the current order from the error estimate. Then check what stepsize would be predicted using an order one greater and one smaller than the current order. Choose the order that allows you to take the biggest next step. Being able to change order also allows an easy solution to the starting problem: Simply start with a first-order method and let the order automatically increase to the appropriate level.
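The order-selection strategy just described can be sketched as follows. Everything here is illustrative rather than taken from the book's routines: err[k] is assumed to be a normalized error estimate for order k, and the stepsize prediction uses the standard scaling err^(-1/(k+1)) for an order-k method.

```c
#include <math.h>

/* Illustrative sketch (names and error model are assumptions, not book
 * code) of the order-selection strategy described above.  err[k] is taken
 * to be a normalized error estimate (error/tolerance) for order k on the
 * step just completed; an order-k method has local error ~ h^(k+1), so
 * its usable stepsize scales as err^(-1/(k+1)). */
int choose_order(int q, int qmax, const double err[], double h, double *hnext)
{
    int k, best = q;
    double hbest = 0.0, htry;

    for (k = q - 1; k <= q + 1; k++) {   /* try current order and its neighbors */
        if (k < 1 || k > qmax) continue;
        htry = h * pow(err[k], -1.0 / (k + 1));
        if (htry > hbest) { hbest = htry; best = k; }
    }
    *hnext = hbest;                      /* biggest allowed next step */
    return best;                         /* the order that achieves it */
}
```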
For low accuracy requirements, a Runge-Kutta routine like rkqs is almost always the most efficient choice. For high accuracy, bsstep is both robust and efficient. For very smooth functions, a variable-order PC method can invoke very high orders. If the right-hand side of the equation is relatively complicated, so that the expense of evaluating it outweighs the bookkeeping expense, then the best PC packages can outperform Bulirsch-Stoer on such problems. As you can imagine, however, such a variable-stepsize, variable-order method is not trivial to program. If