734 Chapter 16. Integration of Ordinary Differential Equations

Sample page from NUMERICAL RECIPES IN C: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43108-5). Copyright (C) 1988-1992 by Cambridge University Press. Programs Copyright (C) 1988-1992 by Numerical Recipes Software. Permission is granted for internet users to make one paper copy for their own personal use. Further reproduction, or any copying of machine-readable files (including this one) to any server computer, is strictly prohibited. To order Numerical Recipes books or CDROMs, visit website http://www.nr.com or call 1-800-872-7423 (North America only), or send email to directcustserv@cambridge.org (outside North America).

Note that for compatibility with bsstep the arrays y and d2y are of length 2n for a system of n second-order equations. The values of y are stored in the first n elements of y, while the first derivatives are stored in the second n elements. The right-hand side f is stored in the first n elements of the array d2y; the second n elements are unused. With this storage arrangement you can use bsstep simply by replacing the call to mmid with one to stoerm using the same arguments; just be sure that the argument nv of bsstep is set to 2n. You should also use the more efficient sequence of stepsizes suggested by Deuflhard:
$$n = 1, 2, 3, 4, 5, \ldots \tag{16.5.6}$$
and set KMAXX = 12 in bsstep.

CITED REFERENCES AND FURTHER READING:
Deuflhard, P. 1985, SIAM Review, vol. 27, pp. 505–535.

16.6 Stiff Sets of Equations

As soon as one deals with more than one first-order differential equation, the possibility of a stiff set of equations arises. Stiffness occurs in a problem where there are two or more very different scales of the independent variable on which the dependent variables are changing.
For example, consider the following set of equations [1]:
$$u' = 998u + 1998v$$
$$v' = -999u - 1999v \tag{16.6.1}$$
with boundary conditions
$$u(0) = 1, \qquad v(0) = 0 \tag{16.6.2}$$
By means of the transformation
$$u = 2y - z, \qquad v = -y + z \tag{16.6.3}$$
we find the solution
$$u = 2e^{-x} - e^{-1000x}$$
$$v = -e^{-x} + e^{-1000x} \tag{16.6.4}$$
If we integrated the system (16.6.1) with any of the methods given so far in this chapter, the presence of the $e^{-1000x}$ term would require a stepsize $h \ll 1/1000$ for the method to be stable (the reason for this is explained below). This is so even
Figure 16.6.1. Example of an instability encountered in integrating a stiff equation (schematic). Here it is supposed that the equation has two solutions, shown as solid and dashed lines. Although the initial conditions are such as to give the solid solution, the stability of the integration (shown as the unstable dotted sequence of segments) is determined by the more rapidly varying dashed solution, even after that solution has effectively died away to zero. Implicit integration methods are the cure.

though the $e^{-1000x}$ term is completely negligible in determining the values of $u$ and $v$ as soon as one is away from the origin (see Figure 16.6.1).

This is the generic disease of stiff equations: we are required to follow the variation in the solution on the shortest length scale to maintain stability of the integration, even though accuracy requirements allow a much larger stepsize.

To see how we might cure this problem, consider the single equation
$$y' = -cy \tag{16.6.5}$$
where $c > 0$ is a constant. The explicit (or forward) Euler scheme for integrating this equation with stepsize $h$ is
$$y_{n+1} = y_n + h y'_n = (1 - ch)y_n \tag{16.6.6}$$
The method is called explicit because the new value $y_{n+1}$ is given explicitly in terms of the old value $y_n$. Clearly the method is unstable if $h > 2/c$, for then $|y_n| \to \infty$ as $n \to \infty$.
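To see the stiffness of (16.6.1) concretely, here is a minimal forward-Euler sketch (our own illustration, not part of the Numerical Recipes library; the function name euler_u is ours). With $h$ safely below $2/1000$ the slow solution is tracked accurately, while a stepsize only a few times larger blows up:

```c
#include <math.h>

/* Forward-Euler integration of the stiff pair (16.6.1):
   u' = 998u + 1998v,  v' = -999u - 1999v,  u(0) = 1, v(0) = 0.
   Returns u after nstep steps of size h; v is returned via *vout. */
double euler_u(double h, int nstep, double *vout)
{
    double u = 1.0, v = 0.0;
    int i;
    for (i = 0; i < nstep; i++) {
        double du = 998.0*u + 1998.0*v;
        double dv = -999.0*u - 1999.0*v;
        u += h*du;    /* explicit update: y_{n+1} = y_n + h y'_n */
        v += h*dv;
    }
    if (vout) *vout = v;
    return u;
}
```

With h = 1e-4 (well under the stability limit 2/1000) the result at x = 0.1 agrees closely with the exact $u = 2e^{-x} - e^{-1000x}$; with h = 5e-3 the fast mode is amplified by $(1 - 1000h) = -4$ per step and the solution diverges.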
The simplest cure is to resort to implicit differencing, where the right-hand side is evaluated at the new $y$ location. In this case, we get the backward Euler scheme:
$$y_{n+1} = y_n + h y'_{n+1} \tag{16.6.7}$$
or
$$y_{n+1} = \frac{y_n}{1 + ch} \tag{16.6.8}$$
The method is absolutely stable: even as $h \to \infty$, $y_{n+1} \to 0$, which is in fact the correct solution of the differential equation. If we think of $x$ as representing time, then the implicit method converges to the true equilibrium solution (i.e., the solution at late times) for large stepsizes. This nice feature of implicit methods holds only for linear systems, but even in the general case implicit methods give better stability
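The contrast between (16.6.6) and (16.6.8) is easy to check numerically. In this sketch (function names are ours, for illustration only), the explicit recurrence multiplies by $(1 - ch)$ per step and the implicit one divides by $(1 + ch)$:

```c
/* Explicit vs. implicit Euler for y' = -c y, y(0) = 1. */
double euler_explicit(double c, double h, int nstep)
{
    double y = 1.0;
    int i;
    for (i = 0; i < nstep; i++)
        y *= (1.0 - c*h);          /* eq. (16.6.6) */
    return y;
}

double euler_implicit(double c, double h, int nstep)
{
    double y = 1.0;
    int i;
    for (i = 0; i < nstep; i++)
        y /= (1.0 + c*h);          /* eq. (16.6.8) */
    return y;
}
```

For c = 1000 and h = 0.01 (five times the explicit stability limit 2/c), the explicit iterate grows like $9^n$ while the implicit iterate decays monotonically toward the correct equilibrium value of zero.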
Of course, we give up accuracy in following the evolution towards equilibrium if we use large stepsizes, but we maintain stability.

These considerations can easily be generalized to sets of linear equations with constant coefficients:
$$\mathbf{y}' = -\mathbf{C} \cdot \mathbf{y} \tag{16.6.9}$$
where $\mathbf{C}$ is a positive definite matrix. Explicit differencing gives
$$\mathbf{y}_{n+1} = (\mathbf{1} - \mathbf{C}h) \cdot \mathbf{y}_n \tag{16.6.10}$$
Now a matrix $\mathbf{A}^n$ tends to zero as $n \to \infty$ only if the largest eigenvalue of $\mathbf{A}$ has magnitude less than unity. Thus $\mathbf{y}_n$ is bounded as $n \to \infty$ only if the largest eigenvalue of $\mathbf{1} - \mathbf{C}h$ has magnitude less than 1, or in other words
$$h < \frac{2}{\lambda_{\max}} \tag{16.6.11}$$
where $\lambda_{\max}$ is the largest eigenvalue of $\mathbf{C}$.

On the other hand, implicit differencing gives
$$\mathbf{y}_{n+1} = \mathbf{y}_n + h\mathbf{y}'_{n+1} \tag{16.6.12}$$
or
$$\mathbf{y}_{n+1} = (\mathbf{1} + \mathbf{C}h)^{-1} \cdot \mathbf{y}_n \tag{16.6.13}$$
If the eigenvalues of $\mathbf{C}$ are $\lambda$, then the eigenvalues of $(\mathbf{1} + \mathbf{C}h)^{-1}$ are $(1 + \lambda h)^{-1}$, which have magnitude less than one for all $h$. (Recall that all the eigenvalues of a positive definite matrix are positive.) Thus the method is stable for all stepsizes $h$. The penalty we pay for this stability is that we are required to invert a matrix at each step.

Not all equations are linear with constant coefficients, unfortunately!
For the system
$$\mathbf{y}' = \mathbf{f}(\mathbf{y}) \tag{16.6.14}$$
implicit differencing gives
$$\mathbf{y}_{n+1} = \mathbf{y}_n + h\,\mathbf{f}(\mathbf{y}_{n+1}) \tag{16.6.15}$$
In general this is some nasty set of nonlinear equations that has to be solved iteratively at each step. Suppose we try linearizing the equations, as in Newton's method:
$$\mathbf{y}_{n+1} = \mathbf{y}_n + h\left[\mathbf{f}(\mathbf{y}_n) + \left.\frac{\partial \mathbf{f}}{\partial \mathbf{y}}\right|_{\mathbf{y}_n} \cdot (\mathbf{y}_{n+1} - \mathbf{y}_n)\right] \tag{16.6.16}$$
Here $\partial \mathbf{f}/\partial \mathbf{y}$ is the matrix of the partial derivatives of the right-hand side (the Jacobian matrix). Rearrange equation (16.6.16) into the form
$$\mathbf{y}_{n+1} = \mathbf{y}_n + h\left[\mathbf{1} - h\frac{\partial \mathbf{f}}{\partial \mathbf{y}}\right]^{-1} \cdot \mathbf{f}(\mathbf{y}_n) \tag{16.6.17}$$
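For a single nonlinear equation, (16.6.17) involves only scalar division. The following sketch (ours, for illustration; the function name is an assumption) applies it to $y' = -y^2$, whose exact solution with $y(0) = 1$ is $y = 1/(1+x)$:

```c
#include <math.h>

/* One-dimensional illustration of the semi-implicit Euler step (16.6.17):
   y_{n+1} = y_n + h f(y_n) / (1 - h f'(y_n)),
   applied to y' = f(y) = -y^2, so f'(y) = -2y. */
double semi_implicit_euler(double y, double h, int nstep)
{
    int i;
    for (i = 0; i < nstep; i++) {
        double f = -y*y;          /* right-hand side f(y) */
        double fprime = -2.0*y;   /* Jacobian df/dy */
        y += h*f/(1.0 - h*fprime);
    }
    return y;
}
```

With h = 0.1 the method reaches $x = 1$ in ten steps and lands near the exact value 0.5; even with the absurdly large h = 10 the iterates remain bounded and decay, illustrating the stability gained by the implicit treatment.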
If $h$ is not too big, only one iteration of Newton's method may be accurate enough to solve equation (16.6.15) using equation (16.6.17). In other words, at each step we have to invert the matrix
$$\mathbf{1} - h\frac{\partial \mathbf{f}}{\partial \mathbf{y}} \tag{16.6.18}$$
to find $\mathbf{y}_{n+1}$. Solving implicit methods by linearization is called a "semi-implicit" method, so equation (16.6.17) is the semi-implicit Euler method. It is not guaranteed to be stable, but it usually is, because the behavior is locally similar to the case of a constant matrix $\mathbf{C}$ described above.

So far we have dealt only with implicit methods that are first-order accurate. While these are very robust, most problems will benefit from higher-order methods. There are three important classes of higher-order methods for stiff systems:
• Generalizations of the Runge-Kutta method, of which the most useful are the Rosenbrock methods. The first practical implementation of these ideas was by Kaps and Rentrop, and so these methods are also called Kaps-Rentrop methods.
• Generalizations of the Bulirsch-Stoer method, in particular a semi-implicit extrapolation method due to Bader and Deuflhard.
• Predictor-corrector methods, most of which are descendants of Gear's backward differentiation method.
We shall give implementations of the first two methods.
Note that systems where the right-hand side depends explicitly on $x$, $\mathbf{f}(\mathbf{y}, x)$, can be handled by adding $x$ to the list of dependent variables so that the system to be solved is
$$\frac{d}{dx}\begin{pmatrix} \mathbf{y} \\ x \end{pmatrix} = \begin{pmatrix} \mathbf{f} \\ 1 \end{pmatrix} \tag{16.6.19}$$
In both the routines to be given in this section, we have explicitly carried out this replacement for you, so the routines can handle right-hand sides of the form $\mathbf{f}(\mathbf{y}, x)$ without any special effort on your part.

We now mention an important point: It is absolutely crucial to scale your variables properly when integrating stiff problems with automatic stepsize adjustment. As in our nonstiff routines, you will be asked to supply a vector yscal with which the error is to be scaled. For example, to get constant fractional errors, simply set yscal = |y|. You can get constant absolute errors relative to some maximum values by setting yscal equal to those maximum values. In stiff problems, there are often strongly decreasing pieces of the solution which you are not particularly interested in following once they are small. You can control the relative error above some threshold $C$ and the absolute error below the threshold by setting
$$\text{yscal} = \max(C, |y|) \tag{16.6.20}$$
If you are using appropriate nondimensional units, then each component of $C$ should be of order unity. If you are not sure what values to take for $C$, simply try setting each component equal to unity. We strongly advocate the choice (16.6.20) for stiff problems.

One final warning: Solving stiff problems can sometimes lead to catastrophic precision loss. Be alert for situations where double precision is necessary.
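As a concrete illustration of the choice (16.6.20), the scaling vector can be filled in componentwise like this (a sketch; the helper name set_yscal is ours, not a library routine):

```c
#include <math.h>

/* Error-scaling vector per eq. (16.6.20): relative error is controlled
   above the threshold C[i], absolute error below it. */
void set_yscal(int n, const double y[], const double C[], double yscal[])
{
    int i;
    for (i = 0; i < n; i++) {
        double ay = fabs(y[i]);
        yscal[i] = (ay > C[i]) ? ay : C[i];
    }
}
```

A component far above its threshold is scaled by its own magnitude (relative error control); a decaying component that has fallen below the threshold is scaled by the threshold itself, so the integrator stops wasting effort tracking it.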
Rosenbrock Methods

These methods have the advantage of being relatively simple to understand and implement. For moderate accuracies ($\lesssim 10^{-4}$–$10^{-5}$ in the error criterion) and moderate-sized systems ($N \lesssim 10$), they are competitive with the more complicated algorithms. For more stringent parameters, Rosenbrock methods remain reliable; they merely become less efficient than competitors like the semi-implicit extrapolation method (see below).

A Rosenbrock method seeks a solution of the form
$$\mathbf{y}(x_0 + h) = \mathbf{y}_0 + \sum_{i=1}^{s} c_i \mathbf{k}_i \tag{16.6.21}$$
where the corrections $\mathbf{k}_i$ are found by solving $s$ linear equations that generalize the structure in (16.6.17):
$$(\mathbf{1} - \gamma h \mathbf{f}') \cdot \mathbf{k}_i = h\,\mathbf{f}\Big(\mathbf{y}_0 + \sum_{j=1}^{i-1} \alpha_{ij}\mathbf{k}_j\Big) + h\,\mathbf{f}' \cdot \sum_{j=1}^{i-1} \gamma_{ij}\mathbf{k}_j, \qquad i = 1,\ldots,s \tag{16.6.22}$$
Here we denote the Jacobian matrix by $\mathbf{f}'$. The coefficients $\gamma$, $c_i$, $\alpha_{ij}$, and $\gamma_{ij}$ are fixed constants independent of the problem. If $\gamma = \gamma_{ij} = 0$, this is simply a Runge-Kutta scheme. Equations (16.6.22) can be solved successively for $\mathbf{k}_1, \mathbf{k}_2, \ldots$

Crucial to the success of a stiff integration scheme is an automatic stepsize adjustment algorithm.
Kaps and Rentrop [2] discovered an embedded or Runge-Kutta-Fehlberg method as described in §16.2: Two estimates of the form (16.6.21) are computed, the "real" one $\mathbf{y}$ and a lower-order estimate $\hat{\mathbf{y}}$ with different coefficients $\hat{c}_i$, $i = 1,\ldots,\hat{s}$, where $\hat{s} < s$ but the $\mathbf{k}_i$ are the same. The difference between $\mathbf{y}$ and $\hat{\mathbf{y}}$ leads to an estimate of the local truncation error, which can then be used for stepsize control. Kaps and Rentrop showed that the smallest value of $s$ for which embedding is possible is $s = 4$, $\hat{s} = 3$, leading to a fourth-order method.

To minimize the matrix-vector multiplications on the right-hand side of (16.6.22), we rewrite the equations in terms of quantities
$$\mathbf{g}_i = \sum_{j=1}^{i-1} \gamma_{ij}\mathbf{k}_j + \gamma\mathbf{k}_i \tag{16.6.23}$$
The equations then take the form
$$\begin{aligned}
(\mathbf{1}/\gamma h - \mathbf{f}') \cdot \mathbf{g}_1 &= \mathbf{f}(\mathbf{y}_0) \\
(\mathbf{1}/\gamma h - \mathbf{f}') \cdot \mathbf{g}_2 &= \mathbf{f}(\mathbf{y}_0 + a_{21}\mathbf{g}_1) + c_{21}\mathbf{g}_1/h \\
(\mathbf{1}/\gamma h - \mathbf{f}') \cdot \mathbf{g}_3 &= \mathbf{f}(\mathbf{y}_0 + a_{31}\mathbf{g}_1 + a_{32}\mathbf{g}_2) + (c_{31}\mathbf{g}_1 + c_{32}\mathbf{g}_2)/h \\
(\mathbf{1}/\gamma h - \mathbf{f}') \cdot \mathbf{g}_4 &= \mathbf{f}(\mathbf{y}_0 + a_{41}\mathbf{g}_1 + a_{42}\mathbf{g}_2 + a_{43}\mathbf{g}_3) + (c_{41}\mathbf{g}_1 + c_{42}\mathbf{g}_2 + c_{43}\mathbf{g}_3)/h
\end{aligned} \tag{16.6.24}$$
In our implementation stiff of the Kaps-Rentrop algorithm, we have carried out the replacement (16.6.19) explicitly in equations (16.6.24), so you need not concern yourself about it. Simply provide a routine (called derivs in stiff) that returns $\mathbf{f}$ (called dydx) as a function of $x$ and $\mathbf{y}$. Also supply a routine jacobn that returns $\mathbf{f}'$ (dfdy) and $\partial\mathbf{f}/\partial x$ (dfdx) as functions of $x$ and $\mathbf{y}$. If $x$ does not occur explicitly on the right-hand side, then dfdx will be zero. Usually the Jacobian matrix will be available to you by analytic differentiation of the right-hand side $\mathbf{f}$. If not, your routine will have to compute it by numerical differencing with appropriate increments $\Delta\mathbf{y}$.

Kaps and Rentrop gave two different sets of parameters, which have slightly different stability properties. Several other sets have been proposed. Our default choice is that of Shampine [3], but we also give you one of the Kaps-Rentrop sets as an option.
Some proposed parameter sets require function evaluations outside the domain of integration; we prefer to avoid that complication. The calling sequence of stiff is exactly the same as the nonstiff routines given earlier in this chapter. It is thus “plug-compatible” with them in the general ODE integrating routine
odeint. This compatibility requires, unfortunately, one slight anomaly: While the user-supplied routine derivs is a dummy argument (which can therefore have any actual name), the other user-supplied routine is not an argument and must be named (exactly) jacobn.

stiff begins by saving the initial values, in case the step has to be repeated because the error tolerance is exceeded. The linear equations (16.6.24) are solved by first computing the LU decomposition of the matrix $\mathbf{1}/\gamma h - \mathbf{f}'$ using the routine ludcmp. Then the four $\mathbf{g}_i$ are found by back-substitution of the four different right-hand sides using lubksb. Note that each step of the integration requires one call to jacobn and three calls to derivs (one call to get dydx before calling stiff, and two calls inside stiff). The reason only three calls are needed and not four is that the parameters have been chosen so that the last two calls in equation (16.6.24) are done with the same arguments. Counting the evaluation of the Jacobian matrix as roughly equivalent to $N$ evaluations of the right-hand side $\mathbf{f}$, we see that the Kaps-Rentrop scheme involves about $N + 3$ function evaluations per step. Note that if $N$ is large and the Jacobian matrix is sparse, you should replace the LU decomposition by a suitable sparse matrix procedure.
Stepsize control depends on the fact that
$$\begin{aligned}
\mathbf{y}_{\text{exact}} &= \mathbf{y} + O(h^5) \\
\mathbf{y}_{\text{exact}} &= \hat{\mathbf{y}} + O(h^4)
\end{aligned} \tag{16.6.25}$$
Thus
$$|\mathbf{y} - \hat{\mathbf{y}}| = O(h^4) \tag{16.6.26}$$
Referring back to the steps leading from equation (16.2.4) to equation (16.2.10), we see that the new stepsize should be chosen as in equation (16.2.10) but with the exponents 1/4 and 1/5 replaced by 1/3 and 1/4, respectively. Also, experience shows that it is wise to prevent too large a stepsize change in one step, otherwise we will probably have to undo the large change in the next step. We adopt 0.5 and 1.5 as the maximum allowed decrease and increase of h in one step.

#include <math.h>
#include "nrutil.h"
#define SAFETY 0.9
#define GROW 1.5
#define PGROW -0.25
#define SHRNK 0.5
#define PSHRNK (-1.0/3.0)
#define ERRCON 0.1296
#define MAXTRY 40

Here NMAX is the maximum value of n; GROW and SHRNK are the largest and smallest factors by which stepsize can change in one step; ERRCON equals (GROW/SAFETY) raised to the power (1/PGROW) and handles the case when errmax ≈ 0.

#define GAM (1.0/2.0)
#define A21 2.0
#define A31 (48.0/25.0)
#define A32 (6.0/25.0)
#define C21 -8.0
#define C31 (372.0/25.0)
#define C32 (12.0/5.0)
#define C41 (-112.0/125.0)
#define C42 (-54.0/125.0)
#define C43 (-2.0/5.0)
#define B1 (19.0/9.0)
#define B2 (1.0/2.0)
#define B3 (25.0/108.0)
#define B4 (125.0/108.0)
#define E1 (17.0/54.0)
#define E2 (7.0/36.0)
#define E3 0.0
#define E4 (125.0/108.0)
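For orientation, here is a scalar (one-equation, autonomous) transcription of the four stages (16.6.24) and the update $y = y_0 + B_1 g_1 + B_2 g_2 + B_3 g_3 + B_4 g_4$, using the Shampine parameter values tabulated in the constants above. The matrix $\mathbf{1}/\gamma h - \mathbf{f}'$ reduces to a single number, so the LU solves become divisions. This is an illustrative sketch of ours, not a substitute for stiff:

```c
#include <math.h>

#define GAM (1.0/2.0)
#define A21 2.0
#define A31 (48.0/25.0)
#define A32 (6.0/25.0)
#define C21 -8.0
#define C31 (372.0/25.0)
#define C32 (12.0/5.0)
#define C41 (-112.0/125.0)
#define C42 (-54.0/125.0)
#define C43 (-2.0/5.0)
#define B1 (19.0/9.0)
#define B2 (1.0/2.0)
#define B3 (25.0/108.0)
#define B4 (125.0/108.0)

/* One scalar Rosenbrock step of size h for an autonomous y' = f(y),
   with Jacobian fp = df/dy. The last two stages reuse the same f
   argument, as noted in the text. */
double rosenbrock_step(double y0, double h,
                       double (*f)(double), double (*fp)(double))
{
    double d = 1.0/(GAM*h) - fp(y0);   /* scalar version of 1/(gamma h) - f' */
    double g1 = f(y0)/d;
    double g2 = (f(y0 + A21*g1) + C21*g1/h)/d;
    double y3 = y0 + A31*g1 + A32*g2;
    double g3 = (f(y3) + (C31*g1 + C32*g2)/h)/d;
    double g4 = (f(y3) + (C41*g1 + C42*g2 + C43*g3)/h)/d;
    return y0 + B1*g1 + B2*g2 + B3*g3 + B4*g4;
}

/* Test right-hand sides: slow decay y' = -y, and stiff decay y' = -1000y. */
static double f_decay(double y)  { return -y; }
static double fp_decay(double y) { (void)y; return -1.0; }
static double f_fast(double y)   { return -1000.0*y; }
static double fp_fast(double y)  { (void)y; return -1000.0; }
```

One step of size h = 0.1 on $y' = -y$ reproduces $e^{-0.1}$ to roughly single-precision accuracy, consistent with a fourth-order method; the same step applied to $y' = -1000y$ (a hundred times beyond the explicit stability limit) still yields a decaying, bounded iterate.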
#define C1X (1.0/2.0)
#define C2X (-3.0/2.0)
#define C3X (121.0/50.0)
#define C4X (29.0/250.0)
#define A2X 1.0
#define A3X (3.0/5.0)

void stiff(float y[], float dydx[], int n, float *x, float htry, float eps,
	float yscal[], float *hdid, float *hnext,
	void (*derivs)(float, float [], float []))
Fourth-order Rosenbrock step for integrating stiff o.d.e.'s, with monitoring of local truncation error to adjust stepsize. Input are the dependent variable vector y[1..n] and its derivative dydx[1..n] at the starting value of the independent variable x. Also input are the stepsize to be attempted htry, the required accuracy eps, and the vector yscal[1..n] against which the error is scaled. On output, y and x are replaced by their new values, hdid is the stepsize that was actually accomplished, and hnext is the estimated next stepsize. derivs is a user-supplied routine that computes the derivatives of the right-hand side with respect to x, while jacobn (a fixed name) is a user-supplied routine that computes the Jacobi matrix of derivatives of the right-hand side with respect to the components of y.
{
	void jacobn(float x, float y[], float dfdx[], float **dfdy, int n);
	void lubksb(float **a, int n, int *indx, float b[]);
	void ludcmp(float **a, int n, int *indx, float *d);
	int i,j,jtry,*indx;
	float d,errmax,h,xsav,**a,*dfdx,**dfdy,*dysav,*err;
	float *g1,*g2,*g3,*g4,*ysav;

	indx=ivector(1,n);
	a=matrix(1,n,1,n);
	dfdx=vector(1,n);
	dfdy=matrix(1,n,1,n);
	dysav=vector(1,n);
	err=vector(1,n);
	g1=vector(1,n);
	g2=vector(1,n);
	g3=vector(1,n);
	g4=vector(1,n);
	ysav=vector(1,n);
	xsav=(*x);                          Save initial values.
	for (i=1;i<=n;i++) {
		ysav[i]=y[i];
		dysav[i]=dydx[i];
	}
	jacobn(xsav,ysav,dfdx,dfdy,n);      The user must supply this routine to return the n-by-n matrix dfdy and the vector dfdx.
	h=htry;                             Set stepsize to the initial trial value.
	for (jtry=1;jtry<=MAXTRY;jtry++) {
		for (i=1;i<=n;i++) {            Set up the matrix 1/γh − f′.
			for (j=1;j<=n;j++) a[i][j] = -dfdy[i][j];
			a[i][i] += 1.0/(GAM*h);
		}
		ludcmp(a,n,indx,&d);            LU decomposition of the matrix.
		for (i=1;i<=n;i++)              Set up right-hand side for g1.
			g1[i]=dysav[i]+h*C1X*dfdx[i];
		lubksb(a,n,indx,g1);            Solve for g1.
		for (i=1;i<=n;i++)              Compute intermediate values of y and x.
			y[i]=ysav[i]+A21*g1[i];
		*x=xsav+A2X*h;
		(*derivs)(*x,y,dydx);           Compute dydx at the intermediate values.
		for (i=1;i<=n;i++)              Set up right-hand side for g2.
			g2[i]=dydx[i]+h*C2X*dfdx[i]+C21*g1[i]/h;
		lubksb(a,n,indx,g2);            Solve for g2.
		for (i=1;i<=n;i++)              Compute intermediate values of y and x.
			y[i]=ysav[i]+A31*g1[i]+A32*g2[i];
		*x=xsav+A3X*h;
		(*derivs)(*x,y,dydx);           Compute dydx at the intermediate values.
		for (i=1;i<=n;i++)              Set up right-hand side for g3.
			g3[i]=dydx[i]+h*C3X*dfdx[i]+(C31*g1[i]+C32*g2[i])/h;
		lubksb(a,n,indx,g3);            Solve for g3.
		for (i=1;i<=n;i++)              Set up right-hand side for g4.
			g4[i]=dydx[i]+h*C4X*dfdx[i]+(C41*g1[i]+C42*g2[i]+C43*g3[i])/h;
		lubksb(a,n,indx,g4);            Solve for g4.
		for (i=1;i<=n;i++) {            Get fourth-order estimate of y and error estimate.
			y[i]=ysav[i]+B1*g1[i]+B2*g2[i]+B3*g3[i]+B4*g4[i];
			err[i]=E1*g1[i]+E2*g2[i]+E3*g3[i]+E4*g4[i];
		}
		*x=xsav+h;
		if (*x == xsav) nrerror("stepsize not significant in stiff");
		errmax=0.0;                     Evaluate accuracy.
		for (i=1;i<=n;i++) errmax=FMAX(errmax,fabs(err[i]/yscal[i]));
		errmax /= eps;                  Scale relative to required tolerance.
		if (errmax <= 1.0) {            Step succeeded. Compute size of next step and return.
			*hdid=h;
			*hnext=(errmax > ERRCON ? SAFETY*h*pow(errmax,PGROW) : GROW*h);
			free_vector(ysav,1,n);
			free_vector(g4,1,n);
			free_vector(g3,1,n);
			free_vector(g2,1,n);
			free_vector(g1,1,n);
			free_vector(err,1,n);
			free_vector(dysav,1,n);
			free_matrix(dfdy,1,n,1,n);
			free_vector(dfdx,1,n);
			free_matrix(a,1,n,1,n);
			free_ivector(indx,1,n);
			return;
		} else {                        Truncation error too large, reduce stepsize.
			*hnext=SAFETY*h*pow(errmax,PSHRNK);
			h=(h >= 0.0 ? FMAX(*hnext,SHRNK*h) : FMIN(*hnext,SHRNK*h));
		}
	}                                   Go back and re-try step.
	nrerror("exceeded MAXTRY in stiff");
}

Here are the Kaps-Rentrop parameters, which can be substituted for those of Shampine simply by replacing the #define statements:

#define GAM 0.231
#define A21 2.0
#define A31 4.52470820736
#define A32 4.16352878860
#define C21 -5.07167533877
#define C31 6.02015272865
#define C32 0.159750684673
#define C41 -1.856343618677
#define C42 -8.50538085819
#define C43 -2.08407513602
#define B1 3.95750374663
#define B2 4.62489238836
#define B3 0.617477263873
#define B4 1.282612945268
#define E1 -2.30215540292
#define E2 -3.07363448539
#define E3 0.873280801802
#define E4 1.282612945268
#define C1X GAM
#define C2X -0.396296677520e-01
#define C3X 0.550778939579
#define C4X -0.553509845700e-01
#define A2X 0.462
#define A3X 0.880208333333

As an example of how stiff is used, one can solve the system

$$
\begin{aligned}
y_1' &= -0.013\,y_1 - 1000\,y_1 y_3\\
y_2' &= -2500\,y_2 y_3\\
y_3' &= -0.013\,y_1 - 1000\,y_1 y_3 - 2500\,y_2 y_3
\end{aligned}
\eqno(16.6.27)
$$

with initial conditions

$$y_1(0) = 1, \qquad y_2(0) = 1, \qquad y_3(0) = 0 \eqno(16.6.28)$$

(This is test problem D4 in [4].) We integrate the system up to x = 50 with an initial stepsize of h = 2.9 × 10⁻⁴ using odeint. The components of C in (16.6.20) are all set to unity. The routines derivs and jacobn for this problem are given below. Even though the ratio of largest to smallest decay constants for this problem is around 10⁶, stiff succeeds in integrating this set in only 29 steps with ε = 10⁻⁴. By contrast, the Runge-Kutta routine rkqs requires 51,012 steps!
void jacobn(float x, float y[], float dfdx[], float **dfdy, int n)
{
	int i;

	for (i=1;i<=n;i++) dfdx[i]=0.0;
	dfdy[1][1] = -0.013-1000.0*y[3];
	dfdy[1][2]=0.0;
	dfdy[1][3] = -1000.0*y[1];
	dfdy[2][1]=0.0;
	dfdy[2][2] = -2500.0*y[3];
	dfdy[2][3] = -2500.0*y[2];
	dfdy[3][1] = -0.013-1000.0*y[3];
	dfdy[3][2] = -2500.0*y[3];
	dfdy[3][3] = -1000.0*y[1]-2500.0*y[2];
}

void derivs(float x, float y[], float dydx[])
{
	dydx[1] = -0.013*y[1]-1000.0*y[1]*y[3];
	dydx[2] = -2500.0*y[2]*y[3];
	dydx[3] = -0.013*y[1]-1000.0*y[1]*y[3]-2500.0*y[2]*y[3];
}

Semi-implicit Extrapolation Method

The Bulirsch-Stoer method, which discretizes the differential equation using the modified midpoint rule, does not work for stiff problems. Bader and Deuflhard [5] discovered a semi-implicit discretization that works very well and that lends itself to extrapolation exactly as in the original Bulirsch-Stoer method. The starting point is an implicit form of the midpoint rule:

$$y_{n+1} - y_{n-1} = 2h\,f\!\left(\frac{y_{n+1}+y_{n-1}}{2}\right) \eqno(16.6.29)$$
Convert this equation into semi-implicit form by linearizing the right-hand side about f(y_n). The result is the semi-implicit midpoint rule:

$$\left(1 - h\frac{\partial f}{\partial y}\right)\cdot y_{n+1} = \left(1 + h\frac{\partial f}{\partial y}\right)\cdot y_{n-1} + 2h\left[f(y_n) - \frac{\partial f}{\partial y}\cdot y_n\right] \eqno(16.6.30)$$

It is used with a special first step, the semi-implicit Euler step (16.6.17), and a special "smoothing" last step in which the last y_n is replaced by

$$\bar{y}_n \equiv \tfrac{1}{2}\left(y_{n+1} + y_{n-1}\right) \eqno(16.6.31)$$

Bader and Deuflhard showed that the error series for this method once again involves only even powers of h.

For practical implementation, it is better to rewrite the equations using Δ_k ≡ y_{k+1} − y_k. With h = H/m, start by calculating

$$\Delta_0 = \left[1 - h\frac{\partial f}{\partial y}\right]^{-1}\cdot h f(y_0), \qquad y_1 = y_0 + \Delta_0 \eqno(16.6.32)$$

Then for k = 1, ..., m − 1, set

$$\Delta_k = \Delta_{k-1} + 2\left[1 - h\frac{\partial f}{\partial y}\right]^{-1}\cdot\left[h f(y_k) - \Delta_{k-1}\right], \qquad y_{k+1} = y_k + \Delta_k \eqno(16.6.33)$$

Finally compute

$$\Delta_m = \left[1 - h\frac{\partial f}{\partial y}\right]^{-1}\cdot\left[h f(y_m) - \Delta_{m-1}\right], \qquad \bar{y}_m = y_m + \Delta_m \eqno(16.6.34)$$

It is easy to incorporate the replacement (16.6.19) in the above formulas. The additional terms in the Jacobian that come from ∂f/∂x all cancel out of the semi-implicit midpoint rule (16.6.30). In the special first step (16.6.17), and in the corresponding equation (16.6.32), the term hf becomes hf + h²∂f/∂x. The remaining equations are all unchanged.
This algorithm is implemented in the routine simpr:

#include "nrutil.h"

void simpr(float y[], float dydx[], float dfdx[], float **dfdy, int n,
	float xs, float htot, int nstep, float yout[],
	void (*derivs)(float, float [], float []))
Performs one step of semi-implicit midpoint rule. Input are the dependent variable y[1..n], its derivative dydx[1..n], the derivative of the right-hand side with respect to x, dfdx[1..n], and the Jacobian dfdy[1..n][1..n] at xs. Also input are htot, the total step to be taken, and nstep, the number of substeps to be used. The output is returned as yout[1..n]. derivs is the user-supplied routine that calculates dydx.
{
	void lubksb(float **a, int n, int *indx, float b[]);
	void ludcmp(float **a, int n, int *indx, float *d);
	int i,j,nn,*indx;
	float d,h,x,**a,*del,*ytemp;

	indx=ivector(1,n);
	a=matrix(1,n,1,n);
	del=vector(1,n);
	ytemp=vector(1,n);
	h=htot/nstep;                       Stepsize this trip.
	for (i=1;i<=n;i++) {                Set up the matrix 1 − hf′.
		for (j=1;j<=n;j++) a[i][j] = -h*dfdy[i][j];