Then the answer is

    \sqrt{c+id} =
    \begin{cases}
        0                     & w = 0 \\
        w + i\,\dfrac{d}{2w}  & w \neq 0,\ c \geq 0 \\
        \dfrac{|d|}{2w} + iw  & w \neq 0,\ c < 0,\ d \geq 0 \\
        \dfrac{|d|}{2w} - iw  & w \neq 0,\ c < 0,\ d < 0
    \end{cases}                                                        (5.4.7)

Routines implementing these algorithms are listed in Appendix C.

CITED REFERENCES AND FURTHER READING:

Midy, P., and Yakovlev, Y. 1991, Mathematics and Computers in Simulation, vol. 33, pp. 33–49.

Knuth, D.E. 1981, Seminumerical Algorithms, 2nd ed., vol. 2 of The Art of Computer Programming (Reading, MA: Addison-Wesley) [see solutions to exercises 4.2.1.16 and 4.6.4.41].


5.5 Recurrence Relations and Clenshaw's Recurrence Formula

Many useful functions satisfy recurrence relations, e.g.,

    (n+1)\,P_{n+1}(x) = (2n+1)\,x\,P_n(x) - n\,P_{n-1}(x)              (5.5.1)

    J_{n+1}(x) = \frac{2n}{x}\,J_n(x) - J_{n-1}(x)                     (5.5.2)

    n\,E_{n+1}(x) = e^{-x} - x\,E_n(x)                                 (5.5.3)

    \cos n\theta = 2\cos\theta\,\cos(n-1)\theta - \cos(n-2)\theta      (5.5.4)

    \sin n\theta = 2\cos\theta\,\sin(n-1)\theta - \sin(n-2)\theta      (5.5.5)

where the first three functions are Legendre polynomials, Bessel functions of the first kind, and exponential integrals, respectively. (For notation see [1].) These relations are useful for extending computational methods from two successive values of n to other values, either larger or smaller.

Equations (5.5.4) and (5.5.5) motivate us to say a few words about trigonometric functions. If your program's running time is dominated by evaluating trigonometric functions, you are probably doing something wrong. Trig functions whose arguments form a linear sequence θ = θ_0 + nδ, n = 0, 1, 2, ..., are efficiently calculated by the following recurrence,

    \cos(\theta+\delta) = \cos\theta - [\alpha\cos\theta + \beta\sin\theta]
                                                                       (5.5.6)
    \sin(\theta+\delta) = \sin\theta - [\alpha\sin\theta - \beta\cos\theta]

where α and β are the precomputed coefficients

    \alpha \equiv 2\sin^2\!\left(\frac{\delta}{2}\right) \qquad \beta \equiv \sin\delta          (5.5.7)

The reason for doing things this way, rather than with the standard (and equivalent) identities for sums of angles, is that here α and β do not lose significance if the incremental δ is small. Likewise, the adds in equation (5.5.6) should be done in the order indicated by the square brackets. We will use (5.5.6) repeatedly in Chapter 12, when we deal with Fourier transforms.
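To make the bookkeeping explicit, here is a minimal C sketch of this incremental scheme. The function name and interface are our own illustration, not one of the routines listed in the appendices.

```c
#include <math.h>

/* Fill c[n] = cos(theta0 + n*delta) and s[n] = sin(theta0 + n*delta),
   n = 0..npts-1, using the recurrence (5.5.6) with coefficients (5.5.7).
   Only a handful of trig calls are needed for the whole sequence. */
void trig_sequence(double theta0, double delta, int npts, double c[], double s[])
{
    double sd = sin(0.5*delta);
    double alpha = 2.0*sd*sd;      /* 2 sin^2(delta/2): accurate even for small delta */
    double beta  = sin(delta);
    c[0] = cos(theta0);
    s[0] = sin(theta0);
    for (int n = 1; n < npts; n++) {
        /* the bracketed corrections of (5.5.6) are formed first, as the text recommends */
        c[n] = c[n-1] - (alpha*c[n-1] + beta*s[n-1]);
        s[n] = s[n-1] - (alpha*s[n-1] - beta*c[n-1]);
    }
}
```

For very long sequences, accumulated roundoff eventually creeps in; a common precaution is to refresh c[n] and s[n] with direct library calls every few thousand steps.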
Another trick, occasionally useful, is to note that both sin θ and cos θ can be calculated via a single call to tan:

    t \equiv \tan\frac{\theta}{2} \qquad
    \cos\theta = \frac{1-t^2}{1+t^2} \qquad
    \sin\theta = \frac{2t}{1+t^2}                                      (5.5.8)

The cost of getting both sin and cos, if you need them, is thus the cost of tan plus 2 multiplies, 2 divides, and 2 adds. On machines with slow trig functions, this can be a savings. However, note that special treatment is required if θ → ±π. And also note that many modern machines have very fast trig functions; so you should not assume that equation (5.5.8) is faster without testing.
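As a small illustrative sketch (again our own, not a routine from this book), the half-angle form can be wrapped with the special treatment mentioned above for θ near ±π:

```c
#include <math.h>

/* Both cos(theta) and sin(theta) from a single call to tan, via (5.5.8).
   When theta is essentially an odd multiple of pi, tan(theta/2) overflows,
   so fall back on the library functions in that case. */
void sincos_from_tan(double theta, double *c, double *s)
{
    double t = tan(0.5*theta);
    if (!isfinite(t) || fabs(t) > 1.0e8) {   /* special treatment near theta = +/- pi */
        *c = cos(theta);
        *s = sin(theta);
        return;
    }
    double denom = 1.0 + t*t;
    *c = (1.0 - t*t)/denom;
    *s = 2.0*t/denom;
}
```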
Stability of Recurrences

You need to be aware that recurrence relations are not necessarily stable against roundoff error in the direction that you propose to go (either increasing n or decreasing n). A three-term linear recurrence relation

    y_{n+1} + a_n y_n + b_n y_{n-1} = 0, \qquad n = 1, 2, \ldots       (5.5.9)

has two linearly independent solutions, f_n and g_n say. Only one of these corresponds to the sequence of functions f_n that you are trying to generate. The other one, g_n, may be exponentially growing in the direction that you want to go, or exponentially damped, or exponentially neutral (growing or dying as some power law, for example). If it is exponentially growing, then the recurrence relation is of little or no practical use in that direction. This is the case, e.g., for (5.5.2) in the direction of increasing n, when x < n. You cannot generate Bessel functions of high n by forward recurrence on (5.5.2).

To state things a bit more formally, if

    f_n / g_n \to 0 \quad \text{as} \quad n \to \infty                 (5.5.10)

then f_n is called the minimal solution of the recurrence relation (5.5.9). Nonminimal solutions like g_n are called dominant solutions. The minimal solution is unique, if it exists, but dominant solutions are not: you can add an arbitrary multiple of f_n to a given g_n. You can evaluate any dominant solution by forward recurrence, but not the minimal solution. (Unfortunately it is sometimes the one you want.)

Abramowitz and Stegun (in their Introduction) [1] give a list of recurrences that are stable in the increasing or decreasing directions. That list does not contain all possible formulas, of course. Given a recurrence relation for some function f_n(x), you can test it yourself with about five minutes of (human) labor: For a fixed x in your range of interest, start the recurrence not with true values of f_j(x) and f_{j+1}(x), but (first) with the values 1 and 0, respectively, and then (second) with 0 and 1, respectively. Generate 10 or 20 terms of the recursive sequences in the direction that you want to go (increasing or decreasing from j), for each of the two starting conditions. Look at the difference between the corresponding members of the two sequences. If the differences stay of order unity (absolute value less than 10, say), then the recurrence is stable. If they increase slowly, then the recurrence may be mildly unstable but quite tolerably so. If they increase catastrophically, then there is an exponentially growing solution of the recurrence. If you know that the function that you want actually corresponds to the growing solution, then you can keep the recurrence formula anyway (e.g., the case of the Bessel function Y_n(x) for increasing n, see §6.5); if you don't know which solution your function corresponds to, you must at this point reject the recurrence formula. Notice that you can do this test before you go to the trouble of finding a numerical method for computing the two starting functions f_j(x) and f_{j+1}(x): stability is a property of the recurrence, not of the starting values.

An alternative heuristic procedure for testing stability is to replace the recurrence relation by a similar one that is linear with constant coefficients. For example, the relation (5.5.2) becomes

    y_{n+1} - 2\gamma y_n + y_{n-1} = 0                                (5.5.11)

where γ ≡ n/x is treated as a constant. You solve such recurrence relations by trying solutions of the form y_n = a^n. Substituting into the above recurrence gives

    a^2 - 2\gamma a + 1 = 0 \qquad \text{or} \qquad a = \gamma \pm \sqrt{\gamma^2 - 1}           (5.5.12)

The recurrence is stable if |a| ≤ 1 for all solutions a. This holds (as you can verify) if |γ| ≤ 1, i.e., n ≤ x. The recurrence (5.5.2) thus cannot be used, starting with J_0(x) and J_1(x), to compute J_n(x) for large n.

Possibly you would at this point like the security of some real theorems on this subject (although we ourselves always follow one of the heuristic procedures). Here are two theorems, due to Perron [2]:

Theorem A. If in (5.5.9) a_n ~ a n^α, b_n ~ b n^β as n → ∞, and β < 2α, then

    g_{n+1}/g_n \sim -a\,n^{\alpha}, \qquad f_{n+1}/f_n \sim -\frac{b}{a}\,n^{\beta-\alpha}      (5.5.13)

and f_n is the minimal solution to (5.5.9).

Theorem B. Under the same conditions as Theorem A, but with β = 2α, consider the characteristic polynomial

    t^2 + a t + b = 0                                                  (5.5.14)

If the roots t_1 and t_2 of (5.5.14) have distinct moduli, |t_1| > |t_2| say, then

    g_{n+1}/g_n \sim t_1\,n^{\alpha}, \qquad f_{n+1}/f_n \sim t_2\,n^{\alpha}                    (5.5.15)

and f_n is again the minimal solution to (5.5.9). Cases other than those in these two theorems are inconclusive for the existence of minimal solutions. (For more on the stability of recurrences, see [3].)
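The "five-minute" test described above is easy to mechanize. The following illustrative C fragment (our own, not a book routine) applies it to the Bessel recurrence (5.5.2) at x = 1, running upward from the two seed pairs (1, 0) and (0, 1); the differences between corresponding terms grow explosively, confirming that upward recurrence cannot be used to generate J_n(1).

```c
#include <stdio.h>

/* Heuristic stability probe for the recurrence y_{n+1} = (2n/x) y_n - y_{n-1}:
   run it upward from two different seed pairs and watch the differences. */
int main(void)
{
    double x = 1.0;
    double u0 = 1.0, u1 = 0.0;     /* first seed pair:  y_0 = 1, y_1 = 0 */
    double v0 = 0.0, v1 = 1.0;     /* second seed pair: y_0 = 0, y_1 = 1 */
    for (int n = 1; n <= 20; n++) {
        double u2 = (2.0*n/x)*u1 - u0;
        double v2 = (2.0*n/x)*v1 - v0;
        printf("n = %2d   difference of the two sequences = %g\n", n + 1, u2 - v2);
        u0 = u1; u1 = u2;
        v0 = v1; v1 = v2;
    }
    return 0;
}
```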
How do you proceed if the solution that you desire is the minimal solution? The answer lies in that old aphorism, that every cloud has a silver lining: If a recurrence relation is catastrophically unstable in one direction, then that (undesired) solution will decrease very rapidly in the reverse direction. This means that you can start with any seed values for the consecutive f_j and f_{j+1} and (when you have gone enough steps in the stable direction) you will converge to the sequence of functions that you want, times an unknown normalization factor. If there is some other way to normalize the sequence (e.g., by a formula for the sum of the f_n's), then this can be a practical means of function evaluation. The method is called Miller's algorithm. An example often given [1,4] uses equation (5.5.2) in just this way, along with the normalization formula

    1 = J_0(x) + 2J_2(x) + 2J_4(x) + 2J_6(x) + \cdots                  (5.5.16)

Incidentally, there is an important relation between three-term recurrence relations and continued fractions. Rewrite the recurrence relation (5.5.9) as

    \frac{y_n}{y_{n-1}} = -\,\frac{b_n}{a_n + y_{n+1}/y_n}             (5.5.17)

Iterating this equation, starting with n, gives

    \frac{y_n}{y_{n-1}} = -\,\cfrac{b_n}{a_n - \cfrac{b_{n+1}}{a_{n+1} - \cdots}}               (5.5.18)

Pincherle's Theorem [2] tells us that (5.5.18) converges if and only if (5.5.9) has a minimal solution f_n, in which case it converges to f_n/f_{n-1}. This result, usually for the case n = 1 and combined with some way to determine f_0, underlies many of the practical methods for computing special functions that we give in the next chapter.
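As a concrete, deliberately schematic sketch of Miller's algorithm, the following C fragment generates J_0(x) through J_nmax(x) by downward recurrence on (5.5.2) from arbitrary seeds and then rescales with (5.5.16). The routine name and the choice of starting index are our own illustrative guesses (we assume x > 0); production code, such as the routines given in the next chapter, chooses the starting order more carefully and rescales periodically to avoid overflow.

```c
#include <math.h>

/* Schematic Miller's algorithm: fill j[0..nmax] with J_0(x)..J_nmax(x). */
void bessel_miller(double x, int nmax, double j[])
{
    /* crude choice of starting order, comfortably above nmax */
    int nstart = nmax + 20 + (int)sqrt(40.0*(nmax + 1));
    double yp = 0.0;          /* arbitrary seed standing in for J_{nstart+1} */
    double y  = 1.0e-30;      /* arbitrary tiny seed standing in for J_{nstart} */
    double sum = 0.0;
    for (int n = nstart; n > 0; n--) {
        double ym = (2.0*n/x)*y - yp;      /* J_{n-1} from (5.5.2), run downward */
        yp = y;
        y  = ym;
        if (n - 1 <= nmax) j[n-1] = y;     /* save the (unnormalized) values we want */
        if ((n - 1) % 2 == 0)              /* accumulate the sum in (5.5.16) */
            sum += (n - 1 == 0) ? y : 2.0*y;
    }
    for (int n = 0; n <= nmax; n++) j[n] /= sum;   /* enforce 1 = J0 + 2J2 + 2J4 + ... */
}
```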
Clenshaw's Recurrence Formula

Clenshaw's recurrence formula [5] is an elegant and efficient way to evaluate a sum of coefficients times functions that obey a recurrence formula, e.g.,

    f(\theta) = \sum_{k=0}^{N} c_k \cos k\theta
    \qquad \text{or} \qquad
    f(x) = \sum_{k=0}^{N} c_k P_k(x)

Here is how it works: Suppose that the desired sum is

    f(x) = \sum_{k=0}^{N} c_k F_k(x)                                   (5.5.19)

and that F_k obeys the recurrence relation

    F_{n+1}(x) = \alpha(n,x)\,F_n(x) + \beta(n,x)\,F_{n-1}(x)          (5.5.20)

for some functions α(n,x) and β(n,x). Now define the quantities y_k (k = N, N−1, ..., 1) by the following recurrence:

    y_{N+2} = y_{N+1} = 0
    y_k = \alpha(k,x)\,y_{k+1} + \beta(k+1,x)\,y_{k+2} + c_k \qquad (k = N, N-1, \ldots, 1)     (5.5.21)

If you solve equation (5.5.21) for c_k on the left, and then write out explicitly the sum (5.5.19), it will look (in part) like this:

    f(x) = \cdots
         + [y_8 - \alpha(8,x)\,y_9 - \beta(9,x)\,y_{10}]\,F_8(x)
         + [y_7 - \alpha(7,x)\,y_8 - \beta(8,x)\,y_9]\,F_7(x)
         + [y_6 - \alpha(6,x)\,y_7 - \beta(7,x)\,y_8]\,F_6(x)
         + [y_5 - \alpha(5,x)\,y_6 - \beta(6,x)\,y_7]\,F_5(x)
         + \cdots
         + [y_2 - \alpha(2,x)\,y_3 - \beta(3,x)\,y_4]\,F_2(x)
         + [y_1 - \alpha(1,x)\,y_2 - \beta(2,x)\,y_3]\,F_1(x)
         + [c_0 + \beta(1,x)\,y_2 - \beta(1,x)\,y_2]\,F_0(x)           (5.5.22)

Notice that we have added and subtracted β(1,x) y_2 in the last line. If you examine the terms containing a factor of y_8 in (5.5.22), you will find that they sum to zero as a consequence of the recurrence relation (5.5.20); similarly all the other y_k's down through y_2. The only surviving terms in (5.5.22) are

    f(x) = \beta(1,x)\,F_0(x)\,y_2 + F_1(x)\,y_1 + F_0(x)\,c_0         (5.5.23)

Equations (5.5.21) and (5.5.23) are Clenshaw's recurrence formula for doing the sum (5.5.19): You make one pass down through the y_k's using (5.5.21); when you have reached y_2 and y_1 you apply (5.5.23) to get the desired answer.

Clenshaw's recurrence as written above incorporates the coefficients c_k in a downward order, with k decreasing. At each stage, the effect of all previous c_k's is "remembered" as two coefficients which multiply the functions F_{k+1} and F_k (ultimately F_0 and F_1). If the functions F_k are small when k is large, and if the coefficients c_k are small when k is small, then the sum can be dominated by small F_k's. In this case the remembered coefficients will involve a delicate cancellation and there can be a catastrophic loss of significance. An example would be to sum the trivial series

    J_{15}(1) = 0 \times J_0(1) + 0 \times J_1(1) + \ldots + 0 \times J_{14}(1) + 1 \times J_{15}(1)   (5.5.24)

Here J_{15}, which is tiny, ends up represented as a canceling linear combination of J_0 and J_1, which are of order unity.
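Before turning to the remedy for this rare failure mode, here is a minimal C sketch (our own illustration) of the downward pass (5.5.21) together with (5.5.23), specialized to the cosine sum f(θ) = Σ c_k cos kθ, for which α(k,x) = 2 cos θ and β(k,x) = −1.

```c
#include <math.h>

/* Evaluate f(theta) = sum_{k=0}^{N} c[k]*cos(k*theta) by Clenshaw's recurrence.
   Here F_k = cos(k*theta), so alpha = 2*cos(theta), beta = -1, and equation
   (5.5.23) reduces to f = c[0] + cos(theta)*y1 - y2. */
double clenshaw_cos(const double c[], int N, double theta)
{
    double alpha = 2.0*cos(theta);
    double y2 = 0.0, y1 = 0.0;          /* y_{k+2} and y_{k+1} */
    for (int k = N; k >= 1; k--) {      /* the downward pass (5.5.21) */
        double y = alpha*y1 - y2 + c[k];
        y2 = y1;
        y1 = y;
    }
    return c[0] + cos(theta)*y1 - y2;   /* (5.5.23) with F_0 = 1, F_1 = cos(theta) */
}
```

The same loop, with α(k,x) and β(k,x) supplied appropriately, handles Legendre or Chebyshev sums; only the seed functions F_0 and F_1 in the final line change.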
The solution in such cases is to use an alternative Clenshaw recurrence that incorporates the c_k's in an upward direction. The relevant equations are

    y_{-2} = y_{-1} = 0                                                (5.5.25)

    y_k = \frac{1}{\beta(k+1,x)}\left[ y_{k-2} - \alpha(k,x)\,y_{k-1} - c_k \right],
          \qquad (k = 0, 1, \ldots, N-1)                               (5.5.26)

    f(x) = c_N F_N(x) - \beta(N,x)\,F_{N-1}(x)\,y_{N-1} - F_N(x)\,y_{N-2}            (5.5.27)

The rare case where equations (5.5.25)–(5.5.27) should be used instead of equations (5.5.21) and (5.5.23) can be detected automatically by testing whether the operands in the first sum in (5.5.23) are opposite in sign and nearly equal in magnitude. Other than in this special case, Clenshaw's recurrence is always stable, independent of whether the recurrence for the functions F_k is stable in the upward or downward direction.

CITED REFERENCES AND FURTHER READING:

Abramowitz, M., and Stegun, I.A. 1964, Handbook of Mathematical Functions, Applied Mathematics Series, Volume 55 (Washington: National Bureau of Standards; reprinted 1968 by Dover Publications, New York), pp. xiii, 697. [1]

Gautschi, W. 1967, SIAM Review, vol. 9, pp. 24–82. [2]

Lakshmikantham, V., and Trigiante, D. 1988, Theory of Difference Equations: Numerical Methods and Applications (San Diego: Academic Press). [3]

Acton, F.S. 1970, Numerical Methods That Work; 1990, corrected edition (Washington: Mathematical Association of America), pp. 20ff. [4]

Clenshaw, C.W. 1962, Mathematical Tables, vol. 5, National Physical Laboratory (London: H.M. Stationery Office). [5]

Dahlquist, G., and Bjorck, A. 1974, Numerical Methods (Englewood Cliffs, NJ: Prentice-Hall), §4.4.3, p. 111.

Goodwin, E.T. (ed.) 1961, Modern Computing Methods, 2nd ed. (New York: Philosophical Library), p. 76.


5.6 Quadratic and Cubic Equations

The roots of simple algebraic equations can be viewed as being functions of the equations' coefficients. We are taught these functions in elementary algebra. Yet, surprisingly many people don't know the right way to solve a quadratic equation with two real roots, or to obtain the roots of a cubic equation.

There are two ways to write the solution of the quadratic equation

    a x^2 + b x + c = 0                                                (5.6.1)

with real coefficients a, b, c, namely

    x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}                             (5.6.2)
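To see why the naive form (5.6.2) needs care when the roots are real, note that when b^2 >> 4ac one of the two numerators subtracts nearly equal quantities. The following C sketch shows the widely used cancellation-avoiding trick of forming q = −[b + sgn(b)√(b²−4ac)]/2 and then taking q/a and c/q; we offer it as a sketch of the standard technique, not as this book's prescribed routine.

```c
#include <math.h>

/* Real roots of a*x^2 + b*x + c = 0.  Assumes a != 0, b*b - 4*a*c >= 0,
   and that b and c are not both zero (so q below is nonzero). */
void quad_roots(double a, double b, double c, double *x1, double *x2)
{
    double disc = sqrt(b*b - 4.0*a*c);
    double q = -0.5*(b + (b >= 0.0 ? disc : -disc));  /* same sign as b: no cancellation */
    *x1 = q/a;
    *x2 = c/q;
}
```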