Teaching reference for a Digital Signal Processing course: Numerical Recipes in C, The Art of Scientific Computing, Second Edition, Chapter 4, Integration of Functions, Section 4.5, Gaussian Quadratures and Orthogonal Polynomials.



4.5 Gaussian Quadratures and Orthogonal Polynomials    147

Sample page from NUMERICAL RECIPES IN C: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43108-5). Copyright (C) 1988-1992 by Cambridge University Press. Programs Copyright (C) 1988-1992 by Numerical Recipes Software. Permission is granted for internet users to make one paper copy for their own personal use. Further reproduction, or any copying of machine-readable files (including this one) to any server computer, is strictly prohibited. To order Numerical Recipes books or CDROMs, visit website http://www.nr.com or call 1-800-872-7423 (North America only), or send email to directcustserv@cambridge.org (outside North America).

In the formulas of §4.1, the integral of a function was approximated by the sum of its functional values at a set of equally spaced points, multiplied by certain aptly chosen weighting coefficients. We saw that as we allowed ourselves more freedom in choosing the coefficients, we could achieve integration formulas of higher and higher order. The idea of Gaussian quadratures is to give ourselves the freedom to choose not only the weighting coefficients, but also the location of the abscissas at which the function is to be evaluated: They will no longer be equally spaced. Thus, we will have twice the number of degrees of freedom at our disposal; it will turn out that we can achieve Gaussian quadrature formulas whose order is, essentially, twice that of the Newton-Cotes formula with the same number of function evaluations.

Does this sound too good to be true? Well, in a sense it is. The catch is a familiar one, which cannot be overemphasized: High order is not the same as high accuracy. High order translates to high accuracy only when the integrand is very smooth, in the sense of being "well-approximated by a polynomial."

There is, however, one additional feature of Gaussian quadrature formulas that adds to their usefulness: We can arrange the choice of weights and abscissas to make the integral exact for a class of integrands "polynomials times some known function W(x)" rather than for the usual class of integrands "polynomials." The function W(x) can then be chosen to remove integrable singularities from the desired integral. Given W(x), in other words, and given an integer N, we can find a set of weights w_j and abscissas x_j such that the approximation

    \int_a^b W(x)\,f(x)\,dx \approx \sum_{j=1}^{N} w_j f(x_j)    (4.5.1)

is exact if f(x) is a polynomial. For example, to do the integral

    \int_{-1}^{1} \frac{\exp(-\cos^2 x)}{\sqrt{1-x^2}}\,dx    (4.5.2)

(not a very natural looking integral, it must be admitted), we might well be interested in a Gaussian quadrature formula based on the choice

    W(x) = \frac{1}{\sqrt{1-x^2}}    (4.5.3)

in the interval (-1, 1). (This particular choice is called Gauss-Chebyshev integration, for reasons that will become clear shortly.)


148    Chapter 4. Integration of Functions

Notice that the integration formula (4.5.1) can also be written with the weight function W(x) not overtly visible: Define g(x) ≡ W(x)f(x) and v_j ≡ w_j/W(x_j). Then (4.5.1) becomes

    \int_a^b g(x)\,dx \approx \sum_{j=1}^{N} v_j g(x_j)    (4.5.4)

Where did the function W(x) go? It is lurking there, ready to give high-order accuracy to integrands of the form polynomials times W(x), and ready to deny high-order accuracy to integrands that are otherwise perfectly smooth and well-behaved. When you find tabulations of the weights and abscissas for a given W(x), you have to determine carefully whether they are to be used with a formula in the form of (4.5.1), or like (4.5.4).

Here is an example of a quadrature routine that contains the tabulated abscissas and weights for the case W(x) = 1 and N = 10. Since the weights and abscissas are, in this case, symmetric around the midpoint of the range of integration, there are actually only five distinct values of each:

float qgaus(float (*func)(float), float a, float b)
/* Returns the integral of the function func between a and b, by ten-point
   Gauss-Legendre integration: the function is evaluated exactly ten times
   at interior points in the range of integration. */
{
    int j;
    float xr,xm,dx,s;
    static float x[]={0.0,0.1488743389,0.4333953941,   /* The abscissas and weights. */
        0.6794095682,0.8650633666,0.9739065285};       /* First value of each array not used. */
    static float w[]={0.0,0.2955242247,0.2692667193,
        0.2190863625,0.1494513491,0.0666713443};

    xm=0.5*(b+a);
    xr=0.5*(b-a);
    s=0;                     /* Will be twice the average value of the function, since the */
    for (j=1;j<=5;j++) {     /* ten weights (five numbers above each used twice) sum to 2. */
        dx=xr*x[j];
        s += w[j]*((*func)(xm+dx)+(*func)(xm-dx));
    }
    return s *= xr;          /* Scale the answer to the range of integration. */
}

The above routine illustrates that one can use Gaussian quadratures without necessarily understanding the theory behind them: One just locates tabulated weights and abscissas in a book (e.g., [1] or [2]). However, the theory is very pretty, and it will come in handy if you ever need to construct your own tabulation of weights and abscissas for an unusual choice of W(x). We will therefore give, without any proofs, some useful results that will enable you to do this. Several of the results assume that W(x) does not change sign inside (a, b), which is usually the case in practice.

The theory behind Gaussian quadratures goes back to Gauss in 1814, who used continued fractions to develop the subject. In 1826 Jacobi rederived Gauss's results by means of orthogonal polynomials. The systematic treatment of arbitrary weight functions W(x) using orthogonal polynomials is largely due to Christoffel in 1877. To introduce these orthogonal polynomials, let us fix the interval of interest to be (a, b). We can define the "scalar product of two functions f and g over a


4.5 Gaussian Quadratures and Orthogonal Polynomials    149

weight function W" as

    \langle f | g \rangle \equiv \int_a^b W(x)\,f(x)\,g(x)\,dx    (4.5.5)

The scalar product is a number, not a function of x. Two functions are said to be orthogonal if their scalar product is zero. A function is said to be normalized if its scalar product with itself is unity. A set of functions that are all mutually orthogonal and also all individually normalized is called an orthonormal set.

We can find a set of polynomials (i) that includes exactly one polynomial of order j, called p_j(x), for each j = 0, 1, 2, ..., and (ii) all of which are mutually orthogonal over the specified weight function W(x). A constructive procedure for finding such a set is the recurrence relation

    p_{-1}(x) \equiv 0
    p_0(x) \equiv 1    (4.5.6)
    p_{j+1}(x) = (x - a_j)\,p_j(x) - b_j\,p_{j-1}(x),   j = 0, 1, 2, ...

where

    a_j = \frac{\langle x p_j | p_j \rangle}{\langle p_j | p_j \rangle},   j = 0, 1, ...
    b_j = \frac{\langle p_j | p_j \rangle}{\langle p_{j-1} | p_{j-1} \rangle},   j = 1, 2, ...    (4.5.7)

The coefficient b_0 is arbitrary; we can take it to be zero.

The polynomials defined by (4.5.6) are monic, i.e., the coefficient of their leading term [x^j for p_j(x)] is unity. If we divide each p_j(x) by the constant [\langle p_j | p_j \rangle]^{1/2} we can render the set of polynomials orthonormal. One also encounters orthogonal polynomials with various other normalizations. You can convert from a given normalization to monic polynomials if you know that the coefficient of x^j in p_j is \lambda_j, say; then the monic polynomials are obtained by dividing each p_j by \lambda_j. Note that the coefficients in the recurrence relation (4.5.6) depend on the adopted normalization.

The polynomial p_j(x) can be shown to have exactly j distinct roots in the interval (a, b). Moreover, it can be shown that the roots of p_j(x) "interleave" the j - 1 roots of p_{j-1}(x), i.e., there is exactly one root of the former in between each two adjacent roots of the latter. This fact comes in handy if you need to find all the roots: You can start with the one root of p_1(x) and then, in turn, bracket the roots of each higher j, pinning them down at each stage more precisely by Newton's rule or some other root-finding scheme (see Chapter 9).

Why would you ever want to find all the roots of an orthogonal polynomial p_j(x)? Because the abscissas of the N-point Gaussian quadrature formulas (4.5.1) and (4.5.4) with weighting function W(x) in the interval (a, b) are precisely the roots of the orthogonal polynomial p_N(x) for the same interval and weighting function. This is the fundamental theorem of Gaussian quadratures, and lets you find the abscissas for any particular case.


150    Chapter 4. Integration of Functions

Once you know the abscissas x_1, ..., x_N, you need to find the weights w_j, j = 1, ..., N. One way to do this (not the most efficient) is to solve the set of linear equations

    \begin{bmatrix}
    p_0(x_1)     & \cdots & p_0(x_N) \\
    p_1(x_1)     & \cdots & p_1(x_N) \\
    \vdots       &        & \vdots   \\
    p_{N-1}(x_1) & \cdots & p_{N-1}(x_N)
    \end{bmatrix}
    \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_N \end{bmatrix}
    =
    \begin{bmatrix} \int_a^b W(x)\,p_0(x)\,dx \\ 0 \\ \vdots \\ 0 \end{bmatrix}    (4.5.8)

Equation (4.5.8) simply solves for those weights such that the quadrature (4.5.1) gives the correct answer for the integral of the first N orthogonal polynomials. Note that the zeros on the right-hand side of (4.5.8) appear because p_1(x), ..., p_{N-1}(x) are all orthogonal to p_0(x), which is a constant. It can be shown that, with those weights, the integral of the next N - 1 polynomials is also exact, so that the quadrature is exact for all polynomials of degree 2N - 1 or less. Another way to evaluate the weights (though one whose proof is beyond our scope) is by the formula

    w_j = \frac{\langle p_{N-1} | p_{N-1} \rangle}{p_{N-1}(x_j)\,p'_N(x_j)}    (4.5.9)

where p'_N(x_j) is the derivative of the orthogonal polynomial at its zero x_j.

The computation of Gaussian quadrature rules thus involves two distinct phases: (i) the generation of the orthogonal polynomials p_0, ..., p_N, i.e., the computation of the coefficients a_j, b_j in (4.5.6); (ii) the determination of the zeros of p_N(x), and the computation of the associated weights. For the case of the "classical" orthogonal polynomials, the coefficients a_j and b_j are explicitly known (equations 4.5.10-4.5.14 below) and phase (i) can be omitted. However, if you are confronted with a "nonclassical" weight function W(x), and you don't know the coefficients a_j and b_j, the construction of the associated set of orthogonal polynomials is not trivial. We discuss it at the end of this section.

Computation of the Abscissas and Weights

This task can range from easy to difficult, depending on how much you already know about your weight function and its associated polynomials. In the case of classical, well-studied, orthogonal polynomials, practically everything is known, including good approximations for their zeros. These can be used as starting guesses, enabling Newton's method (to be discussed in §9.4) to converge very rapidly. Newton's method requires the derivative p'_N(x), which is evaluated by standard relations in terms of p_N and p_{N-1}. The weights are then conveniently evaluated by equation (4.5.9). For the following named cases, this direct root-finding is faster, by a factor of 3 to 5, than any other method.

Here are the weight functions, intervals, and recurrence relations that generate the most commonly used orthogonal polynomials and their corresponding Gaussian quadrature formulas.


4.5 Gaussian Quadratures and Orthogonal Polynomials    151

Gauss-Legendre:
    W(x) = 1,   -1 < x < 1
    (j+1)\,P_{j+1} = (2j+1)\,x\,P_j - j\,P_{j-1}    (4.5.10)

Gauss-Chebyshev:
    W(x) = (1 - x^2)^{-1/2},   -1 < x < 1
    T_{j+1} = 2\,x\,T_j - T_{j-1}    (4.5.11)

Gauss-Laguerre:
    W(x) = x^\alpha e^{-x},   0 < x < \infty
    (j+1)\,L^\alpha_{j+1} = (-x + 2j + \alpha + 1)\,L^\alpha_j - (j + \alpha)\,L^\alpha_{j-1}    (4.5.12)

Gauss-Hermite:
    W(x) = e^{-x^2},   -\infty < x < \infty
    H_{j+1} = 2\,x\,H_j - 2\,j\,H_{j-1}    (4.5.13)

Gauss-Jacobi:
    W(x) = (1-x)^\alpha (1+x)^\beta,   -1 < x < 1
    c_j\,P^{(\alpha,\beta)}_{j+1} = (d_j + e_j x)\,P^{(\alpha,\beta)}_j - f_j\,P^{(\alpha,\beta)}_{j-1}    (4.5.14)

where the coefficients c_j, d_j, e_j, and f_j are given by

    c_j = 2\,(j+1)\,(j+\alpha+\beta+1)\,(2j+\alpha+\beta)
    d_j = (2j+\alpha+\beta+1)\,(\alpha^2 - \beta^2)
    e_j = (2j+\alpha+\beta)\,(2j+\alpha+\beta+1)\,(2j+\alpha+\beta+2)    (4.5.15)
    f_j = 2\,(j+\alpha)\,(j+\beta)\,(2j+\alpha+\beta+2)

We now give individual routines that calculate the abscissas and weights for these cases. First comes the most common set of abscissas and weights, those of Gauss-Legendre. The routine, due to G.B. Rybicki, uses equation (4.5.9) in the special form for the Gauss-Legendre case,

    w_j = \frac{2}{(1 - x_j^2)\,[P'_N(x_j)]^2}    (4.5.16)

The routine also scales the range of integration from (x_1, x_2) to (-1, 1), and provides abscissas x_j and weights w_j for the Gaussian formula

    \int_{x_1}^{x_2} f(x)\,dx = \sum_{j=1}^{N} w_j f(x_j)    (4.5.17)


152    Chapter 4. Integration of Functions

#include <math.h>
#define EPS 3.0e-11                /* EPS is the relative precision. */

void gauleg(float x1, float x2, float x[], float w[], int n)
/* Given the lower and upper limits of integration x1 and x2, and given n, this
   routine returns arrays x[1..n] and w[1..n] of length n, containing the abscissas
   and weights of the Gauss-Legendre n-point quadrature formula. */
{
    int m,j,i;
    double z1,z,xm,xl,pp,p3,p2,p1;  /* High precision is a good idea for this routine. */

    m=(n+1)/2;                      /* The roots are symmetric in the interval, so */
    xm=0.5*(x2+x1);                 /* we only have to find half of them. */
    xl=0.5*(x2-x1);
    for (i=1;i<=m;i++) {            /* Loop over the desired roots. */
        z=cos(3.141592654*(i-0.25)/(n+0.5));
        /* Starting with the above approximation to the ith root, we enter the
           main loop of refinement by Newton's method. */
        do {
            p1=1.0;
            p2=0.0;
            for (j=1;j<=n;j++) {    /* Loop up the recurrence relation to get the */
                p3=p2;              /* Legendre polynomial evaluated at z. */
                p2=p1;
                p1=((2.0*j-1.0)*z*p2-(j-1.0)*p3)/j;
            }
            /* p1 is now the desired Legendre polynomial. We next compute pp, its
               derivative, by a standard relation involving also p2, the polynomial
               of one lower order. */
            pp=n*(z*p1-p2)/(z*z-1.0);
            z1=z;
            z=z1-p1/pp;             /* Newton's method. */
        } while (fabs(z-z1) > EPS);
        x[i]=xm-xl*z;               /* Scale the root to the desired interval, */
        x[n+1-i]=xm+xl*z;           /* and put in its symmetric counterpart. */
        w[i]=2.0*xl/((1.0-z*z)*pp*pp);  /* Compute the weight */
        w[n+1-i]=w[i];              /* and its symmetric counterpart. */
    }
}

Next we give three routines that use initial approximations for the roots given by Stroud and Secrest [2]. The first is for Gauss-Laguerre abscissas and weights, to be used with the integration formula

    \int_0^\infty x^\alpha e^{-x} f(x)\,dx = \sum_{j=1}^{N} w_j f(x_j)    (4.5.18)

#include <math.h>
#define EPS 3.0e-14                /* Increase EPS if you don't have this precision. */
#define MAXIT 10

void gaulag(float x[], float w[], int n, float alf)
/* Given alf, the parameter alpha of the Laguerre polynomials, this routine returns
   arrays x[1..n] and w[1..n] containing the abscissas and weights of the n-point
   Gauss-Laguerre quadrature formula. The smallest abscissa is returned in x[1],
   the largest in x[n]. */
{
    float gammln(float xx);
    void nrerror(char error_text[]);
    int i,its,j;
    float ai;


4.5 Gaussian Quadratures and Orthogonal Polynomials 153 Permission is granted for internet users to make one paper copy for their own personal use. Further reproduction, or any copyin Copyright (C) 1988-1992 by Cambridge University Press. Programs Copyright (C) 1988-1992 by Numerical Recipes Software. Sample page from NUMERICAL RECIPES IN C: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43108-5) g of machine￾readable files (including this one) to any server computer, is strictly prohibited. To order Numerical Recipes books or CDROMs, visit website http://www.nr.com or call 1-800-872-7423 (North America only), or send email to directcustserv@cambridge.org (outside North America). double p1,p2,p3,pp,z,z1; High precision is a good idea for this rou￾tine. for (i=1;i MAXIT) nrerror("too many iterations in gaulag"); x[i]=z; Store the root and the weight. w[i] = -exp(gammln(alf+n)-gammln((float)n))/(pp*n*p2); } } Next is a routine for Gauss-Hermite abscissas and weights. If we use the “standard” normalization of these functions, as given in equation (4.5.13), we find that the computations overflow for large N because of various factorials that occur. We can avoid this by using instead the orthonormal set of polynomials H j . They are generated by the recurrence H −1 = 0, H 0 = 1 π1/4 , H j+1 = x 2 j + 1 H j − j j + 1 H j−1 (4.5.19) The formula for the weights becomes wj = 2 [H N (xj )]2 (4.5.20) while the formula for the derivative with this normalization is H j = 2jH j−1 (4.5.21) The abscissas and weights returned by gauher are used with the integration formula  ∞ −∞ e−x2 f(x)dx = N j=1 wjf(xj ) (4.5.22)

154 Chapter 4. Integration of Functions

#include <math.h>
#define EPS 3.0e-14                     Relative precision.
#define PIM4 0.7511255444649425         1/π^{1/4}.
#define MAXIT 10                        Maximum iterations.

void gauher(float x[], float w[], int n)
Given n, this routine returns arrays x[1..n] and w[1..n] containing the abscissas and weights
of the n-point Gauss-Hermite quadrature formula. The largest abscissa is returned in x[1],
the most negative in x[n].
{
    void nrerror(char error_text[]);
    int i,its,j,m;
    double p1,p2,p3,pp,z,z1;            High precision is a good idea for this routine.

    m=(n+1)/2;                          The roots are symmetric about the origin, so
    for (i=1;i<=m;i++) {                we have to find only half of them.
        if (i == 1) {                   Initial guess for the largest root.
            z=sqrt((double)(2*n+1))-1.85575*pow((double)(2*n+1),-0.16667);
        } else if (i == 2) {            Initial guess for the second largest root.
            z -= 1.14*pow((double)n,0.426)/z;
        } else if (i == 3) {            Initial guess for the third largest root.
            z=1.86*z-0.86*x[1];
        } else if (i == 4) {            Initial guess for the fourth largest root.
            z=1.91*z-0.91*x[2];
        } else {                        Initial guess for the other roots.
            z=2.0*z-x[i-2];
        }
        for (its=1;its<=MAXIT;its++) {  Refinement by Newton's method.
            p1=PIM4;
            p2=0.0;
            for (j=1;j<=n;j++) {        Loop up the recurrence relation to get the
                p3=p2;                  Hermite polynomial evaluated at z.
                p2=p1;
                p1=z*sqrt(2.0/j)*p2-sqrt(((double)(j-1))/j)*p3;
            }
            p1 is now the desired Hermite polynomial. We next compute pp, its derivative,
            by the relation (4.5.21) using p2, the polynomial of one lower order.
            pp=sqrt((double)2*n)*p2;
            z1=z;
            z=z1-p1/pp;                 Newton's formula.
            if (fabs(z-z1) <= EPS) break;
        }
        if (its > MAXIT) nrerror("too many iterations in gauher");
        x[i]=z;                         Store the root
        x[n+1-i] = -z;                  and its symmetric counterpart.
        w[i]=2.0/(pp*pp);               Compute the weight
        w[n+1-i]=w[i];                  and its symmetric counterpart.
    }
}

Finally, here is a routine for Gauss-Jacobi abscissas and weights, which implement the
integration formula

$$\int_{-1}^{1}(1-x)^{\alpha}(1+x)^{\beta}f(x)\,dx=\sum_{j=1}^{N}w_jf(x_j) \tag{4.5.23}$$

4.5 Gaussian Quadratures and Orthogonal Polynomials 155

#include <math.h>
#define EPS 3.0e-14                     Increase EPS if you don't have this precision.
#define MAXIT 10

void gaujac(float x[], float w[], int n, float alf, float bet)
Given alf and bet, the parameters α and β of the Jacobi polynomials, this routine returns
arrays x[1..n] and w[1..n] containing the abscissas and weights of the n-point Gauss-Jacobi
quadrature formula. The largest abscissa is returned in x[1], the smallest in x[n].
{
    float gammln(float xx);
    void nrerror(char error_text[]);
    int i,its,j;
    float alfbet,an,bn,r1,r2,r3;
    double a,b,c,p1,p2,p3,pp,temp,z,z1; High precision is a good idea for this routine.

    for (i=1;i<=n;i++) {                Loop over the desired roots.
        if (i == 1) {                   Initial guess for the largest root.
            an=alf/n;
            bn=bet/n;
            r1=(1.0+alf)*(2.78/(4.0+n*n)+0.768*an/n);
            r2=1.0+1.48*an+0.96*bn+0.452*an*an+0.83*an*bn;
            z=1.0-r1/r2;
        } else if (i == 2) {            Initial guess for the second largest root.
            r1=(4.1+alf)/((1.0+alf)*(1.0+0.156*alf));
            r2=1.0+0.06*(n-8.0)*(1.0+0.12*alf)/n;
            r3=1.0+0.012*bet*(1.0+0.25*fabs(alf))/n;
            z -= (1.0-z)*r1*r2*r3;
        } else if (i == 3) {            Initial guess for the third largest root.
            r1=(1.67+0.28*alf)/(1.0+0.37*alf);
            r2=1.0+0.22*(n-8.0)/n;
            r3=1.0+8.0*bet/((6.28+bet)*n*n);
            z -= (x[1]-z)*r1*r2*r3;
        } else if (i == n-1) {          Initial guess for the second smallest root.
            r1=(1.0+0.235*bet)/(0.766+0.119*bet);
            r2=1.0/(1.0+0.639*(n-4.0)/(1.0+0.71*(n-4.0)));
            r3=1.0/(1.0+20.0*alf/((7.5+alf)*n*n));
            z += (z-x[n-3])*r1*r2*r3;
        } else if (i == n) {            Initial guess for the smallest root.
            r1=(1.0+0.37*bet)/(1.67+0.28*bet);
            r2=1.0/(1.0+0.22*(n-8.0)/n);
            r3=1.0/(1.0+8.0*alf/((6.28+alf)*n*n));
            z += (z-x[n-2])*r1*r2*r3;
        } else {                        Initial guess for the other roots.
            z=3.0*x[i-1]-3.0*x[i-2]+x[i-3];
        }
        alfbet=alf+bet;
        for (its=1;its<=MAXIT;its++) {  Refinement by Newton's method.
            temp=2.0+alfbet;            Start the recurrence with P0 and P1 to avoid
            p1=(alf-bet+temp*z)/2.0;    a division by zero when α + β = 0 or −1.
            p2=1.0;
            for (j=2;j<=n;j++) {        Loop up the recurrence relation to get the
                p3=p2;                  Jacobi polynomial evaluated at z.
                p2=p1;
                temp=2*j+alfbet;
                a=2*j*(j+alfbet)*(temp-2.0);
                b=(temp-1.0)*(alf*alf-bet*bet+temp*(temp-2.0)*z);
                c=2.0*(j-1+alf)*(j-1+bet)*temp;
                p1=(b*p2-c*p3)/a;
            }
            pp=(n*(alf-bet-temp*z)*p1+2.0*(n+alf)*(n+bet)*p2)/(temp*(1.0-z*z));
            p1 is now the desired Jacobi polynomial. We next compute pp, its derivative, by
            a standard relation involving also p2, the polynomial of one lower order.
            z1=z;
            z=z1-p1/pp;                 Newton's formula.

156 Chapter 4. Integration of Functions

            if (fabs(z-z1) <= EPS) break;
        }
        if (its > MAXIT) nrerror("too many iterations in gaujac");
        x[i]=z;                         Store the root and the weight.
        w[i]=exp(gammln(alf+n)+gammln(bet+n)-gammln(n+1.0)-
            gammln(n+alfbet+1.0))*temp*pow(2.0,alfbet)/(pp*p2);
    }
}

Legendre polynomials are special cases of Jacobi polynomials with α = β = 0, but it is
worth having the separate routine for them, gauleg, given above. Chebyshev polynomials
correspond to α = β = −1/2 (see §5.8). They have analytic abscissas and weights:

$$x_j=\cos\left(\frac{\pi(j-\frac{1}{2})}{N}\right) \qquad w_j=\frac{\pi}{N} \tag{4.5.24}$$

Case of Known Recurrences

Turn now to the case where you do not know good initial guesses for the zeros of your
orthogonal polynomials, but you do have available the coefficients aj and bj that generate
them. As we have seen, the zeros of pN(x) are the abscissas for the N-point Gaussian
quadrature formula. The most useful computational formula for the weights is equation
(4.5.9) above, since the derivative p'N can be efficiently computed by the derivative of (4.5.6)
in the general case, or by special relations for the classical polynomials. Note that (4.5.9) is
valid as written only for monic polynomials; for other normalizations, there is an extra factor
of λN/λN−1, where λN is the coefficient of x^N in pN.

Except in those special cases already discussed, the best way to find the abscissas is not
to use a root-finding method like Newton's method on pN(x). Rather, it is generally faster
to use the Golub-Welsch [3] algorithm, which is based on a result of Wilf [4]. This algorithm
notes that if you bring the term xpj to the left-hand side of (4.5.6) and the term pj+1 to the
right-hand side, the recurrence relation can be written in matrix form as

$$x\begin{bmatrix}p_0\\ p_1\\ \vdots\\ p_{N-2}\\ p_{N-1}\end{bmatrix}=
\begin{bmatrix}a_0 & 1 & & & \\ b_1 & a_1 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & b_{N-2} & a_{N-2} & 1\\ & & & b_{N-1} & a_{N-1}\end{bmatrix}\cdot
\begin{bmatrix}p_0\\ p_1\\ \vdots\\ p_{N-2}\\ p_{N-1}\end{bmatrix}+
\begin{bmatrix}0\\ 0\\ \vdots\\ 0\\ p_N\end{bmatrix}$$

or

$$x\mathbf{p}=\mathbf{T}\cdot\mathbf{p}+p_N\mathbf{e}_{N-1} \tag{4.5.25}$$

Here T is a tridiagonal matrix, p is a column vector of p0, p1, ..., pN−1, and eN−1 is a unit
vector with a 1 in the (N − 1)st (last) position and zeros elsewhere. The matrix T can be
symmetrized by a diagonal similarity transformation D to give

$$\mathbf{J}=\mathbf{D}\mathbf{T}\mathbf{D}^{-1}=
\begin{bmatrix}a_0 & \sqrt{b_1} & & & \\ \sqrt{b_1} & a_1 & \sqrt{b_2} & & \\ & \ddots & \ddots & \ddots & \\ & & \sqrt{b_{N-2}} & a_{N-2} & \sqrt{b_{N-1}}\\ & & & \sqrt{b_{N-1}} & a_{N-1}\end{bmatrix} \tag{4.5.26}$$

The matrix J is called the Jacobi matrix (not to be confused with other matrices named
after Jacobi that arise in completely different problems!). Now we see from (4.5.25) that
