
Digital Signal Processing teaching reference material (Numerical Recipes in C: The Art of Scientific Computing, Second Edition), Chapter 5 Evaluation of Functions, §5.7 Numerical Derivatives



5.7 Numerical Derivatives

Imagine that you have a procedure which computes a function f(x), and now you want to compute its derivative f'(x). Easy, right? The definition of the derivative, the limit as h → 0 of

    f'(x) \approx \frac{f(x+h) - f(x)}{h}        (5.7.1)

practically suggests the program: Pick a small value h; evaluate f(x+h); you probably have f(x) already evaluated, but if not, do it too; finally apply equation (5.7.1). What more needs to be said?

Quite a lot, actually. Applied uncritically, the above procedure is almost guaranteed to produce inaccurate results. Applied properly, it can be the right way to compute a derivative only when the function f is fiercely expensive to compute, when you have already invested in computing f(x), and when, therefore, you want to get the derivative in no more than a single additional function evaluation. In such a situation, the remaining issue is to choose h properly, an issue we now discuss.

There are two sources of error in equation (5.7.1): truncation error and roundoff error. The truncation error comes from higher terms in the Taylor series expansion,

    f(x+h) = f(x) + h f'(x) + \tfrac{1}{2} h^2 f''(x) + \tfrac{1}{6} h^3 f'''(x) + \cdots        (5.7.2)

whence

    \frac{f(x+h) - f(x)}{h} = f' + \tfrac{1}{2} h f'' + \cdots        (5.7.3)

The roundoff error has various contributions. First there is roundoff error in h: Suppose, by way of an example, that you are at a point x = 10.3 and you blindly choose h = 0.0001. Neither x = 10.3 nor x + h = 10.30001 is a number with an exact representation in binary; each is therefore represented with some fractional error characteristic of the machine's floating-point format, \epsilon_m, whose value in single precision may be \sim 10^{-7}. The error in the effective value of h, namely the difference between x + h and x as represented in the machine, is therefore on the order of \epsilon_m x, which implies a fractional error in h of order \sim \epsilon_m x / h \sim 10^{-2}! By equation (5.7.1) this immediately implies at least the same large fractional error in the derivative.

We arrive at Lesson 1: Always choose h so that x + h and x differ by an exactly representable number. This can usually be accomplished by the program steps

    temp = x + h
    h = temp - x        (5.7.4)

Some optimizing compilers, and some computers whose floating-point chips have higher internal accuracy than is stored externally, can foil this trick; if so, it is usually enough to declare temp as volatile, or else to call a dummy function donothing(temp) between the two equations (5.7.4). This forces temp into and out of addressable memory.
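To make Lesson 1 concrete, here is a minimal sketch (ours, not a routine from the Numerical Recipes library) of a forward-difference evaluator that applies the two program steps of equation (5.7.4), with temp declared volatile as suggested above; the name fwd_deriv is hypothetical.

/* Forward difference, equation (5.7.1), with Lesson 1 applied: the steps of
   equation (5.7.4) replace h by the exactly representable difference between
   x+h and x. Illustrative sketch only. */
double fwd_deriv(double (*func)(double), double x, double h)
{
	volatile double temp = x + h;   /* volatile keeps an optimizer from holding temp in a wider register */
	h = temp - x;                   /* now x+h and x differ by an exactly representable number */
	return (func(temp) - func(x)) / h;   /* temp is exactly x plus the effective h */
}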


With h an "exact" number, the roundoff error in equation (5.7.1) is

    e_r \sim \epsilon_f \left| f(x)/h \right|

Here \epsilon_f is the fractional accuracy with which f is computed; for a simple function this may be comparable to the machine accuracy, \epsilon_f \approx \epsilon_m, but for a complicated calculation with additional sources of inaccuracy it may be larger. The truncation error in equation (5.7.3) is on the order of e_t \sim |h f''(x)|. Varying h to minimize the sum e_r + e_t gives the optimal choice of h,

    h \sim \sqrt{\frac{\epsilon_f f}{f''}} \approx \sqrt{\epsilon_f}\, x_c        (5.7.5)

where x_c \equiv (f/f'')^{1/2} is the "curvature scale" of the function f, or "characteristic scale" over which it changes. In the absence of any other information, one often assumes x_c = x (except near x = 0, where some other estimate of the typical x scale should be used).

With the choice of equation (5.7.5), the fractional accuracy of the computed derivative is

    (e_r + e_t)/|f'| \sim \sqrt{\epsilon_f}\, (f f''/f'^2)^{1/2} \sim \sqrt{\epsilon_f}        (5.7.6)

Here the last order-of-magnitude equality assumes that f, f', and f'' all share the same characteristic length scale, which is usually the case. One sees that the simple finite-difference equation (5.7.1) gives at best only the square root of the machine accuracy \epsilon_m.

If you can afford two function evaluations for each derivative calculation, then it is significantly better to use the symmetrized form

    f'(x) \approx \frac{f(x+h) - f(x-h)}{2h}        (5.7.7)

In this case, by equation (5.7.2), the truncation error is e_t \sim h^2 f'''. The roundoff error e_r is about the same as before. The optimal choice of h, by a short calculation analogous to the one above, is now

    h \sim \left( \frac{\epsilon_f f}{f'''} \right)^{1/3} \sim (\epsilon_f)^{1/3} x_c        (5.7.8)

and the fractional error is

    (e_r + e_t)/|f'| \sim (\epsilon_f)^{2/3} f^{2/3} (f''')^{1/3} / f' \sim (\epsilon_f)^{2/3}        (5.7.9)

which will typically be an order of magnitude (single precision) or two orders of magnitude (double precision) better than equation (5.7.6). We have arrived at Lesson 2: Choose h to be the correct power of \epsilon_f or \epsilon_m times a characteristic scale x_c.

You can easily derive the correct powers for other cases [1]. For a function of two dimensions, for example, and the mixed derivative formula

    \frac{\partial^2 f}{\partial x \partial y} = \frac{[f(x+h, y+h) - f(x+h, y-h)] - [f(x-h, y+h) - f(x-h, y-h)]}{4h^2}        (5.7.10)

the correct scaling is h \sim \epsilon_f^{1/4} x_c.
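Putting both lessons together for the symmetrized form: the sketch below (again ours, and only a sketch) uses the stepsize rule of equation (5.7.8), taking the characteristic scale x_c = max(|x|, 1) as a crude stand-in for the text's suggestion x_c = x away from zero.

#include <math.h>
#include <float.h>

/* Centered difference, equation (5.7.7), with h ~ (eps_f)^(1/3) x_c as in
   equation (5.7.8), assuming func is accurate to machine precision
   (eps_f ~ DBL_EPSILON). Illustrative sketch only. */
double cent_deriv(double (*func)(double), double x)
{
	double xc = fabs(x) > 1.0 ? fabs(x) : 1.0;   /* crude characteristic scale */
	double h = cbrt(DBL_EPSILON) * xc;           /* Lesson 2: correct power of epsilon times x_c */
	volatile double temp = x + h;                /* Lesson 1: make h exactly representable */
	h = temp - x;
	return (func(x + h) - func(x - h)) / (2.0 * h);
}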


It is disappointing, certainly, that no simple finite-difference formula like equation (5.7.1) or (5.7.7) gives an accuracy comparable to the machine accuracy \epsilon_m, or even the lower accuracy to which f is evaluated, \epsilon_f. Are there no better methods?

Yes, there are. All, however, involve exploration of the function's behavior over scales comparable to x_c, plus some assumption of smoothness, or analyticity, so that the high-order terms in a Taylor expansion like equation (5.7.2) have some meaning. Such methods also involve multiple evaluations of the function f, so their increased accuracy must be weighed against increased cost.

The general idea of "Richardson's deferred approach to the limit" is particularly attractive. For numerical integrals, that idea leads to so-called Romberg integration (for review, see §4.3). For derivatives, one seeks to extrapolate, to h → 0, the result of finite-difference calculations with smaller and smaller finite values of h. By the use of Neville's algorithm (§3.1), one uses each new finite-difference calculation to produce both an extrapolation of higher order, and also extrapolations of previous, lower, orders but with smaller scales h. Ridders [2] has given a nice implementation of this idea; the following program, dfridr, is based on his algorithm, modified by an improved termination criterion. Input to the routine is a function f (called func), a position x, and a largest stepsize h (more analogous to what we have called x_c above than to what we have called h). Output is the returned value of the derivative, and an estimate of its error, err.

#include <math.h>
#include "nrutil.h"
#define CON 1.4          /* Stepsize is decreased by CON at each iteration. */
#define CON2 (CON*CON)
#define BIG 1.0e30
#define NTAB 10          /* Sets maximum size of tableau. */
#define SAFE 2.0         /* Return when error is SAFE worse than the best so far. */

float dfridr(float (*func)(float), float x, float h, float *err)
/* Returns the derivative of a function func at a point x by Ridders' method of polynomial
extrapolation. The value h is input as an estimated initial stepsize; it need not be small, but
rather should be an increment in x over which func changes substantially. An estimate of the
error in the derivative is returned as err. */
{
	int i,j;
	float errt,fac,hh,**a,ans;

	if (h == 0.0) nrerror("h must be nonzero in dfridr.");
	a=matrix(1,NTAB,1,NTAB);
	hh=h;
	a[1][1]=((*func)(x+hh)-(*func)(x-hh))/(2.0*hh);
	*err=BIG;
	for (i=2;i<=NTAB;i++) {
		/* Successive columns in the Neville tableau will go to smaller
		   stepsizes and higher orders of extrapolation. */
		hh /= CON;
		a[1][i]=((*func)(x+hh)-(*func)(x-hh))/(2.0*hh);   /* Try new, smaller stepsize. */
		fac=CON2;
		for (j=2;j<=i;j++) {
			/* Compute extrapolations of various orders, requiring
			   no new function evaluations. */
			a[j][i]=(a[j-1][i]*fac-a[j-1][i-1])/(fac-1.0);
			fac=CON2*fac;
			errt=FMAX(fabs(a[j][i]-a[j-1][i]),fabs(a[j][i]-a[j-1][i-1]));
			/* The error strategy is to compare each new extrapolation to one
			   order lower, both at the present stepsize and the previous one. */
			if (errt <= *err) {
				*err=errt;
				ans=a[j][i];
			}
		}
		if (fabs(a[i][i]-a[i-1][i-1]) >= SAFE*(*err)) break;
		/* If higher order is worse by a significant factor SAFE, then quit early. */
	}
	free_matrix(a,1,NTAB,1,NTAB);
	return ans;
}
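A driver might call dfridr as in the following sketch; the test function, the point x = 1.0, and the stepsize h = 0.1 are our illustrative choices (the routine itself must be compiled together with the book's nrutil utilities).

#include <stdio.h>
#include <math.h>

float dfridr(float (*func)(float), float x, float h, float *err);   /* routine above */

float testfunc(float x) { return (float)sin(x); }   /* hypothetical test function */

int main(void)
{
	float err;
	float dydx = dfridr(testfunc, 1.0f, 0.1f, &err);   /* h plays the role of x_c, not a tiny step */
	printf("f'(1.0) = %g +/- %g (exact cos(1) = %g)\n", dydx, err, cos(1.0));
	return 0;
}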


In dfridr, the number of evaluations of func is typically 6 to 12, but is allowed to be as great as 2×NTAB. As a function of input h, it is typical for the accuracy to get better as h is made larger, until a sudden point is reached where nonsensical extrapolation produces an early return with a large error. You should therefore choose a fairly large value for h, but monitor the returned value err, decreasing h if it is not small. For functions whose characteristic x scale is of order unity, we typically take h to be a few tenths.

Besides Ridders' method, there are other possible techniques. If your function is fairly smooth, and you know that you will want to evaluate its derivative many times at arbitrary points in some interval, then it makes sense to construct a Chebyshev polynomial approximation to the function in that interval, and to evaluate the derivative directly from the resulting Chebyshev coefficients. This method is described in §§5.8-5.9, following.

Another technique applies when the function consists of data that is tabulated at equally spaced intervals, and is perhaps also noisy. One might then want, at each point, to least-squares fit a polynomial of some degree M, using an additional number n_L of points to the left and some number n_R of points to the right of each desired x value. The estimated derivative is then the derivative of the resulting fitted polynomial. A very efficient way to do this construction is via Savitzky-Golay smoothing filters, which will be discussed later, in §14.8. There we will give a routine for getting filter coefficients that not only construct the fitting polynomial but, in the accumulation of a single sum of data points times filter coefficients, evaluate it as well. In fact, the routine given, savgol, has an argument ld that determines which derivative of the fitted polynomial is evaluated. For the first derivative, the appropriate setting is ld=1, and the value of the derivative is the accumulated sum divided by the sampling interval h.

CITED REFERENCES AND FURTHER READING:

Dennis, J.E., and Schnabel, R.B. 1983, Numerical Methods for Unconstrained Optimization and Nonlinear Equations (Englewood Cliffs, NJ: Prentice-Hall), §§5.4-5.6. [1]

Ridders, C.J.F. 1982, Advances in Engineering Software, vol. 4, no. 2, pp. 75-76. [2]


5.8 Chebyshev Approximation

The Chebyshev polynomial of degree n is denoted T_n(x), and is given by the explicit formula

    T_n(x) = \cos(n \arccos x)        (5.8.1)

This may look trigonometric at first glance (and there is in fact a close relation between the Chebyshev polynomials and the discrete Fourier transform); however, (5.8.1) can be combined with trigonometric identities to yield explicit expressions for T_n(x) (see Figure 5.8.1),

    T_0(x) = 1
    T_1(x) = x
    T_2(x) = 2x^2 - 1
    T_3(x) = 4x^3 - 3x
    T_4(x) = 8x^4 - 8x^2 + 1
    \cdots
    T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x),  n \ge 1        (5.8.2)

(There also exist inverse formulas for the powers of x in terms of the T_n's; see equations 5.11.2-5.11.3.)

The Chebyshev polynomials are orthogonal in the interval [-1, 1] over a weight (1 - x^2)^{-1/2}. In particular,

    \int_{-1}^{1} \frac{T_i(x) T_j(x)}{\sqrt{1 - x^2}}\, dx =
    \begin{cases} 0 & i \ne j \\ \pi/2 & i = j \ne 0 \\ \pi & i = j = 0 \end{cases}        (5.8.3)

The polynomial T_n(x) has n zeros in the interval [-1, 1], and they are located at the points

    x = \cos\left( \frac{\pi (k - \tfrac{1}{2})}{n} \right),  k = 1, 2, \ldots, n        (5.8.4)

In this same interval there are n + 1 extrema (maxima and minima), located at

    x = \cos\left( \frac{\pi k}{n} \right),  k = 0, 1, \ldots, n        (5.8.5)

At all of the maxima T_n(x) = 1, while at all of the minima T_n(x) = -1; it is precisely this property that makes the Chebyshev polynomials so useful in polynomial approximation of functions.
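The recurrence at the bottom of (5.8.2) is also the natural way to evaluate T_n(x) numerically, needing no trigonometric calls. A minimal sketch of such an evaluator (our illustration, not a routine from the book):

/* Evaluate the Chebyshev polynomial T_n(x) via the three-term recurrence
   T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x) of equation (5.8.2).
   Illustrative sketch only. */
double chebyshev_T(int n, double x)
{
	double tkm1 = 1.0;   /* T_0(x) */
	double tk = x;       /* T_1(x) */
	int k;
	if (n == 0) return 1.0;
	for (k = 1; k < n; k++) {   /* climb from T_1 up to T_n */
		double tkp1 = 2.0*x*tk - tkm1;
		tkm1 = tk;
		tk = tkp1;
	}
	return tk;
}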

