15.2 Fitting Data to a Straight Line    661

as a distribution can be. Almost always, the cause of too good a chi-square fit is that the experimenter, in a "fit" of conservativism, has overestimated his or her measurement errors. Very rarely, too good a chi-square signals actual fraud, data that has been "fudged" to fit the model.

A rule of thumb is that a "typical" value of χ² for a "moderately" good fit is χ² ≈ ν. More precise is the statement that the χ² statistic has a mean ν and a standard deviation √(2ν), and, asymptotically for large ν, becomes normally distributed.
Sample page from Numerical Recipes in C: The Art of Scientific Computing (ISBN 0-521-43108-5); Copyright (C) 1988–1992 by Cambridge University Press.

In some cases the uncertainties associated with a set of measurements are not known in advance, and considerations related to χ² fitting are used to derive a value for σ.
If we assume that all measurements have the same standard deviation, σᵢ = σ, and that the model does fit well, then we can proceed by first assigning an arbitrary constant σ to all points, next fitting for the model parameters by minimizing χ², and finally recomputing

\sigma^2 = \sum_{i=1}^{N} \left[ y_i - y(x_i) \right]^2 / (N - M)        (15.1.6)

Obviously, this approach prohibits an independent assessment of goodness-of-fit, a fact occasionally missed by its adherents. When, however, the measurement error is not known, this approach at least allows some kind of error bar to be assigned to the points.

If we take the derivative of equation (15.1.5) with respect to the parameters aₖ, we obtain equations that must hold at the chi-square minimum,

0 = \sum_{i=1}^{N} \left( \frac{y_i - y(x_i)}{\sigma_i^2} \right) \frac{\partial y(x_i; \ldots, a_k, \ldots)}{\partial a_k}, \qquad k = 1, \ldots, M        (15.1.7)

Equation (15.1.7) is, in general, a set of M nonlinear equations for the M unknown aₖ. Various of the procedures described subsequently in this chapter derive from (15.1.7) and its specializations.

CITED REFERENCES AND FURTHER READING:

Bevington, P.R. 1969, Data Reduction and Error Analysis for the Physical Sciences (New York: McGraw-Hill), Chapters 1–4.

von Mises, R. 1964, Mathematical Theory of Probability and Statistics (New York: Academic Press), §VI.C. [1]

15.2 Fitting Data to a Straight Line

A concrete example will make the considerations of the previous section more meaningful. We consider the problem of fitting a set of N data points (xᵢ, yᵢ) to a straight-line model

y(x) = y(x; a, b) = a + bx        (15.2.1)