
Digital Signal Processing teaching reference (Numerical Recipes in C: The Art of Scientific Computing, Second Edition), Chapter 15.1



should provide (i) parameters, (ii) error estimates on the parameters, and (iii) a statistical measure of goodness-of-fit. When the third item suggests that the model is an unlikely match to the data, then items (i) and (ii) are probably worthless. Unfortunately, many practitioners of parameter estimation never proceed beyond item (i). They deem a fit acceptable if a graph of data and model "looks good." This approach is known as chi-by-eye. Luckily, its practitioners get what they deserve.

CITED REFERENCES AND FURTHER READING:
Bevington, P.R. 1969, Data Reduction and Error Analysis for the Physical Sciences (New York: McGraw-Hill).
Brownlee, K.A. 1965, Statistical Theory and Methodology, 2nd ed. (New York: Wiley).
Martin, B.R. 1971, Statistics for Physicists (New York: Academic Press).
von Mises, R. 1964, Mathematical Theory of Probability and Statistics (New York: Academic Press), Chapter X.
Korn, G.A., and Korn, T.M. 1968, Mathematical Handbook for Scientists and Engineers, 2nd ed. (New York: McGraw-Hill), Chapters 18-19.

15.1 Least Squares as a Maximum Likelihood Estimator

Suppose that we are fitting N data points (x_i, y_i), i = 1, ..., N, to a model that has M adjustable parameters a_j, j = 1, ..., M. The model predicts a functional relationship between the measured independent and dependent variables,

    y(x) = y(x; a_1 \ldots a_M)                                        (15.1.1)

where the dependence on the parameters is indicated explicitly on the right-hand side.

What, exactly, do we want to minimize to get fitted values for the a_j's? The first thing that comes to mind is the familiar least-squares fit,

    \text{minimize over } a_1 \ldots a_M: \quad \sum_{i=1}^{N} \left[ y_i - y(x_i; a_1 \ldots a_M) \right]^2        (15.1.2)

But where does this come from? What general principles is it based on? The answer to these questions takes us into the subject of maximum likelihood estimators.

Given a particular data set of x_i's and y_i's, we have the intuitive feeling that some parameter sets a_1 ... a_M are very unlikely (those for which the model function y(x) looks nothing like the data), while others may be very likely (those that closely resemble the data). How can we quantify this intuitive feeling? How can we select fitted parameters that are "most likely" to be correct? It is not meaningful to ask the question, "What is the probability that a particular set of fitted parameters a_1 ... a_M is correct?" The reason is that there is no statistical universe of models from which the parameters are drawn. There is just one model, the correct one, and a statistical universe of data sets that are drawn from it!
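For reference, here is a minimal C sketch of the quantity in (15.1.2): the sum of squared residuals between the data and a model y(x; a_1 ... a_M). The model is passed as a function pointer with a parameter array; this calling convention is only an illustrative assumption, not anything fixed by the text.

```c
/* Sum of squared residuals of eq. (15.1.2).  The model interface
 * (x plus a parameter array a[0..M-1]) is a hypothetical illustration,
 * not the book's own calling convention. */
double sum_of_squares(int N, const double x[], const double y[],
                      double (*ymodel)(double x, const double a[], int M),
                      const double a[], int M)
{
    double s = 0.0;
    for (int i = 0; i < N; i++) {
        double r = y[i] - ymodel(x[i], a, M);   /* residual at the i-th point */
        s += r * r;
    }
    return s;   /* minimize this over a[0..M-1] */
}
```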


That being the case, we can, however, turn the question around, and ask, "Given a particular set of parameters, what is the probability that this data set could have occurred?" If the y_i's take on continuous values, the probability will always be zero unless we add the phrase, "...plus or minus some fixed Δy on each data point." So let's always take this phrase as understood. If the probability of obtaining the data set is infinitesimally small, then we can conclude that the parameters under consideration are "unlikely" to be right. Conversely, our intuition tells us that the data set should not be too improbable for the correct choice of parameters.

In other words, we identify the probability of the data given the parameters (which is a mathematically computable number), as the likelihood of the parameters given the data. This identification is entirely based on intuition. It has no formal mathematical basis in and of itself; as we already remarked, statistics is not a branch of mathematics!

Once we make this intuitive identification, however, it is only a small further step to decide to fit for the parameters a_1 ... a_M precisely by finding those values that maximize the likelihood defined in the above way. This form of parameter estimation is maximum likelihood estimation.

We are now ready to make the connection to (15.1.2). Suppose that each data point y_i has a measurement error that is independently random and distributed as a normal (Gaussian) distribution around the "true" model y(x). And suppose that the standard deviations σ of these normal distributions are the same for all points. Then the probability of the data set is the product of the probabilities of each point,

    P \propto \prod_{i=1}^{N} \left\{ \exp\left[ -\frac{1}{2} \left( \frac{y_i - y(x_i)}{\sigma} \right)^2 \right] \Delta y \right\}        (15.1.3)

Notice that there is a factor Δy in each term in the product. Maximizing (15.1.3) is equivalent to maximizing its logarithm, or minimizing the negative of its logarithm, namely,

    \left[ \sum_{i=1}^{N} \frac{[y_i - y(x_i)]^2}{2\sigma^2} \right] - N \log \Delta y        (15.1.4)

Since N, σ, and Δy are all constants, minimizing this equation is equivalent to minimizing (15.1.2).

What we see is that least-squares fitting is a maximum likelihood estimation of the fitted parameters if the measurement errors are independent and normally distributed with constant standard deviation. Notice that we made no assumption about the linearity or nonlinearity of the model y(x; a_1 ...) in its parameters a_1 ... a_M. Just below, we will relax our assumption of constant standard deviations and obtain the very similar formulas for what is called "chi-square fitting" or "weighted least-squares fitting." First, however, let us discuss further our very stringent assumption of a normal distribution.
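The step from (15.1.3) to (15.1.4) is easy to verify numerically. The sketch below, with a made-up straight-line model and made-up data, accumulates the negative logarithm of the likelihood product factor by factor and compares it with the closed form of (15.1.4); the two agree, so the parameter values that minimize one minimize the other, and, since σ and Δy are constants, they also minimize (15.1.2).

```c
/* Numerical check of the step from (15.1.3) to (15.1.4): the negative log of
 * the Gaussian likelihood product equals sum_i [y_i - y(x_i)]^2/(2 sigma^2)
 * minus N log(Delta y).  Model, data, and trial parameters are made up. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double sigma = 0.5, dy = 0.01;       /* constant sigma, fixed Delta y */
    const double x[] = {0.0, 1.0, 2.0, 3.0};
    const double y[] = {0.9, 2.1, 2.9, 4.2};
    const int N = 4;
    const double a1 = 1.0, a2 = 1.05;          /* trial parameters of y = a1 + a2*x */

    double neglogL = 0.0, sumsq = 0.0;
    for (int i = 0; i < N; i++) {
        double r = y[i] - (a1 + a2 * x[i]);
        sumsq   += r * r;
        neglogL += 0.5 * (r / sigma) * (r / sigma) - log(dy);  /* -log of one factor of (15.1.3) */
    }
    double eq1514 = sumsq / (2.0 * sigma * sigma) - N * log(dy);  /* eq. (15.1.4) */
    printf("-log L = %.10f   eq.(15.1.4) = %.10f\n", neglogL, eq1514);
    return 0;
}
```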
For a hundred years or so, mathematical statisticians have been in love with the fact that the probability distribution of the sum of a very large number of very small random deviations almost always converges to a normal distribution. (For precise statements of this central limit theorem, consult [1] or other standard works on mathematical statistics.)


This infatuation tended to focus interest away from the fact that, for real data, the normal distribution is often rather poorly realized, if it is realized at all. We are often taught, rather casually, that, on average, measurements will fall within ±σ of the true value 68 percent of the time, within ±2σ 95 percent of the time, and within ±3σ 99.7 percent of the time. Extending this, one would expect a measurement to be off by ±20σ only one time out of 2 × 10^88. We all know that "glitches" are much more likely than that!

In some instances, the deviations from a normal distribution are easy to understand and quantify. For example, in measurements obtained by counting events, the measurement errors are usually distributed as a Poisson distribution, whose cumulative probability function was already discussed in §6.2. When the number of counts going into one data point is large, the Poisson distribution converges towards a Gaussian. However, the convergence is not uniform when measured in fractional accuracy. The more standard deviations out on the tail of the distribution, the larger the number of counts must be before a value close to the Gaussian is realized. The sign of the effect is always the same: The Gaussian predicts that "tail" events are much less likely than they actually (by Poisson) are. This causes such events, when they occur, to skew a least-squares fit much more than they ought. (A small numerical check of this tail effect is sketched at the end of this discussion, just before Chi-Square Fitting.)

Other times, the deviations from a normal distribution are not so easy to understand in detail. Experimental points are occasionally just way off. Perhaps the power flickered during a point's measurement, or someone kicked the apparatus, or someone wrote down a wrong number. Points like this are called outliers. They can easily turn a least-squares fit on otherwise adequate data into nonsense. Their probability of occurrence in the assumed Gaussian model is so small that the maximum likelihood estimator is willing to distort the whole curve to try to bring them, mistakenly, into line.

The subject of robust statistics deals with cases where the normal or Gaussian model is a bad approximation, or cases where outliers are important. We will discuss robust methods briefly in §15.7. All the sections between this one and that one assume, one way or the other, a Gaussian model for the measurement errors in the data. It is quite important that you keep the limitations of that model in mind, even as you use the very useful methods that follow from assuming it.

Finally, note that our discussion of measurement errors has been limited to statistical errors, the kind that will average away if we only take enough data. Measurements are also susceptible to systematic errors that will not go away with any amount of averaging. For example, the calibration of a metal meter stick might depend on its temperature. If we take all our measurements at the same wrong temperature, then no amount of averaging or numerical processing will correct for this unrecognized systematic error.
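Here is the small numerical check promised above: for a counting measurement with mean μ (so σ = √μ), it compares the probability of fluctuating k or more standard deviations high under the exact Poisson law with the Gaussian approximation. The values μ = 100 and k = 5 are arbitrary illustrative choices; the Poisson tail comes out larger than the Gaussian one, in line with the claim in the text.

```c
/* Compare the far-tail probability of a Poisson counting measurement with
 * its Gaussian approximation.  mu and k are arbitrary illustrative values. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double mu = 100.0;                  /* expected number of counts */
    const double k  = 5.0;                    /* how many sigma out on the tail */
    const int n0 = (int)ceil(mu + k * sqrt(mu));

    /* Poisson tail: sum e^{-mu} mu^n / n! from n0 upward, terms built iteratively */
    double term = exp(n0 * log(mu) - mu - lgamma(n0 + 1.0));
    double poisson_tail = 0.0;
    for (int n = n0; term > 1e-30 * (poisson_tail + term); n++) {
        poisson_tail += term;
        term *= mu / (n + 1.0);
    }

    /* Gaussian tail: P(Z >= k) = erfc(k / sqrt(2)) / 2 */
    double gauss_tail = 0.5 * erfc(k / sqrt(2.0));

    printf("P(>= %d counts): Poisson %.3e, Gaussian %.3e\n",
           n0, poisson_tail, gauss_tail);
    return 0;
}
```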
Chi-Square Fitting

We considered the chi-square statistic once before, in §14.3. Here it arises in a slightly different context.


If each data point (x_i, y_i) has its own, known standard deviation σ_i, then equation (15.1.3) is modified only by putting a subscript i on the symbol σ. That subscript also propagates docilely into (15.1.4), so that the maximum likelihood estimate of the model parameters is obtained by minimizing the quantity

    \chi^2 \equiv \sum_{i=1}^{N} \left( \frac{y_i - y(x_i; a_1 \ldots a_M)}{\sigma_i} \right)^2        (15.1.5)

called the "chi-square."

To whatever extent the measurement errors actually are normally distributed, the quantity χ² is correspondingly a sum of N squares of normally distributed quantities, each normalized to unit variance. Once we have adjusted the a_1 ... a_M to minimize the value of χ², the terms in the sum are not all statistically independent. For models that are linear in the a's, however, it turns out that the probability distribution for different values of χ² at its minimum can nevertheless be derived analytically, and is the chi-square distribution for N − M degrees of freedom. We learned how to compute this probability function using the incomplete gamma function gammq in §6.2. In particular, equation (6.2.18) gives the probability Q that the chi-square should exceed a particular value χ² by chance, where ν = N − M is the number of degrees of freedom. The quantity Q, or its complement P ≡ 1 − Q, is frequently tabulated in appendices to statistics books, but we generally find it easier to use gammq and compute our own values: Q = gammq(0.5ν, 0.5χ²). It is quite common, and usually not too wrong, to assume that the chi-square distribution holds even for models that are not strictly linear in the a's.

This computed probability gives a quantitative measure for the goodness-of-fit of the model. If Q is a very small probability for some particular data set, then the apparent discrepancies are unlikely to be chance fluctuations. Much more probably either (i) the model is wrong (it can be statistically rejected), or (ii) someone has lied to you about the size of the measurement errors σ_i (they are really larger than stated).

It is an important point that the chi-square probability Q does not directly measure the credibility of the assumption that the measurement errors are normally distributed. It assumes they are. In most, but not all, cases, however, the effect of nonnormal errors is to create an abundance of outlier points. These decrease the probability Q, so that we can add another possible, though less definitive, conclusion to the above list: (iii) the measurement errors may not be normally distributed.

Possibility (iii) is fairly common, and also fairly benign. It is for this reason that reasonable experimenters are often rather tolerant of low probabilities Q. It is not uncommon to deem acceptable on equal terms any models with, say, Q > 0.001. This is not as sloppy as it sounds: Truly wrong models will often be rejected with vastly smaller values of Q, 10^-18, say. However, if day-in and day-out you find yourself accepting models with Q ~ 10^-3, you really should track down the cause.
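As a sketch of this recipe in code: the function below evaluates the χ² of (15.1.5) for data with individual standard deviations, and the trailing comment shows the goodness-of-fit probability computed exactly as the text prescribes, Q = gammq(0.5ν, 0.5χ²). The gammq routine is the incomplete gamma function from §6.2; it is only declared here, not reimplemented, and the model function pointer is a generic stand-in for y(x; a_1 ... a_M).

```c
/* Chi-square of eq. (15.1.5) for data with per-point standard deviations.
 * gammq is the book's incomplete gamma routine from section 6.2 and must be
 * linked from that code; the model pointer is a generic stand-in. */
float gammq(float a, float x);      /* Q(a, x), Numerical Recipes section 6.2 */

double chisq(int N, const double x[], const double y[], const double sig[],
             double (*ymod)(double))
{
    double sum = 0.0;
    for (int i = 0; i < N; i++) {
        double r = (y[i] - ymod(x[i])) / sig[i];   /* normalized residual */
        sum += r * r;
    }
    return sum;
}

/* Usage, once the M parameters have been fitted:
 *     double c = chisq(N, x, y, sig, ymod);
 *     float  Q = gammq(0.5f * (N - M), 0.5f * (float)c);
 * Small Q flags a poor fit or underestimated sigma_i, as discussed above. */
```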
If you happen to know the actual distribution law of your measurement errors, then you might wish to Monte Carlo simulate some data sets drawn from a particular model, cf. §7.2-§7.3. You can then subject these synthetic data sets to your actual fitting procedure, so as to determine both the probability distribution of the χ² statistic, and also the accuracy with which your model parameters are reproduced by the fit. We discuss this further in §15.6. The technique is very general, but it can also be very expensive.
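A minimal sketch of that Monte Carlo procedure, for the simplest possible case: a one-parameter model y(x; a) = a, so that refitting each synthetic data set is just a weighted mean, and a Laplace (double-exponential) error law standing in for "the actual distribution law of your measurement errors." All numbers are made up for illustration; the point is the loop structure: draw a synthetic data set, refit, record its χ², and compare the observed χ² against the resulting distribution.

```c
/* Monte Carlo estimate of Q for a one-parameter model under a non-Gaussian
 * (Laplace) error law.  Everything here is an illustrative assumption. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

static double laplace(double sigma)            /* zero-mean Laplace deviate, std dev sigma */
{
    double u = (rand() + 0.5) / (RAND_MAX + 1.0) - 0.5;   /* uniform in (-0.5, 0.5) */
    double b = sigma / sqrt(2.0);                          /* Laplace variance = 2 b^2 */
    return (u < 0 ? b : -b) * log(1.0 - 2.0 * fabs(u));
}

int main(void)
{
    const int N = 20, NSYNTH = 10000;
    const double atrue = 3.0;                  /* the fitted model value, pretend */
    const double chisq_obs = 23.5;             /* pretend observed chi-square */
    double sig[20];
    for (int i = 0; i < N; i++) sig[i] = 0.5 + 0.05 * i;

    int nworse = 0;
    for (int k = 0; k < NSYNTH; k++) {
        double y[20], sw = 0.0, swy = 0.0;
        for (int i = 0; i < N; i++) {          /* draw one synthetic data set */
            y[i] = atrue + laplace(sig[i]);
            sw  += 1.0 / (sig[i] * sig[i]);
            swy += y[i] / (sig[i] * sig[i]);
        }
        double afit = swy / sw;                /* chi-square refit for M = 1 */
        double chisq = 0.0;
        for (int i = 0; i < N; i++) {
            double r = (y[i] - afit) / sig[i];
            chisq += r * r;
        }
        if (chisq >= chisq_obs) nworse++;
    }
    printf("Monte Carlo Q ~ %.4f for chi-square = %.1f, nu = %d\n",
           (double)nworse / NSYNTH, chisq_obs, N - 1);
    return 0;
}
```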


At the opposite extreme, it sometimes happens that the probability Q is too large, too near to 1, literally too good to be true! Nonnormal measurement errors cannot in general produce this disease, since the normal distribution is about as "compact" as a distribution can be. Almost always, the cause of too good a chi-square fit is that the experimenter, in a "fit" of conservativism, has overestimated his or her measurement errors. Very rarely, too good a chi-square signals actual fraud, data that has been "fudged" to fit the model.

A rule of thumb is that a "typical" value of χ² for a "moderately" good fit is χ² ≈ ν. More precise is the statement that the χ² statistic has a mean ν and a standard deviation √(2ν), and, asymptotically for large ν, becomes normally distributed.

In some cases the uncertainties associated with a set of measurements are not known in advance, and considerations related to χ² fitting are used to derive a value for σ. If we assume that all measurements have the same standard deviation, σ_i = σ, and that the model does fit well, then we can proceed by first assigning an arbitrary constant σ to all points, next fitting for the model parameters by minimizing χ², and finally recomputing

    \sigma^2 = \sum_{i=1}^{N} [y_i - y(x_i)]^2 / (N - M)        (15.1.6)

Obviously, this approach prohibits an independent assessment of goodness-of-fit, a fact occasionally missed by its adherents. When, however, the measurement error is not known, this approach at least allows some kind of error bar to be assigned to the points.

If we take the derivative of equation (15.1.5) with respect to the parameters a_k, we obtain equations that must hold at the chi-square minimum,

    0 = \sum_{i=1}^{N} \left( \frac{y_i - y(x_i)}{\sigma_i^2} \right) \left( \frac{\partial y(x_i; \ldots a_k \ldots)}{\partial a_k} \right), \qquad k = 1, \ldots, M        (15.1.7)

Equation (15.1.7) is, in general, a set of M nonlinear equations for the M unknown a_k. Various of the procedures described subsequently in this chapter derive from (15.1.7) and its specializations.

CITED REFERENCES AND FURTHER READING:
Bevington, P.R. 1969, Data Reduction and Error Analysis for the Physical Sciences (New York: McGraw-Hill), Chapters 1-4.
von Mises, R. 1964, Mathematical Theory of Probability and Statistics (New York: Academic Press), §VI.C. [1]

15.2 Fitting Data to a Straight Line

A concrete example will make the considerations of the previous section more meaningful. We consider the problem of fitting a set of N data points (x_i, y_i) to a straight-line model,

    y(x) = y(x; a, b) = a + bx        (15.2.1)
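As a preview of where this is heading, here is a minimal unweighted least-squares fit of the model (15.2.1) by the standard closed-form normal-equation solution, followed by the σ estimate of (15.1.6) with M = 2. Section 15.2 develops the full weighted fit, with error estimates on a and b and a goodness-of-fit probability; this sketch, with made-up data, is not the book's fit routine.

```c
/* Unweighted straight-line fit y = a + b*x by the closed-form normal
 * equations, plus the sigma estimate of eq. (15.1.6).  Data are made up;
 * this is a preview sketch, not the book's weighted fit routine. */
#include <math.h>
#include <stdio.h>

static void fit_line(int N, const double x[], const double y[],
                     double *a, double *b)
{
    double sx = 0.0, sy = 0.0, sxx = 0.0, sxy = 0.0;
    for (int i = 0; i < N; i++) {
        sx += x[i]; sy += y[i]; sxx += x[i] * x[i]; sxy += x[i] * y[i];
    }
    double d = N * sxx - sx * sx;              /* assumes at least two distinct x */
    *b = (N * sxy - sx * sy) / d;              /* slope */
    *a = (sy - (*b) * sx) / N;                 /* intercept */
}

int main(void)
{
    const double x[] = {0.0, 1.0, 2.0, 3.0}, y[] = {1.1, 2.9, 5.2, 6.8};
    const int N = 4, M = 2;
    double a, b;
    fit_line(N, x, y, &a, &b);

    double ss = 0.0;                           /* eq. (15.1.6) with the fitted line */
    for (int i = 0; i < N; i++) {
        double r = y[i] - (a + b * x[i]);
        ss += r * r;
    }
    printf("a = %.4f  b = %.4f  sigma = %.4f\n", a, b, sqrt(ss / (N - M)));
    return 0;
}
```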
