15.4 General Linear Least Squares

An immediate generalization of §15.2 is to fit a set of data points $(x_i, y_i)$ to a model that is not just a linear combination of $1$ and $x$ (namely $a + bx$), but rather a linear combination of any $M$ specified functions of $x$. For example, the functions could be $1, x, x^2, \ldots, x^{M-1}$, in which case their general linear combination,

$$y(x) = a_1 + a_2 x + a_3 x^2 + \cdots + a_M x^{M-1} \qquad (15.4.1)$$

is a polynomial of degree $M - 1$. Or, the functions could be sines and cosines, in which case their general linear combination is a harmonic series.
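As a concrete illustration (not part of the original text), here is a minimal C sketch of such a linear combination of basis functions, assuming the polynomial basis $X_k(x) = x^{k-1}$ of equation (15.4.1). The names `Xk` and `ymodel` are hypothetical and are not the routines developed later in this section; the unit-offset array `a[1..M]` follows the book's usual array convention.

```c
#include <math.h>

/* Hypothetical basis functions X_k(x) = x^(k-1), k = 1..M, giving the
   polynomial model of equation (15.4.1); replacing them with sines and
   cosines would give a harmonic series instead. */
double Xk(int k, double x)
{
    return pow(x, k - 1);
}

/* Evaluate the linear combination y(x) = sum_k a[k]*X_k(x),
   with the parameters stored unit-offset in a[1..M]. */
double ymodel(double x, double a[], int M)
{
    int k;
    double y = 0.0;

    for (k = 1; k <= M; k++)
        y += a[k] * Xk(k, x);
    return y;
}
```

Note that `ymodel` is linear in the parameters `a[k]` even though `Xk` may be an arbitrarily nonlinear function of `x`.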
The general form of this kind of model is

$$y(x) = \sum_{k=1}^{M} a_k X_k(x) \qquad (15.4.2)$$

where $X_1(x), \ldots, X_M(x)$ are arbitrary fixed functions of $x$, called the basis functions.

Note that the functions $X_k(x)$ can be wildly nonlinear functions of $x$. In this discussion "linear" refers only to the model's dependence on its parameters $a_k$.

For these linear models we generalize the discussion of the previous section by defining a merit function

$$\chi^2 = \sum_{i=1}^{N} \left[ \frac{y_i - \sum_{k=1}^{M} a_k X_k(x_i)}{\sigma_i} \right]^2 \qquad (15.4.3)$$

As before, $\sigma_i$ is the measurement error (standard deviation) of the $i$th data point, presumed to be known. If the measurement errors are not known, they may all (as discussed at the end of §15.1) be set to the constant value $\sigma = 1$.

Once again, we will pick as best parameters those that minimize $\chi^2$. There are several different techniques available for finding this minimum. Two are particularly useful, and we will discuss both in this section. To introduce them and elucidate their relationship, we need some notation.

Let $\mathbf{A}$ be a matrix whose $N \times M$ components are constructed from the $M$ basis functions evaluated at the $N$ abscissas $x_i$, and from the $N$ measurement errors $\sigma_i$, by the prescription

$$A_{ij} = \frac{X_j(x_i)}{\sigma_i} \qquad (15.4.4)$$

The matrix $\mathbf{A}$ is called the design matrix of the fitting problem. Notice that in general $\mathbf{A}$ has more rows than columns, $N \ge M$, since there must be more data points than model parameters to be solved for. (You can fit a straight line to two points, but not a very meaningful quintic!) The design matrix is shown schematically in Figure 15.4.1.

Also define a vector $\mathbf{b}$ of length $N$ by

$$b_i = \frac{y_i}{\sigma_i} \qquad (15.4.5)$$

and denote the $M$-vector whose components are the parameters to be fitted, $a_1, \ldots, a_M$, by $\mathbf{a}$.
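To make the notation concrete, the following sketch (an addition, not the book's own routines) builds the design matrix $A_{ij} = X_j(x_i)/\sigma_i$ and the vector $b_i = y_i/\sigma_i$ of equations (15.4.4) and (15.4.5) for an assumed quadratic polynomial basis, and evaluates the $\chi^2$ of (15.4.3) as the squared residual $|\mathbf{A}\cdot\mathbf{a} - \mathbf{b}|^2$. The routine names, the zero-offset arrays, and the fixed sizes `NPT` and `MA` are illustrative choices only.

```c
#include <math.h>

#define NPT 10   /* number of data points N (illustrative) */
#define MA   3   /* number of basis functions M (illustrative) */

/* Illustrative basis X_j(x) = x^(j-1), j = 1..MA (a quadratic model). */
double Xj(int j, double x)
{
    return pow(x, j - 1);
}

/* Fill A[i][j] = X_{j+1}(x[i])/sig[i] and b[i] = y[i]/sig[i]
   (equations 15.4.4 and 15.4.5), using zero-offset arrays. */
void build_design(double x[], double y[], double sig[],
                  double A[NPT][MA], double b[])
{
    int i, j;

    for (i = 0; i < NPT; i++) {
        for (j = 0; j < MA; j++)
            A[i][j] = Xj(j + 1, x[i]) / sig[i];
        b[i] = y[i] / sig[i];
    }
}

/* Chi-square of (15.4.3) for parameters a[0..MA-1]: in the design-matrix
   notation it is the squared residual |A.a - b|^2. */
double chisq(double A[NPT][MA], double b[], double a[])
{
    int i, j;
    double r, sum = 0.0;

    for (i = 0; i < NPT; i++) {
        r = -b[i];
        for (j = 0; j < MA; j++)
            r += A[i][j] * a[j];
        sum += r * r;
    }
    return sum;
}
```

Written this way, minimizing $\chi^2$ over $\mathbf{a}$ is just the linear least-squares problem for the overdetermined system $\mathbf{A}\cdot\mathbf{a} \approx \mathbf{b}$, which is the form used by the solution methods discussed in the rest of this section.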