正在加载图片...
674 Chapter 15.Modeling of Data computation of the matrix inverse.In theory,since AT.A is positive definite, Cholesky decomposition is the most efficient way to solve the normal equations. However,in practice most of the computing time is spent in looping over the data to form the equations,and Gauss-Jordan is quite adequate. We need to warn you that the solution of a least-squares problem directly from the normal equations is rather susceptible to roundoff error.An alternative,and preferred,technique involves QR decomposition (82.10,$11.3,and $11.6)of the design matrix A.This is essentially what we did at the end of 815.2 for fitting data to a straight line,but without invoking all the machinery of OR to derive the necessary formulas.Later in this section,we will discuss other difficulties in the least-squares problem,for which the cure is singular value decomposition(SVD),of which we give an implementation.It turns out that SVD also fixes the roundoff problem,so it is our recommended technique for all but"easyleast-squares problems.It is for these easy g problems that the following routine,which solves the normal equations,is intended The routine below introduces one bookkeeping trick that is quite useful in practical work.Frequently it is a matter of"art"to decide which parameters ak in a model should be fit from the data set,and which should be held constant at fixed values,for example values predicted by a theory or measured in a previous experiment.One wants,therefore,to have a convenient means for "freezing" and"unfreezing"the parameters ak.In the following routine the total number of THE parameters ak is denoted ma(called M above).As input to the routine,you supply ART an array ia[1..ma],whose components are either zero or nonzero (e.g..1).Zeros indicate that you want the corresponding elements of the parameter vector a [1..ma] Programs to be held fixed at their input values.Nonzeros indicate parameters that should be fitted for.On output,any frozen parameters will have their variances,and all their covariances,set to zero in the covariance matrix to dir #include "nrutil.h" void lfit(f1oatx[],f1oaty▣,f1 oat sig0,int ndat,f1oata▣,int ia0, OF SCIENTIFIC COMPUTING(ISBN int ma,float **covar,float *chisq, void (*funcs)(float,float []int)) 1988-19920 Given a set of data points x[1..ndat],y[1..ndat]with individual standard deviations sig[1..ndat],use x minimization to fit for some or all of the coefficients a[1..ma]of a function that depends linearly on a,y=>;ai x afunci(x).The input array ia[1..ma] indicates by nonzero entries those components of a that should be fitted for,and by zero entries those components that should be held fixed at their input values.The program returns values for a[1..ma],x2=chisq,and the covariance matrix covar [1..ma][1..ma].(Parameters Numerical Recipes 10-621 43108 held fixed will return zero covariances.The user supplies a routine funcs(x,afunc,ma)that returns the ma basis functions evaluated at =x in the array afunc[1..ma]. (outside void covsrt(float **covar,int ma,int ia[],int mfit); North Software. void gaussj(float **a,int n,float **b,int m); 1nt1,j,k,1,m,mf1t=0; float ym,wt,sum,sig2i,**beta,*afunc; Ame beta=matrix(1,ma,1,1); afunc-vector(1,ma); for (j=1;j<=ma;j++) if (ia[j])mfit++; if (mfit ==0)nrerror("lfit:no parameters to be fitted"); for (j=1;j<=mfit;j++) Initialize the (symmetric)matrix. for (k=1;k<=mfit;k++)covar [j][k]=0.0; beta[j][1]=0.0; 2 for (i=1;i<=ndat;i++){ Loop over data to accumulate coefficients of the normal equations.674 Chapter 15. Modeling of Data Permission is granted for internet users to make one paper copy for their own personal use. Further reproduction, or any copyin Copyright (C) 1988-1992 by Cambridge University Press. Programs Copyright (C) 1988-1992 by Numerical Recipes Software. Sample page from NUMERICAL RECIPES IN C: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43108-5) g of machine￾readable files (including this one) to any server computer, is strictly prohibited. To order Numerical Recipes books or CDROMs, visit website http://www.nr.com or call 1-800-872-7423 (North America only), or send email to directcustserv@cambridge.org (outside North America). computation of the matrix inverse. In theory, since AT · A is positive definite, Cholesky decomposition is the most efficient way to solve the normal equations. However, in practice most of the computing time is spent in looping over the data to form the equations, and Gauss-Jordan is quite adequate. We need to warn you that the solution of a least-squares problem directly from the normal equations is rather susceptible to roundoff error. An alternative, and preferred, technique involves QR decomposition (§2.10, §11.3, and §11.6) of the design matrix A. This is essentially what we did at the end of §15.2 for fitting data to a straight line, but without invoking all the machinery of QR to derive the necessary formulas. Later in this section, we will discuss other difficulties in the least-squares problem, for which the cure issingular value decomposition (SVD), of which we give an implementation. It turns out that SVD also fixes the roundoff problem, so it is our recommended technique for all but “easy” least-squares problems. It is for these easy problems that the following routine, which solves the normal equations, is intended. The routine below introduces one bookkeeping trick that is quite useful in practical work. Frequently it is a matter of “art” to decide which parameters a k in a model should be fit from the data set, and which should be held constant at fixed values, for example values predicted by a theory or measured in a previous experiment. One wants, therefore, to have a convenient means for “freezing” and “unfreezing” the parameters ak. In the following routine the total number of parameters ak is denoted ma (called M above). As input to the routine, you supply an array ia[1..ma], whose components are either zero or nonzero (e.g., 1). Zeros indicate that you want the corresponding elements of the parameter vector a[1..ma] to be held fixed at their input values. Nonzeros indicate parameters that should be fitted for. On output, any frozen parameters will have their variances, and all their covariances, set to zero in the covariance matrix. #include "nrutil.h" void lfit(float x[], float y[], float sig[], int ndat, float a[], int ia[], int ma, float **covar, float *chisq, void (*funcs)(float, float [], int)) Given a set of data points x[1..ndat], y[1..ndat] with individual standard deviations sig[1..ndat], use χ2 minimization to fit for some or all of the coefficients a[1..ma] of a function that depends linearly on a, y = i ai × afunci(x). The input array ia[1..ma] indicates by nonzero entries those components of a that should be fitted for, and by zero entries those components that should be held fixed at their input values. The program returns values for a[1..ma], χ2 = chisq, and the covariance matrix covar[1..ma][1..ma]. (Parameters held fixed will return zero covariances.)The user supplies a routine funcs(x,afunc,ma) that returns the ma basis functions evaluated at x = x in the array afunc[1..ma]. { void covsrt(float **covar, int ma, int ia[], int mfit); void gaussj(float **a, int n, float **b, int m); int i,j,k,l,m,mfit=0; float ym,wt,sum,sig2i,**beta,*afunc; beta=matrix(1,ma,1,1); afunc=vector(1,ma); for (j=1;j<=ma;j++) if (ia[j]) mfit++; if (mfit == 0) nrerror("lfit: no parameters to be fitted"); for (j=1;j<=mfit;j++) { Initialize the (symmetric)matrix. for (k=1;k<=mfit;k++) covar[j][k]=0.0; beta[j][1]=0.0; } for (i=1;i<=ndat;i++) { Loop over data to accumulate coefficients of the normal equations.
<<向上翻页向下翻页>>
©2008-现在 cucdc.com 高等教育资讯网 版权所有