Solution by Use of Singular Value Decomposition

In some applications, the normal equations are perfectly adequate for linear least-squares problems. However, in many cases the normal equations are very close to singular. A zero pivot element may be encountered during the solution of the linear equations (e.g., in gaussj), in which case you get no solution at all. Or a very small pivot may occur, in which case you typically get fitted parameters a_k with very large magnitudes that are delicately (and unstably) balanced to cancel out almost precisely when the fitted function is evaluated.

Why does this commonly occur? The reason is that, more often than experimenters would like to admit, data do not clearly distinguish between two or more of the basis functions provided. If two such functions, or two different combinations of functions, happen to fit the data about equally well (or equally badly) then the matrix [α], unable to distinguish between them, neatly folds up its tent and becomes singular. There is a certain mathematical irony in the fact that least-squares problems are both overdetermined (number of data points greater than number of parameters) and underdetermined (ambiguous combinations of parameters exist); but that is how it frequently is. The ambiguities can be extremely hard to notice a priori in complicated problems.

Enter singular value decomposition (SVD). This would be a good time for you to review the material in §2.6, which we will not repeat here. In the case of an overdetermined system, SVD produces a solution that is the best approximation in the least-squares sense, cf. equation (2.6.10). That is exactly what we want. In the case of an underdetermined system, SVD produces a solution whose values (for us, the a_k's) are smallest in the least-squares sense, cf. equation (2.6.8). That is also what we want: when some combination of basis functions is irrelevant to the fit, that combination will be driven down to a small, innocuous value, rather than pushed up to delicately canceling infinities.

In terms of the design matrix A (equation 15.4.4) and the vector b (equation 15.4.5), minimization of χ² in (15.4.3) can be written as

    find a that minimizes  χ² = |A · a − b|²        (15.4.16)
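As a concrete illustration of the quantities referenced above, here is a minimal sketch of how the design matrix A_ij = X_j(x_i)/σ_i (equation 15.4.4) and the vector b_i = y_i/σ_i (equation 15.4.5) might be assembled from data points (x_i, y_i) with standard deviations σ_i. The function name build_design, the basis-function callback funcs, and the 0-based arrays are illustrative assumptions, not the book's actual routines (which use 1-based nrutil arrays).

#include <stdlib.h>

/* Assemble the design matrix A[i][j] = X_j(x_i)/sigma_i and the
   right-hand side b[i] = y_i/sigma_i for ndata points and ma
   basis functions.  The caller supplies funcs, which fills
   afunc[0..ma-1] with the basis-function values X_j(x). */
void build_design(int ndata, int ma,
                  const double *x, const double *y, const double *sig,
                  double **A, double *b,
                  void (*funcs)(double, double *, int))
{
    double *afunc = malloc(ma * sizeof(double));
    for (int i = 0; i < ndata; i++) {
        funcs(x[i], afunc, ma);          /* evaluate X_j at x_i */
        for (int j = 0; j < ma; j++)
            A[i][j] = afunc[j] / sig[i]; /* weight row by 1/sigma_i */
        b[i] = y[i] / sig[i];            /* weighted data vector */
    }
    free(afunc);
}

Dividing each row by σ_i is what turns the χ² sum of (15.4.3) into the plain Euclidean norm of (15.4.16).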
Comparing to equation (2.6.9), we see that this is precisely the problem that routines svdcmp and svbksb are designed to solve. The solution, which is given by equation (2.6.12), can be rewritten as follows: If U and V enter the SVD decomposition of A according to equation (2.6.1), as computed by svdcmp, then let the vectors U_(i), i = 1, ..., M, denote the columns of U (each one a vector of length N); and let the vectors V_(i), i = 1, ..., M, denote the columns of V (each one a vector of length M). Then the solution (2.6.12) of the least-squares problem (15.4.16) can be written as

    a = Σ_{i=1}^{M} [ (U_(i) · b) / w_i ] V_(i)        (15.4.17)

where the w_i are, as in §2.6, the singular values calculated by svdcmp.

Equation (15.4.17) says that the fitted parameters a are linear combinations of the columns of V, with coefficients obtained by forming dot products of the columns of U with the data vector b and dividing by the corresponding singular values w_i.
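The following is a minimal sketch of evaluating (15.4.17) directly, assuming the SVD of A has already been computed (e.g., by svdcmp) into arrays u, w, and v. The function name, the 0-based array layout, and the explicit tolerance argument are assumptions for illustration; the book's svbksb performs essentially this back substitution, with the caller expected to zero small w_i beforehand.

/* Evaluate equation (15.4.17): a = sum_i [(U_(i) . b)/w_i] V_(i),
   given the SVD A = U diag(w) V^T.  u is ndata x ma, v is ma x ma,
   w has length ma.  Singular values below tol*wmax are edited out
   (skipped) rather than divided by, as recommended in Section 2.6. */
void svd_fit_coeffs(int ndata, int ma,
                    double **u, const double *w, double **v,
                    const double *b, double *a, double tol)
{
    double wmax = 0.0;
    for (int i = 0; i < ma; i++)          /* largest singular value */
        if (w[i] > wmax) wmax = w[i];

    for (int j = 0; j < ma; j++) a[j] = 0.0;

    for (int i = 0; i < ma; i++) {
        if (w[i] <= tol * wmax) continue; /* zero out small w_i */
        double coef = 0.0;
        for (int k = 0; k < ndata; k++)   /* U_(i) . b */
            coef += u[k][i] * b[k];
        coef /= w[i];                     /* divide by singular value */
        for (int j = 0; j < ma; j++)      /* accumulate coef * V_(i) */
            a[j] += coef * v[j][i];
    }
}

Skipping the small w_i, rather than dividing by them, is exactly the edit that keeps an irrelevant combination of basis functions at a small, innocuous value instead of letting it blow up into delicately canceling large parameters.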