15.4 General Linear Least Squares 677

Sample page from NUMERICAL RECIPES IN C: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43108-5). Copyright (C) 1988-1992 by Cambridge University Press. Programs Copyright (C) 1988-1992 by Numerical Recipes Software.

of U with the weighted data vector (15.4.5).
Though it is beyond our scope to prove here, it turns out that the standard (loosely, "probable") errors in the fitted parameters are also linear combinations of the columns of V. In fact, equation (15.4.17) can be written in a form displaying these errors as

\mathbf{a} = \sum_{i=1}^{M} \left( \frac{\mathbf{U}_{(i)} \cdot \mathbf{b}}{w_i} \right) \mathbf{V}_{(i)} \;\pm\; \frac{1}{w_1}\,\mathbf{V}_{(1)} \;\pm\; \cdots \;\pm\; \frac{1}{w_M}\,\mathbf{V}_{(M)}    (15.4.18)

Here each ± is followed by a standard deviation. The amazing fact is that, decomposed in this fashion, the standard deviations are all mutually independent (uncorrelated). Therefore they can be added together in root-mean-square fashion. What is going on is that the vectors V_(i) are the principal axes of the error ellipsoid of the fitted parameters a (see §15.6).

It follows that the variance in the estimate of a parameter a_j is given by

\sigma^2(a_j) = \sum_{i=1}^{M} \frac{1}{w_i^2} \left[ \mathbf{V}_{(i)} \right]_j^2 = \sum_{i=1}^{M} \left( \frac{V_{ji}}{w_i} \right)^2    (15.4.19)

whose result should be identical with (15.4.14). As before, you should not be surprised at the formula for the covariances, here given without proof,

\mathrm{Cov}(a_j, a_k) = \sum_{i=1}^{M} \frac{V_{ji} V_{ki}}{w_i^2}    (15.4.20)

We introduced this subsection by noting that the normal equations can fail by encountering a zero pivot. We have not yet, however, mentioned how SVD overcomes this problem. The answer is: If any singular value w_i is zero, its reciprocal in equation (15.4.18) should be set to zero, not infinity. (Compare the discussion preceding equation 2.6.7.) This corresponds to adding to the fitted parameters a a zero multiple, rather than some random large multiple, of any linear combination of basis functions that are degenerate in the fit. It is a good thing to do!

Moreover, if a singular value w_i is nonzero but very small, you should also define its reciprocal to be zero, since its apparent value is probably an artifact of roundoff error, not a meaningful number. A plausible answer to the question "how small is small?" is to edit in this fashion all singular values whose ratio to the largest singular value is less than N times the machine precision ε.
(You might argue for √N, or a constant, instead of N as the multiple; that starts getting into hardware-dependent questions.)

There is another reason for editing even additional singular values, ones large enough that roundoff error is not a question. Singular value decomposition allows you to identify linear combinations of variables that just happen not to contribute much to reducing the χ² of your data set. Editing these can sometimes reduce the probable error on your coefficients quite significantly, while increasing the minimum χ² only negligibly. We will learn more about identifying and treating such cases in §15.6. In the following routine, the point at which this kind of editing would occur is indicated.

Generally speaking, we recommend that you always use SVD techniques instead of using the normal equations. SVD's only significant disadvantage is that it requires