62 Chapter 2. Solution of Linear Algebraic Equations

If we want to single out one particular member of this solution-set of vectors as a representative, we might want to pick the one with the smallest length |x|^2. Here is how to find that vector using SVD: Simply replace 1/w_j by zero if w_j = 0. (It is not very often that one gets to set ∞ = 0!) Then compute (working from right to left)

    x = V · [diag(1/w_j)] · (U^T · b)                         (2.6.7)

This will be the solution vector of smallest length; the columns of V that are in the nullspace complete the specification of the solution set.

Proof: Consider |x + x'|, where x' lies in the nullspace. Then, if W^{-1} denotes the modified inverse of W with some elements zeroed,

    |x + x'| = |V · W^{-1} · U^T · b + x'|
             = |V · (W^{-1} · U^T · b + V^T · x')|
             = |W^{-1} · U^T · b + V^T · x'|                  (2.6.8)

Here the first equality follows from (2.6.7), the second and third from the orthonormality of V. If you now examine the two terms that make up the sum on the right-hand side, you will see that the first one has nonzero j components only where w_j ≠ 0, while the second one, since x' is in the nullspace, has nonzero j components only where w_j = 0. Therefore the minimum length obtains for x' = 0, q.e.d.

If b is not in the range of the singular matrix A, then the set of equations (2.6.6) has no solution. But here is some good news: If b is not in the range of A, then equation (2.6.7) can still be used to construct a "solution" vector x. This vector x will not exactly solve A · x = b. But, among all possible vectors x, it will do the closest possible job in the least squares sense. In other words, (2.6.7) finds the x which minimizes

    r ≡ |A · x − b|                                           (2.6.9)

The number r is called the residual of the solution.
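The right-to-left computation in equation (2.6.7) can be sketched in a few lines of C. This is a minimal illustration, not the book's own svbksb routine: the function name svd_backsub, the fixed 3x3 dimensions, and the zero-based array convention are assumptions of this sketch. Given a precomputed SVD A = U · W · V^T (with u, w, v holding U, the diagonal of W, and V), it forms diag(1/w_j) · U^T · b, zeroing 1/w_j wherever w_j = 0, and then multiplies by V.

```c
#define M 3   /* rows of A (and of U) */
#define N 3   /* columns of A (order of W and V) */

/* Minimum-length solution of A.x = b via equation (2.6.7):
   x = V . [diag(1/w_j)] . (U^T . b), with 1/w_j -> 0 when w_j = 0. */
void svd_backsub(double u[M][N], double w[N], double v[N][N],
                 double b[M], double x[N])
{
    double tmp[N];
    int i, j;

    /* tmp = diag(1/w_j) . U^T . b, working from right to left */
    for (j = 0; j < N; j++) {
        double s = 0.0;
        if (w[j] != 0.0) {            /* zero out 1/w_j for w_j = 0 */
            for (i = 0; i < M; i++)
                s += u[i][j] * b[i];
            s /= w[j];
        }
        tmp[j] = s;
    }
    /* x = V . tmp */
    for (i = 0; i < N; i++) {
        double s = 0.0;
        for (j = 0; j < N; j++)
            s += v[i][j] * tmp[j];
        x[i] = s;
    }
}
```

For a quick check, take the already-diagonal singular matrix A = diag(2, 1, 0), whose SVD has U = V = I and w = (2, 1, 0). With b = (4, 3, 5) the routine returns x = (2, 3, 0): the component along the nullspace direction is zeroed, which is exactly what makes this the shortest solution vector.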
The proof is similar to (2.6.8): Suppose we modify x by adding some arbitrary x'. Then A · x − b is modified by adding some b' ≡ A · x'. Obviously b' is in the range of A. We then have

    |A · x − b + b'| = |(U · W · V^T) · (V · W^{-1} · U^T · b) − b + b'|
                     = |(U · W · W^{-1} · U^T − 1) · b + b'|
                     = |U · [(W · W^{-1} − 1) · U^T · b + U^T · b']|
                     = |(W · W^{-1} − 1) · U^T · b + U^T · b'|       (2.6.10)

Now, (W · W^{-1} − 1) is a diagonal matrix which has nonzero j components only for w_j = 0, while U^T · b' has nonzero j components only for w_j ≠ 0, since b' lies in the range of A. Therefore the minimum obtains for b' = 0, q.e.d.

Figure 2.6.1 summarizes our discussion of SVD thus far.

Sample page from NUMERICAL RECIPES IN C: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43108-5). Copyright (C) 1988-1992 by Cambridge University Press. Programs Copyright (C) 1988-1992 by Numerical Recipes Software. Permission is granted for internet users to make one paper copy for their own personal use. Further reproduction, or any copying of machine-readable files (including this one) to any server computer, is strictly prohibited. To order Numerical Recipes books or CDROMs, visit website http://www.nr.com or call 1-800-872-7423 (North America only), or send email to directcustserv@cambridge.org (outside North America).