34 Chapter 2. Solution of Linear Algebraic Equations

Sample page from NUMERICAL RECIPES IN C: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43108-5). Copyright (C) 1988-1992 by Cambridge University Press. Programs Copyright (C) 1988-1992 by Numerical Recipes Software.

at this point. Occasionally it is useful to be able to peer through the veil, for example to pass a whole row a[i][j], j=1,...,N, by the reference a[i].
Tasks of Computational Linear Algebra

We will consider the following tasks as falling in the general purview of this chapter:

• Solution of the matrix equation A · x = b for an unknown vector x, where A is a square matrix of coefficients, raised dot denotes matrix multiplication, and b is a known right-hand side vector (§2.1–§2.10).

• Solution of more than one matrix equation A · xj = bj, for a set of vectors xj, j = 1, 2, ..., each corresponding to a different, known right-hand side vector bj. In this task the key simplification is that the matrix A is held constant, while the right-hand sides, the b's, are changed (§2.1–§2.10).

• Calculation of the matrix A^(-1) which is the matrix inverse of a square matrix A, i.e., A · A^(-1) = A^(-1) · A = 1, where 1 is the identity matrix (all zeros except for ones on the diagonal). This task is equivalent, for an N × N matrix A, to the previous task with N different bj's (j = 1, 2, ..., N), namely the unit vectors (bj = all zero elements except for 1 in the jth component). The corresponding x's are then the columns of the matrix inverse of A (§2.1 and §2.3).

• Calculation of the determinant of a square matrix A (§2.3).

If M < N, or if M = N but the equations are degenerate, then there are effectively fewer equations than unknowns. In this case there can be either no solution, or else more than one solution vector x. In the latter event, the solution space consists of a particular solution xp added to any linear combination of (typically) N − M vectors (which are said to be in the nullspace of the matrix A). The task of finding the solution space of A involves

• Singular value decomposition of a matrix A.

This subject is treated in §2.6.

In the opposite case there are more equations than unknowns, M > N. When this occurs there is, in general, no solution vector x to equation (2.0.1), and the set of equations is said to be overdetermined.
It happens frequently, however, that the best "compromise" solution is sought, the one that comes closest to satisfying all equations simultaneously. If closeness is defined in the least-squares sense, i.e., that the sum of the squares of the differences between the left- and right-hand sides of equation (2.0.1) be minimized, then the overdetermined linear problem reduces to a (usually) solvable linear problem, called the

• Linear least-squares problem.

The reduced set of equations to be solved can be written as the N × N set of equations

    (A^T · A) · x = (A^T · b)                    (2.0.4)

where A^T denotes the transpose of the matrix A. Equations (2.0.4) are called the normal equations of the linear least-squares problem. There is a close connection