which (peeling off the $\mathbf{C}^{-1}$'s one at a time) implies a solution

$$\mathbf{x} = \mathbf{C}_1 \cdot \mathbf{C}_2 \cdot \mathbf{C}_3 \cdots \mathbf{b} \tag{2.1.8}$$

Notice the essential difference between equation (2.1.8) and equation (2.1.6). In the latter case, the C's must be applied to b in the reverse order from that in which they become known. That is, they must all be stored along the way. This requirement greatly reduces the usefulness of column operations, generally restricting them to simple permutations, for example in support of full pivoting.

CITED REFERENCES AND FURTHER READING:

Wilkinson, J.H. 1965, The Algebraic Eigenvalue Problem (New York: Oxford University Press). [1]

Carnahan, B., Luther, H.A., and Wilkes, J.O. 1969, Applied Numerical Methods (New York: Wiley), Example 5.2, p. 282.

Bevington, P.R. 1969, Data Reduction and Error Analysis for the Physical Sciences (New York: McGraw-Hill), Program B-2, p. 298.

Westlake, J.R. 1968, A Handbook of Numerical Matrix Inversion and Solution of Linear Equations (New York: Wiley).

Ralston, A., and Rabinowitz, P. 1978, A First Course in Numerical Analysis, 2nd ed. (New York: McGraw-Hill), §9.3–1.

2.2 Gaussian Elimination with Backsubstitution

The usefulness of Gaussian elimination with backsubstitution is primarily pedagogical. It stands between full elimination schemes such as Gauss-Jordan, and the triangular decomposition schemes that will be discussed in the next section. Gaussian elimination reduces a matrix not all the way to the identity matrix, but only halfway, to a matrix whose components on the diagonal and above (say) remain nontrivial. Let us now see what advantages accrue.

Suppose that in doing Gauss-Jordan elimination, as described in §2.1, we at each stage subtract away rows only below the then-current pivot element. When $a_{22}$ is the pivot element, for example, we divide the second row by its value (as before), but now use the pivot row to zero only $a_{32}$ and $a_{42}$, not $a_{12}$ (see equation 2.1.1). Suppose, also, that we do only partial pivoting, never interchanging columns, so that the order of the unknowns never needs to be modified.

Then, when we have done this for all the pivots, we will be left with a reduced equation that looks like this (in the case of a single right-hand side vector):

$$
\begin{bmatrix}
a'_{11} & a'_{12} & a'_{13} & a'_{14} \\
0 & a'_{22} & a'_{23} & a'_{24} \\
0 & 0 & a'_{33} & a'_{34} \\
0 & 0 & 0 & a'_{44}
\end{bmatrix}
\cdot
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}
=
\begin{bmatrix} b'_1 \\ b'_2 \\ b'_3 \\ b'_4 \end{bmatrix}
\tag{2.2.1}
$$

Here the primes signify that the a's and b's do not have their original numerical values, but have been modified by all the row operations in the elimination to this point. The procedure up to this point is termed Gaussian elimination.
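The book gives no routine for this reduction step, but the procedure just described is short enough to sketch in C. In the sketch below the routine name gausselim, the zero-based indexing, and the `double **` matrix representation are illustrative choices, not the book's conventions (Numerical Recipes itself uses unit-offset arrays). Also, unlike the description above, this sketch does not divide each pivot row by its pivot value; that division is deferred to backsubstitution (equation 2.2.4 below), which is algebraically equivalent.

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Reduce the n x n system a.x = b to upper triangular form in place,
   using partial (row) pivoting only, so the order of the unknowns is
   never modified. A sketch only, not a routine from the book. */
void gausselim(double **a, double *b, int n)
{
    int i, j, k, imax;
    double big, tmp, factor;

    for (k = 0; k < n; k++) {
        /* Partial pivoting: find the largest |a[i][k]| on or below
           the diagonal and swap that row into the pivot position. */
        imax = k;
        big = fabs(a[k][k]);
        for (i = k + 1; i < n; i++)
            if (fabs(a[i][k]) > big) { big = fabs(a[i][k]); imax = i; }
        if (big == 0.0) { fprintf(stderr, "gausselim: singular matrix\n"); exit(1); }
        if (imax != k) {
            double *prow = a[k]; a[k] = a[imax]; a[imax] = prow;
            tmp = b[k]; b[k] = b[imax]; b[imax] = tmp;
        }
        /* Use the pivot row to zero elements below the pivot only;
           the pivot row and the rows above it are left alone. */
        for (i = k + 1; i < n; i++) {
            factor = a[i][k] / a[k][k];
            for (j = k; j < n; j++) a[i][j] -= factor * a[k][j];
            b[i] -= factor * b[k];
        }
    }
}
```

On exit, a and b hold the primed quantities of equation (2.2.1).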
Backsubstitution

But how do we solve for the x's? The last x ($x_4$ in this example) is already isolated, namely

$$x_4 = b'_4 / a'_{44} \tag{2.2.2}$$

With the last x known we can move to the penultimate x,

$$x_3 = \frac{1}{a'_{33}}\left[b'_3 - x_4\,a'_{34}\right] \tag{2.2.3}$$

and then proceed with the x before that one. The typical step is

$$x_i = \frac{1}{a'_{ii}}\left[b'_i - \sum_{j=i+1}^{N} a'_{ij}\,x_j\right] \tag{2.2.4}$$

The procedure defined by equation (2.2.4) is called backsubstitution. The combination of Gaussian elimination and backsubstitution yields a solution to the set of equations.

The advantage of Gaussian elimination and backsubstitution over Gauss-Jordan elimination is simply that the former is faster in raw operations count: The innermost loops of Gauss-Jordan elimination, each containing one subtraction and one multiplication, are executed $N^3$ and $N^2 M$ times (where there are $N$ equations and $M$ right-hand side vectors). The corresponding loops in Gaussian elimination are executed only $\frac{1}{3}N^3$ times (only half the matrix is reduced, and the increasing numbers of predictable zeros reduce the count to one-third), and $\frac{1}{2}N^2 M$ times, respectively. Each backsubstitution of a right-hand side is $\frac{1}{2}N^2$ executions of a similar loop (one multiplication plus one subtraction). For $M \ll N$ (only a few right-hand sides) Gaussian elimination thus has about a factor three advantage over Gauss-Jordan. (We could reduce this advantage to a factor 1.5 by not computing the inverse matrix as part of the Gauss-Jordan scheme.)

For computing the inverse matrix (which we can view as the case of $M = N$ right-hand sides, namely the $N$ unit vectors which are the columns of the identity matrix), Gaussian elimination and backsubstitution at first glance require $\frac{1}{3}N^3$ (matrix reduction) $+\ \frac{1}{2}N^3$ (right-hand side manipulations) $+\ \frac{1}{2}N^3$ ($N$ backsubstitutions) $= \frac{4}{3}N^3$ loop executions, which is more than the $N^3$ for Gauss-Jordan. However, the unit vectors are quite special in containing all zeros except for one element. If this is taken into account, the right-hand side manipulations can be reduced to only $\frac{1}{6}N^3$ loop executions, and, for matrix inversion, the two methods have identical efficiencies.

Both Gaussian elimination and Gauss-Jordan elimination share the disadvantage that all right-hand sides must be known in advance. The LU decomposition method in the next section does not share that deficiency, and also has an equally small operations count, both for solution with any number of right-hand sides and for matrix inversion. For this reason we will not implement the method of Gaussian elimination as a routine.
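Although the text explicitly declines to provide Gaussian elimination as a library routine, the backsubstitution loop of equation (2.2.4) is short enough to sketch; the name backsub and the zero-based conventions below are again illustrative, matching the gausselim sketch above rather than anything in the book.

```c
/* Solve the upper triangular system left by the elimination sketch
   above, implementing equation (2.2.4). a and b hold the primed
   quantities; zero-based indexing. Illustrative sketch only. */
void backsub(double **a, double *b, double *x, int n)
{
    int i, j;
    double sum;

    for (i = n - 1; i >= 0; i--) {   /* the last x is already isolated */
        sum = b[i];
        for (j = i + 1; j < n; j++)
            sum -= a[i][j] * x[j];   /* subtract the already-known x's */
        x[i] = sum / a[i][i];        /* divide by the diagonal element */
    }
}
```

For a single right-hand side, calling gausselim(a, b, n) followed by backsub(a, b, x, n) solves the system in roughly the $\frac{1}{3}N^3 + \frac{1}{2}N^2$ loop executions counted above.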
CITED REFERENCES AND FURTHER READING:

Ralston, A., and Rabinowitz, P. 1978, A First Course in Numerical Analysis, 2nd ed. (New York: McGraw-Hill), §9.3–1.

Isaacson, E., and Keller, H.B. 1966, Analysis of Numerical Methods (New York: Wiley), §2.1.

Johnson, L.W., and Riess, R.D. 1982, Numerical Analysis, 2nd ed. (Reading, MA: Addison-Wesley), §2.2.1.

Westlake, J.R. 1968, A Handbook of Numerical Matrix Inversion and Solution of Linear Equations (New York: Wiley).

2.3 LU Decomposition and Its Applications

Suppose we are able to write the matrix A as a product of two matrices,

$$\mathbf{L} \cdot \mathbf{U} = \mathbf{A} \tag{2.3.1}$$

where L is lower triangular (has elements only on the diagonal and below) and U is upper triangular (has elements only on the diagonal and above). For the case of a 4 × 4 matrix A, for example, equation (2.3.1) would look like this:

$$
\begin{bmatrix}
\alpha_{11} & 0 & 0 & 0 \\
\alpha_{21} & \alpha_{22} & 0 & 0 \\
\alpha_{31} & \alpha_{32} & \alpha_{33} & 0 \\
\alpha_{41} & \alpha_{42} & \alpha_{43} & \alpha_{44}
\end{bmatrix}
\cdot
\begin{bmatrix}
\beta_{11} & \beta_{12} & \beta_{13} & \beta_{14} \\
0 & \beta_{22} & \beta_{23} & \beta_{24} \\
0 & 0 & \beta_{33} & \beta_{34} \\
0 & 0 & 0 & \beta_{44}
\end{bmatrix}
=
\begin{bmatrix}
a_{11} & a_{12} & a_{13} & a_{14} \\
a_{21} & a_{22} & a_{23} & a_{24} \\
a_{31} & a_{32} & a_{33} & a_{34} \\
a_{41} & a_{42} & a_{43} & a_{44}
\end{bmatrix}
\tag{2.3.2}
$$

We can use a decomposition such as (2.3.1) to solve the linear set

$$\mathbf{A} \cdot \mathbf{x} = (\mathbf{L} \cdot \mathbf{U}) \cdot \mathbf{x} = \mathbf{L} \cdot (\mathbf{U} \cdot \mathbf{x}) = \mathbf{b} \tag{2.3.3}$$

by first solving for the vector y such that

$$\mathbf{L} \cdot \mathbf{y} = \mathbf{b} \tag{2.3.4}$$

and then solving

$$\mathbf{U} \cdot \mathbf{x} = \mathbf{y} \tag{2.3.5}$$

What is the advantage of breaking up one linear set into two successive ones? The advantage is that the solution of a triangular set of equations is quite trivial, as we have already seen in §2.2 (equation 2.2.4). Thus, equation (2.3.4) can be solved by forward substitution as follows,

$$
y_1 = \frac{b_1}{\alpha_{11}}, \qquad
y_i = \frac{1}{\alpha_{ii}}\left[b_i - \sum_{j=1}^{i-1} \alpha_{ij}\,y_j\right], \quad i = 2, 3, \ldots, N \tag{2.3.6}
$$

while (2.3.5) can then be solved by backsubstitution exactly as in equations (2.2.2)–(2.2.4),

$$
x_N = \frac{y_N}{\beta_{NN}}, \qquad
x_i = \frac{1}{\beta_{ii}}\left[y_i - \sum_{j=i+1}^{N} \beta_{ij}\,x_j\right], \quad i = N-1, N-2, \ldots, 1 \tag{2.3.7}
$$
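The two triangular solves (2.3.6) and (2.3.7) translate directly into C. The sketch below assumes the α's and β's are held in two separate matrices, alpha and beta, with zero-based indexing; the routine name lusolve is illustrative. The book's own routines later in this section (ludcmp and lubksb) instead pack both factors into a single matrix.

```c
/* Solve L.y = b by forward substitution (equation 2.3.6), then
   U.x = y by backsubstitution (equation 2.3.7). alpha holds the
   lower triangular factor, beta the upper one; zero-based indexing.
   A sketch only, not the book's packed-storage routines. */
void lusolve(double **alpha, double **beta, double *b, double *x, int n)
{
    int i, j;
    double sum;
    double *y = x;  /* y can safely share storage with x */

    for (i = 0; i < n; i++) {          /* forward substitution */
        sum = b[i];
        for (j = 0; j < i; j++) sum -= alpha[i][j] * y[j];
        y[i] = sum / alpha[i][i];
    }
    for (i = n - 1; i >= 0; i--) {     /* backsubstitution */
        sum = y[i];
        for (j = i + 1; j < n; j++) sum -= beta[i][j] * x[j];
        x[i] = sum / beta[i][i];
    }
}
```

Each of the two loops above is about $\frac{1}{2}N^2$ executions of its inner statement, so once the decomposition of A is in hand, every additional right-hand side costs only about $N^2$ operations.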