2.1 Gauss-Jordan Elimination

For inverting a matrix, Gauss-Jordan elimination is about as efficient as any other method. For solving sets of linear equations, Gauss-Jordan elimination produces both the solution of the equations for one or more right-hand side vectors b, and also the matrix inverse $A^{-1}$. However, its principal weaknesses are (i) that it requires all the right-hand sides to be stored and manipulated at the same time, and (ii) that when the inverse matrix is not desired, Gauss-Jordan is three times slower than the best alternative technique for solving a single linear set (§2.3). The method's principal strength is that it is as stable as any other direct method, perhaps even a bit more stable when full pivoting is used (see below).

If you come along later with an additional right-hand side vector, you can multiply it by the inverse matrix, of course. This does give an answer, but one that is quite susceptible to roundoff error, not nearly as good as if the new vector had been included with the set of right-hand side vectors in the first instance.

For these reasons, Gauss-Jordan elimination should usually not be your method of first choice, either for solving linear equations or for matrix inversion. The decomposition methods in §2.3 are better. Why do we give you Gauss-Jordan at all? Because it is straightforward, understandable, solid as a rock, and an exceptionally good "psychological" backup for those times that something is going wrong and you think it might be your linear-equation solver.

Some people believe that the backup is more than psychological, that Gauss-Jordan elimination is an "independent" numerical method. This turns out to be mostly myth. Except for the relatively minor differences in pivoting, described below, the actual sequence of operations performed in Gauss-Jordan elimination is very closely related to that performed by the routines in the next two sections.

For clarity, and to avoid writing endless ellipses ($\cdots$), we will write out equations only for the case of four equations and four unknowns, and with three different right-hand side vectors that are known in advance.
You can write bigger matrices and extend the equations to the case of N × N matrices, with M sets of right-hand side vectors, in completely analogous fashion. The routine implemented below is, of course, general.
Elimination on Column-Augmented Matrices

Consider the linear matrix equation

$$
\begin{bmatrix}
a_{11} & a_{12} & a_{13} & a_{14}\\
a_{21} & a_{22} & a_{23} & a_{24}\\
a_{31} & a_{32} & a_{33} & a_{34}\\
a_{41} & a_{42} & a_{43} & a_{44}
\end{bmatrix}\cdot
\left[
\begin{pmatrix}x_{11}\\x_{21}\\x_{31}\\x_{41}\end{pmatrix}\sqcup
\begin{pmatrix}x_{12}\\x_{22}\\x_{32}\\x_{42}\end{pmatrix}\sqcup
\begin{pmatrix}x_{13}\\x_{23}\\x_{33}\\x_{43}\end{pmatrix}\sqcup
\begin{pmatrix}y_{11}&y_{12}&y_{13}&y_{14}\\y_{21}&y_{22}&y_{23}&y_{24}\\y_{31}&y_{32}&y_{33}&y_{34}\\y_{41}&y_{42}&y_{43}&y_{44}\end{pmatrix}
\right]
$$
$$
=\left[
\begin{pmatrix}b_{11}\\b_{21}\\b_{31}\\b_{41}\end{pmatrix}\sqcup
\begin{pmatrix}b_{12}\\b_{22}\\b_{32}\\b_{42}\end{pmatrix}\sqcup
\begin{pmatrix}b_{13}\\b_{23}\\b_{33}\\b_{43}\end{pmatrix}\sqcup
\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}
\right]
\tag{2.1.1}
$$

Here the raised dot ($\cdot$) signifies matrix multiplication, while the operator $\sqcup$ just signifies column augmentation, that is, removing the abutting parentheses and making a wider matrix out of the operands of the $\sqcup$ operator.

It should not take you long to write out equation (2.1.1) and to see that it simply states that $x_{ij}$ is the $i$th component ($i = 1, 2, 3, 4$) of the vector solution of the $j$th right-hand side ($j = 1, 2, 3$), the one whose coefficients are $b_{ij}$, $i = 1, 2, 3, 4$; and that the matrix of unknown coefficients $y_{ij}$ is the inverse matrix of $a_{ij}$. In other words, the matrix solution of

$$[A]\cdot[x_1 \sqcup x_2 \sqcup x_3 \sqcup Y] = [b_1 \sqcup b_2 \sqcup b_3 \sqcup 1] \tag{2.1.2}$$

where A and Y are square matrices, the $b_i$'s and $x_i$'s are column vectors, and 1 is the identity matrix, simultaneously solves the linear sets

$$A\cdot x_1 = b_1 \qquad A\cdot x_2 = b_2 \qquad A\cdot x_3 = b_3 \tag{2.1.3}$$

and

$$A\cdot Y = 1 \tag{2.1.4}$$

Now it is also elementary to verify the following facts about (2.1.1):

• Interchanging any two rows of A and the corresponding rows of the b's and of 1, does not change (or scramble in any way) the solution x's and Y. Rather, it just corresponds to writing the same set of linear equations in a different order.

• Likewise, the solution set is unchanged and in no way scrambled if we replace any row in A by a linear combination of itself and any other row, as long as we do the same linear combination of the rows of the b's and 1 (which then is no longer the identity matrix, of course).

• Interchanging any two columns of A gives the same solution set only if we simultaneously interchange corresponding rows of the x's and of Y. In other words, this interchange scrambles the order of the rows in the solution. If we do this, we will need to unscramble the solution by restoring the rows to their original order.

Gauss-Jordan elimination uses one or more of the above operations to reduce the matrix A to the identity matrix. When this is accomplished, the right-hand side becomes the solution set, as one sees instantly from (2.1.2).
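As a concrete illustration (a made-up 2 × 2 example, not from the text), carrying one right-hand side and the identity through the same row reduction yields the solution and the inverse together:

$$
\left(\begin{array}{cc|c|cc}
2 & 1 & 3 & 1 & 0\\
1 & 1 & 2 & 0 & 1
\end{array}\right)
\;\longrightarrow\;
\left(\begin{array}{cc|c|cc}
1 & 0 & 1 & 1 & -1\\
0 & 1 & 1 & -1 & 2
\end{array}\right)
$$

so $x_1 = (1, 1)^T$ solves $A\cdot x_1 = b_1$, and $Y = \begin{pmatrix}1&-1\\-1&2\end{pmatrix}$ is $A^{-1}$, both read off the transformed right-hand side.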
Pivoting

In "Gauss-Jordan elimination with no pivoting," only the second operation in the above list is used. The first row is divided by the element $a_{11}$ (this being a trivial linear combination of the first row with any other row, namely with zero coefficient for the other row). Then the right amount of the first row is subtracted from each other row to make all the remaining $a_{i1}$'s zero. The first column of A now agrees with the identity matrix. We move to the second column and divide the second row by $a_{22}$, then subtract the right amount of the second row from rows 1, 3, and 4, so as to make their entries in the second column zero. The second column is now reduced to the identity form. And so on for the third and fourth columns. As we do these operations to A, we of course also do the corresponding operations to the b's and to 1 (which by now no longer resembles the identity matrix in any way!).

Obviously we will run into trouble if we ever encounter a zero element on the (then current) diagonal when we are going to divide by the diagonal element. (The element that we divide by, incidentally, is called the pivot element or pivot.) Not so obvious, but true, is the fact that Gauss-Jordan elimination with no pivoting (no use of the first or third procedures in the above list) is numerically unstable in the presence of any roundoff error, even when a zero pivot is not encountered. You must never do Gauss-Jordan elimination (or Gaussian elimination, see below) without pivoting!

So what is this magic pivoting? Nothing more than interchanging rows (partial pivoting) or rows and columns (full pivoting), so as to put a particularly desirable element in the diagonal position from which the pivot is about to be selected. Since we don't want to mess up the part of the identity matrix that we have already built up, we can choose among elements that are both (i) on rows below (or on) the one that is about to be normalized, and also (ii) on columns to the right of (or on) the column we are about to eliminate. Partial pivoting is easier than full pivoting, because we don't have to keep track of the permutation of the solution vector. Partial pivoting makes available as pivots only the elements already in the correct column. It turns out that partial pivoting is "almost" as good as full pivoting, in a sense that can be made mathematically precise, but which need not concern us here (for discussion and references, see [1]). To show you both variants, we do full pivoting in the routine in this section, partial pivoting in §2.3.

We have to state how to recognize a particularly desirable pivot when we see one. The answer to this is not completely known theoretically. It is known, both theoretically and in practice, that simply picking the largest (in magnitude) available element as the pivot is a very good choice; a sketch of this search appears below.
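In the partial-pivoting case the search is especially simple. Here is a minimal sketch (not the book's routine; the function name is illustrative) of scanning column k, among the rows not yet reduced, for the entry of largest magnitude:

#include <math.h>

/* Return the index of the row, among rows k..n, whose element in column k is
   largest in magnitude. The caller then swaps rows imax and k before dividing
   through by the pivot. Matrices are 1-indexed, as throughout this chapter. */
int partial_pivot_row(float **a, int n, int k)
{
    int j, imax=k;
    float big=fabs(a[k][k]);
    for (j=k+1;j<=n;j++)
        if (fabs(a[j][k]) > big) {
            big=fabs(a[j][k]);
            imax=j;
        }
    return imax;
}

Full pivoting extends the same scan over all eligible columns as well, as the routine gaussj below does.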
A curiosity of this procedure, however, is that the choice of pivot will depend on the original scaling of the equations. If we take the third linear equation in our original set and multiply it by a factor of a million, it is almost guaranteed that it will contribute the first pivot; yet the underlying solution of the equations is not changed by this multiplication! One therefore sometimes sees routines which choose as pivot that element which would have been largest if the original equations had all been scaled to have their largest coefficient normalized to unity. This is called implicit pivoting. There is some extra bookkeeping to keep track of the scale factors by which the rows would have been multiplied. (The routines in §2.3 include implicit pivoting, but the routine in this section does not.)
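The bookkeeping amounts to one scale factor per row, computed once at the outset and then divided into candidate pivots during the search. A minimal sketch (illustrative names, not the §2.3 routine):

#include <math.h>

/* Record, once at the outset, the largest |a[j][k]| in each row j. */
void row_scales(float **a, float *scale, int n)
{
    int j,k;
    for (j=1;j<=n;j++) {
        scale[j]=0.0;
        for (k=1;k<=n;k++)
            if (fabs(a[j][k]) > scale[j]) scale[j]=fabs(a[j][k]);
    }
}

/* Choose the pivot row for column k as if every row had been scaled so that
   its largest entry were unity; no row is ever actually multiplied.
   (A zero scale factor would mean an all-zero row, i.e., a singular matrix.) */
int implicit_pivot_row(float **a, float *scale, int n, int k)
{
    int j, imax=k;
    float big=0.0, tmp;
    for (j=k;j<=n;j++) {
        tmp=fabs(a[j][k])/scale[j];
        if (tmp > big) {
            big=tmp;
            imax=j;
        }
    }
    return imax;
}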
Finally, let us consider the storage requirements of the method. With a little reflection you will see that at every stage of the algorithm, either an element of A is predictably a one or zero (if it is already in a part of the matrix that has been reduced to identity form) or else the exactly corresponding element of the matrix that started as 1 is predictably a one or zero (if its mate in A has not been reduced to the identity form). Therefore the matrix 1 does not have to exist as separate storage: The matrix inverse of A is gradually built up in A as the original A is destroyed. Likewise, the solution vectors x can gradually replace the right-hand side vectors b and share the same storage, since after each column in A is reduced, the corresponding row entry in the b's is never again used.

Here is the routine for Gauss-Jordan elimination with full pivoting:

#include <math.h>
#include "nrutil.h"
#define SWAP(a,b) {temp=(a);(a)=(b);(b)=temp;}

void gaussj(float **a, int n, float **b, int m)
/* Linear equation solution by Gauss-Jordan elimination, equation (2.1.1) above.
   a[1..n][1..n] is the input matrix. b[1..n][1..m] is input containing the m
   right-hand side vectors. On output, a is replaced by its matrix inverse, and
   b is replaced by the corresponding set of solution vectors. */
{
	int *indxc,*indxr,*ipiv;
	int i,icol,irow,j,k,l,ll;
	float big,dum,pivinv,temp;

	indxc=ivector(1,n);        /* The integer arrays ipiv, indxr, and indxc */
	indxr=ivector(1,n);        /* are used for bookkeeping on the pivoting. */
	ipiv=ivector(1,n);
	for (j=1;j<=n;j++) ipiv[j]=0;
	for (i=1;i<=n;i++) {       /* This is the main loop over the columns to be reduced. */
		big=0.0;
		for (j=1;j<=n;j++) /* This is the outer loop of the search for a pivot element. */
			if (ipiv[j] != 1)
				for (k=1;k<=n;k++) {
					if (ipiv[k] == 0) {
						if (fabs(a[j][k]) >= big) {
							big=fabs(a[j][k]);
							irow=j;
							icol=k;
						}
					}
				}
		++(ipiv[icol]);
		/* We now have the pivot element, so we interchange rows, if needed, to
		   put the pivot element on the diagonal. The columns are not physically
		   interchanged, only relabeled: indxc[i], the column of the ith pivot
		   element, is the ith column that is reduced, while indxr[i] is the row
		   in which that pivot element was originally located. If indxr[i] !=
		   indxc[i] there is an implied column interchange. With this form of
		   bookkeeping, the solution b's will end up in the correct order, and
		   the inverse matrix will be scrambled by columns. */
		if (irow != icol) {
			for (l=1;l<=n;l++) SWAP(a[irow][l],a[icol][l])
			for (l=1;l<=m;l++) SWAP(b[irow][l],b[icol][l])
		}
		indxr[i]=irow;         /* We are now ready to divide the pivot row by the */
		indxc[i]=icol;         /* pivot element, located at irow and icol. */
		if (a[icol][icol] == 0.0) nrerror("gaussj: Singular Matrix");
		pivinv=1.0/a[icol][icol];
		a[icol][icol]=1.0;
		for (l=1;l<=n;l++) a[icol][l] *= pivinv;
		for (l=1;l<=m;l++) b[icol][l] *= pivinv;
		for (ll=1;ll<=n;ll++)  /* Next, we reduce the rows... */
			if (ll != icol) {  /* ...except for the pivot one, of course. */
				dum=a[ll][icol];
				a[ll][icol]=0.0;
				for (l=1;l<=n;l++) a[ll][l] -= a[icol][l]*dum;
				for (l=1;l<=m;l++) b[ll][l] -= b[icol][l]*dum;
			}
	}       /* This is the end of the main loop over columns of the reduction. */
	for (l=n;l>=1;l--) {
		/* It only remains to unscramble the solution in view of the column
		   interchanges. We do this by interchanging pairs of columns in the
		   reverse order that the permutation was built up. */
		if (indxr[l] != indxc[l])
			for (k=1;k<=n;k++)
				SWAP(a[k][indxr[l]],a[k][indxc[l]]);
	}                          /* And we are done. */
	free_ivector(ipiv,1,n);
	free_ivector(indxr,1,n);
	free_ivector(indxc,1,n);
}
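A minimal driver sketch, assuming nrutil.c is compiled in (for matrix, free_matrix, and friends); the 3 × 3 system and its values are made up purely for illustration:

#include <stdio.h>
#include "nrutil.h"

void gaussj(float **a, int n, float **b, int m);

int main(void)
{
	int i,j;
	static float ain[3][3]={{2.0,1.0,1.0},{1.0,3.0,2.0},{1.0,0.0,0.0}};
	static float bin[3]={4.0,5.0,6.0};
	float **a=matrix(1,3,1,3);     /* on return, holds the inverse of a */
	float **b=matrix(1,3,1,1);     /* one right-hand side; replaced by the solution */

	for (i=1;i<=3;i++) {           /* copy into 1-indexed nrutil storage */
		b[i][1]=bin[i-1];
		for (j=1;j<=3;j++) a[i][j]=ain[i-1][j-1];
	}
	gaussj(a,3,b,1);
	for (i=1;i<=3;i++) printf("x[%d] = %g\n",i,b[i][1]);
	free_matrix(b,1,3,1,1);
	free_matrix(a,1,3,1,3);
	return 0;
}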
Row versus Column Elimination Strategies

The above discussion can be amplified by a modest amount of formalism. Row operations on a matrix A correspond to pre- (that is, left-) multiplication by some simple matrix R. For example, the matrix R with components

$$
R_{ij}=
\begin{cases}
1 & \text{if } i=j \text{ and } i\neq 2,4\\
1 & \text{if } i=2,\ j=4\\
1 & \text{if } i=4,\ j=2\\
0 & \text{otherwise}
\end{cases}
\tag{2.1.5}
$$

effects the interchange of rows 2 and 4. Gauss-Jordan elimination by row operations alone (including the possibility of partial pivoting) consists of a series of such left-multiplications, yielding successively

$$
\begin{aligned}
A\cdot x &= b\\
(\cdots R_3\cdot R_2\cdot R_1\cdot A)\cdot x &= \cdots R_3\cdot R_2\cdot R_1\cdot b\\
(1)\cdot x &= \cdots R_3\cdot R_2\cdot R_1\cdot b\\
x &= \cdots R_3\cdot R_2\cdot R_1\cdot b
\end{aligned}
\tag{2.1.6}
$$

The key point is that since the R's build from right to left, the right-hand side is simply transformed at each stage from one vector to another.

Column operations, on the other hand, correspond to post-, or right-, multiplications by simple matrices, call them C. The matrix in equation (2.1.5), if right-multiplied onto a matrix A, will interchange A's second and fourth columns. Elimination by column operations involves (conceptually) inserting a column operator, and also its inverse, between the matrix A and the unknown vector x:

$$
\begin{aligned}
A\cdot x &= b\\
A\cdot C_1\cdot C_1^{-1}\cdot x &= b\\
A\cdot C_1\cdot C_2\cdot C_2^{-1}\cdot C_1^{-1}\cdot x &= b\\
(A\cdot C_1\cdot C_2\cdot C_3\cdots)\cdots C_3^{-1}\cdot C_2^{-1}\cdot C_1^{-1}\cdot x &= b\\
(1)\cdots C_3^{-1}\cdot C_2^{-1}\cdot C_1^{-1}\cdot x &= b
\end{aligned}
\tag{2.1.7}
$$
which (peeling off the $C^{-1}$'s one at a time) implies a solution

$$x = C_1\cdot C_2\cdot C_3\cdots b \tag{2.1.8}$$

Notice the essential difference between equation (2.1.8) and equation (2.1.6). In the column-operation case (2.1.8), the C's must be applied to b in the reverse order from that in which they become known. That is, they must all be stored along the way. This requirement greatly reduces the usefulness of column operations, generally restricting them to simple permutations, for example in support of full pivoting.

CITED REFERENCES AND FURTHER READING:

Wilkinson, J.H. 1965, The Algebraic Eigenvalue Problem (New York: Oxford University Press). [1]

Carnahan, B., Luther, H.A., and Wilkes, J.O. 1969, Applied Numerical Methods (New York: Wiley), Example 5.2, p. 282.

Bevington, P.R. 1969, Data Reduction and Error Analysis for the Physical Sciences (New York: McGraw-Hill), Program B-2, p. 298.

Westlake, J.R. 1968, A Handbook of Numerical Matrix Inversion and Solution of Linear Equations (New York: Wiley).

Ralston, A., and Rabinowitz, P. 1978, A First Course in Numerical Analysis, 2nd ed. (New York: McGraw-Hill), §9.3-1.

2.2 Gaussian Elimination with Backsubstitution

The usefulness of Gaussian elimination with backsubstitution is primarily pedagogical. It stands between full elimination schemes such as Gauss-Jordan, and triangular decomposition schemes such as will be discussed in the next section. Gaussian elimination reduces a matrix not all the way to the identity matrix, but only halfway, to a matrix whose components on the diagonal and above (say) remain nontrivial. Let us now see what advantages accrue.

Suppose that in doing Gauss-Jordan elimination, as described in §2.1, we at each stage subtract away rows only below the then-current pivot element. When $a_{22}$ is the pivot element, for example, we divide the second row by its value (as before), but now use the pivot row to zero only $a_{32}$ and $a_{42}$, not $a_{12}$ (see equation 2.1.1). Suppose, also, that we do only partial pivoting, never interchanging columns, so that the order of the unknowns never needs to be modified.

Then, when we have done this for all the pivots, we will be left with a reduced equation that looks like this (in the case of a single right-hand side vector):

$$
\begin{bmatrix}
a'_{11} & a'_{12} & a'_{13} & a'_{14}\\
0 & a'_{22} & a'_{23} & a'_{24}\\
0 & 0 & a'_{33} & a'_{34}\\
0 & 0 & 0 & a'_{44}
\end{bmatrix}\cdot
\begin{bmatrix}x_1\\x_2\\x_3\\x_4\end{bmatrix}
=
\begin{bmatrix}b'_1\\b'_2\\b'_3\\b'_4\end{bmatrix}
\tag{2.2.1}
$$

Here the primes signify that the a's and b's do not have their original numerical values, but have been modified by all the row operations in the elimination to this point. The procedure up to this point is termed Gaussian elimination.
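The payoff for stopping halfway is the cheap backsubstitution step that solves the triangular system (2.2.1). A minimal sketch (illustrative function name; 1-indexed arrays as in this chapter; a and b are assumed to already hold the primed values):

/* Solve the upper-triangular system (2.2.1) by backsubstitution: the last
   unknown is read off directly, and each earlier one follows from the
   unknowns already found. */
void backsub(float **a, float *b, float *x, int n)
{
	int i,j;
	float sum;
	for (i=n;i>=1;i--) {
		sum=b[i];
		for (j=i+1;j<=n;j++) sum -= a[i][j]*x[j];
		x[i]=sum/a[i][i];  /* the diagonal element is nonzero after pivoting */
	}
}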