Chapter 2. Solution of Linear Algebraic Equations

2.0 Introduction

A set of linear algebraic equations looks like this:

    a11 x1 + a12 x2 + a13 x3 + ··· + a1N xN = b1
    a21 x1 + a22 x2 + a23 x3 + ··· + a2N xN = b2
    a31 x1 + a32 x2 + a33 x3 + ··· + a3N xN = b3
      ···                                 ···
    aM1 x1 + aM2 x2 + aM3 x3 + ··· + aMN xN = bM                    (2.0.1)

Here the N unknowns xj, j = 1, 2, ..., N are related by M equations. The coefficients aij with i = 1, 2, ..., M and j = 1, 2, ..., N are known numbers, as are the right-hand side quantities bi, i = 1, 2, ..., M.

Nonsingular versus Singular Sets of Equations

If N = M then there are as many equations as unknowns, and there is a good chance of solving for a unique solution set of xj's. Analytically, there can fail to be a unique solution if one or more of the M equations is a linear combination of the others, a condition called row degeneracy, or if all equations contain certain variables only in exactly the same linear combination, called column degeneracy. (For square matrices, a row degeneracy implies a column degeneracy, and vice versa.) A set of equations that is degenerate is called singular. We will consider singular matrices in some detail in §2.6.

Numerically, at least two additional things can go wrong:

• While not exact linear combinations of each other, some of the equations may be so close to linearly dependent that roundoff errors in the machine render them linearly dependent at some stage in the solution process. In this case your numerical procedure will fail, and it can tell you that it has failed.

• Accumulated roundoff errors in the solution process can swamp the true solution. This problem particularly emerges if N is too large. The numerical procedure does not fail algorithmically. However, it returns a set of x's that are wrong, as can be discovered by direct substitution back into the original equations. The closer a set of equations is to being singular, the more likely this is to happen, since increasingly close cancellations will occur during the solution. In fact, the preceding item can be viewed as the special case where the loss of significance is unfortunately total.
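To see how these pathologies arise even in a tiny problem, here is a minimal illustration (not one of the book's routines): a nearly singular 2 × 2 system with exact solution x1 = x2 = 1 is solved by Cramer's rule, once in double and once in single precision. On most machines the single-precision right-hand side entry 2 + 1e-7 rounds back to 2, and the computed answer is badly wrong; exact output may vary slightly with the platform's floating-point evaluation rules.

#include <stdio.h>

int main(void)
{
    double eps = 1e-7;          /* the system becomes exactly singular as eps -> 0 */

    /* double precision: Cramer's rule recovers x1 = x2 = 1 essentially exactly */
    double a11 = 1.0, a12 = 1.0, a21 = 1.0, a22 = 1.0 + eps;
    double b1 = 2.0, b2 = 2.0 + eps;
    double det = a11*a22 - a12*a21;
    printf("double: x1 = %g   x2 = %g\n",
           (b1*a22 - b2*a12)/det, (a11*b2 - a21*b1)/det);

    /* single precision: rounding the same data to float destroys the small
       differences that distinguish the two equations, and the answer is nonsense */
    float fa22 = (float)(1.0 + eps), fb2 = (float)(2.0 + eps);
    float fdet = 1.0f*fa22 - 1.0f*1.0f;
    printf("float:  x1 = %g   x2 = %g\n",
           (double)((2.0f*fa22 - fb2*1.0f)/fdet),
           (double)((1.0f*fb2 - 1.0f*2.0f)/fdet));
    return 0;
}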
Much of the sophistication of complicated "linear equation-solving packages" is devoted to the detection and/or correction of these two pathologies. As you work with large linear sets of equations, you will develop a feeling for when such sophistication is needed. It is difficult to give any firm guidelines, since there is no such thing as a "typical" linear problem. But here is a rough idea: Linear sets with N as large as 20 or 50 can be routinely solved in single precision (32 bit floating representations) without resorting to sophisticated methods, if the equations are not close to singular. With double precision (60 or 64 bits), this number can readily be extended to N as large as several hundred, after which point the limiting factor is generally machine time, not accuracy.

Even larger linear sets, N in the thousands or greater, can be solved when the coefficients are sparse (that is, mostly zero), by methods that take advantage of the sparseness. We discuss this further in §2.7.

At the other end of the spectrum, one seems just as often to encounter linear problems which, by their underlying nature, are close to singular. In this case, you might need to resort to sophisticated methods even for the case of N = 10 (though rarely for N = 5). Singular value decomposition (§2.6) is a technique that can sometimes turn singular problems into nonsingular ones, in which case additional sophistication becomes unnecessary.

Matrices

Equation (2.0.1) can be written in matrix form as

    A · x = b                                                        (2.0.2)

Here the raised dot denotes matrix multiplication, A is the matrix of coefficients, and b is the right-hand side written as a column vector,

        [ a11  a12  ...  a1N ]          [ b1 ]
    A = [ a21  a22  ...  a2N ]      b = [ b2 ]                       (2.0.3)
        [ ...            ... ]          [ .. ]
        [ aM1  aM2  ...  aMN ]          [ bM ]

By convention, the first index on an element aij denotes its row, the second index its column. For most purposes you don't need to know how a matrix is stored in a computer's physical memory; you simply reference matrix elements by their two-dimensional addresses, e.g., a34 = a[3][4]. We have already seen, in §1.2, that this C notation can in fact hide a rather subtle and versatile physical storage scheme, "pointer to array of pointers to rows." You might wish to review that section at this point. Occasionally it is useful to be able to peer through the veil, for example to pass a whole row a[i][j], j = 1, ..., N by the reference a[i].
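The following fragment is only a sketch (it is not the book's §1.2 utility routines, and it uses ordinary zero-based C indexing rather than the book's unit-offset arrays), but it shows the storage scheme in action: the matrix is built as an array of pointers to rows, elements are addressed as a[i][j], and a single whole row is handed to a function simply as a[i].

#include <stdio.h>
#include <stdlib.h>

/* sum the n entries of one row, received simply as a pointer to float */
float row_sum(float *row, int n)
{
    float s = 0.0f;
    for (int j = 0; j < n; j++) s += row[j];
    return s;
}

int main(void)
{
    int M = 3, N = 4;

    /* allocate the array of row pointers, then each row */
    float **a = malloc(M * sizeof(float *));
    for (int i = 0; i < M; i++) a[i] = malloc(N * sizeof(float));

    for (int i = 0; i < M; i++)
        for (int j = 0; j < N; j++)
            a[i][j] = (float)((i + 1) * (j + 1));   /* arbitrary fill */

    /* pass a whole row by its row pointer, as described in the text */
    printf("sum of row 1 = %g\n", row_sum(a[1], N));

    for (int i = 0; i < M; i++) free(a[i]);
    free(a);
    return 0;
}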
Tasks of Computational Linear Algebra

We will consider the following tasks as falling in the general purview of this chapter:

• Solution of the matrix equation A · x = b for an unknown vector x, where A is a square matrix of coefficients, raised dot denotes matrix multiplication, and b is a known right-hand side vector (§2.1–§2.10).

• Solution of more than one matrix equation A · xj = bj, for a set of vectors xj, j = 1, 2, ..., each corresponding to a different, known right-hand side vector bj. In this task the key simplification is that the matrix A is held constant, while the right-hand sides, the b's, are changed (§2.1–§2.10).

• Calculation of the matrix A⁻¹ which is the matrix inverse of a square matrix A, i.e., A · A⁻¹ = A⁻¹ · A = 1, where 1 is the identity matrix (all zeros except for ones on the diagonal). This task is equivalent, for an N × N matrix A, to the previous task with N different bj's (j = 1, 2, ..., N), namely the unit vectors (bj = all zero elements except for 1 in the jth component). The corresponding x's are then the columns of the matrix inverse of A (§2.1 and §2.3).

• Calculation of the determinant of a square matrix A (§2.3).

If M < N, or if M = N but the equations are degenerate, then there are effectively fewer equations than unknowns. In this case there can be either no solution, or else more than one solution vector x. In the latter event, the solution space consists of a particular solution xp added to any linear combination of (typically) N − M vectors (which are said to be in the nullspace of the matrix A). The task of finding the solution space of A involves

• Singular value decomposition of a matrix A.

This subject is treated in §2.6.

In the opposite case there are more equations than unknowns, M > N. When this occurs there is, in general, no solution vector x to equation (2.0.1), and the set of equations is said to be overdetermined. It happens frequently, however, that the best "compromise" solution is sought, the one that comes closest to satisfying all equations simultaneously. If closeness is defined in the least-squares sense, i.e., that the sum of the squares of the differences between the left- and right-hand sides of equation (2.0.1) be minimized, then the overdetermined linear problem reduces to a (usually) solvable linear problem, called the

• Linear least-squares problem.

The reduced set of equations to be solved can be written as the N × N set of equations

    (Aᵀ · A) · x = (Aᵀ · b)                                          (2.0.4)

where Aᵀ denotes the transpose of the matrix A. Equations (2.0.4) are called the normal equations of the linear least-squares problem. There is a close connection between singular value decomposition and the linear least-squares problem, and the latter is also discussed in §2.6. You should be warned that direct solution of the normal equations (2.0.4) is not generally the best way to find least-squares solutions.
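As a concrete (invented) illustration of this reduction, the fragment below fits a straight line y = c0 + c1·t to four data points: the overdetermined 4 × 2 system A · c = y is replaced by the 2 × 2 normal equations (2.0.4), which are then solved directly. It is only a sketch of the algebra; as just noted, forming the normal equations is not in general the best-conditioned route to a least-squares solution.

#include <stdio.h>

int main(void)
{
    double t[4] = {0.0, 1.0, 2.0, 3.0};
    double y[4] = {1.1, 1.9, 3.2, 3.9};        /* invented data, roughly y = 1 + t */

    /* each row of A is (1, t[i]); accumulate A^T A and A^T y */
    double ata[2][2] = {{0,0},{0,0}}, aty[2] = {0,0};
    for (int i = 0; i < 4; i++) {
        double row[2] = {1.0, t[i]};
        for (int j = 0; j < 2; j++) {
            for (int k = 0; k < 2; k++) ata[j][k] += row[j]*row[k];
            aty[j] += row[j]*y[i];
        }
    }

    /* solve the 2 x 2 normal equations by Cramer's rule */
    double det = ata[0][0]*ata[1][1] - ata[0][1]*ata[1][0];
    double c0 = (aty[0]*ata[1][1] - aty[1]*ata[0][1])/det;
    double c1 = (ata[0][0]*aty[1] - ata[1][0]*aty[0])/det;
    printf("least-squares fit: y = %g + %g t\n", c0, c1);
    return 0;
}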
Some other topics in this chapter include

• Iterative improvement of a solution (§2.5)

• Various special forms: symmetric positive-definite (§2.9), tridiagonal (§2.4), band diagonal (§2.4), Toeplitz (§2.8), Vandermonde (§2.8), sparse (§2.7)

• Strassen's "fast matrix inversion" (§2.11).

Standard Subroutine Packages

We cannot hope, in this chapter or in this book, to tell you everything there is to know about the tasks that have been defined above. In many cases you will have no alternative but to use sophisticated black-box program packages. Several good ones are available, though not always in C. LINPACK was developed at Argonne National Laboratory and deserves particular mention because it is published, documented, and available for free use. A successor to LINPACK, LAPACK, is now becoming available. Packages available commercially (though not necessarily in C) include those in the IMSL and NAG libraries.

You should keep in mind that the sophisticated packages are designed with very large linear systems in mind. They therefore go to great effort to minimize not only the number of operations, but also the required storage. Routines for the various tasks are usually provided in several versions, corresponding to several possible simplifications in the form of the input coefficient matrix: symmetric, triangular, banded, positive definite, etc. If you have a large matrix in one of these forms, you should certainly take advantage of the increased efficiency provided by these different routines, and not just use the form provided for general matrices.

There is also a great watershed dividing routines that are direct (i.e., execute in a predictable number of operations) from routines that are iterative (i.e., attempt to converge to the desired answer in however many steps are necessary). Iterative methods become preferable when the battle against loss of significance is in danger of being lost, either due to large N or because the problem is close to singular. We will treat iterative methods only incompletely in this book, in §2.7 and in Chapters 18 and 19. These methods are important, but mostly beyond our scope. We will, however, discuss in detail a technique which is on the borderline between direct and iterative methods, namely the iterative improvement of a solution that has been obtained by direct methods (§2.5).

CITED REFERENCES AND FURTHER READING:

Golub, G.H., and Van Loan, C.F. 1989, Matrix Computations, 2nd ed. (Baltimore: Johns Hopkins University Press).

Gill, P.E., Murray, W., and Wright, M.H. 1991, Numerical Linear Algebra and Optimization, vol. 1 (Redwood City, CA: Addison-Wesley).
Stoer, J., and Bulirsch, R. 1980, Introduction to Numerical Analysis (New York: Springer-Verlag), Chapter 4.

Dongarra, J.J., et al. 1979, LINPACK User's Guide (Philadelphia: S.I.A.M.).
Coleman, T.F., and Van Loan, C. 1988, Handbook for Matrix Computations (Philadelphia: S.I.A.M.).

Forsythe, G.E., and Moler, C.B. 1967, Computer Solution of Linear Algebraic Systems (Englewood Cliffs, NJ: Prentice-Hall).

Wilkinson, J.H., and Reinsch, C. 1971, Linear Algebra, vol. II of Handbook for Automatic Computation (New York: Springer-Verlag).

Westlake, J.R. 1968, A Handbook of Numerical Matrix Inversion and Solution of Linear Equations (New York: Wiley).

Johnson, L.W., and Riess, R.D. 1982, Numerical Analysis, 2nd ed. (Reading, MA: Addison-Wesley), Chapter 2.

Ralston, A., and Rabinowitz, P. 1978, A First Course in Numerical Analysis, 2nd ed. (New York: McGraw-Hill), Chapter 9.

2.1 Gauss-Jordan Elimination

For inverting a matrix, Gauss-Jordan elimination is about as efficient as any other method. For solving sets of linear equations, Gauss-Jordan elimination produces both the solution of the equations for one or more right-hand side vectors b, and also the matrix inverse A⁻¹. However, its principal weaknesses are (i) that it requires all the right-hand sides to be stored and manipulated at the same time, and (ii) that when the inverse matrix is not desired, Gauss-Jordan is three times slower than the best alternative technique for solving a single linear set (§2.3). The method's principal strength is that it is as stable as any other direct method, perhaps even a bit more stable when full pivoting is used (see below).

If you come along later with an additional right-hand side vector, you can multiply it by the inverse matrix, of course. This does give an answer, but one that is quite susceptible to roundoff error, not nearly as good as if the new vector had been included with the set of right-hand side vectors in the first instance.

For these reasons, Gauss-Jordan elimination should usually not be your method of first choice, either for solving linear equations or for matrix inversion. The decomposition methods in §2.3 are better. Why do we give you Gauss-Jordan at all? Because it is straightforward, understandable, solid as a rock, and an exceptionally good "psychological" backup for those times that something is going wrong and you think it might be your linear-equation solver.

Some people believe that the backup is more than psychological, that Gauss-Jordan elimination is an "independent" numerical method. This turns out to be mostly myth. Except for the relatively minor differences in pivoting, described below, the actual sequence of operations performed in Gauss-Jordan elimination is very closely related to that performed by the routines in the next two sections.

For clarity, and to avoid writing endless ellipses (···), we will write out equations only for the case of four equations and four unknowns, and with three different right-hand side vectors that are known in advance.
You can write bigger matrices and extend the equations to the case of N × N matrices, with M sets of right-hand side vectors, in completely analogous fashion. The routine implemented below is, of course, general
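That general routine appears later in this section. Purely as a preview, and not as a substitute for it, here is a deliberately stripped-down sketch of the basic Gauss-Jordan reduction for a single right-hand side and a fixed small N, with no pivoting at all: it divides blindly by whatever happens to be on the diagonal, so it is not robust and is meant only to show the shape of the algorithm.

#include <stdio.h>

#define N 3

/* Reduce the N x N system a*x = b to the identity by Gauss-Jordan steps.
   No pivoting: fails if a zero (or tiny) diagonal element is encountered.
   On return, a[][] has been overwritten and b[] holds the solution x. */
void gauss_jordan_nopivot(double a[N][N], double b[N])
{
    for (int col = 0; col < N; col++) {
        double piv = a[col][col];               /* trust the diagonal element */
        for (int j = 0; j < N; j++) a[col][j] /= piv;
        b[col] /= piv;
        for (int i = 0; i < N; i++) {           /* eliminate this column elsewhere */
            if (i == col) continue;
            double factor = a[i][col];
            for (int j = 0; j < N; j++) a[i][j] -= factor * a[col][j];
            b[i] -= factor * b[col];
        }
    }
}

int main(void)
{
    double a[N][N] = {{2, 1, 1}, {1, 3, 2}, {1, 0, 0}};
    double b[N]    = {4, 5, 6};
    gauss_jordan_nopivot(a, b);
    for (int i = 0; i < N; i++) printf("x[%d] = %g\n", i, b[i]);  /* 6, 15, -23 */
    return 0;
}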