void dsprstx(double sa[], unsigned long ija[], double x[], double b[],
	unsigned long n);
	/* These are double versions of sprsax and sprstx. */

	if (itrnsp) dsprstx(sa,ija,x,r,n);
	else dsprsax(sa,ija,x,r,n);
}

extern unsigned long ija[];
extern double sa[];	/* The matrix is stored somewhere. */

void asolve(unsigned long n, double b[], double x[], int itrnsp)
{
	unsigned long i;

	for (i=1;i<=n;i++) x[i]=(sa[i] != 0.0 ? b[i]/sa[i] : b[i]);
	/* The matrix Ã is the diagonal part of A, stored in the first n elements of sa.
	   Since the transpose matrix has the same diagonal, the flag itrnsp is not used. */
}
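The double-precision multiply routines are only declared, not printed, at this point. As a sketch of what dsprsax might look like (ours, not the book's listing), one can simply retype the float routine sprsax, given earlier in this section, in double; it assumes the row-indexed sparse storage scheme (arrays sa and ija) described there, and dsprstx follows from sprstx in the same way.

void dsprsax(double sa[], unsigned long ija[], double x[], double b[],
	unsigned long n)
/* Multiply a matrix in row-indexed sparse storage by a vector x[1..n], giving b[1..n].
   A double-precision retyping of sprsax; an illustrative sketch only. */
{
	void nrerror(char error_text[]);
	unsigned long i,k;

	if (ija[1] != n+2) nrerror("dsprsax: mismatched vector and matrix");
	for (i=1;i<=n;i++) {
		b[i]=sa[i]*x[i];	/* Start with the diagonal term, */
		for (k=ija[i];k<=ija[i+1]-1;k++)	/* then sum the off-diagonal terms of row i. */
			b[i] += sa[k]*x[ija[k]];
	}
}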
CITED REFERENCES AND FURTHER READING:

Tewarson, R.P. 1973, Sparse Matrices (New York: Academic Press). [1]

Jacobs, D.A.H. (ed.) 1977, The State of the Art in Numerical Analysis (London: Academic Press), Chapter I.3 (by J.K. Reid). [2]

George, A., and Liu, J.W.H. 1981, Computer Solution of Large Sparse Positive Definite Systems (Englewood Cliffs, NJ: Prentice-Hall). [3]

NAG Fortran Library (Numerical Algorithms Group, 256 Banbury Road, Oxford OX2 7DE, U.K.). [4]

IMSL Math/Library Users Manual (IMSL Inc., 2500 CityWest Boulevard, Houston TX 77042). [5]

Eisenstat, S.C., Gursky, M.C., Schultz, M.H., and Sherman, A.H. 1977, Yale Sparse Matrix Package, Technical Reports 112 and 114 (Yale University Department of Computer Science). [6]

Knuth, D.E. 1968, Fundamental Algorithms, vol. 1 of The Art of Computer Programming (Reading, MA: Addison-Wesley), §2.2.6. [7]

Kincaid, D.R., Respess, J.R., Young, D.M., and Grimes, R.G. 1982, ACM Transactions on Mathematical Software, vol. 8, pp. 302–322. [8]

PCGPAK User's Guide (New Haven: Scientific Computing Associates, Inc.). [9]

Bentley, J. 1986, Programming Pearls (Reading, MA: Addison-Wesley), §9. [10]

Golub, G.H., and Van Loan, C.F. 1989, Matrix Computations, 2nd ed. (Baltimore: Johns Hopkins University Press), Chapters 4 and 10, particularly §§10.2–10.3. [11]

Stoer, J., and Bulirsch, R. 1980, Introduction to Numerical Analysis (New York: Springer-Verlag), Chapter 8. [12]

Baker, L. 1991, More C Tools for Scientists and Engineers (New York: McGraw-Hill). [13]

Fletcher, R. 1976, in Numerical Analysis Dundee 1975, Lecture Notes in Mathematics, vol. 506, A. Dold and B. Eckmann, eds. (Berlin: Springer-Verlag), pp. 73–89. [14]

Saad, Y., and Schultz, M. 1986, SIAM Journal on Scientific and Statistical Computing, vol. 7, pp. 856–869. [15]

Bunch, J.R., and Rose, D.J. (eds.) 1976, Sparse Matrix Computations (New York: Academic Press).

Duff, I.S., and Stewart, G.W. (eds.) 1979, Sparse Matrix Proceedings 1978 (Philadelphia: S.I.A.M.).
2.8 Vandermonde Matrices and Toeplitz Matrices

In §2.4 the case of a tridiagonal matrix was treated specially, because that particular type of linear system admits a solution in only of order $N$ operations, rather than of order $N^3$ for the general linear problem. When such particular types exist, it is important to know about them. Your computational savings, should you ever happen to be working on a problem that involves the right kind of particular type, can be enormous.

This section treats two special types of matrices that can be solved in of order $N^2$ operations, not as good as tridiagonal, but a lot better than the general case. (Other than the operations count, these two types have nothing in common.) Matrices of the first type, termed Vandermonde matrices, occur in some problems having to do with the fitting of polynomials, the reconstruction of distributions from their moments, and also other contexts. In this book, for example, a Vandermonde problem crops up in §3.5. Matrices of the second type, termed Toeplitz matrices, tend to occur in problems involving deconvolution and signal processing. In this book, a Toeplitz problem is encountered in §13.7.

These are not the only special types of matrices worth knowing about. The Hilbert matrices, whose components are of the form $a_{ij} = 1/(i+j-1)$, $i,j = 1,\ldots,N$, can be inverted by an exact integer algorithm, and are very difficult to invert in any other way, since they are notoriously ill-conditioned (see [1] for details). The Sherman-Morrison and Woodbury formulas, discussed in §2.7, can sometimes be used to convert new special forms into old ones. Reference [2] gives some other special forms. We have not found these additional forms to arise as frequently as the two that we now discuss.

Vandermonde Matrices

A Vandermonde matrix of size $N \times N$ is completely determined by $N$ arbitrary numbers $x_1, x_2, \ldots, x_N$, in terms of which its $N^2$ components are the integer powers $x_i^{j-1}$, $i,j = 1,\ldots,N$. Evidently there are two possible such forms, depending on whether we view the $i$'s as rows, $j$'s as columns, or vice versa. In the former case, we get a linear system of equations that looks like this,

$$
\begin{bmatrix}
1 & x_1 & x_1^2 & \cdots & x_1^{N-1} \\
1 & x_2 & x_2^2 & \cdots & x_2^{N-1} \\
\vdots & \vdots & \vdots & & \vdots \\
1 & x_N & x_N^2 & \cdots & x_N^{N-1}
\end{bmatrix}
\cdot
\begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_N \end{bmatrix}
=
\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{bmatrix}
\tag{2.8.1}
$$

Performing the matrix multiplication, you will see that this equation solves for the unknown coefficients $c_i$ which fit a polynomial to the $N$ pairs of abscissas and ordinates $(x_j, y_j)$. Precisely this problem will arise in §3.5, and the routine given there will solve (2.8.1) by the method that we are about to describe.
The alternative identification of rows and columns leads to the set of equations

$$
\begin{bmatrix}
1 & 1 & \cdots & 1 \\
x_1 & x_2 & \cdots & x_N \\
x_1^2 & x_2^2 & \cdots & x_N^2 \\
\vdots & \vdots & & \vdots \\
x_1^{N-1} & x_2^{N-1} & \cdots & x_N^{N-1}
\end{bmatrix}
\cdot
\begin{bmatrix} w_1 \\ w_2 \\ w_3 \\ \vdots \\ w_N \end{bmatrix}
=
\begin{bmatrix} q_1 \\ q_2 \\ q_3 \\ \vdots \\ q_N \end{bmatrix}
\tag{2.8.2}
$$

Write this out and you will see that it relates to the problem of moments: Given the values of $N$ points $x_i$, find the unknown weights $w_i$, assigned so as to match the given values $q_j$ of the first $N$ moments. (For more on this problem, consult [3].) The routine given in this section solves (2.8.2).

The method of solution of both (2.8.1) and (2.8.2) is closely related to Lagrange's polynomial interpolation formula, which we will not formally meet until §3.1 below. Notwithstanding, the following derivation should be comprehensible:

Let $P_j(x)$ be the polynomial of degree $N-1$ defined by

$$
P_j(x) = \prod_{\substack{n=1 \\ n \ne j}}^{N} \frac{x - x_n}{x_j - x_n} = \sum_{k=1}^{N} A_{jk}\, x^{k-1}
\tag{2.8.3}
$$

Here the meaning of the last equality is to define the components of the matrix $A_{jk}$ as the coefficients that arise when the product is multiplied out and like terms collected.

The polynomial $P_j(x)$ is a function of $x$ generally. But you will notice that it is specifically designed so that it takes on a value of zero at all $x_i$ with $i \ne j$, and has a value of unity at $x = x_j$. In other words,

$$
P_j(x_i) = \delta_{ij} = \sum_{k=1}^{N} A_{jk}\, x_i^{k-1}
\tag{2.8.4}
$$

But (2.8.4) says that $A_{jk}$ is exactly the inverse of the matrix of components $x_i^{k-1}$, which appears in (2.8.2), with the subscript $k$ as the column index. Therefore the solution of (2.8.2) is just that matrix inverse times the right-hand side,

$$
w_j = \sum_{k=1}^{N} A_{jk}\, q_k
\tag{2.8.5}
$$

As for the transpose problem (2.8.1), we can use the fact that the inverse of the transpose is the transpose of the inverse, so

$$
c_j = \sum_{k=1}^{N} A_{kj}\, y_k
\tag{2.8.6}
$$

The routine in §3.5 implements this.

It remains to find a good way of multiplying out the monomial terms in (2.8.3), in order to get the components of $A_{jk}$. This is essentially a bookkeeping problem, and we will let you read the routine itself to see how it can be solved. One trick is to define a master $P(x)$ by

$$
P(x) \equiv \prod_{n=1}^{N} (x - x_n)
\tag{2.8.7}
$$

work out its coefficients, and then obtain the numerators and denominators of the specific $P_j$'s via synthetic division by the one supernumerary term. (See §5.3 for more on synthetic division.) Since each such division is only a process of order $N$, the total procedure is of order $N^2$.

You should be warned that Vandermonde systems are notoriously ill-conditioned, by their very nature. (As an aside anticipating §5.8, the reason is the same as that which makes Chebyshev fitting so impressively accurate: there exist high-order polynomials that are very good uniform fits to zero. Hence roundoff error can introduce rather substantial coefficients of the leading terms of these polynomials.) It is a good idea always to compute Vandermonde problems in double precision.
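Before turning to the routine, here is the $N = 2$ case of the construction above worked out (our illustration, not part of the original text). The two polynomials of (2.8.3) are

$$
P_1(x) = \frac{x - x_2}{x_1 - x_2}, \qquad P_2(x) = \frac{x - x_1}{x_2 - x_1},
$$

so collecting coefficients as in (2.8.3) gives

$$
\mathbf{A} = \frac{1}{x_1 - x_2}\begin{pmatrix} -x_2 & 1 \\ x_1 & -1 \end{pmatrix},
$$

and one checks directly that $\sum_k A_{jk} x_i^{k-1} = \delta_{ij}$, i.e., that $\mathbf{A}$ inverts the matrix of (2.8.2). The weights from (2.8.5) are then

$$
w_1 = \frac{q_2 - x_2 q_1}{x_1 - x_2}, \qquad w_2 = \frac{x_1 q_1 - q_2}{x_1 - x_2},
$$

which indeed satisfy $w_1 + w_2 = q_1$ and $x_1 w_1 + x_2 w_2 = q_2$.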
The routine for (2.8.2) which follows is due to G.B. Rybicki.

#include "nrutil.h"

void vander(double x[], double w[], double q[], int n)
/* Solves the Vandermonde linear system sum_{i=1}^{N} x_i^{k-1} w_i = q_k (k = 1,...,N).
   Input consists of the vectors x[1..n] and q[1..n]; the vector w[1..n] is output. */
{
	int i,j,k;
	double b,s,t,xx;
	double *c;

	c=dvector(1,n);
	if (n == 1) w[1]=q[1];
	else {
		for (i=1;i<=n;i++) c[i]=0.0;	/* Coefficients of the master polynomial */
		c[n] = -x[1];	/* are found by recursion. */
		for (i=2;i<=n;i++) {
			xx = -x[i];
			for (j=(n+1-i);j<=(n-1);j++) c[j] += xx*c[j+1];
			c[n] += xx;
		}
		for (i=1;i<=n;i++) {	/* Each subfactor in turn */
			xx=x[i];
			t=b=1.0;
			s=q[n];
			for (k=n;k>=2;k--) {	/* is synthetically divided, */
				b=c[k]+xx*b;
				s += q[k-1]*b;	/* matrix-multiplied by the right-hand side, */
				t=xx*t+b;
			}
			w[i]=s/t;	/* and supplied with a denominator. */
		}
	}
	free_dvector(c,1,n);
}
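A quick usage sketch (ours, with illustrative numbers): a three-point moment problem whose right-hand sides are chosen so that the answer is $w = (1,1,1)$. The dvector and free_dvector utilities come from nrutil.

#include <stdio.h>
#include "nrutil.h"

void vander(double x[], double w[], double q[], int n);	/* the routine above */

int main(void)
{
	double *x,*w,*q;
	int n=3,k;

	x=dvector(1,n); w=dvector(1,n); q=dvector(1,n);
	x[1]=0.0; x[2]=1.0; x[3]=2.0;	/* the points x_i */
	q[1]=3.0; q[2]=3.0; q[3]=5.0;	/* the moments q_k = sum_i x_i^{k-1} w_i */
	vander(x,w,q,n);
	for (k=1;k<=n;k++) printf("w[%d] = %g\n",k,w[k]);	/* should print 1, 1, 1 */
	free_dvector(q,1,n); free_dvector(w,1,n); free_dvector(x,1,n);
	return 0;
}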
Toeplitz Matrices

An $N \times N$ Toeplitz matrix is specified by giving $2N-1$ numbers $R_k$, $k = -N+1, \ldots, -1, 0, 1, \ldots, N-1$. Those numbers are then emplaced as matrix elements constant along the (upper-left to lower-right) diagonals of the matrix:

$$
\begin{bmatrix}
R_0 & R_{-1} & R_{-2} & \cdots & R_{-(N-2)} & R_{-(N-1)} \\
R_1 & R_0 & R_{-1} & \cdots & R_{-(N-3)} & R_{-(N-2)} \\
R_2 & R_1 & R_0 & \cdots & R_{-(N-4)} & R_{-(N-3)} \\
\vdots & & & \ddots & & \vdots \\
R_{N-2} & R_{N-3} & R_{N-4} & \cdots & R_0 & R_{-1} \\
R_{N-1} & R_{N-2} & R_{N-3} & \cdots & R_1 & R_0
\end{bmatrix}
\tag{2.8.8}
$$

The linear Toeplitz problem can thus be written as

$$
\sum_{j=1}^{N} R_{i-j}\, x_j = y_i \qquad (i = 1, \ldots, N)
\tag{2.8.9}
$$

where the $x_j$'s, $j = 1,\ldots,N$, are the unknowns to be solved for.

The Toeplitz matrix is symmetric if $R_k = R_{-k}$ for all $k$. Levinson [4] developed an algorithm for fast solution of the symmetric Toeplitz problem, by a bordering method, that is, a recursive procedure that solves the $M$-dimensional Toeplitz problem

$$
\sum_{j=1}^{M} R_{i-j}\, x_j^{(M)} = y_i \qquad (i = 1, \ldots, M)
\tag{2.8.10}
$$

in turn for $M = 1, 2, \ldots$ until $M = N$, the desired result, is finally reached. The vector $x_j^{(M)}$ is the result at the $M$th stage, and becomes the desired answer only when $N$ is reached.

Levinson's method is well documented in standard texts (e.g., [5]). The useful fact that the method generalizes to the nonsymmetric case seems to be less well known. At some risk of excessive detail, we therefore give a derivation here, due to G.B. Rybicki.

In following a recursion from step $M$ to step $M+1$ we find that our developing solution $x^{(M)}$ changes in this way:

$$\sum_{j=1}^{M} R_{i-j}\, x_j^{(M)} = y_i \qquad i = 1, \ldots, M \tag{2.8.11}$$

becomes

$$\sum_{j=1}^{M} R_{i-j}\, x_j^{(M+1)} + R_{i-(M+1)}\, x_{M+1}^{(M+1)} = y_i \qquad i = 1, \ldots, M+1 \tag{2.8.12}$$

By eliminating $y_i$ we find

$$\sum_{j=1}^{M} R_{i-j}\, \frac{x_j^{(M)} - x_j^{(M+1)}}{x_{M+1}^{(M+1)}} = R_{i-(M+1)} \qquad i = 1, \ldots, M \tag{2.8.13}$$

or by letting $i \to M+1-i$ and $j \to M+1-j$,

$$\sum_{j=1}^{M} R_{j-i}\, G_j^{(M)} = R_{-i} \tag{2.8.14}$$

where

$$G_j^{(M)} \equiv \frac{x_{M+1-j}^{(M)} - x_{M+1-j}^{(M+1)}}{x_{M+1}^{(M+1)}} \tag{2.8.15}$$

To put this another way,

$$x_{M+1-j}^{(M+1)} = x_{M+1-j}^{(M)} - x_{M+1}^{(M+1)}\, G_j^{(M)} \qquad j = 1, \ldots, M \tag{2.8.16}$$

Thus, if we can use recursion to find the order $M$ quantities $x^{(M)}$ and $G^{(M)}$ and the single order $M+1$ quantity $x_{M+1}^{(M+1)}$, then all of the other $x_j^{(M+1)}$ will follow. Fortunately, the quantity $x_{M+1}^{(M+1)}$ follows from equation (2.8.12) with $i = M+1$,

$$\sum_{j=1}^{M} R_{M+1-j}\, x_j^{(M+1)} + R_0\, x_{M+1}^{(M+1)} = y_{M+1} \tag{2.8.17}$$

For the unknown order $M+1$ quantities $x_j^{(M+1)}$ we can substitute the previous order quantities in $G$ since

$$G_{M+1-j}^{(M)} = \frac{x_j^{(M)} - x_j^{(M+1)}}{x_{M+1}^{(M+1)}} \tag{2.8.18}$$

The result of this operation is

$$x_{M+1}^{(M+1)} = \frac{\sum_{j=1}^{M} R_{M+1-j}\, x_j^{(M)} - y_{M+1}}{\sum_{j=1}^{M} R_{M+1-j}\, G_{M+1-j}^{(M)} - R_0} \tag{2.8.19}$$
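In block form (a restatement of ours, summarizing the step just derived), the bordered system (2.8.12) reads

$$
\begin{pmatrix} \mathbf{T}^{(M)} & \mathbf{r} \\ \mathbf{s}^T & R_0 \end{pmatrix}
\cdot
\begin{pmatrix} x_1^{(M+1)} \\ \vdots \\ x_M^{(M+1)} \\ x_{M+1}^{(M+1)} \end{pmatrix}
=
\begin{pmatrix} y_1 \\ \vdots \\ y_M \\ y_{M+1} \end{pmatrix},
\qquad
T_{ij}^{(M)} = R_{i-j},\quad r_i = R_{i-(M+1)},\quad s_j = R_{M+1-j}.
$$

The first $M$ rows, after $y_i$ is eliminated using (2.8.11), are equation (2.8.13); the last row is exactly equation (2.8.17).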
The only remaining problem is to develop a recursion relation for $G$. Before we do that, however, we should point out that there are actually two distinct sets of solutions to the original linear problem for a nonsymmetric matrix, namely right-hand solutions $x_i$ (which we have been discussing) and left-hand solutions $z_i$. The formalism for the left-hand solutions differs only in that we deal with the equations

$$\sum_{j=1}^{M} R_{j-i}\, z_j^{(M)} = y_i \qquad i = 1, \ldots, M \tag{2.8.20}$$

Then, the same sequence of operations on this set leads to

$$\sum_{j=1}^{M} R_{i-j}\, H_j^{(M)} = R_i \tag{2.8.21}$$

where

$$H_j^{(M)} \equiv \frac{z_{M+1-j}^{(M)} - z_{M+1-j}^{(M+1)}}{z_{M+1}^{(M+1)}} \tag{2.8.22}$$

(compare with 2.8.14–2.8.15). The reason for mentioning the left-hand solutions now is that, by equation (2.8.21), the $H_j$ satisfy exactly the same equation as the $x_j$ except for the substitution $y_i \to R_i$ on the right-hand side. Therefore we can quickly deduce from equation (2.8.19) that

$$H_{M+1}^{(M+1)} = \frac{\sum_{j=1}^{M} R_{M+1-j}\, H_j^{(M)} - R_{M+1}}{\sum_{j=1}^{M} R_{M+1-j}\, G_{M+1-j}^{(M)} - R_0} \tag{2.8.23}$$

By the same token, $G$ satisfies the same equation as $z$, except for the substitution $y_i \to R_{-i}$. This gives

$$G_{M+1}^{(M+1)} = \frac{\sum_{j=1}^{M} R_{j-M-1}\, G_j^{(M)} - R_{-M-1}}{\sum_{j=1}^{M} R_{j-M-1}\, H_{M+1-j}^{(M)} - R_0} \tag{2.8.24}$$

The same "morphism" also turns equation (2.8.16), and its partner for $z$, into the final equations

$$G_j^{(M+1)} = G_j^{(M)} - G_{M+1}^{(M+1)}\, H_{M+1-j}^{(M)} \qquad\qquad
H_j^{(M+1)} = H_j^{(M)} - H_{M+1}^{(M+1)}\, G_{M+1-j}^{(M)} \tag{2.8.25}$$

Now, starting with the initial values

$$x_1^{(1)} = y_1/R_0 \qquad G_1^{(1)} = R_{-1}/R_0 \qquad H_1^{(1)} = R_1/R_0 \tag{2.8.26}$$

we can recurse away. At each stage $M$ we use equations (2.8.23) and (2.8.24) to find $H_{M+1}^{(M+1)}$, $G_{M+1}^{(M+1)}$, and then equation (2.8.25) to find the other components of $H^{(M+1)}$, $G^{(M+1)}$. From there the vectors $x^{(M+1)}$ and/or $z^{(M+1)}$ are easily calculated.

The program below does this. It incorporates the second equation in (2.8.25) in the form

$$H_{M+1-j}^{(M+1)} = H_{M+1-j}^{(M)} - H_{M+1}^{(M+1)}\, G_j^{(M)} \tag{2.8.27}$$

so that the computation can be done "in place."

Notice that the above algorithm fails if $R_0 = 0$. In fact, because the bordering method does not allow pivoting, the algorithm will fail if any of the diagonal principal minors of the original Toeplitz matrix vanish. (Compare with the discussion of the tridiagonal algorithm in §2.4.) If the algorithm fails, your matrix is not necessarily singular; you might just have to solve your problem by a slower and more general algorithm such as LU decomposition with pivoting.

The routine that implements equations (2.8.23)–(2.8.27) is also due to Rybicki. Note that the routine's r[n+j] is equal to $R_j$ above, so that subscripts on the r array vary from 1 to $2N-1$.
#include "nrutil.h"
#define FREERETURN {free_vector(h,1,n);free_vector(g,1,n);return;}

void toeplz(float r[], float x[], float y[], int n)
/* Solves the Toeplitz system sum_{j=1}^{N} R_{(N+i-j)} x_j = y_i (i = 1,...,N). The Toeplitz
   matrix need not be symmetric. y[1..n] and r[1..2*n-1] are input arrays; x[1..n] is the
   output array. */
{
	int j,k,m,m1,m2;
	float pp,pt1,pt2,qq,qt1,qt2,sd,sgd,sgn,shn,sxn;
	float *g,*h;

	if (r[n] == 0.0) nrerror("toeplz-1 singular principal minor");
	g=vector(1,n);
	h=vector(1,n);
	x[1]=y[1]/r[n];	/* Initialize for the recursion. */
	if (n == 1) FREERETURN
	g[1]=r[n-1]/r[n];
	h[1]=r[n+1]/r[n];
	for (m=1;m<=n;m++) {	/* Main loop over the recursion. */
		m1=m+1;
		sxn = -y[m1];	/* Compute numerator and denominator for x, */
		sd = -r[n];
		for (j=1;j<=m;j++) {
			sxn += r[n+m1-j]*x[j];
			sd += r[n+m1-j]*g[m-j+1];
		}
		if (sd == 0.0) nrerror("toeplz-2 singular principal minor");
		x[m1]=sxn/sd;	/* whence x. */
		for (j=1;j<=m;j++) x[j] -= x[m1]*g[m-j+1];
		if (m1 == n) FREERETURN
		sgn = -r[n-m1];	/* Compute numerator and denominator for G and H, */
		shn = -r[n+m1];
		sgd = -r[n];
		for (j=1;j<=m;j++) {
			sgn += r[n+j-m1]*g[j];
			shn += r[n+m1-j]*h[j];
			sgd += r[n+j-m1]*h[m-j+1];
		}
		if (sd == 0.0 || sgd == 0.0) nrerror("toeplz-3 singular principal minor");
		g[m1]=sgn/sgd;	/* whence G and H. */
		h[m1]=shn/sd;
		k=m;
		m2=(m+1) >> 1;
		pp=g[m1];
		qq=h[m1];
		for (j=1;j<=m2;j++) {
			pt1=g[j];
			pt2=g[k];
			qt1=h[j];
			qt2=h[k];
			g[j]=pt1-pp*qt2;
			g[k]=pt2-pp*qt1;
			h[j]=qt1-qq*pt2;
			h[k--]=qt2-qq*pt1;
		}
	}	/* Back for another recurrence. */
	nrerror("toeplz - should not arrive here!");
}
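A quick driver for the routine above (ours; the numbers are illustrative). The r array holds $R_{-2}, R_{-1}, R_0, R_1, R_2$ in positions 1 through 5, per the convention r[n+j] $= R_j$, and the right-hand side is chosen so that the solution is $x = (1,1,1)$.

#include <stdio.h>
#include "nrutil.h"

void toeplz(float r[], float x[], float y[], int n);	/* the routine above */

int main(void)
{
	float *r,*x,*y;
	int n=3,i;

	r=vector(1,2*n-1); x=vector(1,n); y=vector(1,n);
	r[1]=0.25; r[2]=2.0; r[3]=4.0; r[4]=1.0; r[5]=0.5;	/* R_{-2}..R_2 */
	y[1]=6.25; y[2]=7.0; y[3]=5.5;	/* each row of the matrix summed: x = (1,1,1) */
	toeplz(r,x,y,n);
	for (i=1;i<=n;i++) printf("x[%d] = %f\n",i,x[i]);
	free_vector(y,1,n); free_vector(x,1,n); free_vector(r,1,2*n-1);
	return 0;
}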
If you are in the business of solving very large Toeplitz systems, you should find out about so-called "new, fast" algorithms, which require only on the order of $N(\log N)^2$ operations, compared to $N^2$ for Levinson's method. These methods are too complicated to include here. Papers by Bunch [6] and de Hoog [7] will give entry to the literature.

CITED REFERENCES AND FURTHER READING:

Golub, G.H., and Van Loan, C.F. 1989, Matrix Computations, 2nd ed. (Baltimore: Johns Hopkins University Press), Chapter 5 [also treats some other special forms].

Forsythe, G.E., and Moler, C.B. 1967, Computer Solution of Linear Algebraic Systems (Englewood Cliffs, NJ: Prentice-Hall), §19. [1]

Westlake, J.R. 1968, A Handbook of Numerical Matrix Inversion and Solution of Linear Equations (New York: Wiley). [2]

von Mises, R. 1964, Mathematical Theory of Probability and Statistics (New York: Academic Press), pp. 394ff. [3]

Levinson, N., Appendix B of N. Wiener, 1949, Extrapolation, Interpolation and Smoothing of Stationary Time Series (New York: Wiley). [4]

Robinson, E.A., and Treitel, S. 1980, Geophysical Signal Analysis (Englewood Cliffs, NJ: Prentice-Hall), pp. 163ff. [5]

Bunch, J.R. 1985, SIAM Journal on Scientific and Statistical Computing, vol. 6, pp. 349–364. [6]

de Hoog, F. 1987, Linear Algebra and Its Applications, vol. 88/89, pp. 123–138. [7]

2.9 Cholesky Decomposition

If a square matrix A happens to be symmetric and positive definite, then it has a special, more efficient, triangular decomposition. Symmetric means that $a_{ij} = a_{ji}$ for $i,j = 1,\ldots,N$, while positive definite means that

$$\mathbf{v} \cdot \mathbf{A} \cdot \mathbf{v} > 0 \quad \text{for all vectors } \mathbf{v} \tag{2.9.1}$$

(In Chapter 11 we will see that positive definite has the equivalent interpretation that A has all positive eigenvalues.) While symmetric, positive definite matrices are rather special, they occur quite frequently in some applications, so their special factorization, called Cholesky decomposition, is good to know about. When you can use it, Cholesky decomposition is about a factor of two faster than alternative methods for solving linear equations.

Instead of seeking arbitrary lower and upper triangular factors L and U, Cholesky decomposition constructs a lower triangular matrix L whose transpose $\mathbf{L}^T$ can itself serve as the upper triangular part. In other words we replace equation (2.3.1) by

$$\mathbf{L} \cdot \mathbf{L}^T = \mathbf{A} \tag{2.9.2}$$

This factorization is sometimes referred to as "taking the square root" of the matrix A. The components of $\mathbf{L}^T$ are of course related to those of L by

$$L^T_{ij} = L_{ji} \tag{2.9.3}$$

Writing out equation (2.9.2) in components, one readily obtains the analogs of equations (2.3.12)–(2.3.13),

$$L_{ii} = \left( a_{ii} - \sum_{k=1}^{i-1} L_{ik}^2 \right)^{1/2} \tag{2.9.4}$$

and

$$L_{ji} = \frac{1}{L_{ii}} \left( a_{ij} - \sum_{k=1}^{i-1} L_{ik} L_{jk} \right) \qquad j = i+1, i+2, \ldots, N \tag{2.9.5}$$
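As a quick concrete check of (2.9.4) and (2.9.5) (our example, not the book's), take

$$
\mathbf{A} = \begin{pmatrix} 4 & 2 \\ 2 & 3 \end{pmatrix}:
\qquad
L_{11} = \sqrt{a_{11}} = 2,
\qquad
L_{21} = \frac{a_{12}}{L_{11}} = 1,
\qquad
L_{22} = \sqrt{a_{22} - L_{21}^2} = \sqrt{2},
$$

so that

$$
\mathbf{L} \cdot \mathbf{L}^T =
\begin{pmatrix} 2 & 0 \\ 1 & \sqrt{2} \end{pmatrix}
\begin{pmatrix} 2 & 1 \\ 0 & \sqrt{2} \end{pmatrix}
= \begin{pmatrix} 4 & 2 \\ 2 & 3 \end{pmatrix} = \mathbf{A}
$$

as required.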
If you apply equations (2.9.4) and (2.9.5) in the order $i = 1, 2, \ldots, N$, you will see that the L's that occur on the right-hand side are already determined by the time they are needed. Also, only components $a_{ij}$ with $j \ge i$ are referenced. (Since A is symmetric, these have complete information.) It is convenient, then, to have the factor L overwrite the subdiagonal (lower triangular but not including the diagonal) part of A, preserving the input upper triangular values of A. Only one extra vector of length $N$ is needed to store the diagonal part of L. The operations count is $N^3/6$ executions of the inner loop (consisting of one multiply and one subtract), with also $N$ square roots. As already mentioned, this is about a factor 2 better than LU decomposition of A (where its symmetry would be ignored).

A straightforward implementation is

#include <math.h>

void choldc(float **a, int n, float p[])
/* Given a positive-definite symmetric matrix a[1..n][1..n], this routine constructs its
   Cholesky decomposition, A = L . L^T. On input, only the upper triangle of a need be given;
   it is not modified. The Cholesky factor L is returned in the lower triangle of a, except
   for its diagonal elements which are returned in p[1..n]. */
{
	void nrerror(char error_text[]);
	int i,j,k;
	float sum;

	for (i=1;i<=n;i++) {
		for (j=i;j<=n;j++) {
			for (sum=a[i][j],k=i-1;k>=1;k--) sum -= a[i][k]*a[j][k];
			if (i == j) {
				if (sum <= 0.0)	/* a, with rounding errors, is not positive definite. */
					nrerror("choldc failed");
				p[i]=sqrt(sum);
			} else a[j][i]=sum/p[i];
		}
	}
}

Once the decomposition is in hand, a linear system A · x = b is solved by forward and back substitution:

void cholsl(float **a, int n, float p[], float b[], float x[])
/* Solves A . x = b for a positive-definite symmetric A, given its Cholesky decomposition
   as computed by choldc in a and p. b[1..n] is input as the right-hand side; the solution
   is returned in x[1..n]. */
{
	int i,k;
	float sum;

	for (i=1;i<=n;i++) {	/* Solve L . y = b, storing y in x. */
		for (sum=b[i],k=i-1;k>=1;k--) sum -= a[i][k]*x[k];
		x[i]=sum/p[i];
	}
	for (i=n;i>=1;i--) {	/* Solve L^T . x = y. */
		for (sum=x[i],k=i+1;k<=n;k++) sum -= a[k][i]*x[k];
		x[i]=sum/p[i];
	}
}
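A usage sketch of the two routines together (ours; the numbers reuse the 2 × 2 example worked above, and matrix/vector come from nrutil):

#include <stdio.h>
#include "nrutil.h"

void choldc(float **a, int n, float p[]);
void cholsl(float **a, int n, float p[], float b[], float x[]);

int main(void)
{
	float **a,*p,*b,*x;
	int n=2,i;

	a=matrix(1,n,1,n); p=vector(1,n); b=vector(1,n); x=vector(1,n);
	a[1][1]=4.0; a[1][2]=2.0; a[2][2]=3.0;	/* only the upper triangle is needed */
	b[1]=8.0; b[2]=8.0;	/* chosen so that the solution is x = (1,2) */
	choldc(a,n,p);
	cholsl(a,n,p,b,x);
	for (i=1;i<=n;i++) printf("x[%d] = %f\n",i,x[i]);
	free_vector(x,1,n); free_vector(b,1,n); free_vector(p,1,n); free_matrix(a,1,n,1,n);
	return 0;
}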
A typical use of choldc and cholsl is in the inversion of covariance matrices describing the fit of data to a model; see, e.g., §15.6. In this, and many other applications, one often needs $\mathbf{L}^{-1}$. The lower triangle of this matrix can be efficiently found from the output of choldc:

	for (i=1;i<=n;i++) {
		a[i][i]=1.0/p[i];
		for (j=i+1;j<=n;j++) {
			sum=0.0;
			for (k=i;k<j;k++) sum -= a[j][k]*a[k][i];
			a[j][i]=sum/p[j];
		}
	}

CITED REFERENCES AND FURTHER READING:

Wilkinson, J.H., and Reinsch, C. 1971, Linear Algebra, vol. II of Handbook for Automatic Computation (New York: Springer-Verlag), Chapter I/1.

Gill, P.E., Murray, W., and Wright, M.H. 1991, Numerical Linear Algebra and Optimization, vol. 1 (Redwood City, CA: Addison-Wesley), §4.9.2.

Dahlquist, G., and Bjorck, A. 1974, Numerical Methods (Englewood Cliffs, NJ: Prentice-Hall), §5.3.5.

Golub, G.H., and Van Loan, C.F. 1989, Matrix Computations, 2nd ed. (Baltimore: Johns Hopkins University Press), §4.2.

2.10 QR Decomposition

There is another matrix factorization that is sometimes very useful, the so-called QR decomposition,

$$\mathbf{A} = \mathbf{Q} \cdot \mathbf{R} \tag{2.10.1}$$

Here R is upper triangular, while Q is orthogonal, that is,

$$\mathbf{Q}^T \cdot \mathbf{Q} = \mathbf{1} \tag{2.10.2}$$

where $\mathbf{Q}^T$ is the transpose matrix of Q. Although the decomposition exists for a general rectangular matrix, we shall restrict our treatment to the case when all the matrices are square, with dimensions $N \times N$.

Like the other matrix factorizations we have met (LU, SVD, Cholesky), QR decomposition can be used to solve systems of linear equations. To solve

$$\mathbf{A} \cdot \mathbf{x} = \mathbf{b} \tag{2.10.3}$$

first form $\mathbf{Q}^T \cdot \mathbf{b}$ and then solve

$$\mathbf{R} \cdot \mathbf{x} = \mathbf{Q}^T \cdot \mathbf{b} \tag{2.10.4}$$

by backsubstitution. Since QR decomposition involves about twice as many operations as LU decomposition, it is not used for typical systems of linear equations. However, we will meet special cases where QR is the method of choice.
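The book's own QR routines appear after this point in the full text and are not reproduced here; they are based on Householder transformations. Purely as an illustration of equations (2.10.1)–(2.10.4), here is a small standalone sketch of ours that factors a 3 × 3 matrix by classical Gram-Schmidt (simple to read, though less numerically robust than Householder) and then solves A · x = b by forming Q^T · b and backsubstituting.

#include <stdio.h>
#include <math.h>
#define N 3

/* Factor a (nonsingular) N x N matrix as a = q . r by classical Gram-Schmidt:
   column j of q is column j of a orthogonalized against the previous columns,
   so q is orthogonal (eq. 2.10.2) and r is upper triangular. */
void qr_gs(double a[N][N], double q[N][N], double r[N][N])
{
	int i,j,k;
	double v[N];

	for (i=0;i<N;i++)
		for (j=0;j<N;j++) r[i][j]=0.0;
	for (j=0;j<N;j++) {
		for (i=0;i<N;i++) v[i]=a[i][j];
		for (k=0;k<j;k++) {
			for (i=0;i<N;i++) r[k][j] += q[i][k]*a[i][j];	/* r[k][j] = q_k . a_j */
			for (i=0;i<N;i++) v[i] -= r[k][j]*q[i][k];	/* remove that component */
		}
		for (i=0;i<N;i++) r[j][j] += v[i]*v[i];
		r[j][j]=sqrt(r[j][j]);	/* length of what remains */
		for (i=0;i<N;i++) q[i][j]=v[i]/r[j][j];
	}
}

/* Solve a . x = b given a = q . r: form y = q^T . b, then backsubstitute
   r . x = y, exactly equations (2.10.3)-(2.10.4). */
void qr_solve(double q[N][N], double r[N][N], double b[N], double x[N])
{
	int i,k;
	double sum,y[N];

	for (i=0;i<N;i++) {
		y[i]=0.0;
		for (k=0;k<N;k++) y[i] += q[k][i]*b[k];
	}
	for (i=N-1;i>=0;i--) {
		sum=y[i];
		for (k=i+1;k<N;k++) sum -= r[i][k]*x[k];
		x[i]=sum/r[i][i];
	}
}

int main(void)
{
	double a[N][N]={{2.0,1.0,1.0},{1.0,3.0,2.0},{1.0,0.0,0.0}};
	double q[N][N],r[N][N],b[N]={4.0,5.0,6.0},x[N];
	int i;

	qr_gs(a,q,r);
	qr_solve(q,r,b,x);
	for (i=0;i<N;i++) printf("x[%d] = %g\n",i,x[i]);	/* should print 6, 15, -23 */
	return 0;
}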