Backsubstitution

But how do we solve for the $x$'s? The last $x$ ($x_4$ in this example) is already isolated, namely

$$x_4 = b_4 / a_{44} \qquad\qquad (2.2.2)$$

With the last $x$ known we can move to the penultimate $x$,

$$x_3 = \frac{1}{a_{33}}\left[\,b_3 - x_4 a_{34}\,\right] \qquad\qquad (2.2.3)$$

and then proceed with the $x$ before that one. The typical step is

$$x_i = \frac{1}{a_{ii}}\left[\,b_i - \sum_{j=i+1}^{N} a_{ij} x_j\,\right] \qquad\qquad (2.2.4)$$

The procedure defined by equation (2.2.4) is called backsubstitution. The combination of Gaussian elimination and backsubstitution yields a solution to the set of equations.
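As a concrete illustration of equation (2.2.4), here is a minimal C sketch of the backsubstitution loop, assuming the coefficient matrix has already been reduced to upper triangular form by Gaussian elimination. The function name backsub, the fixed size N = 4, and the zero-based indexing are ours for illustration only; this is not the book's routine (indeed, as noted below, no Gaussian elimination routine is given).

```c
#include <stdio.h>

#define N 4   /* size of the example system */

/* Backsubstitution for an upper triangular system a*x = b
   (equation 2.2.4), written in zero-based C indexing.  After
   Gaussian elimination a[i][j] = 0 for j < i, and b holds the
   correspondingly reduced right-hand side; the solution is
   returned in x.  Illustrative sketch only. */
void backsub(double a[N][N], double b[N], double x[N])
{
    int i, j;
    double sum;
    for (i = N - 1; i >= 0; i--) {     /* the last x is already isolated */
        sum = b[i];
        for (j = i + 1; j < N; j++)    /* subtract off the already-known x's */
            sum -= a[i][j] * x[j];
        x[i] = sum / a[i][i];          /* divide by the diagonal element */
    }
}

int main(void)
{
    /* A small upper triangular example system. */
    double a[N][N] = { {2.0, 1.0, -1.0, 3.0},
                       {0.0, 4.0,  2.0, 1.0},
                       {0.0, 0.0,  3.0, 2.0},
                       {0.0, 0.0,  0.0, 5.0} };
    double b[N] = {5.0, 7.0, 8.0, 10.0};
    double x[N];
    int i;

    backsub(a, b, x);
    for (i = 0; i < N; i++) printf("x[%d] = %g\n", i, x[i]);
    return 0;
}
```

Note that the inner loop contains exactly one multiplication and one subtraction, which is what the operation counts below are tallying.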
The advantage of Gaussian elimination and backsubstitution over Gauss-Jordan elimination is simply that the former is faster in raw operations count: The innermost loops of Gauss-Jordan elimination, each containing one subtraction and one multiplication, are executed $N^3$ and $N^2 M$ times (where there are $N$ equations and $M$ unknowns). The corresponding loops in Gaussian elimination are executed only $\frac{1}{3}N^3$ times (only half the matrix is reduced, and the increasing numbers of predictable zeros reduce the count to one-third), and $\frac{1}{2}N^2 M$ times, respectively. Each backsubstitution of a right-hand side is $\frac{1}{2}N^2$ executions of a similar loop (one multiplication plus one subtraction). For $M \ll N$ (only a few right-hand sides) Gaussian elimination thus has about a factor three advantage over Gauss-Jordan. (We could reduce this advantage to a factor 1.5 by not computing the inverse matrix as part of the Gauss-Jordan scheme.)

For computing the inverse matrix (which we can view as the case of $M = N$ right-hand sides, namely the $N$ unit vectors which are the columns of the identity matrix), Gaussian elimination and backsubstitution at first glance require $\frac{1}{3}N^3$ (matrix reduction) $+ \frac{1}{2}N^3$ (right-hand side manipulations) $+ \frac{1}{2}N^3$ ($N$ backsubstitutions) $= \frac{4}{3}N^3$ loop executions, which is more than the $N^3$ for Gauss-Jordan. However, the unit vectors are quite special in containing all zeros except for one element. If this is taken into account, the right-side manipulations can be reduced to only $\frac{1}{6}N^3$ loop executions, and, for matrix inversion, the two methods have identical efficiencies.
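To make the comparison concrete, here is the arithmetic behind the counts just quoted, written out as a small LaTeX summary. The tabulation is ours, not the book's; in particular the reading of the factor 1.5 is our inference from the quoted counts.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}

% Few right-hand sides, M << N: keep only the N^3 terms.
\begin{align*}
\text{Gauss-Jordan (with inverse):} &\quad \approx N^3\\
\text{Gaussian elimination + backsubstitution:} &\quad \approx \tfrac{1}{3}N^3
  \qquad\Rightarrow\qquad \frac{N^3}{\tfrac{1}{3}N^3} = 3 .
\end{align*}
% If the inverse is not formed, the Gauss-Jordan cost drops to roughly
% (1/2)N^3, consistent with the quoted factor (1/2)/(1/3) = 1.5.

% Matrix inversion, M = N right-hand sides (the N unit vectors):
\begin{align*}
\text{naive count:} &\quad
  \tfrac{1}{3}N^3 + \tfrac{1}{2}N^3 + \tfrac{1}{2}N^3 = \tfrac{4}{3}N^3 ,\\
\text{exploiting the zeros in the unit vectors:} &\quad
  \tfrac{1}{3}N^3 + \tfrac{1}{6}N^3 + \tfrac{1}{2}N^3 = N^3 ,
\end{align*}
which matches the $N^3$ of Gauss-Jordan, so the two methods invert a matrix
with identical efficiency.

\end{document}
```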
Both Gaussian elimination and Gauss-Jordan elimination share the disadvantage that all right-hand sides must be known in advance. The LU decomposition method in the next section does not share that deficiency, and also has an equally small operations count, both for solution with any number of right-hand sides, and for matrix inversion. For this reason we will not implement the method of Gaussian elimination as a routine.

CITED REFERENCES AND FURTHER READING:

Ralston, A., and Rabinowitz, P. 1978, A First Course in Numerical Analysis, 2nd ed. (New York: McGraw-Hill), §9.3–1.