19.5 Relaxation Methods for Boundary Value Problems   865

Sample page from NUMERICAL RECIPES IN C: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43108-5). Copyright (C) 1988-1992 by Cambridge University Press. Programs Copyright (C) 1988-1992 by Numerical Recipes Software.

we can consider splitting A as

    A = L + D + U    (19.5.8)

where D is the diagonal part of A, L is the lower triangle of A with zeros on the diagonal, and U is the upper triangle of A with zeros on the diagonal.

In the Jacobi method we write for the rth step of iteration

    D · x^(r) = −(L + U) · x^(r−1) + b    (19.5.9)

For our model problem (19.5.5), D is simply the identity matrix. The Jacobi method converges for matrices A that are "diagonally dominant" in a sense that can be made mathematically precise. For matrices arising from finite differencing, this condition is usually met.

What is the rate of convergence of the Jacobi method? A detailed analysis is beyond our scope, but here is some of the flavor: The matrix −D^(−1) · (L + U) is the iteration matrix which, apart from an additive term, maps one set of x's into the next. The iteration matrix has eigenvalues, each one of which reflects the factor by which the amplitude of a particular eigenmode of undesired residual is suppressed during one iteration. Evidently those factors had better all have modulus < 1 for the relaxation to work at all! The rate of convergence of the method is set by the rate for the slowest-decaying eigenmode, i.e., the factor with largest modulus. The modulus of this largest factor, therefore lying between 0 and 1, is called the spectral radius of the relaxation operator, denoted ρ_s.

The number of iterations r required to reduce the overall error by a factor 10^(−p) is thus estimated by

    r ≈ p ln 10 / (−ln ρ_s)    (19.5.10)

In general, the spectral radius ρ_s goes asymptotically to the value 1 as the grid size J is increased, so that more iterations are required. For any given equation, grid geometry, and boundary condition, the spectral radius can, in principle, be computed analytically. For example, for equation (19.5.5) on a J × J grid with Dirichlet boundary conditions on all four sides, the asymptotic formula for large J turns out to be

    ρ_s ≈ 1 − π²/(2J²)    (19.5.11)

The number of iterations r required to reduce the error by a factor of 10^(−p) is thus

    r ≈ 2pJ² ln 10 / π² ≈ (1/2) pJ²    (19.5.12)

In other words, the number of iterations is proportional to the number of mesh points, J². Since 100 × 100 and larger problems are common, it is clear that the Jacobi method is only of theoretical interest.