19.5 Relaxation Methods for Boundary Value Problems 867

For this optimal choice, the spectral radius for SOR is

ρSOR = [ ρJacobi / (1 + √(1 − ρ²Jacobi)) ]²    (19.5.20)

As an application of the above results, consider our model problem for which ρJacobi is given by equation (19.5.11). Then equations (19.5.19) and (19.5.20) give

ω ≈ 2 / (1 + π/J)    (19.5.21)

ρSOR ≈ 1 − 2π/J    for large J    (19.5.22)

Equation (19.5.10) gives for the number of iterations to reduce the initial error by a factor of 10⁻ᵖ,

r ≈ pJ ln 10 / (2π) ≈ (1/3) pJ    (19.5.23)

Comparing with equation (19.5.12) or (19.5.15), we see that optimal SOR requires of order J iterations, as opposed to of order J². Since J is typically 100 or larger, this makes a tremendous difference! Equation (19.5.23) leads to the mnemonic that 3-figure accuracy (p = 3) requires a number of iterations equal to the number of mesh points along a side of the grid. For 6-figure accuracy, we require about twice as many iterations.

How do we choose ω for a problem for which the answer is not known analytically? That is just the weak point of SOR! The advantages of SOR obtain only in a fairly narrow window around the correct value of ω. It is better to take ω slightly too large, rather than slightly too small, but best to get it right.

One way to choose ω is to map your problem approximately onto a known problem, replacing the coefficients in the equation by average values. Note, however, that the known problem must have the same grid size and boundary conditions as the actual problem. We give for reference purposes the value of ρJacobi for our model problem on a rectangular J × L grid, allowing for the possibility that ∆x ≠ ∆y:

ρJacobi = [ cos(π/J) + (∆x/∆y)² cos(π/L) ] / [ 1 + (∆x/∆y)² ]    (19.5.24)

Equation (19.5.24) holds for homogeneous Dirichlet or Neumann boundary conditions. For periodic boundary conditions, make the replacement π → 2π.
A second way, which is especially useful if you plan to solve many similar elliptic equations each time with slightly different coefficients, is to determine the optimum value ω empirically on the first equation and then use that value for the remaining equations. Various automated schemes for doing this and for "seeking out" the best values of ω are described in the literature.

While the matrix notation introduced earlier is useful for theoretical analyses, for practical implementation of the SOR algorithm we need explicit formulas.