where $\mathcal{L}$ represents some elliptic operator and $\rho$ is the source term. Rewrite the equation as a diffusion equation,

$$\frac{\partial u}{\partial t} = \mathcal{L}u - \rho \qquad (19.5.2)$$

An initial distribution $u$ relaxes to an equilibrium solution as $t \to \infty$. This equilibrium has all time derivatives vanishing. Therefore it is the solution of the original elliptic problem (19.5.1). We see that all the machinery of §19.2, on diffusive initial value equations, can be brought to bear on the solution of boundary value problems by relaxation methods.
Let us apply this idea to our model problem (19.0.3). The diffusion equation is

$$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} - \rho \qquad (19.5.3)$$

If we use FTCS differencing (cf. equation 19.2.4), we get

$$u_{j,l}^{n+1} = u_{j,l}^{n} + \frac{\Delta t}{\Delta^2}\left(u_{j+1,l}^{n} + u_{j-1,l}^{n} + u_{j,l+1}^{n} + u_{j,l-1}^{n} - 4u_{j,l}^{n}\right) - \rho_{j,l}\,\Delta t \qquad (19.5.4)$$

Recall from (19.2.6) that FTCS differencing is stable in one spatial dimension only if $\Delta t/\Delta^2 \le \tfrac{1}{2}$. In two dimensions this becomes $\Delta t/\Delta^2 \le \tfrac{1}{4}$. Suppose we try to take the largest possible timestep, and set $\Delta t = \Delta^2/4$. Then equation (19.5.4) becomes

$$u_{j,l}^{n+1} = \tfrac{1}{4}\left(u_{j+1,l}^{n} + u_{j-1,l}^{n} + u_{j,l+1}^{n} + u_{j,l-1}^{n}\right) - \frac{\Delta^2}{4}\rho_{j,l} \qquad (19.5.5)$$

Thus the algorithm consists of using the average of $u$ at its four nearest-neighbor points on the grid (plus the contribution from the source). This procedure is then iterated until convergence.

This method is in fact a classical method with origins dating back to the last century, called Jacobi's method (not to be confused with the Jacobi method for eigenvalues). The method is not practical because it converges too slowly. However, it is the basis for understanding the modern methods, which are always compared with it.

Another classical method is the Gauss-Seidel method, which turns out to be important in multigrid methods (§19.6). Here we make use of updated values of $u$ on the right-hand side of (19.5.5) as soon as they become available. In other words, the averaging is done "in place" instead of being "copied" from an earlier timestep to a later one. If we are proceeding along the rows, incrementing $j$ for fixed $l$, we have

$$u_{j,l}^{n+1} = \tfrac{1}{4}\left(u_{j+1,l}^{n} + u_{j-1,l}^{n+1} + u_{j,l+1}^{n} + u_{j,l-1}^{n+1}\right) - \frac{\Delta^2}{4}\rho_{j,l} \qquad (19.5.6)$$

This method is also slowly converging and only of theoretical interest when used by itself, but some analysis of it will be instructive.

Let us look at the Jacobi and Gauss-Seidel methods in terms of the matrix splitting concept. We change notation and call $u$ "$x$," to conform to standard matrix notation. To solve

$$\mathbf{A} \cdot \mathbf{x} = \mathbf{b} \qquad (19.5.7)$$
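For concreteness, here is a minimal C sketch of one Jacobi sweep, equation (19.5.5), and one Gauss-Seidel sweep, equation (19.5.6), on an n-by-n grid. It is not a routine from this book: the flat storage layout, the IDX macro, and the assumption that the outermost rows and columns carry fixed (Dirichlet) boundary values are illustrative choices only. In practice one repeats such sweeps, monitoring some norm of the change in u, until convergence.

#include <stdlib.h>
#include <string.h>

#define IDX(j,l,n) ((j)*(n) + (l))    /* illustrative flat indexing of an n-by-n grid */

/* One Jacobi sweep, eq. (19.5.5): every new value is formed from values of the
   previous iteration, so the old grid must be copied before updating. */
void jacobi_sweep(double *u, const double *rho, int n, double delta)
{
    double *uold = malloc((size_t)n * n * sizeof *uold);
    if (uold == NULL) return;                /* allocation failed; leave u unchanged */
    memcpy(uold, u, (size_t)n * n * sizeof *uold);
    for (int l = 1; l < n - 1; l++)          /* interior points only;          */
        for (int j = 1; j < n - 1; j++)      /* boundary values are held fixed */
            u[IDX(j,l,n)] =
                0.25 * (uold[IDX(j+1,l,n)] + uold[IDX(j-1,l,n)]
                      + uold[IDX(j,l+1,n)] + uold[IDX(j,l-1,n)])
              - 0.25 * delta * delta * rho[IDX(j,l,n)];
    free(uold);
}

/* One Gauss-Seidel sweep, eq. (19.5.6): updated values are used "in place" as
   soon as they are available, so no copy is needed.  Incrementing j for fixed l
   means u[j-1,l] and u[j,l-1] on the right-hand side are already new values. */
void gauss_seidel_sweep(double *u, const double *rho, int n, double delta)
{
    for (int l = 1; l < n - 1; l++)
        for (int j = 1; j < n - 1; j++)
            u[IDX(j,l,n)] =
                0.25 * (u[IDX(j+1,l,n)] + u[IDX(j-1,l,n)]
                      + u[IDX(j,l+1,n)] + u[IDX(j,l-1,n)])
              - 0.25 * delta * delta * rho[IDX(j,l,n)];
}

Note that the two sweeps differ only in whether the right-hand side reads from a saved copy of the previous iterate (Jacobi) or from the array being overwritten (Gauss-Seidel); this is exactly the "copied" versus "in place" distinction described above.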