19.5 Relaxation Methods for Boundary Value Problems

The beauty of Chebyshev acceleration is that the norm of the error always decreases with each iteration. (This is the norm of the actual error in u_{j,l}. The norm of the residual ξ_{j,l} need not decrease monotonically.) While the asymptotic rate of convergence is the same as ordinary SOR, there is never any excuse for not using Chebyshev acceleration to reduce the total number of iterations required. Here we give a routine for SOR with Chebyshev acceleration.
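In the listing below, the overrelaxation parameter omega is updated half-sweep by half-sweep. Written out explicitly (this merely restates the assignment to omega in the code, with ρ_Jacobi standing for the argument rjac), the Chebyshev schedule is

    ω^(0)     = 1
    ω^(1/2)   = 1/(1 − ρ²_Jacobi/2)
    ω^(n+1/2) = 1/(1 − ρ²_Jacobi ω^(n)/4),    n = 1/2, 1, 3/2, ...

The fixed point of the recurrence is ω = 2/(1 + sqrt(1 − ρ²_Jacobi)), the optimal overrelaxation parameter, so ω approaches its optimal value as n → ∞.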
#include <math.h>

#define MAXITS 1000
#define EPS 1.0e-5

void sor(double **a, double **b, double **c, double **d, double **e,
    double **f, double **u, int jmax, double rjac)
/* Successive overrelaxation solution of equation (19.5.25) with Chebyshev
acceleration. a, b, c, d, e, and f are input as the coefficients of the
equation, each dimensioned to the grid size [1..jmax][1..jmax]. u is input
as the initial guess to the solution, usually zero, and returns with the
final value. rjac is input as the spectral radius of the Jacobi iteration,
or an estimate of it. */
{
    void nrerror(char error_text[]);
    int ipass,j,jsw,l,lsw,n;
    double anorm,anormf=0.0,omega=1.0,resid;
    /* Double precision is a good idea for jmax bigger than about 25. */

    for (j=2;j<jmax;j++)              /* Compute initial norm of residual and
                                         terminate iteration when norm has been
                                         reduced by a factor EPS. */
        for (l=2;l<jmax;l++)
            anormf += fabs(f[j][l]);  /* Assumes initial u is zero. */
    for (n=1;n<=MAXITS;n++) {
        anorm=0.0;
        jsw=1;
        for (ipass=1;ipass<=2;ipass++) {    /* Odd-even ordering. */
            lsw=jsw;
            for (j=2;j<jmax;j++) {
                for (l=lsw+1;l<jmax;l+=2) {
                    resid=a[j][l]*u[j+1][l]
                         +b[j][l]*u[j-1][l]
                         +c[j][l]*u[j][l+1]
                         +d[j][l]*u[j][l-1]
                         +e[j][l]*u[j][l]
                         -f[j][l];
                    anorm += fabs(resid);
                    u[j][l] -= omega*resid/e[j][l];
                }
                lsw=3-lsw;
            }
            jsw=3-jsw;
            omega=(n == 1 && ipass == 1 ? 1.0/(1.0-0.5*rjac*rjac) :
                1.0/(1.0-0.25*rjac*rjac*omega));
        }
        if (anorm < EPS*anormf) return;
    }
    nrerror("MAXITS exceeded");
}

The main advantage of SOR is that it is very easy to program. Its main disadvantage is that it is still very inefficient on large problems.
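As a minimal driver sketch (not part of the original listing), one might call sor on the model problem: coefficients a = b = c = d = 1, e = −4 as in equation (19.5.25), homogeneous Dirichlet boundary (u = 0 on the edges), and the model-problem estimate rjac = cos(π/J) for a square J×J grid. The helper mat and the stand-in nrerror below are assumptions for illustration, not part of the book's nrutil.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define JMAX 33    /* grid is [1..JMAX][1..JMAX]; interior points are 2..JMAX-1 */

void sor(double **a, double **b, double **c, double **d, double **e,
    double **f, double **u, int jmax, double rjac);

void nrerror(char error_text[])    /* minimal stand-in for the nrutil version */
{
    fprintf(stderr,"run-time error: %s\n",error_text);
    exit(1);
}

static double **mat(int n)    /* hypothetical helper: zeroed n x n matrix, [1..n][1..n] */
{
    int j;
    double **m=(double **)malloc((n+1)*sizeof(double *));
    for (j=1;j<=n;j++) m[j]=(double *)calloc(n+1,sizeof(double));
    return m;
}

int main(void)
{
    int j,l;
    double **a=mat(JMAX),**b=mat(JMAX),**c=mat(JMAX),**d=mat(JMAX);
    double **e=mat(JMAX),**f=mat(JMAX),**u=mat(JMAX);
    double rjac=cos(acos(-1.0)/JMAX);  /* Jacobi spectral radius, model problem */

    for (j=2;j<JMAX;j++)
        for (l=2;l<JMAX;l++) {
            a[j][l]=b[j][l]=c[j][l]=d[j][l]=1.0;  /* 5-point Laplacian stencil */
            e[j][l]=-4.0;
            f[j][l]=0.0;
        }
    f[JMAX/2][JMAX/2]=1.0;         /* a single point source in the interior */
    sor(a,b,c,d,e,f,u,JMAX,rjac);  /* u was zeroed by calloc: boundary u = 0 */
    printf("u at source point = %g\n",u[JMAX/2][JMAX/2]);
    return 0;
}

Note that u enters sor holding both the initial guess (zero here, as the residual-norm initialization assumes) and the boundary values, which the routine never touches.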