    for (i=1;i<=n;i++) f[i]=f1[i]-f2[i];
    free_vector(y,1,nvar);
    free_vector(f2,1,nvar);
    free_vector(f1,1,nvar);
}

There are boundary value problems where even shooting to a fitting point fails: the integration interval has to be partitioned by several fitting points, with the solution being matched at each such point. For more details see [1].

CITED REFERENCES AND FURTHER READING:
Acton, F.S. 1970, Numerical Methods That Work; 1990, corrected edition (Washington: Mathematical Association of America).
Keller, H.B. 1968, Numerical Methods for Two-Point Boundary-Value Problems (Waltham, MA: Blaisdell).
Stoer, J., and Bulirsch, R. 1980, Introduction to Numerical Analysis (New York: Springer-Verlag), §§7.3.5–7.3.6. [1]

17.3 Relaxation Methods

In relaxation methods we replace ODEs by approximate finite-difference equations (FDEs) on a grid or mesh of points that spans the domain of interest. As a typical example, we could replace a general first-order differential equation

$$
\frac{dy}{dx} = g(x,y) \tag{17.3.1}
$$

with an algebraic equation relating function values at two points k, k−1:

$$
y_k - y_{k-1} - (x_k - x_{k-1})\, g\!\left[\tfrac{1}{2}(x_k + x_{k-1}),\ \tfrac{1}{2}(y_k + y_{k-1})\right] = 0 \tag{17.3.2}
$$

The form of the FDE in (17.3.2) illustrates the idea, but not uniquely: There are many ways to turn the ODE into an FDE. When the problem involves N coupled first-order ODEs represented by FDEs on a mesh of M points, a solution consists of values for N dependent functions given at each of the M mesh points, or N × M variables in all. The relaxation method determines the solution by starting with a guess and improving it, iteratively. As the iterations improve the solution, the result is said to relax to the true solution.
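The discretization (17.3.2) is easy to code. As a minimal sketch (the names g and fde_residual and the unit-offset arrays x[1..m], y[1..m] are purely illustrative, not part of any routine given in this chapter), here is how the left-hand side of (17.3.2) would be evaluated for a single scalar ODE:

    /* Illustrative right-hand side g(x,y) of dy/dx = g(x,y); here g = -y,
       so the exact solution decays as exp(-x). */
    static float g(float x, float y)
    {
        return -y;
    }

    /* Residual of equation (17.3.2) between mesh points k-1 and k, evaluated
       at the midpoint of the interval.  x[1..m] holds the mesh, y[1..m] the
       current trial solution. */
    static float fde_residual(int k, float x[], float y[])
    {
        float xm=0.5*(x[k]+x[k-1]);          /* midpoint abscissa */
        float ym=0.5*(y[k]+y[k-1]);          /* midpoint trial value */

        return y[k]-y[k-1]-(x[k]-x[k-1])*g(xm,ym);
    }

A trial solution solves the FDE when every such residual vanishes (together with the boundary conditions). Averaging g at the two endpoints instead of evaluating it at the midpoint would give a different, equally legitimate FDE, which is the sense in which (17.3.2) is not unique.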
While several iteration schemes are possible, for most problems our old standby, multidimensional Newton's method, works well. The method produces a matrix equation that must be solved, but the matrix takes a special, "block diagonal" form that allows it to be inverted far more economically, both in time and storage, than would be possible for a general matrix of size (MN) × (MN). Since MN can easily be several thousand, this is crucial for the feasibility of the method.

Our implementation couples at most pairs of points, as in equation (17.3.2). More points can be coupled, but then the method becomes more complex. We will provide enough background so that you can write a more general scheme if you have the patience to do so.

Let us develop a general set of algebraic equations that represent the ODEs by FDEs. The ODE problem is exactly identical to that expressed in equations (17.0.1)–(17.0.3), where we had N coupled first-order equations that satisfy n1 boundary conditions at x1 and n2 = N − n1 boundary conditions at x2. We first define a mesh or grid by a set of k = 1, 2, ..., M points at which we supply values for the independent variable xk. In particular, x1 is the initial boundary and xM is the final boundary. We use the notation yk to refer to the entire set of dependent variables y1, y2, ..., yN at point xk. At an arbitrary point k in the middle of the mesh, we approximate the set of N first-order ODEs by algebraic relations of the form

$$
0 = E_k \equiv y_k - y_{k-1} - (x_k - x_{k-1})\, g_k(x_k, x_{k-1}, y_k, y_{k-1}), \qquad k = 2,3,\ldots,M \tag{17.3.3}
$$

The notation signifies that gk can be evaluated using information from both points k, k−1. The FDEs labeled by Ek provide N equations coupling 2N variables at points k, k−1. There are M−1 points, k = 2, 3, ..., M, at which difference equations of the form (17.3.3) apply. Thus the FDEs provide a total of (M−1)N equations for the MN unknowns. The remaining N equations come from the boundary conditions.

At the first boundary we have

$$
0 = E_1 \equiv B(x_1, y_1) \tag{17.3.4}
$$

while at the second boundary

$$
0 = E_{M+1} \equiv C(x_M, y_M) \tag{17.3.5}
$$

The vectors E1 and B have only n1 nonzero components, corresponding to the n1 boundary conditions at x1. It will turn out to be useful to take these nonzero components to be the last n1 components. In other words, Ej,1 ≠ 0 only for j = n2+1, n2+2, ..., N. At the other boundary, only the first n2 components of EM+1 and C are nonzero: Ej,M+1 ≠ 0 only for j = 1, 2, ..., n2.

The "solution" of the FDE problem in (17.3.3)–(17.3.5) consists of a set of variables yj,k, the values of the N variables yj at the M points xk. The algorithm we describe below requires an initial guess for the yj,k. We then determine increments Δyj,k such that yj,k + Δyj,k is an improved approximation to the solution.

Equations for the increments are developed by expanding the FDEs in first-order Taylor series with respect to small changes Δyk. At an interior point, k = 2, 3, ..., M, this gives

$$
E_k(y_k + \Delta y_k,\, y_{k-1} + \Delta y_{k-1}) \approx E_k(y_k, y_{k-1})
+ \sum_{n=1}^{N} \frac{\partial E_k}{\partial y_{n,k-1}}\,\Delta y_{n,k-1}
+ \sum_{n=1}^{N} \frac{\partial E_k}{\partial y_{n,k}}\,\Delta y_{n,k} \tag{17.3.6}
$$

For a solution we want the updated value E(y + Δy) to be zero, so the general set of equations at an interior point can be written in matrix form as

$$
\sum_{n=1}^{N} S_{j,n}\,\Delta y_{n,k-1} + \sum_{n=N+1}^{2N} S_{j,n}\,\Delta y_{n-N,k} = -E_{j,k}, \qquad j = 1,2,\ldots,N \tag{17.3.7}
$$

where

$$
S_{j,n} = \frac{\partial E_{j,k}}{\partial y_{n,k-1}}, \qquad S_{j,n+N} = \frac{\partial E_{j,k}}{\partial y_{n,k}}, \qquad n = 1,2,\ldots,N \tag{17.3.8}
$$

The quantity Sj,n is an N × 2N matrix at each point k. Each interior point thus supplies a block of N equations coupling 2N corrections to the solution variables at the points k, k−1.
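To make (17.3.7) and (17.3.8) concrete, here is a sketch of how the N × 2N block S and the residual Ej,k might be assembled when the FDE is the midpoint rule (17.3.2) applied to a system dy/dx = g(x, y). For that choice, ∂Ej,k/∂yn,k−1 = −δjn − (h/2) ∂gj/∂yn and ∂Ej,k/∂yn,k = +δjn − (h/2) ∂gj/∂yn, with h = xk − xk−1 and the derivatives of g evaluated at the midpoint. The routine names fill_block, sys_g, and sys_dgdy are illustrative assumptions, and the columns are taken in the natural order 1..2N, not the reordered order that the routine solvde below will expect:

    #include "nrutil.h"

    /* Illustrative only: fill the N x 2N block S of equation (17.3.8), and the
       residual e[j] = E_{j,k} of (17.3.3), for the midpoint-rule FDE (17.3.2)
       applied to dy/dx = g(x,y).  sys_g(x,y,g) returns g_j(x,y) in g[1..N];
       sys_dgdy(x,y,dg) returns dg_j/dy_n in dg[1..N][1..N].  Columns 1..N refer
       to derivatives at point k-1, columns N+1..2N to derivatives at point k. */
    void fill_block(int N, int k, float x[], float **y,
        void (*sys_g)(float, float [], float []),
        void (*sys_dgdy)(float, float [], float **),
        float **s, float e[])
    {
        int j,n;
        float h=x[k]-x[k-1],xm=0.5*(x[k]+x[k-1]);
        float *ym=vector(1,N),*gm=vector(1,N),**dg=matrix(1,N,1,N);

        for (n=1;n<=N;n++) ym[n]=0.5*(y[n][k]+y[n][k-1]);    /* midpoint values */
        sys_g(xm,ym,gm);
        sys_dgdy(xm,ym,dg);
        for (j=1;j<=N;j++) {
            for (n=1;n<=N;n++) {
                s[j][n]   = (j == n ? -1.0 : 0.0)-0.5*h*dg[j][n];   /* dE_j/dy_{n,k-1} */
                s[j][N+n] = (j == n ?  1.0 : 0.0)-0.5*h*dg[j][n];   /* dE_j/dy_{n,k}   */
            }
            e[j]=y[j][k]-y[j][k-1]-h*gm[j];                          /* E_{j,k}, eq. (17.3.3) */
        }
        free_matrix(dg,1,N,1,N);
        free_vector(gm,1,N);
        free_vector(ym,1,N);
    }

The right-hand side of (17.3.7) is then −e[j]. The boundary blocks (17.3.10) and (17.3.12) below are filled in the same way, except that only derivatives with respect to the variables at the single boundary point appear.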
Similarly, the algebraic relations at the boundaries can be expanded in a first-order Taylor series for increments that improve the solution. Since E1 depends only on y1, we find at the first boundary

$$
\sum_{n=1}^{N} S_{j,n}\,\Delta y_{n,1} = -E_{j,1}, \qquad j = n_2+1,\, n_2+2,\ldots,N \tag{17.3.9}
$$

where

$$
S_{j,n} = \frac{\partial E_{j,1}}{\partial y_{n,1}}, \qquad n = 1,2,\ldots,N \tag{17.3.10}
$$

At the second boundary,

$$
\sum_{n=1}^{N} S_{j,n}\,\Delta y_{n,M} = -E_{j,M+1}, \qquad j = 1,2,\ldots,n_2 \tag{17.3.11}
$$

where

$$
S_{j,n} = \frac{\partial E_{j,M+1}}{\partial y_{n,M}}, \qquad n = 1,2,\ldots,N \tag{17.3.12}
$$

We thus have in equations (17.3.7)–(17.3.12) a set of linear equations to be solved for the corrections Δy, iterating until the corrections are sufficiently small. The equations have a special structure, because each Sj,n couples only points k, k−1. Figure 17.3.1 illustrates the typical structure of the complete matrix equation for the case of 5 variables and 4 mesh points, with 3 boundary conditions at the first boundary and 2 at the second. The 3 × 5 block of nonzero entries in the top left-hand corner of the matrix comes from the boundary condition Sj,n at point k = 1. The next three 5 × 10 blocks are the Sj,n at the interior points, coupling variables at mesh points (2,1), (3,2), and (4,3). Finally we have the block corresponding to the second boundary condition.

We can solve equations (17.3.7)–(17.3.12) for the increments Δy using a form of Gaussian elimination that exploits the special structure of the matrix to minimize the total number of operations, and that minimizes storage of matrix coefficients by packing the elements in a special blocked structure. (You might wish to review Chapter 2, especially §2.2, if you are unfamiliar with the steps involved in Gaussian elimination.) Recall that Gaussian elimination consists of manipulating the equations by elementary operations such as dividing rows of coefficients by a common factor to produce unity in diagonal elements, and adding appropriate multiples of other rows to produce zeros below the diagonal. Here we take advantage of the block structure by performing a bit more reduction than in pure Gaussian elimination, so that the storage of coefficients is minimized. Figure 17.3.2 shows the form that we wish to achieve by elimination, just prior to the backsubstitution step. Only a small subset of the reduced MN × MN matrix elements needs to be stored as the elimination progresses. Once the matrix elements reach the stage in Figure 17.3.2, the solution follows quickly by a backsubstitution procedure.

Furthermore, the entire procedure, except the backsubstitution step, operates only on one block of the matrix at a time. The procedure contains four types of operations: (1) partial reduction to zero of certain elements of a block using results from a previous step, (2) elimination of the square structure of the remaining block elements such that the square section contains unity along the diagonal, and zero in off-diagonal elements, (3) storage of the remaining nonzero coefficients for use in later steps, and (4) backsubstitution. We illustrate the steps schematically by figures.

Consider the block of equations describing corrections available from the initial boundary conditions. We have n1 equations for N unknown corrections. We wish to transform the first block so that its left-hand n1 × n1 square section becomes unity along the diagonal, and zero in off-diagonal elements. Figure 17.3.3 shows the original and final form of the first block of the matrix. In the figure we designate matrix elements that are subject to diagonalization by "D", and elements that will be altered by "A"; in the final block, elements that are stored are labeled by "S". We get from start to finish by selecting in turn n1 "pivot" elements from among the first n1 columns, normalizing the pivot row so that the value of the "pivot" element is unity, and adding appropriate multiples of this row to the remaining rows so that they contain zeros in the pivot column. In its final form, the reduced block expresses values for the corrections to the first n1 variables at mesh point 1 in terms of values for the remaining n2 unknown corrections at point 1, i.e., we now know what the first n1 elements are in terms of the remaining n2 elements. We store only the final set of n2 nonzero columns from the initial block, plus the column for the altered right-hand side of the matrix equation.

We must emphasize here an important detail of the method. To exploit the reduced storage allowed by operating on blocks, it is essential that the ordering of columns in the s matrix of derivatives be such that pivot elements can be found among the first n1 rows of the matrix. This means that the n1 boundary conditions at the first point must contain some dependence on the first j = 1, 2, ..., n1 dependent variables, y[j][1]. If not, then the original square n1 × n1 subsection of the first block will appear to be singular, and the method will fail. Alternatively, we would have to allow the search for pivot elements to involve all N columns of the block, and this would require column swapping and far more bookkeeping. The code provides a simple method of reordering the variables, i.e., the columns of the s matrix, so that this can be done easily. End of important detail.
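As a concrete, simplified sketch of the first-block reduction just described (Figure 17.3.3), the fragment below applies a Gauss-Jordan pass to an n1 × (N+1) block a[1..n1][1..N+1] whose last column holds the right-hand side. It is illustrative only: it selects each pivot by a simple search down one column, whereas the routine pinvs given later in this section uses a scaled search over the whole square subsection; the array a and the routine name are assumptions, not part of the relaxation code.

    #include <math.h>

    /* Illustrative sketch: reduce the first block (Figure 17.3.3) in place.
       On exit the leading n1 x n1 square is the identity and columns
       n1+1..N+1 hold the stored "S" entries.  A zero pivot here is exactly
       the singular case discussed in the "important detail" above. */
    void reduce_first_block(float **a, int n1, int N)
    {
        int i,j,ip,jp;
        float big,piv,swap,factor;

        for (jp=1;jp<=n1;jp++) {            /* one pivot from each of the first n1 columns */
            big=0.0;
            ip=jp;
            for (i=jp;i<=n1;i++)            /* largest available element in this column */
                if (fabs(a[i][jp]) > big) { big=fabs(a[i][jp]); ip=i; }
            for (j=1;j<=N+1;j++) {          /* bring the pivot row into position jp */
                swap=a[ip][j]; a[ip][j]=a[jp][j]; a[jp][j]=swap;
            }
            piv=a[jp][jp];
            for (j=1;j<=N+1;j++) a[jp][j] /= piv;        /* normalize the pivot row */
            for (i=1;i<=n1;i++) {                         /* zero the pivot column in the other rows */
                if (i == jp) continue;
                factor=a[i][jp];
                for (j=1;j<=N+1;j++) a[i][j] -= factor*a[jp][j];
            }
        }
    }

After this reduction, each of the first n1 corrections at mesh point 1 is expressed in terms of the remaining n2 corrections and the right-hand side, which is all that the later blocks need.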
Figure 17.3.1. Matrix structure of a set of linear finite-difference equations (FDEs) with boundary conditions imposed at both endpoints. Here X represents a coefficient of the FDEs, V represents a component of the unknown solution vector, and B is a component of the known right-hand side. Empty spaces represent zeros. The matrix equation is to be solved by a special form of Gaussian elimination. (See text for details.)

Figure 17.3.2. Target structure of the Gaussian elimination. Once the matrix of Figure 17.3.1 has been reduced to this form, the solution follows quickly by backsubstitution.
Figure 17.3.3. Reduction process for the first (upper left) block of the matrix in Figure 17.3.1. (a) Original form of the block, (b) final form. (See text for explanation.)

Figure 17.3.4. Reduction process for intermediate blocks of the matrix in Figure 17.3.1. (a) Original form, (b) final form. (See text for explanation.)

Figure 17.3.5. Reduction process for the last (lower right) block of the matrix in Figure 17.3.1. (a) Original form, (b) final form. (See text for explanation.)
Next consider the block of N equations representing the FDEs that describe the relation between the 2N corrections at points 2 and 1. The elements of that block, together with results from the previous step, are illustrated in Figure 17.3.4. Note that by adding suitable multiples of rows from the first block we can reduce to zero the first n1 columns of the block (labeled by "Z"), and, to do so, we will need to alter only the columns from n1+1 to N and the vector element on the right-hand side. Of the remaining columns we can diagonalize a square subsection of N × N elements, labeled by "D" in the figure. In the process we alter the final set of n2+1 columns, denoted "A" in the figure. The second half of the figure shows the block when we finish operating on it, with the stored (n2+1) × N elements labeled by "S."

If we operate on the next set of equations corresponding to the FDEs coupling corrections at points 3 and 2, we see that the state of available results and new equations exactly reproduces the situation described in the previous paragraph. Thus, we can carry out those steps again for each block in turn through block M. Finally on block M+1 we encounter the remaining boundary conditions.

Figure 17.3.5 shows the final block of n2 FDEs relating the N corrections for variables at mesh point M, together with the result of reducing the previous block. Again, we can first use the prior results to zero the first n1 columns of the block. Now, when we diagonalize the remaining square section, we strike gold: We get values for the final n2 corrections at mesh point M.

With the final block reduced, the matrix has the desired form shown previously in Figure 17.3.2, and the matrix is ripe for backsubstitution. Starting with the bottom row and working up towards the top, at each stage we can simply determine one unknown correction in terms of known quantities.

The function solvde organizes the steps described above. The principal procedures used in the algorithm are performed by functions called internally by solvde. The function red eliminates leading columns of the s matrix using results from prior blocks. pinvs diagonalizes the square subsection of s and stores unreduced coefficients. bksub carries out the backsubstitution step. The user of solvde must understand the calling arguments, as described below, and supply a function difeq, called by solvde, that evaluates the s matrix for each block.

Most of the arguments in the call to solvde have already been described, but some require discussion. Array y[j][k] contains the initial guess for the solution, with j labeling the dependent variables at mesh points k. The problem involves ne FDEs spanning points k=1, ..., m. nb boundary conditions apply at the first point k=1. The array indexv[j] establishes the correspondence between columns of the s matrix, equations (17.3.8), (17.3.10), and (17.3.12), and the dependent variables. As described above, it is essential that the nb boundary conditions at k=1 involve the dependent variables referenced by the first nb columns of the s matrix. Thus, columns j of the s matrix can be ordered by the user in difeq to refer to derivatives with respect to the dependent variable indexv[j].
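As a hypothetical example of this ordering: suppose ne=3, nb=1, and the single boundary condition at k=1 involves only the second dependent variable, y[2][1]. Then the user would list y2 first, by a permutation stored in indexv rather than by renumbering the variables:

    int indexv[4];      /* hypothetical setup: ne=3 dependent variables */

    indexv[1]=2;        /* column 1 of s <-> derivatives with respect to y2 */
    indexv[2]=1;        /* column 2 of s <-> derivatives with respect to y1 */
    indexv[3]=3;        /* column 3 of s <-> derivatives with respect to y3 */

With this ordering, difeq stores each equation's derivative with respect to y2 in column 1, with respect to y1 in column 2, and with respect to y3 in column 3 (columns ne+1 to 2*ne for the point-k half of an interior block), so the pivot needed for the first block can always be found in column 1.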
The function attempts at most itmax correction cycles before returning, even if the solution has not converged. The parameters conv, slowc, and scalv relate to convergence. Each inversion of the matrix produces corrections for ne variables at m mesh points. We want these to become vanishingly small as the iterations proceed, but we must define a measure for the size of corrections. This error "norm" is very problem specific, so the user might wish to rewrite this section of the code as appropriate. In the program below we compute a value for the average correction err by summing the absolute value of all corrections, weighted by a scale factor appropriate to each type of variable:

$$
\mathrm{err} = \frac{1}{m \times ne} \sum_{k=1}^{m} \sum_{j=1}^{ne} \frac{\left|\Delta Y[j][k]\right|}{\mathrm{scalv}[j]} \tag{17.3.13}
$$

When err ≤ conv, the method has converged. Note that the user gets to supply an array scalv which measures the typical size of each variable.

Obviously, if err is large, we are far from a solution, and perhaps it is a bad idea to believe that the corrections generated from a first-order Taylor series are accurate. The number slowc modulates application of corrections.
After each iteration we apply only a fraction of the corrections found by matrix inversion:

$$
Y[j][k] \rightarrow Y[j][k] + \frac{\mathrm{slowc}}{\max(\mathrm{slowc},\,\mathrm{err})}\,\Delta Y[j][k] \tag{17.3.14}
$$

Thus, when err > slowc only a fraction of the corrections are used, but when err ≤ slowc the entire correction gets applied.

The call statement also supplies solvde with the array y[1..nyj][1..nyk] containing the initial trial solution, and workspace arrays c[1..ne][1..ne-nb+1][1..m+1], s[1..ne][1..2*ne+1]. The array c is the blockbuster: It stores the unreduced elements of the matrix built up for the backsubstitution step. If there are m mesh points, then there will be m+1 blocks, each requiring ne rows and ne-nb+1 columns. Although large, this is small compared with the (ne×m)² elements required for the whole matrix if we did not break it into blocks.

We now describe the workings of the user-supplied function difeq. The synopsis of the function is

    void difeq(int k, int k1, int k2, int jsf, int is1, int isf,
        int indexv[], int ne, float **s, float **y);

The only information passed from difeq to solvde is the matrix of derivatives s[1..ne][1..2*ne+1]; all other arguments are input to difeq and should not be altered. k indicates the current mesh point, or block number. k1, k2 label the first and last point in the mesh. If k=k1 or k>k2, the block involves the boundary conditions at the first or final points; otherwise the block acts on FDEs coupling variables at points k-1, k.

The convention on storing information into the array s[i][j] follows that used in equations (17.3.8), (17.3.10), and (17.3.12): Rows i label equations, columns j refer to derivatives with respect to dependent variables in the solution. Recall that each equation will depend on the ne dependent variables at either one or two points. Thus, j runs from 1 to either ne or 2*ne. The column ordering for dependent variables at each point must agree with the list supplied in indexv[j]. Thus, for a block not at a boundary, the first column multiplies ΔY(l=indexv[1], k-1), and the column ne+1 multiplies ΔY(l=indexv[1], k). is1, isf give the numbers of the starting and final rows that need to be filled in the s matrix for this block. jsf labels the column in which the difference equations Ej,k of equations (17.3.3)–(17.3.5) are stored. Thus, −s[i][jsf] is the vector on the right-hand side of the matrix. The reason for the minus sign is that difeq supplies the actual difference equation, Ej,k, not its negative. Note that solvde supplies a value for jsf such that the difference equation is put in the column just after all derivatives in the s matrix. Thus, difeq expects to find values entered into s[i][j] for rows is1 ≤ i ≤ isf and 1 ≤ j ≤ jsf.

Finally, s[1..nsi][1..nsj] and y[1..nyj][1..nyk] supply difeq with storage for s and the solution variables y for this iteration. An example of how to use this routine is given in the next section.
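Anticipating these conventions, here is a minimal, self-contained sketch of a difeq and driver for the two-variable problem y1' = y2, y2' = -y1 with y1 = 0 at the first mesh point and y1 = 1 at the last (so ne=2, nb=1), discretized by the midpoint rule (17.3.2). The choice of problem, mesh, initial guess, and the identifiers NE, NB, M, PI, and xmesh are illustrative assumptions only, not the worked example of the next section.

    #include "nrutil.h"

    #define NE 2                  /* number of equations */
    #define NB 1                  /* boundary conditions at the first point */
    #define M 41                  /* number of mesh points */
    #define PI 3.141592653589793

    void solvde(int itmax, float conv, float slowc, float scalv[], int indexv[],
        int ne, int nb, int m, float **y, float ***c, float **s);

    static float xmesh[M+1];      /* mesh abscissas, filled by the driver */

    void difeq(int k, int k1, int k2, int jsf, int is1, int isf,
        int indexv[], int ne, float **s, float **y)
    {
        float h;

        if (k == k1) {                       /* nb=1 condition at x1: y1 = 0, stored in row ne */
            s[2][ne+indexv[1]]=1.0;
            s[2][ne+indexv[2]]=0.0;
            s[2][jsf]=y[1][k1];
        } else if (k > k2) {                 /* n2=1 condition at xM: y1 = 1, stored in row 1 */
            s[1][ne+indexv[1]]=1.0;
            s[1][ne+indexv[2]]=0.0;
            s[1][jsf]=y[1][k2]-1.0;
        } else {                             /* interior block: midpoint FDE (17.3.2) */
            h=xmesh[k]-xmesh[k-1];
            s[1][indexv[1]] = -1.0;          /* dE1/dy1 at k-1 */
            s[1][indexv[2]] = -0.5*h;        /* dE1/dy2 at k-1 */
            s[1][ne+indexv[1]]=1.0;          /* dE1/dy1 at k   */
            s[1][ne+indexv[2]] = -0.5*h;     /* dE1/dy2 at k   */
            s[2][indexv[1]]=0.5*h;           /* dE2/dy1 at k-1 */
            s[2][indexv[2]] = -1.0;          /* dE2/dy2 at k-1 */
            s[2][ne+indexv[1]]=0.5*h;        /* dE2/dy1 at k   */
            s[2][ne+indexv[2]]=1.0;          /* dE2/dy2 at k   */
            s[1][jsf]=y[1][k]-y[1][k-1]-0.5*h*(y[2][k]+y[2][k-1]);   /* E1 */
            s[2][jsf]=y[2][k]-y[2][k-1]+0.5*h*(y[1][k]+y[1][k-1]);   /* E2 */
        }
    }

    int main(void)                /* minimal driver: trial solution, workspace, one call */
    {
        int k,indexv[NE+1];
        float scalv[NE+1],**y,**s,***c;

        y=matrix(1,NE,1,M);
        s=matrix(1,NE,1,2*NE+1);
        c=f3tensor(1,NE,1,NE-NB+1,1,M+1);
        indexv[1]=1;              /* the condition at x1 involves y1, so list y1 first */
        indexv[2]=2;
        scalv[1]=scalv[2]=1.0;
        for (k=1;k<=M;k++) {      /* uniform mesh on [0,pi/2] and a rough initial guess */
            xmesh[k]=0.5*PI*(k-1)/(M-1);
            y[1][k]=xmesh[k]/(0.5*PI);
            y[2][k]=1.0;
        }
        solvde(50,1.0e-5,1.0,scalv,indexv,NE,NB,M,y,c,s);
        free_f3tensor(c,1,NE,1,NE-NB+1,1,M+1);
        free_matrix(s,1,NE,1,2*NE+1);
        free_matrix(y,1,NE,1,M);
        return 0;
    }

Note that the interior block fills all ne rows, while the first and last blocks fill only the rows is1 to isf handed to them (here row 2 and row 1 respectively), exactly as described above; the converged y[1][k] should approximate sin(xmesh[k]).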
Many ideas in the following code are due to Eggleton [1].

#include <stdio.h>
#include <math.h>
#include "nrutil.h"

void solvde(int itmax, float conv, float slowc, float scalv[], int indexv[],
    int ne, int nb, int m, float **y, float ***c, float **s)
/* Driver routine for solution of two point boundary value problems by relaxation. itmax is the
maximum number of iterations. conv is the convergence criterion (see text). slowc controls
the fraction of corrections actually used after each iteration. scalv[1..ne] contains typical
sizes for each dependent variable, used to weight errors. indexv[1..ne] lists the column
ordering of variables used to construct the matrix s[1..ne][1..2*ne+1] of derivatives. (The
nb boundary conditions at the first mesh point must contain some dependence on the first nb
variables listed in indexv.) The problem involves ne equations for ne adjustable dependent
variables at each point. At the first mesh point there are nb boundary conditions. There are a
total of m mesh points. y[1..ne][1..m] is the two-dimensional array that contains the initial
guess for all the dependent variables at each mesh point. On each iteration, it is updated by the
calculated correction. The arrays c[1..ne][1..ne-nb+1][1..m+1] and s supply dummy
storage used by the relaxation code. */
{
    void bksub(int ne, int nb, int jf, int k1, int k2, float ***c);
    void difeq(int k, int k1, int k2, int jsf, int is1, int isf,
        int indexv[], int ne, float **s, float **y);
    void pinvs(int ie1, int ie2, int je1, int jsf, int jc1, int k,
        float ***c, float **s);
    void red(int iz1, int iz2, int jz1, int jz2, int jm1, int jm2, int jmf,
        int ic1, int jc1, int jcf, int kc, float ***c, float **s);
    int ic1,ic2,ic3,ic4,it,j,j1,j2,j3,j4,j5,j6,j7,j8,j9;
    int jc1,jcf,jv,k,k1,k2,km,kp,nvars,*kmax;
    float err,errj,fac,vmax,vz,*ermax;

    kmax=ivector(1,ne);
    ermax=vector(1,ne);
    k1=1;                               /* Set up row and column markers. */
    k2=m;
    nvars=ne*m;
    j1=1;
    j2=nb;
    j3=nb+1;
    j4=ne;
    j5=j4+j1;
    j6=j4+j2;
    j7=j4+j3;
    j8=j4+j4;
    j9=j8+j1;
    ic1=1;
    ic2=ne-nb;
    ic3=ic2+1;
    ic4=ne;
    jc1=1;
    jcf=ic3;
    for (it=1;it<=itmax;it++) {         /* Primary iteration loop. */
        k=k1;                           /* Boundary conditions at first point. */
        difeq(k,k1,k2,j9,ic3,ic4,indexv,ne,s,y);
        pinvs(ic3,ic4,j5,j9,jc1,k1,c,s);
        for (k=k1+1;k<=k2;k++) {        /* Finite difference equations at all point pairs. */
            kp=k-1;
            difeq(k,k1,k2,j9,ic1,ic4,indexv,ne,s,y);
            red(ic1,ic4,j1,j2,j3,j4,j9,ic3,jc1,jcf,kp,c,s);
            pinvs(ic1,ic4,j3,j9,jc1,k,c,s);
        }
        k=k2+1;                         /* Final boundary conditions. */
        difeq(k,k1,k2,j9,ic1,ic2,indexv,ne,s,y);
        red(ic1,ic2,j5,j6,j7,j8,j9,ic3,jc1,jcf,k2,c,s);
        pinvs(ic1,ic2,j7,j9,jcf,k2+1,c,s);
        bksub(ne,nb,jcf,k1,k2,c);       /* Backsubstitution. */
        err=0.0;
        for (j=1;j<=ne;j++) {           /* Convergence check, accumulate average error. */
            jv=indexv[j];
            errj=vmax=0.0;
            km=0;
            for (k=k1;k<=k2;k++) {      /* Find point with largest error, for each dependent variable. */
                vz=fabs(c[jv][1][k]);
                if (vz > vmax) {
                    vmax=vz;
                    km=k;
                }
                errj += vz;
            }
            err += errj/scalv[j];       /* Note weighting for each dependent variable. */
            ermax[j]=c[jv][1][km]/scalv[j];
            kmax[j]=km;
        }
        err /= nvars;
        fac=(err > slowc ? slowc/err : 1.0);    /* Reduce correction applied when error is large. */
        for (j=1;j<=ne;j++) {                   /* Apply corrections. */
            jv=indexv[j];
            for (k=k1;k<=k2;k++)
                y[j][k] -= fac*c[jv][1][k];
        }
        printf("\n%8s %9s %9s\n","Iter.","Error","FAC");   /* Summary of corrections for this step. */
        printf("%6d %12.6f %11.6f\n",it,err,fac);
        if (err < conv) {                       /* Converged: the point with largest error for each
                                                   variable can be monitored via kmax and ermax. */
            free_vector(ermax,1,ne);
            free_ivector(kmax,1,ne);
            return;
        }
    }
    nrerror("Too many iterations in solvde");   /* Convergence failed. */
}

void bksub(int ne, int nb, int jf, int k1, int k2, float ***c)
/* Backsubstitution, used internally by solvde. */
{
    int nbf,im,kp,k,j,i;
    float xx;

    nbf=ne-nb;
    im=1;
    for (k=k2;k>=k1;k--) {              /* Use recurrence relations to eliminate remaining dependences. */
        if (k == k1) im=nbf+1;
        kp=k+1;
        for (j=1;j<=nbf;j++) {
            xx=c[j][jf][kp];
            for (i=im;i<=ne;i++)
                c[i][jf][k] -= c[i][j][k]*xx;
        }
    }
    for (k=k1;k<=k2;k++) {              /* Reorder the corrections into column 1 of c. */
        kp=k+1;
        for (i=1;i<=nb;i++) c[i][1][k]=c[i+nbf][jf][k];
        for (i=1;i<=nbf;i++) c[i+nb][1][k]=c[i][jf][kp];
    }
}

#include <math.h>
#include "nrutil.h"

void pinvs(int ie1, int ie2, int je1, int jsf, int jc1, int k, float ***c,
    float **s)
/* Diagonalize the square subsection of the s matrix, and store the recursion coefficients in c;
used internally by solvde. */
{
    int js1,jpiv,jp,je2,jcoff,j,irow,ipiv,id,icoff,i,*indxr;
    float pivinv,piv,dum,big,*pscl;

    indxr=ivector(ie1,ie2);
    pscl=vector(ie1,ie2);
    je2=je1+ie2-ie1;
    js1=je2+1;
    for (i=ie1;i<=ie2;i++) {            /* Implicit pivoting, as in section 2.1. */
        big=0.0;
        for (j=je1;j<=je2;j++)
            if (fabs(s[i][j]) > big) big=fabs(s[i][j]);
        if (big == 0.0) nrerror("Singular matrix - row all 0, in pinvs");
        pscl[i]=1.0/big;
        indxr[i]=0;
    }
    for (id=ie1;id<=ie2;id++) {
        piv=0.0;
        for (i=ie1;i<=ie2;i++) {        /* Find pivot element. */
            if (indxr[i] == 0) {
                big=0.0;
                for (j=je1;j<=je2;j++) {
                    if (fabs(s[i][j]) > big) {
                        jp=j;
                        big=fabs(s[i][j]);
                    }
                }
                if (big*pscl[i] > piv) {
                    ipiv=i;
                    jpiv=jp;
                    piv=big*pscl[i];
                }
            }
        }
        if (s[ipiv][jpiv] == 0.0) nrerror("Singular matrix in routine pinvs");
        indxr[ipiv]=jpiv;               /* In place reduction. Save column ordering. */
        pivinv=1.0/s[ipiv][jpiv];
        for (j=je1;j<=jsf;j++) s[ipiv][j] *= pivinv;    /* Normalize pivot row. */
        s[ipiv][jpiv]=1.0;
        for (i=ie1;i<=ie2;i++) {        /* Reduce nonpivot elements in column. */
            if (indxr[i] != jpiv) {
                if (s[i][jpiv]) {
                    dum=s[i][jpiv];
                    for (j=je1;j<=jsf;j++)
                        s[i][j] -= dum*s[ipiv][j];
                    s[i][jpiv]=0.0;
                }
            }
        }
    }
    jcoff=jc1-js1;                      /* Sort and store unreduced coefficients. */
    icoff=ie1-je1;
    for (i=ie1;i<=ie2;i++) {
        irow=indxr[i]+icoff;
        for (j=js1;j<=jsf;j++) c[irow][j+jcoff][k]=s[i][j];
    }
    free_vector(pscl,ie1,ie2);
    free_ivector(indxr,ie1,ie2);
}

void red(int iz1, int iz2, int jz1, int jz2, int jm1, int jm2, int jmf,
    int ic1, int jc1, int jcf, int kc, float ***c, float **s)
/* Reduce columns jz1-jz2 of the s matrix, using previous results as stored in the c matrix. Only
columns jm1-jm2,jmf are affected by the prior results. red is used internally by solvde. */
{
    int loff,l,j,ic,i;
    float vx;

    loff=jc1-jm1;
    ic=ic1;