Advanced Review wires.wiley.com/compstats

where L and U are block triangular but U is not necessarily triangular. This is in general different from the usual LU factorization. A less restrictive analog of Theorem 1 holds (Ref 5, Thm. 13.2).

Theorem 3 The matrix A = (A_{ij})_{i,j=1}^{m} ∈ R^{n×n} has a unique block LU factorization if and only if the first m − 1 leading principal block submatrices of A are nonsingular.

The numerical stability of block LU factorization is less satisfactory than for the usual LU factorization. However, if A is diagonally dominant by columns, or block diagonally dominant by columns in the sense that

\[
\|A_{jj}^{-1}\|^{-1} - \sum_{i=1,\, i \ne j}^{m} \|A_{ij}\| \ge 0, \qquad j = 1\colon m,
\]

then the factorization can be shown to be numerically stable (Ref 5, Ch. 13).

Block LU factorization is motivated by the desire to maximize efficiency on modern computers through the use of matrix–matrix operations. It has also been widely used for block tridiagonal matrices arising in the discretization of partial differential equations.

ITERATIVE REFINEMENT

Iterative refinement is a procedure for improving a computed solution x̂ to a linear system Ax = b, usually one computed by GE. The process repeats the three steps

1. Compute r = b − Ax̂.
2.
Solve Ad = r.
3. Update x̂ ← x̂ + d.

In the absence of rounding errors, x̂ is the exact solution to the system after one iteration of the three steps. In practice, rounding errors vitiate all three steps and the process is iterative. For x̂ computed by GE, the system Ad = r is solved using the LU factorization already computed, so each iteration requires only O(n^2) flops.

Iterative refinement was popular in the 1960s and 1970s, when it was implemented with the residual r computed at twice the working precision, which we call mixed precision iterative refinement. On some machines of that era it was possible to accumulate inner products in extra precision in hardware, making implementation of the process easy. From the 1980s onward, computing extra precision residuals became problematic and this spurred research into fixed precision iterative refinement, where only one precision is used throughout. In the last few years mixed precision iterative refinement has come back into favor, because modern processors either have extra precision registers or can perform arithmetic in single precision much faster than in double precision, but also because standardized routines for extra precision computation are now available (Refs 28–30).

The following theorem summarizes the benefits iterative refinement brings to the forward error (Ref 5, Sect. 12.1).

Theorem 4 Let iterative refinement be applied to the nonsingular linear system Ax = b in conjunction with GE with partial pivoting. Provided A is not too ill conditioned, iterative refinement reduces the forward error at each stage until it produces an x̂ for which

\[
\frac{\|x - \hat{x}\|_\infty}{\|\hat{x}\|_\infty} \approx
\begin{cases}
u, & \text{for mixed precision}, \\
\operatorname{cond}(A, x)\, u, & \text{for fixed precision},
\end{cases}
\]

where cond(A, x) = \| \, |A^{-1}| |A| |x| \, \|_\infty / \|x\|_\infty.

This theorem tells only part of the story. Under suitable assumptions, iterative refinement leads to a small componentwise backward error, as first shown by Skeel (Ref 31), even for fixed precision refinement.
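The three steps above can be sketched in a few lines of Python using NumPy and SciPy. This is a minimal illustration of fixed precision refinement, not code from the article: the function name `refine`, the stopping test on the size of the correction d, and the demo system are all our own choices.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def refine(A, b, max_iter=10):
    """Fixed precision iterative refinement reusing one LU factorization."""
    lu_piv = lu_factor(A)          # GE with partial pivoting: O(n^3), done once
    x = lu_solve(lu_piv, b)        # initial computed solution x_hat
    eps = np.finfo(A.dtype).eps    # unit roundoff u for the working precision
    for _ in range(max_iter):
        r = b - A @ x              # step 1: residual, at working precision
        d = lu_solve(lu_piv, r)    # step 2: solve Ad = r with the stored LU factors, O(n^2)
        x = x + d                  # step 3: update
        # Stop once the correction is negligible relative to the solution.
        if np.linalg.norm(d, np.inf) <= eps * np.linalg.norm(x, np.inf):
            break
    return x

# Demo on a random system with known solution x_exact = (1, ..., 1).
rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))
x_exact = np.ones(n)
b = A @ x_exact
x = refine(A, b)
err = np.linalg.norm(x - x_exact, np.inf) / np.linalg.norm(x_exact, np.inf)
```

Consistent with Theorem 4, the forward error `err` ends up of order cond(A, x)u for this fixed precision variant. A mixed precision variant would compute the residual r in extra precision (for example, factorizing in single precision and forming r in double) before rounding it back to the working precision.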
For the definition of componentwise backward error and further details, see Ref 5, Sect. 12.2.

CONCLUSION

GE with partial pivoting continues to be the standard numerical method for solving linear systems that are not so large that considerations of computational cost or storage dictate the use of iterative methods. The first computer program for GE with partial pivoting was probably that of Wilkinson (Ref 35); his code implemented iterative refinement too. It is perhaps surprising that it is still not understood why the numerical stability of this method is so good in practice, or equivalently why large element growth with partial pivoting is not seen in practical computations.

This overview has omitted a number of GE-related topics, including

• row or column scaling (or equilibration),
• Gauss–Jordan elimination, in which at each stage of the elimination elements both above and below the diagonal are eliminated, and which is principally used as a method for matrix inversion,

236 © 2011 John Wiley & Sons, Inc. Volume 3, May/June 2011