18.5 Linear Regularization Methods

and (ii) it does exploit gradient information, which we can readily compute: The gradient of equation (18.5.6) is

    ∇(A + λB) = 2[(A^T · A + λH) · u − A^T · b]        (18.5.20)

(cf. 18.5.8). Evaluation of both the function and the gradient should of course take advantage of the sparsity of A, for example via the routines sprsax and sprstx in §2.7. We will discuss the conjugate gradient technique further in §18.7, in the context of the (nonlinear) maximum entropy method. Some of that discussion can apply here as well.

The conjugate gradient method notwithstanding, application of the unsophisticated steepest descent method (see §10.6) can sometimes produce useful results, particularly when combined with projections onto convex sets (see below). If the solution after k iterations is denoted u^(k), then after k + 1 iterations we have

    u^(k+1) = [1 − ε(A^T · A + λH)] · u^(k) + ε A^T · b        (18.5.21)

Here ε is a parameter that dictates how far to move in the downhill gradient direction. The method converges when ε is small enough, in particular satisfying

    0 < ε < 2 / [max eigenvalue (A^T · A + λH)]        (18.5.22)

There exist complicated schemes for finding optimal values or sequences for ε, see [7]; or, one can adopt an experimental approach, evaluating (18.5.6) to be sure that downhill steps are in fact being taken.
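To make the bookkeeping concrete, here is a minimal dense-matrix sketch of evaluating the gradient (18.5.20) and taking one step of (18.5.21); it uses the fact that the update (18.5.21) is just u ← u − (ε/2)∇. The routine names and the dense storage are illustrative assumptions only; in practice one would exploit the sparsity of A, e.g. with sprsax and sprstx, rather than the dense loops shown here.

    #include <stdlib.h>

    /* grad = 2[(A^T.A + lambda*H).u - A^T.b], equation (18.5.20).
       A is m x n, H is n x n, b has m components, u and grad have n. */
    void regularized_gradient(int m, int n, const double *A, const double *H,
                              const double *b, const double *u,
                              double lambda, double *grad)
    {
        int i, j, k;
        double *r = (double *) malloc(m * sizeof(double));   /* r = A.u - b */
        for (i = 0; i < m; i++) {
            r[i] = -b[i];
            for (j = 0; j < n; j++) r[i] += A[i*n + j] * u[j];
        }
        for (j = 0; j < n; j++) {
            double g = 0.0;
            for (i = 0; i < m; i++) g += A[i*n + j] * r[i];   /* A^T.(A.u - b) */
            for (k = 0; k < n; k++) g += lambda * H[j*n + k] * u[k];
            grad[j] = 2.0 * g;
        }
        free(r);
    }

    /* One iteration of (18.5.21), written as u <- u - (eps/2)*grad, which is
       algebraically identical to u <- [1 - eps(A^T.A + lambda*H)].u + eps*A^T.b */
    void steepest_descent_step(int m, int n, const double *A, const double *H,
                               const double *b, double *u, double lambda, double eps)
    {
        int j;
        double *grad = (double *) malloc(n * sizeof(double));
        regularized_gradient(m, n, A, H, b, u, lambda, grad);
        for (j = 0; j < n; j++) u[j] -= 0.5 * eps * grad[j];
        free(grad);
    }

A driver would call steepest_descent_step repeatedly, with ε chosen to satisfy (18.5.22), evaluating (18.5.6) now and then to confirm that each step really does go downhill.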
In those image processing problems where the final measure of success is somewhat subjective (e.g., "how good does the picture look?"), iteration (18.5.21) sometimes produces significantly improved images long before convergence is achieved. This probably accounts for much of its use, since its mathematical convergence is extremely slow. In fact, (18.5.21) can be used with H = 0, in which case the solution is not regularized at all, and full convergence would be disastrous! This is called Van Cittert's method and goes back to the 1930s. A number of iterations on the order of 1000 is not uncommon [7].

Deterministic Constraints: Projections onto Convex Sets

A set of possible underlying functions (or images) {u} is said to be convex if, for any two elements u_a and u_b in the set, all the linearly interpolated combinations

    (1 − η) u_a + η u_b,    0 ≤ η ≤ 1        (18.5.23)

are also in the set. Many deterministic constraints that one might want to impose on the solution u to an inverse problem in fact define convex sets, for example (simple projections onto the sets below are sketched after the list):

• positivity
• compact support (i.e., zero value outside of a certain region)
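As an illustration, here is a minimal sketch of projection operators for those two constraints; the function names and the support mask are assumptions for this sketch, not routines from this book. Interleaving such projections with steps of (18.5.21) is the essence of the projections-onto-convex-sets idea mentioned above.

    /* Project u onto the positivity constraint: clip negative values to zero. */
    void project_positive(int n, double *u)
    {
        int j;
        for (j = 0; j < n; j++)
            if (u[j] < 0.0) u[j] = 0.0;
    }

    /* Project u onto a compact-support constraint: zero it outside the allowed
       region, where support[j] is nonzero inside the region and zero outside. */
    void project_support(int n, double *u, const int *support)
    {
        int j;
        for (j = 0; j < n; j++)
            if (!support[j]) u[j] = 0.0;
    }

Each routine maps an arbitrary u to the nearest member of the corresponding convex set, which is why applying them after every descent step cannot push the iterate away from the feasible region.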