Chapter 18. Integral Equations and Inverse Theory

18.0 Introduction

Many people, otherwise numerically knowledgeable, imagine that the numerical solution of integral equations must be an extremely arcane topic, since, until recently, it was almost never treated in numerical analysis textbooks. Actually there is a large and growing literature on the numerical solution of integral equations; several monographs have by now appeared [1-3]. One reason for the sheer volume of this activity is that there are many different kinds of equations, each with many different possible pitfalls; often many different algorithms have been proposed to deal with a single case.

There is a close correspondence between linear integral equations, which specify linear, integral relations among functions in an infinite-dimensional function space, and plain old linear equations, which specify analogous relations among vectors in a finite-dimensional vector space. Because this correspondence lies at the heart of most computational algorithms, it is worth making it explicit as we recall how integral equations are classified.

Fredholm equations involve definite integrals with fixed upper and lower limits. An inhomogeneous Fredholm equation of the first kind has the form

    g(t) = ∫_a^b K(t, s) f(s) ds                                        (18.0.1)

Here f(t) is the unknown function to be solved for, while g(t) is a known "right-hand side." (In integral equations, for some odd reason, the familiar "right-hand side" is conventionally written on the left!) The function of two variables, K(t, s), is called the kernel. Equation (18.0.1) is analogous to the matrix equation

    K · f = g                                                            (18.0.2)

whose solution is f = K^{-1} · g, where K^{-1} is the matrix inverse. Like equation (18.0.2), equation (18.0.1) has a unique solution whenever g is nonzero (the homogeneous case with g = 0 is almost never useful) and K is invertible. However, as we shall see, this latter condition is as often the exception as the rule.
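To make the correspondence concrete, here is a minimal C sketch (not a routine from this book; the kernel kfunc, the right-hand side gfunc, the grid size, the limits, and the choice of trapezoidal weights are all placeholder assumptions) that discretizes equation (18.0.1) on an N-point grid, so that the integral equation collapses to the N x N matrix equation (18.0.2):

```c
#include <stdio.h>

#define N 8                     /* number of grid points */

/* Placeholder kernel and right-hand side; supply your own. */
static double kfunc(double t, double s) { return 1.0/(1.0 + (t - s)*(t - s)); }
static double gfunc(double t) { return t; }

int main(void)
{
    double a = 0.0, b = 1.0;            /* integration limits of (18.0.1) */
    double h = (b - a)/(N - 1);
    double t[N], w[N], K[N][N], g[N];
    int i, j;

    for (j = 0; j < N; j++) {           /* abscissas and trapezoidal weights */
        t[j] = a + j*h;
        w[j] = (j == 0 || j == N-1) ? 0.5*h : h;
    }
    for (i = 0; i < N; i++) {           /* K[i][j] = w_j K(t_i, s_j), g[i] = g(t_i) */
        g[i] = gfunc(t[i]);
        for (j = 0; j < N; j++)
            K[i][j] = w[j]*kfunc(t[i], t[j]);
    }
    /* The first-kind equation (18.0.1) is now the matrix equation (18.0.2),
       K . f = g, solvable by any of the methods of Chapter 2.  For a
       smoothing kernel, however, this matrix is typically close to singular. */
    printf("built %d x %d discretized kernel\n", N, N);
    return 0;
}
```

Any of the linear-algebra routines of Chapter 2 will solve the resulting system; whether the answer means anything is the conditioning question taken up below.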
The analog of the finite-dimensional eigenvalue problem

    (K − σ1) · f = g                                                     (18.0.3)

is called a Fredholm equation of the second kind, usually written

    f(t) = λ ∫_a^b K(t, s) f(s) ds + g(t)                                (18.0.4)

Again, the notational conventions do not exactly correspond: λ in equation (18.0.4) is 1/σ in (18.0.3), while the right-hand side g of (18.0.3) is −g/λ. If g is zero (in either form), then the equation is said to be homogeneous. If the kernel K(t, s) is bounded, then, like equation (18.0.3), equation (18.0.4) has the property that its homogeneous form has solutions for at most a denumerably infinite set λ = λ_n, n = 1, 2, ..., the eigenvalues. The corresponding solutions f_n(t) are the eigenfunctions. The eigenvalues are real if the kernel is symmetric.

In the inhomogeneous case of nonzero g, equations (18.0.3) and (18.0.4) are soluble except when λ (or σ) is an eigenvalue — because the integral operator (or matrix) is singular then. In integral equations this dichotomy is called the Fredholm alternative.

Fredholm equations of the first kind are often extremely ill-conditioned. Applying the kernel to a function is generally a smoothing operation, so the solution, which requires inverting the operator, will be extremely sensitive to small changes or errors in the input. Smoothing often actually loses information, and there is no way to get it back in an inverse operation. Specialized methods have been developed for such equations, which are often called inverse problems. In general, a method must augment the information given with some prior knowledge of the nature of the solution. This prior knowledge is then used, in one way or another, to restore lost information. We will introduce such techniques in §18.4.

Inhomogeneous Fredholm equations of the second kind are much less often ill-conditioned. Equation (18.0.4) can be rewritten as

    ∫_a^b [K(t, s) − σ δ(t − s)] f(s) ds = −σ g(t)                       (18.0.5)

where δ(t − s) is a Dirac delta function (and where we have changed from λ to its reciprocal σ for clarity). If σ is large enough in magnitude, then equation (18.0.5) is, in effect, diagonally dominant and thus well-conditioned. Only if σ is small do we go back to the ill-conditioned case.

Homogeneous Fredholm equations of the second kind are likewise not particularly ill-posed. If K is a smoothing operator, then it will map many f's to zero, or near-zero; there will thus be a large number of degenerate or nearly degenerate eigenvalues around σ = 0 (λ → ∞), but this will cause no particular computational difficulties. In fact, we can now see that the magnitude of σ needed to rescue the inhomogeneous equation (18.0.5) from an ill-conditioned fate is generally much less than that required for diagonal dominance. Since the σ term shifts all eigenvalues, it is enough that it be large enough to shift a smoothing operator's forest of near-zero eigenvalues away from zero, so that the resulting operator becomes invertible (except, of course, at the discrete eigenvalues).
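To see the diagonal-dominance claim in action, here is a short sketch (reusing the placeholder kernel and trapezoidal discretization assumed above, so the reported threshold is illustrative only) that forms the discretized operator of equation (18.0.5), A_{ij} = w_j K(t_i, s_j) − σ δ_{ij}, and reports whether it is strictly diagonally dominant for a range of σ:

```c
#include <math.h>
#include <stdio.h>

#define N 8

static double kfunc(double t, double s) { return 1.0/(1.0 + (t - s)*(t - s)); }

/* Return 1 if the discretized operator of (18.0.5),
   A[i][j] = w_j K(t_i, s_j) - sigma*delta_ij, is strictly diagonally dominant. */
static int diag_dominant(double sigma, double a, double b)
{
    double h = (b - a)/(N - 1), t[N], w[N];
    int i, j;

    for (j = 0; j < N; j++) {           /* abscissas and trapezoidal weights */
        t[j] = a + j*h;
        w[j] = (j == 0 || j == N-1) ? 0.5*h : h;
    }
    for (i = 0; i < N; i++) {
        double diag = fabs(w[i]*kfunc(t[i], t[i]) - sigma), offdiag = 0.0;
        for (j = 0; j < N; j++)
            if (j != i) offdiag += fabs(w[j]*kfunc(t[i], t[j]));
        if (diag <= offdiag) return 0;
    }
    return 1;
}

int main(void)
{
    double sigma;
    for (sigma = 0.125; sigma <= 8.0; sigma *= 2.0)
        printf("sigma = %6.3f   diagonally dominant: %s\n",
               sigma, diag_dominant(sigma, 0.0, 1.0) ? "yes" : "no");
    return 0;
}
```

As the text emphasizes, strict diagonal dominance is a much stronger property than is actually needed; σ merely has to shift the smoothing operator's near-zero eigenvalues away from zero.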
Volterra equations are a special case of Fredholm equations with K(t, s) = 0 for s > t. Chopping off the unnecessary part of the integration, Volterra equations are written in a form where the upper limit of integration is the independent variable t.
The Volterra equation of the first kind

    g(t) = ∫_a^t K(t, s) f(s) ds                                         (18.0.6)

has as its analog the matrix equation (now written out in components)

    Σ_{j=1}^{k} K_{kj} f_j = g_k                                         (18.0.7)

Comparing with equation (18.0.2), we see that the Volterra equation corresponds to a matrix K that is lower (i.e., left) triangular, with zero entries above the diagonal. As we know from Chapter 2, such matrix equations are trivially soluble by forward substitution. Techniques for solving Volterra equations are similarly straightforward. When experimental measurement noise does not dominate, Volterra equations of the first kind tend not to be ill-conditioned; the upper limit to the integral introduces a sharp step that conveniently spoils any smoothing properties of the kernel.

The Volterra equation of the second kind is written

    f(t) = ∫_a^t K(t, s) f(s) ds + g(t)                                  (18.0.8)

whose matrix analog is the equation

    (K − 1) · f = g                                                      (18.0.9)

with K lower triangular. The reason there is no λ in these equations is that (i) in the inhomogeneous case (nonzero g) it can be absorbed into K, while (ii) in the homogeneous case (g = 0), it is a theorem that Volterra equations of the second kind with bounded kernels have no eigenvalues with square-integrable eigenfunctions.
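Because each row of the triangular system involves only one new unknown, the whole solution is a single forward sweep. Here is a minimal sketch of such a scheme for equation (18.0.8), using the trapezoidal rule on a uniform grid; the kernel and right-hand side below (kfunc, gfunc) are toy choices made up for illustration, not anything prescribed by the text:

```c
#include <stdio.h>

#define N 9                     /* number of grid points */

/* Toy kernel and right-hand side for illustration only. */
static double kfunc(double t, double s) { return t - s; }
static double gfunc(double t) { (void)t; return 1.0; }   /* constant forcing */

int main(void)
{
    double a = 0.0, b = 2.0, h = (b - a)/(N - 1);
    double t[N], f[N];
    int i, j;

    for (i = 0; i < N; i++) t[i] = a + i*h;
    f[0] = gfunc(t[0]);                      /* at t = a the integral vanishes */
    for (i = 1; i < N; i++) {                /* march forward, one row at a time */
        double sum = 0.5*kfunc(t[i], t[0])*f[0];   /* trapezoidal sum over known f's */
        for (j = 1; j < i; j++)
            sum += kfunc(t[i], t[j])*f[j];
        /* solve the i-th row of the triangular system for the single new unknown f[i] */
        f[i] = (gfunc(t[i]) + h*sum)/(1.0 - 0.5*h*kfunc(t[i], t[i]));
    }
    for (i = 0; i < N; i++)
        printf("t = %5.3f   f = %10.6f\n", t[i], f[i]);
    return 0;
}
```

For this particular toy problem, f(t) = ∫_0^t (t − s) f(s) ds + 1, the exact solution is f(t) = cosh t, which gives a convenient check on the discretization error as N is increased. A fuller treatment of Volterra equations is the subject of §18.2.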
We have specialized our definitions to the case of linear integral equations. The integrand in a nonlinear version of equation (18.0.1) or (18.0.6) would be K(t, s, f(s)) instead of K(t, s) f(s); a nonlinear version of equation (18.0.4) or (18.0.8) would have an integrand K(t, s, f(t), f(s)). Nonlinear Fredholm equations are considerably more complicated than their linear counterparts. Fortunately, they do not occur as frequently in practice and we shall by and large ignore them in this chapter. By contrast, solving nonlinear Volterra equations usually involves only a slight modification of the algorithm for linear equations, as we shall see.

Almost all methods for solving integral equations numerically make use of quadrature rules, frequently Gaussian quadratures. This would be a good time for you to go back and review §4.5, especially the advanced material towards the end of that section.

In the sections that follow, we first discuss Fredholm equations of the second kind with smooth kernels (§18.1). Nontrivial quadrature rules come into the discussion, but we will be dealing with well-conditioned systems of equations. We then return to Volterra equations (§18.2), and find that simple and straightforward methods are generally satisfactory for these equations.

In §18.3 we discuss how to proceed in the case of singular kernels, focusing largely on Fredholm equations (both first and second kinds). Singularities require special quadrature rules, but they are also sometimes blessings in disguise, since they can spoil a kernel's smoothing and make problems well-conditioned.

In §§18.4–18.7 we face up to the issues of inverse problems. §18.4 is an introduction to this large subject.

We should note here that wavelet transforms, already discussed in §13.10, are applicable not only to data compression and signal processing, but can also be used to transform some classes of integral equations into sparse linear problems that allow fast solution. You may wish to review §13.10 as part of reading this chapter.

Some subjects, such as integro-differential equations, we must simply declare to be beyond our scope. For a review of methods for integro-differential equations, see Brunner [4].

It should go without saying that this one short chapter can only barely touch on a few of the most basic methods involved in this complicated subject.

CITED REFERENCES AND FURTHER READING:

Delves, L.M., and Mohamed, J.L. 1985, Computational Methods for Integral Equations (Cambridge, U.K.: Cambridge University Press). [1]
Linz, P. 1985, Analytical and Numerical Methods for Volterra Equations (Philadelphia: S.I.A.M.). [2]
Atkinson, K.E. 1976, A Survey of Numerical Methods for the Solution of Fredholm Integral Equations of the Second Kind (Philadelphia: S.I.A.M.). [3]
Brunner, H. 1988, in Numerical Analysis 1987, Pitman Research Notes in Mathematics vol. 170, D.F. Griffiths and G.A. Watson, eds. (Harlow, Essex, U.K.: Longman Scientific and Technical), pp. 18–38. [4]
Smithies, F. 1958, Integral Equations (Cambridge, U.K.: Cambridge University Press).
Kanwal, R.P. 1971, Linear Integral Equations (New York: Academic Press).
Green, C.D. 1969, Integral Equation Methods (New York: Barnes & Noble).

18.1 Fredholm Equations of the Second Kind

We desire a numerical solution for f(t) in the equation

    f(t) = λ ∫_a^b K(t, s) f(s) ds + g(t)                                (18.1.1)

The method we describe, a very basic one, is called the Nystrom method. It requires the choice of some approximate quadrature rule:

    ∫_a^b y(s) ds = Σ_{j=1}^{N} w_j y(s_j)                               (18.1.2)

Here the set {w_j} are the weights of the quadrature rule, while the N points {s_j} are the abscissas.

What quadrature rule should we use? It is certainly possible to solve integral equations with low-order quadrature rules like the repeated trapezoidal or Simpson's