494 Chapter 11. Eigensystems

Sample page from NUMERICAL RECIPES IN C: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43108-5). Copyright (C) 1988-1992 by Cambridge University Press. Programs Copyright (C) 1988-1992 by Numerical Recipes Software.

where bk and τk are our current guesses for some eigenvector and eigenvalue of interest (let's say, xn and λn). Normalize bk so that bk · bk = 1.
The exact eigenvector and eigenvalue satisfy

    A · xn = λn xn                    (11.7.7)

so

    (A − τk 1) · xn = (λn − τk) xn    (11.7.8)

Since y of (11.7.6) is an improved approximation to xn, we normalize it and set

    bk+1 = y / |y|                    (11.7.9)

We get an improved estimate of the eigenvalue by substituting our improved guess y for xn in (11.7.8). By (11.7.6), the left-hand side is bk, so calling λn our new value τk+1, we find

    τk+1 = τk + 1 / (bk · y)          (11.7.10)

While the above formulas look simple enough, in practice the implementation can be quite tricky. The first question to be resolved is when to use inverse iteration. Most of the computational load occurs in solving the linear system (11.7.6). Thus a possible strategy is first to reduce the matrix A to a special form that allows easy solution of (11.7.6). Tridiagonal form for symmetric matrices or Hessenberg form for nonsymmetric matrices are the obvious choices. Then apply inverse iteration to generate all the eigenvectors. While this is an O(N³) method for symmetric matrices, it is many times less efficient than the QL method given earlier. In fact, even the best inverse iteration packages are less efficient than the QL method as soon as more than about 25 percent of the eigenvectors are required. Accordingly, inverse iteration is generally used when one already has good eigenvalues and wants only a few selected eigenvectors.

You can write a simple inverse iteration routine yourself using LU decomposition to solve (11.7.6). You can decide whether to use the general LU algorithm we gave in Chapter 2 or whether to take advantage of tridiagonal or Hessenberg form. Note that, since the linear system (11.7.6) is nearly singular, you must be careful to use a version of LU decomposition like that in §2.3 which replaces a zero pivot with a very small number.

We have chosen not to give a general inverse iteration routine in this book, because it is quite cumbersome to take account of all the cases that can arise. Routines are given, for example, in [1,2].
If you use these, or write your own routine, you may appreciate the following pointers.

One starts by supplying an initial value τ0 for the eigenvalue λn of interest. Choose a random normalized vector b0 as the initial guess for the eigenvector xn, and solve (11.7.6). The new vector y is bigger than b0 by a "growth factor" |y|, which ideally should be large. Equivalently, the change in the eigenvalue, which by (11.7.10) is essentially 1/|y|, should be small. The following cases can arise:

• If the growth factor is too small initially, then we assume we have made a "bad" choice of random vector. This can happen not just because of a small βn in (11.7.5), but also in the case of a defective matrix, when (11.7.5) does not even apply (see, e.g., [1] or [3] for details). We go back to the beginning and choose a new initial vector.