they are residual discrepancies from linear prediction. Although we will not prove it formally, it is intuitively believable that the x's are independently random and therefore have a flat (white noise) spectrum. (Roughly speaking, any residual correlations left in the x's would have allowed a more accurate linear prediction, and would have been removed.) The overall normalization of this flat spectrum is just the mean square amplitude of the x's. But this is exactly the quantity computed in equation (13.6.13) and returned by the routine memcof as xms. Thus, the coefficients a0 and ak in equation (13.7.4) are related to the LP coefficients returned by memcof simply by

    a_0 = \mathrm{xms}, \qquad a_k = -d(k), \quad k = 1, \ldots, M        (13.7.6)

There is also another way to describe the relation between the ak's and the autocorrelation components φk. The Wiener-Khinchin theorem (12.0.12) says that the Fourier transform of the autocorrelation is equal to the power spectrum. In z-transform language, this Fourier transform is just a Laurent series in z. The equation that is to be satisfied by the coefficients in equation (13.7.4) is thus

    \frac{a_0}{\left| 1 + \sum_{k=1}^{M} a_k z^k \right|^2} \approx \sum_{j=-M}^{M} \phi_j z^j        (13.7.7)

The approximately equal sign in (13.7.7) has a somewhat special interpretation. It means that the series expansion of the left-hand side is supposed to agree with the right-hand side term by term from z^{-M} to z^{M}. Outside this range of terms, the right-hand side is obviously zero, while the left-hand side will still have nonzero terms. Notice that M, the number of coefficients in the approximation on the left-hand side, can be any integer up to N, the total number of autocorrelations available. (In practice, one often chooses M much smaller than N.) M is called the order or number of poles of the approximation.

Whatever the chosen value of M, the series expansion of the left-hand side of (13.7.7) defines a certain sort of extrapolation of the autocorrelation function to lags larger than M, in fact even to lags larger than N, i.e., larger than the run of data can actually measure. It turns out that this particular extrapolation can be shown to have, among all possible extrapolations, the maximum entropy in a definable information-theoretic sense. Hence the name maximum entropy method, or MEM. The maximum entropy property has caused MEM to acquire a certain "cult" popularity; one sometimes hears that it gives an intrinsically "better" estimate than is given by other methods. Don't believe it. MEM has the very cute property of being able to fit sharp spectral features, but there is nothing else magical about its power spectrum estimates.

The operations count in memcof scales as the product of N (the number of data points) and M (the desired order of the MEM approximation). If M were chosen to be as large as N, then the method would be much slower than the N log N FFT methods of the previous section. In practice, however, one usually wants to limit the order (or number of poles) of the MEM approximation to a few times the number of sharp spectral features that one desires it to fit. With this restricted number of poles, the method will smooth the spectrum somewhat, but this is often a desirable property. While exact values depend on the application, one might take M = 10 or 20 or 50 for N = 1000 or 10000. In that case MEM estimation is not much slower than FFT estimation.
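As a concrete illustration of (13.7.6) and of the modest orders just recommended, here is a minimal usage sketch, not taken from the book's listings: it assumes the memcof prototype of the previous section, void memcof(float data[], int n, int m, float *xms, float d[]), together with that routine's unit-offset array convention; the synthetic data and the choice M = 20 for N = 1000 are purely illustrative.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

void memcof(float data[], int n, int m, float *xms, float d[]);  /* routine of the previous section */

int main(void)
{
    int n = 1000;      /* N, the number of data points */
    int m = 20;        /* M, a modest order: a few times the number of sharp features expected */
    float xms, *data, *d, *a;
    int i, k;

    data = (float *) malloc((n + 1) * sizeof(float));   /* data[1..n], unit-offset as in memcof */
    d    = (float *) malloc((m + 1) * sizeof(float));   /* d[1..m], LP coefficients from memcof */
    a    = (float *) malloc((m + 1) * sizeof(float));   /* a[0..m], coefficients of (13.7.4) */

    for (i = 1; i <= n; i++)                            /* illustrative data: a sine plus weak noise */
        data[i] = (float)(sin(0.6 * i) + 0.1 * ((double) rand() / RAND_MAX - 0.5));

    memcof(data, n, m, &xms, d);

    a[0] = xms;                                         /* equation (13.7.6): a0 = xms ... */
    for (k = 1; k <= m; k++) a[k] = -d[k];              /* ... and ak = -d(k), k = 1..M */

    printf("a0 = %g\n", a[0]);
    free(a); free(d); free(data);
    return 0;
}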
We feel obliged to warn you that memcof can be a bit quirky at times. If the number of poles or number of data points is too large, roundoff error can be a problem, even in double precision. With "peaky" data (i.e., data with extremely sharp spectral features), the algorithm may suggest split peaks even at modest orders, and the peaks may shift with the phase of the sine wave. Also, with noisy input functions, if you choose too high an order, you will find spurious peaks galore! Some experts recommend the use of this algorithm in conjunction with more conservative methods, like periodograms, to help choose the correct model order, and to avoid getting too fooled by spurious spectral features. MEM can be finicky, but it can also do remarkable things. We recommend that you try it out, cautiously, on your own problems.

We now turn to the evaluation of the MEM spectral estimate from its coefficients. The MEM estimation (13.7.4) is a function of continuously varying frequency f. There is no special significance to specific equally spaced frequencies as there was in the FFT case.
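To make that transition concrete, here is one hedged way such an evaluation could look in code, not the book's own routine: it takes a0 and a[1..m], already converted from memcof's output via (13.7.6), and evaluates (13.7.4) at a single value of fdt = f times the sampling interval, using z = e^{2*pi*i*f*Delta}.

#include <math.h>

/* Illustrative sketch only: evaluate the MEM estimate (13.7.4) at one frequency,
   given fdt = f*Delta, a0, and a[1..m] obtained from memcof via (13.7.6).
   Sign convention: if memcof's d[k] is passed directly instead of a[k] = -d[k],
   the signs in the sums below must be flipped. */
double mem_estimate(double fdt, double a0, double a[], int m)
{
    const double twopi = 6.283185307179586;
    double theta = twopi * fdt;           /* z = exp(i*theta) lies on the unit circle */
    double sumr = 1.0, sumi = 0.0;        /* real and imaginary parts of 1 + sum_k a_k z^k */
    int k;

    for (k = 1; k <= m; k++) {
        sumr += a[k] * cos(k * theta);
        sumi += a[k] * sin(k * theta);
    }
    return a0 / (sumr * sumr + sumi * sumi);   /* a0 / |1 + sum_k a_k z^k|^2 */
}

Because f is continuous, such a function can be sampled on whatever frequency grid one likes; values of fdt in the Nyquist interval [-1/2, 1/2] cover the full range of resolvable frequencies.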