552 Chapter 13. Fourier and Spectral Applications

Turn now to another question about the periodogram estimate. What is the variance of that estimate as N goes to infinity? In other words, as we take more sampled points from the original function (either sampling a longer stretch of data at the same sampling rate, or else by resampling the same stretch of data with a faster sampling rate), then how much more accurate do the estimates P_k become? The unpleasant answer is that the periodogram estimates do not become more accurate at all! In fact, the variance of the periodogram estimate at a frequency f_k is always equal to the square of its expectation value at that frequency. In other words, the standard deviation is always 100 percent of the value, independent of N! How can this be? Where did all the information go as we added points? It all went into producing estimates at a greater number of discrete frequencies f_k. If we sample a longer run of data using the same sampling rate, then the Nyquist critical frequency f_c is unchanged, but we now have finer frequency resolution (more f_k's) within the Nyquist frequency interval; alternatively, if we sample the same length of data with a finer sampling interval, then our frequency resolution is unchanged, but the Nyquist range now extends up to a higher frequency. In neither case do the additional samples reduce the variance of any one particular frequency's estimated PSD.

You don't have to live with PSD estimates with 100 percent standard deviations, however. You simply have to know some techniques for reducing the variance of the estimates. Here are two techniques that are very nearly identical mathematically, though different in implementation.

The first is to compute a periodogram estimate with finer discrete frequency spacing than you really need, and then to sum the periodogram estimates at K consecutive discrete frequencies to get one "smoother" estimate at the mid frequency of those K. The variance of that summed estimate will be smaller than the estimate itself by a factor of exactly 1/K, i.e., the standard deviation will be smaller than 100 percent by a factor 1/√K. Thus, to estimate the power spectrum at M+1 discrete frequencies between 0 and f_c inclusive, you begin by taking the FFT of 2MK points (which number had better be an integer power of two!). You then take the modulus square of the resulting coefficients, add positive and negative frequency pairs, and divide by (2MK)^2, all according to equation (13.4.5) with N = 2MK. Finally, you "bin" the results into summed (not averaged) groups of K. This procedure is very easy to program, so we will not bother to give a routine for it. The reason that you sum, rather than average, K consecutive points is so that your final PSD estimate will preserve the normalization property that the sum of its M+1 values equals the mean square value of the function.
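The book leaves this first routine to the reader ("very easy to program"). Purely as an illustration, here is a minimal sketch of the bin-and-sum idea in C. The names binned_psd() and periodogram() are inventions of this sketch, the helper uses a slow direct DFT only so that the fragment is self-contained (in practice you would use the FFT routines of Chapter 12), and the assignment of the leftover f_c point to its own bin is a simplifying choice, not the book's prescription.

```c
#include <math.h>
#include <stdlib.h>

/* Periodogram of n real samples per equation (13.4.5):
   P(0) = |C_0|^2/n^2,  P(f_k) = (|C_k|^2 + |C_{n-k}|^2)/n^2,  P(f_c) = |C_{n/2}|^2/n^2.
   A direct O(n^2) DFT is used only to keep this sketch self-contained. */
static void periodogram(const float *data, int n, float *p)
{
    const double twopi = 6.283185307179586;
    int j, k;
    for (k = 0; k <= n/2; k++) {
        double re = 0.0, im = 0.0, mod2;
        for (j = 0; j < n; j++) {
            double ang = twopi*(double)k*(double)j/(double)n;
            re += data[j]*cos(ang);
            im -= data[j]*sin(ang);
        }
        mod2 = re*re + im*im;        /* |C_k|^2; for real data |C_{n-k}|^2 equals it */
        p[k] = (k == 0 || k == n/2) ? (float)(mod2/((double)n*n))
                                    : (float)(2.0*mod2/((double)n*n));
    }
}

/* Technique 1: periodogram of 2*M*K points (M*K+1 fine frequencies), then SUM
   (not average) groups of K consecutive fine estimates into M+1 coarse bins,
   so that the coarse values still sum to the mean square of the data. */
void binned_psd(const float *data, int m, int kbin, float *coarse)
{
    int nfine = m*kbin;                          /* fine frequencies 0..nfine */
    float *fine = (float *) malloc((nfine+1)*sizeof(float));
    int j, i;

    periodogram(data, 2*m*kbin, fine);           /* eq. (13.4.5) with N = 2MK */
    for (j = 0; j < m; j++) {
        coarse[j] = 0.0f;
        for (i = 0; i < kbin; i++) coarse[j] += fine[j*kbin + i];
    }
    coarse[m] = fine[nfine];                     /* lone estimate at f_c */
    free(fine);
}
```

Each coarse value then has a standard deviation of roughly 100/√K percent of its expectation instead of 100 percent, at the price of frequency resolution that is K times coarser.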
A second technique for estimating the PSD at M+1 discrete frequencies in the range 0 to f_c is to partition the original sampled data into K segments each of 2M consecutive sampled points. Each segment is separately FFT'd to produce a periodogram estimate (equation 13.4.5 with N ≡ 2M). Finally, the K periodogram estimates are averaged at each frequency. It is this final averaging that reduces the variance of the estimate by a factor K (standard deviation by √K). This second technique is computationally more efficient than the first technique above by a modest factor, since it is logarithmically more efficient to take many shorter FFTs than one longer one. The principal advantage of the second technique, however, is that only 2M data points are manipulated at a single time, not 2KM as in the first technique. This means that the second technique is the natural choice for processing long runs of data, as from a magnetic tape or other data record.
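The book's own routine for this technique appears later in the chapter; the fragment below is only a rough sketch of the segment-and-average idea, not that routine. It reuses the direct-DFT periodogram() helper from the previous sketch, and its function name, its non-overlapping segmentation, and the absence of any data windowing are all simplifying assumptions of this sketch.

```c
#include <stdlib.h>

/* periodogram() is the equation-(13.4.5) helper defined in the previous sketch */
void periodogram(const float *data, int n, float *p);

/* Technique 2: partition the data into K non-overlapping segments of 2*M points,
   periodogram each segment (N = 2M), and AVERAGE the K estimates at each of the
   M+1 frequencies; the averaging cuts the variance by a factor of K. */
void segment_averaged_psd(const float *data, int m, int kseg, float *psd)
{
    int npts = 2*m;                              /* points per segment */
    float *p = (float *) malloc((m+1)*sizeof(float));
    int s, k;

    for (k = 0; k <= m; k++) psd[k] = 0.0f;
    for (s = 0; s < kseg; s++) {                 /* one segment at a time: only   */
        periodogram(data + s*npts, npts, p);     /* 2M points are in play here    */
        for (k = 0; k <= m; k++) psd[k] += p[k];
    }
    for (k = 0; k <= m; k++) psd[k] /= (float)kseg;
    free(p);
}
```

For a long data record you would read each block of 2M samples, periodogram it, and fold it into the running average, so the whole record never needs to sit in memory at once.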
We will give a routine later for implementing this second technique, but we need first to return to the matters of