which is readily recognizable as equation (13.3.6) with $S^2 \to \phi_{\gamma\gamma}$, $N^2 \to \eta_{\gamma\gamma}$. What is going on is this: For the case of equally spaced data points, and in the Fourier domain, autocorrelations become simply squares of Fourier amplitudes (Wiener-Khinchin theorem, equation 12.0.12), and the optimal filter can be constructed algebraically, as equation (13.6.9), without inverting any matrix.
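For concreteness, here is a minimal sketch in C of that algebraic construction: each Fourier bin of the measured data is scaled by $\Phi = S^2/(S^2 + N^2)$, the optimal-filter form of equation (13.3.6). The function name `wiener_filter`, the array layout (separate real and imaginary parts, one entry per frequency bin), and the assumption that power-spectrum estimates `sig2` and `noise2` are already in hand are all illustrative, not the book's interface.

```c
#include <stddef.h>

/* Build and apply the optimal (Wiener) filter bin by bin:
 * Phi(f) = S^2/(S^2 + N^2), equation (13.3.6).  Nothing is inverted;
 * the construction is purely algebraic, as noted above. */
void wiener_filter(const double *sig2,    /* estimated signal power per bin */
                   const double *noise2,  /* estimated noise power per bin  */
                   double *re, double *im,/* measured Fourier amplitudes    */
                   size_t nbins)
{
    for (size_t k = 0; k < nbins; k++) {
        double denom = sig2[k] + noise2[k];
        /* Guard against empty bins where both power estimates vanish. */
        double phi = (denom > 0.0) ? sig2[k] / denom : 0.0;
        re[k] *= phi;   /* filtered amplitude = Phi times measured amplitude */
        im[k] *= phi;
    }
}
```

Transforming the filtered amplitudes back to the time domain then yields the optimally filtered signal; the matrix inversion of the general case has been replaced by one multiplication per frequency bin.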
More generally, in the time domain, or any other domain, an optimal filter (one that minimizes the square of the discrepancy from the underlying true value in the presence of measurement noise) can be constructed by estimating the autocorrelation matrices $\phi_{\alpha\beta}$ and $\eta_{\alpha\beta}$, and applying equation (13.6.6) with $\star \to \gamma$. (Equation 13.6.8 is in fact the basis for §13.3's statement that even crude optimal filtering can be quite effective.)

Linear Prediction

Classical linear prediction specializes to the case where the data points $y_\beta$ are equally spaced along a line, $y_i$, $i = 1, 2, \ldots, N$, and we want to use $M$ consecutive values of $y_i$ to predict an $M+1$st. Stationarity is assumed. That is, the autocorrelation $\langle y_j y_k \rangle$ is assumed to depend only on the difference $|j - k|$, and not on $j$ or $k$ individually, so that the autocorrelation $\phi$ has only a single index,

$$\phi_j \equiv \langle y_i\, y_{i+j} \rangle \approx \frac{1}{N-j} \sum_{i=1}^{N-j} y_i\, y_{i+j} \qquad (13.6.10)$$

Here, the approximate equality shows one way to use the actual data set values to estimate the autocorrelation components. (In fact, there is a better way to make these estimates; see below.) In the situation described, the estimation equation (13.6.2) is

$$y_n = \sum_{j=1}^{M} d_j\, y_{n-j} + x_n \qquad (13.6.11)$$

(compare equation 13.5.1) and equation (13.6.5) becomes the set of $M$ equations for the $M$ unknown $d_j$'s, now called the linear prediction (LP) coefficients,

$$\sum_{j=1}^{M} \phi_{|j-k|}\, d_j = \phi_k \qquad (k = 1, \ldots, M) \qquad (13.6.12)$$

Notice that while noise is not explicitly included in the equations, it is properly accounted for, if it is point-to-point uncorrelated: $\phi_0$, as estimated by equation (13.6.10) using measured values $y_i$, actually estimates the diagonal part of $\phi_{\alpha\alpha} + \eta_{\alpha\alpha}$, above. The mean square discrepancy $\langle x_n^2 \rangle$ is estimated by equation (13.6.7) as

$$\langle x_n^2 \rangle = \phi_0 - \phi_1 d_1 - \phi_2 d_2 - \cdots - \phi_M d_M \qquad (13.6.13)$$

To use linear prediction, we first compute the $d_j$'s, using equations (13.6.10) and (13.6.12). We then calculate equation (13.6.13) or, more concretely, apply (13.6.11) to the known record to get an idea of how large are the discrepancies $x_i$. If the discrepancies are small, then we can continue applying (13.6.11) right on into
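The steps just described translate directly into code. Below is a minimal C sketch under the simple estimator of equation (13.6.10): `autocorr` estimates $\phi_0, \ldots, \phi_M$; `levinson` solves the symmetric Toeplitz system (13.6.12) by the standard Levinson-Durbin recursion, whose final prediction error equals the right-hand side of equation (13.6.13); and `predict` applies equation (13.6.11) to forecast one new point. The function names and interfaces are illustrative, not the book's own routines, and the "better way" of making the estimates alluded to in the text is not used here.

```c
#include <stdlib.h>

/* Estimate phi[0..M] from y[0..n-1] by equation (13.6.10).
 * (The text notes a better estimator exists; this is the simple one.) */
void autocorr(const double *y, int n, double *phi, int M)
{
    for (int j = 0; j <= M; j++) {
        double s = 0.0;
        for (int i = 0; i + j < n; i++)
            s += y[i] * y[i + j];
        phi[j] = s / (n - j);
    }
}

/* Solve the symmetric Toeplitz system (13.6.12) for the LP coefficients
 * d[1..M] (d[0] unused; caller supplies M+1 slots) by the Levinson-Durbin
 * recursion, a standard method for such systems.  Returns the running
 * prediction error, which on completion equals the mean square
 * discrepancy <x_n^2> of equation (13.6.13). */
double levinson(const double *phi, int M, double *d)
{
    double E = phi[0];
    double *tmp = malloc((M + 1) * sizeof *tmp);
    if (tmp == NULL) return -1.0;       /* allocation failure */
    for (int m = 1; m <= M; m++) {
        double k = phi[m];              /* reflection coefficient */
        for (int j = 1; j < m; j++)
            k -= d[j] * phi[m - j];
        k /= E;
        tmp[m] = k;
        for (int j = 1; j < m; j++)     /* update lower-order coefficients */
            tmp[j] = d[j] - k * d[m - j];
        for (int j = 1; j <= m; j++)
            d[j] = tmp[j];
        E *= 1.0 - k * k;               /* error shrinks at each order */
    }
    free(tmp);
    return E;                           /* equation (13.6.13) */
}

/* Predict y[n] from the M preceding values by equation (13.6.11),
 * dropping the (unknowable) discrepancy term x_n.  Requires n >= M. */
double predict(const double *y, int n, const double *d, int M)
{
    double s = 0.0;
    for (int j = 1; j <= M; j++)
        s += d[j] * y[n - j];
    return s;
}
```

Comparing `predict(y, n, d, M)` with the measured $y_n$ over the known record gives the discrepancies $x_n$ referred to above; if these are small, and comparable to the estimate (13.6.13), the same recurrence can be trusted to continue past the end of the record.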