16.322 Stochastic Estimation and Control, Fall 2004
Prof. Vander Velde

Lecture 12

Non-zero power at zero frequency
Non-zero power at non-zero frequency

If $R_{xx}(\tau)$ includes a sinusoidal component corresponding to the component

$$ x(t) = A\sin(\omega_0 t + \theta) $$

where $\theta$ is uniformly distributed over $2\pi$ and $A$ is random, independent of $\theta$, that component will be

$$ R_{xx}(\tau) = \frac{1}{2}\overline{A^2}\cos\omega_0\tau $$

$$
\begin{aligned}
S_{xx}(\omega) &= \int_{-\infty}^{\infty} \frac{1}{2}\overline{A^2}\cos\omega_0\tau \, e^{-j\omega\tau}\, d\tau \\
&= \frac{1}{2}\overline{A^2}\int_{-\infty}^{\infty} \frac{1}{2}\left[e^{j\omega_0\tau} + e^{-j\omega_0\tau}\right] e^{-j\omega\tau}\, d\tau \\
&= \frac{1}{4}\overline{A^2}\int_{-\infty}^{\infty} \left[e^{-j(\omega-\omega_0)\tau} + e^{-j(\omega+\omega_0)\tau}\right] d\tau \\
&= \frac{\pi}{2}\overline{A^2}\left[\delta(\omega-\omega_0) + \delta(\omega+\omega_0)\right]
\end{aligned}
$$
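As a numerical illustration of these spectral lines (a minimal sketch, not part of the original notes; the sampling parameters `dt`, `N` and the amplitude distribution are assumptions), the periodogram of a finite record of the random-phase sinusoid concentrates its power in a sharp peak at $\omega_0$:

```python
import numpy as np

# Numerical sketch (sampling parameters are assumptions, not from the notes):
# the spectral lines predicted at +/- w0 appear as a sharp periodogram peak
# for a finite record of the random-phase sinusoid.
rng = np.random.default_rng(1)
f0 = 5.0                                # sinusoid frequency in Hz (w0 = 2*pi*f0)
dt, N = 0.01, 4096                      # sample spacing and record length
t = np.arange(N) * dt
theta = rng.uniform(0.0, 2.0 * np.pi)   # phase uniform over 2*pi
A = rng.normal(2.0, 0.1)                # random amplitude, independent of theta
x = A * np.sin(2.0 * np.pi * f0 * t + theta)

freqs = np.fft.rfftfreq(N, dt)          # frequency axis in Hz
periodogram = np.abs(np.fft.rfft(x)) ** 2 / N
peak_hz = freqs[np.argmax(periodogram)]
print(peak_hz)                          # within one frequency bin of 5 Hz
```

With a finite record the delta function is smeared into a peak of width on the order of the frequency resolution $1/(N\,dt)$; lengthening the record narrows it.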
Units of $S_{xx}$: mean squared value per unit frequency interval.

Usually:

$$ S_{xx}(\omega) = \int_{-\infty}^{\infty} R_{xx}(\tau)\, e^{-j\omega\tau}\, d\tau $$

$$ \overline{x^2} = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_{xx}(\omega)\, d\omega = \int_{-\infty}^{\infty} S_{xx}(f)\, df, \quad \text{where } f = \frac{\omega}{2\pi} $$

In this case, $S_{xx} \sim \dfrac{q^2}{\text{Hz}}$, where $q$ is the unit of $x$.

Next most common:

$$ S_{xx}(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} R_{xx}(\tau)\, e^{-j\omega\tau}\, d\tau $$

$$ \overline{x^2} = \int_{-\infty}^{\infty} S_{xx}(\omega)\, d\omega $$

In this case, $S_{xx} \sim \dfrac{q^2}{\text{rad/sec}} = q^2\,\text{sec}$.

There is an alternate form of the power spectral density function. Since $S_{xx}(\omega)$ is a measure of the power density of the harmonic components of $x(t)$, one should be able to get $S_{xx}(\omega)$ also from the Fourier transform of $x(t)$, which is a direct decomposition of $x(t)$ into its infinitesimal harmonic components. This is true, and is the approach taken in the text. One difficulty is that the Fourier transform does not converge for members of stationary ensembles. The mathematics are handled by a limiting process.

[Figure: a sample function $x(t)$]
Define

$$ x_T(t) = \begin{cases} x(t), & -T < t < T \\ 0, & \text{elsewhere} \end{cases} $$

$$ X_T(\omega) = \int_{-\infty}^{\infty} x_T(t)\, e^{-j\omega t}\, dt = \int_{-T}^{T} x(t)\, e^{-j\omega t}\, dt $$

Then

$$ S_{xx}(\omega) = \lim_{T\to\infty} \frac{\overline{X_T^*(\omega)\, X_T(\omega)}}{2T} = \lim_{T\to\infty} \frac{\overline{|X_T(\omega)|^2}}{2T} $$

Notice that the operations of transforming, averaging, and taking the product are done in the opposite order here than if the transform of the autocorrelation function is calculated.

If one has only a finite record of a single random function, and $S_{xx}(\omega)$ is to be calculated approximately under the ergodic hypothesis, it can be done either way:

$$ R_{xx}(\tau) = \frac{1}{T-\tau}\int_0^{T-\tau} x(t)\, x(t+\tau)\, dt, \qquad S_{xx}(\omega) = 2\int_0^{\tau_{\max}} R_{xx}(\tau)\cos\omega\tau\, d\tau $$

or

$$ X(\omega) = \int_0^{T} x(t)\, e^{-j\omega t}\, dt, \qquad S_{xx}(\omega) = \frac{X^*(\omega)\, X(\omega)}{2T} $$

The standard deviation of $S_{xx}$ measured this way is approximately equal to the mean.
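For sampled data the two routes are tied together by a discrete Wiener-Khinchin identity: the squared-magnitude DFT of the zero-padded record is exactly the DFT of its sample autocorrelation. A minimal sketch (assuming numpy; not part of the original notes) verifies this:

```python
import numpy as np

# Sketch (assumed discretization, not from the notes): for a sampled record
# x[0..N-1], the autocorrelation route and the periodogram route are linked
# by an exact discrete identity, verified here to machine precision.
rng = np.random.default_rng(2)
N = 256
x = rng.standard_normal(N)

# Route 1: unnormalized sample autocorrelation r[k] = sum_t x[t] x[t+k]
r = np.correlate(x, x, mode="full")        # lags -(N-1) .. (N-1)

# Route 2: squared-magnitude DFT of the record zero-padded to length 2N-1
X = np.fft.fft(x, 2 * N - 1)
S = np.abs(X) ** 2                         # discrete "periodogram"

# Inverse DFT of |X|^2 recovers the sample autocorrelation (circularly
# arranged: indices 0..N-1 hold the non-negative lags)
r_from_S = np.fft.ifft(S).real
match = np.allclose(r_from_S[:N], r[N - 1:])
print(match)
```

The identity is exact for any record, which is why the FFT route and the direct-correlation route give the same spectral estimate.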
The second approach is faster. In fact, with the advent of the Fast Fourier Transform (initiated by Cooley and Tukey), even if one wanted to calculate $R_{xx}(\tau)$ from $x(t)$, it is faster to transform $x(t)$ to $X(\omega)$, form $S_{xx}(\omega)$, and transform to get $R_{xx}(\tau)$ than to integrate $x(t)x(t+\tau)$ directly for all desired values of $\tau$. The Fast Fourier Transform is an amazingly efficient procedure for digital calculation of finite Fourier transforms.

References:
Full issue: IEEE Transactions on Audio and Electroacoustics, Vol. AU-15, No. 2; June 1967.
Tutorial article: Brigham, E.O. and Morrow, R.E.: "The Fast Fourier Transform," IEEE Spectrum; Dec. 1967.

Cross spectral density

In dealing with more than one random process, the cross power spectral density arises naturally. For example, if

$$ z(t) = x(t) + y(t) $$

where $x(t)$ and $y(t)$ are members of random ensembles, then we found before that

$$ R_{zz}(\tau) = R_{xx}(\tau) + R_{xy}(\tau) + R_{yx}(\tau) + R_{yy}(\tau) $$

so that

$$ S_{zz}(\omega) = \int_{-\infty}^{\infty} R_{zz}(\tau)\, e^{-j\omega\tau}\, d\tau = S_{xx}(\omega) + S_{xy}(\omega) + S_{yx}(\omega) + S_{yy}(\omega) $$

where we have defined the cross spectral density functions

$$ S_{xy}(\omega) = \int_{-\infty}^{\infty} R_{xy}(\tau)\, e^{-j\omega\tau}\, d\tau, \qquad S_{yx}(\omega) = \int_{-\infty}^{\infty} R_{yx}(\tau)\, e^{-j\omega\tau}\, d\tau $$

This is equivalent to the definition
$$ S_{xy}(\omega) = \lim_{T\to\infty} \frac{\overline{X_T^*(\omega)\, Y_T(\omega)}}{2T} $$

$$ S_{yx}(\omega) = \lim_{T\to\infty} \frac{\overline{Y_T^*(\omega)\, X_T(\omega)}}{2T} $$

We note from this that

$$ S_{yx}(\omega) = S_{xy}^*(\omega) $$

so that the sum of these two as they appear in $S_{zz}(\omega)$ is real. Also note that

$$ S_{xy}(-\omega) = S_{xy}^*(\omega) = S_{yx}(\omega) $$
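The conjugate symmetry holds bin by bin for any finite record, not just in the limit: $X^*Y$ and $Y^*X$ are complex conjugates, so their sum is real even before averaging. A minimal sketch (assuming numpy; the record length, spacing, and the correlation between $x$ and $y$ are arbitrary assumptions):

```python
import numpy as np

# Sketch (assumed discretization): finite-record cross-spectral estimates.
# S_xy ~ conj(X)*Y/(2T) and S_yx ~ conj(Y)*X/(2T) are conjugates of each
# other, so S_xy + S_yx is real for any record.
rng = np.random.default_rng(5)
N, dt = 1024, 0.01
x = rng.standard_normal(N)
y = 0.5 * x + rng.standard_normal(N)   # y partially correlated with x

X = np.fft.fft(x) * dt
Y = np.fft.fft(y) * dt
two_T = N * dt

S_xy = np.conj(X) * Y / two_T
S_yx = np.conj(Y) * X / two_T

sym = np.allclose(S_yx, np.conj(S_xy))         # S_yx = S_xy*
real_sum = np.allclose((S_xy + S_yx).imag, 0.0)  # their sum is real
print(sym, real_sum)
```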
Examples of Random Processes Analytically Defined

Example: Random step function

[Figure: a sample random step function $x(t)$]

Amplitudes $a_n$: independent, random.
Change points $t_n$: Poisson-distributed with average density $\lambda$ (points per second).

$$ P(k) = \frac{(\lambda\tau)^k}{k!}\, e^{-\lambda\tau}, \qquad P(0) = e^{-\lambda\tau} $$

$$
\begin{aligned}
R_{xx}(\tau) &= E[x(t)x(t+\tau)] \\
&= P(\text{at least one change point in } \tau)\,\bar{a}^2 + P(\text{no change point in } \tau)\,\overline{a^2} \\
&= \left(1 - e^{-\lambda|\tau|}\right)\bar{a}^2 + e^{-\lambda|\tau|}\,\overline{a^2} \\
&= \bar{a}^2 + \sigma_a^2\, e^{-\lambda|\tau|}
\end{aligned}
$$
$$
\begin{aligned}
S_{xx}(\omega) &= \int_{-\infty}^{\infty} \left[\bar{a}^2 + \sigma_a^2\, e^{-\lambda|\tau|}\right] e^{-j\omega\tau}\, d\tau \\
&= 2\pi\bar{a}^2\,\delta(\omega) + \sigma_a^2 \int_{-\infty}^{0} e^{(\lambda - j\omega)\tau}\, d\tau + \sigma_a^2 \int_{0}^{\infty} e^{-(\lambda + j\omega)\tau}\, d\tau \\
&= 2\pi\bar{a}^2\,\delta(\omega) + \sigma_a^2\left[\frac{1}{\lambda - j\omega} + \frac{1}{\lambda + j\omega}\right] \\
&= 2\pi\bar{a}^2\,\delta(\omega) + \frac{2\lambda\sigma_a^2}{\lambda^2 + \omega^2}
\end{aligned}
$$
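A quick numerical consistency check on this spectrum (a sketch, not part of the notes; the values of $\lambda$ and $\sigma_a^2$ are arbitrary assumptions): under the $\overline{x^2} = \frac{1}{2\pi}\int S\,d\omega$ convention, the continuous Lorentzian part should integrate to $\sigma_a^2$, while the impulse $2\pi\bar{a}^2\delta(\omega)$ separately carries the $\bar{a}^2$ part of the mean squared value.

```python
import numpy as np

# Numerical sketch (lam and sigma_a2 are arbitrary assumed values): the
# continuous part of the random-step spectrum, 2*lam*sigma_a2/(lam^2 + w^2),
# integrates -- with the 1/(2*pi) normalization -- to the variance sigma_a2.
lam, sigma_a2 = 1.5, 0.8
w = np.linspace(-1.0e4, 1.0e4, 2_000_001)
dw = w[1] - w[0]
S_cont = 2.0 * lam * sigma_a2 / (lam**2 + w**2)
var = S_cont.sum() * dw / (2.0 * np.pi)
print(var)   # close to sigma_a2 = 0.8 (small truncation error from |w| > 1e4)
```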
Example: A signal random process

Reference: Newton, G.C., L.A. Gould and J.F. Kaiser. Design of Linear Feedback Controls. John Wiley, 1961. p. 100. Papoulis calls this the semirandom telegraph signal; p. 288.

A signal takes the values plus and minus $x_0$ only. It switches from one level to the other at "event points" which are Poisson distributed in time with constant average frequency $\lambda$. This is sometimes called a telegraph signal.
[Figure: a sample telegraph signal $x(t)$]

Average rate of occurrence of change points = $\lambda$ points per second.

$$
\begin{aligned}
R_{xx}(\tau) &= E[x(t)x(t+\tau)] \\
&= P(\text{even number of change points in the interval } \tau)\, x_0^2 \\
&\quad + P(\text{odd number of change points in the interval } \tau)\,(-x_0^2) \\
&= x_0^2\, e^{-\lambda|\tau|} \sum_{k=0,2,4,\ldots} \frac{(\lambda|\tau|)^k}{k!} - x_0^2\, e^{-\lambda|\tau|} \sum_{k=1,3,5,\ldots} \frac{(\lambda|\tau|)^k}{k!} \\
&= x_0^2\, e^{-\lambda|\tau|} \sum_{k=0}^{\infty} \frac{(-\lambda|\tau|)^k}{k!} \\
&= x_0^2\, e^{-\lambda|\tau|}\, e^{-\lambda|\tau|} \\
&= x_0^2\, e^{-2\lambda|\tau|}
\end{aligned}
$$

You are generating higher harmonics in this case, as at each change point the amplitude changes sign. In the previous example, the changes in amplitude at a change point may be far smaller.

$$ S_{xx}(\omega) = \frac{2(2\lambda)\, x_0^2}{\omega^2 + (2\lambda)^2} $$
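The autocorrelation $R_{xx}(\tau) = x_0^2 e^{-2\lambda|\tau|}$ can be checked by simulation. A Monte Carlo sketch (not part of the notes; $\lambda$, $x_0$, the record length, and the grid spacing are arbitrary assumptions): count the Poisson change points up to each sample time and flip the sign accordingly.

```python
import numpy as np

# Monte Carlo sketch (parameters assumed): the telegraph signal is
# x(t) = x0 * (-1)^N(t), where N(t) counts Poisson change points in (0, t].
rng = np.random.default_rng(3)
lam, x0 = 2.0, 1.0
T_total, dt = 5000.0, 0.01

# Change-point times: cumulative sums of Exponential(1/lam) gaps
gaps = rng.exponential(1.0 / lam, size=int(2 * lam * T_total))
events = np.cumsum(gaps)
assert events[-1] > T_total            # enough events to cover the record

t = np.arange(0.0, T_total, dt)
n_changes = np.searchsorted(events, t)           # N(t)
x = np.where(n_changes % 2 == 0, x0, -x0)

tau = 0.25
k = int(round(tau / dt))
R_hat = np.mean(x[:-k] * x[k:])                  # time-average estimate
R_theory = x0**2 * np.exp(-2.0 * lam * tau)
print(R_hat, R_theory)
```

With this record length the time-average estimate agrees with the theory to within a few percent; note also that $R_{xx}(0) = x_0^2$ exactly, since $x^2 \equiv x_0^2$.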
Example: Binary function with an arbitrary amplitude distribution

This example considers a binary function with a more general amplitude distribution. If the distribution of $a$ is restricted to $\pm 1$ with equal probability, the result is the pseudo-random binary code used by GPS.
[Figure: a sample function $x(t)$ with change points at $t_0$, $t_0 + T$, $t_0 + 2T, \ldots$]

$t_0$ is uniformly distributed over $(0, T)$.
Change points are periodic with period $T$.
Amplitudes are independent with moments $\bar{a}$, $\overline{a^2}$.

$$
\begin{aligned}
R_{xx}(\tau) &= E[x(t)x(t+\tau)] \\
&= P(\text{1 or more change points in } \tau)\,\bar{a}^2 + P(\text{no change point in } \tau)\,\overline{a^2} \\
&= \frac{|\tau|}{T}\,\bar{a}^2 + \left(1 - \frac{|\tau|}{T}\right)\overline{a^2}, \qquad |\tau| \le T
\end{aligned}
$$

$$
R_{xx}(\tau) = \begin{cases} \sigma_a^2\left(1 - \dfrac{|\tau|}{T}\right) + \bar{a}^2, & |\tau| \le T \\ \bar{a}^2, & |\tau| > T \end{cases}
$$
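The triangular correlation can also be checked by simulation. A sketch (not part of the notes; the uniform amplitude distribution, period, grid spacing, and record length are all arbitrary assumptions): redraw an independent amplitude every $T$ seconds with the switching grid offset by $t_0 \sim \text{Uniform}(0, T)$, then time-average the lag product.

```python
import numpy as np

# Monte Carlo sketch (assumed parameters): periodic-switching binary function
# with independent amplitudes each period and a random start offset t0.
rng = np.random.default_rng(4)
T, dt = 1.0, 0.05
n_periods = 50_000
amps = rng.uniform(0.0, 2.0, size=n_periods)   # abar = 1, sigma_a^2 = 1/3
t0 = rng.uniform(0.0, T)

t = np.arange(0.0, (n_periods - 1) * T, dt)
idx = np.floor((t - t0) / T).astype(int) + 1   # +1 keeps indices >= 0
x = amps[idx]

tau = 0.5 * T
k = int(round(tau / dt))
R_hat = np.mean(x[:-k] * x[k:])
abar2, var_a = 1.0, 1.0 / 3.0
R_theory = var_a * (1.0 - tau / T) + abar2     # triangular correlation
print(R_hat, R_theory)
```

At lag $\tau = T/2$, half the sample pairs straddle a change point (product $\bar{a}^2$ on average) and half do not (product $\overline{a^2}$), which is exactly what the derivation above averages.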