16.322 Stochastic Estimation and Control, Fall 2004
Prof. Vander Velde

Lecture 13

Last time:

$$R_{xx}(\tau) = \begin{cases} a^2 + \sigma_a^2\left(1 - \dfrac{|\tau|}{T}\right), & |\tau| \le T \\[4pt] a^2, & |\tau| > T \end{cases}$$

$$\begin{aligned}
S_{xx}(\omega) &= \int_{-\infty}^{\infty} a^2 e^{-j\omega\tau}\,d\tau + \int_{-T}^{T} \sigma_a^2\left(1 - \frac{|\tau|}{T}\right) e^{-j\omega\tau}\,d\tau \\
&= 2\pi a^2 \delta(\omega) + 2\sigma_a^2 \int_{0}^{T} \left(1 - \frac{\tau}{T}\right) \cos\omega\tau\,d\tau \\
&= 2\pi a^2 \delta(\omega) + \frac{2\sigma_a^2}{T\omega^2}\left(1 - \cos\omega T\right) \\
&= 2\pi a^2 \delta(\omega) + \sigma_a^2 T \left(\frac{\sin\frac{\omega T}{2}}{\frac{\omega T}{2}}\right)^2
\end{aligned}$$

Amplitude of $S_{xx}$ falls off, but not very rapidly.
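As a quick numerical check (not part of the notes), the non-impulsive part of the spectrum above can be verified by integrating the cosine transform of the triangular term directly; the values of $T$ and $\sigma_a$ below are arbitrary illustrations.

```python
# Check: 2*sigma_a^2 * integral_0^T (1 - tau/T) cos(w*tau) dtau
# should equal sigma_a^2 * T * (sin(wT/2)/(wT/2))^2.
import numpy as np

T, sigma_a = 2.0, 1.5
tau = np.linspace(0.0, T, 20001)
for w in [0.5, 1.0, 3.7, 10.0]:
    numeric = 2 * sigma_a**2 * np.trapz((1 - tau / T) * np.cos(w * tau), tau)
    closed = sigma_a**2 * T * (np.sin(w * T / 2) / (w * T / 2))**2
    print(f"w = {w:5.2f}: numeric = {numeric:.6f}, closed form = {closed:.6f}")
```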
Use error between early and late indicator to lock onto signal. Error is a linear function of shift, within the range $(-T, T)$.

Return to the 1st example process and take the case where the change points are Poisson distributed.

$$S_{xx}(\omega) = \frac{2\lambda\sigma_a^2}{\omega^2 + \lambda^2}$$

Take the limit of this as $\sigma_a^2$ and $\lambda$ become large in a particular relation: to establish the desired relation, replace

$$\sigma_a^2 \to k\sigma_a^2, \qquad \lambda \to k\lambda$$

$$S_{xx}(\omega) = \frac{2k^2\lambda\sigma_a^2}{\omega^2 + k^2\lambda^2}$$

and take the limit as $k \to \infty$.

$$\lim_{k\to\infty} S_{xx}(\omega) = \lim_{k\to\infty} \frac{2k^2\lambda\sigma_a^2}{\omega^2 + k^2\lambda^2} = \frac{2\sigma_a^2}{\lambda}$$

Note this is independent of frequency.
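A small sketch (illustrative parameter values, not from the notes) showing the limiting argument at work: as $k$ grows, the scaled spectrum flattens toward the constant $2\sigma_a^2/\lambda$ at every frequency.

```python
# With sigma_a^2 -> k*sigma_a^2 and lambda -> k*lambda,
# S_xx(w) = 2 k^2 lam sigma_a^2 / (w^2 + k^2 lam^2) -> 2 sigma_a^2 / lam.
import numpy as np

lam, sigma_a2 = 1.0, 1.0
w = np.array([0.0, 1.0, 5.0, 20.0])
for k in [1, 10, 100, 1000]:
    S = 2 * k**2 * lam * sigma_a2 / (w**2 + (k * lam)**2)
    print(f"k = {k:5d}: S_xx(w) = {np.round(S, 4)}  (limit = {2 * sigma_a2 / lam})")
```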
This is defined to be a "white noise" by analogy with white light, which is supposed to have equal participation by all wavelengths.

Can shape $x(t)$ to the correct spectrum so that it can be analyzed in this manner, by adding a shaping filter in the state-space formulation (block diagram: white noise $n(t)$ into a shaping filter, then into the system).

Definition of a white noise process

White means constant spectral density.

$$S_{xx}(\omega) = S_0, \quad \text{constant}$$

$$R_{xx}(\tau) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_0 e^{j\omega\tau}\,d\omega = S_0\,\delta(\tau)$$

White noise processes only have a defined power density. The variance of a white noise process is not defined.

If you start with almost any process $x(t)$,

$$\lim_{a\to\infty} \sqrt{a}\,x(at)$$

is a white noise.
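A minimal shaping-filter sketch, assuming a first-order filter $\dot{x} = -\lambda x + n(t)$ driven by approximate white noise of density $S_0$ (the filter, $\lambda$, and the discretization are illustrative choices, not from the notes). The flat input spectrum is shaped into $S_0/(\omega^2+\lambda^2)$, and the steady-state variance should come out near $S_0/(2\lambda)$.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, N, lam, S0 = 1e-3, 100_000, 2.0, 1.0
# Discrete approximation of white noise with R_nn(tau) = S0*delta(tau):
# per-sample variance S0/dt.
n = rng.normal(0.0, np.sqrt(S0 / dt), N)
x = np.zeros(N)
for i in range(N - 1):
    x[i + 1] = x[i] + dt * (-lam * x[i] + n[i])   # Euler step of the shaping filter

# Steady-state variance check (roughly): E[x^2] = S0 / (2*lam).
print("sample variance of x:", x[10_000:].var())
print("theory S0/(2*lam):   ", S0 / (2 * lam))
```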
So far we have looked only at the $R_{xx}$ and $S_{xx}$ of processes. We shall find that if we wish to determine only the $R_{yy}$ or $S_{yy}$ (thus the $\overline{y^2}$) of outputs of linear systems, all we need to know about the inputs are their $R_{xx}$ or $S_{xx}$.

But what if we wanted to know the probability that the error in a dynamic system would exceed some bound? For this we need the first probability density function of the system error, an output. Very difficult in general.

The pdf of the output $y(t)$ satisfies the Fokker-Planck partial differential equation, also called the Kolmogorov forward equation. It applies to a continuous dynamic system driven by a white noise process.

One case is easy: Gaussian process into a linear system; the output is Gaussian.

Gaussian processes are defined by the property that probability density functions of all order are normal functions.

$$f_n(x_1, t_1; x_2, t_2; \ldots; x_n, t_n) = n\text{-dimensional normal}$$

$$f(x) = \frac{1}{(2\pi)^{n/2}\,|M|^{1/2}}\, e^{-\frac{1}{2} x^T M^{-1} x}$$

$M$ is the covariance matrix for

$$x = \begin{bmatrix} x(t_1) \\ x(t_2) \\ \vdots \\ x(t_n) \end{bmatrix}$$

Thus, $f_n(x)$ for all $n$ is determined by $M$, the covariance matrix for $x$.
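A short sketch of evaluating the $n$-dimensional normal density above from a covariance matrix $M$ (zero mean assumed, matching the exponent $x^T M^{-1} x$); the particular $M$ and evaluation point are arbitrary.

```python
import numpy as np

def gaussian_pdf(x, M):
    """f(x) = exp(-x^T M^{-1} x / 2) / ((2*pi)^(n/2) |M|^(1/2))."""
    n = len(x)
    quad = x @ np.linalg.solve(M, x)
    return np.exp(-0.5 * quad) / ((2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(M)))

M = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.5],
              [0.2, 0.5, 1.0]])          # an example covariance matrix
print(gaussian_pdf(np.array([0.2, -0.1, 0.4]), M))
```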
$$M_{ij} = \overline{x(t_i)\,x(t_j)} = R_{xx}(t_i, t_j)$$

Thus for a Gaussian process, the autocorrelation function completely defines all the statistical properties of the process, since it defines the probability density functions of all order. This means:

If $R_{xx}(t_i, t_j) = R_{xx}(t_j - t_i)$, the process is stationary.

If two processes $x(t)$, $y(t)$ are jointly Gaussian and are uncorrelated ($R_{xy}(t_i, t_j) = 0$), they are independent processes.

Most important: Gaussian input → linear system → Gaussian output. In this case all the statistical properties of the output are determined by the correlation function of the output, for which we shall require only the correlation functions of the inputs.

Upcoming lectures will not cover several sections that deal with:
Narrow-band Gaussian processes
Fast Fourier Transform
Pseudorandom binary coded signals
These are important topics for your general knowledge.

Characteristics of Linear Systems

Definition of linear system: If

$$u_1(t) \to y_1(t)$$
$$u_2(t) \to y_2(t)$$

then

$$a\,u_1(t) + b\,u_2(t) \to a\,y_1(t) + b\,y_2(t)$$
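Since $M_{ij} = R_{xx}(t_i, t_j)$ fixes the entire joint density, sample functions of a Gaussian process at any set of times can be drawn directly from that covariance matrix. A sketch, assuming the exponential correlation $R_{xx}(\tau) = \sigma^2 e^{-\lambda|\tau|}$ purely as an example:

```python
import numpy as np

sigma2, lam = 1.0, 0.5
t = np.linspace(0.0, 10.0, 51)                                 # sample times t_1..t_n
M = sigma2 * np.exp(-lam * np.abs(t[:, None] - t[None, :]))    # M_ij = R_xx(t_j - t_i)
rng = np.random.default_rng(1)
samples = rng.multivariate_normal(np.zeros(len(t)), M, size=5) # 5 realizations
print(samples.shape)   # (5, 51): five sample functions observed at 51 times
```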
By characterizing the response to a standard input, the response to any input can be constructed by superposing responses using the system's linearity.

$w(t, \tau)$ is the weighting function.

$$y(t) = \lim_{\Delta\tau_i \to 0} \sum_i w(t, \tau_i)\, u(\tau_i)\,\Delta\tau_i \to \int_{-\infty}^{t} w(t, \tau)\, u(\tau)\,d\tau$$

The central limit theorem says $y(t) \to$ normal if $u(t)$ is white noise.

Stable if every bounded input gives rise to a bounded output.

$$\int_{-\infty}^{\infty} |w(t, \tau)|\,d\tau = \text{const.} < \infty \;\Rightarrow\; \text{bounded for all } t$$

Realizable if causal:

$$w(t, \tau) = 0, \quad (t < \tau)$$

State Space: an alternate characterization for a linear differential system.
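A discrete version of the superposition sum above, using an assumed causal weighting function $w(t,\tau) = e^{-(t-\tau)}$ for $t \ge \tau$ and a unit-step input (neither is from the notes); the exact response is $y(t) = 1 - e^{-t}$.

```python
import numpy as np

dtau = 1e-3
tau = np.arange(0.0, 5.0, dtau)        # input applied from tau = 0 onward
u = np.ones_like(tau)                  # unit step input
for t in [0.5, 1.0, 3.0]:
    w = np.where(tau <= t, np.exp(-(t - tau)), 0.0)   # causal: zero for tau > t
    y = np.sum(w * u) * dtau                          # y(t) ~ sum_i w(t,tau_i) u(tau_i) dtau
    print(f"t = {t}: y = {y:.4f}, exact = {1 - np.exp(-t):.4f}")
```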
If the input and output are related by an nth-order linear differential equation, one can also relate input to output by a set of n linear first-order differential equations.

$$\dot{x}(t) = A(t)x(t) + B(t)u(t)$$
$$y(t) = C(t)x(t)$$

The solution form is:

$$y(t) = C(t)x(t)$$
$$x(t) = \Phi(t, t_0)x(t_0) + \int_{t_0}^{t} \Phi(t, \tau)B(\tau)u(\tau)\,d\tau$$

where $\Phi(t, \tau)$ satisfies

$$\frac{d}{dt}\Phi(t, \tau) = A(t)\Phi(t, \tau), \qquad \Phi(\tau, \tau) = I$$

Note that any system which can be cast in this form is not only mathematically realizable but practically realizable as well. Must add a gain times $u$ to $y$ to get as many zeroes as poles.

For comparison with the weighting function description, take $u$ and $y$ to be scalars, and take $t_0 = -\infty$. For stable systems, the transition from $-\infty$ to any finite time is zero: $\Phi(t, -\infty) = 0$.

Specialize the state-space model to single-input, single-output (SISO) and $t_0 \to -\infty$:

$$y(t) = c(t)^T x(t)$$
$$x(t) = \int_{-\infty}^{t} \Phi(t, \tau)\, b(\tau)\, u(\tau)\,d\tau$$
$$y(t) = \int_{-\infty}^{t} c(t)^T \Phi(t, \tau)\, b(\tau)\, u(\tau)\,d\tau$$
$$w(t, \tau) = c(t)^T \Phi(t, \tau)\, b(\tau)$$

which we recognize by comparison with the earlier expression for $y(t)$.
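A sketch of $w(t,\tau) = c^T \Phi(t,\tau)\, b$ for a constant-coefficient case, where $\Phi(t,\tau) = e^{A(t-\tau)}$. The matrices below are an assumed example (not from the notes) realizing the transfer function $1/((s+1)(s+2))$, whose impulse response is $e^{-t} - e^{-2t}$, so $w(t,0)$ should match that.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
b = np.array([0.0, 1.0])
c = np.array([1.0, 0.0])

def w(t, tau):
    # Weighting function: response at time t to an impulse applied at time tau.
    return c @ expm(A * (t - tau)) @ b if t >= tau else 0.0

for t in [0.5, 1.0, 2.0]:
    print(f"t = {t}: w(t,0) = {w(t, 0.0):.6f}, "
          f"exact = {np.exp(-t) - np.exp(-2 * t):.6f}")
```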
For an invariant system, the shape of $w(t, \tau)$ is independent of the time the input was applied; the output depends only on the elapsed time since the application of the input.

$$w(t, \tau) \to w(t - \tau) = w(t), \quad \tau = 0 \text{ by convention}$$

Stability:

$$\int_{-\infty}^{\infty} |w(t - \tau)|\,d\tau = \int_{-\infty}^{\infty} |w(t)|\,dt = \text{const.} < \infty$$

Note this implies $w(t)$ is Fourier transformable.

Realizability:

$$w(t - \tau) = 0, \quad (t < \tau)$$
$$w(t) = 0, \quad (t < 0)$$
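As a quick check of these conditions, using the assumed weighting function $w(t) = e^{-t} - e^{-2t}$ from the sketch above: it is realizable (zero for $t < 0$ by construction) and stable, with $\int_0^\infty |w(t)|\,dt = 1 - \tfrac{1}{2} = 0.5 < \infty$.

```python
import numpy as np

t = np.linspace(0.0, 50.0, 500_001)
w = np.exp(-t) - np.exp(-2 * t)                     # nonnegative for t >= 0
print("integral of |w|:", np.trapz(np.abs(w), t))   # should be close to 0.5
```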