16.322 Stochastic Estimation and Control, Fall 2004
Prof. Vander Velde

Lecture 16

Last time: Wide band input

Notice that

$$R_{xy}(\tau) = S_0\, w(\tau) = \overline{x(t)\, y(t+\tau)}$$

so if $t$ is the current time in a real-time situation, we cannot compute $R_{xy}(\tau)$ for $\tau > 0$, which is necessary since $w(\tau)$ is nonzero only for $\tau > 0$. But we showed earlier that

$$R_{xy}(\tau) = R_{yx}(-\tau) = \overline{y(t)\, x(t-\tau)}$$

This can be computed in real time by averaging the product of the current output, $y(t)$, and the delayed input, $x(t-\tau)$, for $\tau > 0$.

The Shaping Filter

A shaping filter is a filter which, driven by white noise, produces an output process having a specified spectrum. If a given system is driven by colored noise, we can precede it with the appropriate shaping filter and always treat the augmented system as driven by white noise.

[Block diagram: white noise → shaping filter → $x(t)$, with $S_{xx}(\omega)$ given → system]

We usually work with rational spectra – so consider $S_{xx}(\omega)$ to be a rational function. That is, it is a ratio of polynomials in $\omega$. Since $S_{xx}(\omega)$ is always an even function, it is a ratio of polynomials having only even powers of $\omega$.

$$S_{xx}(\omega) = \frac{a_0 + a_1\omega^2 + a_2\omega^4 + \cdots}{b_0 + b_1\omega^2 + b_2\omega^4 + \cdots}$$

We wish to characterize the filter in terms of the Laplace variable $s$. The relation between $s$ and $\omega$ is $s = j\omega$.
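Before carrying out that substitution, here is a brief numerical aside (an addition, not part of the original notes) illustrating the real-time correlation computation described at the start of the lecture. The function name `correlate_realtime`, the first-order test system, and the unit noise intensity are all assumptions made for the example.

```python
import numpy as np

np.random.seed(0)

def correlate_realtime(x, y, lags):
    """Estimate R_xy(tau) by time-averaging the current output y(t) against the
    delayed input x(t - tau), for non-negative integer-sample lags."""
    R = np.empty(len(lags))
    for i, k in enumerate(lags):               # k = tau / dt, delay in samples
        R[i] = np.mean(y[k:] * x[:len(x) - k])
    return R

# Assumed test setup: unit-intensity wideband noise through a first-order lag
# with weighting function w(tau) = exp(-tau), so R_xy(tau) should approach
# S0 * w(tau) = exp(-tau) as the averaging time grows.
dt = 0.01
t = np.arange(0.0, 100.0, dt)
x = np.random.randn(len(t)) / np.sqrt(dt)      # discrete stand-in for white noise
w = np.exp(-t[:800])                           # weighting function, truncated at 8 s
y = dt * np.convolve(x, w)[:len(t)]            # simulated system output
lags = np.arange(0, 300)                       # tau = 0 ... 3 seconds
R_hat = correlate_realtime(x, y, lags)         # compare against np.exp(-lags * dt)
```

Only present and past data enter the averages, which is the point of rewriting $R_{xy}(\tau)$ as $\overline{y(t)\,x(t-\tau)}$.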
Rewrite $S_{xx}$ in terms of $s$. Since only even powers of $\omega$ appear, there will be no imaginary terms.

$$\omega^2 = -s^2, \qquad \omega^4 = s^4, \qquad \omega^6 = -s^6, \quad \text{etc.}$$

$$S_{xx}(s) = \frac{a_0 - a_1 s^2 + a_2 s^4 - a_3 s^6 + \cdots}{b_0 - b_1 s^2 + b_2 s^4 - b_3 s^6 + \cdots}$$

Factor the numerator and denominator in terms of $s^2$.

$$S_{xx}(s) = c\,\frac{(s^2 - c_1)(s^2 - c_2)\cdots}{(s^2 - d_1)(s^2 - d_2)\cdots}$$

The poles and zeroes are located at:

Zeroes at $s = \pm\sqrt{c_i}$, where $i = 1, 2, \ldots$
Poles at $s = \pm\sqrt{d_i}$, where $i = 1, 2, \ldots$

If $c_i$ is real and positive, then $\sqrt{c_i}$ is real.
If $c_i$ is real and negative, then $\sqrt{c_i}$ is purely imaginary. This kind of root always appears with even multiplicity.

$$(s^2 - c_i) \rightarrow (-\omega^2 - c_i)$$

Notice that in terms of $\omega$, this must be true to avoid a change of sign of $S_{xx}(\omega)$. For small $\omega$, $(-\omega^2 - c_i)$ is positive. For large $\omega$, $(-\omega^2 - c_i)$ is negative.

If $c_i$ is complex, then $c_i^*$ is another root. The poles or zeroes are at:

$$s = \begin{cases} +\sqrt{c_i} = +a + jb \\ -\sqrt{c_i} = -a - jb \\ +\sqrt{c_i^*} = +a - jb \\ -\sqrt{c_i^*} = -a + jb \end{cases}$$
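To make the substitution and factoring concrete, here is a small numerical sketch (an addition, not from the notes). The example spectrum and the helper name `to_s_polynomial` are assumptions chosen for illustration; the point is that the roots of the resulting polynomials in $s$ appear in $\pm$ pairs, as argued above.

```python
import numpy as np

# Assumed example spectrum, even in w:  S_xx(w) = (9 + w^2) / (4 + 5 w^2 + w^4)
a = [9.0, 1.0]          # numerator coefficients of w^0, w^2, ...
b = [4.0, 5.0, 1.0]     # denominator coefficients of w^0, w^2, w^4

def to_s_polynomial(even_coeffs):
    """Substitute w^2 -> -s^2: the coefficient of w^(2k) becomes (-1)^k times
    the coefficient of s^(2k). Returns coefficients highest power first
    (np.roots convention); odd powers of s are zero."""
    p = np.zeros(2 * len(even_coeffs) - 1)
    for k, c in enumerate(even_coeffs):
        p[2 * k] = (-1) ** k * c
    return p[::-1]

num_s, den_s = to_s_polynomial(a), to_s_polynomial(b)
print(np.sort_complex(np.roots(num_s)))   # zeros at s = -3, +3
print(np.sort_complex(np.roots(den_s)))   # poles at s = -2, -1, +1, +2
```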
Collect the factors which define poles or zeroes in the left-half plane together with the square root of any factorable constant ($\sqrt{c}$). Call the product $S_{xx_L}(s)$ – the left-hand factor of the spectrum.

Collect the remaining factors, which define poles or zeroes in the right-half plane, together with the square root of the factorable constant. Call the product $S_{xx_R}(s)$ – the right-hand factor of the spectrum.

Then:

$$S_{xx}(s) = S_{xx_L}(s)\, S_{xx_R}(s)$$

From the symmetry of the pole-zero pattern, we can see that

$$S_{xx_R}(s) = S_{xx_L}(-s)$$

Call the shaping filter transfer function $F(s)$. The input-output relation for power spectral density functions in terms of $s$ is

$$S_{xx}(s) = F(s)F(-s)\, S_{nn}(s)$$

But with $S_{xx}(s)$ factored and $n(t)$ white,

$$S_{xx_L}(s)\, S_{xx_R}(s) = F(s)F(-s)\, S_n$$
$$S_{xx_L}(s)\, S_{xx_L}(-s) = F(s)F(-s)\, S_n$$

One solution for $F(s)$ which produces the desired output spectrum is

$$F(s) = \frac{1}{\sqrt{S_n}}\, S_{xx_L}(s)$$

We can choose $S_n$ to be 1 if we wish.

By construction, this filter is stable and minimum phase.

If $S_{nn}(\omega)$ is not broadband relative to the system, the noise spectrum must be factored in the same way, and the shaping filter satisfies

$$S_{xx_L}(s) = F(s)\, S_{nn_L}(s)$$
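A numerical version of the factorization step might look like the sketch below (an illustration, not from the notes). The helper name `left_factor` and the sample spectrum are assumptions. The sketch keeps only the left-half-plane roots of the numerator and denominator of $S_{xx}(s)$ and attaches the square root of the factorable constant; roots on the imaginary axis (such as the double zero at $s = 0$ in the worked example that follows) would have to be split evenly between the two factors, which this simple version does not handle.

```python
import numpy as np

def left_factor(num_s, den_s):
    """Return (numerator, denominator) coefficient arrays of the left-hand factor
    S_xxL(s) of a rational spectrum S_xx(s), given as coefficient arrays in s,
    highest power first. Assumes no roots on the imaginary axis."""
    zeros, poles = np.roots(num_s), np.roots(den_s)
    lhp_zeros = zeros[zeros.real < 0]
    lhp_poles = poles[poles.real < 0]
    c = num_s[0] / den_s[0]                               # factorable constant
    numL = np.sqrt(abs(c)) * np.real(np.poly(lhp_zeros))  # roots -> coefficients
    denL = np.real(np.poly(lhp_poles))
    return numL, denL

# Assumed example: S_xx(s) = (9 - s^2) / (s^4 - 5 s^2 + 4)
numL, denL = left_factor([-1.0, 0.0, 9.0], [1.0, 0.0, -5.0, 0.0, 4.0])
print(numL)   # [1. 3.]    i.e. (s + 3)
print(denL)   # [1. 3. 2.] i.e. (s + 1)(s + 2)
```

With $S_n = 1$, the shaping filter for this assumed spectrum is simply $F(s) = (s+3)/(s^2 + 3s + 2)$, the ratio of the two printed polynomials.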
Example

Suppose a disturbance process has a spectrum with the following shape.

[Figure: sketch of the disturbance spectrum $S_{xx}(\omega)$ versus $\omega$]

Suppose this function is well approximated by

$$S_{xx}(\omega) = \frac{10\,\omega^2}{\omega^4 + 5\omega^2 + 4}$$

Then:

$$S_{xx}(s) = \frac{-10\, s^2}{s^4 - 5s^2 + 4} = \frac{-10\, s^2}{(s^2 - 1)(s^2 - 4)} = \underbrace{\left[\frac{\sqrt{10}\, s}{(s+1)(s+2)}\right]}_{S_{xx_L}(s)} \underbrace{\left[\frac{-\sqrt{10}\, s}{(s-1)(s-2)}\right]}_{S_{xx_R}(s)}$$

With unit intensity noise driving the shaping filter, the filter transfer function is

$$F(s) = \frac{\sqrt{10}\, s}{(s+1)(s+2)} = \frac{\sqrt{10}\, s}{s^2 + 3s + 2}$$

If you are going to work in terms of a state space model, you would now have to define a state variable realization of this transfer function, and augment the state space model of the original system with it.
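As a quick check (an addition, not in the notes), one can verify numerically that this shaping filter reproduces the desired spectrum: with unit-intensity white noise at the input, the output power spectral density is $|F(j\omega)|^2$, which should equal $S_{xx}(\omega)$ exactly.

```python
import numpy as np

w = np.linspace(0.01, 20.0, 500)               # frequency grid (rad/s)
s = 1j * w

F = np.sqrt(10) * s / (s**2 + 3 * s + 2)       # shaping filter F(s) evaluated at s = j*w
S_from_filter = np.abs(F) ** 2                 # output PSD for unit-intensity white noise
S_target = 10 * w**2 / (w**4 + 5 * w**2 + 4)   # the approximated disturbance spectrum

print(np.max(np.abs(S_from_filter - S_target)))  # should be ~ machine precision
```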
General Introduction to Filtering and Control System Design

Suppose you have input data up to time $t$, and at that time you want to produce a smoothed estimate of the signal (which is a real-time operation). This is called filtering.

Suppose you want an estimate of a signal at time $t + \tau$ in the future. This is called predicting.

Suppose you want an estimate of a signal at time $t - \tau$ in the past, using data available up to time $t$. This is called smoothing.

[Figure: time axis marking $t - \tau$, $t$, and $t + \tau$]

Ground rules

We shall treat in a fairly general manner the problem of optimum design of systems under the following conditions:

1. The system is assumed to be linear and physically realizable.
2. The inputs to the system (e.g., signals, noise, disturbances) are members of stationary random processes.
3. The desired operation of the system is a linear time-invariant operation on the signal.
4. The optimum system is defined to be that which yields the minimum steady-state mean squared error between the actual output and the desired output.

With stationary inputs, the input correlation functions are invariant with time; these are the invariants which are the basis for prediction. Thus we shall be finding the realizable linear system which, operating on the signal and noise, yields an output which best approximates in the mean squared sense the output of an ideal linear operator, not necessarily realizable, operating on the signal only.

We shall find optimum systems in two senses:

1. Configuration fixed – find optimum parameter values
2. Configuration semi-free – find best linear realizable compensator
   a. Configuration free – find best linear realizable system
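To state the design criterion in the ground rules symbolically, the following (added) equations use assumed notation: $s(t)$ for the signal, $n(t)$ for the noise, $W$ for the system to be designed, and $W_d$ for the ideal, not necessarily realizable, operator acting on the signal alone.

```latex
\begin{align*}
  y(t)   &= W\{\, s(t) + n(t) \,\}   && \text{actual output of the realizable system} \\
  y_d(t) &= W_d\{\, s(t) \,\}        && \text{desired output (ideal operation on the signal only)} \\
  \overline{e^2} &= \overline{\bigl( y(t) - y_d(t) \bigr)^2}
                                     && \text{steady-state mean squared error, to be minimized}
\end{align*}
```

The optimization in the senses listed above is then over the free parameters, the compensator, or the entire system $W$, subject to linearity and realizability.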