16.322 Stochastic Estimation and Control, Fall 2004
Prof. Vander Velde

Lecture 20

Last time: Completed solution to the optimum linear filter in real-time operation.

Semi-free configuration:

$$H_0(s) = \frac{1}{2\pi j\, F(s)_L\, F(-s)_L\, S_{ii}(s)_L} \int_0^\infty dt\, e^{-st} \int_{-j\infty}^{j\infty} dp\, e^{pt}\, \underbrace{\frac{D(p)\, F(-p)_L\, S_{is}(p)}{F(p)_R\, S_{ii}(p)_R}}_{[(p)]}$$

Special case: $[(p)]$ is rational

In this solution formula we can carry out the indicated integrations in literal form in the case in which $[(p)]$ is rational. In our work we deal in a practical way only with rational $F$, $S_{is}$, and $S_{ii}$, so this function will be rational if $D(p)$ is rational. This will be true of every desired operation except a predictor. Thus, except in the case of prediction, the above function, which will be symbolized as $[(p)]$, can be expanded into

$$[(p)] = [(p)]_L + [(p)]_R$$

where $[\;]_L$ has poles only in the LHP and $[\;]_R$ has poles only in the RHP. The zeroes may be anywhere.

For rational $[\;]$, this expansion is made by expanding into partial fractions, then adding together the terms having LHP poles to form $[\;]_L$ and adding together the terms having RHP poles to form $[\;]_R$. Actually, only $[\;]_L$ will be required.
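For a rational function this split is purely mechanical. The following is a minimal sympy sketch of it, not from the notes: the function $G(p)$ is hypothetical, chosen only to have one LHP pole and one RHP pole.

```python
# Split a rational function into LHP-pole and RHP-pole parts,
# mirroring the [ ]_L + [ ]_R expansion described above.
import sympy as sp

p = sp.symbols('p')
G = 1 / ((p + 1) * (p - 2))          # hypothetical [ (p) ]: poles at -1 and +2

G_L, G_R = sp.S(0), sp.S(0)
for term in sp.apart(G, p).as_ordered_terms():
    poles = sp.roots(sp.denom(term), p)       # pole(s) of this term
    if all(sp.re(r) < 0 for r in poles):
        G_L += term                  # LHP pole -> contributes to [ ]_L
    else:
        G_R += term                  # RHP pole -> contributes to [ ]_R

print(G_L)    # -1/(3*(p + 1))
print(G_R)    # 1/(3*(p - 2))
```

Only `G_L` would be carried forward since, as noted above, only $[\;]_L$ is required.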
With this expansion, the double integral becomes

$$\int_0^\infty dt\, e^{-st}\, \frac{1}{2\pi j} \int_{-j\infty}^{j\infty} dp\, \left\{ [(p)]_L + [(p)]_R \right\} e^{pt} = \int_0^\infty f_L(t)\, e^{-st}\, dt + \int_0^\infty f_R(t)\, e^{-st}\, dt$$

where

$$f_L(t) = \frac{1}{2\pi j} \int_{-j\infty}^{j\infty} [(p)]_L\, e^{pt}\, dp = 0, \quad t < 0$$

$$f_R(t) = \frac{1}{2\pi j} \int_{-j\infty}^{j\infty} [(p)]_R\, e^{pt}\, dp = 0, \quad t > 0$$

Note that $f_R(t)$ is the inverse transform of a function which is analytic in the LHP; thus $f_R(t) = 0$ for $t > 0$, and

$$\int_0^\infty f_R(t)\, e^{-st}\, dt = 0$$

Also, $f_L(t)$ is the inverse transform of a function which is analytic in the RHP; thus $f_L(t) = 0$ for $t < 0$. Thus

$$\int_0^\infty f_L(t)\, e^{-st}\, dt = \int_{-\infty}^\infty f_L(t)\, e^{-st}\, dt = [(s)]_L$$

Thus, finally,

$$H_0(s) = \frac{\left[ \dfrac{D(s)\, F(-s)_L\, S_{is}(s)}{F(s)_R\, S_{ii}(s)_R} \right]_L}{F(s)_L\, F(-s)_L\, S_{ii}(s)_L}$$

In the usual case, $F(s)$ is a stable, minimum phase function. In that case $F(s)_L = F(s)$ and $F(s)_R = 1$; that is, all the poles and zeroes of $F(s)$ are in the LHP. Similarly, $F(-s)_L = 1$. Then

$$H_0(s) = \frac{\left[ \dfrac{D(s)\, S_{is}(s)}{S_{ii}(s)_R} \right]_L}{F(s)\, S_{ii}(s)_L}$$

Thus in this case the optimum transfer function from input to output is

$$F(s) H_0(s) = \frac{\left[ \dfrac{D(s)\, S_{is}(s)}{S_{ii}(s)_R} \right]_L}{S_{ii}(s)_L}$$

and the optimum function to be cascaded with the fixed part is obtained from this by division by $F(s)$, so that the fixed part is compensated out by cancellation.

Free configuration problem:

$$H_0(s) = \frac{\left[ \dfrac{D(s)\, S_{is}(s)}{S_{ii}(s)_R} \right]_L}{S_{ii}(s)_L}$$

Optimum free configuration filter:

[Block diagram in the original notes: the optimum free configuration filter $H(s)$.]
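As a concrete check of this recipe, here is a short sympy sketch, illustrative rather than part of the notes, carrying out the free-configuration formula for a hypothetical first-order signal spectrum in white noise with $D(s) = 1$. The numbers $A = 3$, $a = 1$, $S_n = 1$ are chosen so that $b = 2$.

```python
# Free-configuration optimum filter H0 = [D S_is / S_iiR]_L / S_iiL
# for S_ss = A/(a^2 - s^2), white noise S_n, D = 1 (hypothetical numbers).
import sympy as sp

s = sp.symbols('s')
A, a, Sn = sp.Integer(3), sp.Integer(1), sp.Integer(1)
b = sp.sqrt(A / Sn + a**2)            # b = 2 for these numbers

S_iiL = Sn * (b + s) / (a + s)        # LHP factor (carries the constant Sn)
S_iiR = (b - s) / (a - s)             # RHP factor
S_is  = A / ((a + s) * (a - s))       # cross spectrum = signal spectrum here
D     = sp.Integer(1)                 # desired operation: reproduce the signal

bracket = sp.cancel(D * S_is / S_iiR)          # A/((a+s)(b-s))

# Keep only the partial-fraction terms with LHP poles: the [ ]_L operation.
bracket_L = sum(t for t in sp.apart(bracket, s).as_ordered_terms()
                if all(sp.re(r) < 0 for r in sp.roots(sp.denom(t), s)))

H0 = sp.cancel(bracket_L / S_iiL)
print(H0)     # 1/(s + 2), i.e. A/((a+b)(s+b)) for these numbers
```

With $D(s) = 1$ this is just an optimum filter for reproducing the signal; the predictor example below repeats the computation with $D(s) = e^{sT}$, where the bracketed function is no longer rational.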
We started with a closed-loop configuration:

[Block diagram in the original notes: compensation $C(s)$, fixed part $F(s)$, feedback $B(s)$.]

$$H(s) = \frac{C(s)}{1 + C(s) F(s) B(s)}$$

$$C(s) = \frac{H(s)}{1 - H(s) F(s) B(s)}$$

The loop will be stable, but $C(s)$ may be unstable.

Special comments about the application of these formulae:

a) Unstable $F(s)$ cannot be treated because the Fourier transform of $w_F(t)$ does not converge in that case. To treat such a system, first close a feedback loop around $F(s)$ to create a stable "fixed" part and work with this stable feedback system as $F(s)$. When the optimum compensation is found, it can be combined with the original compensation if desired.

b) An $F(s)$ which has poles on the $j\omega$ axis is the limiting case of functions for which the Fourier transform converges. You can move the poles just into the LHP by giving them a real part $-\varepsilon$. Solve the problem with this $\varepsilon$ and at the end set it to zero (a short sketch of this limiting procedure follows this list). Zeroes of $F(s)$ on the $j\omega$ axis can be included in either factor, and the result will be the same. This will permit cancellation compensation of poles of $F(s)$ on the $j\omega$ axis, including poles at the origin.

c) In factoring $S_{ii}(s)$ into $S_{ii}(s)_L\, S_{ii}(s)_R$, any constant factor in $S_{ii}(s)$ can be divided between $S_{ii}(s)_L$ and $S_{ii}(s)_R$ in any convenient way. The same is true of $F(s)$ and $F(-s)$.

d) Problems should be well-posed in the first place. Avoid combinations of $D(s)$ and $S_{ss}(\omega)$ which imply infinite $\overline{d(t)^2}$, because that may imply infinite $\overline{e^2}$ for any realizable filter: for example, a differentiator acting on a signal whose spectrum falls off as $1/\omega^2$.

e) The point at $t = 0$ was left hanging in several steps of the derivation of the solution formula. Don't bother checking the individual steps; just check the final solution to see if it satisfies the necessary conditions.
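The limiting procedure of comment (b) can be sketched in a few lines of sympy. The function $G$ below is hypothetical, chosen only to combine a pole shifted off the origin to $-\varepsilon$ with a RHP pole, so that the L/R split actually depends on $\varepsilon$.

```python
# Comment (b): shift a jw-axis pole (here at s = 0) to s = -eps,
# take the L-part, then let eps -> 0.
import sympy as sp

s = sp.symbols('s')
eps = sp.symbols('epsilon', positive=True)

G = 1 / ((s + eps) * (1 - s))    # pole at the origin moved to -eps; RHP pole at +1

G_L = sp.S(0)
for term in sp.apart(G, s).as_ordered_terms():
    poles = sp.roots(sp.denom(term), s)
    if all(sp.re(r) < 0 for r in poles):
        G_L += term              # collect the LHP-pole terms

print(sp.limit(G_L, eps, 0))     # 1/s: the axis pole ends up in the L-part
```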
The Wiener-Hopf equation requires $l(\tau_1) = 0$ for $\tau_1 \ge 0$. Thus $L(s)$ should be analytic in the LHP and go to zero at least as fast as $1/s$ for large $|s|$.

$$L(s) = F(-s)\, H_0(s)\, F(s)\, S_{ii}(s) - F(-s)\, D(s)\, S_{is}(s)$$

We have solved the problem of the optimum linear filter under the least mean squared error criterion.

Further analysis shows that if the inputs, signal and noise, are Gaussian, the result we have is the optimum filter. That is, there is no filter, linear or nonlinear, which will yield a smaller mean squared error.

If the inputs are not both Gaussian, it is almost sure that some nonlinear filters can do better than the Wiener filter. But the theory for this is only beginning to be developed, on an approximate basis.

Note that if we only know the second order statistics of the inputs, the optimum linear filter is the best we can do. To take advantage of nonlinear filtering we must know the distributions of the inputs.
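As a sanity check in the spirit of comment (e), the sketch below forms $L(s)$ for the hypothetical free-configuration example used earlier ($F = 1$, $D = 1$, $A = 3$, $a = 1$, $S_n = 1$, so $H_0(s) = 1/(s+2)$) and confirms that its only pole is in the RHP and that it falls off as $1/s$.

```python
# Necessary-condition check: L(s) must be analytic in the LHP and
# fall off at least as fast as 1/s.
import sympy as sp

s = sp.symbols('s')
a, b, A = 1, 2, 3                         # hypothetical numbers, b^2 = A/Sn + a^2
S_ii = (b**2 - s**2) / (a**2 - s**2)      # Sn = 1
S_is = A / (a**2 - s**2)
H0   = 1 / (s + 2)                        # optimum filter found for this case

L = sp.cancel(H0 * S_ii - S_is)           # F(s) = F(-s) = 1, D(s) = 1
print(L)                                  # 1/(s - 1): analytic in the LHP
print(sp.roots(sp.denom(L), s))           # {1: 1} -- the only pole is at s = +1
print(sp.limit(s * L, s, sp.oo))          # 1: finite, so L ~ 1/s at infinity
```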
Example: Free configuration predictor (real time)

The desired output is the signal at time $t + T$, so

$$D(s) = e^{sT}, \qquad S_{ss}(s) = \frac{A}{a^2 - s^2}, \qquad S_{nn}(s) = S_n$$

The signals $s$ and $n$ are uncorrelated.

Use the solution form

$$H_0(s) = \frac{1}{2\pi j\, S_{ii}(s)_L} \int_0^\infty dt\, e^{-st} \int_{-j\infty}^{j\infty} dp\, e^{pt}\, \frac{D(p)\, S_{is}(p)}{S_{ii}(p)_R} = \frac{1}{2\pi j\, S_{ii}(s)_L} \int_0^\infty dt\, e^{-st} \int_{-j\infty}^{j\infty} dp\, e^{p(t+T)}\, \frac{S_{is}(p)}{S_{ii}(p)_R}$$

The input spectrum factors as

$$S_{ii}(s) = S_{ss}(s) + S_{nn}(s) = \frac{A}{a^2 - s^2} + S_n = S_n\, \frac{\dfrac{A}{S_n} + a^2 - s^2}{a^2 - s^2} = S_n\, \frac{b^2 - s^2}{a^2 - s^2} = S_n \left[ \frac{b+s}{a+s} \right] \left[ \frac{b-s}{a-s} \right]$$

where $b^2 = \dfrac{A}{S_n} + a^2$. Since the noise is uncorrelated with the signal,

$$S_{is}(p) = S_{ss}(p) = \frac{A}{(a+p)(a-p)}$$

Taking $S_{ii}(p)_R = \dfrac{b-p}{a-p}$, with the constant $S_n$ carried in the left factor as comment (c) permits,

$$\frac{S_{is}(p)}{S_{ii}(p)_R} = \frac{A\,(a-p)}{(a+p)(a-p)(b-p)} = \frac{A}{(a+p)(b-p)} = \frac{\dfrac{A}{a+b}}{a+p} + \frac{\dfrac{A}{a+b}}{b-p}$$
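The factorization and the partial-fraction step can be checked symbolically; here is a minimal sketch with hypothetical numbers $A = 3$, $a = 1$, $S_n = 1$ (so $b = 2$).

```python
# Verify S_iiL * S_iiR = S_ii and expand S_is/S_iiR in partial fractions.
import sympy as sp

s = sp.symbols('s')
A, a, Sn = sp.Integer(3), sp.Integer(1), sp.Integer(1)
b = sp.sqrt(A / Sn + a**2)                     # b = 2

S_ii  = A / (a**2 - s**2) + Sn
S_iiL = Sn * (b + s) / (a + s)
S_iiR = (b - s) / (a - s)
print(sp.cancel(S_iiL * S_iiR - S_ii))         # 0: the factors reproduce S_ii

ratio = sp.cancel((A / ((a + s) * (a - s))) / S_iiR)   # S_is / S_iiR
print(sp.apart(ratio, s))   # 1/(s + 1) - 1/(s - 2), i.e. A/(a+b) [1/(a+s) + 1/(b-s)]
```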
Using the integral forms,

$$\frac{1}{2\pi j} \int_{-j\infty}^{j\infty} \frac{\dfrac{A}{a+b}}{p+a}\, e^{p(t+T)}\, dp = \begin{cases} \dfrac{A}{a+b}\, e^{-a(t+T)}, & t > -T \\[1ex] 0, & \text{otherwise} \end{cases}$$

$$\frac{1}{2\pi j} \int_{-j\infty}^{j\infty} \frac{\dfrac{A}{a+b}}{b-p}\, e^{p(t+T)}\, dp = \begin{cases} \dfrac{A}{a+b}\, e^{b(t+T)}, & t < -T \\[1ex] 0, & \text{otherwise} \end{cases}$$

Only the first term contributes for $t \ge 0$ (with $T > 0$), so

$$\int_0^\infty \frac{A}{a+b}\, e^{-a(t+T)}\, e^{-st}\, dt = \frac{A}{a+b}\, e^{-aT} \int_0^\infty e^{-(s+a)t}\, dt = \frac{A\, e^{-aT}}{(a+b)(s+a)}$$

and

$$H_0(s) = \frac{A\, e^{-aT}}{(a+b)(s+a)\, S_{ii}(s)_L} = \frac{A\, e^{-aT}}{S_n\,(a+b)(s+b)}$$

where $b^2 = \dfrac{A}{S_n} + a^2$.

Note the bandwidth of this filter, set by the pole at $-b$, and the gain $\sim e^{-aT}$, which is the correlation between $s(t)$ and $s(t+T)$.
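The last two steps, truncating the inverse transform to $t \ge 0$ and transforming back, can be checked with the same hypothetical numbers ($A = 3$, $a = 1$, $S_n = 1$, $b = 2$); the sketch below reproduces $H_0(s) = A\,e^{-aT}/(S_n(a+b)(s+b))$.

```python
# Predictor: retransform the causal part of the LHP term and divide by S_iiL.
import sympy as sp

t, T, s = sp.symbols('t T s', positive=True)
A, a, b, Sn = 3, 1, 2, 1

f = sp.Rational(A, a + b) * sp.exp(-a * (t + T))       # inverse transform, t + T > 0
num = sp.integrate(f * sp.exp(-s * t), (t, 0, sp.oo))  # transform of the part t >= 0

S_iiL = Sn * (b + s) / (a + s)
H0 = sp.simplify(num / S_iiL)
print(H0)     # exp(-T)/(s + 2), i.e. A e^{-aT} / (Sn (a+b)(s+b))
```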
16.322 Stochastic Estimation and Control, Fall 2004 Prof vander velde b2-s2 S(S)=s t s S(S=s b FO (s+E)(d+s) F(s) FCS=K (E-s(d C+S D(s)F(-s)2S2(s) F(SRS(S), (c+s)A(a-s) A(c+s) (a+s(a-s)(c-s)(b-s)(a+s(c-s)(b- Page 7 of7
Example: Semi-free problem with non-minimum-phase $F$; find the optimum compensator.

$$F(s) = \frac{K(c-s)}{s(d+s)}, \qquad S_{ss}(s) = \frac{A}{a^2 - s^2}, \qquad S_{nn}(s) = S_n$$

The signals $s$ and $n$ are uncorrelated. This is a servo example where we'd like the output to track the input, so the desired operator is $D(s) = 1$.

As before,

$$S_{ii}(s) = S_n\, \frac{b^2 - s^2}{a^2 - s^2}, \qquad S_{ii}(s)_L = S_n\, \frac{b+s}{a+s}, \qquad S_{ii}(s)_R = \frac{b-s}{a-s}$$

Shifting the pole of $F(s)$ at the origin to $-\varepsilon$ as in comment (b), the factors of $F$ are

$$F(s)_L = \frac{K}{(s+\varepsilon)(d+s)}, \qquad F(s)_R = c - s$$

$$F(-s) = \frac{K(c+s)}{(\varepsilon - s)(d - s)}, \qquad F(-s)_L = c + s$$

Then

$$\frac{D(s)\, F(-s)_L\, S_{is}(s)}{F(s)_R\, S_{ii}(s)_R} = \frac{(c+s)\, A\, (a-s)}{(a+s)(a-s)(c-s)(b-s)} = \frac{A\,(c+s)}{(a+s)(c-s)(b-s)}$$
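The bookkeeping of this factorization can be checked symbolically. In the sketch below the parameters $K = 1$, $c = 3$, $d = 1$, $A = 3$, $a = 1$, $S_n = 1$ (so $b = 2$) are hypothetical, and $\varepsilon$ is the pole shift of comment (b).

```python
# Form the bracket D F(-s)_L S_is / (F(s)_R S_iiR) for the
# non-minimum-phase example (hypothetical numbers).
import sympy as sp

s = sp.symbols('s')
eps = sp.symbols('epsilon', positive=True)
K, c, d = sp.Integer(1), sp.Integer(3), sp.Integer(1)
A, a, b, Sn = sp.Integer(3), sp.Integer(1), sp.Integer(2), sp.Integer(1)

F_L   = K / ((s + eps) * (d + s))     # LHP poles of F, origin pole shifted to -eps
F_R   = c - s                         # RHP zero of F
Fm_L  = c + s                         # LHP part of F(-s)
S_is  = A / ((a + s) * (a - s))
S_iiR = (b - s) / (a - s)
D     = sp.Integer(1)

bracket = sp.factor(sp.cancel(D * Fm_L * S_is / (F_R * S_iiR)))
print(bracket)     # A(c+s)/((a+s)(c-s)(b-s)) = 3(s+3)/((s+1)(s-3)(s-2)) here

# Remaining steps, per the general formula above: take [bracket]_L,
# divide by F_L * Fm_L * S_iiL, and let eps -> 0 at the end.
```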