16.322 Stochastic Estimation and Control, Fall 2004
Prof. Vander Velde

Lecture 21

Last time:

F(s) = K(c - s) / (s(d + s))

[ A(c + s) / ((a + s)(c - s)(b - s)) ]_L = [ A(c - a) / ((c + a)(b + a)) ] · 1/(a + s) ≡ B/(a + s)

Plugging in for the optimum compensator:

H_0(s) = [ · ]_L(s) / [ F(s) F(-s) S_ii(s) ]^+
       = B s(d + s)(a + s) / ((a + s) K (c + s) S_n (b + s))
       = B s(d + s) / (S_n K (c + s)(b + s))

This means that in the cascade of H_0(s) with F(s), the effect of the RHP zero on the amplitude of the product is cancelled out, but the effect on the phase is not. In fact, the phase lag due to the RHP zero is doubled in the product. Note:

- Cancellation of K and two LHP poles in F(s)
- The zero at s = c (RHP) is not cancelled, but a pole is placed at the symmetric point s = -c
- Another pole is added at s = -b, which is beyond the signal cut-off by an amount which depends on A/S_n
- Also, the gain B/S_n depends on A/S_n directly

Also, [ε(s)]_R has the form

C/(c - s) + D/(b - s) = (e + fs) / ((c - s)(b - s))
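The amplitude and phase behavior of the cascade can be seen from its all-pass factor: H_0(s)F(s) contains (c - s)/(c + s), which has unit magnitude on the jw-axis but twice the phase lag of the RHP zero alone. A minimal numeric sketch (the zero location c = 2 and frequency w = 1 are hypothetical values, not from the notes):

```python
import math

def allpass_response(c: float, w: float) -> tuple[float, float]:
    """Magnitude and phase of the all-pass factor (c - s)/(c + s) at s = jw."""
    g = complex(c, -w) / complex(c, w)
    return abs(g), math.atan2(g.imag, g.real)

# Hypothetical RHP zero at s = c = 2, evaluated at w = 1 rad/s:
mag, phase = allpass_response(2.0, 1.0)
# mag is exactly 1 (no amplitude effect); phase is -2*atan(w/c),
# i.e. double the -atan(w/c) phase lag contributed by the zero (c - s) alone.
```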

Then

L(s) = -F(-s) F(s) S_ii(s) [ε(s)]_R
     = -K(c - s)(b - s)(e + fs) / (s(d - s)(a - s)(c - s)(b - s))
     = -K(e + fs) / (s(d - s)(a - s))

which is analytic in the LHP and varies as 1/s^2 for large s.

Necessary condition: L(s) must be analytic in the LHP and go to zero for large |s| at least as fast as 1/s.

Now find the loop compensator if the feedback transfer is unity. The compensator H_0 is realized by an in-the-loop compensator C with the loop closed through F and D, so H_0 = C/(1 + CFD); with D = 1,

C = H_0 / (1 - H_0 F D)
  = [ B s(d + s) / (K S_n) ] / [ s^2 + (b + c + B/S_n) s + cb - Bc/S_n ]

Cancellation of the pole at the origin leaves the system with an uncontrollable mode corresponding to that pole. This is not good, since that normal mode does not attenuate. As a practical matter, it might be better to move that zero away from the origin a bit. That also means the system will not have unit input-output sensitivity.

Also note that the in-the-loop compensator need not be stable. It depends on the parameter values.

End of Quiz 2 material.

Estimation

We wish to estimate the values of a set of parameters, which may be
- static
- dynamic
based on all available information:
- measurements
- prior knowledge
- physical constraints

The measurements may be
- direct
- inferential (e.g., an airspeed indicator infers airspeed from a dynamic pressure measurement)
- of varying quality.

Discussion

Problem formulation
- static or dynamic parameters
- static affords some simplifications
but consider the whole problem: what is to be done with the estimates?

All information should be used
- better quality information weighted more heavily than poorer

The value of measurements depends both on noise and on sensitivity to the least certain parameters. Suppose you have a radar looking at satellites overflying your position. Having a second location 90 degrees around the world would clarify parameters that were not well characterized by a single sensor.

Include the knowledge you have prior to the current set of measurements.

Physical constraints add information - use all known:

- Differential equations relating dynamic parameters
- Physical characteristics of the environment

Suppose you are given a set of data and asked to smooth it. Unless you know the physics of the situation you cannot opt for one scheme over another (e.g., elevation data for a satellite overpass).

To determine where to point your telescope, you can use prior data to calculate the constants that describe the satellite's orbit, but it may be easier to use the satellite's current position and velocity to estimate its future position.

Direct measurements are convenient, but not always possible, e.g., the temperature of a star. Inferential measurements depend on physical constraints relating measured to estimated quantities. There may be uncertainty in these relationships, which should be modeled somehow.

A basic principle: formulate the complete estimation problem at once, especially if any non-linear relations would be involved in deriving the desired quantities.

Example estimation problems

Example: Estimate x, a scalar constant

A deterministic quantity, x, is being observed directly. These observations are not necessarily each of equal quality. The observations:

z_k = x + n_k

The noises are independent, unbiased, normal random variables. They may have different variances σ_k^2. The conditional probability density function for z_k, conditioned on a given value of x, is just the density function for n_k centered around x.

Classical Maximum Likelihood estimate: the x̂ maximizes f(z_1, ..., z_N | x).

f(z_k | x) = f_{n_k}(z_k - x) = [1 / (√(2π) σ_k)] exp[ -(1/2)((z_k - x)/σ_k)^2 ]

In scalar form,

f(z_1, ..., z_N | x) = [1 / ((2π)^(N/2) σ_1 ... σ_N)] exp[ -(1/2) Σ_{k=1}^N ((z_k - x)/σ_k)^2 ]

The maximum of f over x is found by minimizing the exponent:

min_x Σ_{k=1}^N ((z_k - x)/σ_k)^2 = min_x Σ_{k=1}^N (z_k - x)^2 / σ_k^2

The numerator (z_k - x)^2 is the squared measurement residual.

∂/∂x Σ_{k=1}^N (z_k - x)^2 / σ_k^2 = -2 Σ_{k=1}^N (z_k - x)/σ_k^2 = 0

x̂ = [ Σ_{k=1}^N z_k/σ_k^2 ] / [ Σ_{k=1}^N 1/σ_k^2 ]

We call this the weighted least squares estimate.

So the estimator is a linear combination of all the observations, the constants of the combination being inversely proportional (not proportional) to the variances of the measurement noises. This says that every piece of data should be used, and will have a nonzero effect if its variance is less than infinite.

The statistics of the estimate are:
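The weighted least squares estimate above is a one-line computation. A small sketch, with hypothetical measurement values and noise standard deviations made up for illustration:

```python
def weighted_ls_estimate(z: list[float], sigma: list[float]) -> float:
    """Maximum-likelihood / weighted-least-squares estimate of a constant x
    from measurements z[k] = x + n[k] with independent noise std sigma[k].
    Each weight is inversely proportional to the measurement noise variance."""
    num = sum(zk / sk**2 for zk, sk in zip(z, sigma))
    den = sum(1.0 / sk**2 for sk in sigma)
    return num / den

# Hypothetical data: two good measurements and one very poor one.
x_hat = weighted_ls_estimate([1.0, 1.2, 5.0], [0.1, 0.1, 10.0])
# The poor measurement (sigma = 10) still contributes, but with tiny weight,
# so x_hat stays near the average of the two good measurements.
```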

mean(x̂) = [ Σ_{k=1}^N z̄_k/σ_k^2 ] / [ Σ_{k=1}^N 1/σ_k^2 ] = x [ Σ_{k=1}^N 1/σ_k^2 ] / [ Σ_{k=1}^N 1/σ_k^2 ] = x

an unbiased estimate: the mean error is 0.

Var(x̂) = Var( Σ_{k=1}^N z_k/σ_k^2 ) / ( Σ_{k=1}^N 1/σ_k^2 )^2

since the central statistics of the z_k are those of the n_k, which are independent.

But note that the standard deviation of z_k is σ_k, so the standard deviation of z_k/σ_k is 1 and that of z_k/σ_k^2 is 1/σ_k. The variance of z_k/σ_k^2 is then 1/σ_k^2.

Var(x̂) = [ Σ_{k=1}^N 1/σ_k^2 ] / ( Σ_{k=1}^N 1/σ_k^2 )^2 = 1 / Σ_{k=1}^N (1/σ_k^2)

An easier way to remember this result is

1/σ_x̂^2 = Σ_{k=1}^N 1/σ_k^2

The addition of any new piece of data, no matter how large its variance, thus reduces the variance of x̂.

In the special case of equal quality data, σ_k = σ_n:

x̂ = [ Σ_{k=1}^N z_k/σ_n^2 ] / [ N/σ_n^2 ] = (1/N) Σ_{k=1}^N z_k

the ordinary average, or arithmetic mean, of the data.
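The variance-combination rule 1/σ_x̂^2 = Σ 1/σ_k^2, and the claim that any new measurement reduces the estimate variance, can be checked directly (the σ values below are hypothetical):

```python
def estimate_variance(sigma: list[float]) -> float:
    """Variance of the weighted-least-squares estimate: 1/Var = sum of 1/sigma_k^2."""
    return 1.0 / sum(1.0 / sk**2 for sk in sigma)

# Two unit-variance measurements give Var = 1/2.
v2 = estimate_variance([1.0, 1.0])
# Adding one very poor measurement (sigma = 100) still shrinks the variance,
# though only slightly.
v3 = estimate_variance([1.0, 1.0, 100.0])
```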

1/σ_x̂^2 = Σ_{k=1}^N 1/σ_n^2 = N/σ_n^2

σ_x̂^2 = σ_n^2/N

σ_x̂ = σ_n/√N

The standard deviation of the estimate (the average) goes down with the square root of the number of observations.

This estimator for x can be shown to be the optimum linear estimate of x in the mean squared error sense, for arbitrary distributions of the n_k, if x is treated as an arbitrary constant. That is, any other linear combination of the z_k will yield a larger mean squared difference between x̂ and x if x is an arbitrary constant. For normal noise, the minimum variance linear estimate is the minimum variance estimate.

An important factor which should bear on the estimation of x, and which has not yet been mentioned, is the possibility of some a priori information about x. Clearly if we already had a reasonably accurate notion of the value of x and then took some additional data points, say of poor quality, we certainly would not want to derive an estimate based simply on the new data and ignore the a priori information.

Example: Supplemental measurements

Take N_1 measurements starting with no prior information:

x̂_1 = [ Σ_{k=1}^{N_1} z_k/σ_k^2 ] / [ Σ_{k=1}^{N_1} 1/σ_k^2 ]

Later, we take more measurements, for a total of N:

x̂ = [ Σ_{k=1}^{N_1} z_k/σ_k^2 + Σ_{k=N_1+1}^{N} z_k/σ_k^2 ] / [ Σ_{k=1}^{N_1} 1/σ_k^2 + Σ_{k=N_1+1}^{N} 1/σ_k^2 ]
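The 1/√N behavior for equal-quality data can be stated in one line (the numeric values below are hypothetical):

```python
import math

def sigma_of_mean(sigma_n: float, N: int) -> float:
    """Std of the estimate from N equal-quality measurements: sigma_n / sqrt(N)."""
    return sigma_n / math.sqrt(N)

# Quadrupling the number of observations halves the estimate's standard deviation:
s4 = sigma_of_mean(2.0, 4)     # sigma_n = 2 with N = 4 gives 1.0
s100 = sigma_of_mean(1.0, 100) # 100 unit-sigma observations give 0.1
```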

Since

1/σ_x̂_1^2 = Σ_{k=1}^{N_1} 1/σ_k^2

and

x̂_1/σ_x̂_1^2 = Σ_{k=1}^{N_1} z_k/σ_k^2

this can be written as

x̂ = [ x̂_1/σ_x̂_1^2 + Σ_{k=N_1+1}^{N} z_k/σ_k^2 ] / [ 1/σ_x̂_1^2 + Σ_{k=N_1+1}^{N} 1/σ_k^2 ]

The prior estimate enters exactly like one more measurement, weighted by the inverse of its variance.
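The equivalence above — batch processing of all N measurements versus folding the later data into the earlier estimate treated as a single measurement with variance σ_x̂_1^2 — can be verified numerically. The two batches of data below are hypothetical:

```python
def wls(z: list[float], sigma: list[float]) -> tuple[float, float]:
    """Weighted-least-squares estimate and its variance."""
    den = sum(1.0 / s**2 for s in sigma)
    return sum(zk / s**2 for zk, s in zip(z, sigma)) / den, 1.0 / den

# Hypothetical data split into an early batch and a later batch.
z1, s1 = [1.0, 1.4], [0.5, 0.5]
z2, s2 = [0.8, 1.1, 1.3], [0.2, 0.4, 0.3]

# Batch estimate over all N measurements at once:
x_all, var_all = wls(z1 + z2, s1 + s2)

# Supplemental form: treat the first estimate as one more "measurement"
# whose standard deviation is the first estimate's own sigma, then fold
# in the later data. The result is identical to the batch estimate.
x1, var1 = wls(z1, s1)
x_rec, var_rec = wls([x1] + z2, [var1**0.5] + s2)
```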
