
Adaptive Control (English), Lecture 5



Stochastic Self-Tuners

1. Introduction
2. Minimum variance control
3. Estimation of noise models
4. Stochastic self-tuners
5. Feedforward control
6. Predictive control
7. Conclusions

Introduction

Same idea as before. (Block diagram: a self-tuning regulator, in which an estimator supplies process parameters to a controller-design block that sets the controller parameters; the signals shown are the reference, the process input, the output, and the specification.)

But now use:
- Design based on stochastic control theory
- Some very interesting results
- Here is where it started

Minimum Variance and Moving Average Control

- Motivation
- An Example
- The General Case

An Example

Process dynamics:

    y(t+1) + a y(t) = b u(t) + e(t+1) + c e(t)

If the parameters are known, the control law is

    u(t) = -((c - a)/b) y(t)

The output then becomes

    y(t) = e(t)

Notice:
- The output is white noise
- Prediction is very simple with the model
- Innovations representation
- Importance of |c| < 1

(c) K. J. Åström and B. Wittenmark
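The claim on this slide (with the minimum variance law the closed-loop output is white noise) is easy to check numerically. A minimal sketch in Python, using the parameter values a = -0.9, b = 3, c = -0.3 that appear in the example later in this lecture; the helper names are ours, not from the slides:

```python
import random

def simulate(a, b, c, controller, n=20000, seed=1):
    """Simulate y(t+1) + a*y(t) = b*u(t) + e(t+1) + c*e(t), with e ~ N(0, 1)."""
    rng = random.Random(seed)
    y, e_prev, ys = 0.0, 0.0, []
    for _ in range(n):
        u = controller(y)
        e = rng.gauss(0.0, 1.0)
        y = -a * y + b * u + e + c * e_prev   # one step of the process
        ys.append(y)
        e_prev = e
    return ys

a, b, c = -0.9, 3.0, -0.3
mv = lambda y: -((c - a) / b) * y            # minimum variance law from the slide
ys = simulate(a, b, c, mv)
n = len(ys)
var = sum(v * v for v in ys) / n                          # should be near E e^2 = 1
r1 = sum(ys[k + 1] * ys[k] for k in range(n - 1)) / n     # should be near 0
print(round(var, 2), round(r1, 2))
```

The sample variance comes out close to the innovation variance and the lag-1 covariance close to zero: the controller cancels everything predictable, so only the new innovation e(t+1) remains in the output.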


The Model

Process dynamics:

    x(t) = (B1(q)/A1(q)) u(t)

Disturbances:

    v(t) = (C1(q)/A2(q)) e(t)

Output:

    y(t) = x(t) + v(t) = (B1(q)/A1(q)) u(t) + (C1(q)/A2(q)) e(t)

We can write this as

    A(q) y(t) = B(q) u(t) + C(q) e(t)

The standard model!

The C-polynomial

Example: C(z) = z + 2. Spectral density:

    φ(e^{iωh}) = (1/2π) C(e^{iωh}) C(e^{-iωh})

But

    C(z) C(z^-1) = (z + 2)(z^-1 + 2) = 4 (z + 0.5)(z^-1 + 0.5)

The disturbance

    y(t) = e(t) + 2 e(t-1),   E e^2 = 1

can thus be represented as

    y(t) = ε(t) + 0.5 ε(t-1),   E ε^2 = 4

The General Case

- Process model:

      A(q) y(k) = B(q) u(k) + C(q) e(k)

  with deg A - deg B = d, deg C = n, C stable; SISO, innovations model
- Design criteria: minimize E(y^2 + ρu^2) under the condition that the closed-loop system is stable
- May assume any causal nonlinear controller

Prediction

Model (C stable):

    y(k+m) = (C(q)/A(q)) e(k+m)
           = (C*(q^-1)/A*(q^-1)) e(k+m)
           = F*(q^-1) e(k+m) + q^-m (G*(q^-1)/A*(q^-1)) e(k+m)

Predictor:

    ŷ(k+m | k) = (G*(q^-1)/C*(q^-1)) y(k) = (q G(q)/C(q)) y(k)

Prediction error:

    ỹ(k+m | k) = F*(q^-1) e(k+m)

Optimal predictor dynamics: C(q)
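The spectral-factorization step on this slide (reflect the unstable zero of C(z) = z + 2 inside the unit circle and rescale the noise variance) can be verified by comparing the covariance functions of the two MA(1) representations. A small sketch, with a helper name of our own:

```python
def ma1_cov(theta, s2):
    """Covariances of y(t) = e(t) + theta*e(t-1) with E e^2 = s2:
    r(0) = s2*(1 + theta^2), r(1) = s2*theta, r(tau) = 0 for tau > 1."""
    return (s2 * (1 + theta ** 2), s2 * theta)

unstable = ma1_cov(2.0, 1.0)   # C(z) = z + 2 with E e^2 = 1
stable = ma1_cov(0.5, 4.0)     # reflected zero, C(z) = z + 0.5 with E eps^2 = 4
print(unstable, stable)        # identical covariances: (5.0, 2.0)
```

Both representations give r(0) = 5 and r(1) = 2, so they describe the same stationary disturbance; the stable one is the innovations representation the self-tuner needs.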


Minimum Variance Control = Prediction

(Figure: the output y(t) and the input u(t) plotted against time; at time t the input u(t) is chosen so that the predicted output ŷ(t + d0 | t) is zero.)

Choose d0 = d and u(k) such that ŷ(k+d | k) = 0!

Minimum Variance Control

System with stable inverse:

    y(k) = (B(q)/A(q)) u(k) + (C(q)/A(q)) e(k)
         = (B*(q^-1)/A*(q^-1)) q^-d u(k) + (C*(q^-1)/A*(q^-1)) e(k)

Predict the output:

    y(k+d) = (C*(q^-1)/A*(q^-1)) e(k+d) + (B*(q^-1)/A*(q^-1)) u(k)
           = F*(q^-1) e(k+d) + (G*(q^-1)/A*(q^-1)) e(k) + (B*(q^-1)/A*(q^-1)) u(k)

Compute old innovations:

    e(k) = (A*/C*) y(k) - q^-d (B*/C*) u(k)

Minimum Variance Control, Cont'd

    y(k+d) = F* e(k+d) + (G*/C*) y(k) - q^-d (B*G*/(A*C*)) u(k) + (B*/A*) u(k)
           = F* e(k+d) + (G*/C*) y(k) + (B*F*/C*) u(k)

u(k) is a function of y(k), y(k-1), ... and u(k-1), u(k-2), .... Then

    E y^2(k+d) = E (F* e(k+d))^2 + E ((G*/C*) y(k) + (B*F*/C*) u(k))^2

It follows that

    E y^2(k+d) ≥ (1 + f1^2 + ... + f_{d-1}^2) σ^2

where σ^2 = E e^2. Equality is obtained for

    u(k) = -(G*(q^-1)/(B*(q^-1) F*(q^-1))) y(k) = -(G(q)/(B(q) F(q))) y(k)

the minimum variance controller.

Pole Placement Interpretation

Controller:

    u = -(G/(B F)) y

Process:

    A* y = q^-d B* u + C* e

Closed-loop system:

    (A* B* F* + q^-d B* G*) y = B* F* C* e

Hence R = B*F* and S = G*. We have

    C*(z^-1)/A*(z^-1) = F*(z^-1) + z^-m (G*(z^-1)/A*(z^-1))

We can also write this as

    A*(z^-1) F*(z^-1) + z^-m G*(z^-1) = C*(z^-1)

where deg F = n - 1. Hence

    A(z) F(z) + z^{n-m} G(z) = z^{n-1} C(z)
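The polynomials F* and G* in the identity C* = A* F* + q^-d G* come from d steps of long division of C* by A*. A sketch of that division (the function name and coefficient convention are ours), checked against the first-order example of this lecture:

```python
def predictor_polys(astar, cstar, d):
    """d steps of long division of C*(q^-1) by A*(q^-1):
    returns (F*, G*) such that C* = A* F* + q^-d G*,
    with deg F* = d - 1 and deg G* = deg A* - 1.
    astar, cstar: monic coefficient lists in rising powers of q^-1."""
    n = len(astar) - 1
    rem = list(cstar) + [0.0] * max(0, d + n - len(cstar))
    f = []
    for i in range(d):
        quot = rem[i]             # next coefficient of F*
        f.append(quot)
        for j, aj in enumerate(astar):
            rem[i + j] -= quot * aj
    return f, rem[d:d + n]        # what is left is q^-d G*

# first-order example from this lecture: A* = 1 + a q^-1, C* = 1 + c q^-1
a, c = -0.9, -0.3
f1, g1 = predictor_polys([1.0, a], [1.0, c], 1)  # F* = [1], G* = [c - a]
f2, g2 = predictor_polys([1.0, a], [1.0, c], 2)  # F* = [1, c - a], G* = [-a*(c - a)]
print(f1, g1, f2, g2)
```

For d = 1 this gives F* = 1 and G* = c - a, so the minimum variance law u = -(G*/(B*F*)) y reduces to u = -((c - a)/b) y, exactly the control law of the introductory example.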


Minimum Variance Self-tuners

- Simple in principle
- How to estimate? A y = B u + C e. Cheating!!
- An example
- Surprised!!
- A simple case
- A general result

Adaptive Control

Process dynamics:

    y(t+1) + a y(t) = b u(t) + e(t+1) + c e(t)

Estimate the parameter in the model

    y(t+1) = θ y(t) + u(t)

The least-squares estimate is

    θ̂(t) = Σ_{k=0}^{t-1} y(k) (y(k+1) - u(k)) / Σ_{k=0}^{t-1} y^2(k)

Control law:

    u(t) = -θ̂(t) y(t)

How to Estimate Noise Models

An example:

    y(t+1) + a y(t) = b u(t) + e(t+1) + c e(t)

A regression model:

    y(t+1) = -a y(t) + b u(t) + c e(t) + e(t+1)

We do not know e(t), but we can approximate it with the residual ε(t). Hence

    θ = (a, b, c)
    φ(t) = (-y(t), u(t), ε(t))
    ε(t) = y(t) - φᵀ(t-1) θ̂(t-1)
    θ̂(t) = θ̂(t-1) + K(t) ε(t)
    K(t) = P(t-1) φ(t-1) / (λ + φᵀ(t-1) P(t-1) φ(t-1))
    P(t) = (I - K(t) φᵀ(t-1)) P(t-1) / λ

An Example

Consider

    y(t+1) + a y(t) = b u(t) + e(t+1) + c e(t)

with a = -0.9, b = 3, and c = -0.3. The minimum variance controller is

    u(t) = -0.2 y(t)

Initial estimates: â(0) = ĉ(0) = 0 and b̂(0) = 1.

(Figures: simulation over 500 time steps of the output y and the input u, and the accumulated loss for self-tuning control compared with minimum variance control.)
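The recursive equations above are extended least squares: the unknown innovation e(t) is replaced by the residual ε(t) in the regressor. A stdlib-only sketch with λ = 1 (the 3x3 matrix algebra is written out by hand, and the variable names are ours), run in open loop on the example system with a white-noise input:

```python
import random

def els(ys, us, lam=1.0):
    """Recursive extended least squares for
    y(t+1) = -a*y(t) + b*u(t) + c*eps(t) + e(t+1), theta = (a, b, c)."""
    th = [0.0, 1.0, 0.0]          # initial estimates a^ = c^ = 0, b^ = 1
    P = [[100.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
    eps_prev = 0.0
    for t in range(len(us)):
        phi = [-ys[t], us[t], eps_prev]                  # phi = (-y, u, eps)
        eps = ys[t + 1] - sum(p * q for p, q in zip(phi, th))
        Pphi = [sum(P[i][j] * phi[j] for j in range(3)) for i in range(3)]
        denom = lam + sum(phi[i] * Pphi[i] for i in range(3))
        K = [v / denom for v in Pphi]                    # gain K = P*phi/denom
        th = [th[i] + K[i] * eps for i in range(3)]
        P = [[(P[i][j] - K[i] * Pphi[j]) / lam for j in range(3)]
             for i in range(3)]
        eps_prev = eps
    return th

# open-loop data from y(t+1) + a*y(t) = b*u(t) + e(t+1) + c*e(t)
rng = random.Random(2)
a, b, c = -0.9, 3.0, -0.3
ys, us, e_prev = [0.0], [], 0.0
for _ in range(20000):
    u = rng.gauss(0.0, 1.0)
    e = rng.gauss(0.0, 1.0)
    ys.append(-a * ys[-1] + b * u + e + c * e_prev)
    us.append(u)
    e_prev = e
ah, bh, ch = els(ys, us)
print(round(ah, 2), round(bh, 2), round(ch, 2))   # estimates near (-0.9, 3, -0.3)
```

With a persistently exciting input the estimates settle near the true (a, b, c); the c-estimate is the slow one, which is why estimating the noise model is the delicate part of indirect stochastic self-tuning.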


A Direct Self-tuner

Process:

    y(t+1) + a y(t) = b u(t) + e(t+1) + c e(t)

Parameters: a = -0.9, b = 3, and c = -0.3. Direct self-tuner based on

    y(t+1) = r0 u(t) + s0 y(t)

with fixed r0 = 1. Control law:

    u(t) = -(ŝ0/r̂0) y(t)

(Figures: the estimated gain ŝ0/r̂0 over 500 time steps, and the accumulated loss for self-tuning control compared with minimum variance control.)

Explanation

Explain the surprising result. The estimate is

    θ̂(t) = Σ_{k=0}^{t-1} y(k) (y(k+1) - u(k)) / Σ_{k=0}^{t-1} y^2(k)

and the control law is u(t) = -θ̂(t) y(t). Properties:

    (1/t) Σ_{k=0}^{t-1} y(k+1) y(k) = (1/t) Σ_{k=0}^{t-1} (θ̂(t) y^2(k) + u(k) y(k))
                                    = (1/t) Σ_{k=0}^{t-1} (θ̂(t) - θ̂(k)) y^2(k)

so

    r̂_y(1) = lim_{t→∞} (1/t) Σ_{k=0}^{t-1} y(k+1) y(k) = 0

The Direct Self-tuner

Estimate parameters in

    y(t+d) = R*(q^-1) u_f(t) + S*(q^-1) y_f(t)
    R*(q^-1) = r0 + r1 q^-1 + ... + rk q^-k
    S*(q^-1) = s0 + s1 q^-1 + ... + sl q^-l
    u_f(t) = (Q*(q^-1)/P*(q^-1)) u(t)
    y_f(t) = (Q*(q^-1)/P*(q^-1)) y(t)

with least squares. Use the control law

    R*(q^-1) u(t) = -S*(q^-1) y(t)

Notice: d and the sampling period are key design parameters.

Direct Self-tuners

Use the direct self-tuner with Q/P = 1. The parameter r0 = b0 is either fixed or estimated.

Property 1: If the regression vectors are bounded, the closed-loop system has the properties (time-average covariances)

    r_y(τ) = 0,    τ = d, d+1, ..., d+l
    r_yu(τ) = 0,   τ = d, d+1, ..., d+k

where k = deg R* and l = deg S*.

Property 2: If the process is described by

    A(q) y = B(q) u(t) + C(q) e(t)

and if min(k, l) ≥ n - 1, then

    r_y(τ) = 0,    τ = d, d+1, ...

If the parameters converge we will thus obtain moving average control!
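The scalar direct self-tuner on this slide can be simulated directly: estimate s0 by least squares with r0 fixed to 1, and close the loop with u(t) = -ŝ0(t) y(t). Even though the fixed gain is wrong (the true b is 3), the lag-1 output covariance is driven toward zero and the gain settles near the minimum variance value (c - a)/b = 0.2, as the Explanation above predicts. A sketch, with our own variable names and a small regularization of the denominator:

```python
import random

rng = random.Random(3)
a, b, c = -0.9, 3.0, -0.3           # true process from the example
s0_hat, num, den = 0.0, 0.0, 10.0   # den starts > 0 to regularize early estimates
y, e_prev, ys = 0.0, 0.0, []
for t in range(20000):
    u = -s0_hat * y                 # direct self-tuning law with r0 fixed to 1
    e = rng.gauss(0.0, 1.0)
    y_next = -a * y + b * u + e + c * e_prev
    num += y * (y_next - u)         # least-squares estimate from the slide
    den += y * y
    s0_hat = num / den
    ys.append(y_next)
    y, e_prev = y_next, e
tail = ys[10000:]                   # discard the adaptation transient
m = len(tail)
r0_cov = sum(v * v for v in tail) / m
r1_cov = sum(tail[k + 1] * tail[k] for k in range(m - 1)) / m
print(round(s0_hat, 2), round(r1_cov / r0_cov, 2))
```

The normalized lag-1 covariance of the tail is close to zero: the self-tuner has found the law that makes y(t+1) uncorrelated with y(t), which for this system is minimum variance control, despite the model bias r0 = 1 ≠ b.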


Integrator with Time Delay

    A(q) = q (q - 1)
    B(q) = (h - τ) q + τ = (h - τ)(q + τ/(h - τ))
    C(q) = q (q + c)

Minimum phase if τ < h/2.

(Figure a: controller with d = 1; output y and input u over 400 time steps, with τ changed from 0.4 to 0.6 at time 100.)

(Figure b: controller with d = 2; output y and input u over 400 time steps.)

Feedforward

Easy to include feedforward! Estimate parameters in

    y(t+d) = R*(q^-1) u_f(t) + S*(q^-1) y_f(t) + S*_ff(q^-1) v_f(t)

where v_f is the filtered feedforward signal. Control law:

    R̂*(q^-1) u(t) = -Ŝ*(q^-1) y(t) - Ŝ*_ff(q^-1) v(t)

Feedforward has proven very useful in applications! Discuss why!

Command signals can also be included:

    R̂*(q^-1) u(t) = T*(q^-1) u_c(t) - Ŝ*(q^-1) y(t)

where u_c is the command signal (set point, reference signal). Command signals and feedforward can be combined.

Observations

- Indirect self-tuners require estimation of the C-polynomial
- Direct self-tuners have unexpectedly nice properties
- Self-tuners drive covariances to zero
- Compare PI control
- The number of covariances driven to zero depends on the number of parameters
- With sufficiently many parameters we obtain moving average control
- The parameters do not necessarily converge
- Design parameters are the prediction horizon d, the sampling period, and the number of parameters in the R and S polynomials
- It is easy to include feedforward
- Easy to check in operation
- Performance assessment
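To make the feedforward idea concrete, here is a small hypothetical sketch: the first-order plant, the disturbance gain dv = 2, and all variable names are our own inventions, not from the slides. A measurable disturbance v enters the process; recursive least squares estimates s0 and s_ff with r0 fixed to the known b, and the control law cancels both the feedback and the feedforward terms:

```python
import random

rng = random.Random(4)
a, b, dv = -0.9, 3.0, 2.0    # hypothetical plant: y(t+1) = -a*y + b*u + dv*v + e(t+1)
th = [0.0, 0.0]              # estimates of (s0, s_ff) in y(t+1) = b*u + s0*y + s_ff*v
P = [[10.0, 0.0], [0.0, 10.0]]
y, ys = 0.0, []
for t in range(20000):
    v = rng.gauss(0.0, 1.0)              # measurable disturbance
    u = -(th[0] * y + th[1] * v) / b     # feedback + feedforward, r0 = b fixed
    e = rng.gauss(0.0, 1.0)
    y_next = -a * y + b * u + dv * v + e
    # recursive least squares on phi = (y, v); target is y(t+1) - b*u
    phi = [y, v]
    Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
            P[1][0] * phi[0] + P[1][1] * phi[1]]
    denom = 1.0 + phi[0] * Pphi[0] + phi[1] * Pphi[1]
    K = [Pphi[0] / denom, Pphi[1] / denom]
    err = (y_next - b * u) - (th[0] * phi[0] + th[1] * phi[1])
    th = [th[0] + K[0] * err, th[1] + K[1] * err]
    P = [[P[i][j] - K[i] * Pphi[j] for j in range(2)] for i in range(2)]
    ys.append(y_next)
    y = y_next
tail = ys[10000:]
var_tail = sum(v * v for v in tail) / len(tail)
print(round(th[0], 2), round(th[1], 2), round(var_tail, 2))
```

The estimates settle near (-a, dv) = (0.9, 2.0), and the output variance drops to roughly the innovation variance: once the disturbance path is identified, the feedforward term removes its effect before it ever reaches the output, which is why feedforward is so useful in applications.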
