
Course study material for "Automation Instrumentation and Process Control": Mathematical Control Theory

Chapter 1

Introduction

1.1 What Is Mathematical Control Theory?

Mathematical control theory is the area of application-oriented mathematics that deals with the basic principles underlying the analysis and design of control systems. To control an object means to influence its behavior so as to achieve a desired goal. In order to implement this influence, engineers build devices that incorporate various mathematical techniques. These devices range from Watt's steam engine governor, designed during the English Industrial Revolution, to the sophisticated microprocessor controllers found in consumer items (such as CD players and automobiles) or in industrial robots and airplane autopilots.

The study of these devices and their interaction with the object being controlled is the subject of this book. While on the one hand one wants to understand the fundamental limitations that mathematics imposes on what is achievable, irrespective of the precise technology being used, it is also true that technology may well influence the type of question to be asked and the choice of mathematical model. An example of this is the use of difference rather than differential equations when one is interested in digital control.

Roughly speaking, there have been two main lines of work in control theory, which sometimes have seemed to proceed in very different directions but which are in fact complementary. One of these is based on the idea that a good model of the object to be controlled is available and that one wants to somehow optimize its behavior. For instance, physical principles and engineering specifications can be (and are) used in order to calculate that trajectory of a spacecraft which minimizes total travel time or fuel consumption. The techniques here are closely related to the classical calculus of variations and to other areas of optimization theory; the end result is typically a preprogrammed flight plan. The other main line of work is that based on the constraints imposed by uncertainty about the model or about the environment in which the object operates. The central tool here is the use of feedback in order to correct for deviations from the desired behavior. For instance, various feedback control systems are used during actual space flight in order to compensate for errors from the precomputed trajectory. Mathematically, stability theory, dynamical systems, and especially the theory of functions of a complex variable, have had a strong influence on this approach. It is widely recognized today that these two broad lines of work deal just with different aspects of the same problems, and we do not make an artificial distinction between them in this book.

Later on we shall give an axiomatic definition of what we mean by a "system" or "machine." Its role will be somewhat analogous to that played in mathematics by the definition of "function" as a set of ordered pairs: not itself the object of study, but a necessary foundation upon which the entire theoretical development will rest. In this Chapter, however, we dispense with precise definitions and will use a very simple physical example in order to give an intuitive presentation of some of the goals, terminology, and methodology of control theory. The discussion here will be informal and not rigorous, but the reader is encouraged to follow it in detail, since the ideas to be given underlie everything else in the book. Without them, many problems may look artificial. Later, we often refer back to this Chapter for motivation.


1.2 Proportional-Derivative Control

One of the simplest problems in robotics is that of controlling the position of a single-link rotational joint using a motor placed at the pivot. Mathematically, this is just a pendulum to which one can apply a torque as an external force (see Figure 1.1).

Figure 1.1: Pendulum.

We assume that friction is negligible, that all of the mass is concentrated at the end, and that the rod has unit length. From Newton's law for rotating objects, there results, in terms of the variable θ that describes the counterclockwise angle with respect to the vertical, the second-order nonlinear differential equation

\[
m\ddot{\theta}(t) + mg\sin\theta(t) = u(t), \tag{1.1}
\]

where m is the mass, g the acceleration due to gravity, and u(t) the value of the external torque at time t (counterclockwise being positive). We call u(·) the input or control function. To avoid having to keep track of constants, let us assume that units of time and distance have been chosen so that m = g = 1.
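Before proceeding, it may help to see (1.1) in executable form. The following is a minimal simulation sketch, not part of the original text; the function names and the initial condition are our own choices, and SciPy's `solve_ivp` is used as the integrator.

```python
# A minimal simulation sketch of equation (1.1) with m = g = 1.
import numpy as np
from scipy.integrate import solve_ivp

def pendulum(t, x, u):
    theta, theta_dot = x
    # theta_ddot = u(t) - sin(theta), from (1.1) with m = g = 1
    return [theta_dot, u(t) - np.sin(theta)]

# Start slightly away from the upright position theta = pi, with no control.
sol = solve_ivp(pendulum, (0, 5), [np.pi + 0.05, 0.0], args=(lambda t: 0.0,))
print(sol.y[0][-1] - np.pi)  # the deviation grows: the upright equilibrium is unstable
```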


The vertical stationary position (θ = π, θ̇ = 0) is an equilibrium when no control is being applied (u ≡ 0), but a small deviation from this will result in an unstable motion. Let us assume that our objective is to apply torques as needed to correct for such deviations. For small θ − π,

\[
\sin\theta = -(\theta - \pi) + o(\theta - \pi).
\]

Here we use the standard "little-o" notation: o(x) stands for some function g(x) for which

\[
\lim_{x\to 0} \frac{g(x)}{x} = 0.
\]

Since only small deviations are of interest, we drop the nonlinear part represented by the term o(θ − π). Thus, with ϕ := θ − π as a new variable, we replace equation (1.1) by the linear differential equation

\[
\ddot{\varphi}(t) - \varphi(t) = u(t) \tag{1.2}
\]

as our object of study. (See Figure 1.2.) Later we will analyze the effect of the ignored nonlinearity.

Figure 1.2: Inverted pendulum.
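To make the approximation step concrete, here is a small numerical check (our own illustration, not the book's, and no substitute for the later analysis): in the coordinate ϕ = θ − π, the uncontrolled nonlinear equation reads ϕ̈ = sin ϕ, and its solutions stay close to those of the linearization (1.2) for small initial deviations.

```python
# Compare the uncontrolled nonlinear dynamics (phi_ddot = sin(phi), i.e. (1.1)
# rewritten with phi = theta - pi and u = 0) against the linearization (1.2),
# phi_ddot = phi. A rough check of the linearization step, not a proof.
import numpy as np
from scipy.integrate import solve_ivp

t = np.linspace(0, 2, 50)
x0 = [0.01, 0.0]  # small initial angle deviation, zero velocity
nonlin = solve_ivp(lambda s, x: [x[1], np.sin(x[0])], (0, 2), x0, t_eval=t)
lin    = solve_ivp(lambda s, x: [x[1], x[0]],         (0, 2), x0, t_eval=t)
print(np.max(np.abs(nonlin.y[0] - lin.y[0])))  # tiny compared with 0.01
```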


Our objective then is to bring ϕ and ϕ̇ to zero, for any small nonzero initial values ϕ(0), ϕ̇(0) in equation (1.2), and preferably to do so as fast as possible, with few oscillations, and without ever letting the angle and velocity become too large. Although this is a highly simplified system, this kind of "servo" problem illustrates what is done in engineering practice. One typically wants to achieve a desired value for certain variables, such as the correct idling speed in an automobile's electronic ignition system or the position of the read/write head in a disk drive controller.

A naive first attempt at solving this control problem would be as follows: If we are to the left of the vertical, that is, if ϕ = θ − π > 0, then we wish to move to the right, and therefore, we apply a negative torque. If instead we are to the right, we apply a positive, that is to say counterclockwise, torque. In other words, we apply proportional feedback

\[
u(t) = -\alpha\varphi(t), \tag{1.3}
\]

where α is some positive real number, the feedback gain.

Let us analyze the resulting closed-loop equation obtained when the value of the control given by (1.3) is substituted into the open-loop original equation (1.2), that is,

\[
\ddot{\varphi}(t) - \varphi(t) + \alpha\varphi(t) = 0. \tag{1.4}
\]

If α > 1, the solutions of this differential equation are all oscillatory, since the roots of the associated characteristic equation

\[
z^2 + \alpha - 1 = 0 \tag{1.5}
\]

are purely imaginary, z = ±i√(α − 1). If instead α < 1, the roots are real and one of them is positive, so solutions generally diverge: for instance, if ϕ(0) > 0 and ϕ̇(0) ≥ 0, then ϕ̈(0) = (1 − α)ϕ(0) > 0. Therefore, also ϕ̇ and hence ϕ increase, and the pendulum moves away, rather than toward, the vertical position. When α > 1 the problem is more subtle: The torque is being applied in the correct direction to counteract the natural instability of the pendulum, but this feedback helps build too much inertia. In particular, when already close to ϕ(0) = 0 but moving at a relatively large speed, the controller (1.3) keeps pushing toward the vertical, and overshoot and eventual oscillation result.
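The two regimes can be read off directly from the characteristic equation (1.5); a two-line check (our own sketch) with NumPy:

```python
# Roots of z^2 + (alpha - 1) = 0 from (1.4)-(1.5), for gains on either side of 1.
import numpy as np

for alpha in (0.5, 2.0):
    print(alpha, np.roots([1, 0, alpha - 1]))
# alpha = 0.5: real roots +-sqrt(1/2); the positive root means divergence.
# alpha = 2.0: purely imaginary roots +-i; undamped oscillation, no decay.
```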


The obvious solution is to keep α > 1 but to modify the proportional feedback (1.3) through the addition of a term that acts as a brake, penalizing velocities. In other words, one needs to add damping to the system. We arrive then at a PD, or proportional-derivative, feedback law,

\[
u(t) = -\alpha\varphi(t) - \beta\dot{\varphi}(t), \tag{1.6}
\]

with α > 1 and β > 0. In practice, implementing such a controller involves measurement of both the angular position and the velocity. If only the former is easily available, then one must estimate the velocity as part of the control algorithm; this will lead later to the idea of observers, which are techniques for reliably performing such an estimation. We assume here that ϕ̇ can indeed be measured. Consider then the resulting closed-loop system,

\[
\ddot{\varphi}(t) + \beta\dot{\varphi}(t) + (\alpha - 1)\varphi(t) = 0. \tag{1.7}
\]

The roots of its associated characteristic equation

\[
z^2 + \beta z + \alpha - 1 = 0 \tag{1.8}
\]

are

\[
\frac{-\beta \pm \sqrt{\beta^2 - 4(\alpha - 1)}}{2},
\]

both of which have negative real parts. Thus all the solutions of (1.2) converge to zero. The system has been stabilized under feedback. This convergence may be oscillatory, but if we design the controller in such a way that in addition to the above conditions on α and β it is true that

\[
\beta^2 > 4(\alpha - 1), \tag{1.9}
\]

then all of the solutions are combinations of decaying exponentials and no oscillation results.
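As a sanity check (our own sketch; the particular gains are an arbitrary choice satisfying α > 1, β > 0 and condition (1.9)), one can simulate the closed loop (1.7) and confirm non-oscillatory decay:

```python
# Simulate the PD closed loop (1.7) with gains satisfying alpha > 1, beta > 0
# and beta^2 > 4*(alpha - 1), so the roots in (1.8) are real and negative.
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta = 2.0, 3.0                       # beta^2 = 9 > 4*(alpha - 1) = 4
print(np.roots([1, beta, alpha - 1]))        # two negative real roots

rhs = lambda t, x: [x[1], -beta * x[1] - (alpha - 1) * x[0]]
sol = solve_ivp(rhs, (0, 10), [0.1, 0.0], t_eval=np.linspace(0, 10, 6))
print(sol.y[0])                              # decays toward zero without oscillating
```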


We conclude from the above discussion that through a suitable choice of the gains α and β it is possible to attain the desired behavior, at least for the linearized model. That this same design will still work for the original nonlinear model, and, hence, assuming that this model was accurate, for a real pendulum, is due to what is perhaps the most important fact in control theory (and for that matter in much of mathematics), namely that first-order approximations are sufficient to characterize local behavior. Informally, we have the following linearization principle:

Designs based on linearizations work locally for the original system.

The term "local" refers to the fact that satisfactory behavior only can be expected for those initial conditions that are close to the point about which the linearization was made. Of course, as with any "principle," this is not a theorem. It can only become so when precise meanings are assigned to the various terms and proper technical assumptions are made. Indeed, we will invest some effort in this text to isolate cases where this principle may be made rigorous. One of these cases will be that of stabilization, and the theorem there will imply that if we can stabilize the linearized system (1.2) for a certain choice of parameters α, β in the law (1.6), then the same control law does bring initial conditions of (1.1) that start close to θ = π, θ̇ = 0 to the vertical equilibrium.

Basically because of the linearization principle, a great deal of the literature in control theory deals exclusively with linear systems. From an engineering point of view, local solutions to control problems are often enough; when they are not, ad hoc methods sometimes may be used in order to "patch" together such local solutions, a procedure called gain scheduling. Sometimes, one may even be lucky and find a way to transform the problem of interest into one that is globally linear; we explain this later using again the pendulum as an example. In many other cases, however, a genuinely nonlinear approach is needed, and much research effort during the past few years has been directed toward that goal. In this text, when we develop the basic definitions and results for the linear theory we will always do so with an eye toward extensions to nonlinear, global, results.

An Exercise

As remarked earlier, proportional control (1.3) by itself is inadequate for the original nonlinear model. Using again ϕ = θ − π, the closed-loop equation becomes

\[
\ddot{\varphi}(t) - \sin\varphi(t) + \alpha\varphi(t) = 0. \tag{1.10}
\]

The next exercise claims that solutions of this equation typically will not approach zero, no matter how the feedback gain α is picked.

Exercise 1.2.1 Assume that α is any fixed real number, and consider the ("energy") function of two real variables

\[
V(x, y) := \cos x - 1 + \frac{1}{2}\left(\alpha x^2 + y^2\right). \tag{1.11}
\]

Show that V(ϕ(t), ϕ̇(t)) is constant along the solutions of (1.10). Using that V(x, 0) is an analytic function and therefore that its zero at x = 0 is isolated, conclude that there are initial conditions of the type ϕ(0) = ε, ϕ̇(0) = 0, with ε arbitrarily small, for which the corresponding solution of (1.10) does not satisfy that ϕ(t) → 0 and ϕ̇(t) → 0 as t → ∞. ✷
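A quick numerical experiment (ours, and of course no substitute for the proof the exercise asks for) is consistent with the claim that V is conserved along solutions of (1.10):

```python
# Evaluate V from (1.11) along a numerically integrated solution of (1.10).
import numpy as np
from scipy.integrate import solve_ivp

alpha = 2.0
V = lambda x, y: np.cos(x) - 1 + 0.5 * (alpha * x**2 + y**2)

# (1.10) rewritten as a first-order system: phi_ddot = sin(phi) - alpha*phi
rhs = lambda t, x: [x[1], np.sin(x[0]) - alpha * x[0]]
sol = solve_ivp(rhs, (0, 20), [0.1, 0.0], rtol=1e-10, atol=1e-12,
                t_eval=np.linspace(0, 20, 5))
print(V(sol.y[0], sol.y[1]))  # the same value at every sampled time, up to solver error
```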


1.3 Digital Control

The actual physical implementation of (1.6) need not concern us here, but some remarks are in order. Assuming again that the values ϕ(t) and ϕ̇(t), or equivalently θ(t) and θ̇(t), can be measured, it is necessary to take a linear combination of these in order to determine the torque u(t) that the motor must apply. Such combinations are readily carried out by circuits built out of devices called operational amplifiers. Alternatively, the damping term can be separately implemented directly through the use of an appropriate device (a "dashpot"), and the torque is then made proportional to ϕ(t).

A more modern alternative, attractive especially for larger systems, is to convert position and velocity to digital form and to use a computer to calculate the necessary controls. Still using the linearized inverted pendulum as an illustration, we now describe some of the mathematical problems that this leads to.

A typical approach to computer control is based on the sample-and-hold technique, which can be described as follows. The values ϕ(t) and ϕ̇(t) are measured only at discrete instants or sampling times 0, δ, 2δ, 3δ, …, kδ, … The control law is updated by a program at each time t = kδ on the basis of the sampled values ϕ(kδ) and ϕ̇(kδ). The output of this program, a value v_k, is then fed into the system as a control (held constant at that value) during the interval [kδ, kδ + δ].

Figure 1.3: Sampled control.

For simplicity we assume here that the computation of v_k can be done quickly relative to the length δ of the sampling intervals; otherwise, the model must be modified to account for the extra delay. To calculate the effect of applying the constant control

\[
u(t) \equiv v_k \quad \text{if } t \in [k\delta, k\delta + \delta], \tag{1.12}
\]

we solve the differential equation (1.2) with this function u. By differentiation one can verify that the general solution is, for t ∈ [kδ, kδ + δ],

\[
\varphi(t) = \frac{\varphi(k\delta) + \dot{\varphi}(k\delta) + v_k}{2}\, e^{t - k\delta} + \frac{\varphi(k\delta) - \dot{\varphi}(k\delta) + v_k}{2}\, e^{-(t - k\delta)} - v_k, \tag{1.13}
\]

so

\[
\dot{\varphi}(t) = \frac{\varphi(k\delta) + \dot{\varphi}(k\delta) + v_k}{2}\, e^{t - k\delta} - \frac{\varphi(k\delta) - \dot{\varphi}(k\delta) + v_k}{2}\, e^{-(t - k\delta)}. \tag{1.14}
\]

Thus, applying the constant control u gives rise to new values for ϕ(kδ + δ) and ϕ̇(kδ + δ) at the end of the interval via the formula

\[
\begin{pmatrix} \varphi(k\delta + \delta) \\ \dot{\varphi}(k\delta + \delta) \end{pmatrix}
= A \begin{pmatrix} \varphi(k\delta) \\ \dot{\varphi}(k\delta) \end{pmatrix} + B v_k, \tag{1.15}
\]

where

\[
A = \begin{pmatrix} \cosh\delta & \sinh\delta \\ \sinh\delta & \cosh\delta \end{pmatrix} \tag{1.16}
\]

and

\[
B = \begin{pmatrix} \cosh\delta - 1 \\ \sinh\delta \end{pmatrix}. \tag{1.17}
\]
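The matrices (1.16)-(1.17) can be double-checked numerically: they are the exact sample-and-hold discretization of the state-space form ẋ = A_c x + B_c u of (1.2), with A_c = [[0, 1], [1, 0]] and B_c = (0, 1)ᵀ, obtainable from one matrix exponential of an augmented matrix (a standard trick; the sketch below is ours, with δ = 0.5 an arbitrary choice).

```python
# Verify (1.16)-(1.17): A = exp(delta*Ac) and B = (int_0^delta exp(s*Ac) ds) Bc,
# where Ac, Bc describe phi_ddot - phi = u in state-space form.
import numpy as np
from scipy.linalg import expm

delta = 0.5
Ac = np.array([[0.0, 1.0], [1.0, 0.0]])     # state x = (phi, phi_dot)
Bc = np.array([[0.0], [1.0]])

# The exponential of the augmented matrix packs A and B into one computation.
M = expm(delta * np.block([[Ac, Bc], [np.zeros((1, 3))]]))
A, B = M[:2, :2], M[:2, 2:]
print(A)                                     # [[cosh, sinh], [sinh, cosh]] at delta
print(B.ravel(), (np.cosh(delta) - 1, np.sinh(delta)))
```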


In other words, if we let x_0, x_1, … denote the sequence of two-dimensional vectors

\[
x_k := \begin{pmatrix} \varphi(k\delta) \\ \dot{\varphi}(k\delta) \end{pmatrix},
\]

then {x_k} satisfies the recursion

\[
x_{k+1} = A x_k + B v_k. \tag{1.18}
\]

Assume now that we wish to program our computer to calculate the constant control values v_k to be applied during any interval via a linear transformation

\[
v_k := F x_k \tag{1.19}
\]

of the measured values of position and velocity at the start of the interval. Here F is just a row vector (f_1, f_2) that gives the coefficients of a linear combination of these measured values. Formally we are in a situation analogous to the PD control (1.6), except that we now assume that the measurements are being made only at discrete times and that a constant control will be applied on each interval. Substituting (1.19) into the difference equation (1.18), there results the new difference equation

\[
x_{k+1} = (A + BF)\, x_k. \tag{1.20}
\]

Since for any k

\[
x_{k+2} = (A + BF)^2 x_k, \tag{1.21}
\]

it follows that, if one finds gains f_1 and f_2 with the property that the matrix A + BF is nilpotent, that is,

\[
(A + BF)^2 = 0, \tag{1.22}
\]

then we would have a controller with the property that after two sampling steps necessarily x_{k+2} = 0. That is, both ϕ and ϕ̇ vanish after these two steps, and the system remains at rest after that. This is the objective that we wanted to achieve all along. We now show that this choice of gains is always possible. Consider the characteristic polynomial

\[
\det(zI - A - BF) = z^2 + \left(-2\cosh\delta - f_2\sinh\delta - f_1\cosh\delta + f_1\right) z - f_1\cosh\delta + 1 + f_1 + f_2\sinh\delta. \tag{1.23}
\]

It follows from the Cayley-Hamilton Theorem that condition (1.22) will hold provided that this polynomial reduces to just z². So we need to solve for the f_i the system of equations resulting from setting the coefficient of z and the constant term to zero. This gives

\[
f_1 = -\frac{1}{2}\,\frac{2\cosh\delta - 1}{\cosh\delta - 1}
\quad\text{and}\quad
f_2 = -\frac{1}{2}\,\frac{2\cosh\delta + 1}{\sinh\delta}. \tag{1.24}
\]

We conclude that it is always possible to find a matrix F as desired. In other words, using sampled control we have been able to achieve stabilization of the system. Moreover, this stability is of a very strong type, in that, at least theoretically, it is possible to bring the position and velocity exactly to zero in finite time, rather than only asymptotically as with a continuous-time controller. This strong type of stabilization is often called deadbeat control; its possibility (together with ease of implementation and maintenance, and reliability) constitutes a major advantage of digital techniques.
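Putting the pieces together (our sketch; δ = 0.5 is again an arbitrary sampling period), one can confirm both the nilpotency (1.22) and the two-step deadbeat behavior:

```python
# Build A, B from (1.16)-(1.17) and F from (1.24); check (A + B F)^2 = 0.
import numpy as np

d = 0.5
A = np.array([[np.cosh(d), np.sinh(d)], [np.sinh(d), np.cosh(d)]])
B = np.array([[np.cosh(d) - 1.0], [np.sinh(d)]])
F = np.array([[-0.5 * (2 * np.cosh(d) - 1) / (np.cosh(d) - 1),
               -0.5 * (2 * np.cosh(d) + 1) / np.sinh(d)]])

M = A + B @ F
print(np.round(M @ M, 12))        # the zero matrix: M is nilpotent, as in (1.22)

x = np.array([[0.3], [-0.1]])     # an arbitrary initial state (phi, phi_dot)
print((M @ (M @ x)).ravel())      # x_{k+2} = 0: at rest after two sampling steps
```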


1.4 Feedback Versus Precomputed Control

Note that the first solution that we provided to the pendulum control problem was in the form of a feedback law (1.6), where u(t) could be calculated in terms of the current position and velocity, which are "fed back" after suitable weightings. This is in contrast to "open-loop" design, where the expression of the entire control function u(·) is given in terms of the initial conditions ϕ(0), ϕ̇(0), and one applies this function u(·) blindly thereafter, with no further observation of positions and velocities. In real systems there will be random perturbations that are not accounted for in the mathematical model. While a feedback law will tend to correct automatically for these, a precomputed control takes no account of them. This can be illustrated by the following simple examples.

Assume that we are only interested in the problem of controlling (1.2) when starting from the initial position ϕ(0) = 1 and velocity ϕ̇(0) = −2. Some trial and error gives us that the control function

\[
u(t) = 3e^{-2t} \tag{1.25}
\]

is adequate for this purpose, since the solution when applying this forcing term is

\[
\varphi(t) = e^{-2t}.
\]

It is certainly true that ϕ(t) and its derivative approach zero, actually rather quickly. So (1.25) solves the original problem. If we made any mistakes in estimating the initial velocity, however, the control (1.25) is no longer very useful:

Exercise 1.4.1 Show that if the differential equation (1.2) is again solved with the right-hand side equal to (1.25) but now using instead the initial conditions

\[
\varphi(0) = 1, \quad \dot{\varphi}(0) = -2 + \varepsilon,
\]

where ε is any positive number (no matter how small), then the solution satisfies

\[
\lim_{t\to+\infty} \varphi(t) = +\infty.
\]

If ε < 0, show that then the limit is −∞. ✷
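Exercise 1.4.1 can also be explored numerically before it is proved. In the sketch below (ours), tight solver tolerances matter, because any numerical error is itself amplified like eᵗ by the unstable open-loop dynamics.

```python
# Apply the precomputed control (1.25) to (1.2) with exact and with slightly
# perturbed initial velocity; even eps = 1e-3 is eventually amplified like e^t.
import numpy as np
from scipy.integrate import solve_ivp

u = lambda t: 3.0 * np.exp(-2.0 * t)                 # the open-loop control (1.25)
rhs = lambda t, x: [x[1], x[0] + u(t)]               # (1.2): phi_ddot = phi + u

for eps in (0.0, 1e-3):
    sol = solve_ivp(rhs, (0, 10), [1.0, -2.0 + eps], rtol=1e-10, atol=1e-12)
    print(eps, sol.y[0, -1])   # ~0 for eps = 0; roughly (eps/2)*e^10 otherwise
```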


A variation of this is as follows. Suppose that we measured correctly the initial conditions but that a momentary power surge affects the motor controlling the pendulum. To model this, we assume that the differential equation is now

\[
\ddot{\varphi}(t) - \varphi(t) = u(t) + d(t), \tag{1.26}
\]

and that the disturbance d(·) is the function

\[
d(t) = \begin{cases} \varepsilon & \text{if } t \in [1, 2], \\ 0 & \text{otherwise.} \end{cases}
\]

Here ε is some positive real number.

Exercise 1.4.2 Show that the solution of (1.26) with initial conditions ϕ(0) = 1, ϕ̇(0) = −2, and u chosen according to (1.25) diverges, but that the solution of

\[
\ddot{\varphi}(t) - \varphi(t) = -\alpha\varphi(t) - \beta\dot{\varphi}(t) + d(t) \tag{1.27}
\]

still satisfies the condition that ϕ and ϕ̇ approach zero. ✷

One can prove in fact that the solution of equation (1.27) approaches zero even if d(·) is an arbitrary decaying function; this is an easy consequence of results on the input/output stability of linear systems.

Not only is the feedback solution more robust to errors, but it is also in this case simpler than the open-loop one, in that the explicit form of the control as a function of time need not be calculated. Of course, the cost of implementing the feedback controller is that the position and velocity must be continuously monitored.

There are various manners in which to make the advantages of feedback mathematically precise. One may include in the model some characterization of the uncertainty, for instance, by means of specifying a probability law for a disturbance input such as the above d(·). In any case, one can always pose a control problem directly as one of finding feedback solutions, and we shall often do so.

The second solution (1.12)-(1.19) that we gave to the pendulum problem, via digital control, is in a sense a combination of feedback and precomputed control. But in terms of the sampled model (1.18), which ignores the behavior of the system in between sampling times, digital control can be thought of as a purely feedback law. For the times of interest, (1.19) expresses the control in terms of the "current" state variables.
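A numerical companion to Exercise 1.4.2 (our sketch; the gains α = 2, β = 3 are again an arbitrary admissible choice, and `max_step` keeps the integrator from stepping over the short pulse):

```python
# Pulse disturbance d(t) = eps on [1, 2]: the open loop (1.26) with u from (1.25)
# diverges, while the feedback loop (1.27) still drives phi to zero.
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.1
d = lambda t: eps if 1.0 <= t <= 2.0 else 0.0
u = lambda t: 3.0 * np.exp(-2.0 * t)
alpha, beta = 2.0, 3.0

open_loop   = lambda t, x: [x[1], x[0] + u(t) + d(t)]                      # (1.26)
closed_loop = lambda t, x: [x[1], x[0] - alpha*x[0] - beta*x[1] + d(t)]    # (1.27)

x0 = [1.0, -2.0]
print(solve_ivp(open_loop,   (0, 10), x0, rtol=1e-8, max_step=0.05).y[0, -1])  # large
print(solve_ivp(closed_loop, (0, 10), x0, rtol=1e-8, max_step=0.05).y[0, -1])  # ~0
```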

点击下载完整版文档(PDF)VIP每日下载上限内不扣除下载券和下载次数;
按次数下载不扣除下载券;
24小时内重复下载只扣除一次;
顺序:VIP每日次数-->可用次数-->下载券;
共24页,试读已结束,阅读完整版请下载
相关文档

关于我们|帮助中心|下载说明|相关软件|意见反馈|联系我们

Copyright © 2008-现在 cucdc.com 高等教育资讯网 版权所有