Massachusetts Institute of Technology
Department of Electrical Engineering and Computer Science
6.243j (Fall 2003): Dynamics of Nonlinear Systems
by A. Megretski

Lecture 3: Continuous Dependence On Parameters (version of September 12, 2003)

Arguments based on continuity of functions are common in dynamical system analysis. They rarely apply to quantitative statements, instead being used mostly for proofs of existence of certain objects (equilibria, open or closed invariant sets, etc.). Alternatively, continuity arguments can be used to show that certain qualitative conditions cannot be satisfied for a class of systems.

3.1 Uniqueness Of Solutions

In this section our main objective is to establish sufficient conditions under which solutions of an ODE with given initial conditions are unique.

3.1.1 A counterexample

Continuity of the function a : R^n → R^n on the right side of the ODE

    ẋ(t) = a(x(t)),  x(t0) = x̄0,        (3.1)

does not guarantee uniqueness of solutions.

Example 3.1 The ODE

    ẋ(t) = 3|x(t)|^(2/3),  x(0) = 0,

has solutions x(t) ≡ 0 and x(t) ≡ t³ (in fact, there are infinitely many solutions in this case).
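The two claimed solutions of Example 3.1 are easy to check numerically. The following sketch (Python, not part of the original notes; the grid on [0, 2] is chosen arbitrarily) evaluates the residual ẋ(t) − 3|x(t)|^(2/3) for both candidate solutions:

```python
# Numerical check that both x(t) = 0 and x(t) = t**3 satisfy
# xdot = 3*|x|**(2/3) with x(0) = 0, illustrating non-uniqueness.
def rhs(x):
    return 3.0 * abs(x) ** (2.0 / 3.0)

def max_residual(x, xdot, ts):
    # largest |xdot(t) - rhs(x(t))| over the sample points
    return max(abs(xdot(t) - rhs(x(t))) for t in ts)

ts = [k * 0.01 for k in range(201)]  # t in [0, 2]
r_zero = max_residual(lambda t: 0.0, lambda t: 0.0, ts)      # x(t) = 0
r_cube = max_residual(lambda t: t**3, lambda t: 3 * t**2, ts)  # x(t) = t**3
print(r_zero, r_cube)  # both residuals are (numerically) zero
```

Both trajectories leave the residual at machine-precision level, so each is a genuine solution through the same initial point.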
3.1.2 A general uniqueness theorem

The key issue for uniqueness of solutions turns out to be the maximal slope of a = a(x̄): to guarantee uniqueness on a time interval T = [t0, tf], it is sufficient to require the existence of a constant M such that

    |a(x̄1) − a(x̄2)| ≤ M|x̄1 − x̄2|

for all x̄1, x̄2 from a neighborhood of a solution x : [t0, tf] → R^n of (3.1). The proof of both existence and uniqueness is so simple in this case that we will formulate the statement for a much more general class of integral equations.

Theorem 3.1 Let X be a subset of R^n containing a ball

    Br(x̄0) = {x̄ ∈ R^n : |x̄ − x̄0| ≤ r}

of radius r > 0, and let t1 > t0 be real numbers. Assume that the function a : X × [t0, t1] × [t0, t1] → R^n is such that there exist constants M, K satisfying

    |a(x̄1, τ, t) − a(x̄2, τ, t)| ≤ K|x̄1 − x̄2|  ∀ x̄1, x̄2 ∈ Br(x̄0), t0 ≤ τ ≤ t ≤ t1,        (3.2)

and

    |a(x̄, τ, t)| ≤ M  ∀ x̄ ∈ Br(x̄0), t0 ≤ τ ≤ t ≤ t1.        (3.3)

Then, for a sufficiently small tf > t0, there exists a unique function x : [t0, tf] → X satisfying

    x(t) = x̄0 + ∫_{t0}^{t} a(x(τ), τ, t) dτ  ∀ t ∈ [t0, tf].        (3.4)

A proof of the theorem is given in the next section.

When a does not depend on the third argument, we have the standard ODE case ẋ(t) = a(x(t), t). In general, Theorem 3.1 covers a variety of nonlinear systems with an infinite dimensional state space, such as feedback interconnections of convolution operators and memoryless nonlinear transformations. For example, to prove well-posedness of a feedback system in which the forward loop is an LTI system with input v, output w, and transfer function

    G(s) = (e^{−s} − 1)/s,

and the feedback loop is defined by v(t) = sin(w(t)), one can apply Theorem 3.1 with

    a(x̄, τ, t) = { sin(x̄) + h(t),  t − 1 ≤ τ ≤ t,
                   h(t),            otherwise,

where h = h(t) is a given continuous function depending on the initial conditions.
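The Lipschitz hypothesis (3.2) is exactly what Example 3.1 violates. As a sketch (Python, not from the notes; sample points chosen arbitrarily), the difference quotients of a(x̄) = 3|x̄|^(2/3) blow up near x̄ = 0, while a finite local Lipschitz constant does exist away from the origin:

```python
# a(x) = 3*|x|**(2/3) from Example 3.1 is continuous but not Lipschitz
# near x = 0, which is why Theorem 3.1 does not apply there.
def a(x):
    return 3.0 * abs(x) ** (2.0 / 3.0)

# Difference quotients |a(x) - a(0)| / |x - 0| blow up as x -> 0 ...
quotients_at_0 = [a(10.0 ** -k) / 10.0 ** -k for k in range(1, 6)]
print(quotients_at_0)  # grows like 3*|x|**(-1/3)

# ... while on [1, 2], away from 0, a local Lipschitz bound holds.
xs = [1.0 + i / 1000.0 for i in range(1001)]
K = max(abs(a(xs[i + 1]) - a(xs[i])) / (xs[i + 1] - xs[i])
        for i in range(1000))
print(K)  # roughly 2, matching max |a'(x)| = 2*x**(-1/3) on [1, 2]
```

So uniqueness holds for trajectories of Example 3.1 staying in a region bounded away from 0; it is only at the equilibrium itself that solutions can branch.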
3.1.3 Proof of Theorem 3.1

First prove existence. Choose tf > t0 (with tf ≤ t1) such that tf − t0 ≤ r/M and tf − t0 ≤ 1/(2K). Define functions xk : [t0, tf] → X by

    x0(t) ≡ x̄0,  xk+1(t) = x̄0 + ∫_{t0}^{t} a(xk(τ), τ, t) dτ.

By (3.3) and by tf − t0 ≤ r/M we have xk(t) ∈ Br(x̄0) for all t ∈ [t0, tf]. Hence by (3.2) and by tf − t0 ≤ 1/(2K) we have

    |xk+1(t) − xk(t)| ≤ ∫_{t0}^{t} |a(xk(τ), τ, t) − a(xk−1(τ), τ, t)| dτ
                     ≤ ∫_{t0}^{t} K|xk(τ) − xk−1(τ)| dτ
                     ≤ 0.5 max_{t∈[t0,tf]} |xk(t) − xk−1(t)|.

Therefore one can conclude that

    max_{t∈[t0,tf]} |xk+1(t) − xk(t)| ≤ 0.5 max_{t∈[t0,tf]} |xk(t) − xk−1(t)|.

Hence xk(t) converges exponentially (uniformly on [t0, tf]) to a limit x(t) which, due to continuity of a with respect to the first argument, is the desired solution of (3.4).

Now let us prove uniqueness. Note that, due to tf − t0 ≤ r/M, all solutions of (3.4) must satisfy x(t) ∈ Br(x̄0) for t ∈ [t0, tf]. If xa and xb are two such solutions then

    |xa(t) − xb(t)| ≤ ∫_{t0}^{t} |a(xa(τ), τ, t) − a(xb(τ), τ, t)| dτ
                   ≤ ∫_{t0}^{t} K|xa(τ) − xb(τ)| dτ
                   ≤ 0.5 max_{t∈[t0,tf]} |xa(t) − xb(t)|,

which immediately implies

    max_{t∈[t0,tf]} |xa(t) − xb(t)| = 0.

The proof is complete now. Note that the same proof applies when (3.2), (3.3) are replaced by the weaker conditions

    |a(x̄1, τ, t) − a(x̄2, τ, t)| ≤ K(τ)|x̄1 − x̄2|  ∀ x̄1, x̄2 ∈ Br(x̄0), t0 ≤ τ ≤ t ≤ t1,

and

    |a(x̄, τ, t)| ≤ m(τ)  ∀ x̄ ∈ Br(x̄0), t0 ≤ τ ≤ t ≤ t1,

where the functions K(·) and m(·) are integrable over [t0, t1].
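The existence argument is constructive: it is the classical Picard iteration xk+1(t) = x̄0 + ∫ a(xk(τ), τ, t) dτ. A minimal numerical sketch, assuming the scalar right-hand side a(x̄, τ, t) = x̄ (chosen purely for illustration, so the exact solution is e^t) and a trapezoid-rule quadrature on a uniform grid:

```python
# Sketch of the Picard iteration from the proof, for the scalar ODE
# xdot = x, x(0) = 1 on [0, 0.5]; the iterates converge to exp(t).
import math

N = 500
h = 0.5 / N
ts = [i * h for i in range(N + 1)]

def picard_step(x):
    # x_{k+1}(t) = x0 + integral_0^t a(x(tau)) dtau with a(x) = x,
    # evaluated by the trapezoid rule on the grid.
    out = [1.0]
    acc = 0.0
    for i in range(1, N + 1):
        acc += 0.5 * h * (x[i - 1] + x[i])
        out.append(1.0 + acc)
    return out

x = [1.0] * (N + 1)  # x_0(t) = x0, the constant first iterate
for _ in range(30):
    x = picard_step(x)

err = max(abs(x[i] - math.exp(ts[i])) for i in range(N + 1))
print(err)  # small: dominated by the trapezoid-rule error
```

The factor-0.5 contraction in the proof is visible here: each iterate halves (at least) the distance to the fixed point, so a few dozen iterations reach quadrature-level accuracy.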
3.2 Continuous Dependence On Parameters

In this section our main objective is to establish sufficient conditions under which solutions of an ODE depend continuously on initial conditions and other parameters.

Consider the parameterized integral equation

    x(t, q) = x̄0(q) + ∫_{t0}^{t} a(x(τ, q), τ, t, q) dτ,  t ∈ [t0, t1],        (3.5)

where q ∈ R is a parameter. For every fixed value of q, integral equation (3.5) has the form of (3.4).

Theorem 3.2 Let x0 : [t0, tf] → R^n be a solution of (3.5) with q = q0. For some d > 0 let

    Xd = {x̄ ∈ R^n : ∃ t ∈ [t0, tf] : |x̄ − x0(t)| < d}.

Assume that

(a) there exists a constant K such that

    |a(x̄1, τ, t, q) − a(x̄2, τ, t, q)| ≤ K|x̄1 − x̄2|  ∀ x̄1, x̄2 ∈ Xd, t0 ≤ τ ≤ t ≤ tf, q ∈ (q0 − d, q0 + d);

(b) there exists a constant M such that

    |a(x̄, τ, t, q)| ≤ M  ∀ x̄ ∈ Xd, t0 ≤ τ ≤ t ≤ tf, q ∈ (q0 − d, q0 + d);

(c) for every ε > 0 there exists δ > 0 such that

    |x̄0(q1) − x̄0(q2)| ≤ ε  ∀ q1, q2 ∈ (q0 − d, q0 + d) : |q1 − q2| < δ,

    |a(x̄, τ, t, q1) − a(x̄, τ, t, q2)| ≤ ε  ∀ q1, q2 ∈ (q0 − d, q0 + d) : |q1 − q2| < δ, x̄ ∈ Xd.

Then there exists d1 ∈ (0, d) such that the solution x(t, q) of (3.5) is continuous on

    {(t, q)} = [t0, tf] × (q0 − d1, q0 + d1).

Condition (a) of Theorem 3.2 is the familiar Lipschitz continuity requirement on the dependence of a = a(x̄, τ, t, q) on x̄ in a neighborhood of the trajectory of x0. Condition (b) simply bounds a uniformly. Finally, condition (c) means continuous dependence of the equations and initial conditions on the parameter q.

The proof of Theorem 3.2 is similar to that of Theorem 3.1.
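The conclusion of Theorem 3.2 can be sketched on a hypothetical one-parameter family (not from the notes): for ẋ = −x + q with x(0) = 1, the exact solution is x(t, q) = q + (1 − q)e^{−t}, and the uniform distance between solutions shrinks in proportion to the parameter change:

```python
# Sketch for Theorem 3.2: solutions of xdot = -x + q, x(0) = 1 depend
# continuously (here, even Lipschitz-continuously) on the parameter q,
# uniformly on a finite time interval.
import math

def x(t, q):
    # exact solution of xdot = -x + q, x(0) = 1
    return q + (1.0 - q) * math.exp(-t)

ts = [k * 0.01 for k in range(1001)]  # t in [0, 10]

def sup_dist(q1, q2):
    # sup over [0, 10] of |x(t, q1) - x(t, q2)|
    return max(abs(x(t, q1) - x(t, q2)) for t in ts)

for dq in (0.1, 0.01, 0.001):
    print(dq, sup_dist(0.5, 0.5 + dq))  # shrinks proportionally to dq
```

Here |x(t, q1) − x(t, q2)| = |q1 − q2|(1 − e^{−t}), so the sup-distance is essentially |q1 − q2| itself; the δ in condition (c) can be read off directly.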
3.3 Implications of continuous dependence on parameters

This section contains some examples showing how the general continuous dependence of solutions on parameters allows one to derive qualitative statements about nonlinear systems.

3.3.1 Differential flows

Consider a time-invariant autonomous ODE

    ẋ(t) = a(x(t)),        (3.8)

where a : R^n → R^n satisfies the Lipschitz constraint

    |a(x̄1) − a(x̄2)| ≤ M|x̄1 − x̄2|        (3.9)

on every bounded subset of R^n. According to Theorem 3.1, this implies existence and uniqueness of a maximal solution x : (t−, t+) → R^n of (3.8) subject to given initial conditions x(t0) = x̄0 (by this definition, t− < t0 < t+). For t ∈ (t−, t+), let x(t, x̄) denote the value at time t of the solution of (3.8) with initial condition x(0) = x̄; by Theorem 3.2, x(t, x̄) depends continuously on its arguments.

3.3.2 Attractor of an asymptotically stable equilibrium

A point x̄0 ∈ R^n such that a(x̄0) = 0 is called an equilibrium of (3.8). An equilibrium x̄0 is called asymptotically stable if

(a) there exists ε0 > 0 such that x(t, x̄) → x̄0 as t → ∞ for all x̄ satisfying |x̄0 − x̄| < ε0;

(b) for every ε > 0 there exists δ > 0 such that |x(t, x̄) − x̄0| < ε whenever t ≥ 0 and |x̄ − x̄0| < δ.
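Both parts of the definition can be sketched on a hypothetical scalar example (forward Euler, step size chosen arbitrarily): for ẋ = −x³ the equilibrium x̄0 = 0 is asymptotically stable, and condition (b) shows up in the fact that |x(t, x̄)| never exceeds |x̄|:

```python
# Sketch: Euler simulation of xdot = -x**3, whose equilibrium x = 0 is
# asymptotically stable; solutions from nearby initial states stay
# close and decay toward 0 (though not exponentially fast).
def simulate(x0, T=50.0, h=0.01):
    x, traj = x0, [x0]
    for _ in range(int(T / h)):
        x += h * (-x**3)
        traj.append(x)
    return traj

for x0 in (0.5, -0.5, 0.1):
    traj = simulate(x0)
    peak = max(abs(v) for v in traj)
    print(x0, peak, abs(traj[-1]))  # peak equals |x0|; |x| decays
```

Note that a(x̄) = −x̄³ is locally but not globally Lipschitz, which is exactly the setting of (3.9): the constant M depends on the bounded set containing the trajectory.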
In other words, all solutions starting sufficiently close to an asymptotically stable equilibrium x̄0 converge to it as t → ∞, and none of these solutions can escape far away before finally converging to x̄0.

Theorem 3.3 Let x̄0 ∈ R^n be an asymptotically stable equilibrium of (3.8). The set A = A(x̄0) of all x̄ ∈ R^n such that x(t, x̄) → x̄0 as t → ∞ is an open subset of R^n, and its boundary is invariant under the transformations x̄ ↦ x(t, x̄).

The proof of the theorem follows easily from the continuity of x(·, ·).

3.3.3 Limit points of a trajectory

For a fixed x̄0 ∈ R^n, the set of all possible limits x̃ of x(tk, x̄0) as k → ∞, where the sequence {tk} also converges to infinity, is called the limit set of the "trajectory" t ↦ x(t, x̄0).

Theorem 3.4 The limit set of a given trajectory is always closed and invariant under the transformations x̄ ↦ x(t, x̄).
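Theorems 3.3 and 3.4 can both be sketched on the hypothetical scalar system ẋ = x − x³ (not from the notes): the equilibria ±1 are asymptotically stable, A(1) = (0, ∞) is open, its boundary point 0 is itself an equilibrium (hence invariant under x̄ ↦ x(t, x̄)), and the limit set of any trajectory starting at x̄0 > 0 is the closed invariant set {1}:

```python
# Sketch for Theorems 3.3/3.4 with xdot = x - x**3: initial states in
# (0, inf) flow to +1, states in (-inf, 0) flow to -1, and the basin
# boundary point 0 stays fixed under the flow.
def flow(x0, T=20.0, h=0.001):
    # forward-Euler approximation of x(T, x0)
    x = x0
    for _ in range(int(T / h)):
        x += h * (x - x**3)
        return x

for x0 in (0.001, 2.0):
    print(flow(x0))   # both in A(1): values close to 1
print(flow(-0.001))   # outside A(1): value close to -1
print(flow(0.0))      # boundary of A(1): stays at 0 exactly
```

Wait: note the early `return` above would be a bug; the corrected loop body is `x += h * (x - x**3)` executed for all steps, with `return x` after the loop. With that fix, the basin structure is exactly as Theorem 3.3 predicts.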