observing system, which is designed to obtain a consistent and complete observation of the vector $y$. Now let us present the last assumption.

(A.3) Observability: The observations formulated by Equation (1) are distributedly observable in the sense of Definition 1.

1) Unbiasedness and consistency of DRC: In this part, we establish the unbiasedness and the consistency of the DRC algorithm through two theorems.

Theorem 1. Suppose the DRC algorithm runs under Assumptions A.1-A.3 (Section III-A). Then the record sequence $\{u_k(i)\}_{i\ge 0}$ at node $k$ is asymptotically unbiased:
$$\lim_{i\to\infty} \mathbb{E}[u_k(i)] = y^*, \quad \forall\, 1 \le k \le |V|. \tag{11}$$

We defer the proof to Section V-A. Theorem 1 establishes the unbiasedness of the algorithm: each node's estimate of the global data is correct on average in the long run. The consistency of the DRC algorithm is guaranteed by the following theorem.

Theorem 2. Suppose the DRC algorithm runs under Assumptions A.1-A.3 (Section III-A). Then the record sequence $\{u_k(i)\}_{i\ge 0}$ at node $k$ is consistent:
$$\mathbb{P}\Big[\lim_{i\to\infty} u_k(i) = y^*, \ \forall\, 1 \le k \le |V|\Big] = 1.$$

We provide the proof in Appendix B. By Theorem 2, the record sequence $\{u_k(i)\}_{i\ge 0}$ at every node converges to the true vector $y^*$ with probability 1.
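Theorems 1 and 2 can be checked empirically. The sketch below runs a generic consensus-plus-innovations record update; since recursion (5) is not reproduced in this excerpt, the update rule, the ring graph, the scalar observation model, and all parameter values are illustrative assumptions rather than the paper's exact iteration.

```python
import numpy as np

# Hedged sketch: a consensus+innovations update in the spirit of recursion (5).
# The exact iteration, graph, and noise model are NOT given in this excerpt;
# everything below (ring graph, scalar y*, unit-variance noise) is assumed.
rng = np.random.default_rng(0)

n = 5                                  # number of nodes |V|
y_star = 2.0                           # true (scalar) global value y*
A = np.zeros((n, n))                   # adjacency of an assumed ring graph
for k in range(n):
    A[k, (k - 1) % n] = A[k, (k + 1) % n] = 1.0
Lap = np.diag(A.sum(axis=1)) - A       # graph Laplacian L

a, gamma = 1.0, 1.0                    # weights as in Theorem 3: alpha(i) = a/(i+1)
u = np.zeros(n)                        # records u_k(0)
for i in range(50_000):
    alpha = a / (i + 1)
    beta = alpha / gamma               # so that alpha(i)/beta(i) -> gamma > 0
    y_obs = y_star + rng.normal(size=n)       # noisy local observations y_k(i)
    u += -beta * (Lap @ u) + alpha * (y_obs - u)

# Consistency (Theorem 2): every record should now sit near y*.
print(np.abs(u - y_star).max())
```

Averaging the final records over many independent runs would likewise exhibit the unbiasedness asserted in Theorem 1.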
2) Convergence rate analysis: We now analyze the convergence rate of the DRC algorithm via its deviation characteristics. We first present a definition used to characterize the convergence rate of a sequential process.

Definition 2. A sequence of records $\{u(i)\}_{i\ge 0}$ is asymptotically normal if there exists a positive semidefinite matrix $S(y)$ such that
$$\sqrt{i}\,\big(u_k(i) - y^*\big) \Longrightarrow \mathcal{N}\big(0_M, S_{kk}(y(i))\big), \quad \forall\, 1 \le k \le n,$$
as $i \to \infty$. The matrix $S(y(i))$ is called the asymptotic variance of the observing sequence $\{y(i)\}_{i\ge 0}$, and $S_{kk}(y) \in \mathbb{R}^{M\times M}$ denotes the $k$-th principal block of $S(y(i))$.

In the following, we analyze the asymptotic normality of the DRC algorithm. Let $\lambda_{\min}(\gamma L \otimes E_M + \tilde{H})$ denote the smallest eigenvalue of $[\gamma L \otimes E_M + \tilde{H}]$. Recalling the noise covariance $S_\epsilon$ in (10), we present the following theorem to establish the asymptotic normality of the DRC algorithm.

Theorem 3. Suppose the DRC algorithm runs under Assumptions A.1-A.3 (Section III-A), with weight sequences $\{\alpha(i)\}_{i\ge 0}$ and $\{\beta(i)\}_{i\ge 0}$ given by
$$\alpha(i) = \frac{a}{i+1}, \qquad \lim_{i\to\infty} \frac{\alpha(i)}{\beta(i)} = \gamma > 0,$$
for some $a > 0$. Let the record sequence $\{u(i)\}_{i\ge 0}$ be the state sequence generated by (5). Then, for $a > \frac{1}{2\lambda_{\min}(\gamma L \otimes E_M + \tilde{H})}$, we obtain
$$\sqrt{i}\,\big(u(i) - 1_{|V|} \otimes y^*\big) \Longrightarrow \mathcal{N}\big(0, S(y(i))\big),$$
where
$$S(y(i)) = a^2 \int_0^\infty e^{\Sigma v} S_0 e^{\Sigma v}\, dv, \tag{12}$$
$$\Sigma = -a\big[\gamma L \otimes E_M + \tilde{H}\big] + \tfrac{1}{2} E_{M|V|}, \tag{13}$$
and
$$S_0 = \bar{H} S_\epsilon \bar{H}^{\mathsf T}. \tag{14}$$
In particular, the record sequence $\{u_k(i)\}_{i\ge 0}$ at any node $k$ is asymptotically normal:
$$\sqrt{i}\,\big(u_k(i) - y^*\big) \Longrightarrow \mathcal{N}\big(0, S_{kk}(y(i))\big).$$

We provide the proof in Appendix B. The error sequence $\{u_k(i) - y^*\}_{i\ge 0}$ at each node can therefore be regarded as converging to a normal distribution at a rate of $\frac{1}{\sqrt{i}}$.
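Since the step-size condition of Theorem 3 makes $\Sigma$ in (13) stable, the integral in (12) equals the solution of the Lyapunov equation $\Sigma S + S \Sigma^{\mathsf T} = -a^2 S_0$, which can be solved directly instead of by quadrature. The sketch below uses placeholder matrices for $\tilde{H}$ and $S_0$, which are not specified numerically in this excerpt.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hedged sketch: evaluating the asymptotic covariance (12) without quadrature.
# Because Sigma is stable under the step-size condition of Theorem 3,
#   S = a^2 * int_0^inf e^{Sigma v} S0 e^{Sigma^T v} dv
# solves the Lyapunov equation  Sigma S + S Sigma^T = -a^2 S0.
# H_tilde and S0 below are placeholder identities, not the paper's matrices.
M, V = 2, 3                                       # assumed dimensions M and |V|
gamma, a = 1.0, 2.0
Lap = np.array([[1., -1., 0.],
                [-1., 2., -1.],
                [0., -1., 1.]])                   # Laplacian of an assumed path graph
H_tilde = np.eye(M * V)                           # placeholder for H-tilde
S0 = np.eye(M * V)                                # placeholder for S0 = Hbar S_eps Hbar^T

Sigma = -a * (gamma * np.kron(Lap, np.eye(M)) + H_tilde) + 0.5 * np.eye(M * V)
assert np.linalg.eigvalsh(Sigma).max() < 0        # stability <=> condition on a
S = solve_continuous_lyapunov(Sigma, -a**2 * S0)  # asymptotic variance S(y)
S_kk = S[:M, :M]                                  # k-th principal block, node k = 1
print(np.round(S_kk, 3))
```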
Up to now, we have presented the asymptotic unbiasedness, consistency, and asymptotic normality of the DRC algorithm. In the next section, we present the main properties of the DDA algorithm.

B. Main Properties of DDA

In this section, we prove the convergence of the running average $\hat{\theta}_k(T)$ to the optimal parameter $\theta^*$ and derive the convergence rate of the DDA algorithm. We now present the following theorem.

Theorem 4. Let the random families $\{\theta_k(t)\}_{t=0}^{\infty}$ and $\{\mu_k(t)\}_{t=0}^{\infty}$ be generated by iterations (8) and (7), with the positive nondecreasing step-size sequence $\{\omega(t)\}_{t=0}^{\infty}$, where $\phi$ is strongly convex with respect to the norm $\|\cdot\|$ with dual norm $\|\cdot\|_*$. Let the record error $\|\mathbb{E}[\hat{y}_k] - 1_{|V|} \otimes y^*\|$ be bounded by an arbitrarily small constant $C_{\mathrm{err}}$. For any $\theta^* \in \Theta$ and each node $k \in V$, we have
$$f\big(\hat{\theta}_k(T), \hat{y}_k\big) - f(\theta^*, y^*) \le \mathrm{OPT} + \mathrm{NET} + \mathrm{SAMP},$$
where
$$\mathrm{OPT} = \frac{\omega(T)}{T}\,\phi(\theta^*) + \frac{L^2}{2T\tau} \sum_{t=1}^{T} \frac{1}{\omega(t)},$$
$$\mathrm{NET} = \frac{L}{T} \sum_{t=1}^{T} \frac{1}{\omega(t)}\, \mathbb{E}\Bigg[\frac{2}{|V|} \sum_{j=1}^{|V|} \big\|\bar{\mu}(t) - \mu_j(t)\big\|_* + \big\|\bar{\mu}(t) - \mu_k(t)\big\|_*\Bigg],$$
$$\mathrm{SAMP} = L\, C_{\mathrm{err}},$$
and
$$\bar{\mu}(t) = \frac{1}{|V|} \sum_{k=1}^{|V|} \mu_k(t).$$
Recall that $\tau$ is the convexity parameter.

Theorem 4 makes explicit the gap between the estimated results and the true optimum. The gap is bounded by the sum of three error terms: (1) the OPT term is the optimization error; (2) the NET term is induced by the disagreement among the nodes' estimates; and (3) the SAMP term is incurred by the noisy input.
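To make the roles of $\omega(t)$, $\mu_k(t)$, and $\hat{\theta}_k(T)$ concrete, the following sketch runs a generic distributed dual-averaging loop. Iterations (7) and (8) are not reproduced in this excerpt, so the mixing matrix, the quadratic local losses, the choice $\phi(\theta) = \frac{1}{2}\|\theta\|^2$, and $\omega(t) = \sqrt{t}$ are all illustrative assumptions.

```python
import numpy as np

# Hedged sketch of a distributed dual-averaging loop in the spirit of (7)-(8).
# Mixing matrix, losses, phi, and omega(t) are assumptions, not the paper's setup.
rng = np.random.default_rng(1)

n, T = 4, 2000
P = np.full((n, n), 1.0 / n)           # doubly stochastic mixing (complete graph)
targets = rng.normal(size=n)           # local data; the optimum is their mean
theta_star = targets.mean()

mu = np.zeros(n)                       # dual variables mu_k(t)
theta = np.zeros(n)                    # primal iterates theta_k(t)
theta_hat = np.zeros(n)                # running averages theta_hat_k(T)

for t in range(1, T + 1):
    grad = theta - targets             # gradient of local loss 0.5*(theta - y_k)^2
    mu = P @ mu + grad                 # mix neighbors' duals, add local gradient
    omega = np.sqrt(t)                 # positive nondecreasing step size omega(t)
    theta = -mu / omega                # argmin_theta <mu, theta> + omega * phi(theta)
    theta_hat += (theta - theta_hat) / t   # incremental running average

# This gap plays the role of f(theta_hat_k(T), .) - f(theta*, y*) in Theorem 4.
print(np.abs(theta_hat - theta_star).max())
```

Under these assumptions, the gap printed above shrinks as $T$ grows, consistent with the OPT and NET terms both vanishing for $\omega(t) = \sqrt{t}$.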