Of course, even under Condition 1.1, some further assumptions must be satisfied by the summand variables for the normal convergence (2) to take place. For instance, if the first variable accounts for some non-vanishing fraction of the total variability, it will strongly influence the limiting distribution, possibly resulting in non-normal convergence. The Lindeberg-Feller central limit theorem, see [4], says that normal convergence (2) holds upon ruling out such situations by imposing the Lindeberg Condition
\[
\forall \varepsilon > 0 \quad \lim_{n \to \infty} L_{n,\varepsilon} = 0
\qquad \text{where} \qquad
L_{n,\varepsilon} = \sum_{i=1}^{n} E\{X_{i,n}^{2}\,\mathbf{1}(|X_{i,n}| \ge \varepsilon)\},
\tag{8}
\]
where for an event $A$, the 'indicator' random variable $\mathbf{1}(A)$ takes on the value 1 if $A$ occurs, and the value 0 otherwise. Once known to be sufficient, the Lindeberg condition was proved to be partially necessary by Feller and Lévy, independently; see [8] for history. The appearance of the Lindeberg Condition is justified by explanations such as the one given by Feller [4], who roughly says that it requires the individual variances be due mainly to masses in an interval whose length is small in comparison to the overall variance. We present a probabilistic condition which is seemingly simpler, yet equivalent.
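As a minimal numerical sketch of how (8) behaves, consider the i.i.d. triangular array $X_{i,n} = X_i/(\sigma\sqrt{n})$, for which the Lindeberg sum reduces to $L_{n,\varepsilon} = \sigma^{-2} E\{X_1^{2}\,\mathbf{1}(|X_1| \ge \varepsilon\sigma\sqrt{n})\}$ and tends to zero by dominated convergence. The sketch below estimates this quantity by Monte Carlo; the centred exponential summands, the value of $\varepsilon$, and the sample size are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np

# Monte Carlo estimate of the Lindeberg sum (8) for the i.i.d. array
# X_{i,n} = X_i / (sigma * sqrt(n)), where it reduces to
#   L_{n,eps} = E[ X_1^2 * 1(|X_1| >= eps * sigma * sqrt(n)) ] / sigma^2.
# The centred exponential summands and eps = 0.1 are illustrative choices.
rng = np.random.default_rng(0)
x = rng.exponential(size=1_000_000) - 1.0   # X_1 samples: mean 0, variance 1
sigma2 = 1.0
eps = 0.1
for n in (10, 100, 1_000, 10_000, 100_000):
    threshold = eps * np.sqrt(sigma2 * n)
    L = np.mean(x**2 * (np.abs(x) >= threshold)) / sigma2
    print(f"n = {n:7d}   estimated L_(n,eps) = {L:.5f}")
\end{verbatim}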
Our probabilistic approach to the CLT is through the so-called zero bias transformation introduced in [6]. For every distribution with mean zero and finite non-zero variance $\sigma^{2}$ on a random variable $X$, the zero bias transformation returns the unique '$X$-zero biased distribution' on $X^{*}$ which satisfies
\[
\sigma^{2} E f'(X^{*}) = E[X f(X)]
\tag{9}
\]
for all absolutely continuous functions $f$ for which these expectations exist. The existence of a strong connection between the zero bias transformation and the normal distribution is made clear by the characterization of Stein [9], which implies that $X^{*}$ and $X$ have the same distribution if and only if $X$ has the $\mathcal{N}(0,\sigma^{2})$ distribution, that is, that the normal distribution is the zero bias transformation's unique fixed point. One way to see the 'if' direction of Stein's characterization, that is, why the zero bias transformation fixes the normal, is to note that the density function $\varphi_{\sigma^{2}}(x) = \sigma^{-1}\varphi(\sigma^{-1}x)$ of a $\mathcal{N}(0,\sigma^{2})$ variable, with $\varphi(x)$ given by (3), satisfies the differential equation with a form 'conjugate' to (9),
\[
\sigma^{2} \varphi_{\sigma^{2}}'(x) = -x\,\varphi_{\sigma^{2}}(x).
\]
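Identity (9) can also be checked numerically on a simple example, as in the minimal sketch below: if $X$ is uniform on $\{-1,+1\}$, so that $\sigma^{2} = 1$, its zero biased distribution is known to be uniform on $[-1,1]$, and for the test function $f(x) = \sin x$ both sides of (9) equal $\sin 1$. The test function and sample size here are illustrative choices.
\begin{verbatim}
import numpy as np

# Check of identity (9) for X uniform on {-1, +1} (so sigma^2 = 1), whose
# zero biased distribution is known to be uniform on [-1, 1].  With the
# test function f(x) = sin(x), f'(x) = cos(x), both sides equal sin(1).
rng = np.random.default_rng(1)
m = 1_000_000
x = rng.choice([-1.0, 1.0], size=m)      # samples of X
x_star = rng.uniform(-1.0, 1.0, size=m)  # samples of the zero biased X*

lhs = np.mean(np.cos(x_star))   # sigma^2 * E f'(X*), with sigma^2 = 1
rhs = np.mean(x * np.sin(x))    # E[X f(X)]; equals sin(1) exactly for this X
print(f"E f'(X*) = {lhs:.4f}   E[X f(X)] = {rhs:.4f}   sin(1) = {np.sin(1.0):.4f}")
\end{verbatim}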