summand $X_{1,n}$ of $W_n$ has the mean zero normal distribution $\sigma Z$ with variance $\sigma^2 \in (0,1)$, and the Lindeberg condition is satisfied for the remaining variables, that is, the limit is zero when taking the sum in (8) over all $i \neq 1$.
Since the sum of independent normal variables is again normal, $W_n$ will converge in distribution to $Z$, but (8) does not hold, since for all $\epsilon > 0$
$$
\lim_{n \to \infty} L_{n,\epsilon} = E\{X_{1,n}^2 \mathbf{1}(|X_{1,n}| \ge \epsilon)\} = \sigma^2 E\{Z^2 \mathbf{1}(\sigma |Z| \ge \epsilon)\} > 0.
$$

Defining
$$
m_n = \max_{1 \le i \le n} \sigma_{i,n}^2 \tag{17}
$$
to use for excluding such cases, we have the following partial converse to Theorem 1.2.

Theorem 1.3 If $X_n$, $n = 1, 2, \ldots$ satisfies Condition 1.1 and
$$
\lim_{n \to \infty} m_n = 0, \tag{18}
$$
then the small zero bias condition is necessary for $W_n \to_d Z$.

We prove Theorem 1.3 in Section 5 by showing that $W_n \to_d Z$ implies that $W_n^* \to_d Z$, and that (18) implies $X_{I_n,n} \to_p 0$. But then also
$$
W_n + X_{I_n,n}^* = W_n^* + X_{I_n,n} \to_d Z,
$$
and now $W_n \to_d Z$ and $W_n + X_{I_n,n}^* \to_d Z$ imply that $X_{I_n,n}^* \to_p 0$. These implications provide the probabilistic reason that the small zero bias condition, or Lindeberg condition, is necessary for normal convergence under (18).

Section 2 draws a parallel between the zero bias transformation and the one better known for size biasing, and there we consider its connection to the differential equation method of Stein using test functions. In Section 3 we prove the equivalence of the classical Lindeberg condition and the small zero bias condition and then, in Sections 4 and 5, its sufficiency and partial necessity for normal convergence.

Some pains have been taken to keep the treatment as elementary as possible, in particular by avoiding the use of characteristic functions. Though some technical argument is needed, only real functions are involved and the development remains at a level as basic as the material permits. To help keep the presentation self-contained, two results of a general type appear in Section 6.
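The counterexample above can be checked numerically. The following is a minimal Monte Carlo sketch, not taken from the paper: the particular choices $\sigma^2 = 1/2$, $\epsilon = 1/2$, and the splitting of the remaining mass $1 - \sigma^2$ evenly over $n - 1$ i.i.d. normal summands are ours for illustration. With this construction $W_n$ is exactly standard normal, yet the estimated Lindeberg sum $L_{n,\epsilon}$ stays bounded away from zero as $n$ grows, approaching $\sigma^2 E\{Z^2\mathbf{1}(\sigma|Z| \ge \epsilon)\}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (our choice, not from the text):
# X_{1,n} = sigma * Z with sigma^2 = 1/2, and the remaining n-1 summands
# are i.i.d. N(0, (1 - sigma^2)/(n - 1)), so that W_n ~ N(0, 1) exactly.
sigma2 = 0.5
eps = 0.5
samples = 200_000

def lindeberg_sum(n):
    """Monte Carlo estimate of L_{n,eps} = sum_i E[X_{i,n}^2 1(|X_{i,n}| >= eps)]."""
    # Contribution of the fixed normal summand X_{1,n} = sigma * Z.
    x1 = np.sqrt(sigma2) * rng.standard_normal(samples)
    term1 = np.mean(x1**2 * (np.abs(x1) >= eps))
    # Contribution of the n-1 identically distributed small summands;
    # their individual variances (1 - sigma2)/(n - 1) tend to zero, so
    # their total Lindeberg contribution vanishes as n grows.
    v = (1 - sigma2) / (n - 1)
    xi = np.sqrt(v) * rng.standard_normal(samples)
    rest = (n - 1) * np.mean(xi**2 * (np.abs(xi) >= eps))
    return term1 + rest

for n in [10, 100, 1000]:
    print(n, lindeberg_sum(n))
```

For large $n$ the estimate settles near $\sigma^2 E\{Z^2\mathbf{1}(\sigma|Z| \ge \epsilon)\} \approx 0.46$ rather than tending to zero, while $m_n = \sigma^2$ fails (18), so the example does not contradict Theorem 1.3.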