and has mean $EX = p$ and variance $\mathrm{Var}(X) = p(1-p)$. The record of successes and failures in $n$ independent trials is then given by an independent sequence $X_1, X_2, \ldots, X_n$ of these Bernoulli variables, and the total number of successes $S_n$ by their sum
$$S_n = X_1 + \cdots + X_n. \tag{1}$$
Exactly, $S_n$ has the binomial distribution, which specifies that
$$P(S_n = k) = \binom{n}{k} p^k (1-p)^{n-k} \qquad \text{for } k = 0, 1, \ldots, n.$$
For even moderate values of $n$, managing the binomial coefficients $\binom{n}{k}$ becomes unwieldy, to say nothing of computing the sum which yields the cumulative probability
$$P(S_n \le m) = \sum_{k \le m} \binom{n}{k} p^k (1-p)^{n-k}$$
that there will be $m$ or fewer successes. The great utility of the CLT is in providing an easily computable approximation to such probabilities that can be quite accurate even for moderate values of $n$.

Standardizing the binomial $S_n$ by subtracting its mean and dividing by its standard deviation to obtain the mean zero, variance one random variable $W_n = (S_n - np)/\sqrt{np(1-p)}$, the CLT yields that
$$\forall x \quad \lim_{n \to \infty} P(W_n \le x) = P(Z \le x) \tag{2}$$
where $Z$ is $N(0,1)$, a standard, mean zero, variance one normal random variable, that is, the one with distribution function
$$\Phi(x) = \int_{-\infty}^{x} \varphi(u)\,du \qquad \text{where} \qquad \varphi(u) = \frac{1}{\sqrt{2\pi}} \exp\!\left(-\tfrac{1}{2}u^2\right). \tag{3}$$
We may therefore, for instance, approximate the cumbersome cumulative binomial probability $P(S_n \le m)$ by the simpler $\Phi\bigl((m - np)/\sqrt{np(1-p)}\bigr)$.

It was only for the special case of the binomial that the normal approximation was first considered. Only many years later, with the work of Laplace around 1820, did it begin to be systematically realized that the same normal limit is obtained when the underlying Bernoulli variables are replaced by any variables with a finite variance. The result was the classical Central Limit Theorem, which states that (2) holds whenever
$$W_n = (S_n - n\mu)/\sqrt{n\sigma^2}$$
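As a quick numerical illustration of the approximation just described, the following minimal sketch compares the exact cumulative binomial probability $P(S_n \le m)$ with its normal approximation $\Phi\bigl((m-np)/\sqrt{np(1-p)}\bigr)$ from (2) and (3). The function names and the particular values of $n$, $p$, and $m$ are illustrative assumptions, not taken from the text.

```python
import math

def binom_cdf(m, n, p):
    """Exact cumulative probability P(S_n <= m) for S_n ~ Binomial(n, p)."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m + 1))

def normal_cdf(x):
    """Standard normal distribution function Phi(x), via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def clt_approx(m, n, p):
    """CLT approximation Phi((m - np) / sqrt(np(1 - p))) to P(S_n <= m)."""
    return normal_cdf((m - n * p) / math.sqrt(n * p * (1 - p)))

# Illustrative choice of n, p, m (hypothetical values for the comparison)
n, p, m = 100, 0.3, 35
print(f"exact  P(S_n <= {m}) = {binom_cdf(m, n, p):.6f}")
print(f"normal approximation = {clt_approx(m, n, p):.6f}")
```

Consistent with the remark above, the two printed values are typically close already for moderate $n$, and the agreement improves as $n$ grows.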