But they can also be calculated directly using the formula

*C*(*n*, *k*) = *n*! / (*k*! (*n*-*k*)!).
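As a sanity check, here is a minimal Python sketch of this calculation (the function name `C` simply mirrors the notation used here; the standard library's `math.comb` computes the same values):

```python
from math import comb, factorial

def C(n, k):
    """Binomial coefficient via the factorial formula n! / (k! (n-k)!)."""
    return factorial(n) // (factorial(k) * factorial(n - k))

# Row 4 of Pascal's triangle: 1, 4, 6, 4, 1
row4 = [C(4, k) for k in range(5)]
print(row4)

# math.comb computes the same values directly
assert row4 == [comb(4, k) for k in range(5)]
```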

[Graphs of *C*(4, *k*), *C*(16, *k*), *C*(36, *k*), *C*(64, *k*), and *C*(100, *k*) as *k* varies.]

As *k* varies,
the maximum value of *C*(*n*, *k*) occurs at
*k* = *n*/2. For the graphs of *C*(*n*, *k*) to be compared as
*n* goes to infinity, their centers must be lined up;
otherwise they would
drift off to infinity. Our first step in uniformizing the rows
is to shift the graph of *C*(*n*, *k*) leftward by
*n*/2; the centers will now all be at 0. The second step is to rescale:
compressing the horizontal axis by a factor of *n*^{1/2} and multiplying
the heights by *n*^{1/2}/2^{n} gives each graph total area 1.

With these translations and rescalings, the convergence of the central portions of the graphs becomes graphically evident:

*C*(4, *k*)·2 / 2^4, plotted against (*k*-2) / 2.

*C*(16, *k*)·4 / 2^16, plotted against (*k*-8) / 4.

*C*(36, *k*)·6 / 2^36, plotted against (*k*-18) / 6.

*C*(64, *k*)·8 / 2^64, plotted against (*k*-32) / 8.

*C*(100, *k*)·10 / 2^100, plotted against (*k*-50) / 10.
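This convergence can also be checked numerically. The sketch below (helper names are ours) compares the center of each rescaled row with the density of the limiting normal distribution of mean 0 and standard deviation 1/2, whose value at 0 is (2/π)^{1/2} ≈ 0.798:

```python
from math import comb, exp, pi, sqrt

def rescaled(n, k):
    """Height of the normalized graph: C(n, k) * sqrt(n) / 2^n,
    plotted against x = (k - n/2) / sqrt(n)."""
    x = (k - n / 2) / sqrt(n)
    y = comb(n, k) * sqrt(n) / 2 ** n
    return x, y

def limit(x):
    """Density of the normal distribution with mean 0, std. deviation 1/2."""
    return sqrt(2 / pi) * exp(-2 * x ** 2)

# The heights at the centers approach the limiting value sqrt(2/pi)
for n in (4, 16, 36, 64, 100):
    x, y = rescaled(n, n // 2)
    print(n, y, limit(x))
```

For *n* = 4 the central height is 0.75; by *n* = 100 it is within 0.3% of the limit.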

Our experiment is an example of the *Central Limit Theorem*,
a fundamental principle of probability theory (which brings us
back to Pascal). The theorem is stated in terms of *random
variables*. In this case, the basic random variable *X*
has values 0 or 1, each with probability 1/2 (this could be the
outcome of flipping a coin). So half the time,
at random, *X* = 0, and the rest of the time *X* = 1.
The *mean* or expected value of *X* is *E*(*X*)
= μ = (1/2)(0) + (1/2)(1) = 1/2. Its
*variance* is defined by σ^{2}
= *E*(*X*^{2})-[*E*(*X*)]^{2}
= 1/4, so its *standard deviation* is σ = 1/2. The *n*-th
row in Pascal's triangle corresponds to the sum
*X*_{1} + ... + *X*_{n},
of *n* random variables, each identical to *X*.
The possible values of the sum are 0, 1, ..., *n*
and the value *k* is attained with probability
*C*(*n*, *k*)/2^{n}.
[So, for example, if we toss a coin four times, and count
1 for each head, 0 for each tail, then the probabilities
of the sums 0, 1, 2, 3, 4 are 1/16, 1/4, 3/8, 1/4, 1/16
respectively.] This set of values and probabilities is called
the *binomial distribution* with *p* = (1-*p*) = 1/2.
It has mean μ_{n} = *n*/2 and
standard deviation σ_{n} = *n*^{1/2}/2.
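Both the four-toss probabilities and the formulas for the mean and standard deviation can be verified with a short Python sketch, using the standard library's `fractions` module for exact arithmetic:

```python
from fractions import Fraction
from math import comb

def binomial_probs(n):
    """Probability of k heads in n fair tosses: C(n, k) / 2^n."""
    return [Fraction(comb(n, k), 2 ** n) for k in range(n + 1)]

probs = binomial_probs(4)
print([str(p) for p in probs])   # ['1/16', '1/4', '3/8', '1/4', '1/16']

# Mean and variance of the sum of n = 4 tosses
mean = sum(k * p for k, p in enumerate(probs))
var = sum(k ** 2 * p for k, p in enumerate(probs)) - mean ** 2
print(mean, var)   # mean = n/2 = 2, variance = n/4 = 1, so sigma_n = 1
```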
In our normalizations we have shifted the means to 0 and stretched
or compressed the axis to achieve uniform standard deviation 1/2.
The Central Limit Theorem is more general, but in this case it states
that the limit of these normalized probability
distributions, as *n* goes to infinity, will be the normal
distribution with mean zero and standard deviation 1/2. This
distribution is represented by the function

*f*(*x*) = (2/π)^{1/2} *e*^{-2*x*^{2}}

(its graph is a "bell-shaped curve") in the sense that the probability of the limit random variable lying in the interval [*a*, *b*] is the integral of *f*(*x*) from *a* to *b*.

The normal distribution with μ = 0 and σ = 1/2.

Suppose you want to know the probability of between 4995 and 5005
heads in 10,000 coin tosses. The calculation with binomial
coefficients would be tedious; but it amounts to calculating
the area under the graph of *C*(10000, *k*) between
*k* = 4995 and *k* = 5005, relative to the total area.
This is equivalent to computing the relative area under the
normalized curve between (4995-5000) / 100 = -.05 and
(5005-5000) / 100 = .05; to a very good approximation this
is the integral of the normal distribution function *f*(*x*)
between -.05 and .05, i.e. approximately 0.0797.
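Both quantities can be sketched in Python; by substitution, the integral of *f* from -*a* to *a* equals erf(*a*·2^{1/2}), and `math.erf` is in the standard library:

```python
from math import comb, erf, sqrt

# Exact probability of between 4995 and 5005 heads in 10000 fair tosses:
# the sum of C(10000, k) / 2^10000 for k = 4995 .. 5005
exact = sum(comb(10000, k) for k in range(4995, 5006)) / 2 ** 10000

# Normal approximation: the integral of f(x) = sqrt(2/pi) * exp(-2 x^2)
# from -a to a equals erf(a * sqrt(2)); here a = 0.05
approx = erf(0.05 * sqrt(2))

# Moving the endpoints to +/- 0.055 (the continuity correction,
# covering the half-unit beyond each lattice point) improves the match
corrected = erf(0.055 * sqrt(2))

print(exact, approx, corrected)
```

The exact sum comes out near 0.0876, the integral between -.05 and .05 near 0.0797, and the continuity-corrected integral agrees with the exact sum to about three decimal places.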

*Corrected, May 6, 2017.*