# 5.4: The Central Limit Theorem for Proportions

The Central Limit Theorem tells us that the point estimate for the sample mean, \(\overline X\), comes from a normal distribution of \(\overline X\)'s. This theoretical distribution is called the sampling distribution of \(\overline X\)'s. We now investigate the sampling distribution for another important parameter we wish to estimate: \(p\), from the binomial probability density function.

If the random variable is discrete, such as for categorical data, then the parameter we wish to estimate is the population proportion. This is, of course, the probability of drawing a success in any one random draw. Unlike the case just discussed for a continuous random variable, where we did not know the population distribution of \(X\)'s, here we actually know the underlying probability density function for these data: it is the binomial. The random variable is \(X =\) the number of successes in the sample, and the parameter we wish to know is \(p\), the probability of drawing a success, which is of course the proportion of successes in the population. The question at issue is: from what distribution was the sample proportion, \(\hat{p}=\frac{X}{n}\), drawn? The sample size is \(n\) and \(X\) is the number of successes found in that sample. This parallels the question just answered by the Central Limit Theorem: from what distribution was the sample mean, \(\overline X\), drawn?
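The relationship between \(X\), \(n\), \(\hat{p}\), and the known binomial population distribution can be made concrete with a short sketch (the sample numbers here are hypothetical, chosen only for illustration):

```python
from math import comb

# Hypothetical sample: n = 10 random draws yield X = 6 successes.
n, X = 10, 6
p_hat = X / n  # the sample proportion, our point estimate of p

# The known underlying population distribution is binomial:
# P(X = k) = C(n, k) * p**k * (1 - p)**(n - k)
def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p) ** (n - k)

print(p_hat)                  # 0.6
print(binom_pmf(6, 10, 0.5))  # chance of exactly 6 successes if p were 0.5
```

Note that `binom_pmf` can only be evaluated for an assumed value of \(p\); in practice \(p\) is exactly the unknown we are trying to estimate with \(\hat{p}\).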

In order to find the distribution from which sample proportions come, we need to develop the sampling distribution of sample proportions just as we did for sample means. So again imagine that we randomly sample, say, 50 people and ask them if they support the new school bond issue. From this we find a sample proportion, \(\hat{p}\), and graph it on the axis of \(\hat{p}\)'s. We do this again and again until we have the theoretical distribution of \(\hat{p}\). Some sample proportions will show high favorability toward the bond issue and others will show low favorability, because random sampling will reflect the variation of views within the population. What we have done can be seen in Figure \(\PageIndex{1}\). The top panel is the population distribution of probabilities for each possible value of the random variable \(X\). While we do not know what the specific distribution looks like because we do not know \(p\), the population parameter, we do know that it must look something like this. In reality, we do not know either the mean or the standard deviation of this population distribution, the same difficulty we faced when analyzing the \(X\)'s previously.
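The repeated-polling thought experiment above can be simulated directly. The sketch below assumes a hypothetical true support rate `p_true = 0.60` (in practice this is unknown) and draws many polls of 50 people, collecting the \(\hat{p}\) from each:

```python
import random

random.seed(1)

p_true = 0.60       # hypothetical true support for the bond issue (unknown in practice)
n = 50              # people asked in each poll
num_polls = 10_000  # repeated polls building the sampling distribution

p_hats = []
for _ in range(num_polls):
    # Each respondent supports the bond with probability p_true
    successes = sum(random.random() < p_true for _ in range(n))
    p_hats.append(successes / n)

# The p-hats cluster around p_true; their spread is the sampling variation
mean_p_hat = sum(p_hats) / num_polls
print(mean_p_hat)
```

A histogram of `p_hats` would show the roughly normal, bell-shaped sampling distribution that Figure \(\PageIndex{1}\) depicts.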

Figure \(\PageIndex{1}\) places the mean on the distribution of population probabilities as \(\mu=np\), but of course we do not actually know the population mean because we do not know the population probability of success, \(p\). Below the distribution of the population values is the sampling distribution of \(\hat{p}\)'s. Again the Central Limit Theorem tells us that this distribution is normally distributed, just like the case of the sampling distribution for \(\overline X\)'s. This sampling distribution also has a mean, the mean of the \(\hat{p}\)'s, and a standard deviation, \(\sigma_{\hat{p}}\).

Importantly, in the case of the analysis of the distribution of sample means, the Central Limit Theorem told us the expected value of the mean of the sample means in the sampling distribution, and the standard deviation of the sampling distribution. Again the Central Limit Theorem provides this information for the sampling distribution for proportions. The answers are:

- The mean of the sampling distribution of the sample proportion, \(\mu_{\hat{p}}\), is the population proportion, \(p\).
- The standard deviation of the sampling distribution of the sample proportion, \(\sigma_{\hat{p}}\), is the standard deviation of a single draw, \(\sqrt{p(1-p)}\), divided by the square root of the sample size, \(n\).

Both these conclusions are the same as we found for the sampling distribution for sample means. However, in this case, because the mean and standard deviation of the binomial distribution both rely upon \(p\), the formula for the standard deviation of the sampling distribution requires algebraic manipulation to be useful. We will take that up in the next chapter. The proof of these important conclusions from the Central Limit Theorem is provided below.

\[E\left[\hat{p}\right]=E\left[\frac{X}{n}\right]=\left(\frac{1}{n}\right) E[X] =\left(\frac{1}{n}\right) np=p\nonumber\]

(The expected value of \(X\), \(E[X]\), is simply the mean of the binomial distribution which we know to be \(np\).)

\[\sigma_{\hat{p}}^{2}=\operatorname{Var}\left(\hat{p}\right)=\operatorname{Var}\left(\frac{X}{n}\right)=\frac{1}{n^{2}}(\operatorname{Var}(X))=\frac{1}{n^{2}}(n p(1-p))=\frac{p(1-p)}{n}\nonumber\]

The standard deviation of the sampling distribution for proportions is thus:

\[\sigma_{\hat{p}}=\sqrt{\frac{p(1-p)}{n}}\nonumber\]
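These two results can be checked numerically. The sketch below (with hypothetical values \(p=0.30\), \(n=100\)) simulates many sample proportions and compares the mean and standard deviation of the simulated \(\hat{p}\)'s against \(p\) and \(\sqrt{p(1-p)/n}\):

```python
import math
import random

random.seed(2)

p, n, reps = 0.30, 100, 20_000  # hypothetical population proportion and sample size

# Draw reps sample proportions, each from a sample of n Bernoulli(p) trials
p_hats = [sum(random.random() < p for _ in range(n)) / n for _ in range(reps)]

mean_sim = sum(p_hats) / reps
sd_sim = math.sqrt(sum((x - mean_sim) ** 2 for x in p_hats) / reps)
sd_theory = math.sqrt(p * (1 - p) / n)  # sqrt(p(1-p)/n) from the derivation above

print(mean_sim)   # close to p = 0.30
print(sd_sim)     # close to sd_theory
print(sd_theory)
```

With 20,000 simulated samples, the simulated mean and standard deviation typically agree with the theoretical values to two or three decimal places.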

Parameter | Population distribution | Sample | Sampling distribution of \(\hat{p}\)'s
---|---|---|---
Mean | \(\mu = np\) | \(\hat{p}=\frac{X}{n}\) | \(\mu_{\hat{p}} = E[\hat{p}]=p\)
Standard deviation | \(\sigma=\sqrt{n p (1-p)}\) |  | \(\sigma_{\hat{p}}=\sqrt{\frac{p(1-p)}{n}}\)

Table \(\PageIndex{1}\) summarizes these results and shows the relationship between the population, sample and sampling distribution.

Reviewing the formula for the standard deviation of the sampling distribution for proportions, we see that as \(n\) increases the standard deviation decreases. This is the same observation we made for the standard deviation of the sampling distribution for means. Again, as the sample size increases, the point estimate for either \(\mu\) or \(p\) comes from a narrower and narrower sampling distribution.
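The shrinking of \(\sigma_{\hat{p}}\) with \(n\) is easy to tabulate. The sketch below uses \(p=0.5\), the value at which \(p(1-p)\) (and hence the standard deviation) is largest:

```python
import math

p = 0.5  # worst case: p(1 - p) is maximized at p = 0.5
for n in (25, 100, 400, 1600):
    sd = math.sqrt(p * (1 - p) / n)
    print(n, sd)
# 25   0.1
# 100  0.05
# 400  0.025
# 1600 0.0125
```

Because \(n\) sits under a square root, quadrupling the sample size only halves the standard deviation of the sampling distribution.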