8.2: Comparing Two Independent Population Means
The comparison of two independent population means is very common and provides a way to test the hypothesis that the two groups differ from each other. Is the night shift less productive than the day shift? Are the rates of return from fixed-asset investments different from those from common stock investments? An observed difference between two sample means depends on both the means and the sample standard deviations: an apparently large difference between two sample means can occur by chance if there is great variation within the individual samples, and the test statistic must account for this fact. The test comparing two independent population means with unknown and possibly unequal population standard deviations is called the Aspin-Welch \(t\)-test. The degrees of freedom formula we will see later was developed by Aspin and Welch.
When we developed the hypothesis tests for the mean and for proportions, we began with the Central Limit Theorem. We recognized that a sample mean comes from a distribution of sample means, and a sample proportion comes from the sampling distribution of sample proportions. This made our sample statistics, the sample means and sample proportions, into random variables. It was important for us to know the distribution that these random variables came from. The Central Limit Theorem gave us the answer: the normal distribution. Our \(Z\) and \(t\) statistics came from this theorem. This provided us with the solution to our question of how to measure the probability that a sample mean came from a distribution with a particular hypothesized value of the mean or proportion. In both cases the question was the same: what is the probability that the mean (or proportion) from our sample data came from a population distribution with the hypothesized value we are interested in?
Now we are interested in whether or not two samples have the same mean. Our question has not changed: Do these two samples come from the same population distribution? To approach this problem we create a new random variable. We recognize that we have two sample means, one from each set of data, and thus we have two random variables coming from two unknown distributions. To solve the problem we create a new random variable, the difference between the sample means. This new random variable also has a distribution and, again, the Central Limit Theorem tells us that this new distribution is normally distributed, regardless of the underlying distributions of the original data. A graph may help to understand this concept.
Figure \(\PageIndex{2}\)

Pictured are two distributions of data, \(X_1\) and \(X_2\), with unknown means and standard deviations. The second panel shows the sampling distribution of the newly created random variable (\(\overline{X}_{1}-\overline{X}_{2}\)). This distribution is the theoretical distribution of many sample means from population 1 minus sample means from population 2. The Central Limit Theorem tells us that this theoretical sampling distribution of differences in sample means is normally distributed, regardless of the distribution of the actual population data shown in the top panel. Because the sampling distribution is normally distributed, we can develop a standardizing formula and calculate probabilities from the standard normal distribution in the bottom panel, the \(Z\) distribution. We have seen this same analysis before in Chapter 7 Figure \(\PageIndex{2}\).
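To make the idea concrete, here is a minimal simulation sketch in Python (the populations, sample sizes, and seed are all hypothetical choices for illustration): it draws repeated pairs of samples from two deliberately non-normal populations and shows that the differences in sample means pile up in an approximately normal shape around \(\mu_1 - \mu_2\).

```python
import numpy as np

rng = np.random.default_rng(42)

n1, n2, reps = 40, 50, 10_000
diffs = np.empty(reps)
for i in range(reps):
    x1 = rng.exponential(scale=2.0, size=n1)       # skewed population, mean 2.0
    x2 = rng.uniform(low=0.0, high=5.0, size=n2)   # flat population, mean 2.5
    diffs[i] = x1.mean() - x2.mean()

# The differences center near mu1 - mu2 = -0.5, and a histogram of `diffs`
# is approximately bell-shaped even though neither population is normal.
print(round(diffs.mean(), 3), round(diffs.std(), 3))
```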
The Central Limit Theorem, as before, provides us with the standard deviation of the sampling distribution, and further tells us that the expected value of the mean of the distribution of differences in sample means is equal to the difference in the population means. Mathematically this can be stated:
\[E\left(\mu_{\overline{x}_{1}}-\mu_{\overline{x}_{2}}\right)=\mu_{1}-\mu_{2}\nonumber\]
Because we do not know the population standard deviations, we estimate them using the two sample standard deviations from our independent samples. For the hypothesis test, we calculate the estimated standard deviation, or standard error, of the difference in sample means, \(\overline{X}_{1}-\overline{X}_{2}\).
The standard error is:

\[\sqrt{\frac{s_{1}^{2}}{n_{1}}+\frac{s_{2}^{2}}{n_{2}}}\nonumber\]
Recall that substituting the sample variance for the unknown population variance is the same technique we used when building the confidence interval and the test statistic for the test of hypothesis for a single mean back in Confidence Intervals and Hypothesis Testing with One Sample. The test statistic (\(t\)-score) is calculated as follows:
\[t_{c}=\frac{\left(\overline{x}_{1}-\overline{x}_{2}\right)-\delta_{0}}{\sqrt{\frac{s_{1}^{2}}{n_{1}}+\frac{s_{2}^{2}}{n_{2}}}}\nonumber\]
where:
- \(s_1\) and \(s_2\), the sample standard deviations, are estimates of \(\sigma_1\) and \(\sigma_2\), respectively,
- \(\sigma_1\) and \(\sigma_2\) are the unknown population standard deviations, and
- \(\overline{x}_{1}\) and \(\overline{x}_{2}\) are the sample means; \(\mu_1\) and \(\mu_2\) are the unknown population means.
The number of degrees of freedom (\(df\)) requires a somewhat complicated calculation, and the \(df\) is not always a whole number. The test statistic above is approximated by the Student's \(t\)-distribution with \(df\) as follows:
Degrees of freedom
\[df=\frac{\left(\frac{s_{1}^{2}}{n_{1}}+\frac{s_{2}^{2}}{n_{2}}\right)^{2}}{\left(\frac{1}{n_{1}-1}\right)\left(\frac{s_{1}^{2}}{n_{1}}\right)^{2}+\left(\frac{1}{n_{2}-1}\right)\left(\frac{s_{2}^{2}}{n_{2}}\right)^{2}}\nonumber\]
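The formulas above translate directly into code. The following is a minimal sketch (the function name `welch_t` and its signature are our own for illustration, not from any particular library) that computes the standard error, the test statistic, and the Aspin-Welch degrees of freedom from summary statistics:

```python
import math

def welch_t(xbar1, s1, n1, xbar2, s2, n2, delta0=0.0):
    """Aspin-Welch t statistic and degrees of freedom from summary statistics.

    delta0 is the hypothesized difference mu1 - mu2 (0 when the question is
    simply whether the means differ at all).
    """
    v1, v2 = s1**2 / n1, s2**2 / n2        # the two terms under the square root
    se = math.sqrt(v1 + v2)                # standard error of xbar1 - xbar2
    t_c = (xbar1 - xbar2 - delta0) / se
    df = (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    return t_c, df
```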
When both sample sizes \(n_1\) and \(n_2\) are 30 or larger, the Student's \(t\) approximation is very good. If each sample has more than 30 observations, the degrees of freedom can be calculated more simply as \(n_1 + n_2 - 2\).
Because the sampling distribution consists of differences in sample means, the null and alternative hypotheses take the form:
\[H_{0} : \mu_{1}-\mu_{2}=\delta_{0}\nonumber\]
\[H_{\mathrm{a}} : \mu_{1}-\mu_{2} \neq \delta_{0}\nonumber\]
where \(\delta_{0}\) is the hypothesized difference between the two means. If the question is simply "is there any difference between the means?" then \(\delta_{0} = 0\) and the null and alternative hypotheses become:
\[H_{0} : \mu_{1}=\mu_{2}\nonumber\]
\[H_{\mathrm{a}} : \mu_{1} \neq \mu_{2}\nonumber\]
An example of when \(\delta_{0}\) might not be zero is when the comparison of the two groups requires a specific difference for the decision to be meaningful. Imagine that you are making a capital investment. You are considering changing from your current model of machine to another. You measure the productivity of your machines by the speed at which they produce the product. It may be that a contender to replace the old model is faster in terms of product throughput, but it is also more expensive. The second machine may also have higher maintenance costs, setup costs, and so on. The null hypothesis would be set up so that the new machine would have to be better than the old one by enough to cover these extra costs in terms of speed and cost of production, as in the sketch below. This form of the null and alternative hypothesis shows how valuable this particular hypothesis test can be. For most of our work we will be testing simple hypotheses asking if there is any difference between the two distribution means.
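Using the `welch_t` sketch above, a nonzero \(\delta_{0}\) is just a change of argument. The machine speeds, sample sizes, and required margin below are entirely hypothetical:

```python
# Hypothetical: group 1 is the old machine, group 2 the new one, and the new
# machine must be faster by at least 0.5 units/hour to justify its costs,
# so H0: mu1 - mu2 = -0.5 (i.e., delta0 = -0.5).
t_c, df = welch_t(xbar1=41.0, s1=3.0, n1=25,
                  xbar2=42.1, s2=2.5, n2=25, delta0=-0.5)
print(t_c, df)
```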
The Kona Iki Corporation produces coconut milk. They take coconuts and extract the milk inside by drilling a hole and pouring the milk into a vat for processing. They have both a day shift (called the B shift) and a night shift (called the G shift) to do this part of the process. They would like to know if the day shift and the night shift are equally efficient in processing the coconuts. A study is done sampling 9 shifts of the G shift and 16 shifts of the B shift. The results, the number of hours required to process 100 pounds of coconuts, are presented in Table \(\PageIndex{1}\).
| | Sample Size | Average Number of Hours to Process 100 Pounds of Coconuts | Sample Standard Deviation |
|---|---|---|---|
| G Shift | 9 | 2 | 0.866 |
| B Shift | 16 | 3.2 | 1.00 |

Table \(\PageIndex{1}\)
Is there a difference in the mean amount of time for each shift to process 100 pounds of coconuts? Test at the 5% level of significance.
Solution 10.1
The population standard deviations are not known and cannot be assumed to equal each other. Let \(g\) be the subscript for the G Shift and \(b\) be the subscript for the B Shift. Then, \(\mu_g\) is the population mean for G Shift and \(\mu_b\) is the population mean for B Shift. This is a test of two independent groups, two population means.
Random variable: \(\overline{X}_{g}-\overline{X}_{b}\) = difference in the sample mean amount of time it takes the G Shift and the B Shift to process the coconuts.
\(H_{0}: \mu_g = \mu_b\)   \(H_{0}: \mu_g - \mu_b = 0\)
\(H_a: \mu_g \neq \mu_b\)   \(H_a: \mu_g - \mu_b \neq 0\)
The words "the same" tell you that \(H_{0}\) has an "=". Since there are no other words to indicate the direction of \(H_a\), the G Shift could be either faster or slower, so this is a two-tailed test.

Distribution for the test: Use \(t_{df}\), where \(df\) is calculated using the \(df\) formula for independent groups, two population means, above. Using a calculator, \(df\) is approximately 18.8462.
Graph:
Figure \(\PageIndex{3}\)

\[t_{c}=\frac{\left(\overline{x}_{g}-\overline{x}_{b}\right)-\delta_{0}}{\sqrt{\frac{s_{g}^{2}}{n_{g}}+\frac{s_{b}^{2}}{n_{b}}}}=\frac{(2-3.2)-0}{\sqrt{\frac{0.866^{2}}{9}+\frac{1.00^{2}}{16}}} \approx -3.14\nonumber\]
We next find the critical value on the \(t\)-table using the degrees of freedom from above. Rounding 18.8462 to 19 degrees of freedom, the critical value, 2.093, is found in the 0.025 column (this is \(\alpha/2\)). (Rounding the degrees of freedom down instead would give a slightly larger critical value and a more conservative test; either way the conclusion here is the same.) Finally, we mark the calculated test statistic on the \(t\)-distribution graph.
Make a decision: Since the calculated \(t\)-value is in the tail we cannot accept the null hypothesis that there is no difference between the two groups. The means are different.
The graph includes the sampling distribution of the differences in the sample means to show how the \(t\)-distribution aligns with the sampling distribution data. We see in the top panel that the calculated difference in the two means is -1.2, and the bottom panel shows that this is about 3.14 standard deviations from the mean. Typically we do not need to show the sampling distribution graph and can rely on the graph of the test statistic, the \(t\)-distribution in this case, to reach our conclusion.
Conclusion: At the 5% level of significance, the sample data show there is sufficient evidence to conclude that the mean number of hours that the G Shift takes to process 100 pounds of coconuts is different from the B Shift (mean number of hours for the B Shift is greater than the mean number of hours for the G Shift).
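As a cross-check, SciPy's `ttest_ind_from_stats` performs this same test directly from the summary statistics in Table \(\PageIndex{1}\); passing `equal_var=False` selects the Aspin-Welch (unpooled) version. A minimal sketch:

```python
from scipy import stats

res = stats.ttest_ind_from_stats(
    mean1=2.0, std1=0.866, nobs1=9,    # G Shift
    mean2=3.2, std2=1.00, nobs2=16,    # B Shift
    equal_var=False,                   # Aspin-Welch: do not pool the variances
)
print(res.statistic, res.pvalue)       # statistic about -3.14, p-value below 0.05
```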
When the sum of the sample sizes is larger than 30 \(\left(n_{1}+n_{2}>30\right)\), you can use the normal distribution to approximate the Student's \(t\).
A study is done to determine if Company A retains its workers longer than Company B. It is believed that Company A has a higher retention than Company B. The study finds that in a sample of 11 workers at Company A their average time with the company is four years with a standard deviation of 1.5 years. A sample of 9 workers at Company B finds that the average time with the company was 3.5 years with a standard deviation of 1 year. Test this proposition at the 1% level of significance.
a. Is this a test of two means or two proportions?
Solution 10.2
a. two means because time is a continuous random variable.
b. Are the population standard deviations known or unknown?
Solution 10.2
b. unknown
c. Which distribution do you use to perform the test?
Solution 10.2
c. Student's \(t\)
d. What is the random variable?
Solution 10.2
d. \(\overline{X}_{A}-\overline{X}_{B}\)
e. What are the null and alternate hypotheses?
Solution 10.2
e.
- \(H_{0} : \mu_{A} \leq \mu_{B}\)
- \(H_{a} : \mu_{A}>\mu_{B}\)
f. Is this test right-, left-, or two-tailed?
Solution 10.2
f. right-tailed test
Figure \(\PageIndex{4}\)
g. What is the value of the test statistic?
Solution 10.2
g.
\(t_{c}=\frac{\left(\overline{x}_{A}-\overline{x}_{B}\right)-\delta_{0}}{\sqrt{\frac{s_{A}^{2}}{n_{A}}+\frac{s_{B}^{2}}{n_{B}}}}=\frac{(4-3.5)-0}{\sqrt{\frac{1.5^{2}}{11}+\frac{1.0^{2}}{9}}} \approx 0.89\)
h. Can you accept/reject the null hypothesis?
Solution 10.2
h. Cannot reject the null hypothesis that there is no difference between the two groups; the test statistic is not in the tail. Using the \(df\) formula above gives approximately 17 degrees of freedom, and the critical value of the \(t\)-distribution at the 1% level is then 2.567. This example shows how difficult it is to reject a null hypothesis with a very small sample: the critical values require very large test statistics to reach the tail.
i. Conclusion:
Solution 10.2
i. At the 1% level of significance, from the sample data, there is not sufficient evidence to conclude that the retention of workers at Company A is longer than that at Company B, on average.
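The same SciPy call handles the one-tailed version; the `alternative` keyword (available in recent SciPy releases) encodes \(H_a: \mu_A > \mu_B\). A sketch of this check:

```python
from scipy import stats

res = stats.ttest_ind_from_stats(
    mean1=4.0, std1=1.5, nobs1=11,     # Company A
    mean2=3.5, std2=1.0, nobs2=9,      # Company B
    equal_var=False,
    alternative="greater",             # right-tailed: H_a is mu_A > mu_B
)
print(res.statistic, res.pvalue)       # statistic about 0.89; p-value above 0.01
```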
An interesting research question is the effect, if any, that different types of teaching formats have on the grade outcomes of students. To investigate this issue, one sample of students' grades was taken from a hybrid class and another sample was taken from a standard lecture format class. Both classes were for the same subject. The mean course grade in percent for the 35 hybrid students is 74 with a standard deviation of 16. The mean course grade for the 40 students from the standard lecture class was 76 percent with a standard deviation of 9. Test at the 5% level to see if there is any significant difference in the population mean grades between the standard lecture course and the hybrid class.
Solution 10.3
We begin by noting that we have two groups, students from a hybrid class and students from a standard lecture format class. We also note that the random variable, what we are interested in, is students' grades, a continuous random variable. We could have asked the research question in a different way and had a binary random variable. For example, we could have studied the percentage of students with a failing grade, or with an A grade. Both of these would be binary and thus a test of proportions and not a test of means as is the case here. Finally, there is no presumption as to which format might lead to higher grades so the hypothesis is stated as a two-tailed test.
\(H_{0}: \mu_1 = \mu_2 \)
\(H_a: \mu_1 \neq \mu_2\)

As would virtually always be the case, we do not know the population variances of the two distributions, and thus our test statistic is:
\[t_{c}=\frac{\left(\overline{x}_{1}-\overline{x}_{2}\right)-\delta_{0}}{\sqrt{\frac{s_{1}^{2}}{n_{1}}+\frac{s_{2}^{2}}{n_{2}}}}=\frac{(74-76)-0}{\sqrt{\frac{16^{2}}{35}+\frac{9^{2}}{40}}}=-0.65\nonumber\]
To determine the critical value of the Student's \(t\) we need the degrees of freedom. For this case we use the simplified formula: \(df = n_1 + n_2 - 2 = 35 + 40 - 2 = 73\). This is large enough to treat the distribution as normal, thus \(t_{\alpha/2} = 1.96\). As always, we determine whether the calculated value is in the tail marked off by the critical value. In this case we do not even need to look up the critical value: the test statistic tells us that the difference in the two average grades is less than one standard error from zero, certainly not in the tail.
Conclusion: Cannot reject the null at \(\bf{\alpha = 5\%}\). Therefore, the evidence is insufficient to conclude that the mean grades in the hybrid and standard lecture classes differ.
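For completeness, the same SciPy cross-check applies here (a sketch; the exact Welch degrees of freedom for these samples are about 52, and the decision matches the normal approximation used above):

```python
from scipy import stats

res = stats.ttest_ind_from_stats(
    mean1=74, std1=16, nobs1=35,       # hybrid class
    mean2=76, std2=9,  nobs2=40,       # standard lecture class
    equal_var=False,
)
print(res.statistic, res.pvalue)       # statistic about -0.65; p-value far above 0.05
```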