
9.4: The F Distribution and the F-Ratio


    The distribution used for the hypothesis test is a new one. It is called the \(\bf{F}\) distribution, invented by George Snedecor but named in honor of Sir Ronald Fisher, an English statistician. The \(F\) statistic is a ratio (a fraction). There are two sets of degrees of freedom; one for the numerator and one for the denominator.

    For example, if \(F\) follows an \(F\) distribution and the number of degrees of freedom for the numerator is four, and the number of degrees of freedom for the denominator is ten, then \(F \sim F_{4,10}\).

    To calculate the \(\bf{F}\) ratio, two estimates of the variance are made.

    1. Variance between samples: An estimate of \(\sigma^2\) that is the variance of the sample means multiplied by \(n\) (when the sample sizes are the same). If the samples are different sizes, the variance between samples is weighted to account for the different sample sizes. This variance is also called variation due to treatment or explained variation.
    2. Variance within samples: An estimate of \(\sigma^2\) that is the average of the sample variances (also known as a pooled variance). When the sample sizes are different, the variance within samples is weighted. The variance is also called the variation due to error or unexplained variation.
    • \(SS_{between}\) = the sum of squares that represents the variation among the different samples
    • \(SS_{within}\) = the sum of squares that represents the variation within samples that is due to chance.

    To find a "sum of squares" means to add together squared quantities that, in some cases, may be weighted. We used sum of squares to calculate the sample variance and the sample standard deviation in Table 1.19.

    MS means "mean square." \(MS_{between}\) is the variance between groups, and \(MS_{within}\) is the variance within groups.

    Calculation of Sum of Squares and Mean Square

    • \(k\) = the number of different groups
    • \(n_j\) = the size of the \(j^{th}\) group
    • \(s_j\) = the sum of the values in the \(j^{th}\) group
    • \(n\) = total number of all the values combined (total sample size: \(\Sigma n_{j}\))
    • \(x\) = one value: \(\sum x=\sum s_{j}\)
    • Sum of squares of all values from every group combined: \(\sum x^{2}\)
    • Total sum of squares: \(SS_{total} =\sum x^{2}-\frac{\left(\sum x\right)^{2}}{n}\)
    • Explained variation: sum of squares representing variation among the different samples:
      \(SS_{between} =\sum\left[\frac{\left(s_{j}\right)^{2}}{n_{j}}\right]-\frac{\left(\sum s_{j}\right)^{2}}{n}\)
    • Unexplained variation: sum of squares representing variation within samples due to chance: \(S S_{\text { within }}=S S_{\text { total }}-S S_{\text { between }}\)
    • \(df\)'s for the different groups (\(df\)'s for the numerator): \(df_{between} = k – 1\)
    • Equation for errors within samples (\(df\)'s for the denominator): \(df_{within} = n – k\)
    • Mean square (variance estimate) explained by the different groups: \(M S_{\text { between }}=\frac{S S_{\text { between }}}{d f_{\text { between }}}\)
    • Mean square (variance estimate) that is due to chance (unexplained): \(M S_{\mathrm{within}}=\frac{S S_{\mathrm{within}}}{d f_{\mathrm{within}}}\)

    \(MS_{between}\) and \(MS_{within}\) can be written as follows:

    • \(M S_{\mathrm{between}}=\frac{S S_{\mathrm{between}}}{d f_{\mathrm{between}}}=\frac{S S_{\mathrm{between}}}{k-1}\)
    • \(M S_{w i t h i n}=\frac{S S_{w i t h i n}}{d f_{w i t h i n}}=\frac{S S_{w i t h i n}}{n-k}\)
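    As a concrete illustration of the formulas above, here is a minimal Python sketch (an addition for illustration, not part of the original text; the function name and the small data set are hypothetical) that computes the sums of squares, the mean squares, and the resulting \(F\)-ratio for groups that may have different sizes.

```python
# Minimal sketch of the sum-of-squares formulas above (hypothetical data).
import numpy as np

def one_way_f_ratio(groups):
    """Return SS_between, SS_within, MS_between, MS_within, and F."""
    k = len(groups)                              # number of different groups
    x = np.concatenate(groups)                   # all values combined
    n = x.size                                   # total sample size
    s_j = np.array([g.sum() for g in groups])    # sum of the values in each group
    n_j = np.array([g.size for g in groups])     # size of each group

    ss_total = (x ** 2).sum() - x.sum() ** 2 / n
    ss_between = (s_j ** 2 / n_j).sum() - s_j.sum() ** 2 / n
    ss_within = ss_total - ss_between

    ms_between = ss_between / (k - 1)            # df_between = k - 1
    ms_within = ss_within / (n - k)              # df_within  = n - k
    return ss_between, ss_within, ms_between, ms_within, ms_between / ms_within

# Hypothetical example with unequal group sizes
groups = [np.array([4.1, 5.0, 4.6, 5.2]),
          np.array([3.9, 4.4, 4.0]),
          np.array([5.5, 5.1, 4.8])]
print(one_way_f_ratio(groups))
```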

    The one-way ANOVA test depends on the fact that \(M S_{between}\) can be influenced by population differences among means of the several groups. Since \(M S_{w i t h i n}\) compares values of each group to its own group mean, the fact that group means might be different does not affect \(M S_{w i t h i n}\).

    The null hypothesis says that all groups are samples from populations having the same normal distribution. The alternate hypothesis says that at least two of the sample groups come from populations with different normal distributions. If the null hypothesis is true, \(M S_{between}\) and \(M S_{w i t h i n}\) should both estimate the same value.

    Note

    The null hypothesis says that all the group population means are equal. The hypothesis of equal means implies that the populations have the same normal distribution, because it is assumed that the populations are normal and that they have equal variances.

    \(\bf{F}\)-Ratio or \(\bf{F}\) Statistic

    \[F=\frac{M S_{\text { between }}}{M S_{\text { within }}}\]

    If \(MS_{between}\) and \(MS_{within}\) estimate the same value (following the belief that \(H_0\) is true), then the \(F\)-ratio should be approximately equal to one. Mostly, just sampling error would cause variations away from one. As it turns out, \(MS_{between}\) consists of the population variance plus a variance produced by the differences between the samples. \(MS_{within}\) is an estimate of the population variance. Since variances are always positive, if the null hypothesis is false, \(MS_{between}\) will generally be larger than \(MS_{within}\). Then the \(F\)-ratio will be larger than one. However, if the population effect is small, it is not unlikely that \(MS_{within}\) will be larger in a given sample.
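    A short simulation can make this concrete. The sketch below (an illustration added here, not part of the original text; it assumes NumPy and SciPy are available) repeatedly draws \(k = 3\) groups, first from one common normal population and then from populations whose means differ, and shows that the \(F\)-ratio clusters near one in the first case and tends to be larger in the second.

```python
# Simulation sketch: F-ratios cluster near 1 when H0 is true, and tend to be
# larger when the group means actually differ.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
k, n_per_group = 3, 10

def f_stat(groups):
    f, _ = stats.f_oneway(*groups)   # SciPy's one-way ANOVA: MS_between / MS_within
    return f

# H0 true: all groups drawn from the same normal population
same = [f_stat([rng.normal(50, 5, n_per_group) for _ in range(k)]) for _ in range(2000)]
# H0 false: one group mean shifted upward by 4
diff = [f_stat([rng.normal(50 + 4 * (j == 0), 5, n_per_group) for j in range(k)])
        for _ in range(2000)]

print(np.mean(same))   # close to 1 (about df_within / (df_within - 2))
print(np.mean(diff))   # noticeably larger than 1
```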

    The foregoing calculations were done with groups of different sizes. If the groups are the same size, the calculations simplify somewhat and the F-ratio can be written as:

    \(\bf{F}\)-Ratio Formula when the groups are the same size

    \(F=\frac{n \cdot s_{\overline{x}}^{2}}{s^{2}_{ pooled }}\)

    where ...

    • \(n\) = the size of each group (the groups all have the same size)
    • \(df_{\text {numerator}}=k-1\)
    • \(df_{\text {denominator}}=nk-k\) (the total number of observations minus the number of groups)
    • \(s_{pooled}^2\) = the mean of the sample variances (pooled variance)
    • \(s_{\overline x}^2\) = the variance of the sample means
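    To see that this shortcut agrees with the general sum-of-squares definition, here is a brief Python sketch (added for illustration, not part of the original text; the equal-size groups are hypothetical):

```python
# The shortcut F = n * var(group means) / mean(group variances) matches the
# general MS_between / MS_within computation when all groups have size n.
import numpy as np
from scipy import stats

groups = np.array([[4.1, 5.0, 4.6, 5.2],      # hypothetical equal-size groups
                   [3.9, 4.4, 4.0, 4.7],
                   [5.5, 5.1, 4.8, 5.0]])
n = groups.shape[1]                            # common group size

F_shortcut = n * groups.mean(axis=1).var(ddof=1) / groups.var(axis=1, ddof=1).mean()
F_general, _ = stats.f_oneway(*groups)         # SciPy's one-way ANOVA F statistic

print(F_shortcut, F_general)                   # the two values agree
```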

    Data are typically put into a table for easy viewing. One-Way ANOVA results are often displayed in this manner by computer software.

    Table 9.3
    Source of variation | Sum of squares (\(SS\)) | Degrees of freedom (\(df\)) | Mean square (\(MS\)) | \(F\)
    Factor (Between) | \(SS\)(Factor) | \(k – 1\) | \(MS(Factor) = SS(Factor)/(k – 1)\) | \(F = MS(Factor)/MS(Error)\)
    Error (Within) | \(SS\)(Error) | \(n – k\) | \(MS(Error) = SS(Error)/(n – k)\) |
    Total | \(SS\)(Total) | \(n – 1\) | |

    Example 9.2

    Three different diet plans are to be tested for mean weight loss. The entries in Table 9.4 are the weight losses for the different plans; the completed one-way ANOVA table is shown in Table 9.5.

    Table 9.4

    Plan 1: \(n_1 = 4\) | Plan 2: \(n_2 = 3\) | Plan 3: \(n_3 = 3\)
    5 | 3.5 | 8
    4.5 | 7 | 4
    4 | 4.5 | 3.5
    3 | |

    \(s_{1}=16.5,\ s_{2}=15,\ s_{3}=15.5\), where \(s_{j}\) is the sum of the values in the \(j^{th}\) group.

    Following are the calculations needed to fill in the one-way ANOVA table. The table is used to conduct a hypothesis test.

    \[S S(\text { between })=\sum\left[\frac{\left(s_{j}\right)^{2}}{n_{j}}\right]-\frac{\left(\sum s_{j}\right)^{2}}{n}\nonumber\]

    \[=\frac{s_{1}^{2}}{4}+\frac{s_{2}^{2}}{3}+\frac{s_{3}^{2}}{3}-\frac{\left(s_{1}+s_{2}+s_{3}\right)^{2}}{10}\nonumber\]

    where \(n_{1}=4, n_{2}=3, n_{3}=3\) and \(n=n_{1}+n_{2}+n_{3}=10\)

    \[=\frac{(16.5)^{2}}{4}+\frac{(15)^{2}}{3}+\frac{(15.5)^{2}}{3}-\frac{(16.5+15+15.5)^{2}}{10}\nonumber\]

    \[S S(\text {between})=2.2458\nonumber\]

    \[S S(\text {total})=\sum x^{2}-\frac{\left(\sum x\right)^{2}}{n}\nonumber\]

    \[=\left(5^{2}+4.5^{2}+4^{2}+3^{2}+3.5^{2}+7^{2}+4.5^{2}+8^{2}+4^{2}+3.5^{2}\right)\nonumber\]

    \[-\frac{(5+4.5+4+3+3.5+7+4.5+8+4+3.5)^{2}}{10}\nonumber\]

    \[=244-\frac{47^{2}}{10}=244-220.9\nonumber\]

    \[S S(\text {total})=23.1\nonumber\]

    \[S S(\text {within})=S S(\text {total})-S S(\text {between})\nonumber\]

    \[=23.1-2.2458\nonumber\]

    \[S S(\text {within})=20.8542\nonumber\]
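    These sums of squares can be checked by machine. The following Python sketch (an addition, not part of the original text; SciPy's `f_oneway` is used only as an independent cross-check) reproduces them along with the \(F\)-ratio that appears in Table 9.5 below.

```python
# Reproducing the Example 9.2 calculations numerically.
import numpy as np
from scipy import stats

plan1 = np.array([5.0, 4.5, 4.0, 3.0])
plan2 = np.array([3.5, 7.0, 4.5])
plan3 = np.array([8.0, 4.0, 3.5])
groups = [plan1, plan2, plan3]

x = np.concatenate(groups)
n, k = x.size, len(groups)
ss_total = (x ** 2).sum() - x.sum() ** 2 / n                             # 23.1
ss_between = sum(g.sum() ** 2 / g.size for g in groups) - x.sum() ** 2 / n  # 2.2458
ss_within = ss_total - ss_between                                        # 20.8542
F = (ss_between / (k - 1)) / (ss_within / (n - k))                       # about 0.3769

print(round(F, 4), stats.f_oneway(plan1, plan2, plan3).statistic)
```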

    Table 9.5
    Source of variation | Sum of squares (\(SS\)) | Degrees of freedom (\(df\)) | Mean square (\(MS\)) | \(F\)
    Factor (Between) | \(SS\)(Factor) = \(SS\)(Between) = 2.2458 | \(k – 1\) = 3 groups – 1 = 2 | \(MS(Factor) = SS(Factor)/(k – 1)\) = 2.2458/2 = 1.1229 | \(F = MS(Factor)/MS(Error)\) = 1.1229/2.9792 = 0.3769
    Error (Within) | \(SS\)(Error) = \(SS\)(Within) = 20.8542 | \(n – k\) = 10 total data – 3 groups = 7 | \(MS(Error) = SS(Error)/(n – k)\) = 20.8542/7 = 2.9792 |
    Total | \(SS\)(Total) = 2.2458 + 20.8542 = 23.1 | \(n – 1\) = 10 total data – 1 = 9 | |

    Exercise 9.2

    As part of an experiment to see how different types of soil cover would affect slicing tomato production, Marist College students grew tomato plants under different soil cover conditions. Groups of three plants each had one of the following treatments:

    • bare soil
    • a commercial ground cover
    • black plastic
    • straw
    • compost

    All plants grew under the same conditions and were the same variety. Students recorded the weight (in grams) of tomatoes produced by each of the n = 15 plants:

    Bare: \(n_1 = 3\) Ground Cover: \(n_2 = 3\) Plastic: \(n_3 = 3\) Straw: \(n_4 = 3\) Compost: \(n_5 = 3\)
    2,625 5,348 6,583 7,285 6,277
    2,997 5,682 8,560 6,897 7,818
    4,915 5,482 3,830 9,230 8,677

    Table 9.6


    Create the one-way ANOVA table.

    The one-way ANOVA hypothesis test is always right-tailed because larger \(F\)-values lie far out in the right tail of the \(F\)-distribution curve and tend to make us reject \(H_0\).
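    In software, that right-tail area is the survival function of the \(F\) distribution evaluated at the observed statistic. A one-line sketch (added here for illustration; the numbers are placeholders and SciPy is assumed to be available):

```python
# p-value of a right-tailed F test: the area under the F curve beyond F_obs.
from scipy import stats

F_obs, df_num, df_denom = 2.5, 3, 16            # hypothetical values
p_value = stats.f.sf(F_obs, df_num, df_denom)   # survival function: P(F > F_obs)
print(p_value)
```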

    Example 9.3

    Let’s return to the slicing tomato data in Exercise 9.2. The means of the tomato yields under the five mulching conditions are represented by \(\mu_{1}, \mu_{2}, \mu_{3}, \mu_{4}, \mu_{5}\). We will conduct a hypothesis test to determine whether all the means are the same or at least one is different. Using a significance level of 5%, test the null hypothesis that there is no difference in mean yields among the five groups against the alternative hypothesis that at least one mean is different from the rest.

    Answer

    The null and alternative hypotheses are:

    \(H_{0} : \mu_{1}=\mu_{2}=\mu_{3}=\mu_{4}=\mu_{5}\)

    \(H_{a} : \mu_{i} \neq \mu_{j}\) some \(i \neq j\)

    The one-way ANOVA results are shown in Table 9.7

    Table 9.7
    Source of variation | Sum of squares (\(SS\)) | Degrees of freedom (\(df\)) | Mean square (\(MS\)) | \(F\)
    Factor (Between) | 36,648,561 | \(5 – 1 = 4\) | \(\frac{36,648,561}{4}=9,162,140\) | \(\frac{9,162,140}{2,044,672.6}=4.4810\)
    Error (Within) | 20,446,726 | \(15 – 5 = 10\) | \(\frac{20,446,726}{10}=2,044,672.6\) |
    Total | 57,095,287 | \(15 – 1 = 14\) | |

    Distribution for the test: \(F_{4,10}\)

    \(df(num) = 5 – 1 = 4\)

    \(df(denom) = 15 – 5 = 10\)

    Test statistic: \(F = 4.4810\)

    Figure 9.4: The right-skewed \(F_{4,10}\) distribution curve for this test.

    Probability Statement: \(p\text{-value }= P(F > 4.481) = 0.0248.\)

    Compare \(\bf{\alpha}\) and the \(\bf p\)-value: \(\alpha = 0.05\), \(p\text{-value }= 0.0248\)

    Make a decision: Since \(\alpha > p\)-value, we reject \(H_0\).

    Conclusion: At the 5% significance level, we have sufficient evidence to conclude that the differences in mean yields for slicing tomato plants grown under different mulching conditions are unlikely to be due to chance alone; at least some of the mulches led to different mean yields.
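    The same result can be recovered from the raw Exercise 9.2 data. The sketch below (an illustration, not part of the original text; SciPy's `f_oneway` is assumed) runs the one-way ANOVA directly and should reproduce \(F \approx 4.481\) and \(p \approx 0.0248\).

```python
# Cross-checking Example 9.3 from the raw tomato-yield data in Exercise 9.2.
from scipy import stats

bare    = [2625, 2997, 4915]
ground  = [5348, 5682, 5482]
plastic = [6583, 8560, 3830]
straw   = [7285, 6897, 9230]
compost = [6277, 7818, 8677]

result = stats.f_oneway(bare, ground, plastic, straw, compost)
print(result.statistic, result.pvalue)   # approximately F = 4.481, p = 0.0248
```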

    Exercise 9.3

    MRSA (methicillin-resistant Staphylococcus aureus) can cause serious bacterial infections in hospital patients. Table 9.8 shows various colony counts from different patients who may or may not have MRSA. The data from the table are plotted in Figure 9.5.

    Table 9.8
    Conc = 0.6 Conc = 0.8 Conc = 1.0 Conc = 1.2 Conc = 1.4
    9 16 22 30 27
    66 93 147 199 168
    98 82 120 148 132

    Plot of the data for the different concentrations:

    Figure 9.5: Scatterplot of the colony counts at each tryptone concentration.

    Test whether the mean numbers of colonies are the same or different. Construct the ANOVA table, find the \(p\)-value, and state your conclusion. Use a 5% significance level.

    Example 9.4

    Four sororities took a random sample of sisters regarding their grade means for the past term. The results are shown in Table 9.9.

    Table 9.9
    Sorority 1 Sorority 2 Sorority 3 Sorority 4
    2.17 2.63 2.63 3.79
    1.85 1.77 3.78 3.45
    2.83 3.25 4.00 3.08
    1.69 1.86 2.55 2.26
    3.33 2.21 2.45 3.18

    Using a significance level of 1%, is there a difference in mean grades among the sororities?

    Answer

    Solution 9.4

    Let \(\mu_{1}, \mu_{2}, \mu_{3}, \mu_{4}\) be the population means of the sororities. Remember that the null hypothesis claims that the sorority groups are from the same normal distribution. The alternate hypothesis says that at least two of the sorority groups come from populations with different normal distributions. Notice that the four sample sizes are each five.

    NOTE

    This is an example of a balanced design, because each factor level (i.e., each sorority) has the same number of observations.

    \(H_{0} : \mu_{1}=\mu_{2}=\mu_{3}=\mu_{4}\)

    \(H_a\): Not all of the means \(\mu_{1}, \mu_{2}, \mu_{3}, \mu_{4}\) are equal.

    Distribution for the test: \(F_{3,16}\)

    where \(k = 4\) groups and \(n = 20\) samples in total

    \(df(num)= k – 1 = 4 – 1 = 3\)

    \(df(denom) = n – k = 20 – 4 = 16\)

    Calculate the test statistic: \(F = 2.23\)

    Graph:

    Figure 9.6: The \(F_{3,16}\) distribution curve with the test statistic \(F = 2.23\) marked; the shaded area to its right represents the \(p\)-value.

    Probability statement: \(p\text{-value }= P(F > 2.23) = 0.1241\)

    Compare \(\bf{\alpha}\) and the \(\bf p\)-value: \(\alpha = 0.01\)
    \(p\text{-value }= 0.1241\)
    \(\alpha < p\)-value

    Make a decision: Since \(\alpha < p\)-value, you cannot reject \(H_0\).

    Conclusion: There is not sufficient evidence to conclude that there is a difference among the mean grades for the sororities.
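    As a cross-check of this example (an addition, not part of the original text; SciPy's `f_oneway` is assumed), the raw sorority data give the same statistic and \(p\)-value:

```python
# Cross-checking Example 9.4 with a one-way ANOVA on the raw grade data.
from scipy import stats

sorority1 = [2.17, 1.85, 2.83, 1.69, 3.33]
sorority2 = [2.63, 1.77, 3.25, 1.86, 2.21]
sorority3 = [2.63, 3.78, 4.00, 2.55, 2.45]
sorority4 = [3.79, 3.45, 3.08, 2.26, 3.18]

result = stats.f_oneway(sorority1, sorority2, sorority3, sorority4)
print(round(result.statistic, 2), round(result.pvalue, 4))  # about 2.23 and 0.1241
```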

    Exercise 9.4

    Four sports teams took a random sample of players regarding their GPAs for the last year. The results are shown in Table 9.10.

    Table 9.10
    Basketball Baseball Hockey Lacrosse
    3.6 2.1 4.0 2.0
    2.9 2.6 2.0 3.6
    2.5 3.9 2.6 3.9
    3.3 3.1 3.2 2.7
    3.8 3.4 3.2 2.5

    Use a significance level of 5%, and determine if there is a difference in GPA among the teams.

    Example 9.5

    A fourth grade class is studying the environment. One of the assignments is to grow bean plants in different soils. Tommy chose to grow his bean plants in soil found outside his classroom mixed with dryer lint. Tara chose to grow her bean plants in potting soil bought at the local nursery. Nick chose to grow his bean plants in soil from his mother's garden. No chemicals were used on the plants, only water. They were grown inside the classroom next to a large window. Each child grew five plants. At the end of the growing period, each plant was measured, producing the data (in inches) in Table 9.11.

    Tommy's plants Tara's plants Nick's plants
    24 25 23
    21 31 27
    23 23 22
    30 20 30
    23 28 20

    Table 9.11

    Does it appear that the three media in which the bean plants were grown produce the same mean height? Test at a 3% level of significance.

    Answer

    Solution 9.5

    This time, we will perform the calculations that lead to the \(F\) statistic. Notice that each group has the same number of plants, so we will use the formula \(F=\frac{n \cdot s_{\overline{x}}^{2}}{s^{2}_{pooled}}\).

    First, calculate the sample mean and sample variance of each group.

      Tommy's plants Tara's plants Nick's plants
    Sample mean 24.2 25.4 24.4
    Sample variance 11.7 18.3 16.3

    Table 9.12

    Next, calculate the variance of the three group means (Calculate the variance of 24.2, 25.4, and 24.4). Variance of the group means = 0.413 = \(s_{\overline{x}}^{2}\)

    Then \(M S_{b e t w e e n}=n s_{\overline{x}}^{2}=(5)(0.413)\) where \(n = 5\) is the sample size (number of plants each child grew).

    Calculate the mean of the three sample variances (calculate the mean of 11.7, 18.3, and 16.3). Mean of the sample variances \(= 15.433 = s^{2}_{pooled}\)

    Then \(MS_{\text{within}}=s^{2}_{pooled}=15.433\).

    The \(F\) statistic (or \(F\) ratio) is \(F=\frac{MS_{\text{between}}}{MS_{\text{within}}}=\frac{n \cdot s_{\overline{x}}^{2}}{s^{2}_{pooled}}=\frac{(5)(0.413)}{15.433}=0.134\)

    The \(df\)s for the numerator = the number of groups \(– 1 = 3 – 1 = 2\).

    The \(df\)s for the denominator = the total number of observations – the number of groups \(= 15 – 3 = 12\)

    The distribution for the test is \(F_{2,12}\) and the \(F\) statistic is \(F = 0.134\)

    The \(p\)-value is \(P(F > 0.134) = 0.8759\).

    Decision: Since \(\alpha = 0.03\) and the \(p\text{-value }= 0.8759\), we have \(\alpha < p\text{-value}\), so you cannot reject \(H_0\). (Why?)

    Conclusion: With a 3% level of significance, from the sample data, the evidence is not sufficient to conclude that the mean heights of the bean plants are different.
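    The equal-group-size shortcut is easy to verify numerically. The sketch below (added for illustration, not part of the original text; NumPy is assumed) recomputes the Example 9.5 quantities.

```python
# Verifying the Example 9.5 shortcut calculation.
import numpy as np

plants = np.array([[24, 21, 23, 30, 23],     # Tommy's plants
                   [25, 31, 23, 20, 28],     # Tara's plants
                   [23, 27, 22, 30, 20]],    # Nick's plants
                  dtype=float)

n = plants.shape[1]                            # 5 plants per child
s2_xbar = plants.mean(axis=1).var(ddof=1)      # variance of the sample means, about 0.413
s2_pooled = plants.var(axis=1, ddof=1).mean()  # mean of the sample variances, about 15.433
F = n * s2_xbar / s2_pooled                    # about 0.134
print(round(F, 3))
```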

     

    Notation

    The notation for the \(F\) distribution is \(F \sim F_{d f(n u m), d f(d e n o m)}\)

    where \(df(num) = df_{between}\) and \(df(denom) = df_{within}\)

    The mean of the \(F\) distribution is \(\mu=\frac{df(\text{denom})}{df(\text{denom})-2}\), provided \(df(\text{denom}) > 2\).
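    A quick numerical check of this mean (an added illustration; the SciPy call is an assumption, not part of the text):

```python
# The mean of an F distribution depends only on the denominator degrees of freedom.
from scipy import stats

df_num, df_denom = 4, 10
print(stats.f.mean(df_num, df_denom))   # 10 / (10 - 2) = 1.25
print(df_denom / (df_denom - 2))        # same value, from the formula above
```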


    This page titled 9.4: The F Distribution and the F-Ratio is shared under a CC BY license.