
3.2: Probability Terminology


    Probability is a measure of how certain we are about the outcomes of a particular experiment or activity. An experiment is a planned operation carried out under controlled conditions. If the result is not predetermined, the experiment is said to be a chance, or probability, experiment. Flipping one fair coin twice is an example of a probability experiment.

    A result of an experiment is called an outcome. The sample space of an experiment is the set of all possible outcomes. Three ways to represent a sample space are: (1) to list the possible outcomes, (2) to create a tree diagram, or (3) to create a Venn diagram. The uppercase letter \(S\) is used to denote the sample space. For example, if you flip one fair coin, \(S = \{H, T\}\) where \(H =\) heads and \(T =\) tails are the outcomes.
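
    As a small illustration of listing a sample space, the Python sketch below (the variable names are just for this example) enumerates the four possible outcomes of flipping one fair coin twice.

        from itertools import product

        # Each outcome of two flips is a pair such as ('H', 'T'); join the pair
        # into a two-letter string so the sample space reads as HH, HT, TH, TT.
        single_flip = ["H", "T"]
        sample_space = ["".join(pair) for pair in product(single_flip, repeat=2)]
        print(sample_space)   # ['HH', 'HT', 'TH', 'TT']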

    An event is any combination of outcomes. Upper case letters like \(A\) and \(B\) represent events. For example, if the experiment is to flip one fair coin twice, event \(A\) might be getting at most one head. The probability of an event \(A\) is written \(P(A)\).

    The probability of any outcome is the long-term relative frequency of that outcome. Probabilities are between zero and one, inclusive (that is, zero and one and all numbers between these values). \(P(A) = 0\) means the event \(A\) can never happen. \(P(A) = 1\) means the event \(A\) always happens. \(P(A) = 0.5\) means the event \(A\) is equally likely to occur or not to occur. For example, if you flip one fair coin repeatedly (from 20 to 2,000 to 20,000 times), the relative frequency of heads approaches 0.5 (the probability of heads).
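
    As a rough illustration of this long-term behavior, the simulation sketch below uses Python's random module (with an arbitrary fixed seed, chosen only so the run is repeatable) to flip a simulated fair coin 20, 2,000, and 20,000 times and print the relative frequency of heads, which tends toward 0.5.

        import random

        random.seed(1)   # arbitrary fixed seed so the run is repeatable
        for n in (20, 2_000, 20_000):
            heads = sum(random.choice("HT") == "H" for _ in range(n))
            print(n, heads / n)   # relative frequency of heads; drifts toward 0.5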

    Equally likely means that each outcome of an experiment occurs with equal probability. For example, if you toss a fair, six-sided die, each face (1, 2, 3, 4, 5, or 6) is as likely to occur as any other face. If you toss a fair coin, a Head (H) and a Tail (T) are equally likely to occur. If you randomly guess the answer to a true/false question on an exam, you are equally likely to select a correct answer or an incorrect answer.

    To calculate the probability of an event A when all outcomes in the sample space are equally likely, count the number of outcomes for event A and divide by the total number of outcomes in the sample space. For example, if you toss a fair dime and a fair nickel, the sample space is \(\{HH, TH, HT, TT\}\) where \(T =\) tails and \(H =\) heads. The sample space has four outcomes. If we consider the event \(A\) = getting one head, there are two outcomes that meet this condition \(\{HT, TH\}\), so \(P(A) = \frac{2}{4} = 0.5\).
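
    The same counting argument can be written as a short Python sketch, using the two-coin sample space above (the variable names are just for illustration).

        # P(A) with equally likely outcomes: count the outcomes in A and
        # divide by the total number of outcomes in the sample space.
        sample_space = ["HH", "TH", "HT", "TT"]                    # dime, then nickel
        event_A = [o for o in sample_space if o.count("H") == 1]   # exactly one head
        print(event_A)                                # ['TH', 'HT']
        print(len(event_A) / len(sample_space))       # 2/4 = 0.5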

    Suppose you roll one fair six-sided die, with the numbers \(\{1, 2, 3, 4, 5, 6\}\) on its faces. Let event \(E =\) rolling a number that is at least five. There are two outcomes \(\{5, 6\}\) that satisfy this condition, so \(P(E) = \frac{2}{6}\). If you were to roll the die only a few times, you would not be surprised if your observed results did not match the probability. If you were to roll the die a very large number of times, you would expect that, overall, \(\frac{2}{6}\) of the rolls would result in an outcome of "at least five". You would not expect exactly \(\frac{2}{6}\). The long-term relative frequency of obtaining this result would approach the theoretical probability of \(\frac{2}{6}\) as the number of repetitions grows larger and larger.

    This important characteristic of probability experiments is known as the Law of Large Numbers, which states that as the number of repetitions of an experiment increases, the relative frequency obtained in the experiment tends to become closer and closer to the theoretical probability. Even though the outcomes do not happen according to any set pattern or order, overall, the long-term observed relative frequency will approach the theoretical probability. (The word empirical is often used instead of the word observed.)
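
    A rough sketch of the Law of Large Numbers for the die-rolling example: the simulation below rolls a simulated fair die a growing number of times and prints the relative frequency of "at least five," which tends toward the theoretical probability \(\frac{2}{6} \approx 0.33\) (the seed is arbitrary and fixed only so the run is repeatable).

        import random

        random.seed(7)   # arbitrary fixed seed so the run is repeatable
        for n in (60, 6_000, 600_000):
            hits = sum(random.randint(1, 6) >= 5 for _ in range(n))
            print(n, hits / n)   # relative frequency of "at least five"; tends toward 2/6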

    It is important to realize that in many situations, the outcomes are not equally likely. A coin or die may be unfair, or biased. Two math professors in Europe had their statistics students test the Belgian one Euro coin and discovered that in 250 trials, a head was obtained 56% of the time and a tail was obtained 44% of the time. The data seem to show that the coin is not a fair coin; more repetitions would be helpful to draw a more accurate conclusion about such bias. Some dice may be biased. Look at the dice in a game you have at home; the spots on each face are usually small holes carved out and then painted to make the spots visible. Your dice may or may not be biased; it is possible that the outcomes may be affected by the slight weight differences due to the different numbers of holes in the faces. Gambling casinos make a lot of money depending on outcomes from rolling dice, so casino dice are made differently to eliminate bias. Casino dice have flat faces; the holes are completely filled with paint having the same density as the material that the dice are made out of so that each face is equally likely to occur. Later we will learn techniques to use to work with probabilities for events that are not equally likely.

    "\(\cup\)" Event: The Union

    An outcome is in the event \(A \cup B\) if the outcome is in \(A\) or is in \(B\) or is in both \(A\) and \(B\). For example, let \(A = \{1, 2, 3, 4, 5\}\) and \(B = \{4, 5, 6, 7, 8\}\), then \(A \cup B = \{1, 2, 3, 4, 5, 6, 7, 8\}\). Notice that 4 and 5 are NOT listed twice.

    "\(\cap \)" Event: The Intersection

    An outcome is in the event \(A \cap B\) if the outcome is in both \(A\) and \(B\) at the same time. For example, let \(A\) and \(B\) be \(\{1, 2, 3, 4, 5\}\) and \(\{4, 5, 6, 7, 8\}\), respectively. Then \(A \cap B = \{4, 5\}\).

    The complement of event \(A\) is denoted \(A^C\) (read "A complement"). \(A^C\) consists of all outcomes that are NOT in \(A\). Notice that \(P(A) + P(A^C) = 1\). For example, let \(S = \{1, 2, 3, 4, 5, 6\}\) and let \(A = \{1, 2, 3, 4\}\). Then \(A^C = \{5, 6\}\). Note that \(P(A) = \frac{4}{6}\), so \(P(A^C) = \frac{2}{6}\), and \(P(A) + P(A^C) = \frac{4}{6}+\frac{2}{6}=1\).
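
    The union, intersection, and complement can be checked with Python's built-in set operations; the sketch below reuses the sets from the examples above (renaming the complement example's event to \(C\) so the two examples can sit side by side).

        S = {1, 2, 3, 4, 5, 6}        # sample space for the complement example
        A = {1, 2, 3, 4, 5}
        B = {4, 5, 6, 7, 8}

        print(A | B)                  # union: {1, 2, 3, 4, 5, 6, 7, 8}
        print(A & B)                  # intersection: {4, 5}

        C = {1, 2, 3, 4}              # the event A of the complement example
        C_complement = S - C          # outcomes of S that are NOT in C: {5, 6}
        print(len(C) / len(S) + len(C_complement) / len(S))   # P(C) + P(C^C) = 1.0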

    The conditional probability of \(A\) given \(B\) is written \(P(A|B)\). The conditional probability \(P(A|B)\) is the probability that event \(A\) will occur given that event \(B\) has already occurred. Conditioning on \(B\) reduces the sample space: we calculate the probability of \(A\) from the reduced sample space \(B\). The formula to calculate \(P(A|B)\) is \(P(A | B)=\frac{P(A \cap B)}{P(B)}\), where \(P(B)\) is greater than zero.

    For example, suppose we toss one fair, six-sided die. The sample space is \(S = \{1, 2, 3, 4, 5, 6\}\). Let \(A =\) face is 2 or 3 and \(B =\) face is even \((2, 4, 6)\). To calculate \(P(A|B)\), we count the number of outcomes that are 2 or 3 in the reduced sample space \(B = \{2, 4, 6\}\), and then divide by the number of outcomes in \(B\) (rather than in \(S\)). There is one such outcome (the face 2) and \(B\) has three outcomes, so \(P(A|B) = \frac{1}{3}\).

    We get the same result by using the formula. Remember that \(S\) has six outcomes.

    \[P(A|B) = \frac{\frac{(\text { the number of outcomes that are } 2 \text { or } 3 \text { and even in } S)}{6}}{\frac{(\text { the number of outcomes that are even in } S)}{6}}=\frac{\frac{1}{6}}{\frac{3}{6}}=\frac{1}{3}\nonumber\]
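
    The same calculation can be sketched in Python by counting outcomes in the sets defined above.

        S = {1, 2, 3, 4, 5, 6}
        A = {2, 3}                        # face is 2 or 3
        B = {2, 4, 6}                     # face is even

        p_A_and_B = len(A & B) / len(S)   # P(A intersect B) = 1/6
        p_B = len(B) / len(S)             # P(B) = 3/6
        print(p_A_and_B / p_B)            # P(A|B) = 0.333... = 1/3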

    Odds

    The odds of an event express its probability as a ratio of success to failure. This format is common in various gambling settings. Mathematically, the odds of an event \(A\) can be defined as:

    \[\frac{P(A)}{1-P(A)}\nonumber\]

    where \(P(A)\) is the probability of success and, of course, \(1 − P(A)\) is the probability of failure. Odds are always quoted as "numerator to denominator," e.g., 2 to 1. Here the probability of winning is twice that of losing; thus, the probability of winning is \(\frac{2}{3} \approx 0.67\). A probability of winning of 0.60 would generate odds in favor of winning of 3 to 2, since \(\frac{0.60}{0.40} = \frac{3}{2}\). While the calculation of odds can be useful in gambling venues in determining payoff amounts, it is not helpful for understanding probability or statistical theory.
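
    A small sketch of the probability-to-odds conversion (the helper function name is just for this illustration), using exact fractions to avoid rounding:

        from fractions import Fraction

        def odds_in_favor(p):
            """Odds of an event with probability p, as the ratio P(A) / (1 - P(A))."""
            p = Fraction(p)
            return p / (1 - p)

        print(odds_in_favor(Fraction(2, 3)))   # 2   -> odds of 2 to 1
        print(odds_in_favor(Fraction(3, 5)))   # 3/2 -> odds of 3 to 2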

    Understanding Terminology and Symbols

    It is important to read each problem carefully to think about and understand what the events are. Understanding the wording is the first very important step in solving probability problems. Reread the problem several times if necessary. Clearly identify the event of interest. Determine whether there is a condition stated in the wording that would indicate that the probability is conditional; carefully identify the condition, if any.


    This page titled 3.2: Probability Terminology is shared under a CC BY license.