# Independence

In the context of a random experiment, two positive-probability events $A$ and $B$ are **independent** if knowledge of the occurrence of one of the events gives no information about the occurrence of the other event. In other words, $A$ and $B$ are independent if the probability of $A$ is the same as the conditional probability of $A$ given $B$, and vice versa:

$$\mathbb{P}(A) = \mathbb{P}(A \mid B) = \frac{\mathbb{P}(A \cap B)}{\mathbb{P}(B)} \quad \text{and} \quad \mathbb{P}(B) = \mathbb{P}(B \mid A) = \frac{\mathbb{P}(A \cap B)}{\mathbb{P}(A)}.$$

Each of these equations rearranges to

$$\mathbb{P}(A \cap B) = \mathbb{P}(A)\,\mathbb{P}(B).$$

This equation is symmetric in $A$ and $B$, and it does not require that $A$ and $B$ have positive probability, so we take it as our fundamental independence equation for two events:

**Definition** (Independence)

If $(\Omega, \mathbb{P})$ is a probability space, then two events $A$ and $B$ are said to be **independent** if

$$\mathbb{P}(A \cap B) = \mathbb{P}(A)\,\mathbb{P}(B).$$

If we want to check whether two positive-probability events are independent, we may check any one of the equations $\mathbb{P}(A \cap B) = \mathbb{P}(A)\,\mathbb{P}(B)$ or $\mathbb{P}(A \mid B) = \mathbb{P}(A)$ or $\mathbb{P}(B \mid A) = \mathbb{P}(B)$, since they are all equivalent.

**Exercise**

Let $X$ be the result of a six-sided die roll. Consider the following events:

$$A = \{X \text{ is even}\}, \quad B = \{X \text{ is odd}\}, \quad C = \{X \leq 2\}.$$

Are the events $A$ and $B$ independent? Are the events $A$ and $C$ independent?

*Solution.*

- We have $\mathbb{P}(A \cap B) = 0$ (because $X$ cannot be both odd and even), while $\mathbb{P}(A)\,\mathbb{P}(B) = \frac{1}{2} \cdot \frac{1}{2} = \frac{1}{4}$. Since $0 \neq \frac{1}{4}$, the events $A$ and $B$ are not independent.
- We have $\mathbb{P}(A \cap C) = \mathbb{P}(X = 2) = \frac{1}{6}$. Because $\mathbb{P}(A)\,\mathbb{P}(C) = \frac{1}{2} \cdot \frac{1}{3} = \frac{1}{6} = \mathbb{P}(A \cap C)$, the events $A$ and $C$ are independent.
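Both computations can be verified by brute-force enumeration of the six die outcomes. The following Python sketch (the `prob` and `independent` helpers are ours, not from the text) checks the fundamental independence equation directly:

```python
from fractions import Fraction

# Illustrative helper (not from the text): exact probability of an event
# under a uniform roll X of a six-sided die.
def prob(event):
    return Fraction(sum(1 for x in range(1, 7) if event(x)), 6)

def independent(E, F):
    # Fundamental independence equation: P(E ∩ F) == P(E) P(F)
    return prob(lambda x: E(x) and F(x)) == prob(E) * prob(F)

A = lambda x: x % 2 == 0  # X is even
B = lambda x: x % 2 == 1  # X is odd
C = lambda x: x <= 2      # X <= 2

print(independent(A, B))  # False: P(A ∩ B) = 0 but P(A)P(B) = 1/4
print(independent(A, C))  # True: P(A ∩ C) = 1/6 = P(A)P(C)
```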

## Independence of random variables

We say that two random variables $X$ and $Y$ are independent if every pair of events of the form $\{X \in A\}$ and $\{Y \in B\}$ is independent, where $A \subset \mathbb{R}$ and $B \subset \mathbb{R}$.

**Exercise**

Suppose that $\Omega = \{(0,0), (0,1), (1,0), (1,1)\}$ and $\mathbb{P}$ is the uniform probability measure on $\Omega$. Let $X_1$ be the number of heads in the first flip and let $X_2$ be the number of heads in the second flip. Show that $X_1$ and $X_2$ are independent.

*Solution.* The pair $(X_1, X_2)$ takes values in $\{(0,0), (0,1), (1,0), (1,1)\}$, each with probability $\frac{1}{4}$. Since both $X_1$ and $X_2$ can be $0$ or $1$ with probability $\frac{1}{2}$, we conclude that $\mathbb{P}(X_1 = a \text{ and } X_2 = b) = \frac{1}{4} = \mathbb{P}(X_1 = a)\,\mathbb{P}(X_2 = b)$ for all $a, b \in \{0, 1\}$, and therefore $X_1$ and $X_2$ are independent.

Directly showing that random variables are independent can be tedious, because there are many pairs of events to check. However, there is a general way to construct $\Omega$ so as to get independent random variables. The idea is to build $\Omega$ as a rectangle:

**Theorem** (Product measure)

Suppose that $(\Omega_1, \mathbb{P}_1)$ and $(\Omega_2, \mathbb{P}_2)$ are probability spaces with associated probability mass functions $m_1$ and $m_2$. Define a probability space by setting

$$\Omega = \Omega_1 \times \Omega_2$$

and

$$m((\omega_1, \omega_2)) = m_1(\omega_1)\, m_2(\omega_2)$$

for every $(\omega_1, \omega_2) \in \Omega$. Let $\mathbb{P}$ be the probability measure with probability mass function $m$. Then the random variables $X_1((\omega_1, \omega_2)) = \omega_1$ and $X_2((\omega_1, \omega_2)) = \omega_2$ are independent.

We call $\mathbb{P}$ a **product measure** and $\Omega$ a **product space**.
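The product-measure recipe is easy to mirror in code. Below is a small Python sketch (the `product_measure` helper and the example mass functions are illustrative, not from the text) that builds $m$ from $m_1$ and $m_2$ and then verifies independence of the coordinate variables by enumeration:

```python
from fractions import Fraction

# Illustrative sketch: build the product pmf m((w1, w2)) = m1(w1) * m2(w2).
def product_measure(m1, m2):
    return {(w1, w2): p1 * p2 for w1, p1 in m1.items() for w2, p2 in m2.items()}

m1 = {"H": Fraction(1, 2), "T": Fraction(1, 2)}                 # a fair coin
m2 = {1: Fraction(1, 2), 2: Fraction(1, 3), 3: Fraction(1, 6)}  # a biased spinner

m = product_measure(m1, m2)
assert sum(m.values()) == 1  # m is a genuine probability mass function

# X1 and X2 are the coordinate random variables w -> w[0] and w -> w[1].
def prob(pred):
    return sum((p for w, p in m.items() if pred(w)), Fraction(0))

ok = all(
    prob(lambda w: w[0] == a and w[1] == b)
    == prob(lambda w: w[0] == a) * prob(lambda w: w[1] == b)
    for a in m1 for b in m2
)
print(ok)  # True: the coordinate variables are independent
```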

We say that a collection of random variables $X_1, X_2, \ldots, X_n$ is independent if

$$\mathbb{P}(X_1 \in A_1 \text{ and } X_2 \in A_2 \text{ and } \cdots \text{ and } X_n \in A_n) = \mathbb{P}(X_1 \in A_1)\,\mathbb{P}(X_2 \in A_2) \cdots \mathbb{P}(X_n \in A_n)$$

for any events $A_1, A_2, \ldots, A_n$.

We may extend the product measure construction to achieve as many independent random variables as desired: for three random variables we let $\Omega$ be cube-shaped (that is, $\Omega = \Omega_1 \times \Omega_2 \times \Omega_3$), and so on.

**Exercise**

Define a probability space $\Omega$ and 10 independent random variables which are uniformly distributed on $\{1, 2, 3, 4, 5, 6\}$.

*Solution.* We follow the product space construction and define $\Omega$ to be the set of all length-10 tuples of elements in $\{1, 2, 3, 4, 5, 6\}$. For each $1 \leq i \leq 10$, let $\Omega_i = \{1, 2, 3, 4, 5, 6\}$ and let $m_i$ be the uniform probability mass function on $\Omega_i$. Then the desired probability space is $\Omega$, where

$$\Omega = \Omega_1 \times \Omega_2 \times \cdots \times \Omega_{10},$$

together with probability mass function

$$m((\omega_1, \omega_2, \ldots, \omega_{10})) = m_1(\omega_1)\, m_2(\omega_2) \cdots m_{10}(\omega_{10})$$

for all $(\omega_1, \omega_2, \ldots, \omega_{10}) \in \Omega$. We define the corresponding random variables by

$$X_i((\omega_1, \omega_2, \ldots, \omega_{10})) = \omega_i$$

for all integer values of $i$ ranging from $1$ to $10$. Then for all of these random variables,

$$\mathbb{P}(X_1 \in A_1 \text{ and } \cdots \text{ and } X_{10} \in A_{10}) = \mathbb{P}(X_1 \in A_1) \cdots \mathbb{P}(X_{10} \in A_{10})$$

for any events $A_1, \ldots, A_{10}$, as required.
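Enumerating all $6^{10}$ outcomes is impractical, but sampling from the product space is easy: drawing each coordinate independently and uniformly from $\{1, \ldots, 6\}$ is exactly sampling from the product measure. The following Monte Carlo sketch (the helper names are ours) spot-checks independence of the first two coordinates:

```python
import random

random.seed(0)  # reproducible

# Sample one outcome from the product space: each coordinate is drawn
# independently and uniformly from {1, ..., 6}.
def sample_omega():
    return tuple(random.randint(1, 6) for _ in range(10))

# Spot-check: estimate P(X1 = 1 and X2 = 1), which should be (1/6)^2 = 1/36.
n = 100_000
hits = sum(1 for _ in range(n) if sample_omega()[:2] == (1, 1))
estimate = hits / n
print(abs(estimate - 1 / 36) < 0.005)  # True: the estimate is close to 1/36
```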

The product measure construction can be extended further still to give a supply of *infinitely many* independent random variables. The idea is to use a space of the form $\Omega_1 \times \Omega_2 \times \cdots$ (whose elements are infinite tuples $(\omega_1, \omega_2, \ldots)$) and define a measure which makes the coordinate random variables independent. We will not need the details of this construction, although we will use it indirectly when we discuss infinite sequences of independent random variables.

We say that a collection of events is independent if the corresponding collection of indicator random variables is independent.

**Exercise**

Three events can be *pairwise* independent without being independent: Suppose that $\omega$ is selected uniformly at random from the set

$$\Omega = \{(0,0,0), (0,1,1), (1,0,1), (1,1,0)\},$$

and define $A$ to be the event that the first entry is 1, $B$ to be the event that the second entry is 1, and $C$ to be the event that the third entry is 1. For example, if $\omega = (0, 1, 1)$, then $B$ and $C$ occurred but $A$ did not.

Show that $A$ and $B$ are independent, that $A$ and $C$ are independent, and that $B$ and $C$ are independent.

Show that the equation $\mathbb{P}(A \cap B \cap C) = \mathbb{P}(A)\,\mathbb{P}(B)\,\mathbb{P}(C)$ does **not** hold, and that the triple of events $(A, B, C)$ is therefore not independent.

*Solution.* By definition, $B = \{(0, 1, 1), (1, 1, 0)\}$ and $C = \{(0, 1, 1), (1, 0, 1)\}$. Therefore,

$$\mathbb{P}(B \cap C) = \mathbb{P}(\{(0, 1, 1)\}) = \frac{1}{4}.$$

Now, $\mathbb{P}(B) = \frac{1}{2}$ and $\mathbb{P}(C) = \frac{1}{2}$, whence

$$\mathbb{P}(B \cap C) = \frac{1}{4} = \mathbb{P}(B)\,\mathbb{P}(C).$$

The same reasoning applies to the pairs $\{A, B\}$ and $\{A, C\}$, so the events are pairwise independent. However, since $A \cap B \cap C = \emptyset$, we have

$$\mathbb{P}(A \cap B \cap C) = 0 \neq \frac{1}{8} = \mathbb{P}(A)\,\mathbb{P}(B)\,\mathbb{P}(C),$$

and thus $A$, $B$, and $C$ are not independent.
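Because $\Omega$ has only four outcomes, the whole argument can be checked mechanically. This Python sketch (helper names are ours) verifies all three pairwise equations and the failure of the triple-product equation:

```python
from fractions import Fraction

# The four equally likely outcomes from the exercise.
Omega = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def prob(pred):
    return Fraction(sum(1 for w in Omega if pred(w)), len(Omega))

A = lambda w: w[0] == 1  # first entry is 1
B = lambda w: w[1] == 1  # second entry is 1
C = lambda w: w[2] == 1  # third entry is 1

# Pairwise independence holds for all three pairs ...
pairwise = all(
    prob(lambda w: E(w) and F(w)) == prob(E) * prob(F)
    for E, F in [(A, B), (A, C), (B, C)]
)
print(pairwise)  # True

# ... but the triple-product equation fails: P(A ∩ B ∩ C) = 0, not 1/8.
print(prob(lambda w: A(w) and B(w) and C(w)))  # 0
print(prob(A) * prob(B) * prob(C))             # 1/8
```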

## Independence properties

Independence satisfies many basic relationships suggested by the intuition that random variables are independent if they are computed from separate sources of randomness. For example, if $X_1, X_2, X_3$ are independent random variables, then $X_1 + X_2$ and $X_3$ are independent. We'll state this idea as a theorem and apply it to an exercise.

**Theorem** (persistence of independence)

Suppose that $m$ and $n$ are positive integers and that

$$X_1, \ldots, X_m, Y_1, \ldots, Y_n$$

are independent. If $f: \mathbb{R}^m \to \mathbb{R}$ and $g: \mathbb{R}^n \to \mathbb{R}$ are functions, then the random variables

$$f(X_1, \ldots, X_m) \quad \text{and} \quad g(Y_1, \ldots, Y_n)$$

are independent.
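To see the theorem in action, a brute-force check over three fair dice (a sketch; the choice of $f$ and $g$ here is ours, not from the text) confirms that $f(X_1, X_2) = X_1 + X_2$ and $g(X_3) = X_3^2$ are independent:

```python
from fractions import Fraction
from itertools import product

# All 6^3 = 216 equally likely outcomes for three independent fair dice.
Omega = list(product(range(1, 7), repeat=3))

def prob(pred):
    return Fraction(sum(1 for w in Omega if pred(w)), len(Omega))

# f(X1, X2) = X1 + X2 uses only the first two dice; g(X3) = X3**2 uses
# only the third, so the theorem predicts they are independent.
ok = all(
    prob(lambda w: w[0] + w[1] == s and w[2] ** 2 == t)
    == prob(lambda w: w[0] + w[1] == s) * prob(lambda w: w[2] ** 2 == t)
    for s in range(2, 13)
    for t in (k * k for k in range(1, 7))
)
print(ok)  # True
```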

**Exercise**

Consider a sequence of 8 independent fair coin flips. Show that the probability of getting at least one pair of consecutive heads is at least $\frac{175}{256}$.

*Solution.* The probability that the first two flips are both heads is $\frac{1}{4}$. Similarly, the probability that the third and fourth flips are both heads is $\frac{1}{4}$, and likewise for the pairs of flips (5, 6) and (7, 8). Since these four pairs involve disjoint sets of flips, the corresponding events are independent by the persistence of independence theorem.

Continuing in this way, we find that the probability of getting consecutive heads in the first pair or the second pair of flips is $1 - \left(\frac{3}{4}\right)^2 = \frac{7}{16}$, the probability of getting consecutive heads in the first pair, the second pair, or the third pair of flips is $1 - \left(\frac{3}{4}\right)^3 = \frac{37}{64}$, and finally the probability of getting consecutive heads somewhere in the four position pairs is $1 - \left(\frac{3}{4}\right)^4 = \frac{175}{256}$.

Since there are other ways to get consecutive heads (for example, on flips 2 and 3), this number is an *under*-estimate of the actual probability of getting consecutive heads.
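A quick simulation (a sketch with hypothetical helper names) illustrates this: the empirical frequency of at least one pair of consecutive heads in 8 fair flips comfortably exceeds the bound $\frac{175}{256} \approx 0.68$.

```python
import random

random.seed(1)  # reproducible

def has_consecutive_heads(flips):
    # True if some adjacent pair of flips is (heads, heads).
    return any(flips[i] and flips[i + 1] for i in range(len(flips) - 1))

n = 100_000
count = sum(
    1 for _ in range(n)
    if has_consecutive_heads([random.random() < 0.5 for _ in range(8)])
)
estimate = count / n
print(estimate > 175 / 256)  # True: the bound is indeed an underestimate
```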