
# Understanding p-values

Hypothesis testing and p-values are often misused and misunderstood. In this article, I explain what a p-value is, and how to use it.

First, we must understand in which situations it is appropriate to use p-values.

## When to use p-valued hypothesis testing?

Hypothesis testing with p-values is appropriate when you must decide between two courses of action and one of them has significantly lower cost than the other.

For instance, a company must decide between:

• $H_0$: keep the current number of waiters in my restaurants;
• $H_1$: increase the number of waiters in my restaurants.

Another example, a scientist must decide between:

• $H_0$: the current theory is valid;
• $H_1$: the current theory is invalid and my new theory is better.

If we think of $H_0$ as being the current accepted model for the laws of physics and $H_1$ a new set of laws, the cost of switching to $H_1$ is huge. Every textbook must be updated, scientists must all learn the new theory, etc.

The cost of $H_0$ is null, while the cost of $H_1$ is significantly higher. This is why $H_0$ is called the null hypothesis.

Our preferred course of action is $H_0$, and it is the one we will follow by default.

## What is p-valued hypothesis testing?

Hypothesis testing is a tool that relies on data. It can tell us whether the data is a counter-example to our hypothesis $H_0$, in which case we say that $H_0$ is rejected. And when the data is not a counter-example, $H_0$ is neither rejected nor accepted.

The same process can be observed in abstract mathematics. To prove a theorem, we need a formal proof. But to reject it, we only need a counter-example.

Since in real life we don’t know the exact rules of “nature”, we can’t prove formally that $H_0$ is true. But we can try to find counter-examples in the available data.

So, by design, the statistical test attempts to reject $H_0$ using the data. But contrary to abstract mathematics, in statistics we must deal with uncertainty. In particular, there can be a mismatch between the statistical test and the type of data provided, in which case we can’t be 100% confident in the test’s output.

This is why a $p$-valued test will tell us how confident we can be in its answer. The output of such a test is:

• “You should reject $H_0$. And here is the probability that I’m mistaken: $p$”

When the $p$-value is small, there is little probability that the test is mistaken and we can be confident in rejecting $H_0$ (i.e. saying that the data is a counter-example to $H_0$).

When the $p$-value is large, however, there is a high probability that the test is mistaken and we shouldn’t trust its output. So what can we do? We can use a different test; gather more data; or stay with $H_0$ until the next time we attempt to reject it.

So, when the $p$-value is small, we can trust the tool we used. When it is big, we can’t trust the tool because it’s likely to produce bogus results.

But how small is small enough?

The common convention is to set the threshold at $5\%$. This means that we want a probability smaller than $0.05$ that the tool is bogus.

A good way to interpret this is in terms of frequency. Out of $100$ uses, the tool produces gibberish $5$ times. In other words, the tool can be trusted only $95\%$ of the time.

Depending on the cost to implement $H_1$, we might want to require that the tool be trusted $99\%$ of the time, in which case we will set the threshold at $1\%$.
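The decision rule described above is tiny when written out. Here is a minimal sketch (the function name, messages, and default threshold are illustrative, not part of any standard library):

```python
def decide(p_value, alpha=0.05):
    """Decision rule for a p-valued test with threshold alpha.

    We reject H0 only when the probability that the test is
    mistaken stays below the threshold we chose.
    """
    if p_value < alpha:
        return "reject H0"
    return "keep H0 (use another test, gather more data, or try again later)"

print(decide(0.01))              # small p: the test can be trusted
print(decide(0.20))              # large p: the output is not trustworthy
print(decide(0.02, alpha=0.01))  # stricter threshold for a costly H1
```

Note that lowering `alpha` from $0.05$ to $0.01$ flips the decision for a borderline $p$-value such as $0.02$: the costlier $H_1$ is, the more trustworthy we require the tool to be.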

## How to use it?

To use a statistical test, we must model our situation into a statistical formulation. It is precisely because of this modeling step that there can be a mismatch between our data and the test we use, and that we must quantify how trustworthy the test’s results are.

### Data

Concretely, we start by gathering some numerical data $(x_1, \cdots, x_n)$ under the conditions of the alternative hypothesis.

For the company example, the data could be the amounts of money spent by customers, collected in some restaurants where the staff was actually increased for the purpose of testing.

We can’t directly compare the average amount of money spent (AAM) in the normal restaurants to AAM in the staffed restaurants because of uncertainty: if we collect more data, those averages might slightly change. If we see that one average is bigger than the other, does it mean that there is really a difference, or is it simply a random effect? In statistics, we model uncertainty using probability distributions. So, instead of comparing the averages, we use a statistical test to compare the underlying distributions.
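A quick simulation illustrates this random effect: two samples drawn from the same distribution, so with no real difference between them, still end up with slightly different averages. The numbers below (average spend 20, standard deviation 5) are made up for illustration:

```python
import random

random.seed(1)

# Two groups of receipts drawn from the SAME distribution.
# Any gap between their averages is pure sampling noise, not a real effect.
normal = [random.gauss(20, 5) for _ in range(50)]
staffed = [random.gauss(20, 5) for _ in range(50)]

avg_normal = sum(normal) / len(normal)
avg_staffed = sum(staffed) / len(staffed)
print(round(avg_normal, 2), round(avg_staffed, 2))  # close, but not equal
```

Seeing two different averages here tells us nothing by itself; this is exactly why we compare the underlying distributions instead.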

### Model

Then, we use statistical modeling to model the null hypothesis $H_0$ by some probability distribution and the alternative $H_1$ by another probability distribution.

For instance, the company wants to know whether the average amount of money spent increases under $H_1$.

It already knows the average amount ($m_0$) of money spent by its customers in regular restaurants. So it can choose a gaussian distribution with mean $m_0$ for $H_0$. In statistical terms, $H_0$ is modeled by:

$$X \sim \mathcal{N}(m_0, \sigma^2)$$

For $H_1$, it wants to test whether the average amount increased, so it can take a gaussian distribution with mean $m_1 > m_0$ for $H_1$:

$$X \sim \mathcal{N}(m_1, \sigma^2), \quad m_1 > m_0$$

The choice of a gaussian model requires statistical knowledge and is a potential source of mismatch between the statistical tools we will use and the actual data. Choosing a gaussian model means that we will use a test for gaussian distributions. Had we chosen a different model, we would have used a different test.
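Concretely, the two competing models are just two gaussian densities that differ only in their mean. A minimal sketch, where the means $m_0 = 20$, $m_1 = 25$ and the spread $\sigma = 5$ are hypothetical values chosen for illustration:

```python
import math

def gaussian_pdf(x, mean, sigma):
    # Density of a gaussian distribution N(mean, sigma^2) at x.
    coeff = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return coeff * math.exp(-((x - mean) ** 2) / (2.0 * sigma ** 2))

m0, m1, sigma = 20.0, 25.0, 5.0  # hypothetical values

# H0 models the average spend as N(m0, sigma^2),
# H1 models it as N(m1, sigma^2) with m1 > m0.
x = 24.0  # one observed amount
print(gaussian_pdf(x, m0, sigma), gaussian_pdf(x, m1, sigma))
```

A test for gaussian distributions will essentially ask which of these two densities the data as a whole is compatible with.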

### Test

Then, we use the statistical test on the data and the models.

Here is what the test outputs:

• “Your data is incompatible with the distribution of $H_0$ and there is probability $p$ that I don’t know what I’m talking about”

When $p$ is small, we can be confident that the data has not been produced by $H_0$ and thus reject it. When $p$ is large, we only know that the test produced a useless result.
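Putting it all together, here is a hedged sketch of one such test: a one-sided z-test for the gaussian model above (it assumes the spread $\sigma$ is known), written with only the standard library. All the numbers ($m_0$, $\sigma$, the sample) are made up:

```python
import math

def one_sided_z_test(sample, m0, sigma):
    """One-sided z-test.

    H0: the sample comes from a gaussian with mean m0 (known sigma).
    H1: the mean increased. Returns the p-value: the probability of
    seeing an average at least this large if H0 were true.
    """
    n = len(sample)
    average = sum(sample) / n
    z = (average - m0) / (sigma / math.sqrt(n))
    # P(Z >= z) for a standard gaussian, via the complementary error function.
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Hypothetical receipts from the staffed restaurants:
sample = [24, 27, 22, 25, 26, 23, 28, 24, 25, 26]
p = one_sided_z_test(sample, m0=20.0, sigma=5.0)
print(p, p < 0.05)  # small p: reject H0 and hire the extra waiters
```

Had we chosen a different model (say, unknown $\sigma$), we would have used a different test, such as a t-test, which is exactly the model/test mismatch risk discussed earlier.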