Bayes’ Theorem: An Introduction

[Cartoon: “Our statistician will drop in and explain why you have nothing to worry about.” © Science Cartoons Plus]

Today, I’m going to look at a fundamental theorem for understanding probability, an area in which instinctive human “reasoning” is almost universally terrible. People whose careers depend on the accuracy of their predictions, such as medical doctors, often fail to grasp probability theory, but it also has its uses in far less critical fields of study.

Bayes’ Theorem is starting to show up throughout academic circles, well beyond its origins in mathematics, and for good reason. Probability theory is applicable in just about every field that can be quantified, and by stretching the definition of “quantifiability”, it is now being encouraged even in traditionally non-scientific fields, notably the study of history. Richard Carrier in particular has been pushing for the use of Bayes’ Theorem in historical argument through his website and dead-tree publications. He produced the excellent Bayes’ Theorem for Beginners: Formal Logic and Its Relevance to Historical Method back in 2008, which I highly recommend to anyone in the soft sciences.

My example today is a classic one. Indeed, it was my first exposure to Bayes’ Theorem, way back before Firefly went off the air. It’s not a historical piece but a medical one. It is universally relatable, though, and it nicely demonstrates how badly our intuitions about probability can mislead us.

Consider a new test that is being studied to determine whether a person is suffering from a particular disease. There are four possible scenarios, as shown in the table below:

                     Actual Positive      Actual Negative
Tested Negative      Type II Error        Yay!
Tested Positive      Uh oh                Type I Error

Note that there are two types of error: a Type I error is a “false positive” result, while a Type II error is a “false negative”. Depending on the particular scenario, and on how the question is phrased, one type of error may be far less desirable than the other. Choose wisely!

Any new test on the market will have an error rate; sometimes people will have the disease, yet the test will fail to detect it (a Type II error), while other times they will NOT have the disease, yet the result will come back indicating that they DO (a Type I error).

Let’s say that a new test is found to give the following results, after rigorous testing with people identified by other means as having or not having a disease:

False positives: 10% of the time (Type I)
False negatives: 1% of the time (Type II)

At any given time, 1 person in 1000 actually has this disease. Now, you’ve taken this test, and it claims that you have the disease. What are the odds you really do?

A cursory glance at the table shows that a false positive comes up 10% of the time, so you might think you have a 90% chance that your result is a genuine positive. Things aren’t looking so good. But here is where Bayes’ Theorem comes in. Mathematically, the Theorem is as follows:

\mathbb{P}(A_i|A) = \frac{\mathbb{P}(A_i)\mathbb{P}(A|A_i)}{\sum_{j=1}^{N}\mathbb{P}(A_j)\mathbb{P}(A|A_j)}

For a simple binary case such as ours (either you DO or you DON’T have the disease), we can set A_1 = B and A_2 = \neg{B}, and Bayes’ Theorem becomes:

\mathbb{P}(B|A) = \frac{\mathbb{P}(B)\mathbb{P}(A|B)}{\mathbb{P}(A|B)\mathbb{P}(B)+\mathbb{P}(A|\neg{B})\mathbb{P}(\neg{B})}

where:

  • B is the event of having the disease; and
  • A is the event of testing positive
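
To make the formula concrete, here is a minimal Python sketch of the binary form. The function name and arguments are my own invention, purely for illustration:

def posterior_given_positive(prevalence, false_positive_rate, false_negative_rate):
    """P(disease | positive test) via the binary form of Bayes' Theorem."""
    p_b = prevalence                             # P(B): has the disease
    p_not_b = 1.0 - prevalence                   # P(not B): does not have it
    p_a_given_b = 1.0 - false_negative_rate      # P(A|B): true positive rate
    p_a_given_not_b = false_positive_rate        # P(A|not B): false positive rate

    numerator = p_a_given_b * p_b
    denominator = numerator + p_a_given_not_b * p_not_b
    return numerator / denominator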

We are given that only 1 in 1000 people has the disease; this is our \mathbb{P}(B). The other 999 out of 1000 do not have the disease; this is our \mathbb{P}(\neg{B}). Since the test has already been taken and has come back positive, we need the probability that you really have the disease. A false positive is when a patient does not have the disease but the test indicates they do; this happens 10% of the time, or roughly 100 out of every 1000 people tested, and is our \mathbb{P}(A|\neg{B}). A true positive, our \mathbb{P}(A|B), is when a patient who has the disease tests positive; since the test misses only 1% of genuine cases, this is 1-\mathbb{P}(\neg{A}|B) = 0.99.

So, to calculate:

\mathbb{P}(B|A) = \frac{\mathbb{P}(A|B)\mathbb{P}(B)}{\mathbb{P}(A|B)\mathbb{P}(B)+\mathbb{P}(A|\neg{B})\mathbb{P}(\neg{B})} = \frac{0.99\times0.001}{0.99\times0.001+0.1\times0.999} \approx{0.00981}

Thus, if you test positive for a disease with this particular test, you have a less than 1% chance of actually having the disease, rather than the 90% intuited earlier. You will want a second opinion!
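
Plugging the numbers from this example into the hypothetical helper sketched earlier gives the same answer:

# Prevalence of 1 in 1000; 10% false positives; 1% false negatives.
print(posterior_given_positive(0.001, 0.10, 0.01))   # ~0.0098, i.e. just under 1%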

Makes sense, doesn’t it? With a 10% false-positive rate and a disease this rare, the roughly 100 false positives in every 1000 people tested swamp the single genuine case, so the overwhelming majority of people who test positive don’t actually have the disease.

Let’s try a disease that affects 1 in a million people, and a test that gives false positives 0.1% of the time (and, to keep the numbers simple, false negatives 0.1% of the time as well):

\mathbb{P}(B|A) = \frac{\mathbb{P}(A|B)\mathbb{P}(B)}{\mathbb{P}(A|B)\mathbb{P}(B)+\mathbb{P}(A|\neg{B})\mathbb{P}(\neg{B})} = \frac{0.999\times0.000001}{0.999\times0.000001+0.001\times0.999999} \approx{0.000998}

Still pretty low odds; consider that out of a million people tested, roughly 1000 will test positive, but only about one of them will actually have the disease.
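
If the arithmetic feels slippery, a quick simulation makes the same point. This is only a rough sketch, and the exact counts will vary from run to run:

import random

n = 1_000_000
true_positives = false_positives = 0
for _ in range(n):
    has_disease = random.random() < 0.000001     # prevalence: 1 in a million
    if has_disease:
        if random.random() > 0.001:              # assumed 0.1% false-negative rate
            true_positives += 1
    elif random.random() < 0.001:                # 0.1% false-positive rate
        false_positives += 1

print(true_positives + false_positives)          # roughly 1000 positives
print(true_positives)                            # only about one genuine case, on average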

And finally, a disease that affects 1 in 1000, using the same test that gives false positives (and false negatives) 0.1% of the time:

\mathbb{P}(B|A) = \frac{\mathbb{P}(A|B)\mathbb{P}(B)}{\mathbb{P}(A|B)\mathbb{P}(B)+\mathbb{P}(A|\neg{B})\mathbb{P}(\neg{B})} = \frac{0.999\times0.001}{0.999\times0.001+0.001\times0.999} = {0.5}

So, if a person tests positive, they have a 50% chance of having the disease.
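
For completeness, the same hypothetical helper reproduces all three scenarios, assuming a 1% false-negative rate for the first test and 0.1% for the other two, as above:

scenarios = [
    ("1 in 1,000 prevalence, 10% false positives",      0.001,    0.10,  0.01),
    ("1 in 1,000,000 prevalence, 0.1% false positives", 0.000001, 0.001, 0.001),
    ("1 in 1,000 prevalence, 0.1% false positives",     0.001,    0.001, 0.001),
]
for label, prevalence, fp, fn in scenarios:
    print(f"{label}: {posterior_given_positive(prevalence, fp, fn):.4f}")
# prints roughly 0.0098, 0.0010, and 0.5000 respectively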


3 Responses to Bayes’ Theorem: An Introduction

  1. Milo Schield says:

    Medical testing is an important but easily misunderstood topic. Part of the problem involves an ambiguity in “accuracy,” e.g., “this test is 90% accurate.” There is accuracy in confirmation, P(positive | disease), and accuracy in prediction, P(disease | positive). Following the algebra can get complex, but here is a nice memorable result: to have a high prediction accuracy, the test’s false-positive rate must be less than the disease prevalence. Technically, to have more than a 50% chance of having the disease given a positive test, the false-positive rate, P(positive | no disease), must be less than the prevalence of the disease, P(disease), in the group similar to the subject in question. See the last example, where a prevalence of 1 in 1,000 and a false-positive rate of 0.1% give a prediction accuracy of exactly 50%.
    Example: If only 1% of your group have the disease in question, the test’s false-positive rate must be no more than 1% in order for a positive result to have at least a 50% prediction accuracy.

  2. Margaret says:

    This leaves Joe Q. Public to assume they don’t need medical intervention of whatever kind, because the likelihood that they are ill is just too small. Unfortunately for those who are wrong about that, it will shortly be too late to hope for a cure. I vote on the side of caution.

  3. Milo Schield says:

    It depends on whether a Type I or a Type II error is the bigger problem. Caution minimizes Type II errors but increases Type I errors. The consequences of hundreds, thousands, or millions of false positives at the societal level can be unbelievably costly and drain resources from those who have a real medical problem.
