The Prosecutor’s Fallacy

Conditional Probability in the Courtroom

The lasso of truth

Imagine you have been arrested for murder.

You know that you are innocent, but physical evidence at the scene of the crime matches your description. The prosecutor argues that you are guilty because the odds of finding this evidence given that you are innocent are so small that the jury should dismiss the possibility that you did not actually commit the crime.

But those numbers don’t add up. The prosecutor has misapplied conditional probability, neglecting the prior probability that you, the defendant, were guilty before the evidence was introduced.

The prosecutor’s fallacy is a courtroom misapplication of Bayes’ Theorem. Rather than ask the probability that the defendant is innocent given all the evidence, the prosecution, judge, and jury make the mistake of asking what the probability is that the evidence would occur if the defendant were innocent (a much smaller number):

P(defendant is innocent | all the evidence)

≠

P(all the evidence | defendant is innocent)

Bayes’ Theorem

To illustrate why this difference can spell life or death, imagine yourself the defendant again. You want to prove to the court that you’re really telling the truth, so you agree to a polygraph test.

Coincidentally, the same man who invented the lie detector later created Wonder Woman and her lasso of truth.


William Moulton Marston debuted his invention in the case of James Alphonso Frye, who was accused of murder in 1922.

Frye being polygraphed by Marston

For our simulation, we’ll use the mean accuracy rates of a more modern polygraph, taken from this paper (“Accuracy estimates of the CQT range from 74% to 89% for guilty examinees, with 1% to 13% false-negatives, and 59% to 83% for innocent examinees, with a false-positive ratio varying from 10% to 23%…”).

Examine these percentages for a moment. Given that this study found that a vast majority of people are honest most of the time, and that “big lies” are things like “not telling your partner who you have really been with,” let’s generously assume that 15% of people would lie about murder under a polygraph test, and 85% would tell the truth.

If we tested 10,000 people with this lie detector under these assumptions…
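This tally can be sketched in a few lines of Python. The four per-category rates below are assumptions, chosen as roughly mid-range values from the accuracy ranges quoted in the study:

```python
# A minimal sketch of the screening tally. The rates (81%, 8%, 17%, 71%)
# are assumed mid-range values from the study's quoted accuracy ranges.
population = 10_000
lying = round(population * 0.15)   # people who would lie: 1500
honest = population - lying        # people telling the truth: 8500

true_positives = round(lying * 0.81)    # liars who fail the polygraph
false_negatives = round(lying * 0.08)   # liars who beat the polygraph
false_positives = round(honest * 0.17)  # truth-tellers who fail anyway
true_negatives = round(honest * 0.71)   # truth-tellers who pass

print(true_positives, false_negatives, false_positives, true_negatives)
# 1215 120 1445 6035
```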

1500 people out of 10000 are_lying

1215 people out of 10000 are_true_positives

120 people out of 10000 are_false_negatives

8500 people out of 10000 are_not_lying

1445 people out of 10000 are_false_positives

6035 people out of 10000 are_true_negatives

The important distinctions to know before we apply Bayes’ Theorem are these:

  • The true positives are the people who lied and failed the polygraph (they were screened correctly)
  • The false negatives are the people who lied and beat the polygraph (they were screened incorrectly)
  • The false positives are the people who told the truth but failed the polygraph anyway
  • The true negatives are the people who told the truth and passed the polygraph

Got it? Good.

Now: If you, defendant, got a positive lie detector test, what is the chance you were actually lying?

What the polygraph examiner really wants to know is not P(+|L), the accuracy of the test, but rather P(L|+), the probability that you were lying given that the test came back positive. Bayes’ Theorem tells us how the two are related:

P(L|+) = P(+|L)P(L) / P(+)

To compute P(+), the total probability of testing positive regardless of whether or not someone is lying, we expand the denominator using the Law of Total Probability:

P(L|+) = P(+|L)P(L) / [P(+|L)P(L) + P(+|L^c)P(L^c)]

That is to say, we need to know not only the probability of testing positive given that you are lying, but also the probability of testing positive given that you’re not lying (our false positive rate). The sum of those two terms gives us the total probability of testing positive. That allows us to finally determine the conditional probability that you are lying:
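Carrying out that calculation in Python (0.81 and 0.17 are the assumed true- and false-positive rates used above):

```python
p_lie = 0.15                # prior: share of people who would lie
p_pos_given_lie = 0.81      # assumed true-positive rate
p_pos_given_honest = 0.17   # assumed false-positive rate

# Law of Total Probability: overall chance of a positive test.
p_pos = p_pos_given_lie * p_lie + p_pos_given_honest * (1 - p_lie)

# Bayes' Theorem: chance you are actually lying given a positive test.
p_lie_given_pos = p_pos_given_lie * p_lie / p_pos
print(f"{p_lie_given_pos:.2%}")  # 45.68%
```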

The probability that you are actually lying, given that you tested positive on the polygraph, is 45.68%.
The probability of a false positive is 54.32%.

The probability that you’re actually lying, given a positive test result, is only 45.68%. That’s worse than a coin flip. Note how it differs from the test’s accuracy rates (81% true positives and 71% true negatives). Meanwhile, the probability that a positive result falsely accuses a truth-teller is 54.32%, slightly higher than chance. Not reassuring.

Marston was, in fact, a notorious fraud.

The Frye court ruled that the polygraph test could not be trusted as evidence. To this day, lie detector tests are inadmissible in court because of their unreliability. But that does not stop the prosecutor’s fallacy from creeping in to court in other, more insidious ways.

This statistical reasoning error runs rampant in the criminal justice system and corrupts criminal cases that rely on everything from fingerprints to DNA evidence to cell tower data. What’s worse, courts often reject the expert testimony of statisticians on the grounds that this is “not rocket science” but “common sense”:

  • In the Netherlands, a nurse named Lucia de Berk went to prison for life because she had been proximate to “suspicious” deaths that a statistical expert calculated had less than a 1 in 342 million chance of being random. The calculation, tainted by the prosecutor’s fallacy, was incorrect. The true figure was more like 1 in 50 (or even 1 in 5). What’s more, many of the “incidents” were only marked suspicious after investigators knew that she had been close by.
  • A British nurse, Ben Geen, was accused of inducing respiratory arrest for the “thrill” of reviving his patients, on the claim that respiratory arrest was too rare a phenomenon to occur by chance given that Geen was nearby.
  • Mothers in the U.K. have been prosecuted for murdering their children, when the children actually died of SIDS, after experts erroneously quoted the odds of two children in the same family dying of SIDS as 1 in 73 million.

Ben Geen

The data in Ben Geen’s case are available thanks to Freedom of Information requests — so I have briefly analyzed them.

# Hospital data file from the expert in Ben Geen's exoneration case
# Data acquired through FOI requests
# Admissions: no. patients admitted to ED by month
# CardioED: no. patients admitted to CC from ED by month with cardio-respiratory arrest
# RespED: no. patients admitted to CC from ED by month with respiratory arrest

The most comparable hospitals to the one in which Geen worked are large hospitals that saw at least one case of respiratory arrest (although “0” in the data most likely means “missing data” and not that zero incidents occurred).
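That cleaning step can be sketched with pandas. The dataframe below contains made-up toy rows purely for illustration; the real FOI data has the columns described in the file header above:

```python
import numpy as np
import pandas as pd

# Toy rows purely for illustration; the real FOI data has the
# Admissions, CardioED, and RespED columns described above.
df = pd.DataFrame({
    'Hospital':   ['A', 'B', 'C', 'D'],
    'Admissions': [4000, 6000, 5500, 1200],
    'CardioED':   [10, 14, 12, 2],
    'RespED':     [2, 0, 3, 1],  # 0 most likely means "missing"
})

# Treat zeros as missing data rather than as zero incidents.
df['RespED'] = df['RespED'].replace(0, np.nan)

# Keep large hospitals that recorded at least one respiratory arrest.
comparable = df[(df['Admissions'] >= 4000) & df['RespED'].notna()]
print(comparable['Hospital'].tolist())  # ['A', 'C']
```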

import seaborn as sns

ax = sns.boxplot(x='Year', y='CardioED', data=df)
grid = sns.pairplot(df, x_vars=['Year'], y_vars=['CardioED', 'RespED', 'Admissions'])
Pairplots for admissions data and cardiac vs. respiratory events

The four hospitals that are comparable to the one where Geen worked are Hexham, Solihull, Wansbeck, and Wycombe. The data for Solihull (for both CardioED and RespED) are extremely anomalous:

After accounting for the discrepancies in the data, we can calculate that cardiac events happen, on average, a little under five times as often as respiratory events without accompanying cardiac events (4.669 CardioED admissions per RespED admission).

The average number of respiratory arrests per month unaccompanied by cardiac failure is approximately 1–2, with large fluctuations. That’s not particularly rare, and certainly not rare enough to send a nurse to prison for life. (You can read more about the case and this data here, and see my Jupyter notebook here.)

Common sense, it would seem, is hardly common — a problem which the judicial system should take much more seriously than it does.
