“YOU KNOW PEOPLE LIKE YOU, THE NUMBER CRUNCHERS, REALLY GET ON MY NERVES.” So shouted one of the threatening emails Adam Kucharski received after publication of the minutes from a meeting of Britain’s Scientific Advisory Group for Emergencies (of which he was a member during the pandemic).
“That month had felt like a tug of war between the two theories about Delta [a new Covid variant], each piece of evidence a competitor pulling in one direction or the other,” Kucharski, a professor of mathematics at the London School of Hygiene and Tropical Medicine, recalls in Proof.
As that comment highlights, evidence is often contested and proof elusive, including during the pandemic, when many experts were urging governments to “follow the science”. But that irate email sender might have been speaking for many people in these post-evidence times. Indeed, looking at the US, the various conspiracy theories and non-factual beliefs that spread along with the coronavirus in the early 2020s seem rather low-key by 2025 standards.
So this is either a particularly bad time, or a particularly good time, to publish a book about probability, the nature of evidence, and scientific communication.
Proof starts with the well-known Monty Hall problem, whose correct answer almost nobody finds intuitive. Game show host Monty offers a contestant the choice of three doors: behind one is a desirable new car, behind the other two are less desirable goats. Suppose the contestant chooses door 2. Monty, who knows where the car is, opens door 3 to reveal a goat. Should the player stick with door 2 or switch to door 1?
The correct answer is to switch: the probability that the car is behind door 1 is 2/3, against 1/3 for door 2. The new information from Monty’s action is not that there is a goat behind door 3 but rather that the host chose not to open door 1. Yet one of the disbelievers, the book tells us, was the genius mathematician Paul Erdős.
If even he couldn’t understand, what hope for the rest of us? Is the whole programme of basing decisions on evidence doomed to founder on humans’ inability to understand probabilities and logic?
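Anyone tempted to side with Erdős can at least check the answer empirically. What follows is a minimal simulation of my own in Python (a sketch, not code from the book): play a large number of games under each strategy, and switching wins about two-thirds of the time.

import random

# Monte Carlo check of the Monty Hall answer. Monty always opens a
# goat door that is neither the contestant's pick nor the car, so
# switching wins exactly when the first pick was a goat.
def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's first choice
        wins += (pick != car) if switch else (pick == car)
    return wins / trials

print(f"stick:  {play(switch=False):.3f}")  # roughly 0.333
print(f"switch: {play(switch=True):.3f}")   # roughly 0.667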
The book continues with a slice of the history of mathematics to demonstrate that even logic can’t deliver absolute truth: Riemannian geometry torpedoes our everyday Euclidean intuitions about the behaviour of straight lines and right angles; in 1931, Kurt Gödel demonstrated that axiomatic mathematical systems are either inconsistent or incomplete. This latter piece of early 20th-century mathematics shows “why software engineers sometimes struggle to develop successful decision-making algorithms”, Kucharski points out. Algorithms cannot both cover all possible contexts and give internally consistent outcomes.
For example, consider Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), a system widely used in the US to help judges make decisions about bail and parole through automated risk assessment. It feeds variables such as age, education and history of violence into an algorithm to produce a risk score.

A ProPublica investigation in 2016 suggested that although race was not explicitly used in the algorithm, Black defendants who did not go on to reoffend were far more likely to be classified as high risk. Wasn’t this clearly unfair? It depends what definition of fairness applies. Is it that actual reoffending rates accurately match the predictions of the algorithm for all groups? Or is it that the likelihood of the algorithm misclassifying someone as either high or low risk is equal across groups — for example, the probability that someone who does not reoffend is misclassified as high risk is the same for every group? When underlying reoffending rates differ between groups, no algorithm can satisfy both definitions of “fair”.
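The impossibility is a matter of simple arithmetic, and a toy calculation makes it concrete. The sketch below is my own, with invented numbers rather than ProPublica’s data: give two groups with different underlying reoffending rates a score calibrated to the same precision, and the rate at which non-reoffenders are wrongly flagged as high risk must differ.

# Toy illustration of the fairness impossibility (invented numbers,
# not COMPAS data). A score calibrated equally well in two groups
# with different base rates produces different error rates.
def false_positive_rate(n, base_rate, flagged, ppv):
    """Share of a group's non-reoffenders wrongly flagged high risk,
    given group size n, reoffending base rate, number flagged, and
    calibration ppv (share of the flagged who actually reoffend)."""
    false_pos = flagged * (1 - ppv)        # flagged but did not reoffend
    non_reoffenders = n * (1 - base_rate)  # people who did not reoffend
    return false_pos / non_reoffenders

# Same calibration (60% of those flagged reoffend) in both groups...
fpr_a = false_positive_rate(n=1000, base_rate=0.5, flagged=500, ppv=0.6)
fpr_b = false_positive_rate(n=1000, base_rate=0.2, flagged=200, ppv=0.6)

# ...yet the chance of a non-reoffender being mislabelled differs sharply.
print(f"Group A false-positive rate: {fpr_a:.0%}")  # 40%
print(f"Group B false-positive rate: {fpr_b:.0%}")  # 10%

Satisfying one definition of fairness (accurate calibration) while base rates differ forces a violation of the other (equal misclassification rates); the only escape would be identical base rates or a perfect predictor.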
Such examples hold an obvious warning for policymakers cantering towards using artificial intelligence to cut costs or improve efficiency in areas such as criminal justice, healthcare or welfare. But, as Kucharski goes on to discuss, there are challenges in determining the “correct” answer on the basis of evidence and quantitative methods in the law, in medicine, and indeed in all areas where strong claims are made for “evidence-based” conclusions.
One chapter, which might surprise some readers, casts serious doubt on the claim that randomised controlled trials (RCTs) are actually the “gold standard” for evidence on medical efficacy. The entrenched idea that RCTs sit at the top of a hierarchy of proof dates to recommendations from a Canadian task force in 1979. But there are many situations where other types of proof — including experience — are more relevant.
So what does this imply for the number crunchers, those who still believe in the importance of evidence, but recognise both the prevalence of bad practice in science and the inherent difficulty of ever establishing scientific claims?
The book’s answer is to do more of the same, but to do it better and, above all, to acknowledge and communicate the uncertainties. That just might rebuild public trust in “the evidence”. It seems both the right thing to do and, in today’s context, hopelessly inadequate.
Proof: The Uncertain Science of Certainty by Adam Kucharski, Profile £22/Basic Books $32, 368 pages
Diane Coyle is professor of public policy at the University of Cambridge