This is the second part of a three-part series from Balance Legal Capital on “Litigation Superforecasting”, inspired by the recent book “Superforecasting: The Art and Science of Prediction” by Philip Tetlock and Dan Gardner.
In part 1, I explained that litigation and litigation funding involve making predictions about future outcomes, and called for lawyers and their clients to embrace the use of percentage probabilities in legal advice in order to reduce confusion and improve the accuracy of forecasts.
In this part 2, I focus on the characteristics of “superforecasters” – those participants in Tetlock’s 20-year “Expert Judgment Project” and the subsequent “Good Judgment Project” who consistently made more accurate forecasts than even professional intelligence analysts in possession of classified information. What are those characteristics? How can we spot them? And how can they be developed in the litigation context?
In 1984, Tetlock launched the 20-year Expert Judgment Project (EJP), in which he asked 284 experts (academics, pundits, advisers, economists and so on) to make thousands of predictions about the economy, stocks, elections, wars and other issues. He recorded some 28,000 predictions and then tested whether the experts were right. The results showed that the “average expert” had no more foresight than a “dart-throwing chimp” – that is, a person making random guesses.
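How do you test whether a probabilistic forecast was “right”? The book describes scoring forecasts with Brier scores – the squared gap between the stated probability and what actually happened, averaged over all of a forecaster’s predictions. Here is a minimal sketch in Python, using the simple one-sided binary formulation and made-up numbers rather than EJP data (Tetlock’s tournaments used a variant summed over all possible outcomes, which ranks forecasters the same way):

```python
def brier_score(forecasts, outcomes):
    """Average squared difference between stated probabilities and actual
    outcomes (1 = the event happened, 0 = it did not).
    0.0 is a perfect score; lower is better."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# An overconfident expert said 90%, 70% and 80% for three events,
# of which only the first occurred; the "chimp" said 50% to everything.
expert = brier_score([0.9, 0.7, 0.8], [1, 0, 0])  # 0.38
chimp = brier_score([0.5, 0.5, 0.5], [1, 0, 0])   # 0.25
print(f"expert: {expert:.2f}, chimp: {chimp:.2f}")
```

On this scale the dart-throwing chimp scores 0.25 on every yes/no question – roughly the benchmark the average expert in the EJP failed to beat.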
Beneath that average, however, the experts clustered into two distinct groups. One group made forecasts that were significantly worse than random guessing, while the other managed to beat the chimpanzee consistently.
Tetlock examined the differences between the under-performing group and the over-performing group and, borrowing from Isaiah Berlin, named them “hedgehogs” and “foxes” respectively. He found that the critical difference between them had nothing to do with education, qualifications, age or political ideology; it was the way they thought.
The EJP showed that foxes consistently beat hedgehogs on forecasting accuracy. The hedgehogs’ patterns of thought exhibited unchecked cognitive biases, whereas the foxes appeared to have developed ways to correct for them.
Perhaps amusingly, the EJP data also showed that the more famous the expert, the less accurate their predictions. Hedgehogs are more likely than foxes to be picked for television because they are prone to presenting adamant, overly simplistic views that audiences find satisfying. Certain political candidates come to mind.
For a thorough examination of cognitive biases, Daniel Kahneman’s “Thinking, Fast and Slow” is essential reading. Fans of the book will be familiar with Kahneman’s use of the “System 1” and “System 2” models for understanding the way our brains process information and make decisions.
System 1 is responsible for a range of well-documented cognitive biases. Examples include confirmation bias (seeking out and favouring evidence that supports what we already believe), anchoring (over-weighting the first number or idea we encounter) and the availability heuristic (judging how likely something is by how easily examples spring to mind).
These biases and others are at play in our minds all the time. They explain our susceptibility to the “God complex” – the tendency of experts to place too much weight on their own intuitions without evidence, and the tendency of people to believe those experts. As Kahneman said – “It is wise to take admissions of uncertainty seriously. Declarations of high confidence mainly tell you that an individual has constructed a coherent story in his mind, not necessarily that the story is true.”
In 2011, Tetlock and others launched the Good Judgment Project (“GJP”). The aim was to build on the data of the EJP in order to determine which factors improved foresight and to see how good the forecasts could become if best practices were combined.
The GJP was part of a research effort sponsored by the US intelligence community to improve American intelligence analysis, spurred on by the debacle over Iraq’s non-existent weapons of mass destruction. The GJP was one of five teams that entered the resulting forecasting tournament. In year one, the GJP beat the official control group by 60%; in year two, by 78%. It outperformed professional intelligence analysts who had access to classified data, and was doing so well after two years that the other four teams were dropped.
Within the GJP, Tetlock identified the outstandingly accurate forecasters and distilled their thought processes and characteristics, building on the hedgehog and fox models. He named them “Superforecasters” and found that they shared a cluster of traits: a high “need for cognition” (an appetite for effortful thinking), comfort with numbers and probabilities, genuine open-mindedness and receptiveness to opposing views, humility about their own judgment and a readiness to update their forecasts as new evidence arrived, and a willingness to work hard at the problem.
Lawyers must surely score high marks on “need for cognition” tests, and they certainly have the ability to work hard. However, lawyers are not always seen (or rather, do not always see themselves) as particularly numerate. In the UK at least, there is a perception that many studied humanities and gladly gave up maths, seeing themselves as better with words than with figures. This is reflected in some of the costs budgets we see. Less charitable observers of the legal profession might point to some stubborn egos, and there will be examples of lawyers who are famously unreceptive to other people’s views or new ways of doing things.
What emerges from Tetlock’s data is that if such a stereotypical lawyer exists, then he or she is probably not a great forecaster. Maybe maths and statistics should be part of all law degrees, and lawyers should seek maths training as well as updates on contract and tort law? Lawyers could also aim to develop some of the other traits described above.
Is there perhaps a divergence here between the traits that suit a good litigation adviser and the traits required of a persuasive advocate? Judges and tribunal members can labour under the same cognitive biases that crave coherence and favour simple ideas delivered with unwavering confidence. Should we therefore seek a QC who delivers that style of advocacy at trial, but give more weight to the view of the pragmatic senior junior when it comes to assessing the merits of the case? Maybe the ideal lawyer is able to exhibit fox-like characteristics when reaching their views and advising clients, but is also able to switch to hedgehog mode when they get to their feet at the hearing or in a mediation?
In the litigation funding space, we often see the “God complex” at work in the context of QC opinions. Applicants for litigation funding place significant weight on a QC merits opinion giving the case a 60% chance of success, and may expect that this alone will satisfy our due diligence. At Balance, we are mindful that QCs and senior lawyers vary, and that some may have relied too heavily on their intuitions without controlling for their own cognitive biases. We look at issues afresh and test senior advisers on their assumptions wherever possible.
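One way to test such opinions over time (a sketch of the general idea, not a description of Balance’s actual due diligence process) is a calibration check: record each stated probability of success alongside the eventual outcome, then see whether the “60% cases” really win about 60% of the time. A hypothetical sketch in Python:

```python
from collections import defaultdict

def calibration_report(opinions):
    """opinions: list of (stated_probability, won) pairs, e.g. (0.6, True).
    Groups opinions into 10% buckets and compares the average stated
    probability in each bucket with the observed win rate."""
    buckets = defaultdict(list)
    for prob, won in opinions:
        buckets[min(int(prob * 10), 9)].append((prob, won))
    for key in sorted(buckets):
        cases = buckets[key]
        stated = sum(p for p, _ in cases) / len(cases)
        actual = sum(1 for _, w in cases if w) / len(cases)
        print(f"stated ~{stated:.0%} -> won {actual:.0%} of {len(cases)} cases")

# Hypothetical track record: the "60%" opinions won only half the time.
calibration_report([(0.6, True), (0.65, False), (0.6, False), (0.62, True),
                    (0.8, True), (0.8, True), (0.75, False)])
```

A persistent gap between stated and realised probabilities is exactly the kind of overconfidence the hedgehog data would predict.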
Whilst we are all susceptible to cognitive biases (see the examples above), the data shows that those who are able to acknowledge this and control for it are better decision-makers and forecasters than those who cling to big ideas and heuristics. Those who keep an open mind and actively seek out multiple points of view are more likely to stumble across the correct one and incorporate it into their analysis.
Read part 3, where I take a closer look at the methods used by Tetlock’s “Superforecasters” to control for cognitive bias and build accurate forecasts, and suggest how lawyers might adopt them when advising clients on litigation.
Take the Litigation Superforecasting Survey: Put a number on it.