May 17, 2016

Litigation Superforecasting, Part 2: Hedgehogs and Foxes

This is the second part of a three-part series from Balance Legal Capital on “Litigation Superforecasting”, inspired by a recent book called “Superforecasting – The Art & Science of Prediction” by Philip Tetlock and Dan Gardner.

Robert Rothkopf
Managing Partner

In part 1, I explained that litigation and litigation funding involves making predictions about future outcomes, and called for lawyers and their clients to embrace the use of percentage probabilities in legal advice in order to reduce confusion and enhance the accuracy of forecasts.

In this part 2, I focus on the characteristics of “superforecasters” – those participants in Tetlock’s 20-year “Expert Judgment Project” and the subsequent “Good Judgment Project” who consistently made more accurate forecasts than even professional intelligence analysts in possession of classified information. What are those characteristics? How can we spot them? And how can they be developed in the litigation context?

DART-THROWING CHIMPS, HEDGEHOGS AND FOXES

In 1984, Tetlock launched the 20-year-long Expert Judgment Project (EJP), in which he asked 284 experts (academics, pundits, advisers, economists and so on) to make thousands of predictions about the economy, stocks, elections, wars and other issues. He recorded the 28,000 predictions and then tested whether the experts were right. The results showed that the “average expert” had no more foresight than a “dart-throwing chimp”, or a person making random guesses.

The data also showed that the experts clustered into two distinct groups on either side of the average: one made forecasts that were significantly worse than random guessing, while the other managed to consistently beat the chimpanzee.

Tetlock examined the differences between the under-performing group and the over-performing group, and named them “hedgehogs” and “foxes” respectively (borrowing from Isaiah Berlin). Tetlock found that the critical difference between them had nothing to do with education, qualifications, age or political ideology; it was the way they thought.

HEDGEHOGS AND FOXES

  1. Hedgehogs – tend to organize their thinking around big ideas or “rules of thumb”; cast complex problems onto preferred cause-effect templates; treat anything that doesn’t fit as irrelevant; and exhibit unusually high confidence, using additive language such as “furthermore” and “additionally” to compound their reasoning. They are quick to declare some outcomes “impossible” or “certain”, stay committed to their conclusions and are reluctant to change their mind. Even when shown to be wrong, they answer “just you wait.”
  2. Foxes – tend to be pragmatic; use many tools and multiple sources of information; express and acknowledge their doubts; grapple with complexity; and use language with transitional markers such as “however”, “but” and “although”. They readily admit to being wrong and are willing to change their mind.

The EJP showed that foxes consistently beat hedgehogs on forecasting accuracy. The hedgehogs’ patterns of thought exhibited unchecked cognitive biases, whereas the foxes appeared to have developed ways to correct for them.

Perhaps amusingly, data from the EJP showed that the more famous the expert, the less accurate their predictions. Hedgehogs are more likely to be picked for television than foxes because hedgehogs are prone to present adamant, overly simplistic views that audiences find satisfying. Certain political candidates come to mind.

INFO BOX: SYSTEM 1 & 2 AND COGNITIVE BIASES

For a thorough examination of cognitive biases, Daniel Kahneman’s “Thinking, Fast and Slow” is essential reading. Fans of the book will be familiar with Kahneman’s use of the “System 1” and “System 2” models for understanding the way our brains process information and make decisions.

  • System 1 is fast, automatic, and constantly running in the background. It uses heuristics (rules of thumb) to make decisions based on little information.
  • System 2 is deeper, conscious thought which can be recruited to check and critique System 1’s automatic answer. System 2 requires effort to be engaged, while System 1 cannot be switched off.

System 1 is responsible for a range of well-documented cognitive biases. Examples include:

  1. Availability heuristic – the tendency to perceive things to be more common than they really are based purely on personal experience or cultural/media focus.
  2. Confirmation bias – the inclination to seize upon the first plausible explanation for something and then gather supportive evidence without checking its reliability, while discarding or discounting evidence that does not fit the preferred explanation.
  3. Attribute substitution (or “bait and switch”) – when faced with a difficult question, System 1 surreptitiously replaces it with an easier one that has worked as a proxy through our evolutionary history. The question “Should we fear the shadow in the long grass?” is difficult to answer without more data, so the brain switches it to “Can we remember a lion jumping out from the shadow?” In the jungle, waiting to think instead of running could cost you your life.
  4. Anchoring – estimates are skewed by the first number you consider. Kahneman showed that random displays of large numbers drove up the estimates of subjects asked to guess the number of countries in the world, or population sizes.
  5. Coherence or causation bias – the inclination towards identifying causation or meaning from the evidence at hand. For example: “The Dow rose ninety-five points today on the news that…” – a quick check will often reveal that the news that supposedly drove the market change came out long after the market had risen.

These biases and others are at play in our minds all the time. They explain our susceptibility to the “God complex” – the tendency of experts to place too much weight on their own intuitions without evidence, and the tendency of people to believe those experts. As Kahneman said: “It is wise to take admissions of uncertainty seriously. Declarations of high confidence mainly tell you that an individual has constructed a coherent story in his mind, not necessarily that the story is true.”

TRAITS OF SUPERFORECASTERS

In 2011, Tetlock and others launched the Good Judgment Project (“GJP”). The aim was to build on the data of the EJP in order to determine which factors improved foresight and to see how good the forecasts could become if best practices were combined.

The GJP was part of a research effort sponsored by the US intelligence community to improve American intelligence, spurred on by the debacle over Iraq’s supposed weapons of mass destruction. The GJP was one of five teams entered in the forecasting tournament that the intelligence community set up. In year one, GJP beat the official control group by 60%, and by 78% in year two. GJP outperformed professional intelligence analysts who had access to classified data, and was doing so well after two years that the other four teams were dropped.

Within the GJP, Tetlock identified the outstandingly accurate forecasters and distilled their thought processes and characteristics, building on the hedgehog and fox models. He named them “Superforecasters” and found that they:

  1. Score highly in “need for cognition” tests – they enjoy puzzles and like to think.
  2. Are actively open-minded, intellectually curious and enjoy variety in life.
  3. Are good with numbers. Although mathematics rarely played a role in making forecasts, Superforecasters all have a facility with numbers.
  4. Are good probabilistic thinkers – comfortable with giving % chances, rather than resorting to just three settings of “Yes, No, Maybe”.
  5. Use granular precision – predicting to the nearest 5% or 1% rather than the nearest 10% (see the scoring sketch after this list).
  6. Acknowledge that there is an irreducible uncertainty in predicting anything. Nothing is 100% or 0%.
  7. Are not religious – Superforecasters did not adhere to a “divine plan” or believe in fate.
  8. Have less ego – Superforecasters are more likely to adjust beliefs on new information.
  9. Consult multiple views and sources.
  10. Regularly update forecasts based on new information.
  11. Have a “growth mind-set” – an attitude that seeks self-improvement and sharpening of skills with practice and study.
  12. Show grit – are hard-working and determined.
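
Accuracy in Tetlock’s tournaments is measured with Brier scores – broadly, the average squared gap between the probability a forecaster gives and what actually happens, so lower is better. The Python sketch below is a minimal illustration of why traits 4 and 5 matter, using a simple single-outcome form of the score and invented questions and outcomes (not GJP data): a forecaster who never moves off “maybe” does no better than a coin toss, while well-placed granular probabilities are rewarded.

```python
# A minimal sketch of Brier scoring (lower = better). The outcomes and
# probabilities below are invented purely for illustration.

def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and actual outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Outcomes of five hypothetical questions: 1 = the event happened, 0 = it did not.
outcomes = [1, 0, 1, 1, 0]

# A forecaster who never moves off 50% conveys no information.
maybe_forecaster = [0.50, 0.50, 0.50, 0.50, 0.50]

# A granular forecaster who commits to finer-grained probabilities.
granular_forecaster = [0.80, 0.15, 0.65, 0.90, 0.30]

print(brier_score(maybe_forecaster, outcomes))     # 0.25 - no better than a coin toss
print(brier_score(granular_forecaster, outcomes))  # 0.057 - well-placed precision is rewarded
```

Overconfidence is punished on the same measure: saying 90% for events that happen only 60% of the time widens the gap, which is why granular precision goes hand in hand with the regular updating described in traits 8 and 10.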

WHAT CAN THE LEGAL PROFESSION LEARN FROM THIS?

Lawyers must surely score high marks in “need for cognition” tests and have the ability to work hard. However, lawyers are not always seen (or rather, do not always see themselves) as particularly numerate. In the UK at least, there is a perception that many studied humanities and gladly gave up maths, seeing themselves as better with words than figures. This is reflected in some of the costs budgets we see. Less charitable observers of the legal profession might well point to some stubborn egos, and there will be examples of lawyers who are famously unreceptive to other people’s views or new ways of doing things.

What emerges from Tetlock’s data is that if such a stereotypical lawyer exists, then he/she is probably not a great forecaster. Maybe maths and statistics should be part of all law degrees, and lawyers should seek maths training as well as updates on contract and tort law? Lawyers could also aim to develop some of the other traits listed above.

Is there perhaps a divergence here between the traits that suit a good litigation adviser and the traits required of a persuasive advocate? Judges and tribunal members can labour under the same cognitive biases that crave coherence and favour simple ideas delivered with unwavering confidence. Should we therefore seek a QC who delivers that style of advocacy at trial, but give more weight to the view of the pragmatic senior junior when it comes to assessing the merits of the case? Maybe the ideal lawyer exhibits fox-like characteristics when reaching their views and advising clients, but is able to switch to hedgehog mode when they get to their feet at the hearing or in a mediation?

In the litigation funding space, we often see the “God complex” at work in the context of QC opinions. Applicants for litigation funding place significant weight on a QC merits opinion giving the case a 60% chance of success and may expect that this alone can satisfy our due diligence. At Balance, we are mindful of the variability among QCs and senior lawyers who may have relied too heavily on their intuitions and not controlled for their own cognitive biases. We look at issues afresh and test senior advisers on their assumptions whenever possible.

Whilst we are all susceptible to cognitive biases (as to which, see the info box above), the data shows that those who acknowledge this and control for it are better decision makers and forecasters than those who cling to big ideas and heuristics. Those who keep an open mind and actively seek out multiple points of view are more likely to stumble across the correct one and incorporate it into their analysis.

Read part 3, where I take a closer look at the methods used by Tetlock’s “Superforecasters” to control for cognitive bias and build accurate forecasts, and suggest how lawyers might adopt them when advising clients on litigation.

Take the Litigation Superforecasting Survey: Put a number on it.
