As a litigator, I make intuitive predictions all the time. But does the mere fact that I make them often mean they're more accurate than those of people who make them only sporadically? Does experience improve accuracy?
Philip Tetlock says no. This may come as a surprise to the legal profession, a field with no shortage of traditions, in which deference is generously afforded to senior counsel.
This is not always a bad thing. But in the exercise of prediction-making, it might be. According to Tetlock, experience does not increase the accuracy of a prediction; it only increases confidence in it.
So if we can’t expect senior members of the bar to make more accurate predictions of a legal action than a junior member, who or what can we turn to?
Our answer is base rates.
Paul E. Meehl, the famed clinical psychologist, devoted his career to understanding why statistical predictions (base rates) consistently outperformed clinical predictions (expert judgment). Eventually, he settled on two explanations:
The first reason is that humans try too hard to be clever. We have a desire to think outside the box and consider complex combinations of facts when making predictions. Complexity may work in the occasional scenario but, more often than not, it reduces validity.
The second reason is that humans are persistently inconsistent when making summary judgments. When asked to evaluate the same information twice, we give different responses. It has been demonstrated that expert radiologists contradict themselves 20% of the time when they see the same x-ray on separate occasions. How many litigators have changed their minds about the merits of a case when reassessing the same evidence at a later date? I know I have, and the shift may have stemmed from increased trust in the client or growing familiarity with opposing counsel, although neither has much to do with reducing uncertainty to generate a better prediction of success.
I believe that inaccurate summary judgments fail to incorporate (or properly apply) a common statistical principle: regression to the mean. Because litigators do not benefit from repeated exposure to a complete set of facts, we engage in a process called intensity matching, weighing the limited information we have in order to generate an assessment of the outcome (risk and reward). This is a dangerous practice because it amounts to answering a substitute question in the absence of other information.
According to Kahneman, intensity-matching exercises yield extreme predictions when based on extreme evidence, leading people to give the same answer to two different questions. Kahneman offers the following example:
Julie is a senior in university. By the time that she was four years old, she was already a fluent reader. What is her grade point average (GPA)?
When broken down, these are fundamentally two questions:
1. What is Julie’s percentile score on reading precocity?
2. What is Julie’s percentile score on GPA?
To help us reach the correct answer, a schematic formula must be used:
Reading age = shared factors + factors specific to reading age = 100%
GPA = shared factors + factors specific to GPA = 100%
The shared factors include genetically-determined aptitude, the degree to which her family supports academic interests, and any other factors that would lead to people becoming precocious readers as children and academically successful adults.
Now we need to estimate the correlation between the two measures: reading age and GPA. This correlation is equal to the proportion of shared factors among their determinants.
In this scenario, Kahneman assigns an optimistic estimate of .30.
We now have everything we need to reach an unbiased prediction:
1. Start with an estimate of average GPA.
2. Determine the GPA that matches your impression of the evidence.
3. Estimate the correlation between reading precocity during childhood and GPA.
4. If the correlation is .30, move 30% of the distance from the average to the matching GPA.
The first step ascertains the baseline: the GPA we would have predicted if we knew nothing about Julie other than that she is a college senior.
The second step involves our summary judgment, which is our intuitive judgment of the evidence.
The third step involves the active process of moving away from the baseline towards our intuitive prediction but only to a degree that matches our estimate of the correlation.
The final step provides us with our answer: a prediction shaped by our intuition but anchored to an unbiased base rate.
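The four steps above reduce to simple arithmetic. Here is a minimal sketch in Python, using illustrative numbers that are my own assumptions (a campus-average GPA of 3.0 and an intensity-matched intuitive estimate of 3.8), together with Kahneman's correlation of .30:

```python
def regressive_prediction(baseline, intuitive_estimate, correlation):
    """Move from the baseline toward the intuitive estimate,
    but only by the proportion given by the correlation."""
    return baseline + correlation * (intuitive_estimate - baseline)

average_gpa = 3.0    # step 1: the baseline (assumed for illustration)
matching_gpa = 3.8   # step 2: the GPA matching our impression of the evidence
correlation = 0.30   # step 3: Kahneman's optimistic estimate

# step 4: move 30% of the distance from the average toward the matching GPA
prediction = regressive_prediction(average_gpa, matching_gpa, correlation)
print(round(prediction, 2))  # 3.0 + 0.30 * (3.8 - 3.0) = 3.24
```

Note the limiting cases: a correlation of zero leaves us at the baseline (the evidence tells us nothing), while a correlation of one lets us adopt the intuitive estimate wholesale.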
I believe that a systems-based approach to assessing the merits of a case is a superior, and more logical, method than exclusive reliance on expert judgment. Although our subjective judgment remains an element of the process, the predictions are moderated by regression to the mean.
The trade-off, however, is that this method suppresses extreme predictions, the kind of outcomes Nassim Taleb calls "black swan" events. These are the civil cases that lead to record-breaking damage awards and land a spot in the news. In those extreme situations, litigators will have to lean on factors outside the merits of the case, such as the client's appetite for risk and the disposition of the opposing party. Lastly, calculating that a client has a reduced chance of success does not mean the matter shouldn't proceed to trial; it simply means that litigators and their clients should be on the same page as to what the odds are and how those odds were calculated.
At my law firm, I strive to provide not only expert advocacy but sound advice. The latter cannot be achieved unless our forecasts are free of cognitive bias and irrational decision-making. That requires the humility to accept that human intuition needs guidance from unbiased data, and a systematic, even statistical, method of analysis to wed the two. Sound judgment depends on facts, not feelings. This is not only an empirical truth but a goal worthy of pursuit.