Executives may improve the quality of their judgments by taking the time to think about the unexpected.
Overconfidence is so pervasive and harmful that Nobel laureate in economics Daniel Kahneman deemed it “the most significant of the cognitive biases” in his book Thinking, Fast and Slow. It has been linked to medical misdiagnoses, excessive stock market trading, misallocation of corporate investment, and even economic downturns, as well as to high-profile disasters such as the Chernobyl explosion, the sinking of the Titanic, and the Deepwater Horizon oil spill.
Considerable effort has gone into studying the causes of overconfidence and developing effective methods for improving calibration, the degree to which a person’s confidence matches their accuracy. My research with Philip Fernbach (University of Colorado Boulder), Craig Fox (UCLA), and Steven Sloman (Brown University) points to a novel method for tempering people’s sense of certainty: having them consciously consider the gaps in their knowledge. Our study, titled “Known Unknowns: A Critical Determinant of Confidence and Calibration,” appeared in Management Science.
Putting the brakes on overconfidence
In our first study, participants answered ten multiple-choice general-knowledge questions and rated their confidence in each answer. We also asked them to explain why they were more or less certain of each assessment, and we coded these justifications according to whether they cited known evidence or unknown evidence. Unknown evidence can undermine confidence for many reasons; for example, if the question is “Which has more calories, a Subway meatball sandwich or a McDonald’s Quarter Pounder with cheese?,” not knowing the number or size of the meatballs in the Subway sandwich could be grounds for less confidence.
Participants were overconfident: their average confidence (67%) exceeded their average accuracy (62%). But when their justifications referred more to unknown evidence, their confidence ratings were better calibrated, and no less accurate.
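The arithmetic behind this comparison is simple: overconfidence is the gap between mean stated confidence and the actual proportion of correct answers. The sketch below illustrates the calculation with made-up data (the numbers are hypothetical, not the study’s raw data):

```python
def overconfidence(confidences, correct):
    """Mean stated confidence (in %) minus accuracy (% correct).

    A positive result indicates overconfidence; a negative result
    indicates underconfidence; zero indicates perfect calibration.
    """
    mean_conf = sum(confidences) / len(confidences)
    accuracy = 100 * sum(correct) / len(correct)
    return mean_conf - accuracy

# Hypothetical responses to ten questions: stated confidence (%)
# and whether each answer was correct (1) or incorrect (0).
conf = [70, 60, 80, 50, 90, 65, 75, 55, 85, 40]
right = [1, 0, 1, 0, 1, 1, 0, 1, 1, 0]

print(overconfidence(conf, right))  # prints 7.0: confident beyond accuracy
```

Here mean confidence is 67% while only 6 of 10 answers are correct (60%), so the participant is overconfident by 7 percentage points, mirroring the 67%-versus-62% pattern reported in the study.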
Participants in our second study answered a series of multiple-choice questions. We asked one group to identify, for each question, two pieces of information that were not provided but that would have helped them reach the correct answer; this group “considered the unknowns.” We asked a second group to consider whether an alternative answer they had not chosen might be correct and to write down two arguments in its favor; this group “considered the alternative,” a version of playing devil’s advocate. A control group simply answered the questions and rated their confidence.
Both interventions reduced overconfidence, but considering the unknowns was more effective: relative to the control group, it cut overconfidence by 8 percentage points (from 24 to 16), whereas considering the alternative cut it by only 6 percentage points.
In the third study, we examined whether considering the unknowns reduces confidence indiscriminately or genuinely improves calibration. People are not always overconfident; in some domains they are underconfident and overly cautious. If considering the unknowns improves calibration, it should lower confidence only where confidence is excessive, not where judgments are well calibrated or underconfident. Participants answered general-knowledge questions drawn from nine categories (e.g., state populations, calorie counts), some in which people tend to be overconfident and others in which they tend to be underconfident. As in the second study, participants either considered the unknowns or considered the alternative (the devil’s advocate strategy), and both treatments were compared against a control group that received no additional prompt.
As we predicted, considering the unknowns lowered confidence only where it was misplaced (in overconfident domains), whereas playing devil’s advocate lowered confidence in overconfident and underconfident domains alike. The figure below compares confidence and overconfidence for the two methods across domains.
The art of finding a happy medium
A great deal of research on overconfidence has focused on confirmation bias, the systematic tendency to seek or overweight evidence supporting a favored hypothesis over its alternatives. Our findings, however, suggest that the devil’s advocate approach is not always the best tool: prompting people to enumerate all the ways they might be wrong can erode confidence even when their initial judgments were accurate. Picture a CFO weighing the pros and cons of a potential acquisition. No investor wants an overconfident CFO, but underconfidence that leads to missed opportunities can be just as costly.
Our research suggests that overconfidence typically arises when people fail to account for gaps in their knowledge. Our recommendation to managers is simple: before making an important probability judgment, take out a piece of paper and ask yourself, “What is it that I don’t know?” Even without writing anything down, the mere act of considering the unknowns helps. It is also done too rarely, often because people fear looking ignorant. But businesses that let executive overconfidence run unchecked can expect to pay a steep price eventually.