Overconfidence bias is the tendency to have excessive confidence in one’s own answers to questions, and to not fully recognize the uncertainty of the world and one’s ignorance of it. People have been shown to be prone to what is called the “illusion of certainty”: (a) overestimating how much they understand and (b) underestimating the role of chance events and lack of knowledge, in effect underestimating the variability of the events they are exposed to in their lives (Pallier et al. 2002, Moore and Healy 2008, Proeger and Meub 2014). Overconfidence bias is found in both laypeople and experts (Fabricius and Büttgen 2015).

Overconfidence bias is fed by illusions of certainty, which are fed by hindsight bias, also known as the “I-knew-it-all-along effect.” Availability bias — the tendency to overweight whatever comes to mind — similarly feeds overconfidence bias. Availability is influenced by the recency of memories and by how unusual or emotionally charged they may be, with more recent, more unusual, and more emotional memories being more easily recalled. Overconfidence bias is a type of optimism and it feeds overall optimism bias.

A simple way to illustrate overconfidence bias is to ask people to estimate confidence intervals for statistical outcomes. In one experiment, the Chief Financial Officers (CFOs) of large US corporations were asked to estimate the return next year on shares in the relevant Standard & Poor’s index (Kahneman 2011: 261). In addition, the CFOs were asked to give their best guess of the 80 percent confidence interval for the estimated returns by estimating a value for returns they were 90 percent sure would be too low and a second value they were 90 percent sure would be too high, with 80 percent of returns estimated to fall between these two values (and 20 percent outside).

When actual returns were compared with the estimated confidence intervals, 67 percent of them fell outside the estimated 80-percent confidence interval, or 3.35 times the 20 percent the CFOs had allowed for. The actual variance of outcomes was grossly underestimated by these financial experts, which is the same as saying they grossly underestimated risk. It is a typical finding. The human brain, including the brains of experts, spontaneously underestimates variance. For whatever reason, humans seem hardwired for this.
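The arithmetic behind such a calibration check is easy to reproduce. Below is a minimal sketch in Python, using made-up interval bounds and outcomes rather than the CFO data, that scores stated 80-percent confidence intervals against what actually happened.

```python
# Minimal sketch with hypothetical numbers (not the CFO data from Kahneman 2011):
# score stated 80-percent confidence intervals against realized outcomes.

def coverage(intervals, outcomes):
    """Fraction of outcomes that fall inside the stated [low, high] intervals."""
    hits = sum(low <= outcome <= high
               for (low, high), outcome in zip(intervals, outcomes))
    return hits / len(outcomes)

# Each forecaster states an 80% interval: a value they are 90% sure is too low
# and a value they are 90% sure is too high (illustrative figures).
stated_intervals = [(-5.0, 12.0), (0.0, 8.0), (-2.0, 10.0), (3.0, 9.0), (1.0, 7.0)]
actual_returns = [14.2, -6.5, 7.1, 18.0, 9.3]  # what actually happened (made up)

inside = coverage(stated_intervals, actual_returns)
print(f"Claimed coverage: 80%, realized coverage: {inside:.0%}")
print(f"Miss rate: {1 - inside:.0%} (a well-calibrated forecaster would miss ~20%)")
# In the CFO study, 67 percent of outcomes missed the interval, 3.35 times the expected 20 percent.
```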

In management, overconfidence bias is unfortunately built into the very tools experts use for quantitative risk management. The tools, which are typically based on computer models using so-called Monte Carlo simulations, or similar, look scientific and objective, but are anything but. Again, this is easy to document. You simply compare the assumed variance in a specific, planned project with the actual, historic variance for its project type, and you find the same result as for the CFOs above (Batselier and Vanhoucke 2016).
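Here is what such a check might look like as a minimal Python sketch, with purely illustrative overrun figures rather than the data from Batselier and Vanhoucke (2016): compare the spread of outcomes a project’s risk model treats as plausible with the spread actually observed for completed projects of the same type.

```python
# Minimal sketch with illustrative numbers: compare the variance a risk model
# assumes for a planned project with the historic variance for its project type.

import statistics

# Cost overruns (%) the project's risk model effectively treats as plausible.
assumed_overruns = [-5, 0, 2, 5, 8, 10, 12, 15]

# Cost overruns (%) observed on completed projects of the same type (made up).
historic_overruns = [-10, 0, 5, 15, 25, 40, 70, 160]

print("assumed  std dev:", round(statistics.stdev(assumed_overruns), 1), "pct points")
print("historic std dev:", round(statistics.stdev(historic_overruns), 1), "pct points")
print("assumed  worst case:", max(assumed_overruns), "%")
print("historic worst case:", max(historic_overruns), "%")
# When the historic spread dwarfs the assumed spread, the model understates risk,
# just as the CFOs' confidence intervals understated the spread of market returns.
```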

The bias is generated by experts assuming thin-tailed distributions of risk (normal or near-normal), when the real distributions are fat-tailed (lognormal, power law, or similar probability distributions) (Taleb 2004). The error is not with Monte Carlo models as such, but with erroneous input into the models. Garbage in, garbage out, as always. To eliminate overconfidence bias you want a more objective method that takes all distributional information into account, not just the distributional information experts can think of, which is subject to availability bias. The method needs to run on historical data from projects that have actually been completed. Flyvbjerg (2006) describes such a method.
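To see how much the tail assumption matters, here is a minimal Monte Carlo sketch with hypothetical parameters: the same expected cost overrun is modeled once with a thin-tailed (normal) distribution and once with a fat-tailed (lognormal) one, and the tail risk each model reports is compared.

```python
# Minimal Monte Carlo sketch with hypothetical parameters: identical expected
# overrun, thin-tailed (normal) vs fat-tailed (lognormal) input distributions.

import math
import random

random.seed(42)
N = 100_000
MEAN_OVERRUN = 0.20  # 20% expected cost overrun (illustrative)
SIGMA_THIN = 0.15    # spread a planner might assume under a normal distribution
SIGMA_LOG = 1.0      # log-scale spread more in line with fat-tailed history

# Thin-tailed model: normal distribution around the expected overrun.
thin = [random.gauss(MEAN_OVERRUN, SIGMA_THIN) for _ in range(N)]

# Fat-tailed model: lognormal with the same mean. Mean of lognormal(mu, sigma)
# is exp(mu + sigma^2 / 2), so solve for mu to match MEAN_OVERRUN.
mu = math.log(MEAN_OVERRUN) - SIGMA_LOG ** 2 / 2
fat = [random.lognormvariate(mu, SIGMA_LOG) for _ in range(N)]

def percentile(xs, q):
    return sorted(xs)[int(q * len(xs))]

for label, sample in (("thin-tailed", thin), ("fat-tailed ", fat)):
    blowouts = sum(x > 1.0 for x in sample) / N  # overruns beyond 100%
    print(f"{label}: P90={percentile(sample, 0.90):.2f}  "
          f"P99={percentile(sample, 0.99):.2f}  "
          f"P(overrun > 100%)={blowouts:.2%}")
# Both models have the same mean, but the thin-tailed one makes overruns beyond
# 100% look virtually impossible, i.e., it quietly understates tail risk.
```

The kind of method Flyvbjerg (2006) describes sidesteps the problem by estimating the distribution directly from the outcomes of comparable completed projects, so the fat tails of the historical record enter the model whether or not anyone thought to put them there.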

In the thrall of overconfidence bias, decision makers underestimate risk by overrating their level of knowledge and ignoring or underrating the role of chance events in deciding outcomes. Hiring experts will generally not help, because experts are just as susceptible to overconfidence bias as laypeople and therefore tend to underestimate risk, too. There is even evidence that the experts who are most in demand are the most overconfident. That is, people are attracted to, and willing to pay for, confidence more than expertise (Kahneman 2011: 263, Tetlock 2005). Risk underestimation feeds the Iron Law of project management (“over budget, over time, under benefits, over and over again”) and is the most common cause of project downfall. Good project leaders must know how to avoid this.

Individuals produce confidence by storytelling. The more coherent a story we can tell about what we see, the more confident we feel. But coherence does not necessarily equal validity. People tend to assume “what you see is all there is,” called WYSIATI by Kahneman (2011: 87–88), who gives this concept pride of place in explaining a long list of biases, including overconfidence bias. People spin a story based on what they see. Under the influence of WYSIATI they spontaneously impose a coherent pattern on reality, while they suppress doubt and ambiguity and fail to allow for missing evidence, according to Kahneman.

The human brain excels at inferring patterns and generating meaning based on skimpy, or even non-existent, evidence. But coherence based on faulty or insufficient data is not true coherence, needless to say. If we are not careful, our brains quickly settle for anything that looks like coherence and use it as a proxy for validity. This may not be a big problem most of the time, and may even be effective, on average, in evolutionary terms, which may be why the brain works like this. But for big consequential decisions it is not an advisable strategy. Nevertheless, leaders and their teams often have a very coherent — and very wrong — story about their projects, for instance that a project is unique or that it may be completed faster and cheaper than the average project, or that everything will go according to plan. The antidote is better, more carefully curated stories, based on better data.

Gigerenzer (2018: 324) has rightly observed that overconfidence, presented by psychologists as a non-deliberate cognitive bias, is in fact often a deliberate strategic bias used to achieve predefined objectives, i.e., it is strategic misrepresentation. Financial analysts, for instance, “who earn their money by mostly incorrect predictions such as forecasting exchange rates or the stock market had better be overconfident; otherwise few would buy their advice,” argues Gigerenzer, who further observes about this fundamental confusion of one type of bias for a completely different one that, “[c]onceptual clarity is desperately needed” (ibid.).

Finally, regarding the relationship between power bias and cognitive bias, powerful individuals have been shown to be more susceptible to availability bias than individuals who are not powerful. The causal mechanism seems to be that powerful individuals are affected more strongly by ease of retrieval than by the content they retrieve, because they are more likely to “go with the flow” and trust their intuition than individuals who are not powerful (Weick and Guinote 2008).

This finding has been largely ignored by behavioral economists, including by Thaler (2015) in his history of the field. This is unfortunate, because the finding documents convexity (amplification) to the second degree in situations involving power: power bias amplifies cognitive bias, which in turn amplifies the underestimation of variance and risk. By overlooking this, behavioral economists make the same mistake they criticize conventional economists for, namely overlooking and underestimating variance and risk. Conventional economists make the mistake by disregarding cognitive bias; behavioral economists by ignoring power bias and its effect on cognitive bias. Underestimating convexity is a very human mistake, to be sure. We all do it. But it needs to be accounted for if we want to understand all relevant risks and protect ourselves against them in decision-making.

For full references and a full version of the text, see Flyvbjerg, Bent, 2021, “Top Ten Behavioral Biases in Project Management: An Overview,” Project Management Journal, vol. 52, no. 6, pp. 531–546.

Source: https://medium.com/geekculture/overconfidence-bias-hindsight-bias-availability-bias-fb5e41f23269