Michael Lewis starts his new book, The Undoing Project, with the caveat that it is kind of like the inverse of Moneyball. Instead of focusing on how people might use pure data and analytics to compensate for the fallibility of human judgment, he is going to write about the study of those foibles themselves. Although I had already read “Thinking, Fast and Slow,” Nobel laureate Daniel Kahneman’s magnum opus on how the human brain uses heuristics, not pure rationality, to make decisions, I was still riveted by the narrative of how he and his collaborator Amos Tversky worked together on these ideas with very different, but complementary, styles.
Similar to using optical illusions to understand how vision works, Kahneman and Tversky used surveys to demonstrate mental blind spots hardwired into the brain.
Even if you know about it, the conjunction fallacy is very hard to resist:
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.
Which is more probable?
(A) Linda is a bank teller.
(B) Linda is a bank teller and is active in the feminist movement.
Since the set of all bank tellers includes the bank tellers who are feminists, (A) must be at least as likely. Adding the restriction (“active in the feminist movement”) tricks us into thinking (B) is more likely, because it sounds more representative of Linda. Another example: estimate the likelihood that at least 1,000 people will have to evacuate somewhere in California this year. Now estimate the likelihood that a forest fire will start in Southern California and force 1,000 people to evacuate this year. Again, the brain uses the representativeness of the narrative as an imperfect proxy for how probable we should judge something to be.
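The conjunction rule is easy to verify numerically. Here is a minimal sketch with a simulated population; the 5% and 30% base rates are made up purely for illustration, not taken from the surveys:

```python
import random

random.seed(0)

# Hypothetical population: each person is a pair of traits
# (is_bank_teller, is_feminist), drawn independently.
# The base rates (5% tellers, 30% feminists) are invented for this demo.
population = [
    (random.random() < 0.05, random.random() < 0.30)
    for _ in range(100_000)
]

p_teller = sum(t for t, f in population) / len(population)
p_teller_and_feminist = sum(t and f for t, f in population) / len(population)

# The conjunction can never be more probable than either conjunct alone:
# P(teller AND feminist) <= P(teller), no matter what the base rates are.
print(p_teller, p_teller_and_feminist)
```

The same inequality holds however you change the base rates or correlate the traits, which is why option (A) must always win.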
The main principle is that real people are affected by how choices are framed – whether as a loss or a gain – or by how representative a description sounds. While we might now consider some of these findings obvious – people are clearly not the omniscient, perfectly rational, purely selfish members of homo economicus – behavioral economics upset a great deal of economic theory, because mathematical models of economic behavior rely on assumptions like the stable preferences of rational actors. The idea of “bounded rationality” makes everything a muddle. And if people are Predictably Irrational, then the errors are not just random noise, but rather a systematic bias that won’t wash out on average.
[Parenthetically, this is another great example of the difference between uncorrelated errors, which can be improved with aggregation, versus systematic bias, which cannot]
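That difference is easy to see in a toy simulation. In this sketch (the true value, noise scale, and +5 offset are all invented for illustration), averaging many estimates kills the zero-mean noise but leaves a shared bias fully intact:

```python
import random

random.seed(42)
true_value = 100.0
n = 10_000

# Uncorrelated errors: each estimate is the true value plus zero-mean noise.
noisy = [true_value + random.gauss(0, 10) for _ in range(n)]
mean_noisy = sum(noisy) / n

# Systematic bias: every estimate also shares the same +5 offset.
biased = [true_value + 5 + random.gauss(0, 10) for _ in range(n)]
mean_biased = sum(biased) / n

print(mean_noisy)   # close to 100: aggregation washes out uncorrelated noise
print(mean_biased)  # close to 105: aggregation cannot remove the shared bias
```

No matter how large n gets, the second average converges to 105, not 100 – the crowd is precisely wrong.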
The upside, however, is that if real people are not perfectly rational, they can be nudged into doing the right thing, like saving for retirement, through simple changes to the “choice architecture” – that is, how the options are framed and presented.
Appendix – Behavioral Economics Bibliography: