Have you ever wondered why you reach for the chocolate bar instead of the apple, even when you know which one is healthier?
System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control. System 2 allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration.
In our minds, there are two ways of thinking that are always working together: System 1, the fast thinker, and System 2, the slow thinker. Imagine this: you hear a loud bang! Your attention instantly snaps toward the sound – that’s System 1, always quick and on guard. Now imagine you’re solving a tricky math problem; you have to focus and think through it carefully – that’s System 2 hard at work.
In his book Thinking, Fast and Slow, Nobel Prize-winning psychologist Daniel Kahneman explains how understanding these thinking systems can help us make better choices and understand ourselves more deeply.
- What did I get out of it?
- Autopilot: When Your Brain Chooses for You
- Snap Judgments: When Your Brain Jumps to Conclusions
- Heuristics: Your Brain’s Mental Shortcuts
- No Head for Numbers: When Stats Trick Us
- Past Imperfect: Your Brain’s Tricky Memory
- Mind Over Matter: How Focus Changes Everything
- Taking Chances: The Way Probabilities Are Presented To Us
- Not Robots: Why Logic Isn’t Always King
- Gut Feeling: When Emotions Trump Logic
- False Images: When Your Mental Shortcuts Lead You Astray
- Who Should Read It?
What did I get out of it?
A somewhat repetitive book, but it has great lessons on how we make our decisions. For me personally, this book opened my eyes to the hidden forces that shape our choices – even when we think we’re in full control.
Autopilot: When Your Brain Chooses for You
Priming and Repetition
Have you ever filled in a word blank like SO_P and automatically thought “SOUP” after seeing the word “EAT”? That’s called priming, and it shows how easily our brains can be influenced without us even realizing it!
A reliable way to make people believe in falsehoods is frequent repetition, because familiarity is not easily distinguished from truth. Authoritarian institutions and marketers have always known this fact.
Our Brains Get Tricked: The more we hear something, the more familiar it becomes, and our brains mistake that familiarity for truth. It’s a built-in shortcut that gets exploited.
Priming can change how we act: in one study, just thinking about old age made people walk more slowly. This means things around us, from words to images and even ideas about money, can secretly nudge our decisions and behavior.
“We must be inclined to believe it because it has been repeated so often, but let’s think it through again.”
That question is a powerful tool for combating misinformation and making better decisions.
Repetition ≠ Truth: Just because you’ve heard something often doesn’t automatically make it true. This is a tactic used by manipulators, advertisers, and those spreading misinformation.
The Importance of Critical Thinking: System 1 wants us to believe whatever feels familiar, but good decisions require engaging System 2.
The Power of “Why”: Asking ourselves “Why do I believe this?” helps identify when we might be falling into the trap of believing something based purely on repetition.
Justifications
A remarkable aspect of your mental life is that you are rarely stumped.
System 1 in Action: Our brains are wired to always produce an answer, even with incomplete or flawed information. The danger is that we rarely notice how often this quick-thinking system leads us astray. Feeling like you have an answer doesn’t mean your answer is right.
Encourage open discussion where it’s okay to say, ‘I don’t know.’ Model for children that gathering more information is always wiser than providing a quick answer that may be incorrect.
Resist the urge to jump to conclusions.
Snap Judgments: When Your Brain Jumps to Conclusions
Halo Effect
Ever meet someone at a party and instantly decide you like them, even though you barely know anything about them? That’s your brain making a snap judgment. Psychologists call this the “halo effect” – when one good trait makes you assume someone is great all around.
If you like the president’s politics, you probably like his voice and his appearance as well. The tendency to like (or dislike) everything about a person — including things you have not observed — is known as the halo effect.
You’re not in as much control of your likes as you think.
Halo Effect, a Perception Shortcut: Our brains like to take mental shortcuts. If we form a positive impression of someone (their politics, in this case), we tend to unconsciously extend that positive feeling to other aspects of that person, even ones we haven’t observed.
Hence, first impressions are important.
The sequence in which we observe characteristics of a person is often determined by chance. Sequence matters, however, because the halo effect increases the weight of first impressions, sometimes to the point that subsequent information is mostly wasted.
What we learn first is often completely random (did you meet them on a good day or a bad day?). This shows how easily our judgments about people can be swayed by factors outside their control.
Assumptions and Judgments
Your mind is ready and even eager to identify agents, assign them personality traits and specific intentions, and view their actions as expressing individual propensities.
Our minds are always on the starting block, ready to race to make up stories.
Our Brain’s Focus on People: We have an inherent drive to see intention behind actions, even when it’s not present. This makes us quick to form judgments about the motivations of individuals or organizations.
But there’s another trick our brain plays: confirmation bias. If someone asks, “Is James friendly?”, you’re more likely to think “yes” just because the idea was planted in your head!
The Danger of Assumptions: Assuming a deliberate agent is behind an event can lead to misinterpretations. An error might be accidental, not malicious. A market shift might be due to complex factors, not a conspiracy.
Anthropomorphization: We tend to apply human characteristics and motivations to non-human things (like the weather or a computer program). This impacts how we react to situations.
Todorov has found that people judge competence by combining the two dimensions of strength and trustworthiness. The faces that exude competence combine a strong chin with a slight confident-appearing smile.
Political scientists followed up on Todorov’s initial research by identifying a category of voters for whom the automatic preferences of System 1 are particularly likely to play a large role. They found what they were looking for among politically uninformed voters who watch a great deal of television.
We unconsciously assess competence based on facial features.
Shiny websites, slick presentations, and charismatic sales pitches shouldn’t outweigh the core fundamentals of any investment opportunity.
Question your first impressions of clients or colleagues. Do they inspire confidence because of their knowledge and work, or are you making a judgment based on how they look or present themselves?
System 1 is not prone to doubt. It suppresses ambiguity and spontaneously constructs stories that are as coherent as possible. Unless the message is immediately negated, the associations that it evokes will spread as if the message were true.
The false conclusions of System 1 can spread like a virus throughout your thinking, unless countered quickly.
Negativity Sticks: It’s easier for our brains to hold onto negative information than positive. Even if something is quickly debunked, the doubt lingers subconsciously. This is why rumor and misinformation spread easily.
Don’t let gossip or speculation take root.
For an example, take the sex of six babies born in sequence at a hospital. The sequence of boys and girls is obviously random; the events are independent of each other, and the number of boys and girls who were born in the hospital in the last few hours has no effect whatsoever on the sex of the next baby. Now consider three possible sequences: BBBGGG, GGGGGG, BGBBGB. Are the sequences equally likely? The intuitive answer — “of course not!” — is false. Because the events are independent and because the outcomes B and G are (approximately) equally likely, then any possible sequence of six births is as likely as any other. Even now that you know this conclusion is true, it remains counterintuitive, because only the third sequence appears random. As expected, BGBBGB is judged much more likely than the other two sequences.
Understanding how our brains struggle with randomness is crucial for making decisions in many areas of life.
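One way to drag System 2 into this is to just compute the odds. Here’s a minimal Python sketch (my own illustration, not from the book): with independent births and B and G equally likely, every specific six-birth sequence has probability (1/2)^6 = 1/64, no matter how non-random it looks.

```python
from itertools import product

# Independent births, P(B) = P(G) = 1/2, so every specific
# length-6 sequence is exactly as likely as any other.
for seq in ["BBBGGG", "GGGGGG", "BGBBGB"]:
    print(f"P({seq}) = {0.5 ** len(seq)}")  # 0.015625 = 1/64 for each

# Sanity check: there are 2**6 = 64 equally likely sequences in total.
print("total sequences:", len(list(product("BG", repeat=6))))
```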
Although it is common, prediction by representativeness is not statistically optimal. Michael Lewis’s bestselling Moneyball is a story about the inefficiency of this mode of prediction. Professional baseball scouts traditionally forecast the success of possible players in part by their build and look. The hero of Lewis’s book is Billy Beane, the manager of the Oakland A’s, who made the unpopular decision to overrule his scouts and to select players by the statistics of past performance. The players the A’s picked were inexpensive, because other teams had rejected them for not looking the part. The team soon achieved excellent results at low cost.
Judging By Appearances: We often assume that if someone looks a certain way, they’ll perform a certain way. In the case of baseball scouts, they might have overlooked players with less athletic builds, even if those players had a better track record.
The Power of Statistics: Billy Beane’s innovation was to ignore “representativeness” and focus solely on past performance data. This evidence-based approach led to a winning team that no one else saw coming.
Appearances can be deceiving. It’s better to use objective criteria (past performance, data) to make accurate judgments about people or investments.
Regression to the Mean
Instead, I used chalk to mark a target on the floor. I asked every officer in the room to turn his back to the target and throw two coins at it in immediate succession, without looking. We measured the distances from the target and wrote the two results of each contestant on the blackboard. Then we rewrote the results in order, from the best to the worst performance on the first try. It was apparent that most (but not all) of those who had done best the first time deteriorated on their second try, and those who had done poorly on the first attempt generally improved. I pointed out to the instructors that what they saw on the board coincided with what we had heard about the performance of aerobatic maneuvers on successive attempts: poor performance was typically followed by improvement and good performance by deterioration, without any help from either praise or punishment.
Kahneman tells a story about IAF flight instructors who believed that praise made pilots perform worse, while scolding made them perform better. Brought in to consult, he showed that this belief was misplaced: the swings in performance were regression to the mean, not a response to feedback.
A stock’s hot streak won’t last forever, and an extended slump might be followed by a rebound. Understanding regression to the mean can help you make more realistic investment decisions.
Random variation can cause temporary highs and lows, but performance generally tends to regress back to an individual’s or group’s average ability over time.
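You can reproduce the logic of the coin-throwing demonstration with a few lines of simulation. This is my own sketch, not Kahneman’s: each officer gets a fixed skill level, each attempt adds independent luck, and the best first-try performers get worse on the second try with no praise or punishment involved.

```python
import random

random.seed(42)
N = 1000

# Score = distance from the target, so lower is better.
# Each attempt is true skill plus independent luck.
skill = [random.gauss(10, 2) for _ in range(N)]
try1 = [s + random.gauss(0, 4) for s in skill]
try2 = [s + random.gauss(0, 4) for s in skill]

# Look at the best 10% on the first try.
best = sorted(range(N), key=lambda i: try1[i])[: N // 10]
avg1 = sum(try1[i] for i in best) / len(best)
avg2 = sum(try2[i] for i in best) / len(best)
print(f"top decile, try 1: {avg1:.2f}")   # unusually good (skill + luck)
print(f"same people, try 2: {avg2:.2f}")  # worse: the luck didn't repeat
```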
Regression effects are ubiquitous, and so are misguided causal stories to explain them. A well-known example is the “Sports Illustrated jinx,” the claim that an athlete whose picture appears on the cover of the magazine is doomed to perform poorly the following season. Overconfidence and the pressure of meeting high expectations are often offered as explanations. But there is a simpler account of the jinx: an athlete who gets to be on the cover of Sports Illustrated must have performed exceptionally well in the preceding season, probably with the assistance of a nudge from luck — and luck is fickle.
[It] took Francis Galton several years to figure out that correlation and regression are not two concepts — they are different perspectives on the same concept. The general rule is straightforward but has surprising consequences: whenever the correlation between two scores is imperfect, there will be regression to the mean.
An example of the same concept, as demonstrated by the Sports Illustrated jinx.
Indeed, the statistician David Freedman used to say that if the topic of regression comes up in a criminal or civil trial, the side that must explain regression to the jury will lose the case.
Depressed children treated with an energy drink improve significantly over a three-month period. I made up this newspaper headline, but the fact it reports is true: if you treated a group of depressed children for some time with an energy drink, they would show a clinically significant improvement. It is also the case that depressed children who spend some time standing on their head or hug a cat for twenty minutes a day will also show improvement. Most readers of such headlines will automatically infer that the energy drink or the cat hugging caused an improvement, but this conclusion is completely unjustified.
The concept of regression isn’t necessarily intuitive, because remember — we love a good story.
Misunderstanding Regression to the Mean: People tend to assume any change after an intervention means the intervention caused the change. This ignores natural fluctuations or other factors that could be at play.
False Causation: Depressed children will likely start to feel better over time regardless of what they do. Attributing that improvement to a random treatment (energy drink, hugging a cat) is incorrect.
Market fluctuations make it easy to make a correlation look like causation when it’s often just coincidence.
Don’t overreact to an isolated problem and assume the whole system is broken when you might just be seeing natural variation.
Heuristics: Your Brain’s Mental Shortcuts
Laziness
Imagine you’re in a rush and need to grab a snack. You see a familiar brand and think, “That’s probably good.” That’s a heuristic – a mental shortcut to help you make fast decisions.
A general “law of least effort” applies to cognitive as well as physical exertion. The law asserts that if there are several ways of achieving the same goal, people will eventually gravitate to the least demanding course of action.
When given a choice, we often default to the path that requires the least amount of thinking.
It’s tempting to fall into routines or stick with familiar solutions to problems. Be aware of this tendency and take intentional moments to think of ways to improve, change, and try something new. To a man with a hammer, everything looks like a nail.
Finding such causal connections is part of understanding a story and is an automatic operation of System 1. System 2, your conscious self, was offered the causal interpretation and accepted it.
On stories: humans love a good story. This is why it’s so important to develop storytelling skills.
The Danger of Quick Conclusions: System 1 might provide an explanation, but it might not be the right one. True understanding requires the slower and more analytical System 2 to verify.
Passively Accepting Explanations: It’s easy for our conscious mind (System 2) to get lazy and simply accept the first explanation that feels right, even if it’s flawed.
Double-check your work, especially if you find yourself quickly jumping to a solution. There may be subtle factors you’ve overlooked in your rush to make sense of the data.
Availability Heuristic
One of our projects was the study of what we called the availability heuristic. We thought of that heuristic when we asked ourselves what people actually do when they wish to estimate the frequency of a category, such as “people who divorce after the age of 60” or “dangerous plants.” The answer was straightforward: instances of the class will be retrieved from memory, and if retrieval is easy and fluent, the category will be judged to be large.
The Availability Heuristic: Our brains assess how common something is by how easily we can bring examples to mind. If we can think of many examples quickly (like people travelling on business class or people getting divorced), we assume it happens a lot, even if that’s not statistically true.
The Problem: This is a flawed system because it relies on memory retrieval, which can be biased by factors like social media feed, media attention or personal experiences, not actual occurrence.
A professor at UCLA found an ingenious way to exploit the availability bias. He asked different groups of students to list ways to improve the course, and he varied the required number of improvements. As expected, the students who listed more ways to improve the class rated it higher!
It was the act of coming up with lots of improvements, not actually finding good ideas, that changed the students’ perception of the class.
If you ask everyone to come up with a lot of complaints, you can inadvertently create negative feelings even if the individual issues aren’t that serious.
No Head for Numbers: When Stats Trick Us
How good are you at estimating things? Two effects routinely trip us up here: neglecting base rates, and anchoring.
Base Rates
There are two ideas to keep in mind about Bayesian reasoning and how we tend to mess it up. The first is that base rates matter, even in the presence of evidence about the case at hand. This is often not intuitively obvious. The second is that intuitive impressions of the diagnosticity of evidence are often exaggerated. The combination of WYSIATI and associative coherence tends to make us believe in the stories we spin for ourselves.
Base Rates: The starting probability of something being true before any specific evidence is considered. Our brains often ignore this, focusing solely on new information. Example: A rare disease test is positive, but we forget that even with a highly accurate test, if the disease is rare the positive result is most likely wrong.
Overestimating Evidence: We tend to think a single piece of evidence is more definitive than it actually is. This is due to System 1 seeking simple stories to explain events. Example: We see an employee make one mistake and exaggerate how much this means that they’re always careless.
WYSIATI: Stands for “What You See Is All There Is.” Our brains don’t like uncertainty, so we subconsciously fill in information gaps and over-rely on the limited data we have.
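To see how badly intuition handles the rare-disease example above, it helps to run the numbers. A minimal sketch via Bayes’ theorem, with hypothetical figures (the prevalence and test accuracy below are made up for illustration):

```python
def posterior(base_rate, sensitivity, false_positive_rate):
    """P(disease | positive test), via Bayes' theorem."""
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# Hypothetical: 0.1% prevalence, 99% sensitivity, 1% false-positive rate.
p = posterior(base_rate=0.001, sensitivity=0.99, false_positive_rate=0.01)
print(f"P(disease | positive) = {p:.1%}")  # ~9%: the positive result is
# still most likely a false alarm, because the base rate is so low
```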
Don’t jump to conclusions about someone’s intent based on one action. Remember the broader context (base rate) of how they usually behave, and consider there might be factors you’re unaware of.
Think in terms of long-term probabilities, and be wary of assuming a single data point tells the whole story.
Look at an error in the context of someone’s overall performance record rather than assuming it defines their work quality.
Anchoring
The phenomenon we were studying is so common and so important in the everyday world that you should know its name: it is an anchoring effect. It occurs when people consider a particular value for an unknown quantity before estimating that quantity. What happens is one of the most reliable and robust results of experimental psychology: the estimates stay close to the number that people considered — hence the image of an anchor.
Be very aware of that first piece of information you get on any topic. It’s a lot harder than you think to shake its influence on your later thinking.
Amos liked the idea of an adjust-and-anchor heuristic as a strategy for estimating uncertain quantities: start from an anchoring number, assess whether it is too high or too low, and gradually adjust your estimate by mentally “moving” from the anchor. The adjustment typically ends prematurely, because people stop when they are no longer certain that they should move farther.
People adjust less (stay closer to the anchor) when their mental resources are depleted, either because their memory is loaded with digits or because they are slightly drunk. Insufficient adjustment is a failure of a weak or lazy System 2.
How does anchoring actually work in humans?
It’s not that we never adjust our initial impression, it’s that we often adjust insufficiently. We start with the anchor (that first bit of info) and tweak it slightly, not moving far enough to get to a more accurate answer.
Why We Stop Short: Adjusting takes mental effort. We naturally stop adjusting when we no longer feel confident about how much further to go. This is where laziness or even outside factors affecting our mental focus play a role.
Okay, but how big can this effect be? Here are a couple of examples:
Some visitors at the San Francisco Exploratorium were asked the following two questions: Is the height of the tallest redwood more or less than 1,200 feet? What is your best guess about the height of the tallest redwood? The “high anchor” in this experiment was 1,200 feet. For other participants, the first question referred to a “low anchor” of 180 feet. The difference between the two anchors was 1,020 feet. As expected, the two groups produced very different mean estimates: 844 and 282 feet. The difference between them was 562 feet. The anchoring index is simply the ratio of the two differences (562/1,020) expressed as a percentage: 55%. The anchoring measure would be 100% for people who slavishly adopt the anchor as an estimate, and zero for people who are able to ignore the anchor altogether. The value of 55% that was observed in this example is typical. Similar values have been observed in numerous other problems.
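The anchoring index in that passage is just a ratio of two differences; here is a quick sketch of the arithmetic, using the numbers from the quote:

```python
def anchoring_index(high_anchor, low_anchor, high_estimate, low_estimate):
    """Fraction of the anchor gap that survives in people's estimates.
    1.0 = estimates slavishly follow the anchor; 0.0 = anchor ignored."""
    return (high_estimate - low_estimate) / (high_anchor - low_anchor)

# Redwood experiment: anchors of 1,200 ft and 180 ft,
# mean estimates of 844 ft and 282 ft.
idx = anchoring_index(1200, 180, 844, 282)
print(f"anchoring index: {idx:.0%}")  # 55%
```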
In an experiment conducted some years ago, real-estate agents were given an opportunity to assess the value of a house that was actually on the market. They visited the house and studied a comprehensive booklet of information that included an asking price. Half the agents saw an asking price that was substantially higher than the listed price of the house; the other half saw an asking price that was substantially lower. Each agent gave her opinion about a reasonable buying price for the house and the lowest price at which she would agree to sell the house if she owned it. The agents were then asked about the factors that had affected their judgment. Remarkably, the asking price was not one of these factors; the agents took pride in their ability to ignore it. They insisted that the listing price had no effect on their responses, but they were wrong: the anchoring effect was 41%.
You should avoid being at the mercy of judges’ sentencing, but in case you ever are…
The power of random anchors has been demonstrated in some unsettling ways. German judges with an average of more than fifteen years of experience on the bench first read a description of a woman who had been caught shoplifting, then rolled a pair of dice that were loaded so every roll resulted in either a 3 or a 9. As soon as the dice came to a stop, the judges were asked whether they would sentence the woman to a term in prison greater or lesser, in months, than the number showing on the dice. Finally, the judges were instructed to specify the exact prison sentence they would give to the shoplifter. On average, those who had rolled a 9 said they would sentence her to 8 months; those who rolled a 3 said they would sentence her to 5 months; the anchoring effect was 50%.
The psychologists Adam Galinsky and Thomas Mussweiler proposed more subtle ways to resist the anchoring effect in negotiations. They instructed negotiators to focus their attention and search their memory for arguments against the anchor. The instruction to activate System 2 was successful. For example, the anchoring effect is reduced or eliminated when the second mover focuses his attention on the minimal offer that the opponent would accept, or on the costs to the opponent of failing to reach an agreement. In general, a strategy of deliberately “thinking the opposite” may be a good defense against anchoring effects, because it negates the biased recruitment of thoughts that produces these effects.
Combating Anchoring: Knowing the anchoring effect is one thing, but how do we overcome it? Here, the solution lies in actively engaging System 2’s analytical thinking.
Focus & Counter-Arguments: The study suggests negotiators who consider reasons against the initial offer (anchor) are less swayed by it. This refocuses attention and activates System 2 to find counter-arguments to the anchor’s influence.
Think Opposite: A broader strategy is to deliberately consider the opposing viewpoint. This disrupts the biased thinking System 1 uses to latch onto the initial anchor and forces a more balanced analysis.
Past Imperfect: Your Brain’s Tricky Memory
Building Narratives
Think about a bad day you had. Do you remember every awful moment, or just the worst parts and how it ended? That’s your brain simplifying things! It has two memory systems:
- The “Live” Memory: Records feelings in the moment – like the pain of stubbing your toe.
- The “Story” Memory: Makes a highlight reel later – focusing on the most intense moments and how the whole thing ended.
Problem: The “Story” memory usually wins, and it skips the boring bits. Your brain scores an experience by its most intense moment and its ending, not its length, so a mostly good concert with an awful ending can be remembered as worse than a shorter, uniformly mediocre one.
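This is what Kahneman calls the peak-end rule. A toy sketch of how the “Story” memory might score two concerts, assuming (as a simplification) that remembered value is just the average of the worst moment and the final one, with made-up enjoyment scores:

```python
def remembered_score(moments):
    """Peak-end rule, simplified: memory ~ average of the most intense
    (here: worst) moment and the final moment; duration is ignored."""
    return (min(moments) + moments[-1]) / 2

# Made-up enjoyment scores per song (0 = awful, 10 = wonderful).
great_show_awful_ending = [9, 9, 9, 9, 9, 1]
short_and_mediocre = [5, 5, 5]

print(remembered_score(great_show_awful_ending))  # 1.0 -- "ruined"
print(remembered_score(short_and_mediocre))       # 5.0
```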
We have limited information about what happened on a day, and System 1 is adept at finding a coherent causal story that links the fragments of knowledge at its disposal.
Our Brains Fill the Gaps: When we only have partial information, our brains crave a complete story. System 1 leaps in to generate a plausible explanation, even if it’s not necessarily true, simply to make us more comfortable.
False Narratives: These stories created by System 1 can be based on assumptions, biases, and even our current mood. This leads to a potentially inaccurate interpretation of events.
Be mindful of this tendency when misunderstandings happen. Don’t jump to conclusions based on what is obvious.
Are there missing data points? Are there alternate ways to interpret the figures?
We are evidently ready from birth to have impressions of causality, which do not depend on reasoning about patterns of causation. They are products of System 1.
This isn’t learned behavior, it’s instinctive. It’s healthy to question your intuitions, even the ones that feel deeply ingrained.
This means some gut instincts about why things happen are likely wrong, especially early in life. Our brains want to make fast connections, even without sufficient evidence.
Mind Over Matter: How Focus Changes Everything
Relax vs Focus Mode
Our brain has two modes:
- Relax Mode (Cognitive Ease): Our “fast thinker” (System 1) runs the show. In this mode we are more creative but also prone to slip-ups. It’s like being on autopilot.
- Focus Mode (Cognitive Strain): Our “slow thinker” (System 2) takes over. In this mode we are more careful and analytical, but less spontaneous. It’s like actively studying for a test.
If you care about being thought credible and intelligent, do not use complex language where simpler language will do.
Put your ideas in verse if you can; they will be more likely to be taken as truth.
Finally, if you quote a source, choose one with a name that is easy to pronounce.
Things that can influence how people judge our ideas and credibility:
- Clear, straightforward language is actually more likely to make us seem credible.
- Information that rhymes or has a pleasing rhythm is easier for our brains to process. This makes it seem more true, even if it isn’t!
- We tend to trust information more if it comes from a source with an easily pronounced name. This is an unconscious bias.
When trying to explain something important, focus on clarity, not fancy words. Break down complex ideas into smaller, digestible pieces.
Be wary of presentations loaded with jargon just for the sake of it. Does the presenter truly understand their subject matter, or are they hiding behind buzzwords?
This isn’t about dumbing things down. It’s about respecting the audience by making our ideas easy to grasp.
The often-used phrase “pay attention” is apt: you dispose of a limited budget of attention that you can allocate to activities, and if you try to go beyond your budget, you will fail.
The term “pay attention” perfectly captures this dynamic: we possess a finite budget of attention to distribute among tasks, and exceeding this budget leads to failure.
System 2 typically governs attention, demanding considerable effort and focus.
It’s a common misconception that focus is ever-present; however, our cognitive capacity is inherently limited.
Attempting to concentrate deeply on several tasks simultaneously often results in failing all of them.
Much like the electricity meter outside your house or apartment, the pupils offer an index of the current rate at which mental energy is used.
Reading other people, especially kids and spouse: Paying attention to subtle changes in someone’s pupils can give clues about their mental state. Are they struggling to understand what you’re saying? Are they truly engaged in the conversation?
To note: pupil dilation can also be affected by light levels and other factors.
Cognitive Strain
Remember that System 2 is lazy and that mental effort is aversive. If possible, the recipients of your message want to stay away from anything that reminds them of effort, including a source with a complicated name.
Cognitive strain, whatever its source, mobilizes System 2, which is more likely to reject the intuitive answer suggested by System 1.
The mere exposure effect occurs, Zajonc claimed, because the repeated exposure of a stimulus is followed by nothing bad. Such a stimulus will eventually become a safety signal, and safety is good.
Mood evidently affects the operation of System 1: when we are uncomfortable and unhappy, we lose touch with our intuition.
At the other pole, sadness, vigilance, suspicion, an analytic approach, and increased effort also go together.
System 2, the Effort Avoider: Our brains naturally lean towards System 1’s fast and easy thinking. Mental effort takes work and feels unpleasant.
Stressed Brain = Bad Decisions: Stress, whether from work or personal life, favors System 1’s impulsive reactions.
System 2 Activation: Mental discomfort (including sadness), suspicion, and vigilance forces our slower, more analytical System 2 to take the lead. This can be useful for careful decision-making.
Important Note: This is not an endorsement of being constantly sad! However, it’s recognizing that our emotional state significantly influences how we process information and make choices.
System 1 does not keep track of alternatives that it rejects, or even of the fact that there were alternatives.
System 1 is gullible and biased to believe, System 2 is in charge of doubting and unbelieving, but System 2 is sometimes busy, and often lazy.
Think you’re outsmarting this process? Nope…
Our brains naturally go with the first plausible idea. System 1 doesn’t consider other options or question what it’s presented with - it’s all about efficiency.
Our critical thinking and ability to weigh alternatives lies with System 2. However, engaging System 2 takes effort, and sometimes we just don’t want to put in the work.
Willpower
Baumeister’s group has repeatedly found that an effort of will or self-control is tiring; if you have had to force yourself to do something, you are less willing or less able to exert self-control when the next challenge comes around. The phenomenon has been named ego depletion.
The testers found that training attention not only improved executive control; scores on nonverbal tests of intelligence also improved and the improvement was maintained for several months.
Willpower as a Resource: Just like a muscle, our ability to focus, make tough choices, and resist temptations can get tired out. This is called “ego depletion.”
After Hard Effort, Less Control: If we spend mental energy resisting cookies all day, we may be more likely to give in to an impulse purchase later. Our self-control is worn down.
Taking Chances: The Way Probabilities Are Presented To Us
Perception
We think if everyone sees the same stats, they’d make the same choices, right? Wrong! How risks are presented can drastically change how we see them.
Consider these two scenarios, which were presented to different groups, with a request to evaluate their probability:
- A massive flood somewhere in North America next year, in which more than 1,000 people drown
- An earthquake in California sometime next year, causing a flood in which more than 1,000 people drown
The California earthquake scenario is more plausible than the North America scenario, although its probability is certainly smaller. As expected, probability judgments were higher for the richer and more detailed scenario, contrary to logic. This is a trap for forecasters and their clients: adding detail to scenarios makes them more persuasive, but less likely to come true.
The laziness of System 2 is an important fact of life, and the observation that representativeness can block the application of an obvious logical rule is also of some interest.
Details vs. Logic: The more specific and detailed a scenario, the more plausible it feels, even if logically it’s less likely.
System 2 Laziness: It takes mental effort to override the “feels true” response and think through probabilities carefully. Our brains often take the easier path of going with what intuitively feels probable.
Don’t be swayed by emotionally vivid “worst-case scenarios” when making decisions. A detailed description of a potential problem doesn’t make it more likely to occur. Consider realistic probabilities.
Don’t confuse what feels plausible with what is statistically probable. Plausibility is the domain of the easily fooled System 1; assessing true probability requires System 2 engagement.
Not Robots: Why Logic Isn’t Always King
Irrational Beings
Think you make decisions based solely on facts and logic? Think again! Economists used to assume everyone was an “Econ” – a perfectly rational being who always chooses what’s the most logical.
But that’s not how our brains work. Here’s why:
- It’s Not Just About the Prize: Imagine winning $100! Now imagine losing $100. Even though the amounts are equal, most people feel the loss more intensely. It’s emotional, not just logical (see the sketch after this list).
- Your Starting Point Matters: Two people win the same lottery prize. One started out broke, the other was already rich. The one who was broke will likely be much happier, even though they have the same amount now!
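Prospect theory, which this review returns to below, captures both points with a value function defined over gains and losses relative to a reference point, and steeper for losses. A minimal sketch using the parameters Tversky and Kahneman estimated in their 1992 paper (alpha = 0.88, lambda = 2.25):

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function. x is a gain or loss relative to
    the reference point (your starting position), not total wealth."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha  # losses are weighted ~2.25x as heavily

print(f"winning $100 feels like {value(100):+.0f}")  # ~ +58
print(f"losing $100 feels like {value(-100):+.0f}")  # ~ -129: losses loom larger
```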
Overconfidence
I had recently discovered that I was not a good intuitive statistician, and I did not believe that I was worse than others.
Humans are quite bad at intuitive statistics. Smart, educated guys like Kahneman were no exception.
Overconfidence: We all tend to overestimate our abilities, especially in areas where we have some knowledge (illusion of knowledge).
A little self-doubt can be a good thing! If we’re feeling overly confident about an investment opportunity, that’s a sign to dig deeper, not just go with our gut.
Affect Heuristic
We concluded that people must somehow simplify that impossible task, and we set out to find how they do it. Our answer was that when called upon to judge probability, people actually judge something else and believe they have judged probability.
The dominance of conclusions over arguments is most pronounced where emotions are involved. The psychologist Paul Slovic has proposed an affect heuristic in which people let their likes and dislikes determine their beliefs about the world.
With so much in the world that we don’t know, how does this happen?
Brains Take Shortcuts: When assessing probability (like the risk of something happening), it’s an incredibly complex mental task. System 1 substitutes a simpler judgment based on gut feelings or emotion.
Affect Heuristic: This is the term for our tendency to base important decisions on emotional reactions rather than a careful evaluation of the facts.
Emotional Bias
Slovic’s research team surveyed opinions about various technologies, including water fluoridation, chemical plants, food preservatives, and cars, and asked their respondents to list both the benefits and the risks of each technology. They observed an implausibly high negative correlation between two estimates that their respondents made: the level of benefit and the level of risk that they attributed to the technologies. When people were favorably disposed toward a technology, they rated it as offering large benefits and imposing little risk; when they disliked a technology, they could think only of its disadvantages, and few advantages came to mind.
The implication is clear: as the psychologist Jonathan Haidt said in another context, “The emotional tail wags the rational dog.”
As Slovic has argued, the amount of concern is not adequately sensitive to the probability of harm; you are imagining the numerator — the tragic story you saw on the news — and not thinking about the denominator. Sunstein has coined the phrase “probability neglect” to describe the pattern.
Emotional Bias: When we like something, we minimize the potential risks. Conversely, when we strongly dislike something, we exaggerate the risks and ignore potential benefits.
The Power of Stories: Our risk perception is driven by memorable “worst-case scenarios,” not on statistical probabilities. This is why news stories about rare events can incite intense fear, even if statistically unlikely to happen to you.
Risk/benefit assessment should be data-driven. Be mindful of strong emotions skewing your logic.
Objectivity
Resistance to stereotyping is a laudable moral position, but the simplistic idea that the resistance is costless is wrong. The costs are worth paying to achieve a better society, but denying that the costs exist, while satisfying to the soul and politically correct, is not scientifically defensible. Reliance on the affect heuristic is common in politically charged arguments. The positions we favor have no cost and those we oppose have no benefits. We should be able to do better.
Stereotyping: The act of judging someone based on a group affiliation rather than their individual merits. While clearly unfair, avoiding stereotypes altogether can be difficult.
Be aware of your own biases and how they might influence your judgments of others.
Cost of Anti-Stereotyping: This challenges the overly simplistic view that fighting stereotypes has no downsides. It can require mental effort to challenge biases and consider individuals on a case-by-case basis.
Emotion in Arguments: Discussions about sensitive topics like race, gender, etc., often trigger emotional responses. This can activate the affect heuristic, leading us to demonize opposing viewpoints and ignore any potential benefits they might offer.
Even if you disagree with someone’s position, try to understand their reasoning and the potential benefits of their arguments.
Striving for Objectivity: The ideal is to have balanced, logical arguments, even on emotionally charged topics.
Nuanced Thinking: Complex social issues require complex thinking. Avoid resorting to oversimplified stereotypes or demonizing opposing viewpoints.
Gut Feeling: When Emotions Trump Logic
Prospect Theory
Remember the “Econs,” those perfectly logical decision-makers? Turns out, humans don’t operate like that! Kahneman’s prospect theory explains why.
Characteristic of unbiased predictions is that they permit the prediction of rare or extreme events only when the information is very good. If you expect your predictions to be of modest validity, you will never guess an outcome that is either rare or far from the mean. If your predictions are unbiased, you will never have the satisfying experience of correctly calling an extreme case.
Extreme predictions and a willingness to predict rare events from weak evidence are both manifestations of System 1. It is natural for the associative machinery to match the extremeness of predictions to the perceived extremeness of evidence on which it is based — this is how substitution works. And it is natural for System 1 to generate overconfident judgments, because confidence, as we have seen, is determined by the coherence of the best story you can tell from the evidence at hand. Be warned: your intuitions will deliver predictions that are too extreme, and you will be inclined to put far too much faith in them.
We will not learn to understand regression from experience. Even when a regression is identified, as we saw in the story of the flight instructors, it will be given a causal interpretation that is almost always wrong.
Intuition Leads to Overconfidence: If we think the evidence clearly points in a certain direction, we often assume the outcome will be extreme. Our System 1 makes these predictions effortlessly, which amplifies our confidence in their correctness.
Regression to the Mean is Subtle: Our brains struggle to grasp this concept and its impact on future outcomes. Even when it’s pointed out, we tend to explain it away as specific circumstances, not a general pattern.
The Limits of Experience: Simply going through life won’t make us any better at identifying regression to the mean at play. The tendency of extreme outcomes to move back towards average is counterintuitive.
The more extreme the prediction, the greater the need for exceptionally strong evidence to support it. Be humble when assessing your ability to predict the future.
People are asked for a prediction but they substitute an evaluation of the evidence, without noticing that the question they answer is not the one they were asked. This process is guaranteed to generate predictions that are systematically biased; they completely ignore regression to the mean.
A cognitive trap where people confuse how likely something is to happen with how easy it is to predict.
Misinterpreting the Question: When asked to predict a future outcome, people often focus on how strong the current evidence is (evaluation) instead of the actual likelihood of the event happening (prediction).
Don’t base your expectations for your child’s future behavior solely on their recent actions. They might be going through a phase (good or bad) that won’t necessarily define their long-term personality or abilities.
Be cautious about extrapolating short-term trends into long-term forecasts. Unexpected events or changes in the market can quickly alter positive or negative outlooks.
When making predictions, distinguish between how easy it is to understand a situation based on current information and the actual probability of a future outcome. Consider regression to the mean to avoid biased predictions based on temporary highs or lows.
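Kahneman’s corrective recipe for taming intuitive predictions, described in the book, is to start from the baseline and move toward your intuitive estimate only in proportion to how predictive the evidence actually is. A small sketch with hypothetical numbers:

```python
def regressive_prediction(baseline, intuitive_estimate, correlation):
    """Start at the baseline (the average outcome) and move toward the
    intuitive estimate only as far as the evidence's predictive power
    (its correlation with the outcome) justifies."""
    return baseline + correlation * (intuitive_estimate - baseline)

# Hypothetical: average GPA is 3.0, one dazzling interview makes your
# gut say 3.8, but interviews only correlate ~0.3 with actual GPA.
print(regressive_prediction(baseline=3.0, intuitive_estimate=3.8,
                            correlation=0.3))  # 3.24 -- far less extreme
```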
False Images: When Your Mental Shortcuts Lead You Astray
Drawing Conclusions
Our brain loves to fill in the blanks to create a complete picture of the world. It’s like having a file of mental snapshots for things like “summer” or “job interview.” The problem? These snapshots can be misleading!
Because the circumstances of the recurrence were the same, the second incident was sufficient to create an active expectation: for months, perhaps for years, after the event we were reminded of burning cars whenever we reached that spot of the road and were quite prepared to see another one (but of course we never did).
Kahneman once saw a burning car on the road.
Have you ever been pulled over? I have. From that point forward, I half-expected to see police at that exact spot whenever I drove past it.
While expecting another incident was System 1’s fast reaction, the critical thinking of System 2 realized the actual likelihood of being pulled over again or seeing another burning car in the same place is low.
System 1 is radically insensitive to both the quality and the quantity of the information that gives rise to impressions and intuitions.
Easily Fooled: System 1 is about quick conclusions, not careful analysis. It doesn’t matter if the information it’s working with is flawed, incomplete, or just plain wrong – it will still churn out an answer.
Recency Bias
After each significant earthquake, Californians are for a while diligent in purchasing insurance and adopting measures of protection and mitigation. They tie down their boiler to reduce quake damage, seal their basement doors against floods, and maintain emergency supplies in good order. However, the memories of the disaster dim over time, and so do worry and diligence. The dynamics of memory help explain the recurrent cycles of disaster, concern, and growing complacency that are familiar to students of large-scale emergencies.
As long ago as pharaonic Egypt, societies have tracked the high-water mark of rivers that periodically flood — and have always prepared accordingly, apparently assuming that floods will not rise higher than the existing high-water mark. Images of a worse disaster do not come easily to mind.
How humans react to natural disasters.
Disaster Amnesia: Intense experiences like an earthquake create a strong fear response initially. This drives preparedness. But as time passes, the memory fades, and so does that sense of urgency.
Failure to Imagine Worse: We rely on past experiences to guide future planning, but often underestimate the possibility of events exceeding those past experiences.
Complacency Cycle: This leads to a pattern of intense preparedness immediately after a disaster which slowly degrades over time until the next disaster occurs.
Who Should Read It?
I used to think I could outsmart the clock. “This task? I’ll knock it out – I’m efficient!” Then deadlines would pass by, leaving me baffled. Was I secretly lazy? Was I incompetent? Hopefully neither. I was simply falling for classic traps: overconfidence, ignoring base rates … and a whole host of other things I learned about in “Thinking, Fast and Slow.”
If you want to understand how your mind works and equip yourself to make smarter choices and actually learn from your missteps, this is your book. It unveils how our brains have these sneaky autopilot systems that sabotage us, even when we’re trying our best.
Fair warning: it might feel a little dry at first glance. At least, it did for me. Once I started applying the ideas – even to small stuff, at work or in my personal life – the book clicked. I’d find myself revisiting my highlights and rereading some parts.
And if you want to dig deeper, definitely check out Michael Mauboussin’s “Think Twice” and Annie Duke’s “Thinking in Bets”.
