Superforecasting: The Art and Science of Prediction

What is the cost of making an incorrect prediction?

Loss of life? A few billion dollars? A sullied reputation? A lost job?

It could be any or all of these or more.

Before the Iraq War, the Bush administration made some bold claims about Iraq having weapons of mass destruction. They said these WMDs posed a serious threat to the U.S. and its allies and used this as the main reason for invading Iraq.

The CIA and other intelligence agencies backed up these claims with reports and assessments. Even top officials like Colin Powell went to the UN Security Council and said Iraq definitely had active WMD programs and stockpiles.

But after the invasion, when the U.S. took over Iraq, it turned out the intelligence was way off base. They didn’t find any active WMD programs or big stockpiles like they expected.

This had some big consequences. It made the U.S. government look bad, both at home and abroad, because the main justification for the war turned out to be based on faulty intel. And the war itself really messed up Iraq, leading to years of instability, violence between different groups, and lots of lives lost.

In their book “Superforecasting,” Philip Tetlock and Dan Gardner bring up this example to show how hard it is to make accurate predictions in complex, high-stakes situations. Setting aside the conspiracy theories about the invasion, Tetlock and Gardner say the intelligence community and policymakers fell into some common traps, like groupthink, confirmation bias, and not really questioning their assumptions. The faulty prediction changed the course of history.

Everything we do involves forecasts about how the future will unfold. Whether buying a new house or changing job, designing a new product or getting married, our decisions are governed by implicit predictions of how things are likely to turn out. The problem is, we’re not very good at it.

In Superforecasting, Tetlock and Gardner teach us how to make better forecasts and predictions. They draw on insights from the Good Judgment Project (GJP), a prediction tournament that ran from 2011 to 2015. They recruited thousands of people who made thousands of predictions about various geopolitical events, and the accuracy of those predictions was quantified using the Brier score.
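As a concrete illustration, here is a minimal sketch of the Brier score in the form Tetlock describes for binary questions, where the squared error is summed over both possible outcomes, so scores run from 0.0 (perfect) to 2.0 (maximally wrong). The forecasts below are made-up examples, not data from the tournament:

```python
def brier_score(forecasts, outcomes):
    """Average, over all questions, of the squared forecast error summed
    over both outcomes. 0.0 is perfect; 2.0 is worst; always saying
    50/50 scores 0.5."""
    total = 0.0
    for p, happened in zip(forecasts, outcomes):
        o = 1.0 if happened else 0.0          # what actually occurred
        total += (p - o) ** 2 + ((1 - p) - (1 - o)) ** 2
    return total / len(forecasts)

# A confident, correct forecast beats hedging; a confident, wrong one does worst.
print(round(brier_score([0.9], [True]), 2))   # 0.02
print(round(brier_score([0.5], [True]), 2))   # 0.5
print(round(brier_score([0.9], [False]), 2))  # 1.62
```

Note how the score punishes confident misses far more than it rewards confident hits, which is exactly why superforecasters are so careful about overconfidence.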

What did I get out of it?

So, what does it take to get better at forecasting? Well, for starters, it’s not about being born with some kind of special gift or being super smart.

Foresight isn’t a mysterious gift bestowed at birth. It is the product of particular ways of thinking, of gathering information, of updating beliefs. These habits of thought can be learned and cultivated by any intelligent, thoughtful, determined person.

although superforecasters are well above average, they did not score off-the-charts high and most fall well short of so-called genius territory, a problematic concept often arbitrarily defined as the top 1%, or an IQ of 135 and up. So, it seems intelligence and knowledge help, but they add little beyond a certain threshold—so superforecasting does not require a Harvard PhD and the ability to speak five languages.

In other words, superforecasting is a skill that can be learned and developed. The key is to have a growth mindset.

Staying in perpetual beta, and choosing a growth mindset over a fixed one, is vital.

For superforecasters, how they think is more important than what they think. Now, let’s take a look at how we can forecast like a superforecaster.

Fermi-ize the Question

Enrico Fermi loved brain teasers that made you estimate things. He understood that by breaking down the question, we can separate what we know from what we don’t know.

What Fermi understood is that by breaking down the question, we can better separate the knowable and the unknowable. So guessing—pulling a number out of the black box—isn’t eliminated. But we have brought our guessing process out into the light of day where we can inspect it. And the net result tends to be a more accurate estimate than whatever number happened to pop out of the black box when we first read the question. Of course, all this means we have to overcome our deep-rooted fear of looking dumb. Fermi-izing dares us to be wrong.

This doesn’t mean we stop guessing, but it makes our guessing process more transparent. We can take a closer look at it. And usually, this leads to a more accurate estimate than the first number that popped into our head. Of course, to do this, we have to get over our fear of looking dumb. Fermi-izing means being okay with being wrong.

When problems are complex, they can be overwhelming and lead to mistakes. That’s why it’s helpful to break them down into smaller, more manageable sub-problems. This allows us to focus on each part separately and better assess each component. By doing this, we can make more accurate predictions. The key is to figure out what information we can know and what we can’t.

Think about an accountant forecasting financials. They can break down each line item on the financial statements into basic parts. This approach makes the assumptions clear and allows for more precise adjustments as new information comes in. This leads to more reliable financial planning.

The same goes for looking at potential investments. Break each one down into key parts, like the company’s earnings, market conditions, and any new developments. This way, you’re not just relying on your instincts. You’re making decisions based on a clearer understanding of what’s really happening.
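To make the decomposition concrete, here is a minimal Fermi-style sketch. Every number in it is a hypothetical placeholder, not data from the book; the point is that each factor can be inspected and revised on its own:

```python
# Fermi-izing a forecast: break one hard question into smaller, estimable parts.
# "What might this product line's revenue be next year?" (all numbers hypothetical)

potential_customers = 200_000   # rough size of the addressable market
adoption_rate       = 0.05      # fraction we might plausibly convert
purchases_per_year  = 3         # repeat purchases per customer
avg_order_value     = 40.0      # dollars per purchase

revenue_estimate = (potential_customers * adoption_rate
                    * purchases_per_year * avg_order_value)
print(f"${revenue_estimate:,.0f}")  # $1,200,000
```

If new information arrives, say, the adoption rate looks closer to 3%, you adjust one factor rather than re-guessing the whole number.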

Strike the Right Balance between Outside Views and Inside Views

The authors talk about two different ways of looking at a situation: the “outside view” and the “inside view.” The outside view is all about the base rate - how common something is in general.

base rate — how common something is within a broader class. Daniel Kahneman has a much more evocative visual term for it. He calls it the “outside view”—in contrast to the “inside view,” which is the specifics of the particular case.

It turns out that the best forecasters start by anchoring their predictions in a way that doesn’t rely too much on their gut feelings about the details of the story. They always start with the base rate.

For example, when forecasting financial statements, start with how similar projects or the industry as a whole have performed in the past. This sets a realistic starting point for your estimates, so you’re less likely to be swayed by the unique aspects of the current project.

You may wonder why the outside view should come first. After all, you could dive into the inside view and draw conclusions, then turn to the outside view. Wouldn’t that work as well? Unfortunately, no, it probably wouldn’t. The reason is a basic psychological concept called anchoring. When we make estimates, we tend to start with some number and adjust. The number we start with is called the anchor. It’s important because we typically under adjust, which means a bad anchor can easily produce a bad estimate. And it’s astonishingly easy to settle on a bad anchor. In classic experiments, Daniel Kahneman and Amos Tversky showed you could influence people’s judgment merely by exposing them to a number—any number, even one that is obviously meaningless, like one randomly selected by the spin of a wheel. So, a forecaster who starts by diving into the inside view risks being swayed by a number that may have little or no meaning. But if she starts with the outside view, her analysis will begin with an anchor that is meaningful. And a better anchor is a distinct advantage.

When we make estimates, we tend to start with a number and then adjust from there. The number we start with is the anchor. The problem is, we usually don’t adjust enough. So, if we start with a bad anchor, we can easily end up with a bad estimate.

But if you start with the outside view, the analysis begins with a meaningful anchor, which gives you a better starting point.

Once you have that starting point, the real research work begins. This is where you read intensely and do some serious detective work to get good information. You use what you learn to adjust your base rate estimate up or down.
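The anchor-then-adjust process can be sketched in a few lines. The 30% adjustment weight below is an illustrative assumption, not a figure from the book:

```python
def anchored_forecast(base_rate, inside_view, weight_on_inside=0.3):
    """Start from the outside-view base rate, then move partway toward
    the inside-view estimate. The default 30% weight is an illustrative
    choice; the right weight depends on how diagnostic the specifics are."""
    return base_rate + weight_on_inside * (inside_view - base_rate)

# Outside view: ~30% of comparable projects finish on time.
# Inside view: this team's specifics suggest 70%.
print(round(anchored_forecast(0.30, 0.70), 2))  # 0.42
```

Starting from 30% rather than from the optimistic 70% keeps the meaningful anchor in control and makes the adjustment explicit and debatable.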

Reacting to New and Disconfirming Information

When you make a forecast, it’s based on what you know at that moment. But to keep your forecast relevant, you need to keep updating the chances of different outcomes as new information comes in.

Super forecasters are really good at this. They update their forecasts often, even if the updates are small, as they get better data. They also think carefully about how relevant and important the new data is before they update their forecast. The more often they update, and the more specific the updates are, the better their performance tends to be.

It’s important to update your forecasts as new evidence shows up, but you shouldn’t make big changes without a good reason. Keep evaluating how much weight the new evidence has compared to what you already know.
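Weighing new evidence against what you already believe is, at bottom, Bayes’ rule. Here is a minimal sketch with made-up numbers:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """One Bayesian update: revise the probability of an event after
    seeing a new piece of evidence."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Illustrative: we think there's a 40% chance a rival launches this year.
# A credible report appears that is 3x more likely if a launch is coming
# (60% vs. 20%). How much should we move?
p = bayes_update(prior=0.40, p_evidence_if_true=0.60, p_evidence_if_false=0.20)
print(round(p, 2))  # 0.67
```

The move from 40% to 67% is substantial but not total: strong evidence shifts the forecast, it does not replace it, which is the “weigh the evidence, don’t overreact” habit in numerical form.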

It’s also really important to understand the arguments against your beliefs. For anything you believe, you should know what evidence would make you change your mind.

Charlie Munger, one of the best investors ever, said “I never allow myself to hold an opinion on anything that I don’t know the other side’s argument better than they do.” That’s a great way to think about it.

When you’re setting financial forecasts or budgets, it’s a good idea to decide ahead of time what specific signs or results would make you change your predictions. That way, you’re ready to adapt quickly if things change, and your financial planning stays relevant and strong.

You need to consider all sides of an argument and put them together into a clear understanding that includes all the nuances. Tetlock and Gardner point out that Superforecasters often use words like “however,” “but,” “although,” and “on the other hand” when they’re describing their views. They talk about possibilities and probabilities, not certainties.

Avoid Strongly Held Opinions

Just because someone explains something confidently doesn’t mean their prediction is going to be right. Amos Tversky, a famous cognitive psychologist who studied how people make decisions, said that most people have three settings: “gonna happen,” “not gonna happen,” and “maybe.”

Tetlock and Gardner borrow an idea from the philosopher Isaiah Berlin about two types of people: hedgehogs and foxes.

Hedgehogs see the world through one big idea. They try to make everything fit into this single way of thinking. They’re confident in their explanations and predictions, but they can miss information that doesn’t fit their theory.

Foxes, on the other hand, are more practical. They use a wide range of ideas and experiences to understand the world. They’re okay with things being complicated and uncertain, and they’ll change their minds when new evidence shows up.

Foxes don’t fare so well in the media. They’re less confident, less likely to say something is “certain” or “impossible,” and are likelier to settle on shades of “maybe.” And their stories are complex, full of “howevers” and “on the other hands,” because they look at problems one way, then another, and another. This aggregation of many perspectives is bad TV. But it’s good forecasting. Indeed, it’s essential.

Tetlock and Gardner found that when it comes to forecasting, foxes usually make better predictions than hedgehogs. Foxes are more likely to think about multiple perspectives, question their own assumptions, and change their thinking based on new information.

Hedgehogs, on the other hand, are more likely to stick to their beliefs even when the evidence doesn’t support them. They can be overconfident and miss important details.

hedgehog forecasters first see things from the tip-of-your-nose perspective. That’s natural enough. But the hedgehog also “knows one big thing,” the Big Idea he uses over and over when trying to figure out what will happen next. Think of that Big Idea like a pair of glasses that the hedgehog never takes off. The hedgehog sees everything through those glasses. And they aren’t ordinary glasses. They’re green-tinted glasses—like the glasses that visitors to the Emerald City were required to wear in L. Frank Baum’s The Wonderful Wizard of Oz. Now, wearing green-tinted glasses may sometimes be helpful, in that they accentuate something real that might otherwise be overlooked. Maybe there is just a trace of green in a tablecloth that a naked eye might miss, or a subtle shade of green in running water. But far more often, green-tinted glasses distort reality. Everywhere you look, you see green, whether it’s there or not. And very often, it’s not. The Emerald City wasn’t even emerald in the fable. People only thought it was because they were forced to wear green-tinted glasses! So, the hedgehog’s one Big Idea doesn’t improve his foresight. It distorts it. And more information doesn’t help because it’s all seen through the same tinted glasses. It may increase the hedgehog’s confidence, but not his accuracy. That’s a bad combination.

Hedgehog forecasters see things from their own narrow perspective. They have one big idea that they use over and over to try to figure out what will happen next. It’s like they’re wearing green-tinted glasses that they never take off. They see everything through those glasses, and it distorts their view of reality.

When it comes to your investments, try not to get stuck in a single investment strategy or economic viewpoint. Mix up your approach and stay open to new information that might challenge your initial assumptions. This can help you avoid biases that could cloud your investment decisions and hurt your portfolio’s performance.

Conducting Postmortems

You often see forecasters making excuses for their wrong predictions, even when they made those predictions with a lot of confidence.

Tetlock and Gardner talk a lot about something called hindsight bias. This is when people think past events were more predictable or obvious than they really were before they happened.

When forecasts span months or years, the wait for a result allows the flaws of memory to creep in. You know how you feel now about the future. But as events unfold, will you be able to recall your forecast accurately? There is a good chance you won’t. Not only will you have to contend with ordinary forgetfulness, you are likely to be afflicted by what psychologists call hindsight bias.

To minimize hindsight bias, it’s a good idea to write down the facts and reasoning you’re using when you make your initial prediction. Be specific. Then, after the event happens, do a thorough postmortem - even if things went well.

Not all successes imply that your reasoning was right. You may have just lucked out by making offsetting errors. And if you keep confidently reasoning along the same lines, you are setting yourself up for a nasty surprise.

Like most things in life, success in forecasting is more about the process than the outcome. You have to understand whether the outcome happened because of good reasoning or just a lucky break.

Don’t let a few successful investments make you think a potentially flawed strategy is a good one. Keep evaluating the reasons behind the outcomes to make sure your approach is still solid. This helps you stay humble and recognize that you might not know everything.

One way to do this is to use a decision journal to write down your estimates. It’s a good tool for keeping track of your thinking and seeing how it changes over time.
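A decision journal can be as simple as appending structured entries to a file. The fields below are my own suggestion for what to capture, not a format from the book; note the pre-committed “what would change my mind” field:

```python
import datetime
import json

# A minimal decision-journal entry (fields are illustrative, not prescribed).
entry = {
    "date": datetime.date.today().isoformat(),
    "question": "Will the product launch slip past Q3?",
    "forecast": 0.25,
    "reasoning": "Team hit the last two milestones; vendor risk remains.",
    "would_change_my_mind": "Vendor misses the June delivery date.",
}

# Append as one JSON line so entries stay easy to review in postmortems.
with open("decision_journal.jsonl", "a") as f:
    f.write(json.dumps(entry) + "\n")
```

Because each entry records the reasoning and the forecast at the time, the postmortem compares the outcome against what you actually thought, not against what hindsight says you thought.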

Have your Views Challenged

After you’re done with your analysis, the best thing you can do is share it with others and have them challenge it.

explore the similarities and differences between your views and those of others—and pay special attention to prediction markets and other methods of extracting wisdom from crowds. Synthesize all these different views into a single vision as acute as that of a dragonfly.

Now, you might be wondering what a dragonfly has to do with forecasting. It’s a metaphor for having a clear and complete understanding by bringing together multiple perspectives. A dragonfly has compound eyes, which means each eye is made up of thousands of tiny units. This lets the dragonfly see in almost every direction at the same time. It can process all this visual information to get a comprehensive view of its surroundings.

Having a variety of opinions and estimates gives you different perspectives. It might take longer to make your forecast, but it helps counter overconfidence and anchoring bias.

Now look at how foxes approach forecasting. They deploy not one analytical idea but many and seek out information not from one source but many. Then they synthesize it all into a single conclusion. In a word, they aggregate.

But what if you can’t get the “wisdom of the crowd”? You can do something similar by challenging your own views and bringing together different perspectives in your own head.

Researchers have found that merely asking people to assume their initial judgment is wrong, to seriously consider why that might be, and then make another judgment, produces a second estimate which, when combined with the first, improves accuracy almost as much as getting a second estimate from another person. The same effect was produced simply by letting several weeks pass before asking people to make a second estimate. This approach, built on the “wisdom of the crowd” concept, has been called “the crowd within.”

When you’re making financial forecasts or budgets, set up a process where you go back and question your initial predictions after some time has passed or after you’ve gathered more information. This will help you get the benefits of aggregating different perspectives, even if it’s just in your own head.
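The “crowd within” procedure can be sketched in a couple of lines: make an estimate, set it aside, make a second estimate while assuming the first was wrong, then average. The numbers are illustrative:

```python
# "The crowd within": average your own semi-independent estimates.
first_estimate  = 1_200_000   # initial revenue forecast
second_estimate =   900_000   # made weeks later, assuming the first was wrong

combined = (first_estimate + second_estimate) / 2
print(f"${combined:,.0f}")  # $1,050,000
```

The averaging only helps to the extent the two estimates are genuinely independent, which is why the time gap, or the deliberate assume-you-were-wrong prompt, matters.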

The Characteristics of a Superforecaster

The characteristics of a superforecaster:

  • Cautious, as nothing is certain.
  • Humble, as reality is infinitely complex.
  • Non-deterministic, as what happens is not meant to be and does not have to happen.
  • Open-minded, since beliefs are hypotheses to be tested, not treasures to be protected.
  • Intelligent and knowledgeable, with a need for cognition.
  • Reflective and introspective.
  • Numerate.
  • Pragmatic and not wedded to any idea or agenda.
  • Analytical and capable of stepping back from the tip-of-your-nose perspective.
  • Dragonfly-eyed in considering diverse perspectives.
  • A probabilistic thinker who makes judgments on a scale of maybes.
  • A thoughtful updater of estimates when facts change, employing a Bayesian approach.
  • Possessed of a growth mindset and a belief that it’s possible to get better.
  • Gritty and determined to keep at it however long it takes.

The Checklist for Making Forecasts

The five-step process checklist:

  • Distinguish as sharply as you can between the known and unknown and leave no assumptions unscrutinized.
  • Adopt the outside view and put the problem into a comparative perspective that downplays its uniqueness and treats it as a special case of a wider class of phenomena.
  • Adopt the inside view that plays up the uniqueness of the problem.
  • Explore the similarities and differences between your views and those of others—and pay special attention to prediction markets and other methods of extracting wisdom from crowds.
  • Express your judgment as precisely as you can, using a finely grained scale of probability.

Who Is This Book For?

If you often have to make important decisions when things are uncertain and there’s an element of chance involved, then this book is definitely for you.

And since we all live in an unpredictable world, that really means everyone.

You don’t have to work in finance or be an economist to make predictions and estimates. From everyday things like planning a purchase to big life choices like raising kids, we’re constantly betting on the future. And when we bet on the future, we’re making predictions and estimates about what might happen.

This book is all about how to correctly figure out the probability of things happening when there’s a lot of uncertainty involved. It goes into great detail about the methods used by a handful of “superforecasters” - just regular people - to come up with the right probability for these highly uncertain events.

So whether you’re a business executive making strategic decisions, a government policy maker, or just someone trying to navigate life’s uncertainties, the insights in this book can help you make better predictions and decisions. It’s not about having a crystal ball or being able to see the future perfectly. It’s about understanding how to think about uncertainty and using the right tools and mindset to assign accurate probabilities to different possible outcomes.

At the end of the day, we all have to make choices based on incomplete information and our best guesses about what might happen. This book gives you a framework for making those guesses as educated and accurate as possible. So, if you want to improve your forecasting skills and make better decisions in the face of uncertainty, this book is definitely worth checking out.