A few years ago, I was involved in an Enterprise Performance Management (EPM) implementation. Our goal was simple: increase operational efficiency and streamline financial reporting processes. We were convinced we had found the perfect solution, a cutting-edge SaaS product that promised the newest and best features.
But as we dove deeper into the project, things started to unravel:
- Our initial 9-month timeline stretched into two years.
- The product struggled to meet our specifications, falling short of operating at the required scale.
- We kept waiting for crucial functionality to be released, causing further delays.
Despite these red flags, we kept pouring more resources into the project. The deeper we got, the harder it became to communicate the mounting costs and missed deadlines up the chain. Ultimately, the project failed to deliver the desired results, despite our enormous investment.
Reading Bent Flyvbjerg and Dan Gardner’s book “How Big Things Get Done” took me back to that nightmare implementation. Flyvbjerg and Gardner argue that big projects often suffer from what’s known as a “fat-tailed distribution” of outcomes.
In the world of big projects, this usually means a skew towards the right tail. What does that mean? It’s like this: while there’s a slim chance a project might finish early or under budget (left tail), there’s a much higher chance it’ll run way over time and cost (right tail).
As the authors put it:
Most big projects are not merely at risk of not delivering as promised. Nor are they only at risk of going seriously wrong. They are at risk of going disastrously wrong because their risk is fat-tailed. Against that background, it is interesting to note that the project management literature almost completely ignores systematic study of the fat-tailedness of project risk.
Flyvbjerg and Gardner tell us why this happens. They show that most big projects, whether it’s building a skyscraper, launching a space mission, or even implementing a new IT system, tend to run late and over budget. It’s not just bad luck - it’s a systemic issue.
But here’s the good news: their book isn’t just about pointing out problems. It’s packed with insights on how to avoid these pitfalls. They’ve studied successful projects from around the world and across history, from the Empire State Building to the development of COVID-19 vaccines, to figure out what works.
If you’re involved in any kind of big project - or just curious about how the world’s biggest things get built - this book is like a roadmap for avoiding the trap I fell into with my EPM project. It’s about learning to see the warning signs before it’s too late and making decisions based on present realities.
What Did I Get Out of It?
The book’s main focus is on big infrastructure projects - bridges, tunnels, museums, and the like. But don’t let that fool you. The insights here can be applied to all sorts of projects, even smaller ones.
As I read, I kept thinking about how these lessons could help with projects I have been involved in and those I witness on a daily basis around me. There’s a lot to learn here, whether you’re implementing a new system or doing something as simple as preparing for reporting season.
Let’s look at some key takeaways and how they might apply in the finance world.
The Rush to Commit
The rush to commit is also referred to as the “commitment fallacy.” It’s a fancy term for a simple idea: once we’ve decided to do something, we tend to stick with it, even when it’s clearly not working out.
In the world of big projects, this often shows up as what Flyvbjerg and Gardner call “strategic misrepresentation.” They explain it like this:
Strategic misrepresentation [is] the tendency to deliberately and systematically distort or misstate information for strategic purposes. If you want to win a contract or get a project approved, superficial planning is handy because it glosses over major challenges, which keeps the estimated cost and time down, which wins contracts and gets projects approved.
In finance, we might not be building bridges, but we’ve all seen projects that started with rosy projections and ended up way over budget. Maybe it’s a new accounting system that was supposed to streamline everything, or a reporting process that was going to save us tons of time.
The authors warn us:
Happy endings are rare when projects start in a rush based on the commitment fallacy.
How many times have we rushed into a project without really thinking it through? We get excited about the potential benefits and gloss over the challenges. Before we know it, we’re too far in to turn back.
Flyvbjerg and Gardner describe this process:
Purposes and goals are not carefully considered. Alternatives are not explored. Difficulties and risks are not investigated. Solutions are not found. Instead, shallow analysis is followed by quick lock-in to a decision that sweeps aside all the other forms the project could take.
This might look like committing to a specific software solution without fully exploring alternatives, or locking into a reporting format without considering how it might need to evolve.
The lesson here? Before diving in, take a step back. Whether it’s a system implementation or a new reporting process, we need to resist the urge to commit too quickly. Instead, let’s take the time to really understand the challenges, explore alternatives, and plan for risks. It might feel slower at first, but it could save us from a world of pain down the road.
The Power of Active Planning
We often feel the pressure to act quickly. There’s always a sense of urgency, whether it’s launching a new financial product, implementing a cost-cutting measure, or responding to market changes. But Flyvbjerg and Gardner’s book reminds us of an important truth: sometimes, slowing down is the fastest way to get things done.
The authors emphasize that active planning is crucial:
Pushing the project’s vision to the point where it is sufficiently researched, analyzed, tested and detailed so we have a reliable roadmap of the way forward.
Mistakes can be costly. Think about the last time your team rushed into a new initiative. Maybe it was a new budgeting process or a treasury management system. How much time did you spend fixing issues that could have been avoided with better planning?
Flyvbjerg and Gardner divide projects into two phases: planning and delivery. They argue that the planning phase is a low commitment phase where tests and experiments are relatively cheap. In contrast, the delivery phase is high commitment, where changes can be costly or even lead to project failure.
However, planning often gets a bad rap.
Planning has a bad reputation because it’s seen as a highly bureaucratic exercise.
We might associate planning with endless meetings and paperwork. But effective planning isn’t about bureaucracy; it’s about careful consideration, exploration, and risk mitigation.
Flyvbjerg and Gardner warn against the “planning fallacy,” our tendency to underestimate the time and resources needed for a task:
You expect to get downtown on a Saturday night within twenty minutes, but it takes forty minutes instead and now you’re late—just like last time and the time before that.
This might look like consistently underestimating the time needed for month-end close or the complexity of implementing a new accounting standard.
The book emphasizes the importance of thorough planning:
Think slow, act fast: That’s the secret of success.
This approach is crucial in finance. Whether we’re developing a new risk management strategy or overhauling our financial reporting system, taking the time to “sharpen our ax” can save us countless hours of trouble down the line.
The key takeaway? Don’t rush into the delivery phase, as tempting as it is to show quick progress. Remember:
Happy endings are rare when projects start in a rush based on the commitment fallacy.
Accuracy and reliability are of paramount importance. Hence, taking the time to plan thoroughly can be the difference between a successful project and a costly mistake.
So next time you’re feeling the pressure to jump into action, remember that thinking slow to act fast might just be the right thing to do.
Developing Prototypes
When we’re dealing with critical systems for tax compliance, financial reporting, and analysis, the idea of a “minimum viable product” can seem risky. After all, we can’t just release a half-baked financial system or an untested tax strategy into the wild. But Flyvbjerg and Gardner offer an intriguing alternative: the “maximum virtual product.”
When a minimum viable product is not feasible, use a maximum virtual product instead.
Imagine you’re planning to overhaul your entire financial reporting system. You can’t just implement a partial system and see how it goes - the risks are too high. Instead, you could create a detailed virtual model of the new system, allowing you to test and refine without exposing the company to real financial risk.
The book acknowledges the popularity of the lean startup model but argues for a broader view of planning:
Planning is doing, iteration and learning before you deliver at full scale.
This could mean creating detailed simulations of new tax strategies, mocking up new financial planning tools, or running extensive scenario analyses on proposed system changes.
The authors note:
The ideal testing method is: Test whatever you want to test in the real world with real people.
But they also recognize that this isn’t always possible, especially for big projects:
This type of testing is almost never possible for big projects because it is too expensive, compromises safety or would simply take way too long.
We can’t test a new accounting system with real company data due to security risks. We can’t trial a new tax strategy without potentially exposing the company to compliance issues. This is where virtual products come in.
Flyvbjerg and Gardner suggest:
Creating a maximum virtual product requires access to the necessary technology (which need not necessarily be sophisticated).
This could mean using data visualization tools to model new reporting formats, creating detailed flowcharts of proposed process changes, or using sandbox environments to test new financial software.
The authors remind us:
The minimum viable standard (for release) for, e.g., a skyscraper has a much higher bar than for a phone app.
Similarly, the bar for finance projects is high. A new budgeting process or tax compliance system needs to be extremely reliable before implementation. Virtual products allow us to aim for that high standard while minimizing risk and cost.
By embracing this approach, finance departments can innovate and improve without putting the company’s financial health at risk. It’s about finding that sweet spot between caution and progress, allowing us to think big while acting responsibly.
Managing Black Swans
Understanding the concept of the “window of doom” is crucial. Flyvbjerg and Gardner introduce this idea as:
The time that passes from the decision to do a project to its delivery, during which an event can ‘crash through’ and create trouble. That event can include a black swan.
This window represents a period of vulnerability. Imagine you’re in the middle of implementing a new ERP system when suddenly, a global pandemic hits or a major regulatory change is announced. These are the black swans that can derail even the most carefully planned projects.
The authors emphasize the importance of keeping this window as small as possible:
Projects that fail tend to drag on, while those that succeed zip along and finish. Why is that? Think of the duration of a project as an open window. The longer the duration, the more open the window. The more open the window, the more opportunity for something to crash through and cause trouble, including a big, bad black swan.
This could mean streamlining the implementation of new accounting standards, accelerating the rollout of improved financial planning tools, or expediting the transition to new tax compliance software.
The book warns us about the potential for cascading effects:
A black swan crashing through the window of vulnerability may itself cause a black swan outcome.
Consider a scenario where a delayed financial system upgrade coincides with a sudden economic downturn. The combination could lead to inaccurate forecasts, potentially triggering a series of poor business decisions.
So, how do we shrink this window of doom? The authors suggest:
Exhaustive planning that enables swift delivery, narrowing the time window that black swans can crash through, is an effective means of mitigating this risk.
This might involve:
- Thorough scenario planning before launching new financial processes
- Extensive testing of system changes in isolated environments
- Phased rollouts of new systems to limit exposure
The book states that the ultimate goal is simple but powerful:
Finishing is the ultimate form of black swan prevention; after a project is done, it can’t blow up, at least not as regards delivery.
This underscores the importance of not just starting strong but finishing swiftly. Whether it’s a system upgrade, a new financial reporting process, or a tax optimization strategy, the quicker we can move from planning to full implementation, the less time we leave for potential disasters to strike.
It’s Not Different
It’s easy to fall into the trap of thinking our projects are unique. Maybe we’re implementing a cutting-edge financial planning system or developing a novel tax strategy for a complex international structure. But Flyvbjerg and Gardner warn us about this “uniqueness bias” and its potential pitfalls.
Planners don’t value experience to the extent they should because they commonly suffer yet another behavioral bias, ‘uniqueness bias,’ which means they tend to see their projects as unique, one-off ventures that have little or nothing to learn from earlier projects.
This mindset can be particularly dangerous. We might think our new reporting system is so tailored to our company that we can’t learn from others’ experiences.
But as the book warns:
‘This time is different’ is the motto of uniqueness bias.
How often have we heard (or said) something similar when launching a new initiative?
The dangers of believing our project is unique are clear:
- We miss opportunities to learn from similar projects in other companies or industries.
- We fail to apply valuable lessons from past experiences, thinking they don’t apply to our “unique” situation.
- We overlook crucial data from comparable projects that could inform our forecasts and risk assessments.
Instead, the authors advise:
Take the outside view. Your project is special, but unless you are doing what has literally never been done before—building a time machine, engineering a black hole—it is not unique; it is part of a larger class of projects.
This might mean:
- When implementing a new system, research similar implementations in other companies, even if they’re in different industries.
- In financial planning and analysis, study how other organizations have improved their forecasting accuracy or streamlined their budgeting processes.
By overcoming uniqueness bias, we open ourselves up to a wealth of knowledge and experience. We can better anticipate risks, make more accurate forecasts, and ultimately increase our chances of project success.
As Flyvbjerg and Gardner remind us:
As with reference-class forecasting, the big hurdle to black swan management is overcoming uniqueness bias. If you imagine that your project is so different from other projects that you have nothing to learn from them, you will overlook risks that you would catch and mitigate if you instead switched to the outside view.
In finance, where accuracy and risk management are paramount, adopting this “outside view” isn’t just helpful; it’s essential.
Finding the Right Anchor
The concept of anchoring is crucial when making forecasts for projects, budgets, or financial planning. Flyvbjerg and Gardner highlight the importance of selecting appropriate anchors:
Use a good anchor, and you greatly improve your chance of making a good forecast; use a bad anchor, get a bad forecast.
This is particularly relevant when we often base our projections on historical data or industry benchmarks. For instance, when forecasting the cost of a new system implementation or estimating the time for a complex restructuring, we typically start with a reference point and adjust from there.
However, the authors warn about the dangers of poor anchoring:
When we experience delays and cost overruns, we naturally go looking for things that are slowing the project down and driving up costs. But those delays and overruns are measured against benchmarks. Are the benchmarks reasonable?
This might mean questioning whether our initial cost estimates for a new reporting process were realistic, or if our timeline for a system upgrade was based on outdated information.
The book suggests a solution:
See your project as one in a class of similar projects already done, as ‘one of those.’ Use data from that class—about cost, time, benefits, or whatever else you want to forecast—as your anchor.
This could involve creating a list of past project data, including costs, timelines, and outcomes for various financial initiatives.
The authors emphasize the importance of keeping it simple:
Define the class broadly. Err on the side of inclusion. And adjust the average only when there are compelling reasons to do so, which means that data exist that support the adjustment.
When planning your own system implementation projects, this might mean looking at a wide range of projects across different industries, rather than limiting yourself to industry-specific examples.
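The mechanics of reference-class anchoring are simple enough to sketch in a few lines. The project names and overrun ratios below are made up purely for illustration; the point is that the anchor comes from the class average, not from our own optimistic estimate:

```python
import statistics

# Hypothetical reference class: actual-vs-budget cost ratios from past
# system implementations across several industries (made-up figures).
reference_class = {
    "ERP rollout (manufacturing)": 1.45,
    "Reporting platform (retail)": 1.20,
    "Treasury system (banking)": 1.65,
    "Consolidation tool (pharma)": 1.10,
    "Planning suite (energy)": 1.80,
}

# The anchor is the class-wide average overrun, applied to our own budget.
anchor_ratio = statistics.mean(reference_class.values())

initial_budget_musd = 5.0
anchored_forecast = initial_budget_musd * anchor_ratio

print(f"Class average overrun: {anchor_ratio:.2f}x")
print(f"Anchored forecast for a ${initial_budget_musd:.1f}M budget: ${anchored_forecast:.1f}M")
```

Per the authors’ advice, you would only adjust this anchor away from the class average when specific data supports the adjustment.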
By adopting this approach, finance teams can:
- Improve the accuracy of their project forecasts.
- Reduce the risk of cost overruns and delays.
- Set more realistic expectations for stakeholders.
Remember, as the authors note:
To create a successful project estimate, you must get the anchor right.
Accurate forecasting can make or break a project (and sometimes even a company); therefore, the first step to accurate forecasting is starting with the right anchor.
Risk Management
Understanding risk is paramount. Flyvbjerg and Gardner introduce a crucial concept that many project management professionals might overlook: the fat-tailed distribution of project risks.
Most project types have fat tails. This is crucial for knowing how to forecast project costs properly.
Unlike a normal distribution where most outcomes cluster around the mean, fat-tailed distributions have a higher probability of extreme events. In managing projects like system implementations, this could translate to projects running significantly over budget or far behind schedule.
With these fat-tailed distributions, the mean is not representative of the distribution and therefore is not a good estimator for forecasts.
This is a critical insight for financial planning. When estimating the cost of a new ERP system implementation or the timeline for a major tax restructuring, relying on average figures from past projects might severely underestimate the potential for extreme outcomes.
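A small simulation makes the point concrete. The sketch below (my own illustration, not from the book, with invented parameters) compares a thin-tailed normal distribution of cost overruns against a lognormal one, a common stand-in for fat-tailed outcomes. The means look similar, but the worst cases diverge dramatically:

```python
import random
import statistics

random.seed(42)
n = 10_000  # number of simulated projects

# Thin-tailed assumption: overruns normally distributed around 1.0x budget.
normal_overruns = [max(0.0, random.gauss(1.0, 0.15)) for _ in range(n)]

# Fat-tailed assumption: overruns lognormally distributed
# (parameters chosen for illustration only).
fat_overruns = [random.lognormvariate(0.0, 0.6) for _ in range(n)]

for name, data in [("normal", normal_overruns), ("fat-tailed", fat_overruns)]:
    data = sorted(data)
    mean = statistics.mean(data)
    p95 = data[int(0.95 * n)]
    worst = data[-1]
    print(f"{name:>10}: mean {mean:.2f}x, 95th pct {p95:.2f}x, worst {worst:.2f}x")
```

Under the fat-tailed assumption, the worst simulated project overruns its budget several times over, even though the average looks tame. That is exactly why the mean is a poor forecast.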
Flyvbjerg and Gardner caution:
Information technology projects have fat tails.
This is particularly relevant given that many finance projects involve IT systems. Whether it’s implementing a new financial reporting tool or upgrading treasury management systems, we need to be aware of the heightened risk of extreme outcomes.
The authors advise:
Following the precautionary principle, you should also err on the side of caution and assume that your project is part of a fat-tailed distribution, because this is more likely to be the case than not.
This might mean:
- Building in larger contingencies for major system implementations or process changes.
- Communicating the potential for extreme outcomes to stakeholders, even for seemingly “routine” projects.
It’s important to note that even smaller projects aren’t immune. The authors warn against thinking fat tails don’t apply to “small projects.” This could include things like implementing a new budgeting tool or changing a financial reporting process.
By understanding and accounting for fat-tailed distributions in project risk, we can:
- Develop more accurate and robust project forecasts.
- Implement more effective risk management strategies.
- Better prepare our organizations for potential extreme outcomes.
Navigating the Fat Tail
In finance, projects can range from implementing new systems to overhauling strategies; understanding how to manage risk in fat-tailed distributions is crucial. Flyvbjerg and Gardner offer valuable insights on this topic.
The authors emphasize a shift in mindset:
If you face a fat-tailed distribution, shift your mindset from forecasting a single outcome (‘The project will cost X’) to forecasting risk (‘The project is X percent likely to cost more than Y’), using the full range of the distribution.
This means moving away from point estimates. Instead of saying “Our new financial reporting system will cost $5 million,” we should be saying something like “There’s an 80% chance the system will cost between $5 million and $8 million, with a 20% chance it could cost significantly more.”
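This shift from a point estimate to a risk forecast can be sketched as a small Monte Carlo exercise. All figures here are hypothetical, and the lognormal overrun model is my own illustrative choice, not the book’s method:

```python
import random

random.seed(7)

# Hypothetical base estimate and fat-tailed overrun model; the lognormal
# parameters are invented for illustration.
base_estimate_musd = 5.0
n = 100_000

simulated_costs = sorted(
    base_estimate_musd * random.lognormvariate(0.1, 0.5) for _ in range(n)
)

def prob_cost_exceeds(threshold_musd: float) -> float:
    """Fraction of simulated outcomes above a cost threshold."""
    return sum(c > threshold_musd for c in simulated_costs) / n

p50 = simulated_costs[n // 2]
p80 = simulated_costs[int(0.8 * n)]

# Report a range and an exceedance probability, not a single number.
print(f"Median cost: ${p50:.1f}M")
print(f"80th percentile: ${p80:.1f}M")
print(f"P(cost > $8M): {prob_cost_exceeds(8.0):.0%}")
```

The output is a statement of the form “the project is X percent likely to cost more than Y,” which is far more honest to stakeholders than a single headline figure.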
The book suggests a two-pronged approach to risk mitigation:
- For the main body of the distribution (about 80% of outcomes): Use regular risk mitigation strategies like contingencies and reserves. This might mean building in a buffer for cost overruns or timeline extensions in your budgeting process.
- For the tail outcomes (the remaining 20% - potential black swans): The authors advise: “Since very high contingencies are economically prohibitive (e.g. 300%, 400% etc.), cut off the tail (black swan management)”
This is where exhaustive planning comes in. As the book states:
Exhaustive planning that enables swift delivery, narrowing the time window that black swans can crash through, is an effective means of mitigating this risk.
This could involve:
- Detailed scenario planning for potential extreme events
- Rigorous testing of new systems before full implementation
- Phased rollouts of new processes to limit exposure
The authors remind us of the ultimate goal:
Finishing is the ultimate form of black swan prevention; after a project is done, it can’t blow up, at least not as regards delivery.
Who is This Book For?
Whenever I finish a book, one of the questions I ask myself is who would benefit most from it. While it’s grounded in solid research, it’s far from a dry academic text. If you’re after a dense, scholarly work packed with statistical jargon, you might want to look elsewhere.
But if you’re like me - someone who’s looking to improve their project management skills - this book is a goldmine.
What really sets this book apart is how it bridges the gap between project management and forecasting. It’s not just about how to run a project; it’s about how to predict and prepare for the challenges you’ll face along the way. This blend of disciplines offers a unique and valuable perspective that I haven’t seen in other project management literature.
The authors bring their ideas to life with a whole host of real-world examples. These case studies span a diverse range of projects, from the iconic Sydney Opera House to the intimate setting of Jimi Hendrix’s Electric Lady Studios, making the book’s lessons feel relevant and applicable regardless of your field.
Whether you’re implementing a new system or simply preparing for reporting season, these lessons apply. That’s why I’d especially recommend this book to anyone looking to sharpen their project planning skills.
In the end, “How Big Things Get Done” isn’t just for project managers or executives overseeing large initiatives. It’s for anyone who’s ever wondered why so many big projects go wrong, and more importantly, how we can make them go right. It’s changed the way I think about projects, and I’m betting it’ll do the same for you.
