
Stop Worrying About Survivorship Bias With This One Weird Trick


Executive summary: You don’t have to worry about survivorship bias if you read business history as instantiations of business concepts. Experts in business and investing do this, but it is pretty counter-intuitive if you aren’t used to reading history. Read on to find out why.


Here are two true but contradictory statements:

  1. When you’re reading history, survivorship bias is a problem. The vast majority of business histories are stories of successful businesses. While there are case studies of failed businesses, the failures that do get recorded are notable failures. This means there’s almost always going to be a selection bias when you’re reading history! If you read only cases of successful businesses, you’re going to suffer from survivorship bias. But even if you read recorded histories of both successful and failed businesses, you’re still going to suffer from selection bias due to the sampling effects around business failure. Perhaps it’s not worthwhile to read history in the first place? Whichever way you cut it, this is a real problem when you’re reading any kind of history. This brings us to …
  2. Many notable businesspeople and investors will tell you to read business history; they will say that they get a lot of value from reading biography and historical annual reports, and spend large amounts of their own time doing just that. These individuals range from Warren Buffett to Bill Gates, Charlie Munger to Mark Leonard, Chuck Akre to John D. Rockefeller, and so on and so forth, down the line.

How do you reconcile these two statements?

Whenever folks bring up the idea of studying history, it is almost inevitable that someone on the Internet will point out: “… but survivorship bias!”

If you decide to engage with these critics, the odds are pretty good that they read very little actual history. Most people are like this, actually. Most folks live in some kind of ahistorical present: everything is new to them, everything must be worked out from first principles. Sometimes this makes business life difficult — but mostly they muddle through.

But if you are sufficiently curious, and you are alert to this contradiction, you might consider a possibility that I’ve seen remarkably few people express. It is also the more humble one: surely all of these super smart, super accomplished people would have thought of survivorship bias, and would’ve come up with a way to deal with it. I have wondered about exactly this question for many years. But I never found a satisfactory answer … until 2022.

What Experts Actually Do

In 2008, the US Air Force Research Laboratory and the US Department of Defense Accelerated Learning Technology Focus Team commissioned a report on accelerated expertise training programs. For three decades the US military had been funding expertise acceleration research with some success; this report was a synthesis of what worked. In 2016 the report was published as the book Accelerated Expertise. In 2021, I wrote and published a summary on Commoncog. In my summary I described how the vast majority of successful expertise acceleration programs were designed around two lesser-known theories of expertise: first, Cognitive Flexibility Theory, and second, Cognitive Transformation Theory. I duly conveyed summaries of both theories, put ‘follow up on these theories’ on my to-do list, and went on my merry way.

In 2022, I was flipping through The Oxford Handbook of Expertise when I decided to read the chapter on Cognitive Flexibility Theory (CFT). I recognised the title, of course — CFT had been on my follow-up list for months at that point. The Oxford Handbook chapter was a retrospective on the theory, submitted by Professor Rand Spiro and his colleagues, who originated the ideas in 1988.

What I found floored me. I remember thinking to myself: here is an explanation of what all these businesspeople and investors were doing, in their heads, when reading business history.

And then: here is something we can copy.

CFT is a theory that explains how experts deal with novelty in their domains. This might sound banal, but in the 80s, when CFT emerged, novelty was something of a blind spot in most academic research into expertise development. It is worth spending a bit of time discussing why.

In the 60s through to the 80s, the vast majority of expertise research focused on easy-to-study domains like chess, typing (on typewriters!), classical music training, reading comprehension, and physics education. These early studies gave rise to ‘schema theories’ — the notion that experts have mental structures in their heads that organise knowledge differently from novices. Decades later, the Deliberate Practice research program built on top of these theories, and eventually came to be known as the ‘gold standard’ for training.

(Or, more accurately, the ‘gold standard’ for training in easily studied skill domains.)

But the theories were incomplete. By the 80s, a small group of researchers had begun to notice problems with these theories. First, schema theories explained how experts could do things that were ‘routine’ in their domains — chess players had a repository of endgames in their heads; expert pianists were very good at playing known compositions. The assumption was that experts became experts through practice, and this practice created schemas in their minds that could be reliably tapped during performance.

But what about novel situations? What about chess players inventing new endgames? What about pianists inventing new playing techniques (or writing new piano compositions from scratch)? Not that this was very common, of course — concert pianists aren’t expected to compose new symphonies; grandmasters aren’t expected to routinely invent new ways to win — and in fact this was part of the problem. Because the early expertise researchers picked domains that were easy to study, the domains that they picked constrained the kinds of expertise that were on display. These domains were ‘regular’, or ‘well-structured’ — meaning that there was not a great deal of variability in the skill. After all, the rules of chess don’t change from game to game; pianos don’t reconfigure themselves between concerts; the rules of physics do not change every time you take a physics exam.

But this was a limiting constraint. In truth, many skill domains that we care about are anything but regular. Think about business, or investing. The domain of business changes every decade or so. New technologies emerge. Consumer preferences change. Market structures shift. Or think about war, or firefighting. In war, no two battles are exactly alike — as the US Marine Corps’ experiences in WWII, and then Vietnam, and then Afghanistan would attest. This is also true for firefighting, and surgery: no two forest fires are put out the exact same way; no surgical operation is exactly the same.

By the late 80s, a small group of researchers had begun pointing out that, first, schema theories alone were inadequate to explain ‘adaptive expertise’. Second, and more importantly — many, many skill domains were ‘ill-structured’ — that is, due to the complexity of the domain, every case that practitioners encounter is unique and somewhat novel.

So how do you train folks when faced with such novelty?

The answer is that first, you study what the experts do in these domains. And then you reverse engineer what those experts do into a training approach. This was exactly what Professor Rand Spiro did in the late 80s. And what he found was that — in ill-structured domains, experts thought in terms of cases.

Spiro was studying doctors. This might be a little surprising to laypeople, but medicine is actually a rather ill-structured domain. Even ‘simple’ diseases like heart attacks may show up in vastly different ways depending on the specific details of the patient. (Some heart attacks strike immediately; others can last hours, or drag over the course of a week.) It is not just that disease instantiation is complex because human bodies are complex systems, but also that patients may have multiple health problems at the same time, confounding doctors.

Spiro’s work tells us that experts in ill-structured domains do two things differently from novices:

  1. First, they do not rely on first principles thinking alone. Instead, a large chunk of what they do is compare the situation in front of them against fragments of previous cases that they’d seen. These fragments are then combined to create ‘temporary schemas’, adapted for the unique problem at hand. The reason for this is simple: in an ill-structured domain, it is always possible for a known concept to show up in a completely novel way. Spiro found that most doctors couldn’t reliably go from symptom presentation back down to disease mechanism or vice versa; the amount of variability in patients was simply too large. So what expert doctors do instead is rely on pattern matching against fragments of other cases that they’d seen before in order to guide their diagnosis.
  2. Second, because case presentation is so complex, experts in ill-structured domains have a healthy respect for the complexity and contingency in their domain. Spiro called this ‘the adaptive worldview’. Expert doctors are unfazed when encountering heart attack presentations they’ve never seen before; experienced investors who’ve been at the game for decades are unsurprised if they find themselves faced with a new business model or market setup they’d never seen before. In complex systems, novel outcomes are the norm thanks to emergence. As a result, experts in such domains are more likely to dedicate a significant portion of their time to collecting case studies, in order to expand the set of fragments they have in their heads. This is why expert doctors convene at medical conventions to exchange notes on difficult cases, and why case studies are still published in medical journals today. It is why investors spend time reading business history.

Spiro points out that what experts do to learn in these domains is actually the opposite of what you would expect in a typical STEM education. In a high school math or physics class, for instance, the method for solving a math or physics problem is more important than the one or two examples used to illustrate the method. (As an example, knowing how to solve a quadratic equation is more important than the examples used to practice such solving.) But in an ill-structured domain, it is the cases that are important, and the concepts that are secondary. Or, more accurately, experts read cases to collect fragments, and these fragments may be recombined to help with sensemaking in new, novel cases.

And so here we have our answer. Survivorship bias doesn’t matter if you’re reading cases as instantiations of concepts.

Imagine a doctor reading a case study in The Lancet, or a lawyer keeping up with the latest case law in her practice area. It would be ridiculous to say “you should be careful of survivorship bias!” to these two professionals. The reason is that they’re not reading cases for explanations. Instead, they are reading cases to expand the set of concept instantiations in their heads. In an ill-structured domain, pattern matching is such an important part of expertise that they cannot afford to not read cases.

Proof in Business

Do we have proof that investors and businesspeople think like this?

Well, yes, we do. We’ve built Commoncog around this method of learning business; we call this the Calibration Case Method. A couple of months ago we made a 13-minute video about the method.

We open that video with an anecdote from Katharine Graham’s biography. Graham was the publisher of the Washington Post, and was taught business by the legendary investor Warren Buffett. Buffett taught her using the Calibration Case Method (though he didn’t use that name). Here is an excerpt from Graham’s Personal History, all bold emphasis added:

Warren saw that I was uncomfortable with the nomenclature and language of business. He later told me that I had a kind of “priesthood approach” to business, and seemed to feel that, if I “hadn’t studied Latin and all that, I couldn’t make it into the priesthood.” He didn’t ask me to take anything on faith, but took out his pencil and explained things clearly. He saw that it would be helpful if we demystified a lot of what we were talking about, so he brought with him to our meetings as many annual reports as he could carry and took me through them, describing different kinds of businesses, illustrating his main points with real-world companies, noting why one was a good business and another bad, teaching me specifics in the process of imparting a great deal of his highly developed philosophy. He told me that, whereas Otis Chandler collected antique cars, he himself collected “antique financial statements … [because] just as with geography or humans, it is interesting to take a snapshot of a business at widely different points in time—and reflect on what factors produced change as well as what differentiates the specific pattern of development from others also observed.”

Warren is a great teacher, and his lessons “took.” I told him it seemed really possible that I might end up “able to add”—in which case “the empire might either collapse altogether or I might really get to be the most powerful woman in whatever-it-is. Trilby with Svengali lurking close behind.…” Though I didn’t learn as much as I would have liked, I was coming from such a deficit of knowledge that I nevertheless learned a great deal. Among other things, he impressed upon me that it is better to be a bad manager of a good business than a good manager of a bad business. Actually, what Warren favors is good managers of good businesses, but I got his point.

In investor Charlie Munger’s biography Damn Right!, we have a section where Munger addresses exactly this question (all bold emphasis ours):

BERKSHIRE HATHAWAY AND WESCO INVESTORS listen carefully to maxims about life, but they literally crowd the doorways to hear Munger and Buffett talk about investment issues. A frequently asked question is, how do you learn to be a great investor?

First of all, you have to understand your own nature, said Munger. “Each person has to play the game given his own marginal utility considerations and in a way that takes into account his own psychology. If losses are going to make you miserable — and some losses are inevitable — you might be wise to utilize a very conservative pattern of investment and saving all your life. So you have to adapt your strategy to your own nature and your own talents. I don’t think there’s a one-size-fits-all investment strategy that I can give you.”

Then, says Munger, you have to gather information. “I think both Warren and I learn more from the great business magazines than we do anywhere else,” said Charlie. “It’s such an easy, shorthand way of getting a vast variety of business experience just to riffle through issue after issue covering a great variety of businesses. And if you get into the mental habit of relating what you’re reading to the basic structure of the underlying ideas being demonstrated, you gradually accumulate some wisdom about investing. I don’t think you can get to be a really good investor over a broad range without doing a massive amount of reading. I don’t think any one book will do it for you.”

Munger explained that a person’s reading should not be random: "… you have to have some idea of why you’re looking for the information. Don’t read annual reports the way Francis Bacon said you do science which, by the way, is not the way you do science — where you just collect endless data and then only later do you try to make sense of it. You have to start with some ideas about reality. And then you have to look to see whether what you’re seeing fits in with proven basic concepts.

Frequently, you’ll look at a business having fabulous results. And the question is, ‘How long can this continue?’ Well, there’s only one way I know to answer that. And that’s to think about why the results are occurring now — and then to figure out the forces that could cause those results to stop occurring.”

(…) Observing business over time gives an investor greater perspective on this type of thinking. Munger said he remembers when the downtown department stores in many cities seemed invincible. They offered enormous selections, had large purchasing power, and owned the highest priced real estate in town, the corners where the streetcar lines crossed. However, as time passed, private cars became the prevalent mode of transportation. The streetcar lines were taken out, customers moved to the suburbs and shopping centers became the dominant shopping venues. Some simple changes in the way we live can completely alter the long-term value of a business.

In other words, figure out a concept and why it works the way it does, and then hunt for instantiations everywhere — in business magazines, in biographies, in news reports, in Value Line numbers, in financial statements. Try and see if your understanding of the concept holds up against cases. Follow those cases over the years to see if the concept plays out the way you think it should. Look for exceptions. Are they exceptions that prove the rule? If you spot enough exceptions, do you have to update your understanding of the concept? Perhaps you’ve come up with a new concept, like ‘moat’?

Of course, I earn no points by pointing out that ill-structuredness is a spectrum. One may argue that business and investing are more ill-structured than medicine. But this, I think, should be obvious.

Few Lessons, Mostly Patterns

In my experience, the most difficult thing about this approach to mastery is internalising that you don’t want to learn lessons from history. Most people listen to stories or read business history with the following assumption baked in: “What’s my takeaway here? What should I learn from this story?” This is quite a common reaction, because it stems from something most of us do intuitively: we ask “what should we learn from our experiences” and then think that this should apply when reading about other people’s experiences.

But it is almost certainly the case that reading history for lessons is the wrong approach. History is contingent. The factors that led to such-and-such a product or business succeeding are not going to be repeatable for another business or product. At most, the only lesson you can conclude from history is simply “this is something that can happen” — not a particularly strong lesson by any means.

Let’s drive this home with a contemporary example.

As I write this at the end of 2025, it is somewhat of a consensus belief that you should launch your product with a shiny launch video. Hundreds of AI startups do these fancy video announcements, shot on high quality cameras, with a polished script, skilled editing, and a not insignificant video budget.

However, one of the largest, most successful product launches in 2025 was Claude Code’s. AI lab Anthropic launched Claude Code with zero fanfare in a limited research preview on 24 February 2025. The product became generally available in May 2025. There was no launch video; there was barely a launch announcement. The product has since taken the entire software engineering world by storm.

This sort of thing happens all the time in business: some set of events occurs in your corner of the world, and you draw conclusions based on what you observe. So, alright then: what ‘lessons’ should you take away from this series of events?

Should you conclude that launch videos are a waste of money? That a good enough product needs no marketing? That you, too, should launch quietly, in a limited preview?

You see how difficult it might be to draw useful, generalisable conclusions from history. One simple way of approaching this series of questions is to seek out disconfirming evidence. For each of these statements, can you find a counter-example? If you can, then the conclusion is questionable, and you should discount your belief in the conclusion.

It is this sort of reasoning process that results in survivorship bias. Folks who want to believe in their explanations tend to jealously guard the cases they use to justify their beliefs (and therefore their decisions). They will find ways to exclude inconvenient cases from consideration. But notice: if you just think “ok, launching without a launch video can work”, two things will happen:

  1. First, you no longer need to worry about survivorship bias. After all, an instantiation of a product launch is just that — one of many instantiations. You are simply using a case to calibrate yourself; the outcomes you observe may or may not happen given the specifics of your product launch. You are simply no longer surprised by what may happen.
  2. Second, you are forced to rely on other means to justify your decision. You might argue “I feel like this cannot harm us” or “my intuition tells me it will be helpful” or “it is more important to do a rigorous beta testing program” but you remove the ability to confidently cite history. Instead, you are forced to talk in terms of possible outcomes: “Claude Code shows us that it’s possible to succeed without fancy launch videos. I think we’re better off focusing our limited resources on an extended beta testing program …”

Why is this the case? The glib answer is that business is ill-structured, like so many other aspects of life. Half the battle is being properly calibrated during execution: knowing what good looks like, being unsurprised at unusual events, and knowing the full range of outcomes that may happen to you.

Another way of framing this is that experts in ill-structured domains have a healthy appreciation of the ill-structuredness of their domain. Because every case is so unique, very few things can be generalised from case to case; worse, what is improbable may well occur in a given situation. The best you can do is to be alert to the range of things that can happen, which is useful in reasoning. This explanation might not be satisfying to you, especially if you are used to clear frameworks and clear answers. But this is how reality works.

There is one final thing you can ask: should we care about the distribution of probable outcomes? Some may say that reading a series of cases about unusual outcomes biases you towards unusual things over more common things. But I do not believe that worrying about this is very useful. There are two responses here.

First, let’s go back to the doctor analogy. If you are a practicing doctor, what is the point of reading about normal cases? You’re going to experience ‘normal’ cases in your day-to-day practice. Much of the value in reading cases is in exposure to rarer instantiations, so you are properly calibrated in your practice.

The reason Cognitive Flexibility Theory is considered a theory of accelerated expertise is because it gives us a way to expand our experience base without spending the years to collect rare cases. An expert doctor may have to put in a few decades of practice in the wild to see all the difficult cases — which differentiates them from their less experienced peers. We want to accelerate that exposure, which means exposing the doctor to more unusual cases earlier in their clinical practice.

Second, in business and in investing, exceptional cases are the point. Very few operators or investors are interested in the average case (which in many parts of business is synonymous with failure). This is why a common retort to someone saying “why are you studying famous investors — this is survivorship bias” is that “surviving is the point”.

If you are an operator in the domain, you do not care about achieving the average. You care about experiencing the odd, the unusual; the contingently rare outcome. And so it is worthwhile to study this sample.

This seems like a bit of a contradiction. We’ve just said that there are no generalisable lessons from history, which means there are no generalisable lessons from this sample of outliers. But here we can rely on an observation that Spiro and his collaborators made in the original CFT paper: even in a set of cases that are each unique, fragments of those cases will be similar to each other. (This is the insight behind the old saying that ‘history does not repeat itself, but it rhymes’.) Collecting fragments and recombining them in new and novel ways when examining new cases gives you better sensemaking ability. Ultimately this means that studying the set of outliers is still a productive exercise: first, you gain the ability to spot similar fragments in otherwise novel cases; second, you expand the set of possible actions in your mind. (“I think I can win by building my company in this particular way, because I’ve seen something similar done once, in this business from the 80s …”)

This is a rather counter-intuitive position to take. But countless investors and businesspeople have figured this out on their own for decades and decades. Cognitive Flexibility Theory just gives you an explanation for why it works.

How Do You Deal With Survivorship Bias?

But … well, alright. Let’s say that you do want to read cases for lessons. Let’s say that you want to come up with theories from observations of business.

How do you deal with survivorship bias?

The answer is rather straightforward, though it puts us squarely back into ‘scientific method’ territory. Contrary to popular belief, it is possible to do science through observation alone. The method goes something like this (adapted from Brian Moon’s Darwin’s People, a book about the naturalistic approach as applied to studying people):

  1. Find a problem, some phenomenon of interest that needs an explanation.
  2. Come up with an explanation. It doesn’t have to be a good explanation — any explanation will do.
  3. Find cases of the phenomenon and look from all angles, including most importantly the perspectives and histories of the people directly involved in initiating, perpetuating and concluding the phenomenon, for evidence that tells you that your current explanation is NOT correct.
  4. Look for other cases where you might expect the phenomenon to be happening given your explanation, but where it does not seem to be.
  5. Revise your explanation as the evidence informs it.
  6. Look for more evidence.
  7. Keep looking, listening, and investigating.
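As an illustration only, the loop above can be sketched in code. Everything here is hypothetical: the case records, the toy explanation, and the `survives_falsification` helper are invented for this sketch, not part of Moon’s method.

```python
# A minimal, hypothetical sketch of the naturalistic loop above.
# The case records and the toy explanation are invented for illustration.

def survives_falsification(explanation, cases):
    """An explanation survives only if no observed case disconfirms it.

    `explanation` is a predicate: True if a case is consistent with the
    explanation, False if the case disconfirms it.
    """
    return all(explanation(case) for case in cases)

def no_video_rules_out_success(case):
    # Toy explanation: "a product cannot succeed without a polished launch
    # video" -- i.e. any successful launch must have had a video.
    return case["launch_video"] or not case["succeeded"]

observed = [
    {"name": "Startup A", "launch_video": True, "succeeded": True},
    {"name": "Startup B", "launch_video": True, "succeeded": False},
]

# So far, no disconfirming case: the explanation survives (for now).
print(survives_falsification(no_video_rules_out_success, observed))  # True

# One disconfirming case is enough to disqualify the explanation.
observed.append({"name": "Claude Code", "launch_video": False, "succeeded": True})
print(survives_falsification(no_video_rules_out_success, observed))  # False
```

Note the asymmetry built into the loop: a single disconfirming case flips the verdict, while no number of confirming cases can prove the explanation true.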

We often think of science as the use of experiments to falsify hypotheses. But it is also possible to do falsification through observation. In fact, this approach was the very method that Darwin used to come up with the Theory of Evolution. You could say that Darwin’s science is the science of the case study: he looked at examples of animals in a variety of habitats over the course of many years, traveling the globe to do so, and then told us stories about them. One name for this form of science is ‘naturalism’.

The strongest type of explanation that naturalists look for is of the form: “If X exists, Y may happen. However if X doesn’t exist, Y cannot happen.” If you can demonstrate that this is true for all cases observed through time, then you have found a universal explanation. You would have found some truth.

Does survivorship bias matter when generating such explanations? The answer is yes: of course it does. But the way to deal with survivorship bias isn’t to jettison the case study approach altogether. We already have a methodology for dealing with it, which goes all the way back to the philosopher of science Karl Popper: you simply look for ‘counterfactual cases’. (That is, cases that disprove your explanation). A single disconfirming case will disqualify your explanation, while no amount of positive examples will guarantee that your explanation is true. This is why falsification is more powerful than confirmation.

But I want to argue against finding universal, rigorous explanations if you are a practitioner.

Let’s go back to the structure of the ideal explanation in naturalism: “If X exists, Y may happen. However if X doesn’t exist, Y cannot happen.”

This is actually a very high bar to clear. If you don’t believe me, give it a try — come up with a couple of explanations of business behaviour over the next few weeks and see if you can make a statement of that form. You will find it very difficult to do so. Business is at least as complex and ill-structured as nature; Darwin took 20 years to refine the Theory of Evolution, travelling across the globe to examine unusual ecosystems in the process of his investigation.

Science demands a level of rigour that most of us don’t care to match. Darwin is a bit of an extreme case, but observation on the order of a few years is the level of seriousness you should expect to bring if you want to generate universal explanations from case studies. In practice, however, most practitioners don’t have the time nor the desire to do such rigorous observation. And for good reason: while scientists want to find out what is true, practitioners want to know what is useful. The two desires lead to different approaches to truth.

The CFT approach to case studies leads to statements of the form “X can happen” or “Y is a way that concept Z (e.g. heart attack / intrinsic value / network effects) may show up.” This is so far from the gold standard of naturalistic science that it’s not even funny. And yet it still has a place.

Think about the example of a practicing doctor reading cases in The Lancet. She is not seeking a strong explanation of disease mechanism or even an understanding of symptom expression (if for instance the case is difficult because of interactions between different diseases). Such explanations are of limited use because they may only be applicable to a narrow slice of cases. Instead, the more useful thing that she’s doing is that she’s adding to the set of fragments she can pattern match against in her head. Since case studies do not add much to scientific understanding, they are often seen as lower in strength on the hierarchy of scientific evidence. But that doesn’t mean they’re not useful.

Hierarchy of Scientific Evidence (source)

What am I trying to say? I’m saying that if you are a practitioner, coming up with universal explanations should not be your goal. Your job is NOT to come up with generalisable theories, but to come up with rigorous reasoning for the specific decisions and cases that you must act on. And so worrying about survivorship bias is a red herring on at least two levels:

  1. The right move in most cases is to conclude things at the level of “this is a thing that can happen” — in other words, form conclusions that are so conservative they are not subject to survivorship bias.
  2. Even if you don’t form such conclusions, you shouldn’t be spending that much time coming up with universal explanations to begin with! You are not a scientist, after all, you are an investor or an operator. Your job is to make money, not fill your head with half-proven beliefs.

If you are a practitioner, there is a place for the kind of Popperian reasoning I’ve introduced in this piece: it is useful as a way to disqualify beliefs, as opposed to coming up with true beliefs that you can hold strongly. Because you are not a scientist or theorist, you are better served getting rid of wrong models, instead of spending years hunting for true ones. Such is the odd asymmetry of falsification.

Most experts in ill-structured domains get by with explanations that are contingent and are true only some of the time. If they can become world class with such explanations, so can you.

Wrapping Up

This essay has introduced a large number of ideas. Let’s go through some of them quickly as we wrap up:

  1. First, we walked through a brief history of expertise research. That was a setup to introduce the concept of ‘ill-structured domains’, which are domains where concepts exist but the ways they show up in reality vary greatly from case to case. Business and investing are two such domains; conventional theories of expertise and training do not deal well with such domains.
  2. We then introduced the core ideas of Cognitive Flexibility Theory, which is a theory of expertise that explains what experts do in domains like these. I argued that the case method as recommended by CFT makes survivorship bias irrelevant, because you’re not reading cases to learn lessons.
  3. We used a contemporary example of Claude Code to make the point that generating explanations (‘lessons’) from business observations is actually very tricky. The reason survivorship bias gets so much play when we talk about cases is because folks will form beliefs from some set of cases, and then contort themselves to defend their beliefs when someone points out disconfirming cases. You can sidestep all of this if you refrain from forming such explanations.
  4. We then examined the idea that it’s not necessary to reason about the distribution of cases when picking cases to study. For two reasons: first, the whole point of studying cases is to expose you to exceptions, because exceptional cases are — by definition — rare in real world experience. CFT-based training programs are considered accelerated expertise training programs because they accelerate your exposure to such rare cases, so you can recognise them earlier in your career. Second, exceptional cases are useful to study because fragments of such cases tend to ‘rhyme’ with cases you may encounter in your own life; this is useful because exceptions are often what you are aiming for in business and investing (where the average outcome is failure.)
  5. Finally, we dealt with the idea that if you do want to generate explanations from cases, there is a well-established way to do that and avoid survivorship bias. Specifically, you have to do ‘naturalistic’ science: look for disconfirming cases, and aim for explanations of the form “when X is present, Y might happen; when X is not present, Y cannot happen.” However, I argue against doing this as a practitioner, because you don’t win from coming up with universally true theories of business; you win from making good decisions. The former is not necessary for the latter. It is more important to avoid holding false theories than it is to come up with universally true ones.

This is a dense essay, and I apologise for it — most essays that discuss better thinking methods tend to be. But hopefully you’ve learnt something about thinking well in business, which is an odd domain thanks to its ill-structuredness. In my experience, you’ll have to sit with these ideas for a few weeks for them to sink in.

On the bright side, however, you now know how to read business history like the experts.

Have at it, and enjoy.

Endnotes: Sources on Expertise Research

Here is a brief summary of sources necessary to piece together my narrative about expertise research. You do not have to read this unless you are an expertise wonk; I’m merely including this so you can hold me accountable for my claims.