
Nassim Taleb – Black Swan

My Opinion

I highly recommend reading this book. While it took me some time to finish, it was totally worth it. Taleb shares highly interesting mental models regarding probabilities, systems theory, knowledge creation, life choices and many other domains.

Reading Recommendation: 9/10


Our world is steered by unknown, improbable events that can't be forecast. You can call them unknown unknowns or Black Swans, an expression coined by Nassim Taleb.

  • These events are outliers. There is no regularity in their occurrence and no past data that hints at or prepares us for what happens. They carry an extreme impact. And, funny enough, in hindsight they can be easily explained. (Hindsight Bias)
  • These unknown unknowns happen precisely because they are not supposed to happen. This might sound weird, but think about it: if you know about a threat, you can prevent it; if you don't, you can't. Plus, if your counterpart knows that you know, he will act differently. Think about 9/11.

The problem of inductive reasoning is that the same data could confirm a theory and also its exact opposite.

  • There are systematic problems that arise when building knowledge from empirical observations. In a nutshell, we know with much higher confidence when we are wrong than when we are right.
  • As an example, consider a turkey that is fed every day. With every single feeding, he builds up his confidence that he will be fed every day. One afternoon before Thanksgiving, this belief will be proven wrong.
  • What this example shows is that the turkey's observations were in fact harmful: with every day that his confidence rose, so did the actual risk. His feeling of safety peaked exactly when the risk was highest (see the sketch after this list).
  • The same set of data can confirm a theory and also its exact opposite. If you survive another day, it could mean that you are getting closer to being immortal or that you are closer to death.
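
To make the asymmetry concrete, here is a minimal sketch in Python. The numbers are hypothetical, and using Laplace's rule of succession as the turkey's "confidence" is my modelling assumption, not Taleb's:

```python
# Toy model of Taleb's turkey: naive inductive confidence grows with each
# feeding, while the one observation that matters is still ahead.

def confidence_after(n_feedings: int) -> float:
    """Laplace's rule of succession: estimated probability of being fed
    tomorrow, given n consecutive feedings so far (a modelling assumption)."""
    return (n_feedings + 1) / (n_feedings + 2)

for day in (1, 10, 100, 999):
    print(f"Day {day:>3}: estimated P(fed tomorrow) = {confidence_after(day):.4f}")

# Day 999: estimated P(fed tomorrow) = 0.9990 -- and on day 1,000 the turkey
# is slaughtered anyway. The same 999 observations were consistent with both
# "fed forever" and "fed until Thanksgiving"; confirmation alone could never
# distinguish the two.
```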

What often matters when learning about properties is how they behave in extreme situations under severe stress.

  • If you want to know whether you can really count on a friend, you need to observe him under severe circumstances, not in the regular rosy glow of daily life. Only then will you truly understand his personal ethics and his degree of integrity.
  • This is equally true when it comes to understanding health. Would it be possible to understand health without considering serious diseases and epidemics? At the very least, it would be significantly harder.
  • Indeed, the normal is often irrelevant. It therefore sometimes helps to deliberately cause a system to fail in order to learn how and why it reacts the way it does.

At first glance, the interconnectedness of globalisation reduces volatility and creates the impression of stability. On closer inspection, it reveals increased fragility: there will be fewer Black Swans, but with much more severe consequences.

  • Financial institutions have been merging into a smaller number of very large banks. Almost all banks are now interrelated.
  • Instead of several loosely connected financial systems, we now have one gigantic, highly interdependent system. When one part fails, the whole system can crash.
  • The increased concentration among banks seems to reduce the likelihood of a financial crisis. If one occurs, however, it will be on a global scale with devastating consequences (a toy calculation follows this list).
  • There is another problem. The rarer the event, the less we know about its odds. It means that we know less and less about the possibility of a crisis.
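
A toy calculation with made-up numbers, purely to illustrate the trade-off described above: many small, independent banks fail often but locally; one merged, interdependent system fails rarely but totally.

```python
# Hypothetical numbers: ten independent banks vs. one merged system.

n_banks = 10
p_fail_small = 0.05    # assumed yearly failure odds of one small bank
p_fail_merged = 0.005  # diversification makes the merged giant fail more rarely

# World of small banks: failures are frequent, but each destroys 1/10 of assets.
expected_loss_small = n_banks * p_fail_small * (1 / n_banks)   # 5.0% per year

# Merged world: failure is ten times rarer, but it takes everything at once.
expected_loss_merged = p_fail_merged * 1.0                     # 0.5% per year

print(f"{expected_loss_small:.1%} vs {expected_loss_merged:.1%}")
# The average loss looks far better after merging -- but the loss is now
# concentrated in one rare, system-wide event: fewer crises, far more severe.
# And because the event is so rare, we have almost no data about its odds.
```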

Living mostly in non-linear, Black Swan-driven environments where forecasts are difficult, we face a serious expert problem.

  • The researcher Philip Tetlock studied the forecasts of political and economic “experts.” He asked various specialists in these domains about the likelihood of a number of political, economic and military events occurring about five years ahead. In sum, he collected 27,000 predictions from almost 300 specialists. The results showed that “expert” status didn't matter: there was no difference between a PhD and an undergraduate degree. Interestingly enough, Tetlock noticed that the bigger an expert's reputation, the worse a predictor he was.
  • Part of the problem might be the illusion of familiarity: just because we have spent a lot of time studying something doesn't mean we are particularly good at understanding where it's heading.

Almost no discovery, no technologies of note, came from design and planning – they were just Black Swans.

  • It turns out that top-down planning is often much less relevant than we might expect. History is full of examples of serendipitous discoveries. In fact, it seems like randomness and sheer luck played a surprisingly large role in most of our great discoveries.
  • Penicillin is one example of a serendipitous discovery with massive impact. When Alexander Fleming was cleaning up his laboratory, he noticed that penicillium mold had contaminated one of his old experiments. He thus recognized the antibacterial properties of penicillin, the reason many of us are alive today.
  • Viagra, whose effects on our society could be considered significant as well, was meant to be a hypertension drug.
  • The laser, with its various fields of application nowadays, is another prime example of a “solution looking for a problem” type of discovery. When the inventor Charles Townes was asked about his discovery, he replied that he was satisfying his desire to split light beams. Consider the effects of lasers today: compact disks, eyesight correction, microsurgery, data storage and retrieval. All totally unforeseen and based on some playful tinkering.
  • But this is not only true for complex or entirely new discoveries. It took 6,000 years after the invention of the wheel (by, we assume, the Mesopotamians) until somebody came up with the idea of adding wheels to suitcases. Isn't that astonishing? We had been putting our suitcases on top of carts with wheels, but nobody thought of putting tiny wheels directly under the suitcase. Technology is only trivial retrospectively, not prospectively.

Our epistemic arrogance lets us overestimate what we know and underestimate uncertainty.

  • The following experiment has been conducted many times with different subject matters and populations. The researchers present each person in the room with a question whose answer is a number. They then ask the subjects to estimate a range of values they are 98 percent confident contains that number. Although the subjects can literally pick any range they feel confident with, the intended 2 percent error rate usually turns out to be between 15 and 30 percent, depending on the population and the subject matter (the sketch below reproduces this effect with assumed numbers).
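
A small sketch of that effect under an assumed setup: if the true values follow a standard normal distribution but subjects size their “98 percent” intervals as if the spread were only half as large, the miss rate lands squarely in the reported range.

```python
import random

random.seed(0)

Z_98 = 2.326          # half-width of a two-sided 98% interval, standard normal
OVERCONFIDENCE = 0.5  # assumed: subjects act as if the spread were halved

trials, misses = 100_000, 0
for _ in range(trials):
    truth = random.gauss(0, 1)           # the actual quantity
    half_width = Z_98 * OVERCONFIDENCE   # the too-narrow interval
    if abs(truth) > half_width:
        misses += 1

print(f"intended miss rate: 2%, actual: {misses / trials:.0%}")
# Prints roughly 24% -- within the 15-30% range the experiments report.
```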

Falling victim to the survivorship bias, we systematically neglect the importance of silent evidence and the role of luck.

  • When researchers study successful people, they often look at their similarities: courage, risk taking, optimism and so on. They assume these traits are what made them successful. If, however, you take silent evidence into consideration and look at the cemetery, you will notice that the graveyard is full of failed people who shared the very same traits.
  • One question will therefore always be hard to answer: were these people successful because of or despite these traits?
  • Between the population of successful millionaires and the failed people in the graveyard, there may be some differences in skills, but what truly separates them is one factor: luck.
  • There is a vicious attribute to the survivorship bias: it is hardest to notice when its impact is largest. The more deadly the risks turn out to be, the harder it is to find the silent evidence that is so crucial to take into consideration.
  • To avoid the survivorship bias, choosing the right reference point is crucial. Take the example of a gambler. When looking at the entire population of beginning gamblers, it is almost certain that some of them will make a small fortune. If your reference point is the entire population, there is no problem. But from the reference point of a winner (i.e. without taking the losers into account, which happens all too often), there seems to be something greater going on than sheer luck (the sketch after this list makes the numbers concrete).
  • Beyond silent evidence, there is another factor to take into consideration: evolution only works in the long term, and its short-term outcomes are often misleading and deceptive. It is therefore often not obvious which traits are really good for you, especially because second-order effects are not apparent.
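
A quick sketch of the reference-point argument with assumed numbers: start ten thousand gamblers on fair 50/50 bets and count how many win ten times in a row by pure chance.

```python
import random

random.seed(42)
n_gamblers, n_rounds = 10_000, 10

# A gambler "survives" only by winning every single round of a fair bet.
streak_winners = sum(
    all(random.random() < 0.5 for _ in range(n_rounds))
    for _ in range(n_gamblers)
)

print(f"{streak_winners} of {n_gamblers:,} won all {n_rounds} rounds")
# Expectation: 10,000 * 0.5**10 ~ 9.8 winners. Seen against the whole
# population, the streaks are exactly what chance predicts; seen only from
# the winners' side, they look like evidence of skill.
```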

“Cumulative advantage” is a theory that describes how winning now increases your odds of winning again in the future and vice versa.

  • Failure too is cumulative; losers are likely to also lose in the future, even if we don't consider demoralization as a consequence of failing.
  • The English language provides a good example. Zipf's law describes the resulting distribution; the mechanism behind it is that the more you use a word, the less effortful it becomes to use that word again, so you borrow words from your private dictionary in proportion to their past use. This explains why, out of the sixty thousand main words in English, only a few hundred are regularly used in writing, and even fewer commonly appear in conversation.
  • There are many more examples:
    • The more people live in a particular city, the more likely a stranger will be to pick that city as his destination.
    • The more people are using a certain platform, the more value it provides for every new user who joins.
    • The more successful you are in your job, the more opportunities will be provided to you. (see: Dominance hierarchy)
  • The underlying principle is as simple as this: the big get bigger and the small stay small, or get relatively smaller (the sketch below simulates this dynamic).
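
The dynamic is easy to simulate. A minimal preferential-attachment sketch (my illustration, not from the book): each new user picks a platform with probability proportional to its current size.

```python
import random

random.seed(1)
sizes = [1] * 10  # ten platforms, each starting with a single user

for _ in range(100_000):
    # Cumulative advantage in one line: the choice is weighted by current size.
    winner = random.choices(range(len(sizes)), weights=sizes)[0]
    sizes[winner] += 1

print(sorted(sizes, reverse=True))
# Typical outcome: one or two platforms end up with most of the users while
# the rest stay small -- even though all ten started perfectly equal.
```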

If you know the underlying equation, predicting the outcome is often easy. However, reverse engineering the process, i.e. deriving the equation from the outcome, is often almost impossible.

  • For example, knowing the mathematical rule behind a series of numbers, deriving the subsequent numbers is extremely easy. The reverse, however, is often extremely difficult.
  • The researcher P. C. Wason presented subjects with the three-number sequence 2, 4, 6 and asked them to guess the rule generating it. The subjects had to propose other three-number sequences based on the rule they had in mind and wanted to test; the experimenter would answer “yes” or “no” depending on each sequence's consistency with the actual rule. Once the subjects were confident in their rule, they would state it. It turns out that the actual rule was simply “numbers in ascending order.” Very few subjects got this right, since nearly everybody tried to confirm their rule rather than falsify it. Having a theory in mind, the subjects kept looking for confirming evidence (see the sketch after this list).
  • Note the similarities of this research with how we make sense of history. We tend to assume that history follows a certain logic and that, in theory, we should be able to forecast it. However, all we see are the events, never the rules, while still trying to derive overarching theories based on this.
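
A sketch of why confirmation fails in the 2-4-6 task: several very different rules all fit the seed sequence, so only a falsifying test can separate them. The candidate rules below are illustrative, not taken from the study.

```python
# Three candidate rules a subject might hold; all are consistent with 2, 4, 6.
candidate_rules = {
    "increasing by two":          lambda a, b, c: b == a + 2 and c == b + 2,
    "ascending even numbers":     lambda a, b, c: a < b < c and a % 2 == b % 2 == c % 2 == 0,
    "numbers in ascending order": lambda a, b, c: a < b < c,  # the actual rule
}

for test in [(2, 4, 6), (3, 10, 46)]:
    for name, rule in candidate_rules.items():
        print(f"{test} fits '{name}'? {rule(*test)}")

# (2, 4, 6) confirms all three rules at once, so confirming tests teach us
# nothing. Proposing (3, 10, 46) and hearing "yes" falsifies the two narrower
# rules in a single step -- which is exactly what the subjects failed to try.
```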

We systematically overestimate the effects of both positive and negative future events on our lives. This tendency is called “anticipated utility” by Daniel Kahneman and “affective forecasting” by Dan Gilbert.

  • The problem seems to be that we don't pay attention to our past experiences and are unable to learn from them.
  • Examples can be found everywhere. We assume the next promotion will change our life. We are afraid that things will never get back to normal after losing a close relative. We believe that if we get to build our dream house, we will be happy forever. And so on.
  • Unfortunately, this is not how we human beings work. Being the survival-driven social animals that we are, we are trained to quickly adapt to new circumstances and to constantly develop new goals and desires to strive towards.
  • In scientific terms, this characteristic is referred to as hedonic adaptation: humans constantly adapt to the status quo and judge their current state not in absolute terms but by perceiving relative changes.
  • One of the most cited pieces of research in this domain is a study from 1978 where researchers interviewed two very different groups about their happiness – recent winners of the Illinois State Lottery and recent victims of catastrophic accidents, who were now paraplegic or quadriplegic. The participants were asked how much pleasure they derived from everyday activities such as chatting with a friend or laughing at a joke.
  • When the researchers analysed their results, they found that the recent accident victims reported gaining more happiness from these everyday pleasures than the lottery winners. And even though the lottery winners reported more present happiness than the accident victims (4.0 compared to 2.96 on a five-point scale), the authors concluded that “the paraplegic rating of present happiness is still above the midpoint of the scale and the accident victims did not appear nearly as unhappy as might have been expected.”

The round-trip fallacy describes confusing absence of evidence with evidence of absence.

  • When examining a patient for cancer, the doctor can share negative results by saying: we couldn't find any evidence of cancer. The acronym used in the medical literature is NED, which stands for No Evidence of Disease. What the doctor cannot say is: we found evidence of no cancer. There is no such thing as END, Evidence of No Disease.
  • One example of the round-trip fallacy is the case of mothers' milk in the 1960s. Doctors looked down on mothers' milk as something that could be replicated equally well by their laboratories. Unfortunately, they missed the many useful components of mothers' milk that are crucial for the development of an infant: a simple confusion of the absence of evidence of the benefits of mothers' milk with evidence of the absence of such benefits.
  • Those infants who were not breast-fed had an increased risk of a number of health problems, including a higher likelihood of developing certain types of cancer. Furthermore, benefits to mothers who breast-feed were also not taken into consideration, such as a reduction in the risk of breast cancer.
  • What this teaches us once again is that we know with a lot more confidence when we are wrong (i.e. falsification) than when we are right (confirmation).
