A Chocolatier's 100-Year Predictions Highlight Our Limitations
A Futures Thinking Perspective
Sept 3, 2025
Hello friends,
Thank you for joining this week's edition of Brainwaves. I'm Drew Jackson, and today we're exploring:
Predictive Abilities & Limitations
Credit Gemini
Before we begin: Brainwaves arrives in your inbox every other Wednesday, exploring venture capital, economics, space, energy, intellectual property, philosophy, and beyond. I write as a curious explorer rather than an expert, and I value your insights and perspectives on each subject.
Time to Read: 60 minutes.
Let's dive in!
The Owl is a very wise bird; and once, long ago, when the first oak sprouted in the forest, she called all the other Birds together and said to them, "You see this tiny tree? If you take my advice, you will destroy it now when it is small: for when it grows big, the mistletoe will appear upon it, from which birdlime will be prepared for your destruction."
Again, when the first flax was sown, she said to them, "Go and eat up that seed, for it is the seed of the flax, out of which men will one day make nets to catch you."
Once more, when she saw the first archer, she warned the Birds that he was their deadly enemy, who would wing his arrows with their own feathers and shoot them.
But they took no notice of what she said: in fact, they thought she was rather mad, and laughed at her.
When, however, everything turned out as she had foretold, they changed their minds and conceived a great respect for her wisdom. Hence, whenever she appears, the Birds attend upon her in the hope of hearing something that may be for their good.
She, however, gives them advice no longer, but sits moping and pondering on the folly of her kind.
- Aesop's fable, as translated in 1912
The future actively shapes our lives, yet the way humans have historically thought about and approached it has been flawed. Futures Thinking is a modern discipline that rethinks how we conceive of and engage with the future.
Rather than trying to predict specific future events, Futures Thinking encourages a shift in how we conceptualize the future itself: drawing on diverse cultural perspectives, attending to foundational characteristics of the world, reviewing the modern literature deeply, and recognizing that our present actions and narratives significantly influence future outcomes. Since most major life decisions are essentially bets on the future, adopting this framework could transform how we approach education, careers, relationships, and other essential aspects of life.
Today, our discussion revolves around how our world is set up and how these underlying characteristics shape everything that goes on within it, focusing specifically on Futures Thinking Tenet #5: Cognitive limitations (biases, blind spots, and simplification) make unpredictability inevitable.
Credit Rare Historical Photos
A CHOCOLATIER'S EXPECTATIONS FOR THE NEXT 100 YEARS - ENTIRE GROUPS OF PEOPLE ARE FOCUSED ON STUDYING AND PREDICTING THE FUTURE - ARE FUTURISTS BETTER AT PREDICTING THE FUTURE THAN THE AVERAGE PERSON?
In 1817, the confectioner Theodor Hildebrand founded the Theodor Hildebrand & Son chocolate factory in Berlin, Germany. In 1830, following the development of the steam engine and its adoption across Europe, Hildebrand adopted steam power in its factories, enabling the company to offer chocolates at a lower price.
Around 80 years after its founding, the chocolate maker, by then known as Hildebrand's, undertook a clever marketing campaign (especially clever in hindsight) labeled Germany In the Year 2000.
As part of the 1900 Paris World's Fair, the chocolate company commissioned 12 postcards predicting what life would be like 100 years in the future (In the Year 2000, hence the name). You can view the 12 images here.
Hildebrand placed these postcards in its chocolate boxes from 1899 to 1910, an early form of collectible (like the McDonald's mystery kid's meal toys).
The postcards show a wide variety of predictions for the future, ranging from accurate to plausible to absurd.
Below is a list of the 12 predictions made about our lives today (some have multiple interpretations):
- Personal flying machines
- X-ray machines for police / x-ray surveillance devices
- Personal airships
- A live audiovisual broadcast of a theatre performance / watching a live performance while not in the theater
- Putting a roof over a city / weather-proof city roofing
- Moving an entire city block by rail / steam relocation of mobile houses
- Underwater ships for tourists / tourist submarines
- Riding and walking on water / strolling on a lake with the aid of balloons
- Hybrid rail-water warship
- Machine for creating good weather / weather-controlling machines
- Excursions to the North Pole / tourism at the North Pole
- Moveable sidewalks
With the benefit of hindsight, we can assess the relative accuracy of these predictions, which, over such a long timespan, are far more accurate than you would expect of comparable predictions made in 2000 about the year 2100.
There is an entire field dedicated to this, usually labeled under the term "futurists." Futurists and those who participate in "futures studies" seek to study and predict possible futures through the analysis of trends, emerging technologies, societal shifts, and other drivers of change.
As Wikipedia puts it, "part of the discipline thus seeks a systematic and pattern-based understanding of past and present, and to explore the possibility of future events and trends."
Generally, futurists focus on the medium- and long-term horizons, planning and strategizing to anticipate events far into the future, with a specific interest in changes of a "transformative impact, rather than those of an incremental or narrow scope."
In the mid-1940s, the first professional "futurist" consulting institutions (RAND, SRI, etc.) began to engage in long-range planning, systematic trend watching, scenario development, and visioning. They started primarily under military and government contracts during WWII, but afterward began serving private institutions and corporations.
There's a lot to unpack here, and we'll do so gradually over the course of this article.
Let's start with the most damaging claim that opponents across the spectrum have made about these types of people (whether official futurists acting according to their profession or other casual predictors): they are no better than the common man at predicting the future.
One of those staunch opponents has been Nassim Nicholas Taleb in his book The Black Swan, in which he aggressively states the following:
In other words, Taleb claims that these "experts" in prediction take all of the credit when they are correct and deflect the blame onto countless other factors when they are incorrect. In practice, this creates the impression that they are better at predicting than they actually are. To borrow an example from a completely different discipline, this is similar to how some religious zealots frame the events of this world: if it's good, it belongs to God; if it's bad, it belongs to man.
I've already discussed at length how our approach to randomness (and likewise uncertainty) can lead us to fall victim to our own understanding of the world, and we'll continue elaborating on this train of thought throughout this article.
Taleb doesn't stop there, however, casting aspersions on the quality of the possible futures predicted by these futurists (and those like them).
To be clear, futurists and those in the field of "futures studies" don't truly "predict" the future in the sense of offering a single, definitive view of it (and those who do are ignorantly naive). Instead, they explore a variety of possible futures to help their audiences understand the range of possible eventualities.
For instance, take the example of Hildebrand's chocolates above. The postcards offer many potential futures, and while many of the predictions are relatively accurate to our reality in the 21st century, a lot of key developments are missing.
As Taleb puts it,
From Taleb's viewpoint, which we'll explore in more depth alongside other philosophers' perspectives, many of the most significant developments throughout history simply could not be predicted (at least not far in advance, and certainly not by a large number of people).
Sir Francis Bacon, in his 1620 work Novum Organum, argued that the most important advances are the least predictable ones, "having no affinity or parallelism with anything that is now known, but lying entirely out of the beat of the imagination, which have not yet been found out."
As Taleb would label them, these events are "Black Swans."
Credit Unsplash
FUTURISTS ARE THE CHIROPRACTORS OF FUTURES THINKING - HIGHLIGHTING 5 KEY ISSUES WITH OUR PREDICTIVE ABILITIES - WE ARE TRULY ONLY GOOD AT PREDICTING THE BORING
When I get bored with the present reality I find myself in, I think it's fun to see what people are thinking tomorrow may look like. I find these stories, predictions, and narratives of the future to be very entertaining.
For instance, I recently came across this article, detailing 10 predictions of what 2035 will look like (published in 2024). To summarize, here are the 10 subjects of the predictions:
- The Rise of Living Movies
- The Misinformation and Verification Economy
- Quantum Hegemony
- A(R) New Social Paradigm
- Preventive Genomics and Healthcare
- The New Reality of Work
- ClimateTech Goes Jetstream
- Hyper-Hyper-Personalization
- AGI, ASI, and Society
- The Convergence of Humans and Machines
Honestly, there's some cool stuff listed there; in many ways, you could read this and be excited about the future. (These articles rarely predict war, genocide, plague, disease, or other negative events, so the bias is quite evident.)
That is, until you realize what these really are (or what they are more properly characterized as): hopes, wishes, faith, all for a "better" world than we have today. Granted, I've cherry-picked one article out of many, but the principle holds for a large portion of them.
It's difficult to apply hardened principles to futurists; doing so severely constricts their profession. A bad analogy would be to call them the chiropractors of Futures Thinking (maybe that's too far, but alas).
In the section above, I gave voice to the most damaging claim that opponents make about futurists and those who predict the future: they are no better than the common man at predicting the future.
To evaluate this claim, we must first understand the methodologies these practitioners use to predict the future. The main methods are explained below:
Method #1: Trend Analysis & Extrapolation - A key way futurists predict the future is through the analysis of information to draw trends, which they then extrapolate into the future. Extrapolation is a simple form of "forecasting".
Method #2: Forecasting - Forecasting is the process of looking from the present to the future. It involves analyzing patterns and using statistical models to estimate future outcomes.
Method #3: Backcasting - Backcasting starts by envisioning a future state, then works backward to identify the steps and policies needed to achieve that vision. In other words, you start with the end in mind, then work back to the present.
Method #4: Wildcard Consideration - Wildcards are low-probability, high-impact events that could dramatically alter the future. While these events are hard to predict, considering their potential helps build resilience.
Method #5: Scenario Planning - Scenario planning involves developing multiple plausible alternative futures, crafting detailed stories or descriptions of what these different futures might look and feel like.
Method #6: Cross-Impact Analysis - Cross-impact analysis examines how different trends and events might influence each other, mapping interdependencies and assessing how the occurrence of one event shifts the probabilities of others.
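Methods #1 and #2 can be sketched in a few lines of code: fit a line to historical observations and project it forward. This is only a toy illustration under stated assumptions, not how any particular futurist works; the function names and data points are invented.

```python
# Toy sketch of trend analysis and extrapolation (Methods #1 and #2):
# fit a straight line to past observations by least squares, then
# project it forward. All data points are invented for illustration.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

def extrapolate(xs, ys, future_x):
    """Project the fitted trend to a future point."""
    a, b = fit_line(xs, ys)
    return a + b * future_x

# Hypothetical yearly measurements that happen to grow roughly linearly.
years = [2020, 2021, 2022, 2023, 2024]
values = [10.0, 12.1, 13.9, 16.0, 18.1]

# The extrapolation assumes the trend continues -- exactly the
# assumption the rest of this article questions.
print(round(extrapolate(years, values, 2030), 1))  # -> 30.1
```

The projection is only as good as the assumption baked into it: that the future will keep behaving like the past, which is the core weakness discussed under Prediction Problem #2 below.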
People skeptical of future predictions (and of those who make them) argue that our capacity to foresee the future is deeply flawed, offering many explanations in an attempt to poke holes in every aspect of the argument for making such forecasts.
These explanations form the basis for our discussion today, highlighting the innate world factors, biases, complexities, and blind spots that restrict or fully prevent our ability to predict with any accuracy.
Prediction Problem #1: We Can't Predict Novel Futures Because Then They Would Exist in the Present
I'm sure you're wondering about the Taleb quote I listed above, describing how the three biggest recent developments (the computer, the internet, and the laser) weren't predicted. Is his claim legitimate? If so, what factors cause our failures to predict these major, novel futures?
It's difficult to pinpoint the very first "prediction" of the internet, but some sources point to vague representations of an "internet-like" technology around the turn of the 20th century. An 1879 prediction envisioned a source that provided a constant stream of news; a 1904 prediction envisioned a source that allowed the user to view events all around the world in real time; and a 1909 prediction envisioned a vast archive of information with the ability to communicate with others visually.
Fast forward several decades to the early 1960s, and we see the first "concrete" prediction of the Internet by research scientists Licklider, Kleinrock, Baran, and Roberts. They envisioned a globally interconnected set of computers through which everyone could quickly and easily access data and programs from any site.
Over the next decade or two, what we now know as the Internet was formed.
Using the Internet as our core example of the three, it seems as though (in hindsight) we can draw faint lines to places throughout history where parts of the future were "predicted", but in reality, the Internet was only competently foreseen right before it was actually invented.
The internet and its varying predictions showcase one of the fundamental issues with predicting novel, massive future developments (which Taleb would label "Black Swan" developments): to understand the future well enough to predict it, you would need to incorporate elements of the future itself.
Taleb uses a better, clearer example to illustrate this point:
Using the idea of the wheel in the Stone Age, Taleb illustrates that if you had the level of understanding of the wheel needed to "predict" it, then the "prediction" isn't a prediction of something completely unknown, but rather a description of something you've already conceived.
This is what makes truly novel futures unpredictable:
1) The "Pre-Conception" Trap: For something to be truly novel and unpredictable, it must be something that, by its very nature, falls outside of our current frameworks, concepts, and imaginations. If you can describe it well enough to "predict" it, you've already brought it into the realm of the conceivable.
2) Lack of Precursors/Analogies: Major, novel breakthroughs often lack clear precursors or direct analogies in the existing world.
3) Emergent Properties: Truly novel developments often lead to emergent properties that cannot be foreseen by simply analyzing their constituent parts.
4) Technological Dependencies and Leapfrogging: Many breakthroughs depend on a confluence of other, often unpredicted, technological advancements.
5) The "Fuzzy Front End" of Innovation: As discussed in Tenet #3, innovation doesn't happen in a linear fashion. The initial stages are often characterized by ambiguity, false starts, and unexpected discoveries. What seems like a clear path in hindsight was a messy, uncertain process in the present. Predicting a specific outcome in this "fuzzy front end" is exceedingly difficult because the very nature of the innovation is still being defined.
Taleb's point is that our predictive faculties are inherently limited when it comes to true novelty. If something is truly revolutionary and unprecedented, describing it well enough to "predict" it essentially means you've already conceptualized it. This act of conceptualization moves it from the realm of the truly unknown future into the present realm of invention or discovery.
This relates to our discussion of uncertainties in Layer 3 and Layer 5 in Tenet #4. As Taleb writes, "Prediction requires knowing about technologies that will be discovered in the future. But that very knowledge would almost automatically allow us to start developing those technologies right away. Ergo, we do not know what we will know."
Therefore, the "biggest developments" are often unpredictable precisely because their novelty means they couldn't exist as clear concepts in the present until they were on the verge of, or actually, being created. Our "failures" to predict them aren't failures of effort, but rather a fundamental limitation stemming from the nature of groundbreaking innovation itself.
Prediction Problem #2: The Delusion of Repetition, or Why History Is a Poor Guide for the Extraordinary
This prediction problem is actually two very closely linked issues: the problems with history and the problems with the extraordinary.
To begin, people have skepticism about the use of history to predict future events. Often, those creating predictions of the future assume that the future will be similar to the past. Using language we developed in Tenet #3, these people are assuming a relatively linear view of history and the future.
As Taleb writes, "The only way you can imagine a future 'similar' to the past is by assuming that it will be an exact projection of it, hence predictable." Here, Taleb is describing the assumption of a directly causal, linear relationship between past and future (a preposterous view in retrospect).
Relaxing this view slightly, Amar Bhide writes in his book Uncertainty and Enterprise: Venturing Beyond the Known that "Keynes and Knight were the first to seriously question whether patterns of the past always reveal the path to the future." In relaxing Taleb's assumption of the future being an exact replica of the past, Keynes and Knight allow for the introduction of exponentials into the equation, adding flexibility to our linear projection.
As such, we can propose that the assumption of repetition can be flawed. In other words, history can be a poor guide for the future. This is especially highlighted in matters of the ordinary and extraordinary.
To add some structure to our discussion, let's differentiate between ordinary and extraordinary events. In our case, ordinary events refer to events that repeat very closely to how they occurred before. Extraordinary events refer to "one-offs", events that are non-repeatable.
We discussed the idea of "one-offs" at length in Tenet #4:
We, whether we like it or not, primarily learn from repetition. The example I love using to illustrate this point is the analogy of the turkey: it lives on a farm for the first 1,000 days of its life without incident, eating and surviving well. If we assume this is an ordinary affair (subject to the linear dynamics discussed in Tenet #3), we would expect day 1,001 to continue the same trend.
However, if this is an extraordinary affair (subject to the exponential dynamics discussed in Tenet #3 and the vast uncertainty discussed in Tenet #4), we couldn't be certain of anything and would need to consider whether the next day was Thanksgiving.
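The turkey's inductive reasoning can be sketched in a few lines. This is a toy illustration only; the forecasting rule, labels, and day counts are invented for the sketch.

```python
# Toy version of the turkey problem: a forecaster that predicts tomorrow
# by extrapolating the most frequent outcome in its history. The rule,
# labels, and day counts here are invented for illustration.

def naive_forecast(history):
    """Predict the next day by assuming the most frequent past outcome repeats."""
    return max(set(history), key=history.count)

history = ["fed"] * 1000          # 1,000 uneventful days on the farm
prediction = naive_forecast(history)
print(prediction)                 # -> fed

# Day 1,001 is Thanksgiving. The actual outcome has no precedent in the
# data, so no history-based extrapolation could have anticipated it.
actual_day_1001 = "thanksgiving"
print(prediction == actual_day_1001)  # -> False
```

The forecaster is perfectly accurate for 1,000 days and catastrophically wrong on the only day that matters, which is exactly the asymmetry Taleb highlights.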
Our reliance on historical data and patterns, as discussed above, while seemingly logical, is inherently biased towards predicting routine, repeatable eventsâthe more routine the task, the better we get at predicting it.
This strength becomes our weakness when we are confronted with novel, non-routine, non-repeatable events (which Taleb labels as âBlack Swansâ). By assuming the future will simply be an extrapolation of the past, we systematically ignore these novel events and are then surprised when the extraordinary presents itself.
Taleb writes, "My results were that regular events can predict regular events, but that extreme events, perhaps because they are more acute when people are unprepared, are almost never predicted from narrow reliance on the past."
On the surface, this issue is difficult, but not impossible to deal with. However, this issue gets infinitely more complicated when we consider the rather unpleasant thought I ended with in my discussion of one-offs in Tenet #4:
To employ another Taleb quip, "How can we know the future, given knowledge of the past?"
Prediction Problem #3: There Is No Reliable Way to Compute Small Probabilities
In his book, Taleb states, "There is no reliable way to compute small probabilities." This is one of the key problems he cites with our ability to predict any number of futures.
For instance, what is the probability of being able to travel at light speed in the next decade? It would be an incredibly improbable feat, but our ability to assign a direct probability to it is nearly impossible. Is it 1 in 1,000? 1 in 10,000,000? Who knows?
This is where the field of statistics begins to meet its match. Taleb characterizes this as "statistical undecidability", stating that "statistics is fundamentally incomplete as a field, as it cannot predict the risk of rare events, a problem that is acute in proportion to the rarity of these events."
To accurately compute a very small probability (on the order of one in millions or billions), you would need an enormous amount of historical data or observations. For almost every problem involving probabilities this small, however, nowhere near the necessary level of data exists.
When you don't have enough data to ground a prediction, you resort to theoretical models to estimate probabilities, but even these models aren't without fault. They rest on countless assumptions about the underlying distribution of events and can be even less accurate.
Furthermore, researchers have found that people assign different probabilities to different future states of the world, which they call "subjective probabilities." As defined, subjective probability is a type of probability derived from an individual's personal judgment or experience about whether a specific outcome is likely to occur.
To clarify, Taleb doesn't mean it's literally impossible to assign a number to a small probability (some minuscule probability exists for each nearly impossible event), but that any such number offered as an estimate is inherently unreliable, meaningless in practice, and dangerous to act upon.
An example of this in practice is the effect of small errors. When estimating a very small probability, even tiny errors in your methodology can lead to massive errors in the final figure. For example, if the true probability is 0.0001% and you estimate it as 0.001%, you're off by a factor of 10, a potentially huge discrepancy once multiplied by the impact of the event.
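One standard way to quantify the data problem is the statisticians' "rule of three", a common rule of thumb (not from Taleb's book): if an event has never occurred in n independent trials, an approximate 95% upper confidence bound on its probability is 3/n. A sketch:

```python
# The "rule of three," a standard statistical rule of thumb (not a method
# from the article): if an event has occurred zero times in n independent
# trials, an approximate 95% upper confidence bound on its probability
# is 3/n.

def rule_of_three_upper_bound(n_trials):
    """Approximate 95% upper bound on p after n trials with zero events."""
    return 3.0 / n_trials

# Even a decade of daily observations (~3,650 trials) with zero
# occurrences only lets you conclude p is probably below about 0.08%:
print(f"{rule_of_three_upper_bound(3650):.5f}")

# Distinguishing a one-in-a-million risk from a one-in-a-billion one
# would require on the order of millions to billions of clean trials --
# data that rarely exists for genuinely rare events.
print(rule_of_three_upper_bound(1_000_000))
```

The bound shrinks only linearly with the number of observations, which is why probabilities on the scale Taleb worries about are effectively out of reach for empirical estimation.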
Prediction Problem #4: Vague Predictions Are Often More Helpful Than Specific Predictions
Upon initial inspection, this statement might seem to be counterintuitive. In our predictions, we desire the utmost precision to ensure accuracy and usefulness.
To illustrate this point, consider the following two predictions:
- Prediction #1: The Vice President of the United States will be assassinated on domestic soil in the next year.
- Prediction #2: There will be a major attempt to destabilize the United States' political regime at some point in the next year.
If you had to choose between the two, most people would choose the first due to its clarity and simplicity.
However, there are many drawbacks to preferring specific predictions over those that may be more vague and seemingly unhelpful.
The first issue with specific predictions (like the one listed above) is that they have a much higher probability of being incorrect. A highly specific prediction has an extremely low probability of being precisely correct, as even a slight deviation makes it "wrong." A vaguer prediction has a much higher chance of aligning with reality: it provides a general direction or trend without committing to a single, easily falsifiable point. As such, there's more room to be right with a vague prediction.
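The first issue is really just set logic: the specific event is one member of the larger set of outcomes that would satisfy the vague prediction, so the vague prediction can never be less probable. A sketch, with all probabilities invented purely for illustration:

```python
# Why vague predictions have "more room to be right": the specific event
# is a single member of the set of events satisfying the vague prediction.
# Every probability below is invented for illustration.

destabilization_events = {
    "vp_assassinated_domestically": 0.001,
    "coup_attempt":                 0.004,
    "contested_election_crisis":    0.010,
    "major_foreign_interference":   0.015,
}

# Prediction #1 is satisfied by exactly one outcome.
p_specific = destabilization_events["vp_assassinated_domestically"]

# Prediction #2 is satisfied by any of these (assumed mutually
# exclusive) outcomes, so its probability is the sum.
p_vague = sum(destabilization_events.values())

print(p_specific)
print(round(p_vague, 3))
assert p_vague >= p_specific  # the vague prediction is never less likely
```

With these invented numbers the vague prediction is 30 times more likely to come true than the specific one, despite (or rather because of) saying much less.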
The second issue with specific predictions is the inherent lack of flexibility and adaptability. As the future is inherently uncertain (discussed in Tenet #4), overly specific predictions can create a rigid mindset, leading organizations or individuals to pursue a single path even when new information is presented. Vague predictions, on the other hand, can allow for contingency planning and the ability to adapt as the future unfolds.
Given this information, in retrospect, which of the predictions would you now prefer?
People often prefer precise information, even when it's less likely to be accurate, which some have called the "precision paradox." A precise prediction feels more authoritative and insightful, but more often than not, it's simply a distracting illusion.
As such, specific predictions are like trying to hit a tiny bullseye in the dark: you're almost certainly going to miss. Vague predictions are like aiming at a much larger target. As Taleb puts it, we shouldn't try to predict precise Black Swans, as doing so tends to make us more vulnerable to the ones we did not predict.
In their book, Ikigai: The Japanese Secret to a Long and Happy Life, Garcia and Miralles employ a statement by Jeff Howe which summarizes this issue concisely,
Prediction Problem #5: Predictions Change Behavior When They Are Widely Known
In Jerry Neumann's 2015 article Strategies Against Systems, he left a helpful nugget buried within his concluding paragraph:
This statement describes a phenomenon social scientists have dubbed "reflexive predictions." A 1973 article by George D. Romanos in Philosophy of Science articulated the concept nicely:
To be clear, a reflexive prediction is one where the act of making and disseminating the prediction itself influences the outcome, with two main types: self-fulfilling prophecies and self-defeating prophecies.
A self-fulfilling prophecy is a prediction that, once made and believed by relevant parties, causes people to act in ways that make the prediction come true. An example of this could be the false rumor of a bank failing, which causes people to withdraw their money, leading to the bank's actual collapse.
In contrast, a self-defeating prophecy is a prediction that, when widely known, prompts action to prevent the predicted outcome from happening, thus falsifying the original prediction. From the 1973 article:
These predictions are deemed "reflexive" as they involve a circular relationship between cause and effect: the cause (the prediction) influences the effect (the outcome), and the effect, in turn, can alter the original cause or how it's perceived. How?
When a prediction is widely disseminated, it enters the collective knowledge base and can become a new piece of information that agents (individuals, organizations, markets, governments) incorporate into their decision-making processes.
Reflexivity makes prediction in these subjective domains (social sciences, economics, etc.) inherently more complex than in the natural sciences. A forecaster isn't just an observer; they are potentially an active participant in shaping the very future they are trying to predict (what some have hilariously dubbed the "Oedipus effect"). Neumann describes this phenomenon well:
The key is that in systems involving human agency and decision-making, knowledge of a prediction can change behavior.
Given the above 5 issues with our predictions, what types of predictions have we ruled out? Which types of predictions are still relatively valid?
Through a couple of factors, we've ruled out predictions of things that are "novel", "non-routine", "non-repeatable", "extraordinary", "one-off", or "rare". Additionally, we've ruled out specific predictions.
As such, what are we left with?
We're relatively good (I'll elaborate on the relative goodness) at predicting anything ordinary, repetitive, boring, or "inconsequential", or anything governed by vague predictions.
Credit Adobe
UNCERTAINTY RESTRICTS OUR ABILITY TO PREDICT ACCURATELY - DISSECTING THE TRIPLET OF INFORMATION OPACITY - WE CAN'T ACCURATELY PREDICT LAYERS 3-5 OF THE UNCERTAINTY FRAMEWORK
If your elementary or childhood education was like mine, at some point you either read or watched The Giver. No worries if you didn't, or if you've forgotten the story; here's the high-level SparkNotes:
It's a captivating story, full of despair, ignorance, hope, struggle, color (and the lack thereof), and much more; I would recommend a full read or watch.
The society portrayed in The Giver is an excellent juxtaposition to the "real world" we live in today in one key aspect: the realm of uncertainty.
You may have missed the significance in a hasty read of the summary, so I would recommend rereading the first paragraph in particular. Specifically, there are three main ways in which uncertainty is highly limited in this society: "sameness", assigned roles, and memory control.
Almost everything (as much as possible) is the same within this community; the community leaders have eliminated or significantly suppressed choice, emotion, pain, weather variation, landscape fluctuation, and other key differences (racial, emotional, and even color itself).
At age 12, children are "assigned" their lifelong professions (similar to how it is portrayed in the Divergent series). Additionally, partners and children are also "assigned" to them. Both of these factors remove a large amount of the uncertainty present in their futures.
Lastly, all memories of pain, historical struggles, and even intense joy are held by one person, the Receiver (later known as the Giver), so the rest of the community lives in a state of naive tranquility.
We should be glad our world isnât like this. If it were, we too would set off for Elsewhere, in the pursuit of a land wherein there was a larger amount of uncertainty.
Splicing this story into the concept at hand: we're simultaneously blessed and cursed by the amount of uncertainty present in our world. A key factor preventing us from accurately predicting the world around us is also a beneficial property of it: uncertainty.
Jerry Neumann, the curator and proponent of the layers of uncertainty framework I harped on during Tenet #4, continues to provide key points to our topic at hand today, writing in his article Strategy Under Uncertainty:
Granted, he does say this within a business context, but the concept still holds strong: the presence of uncertainty within an issue prevents us from accurately predicting anything to do with that issue.
Compared to the more targeted predictive issues noted above, this is a much broader position of opposition.
Let's start with the issue of information: its quantity and its relevance in relation to uncertainty and predictions.
The relationship between information and prediction is fundamental: information is the raw material for prediction, and its quantity and relevance directly influence our ability to reduce uncertainty and, consequently, to make more accurate and confident forecasts.
When all of the information needed to make a decision is present, the resulting predictions are more likely to be accurate than predictions made with less information. Conversely, the problems that plague our predictive abilities multiply rapidly as information becomes more incomplete.
Susan Cain broaches this issue from a practical point of view in her book Quiet: The Power of Introverts in a World That Canât Stop Talking:
Itâs difficult to derive the answer. Amar Bhide introduces this concept by stating, âConfronting uncertain options requires some awareness of what we donât know.â What do we need to be aware of?
In this case, it would be the extent of what Taleb has labeled the âtriplet of opacityâ:
Diving deeper into this triplet, each portion closely affects our abilities to predict.
Firstly, the illusion of understanding refers to our innate tendency to believe we grasp the worldâs complexities more thoroughly than we actually do. In other words, weâre succumbing to the pathology of thinking that the world we live in is more understandable, more explainable, and therefore more predictable than it actually is.
Our discussions of Tenets #1, #2, and #3 made this point. This portion manifests itself in the following ways (which we'll discuss later on in this article):
- The Narrative Fallacy
- Confirmation Bias
- Hindsight Bias
- Ignorance of the Unknowns
This illusion is dangerous for our predictions because if you think you understand a system perfectly, you wonât seek out new information, challenge your assumptions, or prepare for unexpected outcomes. When the world eventually deviates from your simplified model, it leaves you open to blind spots and tricky situations.
The second portion of this triplet is the retrospective distortion. This biased viewpoint describes how history, when viewed in retrospect, appears far more orderly, logical, and predictable than it actually was in real time. We'll discuss this much further in our bias section below.
The third portion of this triplet is the overvaluation of factual information. This refers to the dangers of relying too heavily on overly precise, categorized, or expert-driven knowledge, especially when it attempts to force complex reality into neat, rigid mental âboxesâ. Again, we discussed this deeply in Tenet #4, how our simplification, while seeming to help in the short run, only hurts us in the long run. This portion manifests itself in the following ways (which weâll discuss later on in this article):
- Over-Reliance on âFactsâ
- Categorization and Reductionism
- The âExpert Blind Spotâ
- Ignoring Unknown Unknowns
This overvaluation exposes us to the problem of the âtunnel visionâ effect (again, weâll talk more in depth about this later on in this article).
As illustrated, this triplet of opacity is just one way in which information affects uncertainty and prediction. In our discussion of uncertainty in Tenet #4, we highlighted the importance of information throughout the layers of uncertainty:
The uncertainty angle helps provide depth and clarity to the above discussion of prediction, especially once we add in the specific layers from the uncertainty framework we discussed in Tenet #4:
In Layer 1, the future is relatively clear, often because we possess a high quantity of highly relevant, reliable, and timely information. This information allows us to identify stable trends, understand cause-and-effect relationships, and project forward with a relatively high degree of confidence.
In Layer 2, there are many alternative futures. Here we have some information, but itâs incomplete or ambiguous, leading to a limited set of distinct possible outcomes. The information to make predictions exists, but we donât personally have it. The goal within this layer is to collect more targeted information to determine which of these alternatives is most likely.
In Layer 3, there is a wide range of potential futures. Unfortunately, our information is insufficient to narrow them down into discrete and measurable alternatives. The relevance and quantity of available information are low; collecting more information helps us understand the boundaries of this range.
In Layer 4, we have reached a state of true ambiguity (fundamental unpredictability). Even with vast amounts of information, the future remains fundamentally ambiguous due to inherent randomness, complexity, or novel emergent properties of the world (discussed in Tenet #1).
In Layer 5, we are in a realm of true chaos (complete unpredictability). In truly chaotic systems, even tiny variations in initial conditions (which we can never perfectly measure or know, no matter how much information we gather) lead to vastly different outcomes.
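Layer 5's sensitivity to initial conditions can be made concrete with the logistic map, a textbook chaotic system. This is a minimal sketch, not anything from the article's framework itself; the growth rate and the one-part-in-a-billion perturbation are my illustrative choices:

```python
# Sketch of "tiny variations in initial conditions lead to vastly different
# outcomes," using the logistic map x -> r * x * (1 - x) with r = 4.0 (a
# standard chaotic regime). All numbers here are illustrative assumptions.

def logistic_trajectory(x0, steps, r=4.0):
    """Iterate the logistic map and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000000, 50)
b = logistic_trajectory(0.400000001, 50)  # differs by one part in a billion

# Early on, the two futures are indistinguishable; after enough steps,
# they bear no resemblance to each other.
print(f"gap at step 5:  {abs(a[5] - b[5]):.2e}")
print(f"gap at step 50: {abs(a[50] - b[50]):.2e}")
```

No amount of extra measurement fixes this: shrinking the initial gap only delays, and never prevents, the divergence, which is exactly why more information fails to yield predictions in Layer 5.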
Overall, information is crucial for prediction, but its effectiveness depends entirely on the nature of the uncertainty. For reducible uncertainties, more information is the key to better predictions. For irreducible uncertainties, even perfect information wonât yield precise predictions.
The above discussions of uncertainty layers and the triplet of opacity have helped develop the notion that uncertainty present in the world restricts our abilities to accurately predict future events.
One more perspective is necessary to fully define exactly which parts of the world weâre able to properly predict and which have restrictions.
Jerry Neumann's 2020 article, Productive Uncertainty, provides an adequate outline for this discussion. I'll interleave his points with the uncertainty layers we've discussed here and throughout Tenet #4:
Neumann starts by differentiating between novelty uncertainty and complexity uncertainty. Novelty uncertainty pertains to Layer 3 uncertainty, and complexity uncertainty pertains to Layer 4 and 5 uncertainty.
As we defined, novelty uncertainty is when there are things you just donât know, even after doing all of your research and thinking things through to their logical conclusions. Itâs especially common when someone does something for the first timeâhence the employment of the word novelty.
When something hasnât been done before, often no one can predict the outcome (with any abnormal accuracy that is). As Neumann writes, âPrediction relies on either inductive or deductive reasoning: the first requires data and the second requires an understanding of the process that produces the result. Novelty uncertainty results when we have neither.â
As we defined, complexity uncertainty is unknown unknowns, possibilities we cannot imagine, as well as known unknowns.
It's impossible to predict what many complex systems will do due to their interconnections and interdependencies; information opacity and feedback loops compound the problem.
In either case, novelty or complexity, we face prediction limitations:
- Novelty: When you canât predict something because no one has done it before.
- Complexity: When you canât predict something because the system you are in is changing in an unpredictable way.
Through this, we've shown that matters falling into Layers 3-5 of our uncertainty framework cannot be predicted with any accuracy beyond lucky one-offs.
Ultimately, I agree with Amar Bhide on this issue: âUncertainty fascinates and challenges. An entirely predictable existence would be unbearably dull.â I would much rather live in our society than the society portrayed in The Giver.
Credit CBS 42
WE WANT TO BELIEVE THE STORY THAT BIRDS ARENâT REAL - HUMANS STRIVE TO CURATE NARRATIVES IN EVERYTHING WE DO - NARRATIVES BLIND US TO THE WORLD AROUND US, COMPROMISING OUR PREDICTIVE ABILITIES
I love a good conspiracy theory; some are truly believable (scarily so) and others are⊠less than believable.
According to Peter McIndoe, a young man who in 2017 improvised a protest sign after seeing pro-Trump counter-protestors at the Women's March in Tennessee, birds aren't real.
What followed was a large-scale satirical conspiracy theory detailing the extent to which the birds that exist today are fake (i.e., not real). Donât you love it!
The movement claims that in the United States, all birds were exterminated by the federal government between 1959 and 1971 and replaced by lookalike drones. These drones are used by the government to spy on citizens. The movement claims that birds sit on power lines to recharge themselves, that birds defecate on cars to track them, and, astonishingly, that President Kennedy was assassinated by the government because he was reluctant to pursue the mass killing of the birds.
Doesnât that explain everything youâve ever wondered about the world? Canât you see the truth?
Whether you believe in the conspiracy or not, the story is captivating. Even the staunchest skeptic considered it for a second. We may never know the exact truth: are birds real, or are they government drones?
Taleb suggests that through the study of uncertainty present in our world and our reactions to it, we can begin to see biases, complexities, and blind spots in our abilities to see and ultimately predict the world.
To begin, we must start with what Taleb calls the ânarrative fallacy.â On a simple scale, this fallacy speaks to a fundamental human cognitive bias: our innate desire to create coherent, simple stories to explain complex, random, and often unpredictable events.
There are multiple parts that make up the overarching narrative fallacy:
- Our craving for stories and narratives
- Simplification of the world around us
- The illusion of understanding
- Hindsight bias and retrospective predictability
- Ignoring randomness and âsilent evidenceâ
Human minds are wired to seek patterns, cause-and-effect relationships, and meaning in the world around us. When faced with a sequence of facts or events, our minds automatically try to connect them into a narrative, even if those connections are not truly logical or accurate. We want to be told stories; this is why humans tell stories to their kids, and this is what fascinates us with fiction. As Taleb writes:
As discussed in Tenet #4, we are constantly engaged in simplifying life around us. We do this to ignore or downplay sources of uncertainty that are too distant, too complex, or too slow-moving to impact our immediate decision-making.
To assist in our creation of these narratives, we employ these simplification techniques, helping to reduce the vast amounts of information and uncertainty around us into a digestible storyline. As Taleb writes, âOur minds are wonderful explanation machines, capable of making sense out of almost anything.â This simplification, as discussed in Tenet #3, often includes imposing a linear, causal structure where none truly exists, leaving us vulnerable to exponential outcomes.
This is where the illusion of understanding comes into play. Once we construct a compelling story, we tend to believe it to be true (or more accurately, a true representation of the reality around us)âeven if itâs based on incomplete information, hindsight bias, or pure speculation.
To quote what I wrote in a section above:
To bring us back to the topic at hand, this illusion of understanding makes us feel more confident in our ability to predict future events, leading to a dangerous overestimation of our knowledge.
This situation is also known as epistemic arrogance, the concept that refers to an exaggerated sense of oneâs own knowledge or understanding, often leading to a dismissal of alternative perspectives and a lack of openness to learning from others. In brief, itâs when we overestimate what we know and underestimate uncertainty, when we think ânarrowly.â
Bringing us back to our core discussion here, Taleb states, ânarrativity causes us to see past events as more predictable, more expected, and less random than they actually were.â
The narrative fallacy exposes our vulnerability to overinterpretation and our predilection for stories over raw truths. As Taleb writes, âit takes considerable effort to see facts (and remember them) while withholding judgment and resisting explanations.â Itâs difficult for humans to look at the facts without trying to weave an explanation through them.
The narrative fallacy is closely linked to the confirmation bias and the hindsight bias.
Confirmation bias is when we tend to seek out, interpret, and remember information in a way that confirms our existing beliefs or our constructed narratives. Per Wikipedia, âpeople display this bias when they select information that supports their views, ignoring contrary information or when they interpret ambiguous evidence as supporting their existing attitudes.â
A 2016 article by Ron, Oren, and Dar cited a 1993 article by Friedrich, stating, âConfirmation bias is the tendency to make predictions and examine them by searching for information that is expected to confirm anticipations or desirable beliefs, avoiding the collection of potential refuting evidence.â In other words, we see evidence we want to see, evidence that conforms to our narrative of the world.
Hindsight bias is the tendency for people to perceive past events as having been more predictable than they were. After an event has occurred, people often believe that they could have predicted it or perhaps even âknownâ what the outcome of the event would be before it occurred.
Taleb describes the hindsight bias as such:
Hindsight bias leads people to overestimate how well they could have predicted an event before it happened. It can make people forget that outcomes were uncertain at the time and fail to appreciate the complexity of the situation.
By focusing on neat narratives and our simplified view of the world, we often fail to account for the immense role of randomness and âsilent evidence.â
In our discussions of futurists, I voiced Talebâs concerns about our perceptions of randomness and their effects on our abilities to predict. In summary, he accused these people of attributing success to their own abilities and failures to things outside of their control.
To add a layer to his claims, Taleb specifically states, âWe humans are the victims of an asymmetry in the perception of random events. We attribute our successes to our skills, and our failures to external events outside our control, namely to randomness.â
In practicality, this is often called the self-serving bias. Whether we like it or not, we have a built-in mechanism to protect our ego and self-esteem. When good things happen, weâre quick to take credit, attributing our success to our skills, intelligence, hard work, or deep understanding. Inversely, when things go wrong, we tend to externalize the blame, pointing to external factors outside our control.
Taleb's phrase, with its language of victimhood, properly encapsulates this bias: we don't perceive success and failure symmetrically. While this approach can benefit our mental well-being in many ways, it dramatically affects our ability to predict.
Specifically, this presents itself in four unique ways:
- Distorted Learning from Experience: When we succeed, we think itâs us. When we fail, we think itâs something other than us. This dramatically distorts our abilities to learn from our experiences.
- Overconfidence and Unrealistic Expectations: By constantly taking credit for successes and deflecting blame for failures, we develop an inflated sense of our own predictive capabilities.
- Resistance to Feedback and Learning: If you consistently attribute failures to external factors, youâll be less open to constructive criticism or feedback (the growth mindset).
- Poor Decision-Making: The self-serving bias compounds our unrealistic self-assessments, which can lead to suboptimal decisions (and likewise, suboptimal predictions).
As such, the immense role of randomness, and our perception of it, dramatically shapes the narratives we tell ourselves, and likewise our ability to predict and to learn from our predictions (via the confirmation and hindsight biases).
Additionally, "silent evidence," as those versed in this world call it, deeply shapes the narratives we tell ourselves. To define the term: silent evidence is information or evidence that is overlooked, ignored, or not readily available.
Taleb uses a vivid example to illustrate his point:
The main way we see silent evidence surface in the world is through the disregard of unsuccessful outcomes while magnifying successful outcomes, as witnessed in the example above.
Taleb gives another example to illustrate the opposite:
In this example, we see the opposite of the example above: silent evidence is the successful outcomes (the criminals who werenât caught), while weâre magnifying the unsuccessful outcomes (the criminals who were caught).
Throughout our narratives and daily realities we live in, we see the obvious, that which is directly in front of us (these things are confirming our views of the worldâconfirmation bias). We fail to see, overlook, or simply disregard the invisible and less obvious factsâyet those generally are the more meaningful events (weâre entering into Black Swan territory here).
Where this affects our predictive abilities is that, much like Taleb's accusation that futurists fail to take responsibility for failed predictions, the silent evidence bias distorts our perception of past events; it is what events use to conceal their randomness.
For instance, if we were to survive a significant crisis, the silent evidence bias would lower our perception of the risk weâve incurred, retrospectively underestimating how risky the situation actually was. In other words, specifically Talebâs, âDo not compute odds from the vantage point of the winning gambler⊠but from all those who started in the cohort.â
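Taleb's instruction to compute odds "from all those who started in the cohort" can be sketched with a toy simulation. The cohort size, round count, and per-round ruin probability below are invented purely for illustration:

```python
# Silent evidence in miniature: survivors report a spotless record, while the
# starting cohort tells the real story. All parameters are made-up assumptions.
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def run_cohort(n_players=10_000, rounds=10, p_ruin=0.2):
    """Each round, every remaining player is ruined with probability p_ruin."""
    survivors = 0
    for _ in range(n_players):
        if all(random.random() >= p_ruin for _ in range(rounds)):
            survivors += 1
    return survivors, n_players

survivors, cohort = run_cohort()

# The survivors' vantage point shows zero ruin; the cohort's shows the truth.
print("ruin rate seen by survivors: 0.0%")
print(f"ruin rate seen by cohort:    {100 * (1 - survivors / cohort):.1f}%")
```

With these parameters, roughly nine in ten players are eliminated along the way, yet none of them are around afterward to be interviewed; that missing majority is the silent evidence.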
Unfortunately, as weâve seen through the presence of Black Swan events, the silent evidence bias hides best when it has the largest impact. As Taleb writes, âthe severely victimized are likely to be eliminated from the evidence.â
Taleb leaves us with the final word on silent evidence: âOnce we seep ourselves into the notion of silent evidence, so many things around us that were previously hidden start manifesting themselves.â
To reiterate and summarize the issues of randomness and silent evidence in regard to the narrative fallacy, through our strict focus on narratives and stories and our failure to account for the immense role of randomness and silent evidence, we create a skewed perception of reality.
Taleb offers a concluding remark on the narrative fallacy:
Credit Medium
NEVER BRING GYM TRAINING TO A STREET FIGHT - ADDRESSING KEY BIASES, COMPLEXITIES, AND BLIND SPOTS PRESENT IN OUR PREDICTIVE ABILITIES - WE CANâT ESTIMATE UNKNOWN EVENTS, BUT WE CAN ESTIMATE THEIR IMPACT
Have you ever seen one of those side-by-side pictures online portraying two people, one usually a smaller, renowned professional fighter and the other a huge, muscular bodybuilder, with a caption like "It's hard to explain, but the guy on the left would defeat the guy on the right in a fight"?
If not, hereâs one below:
Credit Reddit
This is a funny example (in an abstract way) of one of our predictive fallacies (the Ludic fallacy, which weâll discuss below).
When thinking about defending against or attacking other people, fighters often choose highly organized competitive fighting styles (karate, boxing, Krav Maga, judo, etc.). These practices train athletes to excel within a specific set of rules, techniques, and permissible moves. Fighters trained in this manner optimize for known combat scenarios.
However, this contradicts one of the most basic properties of real combat: real-life combat, or street fighting, has no rules. This is why one of the most basic laws of self-defense was coined: fight in any way necessary to protect yourself.
Opponents can use dirty tricks, surprise weapons (never bring a knife to a gun fight), or attack in ways that are completely âillegalâ in the formal âgameâ of fighting.
Unpreparedness for unforeseen tactics or weapons can lead to defeat or death, even for highly skilled individuals who mistake their training for reality. In this case, our predictions of what lies ahead can literally leave us vulnerable to what is to come.
Besides the ever-present, overarching narrative fallacy and the triplet of opacity discussed previously, there are a multitude of other biases, complexities, and blind spots that compromise our ability to predict. The main eight are below:
- The Ludic Fallacy
- Tunneling
- Framing
- The Role of Luck
- Ideas Are Sticky
- Misrepresenting Past Predictions
- Ignorance of Our Prediction Accuracy & Timeframe
- Ignorance of the Unknowns
1) The Ludic Fallacy
The ludic fallacy, as Taleb dubbed it, refers to the misuse of games to model real-life situations: the improper employment of structured games, with their fixed rules, to understand and predict the real world.
To set up the situation at play, games have known rules and probabilities. For instance, the rules of blackjack are fixed and transparent. The probability of a certain card being drawn can be precisely calculated (ever watch a live-streamed tournament with the real-time stats?).
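That calculability is worth making concrete. A tiny sketch using nothing beyond standard facts about a 52-card deck:

```python
# In a game, the probability space is fully known: a fresh 52-card deck has
# exactly four cards of each rank, so the odds of any draw are exact.
from fractions import Fraction

DECK_SIZE = 52
CARDS_PER_RANK = 4

p_ace = Fraction(CARDS_PER_RANK, DECK_SIZE)  # exact, not estimated
print(f"P(ace on first draw) = {p_ace} ~= {float(p_ace):.4f}")
```

No real-world question, such as whether a startup succeeds or a war breaks out, admits this kind of exact enumeration; that gap is the ludic fallacy's blind spot.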
Unlike games, real life has unknown rules and probabilities. Real-world phenomenaâespecially those involving matters governed by Exponentland (as defined in Tenet #3)âdo not operate with such clear-cut rules or observable probabilities.
There are other factors at play here:
- Unknown unknowns (Layer 5 uncertainty as discussed in Tenet #4): In real life, we donât know all the possible variables, interactions, or even the underlying rules of the game. We canât list all potential outcomes, let alone assign precise probabilities to them.
- Silent evidence (as discussed above): We often miss crucial informationâthe things that didnât happen, the variables we didnât consider, or the non-linear effects that we struggle to grasp (as discussed in Tenet #3).
- Non-stationary factors (Layer 4 uncertainty as discussed in Tenet #4): The ârulesâ of the real world (whatever they may be at any time) can change over time. What was true yesterday might not be true today.
This fallacy leaves us vulnerable to many predictive issues. To characterize the obvious, an over-reliance on simplified and/or mass-adopted models (whether the Gaussian bell curve discussed in Tenet #3 or other Linearland properties) can fail when we face extraneous inputs, those governed by exponential or non-linear factors.
Additionally, this fallacy leads us to underestimate risk (similar to how we retroactively underestimated risk in the âsilent evidenceâ example above), have a false sense of security, and engage in tunneling.
2) Tunneling
The ludic fallacy reinforces the tunneling effect: our natural human tendency to focus on a limited, simplified, often pre-defined set of known variables and potential outcomes when trying to understand or predict the future, while systematically ignoring or overlooking everything else, including the "unknown unknowns" (Layer 5 uncertainties). This is where the ludic fallacy and the narrative fallacy connect, and it blinds us to truly novel and impactful events.
By ignoring those irreducible uncertainties, we behave as though Black Swan events donât exist, when, in fact, they doâleaving us blinded for when they eventually appear.
3) Framing
Framing is a way of presenting choices, decisions, and options. Amar Bhide illustrates this principle in a lovely way:
If you arenât familiar with the term or didnât grasp its meaning through the example provided above, the framing effect is when our decisions are influenced by the way information is presented. As youâve seen through the example, there are many ways in which equivalent information can be more or less attractive depending on what features are highlighted.
This effect can be present whether all the necessary information is available or none of it is. Decisions based on framing focus on how the information is presented rather than on the information itself, so, in theory, framing could color every decision and prediction we make.
To simplify, how information is presented matters. When evaluating information to make a prediction or when evaluating a prediction already made, the framing of the outcome can affect how accurate we perceive it to be. A 2023 Cambridge paper by Saiwing Yeung found that the framing of predictions, especially the directionality of these predictions (whether it was a prediction of success or failure), might influence how people evaluate the accuracy of the predictions.
4) The Role of Luck
Luck and prediction are distinct concepts, though they are very intertwined. Prediction, as weâve discussed at length today, is the act of forecasting future events based on analysis, experience, or other factors.
Luck, on the other hand, is generally associated with unexpected events that occur beyond oneâs control. Whether we notice it or not, we humans often underestimate the role of luck in every aspect of our lives.
Where luck and prediction interact is an interesting case study. For instance, unexpected events (whether positive, negative, or neutral) can and do affect the accuracy of predictions.
Using our framework from Tenet #3, in matters governed by Linearland, traditional predictive methods have much validity. Luck still plays a role, but its effects tend to average out over a large number of observations. In contrast, for matters governed by Exponentland, weâve seen that our abilities to predict are largely futile because the significant events (those one-off, non-repeated, novel events) are precisely the ones that were unpredictable. Here, luck becomes the dominant factor in determining outcomes, far more impactful than skill or meticulous planning.
5) Ideas Are Sticky
The theory that our ideas are sticky is very similar to the concept of tunneling. To call our ideas sticky is to personify them: once put into our heads (through any number of means), they stick there even when we're presented with evidence that proves otherwise.
Itâs similar to the concept of a stubborn person; they are unreasonably persistent or unyielding in their opinions or actions, refusing to change their mind or behavior despite attempts to persuade them otherwise.
The core concept here is a combination of belief perseverance and confirmation bias. Once we form a belief or a theory (or a prediction) we tend to cling to it, even in the face of contradictory evidence. Our brains dislike cognitive dissonanceâthe discomfort of holding conflicting ideas. To avoid this, we often dismiss, reinterpret, or simply ignore information that challenges our existing views.
Taleb adds some advice to this idea, stating, âThe problem is that our ideas are sticky: once we produce a theory, we are not likely to change our mindsâso those who delay developing their theories are better off.â
What he means by delaying the development of theories is that we should avoid premature certainty; jumping to conclusions or solidifying a theory too early can be detrimental. Instead, we should maintain intellectual flexibility, learn from negative evidence, and adapt to changing information.
6) Misrepresenting Past Predictions
The past, present, and future are some of the most complicated entities. Weâll discuss them much more in depth during Tenet #10, but hereâs a small appetizer.
Thereâs a powerful blind spot in our predictive minds, one relating to the past, present, and future. When we think of tomorrow and subsequently try to predict what it will look like, we do so blindly.
We donât frame these predictions in terms of what we thought about today yesterday (or what we thought about yesterday on the day before yesterday). In other words, we donât factor in previous predictions (and their accuracy and what we can learn from them) into our future predictions.
In essence, we fail to learn recursively from past experiences, and this failure continues to blind us in our future experiences. We donât learn from history; we are bound to repeat it.
7) Ignorance of Our Prediction Accuracy & Timeframe
When someone makes a prediction, how do we know how accurate that prediction is?
In 95% of scenarios, we simply wait the amount of time necessary for the predicted event to occur or not occur and then we can see in hindsight how accurate the prediction was.
Is that the best way to determine prediction accuracy?
One of the most common ways people determine prediction accuracy is through whatâs known as a âforecast error rate.â The error rate refers to the difference between predicted values and actual observed values in a forecast. Itâs a measure of how well a forecast aligns with reality. But again, the issue with these forecast error rates is that theyâre viewed in hindsight.
Taleb provides a piece of much-needed clarity here: âWhat is surprising is not the magnitude of our forecast errors, but our absence of awareness of it.â
In other words, yes, forecast errors are more often than not witnessed in hindsight, but we should be aware and cognizant that they exist. Our blindness to the errors (and their magnitudes) is detrimental.
Why? If we arenât aware that our forecasts have errors, then we are subscribing to the utopian belief that our forecasts donât have errors. And, as weâve seen throughout this discussion, our forecasts most definitely have errors.
Taleb writes, âCorporate and government projections have an additional easy-to-spot flaw: they do not attach a possible error rate to their scenarios. Even in the absence of Black Swans this omission would be a mistake.â
Also known as our ârelative confidenceâ in our predictions, this error rate (whether known beforehand as a possible error rate or in hindsight as an error rate) is crucial information for our ability to predict. We shouldnât take our projections too seriously, without heeding their accuracy.
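Attaching a "possible error rate" can be as simple as publishing an explicit error measure alongside the forecast. A minimal sketch using mean absolute error; the forecast and outcome numbers below are made up for illustration:

```python
# Forecast error rate in its simplest form: the average absolute gap between
# what was predicted and what actually happened. All data below is invented.

def mean_absolute_error(predicted, actual):
    """Average absolute difference between paired forecasts and outcomes."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

predicted = [100, 110, 120, 130]   # hypothetical quarterly forecasts
actual    = [ 98, 115, 105, 160]   # hypothetical observed values

mae = mean_absolute_error(predicted, actual)
print(f"mean absolute error: {mae:.2f}")
# A forecast published as "130 +/- 13" says far more than a bare "130".
```

The point is not the particular metric but the habit: a projection that carries its own error figure invites the reader to heed its accuracy rather than assume it has none.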
Furthermore, another concept is crucial to our predictive accuracy besides awareness and knowledge of the error rate: the timeframe of the forecast.
This concept is formally known as âforecast degradationâ, which refers to a decline in the accuracy or reliability of a prediction over time.
Since weâve set up our world and the properties of it in the first three tenets, we can vividly elaborate on the depths of this forecast degradation.
In Tenet #3, we solidified the proposition that our world is dominated by exponential factors. Starting from this position, letâs add in the concept of potential futures (known as possibilities in metaphysics). To explain, these represent all of the different futures that could exist from a given present moment.
To illustrate, letâs take the example of a person flipping a coin 4 times in a row. The present moment will be the second right before the first flip of the coin. In this moment, assuming all else is equal (a huge assumption here, but stay with me), there are two immediate potential futures: 1) the future where the coin lands heads up or 2) the future where the coin lands tails up.
Projecting these potential futures out further to include the second coin flip, there are now four potential futures:
- The future where the first coin lands heads and the second coin lands heads
- The future where the first coin lands heads and the second coin lands tails
- The future where the first coin lands tails and the second coin lands heads
- The future where the first coin lands tails and the second coin lands tails
Now project this out with two more coin flips (for a total of 4), and youâll see there are 16 possible futures.
If we were just predicting the first flip, we have a 50% chance of being correct. Inclusion of the second flip drops our odds to 25%, the third drops our odds to 12.5%, and the fourth drops our odds to 6.25%.
Obviously, this is an incredibly simplified version of this scenario which disregards every other factor at play here, but it does showcase how our forecast degrades as we lengthen our prediction timeframe.
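The coin-flip arithmetic above can be sketched in a few lines: each added flip doubles the count of distinct futures and halves the odds of calling the entire sequence correctly:

```python
# Forecast degradation in miniature: predicting n fair coin flips in a row.

def forecast_degradation(n_flips):
    """Return (number of distinct futures, odds of predicting all of them)."""
    futures = 2 ** n_flips
    p_correct = 0.5 ** n_flips
    return futures, p_correct

for n in range(1, 5):
    futures, p = forecast_degradation(n)
    print(f"{n} flip(s): {futures:2d} futures, {p:.2%} chance of a perfect call")
```

Stretch the horizon to 20 flips and the odds of a perfect call fall below one in a million, which is the whole point: the longer the timeframe, the faster the forecast degrades.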
To rephrase this in Futures Thinking terms, as we increase the prediction timeframe, we allow the exponential effects of the world more time to enact. As such, the possible futures become increasingly wild.
Failing to take into account our forecast degradation leaves us vulnerable to thinking predictions made very far into the future have the same level of possible accuracy as predictions made for tomorrow.
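The coin-flip example above can be sketched in a few lines of code: each added flip doubles the number of potential futures and halves the odds of correctly predicting the full sequence.

```python
# A minimal sketch of forecast degradation using the coin-flip example:
# each additional flip doubles the number of potential futures, which
# halves the odds that a prediction of the whole sequence is correct.

def forecast_odds(num_flips: int) -> tuple[int, float]:
    """Return (number of potential futures, probability of a correct prediction)."""
    futures = 2 ** num_flips
    return futures, 1 / futures

for flips in range(1, 5):
    futures, odds = forecast_odds(flips)
    print(f"{flips} flip(s): {futures} potential futures, "
          f"{odds:.2%} chance of predicting the full sequence")
# 4 flip(s): 16 potential futures, 6.25% chance of predicting the full sequence
```

The same exponential blow-up is why lengthening any forecast's timeframe erodes its reliability, even in this idealized, fully enumerable world.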
To explain this point, Taleb summarizes the work of Henri Poincaré, the French mathematician who showed in the 1800s that small errors in our knowledge of a system's present state compound over time, so that even a fully deterministic world resists precise long-range prediction.
Ultimately, awareness of the accuracy of our predictions is crucial to A) making predictions and B) heeding predictions already made.
8) Ignorance of the Unknowns
The illusion of understanding (epistemic arrogance) makes us overlook the vast number of variables, hidden interactions, and sheer randomness (luck) that are truly at play. We focus on the few factors we can easily identify and ignore the rest, often the most impactful ones.
I will never get to know the unknown since, by definition, it is unknown. However, I can always guess how it might affect me, and I should base my decisions around that.
In other words, we either don't or can't know those things within Layers 3, 4, and 5 of our uncertainty framework (from Tenet #4). Despite this, we can predict what the effect of any of these potential outcomes could be (at least those within the realm of known imagination), and as such, we can prepare for them.
As discussed in a section above, we can't estimate the small probabilities of these events or possibilities taking place, but we can estimate their impact and prepare to mitigate them. To paraphrase: we can have a relatively clear idea of the consequences of an event without knowing how likely it is to occur.
Taleb states, "This idea that in order to make a decision you need to focus on the consequences (which you can know) rather than the probability (which you can't know) is the central idea of uncertainty. Much of my life is based on it."
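To make the consequence-first idea concrete, here is a toy sketch (all exposures and impact figures are hypothetical, invented purely for illustration) that ranks risks by the severity of their downside rather than by an estimated probability:

```python
# A toy sketch of consequence-first decision-making. All exposures and
# impact figures below are hypothetical. Instead of ranking risks by
# estimated probability (which, per Taleb, we often cannot know), we
# rank them by worst-case consequence, which we can reason about.

exposures = [
    {"name": "laptop theft",         "worst_case_impact": 2_000},
    {"name": "uninsured house fire", "worst_case_impact": 400_000},
    {"name": "parking ticket",       "worst_case_impact": 75},
]

# Mitigate in order of consequence, not likelihood.
by_consequence = sorted(exposures, key=lambda e: e["worst_case_impact"], reverse=True)
for e in by_consequence:
    print(f"mitigate '{e['name']}' first (worst-case impact: ${e['worst_case_impact']:,})")
```

Note that no probabilities appear anywhere in the ranking; that omission is the point.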
Credit Peopleimages
WE CANNOT PREDICT ACCURATELY - FIVE METHODS TO REFRAME OUR APPROACH TO PREDICTIONS - A SUMMARY OF OUR BLINDSPOTS AND A LOOK FORWARD
We simply cannot predict accurately.
That's the gist of the lengthy discussion above.
We've showcased that through the 5 predictive problems, the triplet of opacity, the presence of uncertainty, the narrative fallacy, and 8 biases, complexities, and blind spots.
If you don't believe what I'm selling after all of that, then I don't know what more I could give you to convince you. For those of you left, let's finish this line of reasoning.
If you are one who believes all of the above (i.e., that we have many issues that prevent or compromise our ability to predict with any true, lasting accuracy), then how should we act given this hypothesis?
I've detailed the first methodology above: focus on the consequences (of an action, prediction, or other event) rather than the probability of that occurrence.
Besides this, there are 4 other methodologies with which we can address the uncertainties present in the world, our flaws at predicting it, and other inherent aspects of the world.
Method #1: The "Right" Approach to Randomness
In his book, Taleb details a long table contrasting two approaches one could take in response to the randomness present in the world. The first is the traditional route, in which we are blind to many aspects of the world and our predictive abilities are incredibly flawed (we'll call this the "Everyday Man Approach to Randomness," or "EMAR" for short). The second is a more nuanced, modern, flexible approach, one which proactively addresses the biases, flaws, and other characteristics of the world (we'll call this the "Futures Thinking Approach to Randomness," or "FTAR" for short). Here are those comparisons:
Core Interest:
- EMAR: Focuses on the inside of the Platonic fold (what fits neatly into models and theories).
- FTAR: Interested in what lies outside the Platonic fold (the unmodeled, the unexpected).
Stance on Knowledge:
- EMAR: Response to criticism: "You keep criticizing these models. These models are all we have."
- FTAR: Respect for those who have the guts to say "I don't know."
Goal of Understanding:
- EMAR: Seeks to be precisely wrong (accurate within a narrow, flawed model).
- FTAR: Prefers to be broadly right (robust, adaptable across many scenarios).
View of Theory:
- EMAR: Everything needs to fit some grand, general socioeconomic model and "the rigor of economic theory"; frowns on the "descriptive".
- FTAR: Minimal theory, considers theorizing as a disease to resist (especially premature or grand theories).
Probability & Certainty:
- EMAR: Built their entire apparatus on the assumption that we can compute probabilities with precision.
- FTAR: Does not believe that we can easily compute probabilities (especially in complex systems).
Starting Assumption:
- EMAR: Assumes Linearland (where average, normal fluctuations dominate) as a starting point.
- FTAR: Assumes Exponentland (where extreme, rare events dominate) as a starting point.
Source of Randomness:
- EMAR: Thinks of ordinary fluctuations as a dominant source of randomness, with jumps as an afterthought.
- FTAR: Thinks of Black Swans as a dominant source of randomness (the big, unpredictable jumps).
Approach to Learning:
- EMAR: Top-down (applies general theories to specific situations).
- FTAR: Bottom-up (builds understanding from specific observations and experiences).
Knowledge Generation:
- EMAR: Relies on scientific papers, goes from books to practice.
- FTAR: Develops intuitions from practice, goes from observations to books.
Intellectual Basis:
- EMAR: Ideas based on beliefs, on what they think they know (often within their established frameworks).
- FTAR: Ideas based on skepticism, on the unread books in the library (acknowledging the vastness of the unknown).
Inspiration:
- EMAR: Inspired by physics, relies on abstract mathematics (seeking elegant, generalizable laws).
- FTAR: Not inspired by any single science, uses messy mathematics and computational methods (focus on empirical effectiveness).
Historical Model:
- EMAR: Model: Laplacian mechanics, views the world and the economy like a clock (deterministic, predictable).
- FTAR: Model: Sextus Empiricus and the school of evidence-based, minimum theory empirical medicine.
Personal Style:
- EMAR: Wears dark suits, white shirts; speaks in a boring tone.
- FTAR: Would ordinarily not wear suits (except to funerals).
Overall Characterization:
- EMAR: Characterized as poor science (intellectual elegance without real-world robustness).
- FTAR: Characterized as sophisticated craft (practical wisdom, knowing what works).
Goal of Precision:
- EMAR: Seeks to be perfectly right in a narrow model, under precise assumptions (fragile precision).
- FTAR: Seeks to be approximately right across a broad set of eventualities (robustness).
Method #2: Hyperconservatism & Hyperaggression
Taleb writes, "If you know that you are vulnerable to prediction errors, and if you accept that most 'risk measures' are flawed, because of the Black Swan, then your strategy is to be as hyperconservative and hyperaggressive as you can be instead of being mildly aggressive or conservative."
Luckily, our premise for this section is to take the assumptions in the quote as valid: yes, we are vulnerable to prediction errors, and yes, we accept that most, if not all, risk measures are flawed.
As such, Taleb recommends a strategy of hyperconservatism and hyperaggression. What, exactly, does that entail (since it's definitely not what your financial advisor will recommend)?
Taleb labels this the "barbell strategy." It's a less-than-intuitive approach to risk management that embraces human fallibility and the reality of Black Swans.
Most people and institutions try to find a "balanced" or "moderate" approach to risk. They might diversify their investments across a range of "medium-risk" assets, or they might engage in "managed risk" strategies. This is what's hammered into you throughout finance classes, financial literature, and other adulting resources.
These people are choosing mildly conservative and mildly aggressive paths, and therein lies the problem. This "middle ground" makes them vulnerable to negative Black Swan events without adequately positioning them for positive ones. To offer a crude analogy, it's like standing in the middle of the road: you get hit by traffic from both directions.
In contrast, Taleb recommends his "barbell" approach, a mix of hyperconservative and hyperaggressive. By no means is this an exact science, but here's an example of what it may look like in practice (from a financial perspective):
- Hyperconservative: Put the overwhelming majority (80-90%) of your resources into extremely safe, low-risk, predictable assets or ventures (e.g., U.S. government bonds). In theory, these are things that are robust to negative Black Swans: you can be sure they will survive almost anything. The goal of this portion is survival and protection from ruin.
- Hyperaggressive: Allocate a small, truly non-essential portion (10-20%) of your resources to highly speculative, high-risk, high-reward ventures (e.g., highly volatile options, completely unproven ideas). These are bets that, if they pay off, could yield massive (hopefully unbounded) returns, subject to positive Black Swans. The goal of this portion is exposure to positive Black Swans without risking your core survival.
To elaborate on the rationale and why this beats the traditional approach: the hyperconservative part protects you from negative Black Swans, so if a catastrophic event occurs, your core capital is safe. The hyperaggressive part gives you the optionality (think of a real option) to benefit from highly improbable, massive upsides. Because that investment is small, even if it goes to zero, it doesn't threaten your survival.
The key to this approach is that it eliminates the fragile middle. By avoiding the "mildly" risky middle, you avoid the illusion of safety that often comes with it, and you're not trying to predict the unpredictable.
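The financial version of the barbell can be sketched numerically. All figures below are hypothetical and chosen only to illustrate the shape of the payoff: losses are capped near the speculative sleeve, while the upside is effectively open-ended.

```python
# A minimal sketch of the barbell allocation. All numbers are hypothetical.
# 90% of capital sits in a near-riskless asset; 10% goes to speculative
# bets that may well go to zero. The worst case is bounded; the best case
# (a positive Black Swan) is not.

def barbell_outcome(capital: float, safe_return: float, bet_multiple: float,
                    safe_frac: float = 0.90) -> float:
    """Portfolio value after one period, given what the speculative bets return."""
    safe = capital * safe_frac * (1 + safe_return)          # survival sleeve
    speculative = capital * (1 - safe_frac) * bet_multiple  # optionality sleeve
    return safe + speculative

capital = 100_000
worst = barbell_outcome(capital, safe_return=0.02, bet_multiple=0.0)   # bets wiped out
swan  = barbell_outcome(capital, safe_return=0.02, bet_multiple=50.0)  # a bet pays 50x

print(f"worst case: ${worst:,.0f}")           # loss capped near the 10% sleeve
print(f"positive Black Swan: ${swan:,.0f}")
```

Contrast this with a "mildly risky" portfolio, where every dollar is exposed to the same middling risk: a single negative Black Swan can impair the whole position, and no sleeve is positioned for the extreme upside.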
To take another example, here is how you could apply this strategy in your professional life:
- Hyperconservative: Have a stable, reliable job that pays the bills and provides good job security.
- Hyperaggressive: Spend your evenings and weekends on highly speculative personal projects, learning entirely new skills, or exploring radical business ideas that might never pan out but have massive upside potential if they do.
Method #3: The One Who Knows That He Cannot See Things Far Away
Taleb writes, "It is often said that 'is wise he who can see things coming.' Perhaps the wise one is the one who knows that he cannot see things far away."
In many cultures and throughout history, foresight and the ability to predict future events have been highly valued. People who seem to anticipate trends, avoid pitfalls, or correctly forecast outcomes are admired.
As we've discussed at length, this view is most likely flawed. Taleb proposes the opposite: "Perhaps the wise one is the one who knows that he cannot see things far away."
Taleb is arguing that true wisdom isn't about having a perfect crystal ball (as the conventional understanding idolizes); rather, true wisdom is about epistemic humility: understanding the profound limits of our knowledge and predictive abilities.
In this case, the "wise one" is not the one who sees the future, but the one who understands their inability to see the distant future and acts accordingly. In other words, the "wise one" is the one who understands forecast errors and forecast degradation (as explained in a section above).
Method #4: Avoid Dependence on Large-Scale Predictions
Taleb writes, "What you should avoid is unnecessary dependence on large-scale harmful predictions - those and only those." This statement distills his complex arguments into a clear piece of actionable advice.
The first piece of the statement, "unnecessary dependence," highlights that not all predictions are bad. We shouldn't live in a state of complete randomness, relying on no predictions at all; we make small-scale, local predictions constantly. These are often valid, low-stakes, and based on Linearland patterns; essentially, they're largely harmless.
Where we falter is in our unnecessary dependence (a dangerous over-reliance) on predictions that are inherently unreliable. This is where the narrative fallacy and the ludic fallacy lead us astray. Taleb is implying that there are alternatives to this dependence (hence "unnecessary"), or at least that we should reduce our exposure to such predictions.
These large-scale, harmful predictions are driven by the complexity and interconnectedness in the world (as discussed in Tenet #1) and have far-reaching consequences, usually widespread, systemic, and significant consequences (exacerbated by the exponentialities discussed in Tenet #3).
Taleb provides two important qualifiers in his statement: "harmful predictions" and "those and only those."
To begin, Taleb's use of "harmful predictions" highlights that he is particularly concerned with predictions where the cost of being wrong is vastly greater than the benefit of being right.
The final phrase, "those and only those," narrows the scope of the advice. Taleb uses it to sharpen the warning: the effort to avoid prediction should be concentrated on high-stakes, large-scale, inherently unreliable predictions, where the potential for harm is immense. By clarifying this, Taleb implies that for other types of predictions, ordinary caution or even optimism might be fine.
We've added some color to this viewpoint throughout the discussions above. Given that, I would suggest the following modification of the statement:
To navigate an uncertain world effectively, one must avoid unnecessary dependence on predictions, particularly those concerning large-scale harmful events, novel or non-routine phenomena (Black Swans), matters far into the future, and issues residing in the more complex and irreducibly uncertain Layers 3, 4, and 5 of our uncertainty framework. The overarching goal should be to foster a widespread awareness of both the inherent flaws in our predictive abilities and the significant real-world consequences these flaws can entail, thereby cultivating an ever-present skepticism towards predictions.
Congrats, we've made it through Tenet #5. I hope you enjoyed it. Please send me any feedback you have; I'm happy to clarify or elaborate further on anything discussed.
In future articles, we're going to dive deeper into the ways we can reframe our approach to the future, starting with Tenet #6:
Diverse perspectives, critical thinking, systems thinking, and humility help navigate complexity and mitigate cognitive limitations.
That's all for today. I'll be back in your inbox on Saturday with The Saturday Morning Newsletter.
Thanks for reading,
Drew Jackson
Stay Connected
Website: brainwaves.me
Twitter: @brainwavesdotme
Email: brainwaves.me@gmail.com
Thank you for reading the Brainwaves newsletter. Please ask your friends, colleagues, and family members to sign up.
Brainwaves is a passion project educating everyone on critical topics that influence our future, key insights into the world today, and a glimpse into the past from a forward-looking lens.
To view previous editions of Brainwaves, go here.
Want to sponsor a post or advertise with us? Reach out to us via email.
Disclaimer: The views expressed here are my personal opinions and do not represent any current or former employers. This content is for informational and educational purposes only, not financial advice. Investments carry risksâplease conduct thorough research and consult financial professionals before making investment decisions. Any sponsorships or endorsements are clearly disclosed and do not influence editorial content.