What Superforecasters Know That You Don't
A Futures Thinking Perspective
Dec 24, 2025
đź‘‹ Hello friends,
Thank you for joining this week's edition of Brainwaves. I'm Drew Jackson, and today we're exploring:
Solutions to Cognitive Limitations
Before we begin: Brainwaves arrives in your inbox every other Wednesday, exploring venture capital, economics, space, energy, intellectual property, philosophy, and beyond. I write as a curious explorer rather than an expert, and I value your insights and perspectives on each subject.
Time to Read: 49 minutes.
Let’s dive in!
A frog was hopping along the shore of a river looking for a place to cross. He came upon a scorpion sitting on the shore. “Hello, friend frog,” said the scorpion. “It appears you are looking to cross the river. I too want to cross. Would you mind carrying me?”
The frog was taken aback. “Why, if I let you on my back to cross the river, you’d sting me and I would die. I don’t think I’ll do that.”
The scorpion immediately replied, “There is no logic to your concern. If I sting you and you die, I will surely die as well, since I can’t swim. I wouldn’t need a ride if I could swim.”
The frog thought a moment and then said, “Your logic makes sense. Why would you kill me if it would result in your death? Come along and climb on my back and we’ll cross this river.”
The scorpion climbed on the frog’s back and off they went to cross the river.
About halfway across the river, the scorpion raised its tail and stung the frog. The frog was both astounded and disconsolate. “Why did you sting me? Now I will die and you will surely drown and die also.”
The scorpion replied, “I can’t help it. It’s who I am. It’s in my nature.”
- Lev Nitoburg, The German Quarter, 1993
The future actively shapes our lives, yet historically, the way humans have thought about and approached it has been flawed. Futures Thinking is a modern framework that rethinks how we conceptualize and engage with the future.
Rather than trying to predict specific future events, Futures Thinking encourages a shift in how we conceptualize the future itself—drawing on diverse cultural perspectives, foundational world characteristics, deep modern literature reviews, and recognizing that our present actions and narratives significantly influence future outcomes. Since most major life decisions are essentially bets on the future, adopting this framework could transform how we approach education, careers, relationships, and other essential aspects of life.
Today, our discussion revolves around how our world is set up and how these underlying characteristics shape everything that goes on in the world, specifically focusing on Futures Thinking Tenet #6: Diverse perspectives, critical thinking, systems thinking, and humility help navigate complexity and mitigate cognitive limitations.
PREDICTIONS ARE CONSTANTS IN OUR LIVES - STRATEGIES TO ADDRESS OUR LIMITATIONS - DEVELOPING A NEW TOOLKIT FOR FUTURES THINKING
Philip Tetlock, discussed later, wrote “Leaders must decide, and to do that they must make and use forecasts.”
Whether we like it or not, predictions and forecasts are used in almost every aspect of our lives, from checking the weather app to seeing when an upcoming delivery is scheduled to autocorrecting in your texting app. The applications are seemingly endless.
In the conclusion of Tenet #5, I summarized the core issue very succinctly: We simply cannot predict accurately.
This is due to a wide range of cognitive limitations, most of which are detailed in the table below.
On the surface, this is an incredibly unfortunate combination of circumstances: we constantly make predictions throughout our lives, yet we cannot predict with any meaningful accuracy. How should we live, given this information?
All hope is not lost: a handful of strategies can address, to one degree or another, each of the limitations on our predictive abilities. To find them, we can draw upon modern scientific research, tacit knowledge, anecdotal evidence, and ancient wisdom, creating a series of perceptual tools and knowledge sources that broaden our predictive capabilities. These mitigation strategies are listed in the table below, alongside the cognitive limitations they address.
| Cognitive Limitation Hindering Our Predictive Abilities | Description | Mitigation Strategy(s) |
| --- | --- | --- |
| Hindsight Bias | Believing past events were more predictable than they were | Critical Thinking, Awareness: Practice active open-mindedness by regularly questioning whether outcomes were truly predictable; use Fermi problem methodology to break down past events into components that were knowable vs. unknowable at the time |
| Ignorance of Our Limits | Failing to recognize the boundaries of what we can predict | Awareness, Humility: Cultivate intellectual humility and "confident humility" (faith in capabilities while acknowledging tool limitations); regularly engage in metacognitive awareness to think about your own thinking |
| Reliance on Repetition | Assuming a past trend will continue indefinitely | Diverse Perspectives, Rethinking, Critical Thinking: Start with the outside view (base rates), then add inside-view details; actively seek diverse perspectives that challenge trend assumptions; engage in regular rethinking cycles when new information emerges |
| Confidence in Small Probabilities | Misjudging the likelihood of rare but impactful events | Critical Thinking, Humility, Awareness: Use probabilistic thinking with precise gradations (1% increments rather than 10%); acknowledge "unknown unknowns" and actively search for information that contradicts initial probability assessments |
| Preference for Specificity | Mistaking a precise forecast for an accurate one | Critical Thinking, Awareness: Balance precision with accuracy by starting with outside-view anchors; break complex predictions into parts using Fermi methodology; avoid false precision while maintaining appropriate granularity |
| Disregarding Incomplete Information | Making predictions without acknowledging what’s missing | Diverse Perspectives, Awareness, Humility: Explicitly delineate between known and unknown data; actively seek out missing information from diverse sources; acknowledge information gaps in forecasts rather than ignoring them |
| Illusion of Understanding | Believing you understand a complex system more than you do | Systems Thinking, Humility, Diverse Perspectives: Apply systems thinking (watch system behavior before intervening); study system history and interconnections; remain unattached to paradigms and invite others to challenge assumptions |
| Empirical vs. Tacit Knowledge | Valuing formal data over experiential, intuitive knowledge | Diverse Perspectives: Aggregate multiple types of knowledge sources; use the "dragonfly eye" approach to synthesize thousands of perspectives, including both empirical data and experiential insights |
| Narrative Fallacy | Creating a simple story for complex events, misrepresenting reality | Systems Thinking, Diverse Perspectives, Critical Thinking: Resist single-cause explanations; use systems thinking to identify multiple causes and effects; actively seek perspectives that complicate or contradict simple narratives |
| Confirmation Bias | Seeking information that confirms what you already believe | Diverse Perspectives, Critical Thinking, Rethinking: Practice active open-mindedness (treat beliefs as hypotheses to be tested); deliberately seek information that contradicts existing beliefs; build challenge networks of people who question your conclusions |
| Ignorance of Unknowns | Failure to consider the existence of “unknown unknowns” | Humility, Diverse Perspectives, Awareness: Cultivate intellectual humility about reality's complexity; regularly question basic assumptions; use diverse perspectives to reveal blind spots; acknowledge the limits of current knowledge |
| Simplification | Creating overly simplified models or explanations for complex realities | Systems Thinking, Diverse Perspectives: Expand system boundaries beyond comfortable limits; consider multiple causes and emergent phenomena; remember that boundaries are artificial constructs that should be reconsidered for each problem |
| Ignorance of Randomness & Luck | Underestimating the role of luck, chance, and randomness in outcomes | Awareness, Humility: Reject deterministic "fate mindset" thinking; acknowledge bounded rationality (actors make reasonable decisions within their limited information); incorporate probabilistic rather than deterministic thinking |
| Ignorance of Silent Evidence | Studying only what survived and succeeded, ignoring failures | Diverse Perspectives, Critical Thinking: Actively seek out failure cases and "silent evidence"; expand information sources beyond conventional ones; study system history, including what didn't work or wasn't visible |
| Ludic Fallacy | Using game-like models to predict chaotic real-world randomness | Systems Thinking, Awareness: Recognize the difference between structured games and complex reality; avoid oversimplified models; embrace the inherent unpredictability and nonlinearity of real-world systems |
| Tunneling | Focusing on a few variables and ignoring all others | Diverse Perspectives, Systems Thinking: Use the "dragonfly eye" approach to gather information from as many perspectives as possible; consciously expand system boundaries; actively seek variables and factors outside current focus |
| Framing | The way information is presented affects how people make decisions about it | Diverse Perspectives, Critical Thinking, Awareness: Start with an outside view as the anchor before adding inside-view details; actively reframe problems from multiple angles; seek diverse perspectives to reveal different framings of the same issue |
| Ideas Are Sticky | Resisting changing beliefs when faced with new evidence | Rethinking, Humility: Engage in regular rethinking cycles; treat forecasts as living judgments that should be updated with new information; cultivate a growth mindset and view changing opinions as learning rather than weakness |
| Failing to Factor In Past Predictions | The inability to learn from past prediction failures | Rethinking, Awareness, Humility: Track and score past predictions systematically; analyze both successes and failures for patterns; use past performance to calibrate confidence levels and identify systematic biases |
You may have read to this point and concluded that, through these perceptual tools and knowledge sources, you can correct the errors in your predictive ways and can now predict everything with 100% accuracy.
To be explicitly clear: if anyone, myself included, tries to convince you of this, they are selling you snake oil.
These mitigations will help you predict better than you would have without them; however, they still aren't perfect. They are better than nothing, and some are surprisingly good, but none can grant perfect, omniscient predictive abilities.
Let’s dive into each one!
BLINDFOLDED DART THROWING MONKEYS - FORESIGHT IS REAL - START WITH THE OUTSIDE VIEW
Our story begins in the 1970s, with Princeton University economist Burton Malkiel. At this point, Malkiel was in his forties, having served in the army, worked in investment banking, and spent close to a decade teaching at Princeton. In 1973, he published his classic finance book A Random Walk Down Wall Street.
Best known now for advancing the random walk and efficient-market hypotheses, the book is crucial for our discussion because of a small anecdote buried within its pages, which reads along the lines of: “a blindfolded monkey throwing darts at a newspaper’s financial pages could select a portfolio that would do just as well as one carefully selected by experts.”
Fast forward around three decades to psychologist Philip Tetlock, now a professor at the Wharton School. In his landmark 2005 study and subsequent research, Tetlock tested these claims, seeking to determine whether so-called experts could actually predict with meaningful accuracy beyond random chance.
The results? Writing in his collaborative book with Dan Gardner, Superforecasting: The Art and Science of Prediction, they explain their findings:
The first group was quickly dismissed as truly worse than blind monkeys, but the second, and much more enticing, group became the star of the show, eventually dubbed “superforecasters.”
Initially, the researchers were unsure whether the results reflected skill or mere luck. The people participating in the study were volunteers, semi-ordinary people like you or me, including pipe installers, filmmakers, professors, mathematicians, farmers, and more.
Over the next couple of years, the researchers found that the superforecasters’ abilities held up phenomenally well. After year 1, individuals who placed in this elite category were put on teams with fellow superforecasters. What happened? Their scores improved even further relative to their merely average peers.
To be clear, these superforecasters weren’t infallible; many were still subject to luck, occasionally having a bad year of ordinary results. But, the significant result of this longitudinal study was the following: “We can conclude that the superforecasters were not just lucky. Mostly, their results reflected skill.”
Once apprised of this conclusion, Tetlock and Gardner set out to understand why these individuals were consistently outperforming. Over the next decade, they synthesized vast quantities of predictive data, qualitative survey results, conversational anecdotes, and other data sources to produce a near-comprehensive breakdown of this elite group.
And what they found was hopeful for the everyday person: foresight isn’t a mysterious genetic gift bestowed at birth, it’s the product of particular ways of thinking, of gathering information, and of updating beliefs. “These habits of thought can be learned and cultivated by any intelligent, thoughtful, determined person.”
To identify specific characteristics to emulate, they first looked at raw intellect. The regular forecasters who volunteered scored higher on intelligence and knowledge tests than about 70% of the population; superforecasters scored higher than about 80%. They’re smart, but not outlandishly intelligent.
The first characteristic that sets superforecasters apart is their inclusion of a wide variety of perspectives.
Tetlock and Gardner use the analogy of a dragonfly to illustrate how superforecasters incorporate a wide range of viewpoints into their thinking. Like humans, dragonflies have two eyes; unlike ours, however, each is a bulging sphere whose surface is covered by as many as 30,000 individual lenses.
Information from each of these thousands of unique perspectives flows into the dragonfly’s brain, where it is synthesized into a centralized vision. This process enables dragonflies to see in almost every direction simultaneously, with the clarity and precision needed to track flying insects at high speeds.
There are two key aspects of the dragonfly to highlight that explain a portion of why superforecasters are as good as they are.
Firstly, superforecasters start by gathering as many perspectives as possible from as many sources as possible. Second, superforecasters take all those perspectives and aggregate them into a single, actionable viewpoint.
One strategy superforecasters employ is beginning with the outside view. The outside view, as discussed by Daniel Kahneman in his book Thinking, Fast and Slow, is similar to the statistical base rate. The base rate is the fundamental probability of an event or characteristic occurring in a population, representing the overall frequency without any specific information.
It’s the population-level rate of occurrence, without factoring in any information unique to the problem at hand. For instance, to forecast the question “How many white women will be involved in car crashes this year?”, you would start with the base rate: the percentage of people, on average, who get in a crash each year.
The outside view is typically abstract, bare, and doesn’t lend itself so readily to storytelling. This is often why inexperienced forecasters (which is most of us) are drawn to and begin with the inside view, as it’s usually concrete and filled with engaging detail about the specifics of the forecast in question.
Why do superforecasters begin with the outside view? Whatever view you start with acts as the anchor for the rest of your thought process, in the classic psychological sense of anchoring. When we make estimates, we tend to start with one number or thought and then adjust from it; that starting number is called the anchor.
Forecasters who begin with the inside view risk being swayed by a number that may have little or no meaning. In contrast, starting with the outside view will provide a meaningful anchor.
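This anchor-and-adjust pattern can be sketched in a few lines. The blend below is a minimal illustration of weighting an outside-view base rate against an inside-view estimate; the equal weighting and the example numbers are my own assumptions, not figures from Tetlock's research.

```python
# Illustrative sketch: anchor on the outside view (base rate), then
# adjust toward the inside view. The weight and all numbers below are
# assumed for illustration only.

def anchored_forecast(base_rate: float, inside_estimate: float,
                      weight_on_base: float = 0.5) -> float:
    """Blend the outside-view base rate with an inside-view estimate."""
    return weight_on_base * base_rate + (1 - weight_on_base) * inside_estimate

# Hypothetical example: the base rate says 20% of startups in a sector
# reach profitability; inside-view detail about this startup suggests 60%.
forecast = anchored_forecast(base_rate=0.20, inside_estimate=0.60)
print(f"{forecast:.0%}")  # → 40%
```

In practice, superforecasters adjust the weighting case by case; the point is simply that the base rate, not the vivid inside detail, sets the starting anchor.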
So, superforecasters start with the outside view, then add details from the inside view, but they don’t stop there. They gather “as much information from as many sources as they could.”
In his 1996 book, The Art of the Long View, Peter Schwartz discusses how his team employed this practice at Shell to great success:
While identifying and courting a wide variety of perspectives is valuable, aggregation is arguably the more critical part of the equation. This is where teams of superforecasters (dubbed “superteams”) demonstrated the power of consolidating information.
Superteams leveraged the “wisdom of the crowd.” Helpful information is often scattered, with many people holding scraps of relevant information. In the original experiments demonstrating the phenomenon, hundreds of people contributed valid information, creating a collective pool far greater than any of them could have possessed. When averaged, positive errors canceled out negative errors, yielding a near-perfect aggregation of the actual value.
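The error-cancellation effect described above is easy to demonstrate in a small simulation. This sketch assumes each person's estimate is the true value plus independent noise; the numbers (loosely echoing Galton's famous ox-weighing experiment) are illustrative, not data from the study.

```python
import random

# Wisdom-of-the-crowd sketch: many noisy individual estimates, when
# averaged, land far closer to the truth than a typical individual does.
# All values are illustrative assumptions.
random.seed(42)
TRUE_VALUE = 1_198  # e.g., the ox's true weight in pounds

# Each person's guess is the truth plus independent random error.
guesses = [TRUE_VALUE + random.gauss(0, 150) for _ in range(1_000)]

crowd_average = sum(guesses) / len(guesses)
crowd_error = abs(crowd_average - TRUE_VALUE)
typical_individual_error = sum(abs(g - TRUE_VALUE) for g in guesses) / len(guesses)

print(f"crowd average error:      {crowd_error:.1f}")
print(f"typical individual error: {typical_individual_error:.1f}")
```

With independent errors, positive and negative mistakes largely cancel in the average, which is exactly the mechanism the superteams exploited.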
Similarly, superforecasters and superteams participated in the same exercise. By drawing on many sources of data, viewpoints, and perspectives, they were able to aggregate them into a more holistic perspective on the matter at hand. In Tetlock and Gardner’s words:
Even when they all looked at the same evidence, it would be unlikely that they would all reach precisely the same conclusion. This variation, driven by their different backgrounds, provided much more value than if they had all reached the same conclusion.
Granted, stepping outside ourselves and really getting a differentiated view of reality is a struggle. Whether by virtue of temperament or habit or conscious effort, superforecasters tend to engage in the hard work of consulting other perspectives.
In the 21st century, it’s becoming increasingly challenging to encounter truly diverse perspectives. Byrne Hobart and Tobias Huber discuss this in their book, Boom: Bubbles and the End of Stagnation, specifically how technology is collapsing the multifaceted human experience into a one-dimensional stream, flattening individual differences.
It’s a bleak view: “While our culture fetishizes novelty and diversity on the surface, the universalizing and totalizing nature of technology erases radical difference on a deeper level, only to preserve distinctions without real differences.”
However, as we’ve seen, diverse perspectives are critical to any foresight abilities we wish to foster, let alone their impact on the rest of our lives. To quote Joshua Stehr, a climate science nerd with a love for futurism, “If we are to be better prepared for our future, we need to expand who and what we’re taking seriously.”
Similarly, he discusses how it’s almost a moral obligation for us to challenge views of the future and propose alternatives. If we don’t, these visions of the future can become self-fulfilling prophecies. As he writes, “alternative futures exist, they need amplifying.”
This is the foundation for the NORMALS Studio New Future Archetypes thought framework. In their introductory article, they discuss how they hope to foster novel views of the future—to “shape something new”—by using creative thinking to break free from inherited constraints.
Arguably the most impactful phrase they employ is the following: “The diversity of futures we hold shapes our capacity to respond to complex and shifting realities.” That’s a compelling statement of why diverse perspectives are essential.
Long story short, no matter what realm of literature or popular thought you draw from, the underlying message about the strength of diverse perspectives runs throughout. Jonathan Haidt, a renowned social psychologist, writes in his book The Happiness Hypothesis, “The wise are able to see things from others’ points of view, appreciate shades of gray, and then choose or advise a course of action that works out best for everyone in the long run.”
And finally, the renowned investor and entrepreneur Peter Thiel, writing in his book Zero to One, provides a perfect, overarching viewpoint to cap off this discussion:
ELEMENTARY, MY DEAR WATSON - THE NUMBER OF PIANO TUNERS IN CHICAGO - THE UNTHINKABLE HAITIAN REVOLUTION
Appearing in the late 1880s, Sherlock Holmes quickly became a widespread hit, arguably becoming the best-known fictional detective. The Guinness World Records lists him as the most portrayed human literary character in film and television history.
Created by Arthur Conan Doyle, Sherlock Holmes, as described by his friend Dr. Watson, is known for his proficiency with observation, deduction, forensic science, and logical reasoning, which he employs when investigating all manner of cases.
In 1924, Doyle wrote the shortest Sherlock Holmes story, presented below:
Another crucial trait of superforecasters is their ability to think critically, and Sherlock Holmes is an excellent exemplar of this characteristic. “Critical thinking” is a term commonly used in conversation and in the technical literature, but it has slowly begun to lose its meaning. Formally defined, critical thinking is the process of analyzing available facts, evidence, observations, and arguments to make sound conclusions or informed choices.
Enrico Fermi, an Italian American physicist, renowned for being the creator of the world’s first artificial nuclear reactor and a member of the Manhattan Project, provides a methodology to deploy critical thinking in the realm of forecasting.
A Fermi problem is usually a back-of-the-envelope estimation problem that involves making justified guesses about quantities. I’ve seen them most often deployed in interview situations to test analytical reasoning and critical thinking skills. A commonly cited example is below.
Let’s begin by trying to answer this question: How many piano tuners are there in Chicago?
How do we begin concocting a reasonable answer? Most people would think about it for a second, then offer a best guess at an overall number, with the whole process taking less than 10 seconds. How did they come to this answer? When asked, most would shrug and say something along the lines of “it seems about right.”
Fermi knew people could do better. The key was breaking the question down into subquestions. For our question, we can break it down into four key subquestions that roll up into a main calculation:
- How many pianos are in Chicago?
- How often are pianos tuned each year?
- How long does it take to tune a piano?
- How many hours a year does the average piano tuner work?
With the first three, we can calculate the total amount of piano-tuning work in Chicago. We can divide by the last to get a reasonable estimate of how many piano tuners there are in Chicago.
However, we don’t have any of that information. We’ve simply split the question into four, so you may think we’ve created meaningless work. Not necessarily. What Fermi understood is that by breaking the question down, we can better separate the knowable from the unknowable. So, guessing isn’t eliminated from our process; however, we’ve broken it down into manageable chunks.
The net result tends to yield a more accurate estimate than whatever number popped out of the black box when we first read the question.
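The four-step decomposition above can be turned into a back-of-the-envelope calculation. Every input below is a rough, assumed guess (Chicago's population, pianos per capita, tuning time, and so on); the point of the Fermi method is that transparent subestimates tend to beat a single opaque hunch.

```python
# Fermi estimate of piano tuners in Chicago. Each number is an assumed
# rough guess; the structure of the calculation is what matters.

pianos = 2_500_000 / 100        # ~2.5M Chicagoans, roughly 1 piano per 100 people
tunings_per_year = pianos * 1   # each piano tuned about once a year
hours_per_tuning = 2            # including travel time between jobs
tuner_hours_per_year = 50 * 40  # a tuner works ~50 weeks of ~40 hours

# Total tuning work divided by one tuner's annual capacity.
tuners = tunings_per_year * hours_per_tuning / tuner_hours_per_year
print(round(tuners))  # → 25
```

Any single input may be off by a factor of two, but because the errors are independent and partially cancel, the final figure usually lands within the right order of magnitude.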
If you don’t believe me, try the following thought experiment on the question: How many tennis balls fit on an airplane? Start with a best-guess estimate, then break it down as Fermi would, compare the two, and finally Google it. Which was better?
Across the board, superforecasters tended to engage in and enjoy hard mental slogs. They have strong preferences for variety and intellectual curiosity. A brilliant puzzle solver may have the raw material necessary for forecasting, but if they don’t constantly question fundamental beliefs about the world, they will be at a disadvantage relative to a less intelligent person who has a greater capacity for self-critical thinking.
Superforecasters are incredibly precise in their forecasts, often debating a single percentage point. Barbara Mellers, a professor of psychology at the University of Pennsylvania, has shown that greater forecast granularity predicts greater accuracy: the average forecaster who predicts in 10% increments (20%, 30%, 40%) is less accurate than the finer-grained forecaster who predicts in 5% increments (25%, 30%, 35%), who in turn is less accurate than the forecaster who uses 1% increments (20%, 21%, 22%).
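The granularity effect can be illustrated with a toy simulation: a perfectly calibrated forecaster who rounds her probabilities into coarse 10% bins scores worse on the standard Brier score (mean squared error between probability forecasts and 0/1 outcomes) than one who reports them at full precision. The setup below is an assumption for illustration, not a reconstruction of Mellers's study.

```python
import random

# Toy demonstration: rounding calibrated probabilities to coarse bins
# degrades accuracy as measured by the Brier score. The simulation
# setup is an illustrative assumption.
random.seed(0)

def brier(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Draw true event probabilities, then realize each event once.
true_probs = [random.random() for _ in range(100_000)]
outcomes = [1 if random.random() < p else 0 for p in true_probs]

fine = brier(true_probs, outcomes)                           # full precision
coarse = brier([round(p, 1) for p in true_probs], outcomes)  # 10% increments

print(fine < coarse)  # → True
```

Lower Brier scores are better, so the finer-grained forecaster wins; throwing away precision can only add error when the underlying probabilities are well calibrated.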
As we’ve seen, critical thinking is essential to our ability to forecast, as well as to other aspects of our lives. Broadening our scope to the entire Futures Thinking framework, author and practitioner Peter Schwartz provides some valuable color:
Marilee Adams, author of the popular book Change Your Questions, Change Your Life, offers the subsequent piece of the puzzle:
Combining these into a single thought stream: given the uncertainty in the world, we need to think critically to see the world more clearly. If we can’t think critically, the world will feel more uncertain and unpredictable.
That’s a powerful thought. It seems that critical thinking addresses many of the issues we face. And it does, in some ways. However, it isn’t a perfect solution.
In a 2025 article, one of his Permutations, Simon Hoher dives into what he calls “unthinkable futures.” Such moments, like the Haitian Revolution, are “unthinkable,” yet they shape our future.
Unthinkability, in Hoher’s definition, refers to the gap between what we can imagine and what might become possible. To incorporate terminology from this section, Hoher is essentially saying that critical thinking (and thinking and imagining in general) can only take you so far. Some things are truly beyond our grasp.
In other words, in the realm of “thinkable futures,” critical thinking helps us sort out the various cognitive limitations we face. However, when we venture into the realm of “unthinkable futures,” any benefit provided by critical thinking is lost.
AIRPORT LOUNGE PROVIDES INTEGRAL INSIGHTS - REFUTING MARCUS AURELIUS - A FURTHER GAP IN OUR EDUCATION
Recently, I was stuck in the Minneapolis-St. Paul Airport on a layover with no entertainment and no hope for a productive upcoming flight. I went searching through every bookstore in the airport and, with the help of Gemini, stumbled upon Adam Grant’s 2021 book, Think Again: The Power of Knowing What You Don’t Know.
Over the next 3 hours, I dove deep into the thought process behind what he calls “rethinking.” At that moment, the ideas were intriguing and resonated decently, but I didn’t truly understand the extent to which Grant’s methodologies could apply to Futures Thinking. A week later, when I read Superforecasting and discovered the following, everything came together.
Tetlock begins his account by referencing Daniel Kahneman’s work in Thinking, Fast and Slow, where he defines the two systems that make up our inner workings: “System 1” and “System 2.”
System 1 is our automatic system, running fast and constantly in the background. Conversely, System 2 is our slower, more deliberate, and methodical system, which requires conscious attention and effort.
Hence the book's title: System 1 lets us think fast, and System 2 lets us think slow. When we think, System 1 comes first. To quote Tetlock, “If a question is asked and you instantly know the answer, it sprang from System 1.” From there, System 2 gets involved, mainly dissecting the answer System 1 came up with to see if it holds up to scrutiny and evidence.
System 2 requires a conscious effort to intervene. Thinking a problem through with System 2 demands sustained focus and takes far longer than the snap judgment we get from System 1. As such, it’s normal human behavior to rely on our strong hunches, the outputs of System 1. If an answer “feels right,” we assume it is.
However, as Kahneman proved, System 1 is designed to jump to conclusions from little evidence. It’s a part of human DNA; we’re too quick to make up our minds and too slow to change them.
In most cases, this is beneficial. As Tetlock writes, “Indeed, it is the propulsive force behind all human efforts to comprehend reality. The problem is that we move too fast from confusion and uncertainty to a clear and confident conclusion without spending any time in between.”
Tetlock's research has shown that this effect has a measurable impact on our forecasting abilities. Superforecasters, unlike normal humans and even other forecasters, are more likely to think twice, thrice, or more about things.
For superforecasters, forecasts aren’t made and then locked away until they come to pass. Instead, they are framed as judgments based on the available information present at that moment. If material information becomes available at a later date, superforecasters take the opportunity to reconsider (rethink) their forecasts.
And this is one of the main drivers of forecast accuracy. A forecast that is updated based on new information is more likely to be closer to the real value than a previous one that isn’t so informed.
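Updating a forecast when material new information arrives is, at heart, a Bayesian operation. The sketch below shows the odds form of Bayes' rule; the prior and the likelihood ratio are illustrative assumptions, not figures from the superforecasting research.

```python
# Sketch of updating a probability forecast with new evidence, using
# the odds form of Bayes' rule. All numbers are illustrative assumptions.

def update(prior: float, likelihood_ratio: float) -> float:
    """Return the posterior probability after observing evidence whose
    likelihood ratio (P(evidence | event) / P(evidence | no event)) is given."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Hypothetical example: initial forecast of 30%. New evidence is twice
# as likely if the event will occur as if it won't (likelihood ratio 2).
p = update(0.30, 2.0)
print(f"{p:.0%}")  # → 46%
```

The mechanics mirror the superforecasters' habit: the old forecast is not discarded but serves as the prior, nudged in proportion to how diagnostic the new information actually is.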
To begin with, superforecasters’ initial forecasts were at least 50% more accurate than those of regular forecasters. Their constant rethinking and updating of their forecasts pushed that premium even higher.
Superforecasters employ rethinking at each phase in the process. As I previewed, superforecasters begin with the outside view(s), then add in elements from the inside view(s). This process of aggregation and interpretation requires many rethinking cycles to produce the initial forecast. Then, when additional information becomes available, they begin the entire process again, engaging in further rounds of rethinking.
As I’ve described, rethinking is critical to our ability to forecast accurately and is especially useful for refining past forecasts. However, rethinking isn’t just beneficial in this niche area; in fact, as Grant will show, it’s vital in every part of our lives. We’ve known this anecdotally for a very long time as a society, but haven’t had much data to prove it until recently. For instance, in his Meditations, Marcus Aurelius states:
Similar to Tetlock, Grant begins by discussing how our ability to rethink is hindered by our “cognitive laziness.” We often prefer the ease of holding onto old views (whether those are snap judgments from System 1 or even older beliefs) over the difficulty of grappling with new ones. We don’t just hesitate to rethink our answers; we hesitate at the very idea of rethinking. He highlights a unique rationale that fits perfectly into the overall Futures Thinking framework:
To paraphrase, rethinking brings to the forefront three ideas that we are uncomfortable addressing: 1) that we are uncertain about our answers, 2) that the world is unpredictable, and 3) that we potentially didn’t have the “right” answer the first time. That’s a scary admission, one that we all tend to shy away from.
As such, most of us tend to stick with our existing knowledge and opinions (what psychologists refer to as seizing and freezing). As we’ve seen in Tenet #4, we favor the comfort of certainty over the discomfort of doubt. This makes sense in a stable, unchanging world, where we get rewarded for having conviction behind our ideas. However, as we’ve seen throughout Tenet #1, the world we live in is rapidly changing, so our tendency to remain fixed causes more harm than good.
This leads to two main biases: confirmation bias (we see what we expect to see) and desirability bias (we know what we want to see). Rethinking provides the solution. Contrary to what you might be thinking, the solution is not to decelerate our thinking; it’s to accelerate our rethinking.
Dissecting our problem further, knowledge often closes our minds to what we don’t know. As Grant writes, “We all have blind spots in our knowledge and opinions. The bad news is that they can leave us blind to our blindness, which gives us false confidence in our judgment and prevents us from rethinking.” We attach to these beliefs and hold on to them longer than we should.
To address this issue and many others, we need to employ the process of rethinking. Why? Grant offers a big-picture analogy that cuts to the heart of Tenet #6.
Rethinking is the solution, working hand-in-hand with critical thinking, diverse perspectives, awareness, and humility to confidently address many of the cognitive limitations we face.
In his explanation of the benefits of rethinking, Grant references Tetlock’s work, citing “The single most important driver of forecasters’ success was how often they updated their beliefs. The best forecasters went through more rethinking cycles. They had the confident humility to doubt their judgments and the curiosity to discover new information that led them to revise their predictions.”
Superforecasters are eager and willing to think again, seeing their opinions more as best-guess hunches than as concrete truths. They constantly seek new information and better evidence, especially evidence that goes against their beliefs.
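This habit of seeking disconfirming evidence and revising in measured steps is, at heart, Bayesian updating, which Tetlock presents as the spirit (if not the literal arithmetic) of how superforecasters work. A hedged Python sketch, with invented numbers:

```python
def bayes_update(prior: float, p_evidence_if_true: float,
                 p_evidence_if_false: float) -> float:
    """Posterior belief in a hypothesis after observing evidence."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1.0 - prior) * p_evidence_if_false
    return numerator / denominator

# Start 70% confident, then observe evidence that is twice as likely
# if the belief is wrong: exactly the kind worth hunting for.
posterior = bayes_update(0.70, p_evidence_if_true=0.2, p_evidence_if_false=0.4)
print(round(posterior, 2))  # 0.54
```

The belief drops from 0.70 to roughly 0.54: revised, not abandoned, mirroring the incremental updates described above.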
Bringing in the ideas discussed above, a key portion of rethinking is your network. Developing a challenge network whose goal is to point out our blind spots and weaknesses can push us into rethinking cycles through their diverse perspectives. Grant writes, “We learn more from people who challenge our thought processes than those who affirm our conclusions.”
One of my earliest Futures Thinking articles was titled “Addressing A Critical Flaw in Our Education.” In it, I discussed how education systems primarily teach linear thinking, even though the world is predominantly exponential.
Similarly, rethinking is another critical gap in our educational systems. There is now a strong emphasis on imparting knowledge and building confidence in kids, so many teachers don’t do enough to encourage students to doubt, face uncertainty, or recognize unpredictability.
Instead, we should be embracing the movement to encourage kids to think like fact-checkers. Grant details three main guidelines of the theory: 1) interrogate information instead of simply consuming it, 2) reject rank and popularity as a proxy for reliability, and 3) understand that the sender of information is often not its source.
This fact isn’t just limited to elementary education. Lectures, one of the primary teaching forms in colleges across the world, aren’t designed to accommodate a dialogue. Instead, they turn students into passive recipients of information rather than active thinkers engaged in critical and reflective cycles.
As valuable as rethinking is, we don’t do it enough. The world’s complexities demand continuous adaptation and rethinking to thrive; however, we often are stuck in our initial thoughts and beliefs. Given how broad and complex the world is, one key question here is what specifically we should be rethinking.
As we’ve discussed, we should be quick to rethink assumptions, intuitions, and opinions we’ve taken for granted. However, it’s less clear how we should approach more profound knowledge, core beliefs, and sacred values.
We shouldn’t recklessly abandon these foundational concepts of our lives, but neither should we simply accept everything as settled truth. The ideal answer is somewhere in the middle: continually re-evaluate where we stand, without rashly rethinking the foundations of our being.
Credit Sloww
LESSONS FROM THE CRACKED POT - HOW TO “KNOW THYSELF” - CULTIVATING ACTIVE OPEN-MINDEDNESS
A popular Indian folktale, “The Cracked Pot,” tells of a water bearer whose flawed pot leaks half its load on every trip home, only for the bearer to reveal that the leak has been watering a trail of flowers along the path. The pot’s imperfection, once recognized, turns out to produce something valuable.
The most integral tools we can use to counter our cognitive limitations are awareness and humility. These two characteristics go hand-in-hand. First, we need to recognize that we have a limitation in the first place; second, we need the humility to implement a solution and change our behavior.
Christophe Andre, in his book, Looking at Mindfulness: Twenty-Five Paintings to Change the Way You Live, writes, “We must decide to open our mind’s door to all that lies beyond it, rather than hiding away in one of our inner fortresses, such as rumination, reflection, certainty, or expectation.”
As I’ve shown, humans have ingrained tendencies to favor certainty and absolutism. We dislike doubt and uncertainty. We love when the world is either black or white, and despise when there are many shades of gray in the middle. Andre provides a glimpse of the solution, one which the ancients have been preaching for millennia: awareness.
Awareness is a central concept of Buddhism. In the spiritual tradition, the teachings emphasize that true awareness is not tied to a specific object, but is a direct, nonconceptual knowing of the present moment. Through practices such as mindfulness and meditation, practitioners can observe their thoughts and emotions without being caught by them.
The famous maxim “Know thyself,” inscribed at Delphi and central to Socrates’s philosophy, is a direct call for self-awareness. Socrates believed that understanding one’s own nature, virtues, and vices is the key to wisdom and a virtuous life. Plato, building on his teacher’s ideas, employed the classic “Allegory of the Cave” to illustrate this issue. The prisoners in the cave, aware only of the shadows, represent a state of profound ignorance; the journey out of the cave to see the sun represents ascension to a higher level of awareness and knowledge.
Cultivating awareness enables us to consider broader perspectives and integrate them into our thought processes. It’s the first crucial step in our process. The second is cultivating and acting on our humility.
In his book, Grant explains how intellectual humility—knowing what we don’t know—is the first part of the rethinking cycle. We discussed the idea of knowing what we don’t know in Tenet #4, but Grant broadens the discussion.
The rethinking process favors humility over pride, doubt over certainty, and curiosity over closure. Our goal should be to cultivate humility grounded in confidence, which is the key to overcoming many cognitive limitations. Confident humility means having faith in our capabilities while appreciating that we may not have the right solution or even be addressing the right problem.
As you can see in the table above, the ideal situation is to believe in ourselves while being uncertain about the tools that we possess. Luckily, confident humility can be taught. In one experiment, when students read an article about the benefits of admitting what they don’t know rather than being certain, their odds of seeking help when confronted with complex issues went from 65% to 85%.
However, as novices advance to amateurs and beyond, this can break the rethinking cycle. As we gain experience, we lose some of our humility. We enter an overconfidence cycle that prevents us from doubting what we know and from being curious about what we don’t.
Luckily, a dose of complexity can disrupt overconfidence cycles and spur rethinking, bringing us back on track. Grant writes, “It gives us more humility about our knowledge and more doubts about our opinions, and it can make us curious enough to discover information we were lacking.”
Tetlock found that superforecasters often have a spirit of humility—a feeling that the complexity of reality is staggering and our ability to comprehend it is limited. It includes the maturity to admit that it’s impossible ever to have all the answers.
Tetlock and Grant marry the two ideas in their discussions of active open-mindedness, an essential trait of superforecasters.
As coined by psychologist Jonathan Baron in 1993, active open-minded thinking is characterized by a willingness to seek out and reflect on contrary evidence, with an openness to changing one's mind in light of new evidence.
Active open-mindedness requires metacognitive awareness, which is the ability to think about your own thinking. It means being conscious of your mental processes, biases, and the tendency to favor information that confirms what you already believe. It involves recognizing that your initial beliefs may be flawed or incomplete and that your gut reactions aren’t always accurate. This self-awareness allows you to step back from your own mental shortcuts and deliberately seek out alternative perspectives.
The practice of active open-mindedness is an exercise in intellectual humility. As I’ve highlighted, this is the recognition that your own knowledge is limited and that you are fallible. It means accepting the possibility of being wrong without feeling threatened or defensive. Instead of seeing a change of mind as a sign of weakness, an intellectually humble person sees it as an opportunity for growth and further accuracy.
Active open-mindedness helps participants combat overconfidence, confronting the limits of their knowledge and considering other possibilities. Similarly, it reduces reliance on confirmation bias by deliberately seeking information that challenges initial hypotheses, leading to a more complete and accurate understanding.
Furthermore, and most importantly, Tetlock’s research found a correlation between a team’s active open-mindedness and the accuracy of its forecasts. Similarly, a 2013 study by Haran, Ritov, and Mellers in Judgment and Decision Making tested forecasting performance while varying how much information participants could collect before estimating. Of the individual measures they examined, only actively open-minded thinking statistically predicted performance.
Credit Harvard Business School
THE BEST ELECTION FORECASTER - FATE MINDSET CORRELATES WITH PREDICTIONS - THE CHARACTERISTICS OF SUPERFORECASTERS
When you read about their abilities in action, as recounted by Adam Grant, Tetlock’s superforecasters almost seem superhuman. I wouldn’t go that far, but in many ways they’re among the best at what they do.
We’ve seen how they curate the broadest possible spectrum of information when analyzing a new problem, drawing on a wide variety of inside and outside perspectives. These practitioners also continuously engage in critical thinking, stress-testing the data to identify weak points and gaps before molding it into a cohesive viewpoint.
When approaching their analyses, they consider the full range of potential solutions, consistently rethinking and refining their perspective. Critical factors underlying their process are a broad awareness of the world, the humility to recognize that their mindset may be incomplete, and the willingness to change it.
These factors are core principles that enable superforecasters to achieve statistically significant improvements in their forecasting abilities.
To be clear, they aren’t doing anything that the average person, if they put their mind to it, couldn’t also do. For some of them, this framework comes easily; for others, this was a lifelong journey.
As such, if we invested time in improving in these areas, we too could predict better, responding more capably to the fundamental characteristics of the world (complexity, nonlinearity, impermanence, and so on), its uncertainties, the inherent disruptions that lead to failure and collapse, and its unpredictability.
I’ve harped on the benefits of superforecasters over the last 35 pages because I believe superforecasting is a crucial modern development in the science of Futures Thinking. In addition to everything above, here are a few additional tidbits about superforecasters.
On a scale from 1 to 9, where 1 is the total rejection of it-was-meant-to-be thinking and 9 is a complete embrace of it, the mean score of US adults sits in the middle. Undergraduates at the University of Pennsylvania scored slightly lower, regular forecasters a little lower than that, and superforecasters lowest of all, firmly on the rejection-of-fate side.
Another central feature of superforecasters is their comfort and confidence with numbers. Most are capable of putting them to practical use in complex models.
In his book, Tetlock summarizes the characteristics shared by superforecasters.
This framework for the ideal characteristics of superforecasters, as discussed extensively above, isn’t foolproof. Superforecasters often tackle problems in roughly the same way; however, there are minor differences.
Overall, the methodology holds: 1) delineate as sharply as possible between known and unknown data, 2) start with an outside, overarching perspective, then add back the specific nuances that make the issue unique, 3) compare and contrast other views, paying special attention to those that contradict your thinking, 4) aggregate all of these into a singular vision of the problem, and 5) express your answer as precisely as possible without slipping into false precision.
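The five steps can be caricatured in a few lines of Python. This is a toy sketch, not Tetlock’s actual procedure: the additive adjustments, the plain averaging of peer views, and every number below are my own illustrative assumptions.

```python
def forecast(base_rate: float, inside_adjustments: list[float],
             peer_estimates: list[float]) -> float:
    # Step 1 (sorting known from unknown data) happens before any
    # numbers exist, so it is left implicit here.
    # Step 2: anchor on the outside view (a base rate for the
    # reference class), then layer in inside-view nuances.
    estimate = base_rate + sum(inside_adjustments)
    # Steps 3-4: weigh other views, including contradictory ones,
    # and aggregate them -- here, a plain average.
    all_views = [estimate] + peer_estimates
    aggregated = sum(all_views) / len(all_views)
    # Step 5: state the answer precisely, but no more precisely
    # than the evidence supports (clamp to [0, 1], two decimals).
    return round(min(max(aggregated, 0.0), 1.0), 2)

# Outside view says 30%; two case-specific facts nudge it upward;
# two peers land higher still.
print(forecast(0.30, inside_adjustments=[0.10, 0.05],
               peer_estimates=[0.50, 0.40]))  # 0.45
```

The averaging step is the crudest possible aggregation; the point is only the shape of the pipeline, outside view first, inside view second, other views folded in last.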
Through this methodology, you can significantly improve your forecasting abilities, mitigate the effects of cognitive biases on your thinking, and proactively address fundamental characteristics of the world.
Throughout Tenet #6, our pursuit has been to accomplish just that: minimize or eliminate the effect of cognitive limitations on our thought processes. The principles of superforecasting provide many solutions; however, they are not exhaustive. An additional resource, or thought genre, provides supplemental value critical to our narrative: the realm of systems thinking.
Credit Citizens for Global Solutions
SYSTEMS THINKING PROVIDES A SOLUTION TO COGNITIVE LIMITATIONS - ADDRESSING BOUNDARIES AND BOUNDED RATIONALITY - INVITING OTHERS TO CHALLENGE OUR ASSUMPTIONS
In her best-selling 2008 book, Thinking in Systems, Donella Meadows explores how systems thinkers address the inherent uncertainty of systems. In her words, “If you can’t understand, predict, and control, what is there to do?”
This expression hits at the heart of our discussion today: trying to find ways to address the cognitive limitations present in our predictive problems, and also, from a 10,000-foot view, trying to shape our responses to all of the volatility, disorder, failures, collapses, Black Swan events, uncertainty, and unpredictability present in life.
We’ve seen above how the discovery of superforecasters provides a guide for how we can approach predicting better, with meaningful increases in measured performance. Systems thinking offers a further answer for facing the uncertainties present in life.
As it’s defined on Wikipedia, systems thinking is a “way of making sense of the complexity of the world by looking at it in terms of wholes and relationships rather than by splitting it down into its parts.”
We’ve discussed previously in Tenet #1 how the world is a complex system (technically a system of systems), with many inherent interconnections, nonlinearities, randomness, and feedback loops. These give way to continuously changing variables and emergent phenomena.
Given these properties of the world, how have systems thinkers developed a proactive approach?
First, before intervening to disturb the system, systems thinkers watch how it behaves. As we’ll discuss in Tenet #7, this is a way to address the premature introduction of fragilities into the system.
Begin by learning the system’s history. This attention to the system’s behavior forces you to deal in facts, without being blinded by your own beliefs, misconceptions, or other cognitive limitations. The system’s history directs us toward a dynamic, rather than static, analysis of the world, dissecting not only which elements are in the system but also how they are interconnected.
This isn’t always an intuitive viewpoint. As we’ve discussed, our minds tend to think in terms of single causes producing single effects. In contrast, systems thinking engages with multiple causes, multiple effects, and emergent phenomena throughout them.
Another key portion of systems thinking is the analysis of boundaries. As we discussed in Tenet #4, one of our natural responses to the uncertainty in the world is to simplify it into manageable portions (though we’ve seen that these don’t reflect the world as it truly is).
To understand complex systems, we have to simplify, which means we have to draw boundaries. Where should we draw them? There is no single, legitimate boundary around a system. In Meadows’ words, “We have to invent boundaries for clarity and sanity; and boundaries can produce problems when we forget that we’ve artificially created them.”
The world’s interconnectedness shows that there are no separate systems; the world is a continuum. We draw boundaries for analysis and discussion; however, the natural human tendency is to make them too small. As such, we leave out valuable information from our scope, creating cognitive limitations to our thinking and predictive abilities. Meadows writes:
Classical economists, following Adam Smith, developed the theory of the rational economic agent, since widely adopted as the basis for many economic theories and policies. It rests on two fundamental principles of human behavior: 1) humans act with perfect optimality on complete information, and 2) when many humans do this, their actions add up to the best possible outcome for everybody.
However, many economists and psychologists have shown, among them Daniel Kahneman and Dan Ariely, that humans more often make decisions based on the theory of bounded rationality. If you’re unfamiliar with the term, it refers to the idea that individuals’ decision-making is limited by factors such as incomplete information, time constraints, and cognitive limitations, leading them to make “good-enough” rather than “optimal” decisions.
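Herbert Simon, who coined the term bounded rationality, called this decision style satisficing: take the first option that clears a “good enough” bar instead of paying the search costs of finding the true optimum. A minimal sketch (the option values are invented):

```python
def satisfice(options, threshold):
    """Bounded rationality: accept the first option that is good
    enough, paying only for the options examined so far."""
    for examined, value in enumerate(options, start=1):
        if value >= threshold:
            return value, examined
    return None, len(options)

def optimize(options):
    """'Perfect' rationality: examine every option, pick the best."""
    return max(options), len(options)

job_offers = [62, 70, 85, 90, 78]  # hypothetical scores for 5 offers

print(satisfice(job_offers, threshold=80))  # (85, 3): stopped early
print(optimize(job_offers))                 # (90, 5): searched everything
```

The satisficer accepts a slightly worse outcome in exchange for examining fewer options, which is exactly the “good-enough rather than optimal” trade-off described above.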
This understanding doesn’t excuse narrow-minded behavior; it explains why the behavior arises. Within the bounds of what a person in that part of the system can see and know, the behavior is reasonable.
How can we adapt to this new understanding? Going full circle, the solution comes from stepping outside the limited information available from any single place in the system (the “inside view”) and gaining an overview of the entire system (the “outside view”). As Meadows writes, “From a wider perspective, information flows, goals, incentives, and disincentives can be restructured so that separate, bounded, rational actions do add up to results that everyone desires.”
The bounded rationality of each actor in a system is determined by the information, incentives, disincentives, goals, stresses, and constraints impinging on that actor. One way to address it is through diverse perspectives and the integration of more information. Another is more drastic: redesigning the system itself so that those same forces push actors toward better behavior.
The most crucial contribution of systems thinking to our present discourse is that it encourages us to open-source our models, exposing them to outside scrutiny. Donella Meadows speaks in depth on this topic in her book.
As the world continues to change rapidly, systems thinking can help us manage, adapt, and recognize the wide range of choices we face. We can’t navigate well in an interconnected, feedback-dominated world unless we take our eyes off short-term immediacies and look for long-term behavior and structure; unless we are aware of false boundaries and bounded rationality; unless we take into account limiting factors and nonlinearities.
While systems thinking can help us understand many things we didn’t before, it can’t help us know everything.
Credit Reddit
SIMPLE SOLUTIONS TO PREDICT BETTER - THERE ARE SOME REALMS WE WILL NEVER BE ABLE TO PREDICT - PROACTIVE PREDICTION VS. REACTIVE RESPONSE
We embarked on a journey to address cognitive limitations with a large table detailing each limitation impacting our predictive abilities, as identified in Tenet #5, and their proposed solutions.
Hopefully, you’ve seen that there are simple steps we can take today to significantly address many of these limitations, minimizing their effect on our lives. The two leading solutions are the characteristics of superforecasters (diverse perspectives, critical thinking, rethinking, awareness, and humility) and the methodology of systems thinkers.
These frameworks genuinely assist our forecasting, but they don’t solve every predictive issue we face. Some fundamental aspects of the world simply cannot be predicted better than random chance.
Tetlock details some of these in the latter half of his book. As his studies showed and other researchers have similarly concluded, “human cognitive systems will never be able to forecast turning points in the lives of individuals or nations several years into the future.” In other words, forecasting accuracy degrades sharply the further into the future we project; in Tetlock’s work, the accuracy of expert predictions declined toward random chance around five years out.
Tetlock also reflects on how his methodologies supplement his colleague Nassim Taleb's work in discussing Black Swan events. For these highly improbable events, which occur so infrequently, it would take decades, centuries, or millennia to aggregate enough data and forecasts to accurately determine whether people could correctly predict them with statistical significance.
As such, the current results tell us nothing about how good superforecasters might be at spotting Black Swan events; we simply don’t know. Tetlock expands on this point in his book.
Given these bits of commentary along with everything discussed throughout Tenets #4, #5, and now #6, we can fully flesh out the table of circumstances where we can and can’t predict, as well as highlight the gray area in between.
| Predictable with Relative Accuracy (systems with regularity, stability, or abundant data) | Partially Predictable (systems with partial patterns, bounded uncertainty, outcomes influenced by luck and skill) | Effectively Unpredictable (systems dominated by novelty, complexity, or true randomness) |
| --- | --- | --- |
| Ordinary things | Occasional things | Truly novel things (“one-offs”) |
| Well-Understood Systems | Complex Adaptive Systems | Chaotic/Emergent Systems |
| High-Frequency/Data-Rich | Medium-Frequency/Mixed Data | Low-Frequency/Novel Events |
| Linear/Mechanical Relationships | Non-Linear But Bounded | Non-Linear And Unbounded |
| Short-Term Continuations | Medium-Term Patterns | Long-Term Transformations |
| Instances where history literally repeats itself | Instances where the future is similar to the past | Instances with no resemblance to the past |
| Regular Events | Irregular Events | Extreme Events |
| Predictions fundamentally change the outcome (self-fulfilling/defeating prophecies) | Predictions moderately influence outcomes | Predictions don’t significantly alter outcomes |
| Certainty | Reducible uncertainty | Irreducible uncertainty |
| Complete information | “Enough” information | Incomplete information |
| Uncertainty Layer 1 (Tenet #4) | Uncertainty Layer 2 | Uncertainty Layers 3, 4, and 5 |
| Entirely skill | Less luck than skill | Mostly luck |
| Immediate events | Medium-term possibilities | Long-term horizons |
| Simple systems | Complex systems | Chaotic systems |
| Closed systems | Open systems with limited variables | Open systems with unlimited variables |
| Vague notions of the future | Generalized, yet sharper visions of the future | Clear, specific instances of future events |
| No feedback loops present | Minimal feedback loops present | Strong feedback loops present |
Undoubtedly, there are many areas in which we can and do predict with relative accuracy. Those are often near-term events governed by linear, known variables, with little noise.
Having said that, these domains are often the areas that, colloquially, don’t move the needle much in our lives. Yes, there are instances where a butterfly flapping its wings creates a tornado across the world, but that’s also the product of many other variables.
As you can see above, the gray area and the area where we can’t predict better than random chance are much more life-altering, with widespread consequences. This is worrying: we don’t have the capabilities, and probably won’t ever have them, to predict many of these events.
Given that, what should we do?
Luckily, that’s the subject of our next portion of Tenets, #7, #8, and #9. We’ll start by diving deeper into the other ways our lives (whether through our own doing or a natural characteristic of the world) are fragile and how we can recognize those aspects, beginning with Tenet #7:
Fragile systems amplify their own vulnerabilities, increasing the risk of failure and worsening our cognitive limitations.
That's a wrap on this deep dive.
Found this analysis valuable? The best way to support Brainwaves is to share it with someone who'd benefit from these insights.
Drew Jackson
Founder & Writer
Keep Exploring
- Next Deep Dive: Special Edition Interview - January 7th, 2026
- This Saturday: Weekly roundup of breaking developments across energy, space, venture capital, economics, intellectual property, and philosophy
- Previous Editions: View the archive here
Stay Connected
New to Brainwaves? Join thousands of readers getting bi-weekly deep dives into the forces reshaping our world.
Sponsor This Newsletter: Reach an engaged audience of forward-thinking readers. Email us for details.
Disclaimer: Views expressed are personal opinions, not financial advice. This content is educational only. Investment decisions carry risks - always consult professionals and do your own research. All sponsorships are clearly disclosed.
© 2025 Brainwaves. All rights reserved.
brainwaves.me@gmail.com