
Sensemaking Under Uncertainty

Core Idea: The world is uncertain, information is incomplete, and you must act anyway. Sensemaking is the discipline of building workable models from fragmentary evidence, calibrating your confidence to match the actual strength of that evidence, and choosing actions that remain sound even when your models turn out to be wrong. The goal is not certainty. It is calibrated navigation.

In 1854, Soho was dying. Cholera was tearing through the Broad Street neighborhood of London, killing hundreds. The prevailing theory—miasma, the idea that disease spread through foul air—pointed in one direction. But John Snow, a physician with no special authority and no institutional backing, pointed in another. He mapped the deaths. He noticed they clustered around a single water pump. He could not prove the water was the cause—germ theory did not yet exist—but the pattern was strong enough to act on. He persuaded the parish authorities to remove the pump handle, and the outbreak, already waning, soon ended. Snow did not have certainty. He had a model that fit the evidence better than the alternative, and he acted on it. That is sensemaking under uncertainty.

The Challenge

Philosophy asks “how can we know?” Sensemaking asks something more urgent: “how do we navigate when we cannot know for certain?” The latter is the practical question that confronts every person, every organization, every institution, every day. Certainty is rare. Action is unavoidable. The gap between what we can verify and what we must decide on is where sensemaking lives.

We face overlapping layers of uncertainty. Our observations are noisy, biased, and incomplete—this is data uncertainty. Multiple models can explain the same evidence, and we cannot be sure which is right—this is model uncertainty. Even correct models face random outcomes—this is outcome uncertainty. And sometimes we are not even sure what outcomes we should want—this is value uncertainty. Traditional epistemology focuses on justified true belief. Sensemaking focuses on calibrated action despite all four layers operating simultaneously.

Building Models

Sensemaking begins with noticing. Something happens. You pay attention. The first act of sensemaking is simply observing with enough care to notice patterns—and, more importantly, to notice when something surprises you. Surprise is diagnostic. It means your current model did not predict what just arrived. That gap between expectation and observation is where learning lives.

From observation, we form hypotheses. What models could produce these observations? The discipline here is to hold multiple hypotheses simultaneously rather than committing too early to one. Karl Weick, the organizational theorist at the University of Michigan who pioneered the study of sensemaking, borrowed the cybernetics principle of “requisite variety” to describe this: your set of explanations needs to be at least as varied as the phenomena you are trying to explain.

Good models generate predictions. This is the test that separates genuine understanding from post-hoc rationalization. A model that only explains what has already happened is a story. A model that predicts what will happen next is a tool. If the prediction lands, the model gains credibility. If it misses, the model loses credibility and needs updating.

The updating process is where Bayesian reasoning enters. Thomas Bayes, an eighteenth-century Presbyterian minister, formalized what good thinkers do intuitively: when new evidence arrives, adjust your confidence in each hypothesis proportionally to how well that hypothesis predicted the evidence. In other words, evidence that your model predicted strongly should raise your confidence in that model. Evidence that your model did not predict—or that a rival model predicted better—should lower it.
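
To make the update mechanics concrete, here is a minimal sketch in Python, using the Snow example with invented priors and likelihoods (none of these numbers come from the historical record):

```python
# Bayesian update: shift confidence toward the hypothesis that
# predicted the observed evidence. All numbers are illustrative.

priors = {"contaminated_water": 0.3, "foul_air": 0.7}

# P(evidence | hypothesis): how strongly each model predicted the
# observed clustering of deaths around a single pump.
likelihoods = {"contaminated_water": 0.8, "foul_air": 0.1}

# Bayes' rule: posterior is proportional to prior * likelihood.
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: p / total for h, p in unnormalized.items()}

print(posteriors)  # contaminated_water ≈ 0.77, foul_air ≈ 0.23
```

Notice that the water hypothesis started as the underdog; one well-predicted piece of evidence was enough to flip the odds.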

Confidence Calibration

Having beliefs is not enough. Having appropriately confident beliefs is what sensemaking demands. Calibration means that when you say you are 70 percent confident, you are right about 70 percent of the time. Overconfidence means you are wrong more often than your stated certainty suggests. Underconfidence means you are right more often than you credit yourself for.

Philip Tetlock, a political scientist at the University of Pennsylvania, spent decades studying forecasters and found that the best ones—he called them “superforecasters”—shared a specific trait: they were well-calibrated. They did not know more than everyone else. They knew what they knew and what they did not. They assigned probabilities carefully, updated frequently, and tracked their accuracy over time.

Calibration is a skill, not a talent. It improves with practice. The core practices are straightforward: make explicit predictions with probabilities attached, check outcomes, and update your self-model based on results. Notice the difference between “I think this will happen” and “I am 80 percent confident this will happen.” The second formulation is testable. Over time, you learn where your confidence is warranted and where it is not.
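
A sketch of what that tracking can look like, assuming a simple log of (stated confidence, outcome) pairs; the entries below are invented:

```python
from collections import defaultdict

# Invented prediction log: (stated confidence, did it happen?).
log = [(0.8, True), (0.8, True), (0.8, False), (0.8, True), (0.8, False),
       (0.6, True), (0.6, False), (0.6, True), (0.6, False), (0.6, False)]

buckets = defaultdict(list)
for confidence, outcome in log:
    buckets[confidence].append(outcome)

# Well-calibrated means stated and actual rates track each other.
for confidence in sorted(buckets):
    outcomes = buckets[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {confidence:.0%} -> actual {hit_rate:.0%} ({len(outcomes)} predictions)")
# stated 60% -> actual 40%; stated 80% -> actual 60%: overconfident.
```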

One of the most powerful calibration habits is actively seeking disconfirmation—looking for evidence against your current model rather than evidence that supports it. A model that has survived serious attempts to disprove it deserves more confidence than a model that has only been confirmed by friendly evidence.

Acting Under Uncertainty

Models inform action but do not determine it. Even with well-calibrated beliefs, the relationship between what you know and what you should do is not straightforward. Decision theorists distinguish three regimes, each requiring a different strategy.

Under risk, you know the probabilities but not the specific outcome. Expected value calculations (weighting each possible outcome by its probability) apply. This is the regime of insurance, gambling, and well-characterized medical decisions. The math is clean. The challenge is emotional: accepting that the best decision can still produce a bad outcome.
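
The calculation itself is short. A sketch with invented probabilities and payoffs:

```python
# Expected value: weight each possible outcome by its probability.
# The bet below is invented for illustration.
outcomes = [(0.70, 100.0),   # 70% chance of gaining 100
            (0.30, -150.0)]  # 30% chance of losing 150

expected_value = sum(p * payoff for p, payoff in outcomes)
print(expected_value)  # 0.7*100 - 0.3*150 = 25.0
```

A positive expected value makes this the better bet over many repetitions, even though three times in ten it loses money, which is exactly the emotional difficulty the paragraph above describes.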

Under uncertainty, you do not know the probabilities. Robust strategies—ones that perform reasonably well across a range of scenarios—matter more than optimal strategies that depend on getting the probabilities right. In other words, when you cannot estimate the odds, focus on decisions that avoid catastrophe rather than decisions that maximize expected return.
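
One common way to formalize “avoid catastrophe” is the maximin rule: rank options by their worst-case payoff and pick the least bad. The essay does not name a specific rule, so treat this sketch, with its invented scenarios and payoffs, as one illustration:

```python
# Maximin: with no probabilities available, rank options by
# their worst-case payoff. All payoffs are invented.
payoffs = {
    "aggressive": {"boom": 200, "steady": 50, "bust": -300},
    "hedged":     {"boom": 80,  "steady": 60, "bust": -20},
}

# Pick the option whose minimum payoff across scenarios is highest.
choice = max(payoffs, key=lambda option: min(payoffs[option].values()))
print(choice)  # "hedged": its worst case (-20) beats aggressive's (-300)
```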

Under ignorance, you do not even know the outcome space—you cannot list what might happen. Here, exploratory action dominates. Gather information. Run small experiments. Reduce ignorance before committing resources. The value of the next piece of information may exceed the value of the best available action.
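
A toy sketch of probe-then-commit, with invented options, payoffs, and probe cost:

```python
# Under ignorance, cheap probes buy information before a large
# commitment. All numbers are invented for illustration.
hidden_payoffs = {"option_a": 10, "option_b": 120, "option_c": 35}
PROBE_COST = 5  # cost of one small experiment

# Committing blindly risks landing on the worst option.
worst_blind_outcome = min(hidden_payoffs.values())

# Probing all three options costs 15, then commit to the best found.
after_probing = max(hidden_payoffs.values()) - PROBE_COST * len(hidden_payoffs)

print(worst_blind_outcome, after_probing)  # 10 vs. 105: the probes paid for themselves
```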

Common Failures

Sensemaking fails in predictable ways, and knowing the failure modes is itself a form of calibration.

Premature closure is committing to a model before the evidence warrants it. The discomfort of uncertainty drives us toward the comfort of a settled view, even when that view is poorly supported. Comfortable certainty substitutes for accurate uncertainty. The antidote is to notice when your commitment to an explanation feels more like relief than like reasoning.

Infinite regress is the opposite failure—refusing to act until certainty arrives. But certainty never comes. Analysis paralysis is not cautious thinking. It is the failure to recognize that inaction is itself a decision, usually a bad one, because the world does not wait for us to finish deliberating.

Narrative override occurs when a coherent story overwhelms probabilistic reasoning. Good stories feel true. They have beginnings, middles, and ends. They feature causes and effects that satisfy our pattern-seeking minds. But the coherence of a narrative is not evidence of its accuracy. Many coherent stories are wrong. The world is frequently less story-shaped than we would like.

Base rate neglect is ignoring prior probabilities when evaluating evidence. If a disease affects one in ten thousand people and the test is 99 percent accurate, a positive result still means there is only about a one percent chance you actually have the disease. The base rate (how common the condition is) matters enormously, but we chronically underweight it in favor of the vivid, specific evidence in front of us.
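
The arithmetic behind that one percent, reading “99 percent accurate” as 99 percent sensitivity and 99 percent specificity (an assumption the prose leaves implicit):

```python
# Bayes' rule applied to a rare-disease test.
prevalence  = 1 / 10_000  # base rate: 1 in 10,000
sensitivity = 0.99        # P(positive | disease)
specificity = 0.99        # P(negative | no disease)

true_positives  = sensitivity * prevalence
false_positives = (1 - specificity) * (1 - prevalence)

p_disease_given_positive = true_positives / (true_positives + false_positives)
print(f"{p_disease_given_positive:.1%}")  # about 1.0%
```

In a population of 10,000, the roughly 100 false positives from healthy people swamp the single true case. That is the base rate doing its work.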

The Practical Frame

Cross-domain evidence gathering is one structural antidote. If the same pattern appears in multiple independent domains, that convergence is stronger support for a model than any single domain can provide. Convergent inference—multiple independent paths reaching the same conclusion—raises confidence more than repetition within a single path.
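
One way to see why independent convergence beats repetition, sketched with likelihood ratios; the numbers, and the choice to model correlated repeats as near-1 ratios, are illustrative assumptions:

```python
# Independent evidence multiplies likelihood ratios (Bayes factors);
# repeating a correlated source adds almost nothing new.

def update_odds(prior_odds: float, likelihood_ratios: list[float]) -> float:
    """Multiply prior odds by each piece of evidence's likelihood ratio."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

prior_odds = 1.0  # even odds to start

# Three independent domains, each moderately favoring the model:
independent = update_odds(prior_odds, [3.0, 3.0, 3.0])  # odds 27:1

# Three reports from one correlated source: the first carries the
# full ratio, the repeats barely move it (modeled here as 1.1 each).
correlated = update_odds(prior_odds, [3.0, 1.1, 1.1])   # odds ~3.6:1

print(independent, correlated)
```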

First-principles reasoning is another. Rather than inheriting uncertain models from others, build from bedrock where possible. What must be true regardless of which model is correct? Conclusions derived from fundamentals are more robust than conclusions inherited from tradition.

The goal of sensemaking is not to eliminate uncertainty. That is impossible. The goal is to navigate it skillfully: knowing where you stand on the confidence spectrum, acting in ways that match your actual epistemic position, and updating as new evidence arrives. Not certainty. Calibrated navigation.

How This Was Decoded

This essay integrates decision theory, Bayesian epistemology, Karl Weick’s organizational sensemaking framework, and Philip Tetlock’s forecasting research. Cross-verified: the same sensemaking structure—observe, hypothesize, predict, update—applies to personal, scientific, medical, and strategic decisions. The process is domain-invariant. Applied convergent confidence and signal-versus-noise principles throughout.
