◆ Decoded Epistemology · 8 min read

The Streetlight Effect

Core Idea: We systematically search where looking is easy rather than where finding is likely. Measurement convenience, data availability, methodological familiarity, and publication incentives all push attention toward tractable questions and away from important ones. The result is confident knowledge about what is easy to study and systematic ignorance about what matters most.

The joke is old enough that no one knows who told it first. A police officer finds a drunk man on his hands and knees under a streetlight, searching the ground. “What are you looking for?” “My keys.” “Where did you drop them?” “Over there,” the drunk says, pointing into the dark. “Then why are you searching here?” “Because this is where the light is.” It is a joke, but it describes one of the most pervasive and least-discussed biases in research, analysis, and everyday thinking. We search where looking is easy. The keys are rarely there.

Why It Happens

Measurement convenience. Some things are easy to measure. Others are not. When the important thing is hard to measure but something related is easy, we measure the easy thing and gradually begin treating it as the important thing. IQ tests measure a particular kind of cognitive performance. Whether they measure intelligence—actual, full, human intelligence—is a different question, but the test exists, it produces a number, and the number is convenient. So we use it, and slowly the measurable proxy displaces the unmeasurable reality.

Data availability. Research uses available data, and available data is not randomly distributed. It exists because someone collected it, usually for other reasons. Hospital records are available; the health of people who never go to hospitals is not. Crime statistics reflect reported crime; unreported crime is in the dark. Economic data tracks what governments measure; the informal economy, the household economy, the care economy—largely invisible.

Method lock-in. Researchers develop expertise in particular methods. They naturally apply those methods to problems the methods can address. Problems requiring unfamiliar methods get less attention, not because they are less important but because the tools are not at hand. In other words, the available hammer determines which problems look like nails.

Publication incentives. Easy studies get done quickly. Quick studies get published. Published studies get cited. Cited researchers get funded. The entire incentive system selects for tractable questions over important ones. A researcher who spends ten years on a difficult, ambiguous problem and publishes one inconclusive paper is punished by a system that rewards a researcher who publishes ten clean papers on easy questions in the same period.

Where It Shows Up

Economics. Economics studies what is quantifiable: prices, GDP, employment, trade volumes. What is hard to quantify—wellbeing, meaning, social cohesion, the value of unpaid care work—receives less attention. Not because it matters less, but because it is in the dark. Economic policy is shaped by what economists measure, and economists measure what is measurable. The things that fall outside the light of quantification are not absent from reality. They are absent from the models that guide decisions.

Medicine. We study diseases with clear biomarkers (measurable biological indicators) more than diseases without them. Blood pressure is easy to measure; pain is not. Cholesterol levels are objective; fatigue is subjective. Mental health is harder to quantify than cardiac function. Funding and research attention follow measurement convenience, which means some conditions are well-understood and well-treated while others—often the ones that cause the most suffering—remain in the dark.

AI safety. Capability benchmarks are easy to construct and easy to score. Whether an AI system is aligned with human values, safe under novel conditions, or likely to produce harmful outcomes in deployment—these are hard to measure. The field optimizes for what it can benchmark. Whether the benchmarked systems are actually safe is a question in the dark, and the gap between capability measurement and safety measurement grows wider with each generation of models.

Education. Test scores are measurable. Wisdom, curiosity, creativity, resilience, and the ability to think clearly under pressure are not. We optimize for scores because scores are in the light. Whether students emerge from education actually equipped to navigate a complex world is a question we largely cannot answer—because we largely cannot measure it—and so we largely do not ask it.

The Compounding Problem

The streetlight effect does not operate in isolation. It compounds with other biases to produce a particularly dangerous epistemic landscape.

Combined with Goodhart’s Law (when a measure becomes a target, it ceases to be a good measure), the streetlight effect means we not only study what is easy to measure but actively optimize for it, further distorting the relationship between the metric and the reality it was supposed to represent.
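The regressional form of this interaction can be sketched numerically. The simulation below is illustrative only (not from the essay; all names and parameters are invented): it selects the best of n candidates by a noisy proxy for a true value, and shows that as selection pressure grows, more of the proxy's apparent improvement is measurement noise rather than the underlying reality.

```python
import random

random.seed(42)

def best_by_proxy(n):
    """Select the candidate with the highest proxy score out of n."""
    best_true, best_proxy = 0.0, float("-inf")
    for _ in range(n):
        true_value = random.gauss(0, 1)          # what we care about
        proxy = true_value + random.gauss(0, 1)  # what we can measure
        if proxy > best_proxy:
            best_true, best_proxy = true_value, proxy
    return best_true, best_proxy

results = {}
for n in (10, 1000):
    trials = [best_by_proxy(n) for _ in range(2000)]
    avg_true = sum(t for t, _ in trials) / len(trials)
    avg_proxy = sum(p for _, p in trials) / len(trials)
    results[n] = (avg_true, avg_proxy)
    print(f"n={n:4d}  avg true={avg_true:.2f}  avg proxy={avg_proxy:.2f}")
```

Under these assumptions the proxy score keeps climbing as n grows while the true value lags ever further behind it: the harder you optimize the metric, the larger the share of the "improvement" that is noise.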

Combined with survivorship bias, the effect means we only see research that was conducted under the streetlight—the questions that were tractable, the studies that produced results, the papers that got published. The important questions that could not be studied this way simply do not appear in the literature. Their absence looks like unimportance. It is not.
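The filtering can be made concrete with a toy model (illustrative only; the assumption that important questions are less tractable is deliberately built in by construction, since that is the scenario the essay describes):

```python
import random

random.seed(1)

# Assumption baked in for illustration: a question's tractability is the
# complement of its importance -- the questions that matter most are the
# hardest to study.
questions = []
for _ in range(10_000):
    importance = random.random()
    questions.append({"importance": importance,
                      "tractability": 1 - importance})

# A study gets published with probability equal to its tractability:
# the filter selects on ease of study, never on importance.
published = [q for q in questions if random.random() < q["tractability"]]

def mean(qs, key):
    return sum(q[key] for q in qs) / len(qs)

avg_importance_all = mean(questions, "importance")   # ~0.50
avg_importance_pub = mean(published, "importance")   # ~0.33
print(f"all questions:  mean importance {avg_importance_all:.2f}")
print(f"published only: mean importance {avg_importance_pub:.2f}")
```

A reader who sees only the published set observes a literature systematically skewed toward the easy end, and the most important questions are the ones most likely to be missing from it.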

Combined with narrative bias, the effect means we construct confident stories based on whatever data is available, filling in the dark areas with assumptions and calling the result knowledge. The confidence feels warranted because the data in the light is real. But the conclusions are shaped as much by what is missing as by what is present.

Corrections

Ask why this is studied. When encountering research, ask whether the question was addressed because it is important or because it is tractable. The answer affects how much weight the findings should carry. A study that answers a convenient question is less informative than it appears if the important question remains in the dark.

Look for gaps. The most revealing question about any field is: what is not being studied? The unstudied areas often contain the most important unknowns. Absence of research is not evidence of unimportance. It may be evidence that the streetlight does not reach that far.

Value difficulty. Hard-to-study questions may be more important precisely because they are unstudied. The easy questions have likely been answered already. The hard questions—the ones that resist measurement, defy clean methodology, and produce ambiguous results—are where the real unknown territory lies.

Build new lights. Sometimes the correct response is not to search harder under the existing light but to create new measurement methods that illuminate previously dark areas. This is harder than using existing tools but vastly more valuable. Every new measurement capability opens a region of reality that was previously invisible to systematic inquiry.

How This Was Decoded

This essay integrates philosophy of science, research methodology and its incentive structures, measurement theory, and observation of academic and institutional practice. Cross-verified: the same streetlight pattern—searching where looking is easy rather than where finding is likely—appears in research, business metrics, policy design, medical practice, and personal decision-making. The bias is structural and self-reinforcing, which is what makes it difficult to correct.
