
Originally published on Medium.
The short version
Abraham Wald's WWII bomber study taught us survivorship bias: the planes that returned with bullet holes in the wings and tail showed where damage was survivable. The real vulnerability was in the areas (engine, cockpit, hydraulics) where damaged planes never came home. PMs make this mistake everywhere: customer feedback from active users only, feature usage as a "survivors' metric," A/B tests measuring people who stuck around, competitor analysis seeing only the bets that worked. Fix it by gathering data from churned users, documenting failed experiments, layering complementary metrics (churn by feature, friction points, long-term retention), testing multiple hypotheses simultaneously, and measuring what's missing. The data you see is often the least important data.
The WWII Bomber Study
During World War II, the US military wanted to improve bomber aircraft survival rates. Analysts examined returning bombers and noticed a pattern: bullet holes concentrated on the wings, tail, and fuselage.
The intuitive conclusion: reinforce those areas.
But Abraham Wald, a statistician, saw something different. The planes with bullet holes in those locations had returned safely. The planes that didn't return were the ones with damage in other areas - the engine, the cockpit, the hydraulics.
By reinforcing the areas where surviving planes had taken hits, the military would have been armoring exactly the spots where a plane could absorb damage and still return. The real vulnerability lay in the areas where they couldn't observe damage - because the planes hit there never came home.
This is survivorship bias: the systematic error of focusing on data that passed a certain filter, while overlooking the data that got filtered out entirely.
How Survivorship Bias Shows Up in Product Management
Customer Feedback
You ask customers what they like and dislike about your product. You get feedback from active, engaged users.
But what about customers who churned? What about people who tried your product and abandoned it before becoming engaged? Their feedback carries the most important signal - they saw something that made them leave.
If you only listen to active users, you'll optimize the product for them while missing the friction that turns away new customers.
Feature Development
You look at which features get used most and double down on them. But usage is a survivors' metric.
A feature nobody uses might be a feature nobody found, not a feature with no value. A feature with moderate usage might be solving a critical problem for a niche audience - a nuance the raw metric doesn't capture.
A/B Testing
You run an A/B test and see that variant A performs better than variant B. You declare victory and ship A.
But what about the users who didn't complete the test? What about the users who bounced before seeing either variant? The A/B test results come from people who stuck around long enough to be counted.
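One way to see the gap, as a rough sketch: compare the per-variant numbers your testing tool reports against the full funnel, including users who bounced before exposure. The DataFrame columns here (user_id, variant, converted) are hypothetical.

```python
import pandas as pd

# Hypothetical funnel log: one row per user who entered the flow.
# 'variant' is None for users who bounced before being exposed.
df = pd.DataFrame({
    "user_id":   [1, 2, 3, 4, 5, 6, 7, 8],
    "variant":   ["A", "A", "B", "B", None, None, "A", "B"],
    "converted": [1, 0, 0, 0, 0, 0, 1, 1],
})

# Survivors-only view: the numbers most A/B dashboards report.
exposed = df.dropna(subset=["variant"])
print(exposed.groupby("variant")["converted"].mean())

# The missing denominator: how many users never saw either variant?
# If this share is large, the test only measured people who stuck around.
bounce_rate = df["variant"].isna().mean()
print(f"Bounced before exposure: {bounce_rate:.0%}")
```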
Competitor Analysis
Your competitors are winning customers. You analyze what they're doing and try to copy it.
But you're only seeing their successful bets. You're not seeing the failed products, the abandoned features, the experiments that burned cash and went nowhere. You're seeing a filtered view of their decisions - the ones that worked.
Feature Adoption
You launch a new feature and track adoption rates. 40% of users adopt it within 30 days.
But why did the other 60% not adopt? Some tried it and rejected it. Some never saw it. Some were outside the target use case. Some lacked the context to understand its value. Each of these groups has different implications for iteration.
If you only look at adoption metrics, you'll miss critical feedback from non-adopters.
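As a rough illustration, with hypothetical analytics flags (saw_feature, tried_feature, adopted), you can split non-adopters into those buckets, since each one calls for a different response:

```python
import pandas as pd

# Hypothetical per-user flags pulled from product analytics.
users = pd.DataFrame({
    "saw_feature":   [True, True, False, True, False, True],
    "tried_feature": [True, False, False, True, False, False],
    "adopted":       [True, False, False, False, False, False],
})

non_adopters = users[~users["adopted"]]

def bucket(row):
    if row["tried_feature"]:
        return "tried and rejected"   # product/UX problem
    if row["saw_feature"]:
        return "saw but never tried"  # value-communication problem
    return "never saw it"             # discoverability problem

# Each bucket calls for a different iteration, so count them separately.
print(non_adopters.apply(bucket, axis=1).value_counts())
```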
How to Recognize Survivorship Bias in Your Data
Before you act on a metric or pattern, ask:
Who is missing from this data? What filters were applied before this data reached you?
What would the opposite tell me? What can I learn from people who didn't take this action or see this outcome?
Am I optimizing for the visible or the important? Is this metric measuring what I actually care about?
What experiments did we run that failed? What can I learn from them?
Practical Steps to Fix It
Gather Data from All Users, Not Just Active Ones
Your most important feedback comes from people who left. This is uncomfortable. They're not around to ask. They've moved on to a competitor.
But this data is gold. Conduct exit interviews with churned customers. Send surveys to users who abandoned onboarding. Analyze the behavior patterns of users who never reached activation.
The patterns you find will reveal friction points that active users have adapted to or worked around.
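A minimal sketch of that last idea, assuming a per-user table with onboarding-step flags and a churn label (all column and step names here are hypothetical):

```python
import pandas as pd

# Hypothetical onboarding flags plus a churn label, one row per user.
df = pd.DataFrame({
    "churned":         [1, 1, 1, 0, 0, 0],
    "created_project": [1, 1, 0, 1, 1, 1],
    "invited_team":    [0, 0, 0, 1, 1, 0],
    "first_export":    [0, 1, 0, 1, 1, 1],
})

steps = ["created_project", "invited_team", "first_export"]

# Step completion rate for churned (1) vs. retained (0) users. A step
# with a large gap is a candidate friction point that retained users
# adapted to and churned users never got past.
print(df.groupby("churned")[steps].mean().T)
```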
Use Failure to Your Advantage
Failed experiments contain rich data. When a feature doesn't drive adoption, when a campaign doesn't convert, when a workflow doesn't stick - those failures tell you something important about user behavior or product-market fit.
Document your failed experiments. Look for patterns. Extract the signal from the noise.
Rethink Your Prioritization Framework
Metrics like "usage," "engagement," and "adoption" are survivors' metrics. They tell you what people who stick around are doing, not what's actually valuable.
Layer in complementary metrics:
- Churn by feature (are feature users retaining better?)
- Friction points (where do users get stuck?)
- Problem resolution (does this feature solve the stated problem?)
- Long-term retention (does this feature correlate with sustained value?)
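As a rough sketch of the first and last items, with hypothetical feature flags and a 90-day retention column:

```python
import pandas as pd

# Hypothetical per-user table: feature flags plus 90-day retention.
df = pd.DataFrame({
    "uses_export":  [1, 1, 0, 0, 1, 0, 1, 0],
    "uses_search":  [1, 0, 1, 1, 1, 0, 0, 1],
    "retained_90d": [1, 1, 0, 1, 1, 0, 1, 0],
})

# Churn by feature: do users of a feature retain better than non-users?
# Note this is correlation, not causation - engaged users may retain
# for reasons that have nothing to do with the feature.
for feature in ["uses_export", "uses_search"]:
    by_usage = df.groupby(feature)["retained_90d"].mean()
    lift = by_usage.loc[1] - by_usage.loc[0]
    print(f"{feature}: {lift:+.0%} retention lift for users vs. non-users")
```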
Test Multiple Hypotheses Simultaneously
Instead of running one A/B test and declaring a winner, run multiple experiments to surface different user preferences and needs.
You might find that variant A wins overall, but variant B outperforms with a specific segment. Those are the insights that lead to better product decisions.
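A minimal sketch of that segment cut, with hypothetical segment labels; with many segments you'd also want a multiple-comparisons correction before trusting any single cut:

```python
import pandas as pd

# Hypothetical test results with a user segment attached to each row.
df = pd.DataFrame({
    "variant":   ["A", "B"] * 4,
    "segment":   ["new"] * 4 + ["power"] * 4,
    "converted": [1, 0, 1, 0, 1, 1, 0, 1],
})

# The overall winner...
print(df.groupby("variant")["converted"].mean())

# ...versus the per-segment picture, which can flip for a subgroup.
print(df.groupby(["segment", "variant"])["converted"].mean().unstack())
```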
Measure What's Missing
The data you're not collecting might be more important than the data you are.
If you're not tracking why users churn, you're missing critical signal. If you're not measuring learning curve or onboarding comprehension, you're missing friction. If you're not analyzing support tickets, you're missing pain points.
Identify the blind spots in your metrics and design experiments to fill them.
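One concrete blind-spot check, sketched with a hypothetical churn log: measure how much of your churn has no recorded reason at all. The size of the unknown bucket is itself a metric.

```python
import pandas as pd

# Hypothetical churn log; 'reason' is None when nothing was captured.
churned = pd.DataFrame({
    "user_id": [1, 2, 3, 4, 5, 6],
    "reason":  ["price", None, "missing feature", None, None, "bugs"],
})

# The share of churn with no recorded reason tells you how big
# your blind spot is before you try to interpret the reasons you have.
coverage = churned["reason"].notna().mean()
print(f"Churn with a recorded reason: {coverage:.0%}")
print(churned["reason"].value_counts(dropna=False))
```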
Building a Balanced Roadmap
Outcomes, Not Features
Features are what survive the roadmap. Outcomes are why they matter.
"Reduce time to value by 50%" is an outcome. "Add a new export format" is a feature.
When you prioritize around outcomes, you're forced to think about whether your solution actually solves the problem - not just whether you shipped something.
Balance Iteration with Innovation
Iteration optimizes for the visible: the features users are using, the pain points they're articulating.
Innovation challenges the invisible: the assumptions underlying your product, the needs users can't articulate, the workflows that haven't been invented yet.
The best roadmaps balance both. Too much iteration and you become incremental. Too much innovation and you build features nobody needs.
Tie Features to Retention
The ultimate metric is retention. Does this feature help users achieve their goals in a way that keeps them coming back?
If a feature drives adoption but not retention, it's papering over a deeper problem. If a feature has low adoption but drives disproportionate retention for its users, it might be more valuable than the metrics suggest.
The Deeper Lesson
Survivorship bias teaches us that the data we see is often the least important data.
The critical insights hide in what we can't measure directly: the users who left, the experiments that failed, the problems we didn't anticipate, the opportunities we're overlooking.
The best product managers develop a bias toward looking for what they're missing, not just optimizing what they see.
That's where the competitive advantage lies.