The Impact Loop
The daily rhythm that replaces sprints, stand-ups, and roadmap reviews. Sense what's happening, build a response, measure the impact, amplify what works.
Why your process is making you slower
I'm going to say something that'll get me hate mail from the Agile certification people: Scrum, SAFe, Kanban, and Shape Up were designed to solve a problem that doesn't exist anymore.
These frameworks all share a core assumption: that work is predictable and controllable. You estimate work. You batch it into sprints. You commit to deliverables. You measure velocity. The entire system is optimized so that you know, with some accuracy, what will ship in two weeks.
That made sense in 2008, when shipping was expensive and slow. Now? Your customer needs change Tuesday morning. Competitors launch something Friday afternoon. Your analytics tell you something broke or an opportunity opened yesterday. The person estimating two weeks of work in a sprint planning meeting isn't helping anymore - they're slowing you down.
I'm not saying Scrum is evil. It served a purpose. It taught the industry that you could ship incrementally instead of hoarding code for six months. That was real progress. But if you're still using a process built for predictability when your actual job is to respond faster than anyone else, you're optimizing for the wrong thing.
The Impact Loop doesn't optimize for predictability. It optimizes for responsiveness. For speed. For actually moving customer behavior, revenue, and retention metrics that matter.
Four beats, continuous rhythm
Think of the loop like breathing. Sense in. Build out. Measure and see. Amplify what works. Then loop back. No waiting for sprint planning. No ceremony. Just rhythm.
SENSE: Know what's actually happening
Before anything else, you need clarity. What's happening with your customers? What's the market doing? What signals are screaming for attention and what's just noise?
Most PMs get this wrong. They look at a dashboard and call that sensing. A dashboard shows you what it was configured to show three months ago. It's a snapshot of yesterday's questions, not today's problems.
Real sensing means:
- Customer signals: Support tickets, churn reasons, feature requests, but also the subtext. The customer who says "I need better reporting" might actually be saying "I don't trust my data."
- Behavior signals: How are users actually moving through your product? Where do they get stuck? Where are they succeeding faster than expected?
- Competitive signals: What did your competitors ship? Did they steal a feature you were building? Did they find a white space you missed?
- Market signals: Is your ICP growing more cautious? Are budgets shrinking? Is a new regulation coming that impacts your compliance requirements?
In the old world, you'd read spreadsheets and talk to customer success. In the new world, you have AI agents that can monitor all of this continuously and surface only the patterns that matter.
Here's what that looks like in practice: Every morning, instead of digging through Slack or email or dashboards, your sensing layer - four core agents running daily: Red Flag Detection, Competitive Intelligence, Support Signal Processing, and NPS/CSAT Analysis - gives you a 2-minute brief. "Trial conversion is down 18% in the Enterprise segment. Support is seeing confusion around column-based access. Competitor X just announced role-based permissions. Customer Y, one of your top 5 accounts, had 6 support tickets this week."
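To make that concrete, here's a minimal sketch of what a sensing layer can look like under the hood. Everything in it is illustrative: the Signal shape, the severity scores, and the fetch_signals stub all stand in for whatever your agents and data sources actually expose.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    source: str      # which agent surfaced it: "red_flag", "competitive", "support", "nps"
    headline: str    # one-line summary a human can act on
    severity: float  # 0.0 (noise) to 1.0 (drop everything)

def fetch_signals() -> list[Signal]:
    # Stand-in for the four sensing agents. In practice each would query
    # your analytics warehouse, ticket system, and competitor feeds.
    return [
        Signal("red_flag", "Trial conversion down 18% in Enterprise segment", 0.9),
        Signal("nps", "Top-5 account filed 6 support tickets this week", 0.8),
        Signal("support", "Spike in tickets about column-based access", 0.7),
        Signal("competitive", "Competitor X announced role-based permissions", 0.6),
    ]

def morning_brief(threshold: float = 0.5) -> str:
    """Filter to signals worth a PM's attention and render the 2-minute brief."""
    signals = sorted(fetch_signals(), key=lambda s: s.severity, reverse=True)
    lines = [f"[{s.source}] {s.headline}" for s in signals if s.severity >= threshold]
    return "\n".join(lines) or "No signals above threshold today."

if __name__ == "__main__":
    print(morning_brief())
```

The point isn't the plumbing - it's that filtering and ranking happen before you look, so the brief arrives already triaged.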
Your job isn't to notice these signals. Your job is to judge them. Which one matters most right now? The conversion dip is a revenue problem. The access confusion is a retention problem. The competitor announcement is a 6-week headache. Your sensing agent found all three. You decide which one demands attention today.
That judgment - which signal is worth your focus - is irreplaceable. AI is excellent at pattern detection. You're excellent at knowing what patterns move the needle.
BUILD: Make the response real, not a plan for it
This is where a lot of PMs break down. You sense a problem, and then you do this:
- Write a requirements document (3 days)
- Get design to mock it up (5 days)
- Present to engineering (2 days of async feedback, 1 meeting)
- Engineering estimates it (1 day)
- 2 weeks waiting for the sprint to start
- 2 weeks of building (hopefully)
- 1 day of QA
You just spent 30+ days finding out whether a hypothesis was true. By then, customer behavior may have shifted, competitors have moved, and your sense of urgency has faded.
The Impact Loop flips this. You don't plan. You build. You make a prototype of your idea and test it with real customers and real data.
Here's the secret: 80% of the features you're designing don't need the full engineering treatment to validate. They need a prototype. Something clickable. Something you can show a user. Something you can run an A/B test on.
Your job in the BUILD phase:
- Define the hypothesis clearly: "If we simplify the onboarding for Enterprise customers, their setup completion rate will increase."
- Work with your AI development partner to build a prototype. Not a sketch. Not a Figma file. Something real. Something running. You might spend 4-6 hours on this. You describe the problem, you iterate: "Make the onboarding simpler" → "Add a progress indicator" → "Show smart defaults instead of blank fields" → "Add a short video for the hardest step."
- Review it yourself. Does it actually address the problem you sensed?
For most ideas, you'll prototype, measure it, kill it, and learn something. That's not failure. That's 5 days instead of 30 days. That's the compounding advantage right there.
For the 15% of ideas that show real promise, you hand off the prototype to engineering. Now they're not starting from a 40-page spec that may not even be correct. They're looking at a working prototype and a mountain of data saying "customers respond to this." Their job gets faster, easier, and more informed.
MEASURE: Quantify what actually happened
Here's where most impact loops break: PMs measure the wrong things.
Vanity metrics feel good but they lie. Users visited your new page? Great. Did they convert? Did they stay? Did they tell a friend? Did you make money? Those are the only metrics that matter.
An impact metric answers one question: Did customer behavior change the way we hoped? The Feature Adoption Agent and OKR Tracker Agent run daily at 4pm, giving you real outcome data automatically without waiting for manual analysis.
If you shipped a simplified onboarding, the impact metrics are:
- Setup completion rate (the behavioral change you wanted)
- Time-to-first-value (does it happen faster now?)
- Trial-to-paid conversion (does simplicity actually drive revenue?)
- Onboarding support tickets (did you reduce confusion?)
You don't measure one of these. You measure all of them. Context matters. Maybe completion went up but conversion went flat - that tells you something important. People get through faster but aren't convinced. The problem isn't speed, it's clarity of value.
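If your analytics tool doesn't hand you these numbers directly, they're cheap to compute from a flat event log. A minimal sketch, with illustrative event names (trial_started, setup_completed, first_value, converted_paid) - swap in whatever your instrumentation actually emits; ticket counts would come from your support system.

```python
from datetime import datetime

# Flat event log: (user_id, event_name, timestamp). Stand-in for your analytics export.
events = [
    ("u1", "trial_started",   datetime(2025, 1, 7, 9, 0)),
    ("u1", "setup_completed", datetime(2025, 1, 7, 9, 40)),
    ("u1", "first_value",     datetime(2025, 1, 7, 10, 5)),
    ("u1", "converted_paid",  datetime(2025, 1, 14, 12, 0)),
    ("u2", "trial_started",   datetime(2025, 1, 7, 11, 0)),
]

def first_event(user, name):
    """Earliest timestamp of a given event for a user, or None."""
    times = [t for (u, e, t) in events if u == user and e == name]
    return min(times) if times else None

started   = {u for (u, _, _) in events if first_event(u, "trial_started")}
completed = {u for u in started if first_event(u, "setup_completed")}
converted = {u for u in started if first_event(u, "converted_paid")}

# Time-to-first-value in hours, for users who got there at all
ttfv_hours = [
    (first_event(u, "first_value") - first_event(u, "trial_started")).total_seconds() / 3600
    for u in started
    if first_event(u, "first_value")
]

print(f"setup completion: {len(completed) / len(started):.0%}")
print(f"trial-to-paid:    {len(converted) / len(started):.0%}")
if ttfv_hours:
    print(f"median TTFV:      {sorted(ttfv_hours)[len(ttfv_hours) // 2]:.1f}h")
```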
In your loop, measurement is automatic and daily. Your analyst agent:
- Sets up tracking for every feature you ship (no manual instrumentation, no waiting on engineers)
- Runs the experiment with proper controls (A/B test or staged rollout)
- Analyzes results with statistical rigor (not "we shipped it and it feels good")
- Reports daily until you have enough data to decide
You don't wait for "the results." Results flow in continuously.
Your job is to interpret them. Is a 12% increase in completion meaningful? Depends on sample size, depends on the business impact, depends on how much effort this cost. Your analyst tells you the number. You tell it what it means.
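"Statistical rigor" here doesn't require heavy machinery. For conversion-style metrics, a two-proportion z-test is the workhorse; here's a self-contained sketch (normal approximation, two-sided) so you can sanity-check whatever your analyst agent reports. The example numbers are illustrative.

```python
from math import sqrt, erf

def two_proportion_z(conversions_a: int, n_a: int, conversions_b: int, n_b: int):
    """Two-sided z-test for a difference in conversion rates (normal approximation)."""
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    pooled = (conversions_a + conversions_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))  # standard normal CDF at |z|
    return z, 2 * (1 - phi)                  # z score, two-sided p-value

# Example: 34% vs 52% conversion at a small n of 50 per arm
z, p = two_proportion_z(17, 50, 26, 50)
print(f"z = {z:.2f}, p = {p:.3f}")  # z ≈ 1.82, p ≈ 0.07 - suggestive, not yet conclusive
```

Feed it raw counts, not percentages: the same 18-point lift that's inconclusive at 50 users per arm is decisive at 500. That's exactly why the agent reports daily until you have enough data.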
AMPLIFY: Scale wins, kill losers, apply learnings
This is the easiest beat and yet most PMs skip it. If something works, you expand it. If something doesn't, you kill it fast and move on.
Amplification looks like:
- Double down on winners: The simplified onboarding worked for Enterprise. Test it for mid-market. Expand it to all customers. Brief engineering to build the production version so you're not running a prototype forever.
- Kill losers without guilt: You shipped the "suggest improvements" feature. Three people asked for it. Nobody uses it. Delete it. Get that code out of your codebase. That's a clean win - you learned it wasn't a real need.
- Apply learnings elsewhere: Enterprise customers were confused by the original onboarding. Are they confused by anything else? Your research agent can scan support tickets for similar patterns. Maybe the pricing page is confusing in the same way. Maybe the payment setup is. You've learned something about how your customers think. Apply it.
Amplification at scale is where your compounding advantage explodes. You're not just shipping features. You're building a feedback loop that makes each subsequent iteration better. You learn from one loop and immediately apply it to the next.
A complete loop from sensing to amplifying
Let me walk you through exactly how this works in practice. I'm going to use a scenario that's painfully real.
Tuesday morning, 8:15am - SENSE
Your monitoring agent alerts you: trial-to-paid conversion for your "Starter" product tier dropped 8% last week. That's 24 fewer paying customers than expected. At $29/month, that's about $700 in MRR, or roughly $8,400 in ARR. Not catastrophic, but directional.
Your research agent digs. It finds:
- 14 support tickets in the last week, all from trial users in their second week
- Common theme: "I don't understand how to use the custom fields feature"
- Your main competitor just released a "quick setup assistant" in a blog post you saw yesterday
- Your user data shows trial users hit the custom fields screen on day 3, spend 8 minutes there (usually it's 2 minutes), and 23% bounce without configuring anything
The pattern is clear: your most powerful feature is a barrier to adoption. New users see it, get intimidated, and bail.
Tuesday, 10:30am - BUILD
You open Claude Code. You describe the problem: "When trial users hit the custom fields screen, they're getting lost. I want to build an interactive guide that walks them through creating their first custom field. It should be simple, friendly, show them the result immediately, and only ask about the options they actually need."
You iterate. Back and forth. By 1pm, you have something real:
- A step-by-step wizard instead of a form
- It asks 3 questions, not 12
- Shows them the result in real-time
- Has a "recommended" option they can use without thinking
You test it yourself. You create a test account, go through the wizard, and it works. You feel 60% confident (which is high for a prototype).
Tuesday, 2pm - MEASURE
You deploy the wizard as an A/B test, but only for new trials starting Tuesday afternoon. Your analyst agent sets up the tracking:
- % of users who see the custom fields screen
- % of users who complete the wizard
- % of users who configure at least one custom field
- Time spent on the screen
- Trial-to-paid conversion rate
The experiment is live. You're comparing new behavior (wizard) against old behavior (form).
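One common way to do that assignment is deterministic hashing, so a returning user always lands in the same arm and you don't have to store assignments anywhere. A sketch, with the experiment name and 50/50 split as illustrative choices:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "custom-fields-wizard",
                   split: float = 0.5) -> str:
    """Deterministically bucket a user: the same user always sees the same arm."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform float in [0, 1]
    return "wizard" if bucket < split else "form"

assert assign_variant("user-123") == assign_variant("user-123")  # stable across calls
```

Salting the hash with the experiment name keeps arms uncorrelated across experiments, so a user who landed in "variant" once isn't systematically in "variant" everywhere.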
Thursday - WAIT AND OBSERVE
By Thursday, you have 60 trial users in each group. Your analyst agent sends you a daily update:
Control group (old form):
- 62% hit custom fields screen
- 34% configure something
- Average time: 7.2 minutes
Variant group (wizard):
- 64% hit custom fields screen
- 71% configure something
- Average time: 3.1 minutes
The wizard is crushing it. More people complete it. They spend less time. But conversion data takes longer (it's about whether they upgrade, and trials are 14 days).
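A quick sanity check: backing rough counts out of those rates (about 20 of 60 configuring in control versus 43 of 60 in the variant) and running them through the two-proportion z-test sketched earlier gives z ≈ 4.2 - significant well past conventional thresholds even at this sample size. The configuration lift is real; only the conversion read needs the full trial window.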
Next Tuesday - AMPLIFY
You now have a week of data. The numbers are even more dramatic:
Conversion rate for users who configured a custom field:
- Control: 34% converted to paid
- Variant: 52% converted to paid
That's an 18-point difference. In a cohort of 200 trial users, that's 36 additional conversions. At $29/month, that's $1,044 in additional monthly revenue. You're paying for your own salary with one feature.
You also notice something interesting: users who went through the wizard and configured a field are staying longer (their retention is higher too).
Now you amplify:
- Expand the test to 100% of trial users (not just a test group anymore)
- Brief engineering on building this in production. Show them the prototype. Show them the data. The wizard moves from prototype to roadmap priority.
- Look for similar problems with your research agent. Where else do new users get stuck and bounce?
From problem detection to validated, profitable solution: 8 days.
That's the compounding advantage. You didn't wait for a sprint. You didn't spend a week in meetings. You didn't estimate something that might not work. You just: sensed, built, measured, amplified.
How this connects to what you actually care about: outcomes
Let me get real about something. You probably know about OKRs. Maybe you use them. Maybe you've sat through a planning meeting where everyone wrote objectives and key results. And maybe you've noticed that nothing really changed. You still shipped what you were going to ship. Metrics still moved like they always moved. The OKRs felt like a reporting layer, not a thinking layer.
That's because OKRs without the Impact Loop are just a dashboard. But OKRs plus the Impact Loop? That's a decision system.
Here's how it works:
You set a quarterly outcome. Not an output. An outcome. Outputs are what you make. "Ship role-based permissions." Outcomes are what happens in the world. "Increase trial-to-paid conversion from 28% to 35%."
Now your Impact Loop is no longer random. It's focused. Every sense, build, measure, amplify cycle is aimed at moving that needle. Your sensing agent is looking for signals that point to that outcome. "What's stopping people from converting?" Your build cycles are experiments on that question. Your measurement is ruthlessly focused on conversion metrics.
A practical example: Let's say your outcome is "Increase retention from 85% to 89% for enterprise customers by June 30." That's a 4-point improvement. On a base of 100 customers, that's 4 more customers staying every month.
Now you run the Impact Loop against that outcome:
Week 1 - SENSE: Your research agent analyzes all churn conversations for enterprise customers over the last 90 days. The top reasons: (1) they don't know how to do X with your product, (2) a key person left and nobody knew how to onboard the replacement, (3) they built a workflow that broke when you shipped a change.
Week 1-2 - BUILD: You prototype three experiments:
- An in-app tutorial that shows common workflows
- An "org admin" role that can manage users and permissions without help
- A change log and compatibility guide so users don't get surprised
Week 2-3 - MEASURE: You roll out all three to customers who churned in the past. You measure whether they come back. You measure whether active enterprise customers engage with each feature.
Week 4 - AMPLIFY: One of the three (let's say the org admin feature) is showing 2x engagement and higher retention for customers who use it. You expand that test to all enterprise customers. The other two didn't move the needle, so you kill them.
At the end of the quarter, you've run 16+ complete loops, each feeding into the outcome. Some loops moved the needle a quarter-point. Some moved nothing. But collectively, you hit the 4-point improvement. And you know exactly which features, which changes, which customer segments drove that improvement.
Compare that to: "We'll work on enterprise retention this quarter. Here are the things we might build that seem important." That's theater.
The continuous discovery connection
There's something important about how sensing feeds discovery, which feeds building, which feeds more sensing.
Discovery - really understanding what your customers need, not what they say they want - is continuous. It's not a phase. It's not something you do before building. It's parallel to building.
Here's the loop:
- You sense a problem (trial users are bouncing on the custom fields screen)
- You discover the root cause by talking to bounced users, reading support tickets, analyzing behavior (they're intimidated by power)
- You build a response based on what you learned (a guided wizard)
- You measure the response (wizard works)
- You sense new patterns in the data (users who go through the wizard are also more likely to adopt notifications)
- You discover a new insight (simplification increases feature adoption across the board)
- You build another experiment (add a wizard to notifications)
- Loop...
Most companies separate discovery from building. They have researchers and designers and PMs in discovery, then engineers in building. That's a waterfall with research on top. The Impact Loop has discovery embedded in every beat. You sense, you immediately start learning why. You build while learning. You measure to learn more.
This is only possible if building is fast. If prototyping takes three weeks, you can't afford to have discovery in parallel. But if prototyping takes three hours, discovery becomes part of the normal rhythm.
Why this works when sprints don't
Let me be direct about the failure mode of Scrum in a fast-moving market.
Sprints force batching. You commit to work on Monday. The plan doesn't change until Friday. But reality is changing. Customer behavior is changing. Competitors are moving. Your learning from last week's launch is informing what you should do this week, not what you committed to two weeks ago.
The Impact Loop is continuous. You sense a problem Tuesday morning. You build Tuesday. You measure Wednesday through Friday. You amplify Monday. By the following Tuesday, you're sensing again, and maybe the problem has changed. Or maybe the measurement showed something unexpected. You respond.
This isn't about working faster. It's about responding in the right cadence. Some problems need a same-day loop: a competitor launches a critical feature, and you need to know what your response is by EOD. Some bets need a multi-week loop: you're testing a major positioning change, and you want 4 weeks of data before you decide.
Sprints force everything into the same cadence. That's the death of speed.
Four concrete actions to start this week
You don't need your organization to adopt this. You don't need a change management project or training or buy-in from engineering. You can start running the Impact Loop tomorrow morning.
Action 1: Set up your sensing layer (Wednesday, 30 minutes)
Stop waiting for weekly metrics reviews. Build a simple daily brief. Use Claude or an AI agent to:
- Pull yesterday's conversion metrics
- Scan your last 20 support tickets and summarize patterns
- Check if any major competitor shipped something
- Identify the top metric mover (what changed the most?)
This should be a 2-minute read every morning. You're not building a complex system. You're just automating the thing you're already doing (checking email, dashboards, Slack) and putting it in one place.
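If you want Claude to do the summarizing, the brief itself is a few lines of code. A minimal sketch using the Anthropic Python SDK; how you pull metrics_summary and recent_tickets is up to your stack, and the model ID should be whatever is current when you build it.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def daily_brief(metrics_summary: str, recent_tickets: list[str]) -> str:
    """Condense raw signals into the 2-minute morning brief."""
    prompt = (
        "You are my product sensing layer. From the data below, list the 3 signals "
        "most worth a PM's attention today, one line each, most urgent first.\n\n"
        f"Yesterday's metrics:\n{metrics_summary}\n\n"
        "Last 20 support tickets:\n" + "\n".join(recent_tickets)
    )
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # swap in whatever model is current
        max_tokens=500,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text
```

Wire it to a cron job that emails you the output at 8am and you've replaced the morning dashboard crawl.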
Action 2: Prototype one idea this week (Thursday, 3-4 hours)
Take the problem you've been thinking about and build a prototype. Not a design doc. Something interactive. If you can code (or have Claude Code), build it yourself. If not, work with your designer or engineer to mock something clickable.
Don't overthink it. The goal is to have something you can show a user by Friday.
Action 3: Test it with real customers (Friday, 1 hour)
Send your prototype to three customers. Real customers. The ones you know are feeling the pain. Ask them to use it for 5 minutes. Watch them. Ask one question: "Does this solve the problem you told me about?"
Write down what happens. Did they understand it? Did they get stuck? Did they ask for something different?
Action 4: Measure something (Monday, 2 hours)
Deploy the prototype to a cohort. Even if it's small. 100 users. 500 users. Whatever your traffic looks like. Run it alongside the old version. Measure one outcome: "Does this change user behavior in the way I hypothesized?"
You're not measuring perfection. You're measuring signal. Did the change move the needle? Even a little? That's all you need.
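One caveat worth a minute of math: small cohorts can only detect big effects. A rough power check, using the standard normal-approximation sample-size formula (two-sided α = 0.05, 80% power); the baseline and lift numbers are illustrative:

```python
from math import ceil

def users_per_arm(p_base: float, lift: float,
                  alpha_z: float = 1.96, power_z: float = 0.84) -> int:
    """Approximate per-arm sample size to detect an absolute lift in a conversion rate."""
    p_variant = p_base + lift
    variance = p_base * (1 - p_base) + p_variant * (1 - p_variant)
    return ceil((alpha_z + power_z) ** 2 * variance / lift ** 2)

# Detecting a 10-point lift on a 30% baseline:
print(users_per_arm(0.30, 0.10))  # ≈ 353 users per arm
```

If that number dwarfs your weekly traffic, test hypotheses with bigger expected effects, or let the experiment run longer before calling it.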
Then loop. By next Wednesday, you have another sensing moment. What did you learn? What does it suggest you should build next?
Start this week. Start small. One prototype. One test. One loop. Within a month, this will be how you work. And you'll be shipping features faster than anyone else in your organization.
Because you're not waiting. You're responding.
Frequently asked
What are the four beats of the Impact Loop?
Sense (know what's happening), Build (make a working response), Measure (quantify what changed), Amplify (scale wins and kill losers). The loop repeats continuously. Sense in, build out, measure and see, amplify what works, then loop back. No sprint planning ceremony. Just rhythm.
How does the SENSE beat turn raw data into decisions?
Agents monitor customer signals, behavior anomalies, competitive moves, and market trends continuously. Every morning the PM gets a two-minute synthesized brief. The PM's job is judgment: which signal matters most right now. AI finds the pattern. The PM decides what the pattern means.
Why does building a prototype beat writing a spec in the BUILD beat?
You get to testing the core assumption in hours instead of weeks. No spec delays. No design meetings. No estimation overhead. The prototype is the spec. If it works, engineering has a working reference and a mountain of customer validation data. If it fails, you learned it in hours.
What makes a metric impact-driven versus vanity-driven?
Vanity metrics feel good but they don't predict behavior. Did visitors increase? Doesn't matter if they didn't convert. Impact metrics answer: did customer behavior change the way we hoped? Setup completion rate, trial-to-paid conversion, time-to-first-value. Real outcomes.
What does AMPLIFY actually mean when most features are killed?
Amplify means scaling winners but also means killing losers fast without guilt. You spent four hours testing. It failed. You learned something. Delete the feature. Get it out of the codebase. That's a clean win. The team that kills faster ships faster because it's not maintaining dead weight.
Related reading
Deeper essays and other handbook chapters on the same thread.
Prototype Before You Spec
Why the fastest way to get alignment, test ideas, and advance your career is to build something people can touch - and exactly how to do it in 2 hours.
The Eval Is The Spec
Kill the PRD. Ship against a test set. The eval is the contract, the changelog, and the definition of done.
Ship With Observability or Don't Ship
No feature leaves staging without the traces, metrics, and evals that will tell you whether it's working. Before your first customer hits it.
The Deprecation Playbook
Feature death is the most under-written topic in PM. Kill on signal, not politics, and your team ships faster than the team that hopes politely.
Incident Response Is a PM Ritual
An incident is a customer telling you the truth about your product, loudly, all at once. Stop letting engineering listen alone.
Build a Prototype Agent Stack: PRD to Working Demo in a Day
Build a prototype agent stack: eight open-source Claude repos take a PM from idea to working prototype in a day, with TDD, design, and security review.