Stream a simulated run, inspect the notifications it would send on Slack and email, and see exactly where it sits in the 7-stage PM OS flow. No password required.
The short version
The Pricing Migration Tracker watches every account on your migration plan daily, scores actual behavior against committed terms, and classifies drift into one of five buckets: underutilization, quality concern, trust concern, commercial concern, or multi-concern. Each drift type maps to a specific intervention (usage check-in, CS-led dispute review, CPO involvement, CFO escalation) with a 48-hour SLA. The point: the loud-customer model fails because quiet drifters get missed, and the agent makes silence visible. The cultural shift is that "no news" stops being interpreted as good news. Start by writing the migrations.yaml roster by hand, forty accounts max. That alone surfaces the drift problem.
The agent I wish I'd built before quarter three
You announce the pricing migration. The plan looks clean. Wave 1 strategic accounts have committed. Wave 2 mid-market is being onboarded. Wave 3 long-tail is on a self-service flow.
Three weeks in, two strategic accounts haven't actually migrated yet. Their account team hasn't escalated because the customer hasn't complained. The customer hasn't complained because they haven't noticed. By the time anyone notices, the customer has missed three usage commitments, and the renewal conversation in six weeks is going to be harder than it should be.
This is the migration drift problem. Every pricing migration has it. The question isn't whether some accounts will drift; the question is which ones, when, and what to do about them.
The pricing migration tracker agent watches every account on the migration plan every day, flags drift early, and recommends specific interventions before drift becomes churn risk.
What the tracker does
Four jobs, in order.
- Maintain the migration roster. Every account on the migration plan, with their wave, their committed terms, their pilot dates, their migration deadline.
- Watch each account's actual behavior against their commitments: outcome volume, contract usage, dispute count, escalation rate, payment behavior.
- Detect drift when actual diverges from committed by more than the threshold for a given account stage.
- Recommend an intervention scoped to the drift type, with the specific person on the account team to take it.
The first two are operational data. The third and fourth are where the agent earns its keep. Most CPOs don't have a unified view of migration drift; the data is scattered across CRM, billing, support, and product analytics.
The seven components
1. The migration roster. A YAML file (migrations.yaml) listing every account in transition. For each account: customer ID, wave (1/2/3), committed minimum, expected outcome volume, migration deadline, account owner, escalation contact.
```yaml
- customer_id: c_27184
  name: "Acme Corp"
  wave: 1
  committed_minimum_monthly: 12000
  expected_outcome_volume: 18000   # outcomes per month
  migration_deadline: "2026-08-15"
  account_owner: "ana.gomez@example.com"
  escalation_contact: "jp@example.com"   # CS lead
  pilot_start: "2026-05-15"
```
2. Behavior collector. Pulls daily metrics for each account: outcome volume from product analytics, dispute count from support, escalation rate from CS tooling, payment behavior from billing. 50 lines of Python plus your data warehouse credentials.
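The collector's shape can be sketched in a few lines. This is a minimal sketch, not the toolkit's implementation: the `fetch` callable is a hypothetical stand-in for your warehouse queries, and the field names mirror the roster above.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class DailyBehavior:
    customer_id: str
    outcome_volume: int   # outcomes recorded today (product analytics)
    disputes: int         # disputes filed today (support)
    escalations: int      # escalations opened today (CS tooling)
    days_past_due: int    # payment lateness (billing)

def collect_daily(roster: list[dict],
                  fetch: Callable[[str, str], int]) -> list[DailyBehavior]:
    """Pull today's four signals for every account on the roster.

    `fetch(customer_id, metric)` is an assumed interface; swap in real
    connectors per source system (warehouse, support API, billing).
    """
    return [
        DailyBehavior(
            customer_id=acct["customer_id"],
            outcome_volume=fetch(acct["customer_id"], "outcome_volume"),
            disputes=fetch(acct["customer_id"], "disputes"),
            escalations=fetch(acct["customer_id"], "escalations"),
            days_past_due=fetch(acct["customer_id"], "days_past_due"),
        )
        for acct in roster
    ]
```

Keeping the fetch interface pluggable means the collector doesn't care whether a signal comes from Snowflake, Zendesk, or a CSV export during week one.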
3. Commitment vs. actual diff. For each account, compute: actual outcome volume / committed minimum, actual disputes / dispute SLA, actual escalation rate / baseline escalation rate. Each diff produces a number between 0 and 2 (1.0 = exactly on plan).
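The diff is one clamped division, sketched here under the document's own convention (0 to 2, with 1.0 meaning exactly on plan):

```python
def ratio(actual: float, expected: float, cap: float = 2.0) -> float:
    """Actual vs. committed, clamped to [0, cap]; 1.0 = exactly on plan."""
    if expected <= 0:
        return cap  # no baseline to compare against: flag for review
    return max(0.0, min(cap, actual / expected))
```

The cap matters: an account at 4x commitment is interesting, but without the clamp one outlier dominates every downstream severity calculation.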
4. Drift detector. A simple rule set. If outcome ratio < 0.7 for two weeks, flag "underutilization." If dispute ratio > 1.5 for a week, flag "quality concern." If escalation rate > 1.5x baseline for a week, flag "trust concern." If payment is delayed more than 7 days past due, flag "commercial concern." Multiple concurrent flags raise severity.
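The rule set above translates almost directly into code. A sketch, assuming each argument is a list of the daily ratios from component 3 over the stated window:

```python
def detect_drift(outcome_ratios_14d: list[float],
                 dispute_ratios_7d: list[float],
                 escalation_ratios_7d: list[float],
                 days_past_due: int) -> tuple[list[str], str]:
    """Apply the four threshold rules; return (flags, severity)."""
    flags = []
    if outcome_ratios_14d and all(r < 0.7 for r in outcome_ratios_14d):
        flags.append("underutilization")      # sustained two-week shortfall
    if dispute_ratios_7d and all(r > 1.5 for r in dispute_ratios_7d):
        flags.append("quality_concern")       # disputes well above SLA for a week
    if escalation_ratios_7d and all(r > 1.5 for r in escalation_ratios_7d):
        flags.append("trust_concern")         # escalations above baseline for a week
    if days_past_due > 7:
        flags.append("commercial_concern")    # payment more than 7 days late
    severity = "critical" if len(flags) >= 2 else ("drifting" if flags else "on_plan")
    return flags, severity
```

Requiring the threshold to hold across the whole window (rather than on any single day) is what keeps the daily cadence from being noisy.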
5. Intervention library. A YAML file (interventions.yaml) mapping drift types to recommended actions. Underutilization gets a usage check-in call from the account owner. Quality concern gets a CS-led dispute review with the customer. Trust concern gets the CPO involved. Commercial concern goes straight to the CFO. Each intervention has a specific owner and a 48-hour SLA.
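A sketch of what interventions.yaml might look like; the field names here are assumptions, not the toolkit's schema:

```yaml
underutilization:
  action: usage_checkin_call
  owner: account_owner        # resolved from migrations.yaml
  sla_hours: 48
quality_concern:
  action: dispute_review
  owner: escalation_contact   # CS lead
  sla_hours: 48
trust_concern:
  action: exec_conversation
  owner: cpo
  sla_hours: 48
commercial_concern:
  action: payment_escalation
  owner: cfo
  sla_hours: 48
```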
6. Daily digest. Every morning at 7am, the agent posts to a Slack channel (or sends an email) with: total accounts on plan, accounts drifting, accounts critical, recommended interventions for the day. Each intervention has a one-click "claim" button.
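The digest itself is plain formatting. A minimal sketch, assuming each account dict carries `name`, `severity`, and an optional `intervention` (posting the result to a Slack incoming webhook is then a single JSON request; the one-click claim button needs Slack's interactive Block Kit on top of this):

```python
def build_digest(accounts: list[dict]) -> str:
    """Format the 7am digest from the day's classified accounts."""
    drifting = [a for a in accounts if a["severity"] == "drifting"]
    critical = [a for a in accounts if a["severity"] == "critical"]
    lines = [
        f"Migration digest: {len(accounts)} accounts on plan, "
        f"{len(drifting)} drifting, {len(critical)} critical."
    ]
    # Critical accounts first, then drifting, each with its recommended action.
    for a in critical + drifting:
        lines.append(f"- {a['name']}: {a.get('intervention', 'review')}")
    return "\n".join(lines)
```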
7. Weekly trend report. Every Monday, the agent compiles a one-page report: migration percentage trend, drift rate trend, intervention completion rate, and the three accounts most at risk for the coming week. Sent to CPO, CRO, CCO.
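The Monday report is an aggregation over the week's daily snapshots. A sketch under assumed snapshot keys (`migrated`, `total`, `drifting`, `interventions_done`, `interventions_due`):

```python
def weekly_trends(snapshots: list[dict]) -> dict:
    """Compress a week of daily snapshots into the report's trend numbers."""
    first, last = snapshots[0], snapshots[-1]
    pct = lambda s: s["migrated"] / s["total"]
    drift = lambda s: s["drifting"] / s["total"]
    done = sum(s["interventions_done"] for s in snapshots)
    due = sum(s["interventions_due"] for s in snapshots)
    return {
        "migration_pct_delta": round(pct(last) - pct(first), 3),
        "drift_rate_delta": round(drift(last) - drift(first), 3),
        "intervention_completion": round(done / due, 2) if due else 1.0,
    }
```

The three most-at-risk accounts for the coming week can then be picked by sorting the latest classification output by severity and ratio.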
The drift detection prompt (the nuance part)
Most components are mechanical. The drift classification has nuance that needs an LLM call.
```text
You are reviewing the last 14 days of behavior for customer ${customer_id}.
Their committed monthly outcome volume is ${committed_minimum}.
Their expected monthly volume is ${expected_outcome_volume}.
Actual outcome volume in the last 14 days (normalized to a monthly rate): ${actual_outcome_volume}.
Disputes filed: ${disputes_count}, baseline expected: ${baseline_disputes}.
Escalation rate: ${escalation_rate}, baseline: ${baseline_escalation_rate}.
Payment behavior: ${payment_status}.

Classify the drift type as one of:
- on_plan
- underutilization
- quality_concern
- trust_concern
- commercial_concern
- multi_concern (two or more of the above)

Then give a confidence score 0-100, and a one-sentence diagnostic the account team can read.
Return JSON: { "drift_type": ..., "confidence": ..., "diagnostic": ... }
```
This gets you specific drift classifications you can route. The LLM is doing the synthesis a junior account manager would do (looking at four signals, weighing them, making a call). The recommended intervention is then deterministic from the drift type.
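That deterministic routing step can be sketched as follows. The mapping is inlined here for self-containment (in practice you'd load interventions.yaml), and the multi_concern policy shown is an assumption, not the toolkit's:

```python
import json
from datetime import datetime, timedelta

# Mirrors the interventions.yaml mapping: drift type -> (action, owner field).
INTERVENTIONS = {
    "underutilization":   ("usage_checkin_call", "account_owner"),
    "quality_concern":    ("dispute_review", "escalation_contact"),
    "trust_concern":      ("exec_conversation", "cpo"),
    "commercial_concern": ("payment_escalation", "cfo"),
}

def route(llm_response: str, account: dict):
    """Turn the classifier's JSON into a concrete, owned task with a 48h SLA."""
    result = json.loads(llm_response)
    drift = result["drift_type"]
    if drift == "on_plan":
        return None
    if drift == "multi_concern":
        drift = "commercial_concern"  # assumed policy: route multi to the top
    action, owner_field = INTERVENTIONS[drift]
    return {
        "customer_id": account["customer_id"],
        "action": action,
        # Fall back to the role name if the roster has no email for it.
        "owner": account.get(owner_field, owner_field),
        "due": (datetime.now() + timedelta(hours=48)).isoformat(timespec="minutes"),
        "diagnostic": result["diagnostic"],
    }
```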
Why daily, not weekly
The instinct is to run this weekly because the migration plan moves on a quarterly cadence. The reality is that migration drift compounds quickly. A customer who's been quiet for a week and isn't using the product is forming an opinion about the new pricing. Three weeks of silence and they've made up their mind. Catching the drift in week one is the difference between a 30-minute check-in call and a six-week recovery effort.
Daily is the right cadence. You won't actually do anything on most days for most accounts; the agent's job is to surface the days when you should.
What this changes about the migration team
Without the tracker, the account team works on a "loud-customer" model. Whoever is yelling gets the attention. The quiet drifters get missed.
With the tracker, the account team works on a triage model. The agent surfaces who needs attention based on actual behavior, not loudness. The CS lead can prioritize their week with confidence that they're not missing anything.
The cultural shift is that "no news from a customer" stops being interpreted as good news. The agent makes silence visible. That alone is worth the build.
What to try this week
If you're in any pricing migration, start with the migration roster. Just the YAML file. Forty accounts maximum on the first cut. Fields: customer ID, wave, committed minimum, deadline, owner.
Just maintaining that file by hand for two weeks will surface the drift problem. The agent automates what the file already shows you. Build the file first.
Once the file is honest and complete, the rest of the agent is a weekend project for an engineer who has used Claude Code before.
The full agent blueprint, including the YAML schemas, the Python collector, and the Slack integration code, is in the toolkit at /artifacts/agent-pricing-migration-tracker. The companion essay on the broader pricing migration sequence is at /blog/pricing-migration-sequence.
Related
- The PM AI Agent Fleet, the 45-agent operating system this agent slots into.
- The Pricing Migration Sequence, the strategic context this agent supports.
- Renewal Risk Agent for Migration Cohorts, the qualitative companion.
Frequently asked
What does the pricing migration tracker agent do?
Watches every account on the migration plan every day, scores actual behavior against committed terms, classifies drift type (underutilization, quality concern, trust concern, commercial concern), and recommends a specific intervention with a 48-hour SLA.
How is drift different from migration progress?
Progress measures whether an account moved to outcome pricing on paper. Drift measures whether the account is actually using the new pricing the way the migration plan assumed. Most failed migrations have green progress dashboards and quiet drift underneath.
What signals does the tracker watch daily?
Outcome volume vs. committed minimum, dispute rate, escalation rate, payment behavior, and qualitative signals from support tickets and sales calls. Each metric produces a 0-2 ratio against expectation; multiple concurrent flags raise severity.
Why catch drift in week one rather than week three?
A customer who stays quiet for three weeks has formed an opinion about the new pricing. Drift caught in week one is a 30-minute check-in call; the same drift caught at renewal is a six-week recovery effort.