The short version
Five things broke at a $40M ARR SaaS on the day its per-seat tier sunset, 21 months into a SaaS-to-agents transition. Migration drift on accounts the dashboards said were fine (12 accounts migrated on paper but barely using the product). An unexpected churning cohort (14 of 35 mid-market accounts with edge use cases). A unit definition that was too loose at scale (200+ contested cases per week). Dispute volume at 12x projection (50 projected, 600 actual). Sales reps quietly negotiating unauthorized extensions for three accounts. By month 22, outcome revenue was 91% of total and blended gross margin had recovered to 71%. The strategy decks make it look linear. The work isn't.
This is the second field report. Where the first covered the first 90 days of a transition, this covers the day the legacy tier actually sunset and the months around it.
What you read in strategy decks is the trough math, the migration percentage, and the recovery curve. What you don't read is what breaks. This is what breaks.
The company is anonymized. The numbers are representative.
The setup
$40M ARR SaaS, 18 months into a SaaS-to-agents transition. The sunset date for the per-seat tier was set 21 months out from the original announcement, so this report covers months 18-22: the month before sunset, sunset month, the grace period, and the first month after.
By month 18, 78% of legacy revenue had migrated to the hybrid pricing. The remaining 22% was a mix: long-tail accounts that hadn't engaged, mid-market accounts in active competitor evaluations, and a cohort of strategic accounts negotiating extensions.
What we did in months 18-22
Month 18: Final migration push
- 30-day notice email to all unmigrated accounts.
- Personal outreach to the 40 largest unmigrated accounts (CPO + CRO).
- Expanded migration tool features: bulk account migration for parent companies, custom commitments for accounts requesting them.
- Internal dispute team scaled from 4 to 9 analysts.
Month 19: Sunset day
- Per-seat tier turned off in production at midnight on day 1 of month 19.
- Accounts not migrated had a 60-day grace period at the legacy pricing.
- Public announcement.
- Internal celebration (small, deliberately understated).
Months 20-21: Grace period
- Final outreach to accounts in grace period.
- Some migrated. Some negotiated extensions. Some churned.
- Dispute volume spiked.
- New unit definition issues surfaced as customer mix changed.
Month 22: First month post-sunset
- Per-seat data archived.
- Maintenance team reorged: 4 engineers transitioned to bridge or successor roles. 2 took severance.
- Outcome revenue at 91% of total.
- Blended gross margin recovering toward 71%.
What broke
Break 1: Migration drift on accounts the dashboards said were fine
The pricing migration tracker agent had been running since month 4, surfacing behavioral drift. By month 18, 12 accounts were classified as "on plan" because they were technically migrated, yet they were not actively using the new pricing.
The accounts had migrated on paper. They were paying the new committed minimums. Their actual usage was below the committed minimum, so they weren't generating overage revenue. Worst of all, their account teams hadn't escalated because the dashboards said "on plan."
When the legacy tier sunset, these 12 accounts didn't notice. Three weeks later, when they tried to use the legacy product surface, they found it gone. The escalation that followed was angry, and we lost two of them.
Lesson: "migrated" is not the same as "using the new pricing actively." Behavior beats commitment. The migration tracker should have flagged "low utilization despite migration" as a separate concern. Update the agent's drift classifications to include "active utilization" as a dimension distinct from "committed minimum."
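The two-dimensional classification the lesson calls for can be sketched as a small rule. This is a hypothetical reconstruction, not the actual tracker's schema; the field names and the 50% low-utilization threshold are assumptions.

```python
from dataclasses import dataclass

# Hypothetical account snapshot; field names are illustrative.
@dataclass
class AccountSnapshot:
    migrated: bool            # signed onto the new hybrid pricing
    committed_minimum: float  # monthly committed spend in dollars
    actual_usage: float       # metered usage in dollars

def classify(acct: AccountSnapshot) -> str:
    """Classify migration state with active utilization as its own dimension."""
    if not acct.migrated:
        return "unmigrated"
    # "On plan" used to mean only this check, which is how the 12
    # drifting accounts stayed invisible on the dashboards.
    if acct.actual_usage >= acct.committed_minimum:
        return "on_plan_active"
    # Migrated on paper but usage well below commitment: flag separately
    # so the account team escalates before sunset, not after.
    if acct.actual_usage < 0.5 * acct.committed_minimum:
        return "migrated_low_utilization"
    return "migrated_under_commit"

# Paying a $10k minimum but metering only $1.2k of usage.
print(classify(AccountSnapshot(True, 10_000, 1_200)))
```

The point of the sketch is the extra branch: "migrated" and "actively using" are answered by different checks, so a dashboard built on the first check alone will report drift-prone accounts as green.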
Break 2: A customer cohort we expected to migrate churned instead
We had a cohort of mid-market customers (about 35 accounts) on year-long contracts that ran through month 19. We had assumed they'd migrate at renewal.
When renewal came, 14 of the 35 churned. The rest migrated.
The 14 had a common pattern: their primary use case was on the edges of what the new agent product handled well. The agent's outputs for these workflows had a higher dispute rate. The customers had started looking at competitors mid-year and the renewal conversation tipped them.
We had not specifically tracked this cohort. They had been in the long tail of Wave 2.
Lesson: edge use cases need their own migration track and earlier outreach. The mid-market template assumes the use case is in the agent's strong zone. When it isn't, the customer needs either: a clear story about when the agent will be strong on their use case, or a graceful exit with referrals to alternatives.
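One way to operationalize the separate track is to route accounts by how well the agent handles their primary use case. A minimal sketch, assuming dispute rate per use case is a reasonable proxy for the agent's "strong zone"; the 5% threshold and field names are invented for illustration.

```python
# Assumed threshold: use cases above this dispute rate are outside
# the agent's strong zone and need earlier, different outreach.
STRONG_ZONE_MAX_DISPUTE_RATE = 0.05

accounts = [
    {"id": "mm-01", "use_case": "triage",  "dispute_rate": 0.02},
    {"id": "mm-02", "use_case": "refunds", "dispute_rate": 0.11},
]

def migration_track(acct: dict) -> str:
    """Route edge-use-case accounts off the standard long-tail track."""
    if acct["dispute_rate"] > STRONG_ZONE_MAX_DISPUTE_RATE:
        # Earlier outreach, explicit roadmap story, or a graceful exit.
        return "edge_case_track"
    return "standard_track"

print([migration_track(a) for a in accounts])
```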
Break 3: The unit definition was too loose at scale
We defined "resolved ticket" as "customer didn't escalate within 48 hours." This worked for 98% of cases.
The 2% that broke it: customers who reopened a ticket after 60-72 hours with a follow-up question that was technically a new issue but related to the original. Our system counted the original as resolved (no escalation in 48 hours) and billed accordingly. Some customers contested. Our position was technically correct. Their position was that they considered it the same issue.
At low volume, we handled these case-by-case. At sunset volume, the contested cases hit 200+ per week. CS was overwhelmed.
Lesson: test the unit definition with 10x the projected dispute volume before launch, not after. The edge cases that look rare at small scale are common at large scale. We rewrote the unit definition at month 20 to include "no follow-up in 7 days" as a stricter resolution criterion. Lost some revenue. Reduced disputes by 60%.
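The gap between the two unit definitions is easy to see in code. The 48-hour and 7-day windows come from the report; the function names and timestamp handling are a sketch of how such a criterion might be expressed, not the production billing logic.

```python
from datetime import datetime, timedelta

def resolved_v1(closed_at: datetime, customer_events: list[datetime]) -> bool:
    """Original definition: no escalation within 48 hours of close."""
    window = closed_at + timedelta(hours=48)
    return not any(closed_at < t <= window for t in customer_events)

def resolved_v2(closed_at: datetime, customer_events: list[datetime]) -> bool:
    """Month-20 rewrite: no follow-up of any kind within 7 days."""
    window = closed_at + timedelta(days=7)
    return not any(closed_at < t <= window for t in customer_events)

closed = datetime(2024, 3, 1, 9, 0)
reopen = datetime(2024, 3, 4, 9, 0)  # customer returns after 72 hours

# Under v1 the ticket counts as resolved and gets billed; under v2 it
# does not. The 60-72 hour reopeners fell exactly into this gap.
print(resolved_v1(closed, [reopen]))  # True
print(resolved_v2(closed, [reopen]))  # False
```

Stress-testing means running a definition like this against a large simulated event stream, not just the pilot's ticket history, before it becomes a billing trigger.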
Break 4: Dispute volume hit 12x our pre-launch projection
We had projected 50 disputes per month at full scale based on the lead customer pilot data. Actual dispute volume in month 20 was 600 per month.
The 12x miss was a combination of factors: customers becoming more aware of dispute rights as the product scaled, the unit definition issues from Break 3, and the migration drift accounts contesting the legitimacy of charges they hadn't been actively engaging with.
We scaled CS dispute analysts from 4 to 9 to 14 over three months. Quality of dispute resolution suffered during the ramp. NPS on the successor dipped briefly (down 8 points from peak).
Lesson: model dispute volume at 5x your projection, not 1.5x. Hire in advance of need. The compounding negative reviews from a slow dispute resolution process can outweigh the cost of overstaffing for a quarter.
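The staffing arithmetic behind the lesson is simple enough to write down. The 50-per-month projection and 600 actual are from the report; the per-analyst throughput is an assumed figure for illustration.

```python
import math

projected_monthly = 50      # pre-launch projection from pilot data
actual_monthly = 600        # what month 20 actually delivered
safety_multiplier = 5       # the lesson: model at 5x, not 1.5x
disputes_per_analyst = 60   # assumed cases one analyst can resolve well per month

planned_for = projected_monthly * safety_multiplier
analysts_to_prehire = math.ceil(planned_for / disputes_per_analyst)

# Even the 5x plan would have been under actuals here, but it buys
# the ramp time that the 1x plan did not.
print(planned_for, analysts_to_prehire)          # 250 5
print(actual_monthly / projected_monthly)        # 12.0
```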
Break 5: Sales reps negotiated quiet extensions
Two sales reps, on their own initiative, negotiated 6-month extensions on the legacy tier for three accounts that had refused to migrate. They didn't escalate, didn't get approval, and didn't update CRM with the extension terms.
We discovered this in month 20 when finance ran a reconciliation against the sunset date. Three accounts were paying us on legacy terms past the published sunset.
The accounts were salvageable. We honored the extensions and migrated them at the end of the extension. The reps got performance management conversations.
Lesson: during the final 60 days before sunset and the 60 days after, every commercial commitment from sales needs CPO + CRO sign-off. Audit weekly. The rep behavior wasn't malicious; they were trying to preserve their accounts. But it created a fairness problem with the customers who'd migrated on time.
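The weekly audit the lesson describes is essentially a reconciliation of billing terms against the published sunset date. A minimal sketch, assuming a simple record shape and an explicit approved-exception flag; the actual check was a finance reconciliation, and all names here are hypothetical.

```python
from datetime import date

SUNSET = date(2024, 7, 1)  # illustrative published sunset date

# Each record: (account_id, pricing_tier, has_approved_exception).
billing = [
    ("acct-101", "per_seat", False),  # the kind of quiet extension that slipped through
    ("acct-102", "hybrid",   False),
    ("acct-103", "per_seat", True),   # CPO + CRO signed-off extension
]

def weekly_audit(records, today: date) -> list[str]:
    """Flag accounts still on legacy terms past sunset without sign-off."""
    if today < SUNSET:
        return []
    return [acct for acct, tier, approved in records
            if tier == "per_seat" and not approved]

print(weekly_audit(billing, date(2024, 8, 1)))
```

Run weekly, a check like this surfaces unauthorized extensions in days rather than the month-plus it took the one-off reconciliation to find them.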
What still worked
The five things that did most of the heavy lifting and would do so again:
- The lead customer reference. The first lead customer's case study brought in seven new logos in months 18-22 alone. Pure inbound. The reference paid for itself many times over.
- The migration tooling. Self-service migration handled 78% of accounts without human intervention. That single number is what made the sunset operationally feasible.
- The board narrative. Quarterly updates with the trough curve and leading indicators. The board never panicked because they always knew where we were on the curve. The format was sacred.
- The legacy team's transition into migration engineering. They built the migration tooling. They were the heroes of months 12-22. The team that started with morale issues ended energized.
- The quarterly reality checks. We caught two operational problems three quarters out (CS dispute capacity, edge-use-case migration risk) because the quarterly check forced honest conversations.
What I would do differently
- Track active utilization as a separate dimension in the migration tracker. "Migrated" plus "actively using the new pricing" is the real success metric.
- Identify edge use cases early and run a separate migration track for them. Don't bury them in long-tail outreach.
- Test the unit definition at 10x projected dispute volume in a stress test before launch. The edge cases that look rare at small scale are common at scale.
- Hire dispute analysts to 5x the projection before launch, not after. Model dispute volume conservatively.
- Audit sales rep commercial behavior weekly during the final 60 days before sunset and 60 days after. Authorized exceptions are fine; unauthorized ones aren't.
- Run a parallel "graceful exit" track for accounts that won't migrate. Help them leave well. Some will come back; all will speak about the experience.
What to take from this
If you're approaching a sunset, the breaks above will mostly happen to you. The question is whether you've planned for them.
The transition that landed our company at 91% outcome revenue and 71% blended gross margin by month 22 was not the transition we planned. It was the transition that emerged from honest reaction to what broke.
That's normal. The strategy decks make it look linear. The work isn't.
The strategy frame: The Cannibalization Playbook. The operating model: The Dual Transformation Operating Model. The pricing sequence: The Pricing Migration Sequence. The first field report: 90 Days Inside a SaaS-to-Agents Transition.
Frequently asked
What is this case study based on?
An anonymized composite of pricing-tier sunsets I have led or advised on, run together to surface the patterns. Numbers are illustrative but representative of the actual ranges.
What is the per-seat tier sunset?
The decision to end-of-life a per-seat pricing tier on a published date, after which all customers must move to the new pricing model (typically hybrid: platform fee + per-outcome) or churn. This is the operational endpoint of the cannibalization decision.
What broke?
Five things. (1) Migration drift on accounts the dashboards said were fine. (2) A customer cohort we expected to migrate that churned instead. (3) The unit definition we'd written turned out to be too loose at scale. (4) Dispute volume hit 12x our pre-launch projection. (5) Some sales reps quietly negotiated extensions for accounts they didn't want to lose.
What still worked?
The lead customer reference. The migration tooling. The board narrative. The legacy team's transition into migration engineering. The quarterly reality checks. These five did most of the heavy lifting and would do so again.
What would I do differently?
Test the unit definition with 10x the projected dispute volume before launch, not after. Hire dispute analysts in advance of need, not in response to backlog. Audit sales rep behavior weekly during sunset for unauthorized extensions. Run a parallel "graceful exit" track for accounts that won't migrate.