The short version
Three things broke in the first 90 days of a $30M ARR SaaS-to-agents transition. The CFO conversation had more sub-agreements than I'd inventoried (comp set, leading indicators, and board narrative tone all needed explicit agreement, not just the trough math). Lead customer contract legal review took 8 weeks instead of 4 because the unit definition triggered legal, security, and procurement questions in sequence. Maintenance team morale dropped more sharply than expected because "stability" isn't a story people want to tell; renaming the team to "Migration Engineering" and tying their bonus to migration tooling adoption fixed it. The strategic frame, operating cadence, board pre-sell, and team belief all held. The first 90 days were harder than modeled; the second 90 are on track to be easier.
This is a field report, not an essay. The format is: what we believed, what we did, what broke, what we changed, what's still wrong. The company is anonymized but specific. The numbers are representative of the actual ranges I see in transitions of this size.
If you're considering or running a similar transition, this is what the first 90 days actually look like, including the parts that don't show up in strategy decks.
The setup
$30M ARR SaaS. Mid-market and enterprise mix. Per-seat pricing, 110% NRR, 80% gross margin. 30 sales reps, 60 engineers, 8 PMs. Founded 2015, profitable since 2021. Category: workflow automation.
The agent-native version of the product was in private beta with five lead customers when this transition started. Outcome volume was small but the unit economics looked promising.
What we believed at day zero
- The cannibalization decision was the right call. Per-seat pricing was structurally limited, the agent product was meaningfully better at the customer's job, and waiting another year would let a competitor define the category.
- The gross-margin trough would be around 12 points (80% to 68%), bottoming at month 12 and recovering to 72% by month 24. This was pre-sold to the board.
- The legacy team would understand the maintenance role as a real role, not a demotion.
- The CFO conversation was already done; we'd had three sessions in the prior quarter.
- The lead customer reference would close in Q1.
- Mid-market migration would begin in Q3.
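For concreteness, the day-zero trough belief can be sketched as a piecewise-linear margin path. Only the three anchor points (80% at month 0, 68% at month 12, 72% at month 24) come from the plan; the straight-line shape between them is my assumption.

```python
# Illustrative sketch of the day-zero gross-margin trough model.
# Anchor points are from the plan; the linear interpolation between
# them is an assumption, not the company's actual forecast curve.

def gross_margin(month: float) -> float:
    """Modeled blended gross margin (%) at a given month, 0-24."""
    start, trough, recovered = 80.0, 68.0, 72.0
    if month <= 12:
        # Decline phase: 80% -> 68% over the first 12 months.
        return start + (trough - start) * month / 12
    # Recovery phase: 68% -> 72% over months 12-24.
    return trough + (recovered - trough) * (month - 12) / 12

for m in (0, 6, 12, 18, 24):
    print(f"month {m:2d}: {gross_margin(m):.1f}%")
```

The real curve is lumpier than any straight line, but a model this simple is enough to pre-sell the board on the shape: how deep, when it bottoms, and when it recovers.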
Three of these turned out to be partially wrong.
What we did
Days 1-30
- Published the sunset date internally (21 months out). Published the reorg: a Maintenance team (8 engineers), a Successor team (47 engineers), and a Bridge engineering team (5 engineers).
- Started the comp plan rewrite. Got the first draft from the CRO at day 25.
- Started Wave 1 strategic account calls. Got through 6 of 20 by day 30.
- Lead customer pilot contract was in legal review.
Days 31-60
- Comp plan back-and-forth with the CRO took longer than planned. CRO pushed back on 50% legacy comp; we ended up at 60% as a compromise.
- Wave 1 calls continued. Got through 14 of 20 by day 60.
- Lead customer contract still in legal review (week 8).
- Maintenance team morale dropped visibly. Three engineers formally requested transfers to the Successor team. We approved one (a genuinely better fit) and denied two (we needed them on Maintenance).
Days 61-90
- Comp plan published. The 60% legacy compromise became the operational reality.
- Wave 1 calls completed. 18 of 20 strategic accounts committed to migration on bespoke timelines. Two declined and started competitor evaluations.
- Lead customer contract finally signed at day 75. Pilot launched at day 80.
- Q1 board update with full trough math. Board accepted the trajectory.
- Internal Q&A session with the maintenance team. Tense but useful.
What broke
Break 1: The CFO conversation wasn't actually done
We thought we'd closed the CFO conversation in the prior quarter. We had agreement on the trough math. What we hadn't agreed on was the new comp set for board reporting.
When the Q1 board update came around, the CFO wanted to use a hybrid comp set (50% pure-SaaS, 50% transition-peer). I wanted to use a transition-peer-only comp set. We hadn't surfaced this disagreement until the board deck was being drafted.
We negotiated. Landed on 70/30 transition-peer / pure-SaaS for the next two quarters, transitioning to transition-peer only by Q3. Acceptable. But it cost a week of board prep time and made the CFO and CPO look out of sync to the prep team.
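Mechanically, the compromise means the board metric becomes a weighted blend of the two benchmark sets. A minimal sketch, with hypothetical numbers: only the 70/30 weighting (shifting to transition-peer only by Q3) comes from the negotiation above; the example multiples are invented for illustration.

```python
# Hypothetical comp-set blend for board reporting.
# Only the 70/30 weighting is from the actual negotiation; the example
# EV/ARR multiples (4.0x transition-peer, 7.0x pure-SaaS) are invented.

def blended_benchmark(transition_peer: float, pure_saas: float,
                      w_transition: float = 0.70) -> float:
    """Weighted blend of the two comp-set benchmark values."""
    return w_transition * transition_peer + (1.0 - w_transition) * pure_saas

print(blended_benchmark(4.0, 7.0))        # Q1/Q2: the 70/30 blend
print(blended_benchmark(4.0, 7.0, 1.0))   # Q3 onward: transition-peer only
```

The design point of the compromise is visible here: the pure-SaaS set flatters the near-term numbers, so phasing its weight to zero commits the board narrative to the transition-peer frame on a known schedule.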
Lesson: the CFO conversation has more sub-agreements than I had inventoried. Specifically: the trough math, the comp set, the leading indicators, the board narrative tone, and the board's expected level of detail. All five need explicit agreement, not just the trough math.
Break 2: Legal review of the lead customer contract took 8 weeks
I had budgeted 4 weeks. We got 8.
The contract had to define the unit ("a resolved support ticket where the customer doesn't escalate within 48 hours"). The customer's legal team had questions about "escalation." Their security team had questions about how we measure escalation. Their procurement had questions about the dispute mechanism.
Each question was reasonable. The accumulated reasonableness was 4 extra weeks.
Lesson: lead customer contract reviews always take longer than the simplest estimate. Budget 8 weeks. Pre-share the contract template with their legal team in week one, not week six. Have a senior product engineer available for the security team's questions.
Break 3: The maintenance team morale drop was sharper than expected
I had imagined the maintenance role as honorable: keep the lights on, build great migration tooling, graduate to the successor or a new role at sunset. I had communicated it that way.
What the maintenance team heard: "the company has decided we're not the future." Six engineers asked for transfers within the first 30 days. Two senior engineers started interviewing externally.
The fix that worked: I asked the maintenance team's senior engineers to own the migration tooling itself. They became the people who built the bridge customers crossed to get to the successor. Their work was visible to every customer in Wave 1 and Wave 2. They had a heroic story to tell about their last 18 months on the legacy product.
By day 90, the team had stabilized. One engineer left (genuine fit issue, not transition-related). The rest were energized about the migration tooling work.
Lesson: maintenance teams need a story they can tell about their work. "Stability" is not enough. "Building the bridge for our customers" is.
What we changed
Three structural changes by day 90.
- The CFO conversation cadence. Weekly 30-minute working sessions, not quarterly check-ins. This caught sub-agreements before they became board-prep emergencies.
- The Wave 1 call rhythm. I had been packing calls in as fast as calendars allowed, which meant I was tired and rushed. Switched to two per week with deeper preparation per call. The cohort closed faster despite the slower pace because each call landed cleaner.
- The maintenance team mandate. Renamed from "Maintenance" to "Migration Engineering" internally. Same scope, different framing. Their bonus structure tied to migration tool adoption metrics, not legacy uptime.
What's still wrong at day 90
Three problems I haven't solved.
Problem 1: The dispute mechanism is unproven at volume
We've handled fewer than 30 disputes total, and the CS team's process worked at that scale. When Wave 2 hits in Q3 and 200-500 customers are running through outcome billing, we'll see several times that many disputes per week.
Our plan: hire two more dispute analysts in Q2, train them on the existing 30 cases, and scale gradually. But which new dispute types emerge at volume is a real unknown. We don't know what we don't know.
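A back-of-envelope projection makes the scaling problem concrete, assuming the pilot's per-customer dispute rate holds at Wave 2 volume. The 200-500 customer range is from the plan; the pilot denominators (customer count and elapsed weeks) are assumptions I've filled in for illustration.

```python
# Back-of-envelope Wave 2 dispute volume projection.
# PILOT_DISPUTES and the 200-500 range are from the plan; the pilot
# customer count and week count are illustrative assumptions.

PILOT_DISPUTES = 30    # total disputes handled to date (upper bound)
PILOT_CUSTOMERS = 20   # assumed Wave 1 accounts generating them
PILOT_WEEKS = 12       # assumed elapsed weeks of outcome billing

# Disputes per customer per week, if the pilot rate holds.
rate = PILOT_DISPUTES / PILOT_CUSTOMERS / PILOT_WEEKS

for wave2_customers in (200, 350, 500):
    weekly = rate * wave2_customers
    print(f"{wave2_customers} customers -> ~{weekly:.0f} disputes/week")
```

Even at the conservative end of that range, weekly volume roughly equals everything handled to date, which is why the Q2 analyst hires can't slip.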
Problem 2: Two strategic accounts declined and started competitor evaluations
Out of 20 Wave 1 calls, two declined. Both are in active competitor evaluations. If we lose both, that's $1.8M in ARR.
We're contesting both with custom commercial terms. The CRO is leading. But both customers have real reasons (one prefers the per-seat predictability for their own internal budgeting; one has compliance reasons that complicate per-outcome billing). Some Wave 1 churn is inevitable; we hadn't quite let ourselves accept that going in.
Problem 3: The agent product's quality bar isn't where we want it for general migration
Wave 1 strategic accounts are getting white-glove service. The agent's mistakes are caught by the account team and fixed quickly. Wave 2 mid-market won't get white-glove service. The agent's mistakes will be experienced more directly.
We're investing in eval coverage and prompt quality before Wave 2 starts. But "quality" for an agent product is harder to measure than "quality" for SaaS, and we're catching up.
What I would do differently if I could rewind 90 days
- Surface every CFO sub-agreement explicitly in week one; don't leave them implicit until board prep.
- Pre-share the contract template with the lead customer's legal team in week one.
- Frame Maintenance as Migration Engineering from day one, not after the morale drop.
- Budget 8 weeks for lead customer contract review, not 4.
- Accept Wave 1 attrition (10-15% is realistic) and size the Wave 1 cohort 25% larger than the target, knowing some accounts will decline.
Everything else was approximately right. The strategic frame held. The operating cadence held. The board pre-sell held. The team's belief held. The first 90 days were harder than I'd modeled, and on track for the second 90 to be easier.
What to take from this
If you're inside a similar transition, the patterns above will mostly happen to you too. The question is whether you've planned for them or whether they catch you by surprise. Each pattern is solvable; none is fatal; together they cost you a quarter of execution speed if you don't anticipate them.
The companies that ship this transition well aren't the ones with perfect plans. They're the ones who name the breaks fast and adjust faster.
The companion strategy is The Cannibalization Playbook. The operating model is The Dual Transformation Operating Model. The pricing playbook is The Pricing Migration Sequence.
Frequently asked
What is this case study based on?
An anonymized composite of three SaaS-to-agents transitions I have worked through or advised on, run together to surface the patterns. Numbers are illustrative but representative of the actual ranges. Specific company details have been changed to protect commercial sensitivities.
What went wrong in the first 90 days?
Three things. The CFO conversation turned out to have an unresolved disagreement over the board comp set, which only surfaced during board prep. The lead customer pilot's contract review took 8 weeks instead of 4. The legacy team's morale dropped faster than we anticipated when we announced the maintenance reorg. Each had a fix, but each cost weeks.
What worked better than expected?
The lead customer reference work. Once the customer signed and we operationalized the per-outcome billing, they became advocates internally and externally faster than we projected. The first three new-business deals we closed on the new pricing came inbound from prospects who'd seen the lead customer's case study.
What's still wrong at day 90?
The dispute mechanism is unproven at volume. We've handled fewer than 30 disputes total. When mid-market migration begins in Q3, we'll be running 200-500 customers through a process we tested on fewer than 30 cases. The CS team knows this and is uneasy. We're scaling the dispute team in advance, but the unknowns remain.