
I've sat in enough board rooms this year to see the same conversation happen in three different companies with almost identical words.
CPO presents the roadmap. Velocity is up. Shipping cadence is up. The slide with the big numbers looks great. A board member asks one question:
"What's the eval score on the agent that's driving most of this?"
Silence. Then some version of "we're working on that."
The silence is the whole story. The board has moved. The product org has not.
If you're still presenting 2023 metrics to a 2026 board, you're not under-performing. You're being measured against a scoreboard you haven't read. This post is the new scoreboard. I use it with every board and executive team I work with.
The short version
The 2026 CPO board deck has seven slides: the outcome ledger (bets and what moved), per-outcome unit economics, the eval scorecard, the agent inventory, cycle time by stage, headcount-to-output ratio, and a "willing to be wrong" paragraph. Velocity is no longer a defensible metric. Per-outcome cost and margin is. If you cannot answer "what's the eval score on the agent that drove most of this," you are presenting to a board that has moved past the deck you are showing them. Rebuild the deck before the next quarter, not after the bad call.
For the system underneath the deck, see the seven-stage PM Operating System. For the agent inventory slide, see Your AI Agent Fleet. For the unit economics line, see Per-Outcome Pricing (coming May 18).
Why the old board deck stopped working
The 2023 CPO deck had three movements: velocity, customer signals, and roadmap. It worked because the board believed, roughly, that shipping more features to more customers was the thing. AI did not yet compress the loop from idea to deployed experience.
Three things broke that deck:
- Velocity is now table stakes. Cursor, Claude Code, and the agent stack raised the baseline. Shipping a feature a week is not a signal of a well-run org. It's a signal that you're not behind.
- Feature count decoupled from revenue. AI-era companies are shipping twice as many features and growing at the same rate. Boards noticed. They started asking what is actually moving the number.
- Cost of goods sold stopped being flat. Every AI feature is a variable cost. Tokens, GPUs, and tool calls turn a P&L that used to be clean and predictable into something that looks more like an infrastructure business.
The deck that solved for 2023 was built on the assumption that product and engineering were the expensive part and customer acquisition was the variable part. In an agent-native product, that inverts. The product itself is now a variable cost center, and the quality of what it produces determines whether that cost is investment or waste.
Your board knows this. They're reading the same Gartner and Deloitte reports you are. They want a CPO who can answer their new questions before they ask them.
The seven slides that belong in the new deck
Rebuild your monthly and quarterly board materials around these. Not all seven every meeting, but all seven over the course of a quarter.
Slide 1: Outcome ledger, not velocity count
Replace ship count with an outcome ledger. Each entry is a bet you made, what outcome it was designed to move, and what it moved. Three columns: bet, target outcome, actual outcome. If you shipped 40 things and moved two numbers, the ledger has two entries. The other 38 go in a footnote.
This one slide shifts the entire conversation from "are we doing a lot" to "are we doing the right things." It also makes the product org accountable for judgment, which is the only thing left after AI automated the execution layer.
Slide 2: Per-outcome unit economics
For every AI-driven workflow in the product, a single line with five numbers: outcomes delivered this month, average tokens per outcome, average cost per outcome, realized price per outcome, and gross margin. If any line shows a negative or declining margin, it gets a red dot. Red dots get a paragraph, not a slide.
This is the line the CFO has been waiting for you to own. Most product orgs have never built it. The ones that have are the ones getting more headcount this year.
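If the line feels abstract, here is roughly the arithmetic behind it. A minimal Python sketch with invented numbers: the workflow name, token volume, price, and blended token rate are all illustrative placeholders, not benchmarks.

```python
from dataclasses import dataclass

@dataclass
class OutcomeLine:
    workflow: str
    outcomes: int             # outcomes delivered this month
    total_tokens: int         # tokens consumed across all outcomes
    price_per_outcome: float  # realized price per outcome, dollars
    cost_per_mtok: float      # assumed blended model cost per million tokens

    @property
    def tokens_per_outcome(self) -> float:
        return self.total_tokens / self.outcomes

    @property
    def cost_per_outcome(self) -> float:
        return self.tokens_per_outcome * self.cost_per_mtok / 1_000_000

    @property
    def gross_margin(self) -> float:
        return (self.price_per_outcome - self.cost_per_outcome) / self.price_per_outcome

    @property
    def red_dot(self) -> bool:
        # flat or negative margin earns the red dot (and a paragraph)
        return self.gross_margin <= 0

# Hypothetical workflow: numbers chosen only to show the shape of the line.
line = OutcomeLine("contract-review", outcomes=1_200, total_tokens=480_000_000,
                   price_per_outcome=2.50, cost_per_mtok=3.00)
# With these assumed inputs: cost is $1.20 per outcome, margin 52%, no red dot.
```

The whole slide is five of these fields per workflow. If your org can't produce the inputs, that gap is itself the finding.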
Slide 3: Eval scorecard
One page. Every AI feature in production, scored on your published rubric, with a 30-day trend line. This is where you show that you understand quality drift as a fact of operating in production, not as a risk you might someday get to.
If you don't have evals in production yet, this slide is your Q2 deliverable. Do not present without it by August.
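The 30-day trend line is less exotic than it sounds. One hypothetical way to flag drift, sketched in Python — the window size and tolerance here are illustrative defaults I made up, not a standard:

```python
from statistics import mean

def drift_flag(daily_scores: list[float], window: int = 7, tolerance: float = 0.03) -> bool:
    """Flag quality drift for one feature: compare the latest window's
    mean eval score against the earliest window's. Scores are 0..1,
    one per day, oldest first."""
    if len(daily_scores) < 2 * window:
        return False  # not enough history to call it a trend
    baseline = mean(daily_scores[:window])
    latest = mean(daily_scores[-window:])
    return (baseline - latest) > tolerance

# Hypothetical 30-day series: stable for three weeks, then a drop.
scores = [0.92] * 23 + [0.85] * 7
```

However you compute it, the point of the slide is the same: drift is detected by a routine you run, not a surprise you explain.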
Slide 4: Agent inventory
The agents you run, what they touch, what they're authorized to do, what they cost to run per month, and what human review layer sits on top of each. This slide is half product update and half governance briefing. Your audit committee member will thank you. Your legal counsel will ask for a copy.
Slide 5: Cycle time by stage
Not time-to-ship. Time-from-signal-to-outcome. Break it into the stages you actually run: customer signal received, clustered and validated, prototype built, deployed to a test cohort, adopted at threshold. Each stage gets a median and a P90.
The point of this slide is to show the board where the bottleneck is. In almost every org I've worked with, the bottleneck is now coordination, not build. Admit it on this slide and propose the fix.
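The median and P90 per stage are a few lines of standard-library Python. A minimal sketch, assuming you log one signal-to-outcome duration (in days) per item per stage:

```python
from statistics import median, quantiles

def stage_stats(durations_days: list[float]) -> tuple[float, float]:
    """Median and P90 for one stage's durations, in days."""
    p90 = quantiles(durations_days, n=10)[-1]  # last cut point is the 90th percentile
    return round(median(durations_days), 1), round(p90, 1)

# Hypothetical stage: ten items took 1 through 10 days.
med, p90 = stage_stats(list(range(1, 11)))
```

Run it once per stage. The stage whose P90 dwarfs its median is where work queues, and that is where the bottleneck conversation belongs.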
Slide 6: Headcount-to-output ratio
The number every board is now asking its CEO for. Revenue per product team member, features shipped per product team member, outcomes moved per product team member. Show the trend over the last four quarters. If the trend is flat while comparable companies are showing 2x, you have a harder conversation incoming. Bring it up first.
Slide 7: What we're willing to be wrong about
The most important slide. A single paragraph, once a quarter, naming the two or three bets you're making that could be wrong. This is what earns you the long leash. Boards fund CPOs who can articulate downside, not CPOs who only present upside.
What you stop including
To make room for the above, here's what I've killed from my own board materials:
- The roadmap slide with the swim lanes. The roadmap is no longer a plan, it's a prediction market. Your actual prioritization happens in a rolling bet ledger. Show that instead.
- The NPS chart. It is a lagging indicator of something you're already tracking better with retention and usage. Keep it in a backup deck if asked.
- Launch calendars. Nobody cares when something shipped. They care what it moved.
- Organizational charts, unless you're proposing a change to one.
The rule: every slide has to survive the question "and what decision does this inform." If nothing changes because of the slide, the slide is theater. Cut it.
The conversation that follows
When you present a deck like this, expect three reactions from your board the first time.
Some will push back on the granularity of per-outcome economics. They'll say it's too operational for a board meeting. Hold the line. The alternative is that they ask you in six months and you haven't built the measurement yet. Better to normalize the conversation early.
Some will ask why this wasn't the deck a year ago. The honest answer is that the tooling to build this view at scale is new. Evals-as-code, token-level cost reporting, and agent telemetry are all less than two years old as production-grade capabilities. Own the timeline.
Some will ask you to simplify. Resist. A quarterly board meeting is the one place where simplification is a euphemism for hiding complexity. If the board can't follow the deck, the deck isn't complicated. Your explanation is.
What changes in the org when you present this way
Rebuilding the board deck rebuilds the product org, because what you measure is what you optimize. Within two quarters of switching to this deck, three things shift:
- Your product leaders start running their own evals. Because they know the eval scorecard is coming to the board, they own the numbers upstream. Quality becomes a product-owned metric, not an engineering-owned one.
- Your PMs learn unit economics. Nobody wants to be the red dot on the margin line. You'll find your PMs talking to finance on their own, which is a good problem to have.
- Your engineering partners start treating product work as revenue work. Because the outcome ledger credits product decisions with revenue movement, the perennial argument about whether product is a cost center or a revenue center gets settled by the numbers.
The deck is a lagging indicator of how you run. But for the first six months, it's also a forcing function. Use it.
The CPO mandate, stated plainly
Here's what I tell peers who ask what the role looks like now.
You are no longer the senior person who runs the roadmap. You are the operator who owns the quality, cost, and outcome of an agent-augmented product. The board is hiring you to answer three questions every quarter:
- What outcomes did we move, and what did it cost to move them?
- Are the systems that produced those outcomes getting better or drifting?
- What are the two or three bets that could change the shape of the business?
If your current deck doesn't answer those three questions, rebuild it before your next board meeting. The CPOs who have already made this shift are the ones being asked to sit on other boards. The ones who haven't are being quietly benchmarked against the ones who have.
Pick one slide from the list above and ship it at the next meeting. Then add one more the meeting after. By Q4 the full deck is running, and the conversation you're in has moved.
If you lead product at an AI-native company or a traditional company going native, and you want a second pair of eyes on your next board deck, I do a limited number of these reviews. Reach out on LinkedIn.
Further reading
- Lenny Rachitsky on product leadership and exec expectations
- Marty Cagan / SVPG on what product leaders are actually accountable for
- Ben Thompson (Stratechery) on AI strategy and boardroom framing
- Anthropic's engineering writing on building and evaluating agents
- Anthropic's docs on running evaluations as a first-class product metric
Frequently asked
What does the 2026 CPO board deck actually contain?
Seven slides over the course of a quarter, not all in every meeting: the outcome ledger (bets and what moved), per-outcome unit economics (cost and margin per AI workflow), the eval scorecard (production AI features scored against published rubrics), the agent inventory (what each agent does and costs), cycle time by stage (signal-to-outcome, not time-to-ship), headcount-to-output ratio, and a 'what we're willing to be wrong about' paragraph.
Why did the 2023 CPO deck stop working?
Three reasons. Velocity is now table stakes because Cursor, Claude Code, and the agent stack raised the baseline. Feature count decoupled from revenue, so boards stopped trusting velocity as a proxy. And cost of goods sold stopped being flat: every AI feature is a variable cost, so the P&L looks more like infrastructure than software.
What is per-outcome unit economics in a board deck?
For every AI-driven workflow, one line with five numbers: outcomes delivered this month, average tokens per outcome, average cost per outcome, realized price per outcome, and gross margin. Negative or declining margin gets a red dot. This is the line the CFO has been waiting for product to own.
How is an eval scorecard different from a quality dashboard?
An eval scorecard scores every AI feature in production against a published rubric, with a 30-day trend line. It treats quality drift as a fact of operating in production, not a risk you might someday get to. Without one, you cannot answer the board's first question about any AI feature.
What does headcount-to-output ratio look like in practice?
Three numbers shown over four quarters: revenue per product team member, features shipped per product team member, and outcomes moved per product team member. If your trend is flat while comparable AI-native companies are showing 2x, raise the conversation yourself before the CEO does.
What goes on the 'willing to be wrong' slide?
One paragraph, once a quarter, naming the two or three bets you're making that could be wrong. Boards fund CPOs who can articulate downside, not CPOs who only present upside. This is the slide that earns you the long leash.