Falk Gottlob · 9 min read

OKR Writing That Doesn't Suck

Most OKRs are disguised task lists. Here's how to write outcome-focused OKRs that actually drive product decisions - with templates, anti-patterns, and real examples.

outcomes · OKRs · strategy · frameworks · template

The short version

Most OKRs are disguised task lists. The litmus test for a real OKR: if your key result were achieved, how would the behavior of your customers, users, or business actually change? If the answer is "we'll have shipped the thing we planned to ship," it's a task. Real OKRs describe a future state ("reduce time-to-value on mobile from 8 minutes to under 4 minutes"), not a project plan ("redesign mobile navigation"). Pick 3-5 objectives per function, max 3-4 key results each, ambitious but achievable (70-80% hit rate). Cascade through outcomes, not work breakdown. OKRs are a decision-making tool: "Does this help us hit our Q2 OKRs? If yes, fund it. If no, don't." That's the entire point.

I've reviewed hundreds of OKRs in my career. Most are bullshit. Not because teams don't care - they do. But because when you sit down to write OKRs, the default gravitational pull is toward output. You start thinking about what you're going to build, not what's going to change as a result.

This is the core problem. OKRs should describe the future state of your business or your customer. They should not describe the project plan.

Why Most OKRs Fail

There's a predictable pattern:

The "Disguised Task List" OKR:

Objective: Rebuild the checkout flow
Key Result: Ship redesigned payment form
Key Result: Migrate users to new system
Key Result: Reduce checkout-related bugs by 30%

This is a project plan with metrics sprinkled in. You know what you're building (payment form). You know when you're done (migration complete). But you don't know why. And you definitely won't know if it was worth doing.

The "Impossible to Measure" OKR:

Objective: Improve developer productivity
Key Result: Build collaboration features
Key Result: Enhance API documentation

What does productivity mean? Better developer experience? Faster time-to-integration? Higher satisfaction scores? You haven't defined success, so you can't fail - you can only finish the to-do list.

The "Too Many OKRs" Trap:

Q1 OKRs: 12 objectives, 40+ key results

If everything is a priority, nothing is. You're just writing down everything you want to do and calling it strategy.

The "Set and Forget" Pattern:

OKRs defined in January
Zero review or adjustment until April

The world changes. Your assumptions were wrong. Markets shift. But the OKRs stay frozen because "we're committed to them." OKRs should guide decisions, not imprison them.

The Behavior Change Test

Here's the litmus test for a real OKR: If your key result were achieved, how would the behavior of your customers, users, or business actually change?

If the answer is "we'll have shipped the thing we planned to ship," it's not an OKR. It's a task.

If the answer is "we can't describe what changes," it's not an OKR. It's too vague.

A real OKR describes a material change in behavior:

Bad:

Objective: Improve mobile experience
Key Result: Redesign mobile navigation

What changes? The navigation looks different. That's it. Users might not notice. You have no idea if they use it better.

Good:

Objective: Reduce time-to-value on mobile
Key Result: Reduce onboarding time for mobile users from 8 minutes to under 4 minutes
Key Result: Increase session frequency on mobile from 1.2x weekly to 2x weekly

What changes? New users see value faster. They come back twice as often. That's a behavioral change. That's why you're doing this.

Good vs. Bad OKRs: Real Examples

Example 1: Activation (Early-stage SaaS)

Bad:

Objective: Build advanced segmentation
Key Result: Ship segmentation UI
Key Result: Integrate with analytics platform
Key Result: Zero critical bugs in launch

This is a feature roadmap. You've shipped something. So what?

Good:

Objective: Get users to first successful campaign in under 1 hour
Key Result: Reduce time from signup to first campaign launch from 45 minutes to under 20 minutes
Key Result: Increase percentage of users who create a campaign in first session from 28% to 45%
Key Result: Achieve 92% success rate on first campaign (vs. current 67%)

This is clear. You're changing the onboarding behavior. Users get to value faster. More of them stick around because they succeed early. You can measure it. You know if you hit it or missed.

Example 2: Retention (Mid-market SaaS)

Bad:

Objective: Improve product stickiness
Key Result: Rebuild dashboard
Key Result: Add 5 new integrations
Key Result: Create custom report builder

Again: feature list masquerading as strategy.

Good:

Objective: Lock in power users through daily habit loops
Key Result: Increase 30-day retention for activated users from 61% to 72%
Key Result: Increase average session frequency from 4.2 sessions/week to 5.5 sessions/week
Key Result: Decrease churn rate for users with over 10 sessions to under 3% per month (from 8%)

You're trying to build habit. You're measuring retention behavior, frequency, and churn depth. If you hit these, users are stickier. Whether that's because of the dashboard, integrations, or report builder doesn't matter - the business outcome is what counts.

Example 3: Revenue (Expansion-stage SaaS)

Bad:

Objective: Increase annual contract value
Key Result: Build usage-based pricing model
Key Result: Create premium tier
Key Result: Launch pricing page redesign

You've built the pricing machine. But did customers actually spend more?

Good:

Objective: Shift customer spend from annual contracts to usage-based expansion
Key Result: Increase net MRR growth for expansion revenue from $12k/month to $25k/month
Key Result: Increase percentage of enterprise customers spending under $5k annually who expand to $10k+ within 12 months from 18% to 35%
Key Result: Achieve $180k total expansion revenue (vs. $45k YTD)

Now it's clear. You're not just building pricing. You're actually changing customer spending behavior. Specifically, low-spend customers should expand, and expansion revenue should grow. If you hit the model but don't hit these metrics, the model was wrong.

The Quarterly OKR Cadence That Works

OKRs should be ambitious but achievable. If you hit 70-80% of your OKRs, you're calibrated right. If you hit 100%, you're sandbagging. If you hit under 50%, you're delusional or your OKRs are theater.
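The calibration rule can be checked mechanically at the end of a quarter. A minimal sketch (the 0.0-1.0 scoring scale and the band labels are illustrative assumptions, not part of any standard OKR tooling): grade each key result between 0.0 and 1.0, average across everything, and see which band the hit rate lands in.

```python
# Sketch: grade a quarter's key results and check calibration.
# The scale and thresholds below are illustrative assumptions.

def grade_okrs(okrs: dict[str, list[float]]) -> str:
    """okrs maps objective name -> list of KR scores in [0.0, 1.0]."""
    all_scores = [s for krs in okrs.values() for s in krs]
    hit_rate = sum(all_scores) / len(all_scores)
    if hit_rate >= 0.95:
        return f"{hit_rate:.0%} - sandbagging: set more ambitious targets"
    if hit_rate >= 0.70:
        return f"{hit_rate:.0%} - calibrated: ambitious but achievable"
    if hit_rate >= 0.50:
        return f"{hit_rate:.0%} - overreaching: revisit your assumptions"
    return f"{hit_rate:.0%} - theater: these OKRs aren't guiding real work"

# Hypothetical quarter with two objectives:
q2 = {
    "Reduce time-to-value on mobile": [0.9, 0.6],
    "Lock in power users": [0.8, 0.7, 0.75],
}
print(grade_okrs(q2))  # 75% lands in the calibrated band
```

The point of automating this is not the arithmetic; it's that a shared, pre-agreed scale removes the mid-quarter temptation to grade on vibes.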

The Cadence:

  1. Week 1 of quarter: Leadership context. Market, investor feedback, customer feedback, internal state. "Here's what changed since last quarter." 45 minutes maximum.

  2. Week 2: Draft OKRs by function. Activation folks draft OKRs. Retention folks draft OKRs. Growth folks draft OKRs. Each group thinks independently.

  3. Week 2-3: Dependency mapping. Which retention OKRs depend on activation shipping first? Which revenue OKRs require engineering infrastructure work? Surface conflicts now.

  4. Week 3: Drafting meeting. All functions present. 10 minutes per function. The goal is not consensus - it's clarity. This is where you make trade-offs: "We can't do both retention and acquisition. Which matters more?"

  5. Week 3: Final OKRs locked. 3-5 objectives per function, maximum 3-4 key results per objective. Written down. Shared. No surprises.

  6. Weeks 4-12: Weekly check-ins. 15 minutes. One person per function gives an update: "We're on track for KR1, at risk on KR2 because X, and we learned Y." Not a status meeting. A decision meeting. "Should we adjust?" "Should we invest more here?"

  7. Weeks 10-11: Mid-quarter assessment. Are we going to hit these? If not, why? Is it execution or assumption failure? What do we change?

  8. Week 13: Retrospective. What did we learn? What worked? What broke? What surprised us? Use this for next quarter's context.

How to Cascade Without Creating Bureaucracy

I hate cascading OKRs. It creates bottleneck theater. But you do need alignment.

Here's how to do it without the nonsense:

Step 1: Company OKRs first. Product, engineering, GTM, ops. 5 total, max. This takes 2 weeks.

Step 2: Function OKRs build on company OKRs. But they're not sub-goals. They're specific to what that function controls.

Wrong cascading:

Company OKR: Increase 30-day retention from 50% to 60%
Product OKR: Build churn-prevention features
Product KR: Ship 3 anti-churn features

This is just subdividing work. The product function is responsible for a task, not an outcome.

Right cascading:

Company OKR: Increase 30-day retention from 50% to 60%
Product OKR: Reduce friction in power-user workflows
Product KR: Increase session frequency for users with over 5 sessions from 3x/week to 5x/week

Product is still responsible for retention, but through their own outcome. They might build churn-prevention features. They might redesign power-user workflows. They might create shortcuts or commands. The specific solution is up to them.

The rule: If your function owns a metric, you own the OKR. If you're just helping another function hit their metric, you contribute to their OKR, but don't create a cascading OKR.

Template and Anti-Pattern Checklist

I've created a complete OKR worksheet and scoring rubric in the artifact below. Use it. It has:

  • OKR worksheet: Objective template, key result format, behavioral change test, measurement plan
  • Real examples: 12 examples across activation, retention, revenue, and expansion
  • Anti-pattern checklist: 20 common OKR mistakes and how to fix them
  • Scoring rubric: How to calibrate ambitious-but-achievable, grade your OKRs mid-quarter, and run a retrospective

The artifact is your reference. Print it. Share it. Use it in your drafting meetings. Make it part of how you write OKRs.

One More Thing: How OKRs Actually Drive Decisions

Here's what OKRs should do: When someone pitches a new project, asks for headcount, or wants to take on a side project, you should be able to say:

"Does this help us hit our Q2 OKRs? If yes, we fund it. If no, we don't. The one exception: if it unblocks another OKR, we fund it as a dependency."

That's it. That's the entire point. OKRs are a decision-making tool, not a planning tool.

If you write good OKRs, decision-making becomes obvious. If you write bad OKRs, you're left in the same place you started: with a feature roadmap and a list of projects and no idea what actually matters.

Write good OKRs. Your quarterly planning will be faster, clearer, and actually strategic.
