# OKR Writing Guide: Templates, Examples, and Anti-Pattern Checklist

## Part 1: OKR Worksheet Template

### Step 1: Write Your Objective

**Objective Template:**
```
[Verb] [Desired business/customer outcome] [by/through] [mechanism or constraint]
```

**Examples:**
- Lock in daily habit loops among power users
- Reduce onboarding friction for first-time users
- Shift customer spend from fixed to usage-based models
- Own the SMB market for workflow automation

**Guidelines:**
- 5-10 words maximum
- Should inspire and be memorable
- Use strong verbs: "increase," "decrease," "shift," "own," "reduce," "expand," not "improve" or "enhance"
- Should describe outcome, not activity

### Step 2: Define Your Key Results

**Key Result Template:**
```
[Metric name]: Move from [current state] to [target state] by [date]
```

**Examples:**
- 30-day retention: Increase from 52% to 65% by Q2 end
- Time to first success: Reduce from 45 min to under 20 min by Q2 end
- Expansion revenue: Grow from $12k/month to $25k/month by Q2 end
- Session frequency: Increase from 3.2 sessions/week to 5 sessions/week by Q2 end

**Guidelines:**
- 3-4 key results per objective maximum
- Each KR should be independently measurable (no "and")
- Should describe outcome, not deliverable
- Should require cross-functional effort or organizational focus
- Metric must be something that changed in customer/business behavior
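As a sanity check, the template and guidelines above can be linted mechanically. A minimal sketch in Python (the regexes and rules are illustrative heuristics, not an exhaustive validator):

```python
import re

def lint_key_result(kr: str) -> list[str]:
    """Flag common problems against the KR template:
    '[Metric]: Move from [current] to [target] by [date]'."""
    problems = []
    # Baseline and target: numbers on both sides of 'from ... to ...'.
    if not re.search(r"from\s+\$?[\d.,]+.*?\bto\b\s+(under\s+)?\$?[\d.,]+", kr, re.I):
        problems.append("missing explicit baseline and target ('from X to Y')")
    # Deadline: a 'by ...' clause, e.g. 'by Q2 end'.
    if not re.search(r"\bby\b", kr, re.I):
        problems.append("missing deadline ('by [date]')")
    # Compound KRs are not independently measurable.
    if " and " in kr.lower():
        problems.append("contains 'and': split into separate KRs")
    return problems
```

`lint_key_result("30-day retention: Increase from 52% to 65% by Q2 end")` returns an empty list; `lint_key_result("Improve retention and engagement")` trips all three checks.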

### Step 3: Apply the Behavior Change Test

**For each key result, answer:**
1. If we hit this KR, what will customers/users actually do differently?
2. Can we measure this objectively?
3. Is this outcome or output?
4. Will achievement of this KR require hard decisions or trade-offs?

**If you can't answer all four clearly, rewrite the KR.**

### Step 4: Define Measurement Plan

For each KR, document:

```
Metric: [Name]
Definition: [Exactly how we calculate this]
Data source: [Where this lives, Mixpanel, Amplitude, Stripe, etc.]
Current state: [Today's number]
Target: [Q-end number]
Confidence: [High/Medium/Low on whether we can influence this]
Owner: [Who's responsible for this KR]
Dependencies: [What has to happen first?]
```
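If you track more than a handful of KRs, it helps to keep each measurement plan as structured data rather than free text. A minimal sketch, using a hypothetical `MeasurementPlan` record that mirrors the worksheet fields above (the field names are illustrative, not a standard schema):

```python
from dataclasses import dataclass, field

@dataclass
class MeasurementPlan:
    """One record per KR, mirroring the worksheet fields."""
    metric: str
    definition: str
    data_source: str       # e.g. "Mixpanel", "Amplitude", "Stripe"
    current_state: float
    target: float
    confidence: str        # "High" / "Medium" / "Low"
    owner: str
    dependencies: list[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        # A plan with no data source or no owner is not actionable.
        return bool(self.data_source.strip() and self.owner.strip())
```

Storing plans this way makes the "metric with no data source" anti-pattern (Part 3) a one-line check.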

---

## Part 2: Real OKR Examples by Category

### Category 1: Activation (Free to Paid / Signup to First Success)

#### Example 1A: B2B SaaS (Time to Value Focus)
```
OBJECTIVE: Get new users to their first win in under 30 minutes

Key Result: Reduce time from signup to first successful action from 45 min to under 25 min
  Metric: Onboarding completion time (from signup to first task created)
  Current: 45 min average
  Target: 25 min average
  Owner: Product (onboarding experience)
  Data source: Mixpanel onboarding funnel

Key Result: Increase percentage of signups who complete first success scenario in first session from 28% to 50%
  Metric: First session success rate (% of signups who reach milestone X in same session they signed up)
  Current: 28%
  Target: 50%
  Owner: Product (user research, onboarding flow)
  Data source: Amplitude or Mixpanel

Key Result: Achieve 85% satisfaction rating from users on their onboarding experience (vs. current 62%)
  Metric: Post-onboarding NPS or CSAT (asking "How easy was it to get started?")
  Current: 62%
  Target: 85%
  Owner: Product (customer voice), Support (feedback collection)
  Data source: Intercom or SurveyMonkey post-onboarding survey
```

#### Example 1B: Freemium SaaS (Free-to-Paid Conversion)
```
OBJECTIVE: Convert power-users-in-waiting before they leave

Key Result: Increase free trial conversion rate from 18% to 28%
  Metric: Trial users who convert to paid / Total trial users started
  Current: 18%
  Target: 28%
  Owner: Product (upgrade experience), GTM (pricing strategy)
  Data source: Stripe + analytics integration

Key Result: Increase percentage of users who reach "power user threshold" (5+ features used) from 31% to 50%
  Metric: Users who use 5+ distinct features in first 7 days / Total activated users
  Current: 31%
  Target: 50%
  Owner: Product (feature discovery, UX)
  Data source: Amplitude feature adoption

Key Result: Reduce time from signup to "conversion-ready" signal from 12 days to 6 days
  Metric: Days from signup to using 3+ key features
  Current: 12 days
  Target: 6 days
  Owner: Product (engagement loops), GTM (nurture messaging)
  Data source: Amplitude cohort analysis
```

### Category 2: Retention (Activation to Sticky Behavior)

#### Example 2A: Daily Active Product (Habit Building)
```
OBJECTIVE: Lock in daily habit loops for power users

Key Result: Increase 30-day retention for activated users from 58% to 72%
  Metric: Users active on day 30 / Users activated in cohort (Day 1 active)
  Current: 58%
  Target: 72%
  Owner: Product (engagement design)
  Data source: Amplitude cohort retention

Key Result: Increase average session frequency from 3.8 sessions/week to 5.2 sessions/week
  Metric: Weekly sessions per user (total sessions in the week / weekly active users)
  Current: 3.8
  Target: 5.2
  Owner: Product (engagement loops, notifications)
  Data source: Mixpanel or Amplitude event tracking

Key Result: Decrease day-7 churn for high-intent users from 15% to under 8%
  Metric: % of users with 5+ sessions in their first 7 days who don't return on day 8 or later
  Current: 15% churn
  Target: 8% churn
  Owner: Product (aha moment reinforcement)
  Data source: Custom cohort analysis in analytics tool
```
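The retention metric in the first KR above can be computed directly from activation and activity data. A minimal sketch (the data shapes are assumptions; real pipelines usually pull this from Amplitude or Mixpanel, and many teams count a day 28-32 window rather than exactly day 30):

```python
from datetime import date, timedelta

def thirty_day_retention(activations: dict[str, date],
                         activity: dict[str, set[date]]) -> float:
    """30-day retention as defined above: users active on day 30
    divided by users activated in the cohort. `activations` maps
    user id -> activation date; `activity` maps user id -> set of
    dates the user was active."""
    if not activations:
        return 0.0
    retained = sum(
        1 for user, day0 in activations.items()
        if day0 + timedelta(days=30) in activity.get(user, set())
    )
    return retained / len(activations)
```

Running this per weekly cohort, rather than on all users at once, is what makes the trend actionable.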

#### Example 2B: B2B Retention (Power User Stickiness)
```
OBJECTIVE: Become the collaboration hub for distributed teams

Key Result: Increase account expansion rate (accounts using 2+ key features) from 42% to 65%
  Metric: Accounts using power user features / Total active accounts
  Current: 42%
  Target: 65%
  Owner: Product (feature adoption), CS (adoption support)
  Data source: Mixpanel account-level analytics

Key Result: Reduce monthly churn for accounts with power-user behavior to under 2%
  Metric: % of accounts with 10+ logins/month that churn in month N
  Current: 4.2%
  Target: 2%
  Owner: CS (support), Product (retention features)
  Data source: Custom segment in analytics

Key Result: Increase seat expansion per account from 4.2 seats to 6.8 seats for retained accounts
  Metric: Average seats per account (for accounts active for 6+ months)
  Current: 4.2
  Target: 6.8
  Owner: Product (multi-team features), CS (expansion)
  Data source: Stripe + CRM
```

### Category 3: Revenue (Monetization and Expansion)

#### Example 3A: Expansion Revenue (Upsell and Cross-Sell)
```
OBJECTIVE: Shift customer spend from fixed pricing to usage-based expansion

Key Result: Increase net MRR growth from expansion revenue from $8k/month to $18k/month
  Metric: Net new MRR from accounts that existed in the prior quarter (expansion minus contraction)
  Current: $8k/month
  Target: $18k/month
  Owner: Product (usage-based features), GTM (expansion motion)
  Data source: Stripe + ChartMogul

Key Result: Increase percentage of enterprise customers spending under $5k annually who upgrade to $10k+ within 6 months from 12% to 30%
  Metric: Accounts in the under-$5k cohort reaching 2x+ their starting spend within 6 months / Total cohort accounts
  Current: 12%
  Target: 30%
  Owner: CS (land-expand motion), Product (usage tracking)
  Data source: CRM + Stripe

Key Result: Increase ARPU from $180 to $240 for power user segment
  Metric: Annual revenue from the power user tier / Number of active accounts in that tier
  Current: $180
  Target: $240
  Owner: Product (premium features), GTM (pricing)
  Data source: Stripe or accounting system
```

#### Example 3B: Enterprise Revenue (Land and Expand)
```
OBJECTIVE: Build predictable enterprise revenue through adoption-driven expansion

Key Result: Land 8 new enterprise customers (TCV greater than $50k) by Q-end
  Metric: Signed contracts $50k+ TCV
  Current: [Count this quarter to date]
  Target: 8
  Owner: Sales, Product (enterprise features)
  Data source: Salesforce

Key Result: Increase net revenue retention to 115% for existing enterprise accounts
  Metric: Revenue at period end from accounts active at period start / Revenue at period start (new logos excluded)
  Current: 103%
  Target: 115%
  Owner: CS (adoption), Product (roadmap prioritization for enterprise)
  Data source: Accounting system

Key Result: Reduce time to $10k ARR for new enterprise customers from 5 months to 3 months
  Metric: Median time from contract signature to $10k ARR across cohort
  Current: 5 months
  Target: 3 months
  Owner: CS (onboarding), Product (value realization)
  Data source: CRM + Stripe
```
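The net revenue retention KR above is worth pinning down numerically. For the cohort of accounts active at the start of the period, one standard decomposition is: starting revenue plus expansion, minus contraction and churned revenue, all divided by starting revenue (new logos excluded). A small sketch of that calculation:

```python
def net_revenue_retention(start_mrr: float, expansion: float,
                          contraction: float, churn: float) -> float:
    """NRR for the cohort of accounts active at the period start:
    (start + expansion - contraction - churned revenue) / start.
    Revenue from brand-new accounts is deliberately excluded."""
    if start_mrr <= 0:
        raise ValueError("starting revenue must be positive")
    return (start_mrr + expansion - contraction - churn) / start_mrr
```

For example, $100k starting MRR with $20k expansion, $2k contraction, and $3k churn gives 1.15, i.e. the 115% target in the KR.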

---

## Part 3: Anti-Pattern Checklist

### Critical Anti-Patterns (If you see these, rewrite)

- [ ] **Output Disguised as Outcome** ("Ship dashboard redesign," "Build 3 integrations")
  - Fix: Describe what changes in user/business behavior, not what gets built

- [ ] **Unmeasurable Objective** ("Improve product quality," "Better UX," "Enhance collaboration")
  - Fix: Define the metric. What specifically improves and how do you measure it?

- [ ] **Too Many OKRs** (More than 5 objectives or 15+ key results per function)
  - Fix: Ruthlessly prioritize. Choose what matters most this quarter.

- [ ] **No Target/Baseline** ("Increase retention" without saying from X to Y)
  - Fix: Always specify current state and target state with numbers.

- [ ] **Outcome You Don't Actually Control** ("Increase market share," "Win more enterprise deals")
  - Fix: Choose outcomes your team can influence. Maybe "Increase enterprise feature adoption" instead.

- [ ] **KR Doesn't Require Hard Decisions** (Should cost you something to hit it)
  - Fix: If you can hit all your KRs by doing what you were going to do anyway, they're too easy.

### Measurement Anti-Patterns

- [ ] **Metric with no data source** ("Increase customer happiness": where does the data live?)
  - Fix: Name the tool. Specify exactly how you calculate it.

- [ ] **Lagging indicator only** (Churn rate measured 30 days after quarter ends)
  - Fix: Include leading indicators you can track weekly (activation rate, feature adoption, NPS).

- [ ] **Percentages without context** ("Increase conversion by 15%": is that 15 percentage points or a 15% relative lift?)
  - Fix: State the baseline and target as absolute numbers (e.g., "from 20% to 23%").

- [ ] **Vanity metrics** (Total signups, total page views, raw DAU)
  - Fix: Use metrics that describe business health (retention, activation rate, revenue per cohort).
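The "percentages without context" trap is easy to demonstrate: the same sentence supports two different arithmetic readings. A quick illustration (the function is hypothetical; rates are fractions, so 0.20 means 20%):

```python
def apply_change(baseline_rate: float, change: float, relative: bool) -> float:
    """'Increase conversion by 15%' is ambiguous: it can mean a
    relative lift or an absolute percentage-point move."""
    if relative:
        return baseline_rate * (1 + change)  # 15% relative lift
    return baseline_rate + change            # 15 percentage points
```

From a 20% baseline, a 15% relative lift lands at 23%, while 15 percentage points lands at 35%. Writing "from 20% to 23%" removes the ambiguity entirely.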

### Organizational Anti-Patterns

- [ ] **Top-down OKRs with no input from teams** (Leadership writes them in a vacuum)
  - Fix: Team input is critical. They understand constraints and opportunities. Facilitate, don't dictate.

- [ ] **Completely siloed OKRs** (No dependency mapping, functions working at cross-purposes)
  - Fix: Map dependencies. Where is Product blocking GTM? Where is GTM blocking Product?

- [ ] **OKRs used as performance reviews** (Hitting OKRs ties to bonuses/promotions)
  - Fix: OKRs are learning tools, not scorecards. If people are sandbagging targets or gaming the numbers, the OKRs are too tied to compensation.

- [ ] **Zero mid-quarter adjustment** ("We're locked in for 12 weeks no matter what")
  - Fix: Review at week 6-7. If something is clearly impossible or irrelevant, adjust.

- [ ] **No retrospective** (Quarter ends, OKRs written for next quarter, no reflection)
  - Fix: Spend an hour. What did we learn? What surprised us? Apply it to next quarter.

---

## Part 4: OKR Scoring and Grading Rubric

### How to Score at Quarter End

**Scoring scale:**
- 1.0 = Achieved 100%
- 0.7 = Achieved 70%
- 0.5 = Achieved 50%
- 0.3 = Achieved 30%
- 0.0 = Achieved 0%

**Overall OKR health:**
- 0.7 to 1.0 on a given KR = healthy calibration (aim for an average of about 0.75 across all KRs)
- Under 0.5 average = You were too ambitious OR you had execution problems OR your assumptions were wrong
- 1.0 across the board = You sandbagged
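Scoring a KR is just the fraction of the baseline-to-target gap you actually closed, clamped to [0, 1]. A small sketch that also handles "decrease" KRs, where the gap carries a negative sign:

```python
def score_kr(baseline: float, target: float, actual: float) -> float:
    """Fraction of the baseline-to-target gap closed, clamped to
    [0, 1]. Works for 'decrease' KRs too, because the gap
    (target - baseline) carries the sign."""
    gap = target - baseline
    if gap == 0:
        raise ValueError("target must differ from baseline")
    return max(0.0, min(1.0, (actual - baseline) / gap))

def okr_average(scores: list[float]) -> float:
    """Average score across all KRs under one objective."""
    return sum(scores) / len(scores)
```

For example, moving retention from 52% to 58.5% against a 65% target scores 0.5, and so does cutting onboarding time from 45 min to 35 min against a 25 min target.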

### Mid-Quarter Check-In Template (Week 6-7)

For each key result, answer:

```
KR: [Key result name and target]

Current state: [Today's number]
Pace: [On track / At risk / Off track]
Confidence we hit target: [High / Medium / Low]
Why: [If not on track, what's the blocker?]
Action: [Do we adjust the KR? Do we add resources? Do we descope?]
```

**Example:**
```
KR: Increase 30-day retention from 52% to 65%

Current state: 55% (Week 6)
Pace: At risk (we've closed 3 of the 13 points needed, about 23% of the gap, with about half the quarter remaining)
Confidence: Medium (dependent on churn-mitigation feature shipping on time)
Why: Feature shipped late. We're running the experiment now, but won't have full cycle of data until week 10.
Action: Keep the KR. Negotiate a 62% target instead of 65% if we need to descope engineering work.
```
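The pace judgment in the example can be made mechanical by comparing the fraction of the gap closed against the fraction of the quarter elapsed. A sketch (the 0.85 and 0.5 thresholds are arbitrary cutoffs for illustration, not a standard):

```python
def pace(baseline: float, target: float, current: float,
         weeks_elapsed: int, weeks_total: int = 13) -> str:
    """Compare gap closed vs. time elapsed. A ratio near 1.0 means
    progress is keeping pace with the calendar."""
    gap_closed = (current - baseline) / (target - baseline)
    time_elapsed = weeks_elapsed / weeks_total
    ratio = gap_closed / time_elapsed if time_elapsed else 1.0
    if ratio >= 0.85:
        return "On track"
    if ratio >= 0.5:
        return "At risk"
    return "Off track"
```

Applied to the example: at week 6, moving from 52% to 55% against a 65% target has closed 3 of 13 points while 6 of 13 weeks have passed, a ratio of 0.5.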

### Quarter-End Retrospective Template (Week 13)

```
1. RESULTS
   - How many KRs did we hit? [X out of Y]
   - What was our average score? [0.72 out of 1.0]

2. ANALYSIS
   - Which KRs did we nail? Why?
   - Which KRs did we miss? Why?
   - Were we too ambitious? Too conservative? Right-sized?
   - Did we learn anything that changes our next quarter?

3. LEARNINGS FOR NEXT QUARTER
   - What should we do more of?
   - What should we do less of?
   - What assumptions were wrong?
   - What surprised us?

4. CALIBRATION ADJUSTMENT
   - How ambitious should Q-next be? Same level? Higher? Lower?
   - Do we need better leading indicators to track earlier?
   - Do we need different metrics to measure what actually matters?
```

---

## Part 5: Quick Reference Card

### OKR Format at a Glance

```
OBJECTIVE: [Outcome you want to achieve]

Key Result 1: [Metric] from [current] to [target]
Key Result 2: [Metric] from [current] to [target]
Key Result 3: [Metric] from [current] to [target]
```

### The Behavior Change Test (In One Sentence)

If the KR is achieved, will a customer/user/business decision-maker actually behave differently? If yes, it's a KR. If no, it's a task.

### One OKR Per Function (For Q2 2026, as example)

```
PRODUCT:
Objective: Increase time-to-value and activate more power users
KR1: 30-day retention from 58% to 68%
KR2: Time to first success from 35 min to 18 min
KR3: Session frequency from 3.2/week to 4.8/week

ENGINEERING:
Objective: Stabilize infrastructure and reduce technical debt
KR1: Error rate from 1.2% to under 0.6%
KR2: P99 latency from 1,200ms to under 800ms
KR3: Tech debt reduction (20% of roadmap capacity shipped)

GTM:
Objective: Expand market presence in mid-market segment
KR1: Close 12 mid-market new customers
KR2: Reduce sales cycle from 60 days to 45 days
KR3: Increase qualified pipeline from $1.0M to $1.4M

CUSTOMER SUCCESS:
Objective: Drive adoption and expansion for existing customers
KR1: Net revenue retention to 115% (from 108%)
KR2: Accounts with power-user behavior from 35% to 55%
KR3: Customer satisfaction from 7.2 to 8.1/10
```

### Remember

- OKRs are not a straitjacket. They're a North Star that guides decisions.
- Mid-quarter adjustment is not failure. It's learning.
- Good OKRs create clarity. If people are confused about priorities, your OKRs are too vague.
- The process of writing OKRs is more valuable than the OKRs themselves. You'll learn what your team actually thinks matters.