# AARRR Dashboard: Metric Definitions, Thresholds, and Investigation Playbooks

## Part 1: Metric Definitions by Stage

### Acquisition Metrics

#### 1. Signup Conversion Rate
**Definition:** (Total signups) / (Total landing page visitors) for a specified time period
**Calculation:** Divide new signups by traffic to core landing page or website
**Data source:** Google Analytics, Mixpanel, Amplitude
**Typical benchmark:**
  - Cold traffic (ads): 1-3%
  - Organic traffic: 3-8%
  - Enterprise with sales outreach: 50-80%

**How to track it:**
```
Weekly signup conversion rate = New signups this week / Visitors this week
Track separately by source: Direct, Organic Search, Paid Ads, Partnership, Sales-generated
```
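
If your analytics tool exports weekly visitor and signup counts, a minimal sketch of this roll-up might look like the following (the sources and numbers are illustrative, not tied to any specific tool):

```python
# Illustrative weekly export: (source, visitors, signups) per traffic source.
weekly_traffic = [
    ("Organic Search", 4200, 210),
    ("Paid Ads", 3100, 62),
    ("Direct", 1800, 90),
    ("Referral", 650, 39),
]

def signup_conversion_by_source(rows):
    """Return {source: conversion_rate} plus a blended overall rate."""
    rates = {src: signups / visitors for src, visitors, signups in rows}
    total_visitors = sum(v for _, v, _ in rows)
    total_signups = sum(s for _, _, s in rows)
    rates["ALL"] = total_signups / total_visitors
    return rates

for source, rate in signup_conversion_by_source(weekly_traffic).items():
    print(f"{source}: {rate:.1%}")
```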

#### 2. Signup-to-Trial Start Rate
**Definition:** (Users who start a trial) / (Users who signed up for account)
**Calculation:** Some signups create an account but don't start a trial. Track how many actually enter the trial.
**Data source:** Your auth system + trial management system
**Typical benchmark:** 70-90% (Most signups start trial, some abandon)

#### 3. Trial Start to Trial Conversion Rate
**Definition:** (Trial users who convert to paid) / (Trial users who started trial)
**Calculation:** Of users who began a trial, what percentage become paying customers within trial window?
**Data source:** Your auth system + Stripe/payment processor
**Typical benchmark:** 5-15% depending on trial length and product

#### 4. Customer Acquisition Cost (CAC)
**Definition:** (Total acquisition spending) / (Customers acquired)
**Calculation:** (All marketing spend + sales salaries, if applicable) / (Number of new customers acquired)
**Data source:** Stripe + marketing analytics + accounting system
**Typical benchmark:**
  - Self-serve SaaS: CAC $50-500 per customer
  - Mid-market: CAC $2k-10k per customer
  - Enterprise: CAC $25k-100k+ per customer
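
A minimal sketch of the arithmetic (the spend figures and channel names are made up); blended CAC is total acquisition spend over new customers, and it is worth computing per channel as well wherever spend maps cleanly to a channel:

```python
# Hypothetical monthly figures; replace with your accounting/ads exports.
channel_spend = {"Paid Ads": 12_000, "Content/SEO": 4_000, "Sales salaries": 20_000}
new_customers_by_channel = {"Paid Ads": 35, "Content/SEO": 28, "Sales-generated": 6}

# Blended CAC: all acquisition spend divided by all new customers.
blended_cac = sum(channel_spend.values()) / sum(new_customers_by_channel.values())
print(f"Blended CAC: ${blended_cac:,.0f}")

# Per-channel CAC only makes sense where spend maps cleanly to one channel.
paid_cac = channel_spend["Paid Ads"] / new_customers_by_channel["Paid Ads"]
print(f"Paid Ads CAC: ${paid_cac:,.0f}")
```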

#### 5. Lead Quality Score
**Definition:** Percentage of signups from high-quality sources (company size, vertical, intent)
**Calculation:** (Signups from ideal customer profile) / (Total signups)
**Data source:** Manually score signups or use IP lookup + survey data
**Typical benchmark:** 40-60% of signups are from ICP

---

### Activation Metrics

#### 1. Activation Rate (Primary)
**Definition:** Percentage of signups who reach "aha moment" within N days (typically 7)
**Calculation:** (Users who completed aha moment in first 7 days) / (Total signups in period)
**Data source:** Mixpanel, Amplitude, or custom event tracking
**Typical benchmark:**
  - Most SaaS: 20-40%
  - High-intent (sales-generated): 60-80%
  - Freemium with low intent: 10-25%

**Defining "aha moment" (examples):**
- Project management: Created first project
- Analytics: Created first report or dashboard
- Communication: Sent first message
- Workflow automation: Created first automation or workflow
- Billing: Created first invoice
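
A minimal sketch of the 7-day activation calculation, assuming you can export each user's signup timestamp and first aha-moment timestamp (the field names and dates are illustrative):

```python
from datetime import datetime, timedelta

# Illustrative export: signup time and first aha-moment time (None if never reached).
users = [
    {"signup": datetime(2024, 1, 2), "aha": datetime(2024, 1, 2, 14)},
    {"signup": datetime(2024, 1, 3), "aha": datetime(2024, 1, 12)},   # activated too late
    {"signup": datetime(2024, 1, 5), "aha": None},                    # never activated
]

def activation_rate(users, window_days=7):
    """Share of signups who reached the aha moment within the window."""
    window = timedelta(days=window_days)
    activated = sum(
        1 for u in users
        if u["aha"] is not None and u["aha"] - u["signup"] <= window
    )
    return activated / len(users)

print(f"7-day activation rate: {activation_rate(users):.0%}")
```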

#### 2. Time to Activation
**Definition:** Median time from signup to aha moment
**Calculation:** For all users who activated, calculate days from signup date to aha moment date. Report the median.
**Data source:** Mixpanel, Amplitude
**Typical benchmark:** 20 minutes to 48 hours depending on product

**Track separately:**
- Mobile vs. Desktop
- Traffic source (does referral convert faster than ads?)
- Account type (free plan vs. trial vs. enterprise)
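
Using the same kind of export, report the median rather than the mean, which a few slow activators would distort; a sketch with the standard library (the timestamps are illustrative):

```python
from datetime import datetime
from statistics import median

# (signup, aha) pairs for users who activated; non-activators are excluded.
activated = [
    (datetime(2024, 1, 2, 9), datetime(2024, 1, 2, 10)),
    (datetime(2024, 1, 3, 11), datetime(2024, 1, 4, 16)),
    (datetime(2024, 1, 5, 8), datetime(2024, 1, 5, 8, 40)),
]

hours_to_activation = [(aha - signup).total_seconds() / 3600 for signup, aha in activated]
print(f"Median time to activation: {median(hours_to_activation):.1f} hours")
```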

#### 3. Onboarding Completion Rate
**Definition:** Percentage of signups who complete your structured onboarding
**Calculation:** (Users who finished onboarding) / (Users who started onboarding)
**Data source:** Mixpanel, custom event tracking
**Typical benchmark:** 60-80%

**Note:** Onboarding completion ≠ activation. You can complete onboarding and never use the product. Activation is what matters.

#### 4. Feature Discovery Rate
**Definition:** Percentage of newly activated users who discover/use key features within first N days
**Calculation:** (Users who used feature X) / (Total activated users) within first 7 days
**Data source:** Mixpanel feature tracking
**Typical benchmark:** 40-70% depending on feature importance

#### 5. Activation by Cohort
**Definition:** Activation rate broken down by signup source, device, or account type
**Calculation:** Same as activation rate, but segmented
**Data source:** Mixpanel, Amplitude
**Why track it:** Tells you if acquisition source quality is changing

**Example:**
```
Organic search signups: 45% activation rate
Paid ads signups: 28% activation rate
Referral signups: 52% activation rate

→ If paid ads activation drops from 28% to 18%, investigate the ad platform or messaging
```

#### 6. Activation-to-Retention Correlation
**Definition:** Of users who activated, what % return on day 2/7/30?
**Calculation:** (Activated users still active on day N) / (Total activated users)
**Data source:** Mixpanel, Amplitude
**Why track it:** Tells you if aha moment was real or false positive

**Example:**
```
Activation rate: 35%
Day 7 retention of activated users: 62%
→ Of users who activated, 62% are still using it a week later. Good signal.

vs.

Activation rate: 35%
Day 7 retention of activated users: 28%
→ Users activate but immediately churn. Aha moment is weak or false.
```

---

### Retention Metrics

#### 1. N-Day Retention (Primary)
**Definition:** Percentage of a cohort's day-1 active users who are still active on day N
**Calculation:** (Cohort users active on day N) / (Cohort users active on day 1)
**Data source:** Mixpanel, Amplitude
**Standard benchmarks:**
  - 1-day retention: 40-60% (expect dropoff from first day)
  - 7-day retention: 30-50%
  - 30-day retention: 20-40%
  - 90-day retention: 15-30%

**Most important:** 7-day and 30-day retention. Use 7-day as a leading indicator and 30-day as your primary health metric.
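
A minimal sketch of the day-N calculation for a single cohort, assuming you can export each user's set of active dates (the data layout and dates are illustrative):

```python
from datetime import date, timedelta

# Illustrative cohort: users who signed up on the same day, with their active dates.
cohort_start = date(2024, 1, 1)
active_dates = {
    "u1": {date(2024, 1, 1), date(2024, 1, 8), date(2024, 1, 31)},
    "u2": {date(2024, 1, 1), date(2024, 1, 2)},
    "u3": {date(2024, 1, 1)},
}

def n_day_retention(active_dates, cohort_start, n):
    """Share of the cohort active on day N after signup."""
    target = cohort_start + timedelta(days=n)
    retained = sum(1 for dates in active_dates.values() if target in dates)
    return retained / len(active_dates)

for n in (7, 30):
    print(f"Day {n} retention: {n_day_retention(active_dates, cohort_start, n):.0%}")
```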

#### 2. Churn Rate
**Definition:** Percentage of users active in period N who are not active in period N+1
**Calculation:** [(Users active in month N) - (Users active in both month N and month N+1)] / (Users active in month N)
**Data source:** Mixpanel, Amplitude
**Typical benchmark:**
  - Daily churn: 3-8% per day
  - Monthly churn: 5-15% per month
  - For B2B: Should stabilize around 3-5% monthly after first 3 months
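
Sketched in code (the user IDs are illustrative), monthly churn falls out of two sets of active user IDs:

```python
# Illustrative sets of user IDs active in consecutive months.
active_jan = {"u1", "u2", "u3", "u4", "u5", "u6", "u7", "u8", "u9", "u10"}
active_feb = {"u1", "u2", "u3", "u4", "u5", "u6", "u7", "u11", "u12"}

def monthly_churn(active_prev, active_curr):
    """Share of last month's active users who did not come back this month."""
    retained = active_prev & active_curr
    return (len(active_prev) - len(retained)) / len(active_prev)

print(f"Monthly churn: {monthly_churn(active_jan, active_feb):.0%}")  # 3 of 10 churned -> 30%
```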

#### 3. Cohort Retention Curve
**Definition:** Track retention for each weekly/monthly cohort over time
**Calculation:** Create a table where rows are signup cohorts and columns are days/weeks after signup
**Data source:** Mixpanel, Amplitude
**Why track it:** Tells you if retention is getting worse (newer cohorts retain worse) or better

**Example:**
```
        Day 7   Day 30  Day 90
Jan 1:  45%     28%     15%
Jan 8:  47%     29%     16%
Jan 15: 42%     25%     12% ← Worse retention for newer cohort
Jan 22: 39%     22%     10% ← Getting worse

→ Either something changed in your onboarding/product, or acquisition quality changed
```
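
A sketch of building that table from a raw export of (user, signup date, active date) rows, using only the standard library; pandas pivot tables express the same idea more compactly:

```python
from collections import defaultdict
from datetime import date

# Illustrative rows: (user_id, signup_date, active_date).
events = [
    ("u1", date(2024, 1, 1), date(2024, 1, 8)),
    ("u1", date(2024, 1, 1), date(2024, 1, 31)),
    ("u2", date(2024, 1, 1), date(2024, 1, 2)),
    ("u3", date(2024, 1, 8), date(2024, 1, 15)),
]
cohort_sizes = {date(2024, 1, 1): 2, date(2024, 1, 8): 1}

def retention_table(events, cohort_sizes, checkpoints=(7, 30, 90)):
    """Return {cohort: {day_n: retention}} for each checkpoint day."""
    retained = defaultdict(lambda: defaultdict(set))
    for user, signup, active in events:
        day_n = (active - signup).days
        if day_n in checkpoints:
            retained[signup][day_n].add(user)
    return {
        cohort: {n: len(retained[cohort][n]) / size for n in checkpoints}
        for cohort, size in cohort_sizes.items()
    }

for cohort, row in retention_table(events, cohort_sizes).items():
    cells = "  ".join(f"D{n}: {rate:.0%}" for n, rate in row.items())
    print(f"{cohort}:  {cells}")
```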

#### 4. Weekly Active Users (WAU)
**Definition:** Number of distinct users active in a week
**Calculation:** Count unique users with at least one session in the week
**Data source:** Mixpanel, Amplitude
**Why track it:** Tells you if returning users are actually coming back regularly

**Track alongside DAU:**
- DAU: Daily Active Users
- WAU: Weekly Active Users
- WAU/DAU ratio should be 3-5x for a healthy product

#### 5. Session Frequency
**Definition:** Average number of sessions per active user per week
**Calculation:** (Total sessions in the week) / (Weekly active users)
**Data source:** Analytics tool
**Typical benchmark:**
  - Casual use product: 1-2 sessions/week
  - Regular use product: 3-5 sessions/week
  - Daily habit product: 6-7 sessions/week

#### 6. Feature Retention by Feature
**Definition:** Retention rate for users of specific features
**Calculation:** (Users who used feature X and returned 30 days later) / (Users who used feature X)
**Data source:** Mixpanel feature tracking
**Why track it:** Tells you which features drive habit loops

**Example:**
```
Feature A retention: 52% (sticks with product)
Feature B retention: 28% (weak habit)
Feature C retention: 67% (strong habit)

→ Invest in promoting/expanding Feature C. Fix or deprecate Feature B.
```

#### 7. Segment-Specific Retention
**Definition:** Retention broken down by customer segment
**Calculation:** Same calculation, segmented by plan type, company size, geography, etc.
**Data source:** Mixpanel, Amplitude
**Why track it:** Tells you if a segment is churning unexpectedly

**Example:**
```
All users: 48% 30-day retention
Free tier users: 32% retention
Pro tier users: 65% retention
Enterprise users: 78% retention

→ Free tier retention is low. Consider changing onboarding or adding limitations to drive upgrade
```

---

### Revenue Metrics

#### 1. Payable Activation Rate
**Definition:** Percentage of activated users who convert to paid within a specified window (typically 30 days)
**Calculation:** (Users who start paying subscription) / (Users who activated)
**Data source:** Stripe + analytics integration
**Typical benchmark:** 5-15% depending on trial length and freemium conversion

#### 2. Trial-to-Paid Conversion
**Definition:** Percentage of trial users who convert to paying customer
**Calculation:** (Trial users who convert to paid) / (Trial users who started trial)
**Data source:** Auth system + Stripe
**Typical benchmark:**
  - 7-day trial: 3-5%
  - 14-day trial: 5-10%
  - 30-day trial: 8-15%

#### 3. Average Revenue Per Activated User (ARAU or ARPU)
**Definition:** Total revenue from activated users / Number of activated users
**Calculation:** Track revenue monthly and divide by number of users who activated in that month
**Data source:** Stripe + analytics
**Typical benchmark:** Varies wildly by product, but track the trend

#### 4. Free-to-Paid Conversion (Freemium)
**Definition:** Percentage of free users who upgrade to paid plan
**Calculation:** (Free users who upgraded) / (Total free users in cohort)
**Data source:** Your app + Stripe
**Typical benchmark:** 1-5% for freemium products

#### 5. Revenue by Cohort
**Definition:** Total lifetime revenue from users acquired in a specific week/month
**Calculation:** Sum of all revenue from cohort / Number of users in cohort
**Data source:** Stripe + cohort tracking
**Why track it:** Tells you if conversion is improving over time and if acquisition quality is stable

**Example:**
```
Jan 1 cohort: $850 per user LTV
Jan 8 cohort: $920 per user LTV
Jan 15 cohort: $740 per user LTV ← Revenue declining for newer cohorts
Jan 22 cohort: $620 per user LTV ← Getting worse

→ Either onboarding is getting worse (users not activating), conversion is declining, or you're attracting lower-intent users
```
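
A sketch of per-cohort revenue, assuming you can join payment records back to each customer's signup cohort (the data shapes and amounts are illustrative):

```python
from collections import defaultdict

# Illustrative: customer -> signup cohort, and a flat list of (customer, amount) charges.
signup_cohort = {"c1": "2024-01-01", "c2": "2024-01-01", "c3": "2024-01-08"}
charges = [("c1", 49.0), ("c1", 49.0), ("c2", 199.0), ("c3", 49.0)]
cohort_sizes = {"2024-01-01": 40, "2024-01-08": 35}  # all signups, not just payers

def revenue_per_user_by_cohort(signup_cohort, charges, cohort_sizes):
    """Revenue to date per signup, by cohort."""
    revenue = defaultdict(float)
    for customer, amount in charges:
        revenue[signup_cohort[customer]] += amount
    return {cohort: revenue[cohort] / size for cohort, size in cohort_sizes.items()}

for cohort, rpu in revenue_per_user_by_cohort(signup_cohort, charges, cohort_sizes).items():
    print(f"{cohort}: ${rpu:,.2f} revenue per signup so far")
```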

#### 6. Expansion Revenue Rate
**Definition:** Monthly recurring revenue growth from existing customers (upsells + cross-sells)
**Calculation:** [(MRR from existing customers in month N) - (MRR from existing customers in month N-1)] / (Total MRR in month N-1)
**Data source:** Stripe + analytics
**Typical benchmark:** 3-10% month-over-month

#### 7. Net Revenue Retention (NRR)
**Definition:** Revenue retained from existing customers, including expansion and net of churn
**Calculation:** [(Starting revenue from existing customers) + (Expansion revenue) - (Churned revenue)] / (Starting revenue)
**Data source:** Stripe + accounting
**Typical benchmark:**
  - Healthy: 90-100% (losing some revenue to churn, mostly offset by expansion)
  - Great: 100-120% (expanding faster than churning)
  - Exceptional: 120%+
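
As a sketch of the arithmetic (the MRR figures are made up): start from the MRR of customers you already had at the beginning of the period, add their expansion, and subtract contraction and churn from that same group; new customers are excluded. Contraction (downgrades) is commonly folded in alongside churn.

```python
def net_revenue_retention(starting_mrr, expansion_mrr, contraction_mrr, churned_mrr):
    """NRR over a period, computed only on customers present at the start."""
    return (starting_mrr + expansion_mrr - contraction_mrr - churned_mrr) / starting_mrr

# Hypothetical month: $100k starting MRR, $8k expansion, $2k downgrades, $5k churned.
nrr = net_revenue_retention(100_000, 8_000, 2_000, 5_000)
print(f"NRR: {nrr:.0%}")  # 101%
```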

#### 8. ARPU Growth
**Definition:** Average revenue per user trend month over month
**Calculation:** Track ARPU every month. % growth = (ARPU this month - ARPU last month) / ARPU last month
**Data source:** Stripe
**Typical benchmark:** 1-3% month-over-month growth

---

### Referral Metrics

#### 1. Net Promoter Score (NPS)
**Definition:** "How likely are you to recommend us to a colleague?" (0-10 scale)
- Promoters: 9-10
- Passives: 7-8
- Detractors: 0-6
**Calculation:** (% Promoters) - (% Detractors)
**Data source:** Intercom, SurveyMonkey, or in-app survey
**Typical benchmark:**
  - Below 0: Poor (many detractors)
  - 0-20: Good
  - 20-50: Great
  - 50+: Exceptional
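
A minimal sketch of the calculation from raw 0-10 responses (the scores are made up):

```python
def nps(scores):
    """Net Promoter Score from raw 0-10 survey responses."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

responses = [10, 9, 9, 8, 7, 7, 6, 5, 9, 10, 3, 8]
print(f"NPS: {nps(responses):+.0f}")  # 5 promoters, 3 detractors out of 12 -> +17
```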

#### 2. Referral Participation Rate
**Definition:** Percentage of customers who actually use your referral feature
**Calculation:** (Users who clicked referral link or shared) / (Total users)
**Data source:** Your referral tracking
**Typical benchmark:** 5-15%

#### 3. Referred Customer Activation Rate
**Definition:** Activation rate for users referred by existing customers
**Calculation:** (Referred users who activated) / (Total referred users)
**Data source:** UTM tracking + analytics
**Typical benchmark:** Should be 20-40% higher than organic (referred users are warmer)

#### 4. Referral Conversion Rate
**Definition:** Percentage of referred users who convert to paying customers
**Calculation:** (Referred users who pay) / (Total referred users)
**Data source:** UTM tracking + Stripe
**Why track it:** Referred users are warmer leads, should convert better

#### 5. Viral Coefficient
**Definition:** Average number of new users acquired per existing user
**Calculation:** (New users acquired from referrals) / (Existing users)
**Data source:** Referral tracking + signup attribution
**Typical benchmark:**
  - Under 0.5: Not viral (need paid acquisition)
  - 0.5-1.0: Slowly growing virally
  - Over 1.0: Exponentially growing (each user brings more than 1 new user)

#### 6. NPS by Segment
**Definition:** NPS broken down by customer type
**Calculation:** Same NPS calculation, segmented by plan, company size, tenure, etc.
**Data source:** Survey data + CRM
**Why track it:** Tells you which customer segments are most/least satisfied

**Example:**
```
Enterprise: +58 NPS (very satisfied)
Mid-market: +22 NPS (satisfied)
SMB: -8 NPS (dissatisfied)

→ SMB customers are unhappy. Investigate why. Is product not right-fit? Is support lacking?
```

#### 7. Detractor Themes
**Definition:** Top reasons customers give for poor experience (from NPS detractors)
**Calculation:** Manually categorize detractor feedback
**Data source:** NPS survey open-ended responses
**Why track it:** Tells you what to fix

**Example:**
```
Top detractor themes:
1. "Missing feature X" (28% of detractors)
2. "Performance is slow" (22%)
3. "Support is unresponsive" (18%)
4. "Too expensive" (12%)

→ Prioritize feature X and performance improvements
```

---

## Part 2: Threshold Recommendations by Stage

### Acquisition Thresholds

| Metric | Alert Level | Action |
|--------|-------------|--------|
| Signup conversion rate | Drops below 80% of baseline | Check: Did landing page change? Did traffic source mix change? Did product update affect signup? |
| Signup-to-trial rate | Drops below 85% of baseline | Check: Are users having auth issues? Is there friction in creating account? |
| CAC | Increases over 120% of baseline | Check: Are you spending more for same customers? Should you pause paid channels? |
| Lead quality | Drops below 75% of baseline | Check: Did you change targeting? Are you capturing lower-ICP leads? |

### Activation Thresholds

| Metric | Alert Level | Action |
|--------|-------------|--------|
| Activation rate | Drops below 85% of baseline | Check: Did onboarding change? Is the aha moment harder to reach? Did a regression ship? Interview non-activators. |
| Time to activation | Increases over 110% of baseline | Check: Did you add required steps to onboarding? Is something broken in signup flow? |
| Onboarding completion | Drops below 90% of baseline | Check: Where are users dropping off in onboarding? Is it a specific step? |
| Activation-to-retention correlation | Drops below 85% of baseline | Check: Is aha moment too weak? Do activated users actually find value? |

### Retention Thresholds

| Metric | Alert Level | Action |
|--------|-------------|--------|
| 7-day retention | Drops below 85% of baseline | Check: Is there day-1-to-day-2 cliff? Did something break for day-2 use case? |
| 30-day retention | Drops below 90% of baseline | Check: Are existing cohorts churning or only new ones? Did you ship a regression? |
| Cohort retention curve | New cohorts dropping 10%+ vs. previous month | Check: Did onboarding or product quality change? Is acquisition quality worse? |
| Monthly churn rate | Increases above 110% of baseline | Check: Which segment is churning? Is it a feature break or natural cycling? |
| Session frequency | Drops below 90% of baseline | Check: Did you deprecate a feature? Is there a regression? |

### Revenue Thresholds

| Metric | Alert Level | Action |
|--------|-------------|--------|
| Payable activation rate | Drops below 85% of baseline | Check: Did pricing change? Did trial length change? Did you add friction to conversion? |
| Trial-to-paid conversion | Drops below 90% of baseline | Check: Is trial messaging clear? Are users reaching aha moment? |
| ARPU | Drops below 95% of baseline | Check: Are users choosing lower-tier plans? Did you add a lower-priced option? |
| Revenue by cohort | New cohort revenue 15%+ lower than previous month | Check: Are activated users smaller-spend accounts? Are conversion rates dropping? |
| Expansion revenue | Drops below 90% of baseline or doesn't grow 2%+ monthly | Check: Are power users maxed out? Did you fail to launch expansion features? |
| NRR | Drops below 95% of baseline | Check: Churn increasing? Expansion declining? Both? |

### Referral Thresholds

| Metric | Alert Level | Action |
|--------|-------------|--------|
| NPS | Drops below 85% of baseline | Check: What changed? Survey detractors. Did you ship a regression? |
| Referral participation | Drops below 90% of baseline | Check: Did you change referral mechanics? Is referral feature broken? |
| Referred customer activation | Drops below 85% of baseline | Check: Are referrals lower-quality? Is the referred experience bad? |
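
All of these alerts follow the same pattern: compare the current value against a fraction (or multiple) of its baseline. A minimal sketch of a generic checker, with illustrative baselines and the ratios from the tables above:

```python
# (metric, current, baseline, alert_ratio, direction): alert when the metric falls
# below ratio*baseline ("below") or rises above ratio*baseline ("above").
rules = [
    ("Signup conversion rate", 0.021, 0.030, 0.80, "below"),
    ("CAC", 410.0, 310.0, 1.20, "above"),
    ("30-day retention", 0.41, 0.44, 0.90, "below"),
    ("NPS", 31.0, 42.0, 0.85, "below"),
]

def breached(current, baseline, ratio, direction):
    """True if the metric crossed its alert threshold."""
    threshold = baseline * ratio
    return current < threshold if direction == "below" else current > threshold

for name, current, baseline, ratio, direction in rules:
    status = "ALERT" if breached(current, baseline, ratio, direction) else "ok"
    print(f"{name}: current={current} baseline={baseline} -> {status}")
```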

---

## Part 3: Investigation Playbook Template

When you see a metric drop, use this playbook to diagnose quickly:

### Diagnostic Steps (In Order)

1. **Confirm the signal is real**
   - Is this drop above noise threshold? (2-3 day trend, not one data point; see the sketch after these steps)
   - Is this a data quality issue? (Check if tracking code deployed correctly)
   - Is this seasonal? (Compare to same time last year)

2. **Isolate the affected segment**
   - Which source? (Acquisition, organic, referral, sales?)
   - Which device? (Mobile, desktop, tablet?)
   - Which geography? (US, EU, Asia?)
   - Which customer segment? (Free, paid, enterprise?)
   - If all segments are affected equally, it's likely a product-wide issue. If only one segment is affected, the cause is specific to that segment.

3. **Map the change to recent events**
   - Did you ship code in the last 3 days? (Check deploy history)
   - Did you change messaging or positioning?
   - Did you change pricing or trial terms?
   - Did competitors launch something?
   - Did you send an email that got negative response?
   - Did you change targeting or traffic sources?

4. **Create hypothesis**
   - Based on isolated segment and recent changes, form 1-3 hypotheses
   - Example: "Activation rate dropped 6% because we added required field to signup form and users are abandoning before completion"

5. **Test hypothesis quickly**
   - Look for correlating signals
   - Example: If hypothesis is "signup form broke," check: (1) Did signup-to-trial rate drop? (2) Did time to signup increase? (3) Did mobile signups specifically drop?

6. **Make decision**
   - If hypothesis confirmed and issue is small: Monitor and fix
   - If hypothesis confirmed and issue is critical: Roll back change immediately
   - If hypothesis not confirmed: Move to next hypothesis

7. **Document and act**
   - Record what happened, why, and what you changed
   - Set reminder to re-check metric in 3-5 days to confirm fix worked
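
For step 1's noise check, a minimal sketch: treat a drop as real only if the last few days all sit below the trailing baseline by more than a tolerance (the window sizes, tolerance, and series are illustrative):

```python
from statistics import mean

def sustained_drop(series, trend_days=3, baseline_days=14, tolerance=0.05):
    """True if each of the last `trend_days` values sits more than `tolerance`
    (relative) below the mean of the preceding `baseline_days` values."""
    if len(series) < trend_days + baseline_days:
        return False
    baseline = mean(series[-(trend_days + baseline_days):-trend_days])
    return all(v < baseline * (1 - tolerance) for v in series[-trend_days:])

# Illustrative daily activation rates: stable around 0.38, then a 3-day slide.
daily_activation = [0.38, 0.37, 0.39, 0.38, 0.36, 0.38, 0.39, 0.37, 0.38, 0.38,
                    0.37, 0.39, 0.38, 0.38, 0.33, 0.32, 0.31]
print("Investigate" if sustained_drop(daily_activation) else "Likely noise")
```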

### Specific Investigation Templates by Metric

#### Activation Rate Drops

```
Activation rate: 38% → 32% (down 6 points)

CONFIRM:
□ Is this a 2-3 day trend or just one day? (Check chart)
□ Affected cohorts: Last 7 days signups only / All cohorts equally

ISOLATE:
□ Did any traffic source drop? (Breakdown by source)
  - Organic: __ %
  - Paid ads: __ %
  - Referral: __ %
□ Did device type change?
  - Mobile: __ % vs. baseline __ % (Change: __ %)
  - Desktop: __ % vs. baseline __ % (Change: __ %)
□ Did account type change?
  - Free trial: __ % vs. baseline __ %
  - Sales-provided trial: __ % vs. baseline __ %

HYPOTHESES:
□ Onboarding changed or broke (check deploy log)
□ Aha moment definition changed (did you update product?)
□ Traffic source quality decreased (did you launch new ad campaign?)
□ Offer/messaging changed (did copywriting change?)
□ Tracking broken (is activation event still firing correctly?)

TEST:
□ If onboarding hypothesis: Check which step users drop off in. Interview 5 non-activated users.
□ If traffic quality hypothesis: Check ICP score of recent signups. Did new traffic source bring lower-intent users?
□ If tracking hypothesis: Manually test onboarding flow end-to-end. Is activation event triggering?

ACTION:
□ If tracking is broken: Fix immediately, retest
□ If onboarding broke: Roll back or fix regression
□ If traffic quality: Adjust targeting or pause underperforming source
□ If offer/messaging: Re-test messaging or revert copy
```

#### 30-Day Retention Drops

```
30-day retention: 48% → 42% (down 6 points)

CONFIRM:
□ Is this affecting all cohorts or new cohorts only?
  - Jan 1-7 cohort 30-day retention: __ %
  - Jan 8-14 cohort 30-day retention: __ %
  - Jan 15-21 cohort 30-day retention: __ %
□ Is this a product issue (all cohorts dropping) or onboarding issue (only new cohorts dropping)?

ISOLATE BY COHORT:
□ If older cohorts affected:
  - What changed in the product 35+ days ago? (Look back in deploy log)
  - Did a feature break or get deprecated?
  - Did you ship a regression that only shows up over time?
□ If new cohorts affected:
  - Onboarding or product quality issue
  - Check activation rate. Did it drop too?

HYPOTHESES:
□ Product regression (feature broken)
□ Behavioral change (users stopped using key feature)
□ Competitive loss (competitor launched alternative)
□ Seasonal drop-off (compare to same period last year)
□ Segment-specific churn (is one customer segment churning?)

TEST:
□ Breakdown retention by feature usage: Do users of feature X retain better than feature Y?
□ Breakdown retention by segment: Free vs. paid? SMB vs. enterprise? Mobile vs. desktop?
□ Check support tickets: Did churn complaints spike? What are users complaining about?
□ Interview 3-5 churners: Why did they leave?

ACTION:
□ If feature broken: Roll back or fix
□ If behavioral change: Investigate why users stopped using product. Reengagement campaign or UX fix?
□ If competitive: Monitor. Consider feature response or positioning change.
□ If segment-specific: Focus fix on that segment.
```

#### Payable Activation Drops

```
Payable activation rate: 12% → 9% (down 3 points)

CONFIRM:
□ Is this affecting trial users or freemium users? (they have different funnels)
□ Did trial length change? This affects conversion window.

ISOLATE:
□ Breakdown by plan tier:
  - Basic plan conversion: __ % vs. baseline __ %
  - Pro plan conversion: __ % vs. baseline __ %
  - Enterprise plan conversion: __ % vs. baseline __ %
□ Breakdown by acquisition source:
  - Organic: __ % vs. baseline __ %
  - Paid: __ % vs. baseline __ %
□ Breakdown by time in trial:
  - Day 3 conversion: __ %
  - Day 7 conversion: __ %
  - Day 14 conversion: __ %

HYPOTHESES:
□ Pricing changed (did you adjust plan prices?)
□ Onboarding quality degraded (lower quality signups not reaching aha moment)
□ Conversion messaging weak (did you change pricing page or upgrade CTA?)
□ Competition (did competitor launch lower price?)
□ Seasonal (compare to last year)

TEST:
□ If pricing hypothesis: Check if activation rate also dropped. If not, it's onboarding quality, not pricing.
□ If messaging hypothesis: Check if page load time increased or CTA clarity decreased. A/B test messaging.
□ If competition hypothesis: Monitor. Consider response.

ACTION:
□ If pricing: Consider reverting or running discount experiment
□ If onboarding quality: Trace back to onboarding. Did it change?
□ If messaging: Redesign pricing page or upgrade flow
□ If seasonal: Monitor. May be natural
```

---

## Part 4: Weekly Review Template

Use this template every Friday to stay on top of AARRR health:

```
WEEKLY AARRR REVIEW - [Date]

ACQUISITION
□ Signup conversion rate: __ % (baseline: __ %) [GREEN / YELLOW / RED]
□ Signups YTD: __ (pace: on track / behind)
□ CAC this week: $__ (baseline: $__)
□ Top performing source: __ (__ % of signups)
□ Alert: [Any threshold breaches? Document them]

ACTIVATION
□ Activation rate (7-day): __ % (baseline: __ %) [GREEN / YELLOW / RED]
□ Time to activation: __ min (baseline: __ min)
□ Top aha-moment achieving step: __ (__ % of users reach it)
□ Alert: [Any threshold breaches? Document them]

RETENTION
□ 7-day retention: __ % (baseline: __ %) [GREEN / YELLOW / RED]
□ 30-day retention: __ % (baseline: __ %) [GREEN / YELLOW / RED]
□ Session frequency: __ x/week (baseline: __ x/week)
□ Alert: [Any threshold breaches? Document them]

REVENUE
□ Payable activation: __ % (baseline: __ %) [GREEN / YELLOW / RED]
□ MRR: $__ (vs. last week: __) [Change: +$__ / -$__]
□ NRR: __ % (baseline: __ %)
□ Alert: [Any threshold breaches? Document them]

REFERRAL
□ NPS: __ (baseline: __ ) [GREEN / YELLOW / RED]
□ Referral participation: __ % of users
□ Top detractor theme: __ (__ % of detractors cite this)
□ Alert: [Any threshold breaches? Document them]

PRIORITIES FOR NEXT WEEK
□ Investigation #1: [Metric that dropped] → [Hypothesis] → [Action]
□ Investigation #2: [Metric that dropped] → [Hypothesis] → [Action]
□ Wins to celebrate: [What went well this week?]
□ Patterns to watch: [What should we monitor next week?]
```