Falk Gottlob · 14 min read

The AARRR Dashboard You Actually Need

Forget vanity metrics dashboards. Here's how to build a pirate metrics dashboard that tells you where your product is leaking value - and what to fix first.

outcomes · metrics · AARRR · analytics · template

The short version

A useful AARRR dashboard answers one question: where is your product leaking value, and what should you fix first? I track one core health metric per stage (signup conversion, activation rate, 30-day retention, payable activation, net referral rate), each paired with three things: a baseline, a threshold that triggers investigation, and a playbook for what to check first. Most dashboards fail because they show 30 metrics with equal weight and no action triggers. This one is built around five numbers and one rule: if a metric drops below 80% of baseline, run the playbook. Pick one stage this week, define the five numbers, write the thresholds.

I've been in a lot of data-obsessed meetings where someone pulls up a dashboard with 47 metrics and nobody knows what to do with any of them. DAU is up (good?) but engagement is down (bad?). Conversion is solid but churn is increasing. Expansion revenue is up but NRR is down. Everyone stares at the dashboard and someone says, "That's interesting."

This is what happens when you optimize for comprehensiveness instead of decision-making. You end up with a wall of numbers that describe what happened, not what to do about it.

The AARRR framework - Acquisition, Activation, Retention, Revenue, Referral - is useful because it organizes metrics by stage. But most AARRR dashboards are useless for the same reason as every other dashboard: they show you metrics, not signals. They measure activity, not health. They're backward-looking, not forward-looking.

A useful dashboard answers one question: Where is your product leaking value, and what should you fix first?

Why Most Product Dashboards Are Useless

I've seen dashboards that:

Show too many metrics with no hierarchy. You get 30 KPIs displayed with equal visual weight. Is DAU more important than session length? You don't know, so you stare at everything equally and remember nothing.

Measure activity instead of health. "We had 5,000 signups this week" is interesting but useless. "Our activation rate on signups is 28%, down from 31% last week" is actionable. One tells you what happened. The other tells you if something broke.

Have no action triggers. You watch the dashboard every day and nothing ever changes enough to act on. Then one day something is down 40% and you're in crisis mode. You needed thresholds.

Measure vanity metrics. Total users ever signed up. Total sessions. Raw DAU without context. These go up if you acquire more people, but they don't tell you if your product is getting healthier.

Disconnect from business outcomes. The dashboard shows pageviews and clicks but never connects to revenue or retention. You can optimize for pageviews and still lose customers.

The AARRR Framework Applied to Real SaaS

Let's build a dashboard that actually works. The AARRR framework has five stages. For each stage, we need:

  1. The core health metric - The one number that tells you if this stage is working
  2. Supporting metrics - Why that number moved (leading and lagging indicators)
  3. Thresholds - At what point do you care? When do you act?
  4. An investigation playbook - If the metric breaks, here's what to check first
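
If you keep these definitions next to your analytics code instead of in a slide deck, they stay honest. Here's a minimal sketch of that structure in Python; the metric names, baseline values, alert ratio, and playbook steps are illustrative placeholders, not recommendations.

```python
from dataclasses import dataclass, field

@dataclass
class StageDefinition:
    """One AARRR stage: a core metric plus the context needed to act on it."""
    stage: str                       # e.g. "Activation"
    core_metric: str                 # the one number that says if the stage works
    baseline: float                  # your normal value for the core metric
    alert_ratio: float               # investigate when metric < alert_ratio * baseline
    supporting_metrics: list[str] = field(default_factory=list)
    playbook: list[str] = field(default_factory=list)

    def needs_investigation(self, current_value: float) -> bool:
        """Apply the 'drop below X% of baseline' rule used throughout this post."""
        return current_value < self.alert_ratio * self.baseline

# Example: the Activation stage, roughly as defined later in this post.
activation = StageDefinition(
    stage="Activation",
    core_metric="activation_rate_7d",
    baseline=0.38,
    alert_ratio=0.75,
    supporting_metrics=["time_to_activation", "onboarding_completion_rate"],
    playbook=[
        "Segment by traffic source",
        "Segment by device type",
        "Check recent onboarding changes",
    ],
)

if activation.needs_investigation(current_value=0.27):
    print("\n".join(activation.playbook))
```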

Stage 1: Acquisition (Getting people to try)

The core metric: Qualified visitor-to-trial-signup conversion rate

This is the percentage of people who land on your homepage or landing page and actually sign up for a trial. For most B2B SaaS companies, this is under 5%. For freemium it might be 20-40% (signup is lower friction, so each signup carries lower intent). For enterprise with inbound leads, it might be 80%+.

What "good" looks like:

  • Most SaaS: 2-5% (landing page to signup)
  • Bottom quartile: Under 1%
  • Top quartile: Over 8%

Supporting metrics:

  • Traffic source breakdown - Where are signups coming from? Direct, paid, organic, partnership, sales-generated?
  • Time to signup decision - How long do people spend on the landing page before converting?
  • Signup source cohort quality - Which traffic sources produce users who activate best?
  • Free trial signup conversion - Of trial signups, what % actually activate (load data, create first project, etc.)?

Thresholds that trigger investigation:

  • Signup rate drops under 80% of baseline - Check: Did we change the homepage? Did we ship a feature that broke signup? Did we change traffic source mix?
  • A traffic source suddenly converts at half its normal rate - Check: Is this source sending lower-intent traffic? Did a campaign end?
  • Time to signup decision exceeds 3 minutes - Check: Is the landing page confusing? Are users bouncing on mobile?

The investigation playbook:

  1. Segment by traffic source. Which source(s) dropped?
  2. Segment by device type. Desktop or mobile?
  3. Check: Did we ship anything in the last 7 days that might affect signup flow?
  4. Check: Did we change our marketing message or target audience?
  5. Check: Is this a temporary fluctuation (within 2 std devs) or sustained?
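
The first two playbook steps are usually one query each. Here's a minimal pandas sketch, assuming you can export one row per landing-page session with a signed_up flag plus traffic_source and device columns; the file and column names are assumptions about your own event export.

```python
import pandas as pd

# Assumed export: one row per landing-page session (column names are placeholders).
sessions = pd.read_csv("landing_sessions.csv", parse_dates=["session_date"])

# Look at the last 7 days relative to the newest data in the export.
cutoff = sessions["session_date"].max() - pd.Timedelta(days=7)
recent = sessions[sessions["session_date"] >= cutoff]

# Step 1: signup conversion by traffic source (which source dropped?).
print(recent.groupby("traffic_source")["signed_up"].mean().sort_values())

# Step 2: signup conversion by device type (desktop vs. mobile).
print(recent.groupby("device")["signed_up"].mean())
```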

Stage 2: Activation (Getting to first success)

The core metric: Activation rate (% of signups who reach the aha moment in their first 7 days)

Activation is the percentage of new signups who successfully use your product in a meaningful way within some window. For most SaaS, this is "created first object" (project, document, campaign, whatever) within 7 days. For communication tools, it's "sent first message." For analytics, it's "created first report."

What matters is defining "aha moment" for your product. This is the moment a user understands value.
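
Once the aha moment is defined, the activation rate itself is a straightforward cohort calculation. A minimal pandas sketch, assuming you can export signups and aha-moment events per user; the file and column names are placeholders.

```python
import pandas as pd

# Assumed exports: one row per signup, one row per aha-moment event.
signups = pd.read_csv("signups.csv", parse_dates=["signup_date"])        # user_id, signup_date
aha_events = pd.read_csv("aha_events.csv", parse_dates=["event_date"])   # user_id, event_date

# First aha event per user, joined back onto the signup cohort.
first_aha = (aha_events.groupby("user_id")["event_date"]
             .min().rename("first_aha").reset_index())
cohort = signups.merge(first_aha, on="user_id", how="left")

# Activated = reached the aha moment within 7 days of signup.
cohort["activated"] = (cohort["first_aha"] - cohort["signup_date"]) <= pd.Timedelta(days=7)

print(f"7-day activation rate: {cohort['activated'].mean():.1%}")
```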

What "good" looks like:

  • Most SaaS: 20-40%
  • Bottom quartile: Under 15%
  • Top quartile: Over 50%

Supporting metrics:

  • Time to activation - How long does it take from signup to aha moment?
  • Onboarding completion rate - What % of signups complete your onboarding?
  • Feature discovery rate - What % of new users find the key feature they need in first week?
  • Aha moment correlation with retention - What % of activated users are retained at day 7 vs. day 30?

Thresholds that trigger investigation:

  • Activation rate drops under 75% of baseline - Check: Did we change onboarding? Did the signup cohort quality change (different traffic source)? Is there a new blocker in the onboarding flow?
  • Time to activation exceeds 180 minutes - Check: Is onboarding too long? Are users getting stuck? Did you add required steps?
  • More than 15% of activated users churn by day 7 - Check: Are users activating but not sticking? Is there a post-activation drop-off?

The investigation playbook:

  1. Segment by traffic source. Did acquisition source quality change?
  2. Segment by device type. Desktop or mobile?
  3. Run retention analysis: Of users who activated, what % come back on day 2? Day 3? Day 7?
  4. Check: Did we ship a feature that changed the aha moment definition?
  5. Check: Did we change onboarding messaging or flow?
  6. Interview 5-10 users who signed up but didn't activate. What's stopping them?

Stage 3: Retention (Coming back regularly)

The core metric: 30-day retention (% of activated users still active 30 days after activation)

Retention tells you whether people use your product as a habit or as a one-off. 30-day retention is the standard because it's long enough to exclude trial-only users but short enough to measure cohort health quickly.
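
A minimal sketch of the calculation, assuming you can export activation dates and activity events per user; the "still active at 30 days" window and the column names are assumptions to adapt to your own definition of "active".

```python
import pandas as pd

# Assumed exports (column names are placeholders).
activated = pd.read_csv("activated_users.csv", parse_dates=["activation_date"])  # user_id, activation_date
activity = pd.read_csv("activity_events.csv", parse_dates=["event_date"])        # user_id, event_date

# "Retained at 30 days" = any activity in a day 28-35 window after activation.
events = activity.merge(activated, on="user_id")
days_since = (events["event_date"] - events["activation_date"]).dt.days
retained_ids = events.loc[days_since.between(28, 35), "user_id"].unique()
activated["retained_30d"] = activated["user_id"].isin(retained_ids)

# Only score cohorts old enough for the 30-day window to have fully elapsed.
mature = activated[activated["activation_date"]
                   <= activated["activation_date"].max() - pd.Timedelta(days=35)]

print(f"30-day retention: {mature['retained_30d'].mean():.1%}")
# Playbook step 1: the same number split by weekly activation cohort.
print(mature.groupby(mature["activation_date"].dt.to_period("W"))["retained_30d"].mean())
```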

What "good" looks like:

  • Most SaaS: 40-60%
  • Bottom quartile: Under 30%
  • Top quartile: Over 70%
  • Daily habit products (messaging, social): 60-80%+

Supporting metrics:

  • 7-day retention - Early signal of habit formation (week-1 churn)
  • D1 to D7 churn slope - How fast do people drop off?
  • Weekly active users (WAU) - Are returning users coming back weekly?
  • Session frequency - How often do activated users engage per week?
  • Retention by feature - Which features have the best retention slope?

Thresholds that trigger investigation:

  • 7-day retention drops under 80% of baseline - Check: Did we ship a breaking change? Did we change the core value prop? Are new users activating on a weaker value driver?
  • 30-day retention drops under 75% of baseline - Check: Is there a systemic churn event? Did a cohort churn out? Are power users leaving?
  • Session frequency declines for existing users - Check: Did we ship a regression? Did we deprecate a popular feature?

The investigation playbook:

  1. Segment retention by cohort date. Did a specific cohort churn? Or is it across all cohorts?
  2. Segment by feature usage. Do users of feature X retain better than users of feature Y?
  3. Segment by geography/plan/customer segment. Is churn concentrated in one segment?
  4. Check: Did we ship a regression in the last 7 days?
  5. Check: Did we change pricing or terms?
  6. Survey churners. Why are they leaving?

Stage 4: Revenue (Monetization)

The core metric: Payable activation rate (% of activated users who convert to paid within 30 days)

This is where the rubber meets the road. You can have great activation and retention, but if nobody pays, you don't have a business. Payable activation is the percentage of activated trial users who convert to a paid plan.
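
The calculation mirrors the activation rate, just with a first-payment event instead of an aha event, and it extends naturally to the upgrade-rate-over-time metric listed below. A minimal sketch, with file and column names as placeholders:

```python
import pandas as pd

# Assumed exports: activation dates and each user's first payment date (placeholders).
activated = pd.read_csv("activated_users.csv", parse_dates=["activation_date"])
payments = pd.read_csv("first_payments.csv", parse_dates=["first_payment_date"])

cohort = activated.merge(payments, on="user_id", how="left")
time_to_pay = cohort["first_payment_date"] - cohort["activation_date"]

# Payable activation rate at 30 days, plus the upgrade curve at earlier checkpoints.
for days in (7, 14, 30):
    rate = (time_to_pay <= pd.Timedelta(days=days)).mean()
    print(f"Paid within {days} days of activation: {rate:.1%}")
```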

What "good" looks like:

  • Most SaaS: 5-15%
  • Bottom quartile: Under 3%
  • Top quartile: Over 20%
  • Enterprise SaaS with sales: 30-50%+

Supporting metrics:

  • Average revenue per activated user (ARPU) - How much do converting customers pay?
  • Upgrade rate over time - What % of free users upgrade at 7 days? 14 days? 30 days?
  • Trial length effect on conversion - Do 7-day trials convert better than 14-day trials?
  • Revenue by cohort - Which activation cohorts have the highest lifetime value?
  • Expansion revenue rate - For existing customers, what % expand per month?

Thresholds that trigger investigation:

  • Payable activation drops under 80% of baseline - Check: Did we change pricing? Did we change the trial period? Did trial quality change (lower-intent signups)?
  • ARPU drops under 90% of baseline - Check: Are customers choosing lower-tier plans? Did you add a lower-priced tier?
  • Expansion revenue plateaus or decreases - Check: Are power users at max spend? Did you fail to launch new features that would enable expansion?

The investigation playbook:

  1. Segment by plan tier. Which plan is converting worse?
  2. Segment by trial length. Are shorter trials converting better or worse?
  3. Segment by feature usage. Do users of premium features convert better?
  4. Check: Did we change pricing messaging or packaging?
  5. Check: Did we add a lower-priced option that cannibalizes higher-tier plans?
  6. Analyze power users (high engagement). Are they converting at a higher rate? If not, why?

Stage 5: Referral (Network growth)

The core metric: Net referral rate (% of customers who actively recommend you minus % who actively detract)

This is NPS-adjacent, but it's focused on action, not sentiment. What matters is not whether someone thinks your product is good, but whether they're actually telling other people about it.
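
A minimal sketch of the arithmetic, assuming you track two per-customer flags for whatever actions you decide count as "actively recommended" (sent an invite, used the referral link) and "actively detracted"; the flag names and sample data here are made up.

```python
# Net referral rate from per-customer action flags.
# The point is to count actions, not survey sentiment.
customers = [
    {"id": 1, "referred_someone": True,  "actively_detracted": False},
    {"id": 2, "referred_someone": False, "actively_detracted": False},
    {"id": 3, "referred_someone": False, "actively_detracted": True},
    {"id": 4, "referred_someone": True,  "actively_detracted": False},
]

promoted = sum(c["referred_someone"] for c in customers) / len(customers)
detracted = sum(c["actively_detracted"] for c in customers) / len(customers)
net_referral_rate = promoted - detracted

print(f"Net referral rate: {net_referral_rate:.0%}")  # 50% promoters - 25% detractors = 25%
```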

What "good" looks like:

  • Most SaaS: 10-20% net referral rate
  • Bottom quartile: Under 5%
  • Top quartile: Over 30%
  • Viral products: Over 40%

Supporting metrics:

  • NPS (Net Promoter Score) - Standard satisfaction metric (Promoters minus Detractors)
  • Referral loop participation - What % of customers actually use your referral feature?
  • Referred customer quality - Do referred customers have better activation, retention, and revenue than organic?
  • Time from referral to activation - How long before referred user signs up?
  • Referral source concentration - Are most referrals coming from a small number of power users, or are they spread across the customer base?

Thresholds that trigger investigation:

  • NPS drops under 80% of baseline - Check: Did we ship a regression? Did customer satisfaction dip? Are power users unhappy?
  • Referral loop participation drops - Check: Did we make the referral mechanism harder? Did we remove incentives?
  • Referred customer activation rate lower than organic - Check: Are we attracting the wrong type of users via referral? Is the referral message misrepresenting the product?

The investigation playbook:

  1. Segment NPS detractors by reason. What are the top complaints?
  2. Identify power users (high NPS + high engagement) and interview them about referrals
  3. Track referred user cohorts. How do they compare to organic?
  4. Check: Did we ship a regression that lowered satisfaction?
  5. Check: Did we remove referral incentives or make referral mechanics harder?
  6. A/B test referral messages to power users

The Dashboard Layout: From Metrics to Decisions

Here's what an actual AARRR dashboard should look like:

TOP ROW (Health Scorecard):

  • Acquisition: 2.8% signup conversion (baseline: 3.1%) ↓ 🟡 At risk
  • Activation: 35% activation rate (baseline: 38%) ↓ 🟡 At risk
  • Retention: 48% 30-day retention (baseline: 52%) ↓ 🟡 At risk
  • Revenue: 13% pay conversion (baseline: 11%) ↑ 🟢 On track
  • Referral: +22 NPS (baseline: +18) ↑ 🟢 On track

SECOND ROW (Leading Indicators):

  • Time to signup: 3.2 min (baseline: 2.8 min) ⚠️
  • Time to activation: 42 min, up from 38 (baseline: 45 min) ✓
  • Session frequency: 3.1x/week, down from 3.4 (baseline: 3.2x/week) ↓
  • ARPU trend: $185/customer (baseline: $190) ↓
  • Referral source: 15% from referral (baseline: 12%) ↑

THIRD ROW (Cohort Health):

  • Last 7 days cohort: 28% activated (vs. average: 35%) 🔴
  • Last 30 days cohort: 42% retained at 30 days (vs. average: 48%) 🟡
  • 3-month retention: 65% retained (vs. target: 70%) 🟡
  • Expansion rate: 2.3% MoM growth (target: 3%) 🟡
  • Promoters by segment: Enterprise 68%, Mid-market 45%

Color coding:

  • Green (🟢): On track or above baseline
  • Yellow (🟡): Under baseline but not critical
  • Red (🔴): Materially below baseline, needs investigation
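
If you generate the scorecard programmatically, the color coding reduces to one comparison against baseline. Here's a minimal sketch; the 80% cutoff matches the investigation thresholds used throughout this post, and higher_is_better exists because a few metrics (time to signup, time to activation) get worse as they go up.

```python
def health_status(current: float, baseline: float,
                  red_ratio: float = 0.80, higher_is_better: bool = True) -> str:
    """Map a metric against its baseline to the dashboard's traffic-light status.

    red_ratio is the 'materially below baseline' cutoff (80% here);
    flip higher_is_better for metrics like time-to-signup.
    """
    ratio = current / baseline if higher_is_better else baseline / current
    if ratio >= 1.0:
        return "🟢 On track"
    if ratio >= red_ratio:
        return "🟡 At risk"
    return "🔴 Investigate"

# The top-row scorecard from above:
print(health_status(0.028, 0.031))  # Acquisition -> 🟡 At risk
print(health_status(0.48, 0.52))    # Retention   -> 🟡 At risk
print(health_status(0.27, 0.38))    # A real break -> 🔴 Investigate
```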

How This Connects to the Product Health Agent

The Product Health Agent I described in another toolkit post synthesizes all of this automatically. But the AARRR dashboard is your weekly check-in tool. Here's the division:

Product Health Agent (daily at 4pm):

  • Synthesis of ALL metrics across all systems
  • Tells you the narrative story: Is your product healthy?
  • Detects anomalies: What changed?
  • Flags correlations: Auth errors up AND support volume up = systemic issue?

AARRR Dashboard (weekly review):

  • Focused on one question: Where are we leaking value?
  • Organized by funnel stage for easy diagnosis
  • Includes action triggers: If metric drops, here's what to check
  • Surfaces leading indicators so you act before it's too late

Together, they answer: "How is my product?" (Agent) and "What should I focus on?" (Dashboard).

The Action Framework: When You See Red

If a metric drops under your threshold, here's the decision framework:

1. ISOLATE: Is this a real drop or noise? (Look at 2-3 day trend, not one data point)
2. SEGMENT: Which cohort/source/segment is affected? (All users or specific group?)
3. CORRELATE: What changed in the product or marketing in the last 7 days?
4. PRIORITIZE: Is this about current customers (retention, revenue) or new customers (acquisition, activation)?
5. ACT: If current customers are affected, act immediately. If it's only new customers, it can wait 1-2 weeks.
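
Step 1 is the one people skip. Here's a minimal sketch of the noise check, reusing the "within 2 standard deviations" heuristic from the acquisition playbook; the window lengths and threshold are assumptions to tune against your own metric's volatility.

```python
from statistics import mean, stdev

def is_real_drop(daily_values: list[float], recent_days: int = 3,
                 z_threshold: float = 2.0) -> bool:
    """Step 1 (ISOLATE): is the recent average more than ~2 standard deviations
    below the trailing baseline, or is this just day-to-day noise?"""
    baseline_window = daily_values[:-recent_days]
    recent = daily_values[-recent_days:]
    mu, sigma = mean(baseline_window), stdev(baseline_window)
    if sigma == 0:
        return mean(recent) < mu
    return (mu - mean(recent)) / sigma > z_threshold

# 30-day retention by day: stable around 52%, then a sustained slide.
history = [0.52, 0.53, 0.51, 0.52, 0.52, 0.53, 0.52, 0.51, 0.50, 0.48, 0.48]
print(is_real_drop(history))  # True -> move on to SEGMENT
```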

Example:

You see that 30-day retention dropped from 52% to 48%. That's a 4-point drop, which is material.

1. ISOLATE: Look at the 7-day trend. Is it 52 → 50 → 48 (steady decline) or 52 → 52 → 48 (sudden drop)? A steady decline suggests a cumulative problem. A sudden drop suggests a specific event.
2. SEGMENT: Did all cohorts drop? Or just the last 7 days' signups? If it's just new cohorts, then activation or onboarding is the problem. If it's all cohorts, then something changed in the product that's making existing users leave.
3. CORRELATE: Did we ship anything in the last 7 days? Did we change pricing? Did we deprecate a feature? Did we send an email that annoyed people?
4. PRIORITIZE: This affects existing customers, so it's high priority. You need to fix this fast.
5. ACT: Send a message to engineering: "Retention dropped 4 points. Let's investigate in the next 2 hours." Set up a 30-minute diagnostic meeting.

The artifact includes a complete AARRR dashboard template, metric definitions, thresholds, and investigation playbooks for each stage. Use it as your weekly check-in tool.
