# Product Health Dashboard Agent

**Agent Name:** Product Health Dashboard Agent
**Role:** Weekly product analytics deep-dive analyst
**Frequency:** Every Tuesday at 9:00 AM
**Output Channel:** Slack (#product-analytics) + Shared dashboard
**Run Time:** ~20 minutes

---

## PURPOSE

You know the daily metrics: DAU is up 3%, retention held steady, one feature has low adoption. But you don't know why.

Is the low adoption because the feature isn't valuable? Or because it's hard to discover? Is retention steady because your product is sticky, or because you've already lost the churners and you're looking at survivorship bias in your cohorts?

The Product Health Dashboard Agent goes deeper. It analyzes feature adoption curves, compares new cohorts to historical patterns, looks at engagement depth (power users vs surface-level users), digs into performance anomalies, and shows you what's actually driving your metrics.

This is the report your product team lives by during strategic planning. This is where you make decisions about what to keep, what to kill, and what to invest in.

---

## DATA SOURCES

### 1. **Product Analytics Platform** (Mixpanel, Amplitude, or custom)
**Query:**
- Event data for last 7 days
- User segmentation (new vs returning, by cohort, by geography, by feature access)
- Feature usage events (feature_used, feature_completed, etc.)
- Feature adoption funnels
- User journey data
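
If it helps to see the shape of this pull, here is a minimal sketch in Python, assuming a generic HTTP export endpoint that returns newline-delimited JSON. The URL, auth scheme, and response format are placeholders; swap in your platform's actual export API.

```python
import datetime as dt
import json
import requests

# Placeholder export endpoint and secret -- substitute your analytics platform's
# actual raw-export API (Mixpanel, Amplitude, or your in-house equivalent).
EXPORT_URL = "https://analytics.example.com/api/export"
API_SECRET = "YOUR_API_SECRET"

def fetch_events(days: int = 7) -> list[dict]:
    """Pull raw events for a rolling window ending yesterday."""
    to_date = dt.date.today() - dt.timedelta(days=1)
    from_date = to_date - dt.timedelta(days=days - 1)
    resp = requests.get(
        EXPORT_URL,
        params={"from_date": from_date.isoformat(), "to_date": to_date.isoformat()},
        auth=(API_SECRET, ""),  # many export APIs accept the secret as the basic-auth username
        timeout=60,
    )
    resp.raise_for_status()
    # Assumes newline-delimited JSON, one event per line -- adjust to your API's response shape.
    return [json.loads(line) for line in resp.text.splitlines() if line.strip()]
```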

### 2. **User Segmentation & Cohort Data**
**Query:**
- Weekly cohort (users who joined in last 7 days)
- Historical cohorts (last 4 weeks, to compare retention curves)
- Segmentation by: plan tier, geography, persona, feature access
- Identify: Which segments have best/worst retention?
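
A sketch of the retention math the agent runs on that cohort, assuming raw activity events (with `user_id` and `timestamp`) and a signup table are available. It uses the same D1/D7/D30 definition as the report format below (returned on or after day N); column names are illustrative.

```python
import pandas as pd

def cohort_retention(events: pd.DataFrame, signups: pd.DataFrame,
                     cohort_start: str, cohort_end: str) -> dict:
    """D1/D7/D30 retention for users who signed up in [cohort_start, cohort_end].

    `events` needs columns: user_id, timestamp (any activity event).
    `signups` needs columns: user_id, signup_date.
    """
    signups = signups.copy()
    signups["signup_date"] = pd.to_datetime(signups["signup_date"])
    cohort = signups[signups["signup_date"].between(cohort_start, cohort_end)]

    events = events.copy()
    events["timestamp"] = pd.to_datetime(events["timestamp"])
    activity = events.merge(cohort, on="user_id")
    activity["days_since_signup"] = (
        activity["timestamp"].dt.normalize() - activity["signup_date"].dt.normalize()
    ).dt.days

    retention = {}
    for day in (1, 7, 30):
        # "Returned after N days" = any activity on or after day N post-signup.
        returned = activity.loc[activity["days_since_signup"] >= day, "user_id"].nunique()
        retention[f"D{day}"] = round(100 * returned / len(cohort), 1) if len(cohort) else 0.0
    return retention
```

Run the same function against the cohorts from 4 and 8 weeks ago to produce the historical comparison rows.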

### 3. **Feature Flag & A/B Test Results**
**Query:**
- All active feature flags and their rollout %
- All A/B tests in progress or completed in last week
- Test results: control vs treatment, lift, confidence interval, success metrics
- Extract: Which experiments validated assumptions? Which failed?

### 4. **Performance Monitoring Data** (DataDog, New Relic, Sentry)
**Query:**
- API latency (P50, P95, P99) by endpoint
- Error rate (trending)
- Page load time (trending)
- Uptime / availability
- Any anomalies or incidents
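
Most observability platforms return P50/P95/P99 directly; if you only have raw latency samples, a quick sketch of the computation (assuming samples are already grouped by endpoint):

```python
import numpy as np

def latency_percentiles(samples_by_endpoint: dict[str, list[float]]) -> dict[str, dict[str, float]]:
    """Compute P50/P95/P99 (in ms) from raw latency samples, per endpoint."""
    out = {}
    for endpoint, samples in samples_by_endpoint.items():
        p50, p95, p99 = np.percentile(samples, [50, 95, 99])
        out[endpoint] = {"p50": round(p50, 1), "p95": round(p95, 1), "p99": round(p99, 1)}
    return out
```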

### 5. **Feature Usage Deep Dives**
**Query:**
- For each feature released in last 60 days: adoption curve
- For mature features: usage depth (% of users who use once vs regularly)
- User action sequences (what do users do after using feature X?)
- Feature retention (of people who tried it, how many came back?)
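
A sketch of the depth bucketing used throughout the report (surface = 1 use, regular = 2-5, power = 6+), assuming a list of raw events with `user_id`, `event`, and `feature` fields. The event and field names are illustrative, so map them to your own schema.

```python
from collections import Counter

def usage_depth(feature_events: list[dict], feature: str) -> dict[str, float]:
    """Bucket users of `feature` by weekly usage count: surface (1), regular (2-5), power (6+)."""
    counts = Counter(
        e["user_id"] for e in feature_events
        if e.get("event") == "feature_used" and e.get("feature") == feature
    )
    total = len(counts)
    if not total:
        return {"surface": 0.0, "regular": 0.0, "power": 0.0}
    buckets = {"surface": 0, "regular": 0, "power": 0}
    for n in counts.values():
        if n == 1:
            buckets["surface"] += 1
        elif n <= 5:
            buckets["regular"] += 1
        else:
            buckets["power"] += 1
    return {k: round(100 * v / total, 1) for k, v in buckets.items()}
```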

---

## REPORT STRUCTURE

### 1. EXECUTIVE SUMMARY

One-paragraph snapshot of product health.

**Format:**
```
[Week of MMM DD]: Product health [strong/stable/concerning].
[Feature] adoption is [trajectory], [Feature] retention is [trajectory], performance is [status].
Key finding: [One insight that should change your strategy].
```

### 2. ADOPTION METRICS BY FEATURE

New features: How fast are they getting adopted? Mature features: Are they still sticky?

**Format:**

**New Features (Released < 30 Days)**
| Feature | Release Date | Day 1 Adoption | Day 7 Adoption | Trend | Health |
|---|---|---|---|---|---|
| [Feature] | [Date] | X% | Y% | 📈 | 🟢 |
| [Feature] | [Date] | X% | Y% | 📉 | 🟡 |

**Mature Features (Released > 30 Days)**
| Feature | Current MAU | Last Week | Trend | Depth | Retention |
|---|---|---|---|---|---|
| [Feature] | [X]% of users | [X]% | ↑ | [Power:X% / Surface:Y%] | [X]% week-over-week |
| [Feature] | [X]% of users | [X]% | → | [Power:X% / Surface:Y%] | [X]% week-over-week |

**Adoption Status Indicators:**
- 🟢 On track (adoption curve matches projection)
- 🟡 Slower than expected (but still acceptable)
- 🔴 Below expectation (requires investigation)
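
A sketch of how the Day N adoption columns above could be computed, assuming "adoption" means the share of users active in the window who fired the feature's usage event within N days of release. Definitions vary, so align this with how your team defines adoption; event and column names are illustrative.

```python
import pandas as pd

def day_n_adoption(events: pd.DataFrame, feature: str, release_date: str, days: int) -> float:
    """% of users active in the first `days` days post-release who used `feature`.

    `events` needs columns: user_id, event, feature, timestamp.
    """
    events = events.copy()
    events["timestamp"] = pd.to_datetime(events["timestamp"])
    release = pd.Timestamp(release_date)
    window = events[events["timestamp"].between(release, release + pd.Timedelta(days=days))]
    active = window["user_id"].nunique()
    adopters = window.loc[
        (window["event"] == "feature_used") & (window["feature"] == feature), "user_id"
    ].nunique()
    return round(100 * adopters / active, 1) if active else 0.0
```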

### 3. COHORT RETENTION ANALYSIS

How well do new users stick around? Are we getting better or worse at retaining them?

**Format:**
```
Weekly Cohort (Users who joined MMM DD - MMM DD):
- Day 1 Retention: X% (new users who returned after 1 day)
- Day 7 Retention: Y% (new users who returned after 7 days)
- Day 30 Retention: Z% (new users still active after 30 days)

Comparison to Historical:
- 4 weeks ago: D1: X%, D7: Y%, D30: Z% | Trend: [↑/→/↓]
- 8 weeks ago: D1: X%, D7: Y%, D30: Z% | Trend: [↑/→/↓]

Interpretation: [Is retention improving? Are new users less/more sticky than historical?]
```

**Breakdown by Segment:**
```
This Week's Cohort Retention by Plan Tier:
- Free tier: Day 7: X% | Day 30: Y%
- Pro tier: Day 7: X% | Day 30: Y%
- Enterprise: Day 7: X% | Day 30: Y%

Interpretation: [Which segments are stickiest? Is paid retention better than free?]
```

### 4. FEATURE ENGAGEMENT DEPTH ANALYSIS

Who's using features deeply vs just dipping a toe in?

**Format:**
```
**[Feature Name]**
- Total Users: X
- Surface Users (single use): Y% of users | Z% of total usage
- Regular Users (2-5 uses): Y% of users | Z% of total usage
- Power Users (6+ uses): Y% of users | Z% of total usage

Engagement Path: [What do power users do differently? Do they combine this feature with others?]
Stickiness: [Do users who use this feature deeply have higher retention?]
```
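
For the "Stickiness" line, one way to answer it is to compare next-week return rates across depth buckets. A sketch, assuming you already have each user's bucket for this week and the set of users active the following week:

```python
def stickiness_by_depth(depth_bucket: dict[str, str],
                        active_next_week: set[str]) -> dict[str, float]:
    """Week-over-week return rate per depth bucket.

    `depth_bucket` maps user_id -> "surface" | "regular" | "power" for this week.
    `active_next_week` is the set of user_ids seen in the following week.
    """
    totals, returned = {}, {}
    for user, bucket in depth_bucket.items():
        totals[bucket] = totals.get(bucket, 0) + 1
        if user in active_next_week:
            returned[bucket] = returned.get(bucket, 0) + 1
    return {b: round(100 * returned.get(b, 0) / totals[b], 1) for b in totals}
```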

### 5. PERFORMANCE TRENDS (Week-over-Week)

Is the product getting faster or slower?

**Format:**
```
API Performance:
| Endpoint | P50 (ms) | P95 (ms) | Last Week | Trend | Status |
|---|---|---|---|---|---|
| [Endpoint] | 145 | 320 | 142 / 310 | ↑ (slower) | 🟡 |
| [Endpoint] | 89 | 195 | 91 / 190 | → (stable) | 🟢 |

Error Rate:
- This week: X.X% | Last week: Y.Y% | Trend: [↑/→/↓] | Status: [🟢/🟡/🔴]
- Critical errors: [N] | Last week: [N] | Trend: [Up/Flat/Down]

Uptime:
- This week: X.XX% | Last week: Y.YY% | Any incidents: [None/[incident name]]
```

**Anomalies:**
```
🔴 ALERT: [Endpoint] latency jumped 40% starting Thursday at 2pm.
- Correlation: Correlates with [feature launch / deployment / traffic spike]
- Status: [Being investigated by eng / resolved]
- Impact: [If unresolved, X users affected / feature may timeout]
```

### 6. A/B TEST RESULTS & LEARNINGS

Which experiments concluded last week, and what did they teach you?

**Format:**
```
**Completed A/B Tests (Last Week)**

[Test Name]
- Hypothesis: [What you thought would happen]
- Result: [Control: X% | Treatment: Y% | Lift: +Z%]
- Confidence: [X%] | [Statistically significant? Yes/No]
- Decision: [Ship / Iterate / Kill]
- Learning: [What did you learn about user behavior?]
- Impact: [Revenue / retention / engagement impact if shipped]

[Test Name]
[Same format]

**Active Tests**
- [Test name]: Running since [date], N=[sample size], power=[X%]
- [Test name]: Running since [date], N=[sample size], power=[X%]
```
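
For conversion-style success metrics, the "Statistically significant?" call can be made with a two-proportion z-test. A self-contained sketch follows; for continuous metrics or sequential testing, lean on your experimentation tool's own stats engine instead.

```python
import math

def two_proportion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int) -> dict:
    """Two-sided z-test comparing treatment (b) vs control (a) conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se if se else 0.0
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return {
        "control_pct": round(100 * p_a, 2),
        "treatment_pct": round(100 * p_b, 2),
        "lift_pct_points": round(100 * (p_b - p_a), 2),
        "p_value": round(p_value, 4),
        "significant_at_95": p_value < 0.05,
    }
```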

### 7. USAGE ANOMALY DEEP-DIVE

If any anomalies were flagged in daily reports, dig deeper here.

**Format:**
```
**[Anomaly Title]**
- Anomaly: [Feature usage dropped 60% on Thursday afternoon]
- Potential causes investigated:
  - Feature flag status: [Was flag modified?]
  - Code deployment: [Did something ship?]
  - Product announcement: [Was feature de-prioritized?]
  - External event: [Did competitors launch? Did media cover us?]
- Root cause identified: [Most likely explanation]
- Action taken: [What did you do about it?]
- Outcome: [Has it resolved? Is it still being monitored?]
```

### 8. STRATEGIC INSIGHTS & IMPLICATIONS

What should product strategy be based on this data?

**Format:**
```
**Insight 1: [Title]**
- Data: [What the data shows]
- Implication: [What this means for your strategy]
- Recommendation: [What should you do about it?]
- Owner: [Who should own this decision?]

**Insight 2: [Title]**
[Same format]
```

**Example Insights:**
- "Adoption curve for [Feature] is 3x faster than [Previous Feature] released same way. What's different?"
- "Day 7 retention for new cohorts has declined 5% in last 4 weeks. Starting to see early churn signal."
- "[Feature] has high surface usage (many try it) but low depth (power users: 2%). Indicates feature discovery issue, not value issue."
- "Enterprise plan users have 40% higher Day 30 retention than free users. Opportunity for upsell maturity framework."

---

## HOW TO SET IT UP

### Step 1: Connect Analytics Platform
- Get API access to Mixpanel/Amplitude/your analytics tool
- Configure agent to pull: daily events, user cohorts, feature usage data
- Set time window to last 7 days (rolling window)

### Step 2: Define Key Features to Track
- List all features released in the last 60 days (the features whose adoption you care about)
- List mature features you're monitoring (features > 90 days old)
- For each, define: success metric, adoption target, retention target
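
One lightweight way to encode these targets is a small config the agent reads at run time. The feature names, dates, and numbers below are purely illustrative:

```python
# Illustrative only -- replace feature names, metrics, dates, and targets with your own.
TRACKED_FEATURES = {
    "bulk_export": {                      # released in the last 60 days
        "released": "2024-05-02",
        "success_metric": "feature_completed",
        "day7_adoption_target": 0.15,     # 15% of active users within 7 days
        "week1_retention_target": 0.40,
    },
    "saved_views": {                      # mature feature under monitoring
        "released": "2023-11-14",
        "success_metric": "feature_used",
        "mau_target": 0.30,
        "wow_retention_target": 0.60,
    },
}
```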

### Step 3: Set Segment Breakdowns
- Decide how to segment users: plan tier, geography, persona, feature access, company size
- For each segment, define: what metrics matter, what's healthy, what's concerning

### Step 4: Connect Performance Data
- Get access to your observability platform (DataDog, New Relic, Sentry)
- Configure agent to pull: API latency (by endpoint), error rates, uptime

### Step 5: Connect Experimentation Data
- Set up integration with A/B testing tool (Statsig, LaunchDarkly, custom)
- Configure agent to pull: active tests, completed tests, results

### Step 6: Define Anomaly Thresholds
- What constitutes an "anomaly"? (e.g., 30% drop in daily usage, P95 latency increase >50%)
- What daily reports feed into this agent? (Define data dependencies)
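
The anomaly thresholds can live in the same kind of config, together with a helper that flags week-over-week changes that cross them. The numbers mirror the examples above and are only a starting point:

```python
# Example thresholds -- tune to your own traffic and latency baselines.
ANOMALY_THRESHOLDS = {
    "daily_usage_drop_pct": 30,      # flag drops of 30%+ in daily feature usage
    "p95_latency_increase_pct": 50,  # flag P95 latency increases of 50%+
    "error_rate_increase_pct": 100,  # flag error rates that double
}

def pct_change(current: float, previous: float) -> float:
    return 100 * (current - previous) / previous if previous else 0.0

def flag_anomalies(current: dict, previous: dict) -> list[str]:
    """Compare this week's metrics to last week's and return human-readable flags.

    Both dicts map metric name -> value, e.g. {"p95_latency_ms": 320, ...}.
    """
    flags = []
    usage_drop = -pct_change(current.get("daily_feature_usage", 0),
                             previous.get("daily_feature_usage", 0))
    if usage_drop >= ANOMALY_THRESHOLDS["daily_usage_drop_pct"]:
        flags.append(f"Feature usage down {usage_drop:.0f}% week-over-week")
    latency_jump = pct_change(current.get("p95_latency_ms", 0),
                              previous.get("p95_latency_ms", 0))
    if latency_jump >= ANOMALY_THRESHOLDS["p95_latency_increase_pct"]:
        flags.append(f"P95 latency up {latency_jump:.0f}% week-over-week")
    error_jump = pct_change(current.get("error_rate", 0), previous.get("error_rate", 0))
    if error_jump >= ANOMALY_THRESHOLDS["error_rate_increase_pct"]:
        flags.append(f"Error rate up {error_jump:.0f}% week-over-week")
    return flags
```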

### Step 7: Schedule
- Set to run every Tuesday at 9:00 AM
- Set timezone to your primary office timezone

---

## SAMPLE PROMPT (Customizable)

```
You are a product analytics expert. Your job is to produce a weekly deep-dive dashboard that goes beyond daily metrics.

DATA INPUTS:
- Raw event data from [analytics platform]
- User cohort data (this week vs last 4 weeks)
- Feature flag and A/B test results
- Performance monitoring data
- Daily anomaly flags

INSTRUCTIONS:
1. For each feature released in last 60 days, plot adoption curve: Day 1, Day 3, Day 7, Day 30. Show trend vs projections.
2. For mature features (>30 days old), analyze: % of MAU, usage depth (power users vs surface users), retention.
3. Analyze this week's user cohort: D1, D7, D30 retention. Compare to historical cohorts from 4 and 8 weeks ago.
4. For each user segment (plan tier, geography, persona), compare: adoption, retention, engagement depth. Identify outliers.
5. Summarize all A/B test results from last week: hypotheses, results, confidence, decision, business impact.
6. List any performance anomalies: what degraded, when, correlation analysis, root cause.
7. Identify 2-3 strategic insights: what should change about product roadmap/strategy based on this data?

TONE: Data-driven, analytical, actionable.
OUTPUT: Markdown formatted for Slack and shared dashboard.
```

---

## FREQUENCY & TIMING

- **Frequency:** Every Tuesday
- **Time:** 9:00 AM
- **Runtime:** ~20 minutes
- **Timezone:** Your company's primary timezone

---

## WHAT SUCCESS LOOKS LIKE

✓ Feature adoption decisions are based on data (adoption curves, not gut feel)
✓ You can distinguish between "feature isn't valuable" and "feature has discovery problem"
✓ Retention trends are visible early (you don't find out about cohort decline in the quarterly review)
✓ Performance issues are caught before customers complain
✓ Experimentation insights drive roadmap decisions
✓ Quarterly strategy planning uses this data as the foundation

