Falk Gottlob · 5 min read

NPS and CSAT Deep Dive Agent

Daily and weekly analysis of your NPS and CSAT scores. Segment breakdowns, feature drivers, and what's actually making your customers happy or frustrated.

agents · metrics · customer-feedback


The short version

The NPS/CSAT Deep Dive agent runs a daily snapshot at 4 PM and a weekly deep dive on Monday at 8 AM. It pulls survey responses from Typeform, Delighted, or SurveySparrow, maps them to customer segment data in your CRM, and surfaces the actual drivers behind your score. The output isn't "NPS is 47"; it's "NPS is 47, enterprise is at 58, SMB is at 31, and the #1 detractor driver is onboarding speed." The daily run catches major-customer detractors the same day; the weekly run shows cohort shifts and feature correlations. To get started, connect your survey platform and ask for the top three detractor themes from last month's responses.

You get the NPS email every month. A number. Maybe it went up or down 2 points. You don't really know what changed or why.

That's because raw NPS is almost useless without context. An NPS of 45 could mean: your product is solid but there's a specific feature that's driving detractors crazy. Or: one customer segment loves you and another hates you. Or: a recent feature broke something and sentiment is crashing.

The NPS and CSAT agent pulls back the curtain. Every day it runs fresh sentiment analysis on your customer feedback. Every Monday it delivers a detailed breakdown: which segments are most satisfied, which features are moving the needle, what detractors actually complain about.

How It Works

The agent connects to three core data sources (survey platform, CRM, product analytics; see Data Sources and Setup below) and runs on two cadences:

Daily snapshot (4 PM): Fresh surveys that came in today, sorted by detractor/neutral/promoter. If a high-value customer just marked you as a detractor, you know it same day.

Weekly deep dive (Monday AM): Segment breakdown showing NPS by customer size, industry, use case. Feature analysis - which features are most mentioned in promoter feedback vs. detractor feedback. Cohort analysis - when did each segment's sentiment shift?

The output isn't just "NPS is 47." It's: "NPS is 47. Enterprise segment is 58 (promoters), mid-market is 42 (neutral), SMB is 31 (detractors). The #1 driver for detractors is onboarding speed. The #1 driver for promoters is our API reliability."
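The per-segment numbers above come from the standard NPS formula, percent promoters (9-10) minus percent detractors (0-6), computed once per CRM segment. A minimal sketch, assuming responses arrive as dicts with hypothetical `segment` and `score` keys:

```python
from collections import defaultdict

def nps(scores):
    """NPS = % promoters (9-10) minus % detractors (0-6)."""
    if not scores:
        return None
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

def nps_by_segment(responses):
    """Group survey responses by CRM segment and score each group."""
    by_segment = defaultdict(list)
    for r in responses:
        by_segment[r["segment"]].append(r["score"])
    return {seg: nps(scores) for seg, scores in by_segment.items()}

responses = [
    {"segment": "enterprise", "score": 10},
    {"segment": "enterprise", "score": 9},
    {"segment": "smb", "score": 3},
    {"segment": "smb", "score": 8},
]
print(nps_by_segment(responses))  # {'enterprise': 100, 'smb': -50}
```

Note that a score of 7-8 (passive) counts toward the denominator but neither group, which is why the SMB score above goes negative on one detractor out of two responses.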

Data Sources and Setup

Prerequisites: Complete the Claude setup guide first. You'll need:

  • Survey platform: Typeform, Delighted, or SurveySparrow - pulls NPS/CSAT responses and open feedback
  • CRM: Maps responses to customer segment, size, use case
  • Analytics: Feature usage data to correlate with sentiment (are users who tried feature X more likely to be promoters?)
  • Interview synthesis: Optional - pulls themes from past customer interviews to ground analysis
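As a concrete example of the survey-platform connection, here is a sketch of pulling recent responses from Delighted's REST API (which uses HTTP basic auth with the API key as the username) and normalizing them into the shape the agent consumes. The `segment` person property is an assumption about how your CRM attaches segment data; check your own setup and the Delighted API reference before relying on these field names.

```python
import base64
import json
import urllib.request

def fetch_delighted_responses(api_key, since_ts):
    """GET survey responses from Delighted submitted after `since_ts`.

    Delighted authenticates with the API key as the basic-auth
    username and an empty password; `since` is a Unix timestamp.
    """
    url = f"https://api.delighted.com/v1/survey_responses.json?since={since_ts}"
    token = base64.b64encode(f"{api_key}:".encode()).decode()
    req = urllib.request.Request(url, headers={"Authorization": f"Basic {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def normalize(raw):
    """Reduce a raw Delighted response to the fields the agent needs."""
    props = raw.get("person_properties") or {}
    return {
        "score": raw["score"],
        "comment": raw.get("comment") or "",
        "segment": props.get("segment"),
    }
```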

Schedule: Daily at 4 PM (fresh responses). Weekly Monday at 8 AM (deep dive).

The Claude Prompt

You are analyzing NPS and CSAT data to identify drivers and trends.

Here are this week's survey responses:
[SURVEY DATA: NPS/CSAT scores, open feedback, response dates]

Here's our customer segmentation:
[CUSTOMER SEGMENTS: size, industry, cohort, use case]

Here's feature usage:
[FEATURE DATA: which customers used which features this month]

Please analyze and report:

1. **Segment Breakdown**
   - NPS/CSAT score by: company size, industry, tenure, use case
   - Which segment shifted most from last week?
   - Are any segments below 30 (warning level)?

2. **Driver Analysis**
   - Read all open feedback from detractors (scores 0-6)
   - What are the top 3 complaints? (Group by theme)
   - Which features are mentioned most in detractor feedback?
   - Read all open feedback from promoters (scores 9-10)
   - What are the top 3 reasons for satisfaction?
   - Which features are mentioned most in promoter feedback?

3. **Feature Correlation**
   - Did customers who used [Feature X] give higher scores?
   - Did customers who had trouble with [Feature Y] give lower scores?

4. **Cohort Analysis** (if this is a weekly report)
   - Show trend for each segment over last 4 weeks
   - Which segments are improving? Which are declining?
   - When did each segment's sentiment shift (if there was a change)?

5. **Recommended Actions**
   - What's the ONE driver we should fix to improve detractor scores?
   - What are we doing really well that we should keep doing?
   - Which segment needs the most attention?

Format as a clear, actionable report. Include specific quotes from feedback where relevant.
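To wire the prompt up, each bracketed placeholder gets substituted with fresh data before the result is sent to Claude. A sketch using the `anthropic` Python SDK; the template below abbreviates the full prompt above, and the model name is a placeholder, so substitute whatever model you run the fleet on:

```python
PROMPT_TEMPLATE = """You are analyzing NPS and CSAT data to identify drivers and trends.

Here are this week's survey responses:
{survey_data}

Here's our customer segmentation:
{segments}

Here's feature usage:
{feature_data}

...rest of the prompt above...
"""

def build_prompt(survey_data, segments, feature_data):
    """Substitute fresh data exports into the prompt template."""
    return PROMPT_TEMPLATE.format(
        survey_data=survey_data, segments=segments, feature_data=feature_data
    )

def run_analysis(prompt, model="claude-sonnet-4-20250514"):
    # Requires the `anthropic` package and ANTHROPIC_API_KEY in the
    # environment; the default model name is a placeholder.
    import anthropic
    client = anthropic.Anthropic()
    message = client.messages.create(
        model=model,
        max_tokens=4096,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text
```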

What This Delivers

Instead of monthly numbers that tell you nothing:

  • Daily: Immediate alert if a major customer becomes a detractor
  • Weekly: Clear segmentation - you know exactly which cohort is struggling and why
  • Actionable insights: "70% of SMB detractors mention slow onboarding. Enterprise customers love our API. Mid-market is split between those who love us and those frustrated with integrations."
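Stats like "70% of SMB detractors mention slow onboarding" fall out of a simple theme count over detractor comments. A rough sketch with a hand-rolled keyword map (in the real agent, Claude does the thematic grouping from open text, so the themes and keywords here are purely illustrative):

```python
from collections import Counter

THEMES = {  # hypothetical keyword map; Claude normally infers these themes
    "onboarding": ["onboarding", "setup", "getting started"],
    "integrations": ["integration", "webhook", "api"],
}

def detractor_theme_share(responses, segment):
    """Percent of detractor comments in `segment` mentioning each theme."""
    detractors = [r for r in responses
                  if r["segment"] == segment and r["score"] <= 6]
    counts = Counter()
    for r in detractors:
        text = r["comment"].lower()
        for theme, keywords in THEMES.items():
            if any(k in text for k in keywords):
                counts[theme] += 1
    n = len(detractors) or 1
    return {t: round(100 * c / n) for t, c in counts.items()}

smb_feedback = [
    {"segment": "smb", "score": 2, "comment": "Onboarding took weeks"},
    {"segment": "smb", "score": 4, "comment": "Slow setup process"},
    {"segment": "smb", "score": 5, "comment": "Webhook integration broke"},
    {"segment": "smb", "score": 9, "comment": "Love the product"},
]
print(detractor_theme_share(smb_feedback, "smb"))
# {'onboarding': 67, 'integrations': 33}
```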

Real outcomes:

  • You stop guessing at which features matter and focus on what detractors actually complain about
  • You can celebrate what's working (and double down on it) instead of only fixing fires
  • Segment-specific roadmaps - SMB needs faster onboarding, enterprise needs integration partners

For the full agent fleet and scheduling details, see Your AI Agent Fleet.

Sources: Typeform, Delighted, SurveySparrow.
