
The short version
The Interview Synthesis agent reads every customer interview transcript from the week (Otter, Fireflies, manual notes) and produces one synthesis report every Wednesday at 10 AM. The report has six sections: recurring themes (3+ interviews), top pains vs. gains, feature mentions, testable hypotheses, contradictions, and segment insights. The point: signal only emerges across 8+ interviews, and manually reading 12 transcripts takes 6+ hours. The agent does it in minutes, with customer quotes preserved. Feed in your last 10 transcripts and segment metadata, then ask for the top three testable hypotheses.
You've done 12 customer interviews this month. You have 12 Otter transcripts. You probably listened to 6 of them and skimmed notes on the others.
The problem is synthesis. Individual interviews are full of noise - off-topic tangents, product complaints that apply to 1 customer, funny anecdotes. The signal only emerges when you connect dots across 8+ interviews. But manually reading 12 transcripts and building a theme map takes 6+ hours.
The Interview Synthesis agent does it in minutes. Every Wednesday, it:
- Pulls all interview transcripts and notes from the week
- Identifies recurring themes and patterns
- Surfaces quotes that prove each theme
- Extracts testable hypotheses
- Flags surprising insights or contradictions
You get a single report that's actually usable for roadmap planning.
How It Works
The agent works in three steps:
Transcript parsing: Reads interview transcripts (Otter, Fireflies, manual notes) and extracts who was interviewed (segment, role, company size), key quotes, pain points mentioned, and features discussed.
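To make the parsing step concrete, here's a minimal sketch of the structured record the agent could produce per transcript. The field names are illustrative assumptions, not the agent's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class ParsedInterview:
    """Structured record extracted from one raw transcript (illustrative schema)."""
    interviewee_role: str          # e.g. "Head of Engineering"
    segment: str                   # e.g. "SMB", "enterprise", "free tier"
    company_size: int              # headcount, joined in from the CRM
    key_quotes: list[str] = field(default_factory=list)
    pain_points: list[str] = field(default_factory=list)
    features_discussed: list[str] = field(default_factory=list)
```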
Theme extraction: Looks across all interviews for recurring patterns. Not just "they mentioned onboarding" but "8/12 interviewees said onboarding was slow AND they tried to find workarounds that failed AND it took them 2+ weeks to get production-ready."
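The agent does this with language understanding rather than exact string matching, but the counting logic reduces to something like the sketch below, assuming pain points have already been normalized to canonical theme labels (it reuses `ParsedInterview` from the sketch above):

```python
from collections import Counter

def recurring_themes(interviews: list[ParsedInterview], min_mentions: int = 3) -> dict[str, int]:
    """Return themes mentioned in at least min_mentions interviews."""
    counts = Counter()
    for interview in interviews:
        # Count each theme once per interview, so one chatty customer
        # can't inflate a theme into a false pattern.
        for theme in set(interview.pain_points):
            counts[theme] += 1
    return {theme: n for theme, n in counts.items() if n >= min_mentions}
```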
Hypothesis formation: Groups themes into testable statements. "Early-stage startups struggle with onboarding speed because they lack in-house engineering expertise to handle our API docs alone." Not "we should make onboarding faster" - specific enough to test.
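A hypothesis, in this scheme, is just a theme plus a causal claim plus a way to check it. A minimal sketch, with illustrative values:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    statement: str       # specific, testable claim tied to a segment
    source_theme: str    # the recurring theme it was derived from
    suggested_test: str  # how you'd validate or falsify it

h = Hypothesis(
    statement=("Early-stage startups struggle with onboarding speed because they "
               "lack in-house engineering expertise to handle our API docs alone"),
    source_theme="onboarding is slow",
    suggested_test="Offer 5 early-stage signups a guided onboarding call; compare time-to-production",
)
```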
Data Sources and Setup
Prerequisites: Complete the Claude setup guide first. You'll need the four sources below (a sketch of wiring them together follows the list):
- Otter.ai or Fireflies: Connected to pull transcripts and auto-generated summaries
- Notion or Google Docs: Notes and research repository
- CRM: Maps interviews to customer segment, size, industry
- Previous interviews: Historical data so the agent can identify new vs. recurring themes
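Wiring these together might look like the following. Every provider name, key, and path here is an assumption for illustration, not a real SDK or the agent's actual configuration; substitute your own stack:

```python
# Illustrative source configuration for the four prerequisites above.
SOURCES = {
    "transcripts": {"provider": "otter", "lookback_days": 7},
    "notes": {"provider": "notion", "database": "Research Repository"},
    "crm": {"provider": "your_crm", "fields": ["segment", "company_size", "industry"]},
    "history": {"path": "past_syntheses/", "purpose": "separate new from recurring themes"},
}
```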
Schedule: Weekly Wednesday at 10 AM. Analyzes all interviews from the past week.
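If you self-host the agent rather than using a managed scheduler, the cadence is easy to reproduce with the `schedule` library. A minimal sketch; `run_synthesis` is a stand-in for whatever pulls the week's transcripts, builds the prompt, and calls Claude:

```python
import time
import schedule  # pip install schedule

def run_synthesis():
    ...  # pull the week's transcripts, build the prompt, call Claude

# Matches the agent's cadence: every Wednesday at 10 AM (local time).
schedule.every().wednesday.at("10:00").do(run_synthesis)

while True:
    schedule.run_pending()
    time.sleep(60)
```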
The Claude Prompt
You are synthesizing customer interview transcripts into insights.
Here are the transcripts from this week's interviews:
[INTERVIEW DATA: transcripts, notes, who was interviewed]
Here's context on each interviewee:
[SEGMENT DATA: company size, industry, role, use case]
Please analyze and report:
1. **Recurring Themes**
- For each theme that appears in 3+ interviews, show:
  - How many interviews mentioned it, and in what context?
  - Representative quotes (2-3 best examples)
  - Which segments mentioned it most?
  - What's the underlying problem?
2. **Pains vs. Gains**
- Top 3 pain points mentioned (prioritize by frequency + severity)
- Top 3 gains/positive moments mentioned
- How do these differ by segment?
3. **Feature Mentions**
- Which existing features did they love or struggle with?
- Which features did they ask for?
- Did anyone mention competitors doing something better?
4. **Testable Hypotheses**
- Convert each major theme into a testable statement
- Example: "Small engineering teams abandon API onboarding because our docs assume they have a dedicated integration engineer"
- For each hypothesis, suggest how you'd test it
5. **Contradictions**
- Did any interviews contradict each other?
- Can you explain the contradiction (different segment? different use case?)
6. **Segment Insights**
- What's unique about enterprise interviews vs. SMB vs. free tier?
- Are different segments solving the same problem differently?
Format as a scannable report. Use quotes extensively - I want to hear the customer voice, not just your summary.
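One way to run this prompt programmatically is through the Anthropic Python SDK. A minimal sketch: `PROMPT`, `interview_data`, and `segment_data` are assumed variables you assemble from your transcripts and CRM export, and the model name is a placeholder for whichever current model you use:

```python
import anthropic

# Fill the two bracketed placeholders in the prompt template above.
prompt = (
    PROMPT
    .replace("[INTERVIEW DATA: transcripts, notes, who was interviewed]", interview_data)
    .replace("[SEGMENT DATA: company size, industry, role, use case]", segment_data)
)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
message = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumption: swap in your preferred model
    max_tokens=4096,
    messages=[{"role": "user", "content": prompt}],
)
print(message.content[0].text)
```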
What You Get
Instead of 12 scattered transcripts you never quite process:
- Synthesis in one place: Key themes, supporting evidence, segment breakdown
- Actionable hypotheses: "Startups without dedicated DevOps can't implement our security features alone" - specific enough to act on
- Conversation velocity: You can reference past themes in future interviews and dig deeper instead of re-discovering the same pain points
Real outcomes:
- Roadmap planning driven by customer evidence instead of gut feel
- New interviewer can read the synthesis instead of listening to all 12 recordings
- You spot contradictions early ("Enterprise loves our API, but SMB finds it confusing") and can adjust messaging
For the full agent fleet and scheduling details, see Your AI Agent Fleet.