The Interview Guide That Actually Works

Customer interviews still matter more than ever. But now you show up with full signal context, a working prototype in hand, and AI that synthesizes the conversation before you close your notebook.

Falk Gottlob · 12 min read

Why interviews matter more now, not less

You might think that AI signal processing replaces customer interviews. It doesn't. It makes them essential in a different way.

When your agents are processing thousands of support tickets, sales calls, and NPS responses, you have breadth. You know what customers are saying at scale. What you don't have is depth. You don't know why they feel that way. You don't know the context behind the complaint. You don't know the workaround they've built, the emotion behind the frustration, or the thing they haven't said yet because nobody asked.

That's what interviews are for. Depth. Understanding. The human insight that no amount of data processing can replace.

But here's what's changed: you no longer walk into an interview blind. You walk in knowing exactly what this customer has been struggling with. You've seen their support tickets. You've read the sentiment from their NPS score. You've reviewed their usage patterns. And you're carrying a prototype that might solve their problem.

That's a different caliber of conversation.

The AI-powered interview: before, during, after

The interview itself is still a human conversation. But everything around it has changed.

Before the interview:

In the old model, you'd prepare a few questions and hope the conversation went somewhere useful. Maybe you'd reviewed one or two past interactions with this customer.

Now, your prep agent pulls everything relevant about this customer before the call. Their support history. Their usage patterns. Features they use and don't use. How long they've been a customer. Their NPS score and comments. Any mentions in sales notes or CS logs.

You show up knowing: "This customer submitted 3 tickets about export speed in the last month. Their usage dropped 20% after our last release. They rated us a 6 on NPS and wrote 'exports are painful for our team.'"

That changes your opening question from "Tell me about your experience" to "I noticed your team has been running into export issues. Walk me through what happened last time."

You're already past the surface. You're in the problem from the first minute.
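
If you want to automate that prep step, a minimal sketch in Python might look like the following, assuming your own systems can export a customer's tickets, usage, and NPS. Everything here (the fetch_* stubs, PrepBrief, build_brief) is a hypothetical illustration, not a specific agent framework.

```python
from dataclasses import dataclass, field

# Hypothetical stand-ins for your own data sources; swap in real queries.
def fetch_tickets(customer_id: str) -> list[dict]:
    return [
        {"subject": "Export timed out again", "created": "2024-05-02"},
        {"subject": "CSV export stuck at 80%", "created": "2024-05-10"},
        {"subject": "Large export fails", "created": "2024-05-21"},
    ]

def fetch_usage_delta(customer_id: str) -> float:
    return -0.20  # usage change since the last release

def fetch_nps(customer_id: str) -> dict:
    return {"score": 6, "comment": "exports are painful for our team"}

@dataclass
class PrepBrief:
    customer_id: str
    ticket_themes: list[str] = field(default_factory=list)
    usage_delta: float = 0.0
    nps: dict = field(default_factory=dict)

    def opening_question(self) -> str:
        # Turn the strongest signal into a specific, non-leading opener.
        if self.ticket_themes:
            return (f"I noticed your team has been running into {self.ticket_themes[0]} "
                    "issues. Walk me through what happened last time.")
        return "Walk me through the last time you used the product for real work."

def build_brief(customer_id: str) -> PrepBrief:
    tickets = fetch_tickets(customer_id)
    themes = ["export"] if any("export" in t["subject"].lower() for t in tickets) else []
    return PrepBrief(customer_id, themes, fetch_usage_delta(customer_id), fetch_nps(customer_id))

if __name__ == "__main__":
    brief = build_brief("acme-123")
    print(f"Usage {brief.usage_delta:+.0%}, NPS {brief.nps['score']}")
    print("Opener:", brief.opening_question())
```

The point isn't the scoring logic; it's that the opener comes from this customer's own signals instead of a generic script.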

During the interview:

You're still asking questions, listening, probing. That hasn't changed. What's changed is that you have a prototype ready. More on this in a moment.

AI transcription runs in real time (Gong, Fireflies, Otter, or even the built-in tools in Zoom and Meet). You don't take notes. You listen. You're fully present in the conversation instead of splitting attention between listening and writing.

After the interview:

The old model: spend 30 minutes writing up your notes. Compare manually with other interviews. Hope you remember the important parts.

Now: your synthesis agent processes the transcript within minutes. It extracts: key problems mentioned, emotional intensity, workarounds described, features discussed, competitive mentions, and any commitments or expectations. It compares this interview against the patterns from your signal data. "This customer's export frustration matches a pattern seen in 42 support tickets this month. Their workaround, exporting in small batches, is mentioned by 15% of enterprise users."

You spend 5 minutes reviewing the synthesis instead of 30 minutes creating it. And the synthesis is connected to your broader signal picture, not isolated in a notebook.
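
One way to wire that up is sketched below, assuming you already have the transcript text and a one-paragraph summary of current signal patterns. The prompt wording and call_llm are placeholders for whichever model client you use; the canned response exists only so the sketch runs end to end.

```python
import json

SYNTHESIS_PROMPT = """You are synthesizing a customer interview.

Transcript:
{transcript}

Known signal patterns (last 30 days):
{signal_summary}

Extract key problems (with quotes), emotional intensity (low/medium/high),
workarounds described, features discussed, competitive mentions, and any
commitments. Flag which findings match existing patterns and which are new.
Respond with JSON only."""

def call_llm(prompt: str) -> str:
    # Placeholder: swap in your model client. Returns a canned example so the
    # sketch runs without any external dependency.
    return json.dumps({
        "key_problems": ["Exports above ~500 records time out"],
        "emotional_intensity": "high",
        "workarounds": ["Exports in batches of 500, merged weekly in Excel"],
        "matches_existing_patterns": ["42 export tickets this month"],
        "new_insights": ["Wants a direct Snowflake integration"],
    })

def synthesize(transcript: str, signal_summary: str) -> dict:
    prompt = SYNTHESIS_PROMPT.format(transcript=transcript,
                                     signal_summary=signal_summary)
    return json.loads(call_llm(prompt))

if __name__ == "__main__":
    result = synthesize("<full call transcript>",
                        "42 export tickets this month; batch workaround in 15% of enterprise accounts")
    print(result["new_insights"])
```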

The prototype interview: a new format

This is the format I use most now. It's different from the traditional discovery interview and it produces dramatically better insights.

The structure: 30 minutes, not 45.

The old 45-minute interview spent most of its time in exploration. You were trying to find the problem. With signal data, you already have a hypothesis about the problem. So you spend less time exploring and more time validating and deepening.

Minutes 1-3: Context and connection. "Thanks for making time. I'm Falk, I work on [product]. We've been digging into how our export experience works for teams like yours. Want to make sure we're solving the right problems."

One personal question. Nothing about the product yet.

Minutes 3-10: Confirm the signal. "I know your team has been working with large exports. Walk me through what that looks like for you. What happens when you need to get data out of the system?"

You're not asking blind. You know from the signal data that this customer has export problems. But you need to hear it in their words. Let them tell the story. Don't lead. Don't mention the support tickets. Let them describe the problem from their perspective.

This serves two purposes: it confirms the AI-identified signal is real for this person (not just a pattern artifact), and it gives you the context and emotion that data can't capture.

Minutes 10-15: Go deep on the workaround. "How do you handle it now when exports time out? What's your workaround?"

Workarounds are gold. They tell you what the customer values enough to build a manual process around. They show you the shape of the solution from the customer's perspective.

"We export in batches of 500 because anything bigger crashes. My analyst spends two hours a week combining the batches in Excel."

That tells you: they need bulk export. The current limit is around 500 records. It's costing 2 hours per week of analyst time. The solution isn't just "faster exports," it's "eliminate the need to batch."

Minutes 15-22: Show the prototype. "Based on what we've been hearing, we built something. It's rough, but I'd love your reaction. Here, take a look."

Share your screen or send the link. Let them interact with it. Don't explain. Don't guide. Just watch.

What you're observing:

  • Do they understand what it does without explanation?
  • Where do they click first?
  • Do they try to do something the prototype doesn't support? That's a feature insight.
  • Do they say "oh nice" politely or do they lean forward and start exploring?
  • Do they immediately connect it to their problem? "Oh, so I could export everything at once?"

Minutes 22-27: Get the honest reaction. "What's your first reaction? Would this change how your team handles exports?"

Then the critical follow-up: "What's missing? What would make this actually useful for your team?"

This question, asked about a real prototype, produces 10x better answers than "What features would you want?" asked about a hypothetical. Because they've just used the thing. They know what's missing because they tried to do something and couldn't.

"I'd need it to export directly to our data warehouse. Right now we go through CSV and then upload. If this could push straight to Snowflake, that would save us even more."

You just learned that the real solution isn't faster CSV export. It's a direct integration with their data infrastructure. One prototype. One interview. One insight that reframes the entire opportunity.

Minutes 27-30: Close with context. "How important is this for your team? Is this a 'nice to have' or a 'we need this to stay'?"

"Anything else about how your team works that I should understand?"

"Thanks. Super helpful."

That's it. Thirty minutes. You confirmed the signal, understood the context, got a prototype reaction, and uncovered a deeper insight. In the old model, it would take three interviews just to get to the point where you could describe the problem clearly.

Finding the right people to talk to (AI-assisted)

The old model: email a batch of customers and hope the right ones respond. Or ask your CS team for introductions.

The new model: your signal data tells you exactly who to talk to.

For validating opportunities: Talk to customers whose signals match the opportunity you're investigating. If you're exploring export problems, your agent can identify the 10 customers who submitted the most export-related tickets, have the highest usage of the export feature, or mentioned export in their NPS response. These people feel the pain most acutely. Their interviews will be the most informative.

For testing prototypes: Talk to customers in the target segment for the solution you've built. If your prototype is for enterprise teams, don't test it with solo users. If it's for new users, don't test it with power users who've adapted to the current workaround.

For understanding churn risk: Talk to customers the churn predictor has flagged. These are people whose usage dropped, whose NPS went down, or who submitted frustrated support tickets. Their interviews are urgent because they might leave, and valuable because they'll tell you exactly why.

For exploring new markets: Talk to prospects who didn't convert. Sales call analysis can tell you which prospects mentioned specific objections or competitor advantages. Those conversations reveal where your product falls short for segments you want to win.

You're not interviewing randomly. You're interviewing surgically. Every conversation is targeted at a specific learning goal, with a specific customer who has demonstrated relevant behavior.
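
For the targeting itself, a deliberately naive sketch follows, assuming you can export tickets and NPS comments from your own tools. The additive scoring and every name in it are made up for illustration, not a recommendation.

```python
from collections import Counter

# Stand-ins for exports from your support desk and NPS tool.
tickets = [  # (customer_id, ticket subject)
    ("acme", "Export timed out"), ("acme", "CSV export stuck"),
    ("globex", "Export fails over 500 rows"), ("initech", "Login issue"),
    ("globex", "Bulk export painfully slow"), ("acme", "Export crashed again"),
]
nps_comments = {"acme": "exports are painful", "initech": "love the dashboards"}

export_tickets = Counter(cid for cid, subject in tickets if "export" in subject.lower())

def score(customer_id: str) -> int:
    # Naive additive score: ticket volume plus a bump for an NPS mention.
    bump = 2 if "export" in nps_comments.get(customer_id, "").lower() else 0
    return export_tickets.get(customer_id, 0) + bump

candidates = sorted(export_tickets, key=score, reverse=True)[:10]
print(candidates)  # invite these customers first, e.g. ['acme', 'globex']
```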

Synthesis at scale: connecting interviews to signals

Here's where the new model really pulls ahead. In the old model, you'd synthesize interviews individually and then manually look for patterns across 8-10 conversations over a month.

Now, every interview is synthesized within minutes of ending and immediately connected to your broader signal picture.

Your synthesis agent processes the transcript and outputs:

Key problems identified. Not paraphrased. Extracted with context. "Customer described spending 2 hours weekly on batch exports because single exports time out above 500 records."

Emotional intensity. Did they describe this as mildly annoying or deeply frustrating? Were they matter-of-fact or animated? This matters for prioritization.

Match to existing signals. "This matches a pattern seen in 42 support tickets and 3 other interviews this quarter. The batch export workaround is mentioned by 15% of enterprise users in ticket data."

New insights not in signal data. "Customer mentioned they'd want direct Snowflake integration. This is new, not mentioned in any previous signal source. Suggest exploring with 3-5 more enterprise customers."

Prototype reaction summary. "Customer engaged with prototype for 4 minutes. Attempted to export a large dataset. Found the flow intuitive but asked about data warehouse integration. Overall reaction: positive with a specific gap."

After 5 interviews over two weeks, you have a synthesis that spans individual conversations and connects them to your full signal picture. Patterns become obvious. And when you spot something new that didn't appear in the signal data, you know it's worth investigating because a human revealed it.
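
To keep those fields comparable across interviews, it helps to pin them to one schema so every conversation lands in the same shape and can be joined against your signal data. A hypothetical example, with names invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class InterviewSynthesis:
    customer_id: str
    key_problems: list[str]       # extracted with context, not paraphrased
    emotional_intensity: str      # "low" | "medium" | "high"
    signal_matches: list[str]     # e.g. "matches 42 export tickets this month"
    new_insights: list[str]       # not present in any prior signal source
    prototype_reaction: str       # what they did and said with the prototype

def needs_follow_up(s: InterviewSynthesis) -> bool:
    # A new insight plus strong emotion is the cue to book 3-5 more calls.
    return bool(s.new_insights) and s.emotional_intensity == "high"

example = InterviewSynthesis(
    customer_id="acme-123",
    key_problems=["Batch exports cost ~2 analyst hours per week"],
    emotional_intensity="high",
    signal_matches=["42 export tickets this month", "3 other interviews this quarter"],
    new_insights=["Direct Snowflake integration"],
    prototype_reaction="Engaged 4 minutes; flow intuitive; asked about warehouse push",
)
print(needs_follow_up(example))  # True
```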

The questions that still work (and one new one)

The core interview questions haven't changed. Story-based questions are still the best way to surface real behavior and real friction.

"Walk me through the last time you did X." Still the best opening. Forces specificity. Avoids hypotheticals.

"How did you solve that problem?" Surfaces workarounds. Tells you what they value.

"Why?" Asked three times, each time peeling back another layer.

"What would change for you if this was solved?" Tests severity. If the answer is "not much," the opportunity is weak. If the answer is specific and emotional, it's strong.

"What's your workaround now?" The most underrated question in product discovery. Workarounds are prototypes your customers built for themselves. They show you the shape of the solution.

And the new question, specific to prototype interviews:

"What did you try to do that it didn't let you?" After they use the prototype, this question reveals the gap between what you built and what they need. It's more specific than "what's missing" because it's grounded in something they just tried to do.

The mistakes that still kill interviews

AI doesn't fix bad interview technique. If you lead the witness, you'll get biased data faster. If you pitch instead of listen, you'll validate your assumptions instead of testing them.

Leading with the signal. "We know exports are slow. How bad is that for you?" You've told them the answer. Instead: "Walk me through what happens when your team needs to get data out."

Pitching the prototype. "We built this amazing new export tool that can handle 10x more records." Now they feel social pressure to be positive. Instead: "We built something rough. I want your honest reaction. If it's not useful, that's the most helpful thing you can tell me."

Only interviewing people who match the signal. If all your interviewees have export problems, you'll confirm that export is the top priority. But you might miss that onboarding is actually more important. Mix in some customers from different segments or with different usage patterns. Let the interviews surprise you.

Interviewing too many, learning too little. In the old model, you needed volume because each interview was expensive to set up and synthesize. Now, with AI synthesis and targeted selection, 3-5 interviews on a focused topic are enough to get directional signal. Don't run 20 interviews when 5 will tell you what you need to know.

Ignoring what the prototype reveals. Some PMs show the prototype and then keep asking questions about the problem. The prototype is doing the work. Watch how they use it. That's your data. The questions are follow-up, not the main event.

Your interview plan this week

Monday: Identify your targets. Review your signal data or recent support tickets. Pick one opportunity to explore. Identify 3-5 customers who've demonstrated relevant behavior (submitted related tickets, churned recently, or are in the target segment).

Monday: Build a quick prototype. If you have a hypothesis about the solution, prototype it. Even if it's rough. Having something to show transforms the conversation.

Tuesday: Run two 30-minute calls. Use the prototype interview format. Context, confirm signal, go deep on workaround, show prototype, get reaction, close.

Wednesday: Review AI synthesis. Your transcription tool and synthesis agent should have processed both interviews. Review the synthesis. What confirmed your hypothesis? What surprised you? What's new?

Thursday: One more call if needed. If the first two conversations pointed in different directions, run one more. If they converged, you probably have enough signal. Show the iterated prototype if you've had time to update it.

Friday: Update your OST. Based on this week's interviews and prototype reactions, what did you learn? Which solutions are stronger? Which should you kill? What should you prototype next week?

Three to four hours of focused work across the week, and no more than ninety minutes of it in the interviews themselves. AI handles the prep and synthesis. The conversations are deeper and more productive than they've ever been because you're not starting from zero.

Interviews aren't the bottleneck anymore. They're the force multiplier. Every other part of discovery can be accelerated or automated. The interview is where human insight happens. Make it count.

Frequently asked

Why do customer interviews matter more when you have AI agents processing signals?

Agents give you breadth: what customers are saying at scale. Interviews give you depth: why they feel that way, the context behind the complaint, the workaround they built. Together you get both signal and understanding. Neither replaces the other.

What changes about preparation when agents have already analyzed your customer?

Instead of opening an interview blind, your prep agent has pulled their support history, usage patterns, NPS score, and previous conversations. You walk in knowing their pain points, not guessing. The interview starts deep instead of broad: thirty focused minutes now do the work that 45 exploratory minutes used to.

What is the prototype interview format and how is it different?

Traditional: open-ended discovery conversation. Prototype interview: confirm the signal you already identified from agents, explore the workaround deeply, show them a prototype, watch their reaction, iterate based on feedback. Less time invested, dramatically more actionable output.

How does real-time transcription change the interview?

You stay fully present instead of splitting attention between listening and note-taking. You engage more naturally. The transcript is processed within minutes by a synthesis agent. You spend five minutes reviewing the synthesis instead of 30 minutes writing notes.

What happens after the interview when synthesis agents have already processed it?

The synthesis agent compares this interview against signal patterns from hundreds of other data points. It surfaces what's unique about this customer, what's confirming existing patterns, and which customer segments share similar problems. Context that manual synthesis would take hours to establish appears in minutes.
