Falk Gottlob · Updated · 8 min read

How a 2-Hour Prototype Killed a 3-Month Project

We were about to commit a full squad to a feature for an entire quarter. A quick prototype and five customer calls later, we realized the premise was wrong. Here's the story.

playbook · prototyping · validation · real-world practice

The short version

48 hours before a full-squad kickoff for a feature with six months of customer requests, exec sponsorship, and a complete design, I asked for three days to validate the workflow. Two hours with Claude Code produced a rough clickable prototype. Five customer calls revealed that our elegant linear workflow was structurally wrong: customers wanted exploration before commitment, collaboration during the process, configuration persistence, and batch application over time rather than all at once. On Friday I walked into the kickoff with the data and said, "These aren't edge cases, these are the actual cases." We took a week to redesign, shipped in May instead of April, and hit DAU targets in week one. Two hours of prototyping and five calls saved roughly 600 engineering hours building the wrong feature.

It was a Tuesday afternoon. The roadmap was set. The team was staffed. We were 48 hours away from kicking off a full-squad feature - three engineers, one designer, and one product manager (me) - for all of Q2.

The feature had everything going for it:

  • Six months of customer requests
  • Executive sponsorship
  • Sales buy-in
  • Design completed
  • Engineering capacity allocated

We were moving Monday morning. Then something made me pause.

I was in the design review, looking at the workflow, and I realized: we'd never actually watched a customer do this work. Not in a prototype. Not in a design walkthrough. We'd asked them what they wanted, they'd told us, and we'd built what they said.

The playbook has a specific section on this: assumptions get expensive. And the most expensive assumption is "customers said they want X, so X is the right answer."

So I did something that felt paranoid at the time: I asked if we could test the design with five customers before we started development.

The response was basically: "You have until Friday. After that we're locked in."

That was Tuesday. I had three days.

The Prototype (2 Hours)

I didn't build a perfect prototype. I used Claude Code to generate a clickable prototype in about two hours - it covered the core workflow with zero polish, just interactive enough to show how a customer would move through the feature.

The workflow was straightforward: pick a template, configure parameters, apply across environments, review, confirm. Linear. Logical. Exactly what the design doc said.

I scheduled five discovery calls for Wednesday and Thursday, across varied customer segments: early-stage, mid-market, and enterprise. All of them had requested this feature.

The Calls (Wednesday-Thursday)

I did something specific with these calls: I didn't explain the feature. I just showed them the prototype and said, "You've asked us for this. Here's what we built. Walk me through how you'd use it."

Call 1 (early-stage startup): They got through the workflow fine. Clicked through, understood the logic. Then they said something important: "This is great, but I wouldn't use it this way. I'd want to batch operations and apply them across weeks, not all at once."

Call 2 (mid-market): Similar feedback. "The workflow assumes I know what parameters I want before I click in. But I usually want to explore parameters first, see examples, then decide."

Call 3 (enterprise): "We'd need to do this collaboratively with three different teams before confirming. Your review step doesn't have any handoff or approval mechanism."

Call 4 (small customer, power user): "I actually like the workflow, but I'd want to reuse configurations from past operations. You're asking me to configure from scratch every time."

Call 5 (customer who'd requested it months ago): Honest response: "Actually, now that I see it... this isn't quite what I needed. I think I need something simpler. Can I just save the state of an operation and reload it later?"

By Thursday evening, I had a clear pattern: our linear workflow was wrong. Customers weren't trying to use the feature the way we'd designed it.

They wanted:

  • Exploration before confirmation
  • Collaboration/handoff during the process
  • Reusability/configuration persistence
  • Batch application over time, not in one go

Basically, the reverse of everything we'd designed.

The Friday Decision

Friday morning, I walked into the kickoff meeting with the prototype results and the call notes. The mood was: "We're moving forward Monday. What did you find?"

I laid it out: "The design doesn't match how customers will use this. If we build it as designed, we'll ship something that works logically but fails in practice. We'll spend three months building it, two months fixing it, and still end up with customer frustration."

The pushback was immediate. "But we've been planning this for six months. The design is solid. These are edge cases."

And I said something I think matters: "These aren't edge cases. These are the actual cases. The linear workflow we designed is the edge case. It's elegant. It's logical. It's wrong."

We had a choice: ship the original design and iterate when customers tell us it's wrong, or pause, redesign based on what we'd learned, and ship something that works from day one.

This is the moment the playbook matters. Going to customers after you've already shipped is too late. Going to them during design is standard. But going to them specifically to validate your workflow against their actual behavior is what separates "we built what was asked" from "we built what works."

The decision: we didn't start Monday. We took a week.

The Redesign (One Week)

One week wasn't a full redesign. It was a pivot.

Instead of:

  1. Pick template
  2. Configure
  3. Apply across environments
  4. Review
  5. Confirm

We redesigned to:

  1. Pick template
  2. Load past configurations (if any)
  3. Explore/build configuration (non-linear, browse examples, test params)
  4. Save configuration as reusable preset
  5. Apply across environments with optional handoff/approval
  6. Review
  7. Confirm

The logic flipped. Instead of "you know what you want, enter it" we went with "explore first, then confirm." That's what the customer behavior had told us.

We also added a collaboration layer: before you confirm an operation, you can route it to a team member for review. This wasn't in the original design. But three out of five customers said they needed it.

And we built in persistence: any configuration you created once got saved automatically. No typing the same params twice.

The Outcome

We started development the following Monday with a revised design. Between the redesign and replanning, we lost two weeks of the original schedule. So instead of "three months of development plus two months of fixes," we were looking at "two months of development with far fewer post-launch fixes."

We shipped in May instead of April.

When it shipped, adoption was immediate. We hit our target DAU numbers in week one. By month two, we'd identified and shipped two minor adjustments. By month three, the feature was stable.

Compare that to the original timeline: ship in April, spend May and June fixing the workflow because customers used it differently than we'd designed it, then ship the revised version in July.

We lost one month on the calendar. We gained three months of planning clarity.

But here's what actually mattered: we didn't ship the wrong feature. Not a little wrong. We were about to ship a feature that was structurally wrong - it assumed a user workflow that didn't exist.

What the Two Hours Actually Did

The prototype itself was nothing special - a rough click-through with zero design polish. But it forced a specific conversation: not "do you want this?" but "when you have this, how do you use it?"

Those are different questions. The first gets affirmation. The second gets behavior.

The five calls could have happened in the design phase. They didn't, because the design felt "done," and validating a done design feels less important than validating a rough sketch.

But the playbook says: validate your workflow before you commit a team. Not in a design review (where people are looking at aesthetics and flow). In a prototype walkthrough (where they're showing you how they'd actually do the work).

This is expensive to skip. And cheap to do early.

The Lesson

The most dangerous sentence in product management is: "This design looks good. Ship it."

The next most dangerous sentence is: "We've already committed a team. It's too late to change direction."

Between those two sentences is where you lose quarters.

The two-hour prototype and the five calls cost about four PM hours and 2.5 hours of customer time. If I had to put a number on it: maybe 15 hours of work total across the team and customers.

The original plan would have committed roughly 600 hours of development time (three engineers for 13 weeks at about 50% capacity) to a feature with a broken workflow.

Getting one week of paused time to validate that workflow before committing those 600 hours was the cheapest insurance policy I've ever bought.

The prototype didn't save the project. The customer behavior did. But the prototype made it visible early enough to matter.

That's the playbook: validate your assumptions before you bet the team. Not because assumptions are always wrong, but because when they are wrong, the cost of catching them scales with how far you've gotten.

Catch them at "prototype and five calls." Don't catch them at "six months of development and customer frustration."

Also on Medium