Falk Gottlob · 22 min read

20 AI Prompts That Replace 20 Hours of PM Work

Copy-paste these AI prompts for product managers into Claude or ChatGPT. Tested across discovery, roadmapping, retros, and stakeholder updates. Save 20 hours a week.

AI prompts, productivity, toolkit, ChatGPT for PMs, Claude for PMs, AI product management

The difference between a PM who ships and one who drowns in busywork is often not intelligence or work ethic; it's whether they've learned to use AI for the repetitive, high-friction parts of the job.

I've tested hundreds of prompts across customer research, data analysis, strategy, writing, and prototyping. The 20 below are the ones that actually save hours and produce outputs good enough to ship without rework. Not generic "help me write an email" stuff. These are surgical prompts that a CPO would actually use.

Each prompt includes what it does, when to use it, and a pro tip for getting even better results.

The short version

Twenty AI prompts for product managers, organized into five categories: customer research, data analysis, writing and communication, strategy and planning, and code and prototyping. Each prompt includes the exact text to paste into Claude or ChatGPT, what it does, and a pro tip for better output. The full library is also available as a downloadable markdown file you can drop into your prompt manager or save in a Notion page. The prompts that save the most hours per week, in my testing: the JTBD extractor (Prompt 1), the ticket-theme clusterer (Prompt 2), and the RICE prioritization scorer (Prompt 14). For a deeper version of this idea (replacing whole PM workflows with always-on agents instead of one-off prompts), see Your AI Agent Fleet and The Impact Loop.

Customer Research Prompts

Prompt 1: Extract Jobs to Be Done from Interview Transcripts

Analyze this customer interview transcript and extract all distinct jobs to be done (functional, emotional, and social).

For each job, provide:
- The exact quote that reveals the job
- The job statement (e.g., "When [situation], I want to [motivation], so I can [expected outcome]")
- Whether it's functional, emotional, or social
- How urgent/high-frequency this job is
- Whether this job is explicitly stated or inferred from context

[Paste transcript here]

Format output as a markdown table and rank jobs by perceived importance.

What it does: Turns raw interview audio dumps into structured job statements. This is the foundation of good positioning and roadmap work.

When to use it: After every customer interview, especially when validating new markets or use cases.

Pro tip: Paste 3-4 transcripts from the same customer segment at once. The AI will start identifying patterns across interviews and flag the consensus jobs (which matter more than outliers).
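
If you'd rather run this across a folder of transcripts than paste them into the chat UI, here's a minimal sketch using the Anthropic Python SDK. The file names and model id are assumptions; substitute your own.

```python
import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the environment

client = anthropic.Anthropic()

# Hypothetical paths: the JTBD prompt above saved to a file, plus three
# transcripts from the same segment so the model can surface consensus jobs.
prompt = open("jtbd_prompt.txt").read()
transcripts = [open(f"interview_{i}.txt").read() for i in (1, 2, 3)]

message = client.messages.create(
    model="claude-sonnet-4-5",  # assumed model id; use whichever you have access to
    max_tokens=4096,
    messages=[{"role": "user", "content": "\n\n---\n\n".join([prompt] + transcripts)}],
)
print(message.content[0].text)  # the ranked markdown table of jobs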


Prompt 2: Identify Feature Request Themes from Support Tickets

I have a CSV of support tickets with customer complaints and feature requests. Identify the top 5-7 themes, rank them by:
1. Frequency (how many tickets mention this)
2. Revenue impact (which segments ask for this most)
3. Effort to build (best guess: quick, medium, complex)
4. Impact if shipped (game-changing, nice-to-have, table stakes)

For each theme, provide:
- A clear one-liner description
- List of 3-4 example tickets that mention it
- Which customer segments ask for this most
- Estimated usage impact if built

[Paste CSV: ticket ID, customer segment, issue, priority rating]

Output as a ranked table with an executive summary.

What it does: Turns thousands of scattered tickets into a prioritized feature roadmap input. Saves hours of manual clustering.

When to use it: Monthly or quarterly when you're refreshing the roadmap.

Pro tip: Include customer MRR or ARR in the CSV if possible. The AI will automatically weight high-value customer requests higher, which is what you should be doing anyway.
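
Support exports are often too big to paste whole. A small pandas sketch, with hypothetical column names you'd match to your help-desk export, that trims the CSV to the fields the prompt needs and caps each segment so nothing gets drowned out:

# Keep only what the prompt asks for, plus MRR so high-value requests get weighted.
slim = tickets[["ticket_id", "customer_segment", "issue", "priority", "mrr"]]
slim = slim.dropna(subset=["issue"])

# Cap each segment at 200 tickets so the paste fits in context while every
# segment stays represented.
sample = slim.groupby("customer_segment", group_keys=False).apply(
    lambda g: g.sample(min(len(g), 200), random_state=0)
)
sample.to_csv("tickets_for_prompt.csv", index=False)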


Prompt 3: Convert Feature Requests into User Stories with Acceptance Criteria

Here's a feature request from a customer:

[Paste the request]

Convert this into 3-5 user stories (not one massive one) that engineering can actually scope. For each story:
- User story statement: "As a [user type], I want to [capability] so that [outcome]"
- Acceptance criteria (specific, testable, not vague)
- Edge cases or assumptions
- Effort estimate: small/medium/large
- Suggested implementation approach

Group related stories under an epic if there are 3+. Include a note on what assumptions I should validate with the customer before committing to this.

What it does: Stops the common problem of vague feature requests that become "we're done when it feels right."

When to use it: Every time a customer request reaches your desk that might ship.

Pro tip: Paste the full context of why the customer asked (their business problem, not just the feature), and the AI will write better stories because it understands the actual need.


Prompt 4: Analyze Competitor Feature Gaps Against Your Roadmap

I have:
- Our product roadmap (features we've committed to this quarter)
- A list of competitor features (what [Competitor X] launched this month)

For each competitor feature, analyze:
1. Do we have this? (Yes / No / Partial)
2. If no, should we? (Critical gap / Nice to have / Not our wedge / No customer demand)
3. Can we build it faster than them? (Yes, we can leapfrog / Maybe if we focus / No, they're ahead)
4. Impact on our sales process (Blocker for deals / Nice ammunition / Irrelevant)
5. Recommended response (Add to roadmap / Monitor / Ignore / Build different)

[Roadmap:]
[Competitor features:]

Output as a grid and highlight any RED flags (critical gaps we need to address this quarter).

What it does: Keeps you from building features that don't matter while missing the ones that lose deals.

When to use it: When you run a new competitive analysis, or monthly when you review what competitors launched.

Pro tip: Feed the AI a month or quarter of competitor launches at once. It gets better at identifying which features are part of the same competitive wedge.


Prompt 5: Create a Jobs-Based Positioning Statement

Here's what I know about our target customer:

Primary job to be done: [job statement]
Secondary jobs: [list 2-3]
Current solution they use: [what they do today]
Pains with current solution: [top 3 pains]
Our unique advantage: [what we do differently]

Generate 3 alternative positioning statements that lead with the job, not the feature. Each statement should:
- Be one sentence
- Lead with the job or outcome, not the technology
- Include why we're better (hint at the advantage)
- Be testable in a headline (could this work in a PPC ad?)

Then rank them by how well they'd resonate with [specific customer type you're targeting].

What it does: Breaks through feature-led positioning into outcome-led messaging that actually sells.

When to use it: Before major launches, new market entry, or when messaging isn't resonating in sales.

Pro tip: Include a quote from your best customer describing the job in their own words. The AI will write positioning that mirrors their language, which converts better.


Data Analysis Prompts

Prompt 6: Analyze Cohort Retention and Predict Churn Risk

I have cohort data showing monthly retention by signup cohort. Here's the data:

[Paste table: Cohort | Month 1 | Month 2 | Month 3 | Month 4 | Month 5 ...]

Analyze:
1. What's the trend? (improving, declining, flat?)
2. Which cohorts are concerning? (lower retention than expected)
3. Based on recent cohort performance, what's our predicted overall churn rate in 6 months?
4. What month-over-month patterns do you see? (e.g., "cohorts acquired in Q2 consistently drop X% more")
5. Recommended actions (what should I investigate?)

Flag any cohorts that suggest a product or onboarding issue (big drop-off in month 1-2) vs. a competitive/market shift issue (gradual decline across all months).

What it does: Turns raw retention tables into early-warning signals for churn.

When to use it: Monthly in your exec review. Weekly if you're troubleshooting a churn spike.

Pro tip: Include the reason for any major product changes you made ("We redesigned onboarding in March" or "Competitor launched free tier in June"). The AI will correlate changes with cohort performance.
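
If your analytics tool won't export the cohort table directly, you can build it from raw activity data. A sketch assuming one row per user per active month (the schema and file name are hypothetical):

events["cohort"] = events["signup_date"].dt.to_period("M")
events["month_n"] = (
    events["active_month"].dt.to_period("M") - events["cohort"]
).apply(lambda d: d.n)

# Cohort x Month user counts, normalized by month 0 to get the retention table.
counts = events.pivot_table(index="cohort", columns="month_n",
                            values="user_id", aggfunc="nunique")
retention = counts.div(counts[0], axis=0).round(3)
print(retention)  # paste this table into the prompt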


Prompt 7: Find Correlation Between Feature Usage and Retention

I have user-level data:
- Sign-up date and cohort
- Key features used (onboarding flow, core workflow, integrations, reporting)
- Retention status at 6 months (churned or retained)

Help me find what correlates with staying. For each major feature:
1. Usage rate in retained users vs. churned users (%)
2. How big is the gap?
3. Is this a leading indicator (people who use it early stay longer) or lagging (people who stay longer use it)?
4. What's the minimum "engagement threshold" (e.g., "users who use X at least 3x in month 1 have 40% higher retention")?

[Paste data as CSV or table]

Output recommendations on:
- Which feature(s) should we push harder in onboarding?
- Which features don't correlate with retention (we might be over-investing)?
- Which features are at risk of low adoption that we need to fix?

What it does: Stops you from building features people don't use and helps you identify what drives retention.

When to use it: Quarterly when you're reviewing what shipped and how it landed.

Pro tip: Include the date each feature shipped. The AI will account for "people just didn't have time to find it yet" vs. "this feature is genuinely not valuable."
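
You can also pre-compute the usage-rate gap yourself and paste the result, which keeps the model focused on interpretation rather than arithmetic. A sketch with a hypothetical schema (one row per user, a boolean flag per feature):

gap = pd.DataFrame({
    "retained_%": users.loc[users["retained"], features].mean() * 100,
    "churned_%": users.loc[~users["retained"], features].mean() * 100,
})
gap["gap_pts"] = gap["retained_%"] - gap["churned_%"]

# A big gap is correlation, not causation -- that's what the
# leading-vs-lagging question in the prompt is for.
print(gap.sort_values("gap_pts", ascending=False).round(1))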


Prompt 8: Analyze Win/Loss Data to Identify Deal Blockers

I have data from sales on deals we won and lost this quarter:

For each lost deal:
- Customer segment / company size
- Why they said we lost (objection)
- What they chose instead
- Estimated ARR

For each won deal:
- Customer segment
- Any objections they had (and how we overcame them)
- What features/positioning sealed it
- ARR

Analyze:
1. Are there segment-specific patterns? (mid-market has different blockers than enterprise)
2. What's the most common objection by segment?
3. Of the objections we're hearing, which ones are we actually fixing? Which are we ignoring?
4. For won deals, what's the common denominator? (specific feature, pricing model, trust signal)
5. What single change would win the most lost deals?

[Paste data]

Output: A ranked table of deal blockers by frequency and impact, plus recommendations for addressing the top 3.

What it does: Turns anecdotes ("we lost to Competitor X again") into data-driven roadmap priorities.

When to use it: Monthly review with sales leadership, quarterly for roadmap planning.

Pro tip: Include notes on whether the objection was "it doesn't exist" (build it) vs. "we didn't communicate it well" (marketing/sales motion issue) vs. "they chose cheaper" (positioning issue). The AI will help you separate product gaps from go-to-market gaps.
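
To see the revenue at stake behind each objection before you even open the chat, a quick pivot works (file and column names are assumptions):

blockers = pd.pivot_table(lost, index="objection", columns="segment",
                          values="arr", aggfunc="sum", fill_value=0)
blockers["total_arr_at_risk"] = blockers.sum(axis=1)
print(blockers.sort_values("total_arr_at_risk", ascending=False))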


Prompt 9: Benchmark Feature Usage Against Industry Standards

I work in [industry] and have these feature adoption rates for my product:

[Feature name: % of users who've used it]

I want to know:
1. How do these compare to industry benchmarks? (What % of users typically use this type of feature?)
2. Which features are under-adopted relative to their importance?
3. Which features have higher adoption than expected?
4. What's the likely reason for under-adoption? (Hidden, hard to use, solved differently, not actually needed, poor onboarding)

Help me prioritize which features to fix or promote. Assume I can only improve adoption on 2-3 features this quarter.

What it does: Saves you from optimizing features everyone already uses while ignoring critical features that are flying under the radar.

When to use it: Quarterly product review, especially for mature products.

Pro tip: The AI's "benchmarks" are approximate, so use this as a directional guide, not gospel. But it's often enough to spot which features are obviously broken (50% adoption on a core feature that should be 80%+).


Writing & Communication Prompts

Prompt 10: Turn a Product Change into a Customer Email

We just shipped a change:

[Description of what changed and why]

Write a customer email that:
1. Leads with the benefit (not the feature)
2. Explains in 2 sentences why we made this change
3. Includes a clear CTA (try it now / read the docs / no action needed)
4. Is friendly and direct, not corporate
5. Closes with a way to give feedback

The tone should be: [casual/professional/excited] and the audience is [specific user type: power users / new customers / finance teams].

Make it scannable (short paragraphs, one idea per paragraph). Max 150 words.

What it does: Prevents the death march of "write a customer email about this change." Saves 30 minutes.

When to use it: Every time you ship something material.

Pro tip: Include your last 2-3 product announcement emails as examples of tone. The AI will match your voice.


Prompt 11: Draft a Weekly Product Digest for Your Team

Here's what we shipped this week:

[List: what launched, what got fixed, what metrics changed, any blockers]

Write a weekly digest post that:
1. Celebrates the win (one paragraph on what shipped and why it matters)
2. Flags the metrics that moved (retention, activation, MRR, whatever's relevant)
3. Calls out the blocker / next focus (what's in our way?)
4. Includes a "ask for help" section (where we need design, eng, sales input)
5. Ends with one sentence on what we're focused on next week

Tone: Direct, honest, celebration without fluff. Length: ~300 words.

Audience: Product team + stakeholders

What it does: Creates alignment and keeps momentum visible across the org. Prevents "what's the product team even doing?" from leadership.

When to use it: Every Friday or Monday depending on your cadence.

Pro tip: Paste the same week's Slack messages and standup notes. The AI will distill the noise into what actually matters.


Prompt 12: Convert Your Product Spec into a GTM Brief

I have a product spec for a new feature:

[Paste spec: what it is, how it works, who it's for]

Convert this into a GTM brief that:
1. Explains what changed in one sentence
2. Describes why it matters (business outcome, not feature description)
3. Identifies which customer segments care the most
4. Includes 2-3 key talking points for sales
5. Suggests a GTM motion (email, in-app, webinar, customer calls, pricing change, etc.)
6. Flags any pricing or packaging implications
7. Includes a "what to tell customers who ask [common question]" section

Format: Something a go-to-market person can read in 10 minutes and actually use to sell this.

Audience: Sales and marketing team

What it does: Stops the problem where engineering ships something and sales doesn't know how to explain it.

When to use it: Before every material feature launch.

Pro tip: Include the sales objections you've heard recently. The AI will help you craft responses that address them in the GTM brief.


Prompt 13: Draft an Exec Update on a Major Initiative

Here's the status on [initiative name]:

[What we said we'd do, what we've done so far, where we are now, what's next]

Write an exec update that:
1. Opens with status (on track / at risk / blocked)
2. Explains in 3 sentences what this initiative achieves
3. Shows progress with 2-3 specific metrics / milestones
4. Flags any risks or blockers (with mitigation plan)
5. Shows how this connects to company goals (ARR, retention, market position, etc.)
6. Asks for a specific decision or approval if needed

Tone: Confident, direct, no fluff. Length: 300 words max.

Format output so it can be a standalone doc or dropped into a broader exec brief.

What it does: Prevents the "let me write this executive summary" rabbit hole. Gets executives the information they need in the format they expect.

When to use it: Before board meetings, investor updates, or when escalating a major initiative.

Pro tip: Include the last 2 exec updates you sent. The AI will match the style and depth your leadership expects.


Strategy & Planning Prompts

Prompt 14: Analyze Feature Requests by Strategic Priority (RICE/ICE)

I have a list of potential features / initiatives. Score them using RICE (Reach, Impact, Confidence, Effort) so I can prioritize fairly.

[List each with: Description, estimated reach (% of users or number of users), estimated impact (low/medium/high), confidence (low/med/high), estimated effort (in weeks)]

For each initiative:
1. Calculate RICE score ((Reach × Impact × Confidence) / Effort)
2. Identify which ones are "high confidence, high impact, low effort" (do these first)
3. Flag any that score well on paper but I should be skeptical about (low confidence, indirect benefit)
4. Recommend top 3-5 to prioritize next
5. Group them by time horizon (ship in next 4 weeks, 8 weeks, long-term bets)

Output as a ranked table with RICE scores and my recommended roadmap priority.

What it does: Brings rigor to the "but we should build this" gut-call decisions.

When to use it: Quarterly roadmap planning, when you have too many ideas and need to rank them.

Pro tip: Include actual usage data if you have it (not estimates). The AI will score more accurately. Also, if an initiative keeps getting high scores every quarter but you never ship it, that's a signal to either commit or kill it.
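
It's also worth sanity-checking the model's arithmetic, since LLMs can fumble math across long lists. A minimal RICE scorer; the impact and confidence multipliers below are one common convention, not canon:

def rice(reach_users: float, impact: str, confidence: str, effort_weeks: float) -> float:
    """Standard RICE: (Reach x Impact x Confidence) / Effort."""
    return reach_users * IMPACT[impact] * CONFIDENCE[confidence] / effort_weeks

initiatives = [  # hypothetical examples
    ("SSO support", 1200, "high", "high", 6),
    ("Dark mode", 4000, "low", "med", 2),
]
for name, *args in sorted(initiatives, key=lambda i: -rice(*i[1:])):
    print(f"{name}: {rice(*args):.0f}")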


Prompt 15: Build a 90-Day Product Roadmap with Dependencies

Here's our strategic focus for next quarter: [one-liner on what we're optimizing for]

Here are the initiatives I'm considering:

[List: Initiative, estimated effort in weeks, dependencies (what needs to finish first), which strategic goal it serves, business impact]

Build a 90-day roadmap that:
1. Front-loads quick wins (high impact, < 3 weeks)
2. Sequences longer initiatives so dependencies are respected
3. Leaves buffer time for bugs, tech debt, and surprises (assume 20% of capacity)
4. Clearly shows which goals are being served each month
5. Flags any gap (are we missing anything critical?)

Output as:
- A narrative roadmap (what we're doing month by month and why)
- A Gantt-style view showing timelines and dependencies
- A risks section (what could slip? what's high-risk?)

What it does: Moves you from a list of things you want to ship to an actual, realistic, sequenced plan.

When to use it: Beginning of each quarter, or when you're refreshing the roadmap mid-quarter.

Pro tip: Include the last quarter's roadmap alongside your actual shipping velocity. The AI will use that to sanity-check your effort estimates and timing.
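
You can verify that the AI's sequencing actually respects your dependencies with a few lines of standard-library Python (initiative names and efforts are hypothetical):

effort_weeks = {"payments_api": 3, "billing_v2": 4, "usage_reports": 2, "sso": 2}

# Naive single-track schedule -- enough to sanity-check the Gantt view the AI produced.
week = 0
for item in TopologicalSorter(deps).static_order():
    print(f"weeks {week}-{week + effort_weeks[item]}: {item}")
    week += effort_weeks[item]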


Prompt 16: Identify Technical Debt Impacting Product Velocity

I have feedback from my engineering team on technical debt:

[List: what's slowing us down, how long it takes to work around it, impact on shipping velocity]

I also have our planned roadmap for next quarter: [list of features/initiatives]

Help me:
1. Estimate how much each piece of tech debt is costing us (in velocity, % of eng time wasted)
2. Identify which roadmap items would be faster to build if we paid down specific debt
3. Rank debt by "impact if fixed" (if we fix this, we ship 25% faster for X initiatives)
4. Recommend which debt to pay down this quarter (and when during the quarter)
5. Estimate the ROI (debt paydown effort + impact on roadmap)

What percentage of next quarter should we allocate to tech debt vs. new features?

What it does: Helps you make the strategic call on tech debt vs. features in a way that sticks with leadership.

When to use it: Beginning of quarter when you're planning, or whenever your team is making a case for "we need to fix this infrastructure thing."

Pro tip: Get specific with impact. "Database queries are slow" doesn't land. "Every feature we ship that touches user profiles needs an extra week to optimize queries" does.


Prompt 17: Define Success Metrics for a New Feature

We're shipping: [description of feature]

The user job it solves: [the jobs to be done]

Help me define what success looks like for this feature. For each dimension, suggest specific metrics:

1. Adoption (how many users try it, how fast does adoption curve grow?)
2. Engagement (how often do retained users use it, what's the core engagement loop?)
3. Impact (does it drive retention, reduce churn, increase upsells, reduce support load?)
4. Business metrics (revenue impact, unit economics, CAC payback impact)

For each metric, provide:
- The specific metric I should track
- What's a "good" baseline (historical features, competitive data, or reasonable guess)
- What's the success target for this quarter / year
- How to actually measure it (what to log, where to find the data)

Group metrics into "must have" (shows up in exec updates) vs "nice to have" (instrumentation for learning).

What it does: Prevents the common problem of shipping a feature, having no idea if it worked, and shipping the next thing anyway.

When to use it: Before shipping any feature that's more than a small iteration.

Pro tip: Define success metrics before you ship, not after. The AI will help you think about what actually matters vs. vanity metrics.
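
"How to actually measure it" usually comes down to logging the right event with the right properties. A sketch of the minimal payload worth agreeing on with engineering before launch (field and event names are illustrative):

def track(user_id: str, event: str, props: dict) -> str:
    """Build one analytics event; wire this to whatever pipeline you use."""
    return json.dumps({
        "event_id": str(uuid.uuid4()),
        "user_id": user_id,
        "event": event,   # e.g. "reports.first_export" -- name events for the metric
        "ts": time.time(),
        "props": props,   # decide these fields before launch, not after
    })

print(track("u_123", "reports.first_export", {"entry_point": "onboarding"}))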


Code & Prototyping Prompts

Prompt 18: Generate a Technical Requirements Document from a Product Brief

Here's my product brief for a feature:

[Paste the brief: what, why, user stories, acceptance criteria]

I need to hand this to engineering. Generate a technical requirements doc that covers:

1. **System Context**: How does this integrate with existing systems? (Database changes, API endpoints, third-party integrations)
2. **Data Model**: What tables / fields need to change? (Include schema if possible)
3. **API Changes**: What new endpoints or changes to existing endpoints?
4. **Performance Requirements**: Any SLA / latency requirements?
5. **Security & Privacy**: What data is sensitive? How should it be handled?
6. **Scalability**: Any specific concurrency / volume requirements?
7. **Testing Requirements**: How should QA test this? Edge cases?
8. **Deployment Plan**: Staged rollout? Feature flag? Database migration?
9. **Unknowns & Questions**: What ambiguities should we clarify with the PM?

Output a doc that eng can review, poke holes in, and estimate from. Format: markdown suitable for a shared doc.

What it does: Bridges the gap between "here's what I want" and "here's what we're building" so engineering isn't guessing.

When to use it: Before any feature reaches the engineering backlog.

Pro tip: Have your engineering lead review the TRD before you hand it to the full team. They can flag if you've missed anything or made assumptions that won't work.


Prompt 19: Outline an A/B Test for a Feature

We're about to ship a feature: [description]

Help me design an A/B test that validates whether this works. For the test, define:

1. **Hypothesis**: What do we believe will happen? (Be specific: "Users exposed to X will have Y% higher activation")
2. **Primary Metric**: What one metric matters most?
3. **Secondary Metrics**: What else should we watch? (Unintended consequences?)
4. **Sample Size & Duration**: How many users, how long should we run it?
5. **Segments to Test**: Should we test across all user types or focus on a specific segment first?
6. **Success Criteria**: What's the bar for "this worked"? (Statistical significance, minimum effect size)
7. **Monitoring Plan**: What do we watch during the test to make sure nothing broke?
8. **Decision Framework**: If the metric moves +10%, +5%, -5%, what do we do?

Output a one-pager I can share with leadership and eng before we start.

What it does: Stops the problem of shipping a feature, seeing it didn't move the needle, and having no idea why.

When to use it: For any feature where the outcome is uncertain.

Pro tip: Include your last 3 A/B tests. The AI will calibrate the sample size and duration to what's realistic for your product.
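
The sample-size answer is easy to double-check yourself. A normal-approximation sketch using only the standard library (two-sided test on a proportion):

def users_per_arm(p_base: float, lift: float,
                  alpha: float = 0.05, power: float = 0.8) -> int:
    """Rough sample size per arm for detecting p_base -> p_base + lift."""
    p2 = p_base + lift
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    variance = p_base * (1 - p_base) + p2 * (1 - p2)
    return int((z_alpha + z_power) ** 2 * variance / lift ** 2) + 1

# Detecting a 2-point lift on a 20% activation rate needs ~6,500 users per arm.
print(users_per_arm(0.20, 0.02))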


Prompt 20: Create a Post-Launch Runbook

We're launching this feature / product: [description]

Create a post-launch runbook for the week after we ship. Include:

1. **Go-Live Checklist** (day of launch)
   - What to monitor
   - Escalation contacts
   - Rollback plan if something goes wrong

2. **Daily Monitoring** (first week)
   - Which metrics matter most?
   - What thresholds trigger an alert / action?
   - Who should be watching?

3. **Customer Communication Plan**
   - Day 1: announcement
   - Day 3: follow-up/questions
   - Day 7: results/early wins
   - Escalation for "this is broken" feedback

4. **Data Collection**
   - What events should be logged?
   - How do we track adoption and engagement?

5. **Decision Points**
   - Day 3: Are we on track? Or do we need to adjust?
   - Day 7: Early wins or signals of trouble?
   - Day 14: Keep shipping or investigate deeper?

6. **Key Contacts**
   - Who owns each part (eng on-call, support for customer issues, etc.)?

Output as a doc your team can print and use.

What it does: Prevents the chaos of shipping and having no plan for what happens next.

When to use it: Before every material launch. For small features, you can simplify, but the pattern is always useful.

Pro tip: Include the launch date, time, and timezone prominently. Include links to the dashboards people should be watching. Make it genuinely actionable.


How to Use These Prompts

These prompts work best when:

  1. You're specific with inputs. The AI gets better with real data, actual customer quotes, real roadmap items. "Analyze this" beats "what should I analyze?"

  2. You iterate. First output is rarely perfect. Ask follow-ups: "Rank these by revenue impact only, ignore effort" or "How would this change if we focused on mid-market instead?"

  3. You customize the format. If your exec prefers tables, ask for tables. If your team uses specific terminology, include it in the prompt.

  4. You use them weekly, not just when crisis hits. The compounding value isn't in one prompt saving an hour. It's in using them consistently and building better habits around data-driven decisions.

Download the companion artifact with all 20 prompts ready to copy-paste, plus variations for different tools and use cases.


Frequently asked

What's the best AI prompt for product managers?

There's no single best prompt. The 20 in this post are organized by the part of the PM job they replace: customer research (extract Jobs to Be Done from interview transcripts, cluster support tickets into themes ranked by revenue impact), data analysis (find which feature usage correlates with retention), strategy (score initiatives with RICE), writing (draft an exec update from status notes), and prototyping (turn a product brief into a technical requirements doc). The best prompt is the one that matches a task you're currently doing manually; start with the JTBD extractor or the ticket-clustering prompt, both of which save 2-3 hours on first use.

Can ChatGPT or Claude actually replace product manager work?

Not the judgment. Not the relationships. But yes for the mechanical parts: extracting themes from interview notes, drafting first versions of PRDs and one-pagers, summarizing support tickets, generating test cases for new flows, writing release notes. A good PM with a good prompt library does in two hours what a PM without one does in two days. The job becomes higher-leverage, not obsolete.

How do I write better AI prompts as a PM?

Three patterns lift PM prompts from generic to surgical. First, give the AI the role + the audience + the constraint in the first sentence (e.g., 'You're a senior PM at a B2B SaaS preparing a board update; the constraint is 200 words'). Second, paste real artifacts (transcripts, ticket exports, competitor pages) instead of describing them. Third, ask for output in a specific structure (markdown table, JSON, bullet list) so it's drop-in usable. The 20 prompts here all follow this pattern.

Which PM tasks should I never automate with AI?

Anything that requires accountability or trust. Don't have AI write decisions you'll be held responsible for without reading every word. Don't have it send messages to customers, executives, or your team without you reviewing first. Don't let it set OKRs or commit to deadlines on your behalf. The rule of thumb: if a human will be hurt or misled by a bad output, you have to be in the loop.

Do these prompts work better in Claude or ChatGPT?

Claude tends to do better on the long-form analysis prompts (interview transcripts, ticket clustering, competitive teardowns) because of the larger context window and stronger reasoning on structured output. ChatGPT tends to do better on the writing prompts (customer emails, weekly digests, exec updates) because it's tuned for varied prose voice. Most of these 20 prompts work well in either.

How often should I update my AI prompt library as a PM?

Monthly. Models change every few months, and prompts that worked great in one version can underperform in the next. The bigger reason: you'll discover new tasks worth automating as you use the library. The most-used prompt in any PM's library after six months is one they invented for their own specific situation, not one they copied. Treat the 20 above as a starting point, not a final list.
