# The Guerrilla PM Playbook: Operating Without a Team

A complete guide to running world-class product management on 2 hours per week: no research team, no analyst, no design support.

## Overview

The core system is four 30-minute activities, run every week, that compound over time to give you world-class insight without dedicated support functions.

- Monday: 30 min discovery call
- Tuesday: 30 min competitive scan
- Wednesday: 30 min prototype session
- Thursday: 30 min metrics review

## Why This Works

- **Speed advantage:** You learn in days; teams with full support learn in weeks.
- **Clarity:** Constraints force you to ask sharp questions.
- **Testing over research:** You learn by doing, not by theorizing.
- **Ownership:** You can't outsource reality away.
- **Compounding:** Four small activities, repeated weekly, add up to massive insight over time.

## The Weekly Structure

### Monday: Discovery Call (30 min)

**What you do:**
- Call one customer, prospect, or churned customer
- Unstructured conversation
- Ask about their world, their problems, what they're working on
- No pitch. No survey. Just listen.

**How to structure it:**
- Schedule it for 30 minutes
- Start with: "Tell me what you're working on this week"
- Let them talk for 5-10 minutes without interrupting
- Ask why 3-4 times
- Write down exact quotes
- Thank them and hang up

**Where to find customers:**
- Ask your CSM for power users
- Ask Sales for recent wins
- Ask Support for escalations (churned customers often want to explain)
- Check your product telemetry for heavy users
- Post in Slack: "Who wants to chat about how they use the product?"

**What you're listening for:**
- Pain points (things that frustrate them)
- Workarounds (how they work around problems)
- Language (exact words they use)
- Jobs to be done (what they're actually trying to accomplish)
- Constraints (what limits their ability to use your product)

**Writing it down:**
```
Date: [date]
Customer: [name, company, role]
Key quotes:
- "[exact quote about problem]"
- "[exact quote about workaround]"
- "[exact quote about what matters]"

Patterns I'm seeing:
- [one thing]
- [one thing]

Questions for next conversation:
- [something to dig into]
```

**Why 30 minutes is enough:**
- You learn the most in the first 20 minutes
- After that, you're hearing repeats
- One call tells you very little
- Eight calls (one per week) tell you a lot
- Twenty calls (five months) tell you everything
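The compounding only works if your notes are skimmable across weeks. As a minimal sketch, a few lines of Python can surface terms that recur across calls (the quote format matches the template above; the stopword list is an assumption, extend it for your domain):

```python
import re
from collections import Counter

# Common words to ignore when looking for recurring themes (assumed list).
STOPWORDS = {"the", "a", "an", "to", "of", "and", "we", "i", "it", "is", "for", "in", "on", "that"}

def extract_quotes(note_text):
    """Pull exact-quote bullets (lines like: - "...") out of one call note."""
    return re.findall(r'-\s*"([^"]+)"', note_text)

def recurring_terms(notes, min_calls=2):
    """Return terms that appear in at least `min_calls` different call notes."""
    seen = Counter()
    for note in notes:
        # Count each term once per call, so one chatty customer can't dominate.
        words = {w.lower() for q in extract_quotes(note) for w in re.findall(r"[a-zA-Z']+", q)}
        seen.update(words - STOPWORDS)
    return {term: count for term, count in seen.items() if count >= min_calls}
```

Run it over a month of notes and the words customers keep repeating fall out on their own.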

### Tuesday: Competitive Scan (30 min)

**What you do:**
- Let an AI agent scan your competitors
- You skim the results
- Write down what matters
- Move on

**The prompt (copy this):**

```
Here are my 5 competitors: [Competitor A, B, C, D, E]

Please:
1. Check each competitor's website, changelog, and blog
2. What have they shipped in the past week/month?
3. What's their latest positioning/messaging?
4. Any partnerships or integrations announced?
5. Any pricing changes?
6. Any leadership changes?

Format as: Competitor | What shipped | Why it matters | Our response

Be factual, not speculative. If you don't find evidence, say so.
```

**Where to look:**
- Competitor websites (product, pricing, blog)
- Changelog pages (vercel.com/changelog, stripe.com/changelog, etc.)
- Twitter/LinkedIn (recent posts from company accounts)
- G2/Capterra reviews (emerging themes)
- Crunchbase (funding, hiring, exits)
- Industry publications

**What you're tracking:**
- Product launches (what problems are they solving?)
- Pricing changes (who are they targeting?)
- Positioning shifts (how are they talking about their product?)
- Features shipping (what's on their roadmap?)
- Partnerships (are they bundling?)

**Template for tracking:**

```
COMPETITOR SCAN - [DATE]

Competitor A:
- Shipped: [what]
- Positioning: [how they talk about it]
- Likely goal: [why this matters]
- Our response: [what we should do or monitor]

Competitor B:
...
```

**Why 30 minutes is enough:**
- You don't need deep analysis
- You're looking for themes, not details
- AI agents do 80% of the work
- You just need to skim and spot patterns

**What you DO with this information:**
- Every quarter, look back at four competitive scans
- Spot themes in what competitors are doing
- Identify gaps (things competitors aren't addressing)
- Notice if customers are asking for competitor features
- Use in customer conversations: "How does this compare to Competitor X?"
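The quarterly look-back is painless if each weekly scan lands in a dated file. A minimal sketch (directory name and file naming are assumptions):

```python
from datetime import date
from pathlib import Path

def log_scan(scan_text, log_dir="competitive-scans"):
    """Save this week's scan as a dated markdown file, one file per week."""
    Path(log_dir).mkdir(exist_ok=True)
    path = Path(log_dir) / f"scan-{date.today().isoformat()}.md"
    path.write_text(f"COMPETITOR SCAN - {date.today().isoformat()}\n\n{scan_text}\n")
    return path

def quarterly_review(log_dir="competitive-scans", weeks=13):
    """Concatenate the most recent weekly scans for a theme-spotting pass."""
    files = sorted(Path(log_dir).glob("scan-*.md"))[-weeks:]
    return "\n---\n".join(f.read_text() for f in files)
```

The output of `quarterly_review` is also a handy single paste for an AI synthesis prompt.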

### Wednesday: Prototype Session (30 min)

**What you do:**
- Pick one idea worth testing
- Spend 30 minutes building a rough prototype
- Not production code. Not design-perfect. Testable.
- Use Claude Code to generate the foundation
- You customize it
- Deploy it or record it

**How to structure it:**

Step 1 (5 min): Define what you're testing
```
Feature: [what we're testing]
Hypothesis: [what will happen if we build this]
Success metric: [how we'll know if it worked]
```

Step 2 (15 min): Build the prototype
- Start with Claude Code prompt: "Build a prototype for [feature]. Rough is fine. Testable is important."
- Claude generates basic code
- You edit: add real data, change styling, simplify
- Deploy to a URL or record a video

Step 3 (10 min): Test with one customer
- Show it to someone from Monday's call (if relevant)
- Record their reaction
- Ask: "Would you use this?" and listen

**Example prototypes you can build in 30 min:**
- Interactive flow (what happens when user takes action X)
- Pricing page layout
- New dashboard layout
- Signup flow variation
- Email notification
- Mobile interaction
- Search/filter interface
- Data input form

**Tools:**
- Claude Code + local deployment = 10 minutes to testable
- Figma mockup = 20 minutes to testable
- Video recording + narration = 20 minutes to shareable
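If you want a zero-dependency way to put a clickable prototype at a URL, Python's stdlib `http.server` is enough. The feature shown here (a CSV-export form) is purely illustrative:

```python
# prototype.py - serve a one-page prototype with Python's stdlib only.
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = b"""<!doctype html>
<title>Export prototype</title>
<h1>Export your data</h1>
<form method="post">
  <label>Format <select name="fmt"><option>CSV</option><option>JSON</option></select></label>
  <button>Export</button>
</form>"""

class Proto(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(PAGE)

    def do_POST(self):
        # Drain the form body, then fake a result: the point is to watch the
        # customer's reaction, not to ship a working export.
        self.rfile.read(int(self.headers.get("Content-Length") or 0))
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<p>Export started. (Prototype: nothing actually happens.)</p>")

if __name__ == "__main__":
    HTTPServer(("", 8000), Proto).serve_forever()
```

`python prototype.py`, open localhost:8000, and share your screen on the call. Rough is fine; testable is the bar.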

**What you're NOT doing:**
- Building production code
- Waiting for design
- Iterating endlessly
- Making it perfect

**What you're doing:**
- Testing core idea in 30 minutes
- Getting one customer's reaction
- Learning if the direction is right
- Killing bad ideas fast

### Thursday: Metrics Review (30 min)

**What you do:**
- An AI agent pulls this week's metrics
- You read a summary
- You spot what changed
- You note what to watch

**The prompt (copy this):**

```
Pull this week's metrics and compare to:
- Last week
- 4 weeks ago
- Same time last year (if available)

Key metrics I care about:
- [Activation]
- [Retention]
- [Usage]
- [Revenue or key business metric]

Format as:
Metric | This week | Last week | Change | Trend | What changed

Also flag:
- Biggest positive change
- Biggest negative change
- Anything anomalous
```


**Metrics to track (pick 3-5):**

Customer metrics:
- Activation (% users reaching key moment)
- Retention (% active users D30)
- Usage (daily/weekly active, feature adoption)
- Expansion (upgrades, seat growth)
- Churn (% churned, reasons)

Product metrics:
- Feature adoption (% using new feature)
- Feature usage (how often)
- User journey abandonment (where do they drop?)
- Time to value (how long to first success)
- NPS or CSAT

Business metrics:
- ARR/MRR
- Customer acquisition cost
- Lifetime value
- Payback period
- Net retention

**What you're looking for:**

```
METRICS REVIEW - [DATE]

Activation:
- This week: 35%
- Last week: 33%
- Change: +2pp
- Trend: Up (good)
- Analysis: Onboarding change from Tuesday is working

Retention:
- This week: 92%
- Last week: 92%
- Change: Flat
- Trend: Flat
- Analysis: No change. Churn still 8% monthly.

Usage:
- This week: 2.1 features per user
- Last week: 1.9
- Change: +10%
- Trend: Up
- Analysis: Last month's feature launch is being adopted

Concern:
- [New metric that's off] - need to investigate
```

**Why 30 minutes is enough:**
- You're not doing deep analysis
- You're reading a summary
- You're spotting trends, not doing math
- AI agent does the heavy lifting

**What to do with this:**
- Notice patterns over time (four weeks of this = clear signal)
- Validate whether features worked (did it move the metric?)
- Spot problems early (churn jumping? Investigate.)
- Make decisions (data supports or disproves feature idea)
- Share with team (one graph, one sentence)
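The comparison logic behind the review is simple enough to sketch in a few lines of Python (metric names and values are examples, not real data):

```python
def metrics_delta(this_week, last_week):
    """Compare two {metric: value} dicts; return summary rows plus the
    biggest positive and negative movers, mirroring the review format above."""
    rows, changes = [], {}
    for name, now in this_week.items():
        prev = last_week.get(name)
        pct = (now - prev) / prev * 100 if prev else None
        changes[name] = pct
        trend = "up" if pct and pct > 0 else "down" if pct and pct < 0 else "flat"
        rows.append(f"{name}: {now} (last week {prev}, {trend})")
    moved = {k: v for k, v in changes.items() if v is not None}
    best = max(moved, key=moved.get) if moved else None
    worst = min(moved, key=moved.get) if moved else None
    return rows, best, worst

rows, best, worst = metrics_delta(
    {"activation": 35, "retention": 92, "features_per_user": 2.1},
    {"activation": 33, "retention": 92, "features_per_user": 1.9},
)
```

In practice the AI agent does this for you; the sketch just shows there's no magic in the "heavy lifting."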

## The 4-Week Ramp-Up Plan

### Week 1: Just Do It

Run the system as designed. No optimization. Just 2 hours.

**Check at end of week:**
- Did you do all four activities?
- Did you learn something?
- Did anything feel impossible?

### Week 2: Refine the Format

You've done it once. Now optimize:
- What time works best for discovery calls? Lock it in.
- What format for competitive scan is most useful? Lock it in.
- Are your metrics the right ones? Adjust.
- What's your note-taking style? Standardize it.

Don't change the structure. Just optimize within it.

### Week 3: Add One Analysis Layer

Start connecting things:
- "Customer mentioned problem X in call. Is that showing up in churn?"
- "Competitor launched feature Y. Are customers asking us for it?"
- "New feature adoption is 40%. Customer says it's solving problem Z."

You're not changing the 2-hour commitment. You're just spotting patterns.

### Week 4: Make It Systematic

By now, you have:
- 4 customer calls (16 quotes, 4 patterns)
- 4 competitive scans (16 data points)
- 4 prototypes (4 validation points)
- 4 metrics reviews (data over time)

You're starting to see real signal. Create a simple system:

**Weekly digest (takes 10 min Friday):**
```
Weekly discoveries:
- Customer insight: [one thing from calls]
- Competitive move: [one thing from scan]
- Product validation: [one thing from prototype]
- Metric alert: [one thing from metrics]

Next week's focus: [one area to investigate more]
```
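Assembling the Friday digest can itself be a fill-in-the-blanks script. A sketch (field names match the template above):

```python
from datetime import date

DIGEST_TEMPLATE = """Weekly discoveries - {date}:
- Customer insight: {insight}
- Competitive move: {move}
- Product validation: {validation}
- Metric alert: {alert}

Next week's focus: {focus}
"""

def weekly_digest(insight, move, validation, alert, focus):
    """Fill the Friday digest from the week's four activities."""
    return DIGEST_TEMPLATE.format(
        date=date.today().isoformat(), insight=insight, move=move,
        validation=validation, alert=alert, focus=focus,
    )
```

Ten minutes on Friday, one paste into Slack, and the week's signal is on the record.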

## AI Agents for Guerrilla PM

Here's how to use AI to do the work for you:

### Agent 1: Competitive Intelligence

**What it does:**
- Scans competitor websites
- Pulls changelog updates
- Monitors pricing changes
- Watches for partnerships

**Prompt template:**
```
Monitor these competitors: [list]
I care about:
- Product launches
- Pricing changes
- New integrations
- Positioning shifts

Report on: [timeframe]
Format as: Competitor | Change | Impact | Our response
```

**Tools:**
- Claude Code with web search
- Set up as a weekly task or run manually

### Agent 2: Customer Insights Synthesizer

**What it does:**
- Analyzes customer call notes
- Extracts patterns
- Surfaces contradictions
- Identifies themes

**Prompt template:**
```
Analyze these customer calls: [paste notes]
Extract:
- Top 3 pain points (what frustrates them most)
- Top 3 workarounds (what do they do instead)
- Top 3 jobs to be done (what are they actually trying to accomplish)
- Language they use (exact words for positioning)
- Segments (are different customers saying different things?)
```

**Tools:**
- Paste call notes into Claude
- Weekly synthesis (every 4-5 calls)

### Agent 3: Metrics Analyzer

**What it does:**
- Pulls metrics from your analytics platform
- Compares over time
- Flags anomalies
- Suggests follow-up questions

**Prompt template:**
```
Connect to [Amplitude/Mixpanel/etc]
Pull these metrics: [list]
Compare: This week vs last week vs 4 weeks ago vs YoY
Flag:
- Biggest improvement
- Biggest decline
- Anything anomalous
- Metrics to investigate further
```

**Tools:**
- Your analytics platform API
- Claude Code to run queries
- Automated weekly (Tuesday or Friday)

### Agent 4: Prototype Generator

**What it does:**
- Generates code for interactive prototypes
- Builds based on your description
- Handles styling and interaction
- You customize

**Prompt template:**
```
Build a prototype for: [describe feature]
Users should be able to:
- [action 1]
- [action 2]
- [action 3]

Use: [React/Vue/plain HTML/etc]
Deploy to: [Vercel/Replit/etc]
Timeline: 20 minutes

Don't make it perfect. Make it testable.
```

**Tools:**
- Claude Code
- Deploy to Vercel, Replit, or local server

## Common Scenarios

### Scenario 1: You Have a Feature Idea

Week 1:
- Call a customer. Ask if they have the problem you're solving. (Discovery)
- Check if competitors have this feature. (Competitive scan)
- Prototype a rough version. (Prototype)
- Check if any metric would move if this worked. (Metrics review)

Week 2:
- Call two more customers about the feature. (Discovery)
- Show them the prototype. Record reactions. (Prototype)
- Check feature adoption for competitors. (Competitive scan)
- Estimate impact on key metrics. (Metrics review)

Decision: Build or kill, based on evidence, by week 2.

### Scenario 2: A Metric Drops

Happens Thursday (metrics review).

Friday:
- Call a customer and ask if they noticed the change. (Discovery)
- Check if competitors did something. (Competitive scan)
- Prototype a potential fix. (Prototype)
- Analyze the metric further. (Metrics review)

Monday:
- Roll out the fix or investigate further.

You don't spend two weeks on root cause analysis. You investigate in real time and test fixes immediately.

### Scenario 3: A Customer Churns

CS tells you a customer left. You want to understand why.

1. Call them. Ask what happened. Listen. (30 min, Monday)
2. Check if other customers are seeing the same problem. (30 min, Tuesday)
3. Prototype a fix if there's a product problem. (30 min, Wednesday)
4. Check if the metric affected others. (30 min, Thursday)

You move from "customer churned" to "here's why, here's what we're doing" in one week.

## Scaling Guerrilla PM

When you're successful with 2 hours/week, teams will ask you to do more. Protect the 2 hours. They're non-negotiable.

If you need to do more:

**Add a second strategic PM** who runs the same system on a different product area. Don't scale yourself. Clone the system.

**Add a 1-2 hour research sprint** once per quarter for deep dives. But don't abandon the weekly system. The weekly system is your baseline.

**Hire a junior analyst** to do Thursday metrics reviews. But do the first three months yourself so you know what matters.

The system is designed to be solo. It stays tight that way.

## Measuring Success

After 4 weeks:
- Did you make faster decisions? (Yes/No)
- Did you ship fewer features that failed? (Yes/No)
- Do you understand your customers better? (Yes/No)
- Are you surprised less often? (Yes/No)

If the answer to most of these is yes, the system is working. Keep going.

After 12 weeks:
- Has feature success rate improved?
- Are your metrics moving in the right direction?
- Is your team happier working with you?
- Are you shipping faster?

If yes, expand or maintain. If no, debug what's wrong.

## The Discipline Required

The hardest part isn't the 2 hours. It's the discipline to:

1. **Not add more.** You will be tempted to do "real research" or hire support. Don't. The system works because it's constrained.

2. **Actually do it.** You will be tempted to skip when busy. Don't. It's when you're busiest that you most need the insight.

3. **Act on it.** Discovery is useless if you ignore it. If a customer says something, do something about it.

4. **Track it.** Write things down. You won't remember patterns from memory.

5. **Repeat.** The magic is in the repetition. Week one teaches you nothing. Month three teaches you everything.

Start this week. Block 2 hours. Do the four activities. Next week, do it again. Month three, you'll wonder how you ever did PM without this system.
