Falk Gottlob · 4 min read

Opportunity Prioritization and Synthesis Agent

Weekly synthesis of all your DISCOVER agent outputs into a prioritized opportunity stack: opportunities ready to drop into an Opportunity Solution Tree (OST), with impact estimates and dependencies.

agents · strategy · prioritization

Try it live
See this agent running in the sandbox

Stream a simulated run, inspect the notifications it would send to Slack and email, and see exactly where it sits in the 7-stage PM OS flow. No password required.

The short version

The Opportunity Prioritization agent reads the outputs of all five DISCOVER agents (Support Signal Processing, NPS/CSAT Analysis, Interview Synthesis, Journey Mapping, Customer Segmentation) and produces one prioritized opportunity stack every Friday at 11 AM. Three layers of synthesis: cross-dataset validation (opportunities appearing in 2+ reports get high confidence), impact estimation (segment size × WTP × churn reduction), and dependency mapping (fixing A unlocks B). The output is OST-ready: problem statement, evidence, affected segments, estimated impact, rough effort. Going from five scattered reports to one actionable list cuts roadmap planning to a fraction of the time.

You run five discovery agents. Support signals, NPS drivers, interview themes, journey maps, segment updates. Every week you get five reports. Now what?

The problem: synthesis takes hours. You sit with all five reports and try to find the connections. Does the interview theme match the journey friction? Is the NPS driver the same as the support pain point? How do these opportunities rank against each other? Which are dependencies?

The Opportunity Prioritization agent does this synthesis automatically. Every Friday, it reads all five DISCOVER agent outputs and produces a single prioritized list of opportunities. Each opportunity has: impact estimate, segment applicability, implementation dependencies, and a 1-paragraph problem statement.
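As a data structure, each opportunity in that list might look like the following sketch. The field names and example values are illustrative, not the agent's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Opportunity:
    """One entry in the weekly prioritized stack (hypothetical schema)."""
    problem_statement: str                 # 1-paragraph description of what's broken
    impact_estimate: float                 # rough annual revenue / churn impact
    segments: list[str] = field(default_factory=list)      # who feels the pain most
    dependencies: list[str] = field(default_factory=list)  # fix-these-first items

onboarding = Opportunity(
    problem_statement="New customers take 3+ weeks to reach first value.",
    impact_estimate=26_000.0,
    segments=["mid-market"],
    dependencies=["confusing data mapping"],
)
```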

You go from five scattered reports to one actionable OST.

How It Works

The agent ingests all DISCOVER outputs and applies three layers of synthesis:

Cross-dataset validation: If three different reports point to the same issue (support signals, interview themes, journey friction), it's a high-confidence opportunity. If only one report mentions it, it's lower confidence. The agent surfaces the strongest signals first.
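A minimal sketch of the cross-dataset validation step in Python. The report and theme names are made up for illustration, and the real agent works on free-text reports rather than tidy sets, but the counting logic is the same:

```python
from collections import defaultdict

# Hypothetical outputs: each report maps to the opportunity themes it surfaced.
reports = {
    "support_signals": {"slow onboarding", "confusing data mapping"},
    "nps_csat": {"slow onboarding"},
    "interviews": {"slow onboarding", "export limits"},
    "journey_mapping": {"confusing data mapping"},
    "segmentation": set(),
}

def score_confidence(reports):
    """Count how many independent reports mention each theme; strongest first."""
    sources = defaultdict(list)
    for report, themes in reports.items():
        for theme in themes:
            sources[theme].append(report)
    ranked = sorted(sources.items(), key=lambda kv: len(kv[1]), reverse=True)
    # 2+ reports -> HIGH confidence, a single mention -> LOWER
    return [(theme, mentions, "HIGH" if len(mentions) >= 2 else "LOWER")
            for theme, mentions in ranked]

for theme, mentions, confidence in score_confidence(reports):
    print(f"{confidence:5s} {theme} ({len(mentions)} reports)")
```

With this data, "slow onboarding" surfaces first because three independent reports mention it.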

Impact estimation: Using segment size, willingness-to-pay, and engagement data, the agent estimates: how many customers would this affect? How much would it reduce churn? Would it enable new revenue? Rough estimates, but better than guessing.
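The impact arithmetic can be sketched as one way to combine those inputs; every number below is an assumption you would plug in, not measured truth, and the agent's actual arithmetic may differ:

```python
def estimate_impact(affected_customers, arpu, churn_reduction, new_revenue=0.0):
    """Rough annual impact: revenue retained by lower churn plus any
    new revenue the fix would enable. All inputs are rough guesses."""
    retained = affected_customers * arpu * churn_reduction
    return retained + new_revenue

# Example: 40 affected customers at $5,000 ARR each, a 3% churn reduction,
# plus an assumed $20,000 of enabled expansion revenue.
impact = estimate_impact(40, 5_000, 0.03, new_revenue=20_000)
print(f"${impact:,.0f}")  # → $26,000
```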

Dependency mapping: Does fixing opportunity A unlock opportunity B? Does solving the "slow onboarding" issue depend on first fixing "confusing data mapping"? The agent maps these so you can prioritize implementation sequentially.
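Dependency mapping is a topological-ordering problem. A sketch using Python's standard-library `graphlib`; the opportunity names and edges are hypothetical:

```python
from graphlib import TopologicalSorter

# Each key depends on the opportunities in its set (hypothetical edges).
deps = {
    "slow onboarding": {"confusing data mapping"},
    "self-serve upgrades": {"slow onboarding"},
    "confusing data mapping": set(),
    "export limits": set(),
}

# static_order() yields prerequisites before the opportunities that need them.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

Here "confusing data mapping" surfaces before "slow onboarding", matching the sequencing example above.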

Data Sources and Setup

Prerequisites: All five DISCOVER agents should be running:

  • Support Signal Processing
  • NPS/CSAT Analysis
  • Interview Synthesis
  • Journey Mapping
  • Customer Segmentation

Schedule: Weekly Friday at 11 AM. Reads outputs from the previous week's DISCOVER runs.
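If you host the agent yourself, that cadence maps to a standard cron entry (the script path is hypothetical):

```
# min hour day-of-month month day-of-week (5 = Friday)
0 11 * * 5  /usr/local/bin/run_opportunity_prioritization
```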

The Claude Prompt

You are synthesizing discovery data into a prioritized opportunity list.

Here are this week's DISCOVER agent outputs:

**Support Signals Report**:
[SUPPORT REPORT]

**NPS/CSAT Analysis**:
[NPS REPORT]

**Interview Synthesis**:
[INTERVIEW REPORT]

**Journey Mapping**:
[JOURNEY REPORT]

**Customer Segmentation**:
[SEGMENTATION REPORT]

Here's our customer base and financial context:
[SEGMENT SIZES, ARR, GROWTH METRICS]

Please synthesize and report:

1. **Cross-Dataset Validation**
   - Which opportunities appear in 2+ reports? (HIGH confidence)
   - Which opportunities appear in only 1 report? (LOWER confidence)
   - Are there contradictions between reports? How do you explain them?

2. **Consolidated Opportunity List**
   For each opportunity, show:
   - **Problem statement**: 1-2 sentence problem (what's broken? who does it affect?)
   - **Evidence**: Which reports validate this? What specific data points?
   - **Affected segments**: Which customer groups feel this pain most?
   - **Estimated impact**: How many customers affected? Revenue impact if you fixed it?
   - **Rough effort**: Is this a small, medium, or large effort?

3. **Impact Ranking**
   - Prioritize by: (impact × segment size × willingness to fix) / effort
   - Show top 10 opportunities in priority order
   - Explain the ranking rationale for the top 3

4. **Dependency Mapping** (IMPORTANT)
   - Does opportunity A depend on opportunity B?
   - Are there clusters of related opportunities?
   - In what order should you tackle them?

5. **Segment-Specific Opportunities**
   - Are different segments asking for different things?
   - Should you build a segment-specific feature set?

6. **Surprising Findings**
   - What's the most unexpected insight from this week's reports?
   - What patterns do you see that the individual reports missed?

Format as an OST-ready list. Make it concrete enough that an engineer could estimate effort, but focused enough that it's usable for prioritization.
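The ranking formula in step 3 can be made concrete with a small worked example; all scores, sizes, and opportunity names below are illustrative assumptions:

```python
def priority_score(impact, segment_size, willingness, effort):
    """(impact x segment size x willingness) / effort, as in step 3 of the prompt."""
    return (impact * segment_size * willingness) / effort

opportunities = [
    # (name, impact 1-10, segment size in customers, willingness 0-1, effort 1-10)
    ("slow onboarding", 8, 120, 0.9, 5),
    ("export limits", 4, 300, 0.5, 2),
    ("confusing data mapping", 7, 120, 0.8, 3),
]

ranked = sorted(opportunities, key=lambda o: priority_score(*o[1:]), reverse=True)
for name, *params in ranked:
    print(f"{priority_score(*params):7.1f}  {name}")
```

With these numbers, "export limits" ranks first (score 300) despite lower per-customer impact, because the effort is small.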

What You Get

Instead of five separate reports you have to manually reconcile:

  • Single prioritized list: All discovery synthesized into one OST
  • Confidence scoring: You know which opportunities are validated by multiple signals vs. single data points
  • Impact visibility: Not just "customers want X" but "40 customers want X, and it would reduce churn by 3%"
  • Implementation sequencing: "Fix Y first, it unblocks three other opportunities"

Real outcomes:

  • Roadmap planning is 10x faster (one list instead of five)
  • You catch consensus opportunities early (multiple reports = real pain)
  • You stop building things nobody asked for

For the full agent fleet and scheduling details, see Your AI Agent Fleet.
