# Signal-to-Ship Cycle Time Agent

**Agent Name:** Signal-to-Ship Cycle Time Agent
**Role:** Tracks every in-flight product change through all seven stages of the PM Operating System and surfaces the bottleneck
**Frequency:** Daily 9:00 AM (snapshot) + Weekly Monday 9:00 AM (trend digest)
**Output Channel:** Slack (#product-flow) + weekly email to product leadership
**Run Time:** ~5 minutes daily, ~15 minutes weekly

---

## PURPOSE

Most PM orgs debate their speed in anecdotes. "We're faster than last year." "Engineering is the bottleneck." "Sales is always rushing us."

None of those claims comes with a number. This agent puts numbers behind all of them.

It tracks every active item in your product portfolio through the seven stages of the PM Operating System (Sense → Discover → Decide → Build → Ship → Measure → Amplify), computes median and P90 time-in-stage and total cycle time across the portfolio, names the current bottleneck stage with a specific diagnosis, and flags items that have been stuck too long in any stage.

The compounding value is that once the bottleneck stage is visible, the team routes around it. Week by week, cycle time drops. Within a quarter, median Sense-to-Amplify cycle time often falls by 50-70 percent. This is the agent that tells you whether the PM transformation is real or performative.

---

## THE SEVEN PM OS STAGES

| # | Stage | Definition | Primary stage-exit signal |
|---|-------|------------|---------------------------|
| 1 | Sense | A signal has been detected and tagged to a logical project | A signal id (ticket, Gong moment, Slack thread, churn note) is linked to a `project_id` in the identity map |
| 2 | Discover | The signal has been researched and synthesized into a hypothesis | A hypothesis record has been attached to the project (from Interview Synthesis, Journey Mapping, or manual PM write-up) |
| 3 | Decide | The opportunity is prioritized and committed to a sprint | Project is on a sprint plan with target outcome and scored confidence (from Opportunity Prioritization + Sprint Planning agents) |
| 4 | Build | Production code exists and is instrumented | Main-branch PR merged AND project linked to production analytics events |
| 5 | Ship | The feature is in front of eligible customers, GTM-ready | Feature flag rollout = 100% of eligible segment AND launch checklist complete (from Release Readiness + GTM Monitoring agents) |
| 6 | Measure | Impact has been measured; outcome is signed | Experiment/observational reading attached with conclusion (win, loss, null) |
| 7 | Amplify | Learning has been broadcast to the organization | Retro note or internal post references the outcome and the lesson |

An item is in exactly one stage at a time. The timestamp of stage entry is the earliest observed stage-exit signal from the prior stage.
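In code, the stage model reduces to a pair of small pure functions. A minimal sketch, assuming each stage-exit signal has already been resolved to a timestamp (or `None` when the project has not yet entered that stage):

```python
from datetime import datetime, timezone
from typing import Optional

STAGES = ["Sense", "Discover", "Decide", "Build", "Ship", "Measure", "Amplify"]

def current_stage(entry_ts: dict[str, Optional[datetime]]) -> tuple[str, datetime]:
    """Return (stage, entry_timestamp) for the highest-numbered stage entered.

    entry_ts maps stage name -> earliest observed stage-exit signal of the
    prior stage, or None if the project has not entered that stage yet.
    """
    entered = [(i, s) for i, s in enumerate(STAGES) if entry_ts.get(s) is not None]
    if not entered:
        raise ValueError("project has not entered Sense yet")
    _, stage = max(entered)                      # highest-numbered stage wins
    return stage, entry_ts[stage]

def time_in_stage(entry_ts: dict, now: Optional[datetime] = None) -> tuple[str, int]:
    now = now or datetime.now(timezone.utc)
    stage, entered_at = current_stage(entry_ts)
    return stage, (now - entered_at).days
```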

---

## PROJECT IDENTITY MAP (SOURCE OF TRUTH)

This is the one piece of config that requires human maintenance. Keep it as `project_map.yaml` in your repo. The PM who owns a project is responsible for keeping its row current; expect ten to twenty minutes a week for a portfolio of 30-50 items.

```yaml
projects:
  - id: sso_smb
    name: "SSO for SMB"
    owner_pm: "@falk"
    eng_lead: "@priya"

    # SENSE
    signal_sources:
      zendesk_tags: ["auth", "sso"]
      gong_topics: ["SSO", "single sign on"]
      salesforce_opportunity_ids: ["006Ab00001Xy00A", "006Ab00001Xy00B"]

    # DISCOVER
    hypothesis_doc: "notion://workspace/research/sso-smb-hypothesis"
    interview_tags: ["sso-smb-research"]

    # DECIDE
    jira_epic: "AUTH-214"
    ost_node_id: "ost-q2-auth-12"
    target_outcome: "activate 40% more SMB trials with SSO enabled"

    # BUILD
    github_labels: ["feature/sso-smb"]
    production_event: "auth.sso.login.completed"

    # SHIP
    feature_flags: ["sso_smb_v1"]
    launch_tracker_id: "LAUNCH-2026-042"

    # MEASURE
    amplitude_feature: "sso_smb_v1"
    experiment_id: "exp-sso-smb-rollout"

    # AMPLIFY
    retro_tag: "sso-smb-retro"
    broadcast_channel: "#wins"

    target_cycle_days: 30
```

Without this map, the agent cannot know that the ticket in Zendesk, the epic in Jira, the PR in GitHub, the flag in LaunchDarkly, the Amplitude feature, and the retro note are all the same logical project. With it, every downstream computation is trivial.
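Loading the map and joining external identifiers back to the logical project takes only a few lines. A minimal sketch, assuming PyYAML and the field names shown above; the keys covered by the reverse index are illustrative, not exhaustive:

```python
import yaml

def load_project_map(path: str = "project_map.yaml") -> list[dict]:
    with open(path) as f:
        return yaml.safe_load(f)["projects"]

def build_reverse_index(projects: list[dict]) -> dict[str, str]:
    """Map external identifiers (Jira epic, launch tracker, experiment id, ...)
    back to the logical project id so incoming signals can be joined."""
    index = {}
    for p in projects:
        for key in ("jira_epic", "launch_tracker_id", "experiment_id", "amplitude_feature"):
            if p.get(key):
                index[p[key]] = p["id"]
        for flag in p.get("feature_flags", []):
            index[flag] = p["id"]
    return index
```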

---

## DATA SOURCE WIRING

| PM OS stage | Source | Purpose | MCP / connector |
|-------------|--------|---------|-----------------|
| Sense | Zendesk | Support tickets | MCP Zendesk connector |
| Sense | Gong | Customer calls | MCP Gong connector; transcripts in Weaviate |
| Sense | Salesforce | Opportunity notes, churn events | MCP Salesforce connector |
| Sense | Slack | #customer-feedback threads | MCP Slack connector |
| Discover | Weaviate | Semantic search over transcripts & notes | Vendor API |
| Discover | Google Drive / Notion | Research docs, hypothesis write-ups | MCP connector |
| Decide | Jira / Linear | Sprint plan, OST board, scored opportunities | MCP connector |
| Build | GitHub / GitLab | PR merge times, first-commit timestamps, release tags | MCP connector |
| Build | Analytics | Production event wiring | Amplitude/Mixpanel/Pendo API |
| Ship | Feature flag system | Rollout percentage timestamps | LaunchDarkly/Statsig/Flagsmith API |
| Ship | Launch tracker | Launch checklist completion | Airtable / Notion / internal tool API |
| Measure | Experiment platform | Signed readings and conclusions | Statsig / Eppo / in-house |
| Measure | Amplitude/Mixpanel | Adoption and retention curves | Vendor API |
| Amplify | Internal broadcast channels | #wins, exec report threads | Slack / Notion |
| Amplify | Retro docs | Retrospective Synthesis output | Notion / Confluence |

---

## THE CORE LOOP

```
Daily at 9:00 AM:
  for project in project_map.yaml:
    For each of the 7 stages, pull the earliest-observed stage-exit signal.
      Sense      : signal_id linked to project_id
      Discover   : hypothesis attached
      Decide     : on sprint + target outcome
      Build      : PR merged + analytics wired
      Ship       : flag 100% + launch done
      Measure    : signed reading attached
      Amplify    : retro note / broadcast referencing outcome
    If no signal, the project has not entered that stage yet.
    Current stage = highest-numbered stage the project has entered.
    Compute time_in_stage = now - timestamp(current stage entry).
    Compute total_cycle_time = now - timestamp(Sense entry).
    Stamp project record.

  Aggregate:
    median_time_in_stage[stage] for each of 7 stages
    P90_time_in_stage[stage] for each of 7 stages
    median_total_cycle_time, P90_total_cycle_time
    bottleneck_stage = max deviation from target_cycle_days, weighted by volume

  Identify stuck items:
    For each project, if time_in_stage > P90_time_in_stage[current_stage]
      AND no git/jira/slack activity in last 5 days:
        flag as stuck.
    For each stuck item, pull last 5 activities and let an LLM produce
      a one-line diagnosis.

  Post daily snapshot to #product-flow.

Weekly on Monday at 9:00 AM:
  Run the daily loop, then:
    - Compare this week's median time_in_stage to 4-week rolling avg.
    - Identify stages improving fastest and stages degrading fastest.
    - Run the bottleneck-detection prompt (below).
    - Post the weekly digest.
```
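The aggregation and stuck-item steps are simple enough to sketch directly. The snapshot record shape and the per-stage target (assumed here to be an even split of `target_cycle_days` across the seven stages) are illustrative assumptions, not the only reasonable choices:

```python
import math
import statistics
from collections import defaultdict

def p90(values: list[float]) -> float:
    # Nearest-rank P90; adequate for portfolio-sized samples.
    ordered = sorted(values)
    rank = math.ceil(0.9 * len(ordered))          # 1-based nearest rank
    return ordered[rank - 1]

def aggregate(snapshots: list[dict]):
    """snapshots: one dict per active item, e.g.
    {"project": "sso_smb", "stage": "Build", "days_in_stage": 12,
     "total_cycle_days": 41, "target_cycle_days": 30, "idle_days": 6}"""
    by_stage = defaultdict(list)
    for s in snapshots:
        by_stage[s["stage"]].append(s["days_in_stage"])

    stage_stats = {
        stage: {"median": statistics.median(d), "p90": p90(d), "n": len(d)}
        for stage, d in by_stage.items()
    }

    def overrun(stage: str) -> float:
        # Assumed per-stage target: target_cycle_days split evenly across 7 stages.
        per_stage_target = statistics.median(
            s["target_cycle_days"] / 7 for s in snapshots if s["stage"] == stage
        )
        st = stage_stats[stage]
        return (st["median"] - per_stage_target) * st["n"]   # weighted by volume

    bottleneck = max(stage_stats, key=overrun)

    stuck = [
        s for s in snapshots
        if s["days_in_stage"] > stage_stats[s["stage"]]["p90"] and s["idle_days"] >= 5
    ]
    return stage_stats, bottleneck, stuck
```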

---

## BOTTLENECK DETECTION PROMPT

Run against Claude or equivalent. The four-week history + current-week delta + stuck-item list is the minimum context the agent needs.

```
You are a product operations analyst. You look at portfolio flow data and
name the bottleneck stage in the team's PM Operating System pipeline.

The pipeline has exactly seven stages, in order:
  Sense → Discover → Decide → Build → Ship → Measure → Amplify

Team: {team_name}
Number of active items: {count}

Historical median time-in-stage (days) for the last 4 weeks, per stage:
{stage_time_history_table}

Current week vs this team's historical median, per stage:
{stage_deltas}

Current week vs target_cycle_days (per project, per stage):
{stage_vs_target}

Items currently in each stage, with time-in-stage and last activity:
{items_by_stage}

Stuck items (time_in_stage > P90 for that stage):
{stuck_items_with_diagnosis_notes}

Produce a report with four sections:

1. Portfolio health
   - Is median total cycle time improving, flat, or degrading vs last 4 weeks?
   - Is P90 improving, flat, or degrading?
   - State each with the specific numbers.

2. Bottleneck stage
   - Name the PM OS stage where the team is spending disproportionate time
     relative to both their own history AND their target_cycle_days.
   - If two stages qualify, name both, say which is primary, explain why.
   - If fewer than 5 items are in any candidate stage, refuse to diagnose
     that stage and say the sample is too small.

3. Likely cause of the bottleneck
   - Based on the stuck-item list, propose a specific cause.
   - "Build is slow" is not a cause. "Four stuck items in Build are waiting
     on the same reviewer" is. "Three items have been in Amplify for 20+
     days with no retro note or broadcast" is.
   - Flag the single highest-leverage unblocker. It should be a concrete
     action, not a process change.

4. What improved
   - Which stage or which items moved fastest in the last 4 weeks?
   - If something improved materially, credit the specific change that
     correlated with the improvement (new ritual, new hire, new tool).
   - Brief. Two lines max.

Tone: direct, numerical, no hedging on the data but honest about uncertainty
on causes. No emoji. No motivational language.
```
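One way to wire this prompt, sketched with the Anthropic Python SDK. The model name is a placeholder, and the prompt text above is assumed to be stored as a Python format string:

```python
import anthropic

def run_bottleneck_prompt(template: str, context: dict) -> str:
    """Fill the bottleneck-detection prompt and return the weekly report.
    `context` carries the pre-formatted tables: team_name, count,
    stage_time_history_table, stage_deltas, stage_vs_target,
    items_by_stage, stuck_items_with_diagnosis_notes."""
    client = anthropic.Anthropic()              # reads ANTHROPIC_API_KEY from the env
    response = client.messages.create(
        model="claude-sonnet-4-20250514",       # placeholder; use whatever you run
        max_tokens=1500,
        messages=[{"role": "user", "content": template.format(**context)}],
    )
    return response.content[0].text
```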

---

## SLACK DIGEST TEMPLATE (WEEKLY)

```
:chart_with_upwards_trend: SIGNAL-TO-SHIP DIGEST, Week of {week_start_date}

Portfolio health
• {n_active} active items.  Median total cycle: {median_total}d.  P90: {p90_total}d.
• Compared to 4-week avg: median {median_delta}, P90 {p90_delta}.

Time-in-stage (median days | target | status):
  Sense      {t1}  ({target1})  {status1}
  Discover   {t2}  ({target2})  {status2}
  Decide     {t3}  ({target3})  {status3}
  Build      {t4}  ({target4})  {status4}
  Ship       {t5}  ({target5})  {status5}
  Measure    {t6}  ({target6})  {status6}
  Amplify    {t7}  ({target7})  {status7}

Bottleneck: {bottleneck_stage}.
  Likely cause: {cause}.
  Highest-leverage unblocker: {unblocker}.

Stuck items (in stage longer than P90):
• {item_1}, {stage_1}, {days_1}d. {diagnosis_1}
• {item_2}, {stage_2}, {days_2}d. {diagnosis_2}
...

Improving fastest:
• {improvement_1}
```

The daily snapshot is a shorter version: stuck-item list + any stage that crossed a threshold in the last 24 hours.
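Posting is the least interesting step; for completeness, a sketch assuming `slack_sdk` and a bot token with the `chat:write` scope:

```python
import os
from slack_sdk import WebClient

def post_digest(text: str, channel: str = "#product-flow") -> None:
    """Send the rendered daily snapshot or weekly digest to Slack."""
    client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])
    client.chat_postMessage(channel=channel, text=text)
```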

---

## ESCALATION RULES

Three tiers, configured per project via `target_cycle_days` and a multiplier.

- **Note.** Item enters a stuck state. Post to #product-flow with the diagnosis. No @-mention.
- **Alert.** Item has been stuck for 2x the P90 for its stage. @-mention the owner_pm and eng_lead.
- **Intervention.** Total cycle time for the item has exceeded 2x the `target_cycle_days`. The agent schedules a 20-minute unblocker meeting on the owner_pm's and eng_lead's calendars with the diagnosis in the description. Calendar invite auto-sent.
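The tier decision itself is a few lines. A sketch, assuming the item record carries the fields computed in the daily loop; the 5-day idle threshold mirrors the stuck definition in the core loop:

```python
from typing import Optional

def escalation_tier(item: dict, p90_for_stage: float) -> Optional[str]:
    """Return "intervention", "alert", "note", or None for an active item."""
    stuck = item["days_in_stage"] > p90_for_stage and item["idle_days"] >= 5
    if item["total_cycle_days"] > 2 * item["target_cycle_days"]:
        return "intervention"   # schedule the 20-minute unblocker meeting
    if item["days_in_stage"] > 2 * p90_for_stage:
        return "alert"          # @-mention owner_pm and eng_lead
    if stuck:
        return "note"           # post diagnosis to #product-flow, no @-mention
    return None
```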

---

## WHAT THIS AGENT DOES NOT DO

- It is not a productivity dashboard. It does not rank individual PMs or engineers. Cycle time is a system property.
- It does not forecast. Use the Engineering Capacity agent for forecasting.
- It does not prioritize. Use the Opportunity Prioritization agent for that.
- It is a process metric, not an impact metric. Pair it with the KPI Watchdog, Feature Adoption, and Product Health agents.

---

## SETUP CHECKLIST

- [ ] Data sources connected for all 7 stages (Zendesk, Gong, Salesforce, Slack, Jira, GitHub, feature-flag system, launch tracker, analytics, experiment platform, retro docs)
- [ ] `project_map.yaml` created with at least 5 projects, each with fields for all 7 stages
- [ ] Stage-exit signals tested end-to-end on one reference project
- [ ] Daily and weekly cron jobs running
- [ ] #product-flow channel exists and owner_pm list is current
- [ ] PM leadership has confirmed `target_cycle_days` per project
- [ ] First weekly digest reviewed with the product leadership team (sanity check before automating)

---

## TUNING NOTES (FIRST 30 DAYS)

Weeks 1-2: **The numbers are weird.** Time-in-stage will be off for many projects because historical timestamps are messy. Expect the first digest to have 2-3 obvious errors. Fix each by adjusting `project_map.yaml` or the stage-exit signal for that case. Iterate.

Week 3: **The bottleneck named is wrong.** The detector leans too heavily on the median and under-weights P90. Tune the bottleneck prompt to include both, and to require a minimum sample size.

Week 4: **The stuck-item diagnosis is too generic.** Add more context to the diagnosis step: pull the last 3 comments on the ticket, the last Slack activity in the project's channel, and the PR review status, as in the sketch below. The specific detail is what makes the diagnosis actionable.
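A sketch of the richer diagnosis prompt; the item fields and the three context arguments are assumed to come from whatever connectors are already wired:

```python
def stuck_item_diagnosis_prompt(item: dict, ticket_comments: str,
                                slack_activity: str, pr_reviews: str) -> str:
    """Build the enriched one-line-diagnosis prompt for one stuck item."""
    return (
        f"Item {item['name']} has been in {item['stage']} for "
        f"{item['days_in_stage']} days (P90 for this stage: {item['p90']}).\n"
        f"Last 3 ticket comments:\n{ticket_comments}\n"
        f"Recent Slack activity:\n{slack_activity}\n"
        f"PR review status:\n{pr_reviews}\n"
        "In one line, state what this item is waiting on. Name a person, a "
        "review, a decision, or a missing artifact. No generic answers."
    )
```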

Week 5+: **The agent starts being useful.** At this point, the bottleneck stage named on Monday is almost always the right one. The team starts routing around it. Cycle time begins to drop measurably.

If the agent is still producing noise in week 5, the `project_map.yaml` is the likely cause: either the map isn't being kept current or a stage-exit signal is ambiguous. Fix the source.

---

## REFERENCES

- The seven PM OS stages live in the [Product Operating Model](/os/product-operating-model) chapter.
- Stage-specific prototyping mechanics are in [Instant Prototyping](/os/instant-prototyping) and [Prototype in 60 Minutes](/blog/prototype-in-60-minutes).
- This agent consumes outputs from nearly every other agent in the fleet. It's most useful when at least one agent is wired per stage.

---

*If you run this for one quarter and cycle time doesn't drop, the agent isn't the problem. The `project_map.yaml` isn't being maintained, or the leadership team isn't acting on the Monday bottleneck. Both are fixable.*
