Falk Gottlob · 5 min read

Retrospective Synthesis and Learning Agent

Extract learnings from sprint retros automatically. Update playbooks, surface patterns, and drive continuous improvement.

agents · learning · process

Try it live
See this agent running in the sandbox

Stream a simulated run, inspect the notifications it would send to Slack and email, and see exactly where it sits in the 7-stage PM OS flow. No password required.

The short version

The Retrospective Synthesis agent runs every Friday at 5 PM after your retro and turns retro notes into actual learning. It compares today's retro against the last 8 retros, detects recurring themes (the "slow deploys" complaint mentioned 5 times that nobody fixed), tracks action item follow-through, and proposes specific playbook updates. The point is to stop re-discovering the same problems every quarter. Three sprints from now, you'll have a living team playbook that's been updated 6 times with patterns you actually learned. Connect your retro notes repo and run it on this week's session.

Your team runs retros every other Friday. You talk about what went well, what didn't, and what to improve next sprint. You leave with action items. Then... nothing happens.

The problem is that documentation isn't learning: retro notes sit in a doc nobody reads. Months later, you're re-discovering the same problems. "We should do more async standups" (you said that 3 sprints ago). "We need better code review discipline" (same issue, second time around).

The Retrospective Synthesis agent captures retro learnings and surfaces them. Every Friday evening, it reads that day's retro notes, identifies recurring themes across sprints, updates your team playbook, and flags high-priority issues. You actually learn from retros instead of just going through the motions.

How It Works

The agent processes retro notes and applies learning logic:

Retro parsing: Reads raw retro notes (what went well, what didn't, action items) and structures them, extracting the specific problems mentioned, proposed solutions, owners, and severity.
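
To make the parsing step concrete, here's a minimal sketch. It assumes the notes are plain text under the three standard headings; the section names and `ParsedRetro` structure are illustrative, not the agent's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class ParsedRetro:
    went_well: list[str] = field(default_factory=list)
    went_badly: list[str] = field(default_factory=list)
    action_items: list[str] = field(default_factory=list)

# Map the headings we expect in raw notes to fields on ParsedRetro.
SECTION_MAP = {
    "what went well": "went_well",
    "what didn't go well": "went_badly",
    "action items": "action_items",
}

def parse_retro(raw_notes: str) -> ParsedRetro:
    """Split raw retro notes into structured sections by heading."""
    retro = ParsedRetro()
    current = None
    for line in raw_notes.splitlines():
        heading = line.strip().rstrip(":").lower()
        if heading in SECTION_MAP:
            current = SECTION_MAP[heading]
        elif current and line.strip().startswith("-"):
            getattr(retro, current).append(line.strip().lstrip("- ").strip())
    return retro

notes = """What went well:
- Shipped the billing migration
What didn't go well:
- Slow deploys
Action items:
- Investigate deploy pipeline
"""
print(parse_retro(notes).went_badly)  # ['Slow deploys']
```

A real parser would also pull out owners and severity; this only recovers the three sections.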

Pattern detection: Compares this retro against 8+ previous retros. Is this a recurring problem? First time? Did we say we'd fix this last sprint and fail to follow through?
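
A sketch of the recurrence check, assuming each past retro has already been reduced to a list of theme strings (the normalization here is deliberately crude):

```python
from collections import Counter

def normalize(theme: str) -> str:
    """Crude normalization so 'Slow deploys!' and 'slow deploys' count as one theme."""
    return " ".join(theme.lower().split()).strip(".!?")

def recurrence_counts(past_retro_themes: list[list[str]],
                      current_themes: list[str]) -> dict[str, int]:
    """For each theme raised today, count how many past retros also raised it."""
    seen = Counter()
    for retro in past_retro_themes:
        for theme in {normalize(t) for t in retro}:  # count once per retro
            seen[theme] += 1
    return {t: seen[normalize(t)] for t in current_themes}

history = [["Slow deploys", "flaky tests"], ["slow deploys"], ["communication gaps"]]
print(recurrence_counts(history, ["Slow deploys"]))  # {'Slow deploys': 2}
```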

Playbook updates: If a solution is recommended (and agreed), the agent extracts the principle or process change and updates your team playbook. "We agreed to async standups. That goes in the playbook."
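
Recording the change can be as simple as appending a dated entry. This sketch assumes a markdown playbook file; the function name and entry format are purely illustrative:

```python
from datetime import date
from pathlib import Path

def record_playbook_update(playbook_path: str, principle: str) -> None:
    """Append an agreed process change to the playbook, dated for provenance."""
    entry = f"- **{date.today().isoformat()}**: {principle}\n"
    with Path(playbook_path).open("a", encoding="utf-8") as playbook:
        playbook.write(entry)

record_playbook_update("playbook.md", "Standups are async by default.")
```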

Incident/pattern flagging: If the same issue appears 3+ times, it gets flagged. "We've complained about slow deploys 5 times. We keep saying we'll fix it and we don't. Let's actually prioritize it."
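
Flagging then reduces to a threshold over the recurrence counts from the sketch above; the 3+ cutoff mirrors the rule in the text:

```python
def flag_recurring(counts: dict[str, int], threshold: int = 3) -> list[str]:
    """Turn recurrence counts into readable flags for themes at or above the threshold."""
    return [
        f"'{theme}' has come up in {n} retros and is still open. Prioritize it."
        for theme, n in sorted(counts.items(), key=lambda kv: -kv[1])
        if n >= threshold
    ]

print(flag_recurring({"slow deploys": 5, "communication gaps": 2}))
```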

The output: a retro summary (what changed, what to track), an updated playbook, and flagged high-priority patterns.

Data Sources and Setup

Prerequisites:

  • Retro notes repository: Retros stored in Notion, Google Docs, or a shared drive
  • Sprint metrics: Velocity, bugs, incidents, deployment frequency (to correlate with retro themes)
  • Team playbook: Current processes and principles (so the agent can update it)
  • Historical retros: At least 8-10 past retros to identify patterns
  • Action items tracking: Which action items from past retros were completed? Abandoned?

Schedule: Weekly Friday at 5 PM (after retros). Processes that day's retro.
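
How you wire up the Friday 5 PM trigger depends on your stack; a cron entry or the Python `schedule` package both work. A sketch with `schedule`, where `run_retro_agent` is a placeholder for the pipeline above:

```python
import time

import schedule  # pip install schedule

def run_retro_agent():
    ...  # fetch today's notes, run the analysis, post the summary

# Every Friday at 17:00 local time, after the retro wraps up.
schedule.every().friday.at("17:00").do(run_retro_agent)

while True:
    schedule.run_pending()
    time.sleep(60)
```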

The Claude Prompt

You are synthesizing sprint retros and extracting learnings.

Here are today's retro notes:
[RETRO NOTES:
- What went well
- What didn't go well
- Action items and owners
- Any process changes discussed]

Here are our sprint metrics for context:
[METRICS: velocity, bugs found, incidents, deployment frequency, pull request cycle time]

Here are the past 8 retro summaries (from previous sprints):
[PAST RETROS: 
- Sprint date
- Top complaints/themes
- Top wins/themes
- Action items and whether they were completed]

Here's our current team playbook:
[PLAYBOOK: current processes, principles, and guidelines]

Here's what we said we'd improve and whether we did:
[ACTION ITEMS: past items, completion status]

Please analyze and report:

1. **This Sprint's Summary**
   - Top 3 wins this sprint
   - Top 3 problems this sprint
   - Proposed action items
   - Metrics context: was velocity up/down? bugs up/down? Any correlation with retro themes?

2. **Recurring Themes** (PRIORITY)
   - Which problems have appeared in 2+ retros?
   - Which in 3+ retros?
   - Are we repeating ourselves instead of fixing?
   - Examples: "slow deploys" (mentioned 5 times, never addressed), "communication gaps" (mentioned 3 times, partially addressed)

3. **Action Item Follow-Through**
   - What action items from past retros did we actually complete?
   - What did we commit to but not follow through on?
   - Why? (deprioritized? too hard? unclear owner?)

4. **Playbook Updates**
   - Should we add any processes to the playbook?
   - Should we change any existing processes?
   - What principle should we encode from this sprint's learnings?

5. **Team Health Indicators**
   - Is team morale improving or declining?
   - Are we shipping faster or slower?
   - Are we more coordinated or less?

6. **High-Priority Issues**
   - Which problems from this retro need immediate action?
   - Which can wait?
   - Which are systemic vs. one-time issues?

7. **Recommendations**
   - What's the ONE thing we should commit to fixing next sprint?
   - What process change would have the biggest impact?
   - What do we need to stop doing?

Format as a retro summary and playbook update. Be specific enough that the team can act on it.
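
To run this prompt programmatically, fill in the bracketed sections and send it through the Anthropic Python SDK. A minimal sketch; the model name is a placeholder, so substitute whichever model you're on:

```python
import anthropic  # pip install anthropic; expects ANTHROPIC_API_KEY in the environment

def synthesize_retro(prompt: str) -> str:
    """Send the filled-in retro prompt to Claude and return the summary text."""
    client = anthropic.Anthropic()
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder; use your model of choice
        max_tokens=4000,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text
```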

What You Get

Instead of losing retro insights:

  • Memory: You track recurring problems instead of re-discovering them every 3 months
  • Accountability: You see which improvements you committed to but didn't follow through on
  • Playbook evolution: Your team processes improve based on actual experience, not theory
  • Pattern recognition: You see that "deployments are slow" has come up in 5 retros and never been fixed; that's a signal to prioritize it
  • Continuous learning: Each retro builds on previous ones instead of existing in isolation

Real outcomes:

  • Your team's velocity improves because you're actually fixing recurring bottlenecks
  • Team morale improves because action items get done
  • You stop re-discovering the same problems and actually solve them

For the full agent fleet and scheduling details, see Your AI Agent Fleet.
