
Stream a simulated run, inspect the notifications it would send on Slack and email, and see exactly where it sits in the 7-stage PM OS flow. No password required.
The short version
The Release Checker agent is your final gate before shipping. It runs Thursday at 10 AM (72 hours before a Monday release) and verifies QA test results (Jira or TestRail), documentation status, GTM materials, sign-offs, and feature flag configuration. Each feature gets a status: Go, At Risk, or Blocked. The output is a clear go/no-go recommendation with named blockers, owners, and EOD-Thursday deadlines. The 72-hour buffer is the point. If QA is at 72% Thursday morning, you have Friday to fix it. If you find blockers Monday, you ship with bugs or you scramble. Add this Thursday check and your release-day chaos drops to zero.
It's Thursday afternoon. Your release is scheduled for Monday. Everything looked ready on Wednesday when your Release Readiness Agent ran. But did anything slip in the last 24 hours?
Is QA actually done or are they still testing? Did that help article get published or is it still sitting in draft? Did the sales team actually get trained or is that "on the list for Friday"?
You call around. Someone says "QA's almost done." Someone else says "The docs are good, I published them yesterday." Nobody knows if the other person actually published them, or if "published" means "live on the help site" or just "sent to the team for review."
This is the chaos that happens before most releases. You have partial visibility. You're hoping everything is done but you don't actually know.
The Release Checker Agent is your final gate. It runs Thursday morning and does a verification pass on everything critical. QA test results. Documentation. GTM materials. Sign-offs. Feature flags. It tells you: Are we shipping or are we not?
Why Thursday Verification Matters
Most teams run a release readiness check once, on Wednesday. Then they assume everything stays ready.
But "ready on Wednesday" doesn't mean "ready on Monday." Between Wednesday afternoon and Monday at 2pm, things slip. A critical bug appears. Sign-offs aren't finalized. That help article still isn't live. The sales team didn't show up to the training.
You need a verification pass. A final check 72 hours before release that says: Did we actually fix the things we said we'd fix? Is everything still on track?
The Thursday check also gives you a buffer. If you find blockers Thursday morning, you have Friday to fix them. If you wait until Monday morning to verify, you're shipping with issues or you're scrambling to rescope the release at the last minute.
Here's what a typical Thursday verification might surface:
"QA is only 72% done. Three features still have failing critical tests." You have until Friday EOD to fix it. If you can't, you descope those three features and ship without them.
"The help article for Feature X is still in draft status." It's 9am Thursday. Can you publish it by EOD? If not, you can't ship the feature until the docs are live.
"Sales training is scheduled for Friday but only 40% of the sales team is confirmed to attend." You have Thursday afternoon to push marketing to get attendance up. If they won't attend, you delay the feature launch until training is done.
"The security review is pending and the security team promised completion by Friday." That's a blocker. You note the dependency and plan to verify Friday afternoon before green-lighting Monday's release.
"Feature flag is configured as 50% rollout but the launch plan says dark launch (0%)." Misconfiguration found. You fix it Thursday. Crisis averted.
These are all discoverable Thursday. They're not discoverable Monday when you're trying to ship.
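The feature-flag mismatch in the last example is the easiest of these to automate: compare each feature's configured rollout against the launch plan. A minimal sketch, assuming both can be read into simple name-to-percentage maps (the dicts below are illustrative stand-ins, not a real flag-tool API):

```python
# Illustrative data: what the launch plan says vs. what's actually configured.
launch_plan = {"feature_x": 0, "feature_y": 50}   # planned rollout % (0 = dark launch)
flag_config = {"feature_x": 50, "feature_y": 50}  # live rollout % from your flag tool

# Any feature whose configured rollout differs from the plan is a blocker.
mismatches = {
    name: (flag_config.get(name), planned)
    for name, planned in launch_plan.items()
    if flag_config.get(name) != planned
}
print(mismatches)  # {'feature_x': (50, 0)} — dark launch configured as 50% rollout
```

Run Thursday, this surfaces the "50% instead of dark launch" misconfiguration with three days to spare.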
How the Agent Works: A Verification Pass
The Release Checker Agent pulls data from five sources: QA test results, documentation status, GTM materials checklist, sign-off tracking, and feature flag configuration.
It runs 72 hours before your target release date and produces a verification report. Each feature gets a status:
🟢 GO. All QA passed, docs published, GTM ready, sign-offs complete, feature flags configured correctly.
🟡 AT RISK. Mostly ready but one or two items are still in progress. Flagged for owner to complete by EOD Thursday.
🔴 BLOCKED. Can't ship until this is fixed. Likely needs to be descoped from the release.
The report gives you a complete inventory of where each feature stands. It also gives you a clear go/no-go recommendation at the top: Ship the release as planned, delay the release, or descope specific features.
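The three statuses reduce to a simple rule: any blocker forces 🔴, any open-but-fixable item forces 🟡, otherwise 🟢. A sketch of that rule (the parameter names are assumptions, not a real schema):

```python
def classify(blockers: list[str], at_risk: list[str]) -> str:
    """Assign the per-feature status described above.

    blockers: items that prevent shipping (failed critical tests,
              missing sign-off, misconfigured flag).
    at_risk:  items still in progress but completable by EOD Thursday.
    """
    if blockers:
        return "🔴 BLOCKED"
    if at_risk:
        return "🟡 AT RISK"
    return "🟢 GO"

print(classify(["critical tests failing"], []))   # 🔴 BLOCKED
print(classify([], ["help article in draft"]))    # 🟡 AT RISK
print(classify([], []))                           # 🟢 GO
```

The judgment call the agent makes is which list an item lands in: "docs in draft at 9am Thursday" is at-risk; "QA at 72% with failing critical tests" is a blocker.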
The report structure:
Go/no-go recommendation. At the top. Are we shipping or not? Why or why not? What needs to be fixed by when?
QA status by feature. For each feature: test coverage %, critical tests (pass/fail), regression tests (pass/fail), status, blocker?
Documentation completeness. Help articles (published/draft/missing), release notes (complete/incomplete), changelog, API docs. All organized by feature.
GTM readiness. Sales deck ready? Talking points written? Marketing email drafted? Customer communication prepared? Support training scheduled? FAQs written?
Sign-off status. PM approval, QA approval, engineering approval, design approval, security review (if applicable). Who's still pending? What are the conditions?
Feature flag status. Does each shipping feature have a kill switch? Is it configured correctly? What's the rollout plan?
Release blockers. Every item that would prevent shipping. QA failures. Missing sign-offs. Unpublished documentation. Misconfigured flags. For each blocker: owner, action, deadline.
Strategic decisions. If there are blockers, what are the options? Delay the release? Descope features? What's the recommendation?
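The assembly of that report can be sketched in a few lines: force the go/no-go decision to the top, then list feature statuses, then list each blocker with its owner, action, and deadline. The data structures here are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class Blocker:
    feature: str
    action: str
    owner: str
    deadline: str

def build_report(statuses: dict[str, str], blockers: list[Blocker]) -> str:
    """Assemble Slack-ready markdown with the GO/NO-GO recommendation first."""
    decision = "NO-GO (delay or descope)" if blockers else "GO"
    lines = [f"*GO/NO-GO: {decision}*", "", "*Feature status*"]
    lines += [f"- {feature}: {status}" for feature, status in statuses.items()]
    if blockers:
        lines += ["", "*Release blockers*"]
        lines += [
            f"- {b.feature}: {b.action} (owner: {b.owner}, deadline: {b.deadline})"
            for b in blockers
        ]
    return "\n".join(lines)

report = build_report(
    {"Feature X": "🟢 GO", "Feature Y": "🔴 BLOCKED"},
    [Blocker("Feature Y", "publish help article", "@docs", "EOD Thursday")],
)
print(report)
```

Keeping the recommendation on the first line matters in practice: in a Slack channel, the decision is visible before anyone expands the message.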
Data sources and setup
Prerequisites: Complete the Claude setup guide first. This agent needs the following MCP connections active:
- Jira or TestRail - QA test results and coverage metrics
- Your documentation source - help article, release note, and changelog status
- Your GTM checklist and sign-off tracker - launch materials and approval status
- Your feature flag tool - rollout configuration and kill switches
Schedule: Runs every Thursday at 10:00 AM via cron. Output posts to Slack.
Quick test: Open Claude and ask: "Run a release verification pass for Monday's release: QA test status by feature, documentation status, GTM readiness, sign-offs, and feature flag configuration."
For the full agent fleet and scheduling details, see Your AI Agent Fleet.
The Prompt (Customize This)
Here's the basic prompt structure:
You are a release operations specialist. Your job is a final verification pass 72 hours before release.
DATA INPUTS:
- QA test results
- Documentation status (help articles, release notes, API docs)
- GTM checklist status
- Sign-off tracking
- Feature flag configuration
INSTRUCTIONS:
1. For each shipping feature: verify QA test coverage > 80% and all critical tests passed
2. Verify documentation is published (help article live, release notes complete, changelog updated)
3. Verify GTM readiness: sales trained, marketing materials ready, support prepared
4. Verify all required sign-offs are complete
5. Verify each feature has a configured kill switch with correct rollout plan
6. Identify all blockers: anything that prevents shipping
7. Identify all at-risk items: not done but could be by EOD Thursday
8. Give clear GO / NO-GO / GO WITH CAUTION recommendation
9. For each blocker: state owner, action needed, deadline
TONE: Action-oriented, clear priorities, zero ambiguity.
OUTPUT: Markdown for Slack, with GO/NO-GO at top.
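Since the output is Slack-bound markdown, delivery can be as simple as a POST to a Slack incoming webhook. A minimal sketch; the webhook URL is a placeholder you'd generate in your own Slack workspace:

```python
import json
import urllib.request

def build_payload(report_md: str) -> bytes:
    # Slack incoming webhooks accept a JSON body with a "text" field;
    # mrkdwn formatting (e.g. *bold*) is rendered by default.
    return json.dumps({"text": report_md}).encode("utf-8")

def post_to_slack(webhook_url: str, report_md: str) -> int:
    req = urllib.request.Request(
        webhook_url,
        data=build_payload(report_md),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # Slack returns 200 on success

if __name__ == "__main__":
    report = "*GO/NO-GO: GO*\n- Feature X: 🟢 GO"
    print(build_payload(report).decode("utf-8"))
```

The cron job then just runs the agent and hands its markdown output to `post_to_slack`.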
What This Changes
When you add a Thursday verification check, release culture shifts.
Nothing slips without you knowing. Between Wednesday and Thursday, you have full visibility. If something changed status, you see it.
You have a buffer to fix blockers. If QA isn't done or docs aren't published, you have 24+ hours to fix it. You're not discovering blockers Monday morning.
Teams know they're being verified. When your QA team knows that a Thursday agent is checking their test coverage, they finish testing by Thursday morning, not Monday afternoon. When your documentation team knows that published docs will be verified Thursday, they publish by Thursday. Accountability improves.
Release day is boring. Because you already verified everything Thursday, Monday is just executing the plan. You're not scrambling. You're not discovering issues. You're shipping.
You have data for decisions. If Thursday shows that three features have QA failures and can't be fixed by Friday, you have a real decision: delay the whole release or descope those three features? You make the call based on data, not panic.
Customers get quality. You're not shipping with undocumented features, untrained support teams, or missing feature flags. Everything is verified before customers see it.
The Broader Toolkit
The Release Checker is one piece of a release readiness system:
- Release Readiness Agent (Wednesday): Initial readiness check
- Release Checker Agent (Thursday): Verification that readiness was maintained
Together with your weekly agents:
- Weekly Executive Report (Monday 7am)
- Weekly Ops Digest (Monday 8am)
- Product Health Dashboard (Tuesday 9am)
- Release Checker (Thursday 10am)
You have systematic weekly visibility into every aspect of product operations.
Start with the release checker. It's the last gate before shipping. Don't ship without it.
Download the artifact
Ready to use. Copy into your project or share with your team.
Also on Medium
Full archive →
AI Agents and the Future of Work: A Pixar-Inspired Journey
What product managers can learn about AI agents from how Pixar runs a film team.
Many AI Agents Are Actually Workflows or Automations in Disguise
How to tell agents from workflows from cron jobs, and why it matters for what you ship.