
I'm tired of two things. Maybe three.
I'm going to write them down because I'd rather call them out once than keep losing the same five minutes a week to the same patterns showing up in my feed.
If you are guilty of any of the three, I am not subtweeting you. I am tweeting at you. The point of a working notebook is that it picks fights honestly.
Three patterns. None of them ship. All three are taxing the credibility of every PM and every product team doing the actual work.
The short version
Three AI noise patterns are eating product credibility in 2026. First, "Claude superpowers" posts on LinkedIn and Instagram, content that wraps two-year-old model behavior in a new badge every week. Second, SaaS companies publishing thousand-word essays on their AI agent strategy while shipping the same 2019-era forms-and-tables UI with a chat sidebar grafted onto the corner. Third, PMs writing posts arguing that prototyping breaks empathy with customers, defending a workflow that depended on the engineering capacity they no longer have. Each pattern lets a team feel like it is participating in the AI shift while shipping the same product as last quarter. The fix is the work: real changelog evidence, agent-as-surface UIs, prototype-driven discovery.
For the working alternative, see Prototype Before You Spec, The PM AI Agent Fleet, and The Eval-First Product Org.
1. LinkedIn and Instagram superpower posts
You know the format. A screenshot of Claude or ChatGPT doing something. A label that says "I just discovered" or "Game-changer" or "This will replace junior PMs." Two hundred words of breathless caption that compresses to "the model is good at what it has been good at for eighteen months." A CTA to subscribe to a newsletter that will tell you next week's Game Changer.
This is the cheapest content there is. It is optimized for the platform's reward function, which is novelty and confidence, not accuracy. A model that ships a small capability increment becomes a "leap." A workflow that has been documented for a year becomes a "discovery." The same "five prompts that change everything" post recycles every six weeks with a different stock photo.
The damage is not that any single post is wrong. Most are technically true. The damage is the aggregate signal a PM gets from scrolling their feed: that the field moves in disconnected micro-leaps every week, that someone else is always one superpower ahead, that there is no point in committing to a workflow because next week's screenshot will obsolete it.
Here is what is actually true. The hard work of building useful AI workflows is not following influencer screenshots. It is sitting with one workflow you actually use, evaluating it against your real distribution of inputs, and improving it over months. None of that is screenshottable. None of it makes a Reel. So you do not see it on the feed. You see the screenshots.
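For concreteness, here is the shape of that work, as a minimal sketch. `run_workflow` and `eval_cases.jsonl` are placeholders for your own pipeline and your own collected inputs; the only non-negotiable part is that the cases come from your real distribution, not from demo prompts.

```python
# Minimal eval loop: the unglamorous, unscreenshottable work.
# `run_workflow` stands in for whatever prompt/agent pipeline you
# actually use; eval_cases.jsonl holds real inputs you have collected.
import json
from pathlib import Path

def run_workflow(input_text: str) -> str:
    """Placeholder: call your actual workflow here."""
    raise NotImplementedError

def evaluate(cases_path: str = "eval_cases.jsonl") -> float:
    """Each line is {"input": "...", "must_contain": "..."}. Returns the pass rate."""
    lines = Path(cases_path).read_text().splitlines()
    cases = [json.loads(line) for line in lines if line.strip()]
    passed = 0
    for case in cases:
        output = run_workflow(case["input"])
        ok = case["must_contain"].lower() in output.lower()
        passed += ok
        if not ok:
            print(f"FAIL: {case['input'][:60]!r}")  # the failures are the signal
    print(f"{passed}/{len(cases)} passed ({passed / len(cases):.0%})")
    return passed / len(cases)
```

Run it after every prompt or model change and write down what moved. That paragraph, not the screenshot, is the artifact worth posting.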
If your AI strategy in 2026 is "monitor LinkedIn for Claude superpowers," you are losing to the PMs who muted those accounts six months ago and shipped something. Inform yourself by using the tools, not by watching other people post about them.
The format above is a composite of dozens of posts that landed in my feed in the last 30 days.
2. SaaS essays without the receipts
This is the pattern I find most insidious because the people writing the essays know better.
A SaaS company with a 2019-era forms-and-tables UI publishes a thousand-word post about how AI is reshaping their product. The post cites "agent-first design," "compound workflows," "AI-native architecture." It is on the company blog. It is shared by the CEO. It gets a thousand likes.
Then you open the actual product. The same dashboard. The same forms. The same configuration screens. The same nav. There is a small "Ask AI" button in the bottom-right corner. You click it. A chat sidebar opens. You ask a question. You get an answer that summarizes data that was already in the dashboard above the chat.
That is not an AI shift. That is a chat sidebar. The product underneath has not changed.
That screen is a composite, but it is the kind you find in roughly half the SaaS tools I evaluated this year.
The corrective version of that screen is not just "make the sidebar smarter." It is a different product. The agent is the surface. Forms are the fallback.
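Reduced to control flow, the difference is which path gets the request first. A minimal sketch, with `agent` and `open_form` as hypothetical stand-ins for the real pipeline and the legacy flow:

```python
# Agent-as-surface, forms-as-fallback, reduced to its control flow.
from dataclasses import dataclass

@dataclass
class AgentResult:
    handled: bool        # could the agent complete the task end to end?
    response: str = ""

def agent(request: str) -> AgentResult:
    """Stand-in for the real agent pipeline."""
    return AgentResult(handled=False)  # stub: always defers

def open_form(request: str) -> str:
    """Stand-in for the legacy forms-and-tables flow."""
    return f"form opened for: {request!r}"

def primary_surface(request: str) -> str:
    result = agent(request)
    if result.handled:
        return result.response
    # The bolted-on pattern inverts this ordering: forms first,
    # with the agent demoted to a sidebar that summarizes them.
    return open_form(request)
```

The inversion is the whole shift. Everything else in this section is a way of checking whether a company has made it.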
Run two checks before you believe a SaaS company's AI strategy post.
The first check is the changelog. Pull the last six months of release notes. Has the shipping cadence accelerated, or is the rate flat? Has the average release shipped a real behavior change, or is most of what shipped a series of UI tweaks and bug fixes? AI changes the cadence and the size of what ships. If neither has changed, the AI claim is ornamental.
The second check is the primary surface. Where does the user spend their time? If the answer is "the same forms-and-tables UI" with the agent banished to a sidebar, the company has not made the shift it is writing about. A genuinely agent-native product makes the agent the surface; forms exist as a fallback for the cases the agent cannot handle, not the other way around. See SaaS to Service-as-Software for what the surface actually shifts to.
Both checks are public information. Anyone with a browser and twenty minutes can run them. The reason most companies skip them is that the answers are uncomfortable.
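If the vendor's release notes happen to live on GitHub, the cadence half of the changelog check is even scriptable. A minimal sketch; `some-vendor/some-product` is a hypothetical repo, and most SaaS changelogs live elsewhere, so treat this as the shape of the measurement rather than a universal tool.

```python
# Cadence check: published GitHub releases per month, last six months.
from collections import Counter
from datetime import datetime, timedelta, timezone

import requests  # pip install requests

def monthly_release_counts(owner: str, repo: str, months: int = 6) -> Counter:
    """Count published releases per calendar month."""
    url = f"https://api.github.com/repos/{owner}/{repo}/releases"
    releases = requests.get(url, params={"per_page": 100}, timeout=10).json()
    cutoff = datetime.now(timezone.utc) - timedelta(days=30 * months)
    counts: Counter = Counter()
    for rel in releases:
        if not rel.get("published_at"):
            continue  # skip unpublished drafts
        published = datetime.fromisoformat(rel["published_at"].replace("Z", "+00:00"))
        if published >= cutoff:
            counts[published.strftime("%Y-%m")] += 1
    return counts

# A flat histogram next to a thousand-word AI strategy post is the tell:
# the post claimed a shift the changelog never recorded.
for month, n in sorted(monthly_release_counts("some-vendor", "some-product").items()):
    print(month, "#" * n)
```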
Your CMO is writing posts. Your customers are reading the same posts. The gap between the post and the product is what costs you credibility. You can either close the gap by shipping the actual shift, or you can keep paying the noise tax until a competitor closes it for you.
The credible move for a SaaS company in 2026 is to stop publishing the manifesto and start shipping the surface. Get the agent into the primary path. Decouple pricing from seats; see Per-Outcome Pricing (coming May 18). Publish your eval reports. Ship a release that would not have shipped without the new architecture. Then write the post.
3. The anti-prototype PMs
This pattern is more recent. It is also the one I find most disappointing, because the people writing it are PMs.
The argument goes something like this. The PM of the future should not prototype, because prototyping pulls them away from customers. The "real" PM job is empathy work, jobs-to-be-done research, framework synthesis. Anyone who is building working software is "doing engineering, not product." Sometimes a citation of Marty Cagan or Teresa Torres is grafted on, almost always out of context.
This is the loudest version of a sound that comes out every time the tools shift. It is the argument the framework specialist makes when they realize their framework is no longer load-bearing. It is the argument the wheel-resister makes after the wheel arrives.
That paraphrase is a composite, but the genre is real.
Here is what is actually happening. A working prototype, in 2026, takes hours. Not weeks. The PM who builds a prototype on Monday and runs five customer calls against it on Tuesday and Wednesday produces more customer signal in forty-eight hours than the PM who spends six weeks writing a twelve-page PRD. The prototype is the customer empathy tool. It is the cheapest way to put a real artifact in front of a customer and watch their actual behavior, instead of asking them to project their behavior onto a description of a hypothetical artifact.
The "PMs who prototype lose touch with the customer" argument depends on a 2019 premise. That prototyping is expensive. That it eats weeks of engineering. That it is therefore a tradeoff against discovery. None of that is true now. The premise has been falsified by the tooling. The PMs writing the posts have, for the most part, not used the tools.
If you are a PM and you are arguing your peers should not prototype, the most useful thing you can do this week is build one prototype with Claude Code and run it past three customers. The argument will look different on the other side of that experiment. If it doesn't, you and I can disagree productively. The disagreement before the experiment is just defending a workflow that depends on engineering capacity that has been reallocated.
For the working method, see Prototype Before You Spec. For why prototypes have replaced PRDs as the artifact of record, see The PRD Is Dead and Show, Don't Tell.
What the three patterns share
All three patterns let a team feel like it is participating in the AI shift while shipping the same product as last quarter.
The superpower influencer feels engaged with AI by consuming and reposting screenshots of it. They have not built anything. The SaaS company feels engaged by publishing essays about its AI strategy. It has not changed the surface or the cadence. The anti-prototype PM feels engaged by writing think pieces about why prototyping is bad. They have not used the tools.
In each case, the participation is performative. The work underneath is the same as last year, sometimes worse, because the participation eats the time that would otherwise have gone to the work.
This is not a hype problem. It is a credibility problem. The PMs and product teams who have actually shifted are increasingly hard to confuse with the ones who have not, and their products are pulling away. The noise tax is paid in lost customers, lost trust, and lost team time. It compounds.
How to score yourself
You can run the audit on any product, any post, any PM job description in about ten minutes.
Take two posts from the same week on the same theme, a PM sharing an AI workflow they built: one is the noise pattern, the other the signal pattern. The shapes are completely different once you look for them.
Here is the scorecard.

| Check | Noise pattern | Signal pattern |
| --- | --- | --- |
| Changelog | Flat cadence; UI tweaks and bug fixes | Cadence and release size measurably up |
| Primary surface | Chat sidebar on a forms-and-tables UI | Agent is the surface; forms are the fallback |
| Receipts | Marketing copy only | Public eval reports and agent telemetry |
| Pricing | Seat-based, unchanged | Decoupled from seats, tied to outcomes |
If the team you work on, or the product you work on, scores low on the right column, the credible move is not to write a post about it. The credible move is to ship the shift, then point at it. The post writes itself once the work exists.
The downloadable version of this scorecard, with line-by-line scoring fields and a sample for one real product, lives on the toolkit as the AI Noise vs Signal Audit.
What to do this week
One small thing per pattern. None of them takes more than an evening.
If you were tempted to post a Claude superpower screenshot on LinkedIn this week, instead spend that hour evaluating one workflow you actually use. Write one paragraph on what improved, what broke, and the metric the change moved. Post that. The engagement will be lower. The trust will be higher. Trust compounds. Engagement does not.
If you are at a SaaS company and the company is publishing posts about its AI strategy, ask one question in your next product review. "Has the changelog cadence and the surface design actually shifted to match what the post said?" The answer will be obvious to everyone in the room. Then ask the second question. "If a competitor published the same post and we ran the same checks on their product, what would we see?" That is the conversation worth having.
If you are arguing your team should not prototype, build one prototype this week with Claude Code. Run it past three customers. Write down what you learned. Then re-read your last post on the subject. The honest assessment of your own argument after that experiment is the most useful thing you can write all year.
The whole point of this site is that the working notebook beats the manifesto. The shift to AI-native product work is real, and the patterns above are how teams pretend to participate in the shift without doing the work. Stripping the patterns lets the work get the air it needs.
Pick one. Try it this week.
The AI Noise vs Signal audit checklist, the scorecard above in usable form with scoring fields and one worked example, is on the toolkit at falkster.com/toolkit.
Further reading
- Simon Willison on the gap between AI demos and AI products that hold up
- Hamel Husain on why most agent projects fail in production
- Anthropic's engineering blog on building products that are actually agent-native
- Marty Cagan on the difference between feature teams and empowered product teams
- Teresa Torres on continuous discovery, the canonical version, not the secondhand citation
Sources: Claimed-vs-actual changelog data is composited from public release notes of three mid-market SaaS vendors over the period Nov 2025 to May 2026; numbers and labels redacted to keep the focus on the pattern, not the names.
Frequently asked
What are the three AI noise patterns this post calls out?
First, LinkedIn and Instagram posts that frame Claude or ChatGPT as having a new 'superpower' every week, screenshots of behavior the model has had for 18 months relabelled as a discovery. Second, SaaS companies publishing thousand-word essays on their AI agent strategy while shipping the same 2019-era forms-and-tables UI with a chat sidebar grafted onto the corner. Third, PMs writing posts arguing prototyping breaks empathy with customers, defending a workflow that depended on the engineering capacity they no longer have. Each pattern lets a team feel like it is participating in the AI shift while shipping nothing.
What's wrong with 'Claude superpower' posts on LinkedIn and Instagram?
They are zero-substance content optimized for the platform's reward function, which is novelty and confidence, not accuracy. Most posts screenshot the model doing something it has been able to do for two years, with a label that says 'game-changer' or 'I just discovered.' The damage is the aggregate signal a junior PM gets from scrolling: that the field moves in disconnected micro-leaps, that someone is always one superpower ahead, that committing to a workflow is pointless because next week's superpower will obsolete it. The fix is to mute the accounts and ship something.
How do I tell if a SaaS company has actually shifted to AI or is just bolting it on?
Three concrete tests. First, the changelog: pull the last six months of release notes and ask whether shipping cadence and release size have measurably changed, or whether AI claims are decorating a flat changelog. Second, the primary surface: is the agent the surface, or is it a chat sidebar bolted onto the same forms-and-tables UI from 2019? Third, the receipts: is the company shipping public eval reports, agent telemetry, or outcome reporting? If yes, the shift is real. If the only AI artifact is marketing copy, the AI claim is decoration.
Why is the 'PMs should not prototype' argument wrong?
Because prototyping in 2026 is the cheapest way to keep a PM close to customers, not the thing that pulls them away. A working prototype takes hours, not weeks. Five customer calls against a real prototype produce ten times the signal of five calls against a static spec. The argument depends on a 2019 premise: that prototyping is expensive, that it eats weeks of engineering, that it is therefore a tradeoff against discovery. None of that is true now. The PMs writing the posts have, for the most part, not used the tools.
What does an AI-native shift actually look like inside a product?
Five visible markers. The primary user surface assumes natural language input, not forms. State and intent are persistent across sessions, the user does not start over every time. The product produces outputs without the user clicking through forms ('here is the report' beats 'configure the report builder'). Telemetry on agent performance is visible in the product. Pricing is decoupled from seats and tied to outcomes. If a SaaS product shows none of these markers, the AI claim is decoration.
How do I score my own product against the three noise patterns?
Use the AI Noise vs Signal audit checklist linked at the end of the post. For each AI claim your team makes externally, score: is there a corresponding shift in the changelog, is the AI surface primary or sidebar, is there public eval reporting, is the pricing model still seat-based. The lower the score, the more you are paying the noise tax. The score is a forcing function for an honest conversation with your CTO and CEO.
Is the post arguing AI is overhyped or underhyped?
Underhyped where it counts, overhyped where it doesn't. The actual shift in product work is large and the teams who have made the shift are pulling away from those who have not. The noise patterns are bad because they let teams feel like they are participating in the shift while changing nothing about what they ship. Stripping the noise lets the real work get the air it needs.