
The short version
I'm living the SaaS to Service-as-Software transition at Smartcat right now. In traditional SaaS you give people a tool. In Service-as-Software, AI agents do the work and humans review, approve, and handle edge cases. This changes four product fundamentals: UX flips from "how do I use this" to "what did it do," the reliability bar goes way up because a bug means wrong work gets delivered autonomously, pricing has to follow value, not seats, and you need a unified knowledge layer. New product surface area: managing AI agents like employees (performance monitoring, training, coordination, compliance). Build the trust ramp gradually. Start with the most repetitive, well-defined tasks. The PM role shifts from tool designer to workforce architect.
Enterprise software has gone through three phases. First, it stored data (systems of record). Then it helped people collaborate around that data (systems of engagement). Now we're entering a third phase: software that does the work itself.
I'm living this transition at Smartcat right now. We're moving from a platform where humans use tools to manage translation and localization, to one where AI agents handle most of the work autonomously. Humans review, approve, and handle edge cases. But the heavy lifting is done by the software.
This is the shift from SaaS to Service-as-Software. And it changes almost everything about how you build product.
What "Service-as-Software" actually means
In traditional SaaS, you give people a tool. A dashboard. A workflow builder. A project management board. The human does the work using the tool.
In Service-as-Software, AI agents do the work. The human oversees, adjusts, and handles the stuff that requires judgment. The software isn't a tool anymore. It's a worker.
This isn't hypothetical. At Smartcat, our AI agents create content, localize it across languages, check quality, and publish, with humans stepping in only where their judgment adds value. The product went from "here's a tool to manage your translations" to "your translations are done, here's what we need you to review."
That's a completely different product. Different UX, different value prop, different pricing model, different support model.
What this changes for product teams
Your UX flips from "how do I use this" to "what did it do." When the software does the work, the user interface becomes about oversight, not operation. Dashboards shift from input tools to output reviews. The design challenge moves from "make this easy to use" to "make this easy to trust and verify."
Your reliability bar goes way up. When a human uses a tool and it has a bug, they work around it. When an AI agent does the work autonomously, a bug means wrong work gets delivered without anyone catching it. You need much stronger quality assurance, monitoring, and guardrails.
Pricing has to follow value, not seats. Per-seat pricing makes no sense when the AI is doing the work. If one person can oversee what used to take a team of ten, charging by seat means your revenue drops as your product gets better. Outcome-based pricing, paying for completed work or achieved results, is the right model for Service-as-Software.
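The arithmetic behind that claim is worth making explicit. A rough sketch, with entirely made-up numbers, of why per-seat revenue falls as the product improves while outcome-based revenue tracks the work delivered:

```python
# Illustrative arithmetic only: why per-seat pricing penalizes a better product.
# All prices and volumes are made up for the example.

def seat_revenue(seats: int, price_per_seat: float) -> float:
    return seats * price_per_seat

def outcome_revenue(units_of_work: int, price_per_unit: float) -> float:
    return units_of_work * price_per_unit

# Before agents: 10 reviewers handle 10,000 translated words a month.
before_seats = seat_revenue(10, 100)           # $1,000/month

# After agents: 1 overseer manages the same 10,000 words.
after_seats = seat_revenue(1, 100)             # $100/month: revenue drops 90%
after_outcome = outcome_revenue(10_000, 0.05)  # $500/month: revenue follows output
```

The product got ten times more efficient, and under seat pricing the vendor is punished for it; under outcome pricing, revenue scales with the work the agents actually complete.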
You need a knowledge layer. AI agents can only be as good as the context they have. At Smartcat, we built a knowledge graph that pulls together brand guidelines, terminology databases, previous translations, and style preferences. Without this unified knowledge layer, agents make decisions in a vacuum and the output is generic.
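To make the idea concrete, here's a minimal sketch of what "assembling context from a unified knowledge layer" means in code. The class, field names, and matching logic are all illustrative assumptions, not Smartcat's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeLayer:
    """Hypothetical unified knowledge layer an agent consults before working."""
    brand_voice: str = "neutral"
    terminology: dict = field(default_factory=dict)          # lang -> {source term: approved translation}
    translation_memory: list = field(default_factory=list)   # (source, target, lang) tuples

    def context_for(self, source_text: str, target_lang: str) -> dict:
        """Gather only the knowledge relevant to this task, so the agent
        doesn't make decisions in a vacuum."""
        terms = self.terminology.get(target_lang, {})
        relevant_terms = {s: t for s, t in terms.items()
                          if s.lower() in source_text.lower()}
        # Naive word-overlap retrieval of prior translations as examples.
        source_words = set(source_text.lower().split())
        examples = [(s, t) for s, t, lang in self.translation_memory
                    if lang == target_lang and source_words & set(s.lower().split())]
        return {"brand_voice": self.brand_voice,
                "terminology": relevant_terms,
                "examples": examples}

kl = KnowledgeLayer(
    brand_voice="friendly, concise",
    terminology={"de": {"dashboard": "Dashboard", "workspace": "Arbeitsbereich"}},
    translation_memory=[("Open your workspace", "Öffnen Sie Ihren Arbeitsbereich", "de")],
)
ctx = kl.context_for("Share your workspace with the team", "de")
```

In production the retrieval would be semantic rather than string matching, but the shape is the same: every task gets a bundle of brand voice, approved terminology, and prior examples before any agent touches it.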
The management problem nobody's talking about
When you have AI agents doing real work, you need to manage them like you'd manage employees. Not with one-on-ones and performance reviews, but with:
Performance monitoring. Is the agent doing good work? What's the error rate? Where does it struggle?
Training and updates. When business rules change, the agents need to be retrained. When new edge cases emerge, they need to be taught.
Coordination. When multiple agents work on related tasks, they need to stay aligned. Your customer support agent and your content creation agent need to be on the same page about brand voice.
Compliance and audit trails. When work is done autonomously, you need to be able to explain what happened and why. Especially in regulated industries.
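The monitoring and audit-trail pieces above can be sketched together. This is a toy model with invented names and thresholds, not how we actually built it, but it shows the shape of treating an agent like an employee with a performance file:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentRecord:
    """Hypothetical performance file for one AI agent."""
    name: str
    max_error_rate: float = 0.05             # above this, flag for retraining
    log: list = field(default_factory=list)  # audit trail of completed tasks

    def record_task(self, task_id: str, ok: bool, reason: str = "") -> None:
        # Every autonomous task leaves an explainable record: what, when, why.
        self.log.append({
            "task": task_id, "ok": ok, "reason": reason,
            "at": datetime.now(timezone.utc).isoformat(),
        })

    @property
    def error_rate(self) -> float:
        if not self.log:
            return 0.0
        return sum(not entry["ok"] for entry in self.log) / len(self.log)

    def needs_retraining(self) -> bool:
        return self.error_rate > self.max_error_rate

agent = AgentRecord("localization-agent-7")
for i in range(18):
    agent.record_task(f"job-{i}", ok=True)
agent.record_task("job-18", ok=False, reason="missed glossary term")
agent.record_task("job-19", ok=False, reason="wrong register for brand voice")
```

Two failures in twenty tasks puts this agent over its 5% threshold, and the audit log tells you exactly which tasks went wrong and why. That's the performance review, the compliance trail, and the retraining trigger in one surface.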
This is a real product surface that doesn't exist in traditional SaaS. We're building it at Smartcat and it's becoming as important as the agents themselves.
The trust ramp
You can't go from "here's a tool" to "the AI does everything" overnight. Users need to build trust gradually.
Think about it like autonomous vehicles. Tesla didn't start with fully self-driving. They started with lane assist, then Autopilot on highways, then expanded to more scenarios. Each step built confidence.
Service-as-Software products need the same ramp. Start by having the AI draft things for human review. Then move to AI doing the work with human spot-checks. Eventually, for well-understood tasks, the AI runs autonomously and humans only intervene on exceptions.
At Smartcat, some of our customers are fully autonomous on routine content. Others still review every piece. Both are right for where they are in the trust curve. Our product supports both modes and makes it easy to dial the autonomy up or down.
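One way to picture the autonomy dial is as a review policy: the same pipeline runs at every level, and only the amount of required human review changes. The level names and the 10% spot-check rate below are illustrative assumptions:

```python
from enum import Enum

class Autonomy(Enum):
    """Hypothetical trust-ramp levels for an AI agent."""
    DRAFT_ONLY = 1   # AI drafts, a human reviews every piece
    SPOT_CHECK = 2   # AI delivers, humans sample a fraction plus exceptions
    AUTONOMOUS = 3   # AI delivers, humans only see flagged exceptions

def needs_human_review(level: Autonomy, task_index: int, flagged: bool) -> bool:
    if level is Autonomy.DRAFT_ONLY:
        return True
    if level is Autonomy.SPOT_CHECK:
        return flagged or task_index % 10 == 0  # ~10% sample plus exceptions
    return flagged  # AUTONOMOUS: exceptions only

# At SPOT_CHECK, 100 clean tasks produce 10 sampled reviews.
reviews = [needs_human_review(Autonomy.SPOT_CHECK, i, flagged=False)
           for i in range(100)]
```

Letting a customer move between these levels per task type, rather than per account, is what makes it easy to dial autonomy up on routine content while keeping full review on the sensitive stuff.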
Where this is going
A few things I think will happen in the next couple of years:
Companies will buy outcomes, not software. Instead of "we need a translation management system," it'll be "we need our content localized into 30 languages within 24 hours of publication." The vendor that can deliver that outcome, regardless of how, wins.
The PM role shifts from tool designer to workforce architect. You're not designing screens and flows. You're designing how AI agents work, what decisions they can make, when they escalate to humans, and how they learn from corrections.
Small teams will do enormous things. A team of five humans managing 50 AI agents can produce the output of a team of 200. This is already happening. It changes the economics of entire industries.
Quality and consistency become competitive advantages. When AI does the work, the variance between outputs drops. Companies that can guarantee consistent quality at scale will eat the ones that can't.
What this means for PMs
If you're building product right now and not thinking about where your users' work could be done by the software itself, you're behind. The question isn't "how do we make this tool better?" It's "which parts of this work should the software just do?"
Start with the most repetitive, well-defined tasks your users currently handle. Those are your first candidates for agent automation. Then work your way up the complexity ladder.
The shift from SaaS to Service-as-Software is the biggest change in enterprise software since the move to the cloud. It's happening now, and it's moving fast.