
I've spent the better part of two decades building the workflow automation systems that millions of businesses rely on today: first creating what became Salesforce Flow, then as Chief Product Officer at Boomi, then leading Slack's platform with the remit of re-imagining automation in Slack, and now co-founding my next business, Noded AI. So I've been watching and building in the AI agent revolution with both excitement and a healthy sense of care. The promises are familiar: automate complex processes, reduce human intervention, scale operations seamlessly. Yet the reality check is also familiar: 95% of AI agent pilots are reportedly failing (MIT: State of AI in Business 2025).
Before we sound the alarm, let's put this in perspective. Everyone thinks 95% failure is terrible - but it's actually not that bad for emerging technology! Traditional workflow and technology projects haven't exactly been paragons of success either. McKinsey estimates that more than 70% of digital transformations fail, while the Standish Group reports that 66% of technology projects end in partial or total failure. ERP implementations fare even worse, with Gartner citing failure rates exceeding 75%. So the 95% AI agent failure rate? It's not a rallying call for despair - it's a sign that we're in the early, maturing phase of a transformative technology. We've been here before.
After observing successful and unsuccessful workflow and agent implementations across the industry, I believe we're making a fundamental framing error that's contributing to these high failure rates. We're treating AI agents as if they are the workflow, when the most successful use cases position them as operators of workflows.
Think about it this way: when we deploy an AI agent, are we asking it to invent a new process each time, or are we asking it to execute an established process with intelligence and autonomy? The difference is profound.
In the "agent as workflow" model, we feed the AI context and expect it to figure out the optimal path forward each time. This appeals to our sense that AI should be magical - that because it has learned from the vastness of the internet, it should inherently know how to handle any business process.
But in the successful implementations I've observed, the AI operates more like a highly capable new hire: it follows established workflows while making intelligent decisions within defined parameters. The agent doesn't reinvent how to handle a customer complaint or process an invoice - it executes the company's established workflow for these processes, but with the speed and consistency that only AI can provide. This isn't revolutionary thinking, but it seems we keep forgetting it.
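To make the distinction concrete, here is a minimal sketch of the "agent as operator" pattern. Everything here is hypothetical for illustration - the step names, options, and the stand-in `toy_agent` are assumptions, and a real system would call an LLM where `toy_agent` sits. The point is structural: the workflow's steps are fixed by the company, and the agent only chooses among the options each step allows.

```python
# Hypothetical sketch: the workflow is fixed; the "agent" only chooses
# among the options the workflow allows at each explicit decision point.

WORKFLOW = [
    {"step": "classify_complaint", "options": ["billing", "service", "product"]},
    {"step": "choose_remedy", "options": ["refund", "replacement", "escalate"]},
    {"step": "draft_response", "options": ["formal", "friendly"]},
]

def run_workflow(agent_decide, context):
    """Execute each predefined step in order; the agent decides,
    but only within the options the workflow permits."""
    trail = []
    for step in WORKFLOW:
        choice = agent_decide(step["step"], step["options"], context)
        if choice not in step["options"]:  # guardrail: reject out-of-policy moves
            raise ValueError(f"{choice!r} not allowed at {step['step']}")
        trail.append((step["step"], choice))
    return trail

# A stand-in "agent" for illustration - always picks the first option.
def toy_agent(step_name, options, context):
    return options[0]

print(run_workflow(toy_agent, {"customer": "Acme"}))
```

The inversion is the point: in the "agent as workflow" model the loop itself would live inside the model's head; here the loop lives in code the company owns, and the intelligence is scoped to the decision points.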
Companies are team sports, and most successful organizations drive consistency to enable scaled execution. You can't have agents randomly solving the same problem differently each time it's presented. Just as you wouldn't want human employees each inventing their own approach to customer service, you don't want AI agents doing so either.
Consider a simple, and very human, example: a customer complains that their meal doesn't meet expectations. The response varies dramatically by company culture - one might offer a replacement, another might comp the entire table, and yet another might politely (or not so politely) say "too bad". These aren't right or wrong responses; they're cultural expressions of company values applied to customer engagement processes.
This highlights something crucial: business processes aren't just logical decision trees. They're cultural artifacts that encode priorities, values, and behavioral norms. There are countless micro-moments in any process where culture and judgment matter as much as intelligence. It's one of the reasons I've always built human-centric workflow platforms - the intersection of automation and human endeavor is challenging and insanely rewarding.
This brings us to a challenge the industry seems to be underestimating: AI agents need onboarding, just like human employees do.
You wouldn't put even a PhD-level intern in front of a customer service workflow without training them on company language, glossary terms, priorities, and decision-making norms. Not all of this is captured in the workflow metadata - much of it exists in the cultural context of how work actually gets done.
More importantly, you wouldn't expect that intern to act fully autonomously on day one. Even when shadowed by experienced staff, you'd expect them to make mistakes - quite a few, actually. The key is putting them into active duty with appropriate guardrails so they can learn from real situations and real feedback loops.
We need the same approach with AI agents. They need real data, real scenarios, and real feedback to improve. Yet we often deploy AI agents with extensive context but minimal behavioral training, and then expect perfection from the start. It's frankly unrealistic. We give them the "what" but not enough of the "how" or "why," and we don't give them the supervised practice time they need to learn. The result is agents that might be technically correct but operationally misaligned with company culture and expectations.
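One way to picture this supervised-practice phase is a thin wrapper that routes low-confidence agent decisions to a human and logs every outcome as a feedback signal. This is a sketch under stated assumptions, not a prescription - the `SupervisedAgent` class, the 0.8 threshold, and the toy callables are all invented for illustration.

```python
# Hypothetical sketch: escalate low-confidence decisions to a human
# reviewer, and record every outcome as a feedback/training signal.
from dataclasses import dataclass, field

@dataclass
class SupervisedAgent:
    confidence_threshold: float = 0.8      # assumed cutoff for autonomy
    feedback_log: list = field(default_factory=list)

    def handle(self, task, agent_fn, human_fn):
        decision, confidence = agent_fn(task)
        if confidence < self.confidence_threshold:
            # Not confident enough to act alone: a human confirms or overrides.
            decision = human_fn(task, decision)
            source = "human"
        else:
            source = "agent"
        # Every resolved task becomes feedback the agent can learn from.
        self.feedback_log.append(
            {"task": task, "decision": decision, "source": source}
        )
        return decision

# Toy stand-ins for illustration only.
def toy_agent_fn(task):
    return ("approve", 0.6)                # low confidence on this task

def toy_human_fn(task, proposed):
    return "escalate"                      # the human overrides

supervisor = SupervisedAgent()
print(supervisor.handle("refund request #123", toy_agent_fn, toy_human_fn))
```

As the feedback log accumulates, the threshold can be raised task by task - the software analogue of letting the intern off shadow duty one responsibility at a time.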
This disconnect between expectation and reality became clear to me at the AI summit this year, hosted by HubSpot. There was a fascinating discussion about "gaining comfort" that your initial pilot was going to make mistakes 30-40% of the time. The striking part? Few leaders had the stomach for it.
We expect technology to be deterministic - when you press a button, the same thing happens every time. But the real value AI brings is that it is not deterministic. It brings judgment, adaptation, and contextual decision-making. The challenge is that this very capability that makes AI valuable also makes leaders uncomfortable. We want the benefits of intelligent automation without accepting the learning curve that intelligence requires.
Interestingly, AI agents tend to be most successful in back-office processes - what I call the "boring bits" of business operations. These are processes that require intelligence but don't need the human touch. Think data entry, invoice processing, inventory updates, compliance checks, or report generation. They're more algorithmic and less cultural - perfect training ground for AI agents.
What makes these processes ideal is that they typically have clear inputs and outputs, well-defined rules, and measurable success criteria - with little of the cultural nuance that complicates customer-facing work.
The AI sits at the edges of these processes, acting as an evolved human-to-machine interface - far more sophisticated than traditional "form filling" but not requiring the cultural nuance of customer-facing interactions.
As you move closer to customer-facing or culturally sensitive processes, the need for nuanced judgment increases. This isn't a limitation of AI capability so much as a recognition that some processes carry cultural weight that needs to be explicitly encoded, not assumed. Start with the boring bits, build confidence and expertise, then gradually expand to more complex territories.
I don't think this analysis should dampen enthusiasm for AI agents - quite the opposite. By properly framing agents as operators of workflows rather than replacements for them, we can set ourselves up for actual success instead of chasing magic that doesn't exist.
We're all learning our way through this transformation together. The AI agent space is evolving rapidly, and what works today may not be the best approach tomorrow. But based on what I've observed - both in successful implementations and in the broader patterns of how organizations adopt new operational technologies - treating AI as a capable operator of established workflows seems more promising than expecting it to be the workflow itself.
The future of work isn't about replacing human judgment with AI magic - it's about augmenting human-designed processes with AI capability. And that might just be magical enough.
The patterns we're seeing with AI agents aren't new - they're the latest chapter in the long story of how organizations adopt transformative technologies and go through process improvement to scale.
So, a question for you: what has your experience been? I'd love to hear from others who are navigating these challenges, especially those finding success with different approaches.
—
Steve Wood is currently the co-founder of Noded AI, a product focused on helping Customer Success Managers turn every customer into a success story. You can sign up free here: https://getnoded.ai