The Problem Nobody Wants to Stop and Actually Think About
Why traditional automation is key to avoiding the AI "solution trap". Enterprises keep falling into what has been called the solution trap: deploying new technology such as artificial intelligence without stopping to ask whether it actually solves a real business problem.
That is the view of Niranjan Vijayaragavan, CTO at workflow automation company Nintex, who argues that organisations need to apply the right tools to the right problems. It sounds simple, but amid the noise around generative AI and ChatGPT, corporate boards are pushing their teams to adopt AI immediately, often without knowing where it would actually make sense.
"Experimentation is good," Vijayaragavan said, but he stressed the importance of finding where the real bottlenecks in a process lie. IT leaders need a hypothesis about how AI delivers value in those specific situations. And one figure keeps getting buried under the hype: 80 to 90% of tasks can be automated using traditional technology, not AI. That number alone should give people pause.
The Shotgun Approach and Why It’s Falling Apart

Instead, many organisations have taken a "shotgun approach", firing in every direction and ending up with fragmented projects that deliver little to no return on investment. Vijayaragavan said it is time for "pause and reflection": "There is value there, the structure is lacking to derive that value." That sentence captures the whole problem.
The considerations stack up quickly. Security and privacy, for one: is the underlying large language model absorbing the data it accesses? Is it learning from your proprietary information, and are you sure either way?
Then there is the cost of rectification. Say an AI agent saves an employee one hour a day. If that hour is then spent unwinding the agent's mistakes, the value proposition collapses. Add opportunity cost on top: money spent experimenting with undefined AI projects is money not spent on initiatives you already know deliver value.
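The rectification arithmetic is simple enough to sketch. The figures below are illustrative assumptions, not Nintex data; the point is that error-handling time is subtracted directly from the headline saving.

```python
def net_hours_saved(hours_saved_per_day: float,
                    error_rate: float,
                    hours_to_fix_per_error: float,
                    tasks_per_day: int) -> float:
    """Hours actually gained once time spent unwinding agent mistakes is subtracted."""
    rectification = error_rate * tasks_per_day * hours_to_fix_per_error
    return hours_saved_per_day - rectification

# An agent that saves one hour a day, but errs on 5% of 20 daily tasks
# with each error costing an hour to unwind, delivers nothing:
print(net_hours_saved(1.0, 0.05, 1.0, 20))  # 0.0
```

A modest error rate is enough to wipe out the saving entirely, which is exactly the trap described above.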
Automation as the Foundation, Not the Afterthought
A recent Nintex study found that 84% of CIO and CFO respondents now believe automation is a necessary precursor to successfully implementing AI in business processes. Nintex's position is that automation acts as the muscle to complement AI's brain: it supports scaling by standardising processes, ensuring data quality and providing governance foundations. Without that muscle, the brain has nothing to act through.
Structured vs Unstructured Data

Making data AI-ready was a major topic throughout 2025, and Vijayaragavan said progress has been made, though success depends on clear use cases. It gets complicated because structured and unstructured data behave completely differently. LLMs have proven efficient at processing unstructured data; Nintex, for example, used LLMs to make its technical documentation more accessible.
Structured data, the kind sitting in SQL databases, is another matter. Siloed, unclean data "is still the state of the union," Vijayaragavan said, and it will take time to fix even with modern platforms such as Snowflake and Databricks. These are legacy issues that cannot simply be waved away.
He drew an important distinction: combining text from different sources is a non-deterministic process, whereas combining database records should be deterministic. Because LLMs are inherently probabilistic, IT teams get more accurate results using SQL statements to combine records. "As a general rule, do not use inferencing models where a deterministic result is expected." The rule is clear, yet teams keep breaking it.
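What the deterministic path looks like can be sketched with an in-memory SQLite database; the tables and figures are invented for illustration. The same query over the same data always returns the same result, which is precisely what an inferencing model cannot guarantee.

```python
import sqlite3

# Deterministic record combination: a SQL join, not an LLM.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO orders VALUES (10, 1, 250.0), (11, 1, 99.5), (12, 2, 40.0);
""")

# Join and aggregate: every run yields the identical answer.
rows = conn.execute("""
    SELECT c.name, SUM(o.total)
    FROM customers c JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name ORDER BY c.name
""").fetchall()
print(rows)  # [('Acme', 349.5), ('Globex', 40.0)]
```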
Where AI and Traditional Tech Actually Work Together
That said, AI can help with mapping records from different tables where column names are inconsistent, such as matching a column named 'Customer' with one named 'Cust'. That is a legitimate, useful application.
And there are scenarios where both approaches work together. Consider an HR system that uses an LLM to interpret a natural language question such as "how many days leave have I got left?", converting it into a specific database query. The query then deterministically looks up the entitlement and subtracts the days taken. Vijayaragavan called that "a great use of LLMs." Enterprises therefore need a platform supporting both deterministic and inferencing technologies, because context determines which is the right tool.
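That hybrid pattern can be sketched in a few lines. Here `parse_question` is a hypothetical stand-in for the LLM call, and the schema and figures are invented; the point is the division of labour, with the probabilistic step confined to interpreting the question and the answer coming from a deterministic query.

```python
import sqlite3

def parse_question(text: str) -> dict:
    # Stand-in for an LLM mapping free text to a structured intent.
    return {"intent": "leave_balance", "employee_id": 42}

def leave_balance(conn: sqlite3.Connection, employee_id: int) -> int:
    """Deterministic lookup: entitlement minus days taken."""
    row = conn.execute(
        "SELECT entitlement - days_taken FROM leave WHERE employee_id = ?",
        (employee_id,),
    ).fetchone()
    return row[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE leave (employee_id INTEGER, entitlement INTEGER, days_taken INTEGER)")
conn.execute("INSERT INTO leave VALUES (42, 25, 9)")

intent = parse_question("How many days leave have I got left?")
print(leave_balance(conn, intent["employee_id"]))  # 16
```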
GenAI in Customer Support and Development
On generative AI in customer support and software development, Vijayaragavan said "the promises have largely materialised." AI-powered customer support still makes errors and sometimes gets the tone wrong, but it is generally effective enough for deployment: a capable assistant, though not yet trusted to operate autonomously. In the coming year he expects more examples of agentic AI entering production, but not fully autonomous ones; humans will still provide oversight.
Governance, Agents, and the Growing Weight of It All

Then there is governance. He argued that governance of AI agents should mirror governance of human workers: both need clear roles, responsibilities and access rights. Agents have to keep records of their actions and be able to explain what they did and why. That makes sense, but the infrastructure required to deliver it is substantial.
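A minimal sketch of the audit trail that governance model implies: each agent action recorded with what was done and why. The field names and the sample entry are assumptions for illustration, not a description of any Nintex product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentAction:
    """One auditable record: which agent did what, and its stated rationale."""
    agent_id: str
    action: str
    rationale: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

log: list[AgentAction] = []

def record(agent_id: str, action: str, rationale: str) -> None:
    log.append(AgentAction(agent_id, action, rationale))

# A hypothetical agent approving an invoice, with its reasoning captured.
record("invoice-agent", "approve_invoice:INV-1009",
       "Amount under auto-approval threshold and PO matched")
print(len(log), log[0].action)
```

In a real deployment the log would be append-only storage with access rights mirroring those granted to the agent itself.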
Vijayaragavan noted a "healthy acknowledgement" in the industry that traditional automation is still more appropriate than agents in many contexts. But as foundation models become more accurate and specialised models improve, the scope for agents will grow, provided organisations invest in context engineering. "Currently, a lot of context is required to get the quality of responses that are required." Enterprise users are in a unique position to build that context into their agents rather than leaving it to individual users.
Weighing the Total Cost
Ultimately, organisations have to weigh the total costs of deploying AI, including the impact on employee morale, against their risk appetite.
“A multi-year investment may be needed before you see the returns, so it is important to think carefully why you are embarking on a project,” Vijayaragavan said. And then he added something that should probably be printed on every boardroom wall: “‘The board thought it was a good idea’ should not be sufficient reason.”
Boards are pushing for AI. But without a foundation of traditional automation and clean data, these projects are destined to underdeliver.