Kevin Smith
6 min read • 26 August 2025
For most of software’s history, the mental model has been simple: we write code to make machines follow rules. Those rules might be as dry as “add VAT to this invoice” or as consequential as “adjust the control surfaces of a Boeing 787 on final approach”. But in all cases, the premise was the same. Software existed to encode and execute a fixed logic faster, more reliably, and more scalably than a human could.
Now, a new kind of software is emerging. One that doesn’t just follow rules, but makes decisions. One that can navigate ambiguity, select tools, and adapt its behaviour in real time. In short: agents.
And here’s a key point that gets lost in all the noise and in all the hype: this isn’t one at the expense of the other. We’re not trading in our fast, repeatable, deterministic codebases for fuzzy, adaptive and slower language models. Instead, we’re entering an era where two fundamentally different modes of software coexist, collaborate, and call on each other.
One mode is about flawless execution; the other is about adaptive orchestration. Together, they change the question from “How do I get software to do this?” to “What outcome do I want, and which combination of machine capabilities will get me there?”
Traditional software is the industrial machinery of the digital world.
Its strengths are obvious: speed, reliability, repeatability, and scale.
Whether it’s an accounting system calculating payroll taxes, an e-commerce backend processing orders, or a compiler turning source code into executables, the principle is the same: the rules are known and hard-coded. If they change, a developer changes the code.
You can think of this mode as a precision-engineered production line. It’s not built to improvise; it’s built to run exactly as designed. And in many contexts, that is essential. Indeed, it is largely what all software has ever been and thus what we have always meant when we talk about "software".
Agents, by contrast, are like the skilled operators working on the factory floor. Except they’re not limited to one factory, one set of tools, or one location. They’re goal-driven, knowledge-powered, and capable of operating in uncertain or changing conditions.
Where traditional software asks, “What’s the next instruction in the sequence?”, agents ask, “Given the goal, what should I do next, and with which tools?”
This difference is profound:
A human legal assistant might retrieve case law, draft a response, and email it to a partner.
An agentic legal assistant could do the same, moving between knowledge retrieval, document drafting, and workflow automation without needing a bespoke app for each stage.
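To make that concrete, here is a minimal sketch of that decide-act loop in Python. Everything in it is a hypothetical stand-in: the three tools are stubs, and call_llm is hard-coded so the sketch runs without a real model or real legal systems.

```python
# A minimal sketch of the decide-act loop described above. The tool names
# (retrieve_case_law, draft_response, email_partner) and the call_llm helper
# are hypothetical stand-ins, not any real product's API.

def retrieve_case_law(query: str) -> str:
    return f"[case law relevant to: {query}]"        # stand-in for a search API

def draft_response(context: str) -> str:
    return f"Draft based on {context}"               # stand-in for a drafting tool

def email_partner(document: str) -> str:
    return f"Emailed: {document[:40]}..."            # stand-in for a mail API

TOOLS = {
    "retrieve_case_law": retrieve_case_law,
    "draft_response": draft_response,
    "email_partner": email_partner,
}

def call_llm(goal: str, history: list[str]) -> tuple[str, str]:
    """Hypothetical model call: given the goal and what has happened so far,
    return (tool_name, tool_input). Hard-coded here so the sketch runs."""
    script = [("retrieve_case_law", goal),
              ("draft_response", "the retrieved cases"),
              ("email_partner", "the draft"),
              ("done", "")]
    return script[len(history)]

def run_agent(goal: str) -> list[str]:
    history: list[str] = []
    while True:
        tool, tool_input = call_llm(goal, history)   # "what should I do next?"
        if tool == "done":
            return history
        history.append(TOOLS[tool](tool_input))      # execute the chosen tool

print(run_agent("respond to the Smith v. Jones motion"))
```

The point is the control flow: nothing here executes a fixed sequence; at each step the agent decides what to do next and which tool to use.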
The mistake I see in a lot of commentary is the framing of this as a zero-sum game, as if the old model will be wholly replaced by the new. That’s not just wrong; it’s a category error.
Agents and traditional software are not competitors. They’re complementary modes in a single systems architecture: traditional code supplies the deterministic execution, and agents supply the adaptive orchestration that decides when and how to invoke it.
This isn’t a revolution where one side wins; it’s a fusion. Agents elevate software from being a set of static, siloed applications to being callable functions in a much larger reasoning and action loop.
In this view, the future isn’t about replacing SAP, Salesforce, or your custom ERP. It’s about making them first-class citizens in a new ecosystem where an agent can say: “Given the current sales numbers, the updated market forecast, and the inventory constraints, I’ll adjust our Q4 production plan and I’ll use the ERP’s supply chain module to implement those changes automatically.”
What we have to understand, though, is that the software tools that are easy for humans to use might not be the same as the tools that are easy for agents to use. It is therefore entirely likely that 'traditional' software will eventually be written for, and targeted at, a different type of user: the software agent.
To make sense of this, I picture a two-layer model.
Layer 1: Deterministic Execution Layer
The base layer is everything we’ve built over decades: APIs, databases, enterprise apps, ETL pipelines, rules engines. They’re fast, stable, and reliable. They do exactly what they’re told. The base layer also includes the new tools we will build specifically for agents.
Layer 2: Adaptive Orchestration Layer
Above it sits the agent layer: a reasoning environment capable of interpreting goals, choosing tools, sequencing actions, and adapting based on results. This layer doesn’t replace the base; it uses it. It’s the human-like operator that knows which levers to pull.
Crucially, the boundary is porous. Agents may also call other agents. Traditional software may embed agentic components (think: a CRM with a built-in AI lead qualifier). And over time, we’ll see hybrids where the line between the two modes blurs.
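That CRM example is easy to sketch. Below, an otherwise deterministic pipeline embeds a single agentic step; qualify_lead_with_llm is a hypothetical stand-in for a model call, while the routing rule around it stays fixed and auditable.

```python
# A sketch of the porous boundary: a deterministic CRM pipeline that embeds
# one agentic step. qualify_lead_with_llm is a hypothetical stand-in for a
# model judging free-text notes; everything around it is plain rules.

from dataclasses import dataclass

@dataclass
class Lead:
    name: str
    notes: str
    score: float = 0.0

def qualify_lead_with_llm(notes: str) -> float:
    # Stand-in for a model call; returns a 0-1 qualification score.
    return 0.9 if "budget approved" in notes.lower() else 0.3

def process_lead(lead: Lead) -> str:
    lead.score = qualify_lead_with_llm(lead.notes)   # the fuzzy, adaptive step
    if lead.score >= 0.7:                            # the fixed, deterministic rule
        return f"route {lead.name} to sales"
    return f"add {lead.name} to nurture campaign"

print(process_lead(Lead("Acme Ltd", "Budget approved, wants a demo next week")))
```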
When you have both modes working together, the design brief for software changes.
Historically, if you wanted to automate a process, you built a dedicated system for it - an application with fixed workflows, coded logic, and user interfaces for every interaction. You hardwired the business rules into the system itself.
In the agent-plus-software future, you can separate capabilities from control logic. You don’t need an all-singing, all-dancing app for every scenario; you can expose atomic, well-designed functions (APIs, data queries, automation hooks), and let agents compose them dynamically. This makes software more modular and more reusable. The same set of tools can serve dozens of different agent-driven workflows.
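As a sketch of what “atomic, well-designed functions” might look like, here is one way to register capabilities alongside machine-readable descriptions. The registry format and the two capabilities are illustrative assumptions; the point is that an agent can discover and compose anything that declares what it does and what it needs.

```python
# A sketch of exposing atomic capabilities rather than a monolithic app.
# The registry format and the capabilities themselves are illustrative
# assumptions, not any particular platform's API.

import json

CAPABILITIES = {}

def capability(name: str, description: str, params: dict):
    """Register a plain function together with a machine-readable schema."""
    def wrap(fn):
        CAPABILITIES[name] = {"fn": fn, "description": description, "params": params}
        return fn
    return wrap

@capability("get_sales", "Fetch sales figures for a quarter", {"quarter": "string"})
def get_sales(quarter: str) -> float:
    return 1_250_000.0                               # stand-in for a database query

@capability("get_inventory", "Fetch current stock for a SKU", {"sku": "string"})
def get_inventory(sku: str) -> int:
    return 420                                       # stand-in for an ERP API call

# What an agent would actually see: the schemas, not the implementations.
print(json.dumps(
    {n: {"description": c["description"], "params": c["params"]}
     for n, c in CAPABILITIES.items()},
    indent=2))
```

The same two capabilities could serve a sales-forecasting workflow today and a supply-chain workflow tomorrow, without either being rebuilt.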
The mental shift is from “What app do I need for this?” to “What outcomes do I want, and which capabilities do I need to compose to get there?”
For developers, this means the winners in this shift will be the platforms that are agent-friendly by design: rich APIs, clear schemas, documented capabilities, and a focus on interoperability.
For organisations, the shift means rethinking automation around outcomes and composable capabilities rather than around fixed, single-purpose applications.
The risk, of course, is in governance. If agents can re-route workflows and call systems autonomously, you need guardrails: authentication, auditing, monitoring, and escalation paths when something unexpected happens.
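Here is a minimal sketch of what those guardrails could look like in code, with the policy details as illustrative assumptions: every tool call an agent makes passes through an authorisation check, is written to an audit log, and escalates to a human when the action is consequential.

```python
# A sketch of agent guardrails: authorisation, auditing, and escalation.
# The agent IDs, permission table, and escalation list are illustrative
# assumptions, not a real policy framework.

import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s AUDIT %(message)s")

ALLOWED = {"inventory-agent": {"get_inventory"}}     # per-agent permissions
NEEDS_HUMAN = {"adjust_production_plan"}             # actions requiring sign-off

def guarded_call(agent_id: str, tool: str, fn, *args):
    if tool not in ALLOWED.get(agent_id, set()):     # authentication/authorisation
        raise PermissionError(f"{agent_id} may not call {tool}")
    if tool in NEEDS_HUMAN:                          # escalation path
        logging.info("%s requested %s: escalating to a human", agent_id, tool)
        return None
    result = fn(*args)
    logging.info("%s called %s%r -> %r", agent_id, tool, args, result)  # audit trail
    return result

print(guarded_call("inventory-agent", "get_inventory", lambda sku: 420, "SKU-42"))
```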
If we zoom out, this is part of a broader evolution.
The endpoint isn’t just smarter tools; it’s a living digital workforce that grows in competence, shares what it learns, and continuously improves the way it uses traditional systems. The deterministic layer stays, but it becomes the substrate for an adaptive, self-optimising layer above.
This may sound like pure science fiction at this point, but the groundwork is already being laid. You don't have to squint too far into the future to see that this is where agentic computing is heading.
The temptation, which you can already see people starting to embrace, is to imagine a world where everything is agentic and deterministic systems fade away. That’s a mistake. Deterministic software is still the backbone of our digital infrastructure. It’s what keeps planes in the sky, transactions secure, and data consistent.
The opportunity is in interleaving the two modes so tightly that we stop thinking of them as separate. We’ll have deterministic systems doing the flawless execution, agents providing the adaptive orchestration, and humans setting the goals and the guardrails.
We’ll be designing not just for execution, but for collaboration between machines of different kinds, and between humans and machines.
So in a sense, the real future of software isn’t about replacing humans with agents or replacing code with LLMs. It’s about evolving the partnership between deterministic execution and adaptive orchestration, between precision and judgement, between rules and reasoning.
Two modes. One goal. Better software.
This article was originally written and published on LinkedIn by Kevin Smith, CTO and founder of Dootrix.