Kevin Smith
6 min read • 27 November 2024
🔗 Originally published on LinkedIn
In technology, certain ideas emerge that promise to reshape how we think about and interact with machines. One such idea is agentic computing—a concept that’s recently gained momentum, though its roots stretch back decades. While early work provided a strong theoretical foundation, it’s the advances in large language models (LLMs) that have reignited interest and made agentic computing a practical and transformative possibility. This article will explore what agentic computing is, where it comes from, why it’s relevant now, and what it could mean for the future.
An agent, in computing terms, is a software entity designed to act autonomously on behalf of a user or system. Unlike traditional software that relies on explicitly defined instructions, agents can make decisions independently, adapt to changing environments, and even learn from past interactions. This autonomy sets them apart from traditional tools and opens the door to more dynamic and responsive computing systems.
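The observe–decide–act autonomy described above can be made concrete with a toy sketch. The `ThermostatAgent` class below is an illustrative invention, not part of any real framework; it simply shows an entity that perceives its environment, keeps a history of past interactions, and chooses its own action rather than following a caller's explicit instruction:

```python
from dataclasses import dataclass, field

@dataclass
class ThermostatAgent:
    """A toy autonomous agent: it perceives, decides, and acts on its own,
    retaining a history of past observations."""
    target: float = 21.0
    history: list = field(default_factory=list)

    def perceive(self, temperature: float) -> None:
        # Record the latest observation of the environment.
        self.history.append(temperature)

    def decide(self) -> str:
        # Choose an action autonomously from the current state.
        current = self.history[-1]
        if current < self.target - 0.5:
            return "heat"
        if current > self.target + 0.5:
            return "cool"
        return "idle"

    def act(self, temperature: float) -> str:
        self.perceive(temperature)
        return self.decide()

agent = ThermostatAgent()
print(agent.act(18.0))  # heat
print(agent.act(23.0))  # cool
```

A real agent would replace the hand-written `decide` rule with learned or LLM-driven reasoning, but the control loop is the same.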
To borrow a perspective from Bill Gates, agents could represent a seismic shift in how we use technology. Gates has compared their potential to the transformative leap from command-line interfaces to graphical user interfaces, a shift that redefined the accessibility and usability of computers.
Agentic computing isn’t new. Its conceptual roots can be traced back to the 1990s, when researchers like Yoav Shoham introduced Agent-Oriented Programming (AOP). Shoham’s framework proposed that software entities should be modelled as having beliefs, desires, and intentions—mental states that guide their decision-making. Around the same time, Ker’95, a seminal paper by Michael J. Wooldridge and Nicholas R. Jennings in The Knowledge Engineering Review, formalised the principles of autonomous agents and multi-agent systems. This work emphasised autonomy, goal-directed behaviour, and the ability to interact with other agents in a shared environment.
These early efforts laid the groundwork for agentic computing, but they were ahead of their time. The computing power and data required to realise their full potential simply didn’t exist. Now, however, things have changed.
The renewed interest in agentic computing is directly tied to recent breakthroughs in artificial intelligence. Large language models (LLMs) like OpenAI’s GPT have demonstrated capabilities that align closely with the goals of agentic systems. These models can process vast amounts of data, understand complex instructions, and generate nuanced responses—all in real time. They also have the flexibility to handle a wide range of tasks, from answering questions to writing code or even composing music.
What makes LLMs particularly transformative for agentic computing is their ability to function as general-purpose intelligences within a system of specialized agents. While traditional agents were limited by narrowly defined rules and behaviors, LLMs can act as adaptable hubs of intelligence. They enable systems to handle unforeseen scenarios, integrate diverse data sources, and interact more effectively with both humans and other agents.
Sam Altman, CEO of OpenAI, has called agents “the killer function of AI”. He predicts that by 2025, agents will be deeply integrated into our daily lives, transforming industries by automating complex workflows and delivering personalised, dynamic experiences. Tools like ChatGPT are paving the way for this future by making agents smarter, more adaptable, and easier to integrate into existing systems.
To understand agentic computing in action, it’s helpful to think about how agents are structured within a system. Agents can generally be divided into two types: macro-agents, which oversee broad goals and coordinate the work of other agents, and micro-agents, which carry out narrowly scoped tasks.
Micro-agents are typically equipped with certain common traits that enable them to operate effectively within an agentic system:

Tools: access to external functions, APIs, or data sources that let the agent act beyond generating text.

Predefined roles: a narrowly scoped responsibility, so each agent knows which tasks fall within its remit.

Task hand-offs: the ability to pass work to another agent when it falls outside the agent’s own role.
By leveraging these traits, micro-agents contribute to a highly adaptable and efficient system. This hierarchy enables agentic systems to combine scalability with efficiency. Macro-agents delegate tasks to micro-agents, allowing the system to tackle complex problems without becoming overwhelmed.
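The macro/micro hierarchy above can be sketched in a few lines of plain Python. All class and tool names here (`MicroAgent`, `MacroAgent`, `summariser`, `translator`) are invented for illustration; the point is the structure: each micro-agent has a role, a set of tools, and a hand-off target, while the macro-agent delegates tasks into the chain:

```python
from typing import Callable

class MicroAgent:
    """A narrowly scoped agent: a predefined role, a set of tools,
    and an optional hand-off target for work outside its remit."""
    def __init__(self, role: str, tools: dict[str, Callable[[str], str]],
                 handoff: "MicroAgent | None" = None):
        self.role = role
        self.tools = tools
        self.handoff = handoff

    def handle(self, tool_name: str, payload: str) -> str:
        if tool_name in self.tools:
            return self.tools[tool_name](payload)
        if self.handoff is not None:
            # Task hand-off: pass work this agent cannot do onward.
            return self.handoff.handle(tool_name, payload)
        raise ValueError(f"No agent can handle {tool_name!r}")

class MacroAgent:
    """Delegates each task into a chain of micro-agents."""
    def __init__(self, entry: MicroAgent):
        self.entry = entry

    def run(self, tasks: list[tuple[str, str]]) -> list[str]:
        return [self.entry.handle(name, payload) for name, payload in tasks]

# Illustrative roles and tools:
summariser = MicroAgent("summariser", {"summarise": lambda t: t[:20] + "..."})
translator = MicroAgent("translator", {"translate": lambda t: t.upper()},
                        handoff=summariser)

macro = MacroAgent(entry=translator)
results = macro.run([("translate", "hello world"),
                     ("summarise", "a very long report about supply chains")])
print(results)
```

In a real system the lambdas would be LLM calls or external APIs, but the delegation pattern, and the way hand-offs keep any one agent from being overwhelmed, is the same.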
While individual agents can accomplish impressive tasks, the real power of agentic computing lies in agent orchestration—the ability to manage and coordinate multiple agents to work together effectively. Think of agent orchestration as a conductor leading an orchestra, ensuring that each agent performs its role at the right time and in harmony with others. Without orchestration, even the most capable agents can struggle to achieve complex goals efficiently.
Two notable frameworks in this area are Magentic-One and OpenAI Swarm. These frameworks provide the infrastructure to manage interactions between multiple agents, allowing them to collaborate dynamically and respond to changing requirements in real time.
These frameworks are still early examples of agent orchestration, but they are expected to mature significantly by 2025 and beyond, potentially becoming integral to various industries and complex systems.
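To make the conductor metaphor concrete, here is a minimal orchestration sketch. It deliberately does not use Magentic-One's or Swarm's actual APIs; the `Orchestrator` class and step names are assumptions for illustration. The orchestrator routes each step of a workflow to the agent registered for it and threads intermediate results between them:

```python
from typing import Callable

class Orchestrator:
    """A toy 'conductor': runs a workflow by invoking the agent
    registered for each step, feeding each output to the next agent."""
    def __init__(self):
        self.agents: dict[str, Callable[[str], str]] = {}

    def register(self, step: str, agent: Callable[[str], str]) -> None:
        self.agents[step] = agent

    def run(self, workflow: list[str], payload: str) -> str:
        for step in workflow:
            # Each agent's output becomes the next agent's input.
            payload = self.agents[step](payload)
        return payload

orch = Orchestrator()
orch.register("clean", lambda text: text.strip())
orch.register("classify",
              lambda text: f"[{'question' if text.endswith('?') else 'statement'}] {text}")

print(orch.run(["clean", "classify"], "  Is this an agent?  "))
# [question] Is this an agent?
```

Real frameworks add what this sketch omits: dynamic routing (the next step chosen by an LLM rather than a fixed list), parallel execution, and recovery when an agent fails.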
Agentic computing is no longer confined to research labs. It’s being applied across industries, often in ways we encounter daily without realising it.
Personal Productivity: LLM-powered macro agents like ChatGPT are evolving into intelligent personal assistants. They can manage your calendar, summarise emails, and even generate detailed reports—tasks that previously required human oversight.
E-Commerce: Platforms like Amazon already use agentic systems to recommend products based on user behaviour. With smarter agents, these systems could become even more adaptive, tailoring recommendations in real time based on external factors like seasonal trends or supply chain constraints.
Healthcare: Agents are transforming healthcare by assisting with diagnostics, monitoring patient data, and even enabling robotic surgeries. For example, an agentic system might analyse a patient’s vitals in real time, alerting doctors to potential issues before they become critical.
Insurance: Traditional insurance comparison engines are static tools. An agentic approach could involve dynamic software robots that continuously analyse policy updates, risk factors, and market conditions to provide more personalised and timely recommendations.
The advantages of agentic computing are clear, but they come with challenges that must be addressed.
As we look ahead, the possibilities for agentic computing are staggering. Imagine a future where software robots, powered by LLMs, manage entire workflows autonomously. They could handle everything from monitoring supply chains to optimising energy use in smart cities—all while collaborating seamlessly with human operators.
The combination of agentic principles with advanced AI models is already driving innovation. These systems are becoming more than tools—they’re becoming collaborators in solving some of the world’s most complex problems.
Yet, as exciting as this future is, it’s worth considering how we want to shape it. The question is not just what agents can do, but how we can use them responsibly to create systems that align with human values and priorities.
Agentic computing is not a new idea, but it’s an idea whose time has come. Advances in LLMs and Gen-AI have provided the missing pieces needed to make agentic systems practical, scalable, and transformative. By combining the foundational principles established in works like Ker’95 with cutting-edge AI capabilities, we are entering a new era of computing.
This is not just about building smarter tools; it’s about creating systems that think, adapt, and act with unprecedented autonomy. And as we stand on the cusp of this transformation, one thing is clear: the future of computing is agentic.
This article was originally written and published on LinkedIn by Kevin Smith, CTO and founder of Dootrix.