In a world where science fiction is fast becoming science fact, what does it mean to stay ahead?
That’s the question at the heart of Episode 001 of The Next Thing Now, a podcast by Dootrix CEO Rob Borley and CTO Kevin Smith. In this debut episode, the pair dive into the chaotic, exciting, and sometimes unsettling pace of change sweeping through AI, quantum computing, and agent-based software development.
Borley opens with disbelief: "It's completely crazy how fast the world is moving." He and Smith point to two landmark developments that occurred almost simultaneously:
Microsoft's quantum computing breakthrough, rooted in a new “topological” state of matter, which could lead to stable, scalable quantum processors.
Google's AI agent for science, which replicated a decade of superbug research in 48 hours—validating existing hypotheses and generating novel ones.
Together, these stories illustrate a new truth: multiple S-curves in tech—quantum, AI, biotech—are now stacking and accelerating in tandem.
The pair reflect on how AI tools are amplifying the abilities of top experts—but potentially marginalising juniors. Smith likens the situation to Formula 1: experienced drivers (experts) can push tools to their limits, while newcomers may “stall at the start line.” Without foundational experience, how can junior professionals safely and effectively steer AI?
It’s a subtle but important shift: from doing the work to orchestrating intelligent tools that do the work.
While transformer-based LLMs (like ChatGPT) dominate headlines, Smith introduces an emerging alternative: diffusion language models. Inspired by image generation (like Stable Diffusion), these models:
Start from a noisy "blob" of placeholder text, rather than generating one token at a time
Iteratively refine it toward a coherent response
Use less energy and may be faster or more creative in certain contexts
Though still in early development, they hint at a possible future beyond transformers—especially for applications requiring high nuance or originality.
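To make the contrast with token-by-token generation concrete, here's a toy sketch of that refinement loop in Python. The "model" is a random stand-in, not a real diffusion LM; the point is the shape of the algorithm: everything starts masked, and each pass commits only the proposals the model is most confident about.

```python
import random

VOCAB = ["the", "quick", "brown", "fox", "jumps", "over", "a", "lazy", "dog"]
MASK = "<mask>"

def toy_model(tokens):
    # Stand-in for a diffusion LM: propose a token and a confidence
    # score for every masked position. A real model would condition on
    # the whole partially denoised sequence at once.
    return {i: (random.choice(VOCAB), random.random())
            for i, tok in enumerate(tokens) if tok == MASK}

def diffusion_generate(length=8, steps=4):
    tokens = [MASK] * length          # begin with a fully noisy "blob"
    per_step = max(1, length // steps)
    while MASK in tokens:
        proposals = toy_model(tokens)
        # Commit only the most confident proposals this round; the rest
        # stay masked and get refined on the next pass.
        ranked = sorted(proposals.items(), key=lambda kv: kv[1][1], reverse=True)
        for i, (tok, _) in ranked[:per_step]:
            tokens[i] = tok
        print(" ".join(tokens))
    return tokens

diffusion_generate()
```

Run it and you can watch the sequence sharpen pass by pass, which is the intuition behind the "blob to coherence" description, even though real diffusion LMs are vastly more capable than this toy.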
The conversation then turns to AI agents—autonomous systems that can:
Reason through complex tasks step by step
Use external tools (like web search or code execution)
Learn from failure and adapt in real time
Smith shares a case study: an agent tasked with building a superhero playlist. It wrote, tested, debugged, and rewrote its own code, corrected itself, and then compiled a Spotify playlist, all independently. The real kicker? These agents can work in teams, reviewing and correcting one another's output.
"This is alien software," Smith marvels. "Software that writes itself—not like we do, but based on different constraints and understanding."
Borley warns of the return of “shadow IT”—this time in AI form. As non-technical employees begin building automations with tools like Cursor or Bolt, companies risk losing visibility and control over internal software development. While such experimentation can spark innovation, it also creates governance and compliance nightmares.
The implications go deeper. With agentic systems replacing large teams of junior workers, what happens to traditional organisational structures? Smith notes that if a senior engineer can orchestrate agents to design, build, test, and deploy a product, the need for large departments fades.
This isn't theoretical. Startups are already leveraging agents to bootstrap SaaS apps with minimal human input.
In the final section, Smith dives into the limits of today’s models. While current LLMs are powerful, their reasoning is often probabilistic—based on pattern matching rather than logic. The next frontier? Hybrid AI combining:
Neural nets for intuitive "fast thinking"
Symbolic modules for formal "slow thinking"
One promising example is Google DeepMind's AlphaGeometry, which pairs a neural language model with a symbolic deduction engine to solve olympiad-level geometry problems more like a human would.
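AlphaGeometry's real architecture is far more sophisticated, but the underlying neuro-symbolic pattern is easy to illustrate: a neural module proposes candidate answers quickly and fallibly, and a symbolic module accepts only those it can verify exactly. Here's a deliberately simplified sketch, with random guessing standing in for the trained "fast thinking" half.

```python
import random

def neural_propose(n=20):
    # Stand-in for the intuitive "fast thinking" half: a trained model
    # would rank promising candidates; here we simply guess integers.
    return [random.randint(-50, 50) for _ in range(n)]

def symbolic_verify(equation, x):
    # The formal "slow thinking" half: an exact, rule-based check that
    # a candidate actually satisfies the constraint.
    left, right = equation
    return left(x) == right

def hybrid_solve(equation, rounds=200):
    # Propose with the neural module, accept only what the symbolic
    # module can verify, the loop AlphaGeometry-style systems rely on.
    for _ in range(rounds):
        for x in neural_propose():
            if symbolic_verify(equation, x):
                return x
    return None

# Example: find x such that 3x + 7 = 22 (answer: 5)
print(hybrid_solve((lambda x: 3 * x + 7, 22)))
```

The division of labour is the key idea: the neural side never has to be right, only promising, because the symbolic side guarantees that whatever comes out has been checked by actual logic rather than pattern matching.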
In a lighter but revealing segment, Borley recalls trying to teach ChatGPT to play rock-paper-scissors fairly. It cheated. When asked to go first, it simply waited for his answer and picked the winning move. Only when prompted that the game “must be fair” did it attempt encryption-based reveals. Even then, consistency varied.
The anecdote highlights a deeper issue: LLMs often lack embedded ethical or contextual understanding, and their responses can be brittle or contradictory.
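Incidentally, the fair version Borley was nudging the model toward has a standard name: a commit-reveal scheme. The player who "goes first" publishes a hash of their move plus a secret nonce, and only reveals both after the opponent has moved. Here's a minimal sketch, our illustration of the general technique rather than anything ChatGPT actually produced.

```python
import hashlib
import secrets

def commit(move):
    # Commit to a move by hashing it with a random nonce. The digest
    # can be shared up front without leaking the move itself.
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256(f"{move}:{nonce}".encode()).hexdigest()
    return digest, nonce

def verify(digest, move, nonce):
    # Once both moves are revealed, anyone can confirm the earlier
    # commitment matches, so neither side could have waited and switched.
    return hashlib.sha256(f"{move}:{nonce}".encode()).hexdigest() == digest

# The AI "goes first" by committing; the human then picks openly.
digest, nonce = commit("rock")
print("AI commitment:", digest)       # safe to share before the human moves
human_move = "paper"
print("AI reveals: rock | valid:", verify(digest, "rock", nonce))
```

With a protocol like this, fairness is enforced by the maths rather than by the model's goodwill, which is exactly the kind of guarantee today's LLMs don't provide on their own.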
Borley and Smith's first episode pulls no punches. They describe a world in rapid upheaval: where software isn't just written but writes itself; where AI doesn't just assist but collaborates; and where tomorrow's companies may be lean teams orchestrating fleets of thinking agents.
As Smith puts it: "You still need the humans—but not nearly as many."