In Episode 006 of The Next Thing Now, Rob Borley and Kev Smith explore how AI is evolving from a tool that executes commands to something far more human-like: a companion that learns through experience. What begins with a story about a weekend in Milan guided entirely by AI turns into a profound reflection on prediction, memory, learning, and the human role in an increasingly agentic world.
Rob opens with a story of his spontaneous trip to Milan, where he navigated the city entirely through ChatGPT—from public transport to sightseeing to hidden local restaurants. With real-time suggestions, GPS awareness, and even a custom scavenger hunt around the cathedral, the AI acted not just as a tool, but as a dynamic companion. Kev jokes that Rob “lived on the edge”—relying entirely on something that didn’t even exist a few years ago.
The point: AI is already transforming how we experience the world, seamlessly integrating planning, translation, culture, and decision-making.
Reflecting on the trip, Rob and Kev unpack a key insight: LLMs are prediction engines, not thinking machines. Like the human brain, they cut cognitive corners to deliver the most likely response based on previous data.
Rob draws from The Science of Storytelling by Will Storr, explaining how our brains construct the world by hallucinating likely outcomes from limited input—just like an LLM. This shared mechanism is both powerful and problematic: it explains how AI can feel intelligent while still being capable of confident mistakes.
“We’re used to software being deterministic—AI isn’t. It’s more like working with a co-worker, not a calculator.” – Kev
The conversation shifts from prediction to collaboration. Rob suggests that the best way to think about LLMs is not as tools, but as partners—akin to junior engineers learning through feedback. Kev agrees, noting that AI coding tools already operate like apprentice programmers:
They generate code.
They read error messages.
They iterate until it works.
The result is not always elegant—but it’s functionally correct. And with practices like test-driven development, humans can constrain AI to deliver reliable outcomes.
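To make the apprentice analogy concrete, here is a minimal sketch of that loop in Python. It is illustrative rather than any particular tool's implementation: `generate` stands in for whatever model call you use, and pytest acts as the test-driven constraint that decides when the loop stops.

```python
import subprocess
from typing import Callable, Optional

def iterate_until_green(generate: Callable[[str], str], task: str, max_attempts: int = 5) -> Optional[str]:
    """Apprentice-style loop: generate code, run the tests, feed failures back into the prompt."""
    feedback = ""
    for _ in range(max_attempts):
        source = generate(f"{task}\n\nPrevious test output:\n{feedback}")
        with open("solution.py", "w") as f:        # the module the test suite imports
            f.write(source)
        result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        if result.returncode == 0:                 # tests pass: the TDD constraint is satisfied
            return source
        feedback = result.stdout + result.stderr   # error messages become the next prompt
    return None                                    # still red after max_attempts; hand back to a human
```

The point of the sketch is that the tests, not the model, define "done": the generated code can be as inelegant as it likes, but it only escapes the loop when the suite is green.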
This episode introduces a critical concept: the era of experience. Kev references a recent DeepMind paper that suggests we’ve reached the limits of static, pre-trained models. Future AI must:
Learn continuously over time.
Sense the environment.
Adapt based on real-world feedback.
This marks a shift from today’s LLMs (which start fresh with every session) toward persistent agents that evolve, remember, and improve—more like people than programs.
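As a rough illustration of what "persistent" means here, the toy sketch below keeps an agent's experience in a JSON file that outlives any single session. `observe` and `act` are hypothetical placeholders for whatever sensing and decision-making a real agent would do; the only point is that memory accumulates across runs instead of resetting.

```python
import json
from pathlib import Path

MEMORY = Path("agent_memory.json")  # survives across sessions, unlike a chat context window

def load_memory() -> list:
    return json.loads(MEMORY.read_text()) if MEMORY.exists() else []

def save_memory(memory: list) -> None:
    MEMORY.write_text(json.dumps(memory, indent=2))

def run_episode(observe, act, memory: list) -> None:
    """One pass of the experience loop: sense the environment, act, record the outcome."""
    observation = observe()                      # sense the environment
    action, outcome = act(observation, memory)   # decide using everything learned so far
    memory.append({"observation": observation, "action": action, "outcome": outcome})

def main(observe, act, episodes: int = 10) -> None:
    memory = load_memory()   # pick up where the last session left off
    for _ in range(episodes):
        run_episode(observe, act, memory)
    save_memory(memory)      # persist the experience for the next run
```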
Rob imagines a world where every company has its own AI engineer, trained not on general knowledge, but on how that company thinks, builds, and works. Instead of one universal model, we might have millions of individualised ones—each learning through experience like a human team member.
Kev counters that this doesn’t mean every organisation needs to train its own model—because once one agent gains deep experience, it can be cloned, scaled, and deployed across the world. Human learning is slow and singular; AI learning is exponential and shareable.
The discussion naturally turns to AGI (Artificial General Intelligence). Kev points out that the definition keeps shifting: once it meant “as smart as a human”; now it requires human-like learning and adaptation over time.
But once AI reaches that point, the gap to ASI (Artificial Superintelligence) may be very small. Because unlike humans, AI can:
Copy itself instantly.
Share learnings across clones.
Scale exponentially.
It’s a chilling and exhilarating thought: once AI can truly learn from experience, the superintelligence gap may close in months, not years.
The episode circles back to the here and now. Rob describes how Dootrix uses AI to analyse workshop transcripts—asking, “What did we miss?” Often, the AI surfaces insights that no one in the room spotted—because it listens to everything, unbiased and unfiltered.
Kev adds that AI is a better listener than most humans. Each person in a meeting leaves with a different mental model, shaped by their biases. An AI transcript acts like a mirror—processing everything and feeding back new connections in seconds.
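A simple version of this workflow can be sketched with the OpenAI Python SDK. The model name and prompt below are illustrative assumptions, not a description of how Dootrix actually runs it.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment

client = OpenAI()

def what_did_we_miss(transcript: str, model: str = "gpt-4o") -> str:
    """Feed a full workshop transcript to a model and ask for the gaps nobody spotted."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "You are reviewing a workshop transcript. "
                                          "List themes, risks, and questions the participants did not address."},
            {"role": "user", "content": f"Transcript:\n\n{transcript}\n\nWhat did we miss?"},
        ],
    )
    return response.choices[0].message.content

# Usage: print(what_did_we_miss(open("workshop_transcript.txt").read()))
```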
Rob and Kev discuss the ongoing battle for developer mindshare:
OpenAI has reportedly acquired Windsurf for $3B.
Anthropic’s Claude is being integrated into Apple’s next version of Xcode.
Internally, companies like Shopify and Duolingo are mandating AI use in engineering.
But not everyone is moving. Rob notes that many public sector organisations are still frozen—citing poor tools, legacy systems, and fear of change.
To illustrate what good looks like, Kev points to McLaren F1. Once at the back of the grid, they transformed into championship contenders within two years—through visionary leadership, new hiring, and radical rethinking.
It’s a metaphor for business in the AI era: those with strong leadership, clarity of purpose, and a willingness to break old habits will surge ahead. Others will be left behind.
The key message of Episode 006 is this: we are crossing a threshold. AI is no longer just an efficient tool—it’s becoming a thinking partner. But for this new era to reach its full potential, we must rethink how we work, lead, and learn.
It’s not just about writing code faster or generating text. It’s about building systems that adapt, evolve, and ultimately reflect the messy brilliance of human intelligence.