In Episode 005 of The Next Thing Now, Rob Borley and Kev Smith tackle the increasingly tangled world of AI models, vibe coding, and the tension between creativity and control in a rapidly evolving development landscape. What starts as a rant about OpenAI’s naming conventions becomes a deep reflection on trust, tooling, and the transformation of software itself.
Rob opens with an existential question: What is OpenAI, really? Is it still the noble R&D lab of its origins—or just another tech giant chasing monetisation, building social features, and releasing hardware?
This duality is reflected in ChatGPT itself: once a demo, now a sprawling product with cross-chat memory, project folders, an image library, and an overwhelming number of models. From GPT-4o to GPT-4.1 mini and nano, and a raft of "o" reasoning models, users are left wondering: What model am I even using—and why does it matter?
“It’s a spaghetti mess,” says Rob. “Even the names don’t make sense.”
Kev takes a stab at untangling the mess:
GPT-4o is the default, multimodal model for general use.
GPT-4.1 is essentially the same, but accessed via the API for developers.
Mini and nano variants trade accuracy for speed and lower cost.
GPT-4.5 was a research preview, now discontinued.
The "o" series (o1, o3, o4-mini, etc.) are reasoning-first models—designed for multi-step problem solving.
The newly released o3 and o4-mini models are fully agentic, able to chain tools together (e.g., web search + code execution + weather data) to solve complex tasks autonomously.
While OpenAI hasn’t branded them officially as “agentic,” Kev insists these are the first true general-purpose agent models.
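The tool chaining Kev describes can be pictured as a simple agent loop: the model invokes a tool, observes the result, and feeds it into the next step. The sketch below is a toy illustration of that pattern; the tool functions and the fixed plan are hypothetical stand-ins, not OpenAI's actual API, where the model would choose each step dynamically.

```python
def web_search(query: str) -> str:
    # Hypothetical stand-in for a real web-search tool.
    return f"top result for '{query}'"

def run_code(expr: str) -> str:
    # Hypothetical stand-in for a sandboxed code-execution tool.
    return str(eval(expr, {"__builtins__": {}}))

TOOLS = {"web_search": web_search, "run_code": run_code}

def toy_agent(plan: list[tuple[str, str]]) -> list[str]:
    """Run a sequence of (tool, argument) steps and collect observations.

    A real agentic model decides each step from the previous tool's
    output; here the plan is fixed so the chaining is easy to see.
    """
    observations = []
    for tool_name, arg in plan:
        result = TOOLS[tool_name](arg)
        observations.append(f"{tool_name} -> {result}")
    return observations

# Chain two tools, as a reasoning model might when answering a question:
for step in toy_agent([("web_search", "average April temperature London"),
                       ("run_code", "(8 + 15) / 2")]):
    print(step)
```

The key property is the loop itself: each tool's output becomes context for the next decision, which is what separates an agentic model from a single-shot completion.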
The conversation shifts to the concept of vibe coding—a term that’s evolved from “mindless prompting” into shorthand for AI-assisted development. Kev critiques it:
Vibe coding enables anyone to build functional prototypes without deep knowledge.
But it lacks the robustness, safety, and long-term reliability needed for production software.
It’s not software engineering—it’s creative exploration.
“You can build fast,” Kev says, “but you still need to build slow if you want it to last.”
They argue that the real power lies in switching modes: use vibe coding to prototype, then switch to structured engineering to scale safely.
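The two modes can be made concrete with a toy example (the functions below are hypothetical, not from the episode). The "vibe" version is fast and fine for a demo; the engineered version adds the explicit contract, validation, and clear failures that production code demands:

```python
# "Vibe" mode: quick prototype, works on the happy path, fragile elsewhere.
def parse_price(text):
    return float(text.replace("£", ""))

# Engineered mode: typed contract, input validation, explicit failures.
def parse_price_safe(text: str) -> float:
    """Parse a price like '£9.99' into a float, rejecting bad input."""
    if not isinstance(text, str):
        raise TypeError(f"expected str, got {type(text).__name__}")
    cleaned = text.strip().lstrip("£$").replace(",", "")
    try:
        value = float(cleaned)
    except ValueError:
        raise ValueError(f"not a price: {text!r}") from None
    if value < 0:
        raise ValueError(f"negative price: {text!r}")
    return value

print(parse_price("£9.99"))            # happy path only
print(parse_price_safe(" £1,299.50 ")) # also handles separators, whitespace
```

Both functions "work" in a demo; only the second survives real-world input, which is the fast/slow distinction the hosts are drawing.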
Rob and Kev discuss OpenAI’s strategic play to own developer tooling. After failing to buy Cursor, OpenAI is reportedly trying to acquire Windsurf, another AI-powered IDE. The goal? Lock developers into their model ecosystem—just like Apple did with the App Store.
The broader challenge, though, isn’t tools—it’s trust and training:
Many seasoned devs resist new methods because they don’t want to “start over.”
Junior devs risk skipping the deep experience needed for production-grade systems.
Organisations need to rethink onboarding, mentorship, and skill development.
“You can’t just throw a new dev at a bug anymore—you need to teach them to think slowly, as well as build fast.”
Rob references the recent viral trend of AI-generated action figures, and the creative community’s backlash. Artists and makers felt undercut and overwhelmed. A hashtag—#NoAIStarterPack—emerged in protest.
It reflects a deeper tension:
AI gives creators new power.
But when too many people can do the same thing instantly, the craft feels devalued.
The outcome? Some creatives lean into AI to accelerate, while others push back, framing themselves as digital artisans in a sea of mass production.
Kev shares a striking anecdote: a woman walked into a shop, held up a picture from ChatGPT, and said, “I want this shampoo.”
She didn’t test it. She didn’t Google it. She didn’t compare options. The AI had earned her trust—so she acted.
This moment, the hosts argue, marks a turning point:
Search and shopping are collapsing into AI-driven decision-making.
LLMs are becoming the buyer, not just the assistant.
Trust and personalisation are the new battlegrounds.
If OpenAI can maintain that trust, they may surpass even Google as the go-to layer for everyday life.
The episode ends with a balanced perspective:
The “vibe” mode is great for prototyping, ideation, and architectural exploration.
But serious software still demands structured engineering, thoughtful design, and long-term safety.
The real shift? Developers now need to operate in both modes.
Fast and slow. Experimental and exacting.
“It’s not about replacing developers,” says Kev. “It’s about giving them a new set of superpowers—and teaching them when to use which.”
Rob and Kev believe OpenAI may be on the verge of becoming the next Apple or Google. But only if they get productisation, naming clarity, and developer trust right. The race is on—and the world isn’t waiting.