THE NEXT THING NOW PODCAST

MCP in the Matrix

The Next Thing Now: From Simpsons Filters to Strategic AI Gaps

In the third episode of The Next Thing Now, Rob Borley and Kev Smith blend humour, insight, and industry critique as they explore the accelerating progress of AI—particularly image generation, agentic systems, and the deepening divide between legacy institutions and AI-native disruptors. What begins with cartoons and ends with Apple’s existential AI crisis is, ultimately, a meditation on momentum, governance, and the costs of waiting too long.

AI Image Models Evolve—Fast

Fresh off their complaints in Episode 2 about the shortcomings of AI image generation, Rob and Kev joke that Sam Altman must be listening—because the very next day, OpenAI dropped a major update.

Highlights include:

  • Layered image composition, akin to Photoshop, enabling selective edits without breaking character consistency.

  • Dramatically better style fidelity (e.g., turning people into Ghibli characters or Simpsons versions).

  • Real-world impacts on niche businesses—e.g., Etsy sellers who handcrafted cartoonified pet portraits have seen their niche made obsolete almost overnight.

The update shows how rapidly AI is improving—transforming yesterday’s frustrations into today’s default capabilities.

MCP: The “Neo” Moment for AI

Kev introduces a compelling analogy to explain MCP (Model Context Protocol):

“It’s like sticking the Neo tube in the back of the LLM’s head.”

MCP allows language models to gain specific skills—like working with Google Sheets or booking holidays—by connecting to external tools via APIs. This turns generalist models into domain-specific agents with new superpowers. Rob shares a real-world example: a travel AI that uses MCP to query both API-connected partners and open websites, then makes booking decisions based on the best match.
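The core idea—a server exposing named, described tools that a model can discover and then invoke—can be sketched in a few lines. This is a toy illustration of the pattern, not the real MCP SDK: the actual protocol runs over JSON-RPC, and all names here (`ToolServer`, `list_tools`, `call_tool`, `sheet_append`) are invented for the example.

```python
# Toy sketch of the MCP idea: a server registers named tools with
# descriptions and parameter schemas; a client (standing in for an LLM
# runtime) first discovers what exists, then calls a tool by name.

class ToolServer:
    def __init__(self):
        self._tools = {}

    def tool(self, name, description, params):
        """Register a function as a discoverable tool."""
        def decorator(fn):
            self._tools[name] = {"description": description,
                                 "params": params, "fn": fn}
            return fn
        return decorator

    def list_tools(self):
        # What the model sees: names, descriptions, and parameter
        # schemas -- never the implementation itself.
        return {n: {"description": t["description"], "params": t["params"]}
                for n, t in self._tools.items()}

    def call_tool(self, name, **kwargs):
        return self._tools[name]["fn"](**kwargs)


server = ToolServer()

@server.tool("sheet_append", "Append a row to a spreadsheet",
             {"sheet": "str", "row": "list[str]"})
def sheet_append(sheet, row):
    # A real server would call the Google Sheets API here.
    return {"sheet": sheet, "appended": row}

# Discovery first, then invocation -- the "plug in the back of the head":
print(sorted(server.list_tools()))  # ['sheet_append']
print(server.call_tool("sheet_append", sheet="trips", row=["Lisbon", "May"]))
```

The point of the split between `list_tools` and `call_tool` is that the model gains a capability without the host ever scripting *when* to use it—the schema is the contract, the model supplies the intent.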

They also explore how OpenAI’s adoption of Anthropic’s MCP standard signals industry convergence and maturity. While competition remains, shared protocols are forming the foundation for interoperable agent ecosystems.

From Deterministic Software to Agentic Autonomy

The duo dive into the philosophical shift from traditional software—where behaviour is scripted and predictable—to agentic software, where behaviour is constrained but not prescribed.

Key ideas:

  • Constraints > Instructions: Instead of dictating exact steps, developers will increasingly define safe zones for agents to operate in.

  • Chain-of-command metaphors: Kev wishes for “less clever” LLMs that don’t try to help unless told to—like well-trained soldiers that follow orders.

  • Governance lag: MCP may be ready, but the guardrails around it are not. Enterprise adoption remains slow due to security, risk, and compliance challenges.
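The "constraints over instructions" idea can be made concrete with a small sketch: instead of scripting each step, the developer declares a safe zone (an allowlist of tools plus per-call limits), and the agent may plan freely inside it while every action is policy-checked before it runs. All names here (`SafeZone`, `GuardedAgent`, the travel tools) are illustrative, not a real framework.

```python
# "Constraints > Instructions": the agent decides WHAT to do;
# the safe zone decides what it MAY do.

class PolicyViolation(Exception):
    pass

class SafeZone:
    def __init__(self, allowed_tools, max_spend):
        self.allowed_tools = set(allowed_tools)
        self.max_spend = max_spend

    def check(self, tool, spend=0):
        # Guardrails are evaluated before any side effect occurs.
        if tool not in self.allowed_tools:
            raise PolicyViolation(f"tool {tool!r} is outside the safe zone")
        if spend > self.max_spend:
            raise PolicyViolation(f"spend {spend} exceeds cap {self.max_spend}")

class GuardedAgent:
    def __init__(self, zone):
        self.zone = zone
        self.log = []  # audit trail for governance review

    def act(self, tool, spend=0):
        self.zone.check(tool, spend)
        self.log.append((tool, spend))
        return f"executed {tool}"


zone = SafeZone(allowed_tools={"search_flights", "hold_booking"}, max_spend=500)
agent = GuardedAgent(zone)

print(agent.act("search_flights"))           # executed search_flights
print(agent.act("hold_booking", spend=320))  # executed hold_booking
try:
    agent.act("confirm_payment", spend=320)  # not in the safe zone
except PolicyViolation as e:
    print("blocked:", e)
```

Note that nothing in the code prescribes an order of operations—only boundaries. That is also why the audit log matters: governance lag closes not by constraining models less, but by making what they did inspectable.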

Legacy Inertia vs. AI Startups

Rob paints a vivid picture of a government wanting to modernise with AI—but unable to answer its own call to action. Why? Legacy systems, buried data, and staff untrained in modern tooling. Even worse, many departments operate with outdated tech or rely on disappearing knowledge.

“It’s not a tech problem—it’s an infrastructure, skills, and leadership problem.”

They describe the CIO’s new pressure: historically enablers of board strategy, they’re now being asked to lead it, often without the tools or authority to do so. Meanwhile, startups can move quickly, building AI-native systems from scratch.

Media, Copyright, and the AI Deals No One Talks About

Rob reveals a wave of quietly signed licensing deals between OpenAI and major publishers (e.g., Guardian, Reuters, AP), giving LLMs access to premium training data. Some publishers are even launching AI-powered summary services, replacing traditional sub-editors with automated outputs reviewed by humans.

Kev draws a parallel with Apple’s iTunes era:

“Is this another walled garden moment? Are media companies giving away control for short-term reach?”

They also discuss the rise of AI crawlers scraping the open web, prompting Cloudflare’s AI Labyrinth, a defensive system that traps crawlers in endless loops—an early glimpse of AI vs. AI warfare.

Apple’s AI Vacuum: A Strategic Misstep?

Kev and Rob express real concern over Apple’s missed momentum:

  • Siri’s AI upgrade has been delayed until 2027.

  • Apple’s AI strategy is vague and slow compared to Google, Microsoft, Meta, and Amazon.

  • OpenAI, meanwhile, is expanding rapidly—positioned to dominate both enterprise (via Microsoft) and consumer (via iOS integration).

Rob sees disturbing echoes of Microsoft’s Windows Phone era: great tech, late to the party, overtaken by mobile-first upstarts. The risk? Apple becomes the next Nokia—unless it finds a partner or reinvents itself.

A Call to Think Differently

The episode closes with a warning: bolting AI onto legacy systems doesn’t work. Just as poor mobile UX often involved wrapping web views into clunky apps, bad AI UX involves adding LLMs as “asterisks” or sidebars to existing tools.

Instead, they urge:

  • Reimagine the experience from scratch—start with a greenfield AI-native use case.

  • Use experimentation to learn—then decide what should stay on rails and where agentic freedom can be safely applied.

  • Avoid “checkbox AI”—aim for meaningful change, not surface-level integrations.


Final Thought: The Gap Between “What’s Possible” and “What’s Practicable”

Episode 003 presents a sobering duality. On one hand, AI is leaping forward—visually, structurally, and strategically. On the other, large institutions are gridlocked by inertia, risk, and legacy systems. The message is clear: the future is here—but unevenly distributed.

Now is the time for leaders, especially CIOs, to rethink how they adopt, govern, and integrate AI. The cost of delay may be irrelevance.

Are you ready to accelerate your digital transformation?

Work With Us