Kevin Smith
4 min read • 16 July 2025
🔗 Originally published on LinkedIn
The internet. A vast, unfiltered expanse of information. Raw, chaotic and often perilous. But is all that about to change?
Large language models (LLMs) like OpenAI’s ChatGPT are emerging as intermediaries, filtering and presenting information in a curated manner. They are becoming “a condom for the internet”: a protective barrier between us and the unfiltered web.
This is a new era where AI intermediaries sanitise, streamline and mediate our interactions with the chaotic sprawl of the digital world. While this offers safety and convenience, it also raises uncomfortable questions about authenticity, control and the future of information access.
OpenAI’s ChatGPT is the poster child for this new paradigm. Instead of directing users to a multitude of sources, it synthesises information into concise, coherent responses.
This approach offers clear advantages: speed, coherence and the convenience of a single, trusted answer.
But this model isn’t without consequence. It introduces the risk of over-reliance on curated content and the potential erosion of information diversity.
If the internet becomes an API for language models, the raw weirdness, creativity and fringe perspectives that have always defined it risk being smoothed into oblivion.
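To make that shift concrete, here is a toy sketch (purely illustrative — these functions and the tiny index are invented for this example, not any real search or model API) contrasting the two flows: the old model returns links the user must click through, while an LLM-style layer synthesizes one answer from the same sources and the click never happens.

```python
def search_flow(query: str, index: dict[str, str]) -> list[str]:
    """Old model: return ranked links; the user clicks through (and sees ads)."""
    return [url for url, text in index.items() if query.lower() in text.lower()]

def llm_flow(query: str, index: dict[str, str]) -> str:
    """New model: synthesize a single answer from the same sources; no click-through."""
    hits = [text for text in index.values() if query.lower() in text.lower()]
    return " ".join(hits) if hits else "No answer."

# A two-page "web" for illustration.
index = {
    "https://a.example": "Paris is the capital of France.",
    "https://b.example": "France borders Spain.",
}

print(search_flow("france", index))  # two links the user would have visited
print(llm_flow("france", index))     # one synthesized answer; zero visits
```

The point of the sketch is the economics, not the code: the sources do the same work in both flows, but in the second they receive no traffic.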
Google’s core business revolves around search and advertising. Its traditional model encourages users to click through to various websites, generating billions in ad revenue. This ecosystem depends on users actively exploring the web.
Enter LLMs.
When AI provides direct, high-quality answers, the incentive to click disappears. This is not a minor disruption; this is a slow-motion car crash aimed squarely at Google’s golden goose.
When OpenAI turns on the ads and the sponsored content, and it will, that goose is cooked.
In response, Google has launched its Search Generative Experience (SGE), a bold attempt to integrate AI-generated summaries into search results. But there’s a catch. The more effective the AI is at delivering instant answers, the fewer ads get clicked. Google is left cannibalising its own business model just to remain relevant.
It’s a perfect illustration of the innovator’s dilemma. Either disrupt yourself, or someone else will.
Meta, traditionally a social media giant, isn’t sitting still either. Unlike Google, its revenue doesn’t depend on search; it depends on capturing attention inside closed ecosystems like Facebook, Instagram and WhatsApp.
But if users start spending more time interacting with AI agents, whether for discovery, planning, shopping or conversation, Meta’s attention monopoly comes under threat.
Zuckerberg’s response has been aggressive. Meta recently launched Meta Superintelligence Labs, funnelling billions into its AI ambitions. It invested $14 billion into Scale AI and appointed its CEO, Alexandr Wang, as Meta’s Chief AI Officer.
Meta’s goal? Develop AI systems that don’t just augment chat but aim for something bigger: general-purpose intelligence that can rival or surpass human capabilities.
To supercharge this effort, Meta has been poaching top talent from OpenAI with eye-watering compensation packages - some exceeding $100 million. High-profile defections include OpenAI researchers from the Zurich office, such as Lucas Beyer, Alexander Kolesnikov and Xiaohua Zhai.
This is not a shot across the bow. This is a full-blown AI talent war.
Meta’s hiring spree isn’t just a story about pay packets, eye-watering though they are; it’s a strategic move that signals where the power balance in tech is heading.
OpenAI’s leadership has expressed public concern about these defections. Chief Research Officer Mark Chen bluntly described it as “someone breaking into our home and stealing something.”
This isn’t melodrama. It’s a reflection of the existential stakes. AI is the foundation of the next computing platform. Whoever controls the best models, the best infrastructure and the best researchers effectively controls the future.
The lines between Big Tech companies, AI labs and infrastructure providers are collapsing into a single, brutal battleground.
If LLMs become the default interface for accessing knowledge, the ripple effects across the internet economy are profound: fewer clicks, fewer ad impressions and less traffic to the sites that produce the content in the first place.
This is an economic shockwave waiting to happen.
Both companies are racing to either own the AI layer or embed AI deeply within their existing ecosystems before someone else cuts them out.
The internet of today - the wild, sprawling, link-driven, ad-supported mess - is being slowly suffocated. Replaced by an AI-mediated layer that is clean, safe, efficient… and profoundly centralised.
If OpenAI (or any foundation model provider) becomes the de facto interface for the web, it essentially becomes the condom for the internet: a protective, interpretive layer between users and the raw web. One that filters, sanitises and curates.
You don’t Google. You don’t click. You just ask.
And whether that’s OpenAI’s GPT, Google’s Gemini, Meta’s Llama or Apple’s on-device personal agents, the end result is the same.
The internet is fracturing into distinct modes of access. While this sounds like hyperbole, it’s already starting to play out: Google is folding AI summaries into search, Meta is pouring billions into general-purpose agents, and foundation model providers are positioning themselves as the default interface to the web.
The open web as we knew it is being wrapped. Sheathed inside an AI abstraction layer. Safer, yes. More efficient, absolutely. But also more centralised, more controlled and less… real.
The internet condom is coming. Whether that’s a blessing, a curse or just an inevitability is a choice we probably don’t get to make anymore.
This article was originally written and published on LinkedIn by Kevin Smith, CTO and founder of Dootrix.