The Next Thing Now Podcast

The Reality of AI in Critical National Infrastructure

Written by Rob Borley | Mar 23, 2026 6:07:01 PM

Can AI be trusted where failure isn’t an option?

There is a version of the AI conversation that most of us are familiar with by now. It is fast-moving, optimistic, and often framed through examples that feel immediate and accessible. Write this. Summarise that. Generate something useful in seconds. The value is obvious because the feedback is instant.

Spend time inside critical national infrastructure and the tone changes quickly.

Here, systems do not exist to demonstrate what is possible. They exist to deliver consistent, dependable outcomes over long periods of time. They sit behind services that people rely on every day, often without ever thinking about them. Water, energy, transport, communications. The expectation is not innovation for its own sake. It is continuity. Stability. Accountability.

That context reshapes the conversation around AI in a fundamental way.

The tension between speed and safety sits at the centre of it. Most of the current AI narrative is built around acceleration. Faster development cycles. Faster outputs. Faster decision making. Entire workflows compressed into minutes. That promise is real, and in many environments it is already delivering value.

But infrastructure is not optimised for speed. It is optimised for resilience.

The question is not how quickly something can be built. It is how confidently it can be relied upon. That changes the criteria for adoption. A model that performs well in a controlled setting is only the starting point. What matters is how it behaves under pressure, how it integrates with existing systems, and how it responds when conditions deviate from the expected.

As Simon put it during our conversation, large organisations are conditioned to ask a simple question early in the process: “So what?”

What does this actually change?
What does it improve?
What happens when it fails?

That line of thinking slows things down, but it also sharpens them. It forces a level of clarity that is often missing from more speculative conversations about AI. It introduces discipline into environments where the cost of being wrong is measured in more than inconvenience.

From the outside, this can look like hesitation. Particularly in the UK, where there is a persistent narrative that we are slower to adopt new technology, more constrained by regulation, more cautious in our approach.

There is some truth in that. But it misses the point.

The UK operates some of the most complex and heavily regulated infrastructure systems in the world. Those systems are expected to function reliably, at scale, under constant scrutiny. The organisations that run them are accountable not just to shareholders, but to regulators, governments, and the public.

In that context, caution is not a weakness. It is a requirement.

What appears to be slow progress is often deliberate progress. The work is happening, but it is happening in a way that reflects the environment. New capabilities are tested in controlled settings. Introduced into specific parts of the system where the impact can be understood. Evaluated over time before being expanded.

This is not the same pattern we see in consumer technology, and it is not supposed to be.

There is also a deeper technical reality underpinning this. Much of what we describe as AI today is probabilistic. It produces outputs that are highly plausible, often useful, but not guaranteed to be correct. In many domains, that is acceptable. In some, it is transformative.

In infrastructure, there are limits to where that kind of behaviour can be tolerated.

There are parts of the system where approximation works. Supporting analysis, identifying patterns, assisting human decision making. In those areas, AI can add immediate value, particularly when tasks are structured in a way that aligns with how these models operate.

There are other parts where precision is essential. Where outcomes must be consistent, repeatable, and explainable. Where the margin for error is effectively zero. In those areas, the role of AI is far less clear, and the bar for adoption is significantly higher.

Understanding that distinction is not straightforward, and it is not static. It requires ongoing judgement, informed by both technical capability and operational context.

Which brings this back to leadership.

The challenge facing leaders in these environments is not whether to adopt AI. That question has already been answered. The challenge is how to integrate it in a way that aligns with the realities of the system they are responsible for.

That involves more than selecting tools or launching pilots. It requires a clear view of where value can be created without introducing unacceptable risk. It requires the ability to navigate regulatory expectations that are still evolving. It requires building confidence within teams that are being asked to engage with technology that behaves differently from anything they have used before.

It also requires resisting the pressure to move at a pace that does not fit the environment.

There is a constant pull from the outside world. Competitors appear to be moving quickly. New capabilities are announced almost daily. The fear of being left behind is real.

But in infrastructure, moving too quickly can create its own form of risk. Systems become harder to understand. Dependencies increase. The ability to explain and justify decisions weakens.

Leadership, in this context, is about holding that line. Creating space for progress without compromising the principles that underpin the system.

It also means recognising that the role of people is not diminishing; it is changing.

As more of the execution shifts into automated systems, the importance of intent, judgement, and accountability increases. Decisions still need to be made about what should happen, not just how it happens. Outcomes still need to be owned.

AI changes the shape of the work, but it does not remove the responsibility for it.

The reality of AI in critical national infrastructure is not defined by breakthroughs or sudden transformation. It is defined by a series of careful decisions, each made within the constraints of a system that cannot afford to fail.

That may not make for the most exciting narrative.

But it is the one that matters.