“The UK must be an AI maker, not just an AI taker,” pledged Keir Starmer at London Tech Week last month. As political speeches go on this side of the pond, it’s got legs. Not as many as “Alligator Alcatraz”, but catchy enough that it’s stuck with me since. But what does it actually mean to be an ‘AI maker’? And what, exactly, are the stakes of being anything else?
The term hints at something still largely absent from mainstream political discourse: that artificial intelligence is fast becoming the most important strategic asset of the 21st century.
AI has often been framed as the next general-purpose technology — in the lineage of fire, steam, electricity, and the internet. But this fails to tell the whole story. Fire doesn’t decide. Steam doesn’t act. Electricity doesn’t wield. AI does all three, and more. As for the internet, all evidence points to it not surviving the age of AI at all.
So while many still insist that AI is just another tool, I find the analogy lacking (and as Wilde might have put it, “The man who could call AI a tool should be compelled to use one.”). AI is not a product to be packaged and sold, nor a feature to be bolted onto existing systems.
Instead, AI is something else entirely: the substrate of the next century, a new layer of decision-making, perception, and power.
As seapower defined the 18th century, and airpower the 20th, the 21st will be defined by machinepower: the ability to scale and deploy artificial intelligence at speed.
In order to understand this, we need to stop thinking of adding AI to the system, and start thinking of AI as the system.
This is the realpolitik. This is policy.
When I was at the AI Summit in London this June — loitering near the main stage, half-listening, half-doomscrolling — I was waiting for the UK’s Minister for Defence Procurement and Industry to take the podium. I wanted to know how the government was adapting to the arrival of the Intelligence Age. When Maria Eagle stepped onto the stage, her message was clear-as-day doctrine.
“We cannot fail to pioneer and master AI,” she declared, citing I.J. Good’s warning that an ultra-intelligent machine could be humanity’s last invention. Her speech made it unequivocal: the threat isn’t AI itself; in a pre-war world, the real threat is falling behind.
While Dominic Cummings recalls being laughed at for such views not so long ago, the belief is no longer fringe in government. Just last week, Science and Technology Secretary Peter Kyle wrote to the UK's national institute for artificial intelligence to tell its bosses to refocus on defence and security.
In a letter, Kyle said boosting the UK's AI capabilities was "critical" to national security and should be at the core of the Alan Turing Institute's activities. Kyle even went so far as to suggest that the institute should overhaul its leadership team to reflect its "renewed purpose", and that any further government investment in the institute would depend on the "delivery of the vision" he had outlined in the letter.
Meanwhile, in May, the UK’s Strategic Defence Review placed AI at the centre of military planning. Drone swarms, uncrewed fleets, satellite analysis, predictive maintenance — it’s all being wired into Britain’s warfighting systems. Over £1 billion is being channelled into a new Defence AI Investment Fund. Entire divisions are being restructured around uncrewed systems and algorithmic command chains.
Whitehall, too, is shifting. Procurement reform, SME fast-tracks, a newly inclusive Defence Industrial Joint Council, all framed around one directive: AI.
Sovereignty has crept back into the mainstream political lexicon in recent years — over borders, laws, trade. But sovereignty over AI infrastructure — Sovereign AI — is a different matter entirely. The capacity to train, direct, and deploy AI is not something the UK currently controls. We rely mostly on American platforms, American chips, American clouds. For now, Britain is renting its future by the hour.
But, as the government is rightly pointing out, this simply will not do. In the Intelligence Age, power isn’t something you can negotiate. It’s something you build — in data centres and in chip fabs. Sovereignty today is measured in gigawatts and petaflops, in who owns the systems that increasingly do the deciding.
Across the world, models are being weaponised, chips hoarded like munitions, and open source access restricted in line with national interest. This is, quite clearly, an arms race. One not fought over land but over intelligence.
Earlier this year, a former Google engineer was charged with stealing AI chip secrets for Chinese firms, a crime that carries up to 175 years in prison and millions in fines. The case, part of the U.S. Disruptive Technology Strike Force, highlights just how high the geopolitical stakes between the world’s superpowers have become.
According to the Wall Street Journal, Chinese models like DeepSeek are being deployed across global institutions — from HSBC and Saudi Aramco to universities in South Africa and government projects in Japan.
Their edge? Good-enough performance at a fraction of the cost. U.S. firms may still dominate in raw capability, but the risk lies in China winning in distribution, locking in standards and embedding models across the Global South via its Global AI Governance Initiative (GAIGI).
And it’s not just the models. As the Oxford Internet Institute outlined in a recent paper, the real contest lies in compute: where the infrastructure is, who controls it, and who supplies the chips.
By these measures, most of the world is already out of the race. Just 32 countries host frontier-scale AI data centres. Only 24 can support training-grade compute — the kind needed to build advanced systems, not just run them. The rest are what the authors call “strategically dependent”: reliant on foreign cloud providers for access to infrastructure, subject to external pricing, legal frameworks, and service availability.
The United States leads decisively: 26 training-grade regions spread across 22 states. China is second, with at least 4 training regions and 22 inference zones. Europe barely registers. France has three hubs. Germany, four. The UK? Just three. On total footprint, U.S. companies operate 87 of the world’s known AI-ready regions, nearly two-thirds of the global total.
But geography alone doesn’t confer sovereignty. Control matters. And much of the UK’s cloud capacity — like that of Europe — is owned and operated by American firms: AWS, Microsoft, Google. The data centres may be here, but the decisions around them are made elsewhere.
Even India, long seen as a tech superpower, finds itself in a bind. Despite its software dominance, it lacked the infrastructure and investment to build its own foundation models — until DeepSeek launched.
Initial reports around DeepSeek suggested that a Chinese team, operating with fewer resources, had suddenly outperformed Western benchmarks. The effect was immediate. One Indian founder called it “the kick in the backside we needed.” Within weeks, the government secured access to over 18,000 GPUs and launched an ambitious national AI mission, funnelling public and private resources into sovereign model development.
Their reaction reveals the stakes. Because without compute, there is no model. No control. No say in the systems that will increasingly govern everything.
This is why facilities the size of cities are rising across the globe, not to serve users, but to secure strategic advantage. OpenAI is constructing a $60 billion data centre in Texas that is bigger than Central Park, and the centre even has its own natural gas plant. This isn’t an anomaly. Meta, too, has plans for a data centre half the size of Manhattan, and it’s not for Facebook.
Beneath all of this sits the real choke point: access to the chips themselves. Over 95% of accelerator-enabled cloud regions globally run on U.S.-owned semiconductors — almost all from a single supplier: NVIDIA, a company with a higher valuation than the entire UK economy. Most countries have no domestic chip capacity. They rely entirely on foreign goodwill for access to the most strategically important resource on Earth.
In Argentina, one of the country’s top AI labs operates from a converted university classroom. Its systems are cobbled together from second-hand GPUs and spare parts. “We are losing,” the lab’s director told Oxford researchers.
In Kenya, engineers building large language models for African languages begin work at 3 a.m., according to the NYT. Not out of ambition, but necessity — compute rented from U.S. clouds is cheapest when Americans are asleep. Their work determined by others’ circadian rhythms.
And to hammer this point home, Harvard’s Kempner Institute now has more compute than the entire African continent combined.
These are the contours of a new divide, what Oxford’s Vili Lehdonvirta calls the “compute curtain.” A line drawn between those who can build intelligence infrastructure and those who can’t.
For the takers, there is only compliance. They won’t shape the next generation of AI systems — they will simply live with the consequences.
For the makers, the stakes couldn't be higher. “There’s no question that the armies of the future are gonna be drones and robots,” said David Sacks, former PayPal Mafioso and now Trump’s AI (and crypto) czar. “And they’re gonna be AI-powered… I would define winning as the whole world consolidating around the American tech stack.” This is about more than selling products. It’s about exporting sovereignty and, increasingly, ideology.
“We want to make sure democratic AI wins over authoritarian AI,” said Sam Altman last year. More recently, Elon Musk tweeted how Grok would be used “to rewrite the entire corpus of human knowledge,” adding missing information, deleting errors and then retraining on the result. AI will become the feedback loop of editorial control, a masquerade of reality shaped by whoever dominates this new arena first.
According to Musk’s old PayPal colleague Peter Thiel, it was this fear that got Elon into politics in the first place. In a recent conversation, Thiel revealed that Musk worried ‘woke AI’ would follow them to Mars: that he’d become an AI taker, not a maker, and see his plans for interplanetary travel sabotaged (cue MechaHitler…).
A Great Game is afoot, and those who win it will be able to project force across land, sea, air and space in ways humanity has never witnessed before. It will not just be about who commands the skies or the seas, but who commands the systems that command everything else. In the 21st century, it is machinepower that will define all.
Back here in Britain, the government was ridiculed earlier this month for advertising a “Head of Ventures” role, tasked with securing AI sovereignty and brokering deals with the likes of Microsoft and OpenAI. Sounds important. So why was it dubbed “laughable”? The salary: £79,440.
Still, it looks like we’re finally acting on the realpolitik of it all. During that same London Tech Week speech, Starmer pledged £2 billion toward boosting the UK’s AI infrastructure. Peanuts compared to the US and China, but at least a signal of intent.
Also in June, the government named Rolls-Royce as preferred bidder to build the UK’s first small modular reactors — a crucial step toward meeting the energy demands the Intelligence Age will require. You can’t be an AI superpower and have energy dependency. Just like you can’t run a car without an engine. Let’s hope someone in Whitehall is piecing that together.
For all the government’s foibles, on this they are right. The UK must become an AI maker, not an AI taker. Whether they have the will — or the competence — remains to be seen. Because in the end, the AI makers won’t just win at AI. They’ll win full stop.