Sir Keir Starmer is seeking to strengthen diplomatic ties with Donald Trump’s administration by shifting the UK’s focus on artificial intelligence towards security co-operation rather than a “woke” emphasis on safety concerns.
Tech secretary Peter Kyle announced on Friday that the UK’s AI Safety Institute, established just 15 months ago, would be renamed the AI Security Institute.
The body, which was given a £50mn budget, will no longer focus on risks associated with bias and freedom of speech, but on “advancing our understanding of the most serious risks posed by the technology”.
Earlier this week, the UK joined the US at the AI Summit in Paris in refusing to sign a joint communique — approved by around 60 states, including France, Germany, India and China — that pledged to ensure “AI is open, inclusive, transparent, ethical, safe, secure and trustworthy”.
Officials said the recent moves on AI were part of a broader strategy at a time when the Trump administration is engaged in a trade war against China and the EU. Some believe aligning with US priorities on AI could help the UK avoid being targeted in other areas.
At the AI summit in Paris this week, US vice-president JD Vance warned against “excessive” AI regulation and said the country would build systems “free from ideological bias”. Meanwhile, Trump confidant Elon Musk said at an event in Dubai on Thursday he was concerned that “if, hypothetically, AI is designed to think DEI at all costs, it could think too many men are in power and just execute them”.

The UK’s new ambassador to the US, Peter Mandelson, said his “signature policy” would be fostering collaboration between the two countries’ tech sectors, to ensure both could secure a “logical advantage” over China.
“It would be disastrous if we in the west lost the advanced technology race to China and China were to gain a technological stranglehold,” said Mandelson, adding that the “backbone” of the special relationship between the US and UK lies in its defence, intelligence and security partnerships.
Britain’s decision to move closer to the US on AI has been criticised by tech experts and civil society groups who argue the UK is overestimating what it has to offer, while isolating itself from European allies on tech regulation.
“The US is engaged in AI imperialism,” said Herman Narula, chief executive of UK-based AI company Improbable. “The thing that is of greatest interest to them is access to our market. What else do they need us for?”
For the UK to present an attractive proposition to the US, it will need to make serious concessions, including laxer rules around the inputs used to train AI models and a less stringent approach to GDPR, said Narula.
At the AI summit, people briefed on the US decision not to sign the joint communique said it did not make a clear enough distinction between use of the technology by democratic and authoritarian regimes — and pointed to the fact that China was a signatory.
One Labour MP described the UK’s decision not to sign the declaration as a “low-cost way to send a clear geopolitical signal”, adding that they believed it was “exactly the right move”.
People close to the UK’s decision said the move had been over-interpreted, arguing it was more the result of the limited efforts the summit’s French hosts made to secure signatories.
The UK government said the declaration “didn’t provide enough practical clarity on global governance, nor sufficiently address harder questions around national security and the challenge AI poses to it — a critical focus for the UK”.
When the AI Safety Institute was first launched last year, then prime minister Rishi Sunak said it would explore “all the risks, from social harms like bias and misinformation, to the most unlikely but extreme risk, such as humanity losing control of AI completely”.
Since then, Starmer has held off publishing the government’s AI bill pending greater clarity on the US administration’s priorities, according to people briefed on the matter. The legislation would turn voluntary agreements between the AISI and companies including Meta, Amazon and OpenAI on pre-market testing of models into legally binding obligations.
Gregory C. Allen, director of the Wadhwani AI Center at the Center for Strategic and International Studies, said that “safety is associated with censorship on social media platforms because it was the safety teams of some of these platforms that were responsible for the decision to remove Donald Trump from major platforms”.
Allen said he would not be surprised if the US changed the name of its own AISI in the near future. The body has so far struggled to hire staff amid a backdrop of profound political uncertainty. Last week, it emerged that the institute’s inaugural director, Elizabeth Kelly, was standing down from her role.
Jakob Mökander, director of science and tech policy at the Tony Blair Institute, said the UK’s AISI was the “best funded in the world”, so if the US continued to collaborate with the UK, it could in effect “have an AI Safety Institute but send all of its models to the UK for testing”.
Lord Peter Ricketts, former UK national security adviser and permanent secretary at the Foreign Office, expressed scepticism that pursuing collaboration on AI would be a fruitful strand of diplomacy.
“The US AI ecosystem is so vast that any UK contribution could only make a marginal contribution and part of that would be our convening power,” he said. “If we align with the US and put ourselves at odds with the EU that will surely weaken our ability to convene — and possibly damage the reset [with the EU].”
Additional reporting by Chloe Cornish in Dubai