The 2025 Paris AI Action Summit was meant to be a moment of global unity — a declaration that artificial intelligence would be developed openly, transparently, ethically, and safely. Instead, it exposed a widening fault line in global AI governance. The US and the UK refused to sign the summit’s declaration, citing concerns about regulatory overreach and innovation constraints. Their absence raises a troubling question: are we inching towards an AI Cold War, one that threatens to fragment the global AI landscape at a time when international cooperation is needed most?
Contrasting positions
The decision by Washington and London to sit out the agreement stands in stark contrast to the positions of the EU, India, China, and dozens of other nations that endorsed the Paris declaration. French President Emmanuel Macron and Prime Minister Narendra Modi, the co-chairs of the summit, called for a balanced approach — one that fosters innovation while ensuring ethical safeguards. In contrast, US officials, led by Vice President JD Vance, have argued that excessive oversight would cripple AI’s transformative potential, warning against the kind of bureaucracy that has slowed down technological progress in Europe.
This divide is not just philosophical; it is deeply strategic. The US and UK see AI as a domain where dominance translates directly into geopolitical power. Washington has increasingly framed AI through a national security lens, restricting exports of advanced AI chips to China and warning against European-style regulations that could limit American tech firms’ global competitiveness. The UK, similarly, has prioritised a light-touch regulatory framework, hoping to position itself as a hub for AI investment rather than a regulator of its risks.
Meanwhile, the rest of the world is moving in a different direction. The European Union’s AI Act, expected to become a global benchmark, imposes strict rules on high-risk AI systems, requiring them to meet transparency and accountability standards. China, for its part, has taken an aggressive stance on AI regulation — not necessarily for the same reasons as Europe but to maintain control over the development of the technology. India has sought to balance open innovation with national security considerations, emphasising AI as a public good.
If the world’s leading AI nations refuse to collaborate on shared safety standards, ethical guidelines, and global frameworks, we risk an AI arms race where competition overrides the common good.
There is a better way forward. Instead of retreating into nationalist AI policies, the US and UK should engage with the global community to shape AI governance frameworks that are both pragmatic and enforceable.
The world does not need a single, rigid AI regulatory regime. But it does need interoperability — common safety standards, shared transparency requirements, and mechanisms for preventing AI-driven harm, whether in the form of misinformation, biased algorithms, or autonomous weapons. This could be achieved by building a networked architecture for AI governance, in which countries align on key principles while allowing for localised policy adaptations. Such an approach would allow innovation to flourish while ensuring AI does not become a destabilising force.
Additionally, the US and UK should recognise that AI’s greatest challenges, from misinformation to autonomous weapons, cannot be solved in isolation. Meeting them requires cross-border cooperation, joint research initiatives, and public-private partnerships that leverage AI for societal good. AI should be treated as a global public good, not a zero-sum game.
The US and UK may believe they are acting in their national interest by refusing to sign the Paris declaration. But in the long run, they risk isolating themselves from the very global AI ecosystem they seek to lead. AI’s future should be shaped by international cooperation that ensures safety, equity, and progress for all.
The writer is a tech entrepreneur and former Managing Director of CGI India