Contact Information

37 Westminster Buildings, Theatre Square,
Nottingham, NG1 6LG

We Are Available 24/ 7. Call Now.

Stay informed with free updates

When the Chernobyl nuclear power plant exploded in 1986 it was a catastrophe for those who lived nearby in northern Ukraine. But the accident was also a disaster for a global industry pushing nuclear energy as the technology of the future. The net number of nuclear reactors has pretty much flatlined since as it was seen as unsafe. What would happen today if the AI industry suffered an equivalent accident?

That question was posed on the sidelines of this week’s AI Action Summit in Paris by Stuart Russell, a professor of computer science at the University of California, Berkeley. His answer was that it was a fallacy to believe there has to be a trade-off between safety and innovation. So those most excited by the promise of AI technology should still proceed carefully. “You cannot have innovation without safety,” he said. 

Russell’s warning was echoed by some other AI experts in Paris. “We have to have minimum safety standards agreed globally. We need to have these in place before we have a major disaster,” Wendy Hall, director of the Web Science Institute at the University of Southampton, told me. 

But such warnings were mostly on the margins, as the summit’s governmental delegates milled around the cavernous Grand Palais. In a punchy speech, JD Vance emphasised the national security imperative of leading in AI. America’s vice-president argued that the technology would make us “more productive, more prosperous, and more free”. “The AI future will not be won by hand-wringing about safety,” he said.

Whereas the first international AI summit at Bletchley Park in Britain in 2023 focused almost entirely — most said excessively — on safety issues, the priority in Paris was action as President Emmanuel Macron trumpeted big investments in the French tech industry. “The process that was started in Bletchley, which was I think really amazing, was guillotined here,” Max Tegmark, president of the Future of Life Institute, which co-hosted a fringe event on safety, told me.

What most concerns safety campaigners is the speed at which the technology is developing and the dynamics of the corporate — and geopolitical — race to achieve artificial general intelligence, when computers might match humans across all cognitive tasks. Several leading AI research companies, including OpenAI, Google DeepMind, Anthropic and China’s DeepSeek, have an explicit mission to attain AGI. 

Later in the week, Dario Amodei, co-founder and chief executive of Anthropic, predicted that AGI would most likely be achieved in 2026 or 2027. “The exponential can catch us by surprise,” he said. 

Alongside him, Demis Hassabis, co-founder and chief executive of Google DeepMind, was more cautious, forecasting a 50 per cent probability of achieving AGI within five years. “I would not be shocked if it was shorter. I would be shocked if it was longer than 10 years,” he said.

Critics of the safety campaigners portray them as science fiction fantasists who believe that the creation of an artificial superintelligence will result in human extinction: hand-wringers standing like latter-day Luddites in the way of progress. But safety experts are concerned by the damage that can be wrought by the extremely powerful AI systems that exist today and by the danger of massive AI-enabled cyber- or bio-weapons attacks. Even leading researchers admit they do not fully understand how their models work, creating security and privacy concerns. 

A research paper on sleeper agents from Anthropic last year found that some foundation models could trick humans into believing they were operating safely. For example, models that were trained to write secure code in 2023 could insert exploitable code when the year was changed to 2024. Such backdoor behaviour was not detected by Anthropic’s standard safety techniques. The possibility of an algorithmic Manchurian candidate lurking in China’s DeepSeek model has already led to it being banned by several countries.

Tegmark is optimistic, though, that both AI companies and governments will see the overwhelming self-interest in re-prioritising safety. Neither the US, China or anyone else wants AI systems out of control. “AI safety is a global public good,” Xue Lan, dean of the Institute for AI International Governance at Tsinghua University in Beijing, told the safety event.

In the race to exploit the full potential of AI, the best motto for the industry might be that of the US Navy Seals, not noted for much hand-wringing. “Slow is smooth, and smooth is fast.”

john.thornhill@ft.com

Source link


administrator

Leave a Reply

Your email address will not be published. Required fields are marked *