The EU’s police agency Europol has warned that criminal organisations increasingly use artificial intelligence to stage attacks on behalf of hostile powers, posing an “unprecedented” security challenge to national governments.
In its report, published on Tuesday, on the threats posed by organised crime, Europol warned that criminals were becoming “proxies” in hybrid attacks such as sabotage allegedly committed by Russia and China.
“Geopolitical tensions have created a window for hybrid threat actors to exploit criminal networks as tools of interference,” Europol wrote, adding that this was having a destabilising effect on EU countries.
AI and other technologies such as blockchain or quantum computing have become a “catalyst” for crime, as they “drive criminal operations’ efficiency by amplifying their speed, reach, and sophistication”, the agency said.
“Cyber crime is evolving into a digital arms race,” said Europol executive director Catherine De Bolle. “AI-driven attacks are becoming more precise and devastating. Some attacks show a combination of motives of profit and destabilisation, as they are increasingly state aligned and ideologically motivated.”
Cyber attacks were increasingly politically motivated, targeting governments and critical infrastructure rather than businesses or individuals, and were carried out by criminal groups on behalf of state actors such as Russia. Criminal actors were exploiting vulnerabilities, such as those created by government contractors, to breach secure systems, Europol said.
“We observe a growing collaboration between criminal networks and actors orchestrating hybrid threats, exploiting geopolitical tensions and undermining our institutions,” De Bolle said, adding that crime was being “accelerated by AI” and other technologies.
The use of AI by criminal gangs is a new development since Europol’s last report, published in 2021, in which artificial intelligence was mentioned only once. The technology is being used to create sophisticated malware for cyber attacks, or to generate targeted messages that deceive victims, for instance by mimicking the voices, writing styles or images of family and friends for fake live videos.
“By creating highly realistic synthetic media, criminals are able to deceive victims, impersonate individuals and discredit or blackmail targets,” Europol wrote.
AI had also “accelerated” online fraud and helped criminals access personal data, for instance through automated phishing attacks. “The scale, variety, sophistication and reach of online fraud schemes is unprecedented,” Europol said, citing strategies such as luring consumers to invest in dubious schemes, for instance involving cryptocurrencies, or to pay money to purported romantic partners or victims of ongoing conflicts or humanitarian crises.
The organisation also warned that AI models were generating child abuse material, which was increasingly being shared online in private forums and chats. De Bolle has previously called for new EU rules to tackle online child abuse material by scanning encrypted messaging services such as WhatsApp, but countries including Germany have blocked the proposals over privacy concerns.
Europol also warned that more traditional cross-border crimes, such as the smuggling of migrants, drugs, firearms and waste, remained a serious issue, and that violence linked to organised crime was on the rise in some countries, citing attacks in Germany as an example.
It said that “parts of these criminal activities’ processes are shifting more to the online domain, particularly when it comes to recruitment, communication, marketing or retail, and relevant use cases of AI are on the horizon”.
Europol said the various criminal activities generated “immense” profits that were laundered through illicit means and cryptocurrencies but were very hard to recover, estimating that only 2 per cent of illicit proceeds were confiscated.