Daniel Thomas’s article “Social media platforms face crackdown on illegal content” (Report, March 17) highlights the stricter enforcement of content moderation rules adopted by Ofcom, the communications regulator, as a critical step.

However, human oversight alone won’t be enough. No team of moderators, no matter how well resourced, can match the scale and speed required to tackle illegal content. Only artificial intelligence can do that.

For years, policymakers and tech companies have debated the limits of content moderation. AI has removed some of those doubts, achieving 94 per cent accuracy with a 0.005 per cent false-positive rate in removing online terrorist propaganda.

AI is not a silver bullet. Concerns around bias, false positives, and over-reach are legitimate and must be addressed.

But the vast amount of training data available for moderating illegal content makes this one of AI’s most controlled and validated applications.

The real threat isn’t AI — it’s failing to use it.

Relying on human oversight alone means illegal content will outpace enforcement. AI is the future of online safety. The tools are ready, so let’s embrace them; there is no time to waste.

Angie Ma
Co-founder, Faculty, London EC1, UK


