India gives tech platforms 9 days to label all AI content and remove deepfakes within 3 hours


India just threw down a challenge that might be impossible to meet. Starting February 20, social media companies operating in the country must label every piece of AI-generated content and take down illegal deepfakes within three hours. The technology to do this properly doesn’t exist yet.

The rules, announced Tuesday, put pressure on platforms like Meta, Google, and X to deploy systems that catch and mark AI-generated images, videos, and audio before users see them. Companies must also stop people from removing or hiding these labels. Even with billions in resources, these tech giants struggle to make their current detection tools work reliably.

Most big platforms already use a standard called C2PA, which embeds tamper-evident metadata inside files to show how they were made. It’s like a nutrition label for digital content. When it works, you can see if a photo came from a real camera or an AI generator. Facebook, Instagram, YouTube, and LinkedIn try to flag this content, but the labels are easy to miss, and plenty of fake material slips through.
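To make that concrete, here is a minimal sketch, not drawn from the article, of what checking for that embedded metadata can look like. C2PA stores its manifest as JUMBF boxes inside JPEG APP11 segments, so this snippet just walks a JPEG's segments and reports whether such a segment appears to be present; it does not parse the manifest or verify its cryptographic signature, which real tooling (such as the official C2PA SDKs) would do.

```python
"""Sketch: detect whether a JPEG appears to carry an embedded C2PA manifest.

Assumption (not from the article): C2PA manifests ride in JPEG APP11
(0xFFEB) segments as JUMBF boxes, so spotting an APP11 segment containing
"c2pa"/"jumb" bytes is a presence check only, not verification.
"""

import struct
import sys

APP11 = 0xFFEB  # JPEG marker commonly used to carry JUMBF/C2PA data
SOS = 0xFFDA    # start-of-scan: compressed image data follows


def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()

    if not data.startswith(b"\xff\xd8"):   # not a JPEG (missing SOI marker)
        return False

    pos = 2
    while pos + 4 <= len(data):
        if data[pos] != 0xFF:              # lost sync with the segment stream
            break
        marker = (data[pos] << 8) | data[pos + 1]
        if marker == SOS:                  # metadata segments end here
            break
        length = struct.unpack(">H", data[pos + 2:pos + 4])[0]
        payload = data[pos + 4:pos + 2 + length]
        if marker == APP11 and (b"c2pa" in payload or b"jumb" in payload):
            return True
        pos += 2 + length                  # skip marker bytes plus segment body
    return False


if __name__ == "__main__":
    for name in sys.argv[1:]:
        status = "C2PA manifest found" if has_c2pa_manifest(name) else "no manifest"
        print(f"{name}: {status}")
```

Because the manifest lives in ordinary file metadata, it is also easy to lose: re-encoding or stripping metadata on upload discards it, which is exactly the weakness described below.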

The system has major holes. Open-source AI tools and apps that make fake nude photos often skip the labeling process completely. Even when labels exist, many sites strip the metadata when files are uploaded. C2PA’s supporters have spent years saying the technology just needs wider adoption to succeed. India is about to test that claim with 500 million social media users.

Why India’s market power changes everything

India has 481 million Instagram users, 403 million on Facebook, 500 million watching YouTube, and 213 million using Snapchat. X considers India its third-biggest market. When a country this large makes new rules, global tech companies typically adjust their systems everywhere, not just in one place.

This push comes after India spent months dealing with a deepfake crisis. Cryptopolitan reported last October that Bollywood actors Abhishek Bachchan and Aishwarya Rai Bachchan sued over fake videos using their faces, seeking nearly half a million dollars in damages. The couple claimed YouTube’s AI trainers grabbed public content without permission to train systems that later created fake media with their images. Cases like these, along with viral fake videos of actress Rashmika Mandanna, pushed officials to act.

The timing lines up with India’s AI ambitions. Google is building a $15 billion AI hub in Visakhapatnam that will become the company’s largest facility outside America. The site will have gigawatt-scale computing power and is set to open in July 2028. With that kind of AI infrastructure arriving, regulators want content safety rules in place first.

Critics warn of “rapid fire censorship”

The tight deadlines worry free speech advocates. The Internet Freedom Foundation says the three-hour takedown window will force companies to use automated systems that delete too much content by mistake. They call it creating “rapid fire censors” because there’s no time for humans to review reports properly.

Platforms like X, which haven’t set up any AI labeling yet, now have just nine days to build entire systems from scratch. Meta, Google, and X all declined to comment. Adobe, a founding member of the C2PA coalition, stayed silent too.

Officials writing the rules seem to know current technology isn’t ready. The requirements say platforms should use detection methods “to the extent technically feasible” – legal language that admits perfection isn’t expected. India’s leaders believe pressure will drive innovation. They’re betting that when you force tech companies to either build better systems or lose access to hundreds of millions of users, they’ll figure it out fast.

Whether better AI detection technology actually exists to be built, or if India just ordered companies to deliver something that can’t be made yet, remains to be seen. We’ll find out in nine days.
