Feb 12 • 07:34 UTC 🇮🇳 India Aaj Tak (Hindi)

Will AI creators disappear from social media? The government's new rule will change everything

India's government has introduced new regulations requiring explicit labeling of AI-generated content on social media platforms.

The Indian government has introduced new rules for AI-generated content, mandating that any photos, videos, or audio produced with AI tools be clearly labeled. Social media companies have until February 20 to comply. Platforms must verify whether users are truthful about the origin of uploaded content, and they must remove misleading or illegal AI content within three hours of identification. In short, deepfakes and fraudulent videos will no longer be tolerated.

With the rules set to take effect on February 20, 2026, major platforms such as Instagram, YouTube, and Facebook will need to adapt their operations significantly. Until now, AI-generated content often circulated without clear identification, leaving users unsure whether material was authentic. The new regulations require platforms to collect information about the extent of AI use in content at the time of upload. Relying on user declarations alone will no longer suffice; platforms must also employ technical measures to verify the content's provenance.

The challenge lies in the growing sophistication of deepfake technology, which can produce visuals so realistic that even automated systems struggle to distinguish them from genuine footage. This raises doubts about how effective the new measures will be in practice. If detection tools cannot keep pace, platforms may find it difficult to comply fully with the government's requirements, potentially leading to more disputes over what counts as misleading content and how quickly it can be identified and removed.
