Feb 10 • 11:51 UTC 🇮🇳 India Aaj Tak (Hindi)

AI-generated videos will no longer be spared: government introduces strict rules, fake content to be removed within 3 hours

The Indian government has announced stringent new rules for content generated by artificial intelligence, requiring social media platforms and digital companies to clearly label such content.

The Indian government has officially notified new regulations for content created using artificial intelligence, a move that will directly affect social media platforms and digital companies. Under the new rules, which take effect on February 20, 2026, any content produced with AI tools must be clearly labeled as such. The measure responds to the growing prevalence of deepfake videos, counterfeit images, and fake audio, which have made it difficult for ordinary users to distinguish real information from false.

The government's rationale is grounded in rising incidents of misinformation, defamation, and fraud involving AI-generated content. The rules define 'synthetic content' as any audio, video, photo, or visual created by a computer or algorithm that appears convincingly real; by fixing this definition, the government aims to address the threats posed by misleading material. Content designed to portray people or events in a way that misleads viewers will therefore fall under the new regulations.

However, not all content manipulation falls within this framework. Basic editing tasks such as color correction, translation, or document preparation are exempt unless they create misleading or false records. This balanced approach reflects the government's aim of mitigating risk while permitting legitimate content creation and sharing, striving for a safer digital environment without stifling creativity.
