Feb 10 • 13:13 UTC 🇮🇳 India Aaj Tak (Hindi)

Post an AI-made video and you could face action: the government has tightened its grip; understand it in five points

India's government has implemented new regulations for AI-generated content to combat misinformation, holding both social media companies and users accountable for identifying and labeling such content.

In response to the proliferation of deepfake videos and false statements circulating on social media, the Indian government has introduced new regulations governing AI-generated content. These guidelines aim to mitigate the risks posed by misleading video and audio, which has become increasingly difficult for users to distinguish from real, human-made material. As AI tools expand content creation capabilities, they also open new avenues for deception and misinformation.

Under the new regulations, both social media platforms and individual users bear the responsibility of labeling AI-generated material. The government has mandated that a label covering at least 10% of any AI-generated visual must clearly denote its artificial origin. This measure seeks to ensure that viewers are adequately informed about the authenticity of content they encounter online, thereby curbing the spread of harmful misinformation, especially in politically charged environments.

The implications of these regulations extend beyond mere compliance; they represent a significant shift towards greater accountability in the digital landscape. As false narratives can swiftly influence public opinion and incite unrest, the government's proactive stance underscores the vital need to safeguard public discourse against the threats posed by evolving AI technologies. The emphasis on labeling and rapid action against fake content aims to restore trust in information shared on social media, fostering a safer online ecosystem for users across India.
