| Prelims: (Polity + Governance + Science & Technology + CA) Mains: (GS 2 – Governance, Fundamental Rights, Cyber Law; GS 3 – Science & Technology, Internal Security) |
Advances in generative artificial intelligence have dramatically lowered the cost and skill required to create hyper-realistic audio, video, and images. While these technologies enable creativity and innovation, they have also fueled a surge in deepfakes, misinformation, impersonation, and non-consensual intimate imagery.
Globally, governments are grappling with how to regulate synthetic media without undermining free speech and innovation. The European Union’s AI Act, China’s deepfake labelling rules, and emerging US policy debates reflect a shared concern over the societal harms of unregulated AI-generated content.
In India, the challenge is compounded by the scale of digital platforms, rapid virality, linguistic diversity, and the vulnerability of individuals—especially women and public figures—to online abuse. Existing takedown timelines under the IT Rules were increasingly seen as inadequate to prevent irreversible harm once content goes viral.
The February 2026 amendments mark a shift from reactive content moderation to proactive digital governance, embedding accountability, transparency, and rapid response into India’s regulatory framework.
For Court/Government-declared illegal content: Takedown timeline reduced from 36 hours to 3 hours.
For non-consensual intimate imagery and deepfakes: Content must be removed within 2 hours.
For other unlawful content:
Rationale: Earlier timelines were seen as ineffective in preventing virality and irreversible reputational harm. The government argues that major platforms possess sufficient technical capacity for faster removal.
Concerns:
Legal definition of “Synthetically Generated Information” (SGI):
Audio, visual, or audio-visual content artificially created, generated, modified, or altered using a computer resource in a way that makes it appear real or indistinguishable from authentic events or persons.
Key features:
Exclusions: Routine editing and quality-enhancing tools (e.g., smartphone touch-ups) are excluded, narrowing the scope compared with the October 2025 draft.
Under Section 79 of the IT Act, 2000, intermediaries are protected from liability for user-generated content, provided they exercise “due diligence.”
Impact of the amendment:
The amendment partially rolls back an earlier rule that limited States to appointing only one authorised officer for issuing takedown orders.
States may now designate multiple authorised officers, addressing administrative needs of populous States and improving enforcement capacity.
The urgency of reform follows global controversies, including:
These incidents raise serious concerns about:
India’s amendments place the country within a broader international movement toward stricter AI governance and platform accountability.
Article 19(1)(a) – Freedom of Speech:
Overbroad or rushed takedowns may chill legitimate expression. Short timelines increase the risk of defensive over-removal by platforms.
Article 21 – Right to Privacy and Dignity:
Faster removal of non-consensual deepfakes strengthens protection of individual dignity, bodily autonomy, and reputational rights.
Federal Implications:
1. Protecting Individual Dignity and Privacy
Rapid takedown of deepfakes and intimate imagery strengthens constitutional protections under Article 21.
2. Enhancing Platform Accountability
By linking compliance to safe harbour protection, the rules shift responsibility onto intermediaries to act decisively and transparently.
3. Curbing AI-Driven Misinformation
Mandatory labelling and rapid removal reduce the risk of synthetic media distorting public discourse, elections, and social trust.
4. Aligning India with Global AI Governance Trends
The amendments place India alongside the EU, China, and other jurisdictions adopting stricter norms on synthetic media.
5. Strengthening Cyber Governance Architecture
The reforms modernise India’s digital regulatory framework to keep pace with rapidly evolving AI technologies.
FAQs
1. What is “synthetically generated information” (SGI) under the new rules?
SGI refers to audio, visual, or audio-visual content artificially created or altered using computer resources in a way that makes it appear real or indistinguishable from authentic events or persons.
2. What are the new takedown timelines?
Court/government-declared illegal content must be removed within 3 hours, and non-consensual deepfakes or intimate imagery within 2 hours.
3. How do the amendments affect intermediary safe harbour?
If intermediaries fail to exercise due diligence in removing unlawful synthetic content, they may lose safe harbour protection under Section 79 of the IT Act.
4. Are all edited images considered AI-generated content?
No. Routine editing and quality-enhancing tools, such as smartphone touch-ups, are excluded from the definition of synthetically generated information.
5. Why are these amendments constitutionally significant?
They seek to balance freedom of speech (Article 19(1)(a)) with the right to privacy and dignity (Article 21) in the digital age.