
Curbing Deepfakes and Digital Harm: India’s New Framework for Regulating Synthetic Media

Prelims: (Polity + Governance + Science & Technology + CA)
Mains: (GS 2 – Governance, Fundamental Rights, Cyber Law; GS 3 – Science & Technology, Internal Security)

Why in News?

  • The Union Government has notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, effective February 20, 2026.
  • The amendments aim to strengthen regulation of AI-generated (synthetic) content and drastically reduce takedown timelines for unlawful material.
  • They seek to curb the spread of non-consensual deepfakes, intimate imagery, and unlawful content, while reinforcing platform accountability under the IT Act, 2000.

Background and Context: Regulating AI in the Digital Public Sphere

Advances in generative artificial intelligence have dramatically lowered the cost and skill required to create hyper-realistic audio, video, and images. While these technologies enable creativity and innovation, they have also fuelled a surge in deepfakes, misinformation, impersonation, and non-consensual intimate imagery.

Globally, governments are grappling with how to regulate synthetic media without undermining free speech and innovation. The European Union’s AI Act, China’s deepfake labelling rules, and emerging US policy debates reflect a shared concern over the societal harms of unregulated AI-generated content.

In India, the challenge is compounded by the scale of digital platforms, rapid virality, linguistic diversity, and the vulnerability of individuals—especially women and public figures—to online abuse. Existing takedown timelines under the IT Rules were increasingly seen as inadequate to prevent irreversible harm once content goes viral.

The February 2026 amendments mark a shift from reactive content moderation to proactive digital governance, embedding accountability, transparency, and rapid response into India’s regulatory framework.

Key Amendments at a Glance

1. Sharp Reduction in Removal Timelines

For Court/Government-declared illegal content:

  • Takedown timeline reduced to 3 hours (earlier 24–36 hours).

For non-consensual intimate imagery and deepfakes:

  • Takedown timeline reduced to 2 hours (earlier 24 hours).

For other unlawful content:

  • Takedown timeline reduced from 36 hours to 3 hours.

Rationale: Earlier timelines were seen as ineffective in preventing virality and irreversible reputational harm. The government argues that major platforms possess sufficient technical capacity for faster removal.
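For illustration only, the tiered timelines above can be modelled as a simple lookup. The category labels and function below are hypothetical conveniences for this sketch; the rules themselves do not define any such codes or API.

```python
from datetime import timedelta

# Hypothetical category labels mapping to the amended removal deadlines.
# Comments note the earlier timelines as stated in the amendments.
TAKEDOWN_DEADLINES = {
    "court_or_government_declared_illegal": timedelta(hours=3),      # earlier 24-36 hours
    "non_consensual_intimate_or_deepfake": timedelta(hours=2),       # earlier 24 hours
    "other_unlawful_content": timedelta(hours=3),                    # earlier 36 hours
}

def removal_deadline(category: str) -> timedelta:
    """Return the maximum time a platform has to act for a given category."""
    try:
        return TAKEDOWN_DEADLINES[category]
    except KeyError:
        raise ValueError(f"Unknown content category: {category}")

print(removal_deadline("non_consensual_intimate_or_deepfake"))  # 2:00:00
```

The point of the table is simply that the deepfake/intimate-imagery tier carries the shortest deadline, reflecting the irreversibility of harm once such content spreads.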

Concerns:

  • Determining “illegality” within 2–3 hours is operationally difficult.
  • Risk of over-censorship and precautionary takedowns.
  • Increased compliance burden for intermediaries.

2. Mandatory Labelling of AI-Generated Content

Legal definition of “Synthetically Generated Information (SGI)”:
Audio, visual, or audio-visual content artificially created, generated, modified, or altered using a computer resource in a way that makes it appear real or indistinguishable from authentic events or persons.

Key features:

  • AI-generated imagery must be labelled “prominently.”
  • The earlier proposal requiring 10% of image space to carry the label has been diluted.
  • Platforms must:
    • Seek user disclosure for AI-generated content.
    • Proactively label content if disclosure is absent.
    • Remove non-consensual deepfakes.

Exclusions: Routine editing and quality-enhancing tools (e.g., smartphone touch-ups) are excluded—narrowing the scope from the draft October 2025 version.

Safe Harbour and Intermediary Liability

What is Safe Harbour?

Under Section 79 of the IT Act, 2000, intermediaries are protected from liability for user-generated content, provided they exercise “due diligence.”

Impact of the amendment:

  • If an intermediary knowingly permits, promotes, or fails to act against unlawful synthetic content, it may be deemed to have failed due diligence.
  • This may result in loss of safe harbour protection, significantly increasing regulatory pressure on platforms and altering the liability landscape of digital intermediaries.

Administrative Changes

The amendment partially rolls back an earlier rule that limited States to appointing only one authorised officer for issuing takedown orders.

States may now designate multiple authorised officers, addressing administrative needs of populous States and improving enforcement capacity.

Trigger Events: The Global Deepfake Crisis

The urgency of reform follows global controversies, including:

  • AI platforms generating non-consensual intimate images of women.
  • Deepfake political speeches and impersonations undermining democratic trust.
  • Manipulated audio-visual content misrepresenting real-world events.

These incidents raise serious concerns about:

  • Privacy and dignity
  • Gender-based online violence
  • Electoral integrity and public order

India’s amendments place the country within a broader international movement toward stricter AI governance and platform accountability.

Governance and Constitutional Dimensions

Article 19(1)(a) – Freedom of Speech:

Overbroad or rushed takedowns may chill legitimate expression. Short timelines increase the risk of defensive over-removal by platforms.

Article 21 – Right to Privacy and Dignity:

Faster removal of non-consensual deepfakes strengthens protection of individual dignity, bodily autonomy, and reputational rights.

Federal Implications:

  • Allowing multiple State officers enhances decentralised enforcement and operational flexibility.
  • The amendments reflect a constitutional balancing act between free expression and protection from digital harm.

Significance of the Amendments

1. Protecting Individual Dignity and Privacy

Rapid takedown of deepfakes and intimate imagery strengthens constitutional protections under Article 21.

2. Enhancing Platform Accountability

By linking compliance to safe harbour protection, the rules shift responsibility onto intermediaries to act decisively and transparently.

3. Curbing AI-Driven Misinformation

Mandatory labelling and rapid removal reduce the risk of synthetic media distorting public discourse, elections, and social trust.

4. Aligning India with Global AI Governance Trends

The amendments place India alongside the EU, China, and other jurisdictions adopting stricter norms on synthetic media.

5. Strengthening Cyber Governance Architecture

The reforms modernise India’s digital regulatory framework to keep pace with rapidly evolving AI technologies.

Challenges and Way Forward

Challenges

  • Determining illegality within 2–3 hours: Legal ambiguity and unclear law enforcement communications may hinder accurate and timely decisions.
  • Risk of over-censorship: Platforms may err on the side of removal, potentially undermining free speech and digital innovation.
  • Compliance burden on Big Tech and smaller platforms: Real-time moderation requires advanced AI tools and human review; smaller platforms may struggle disproportionately.
  • Verification mechanisms: Ensuring authenticity of user declarations and deploying “reasonable technical measures” without violating privacy.

Way Forward

  • Clearer illegality standards: Develop structured guidance and standardised digital takedown protocols for platforms. 
  • Independent oversight mechanism: Establish appellate or review authorities to check arbitrary or excessive takedowns.
  • Strengthening AI detection tools: Promote indigenous AI detection systems under India’s national AI mission.
  • Harmonisation with the Digital Personal Data Protection Act:
    Ensure consistency in privacy, consent, and data protection standards.
  • Capacity building for States: Train authorised officers in cyber law, AI governance, and digital evidence handling.

FAQs

1. What is “synthetically generated information” (SGI) under the new rules?

SGI refers to audio, visual, or audio-visual content artificially created or altered using computer resources in a way that makes it appear real or indistinguishable from authentic events or persons.

2. What are the new takedown timelines?

Court/government-declared illegal content must be removed within 3 hours, and non-consensual deepfakes or intimate imagery within 2 hours.

3. How do the amendments affect intermediary safe harbour?

If intermediaries fail to exercise due diligence in removing unlawful synthetic content, they may lose safe harbour protection under Section 79 of the IT Act.

4. Are all edited images considered AI-generated content?

No. Routine editing and quality-enhancing tools, such as smartphone touch-ups, are excluded from the definition of synthetically generated information.

5. Why are these amendments constitutionally significant?

They seek to balance freedom of speech (Article 19(1)(a)) with the right to privacy and dignity (Article 21) in the digital age.
