Mumbai: The Ministry of Electronics and Information Technology (MeitY) has released draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, proposing mandatory labelling of all AI-generated content across major social media and generative AI platforms. The move marks India’s first comprehensive attempt to regulate deepfakes and synthetic media amid rising concerns over AI misuse.
The draft rules require all synthetic content to carry clear, visible labels, with watermarks or metadata identifiers permanently embedded. For images and videos, the label must cover at least 10% of the display area; for audio, identifiers must be present during the first 10% of playback.
Platforms will also be barred from allowing users to remove or suppress these identifiers, making it harder to disguise AI-generated material.
MeitY, which is inviting public feedback until November 6, said the move follows growing public concern over the potential for AI-driven misinformation, impersonation, and election manipulation.
“With the increasing availability of generative AI tools and the resulting proliferation of synthetically generated information (commonly known as deepfakes), the potential for misuse of such technologies to cause user harm, spread misinformation, manipulate elections, or impersonate individuals has grown significantly,” the ministry said in its statement.
The draft introduces, for the first time, a legal definition of “synthetically generated information” as content “artificially or algorithmically created, generated, modified or altered using a computer resource in a manner that appears reasonably authentic or true.” This move effectively brings AI-generated media under the same due diligence and takedown obligations that apply to other unlawful online information.
Platform Responsibilities and Legal Implications
Under the proposed framework, significant social media intermediaries must require users to declare whether uploaded content is synthetically generated before publication and use automated systems to verify such declarations.
All synthetic or declared AI content must include visible labels or notices, allowing users to differentiate between authentic and manipulated material.
Non-compliant platforms risk losing safe-harbour protections under Section 79 of the IT Act, 2000, which shields intermediaries from liability for user-generated content.
However, MeitY clarified that safe-harbour provisions will continue for platforms that remove synthetic content through grievance redressal mechanisms.
The ministry said the proposed amendments aim to “promote user awareness, enhance traceability, and ensure accountability while maintaining an enabling environment for innovation in AI-driven technologies.” It further clarified that the rules apply only to publicly available content, not private or unpublished media.
Commenting on the draft, Dhruv Garg, Partner at the Indian Governance & Policy Project, noted, “It is interesting to note that India has implicitly chosen to regulate the generative AI platforms as intermediaries giving them plausible safe harbour protections. While some other jurisdictions have already established regulations around disclosures and labelling, it is essential that these requirements balance transparency with need for scalability, innovation and creative expression.”
The draft rules leave some ambiguity around AI-generated text, as they provide labelling guidance only for visual and audio content, not for text-based content created by chatbots or writing tools.
Globally, similar labelling requirements are being rolled out. The EU’s AI Act, whose transparency obligations take effect in 2026, mandates disclosure for synthetic media but offers limited specifics for text, while China’s rules on labelling AI-generated content, in force since September 2025, require explicit visible labels such as “AI-generated” on all AI-created content.
With these draft amendments, India joins the growing list of nations taking early steps to regulate the fast-evolving landscape of AI-generated and synthetic content, aiming to balance innovation, accountability, and user protection.