New Delhi: The Centre has formally tightened regulatory oversight of AI-generated and synthetic content, notifying sweeping amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
The revised framework, issued by the Ministry of Electronics and Information Technology (MeitY), will take effect from February 20, 2026. It brings AI-generated content squarely within the compliance regime that already governs other forms of “information” under the IT Rules.
Mandatory user declaration and AI verification
A key change requires significant social media intermediaries to obtain a user declaration—before publishing—on whether uploaded content is “synthetically generated”. Platforms must then deploy “reasonable and appropriate technical measures”, including automated tools, to verify the accuracy of that declaration.
Where verification confirms synthetic origin, the content must be clearly and prominently labelled with an appropriate notice.
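The rules prescribe outcomes rather than any particular mechanism, but the declaration-verification-labelling sequence can be pictured in code. The sketch below is purely illustrative: the Upload type, the detect_synthetic stub standing in for the “reasonable and appropriate technical measures”, and the label text are assumptions, not anything specified in the notification.

```python
from dataclasses import dataclass

@dataclass
class Upload:
    content: bytes
    declared_synthetic: bool  # user declaration collected before publishing

def detect_synthetic(content: bytes) -> bool:
    """Stub for an automated detection tool; a real platform would plug
    its own classifier in here."""
    return False  # placeholder result

def process_upload(upload: Upload) -> dict:
    # Verify the declaration with technical measures rather than trusting it.
    is_synthetic = upload.declared_synthetic or detect_synthetic(upload.content)
    return {
        "publish": True,
        # Confirmed synthetic content carries a clear, prominent notice.
        "label": "Synthetically generated content" if is_synthetic else None,
    }
```

Note the one-way logic: automated detection can add a label the user's declaration omitted, whereas a platform that trusted declarations alone would not satisfy the verification requirement.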
What qualifies as synthetic content
The amendments formally define “synthetically generated information” as audio, visual or audio-visual material that is artificially or algorithmically created or altered in a manner that appears real and is likely to be taken for a genuine depiction of a natural person or a real-world event.
However, exclusions have been carved out. Routine edits that do not materially distort meaning, good-faith creative or design work not resulting in false documents, and accessibility or quality enhancements—such as translation or improved searchability—are not treated as synthetic information under the rules.
Permanent labels and metadata safeguards
For platforms that enable the creation or sharing of synthetic content, the rules mandate prominent labelling to ensure immediate user awareness. Intermediaries must also embed permanent metadata or provenance markers, including unique identifiers—“to the extent technically feasible”—to trace the platform resource used to generate or alter the content.
Crucially, platforms are barred from allowing the modification, suppression or removal of such labels or metadata once applied.
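What a “permanent metadata or provenance marker” looks like in practice is left to platforms. The snippet below is one hypothetical shape for such a record; the field names, the use of a UUID and a SHA-256 digest, and the nod to C2PA-style standards are assumptions rather than requirements of the rules.

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone

def make_provenance_record(content: bytes, resource: str) -> str:
    """Build a write-once provenance record for a piece of synthetic content.

    Real deployments might instead adopt an industry standard such as C2PA;
    this hand-rolled JSON record is illustrative only.
    """
    record = {
        "id": str(uuid.uuid4()),                        # unique identifier
        "sha256": hashlib.sha256(content).hexdigest(),  # binds record to content
        "generated_by": resource,                       # platform resource used
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    # Serialise once and treat the result as immutable: the rules bar
    # modifying, suppressing or removing the marker after it is applied.
    return json.dumps(record, sort_keys=True)

print(make_provenance_record(b"example frame data", "image-generator-v2"))
```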
Three-hour takedown rule
Compliance timelines have been sharply compressed. Upon receiving lawful directions from a competent authority or court, intermediaries must remove or disable access to flagged content—including deepfakes and other synthetic material—within three hours, down from the earlier 36-hour window.
Non-consensual intimate imagery and other unlawful synthetic content fall within this expedited framework.
Grievance redressal norms have also been tightened. Platforms must acknowledge user complaints within two hours (earlier 24 hours) and resolve them within seven days (earlier 15 days).
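Taken together, the new timelines amount to three clocks that start from different events: receipt of a lawful direction for takedowns, and receipt of a user complaint for acknowledgement and resolution. The arithmetic below is a minimal sketch of that bookkeeping, assuming a platform tracks deadlines this way; nothing in the notification mandates a particular implementation.

```python
from datetime import datetime, timedelta, timezone

TAKEDOWN_WINDOW = timedelta(hours=3)  # from a lawful direction (was 36 hours)
ACK_WINDOW = timedelta(hours=2)       # from a user complaint (was 24 hours)
RESOLVE_WINDOW = timedelta(days=7)    # from a user complaint (was 15 days)

def takedown_deadline(direction_received: datetime) -> datetime:
    return direction_received + TAKEDOWN_WINDOW

def grievance_deadlines(complaint_received: datetime) -> tuple[datetime, datetime]:
    return (complaint_received + ACK_WINDOW,
            complaint_received + RESOLVE_WINDOW)

now = datetime.now(timezone.utc)
print(takedown_deadline(now))
print(grievance_deadlines(now))
```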
Proactive AI safeguards
The notification places responsibility on platforms to deploy automated and other technical tools to prevent users from creating or sharing unlawful synthetic content. It specifically flags categories such as child sexual abuse material, non-consensual intimate imagery, impersonation-based deception, false documents or electronic records, and content linked to arms or explosives procurement.
Importantly, the rules clarify that content removal carried out in good-faith compliance, using reasonable technical measures, will not cost intermediaries their safe-harbour protection under Section 79 of the IT Act.
Administrative and legal updates
The amendments also specify that any police intimation for content removal must come from an officer not below the rank of Deputy Inspector General of Police and must be specially authorised by the appropriate government.
Further, references to the Indian Penal Code in the Rules have been updated to the Bharatiya Nyaya Sanhita, 2023. Platforms are required to periodically inform users—at least once every three months—about compliance obligations and consequences, in English or any language listed in the Eighth Schedule of the Constitution.
With AI-generated content now explicitly equated with “information” under the IT Rules, the government has moved to close regulatory gaps, tightening accountability for social media intermediaries while embedding disclosure, traceability and faster enforcement mechanisms into India’s digital compliance framework.