Mumbai: The Advertising Standards Council of India (ASCI) has released draft guidelines aimed at ensuring responsible labelling of AI-generated content in advertising, as brands increasingly adopt artificial intelligence-driven campaigns across platforms.
The proposed guidelines, which are open for stakeholder consultation until June 13, 2026, seek to bring greater transparency to synthetically generated advertising content while avoiding what ASCI describes as “consumer label fatigue.”
The framework is aligned with the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, which were amended on February 10, and focuses primarily on consumer outcomes rather than on regulating AI technology itself.
According to ASCI, the use of AI in advertising would be considered misleading or harmful only when it creates unrealistic expectations, exploits vulnerable audiences, depicts unsafe situations, or uses a real person’s likeness without consent.
The draft guidelines introduce a three-tier risk classification system for AI-generated advertising content:
High Risk: Prohibited Content
ASCI stated that advertisements falling under the high-risk category would violate the ASCI Code regardless of whether an AI disclosure label is used. Examples include fabricated endorsements, misleading product demonstrations, fabricated but realistic-looking locations, unauthorised deepfakes, use of copyrighted content without consent, and AI-generated fictional authority figures, such as fake doctors endorsing products.
Medium Risk: Mandatory Labelling
For medium-risk content, disclosure labels would be compulsory where AI materially influences consumer decision-making. This category includes virtual influencers, AI-generated likeness or voice replication even with consent, synthetic product demonstrations, fictional AI-generated events or settings, demonstrations of non-existent products, exaggerated AI-generated sound effects tied to product features, and sponsored AI-driven product recommendations.
ASCI has proposed disclosure formats such as “Audio/Video created using AI” or “Audio/Video enhanced using AI,” with all disclaimers required to comply with the ASCI Code’s disclaimer norms.
Low Risk: No Disclosure Needed
The low-risk category covers routine AI-assisted enhancements that do not materially affect consumer understanding or decision-making. These include standard editing, colour correction, background visuals, ambient music, fantastical effects such as dragons or fairies, and administrative or accessibility-related AI applications.
The guidelines also emphasise a principle-based and risk-led approach to AI governance in advertising, rather than imposing blanket restrictions on AI-generated content.
ASCI has invited feedback from advertisers, agencies, consumer groups, and other stakeholders before it finalises the guidelines once the consultation period closes on June 13, 2026. Feedback can be submitted to [email protected].
The draft guidelines are expected to play a significant role in shaping the future use of AI-generated content in India’s advertising ecosystem, especially as brands increasingly experiment with virtual influencers, synthetic storytelling, and AI-powered creative production.