Let me say something that most people in the marketing and communications industry are thinking but not saying out loud.
The conversation around AI ethics in advertising has been hijacked by two groups — technologists who think ethics is a compliance checkbox, and regulators who think a policy document will fix human behavior. Neither is right. And while these two groups debate frameworks and guidelines, brands are quietly making decisions every single day that are eroding trust with their audiences in ways they won’t fully feel until it’s too late.
The real question is not whether AI should be used in advertising. That ship has sailed. The question is who inside a brand is responsible for drawing the line — and why, in most organizations, nobody actually owns that answer.
The Efficiency Trap
AI in advertising started with a genuinely useful promise. Better targeting. Faster creative iteration. Smarter spend optimization. Personalization at scale. These are legitimate capabilities, and any brand ignoring them is leaving competitive advantage on the table.
But somewhere between the promise and the execution, something shifted.
Brands started using AI not just to optimize advertising — but to manufacture it. AI-generated faces replacing real people in campaigns. Synthetic voices mimicking celebrities without consent. Deepfake testimonials designed to look authentic. Hyper-personalized emotional triggers engineered to exploit psychological vulnerabilities rather than serve genuine customer needs.
The efficiency gains are real. The ethical cost is being deferred — and deferred costs always come due.
Working with brands across sectors, I see this consistently: the teams making AI-driven advertising decisions are almost never asking “should we do this?” They are asking “can we do this?” and “will it perform?” Those are completely different questions. And the gap between them is where brand reputation goes to die.
The Three Lines Brands Are Crossing Without Realizing It
There are specific areas where I see brands regularly crossing ethical lines — not maliciously, but carelessly. And carelessness in communication is not a defense. It’s a liability.
1. Authenticity Fraud
When a brand uses AI to generate customer testimonials, product reviews, or social proof that did not come from real human experience — that is not personalization. That is deception. It doesn’t matter how sophisticated the AI is or how realistic the output looks. The moment a brand manufactures authenticity, it has broken the foundational contract with its audience.
The short-term performance metrics will look fine. The long-term brand equity damage is invisible until it isn’t — until a journalist runs the story, until a customer calls it out publicly, until the brand is defending itself in a news cycle it cannot control.
2. Emotional Manipulation Over Genuine Value
AI systems are extraordinarily good at identifying psychological trigger points — fear, urgency, belonging, status anxiety — and optimizing ad content to exploit them. Some of this is standard marketing practice that predates AI. But AI has industrialized it, at a scale and with a precision that cross into manipulation territory.
There is a difference between persuasion and manipulation. Persuasion presents a genuine value proposition and lets the customer decide. Manipulation engineers an emotional state designed to override rational judgment. Brands using AI to do the latter are building short-term conversion numbers on a foundation of eroded trust. That is not a sustainable business model. It is a reputation crisis in slow motion.
3. Synthetic Identity Without Disclosure
AI-generated spokespeople, virtual influencers, synthetic brand voices — none of these are inherently unethical. But using them without disclosure is. Audiences have a right to know when they are engaging with a constructed identity versus a real human being. The brands that are burying this in fine print or not disclosing it at all are making a calculated bet that their audience won’t notice or won’t care. That bet is getting riskier by the month as media and regulatory scrutiny increases.
Why Self-Regulation Will Fail and Government Regulation Will Overcorrect
The industry’s answer to AI ethics in advertising has largely been self-regulation. Industry bodies producing guidelines. Platforms releasing policy documents. Brands issuing internal AI ethics frameworks that nobody reads after the launch press release.
Self-regulation fails for a simple reason: it has no teeth. When the choice is between an ethical constraint and a performance number, performance wins inside most organizations. Every time. Unless there is a structural consequence — legal, financial, or reputational — the ethical guideline is advisory at best and decorative at worst.
But the answer is not aggressive government regulation either. Heavy-handed regulation in a fast-moving technology space almost always produces two outcomes: it protects incumbents who can afford compliance infrastructure, and it stifles the legitimate innovation that smaller brands and agencies depend on. India’s advertising and marketing ecosystem is still maturing. Premature over-regulation will damage it.
The honest answer is that neither pure self-regulation nor government regulation will solve this. What will solve it — slowly, imperfectly, but sustainably — is brand-level accountability driven by reputation risk.
This is a PR and Communications Problem, Not Just a Legal One
Here is where I want to make a point that I don’t see being made enough in this conversation.
AI ethics in advertising is being treated as a legal and compliance issue. Brands are asking their legal teams to define the boundaries. They are consulting policy advisors. They are reading regulatory guidance.
That is the wrong department to own this question.
The consequences of crossing ethical lines in advertising are not primarily legal. They are reputational. A brand that gets called out for using AI deceptively does not lose a court case first. It loses trust first. It loses the media narrative first. It loses customer loyalty first. The legal consequence, if it comes at all, arrives much later.
Which means the people who should be drawing the line — the people who understand where the reputational risk actually sits — are the CMO, the communications lead, and frankly, the PR counsel. Not the legal team.
The brands that are going to navigate AI ethics well are the ones where the communications function has a seat at the table when AI-driven advertising decisions are being made. Not to slow things down. Not to be the department of no. But to ask the question that nobody else in the room is asking: if this campaign gets written about tomorrow — not in an ad trade publication, but in a mainstream news outlet — what is the story? Is it the story we want told about this brand?
That question, asked early and consistently, is more effective than any compliance framework.
Where Should the Line Actually Be?
Since I’ve been critical of vague ethical guidelines, let me be specific.
The line should be drawn at disclosure, consent, and genuine value.
Disclosure: Any AI-generated content — synthetic voices, generated imagery, virtual spokespeople — should be clearly disclosed to the audience. Not in fine print. Clearly.
Consent: Any use of a real person’s likeness, voice, or identity in AI-generated advertising requires explicit consent. Not assumed consent. Not consent buried in a terms of service document. Explicit, informed consent.
Genuine Value: AI-driven personalization should serve the customer’s genuine interest — showing them something relevant and useful — not exploit psychological vulnerabilities to engineer purchases they will regret.
If the only reason an AI targeting system works is because it found an emotional pressure point to exploit, that’s not smart advertising. That’s a reputational liability waiting to surface.
These three principles are not complex. They do not require a 40-page policy document. They require leadership that is willing to say: we will not do this, even if it performs, because the long-term cost to this brand’s credibility is not worth the short-term metric.
The Brands That Get This Right Will Win the Next Decade
We are at an inflection point in the relationship between brands and their audiences. Trust is becoming the scarcest resource in marketing. Audiences are more skeptical, more informed, and more capable of calling out inauthenticity than at any point in the history of advertising.
AI is an extraordinary tool. Used with editorial judgment, genuine transparency, and a long-term view of brand equity, it can build stronger, more relevant, more credible brand communication than was ever possible before.
Used carelessly, in service of short-term performance at the expense of trust, it will accelerate the erosion of brand credibility at a scale and speed that no crisis communications plan can manage.
The line is not drawn by regulation. It is not drawn by an AI ethics committee. It is drawn by brand leadership that understands that reputation is not a department. It is the sum total of every decision the brand makes — including, and especially, how it chooses to use the most powerful communication tools available to it.
Draw the line there. Hold it there. And make sure the person responsible for your brand’s reputation is in the room when the AI advertising decisions are being made.
Shiva Bhavani is the Founder & CEO of Wing Communications, a strategic PR and reputation management agency working with high-growth brands across India.
(Views are personal)