AI is no longer just a buzzword; it is rewriting the way we tell stories, connect with audiences, and shape brand perception. But as one recent viral example shows, it's not all smooth sailing. You might have seen the speculative Adidas ad created by filmmaker Blair Vermette. It featured people dancing in floral-print Adidas gear and looked every bit like a high-end campaign. Except there was one twist: the collection didn't exist. The entire thing was AI-generated. People loved it so much they asked Adidas to make it real. That's the power of AI, but also a red flag. If we can generate desire for something that doesn't exist, where do we draw the line between marketing and manipulation?
As PR professionals, we have to pause and ask: are we using AI to inform and inspire, or to deceive and distort? Here's why that distinction matters now more than ever.
Are We Reflecting Desires or Manufacturing Them?
Traditionally, marketing aimed to reflect what people already wanted. Now, with generative AI, we're not just responding to demand; we are creating it. The Adidas ad didn't just promote something, it tested an idea in real time. Useful? Yes. Ethical? That depends.
Many experts argue that AI gives companies insight into our preferences and even our vulnerabilities. That’s powerful stuff. But it also creates the risk of nudging consumers toward things they didn’t even want until the algorithm decides they should. The line between personalisation and manipulation starts to blur. We need to stay on the right side of it.
The FOMO Trap: When Marketing Gets Pushy
We all know marketing thrives on urgency: "Only 2 left" or "Flash sale ends tonight". AI takes that to a new level, tailoring these messages to individual users based on what gets them to click. But here's the kicker: sometimes there isn't really any scarcity. The AI just knows that fear of missing out works on you.
Over time, this kind of psychological pressure creates distrust. When consumers start feeling manipulated, they tune out or, worse, they call you out.
So what's the fix? Honesty. Transparency. It's not just nice to have; it's a business case for being real.
Why Transparency Isn’t Optional Anymore
In today's hyper-aware environment, people want to know what's real and what's AI-assisted. If you are using AI to write copy, generate visuals, or test concepts, just say so. Some governments are even moving toward requiring that AI-generated content be clearly disclosed.
Think of it this way: no one minds if AI helps you brainstorm. But if it's creating entire campaigns without context, and your audience feels duped, the backlash will be swift. One fashion brand learnt this the hard way when it tested AI-generated models to show off body diversity and the public wasn't impressed. They didn't want synthetic inclusion; they wanted the real thing.
Virtual Influencers and the Authenticity Question
Let's talk about AI influencers like India's own "Kyra". On the surface they're a marketer's dream: always on brand, always available, and scandal-proof. But in a market like India, where cultural nuance and emotional connection matter, virtual influencers still have a long way to go.
The takeaway here? Use AI influencers if you must, but make sure your audience knows they’re virtual. And more importantly, don’t use them as a shortcut to avoid genuine storytelling.
The Copyright Elephant in the Room
There's another layer to this: intellectual property. AI tools don't create from scratch; they remix what they've been trained on. Some companies have already landed in legal hot water for using unlicensed data to train these models.
If you’re using AI-generated content for a campaign, you need to know what went into it. Was it trained on licensed data? Is the output actually yours to use?
To stay safe, partner with AI tools that are transparent about their training data. Better yet, keep a human in the loop. Creativity can be co-piloted by machines, but ownership, accountability, and final approval must stay with us.
AI Lacks What Matters Most: Empathy
AI is efficient. It can predict, optimise, and even surprise us. But what it can't do is care. Empathy, intent, and emotional timing are human traits. And in communication, they matter more than polished copy or trendy visuals.
One Indian healthcare brand tried using an AI avatar to share emergency tips. A smart idea, until they realised the bot might offer generic or even misleading advice. They quickly reined it in. Some messages, especially in healthcare or crisis communications, need a real voice with real understanding.
Final Thoughts: Use the Tech, But Keep the Trust
There’s no denying that AI will change how we work. It already is. But we need to be intentional. If your storytelling loses the human element, your brand might lose its audience.
Let's not fake authenticity. Let's not fake urgency. And let's definitely not fake inclusion, creativity, or empathy. Because once trust is broken, it's nearly impossible to rebuild.
AI can help us go faster, but only we can make sure we're going in the right direction. As PR professionals, the challenge is not just to adopt AI; it's to adopt it responsibly. And that starts with remembering who we're really talking to: real, living humans.
(Views are personal)