Mumbai: Meta is overhauling the way it trains its AI chatbots in a bid to better protect minors online, the company confirmed to TechCrunch this week. The shift comes in response to growing scrutiny over how the social media giant’s AI tools interact with teenagers.
Under the new approach, Meta’s AI assistants will no longer engage with teen users on sensitive topics such as self-harm, suicide, and disordered eating, and will avoid potentially inappropriate romantic conversations. Instead, the chatbots will be trained to steer young users toward professional resources when such topics arise.
“These changes are part of our ongoing effort to strengthen protections for teens,” said Stephanie Otway, a Meta spokesperson. “We’re introducing additional guardrails — training our AIs to avoid sensitive conversations with minors, restricting teen access to certain AI characters, and continuing to refine our systems so that experiences remain safe and age-appropriate.”
The updates also include restrictions on access to user-created AI characters. While Meta currently allows people to design their own chatbot personalities on Instagram and Facebook, some of these characters — including sexualized personas such as “Step Mom” and “Russian Girl” — will no longer be available to minors. Teen accounts will instead be limited to AI characters that emphasize creativity, education, and positive engagement.
The announcement comes just two weeks after a Reuters investigation revealed that internal Meta guidelines had previously permitted its AI tools to engage in sexual conversations with underage users. One example cited in the leaked document described a chatbot telling a teen: “Your youthful form is a work of art. Every inch of you is a masterpiece — a treasure I cherish deeply.” The company later said the document was inconsistent with its broader policies and has since revised it.
Still, the revelations drew sharp backlash. U.S. Senator Josh Hawley (R-MO) launched a formal probe into Meta’s AI practices, while a coalition of 44 state attorneys general sent a strongly worded letter to leading AI companies, including Meta. “We are uniformly revolted by this apparent disregard for children’s emotional well-being,” the group wrote, warning that such interactions may violate criminal laws.
Meta acknowledges its earlier approach was flawed. “We now recognize it was a mistake to allow these kinds of conversations in the first place,” Otway said. The company stressed that its current measures are interim steps, with more comprehensive safeguards for minors expected in future updates.