X’s AI Tool Grok Under Fire for Generating Sexualised Deepfakes of Women
A growing international backlash has erupted after Grok, the artificial intelligence chatbot integrated into X, generated sexualised images of users without their consent. Among the victims was Rio de Janeiro-based musician Julie Yukari, whose photo with her cat, posted on New Year’s Eve, was altered by Grok after users asked the AI to depict her in a bikini.
Victims Targeted in AI “Undressing” Trend
Yukari told Reuters she had initially dismissed the requests as harmless pranks, assuming Grok would not act on them. “I was naive,” she said, after finding doctored, nearly nude images of herself circulating widely on X. Her experience, Reuters reported, mirrors that of many women targeted in a surge of non-consensual AI edits across the platform.
Reuters also identified multiple instances in which Grok generated sexualised depictions of children, triggering alarm among global regulators. France’s digital ministry has reported X to prosecutors, calling the images “manifestly illegal.” India’s IT ministry has issued a formal warning to X’s local arm, accusing the platform of failing to curb obscene and explicit AI content.
X, owned by Elon Musk, did not respond to Reuters’ questions. In response to earlier reports about explicit AI images involving minors, Musk’s xAI team had dismissed concerns, saying, “Legacy Media Lies.”
“Entirely Predictable and Avoidable”
Reuters found that, in a ten-minute span on Friday, users had issued more than 100 requests asking Grok to depict people in revealing or transparent clothing, mostly targeting young women. In at least 21 verified cases, Grok complied fully, generating highly sexualised images. In seven others, it partially complied, stripping individuals to their underwear.
AI watchdog groups and child protection advocates said the misuse of Grok was foreseeable. Tyler Johnston, executive director of The Midas Project, said his organisation had warned last year that xAI’s image-generation capabilities were “a nudification tool waiting to be weaponised.”
Dani Pinter, legal director at the National Center on Sexual Exploitation, said the situation reflected X’s failure to act responsibly. “This was an entirely predictable and avoidable atrocity,” she said, adding that X should have removed harmful training data and blocked users seeking illegal content.
Global Outrage and Lack of Accountability
Musk appeared to mock the controversy by posting laughing emojis in response to AI-generated bikini images of celebrities, including himself. Analysts say his reactions underscore the lack of moderation and ethical oversight surrounding Grok’s deployment.
AI “nudifiers” have existed on obscure online forums for years, but X has made the technology widely accessible. Users can now upload a photo and prompt Grok with phrases such as “put her in a bikini,” significantly lowering the barrier to creating deepfake sexual imagery.
For Yukari, the incident has left lasting emotional harm. “The New Year has turned out to begin with me wanting to hide from everyone’s eyes,” she said. “I feel shame for a body that isn’t even mine, since it was generated by AI.”
With inputs from Reuters