Elon Musk’s artificial intelligence chatbot Grok continues to generate sexualized images of people even when users explicitly state that the subjects do not consent, according to a Reuters investigation conducted after X announced new restrictions on the tool.
The findings raise renewed questions about the effectiveness of recent safeguards imposed by Musk’s social media platform and its AI subsidiary, xAI, following international criticism over the creation of nonconsensual imagery.
Reuters Tests Grok After Safety Changes
Nine Reuters journalists in the United States and the United Kingdom uploaded fully clothed photographs of themselves and colleagues to Grok during two testing periods in January. They asked the chatbot to alter the images to depict sexually provocative or humiliating scenarios, often warning that the subjects were vulnerable or explicitly opposed to such alterations.
In the first round of testing, Grok generated sexualized images in 45 out of 55 cases, including situations where reporters said the images would be used to degrade or embarrass the individuals involved. A second round of prompts, conducted days later, resulted in sexualized outputs in 29 out of 43 cases. Reuters could not determine whether the reduced rate reflected technical changes, policy updates, or random variation.
While Grok’s public-facing account on X no longer produces a large volume of such imagery, the chatbot itself continues to do so when prompted directly.
Limited Responses From X and xAI
X announced new curbs on Grok’s image-generation features after global outrage over its production of nonconsensual images of women and, in some cases, children. The measures included blocking sexualized image generation in public posts and introducing additional limits in jurisdictions where such content is illegal.
Regulators in Britain welcomed the changes, while the European Commission said it would closely assess their impact as part of an ongoing investigation. Officials in Malaysia and the Philippines lifted previous blocks on Grok following X’s announcement.
However, X and xAI did not directly address detailed questions from Reuters about Grok’s continued production of sexualized material, instead repeatedly issuing a brief statement dismissing the reporting.
Legal and Regulatory Pressure Mounts
Legal experts warn that companies enabling nonconsensual sexualized imagery could face significant penalties. In Britain, such content may constitute a criminal offence, and companies risk fines under the Online Safety Act. In the United States, regulators including the Federal Trade Commission and state attorneys general are examining whether xAI’s practices violate consumer protection laws.
Thirty-five U.S. state attorneys general have already written to xAI demanding details on how it plans to prevent Grok from creating sexualized images of people without their consent. California’s attorney general has issued a cease-and-desist order, while investigations in Europe remain ongoing.
Rival AI systems tested by Reuters, by contrast, declined similar requests outright, citing ethical and privacy concerns. The difference has intensified scrutiny of Grok’s safeguards as regulators continue to assess whether the recent changes go far enough.