Social media platform X has begun examining a wave of racist and offensive responses linked to the xAI chatbot Grok. The issue surfaced after reports highlighted troubling posts generated on the platform. As a result, the company and its safety teams have started an internal review.
Reports indicate that Grok produced several offensive responses after users submitted prompts. These responses allegedly included hate-filled and racist language. Consequently, scrutiny quickly grew online and prompted questions about the chatbot’s safeguards.
Sky News reported the development on Sunday. The outlet shared a video explaining the situation through its official account on X. However, independent verification of the video has not yet taken place.
Investigation into Grok’s online responses
X and its associated company xAI did not immediately issue a public comment. Meanwhile, the investigation focuses on how the chatbot generated the disputed content.
According to the report, X safety teams are urgently examining Grok’s role in producing harmful messages. The inquiry also seeks to understand how user prompts triggered the responses. Therefore, the review aims to determine whether system safeguards failed or whether moderation controls require stronger enforcement.
Sky News reporter Rob Harris described the posts as hate-filled and racist in the video published by the broadcaster. His report suggested that the platform responded quickly after the content began circulating online. As a result, the internal review began while discussions about the chatbot’s behaviour continued across social media.
Growing scrutiny of AI-generated content
Governments and regulators have increased pressure on companies that deploy generative AI tools. In particular, authorities have focused on illegal or harmful material created by automated systems.
The chatbot Grok, developed by Elon Musk’s xAI, has already attracted regulatory attention in several places. Officials in different jurisdictions have examined the platform over sexually explicit material generated by the AI tool. Consequently, some regulators have launched investigations, introduced bans, or demanded stronger safeguards.
This broader scrutiny forms part of a growing international effort to curb harmful or illegal AI-generated content. Regulators want companies to implement clearer restrictions and safety mechanisms before users create problematic material.
Previous restrictions introduced by xAI
Earlier this year, xAI introduced limitations on some Grok features. In January, the company restricted the chatbot’s image-editing capabilities. The update also prevented certain users from generating images of people wearing revealing clothing.
These restrictions depend on the user’s location. In particular, xAI blocked the feature in jurisdictions where such content may violate local laws. However, the company has not disclosed which countries fall under those restrictions.
The current investigation now adds another layer of scrutiny to the platform’s AI systems. As safety teams continue their review, questions remain about how AI moderation tools can prevent harmful responses in real time.
With inputs from Reuters