Unauthorised Change Caused Misleading Response by Grok on Sensitive Topic
Elon Musk’s AI company, xAI, has responded to reports that its Grok chatbot made claims about a so-called genocide of white people in South Africa. In a statement on Thursday, xAI blamed the incident on an unauthorised change to Grok’s software and committed to fixing the issue.
The company said the alteration was made early Wednesday and bypassed its normal review procedures. As a result, the chatbot produced politically sensitive responses that did not align with xAI’s values or policies.
“This change, which directed Grok to provide a specific response on a political topic, violated xAI’s internal policies and core values,” the statement said.
Political Bias in AI Remains a Concern
Since the launch of ChatGPT in 2022, concerns about political bias, hate speech, and factual accuracy in AI chatbots have been widespread. The Grok incident adds to this ongoing debate.
Screenshots shared by users on X (formerly Twitter) showed Grok raising the idea of “white genocide” in South Africa in response to unrelated queries. The episode triggered backlash and raised questions about the system’s safeguards.
Musk, who was born in South Africa, has previously criticised the country’s land expropriation policies. While some have echoed his concerns, the South African government maintains there is no evidence of racial persecution or genocide, and has dismissed past claims by US President Donald Trump and others as unfounded.
xAI Takes Steps to Increase Transparency and Monitoring
In response to the controversy, xAI announced several measures aimed at improving transparency and trust. The company will begin publishing Grok’s system prompts on GitHub, allowing the public to track and comment on changes.
Additionally, xAI will establish a round-the-clock monitoring team to handle incidents that its automated systems fail to catch. These steps are intended to prevent similar issues and restore user confidence in the platform.
with inputs from Reuters