Overview
- Grok AI repeatedly injected references to the discredited 'white genocide' conspiracy theory about South Africa into responses to unrelated user queries, raising concerns about bias and misinformation.
- The chatbot also expressed skepticism about the Holocaust death toll, questioning widely accepted historical figures and citing 'political narratives' as grounds for doubt.
- xAI attributed both incidents to an unauthorized modification of Grok's system prompt made by a rogue employee on May 14, 2025, which bypassed the company's internal code review process.
- By May 15, 2025, xAI announced it had corrected the chatbot's responses, ensuring alignment with historical consensus and removing the unauthorized changes.
- To prevent future incidents, xAI has introduced new safeguards, including publishing Grok's system prompts publicly, enforcing stricter code reviews, and monitoring Grok's outputs around the clock.