Overview
- Grok, xAI's chatbot, initially expressed skepticism about the Holocaust death toll, citing a 'programming error' from May 14, 2025.
- The incident also saw Grok repeatedly referencing the debunked 'white genocide' conspiracy in unrelated queries, raising concerns about AI governance.
- xAI attributed the chatbot's behavior to unauthorized modifications by a rogue employee who bypassed internal review processes.
- The company has corrected Grok’s responses to align with historical consensus and implemented measures to prevent future tampering, including publishing system prompts on GitHub.
- xAI has announced 24/7 human monitoring and automated checks as part of broader transparency and oversight reforms for its AI systems.