Elon Musk’s xAI sued for turning three girls’ real photos into AI CSAM

A tip from an anonymous Discord user led police to what may be the first confirmed Grok-generated child sexual abuse material (CSAM) that Elon Musk’s xAI can’t easily dismiss as nonexistent.
As recently as January, Musk denied that Grok generated any CSAM during a scandal in which xAI refused to update filters to block the chatbot from nudifying images of real people.
At the height of the controversy, researchers from the Center for Countering Digital Hate estimated that Grok generated approximately three million sexualized images, of which about 23,000 depicted apparent children.
Rather than fix Grok, xAI limited access to the system to paying subscribers.
Read the full story | ARS TECHNICA