X blames users for Grok-generated CSAM; no fixes announced

https://arstechnica.com/tech-policy/2026/01/x-blames-users-for-grok-generated-csam-no-fixes-announced/

Ashley Belanger · Jan 05, 2026

It seems that instead of updating Grok to prevent it from outputting sexualized images of minors, X is planning to purge users who generate content the platform deems illegal, including Grok-generated child sexual abuse material (CSAM).

On Saturday, X Safety finally posted an official response after nearly a week of backlash over Grok outputs that sexualized real people without consent. Offering no apology for Grok’s functionality, X Safety blamed users for prompting Grok to produce CSAM while reminding them that such prompts can trigger account suspensions and possible legal consequences.

“We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary,” X Safety said. “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.”

X Safety’s post boosted a reply on another thread in which X owner Elon Musk reiterated the consequences users face for inappropriate prompting. That reply was a response to a post from an X user, DogeDesigner, who suggested that Grok can’t be blamed for “creating inappropriate images,” despite Grok determining its own outputs.

“That’s like blaming a pen for writing something bad,” DogeDesigner opined. “A pen doesn’t decide what gets written. The person holding it does. Grok works the same way. What you get depends a lot on what you put in.”

But unlike a pen, image generators like Grok aren’t forced to output exactly what the user wants. One of the reasons the Copyright Office won’t allow AI-generated works to be registered is the lack of human agency in determining what AI image generators spit out. Chatbots are similarly non-deterministic, generating different outputs for the same prompt.
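
To make that distinction concrete, here is a minimal Python sketch, purely illustrative and not drawn from Grok's actual code, of why the same prompt can yield different outputs: generation samples from a probability distribution over candidates rather than transcribing the user's intent the way a pen would.

    import random

    # A toy model of sampling-based generation (not Grok's actual code).
    # The same prompt maps to a distribution over candidate outputs, and the
    # final result is drawn at random, weighted by those probabilities.
    def sample_output(candidates: dict[str, float]) -> str:
        options = list(candidates)
        weights = [candidates[option] for option in options]
        return random.choices(options, weights=weights, k=1)[0]

    # Made-up candidates and probabilities, for illustration only
    candidates = {"output A": 0.6, "output B": 0.3, "output C": 0.1}
    print([sample_output(candidates) for _ in range(5)])  # can differ on every run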

That’s why, to many users asking why X won’t filter CSAM out of Grok’s generations, X’s response seems to stop well short of fixing the problem: it holds only users responsible for the outputs.

In a comment on the DogeDesigner thread, a computer programmer pointed out that X users may inadvertently generate inappropriate images—back in August, for example, Grok generated nudes of Taylor Swift without being asked. Those users can’t even delete problematic images from the Grok account to prevent them from spreading, the programmer noted. In that scenario, the X user could risk account suspension or legal liability if law enforcement intervened, X Safety’s response suggested, without X ever facing accountability for unexpected outputs.

X did not immediately respond to Ars’ request to clarify if any updates were made to Grok following the CSAM controversy. Many media outlets weirdly took Grok at its word when the chatbot responded to prompts demanding an apology by claiming that X would be improving its safeguards. But X Safety’s response now seems to contradict the chatbot, which, as Ars noted last week, should never be considered reliable as a spokesperson.

While X’s response continues to disappoint critics, some top commenters on the X Safety post have called for Apple to take action if X won’t. They suggested that X may be violating App Store rules against apps allowing user-generated content that objectifies real people. Until Grok starts transparently filtering out CSAM or other outputs “undressing” real people without their consent, the chatbot and X should be banned, critics said.

An App Store ban would likely infuriate Musk, who last year sued Apple, partly over his frustrations that the App Store never put Grok on its “Must Have” apps list. In that ongoing lawsuit, Musk alleged that Apple’s supposed favoring of ChatGPT in the App Store made it impossible for Grok to catch up in the chatbot market. An App Store ban, then, could doom Grok’s quest to overtake ChatGPT’s lead.

Apple did not immediately respond to Ars’ request to comment on whether Grok’s outputs or current functionality violate App Store rules.

No one knows how X plans to purge bad prompters

While some users are focused on how X can hold users responsible for Grok’s outputs when X is the one training the model, others are questioning how exactly X plans to moderate illegal content that Grok seems capable of generating.

X has so far been more transparent about how it moderates CSAM posted directly to the platform. Last September, X Safety reported that it has “a zero tolerance policy towards CSAM content,” the majority of which is “automatically” detected using proprietary hash technology that proactively flags known CSAM.

Under this system, more than 4.5 million accounts were suspended last year, and X reported “hundreds of thousands” of images to the National Center for Missing and Exploited Children (NCMEC). The next month, X Head of Safety Kylie McRoberts confirmed that “in 2024, 309 reports made by X to NCMEC led to arrests and subsequent convictions in 10 cases,” and in the first half of 2025, “170 reports led to arrests.”

“When we identify apparent CSAM material, we act swiftly, and in the majority of cases permanently suspend the account which automatically removes the content from our platform,” X Safety said. “We then report the account to the NCMEC, which works with law enforcement globally—including in the UK—to pursue justice and protect children.”

At that time, X promised to “remain steadfast” in its “mission to eradicate CSAM,” but if left unchecked, Grok’s harmful outputs risk creating new CSAM that this system wouldn’t automatically detect, since hash matching only flags images that have already been catalogued. On X, some users suggested the platform should expand reporting mechanisms to help flag potentially illegal Grok outputs.
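
For context on why that gap exists, here is a rough Python sketch, again purely illustrative and not X's actual system, of how hash-based detection works: it can only flag images whose fingerprints already appear in a database of known abuse material, so a freshly generated image has nothing to match against.

    import hashlib

    # Hypothetical database of fingerprints of previously catalogued abuse imagery.
    # Real systems are maintained with organizations like NCMEC and use perceptual
    # hashes rather than plain SHA-256, so near-duplicates still match.
    KNOWN_ILLEGAL_HASHES: set[str] = set()

    def is_known_csam(image_bytes: bytes) -> bool:
        # Returns True only if this exact image was previously catalogued;
        # a brand-new AI-generated image has no prior entry, so it passes
        # this check even if the content itself is illegal.
        return hashlib.sha256(image_bytes).hexdigest() in KNOWN_ILLEGAL_HASHES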

Another troublingly vague aspect of X Safety’s response, some X users suggested, is how X defines illegal content or CSAM. Across the platform, not everybody agrees on what’s harmful. Some critics are disturbed by Grok generating bikini images that sexualize public figures, including doctors or lawyers, without their consent, while others, including Musk, treat making bikini images as a joke.

Where exactly X draws the line on AI-generated CSAM could determine whether images are quickly removed or whether repeat offenders are detected and suspended. Any accounts or content left unchecked could potentially traumatize real kids whose images may be used to prompt Grok. And if Grok should ever be used to flood the Internet with fake CSAM, recent history suggests that it could make it harder for law enforcement to investigate real child abuse cases.