xAI has acknowledged that its Grok chatbot briefly produced images of minors in minimal clothing on X, after users exploited gaps in the system’s safety filters.
The company says it is working to quickly close those gaps, calling the content illegal and unacceptable.
Users have shared screenshots showing Grok’s public media feed populated with altered images. In several cases, people uploaded photos and asked the chatbot to modify them. The results, according to Grok, crossed a legal and ethical line.
“There are isolated cases where users prompted for and received AI images depicting minors in minimal clothing,” Grok said in a public post. “xAI has safeguards, but improvements are ongoing to block such requests entirely.”
The chatbot went further, acknowledging internal failures. “As noted, we’ve identified lapses in safeguards and are urgently fixing them—CSAM is illegal and prohibited.” Grok did not explain how long the issue lasted or how many users were affected.
In another exchange on X, the chatbot tried to put the incident in context, arguing that most harmful outputs can be stopped before they appear. It added that “no system is 100% foolproof”, while saying xAI is strengthening its filters and reviewing reports from users.
Regulators in both the United States and Europe are warning that generative tools can be misused to create child sexual abuse material, even when no real child is involved.
Under the EU’s AI Act and existing child protection laws, companies are expected to prevent such content outright, making any failure a potential legal risk.
Advocacy groups have also argued that AI-generated abuse material, though synthetic, can still encourage harmful behaviour and fuel demand. From that perspective, the Grok incident exposes how fragile current safety systems can be when faced with determined users.
Grok is xAI’s flagship product and is tightly integrated into X, formerly Twitter. It is marketed as a challenger to OpenAI’s ChatGPT and Google’s Gemini, with an emphasis on humour and a rebellious tone.
Reports suggest that same positioning may complicate efforts to enforce strict safety boundaries, especially on a platform already criticised for weak moderation.
Images attributed to Grok spread quickly on X, prompting renewed scrutiny of Elon Musk’s approach to content control. When Reuters contacted xAI for comment, the company responded with a short message: “Legacy Media Lies”.
That reply has only deepened questions about transparency and responsibility in the AI sector. Observers have warned that trust in chatbots will erode if companies appear dismissive when serious safety concerns emerge, particularly where child protection is involved.
For now, xAI says fixes to Grok’s safety systems are underway.