The European Commission has opened a formal investigation into Grok, the AI chatbot on X, over claims that it is being used to create and spread non‑consensual sexual deepfake images, including of minors. The probe will assess whether X complies with the EU Digital Services Act’s risk‑assessment and content‑governance obligations.
Background of the EU Investigation
The Commission’s inquiry follows mounting concerns that Grok’s image‑editing features may be exploited to remove clothing from photographs, producing illegal explicit content. Regulators will examine X’s compliance with systemic‑risk assessments, transparency reporting, and rapid removal mechanisms required for very large online platforms.
Legal Basis under the Digital Services Act
Under the Digital Services Act, platforms must:
- Identify and mitigate systemic risks of illegal or harmful content.
- Implement effective content‑governance and moderation tools.
- Publish transparency reports and provide swift removal of unlawful material.
Failure to meet these duties can trigger fines of up to 6% of a platform's total worldwide annual turnover.
Allegations of Non‑Consensual Deepfake Images
Authorities allege that Grok has been used to generate sexualised images of real individuals without their consent, including minors. Reports of millions of such images appearing on X within a short period underscore the scale of the problem.
Scale of the Problem
- More than three million sexualised images posted on X in an 11‑day window.
- Many images appear to be AI‑generated deepfakes targeting women and children.
X’s Response and Current Safeguards
X’s parent company, xAI, says it has introduced safeguards that block Grok from generating images depicting real people in revealing clothing in jurisdictions where such content is illegal. Additional location‑based restrictions have reportedly been applied, though the specific regions have not been disclosed.
Potential Enforcement Actions
The Commission may impose interim measures if X fails to demonstrate meaningful remedial action. Possible actions include:
- Ordering the suspension or limitation of Grok within the EU.
- Requiring redesign of the user interface to restrict risky functionalities.
- Imposing substantial fines for non‑compliance with the Digital Services Act.
Implications for AI Regulation
The outcome of this probe could set a precedent for how the EU regulates AI‑driven content‑creation tools. A finding of non‑compliance may reinforce the bloc’s stance that no platform operating in the EU is exempt from strict safety and accountability standards.
Key Takeaways for Users
- Increased scrutiny aims to protect privacy and personal safety.
- Victims of non‑consensual deepfakes may benefit from stronger legal safeguards.
- Ongoing monitoring of AI safeguards is essential to prevent future abuse.
