The European Commission has opened a formal investigation under the Digital Services Act into Grok, the generative-AI chatbot on Elon Musk's X platform, after reports that it was used to create and spread sexualised deepfake images of real people, including women and children. The probe examines whether X has adequate safeguards in place to prevent such illegal content.
What Prompted the EU Investigation?
Regulators acted after multiple incidents in which Grok-generated images appeared on X, some depicting sexually explicit alterations of real individuals. X's safety team has already disabled certain image-editing functions in jurisdictions where such content is prohibited, but concerns remain that the tool's capabilities were too broadly accessible.
Legal Framework and Potential Penalties
Under the Digital Services Act, the Commission can levy fines of up to 6% of a company's global annual turnover for non-compliance. It may also impose interim measures requiring immediate technical adjustments if X fails to address the identified risks.
Scale of the Issue
Data from X indicates that Grok generated more than 5.5 billion images in a single 30-day period, roughly 180 million a day, or more than 2,000 every second. While the exact number of sexualised or non-consensual images is undisclosed, the sheer volume highlights the difficulty of monitoring AI-generated content at scale.
Industry Reaction
Elon Musk has dismissed the regulatory scrutiny as “censorship,” and X's own safety team removed a recent post mocking the new restrictions. The company faces pressure to demonstrate responsible AI deployment while maintaining user engagement.
Implications for AI Governance
The investigation underscores the tension between rapid AI innovation and existing legal safeguards. Potential outcomes—such as mandatory watermarking of AI‑generated media or prompt‑level restrictions—could set a precedent for global regulation of generative AI tools.
Looking Ahead
The Commission has not set a definitive timeline, but interim measures could be applied quickly if X does not cooperate. Stakeholders across civil society, AI development, and advertising will monitor the probe closely, as its findings may shape future policies on ethical AI deployment and the prevention of non‑consensual sexual imagery.
