X Faces EU Probe over Musk’s Grok Deepfakes


The Irish Data Protection Commission has opened a large‑scale investigation into Elon Musk’s AI chatbot Grok after the tool generated sexualised images of real people, including minors, on X. Regulators are probing whether X violated GDPR and the Digital Services Act, while users worry about privacy, consent, and the spread of AI‑driven deepfakes.

Why the EU Probe Matters

GDPR Implications for Personal Data

The commission will examine how X collected, stored, and fed personal data of EU citizens into Grok’s generative model. If the processing lacked a lawful basis or proper safeguards, X could breach core GDPR obligations, exposing the platform to hefty penalties.

Digital Services Act Risk‑Assessment Requirements

Under the DSA, large platforms must assess and mitigate systemic risks from AI features. Investigators are checking whether X performed a thorough risk assessment before rolling out Grok, especially given the tool’s ability to create realistic, sexualised depictions of identifiable individuals.

Potential Consequences for X and Grok

Fines and Operational Restrictions

Under the GDPR, the DPC can levy fines of up to €20 million or 4 % of annual worldwide turnover, whichever is higher, while the European Commission, which supervises very large platforms under the DSA, can impose penalties of up to 6 % of global turnover. Beyond monetary sanctions, X might be forced to limit Grok’s capabilities or temporarily suspend the feature in the EU.
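
For a sense of scale, here is a minimal sketch of those statutory caps as plain arithmetic. The turnover figure is purely hypothetical and does not reflect X’s actual revenue; only the percentages come from the rules cited above.

```python
def gdpr_fine_cap(global_turnover_eur: float) -> float:
    """GDPR cap: up to €20 million or 4 % of worldwide annual turnover, whichever is higher."""
    return max(20_000_000, 0.04 * global_turnover_eur)

def dsa_fine_cap(global_turnover_eur: float) -> float:
    """DSA cap for very large platforms: up to 6 % of worldwide annual turnover."""
    return 0.06 * global_turnover_eur

# Hypothetical turnover purely for illustration; not X's actual figures.
turnover = 3_000_000_000
print(f"GDPR cap: €{gdpr_fine_cap(turnover):,.0f}")  # GDPR cap: €120,000,000
print(f"DSA cap:  €{dsa_fine_cap(turnover):,.0f}")   # DSA cap:  €180,000,000
```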

Impact on AI Development Practices

Should regulators deem X’s rollout negligent, the case could set a precedent that reshapes how AI tools are vetted across Europe. Developers may need to adopt stricter testing, documentation, and transparency standards to avoid similar scrutiny.

What Developers and Users Should Do

Implement Privacy‑by‑Design

Embed privacy safeguards from day one. If you’re building an AI model, treat any data that could re‑identify individuals as personal data under GDPR. Conduct regular impact assessments and document consent mechanisms.
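
As a rough illustration of what that can look like in code, the sketch below gates training data on documented consent and pseudonymises identifiers before anything reaches a model, while keeping an audit trail of each decision. The record fields, helper names, and log format are assumptions chosen for this example, not anything X or Grok actually uses.

```python
import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class UserRecord:
    user_id: str
    text: str
    consented_to_training: bool = False

@dataclass
class ProcessingLog:
    """Minimal audit trail so each processing decision can be shown to a regulator."""
    entries: list = field(default_factory=list)

    def record(self, user_id: str, action: str) -> None:
        self.entries.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            # Store only a truncated hash, never the raw identifier.
            "user_hash": hashlib.sha256(user_id.encode()).hexdigest()[:16],
            "action": action,
        })

def prepare_training_example(record: UserRecord, log: ProcessingLog) -> str | None:
    """Include data only where consent is documented; the raw user_id never enters the corpus."""
    if not record.consented_to_training:
        log.record(record.user_id, "excluded: no training consent on file")
        return None
    log.record(record.user_id, "included: consent documented, identifier pseudonymised")
    return record.text

log = ProcessingLog()
examples = [
    prepare_training_example(UserRecord("alice@example.com", "Post A", True), log),
    prepare_training_example(UserRecord("bob@example.com", "Post B", False), log),
]
print([e for e in examples if e is not None])  # ['Post A']
print(log.entries)
```

The point of the pattern is that exclusion is the default: data without a documented lawful basis simply never reaches the training pipeline, and the log shows why.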

Stay Informed and Protect Your Data

Keep an eye on X’s policy updates—you deserve to know how your interactions might be used. Use privacy settings, limit the personal information you share, and consider alternative platforms if you’re uncomfortable with AI‑generated content.

  • Audit your data sources for compliance.
  • Document risk‑assessment processes clearly (see the sketch after this list).
  • Engage legal counsel familiar with GDPR and the DSA.
  • Educate users about the potential for AI‑driven deepfakes.
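
As a companion to the second point above, this is one way a risk‑assessment entry could be kept in a machine‑readable, auditable form. The field names and example values are illustrative assumptions, not a format prescribed by the DSA.

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class RiskAssessmentEntry:
    """One line of a DSA-style risk register; fields are illustrative, not mandated by the regulation."""
    feature: str
    risk: str
    severity: str          # e.g. "low" / "medium" / "high"
    mitigation: str
    owner: str
    review_date: date

entry = RiskAssessmentEntry(
    feature="image generation",
    risk="sexualised deepfakes of identifiable people",
    severity="high",
    mitigation="block prompts naming real individuals; human review of flagged outputs",
    owner="trust-and-safety team",
    review_date=date(2025, 6, 1),
)

# Serialise the entry so the assessment can be produced on request during an audit.
print(json.dumps(asdict(entry), default=str, indent=2))
```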