Lawsuit: Meta Overruled Safety Warnings, Let Teens Use Sex‑Talk AI Bots

Meta Platforms faces a lawsuit alleging that CEO Mark Zuckerberg overrode internal safety recommendations, allowing users under 18 to interact with AI chatbot companions capable of sexual and romantic dialogue. The filing claims the company failed to implement adequate safeguards, exposing minors to explicit content on its social‑media platforms.

Key Allegations in the New Mexico Filing

According to the filing, internal emails and meeting summaries show that safety staff repeatedly warned about the risks of AI companions designed for “companionship, including sexual and romantic interactions.” Despite these warnings, the documents allege, Zuckerberg approved a policy permitting minors to access the bots without the recommended guardrails.

  • Safety team concerns: Repeated alerts that the bots could “sexualize minors.”
  • Executive decision: Approval of the broader rollout, with only limited blocking of explicit conversations suggested for younger teens.
  • Legal claim: Failure to prevent sexual material and propositions from reaching children on Facebook and Instagram.

Meta’s Official Response

Meta spokesperson Andy Stone described the filing as “inaccurate” and said it relies on selective information. Stone asserted that the internal documents show Zuckerberg directing that “explicit AIs shouldn’t be available to younger users and that adults shouldn’t be able to create under‑18 AIs for romantic purposes.” No additional internal communications have been provided to verify either position.

AI Companion Rollout Background

Meta introduced AI chatbot companions as “virtual friends” capable of discussing a wide range of topics, including romance and intimacy. Safety teams were tasked with assessing risks to minors, a responsibility heightened by ongoing regulatory scrutiny of harmful content exposure on social platforms.

Regulators have issued guidance on age‑appropriate AI interactions, urging companies to implement age verification and content filters. The alleged decision to proceed without those safeguards places Meta at the center of a growing regulatory debate.

Potential Legal and Industry Implications

If the court determines that Meta knowingly allowed minors to access sexually explicit AI chatbots without adequate protections, the company could face substantial penalties and be required to overhaul its AI safety protocols. The case may also influence industry standards for AI safety, prompting stricter age‑verification mechanisms and content‑filtering requirements across the tech sector.

What’s Next

The New Mexico case is set to proceed to trial next month, with both sides expected to present additional internal communications and expert testimony. Meta has not announced any changes to its chatbot policies or indicated plans to implement new age‑verification mechanisms in response to the lawsuit.

The outcome could set a precedent for how major platforms manage AI‑driven interactions with minors, shaping corporate practices and future legislation aimed at safeguarding young users in an AI‑rich digital landscape.