OpenAI now automatically limits ChatGPT access for accounts it predicts belong to users under 18. The system flags accounts using behavioral cues and prompts a quick age verification through Persona, lifting restrictions once adulthood is confirmed. This change strengthens OpenAI’s safety commitments while aiming to keep the verification process simple and privacy‑focused.
How the Age‑Prediction System Works
The new model evaluates multiple signals, including account age, typical activity times, usage patterns, and the age initially provided during sign‑up. When the combination of these factors suggests a minor, the system automatically applies a restricted experience.
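OpenAI has not published how these signals are weighted, but the idea of combining several account cues into a single "likely minor" decision can be illustrated with a toy heuristic. Everything below (the `AccountSignals` fields, the weights, and the threshold) is a hypothetical sketch, not OpenAI's actual model, which would be a trained classifier rather than fixed weights.

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    """Hypothetical inputs an age-prediction model might weigh."""
    account_age_days: int
    self_reported_age: int
    late_night_activity_ratio: float    # fraction of sessions outside 06:00-22:00
    school_hours_activity_ratio: float  # fraction of sessions during weekday school hours

def predict_minor(signals: AccountSignals, threshold: float = 0.5) -> bool:
    """Toy heuristic: combine weighted cues into a 'likely minor' score."""
    score = 0.0
    if signals.self_reported_age < 18:
        score += 0.6  # self-reported age is the strongest single cue
    if signals.account_age_days < 30:
        score += 0.1  # very new accounts carry less history to judge by
    score += 0.2 * signals.school_hours_activity_ratio
    score += 0.1 * signals.late_night_activity_ratio
    return score >= threshold

flagged = predict_minor(AccountSignals(
    account_age_days=10,
    self_reported_age=16,
    late_night_activity_ratio=0.4,
    school_hours_activity_ratio=0.5,
))
print(flagged)  # True: several cues together cross the threshold
```

The key design point the sketch captures is that no single signal decides the outcome; restrictions apply only when the combination of factors crosses a threshold.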
Restrictions Applied to Suspected Minor Accounts
Accounts flagged as under‑18 receive a filtered ChatGPT environment that blocks content such as:
- Graphic violence and explicit sexual role‑play
- Self‑harm instructions and risky challenges
- Material promoting extreme beauty standards or unhealthy dieting
These safeguards align with OpenAI’s core commitments to protect younger users from harmful material.
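A restricted experience like this amounts to a category check on classified content. The snippet below is a minimal sketch of that pattern; the category labels are invented for illustration, and a real system would rely on trained content classifiers rather than a hard-coded set.

```python
# Hypothetical category labels mirroring the restricted content types above.
RESTRICTED_CATEGORIES = {
    "graphic_violence",
    "explicit_sexual_roleplay",
    "self_harm_instructions",
    "risky_challenges",
    "extreme_beauty_standards",
    "unhealthy_dieting",
}

def allowed_for_minor(content_categories: set[str]) -> bool:
    """Return False if any detected category falls in the restricted set."""
    return RESTRICTED_CATEGORIES.isdisjoint(content_categories)

print(allowed_for_minor({"cooking", "homework_help"}))  # True
print(allowed_for_minor({"self_harm_instructions"}))    # False
```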
Verification Process and Privacy Measures
When a restriction is triggered, users see a verification prompt. Through Persona, they can submit a selfie and, if required, a government‑issued ID. Successful verification removes the restrictions.

Privacy safeguards include:
- End‑to‑end encryption of submitted documents
- Limited data retention: verification data is deleted after the age check
- No use of verification data for model training
Appealing a Misclassification
If an account is mistakenly identified as belonging to a minor, users can initiate an appeal via the settings menu. The appeal follows the same Persona flow, allowing users to provide documentation that confirms they are over 18 and regain full access.
Future Enhancements and Parental Controls
OpenAI plans to refine the age‑prediction model with real‑world signals to improve accuracy. Ongoing developments include optional parental controls such as:
- “Quiet hours” that block usage during designated times
- Opt‑out of contributing interactions to model training
- Parent notifications when acute distress is detected
These features aim to give families more oversight while maintaining a seamless experience for adult users.
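Of the planned controls, "quiet hours" is the most mechanical: access is blocked whenever the current time falls inside a parent-defined window. The helper below is a hypothetical sketch of that check (in practice the window would come from account settings), including the common edge case of a window that crosses midnight.

```python
from datetime import time

def in_quiet_hours(now: time, start: time, end: time) -> bool:
    """True if 'now' falls within the quiet-hours window.

    Handles windows that wrap past midnight (e.g. 22:00-07:00).
    Hypothetical helper; actual controls would live in account settings.
    """
    if start <= end:
        return start <= now < end
    return now >= start or now < end  # window wraps past midnight

# Example: quiet hours from 22:00 to 07:00
print(in_quiet_hours(time(23, 30), time(22, 0), time(7, 0)))  # True
print(in_quiet_hours(time(12, 0), time(22, 0), time(7, 0)))   # False
```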
