ChatGPT uses age prediction to apply extra safety settings for teen users

The added safeguards limit sensitive content and interactions; users aged 18 and above can disable these controls after completing age verification.

Social Samosa

OpenAI announced that ChatGPT will be using an age prediction system to determine whether an account may belong to a user under 18 and apply additional safety settings if it does.

“To help teens have an age-appropriate experience on ChatGPT, we use signals to predict whether an account may belong to someone under 18,” the company said. “If we predict an account is under 18, we turn on extra safety settings.”

The added safeguards are designed to limit access to sensitive content and certain types of interactions. Users who are 18 or older can turn off these settings by verifying their age.

The age prediction system looks at signals linked to an account, including general topics discussed and the times of day the service is used. Adults who are mistakenly placed in the under-18 experience can verify their age to remove the restrictions.

If an account is identified as belonging to a teen, the user can still access the chatbot for learning, creativity and questions. However, some topics are handled more carefully, including graphic violence or gore, viral challenges that could encourage risky behaviour, sexual or violent role play, and content promoting extreme beauty standards, unhealthy dieting or body shaming.

OpenAI said users who do not want their age to be predicted can choose to verify their age, after which age prediction will no longer be applied to their account. Users can also control whether their data is used to improve OpenAI’s models through account data settings.
