OpenAI has begun rolling out its age-prediction technology across ChatGPT consumer accounts. In a post on Monday, the company said that for users who haven’t already provided their age, ChatGPT will analyze behavior and other signals, such as how long an account has existed and when it is active, to estimate whether they are under 18.
If a user is incorrectly flagged as underage, OpenAI says they can verify their age through identity-verification service Persona, a process that requires a live selfie and a government-issued ID. OpenAI has also published a ChatGPT page that links directly to age verification.
The new system, first announced last September as part of broader changes aimed at younger users, introduces additional guardrails to the AI chatbot. OpenAI says these measures provide “safeguards to reduce exposure to sensitive or potentially harmful content.”
In a separate support page, the company explains how age prediction works and what types of content are restricted. These include graphic violence or gore; depictions of self-harm; viral challenges that could encourage risky or harmful behavior; sexual, romantic, or violent role-playing; and content promoting extreme beauty standards, unhealthy dieting, or body shaming.
OpenAI and other AI companies have faced increasing scrutiny and multiple lawsuits related to the deaths of teenagers who were engaging with chatbots, including ChatGPT. Over the past year, OpenAI has also introduced additional parental control features.
Age verification and age-based access restrictions are becoming more common across online platforms, driven in part by laws proposed or enacted in various countries and U.S. states. Earlier this month, gaming platform Roblox introduced mandatory age checks, while a new law in Australia imposes a ban on social media access for children under 16.
How well will ChatGPT’s age prediction work?
It remains unclear how accurately ChatGPT will be able to predict users’ ages across its roughly 800 million weekly active users, or how quickly the system will improve.
Age-verification technology, however, is generally more mature and accurate, according to Jake Parker, senior director of government relations at the Security Industry Association.
Modern facial recognition and face-analysis tools can perform exceptionally well when implemented correctly, Parker said.
“The U.S. government conducts ongoing technical evaluations through the National Institute of Standards and Technology’s Face Recognition Technology Evaluation and Face Analysis Technology Evaluation programs,” Parker said. “These programs show that at least the top 100 algorithms are more than 99.5% accurate for identity verification, while the leading age-estimation technologies exceed 95% accuracy.”
Parker added that more platforms are moving toward age verification and biometric scanning to ensure age-appropriate access.
Not a complete solution
Some experts caution that technology alone isn’t enough to protect young users. Kristine Gloria, chief operating officer of youth-focused nonprofit Young Futures, said strict monitoring has limitations.
“We know that generative AI presents real challenges, and families need support in navigating them,” Gloria said. “To truly move forward, platforms need to prioritize safety by design alongside engagement.”
Gloria emphasized that protecting children online requires transparency, accountability, and a strong commitment to digital literacy.
“Our goal should be to build environments where safety is foundational, rather than relying on technical quick fixes,” she said.