OpenAI is building a system to estimate whether a ChatGPT user is under 18. If it thinks you are, it will change how the product works.
Minors will get stricter safety controls, filtered content, and new parental supervision options like blackout hours, chat history limits, and account linking. If the system isn’t sure, it will assume you’re a minor until proven otherwise.
The stated goals are user safety and preparation for tougher global regulation, but these safety features point to something bigger: AI tools are starting to adjust themselves based on who you are, not just what you type.
This is a design shift with serious implications. Age is just the start. You can expect AI systems to begin adapting their tone, capabilities, and behavior based on all kinds of inferred traits. Some will improve user experience; others will raise serious privacy and bias concerns.
Think about your own products and services. Are they one-size-fits-all? Or are they age-aware, role-aware, risk-aware? If not, they will be. This is where personalization, governance, and compliance meet.
As for OpenAI, the technical challenges here are real. Age estimation from behavioral signals is notoriously difficult. False positives and false negatives each carry reputational and legal risk. Ask yourself: when your systems start guessing who your users are, what will they do with that guess?
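To make that tradeoff concrete, here is a minimal, purely hypothetical sketch of an "assume minor when unsure" gate. The function name, threshold, and tier labels are my own illustrations, not OpenAI's design; the only thing taken from the announcement is the asymmetric default.

```python
def age_gate(p_adult: float, adult_threshold: float = 0.9) -> str:
    """Return an experience tier given an estimated probability that
    the user is an adult (produced by some upstream model).

    The policy is deliberately asymmetric: only a high-confidence
    adult estimate unlocks the adult experience; everything else
    falls back to the restricted minor experience.
    """
    if not 0.0 <= p_adult <= 1.0:
        raise ValueError("p_adult must be a probability in [0, 1]")
    return "adult" if p_adult >= adult_threshold else "minor"

# A high threshold trades more false negatives (adults treated as
# minors, who must then prove their age) for fewer false positives
# (minors slipping into the adult experience).
print(age_gate(0.95))  # → adult
print(age_gate(0.60))  # → minor: uncertain, so default to the stricter tier
```

The interesting design choice is not the classifier but the threshold: where you set it encodes whose error you are willing to tolerate, which is exactly the reputational and legal question above.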
Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it. This work was created with the assistance of various generative AI models.