Published: 19 December 2025
Updated: 9 hours ago
Reliability: ✓ Verified sources
I update this article as soon as new information becomes available.
ChatGPT is tuning into teens. OpenAI has announced "U18 principles" to adapt its AI's behavior to young users. The goal? To reduce the exposure of 13-17 year olds to sensitive content. While the intention is commendable, it raises questions of surveillance and subtle influence. My analysis: is this real progress, or a facade masking deeper issues?
🛡️ OpenAI’s “U18 Principles”: A Shield for Teenagers?
OpenAI has unveiled its “U18 principles”, a series of rules aimed at protecting teenagers on ChatGPT. The idea is to configure the model to interact differently with young users, avoiding potentially shocking topics and encouraging a discourse more suited to their age. These principles, integrated into OpenAI’s “Model Spec”, extend to group conversations, ChatGPT Atlas and even the Sora application, covering a wide range of potential interactions.
OpenAI relies on four main pillars:
- Prioritize the safety of adolescents, even at the expense of other objectives.
- Encourage offline relationships and reliable resources.
- Adopt a respectful and age-appropriate tone, avoiding any condescension.
- Be transparent and set clear expectations.
The company says it worked with outside experts, including the American Psychological Association, to develop these rules. A mark of seriousness, certainly, but one that does not dispel every question.
🤔 Parental Control 2.0: A Solution or a Band-Aid?
While OpenAI has already implemented parental controls, these new rules specifically target the 13-17 age group and "difficult or high-stakes situations". From an engineering perspective, this implies finer-grained content filtering and dynamic adaptation of conversational style. But this is where the problem lies: how do you precisely define what is "sensitive" for a teenager? And who gets to decide?
The risk is sliding into a form of paternalistic censorship that deprives young people of opportunities to learn about and discuss important topics. For example, a teenager with questions about sexuality could be redirected to external resources instead of getting a nuanced, age-appropriate answer from ChatGPT. The intention is good, but the implementation raises ethical questions.
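To make the engineering question concrete, here is a minimal sketch of what age-conditioned policy routing could look like. Everything in it is an assumption for illustration: the topic labels, the age thresholds, and the `route_response` function are invented; OpenAI has not published how its routing actually works.

```python
from dataclasses import dataclass

# Hypothetical topic set — who defines this list is exactly the
# question raised above.
SENSITIVE_TOPICS = {"self_harm", "sexuality", "violence"}

@dataclass
class UserContext:
    age: int

def route_response(topic: str, user: UserContext) -> str:
    """Decide how the assistant handles a topic for a given user (sketch)."""
    if user.age < 13:
        return "refuse"                 # below the service's minimum age
    if user.age <= 17 and topic in SENSITIVE_TOPICS:
        return "redirect_to_resources"  # U18 path: point to external help
    return "answer_normally"            # adult path: full, nuanced answer

print(route_response("sexuality", UserContext(age=15)))  # redirect_to_resources
print(route_response("sexuality", UserContext(age=25)))  # answer_normally
```

The sketch makes the editorial point visible in code: a single hard-coded set decides which subjects a 15-year-old may discuss, and the teenager in the example above lands on the redirect branch rather than getting an answer.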
Key Point: Adapting ChatGPT’s discourse to adolescents raises questions of censorship and the definition of what is considered “sensitive”.
💸 Economic Implications: Towards a Market for Educational AI?
This initiative could also have significant economic implications. By positioning itself as a “safe” tool for teenagers, OpenAI could attract a new audience and develop a value chain focused on education and secure entertainment. Imagine premium subscriptions offering access to filtered and age-appropriate educational content, or partnerships with schools and educational institutions.
However, this potential market comes with considerable challenges at inference time. How can OpenAI ensure the model stays capable and relevant while respecting the constraints imposed by the "U18 principles"? An overly restrictive model risks becoming bland and useless, while an overly permissive one could be coaxed into breaking the rules. The balance is delicate.
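That trade-off can be illustrated with a toy safety-filter threshold sweep. The scores, labels, and threshold values below are fabricated for the sketch; they are not real moderation data.

```python
# Each sample: (moderation score, whether the content is actually sensitive).
# Fabricated numbers, purely to illustrate the strictness trade-off.
samples = [
    (0.95, True), (0.80, True), (0.60, False),
    (0.40, True), (0.30, False), (0.10, False),
]

def evaluate(threshold: float) -> tuple[int, int, int]:
    """Count correct blocks, over-blocks, and misses at a given threshold."""
    blocked_sensitive = sum(1 for s, y in samples if s >= threshold and y)
    over_blocked = sum(1 for s, y in samples if s >= threshold and not y)
    missed = sum(1 for s, y in samples if s < threshold and y)
    return blocked_sensitive, over_blocked, missed

for t in (0.2, 0.5, 0.9):
    tp, fp, fn = evaluate(t)
    print(f"threshold={t}: blocked_sensitive={tp}, over_blocked={fp}, missed={fn}")
```

Lowering the threshold (the "restrictive" model) blocks all sensitive content but also over-blocks harmless questions; raising it (the "permissive" model) lets sensitive content through. No threshold in this toy is simultaneously safe and unobtrusive, which is the delicate balance the paragraph above describes.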
🔮 Projection and Risks
Optimistic Scenario: In the near future, OpenAI’s “U18 principles” become an industry standard. Other companies adopt similar approaches, creating a digital ecosystem that is safer and more suited to adolescents. AI becomes a personalized and benevolent learning tool, helping young people develop their critical thinking skills and navigate a complex world.
Pessimistic Scenario: The "U18 principles" turn into an instrument of social control. Filtering algorithms grow ever more sophisticated, locking teenagers into informational bubbles and limiting their access to alternative perspectives. The transparency OpenAI promises evaporates, giving way to subtle, insidious manipulation.