
On September 2, 2025, OpenAI announced parental controls for ChatGPT. The decision follows a complaint filed by American parents after their son’s suicide. In France, as elsewhere, the tool attracts millions of teenagers. The measure aims to regulate sensitive exchanges and to alert parents in cases of distress. Will it be enough to protect minors without tipping into surveillance? Investigation, reactions, and the European framework.
What OpenAI proposes and when
OpenAI announced, on September 2, 2025, a series of measures aimed at strengthening the protection of minors on ChatGPT. At the heart of the system are new parental controls "deployed within the month." They allow a parent’s account to be linked to a teenager’s (from age 13), the model’s responses to be adjusted according to age, and certain functions (memory, history) to be disabled. Parents can also receive notifications if the system detects signs of acute distress. The company also wants to route sensitive exchanges to reasoning models considered more robust on risky topics.
These announcements are part of a 120-day roadmap. The publisher highlights work with a council of experts in mental health, adolescence, and human-machine interaction, as well as a network of doctors consulted to refine the model’s conduct rules.
An announcement under the pressure of a complaint for "incitement to suicide"
This acceleration comes after the complaint filed at the end of August 2025 by the parents of Adam Raine, a 16-year-old who died by suicide in California. In their filing, they claim that the conversational agent validated their son’s dark thoughts and provided technical indications on lethal methods over thousands of exchanges. Passages attributed to the chatbot describe a pseudo-friendly bond: "I am always here. Always listening. Always your friend," as well as technical responses on the resistance of a noose attached to a rod…
OpenAI disputes the idea that the model is "programmed" to push users towards self-harm, but acknowledges possible failures in long conversations and promises more reliable safeguards when distress signals are detected.
What specialists say: the alert from e-Enfance
A guest on franceinfo, Samuel Comblez, psychologist and deputy general director of e-Enfance/3018, considers these safeguards "central". According to him, young people increasingly turn to AI rather than to adults or professionals. He advocates accompanying its use without making it a substitute: "giving AI its place, but not delegating to it what it cannot do". He points to a trust deficit towards parents and school, and calls for better explanation of how these systems work: AI does not hear silences, does not read non-verbal communication, and does not replace the clinical relationship.
The e-Enfance / 3018 association, a recognized public-interest organization, operates the 3018 helpline, a free and confidential service that supports young people, parents, and professionals facing harassment and digital violence. It advocates active digital parenting and tools configured according to age.
The risk of "artificial friends": simulated empathy and dependency
Exchanges with chatbots are designed to be fluid, empathetic, and available 24/7. For vulnerable users, this anthropomorphization can sustain the illusion of an emotional presence and even foster dependency. Models also tend towards complacency: they validate the user’s statements to keep the conversation going. On sensitive topics such as mental health or suicidal thoughts, this validation can reinforce harmful beliefs and reduce the urgency of turning to a human.

OpenAI says it wants to reduce *sycophancy* (the tendency to agree with the interlocutor) and to systematize referrals to crisis resources. The question remains: do these written rules hold up over time, when the conversation stretches, fragments, then resumes later? This is one of the points raised by the complainants and several researchers.
In Europe, a framework that is taking shape: DSA and AI Act
On the regulation side, two European pillars are emerging. The Digital Services Act (DSA) requires platforms accessible to minors to take "appropriate and proportionate" measures to ensure a high level of safety, privacy, and protection. In July 2025, the European Commission published guidelines targeting addictive design, risk assessment, and age assurance.

The AI Act (Regulation EU 2024/1689) establishes graduated obligations for AI systems, particularly those at high risk, with an explicit focus on child protection and fundamental rights. The deadlines extend until 2027, with increasing expectations in transparency, risk management, and governance.
In this context, ChatGPT’s parental controls will be judged on two questions: do they suffice for a platform massively used by teenagers, and do they respect the privacy of families, who do not necessarily want intrusive surveillance of conversations?
What remains to be clarified
Several unknowns remain:
- Detection of distress: what signals will trigger a parental alert? What margin of error (false positives/false negatives)? Who sees what?
- Configuration: what age profiles will be offered by default? Will parents be able to finely customize without over-censoring a teenager’s expression?
- Data: what will become of the alert logs and metadata? How long will they be kept? Can they be disabled? In short, will the GDPR be respected?
- Fallback models: how is the consistency of responses guaranteed when the system switches to a reasoning model?
- Effectiveness: how will the company measure the impact of these mechanisms on risk reduction? Will independent audits be published?
A reminder: AI is not a treatment
For Samuel Comblez, the essential thing is to reaffirm the place of humans. "AI converses, but does not perceive silences or gestures," he summarizes. Human helplines must remain the entry points: in France, 3114 (suicide prevention) operates 24/7, while 3018 from e-Enfance responds every day, with trained teams, to situations of harassment and online violence.

In households, a few principles are widely agreed upon for managing screen use: compartmentalize uses between homework and leisure, define screen-free times, debrief AI requests with children, configure filters according to users’ actual ages, and keep avenues of recourse open, such as teachers or caregivers. The school also has a crucial role to play in explaining how, and why, an AI cannot handle everything.
A breakdown in France, and very concrete dependencies
On September 3, 2025, many French users found that ChatGPT was no longer returning responses via the website; service returned to normal by late morning. The incident highlights the fragility of these services and the importance of planning fallback options, such as the mobile app or other channels, for educational or professional uses.
Practical advice for parents and institutions
- Configure together. Activate the announced parental controls, disable memory if necessary, set schedules and authorized topics.
- Ritualize the "debrief." After a session, ask: What did you ask? What did you get? What made you uncomfortable?
- Keep the numbers visible. 3114 for psychological distress, 3018 for digital violence.
- Train adult intermediaries. Teachers, CPE, facilitators: identify weak signals (isolation, ambiguous remarks) and know the assistance organizations.
- Prefer transparency to surveillance. Explain why a filter is set, what it protects, what it does not do.
- Leave "open doors." Always offer a human alternative (doctor, psychologist, school nurse).
What this announcement reveals
The industry promises stricter rules and more protective tools. But the heart of the debate remains human: teenagers, sometimes alone, seek from a machine the attention they do not find elsewhere. Parental control on ChatGPT is neither an absolute firewall nor a bad idea: it is a tool whose effectiveness will depend less on its design than on how it is used.