
OpenAI Fortifies ChatGPT's Safety with New Features for Psychological Distress and Parental Controls

ChatGPT's new safety features include automatic routing to 'Reasoning Models' for users in distress and parental controls for users aged 13 and over. These updates aim to provide better support in psychological crises and protect young users.


OpenAI is enhancing ChatGPT's safety features to better support users experiencing psychological distress and to protect young users. The updates, developed in consultation with more than 90 medical professionals, include a new conversation-routing system and parental controls.

The routing system automatically directs conversations showing signs of acute distress to 'Reasoning Models' such as GPT-5. These models, trained with OpenAI's Deliberative Alignment technique, spend longer reasoning before they respond, resist manipulative prompts more reliably, and adhere more closely to safety guidelines.
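In practice, routing of this kind amounts to a per-message classification step that decides which model handles the reply. The sketch below illustrates the idea in Python; the keyword heuristic, model names, and threshold are placeholder assumptions for illustration, not OpenAI's actual implementation.

```python
# Minimal sketch of distress-based conversation routing.
# The keyword heuristic, model names, and threshold below are
# illustrative assumptions, not OpenAI's real classifier.

DISTRESS_PHRASES = {"hopeless", "can't go on", "hurt myself", "no way out"}

def distress_score(message: str) -> float:
    """Toy stand-in for a trained classifier: the fraction of
    known distress phrases that appear in the message."""
    text = message.lower()
    hits = sum(1 for phrase in DISTRESS_PHRASES if phrase in text)
    return hits / len(DISTRESS_PHRASES)

def route(message: str, threshold: float = 0.25) -> str:
    """Send conversations with acute-distress signals to a slower,
    more deliberate reasoning model; everything else goes to the
    default fast model."""
    if distress_score(message) >= threshold:
        return "reasoning-model"  # e.g. a GPT-5-class reasoning model
    return "default-model"

if __name__ == "__main__":
    print(route("What's the weather like today?"))   # -> default-model
    print(route("I feel hopeless and can't go on"))  # -> reasoning-model
```

In a production system the keyword check would be replaced by a trained classifier evaluating the whole conversation, but the routing decision itself stays this simple: one model for routine traffic, a more safety-aligned one for conversations that raise flags.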

Within the next month, parents will be able to link their accounts with those of their children aged 13 and over, set age-appropriate rules for model behavior, disable specific features, and receive notifications if their child shows signs of acute psychological distress.

OpenAI's move comes in response to tragic incidents in which ChatGPT was linked to suicides, including the death of a 16-year-old in California and another case involving a man and his mother. While ChatGPT currently refers users expressing suicidal thoughts to crisis hotlines, OpenAI does not automatically notify police or other authorities, citing privacy concerns.

The new safety features are scheduled to roll out over the next 120 days. OpenAI frames them as a significant step in responsible AI development, aimed at better supporting users in psychological crises and protecting young users.
