
TechStatic Insights

Daily AI + IT news, trends, and hot topics.

📰 News Briefing

OpenAI and Anthropic will start predicting when users are underage


What Happened

OpenAI and Anthropic have both updated their policies on how their chatbots interact with minors. OpenAI has revised its guidelines for how ChatGPT should handle users between the ages of 13 and 17, while Anthropic is working on a new way to identify and remove users who are under 18.

According to OpenAI, ChatGPT's models have been updated to handle conversations with minors more carefully, including a more nuanced approach to identifying potentially harmful or deceptive responses from the model.

OpenAI also said it is strengthening its safety measures for users under 18, including using machine learning to spot potential signs of underage use and cooperating with law enforcement to identify and prosecute offenders.
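Neither company has published technical details, but a probability-based classifier over behavioral signals is one plausible shape an age-prediction system could take. The sketch below is purely illustrative: the features, toy training data, threshold, and the `route_user` helper are all assumptions, not how OpenAI or Anthropic actually do it.

```python
# Hypothetical sketch of an age-signal classifier (not either company's real system).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy per-conversation features: [school mentions, slang score, avg message length,
# self-reported age under 18]. Label: 1 = likely minor, 0 = likely adult.
X_train = np.array([
    [3, 0.8, 40, 1],
    [0, 0.1, 120, 0],
    [2, 0.6, 55, 0],
    [4, 0.9, 30, 1],
    [1, 0.2, 90, 0],
    [5, 0.7, 35, 1],
])
y_train = np.array([1, 0, 0, 1, 0, 1])

model = LogisticRegression().fit(X_train, y_train)

def route_user(features, threshold=0.7):
    """Map the predicted probability of a minor to a policy decision."""
    p_minor = model.predict_proba([features])[0][1]
    if p_minor >= threshold:
        return "apply_teen_safeguards"  # e.g. stricter content filters or removal
    return "default_experience"

print(route_user([4, 0.85, 32, 1]))  # high-probability signals -> teen safeguards
```

The key design point such a system would face is the threshold: set it too low and adults get misclassified and restricted, set it too high and minors slip through to the default experience.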

Why It Matters

These updates matter because they aim to shield children from harmful or inappropriate content. By predicting which users are likely minors, OpenAI can apply teen-specific safeguards and Anthropic can restrict access for users under 18, making both platforms safer.

Context & Background

How minors use AI chatbots is a growing concern across the tech industry. As these systems become more capable and more widely used, so does the potential for abuse and exploitation of children. The guideline updates from OpenAI and Anthropic are a response to that pressure.

What to Watch Next

Expect OpenAI and Anthropic to keep refining their safety measures for users under 18: improving the machine learning models that flag likely minors, and working with educators and parents to raise awareness of the risks minors face when using AI.


Source: The Verge – AI | Published: 2025-12-18