
ChatGPT now quietly guesses your age to protect teens

OpenAI's new age prediction system aims to better identify underage ChatGPT users, moving beyond self-declared ages to automated safeguards shaped by behavior.

Why it matters: The system is designed to protect minors on ChatGPT without cutting off adult access, replacing easily faked self-declared ages with automated safeguards.

OpenAI is rolling out a system that tries to figure out how old you are without asking. If it thinks you're under 18, it automatically locks down certain content — graphic violence, sexual roleplay, extreme beauty standards, anything the company's child development experts flagged as higher-risk for younger brains.

The why is straightforward: regulators are pushing hard on AI companies to prove they're protecting minors, and self-reported age has always been a joke. Nobody's stopping a 14-year-old from clicking "I'm 18." So OpenAI built a model that watches for behavioral patterns instead — how long your account's been around, when you typically use it, what you ask for.

It's the kind of solution that feels both reasonable and slightly uncomfortable at once.

How it actually works

The system doesn't rely on a single signal. It looks at account age, usage patterns, activity hours, and anything you've already told the platform about yourself. When the signals are unclear, ChatGPT defaults to the safer experience — fewer restrictions lifted. The company frames this as "adaptive rather than definitive," which is a careful way of saying the algorithm will sometimes get it wrong.
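The decision logic described above — combine several weak signals, and fall back to the restricted experience whenever they're ambiguous — can be sketched in a few lines. This is purely illustrative: the signal names, thresholds, and `AccountSignals` structure are assumptions for the sketch, not OpenAI's actual model.

```python
# Illustrative sketch of a multi-signal, default-to-safe age gate.
# All signals and thresholds here are hypothetical.
from dataclasses import dataclass
from typing import Optional


@dataclass
class AccountSignals:
    account_age_days: int          # how long the account has existed
    stated_age: Optional[int]      # self-declared age, if the user gave one
    adult_confidence: float        # model score in [0, 1] that the user is 18+


def experience_for(signals: AccountSignals, threshold: float = 0.85) -> str:
    """Return 'unrestricted' only when the adult signal is strong and
    consistent; anything ambiguous defaults to the safer experience."""
    # Self-declared minors are always restricted, regardless of other signals.
    if signals.stated_age is not None and signals.stated_age < 18:
        return "restricted"
    # Lift restrictions only on a confident, established account.
    if signals.adult_confidence >= threshold and signals.account_age_days >= 30:
        return "unrestricted"
    # Unclear signals fall through to the restricted default.
    return "restricted"
```

The key design choice the article describes is in the last line: a brand-new account or a middling confidence score doesn't get the benefit of the doubt; it gets the teen experience until the adult can verify otherwise.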

It will. Automated systems do. OpenAI knows this, which is why they've built in a reversal mechanism: if you're an adult flagged incorrectly, you can verify your age through a selfie-based check via Persona. No document upload required. You get your unrestricted access back.

The safeguards themselves are specific. Teens on the platform won't see content depicting self-harm, graphic violence, sexual or violent roleplay, or viral challenges known to encourage risky behavior. The list reflects actual research on how teen brains differ from adult brains — they're genuinely worse at risk perception and impulse control, which is neuroscience, not moral judgment.

What comes next

This isn't OpenAI's first attempt at teen protection. They've already rolled out parental controls and assembled an external council of experts to advise on decisions affecting vulnerable users. The age prediction system is the next layer.

The EU rollout happens in the coming weeks, with OpenAI adjusting timelines to meet regional rules. The company says it'll keep refining the model as it gathers more data, watching both for accuracy improvements and attempts to bypass the system.

What's worth noticing: this is a company moving from "take our word for it" to "we're building the infrastructure to actually enforce what we say." It's not perfect. It will make mistakes. But the direction — toward systems that actively protect rather than passively hope — is the kind of incremental work that rarely makes headlines but shapes whether platforms are actually safe for the people using them.

Originally reported by Interesting Engineering · Verified by Brightcast
