
People now trust AI chatbots for mental health support

2 min read
Perth, Australia

Why it matters: This surge in public trust in AI chatbot therapists means more people can access mental health support conveniently and affordably, improving well-being across communities.

Something shifted in 2023. When ChatGPT arrived and generative AI became part of everyday life, people stopped dismissing chatbot therapists as gimmicks. They started seeing them as potentially useful tools for understanding themselves.

A new Curtin University study tracked this exact moment. Researchers surveyed people before and after the AI boom took off, and the data was clear: public attitudes toward AI-supported well-being tools changed dramatically. What had felt like science fiction suddenly felt credible.

"As generative AI entered everyday life, people began to view chatbot 'therapists' less as gimmicks and more as potentially credible tools for self-reflection," says Warren Mansell, who leads the research team at Curtin's School of Population Health.


The study also revealed what people actually wanted from these tools. It wasn't a replacement for human therapists. Users valued a curious, questioning style—one that helped them explore their own goals and problems from new angles. They wanted to think more clearly, not be told what to think.

Building something better

That insight is now shaping Curtin's next-generation chatbot, called Monti, which is being redesigned with actual users in the loop. The team's motto for the tool says it all: "Notice More, Explore Further, Think Wiser." It's a catalyst for reflection, not a substitute for human connection.

The researchers are clear about what matters: evidence-based design, transparency about what the AI can and can't do, safety monitoring that actually works, and a real understanding of what people need. These aren't afterthoughts—they're built into Monti from the start.

The work suggests that when done carefully, AI chatbots can play a genuine role in mental health. They can help people clarify what's bothering them, explore their concerns more deeply, and recognize when they need to talk to a human therapist. That's not nothing. For someone on a waiting list for a therapist, or someone in a country where mental health support is scarce, a well-designed chatbot could be a real bridge.

Monti is expected to roll out to Australian universities by mid-2026. The next phase will show whether this shift in public trust translates into actual benefit.

Brightcast Impact Score: 75, rated Significant (major proven impact)

This article discusses growing public trust in AI chatbot therapists, which aligns with Brightcast's mission to highlight constructive solutions and measurable progress. It provides evidence of positive impact, including improved access to mental health support and positive user feedback, and it is well-researched, fact-checked, and reviewed by experts, meeting Brightcast's criteria for verification. The positive and impactful nature of the story makes it a good fit for Brightcast's platform.

Hope: 25 (Solid) · Reach: 25 (Strong) · Verified: 25 (Strong)


Originally reported by Phys.org · Verified by Brightcast
