
AI's "thinking" models use 30 times more energy to run

San Francisco, United States

Why it matters: This analysis can help developers and companies choose more energy-efficient AI models, reducing AI's environmental impact for the planet and future generations.

The latest AI models that work through problems step-by-step are powerful—and thirsty. Researchers at Hugging Face and Salesforce have quantified just how much: reasoning models use 30 times more energy on average than traditional AI, with some spiking to 700 times higher when their reasoning mode activates.

The reason is straightforward. These models don't just spit out answers. They generate text that mimics thinking aloud, working through a problem the way you might sketch notes on paper. That inner monologue—the reasoning chain itself—gets output as tokens, each one requiring computational energy. A model might generate hundreds of times more words to arrive at an answer, which directly translates to a proportional jump in power consumption.
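That linear relationship between output tokens and energy can be sketched with a back-of-the-envelope calculation. The per-token energy figure and the 300x token multiplier below are illustrative assumptions, not numbers from the study:

```python
# Back-of-the-envelope: generation energy scales roughly linearly with
# the number of output tokens. The per-token figure is a hypothetical
# placeholder, not a measurement from the Hugging Face/Salesforce study.
ENERGY_PER_TOKEN_WH = 0.0002  # assumed watt-hours per generated token

def generation_energy_wh(output_tokens: int) -> float:
    """Estimate energy for one response, assuming linear scaling."""
    return output_tokens * ENERGY_PER_TOKEN_WH

concise = generation_energy_wh(50)          # a direct, short answer
reasoning = generation_energy_wh(50 * 300)  # assume 300x more tokens in a reasoning chain

print(f"concise: {concise:.3f} Wh, reasoning: {reasoning:.1f} Wh")
print(f"ratio: {reasoning / concise:.0f}x")
```

Whatever the true per-token cost, it cancels out of the ratio: 300 times the tokens means roughly 300 times the energy.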

This matters because the energy cost of AI is no longer theoretical. A June study found that reasoning models produce up to 50 times more CO₂ than their efficient counterparts. As AI systems scale globally, that difference compounds.


Measuring what we can't ignore

Hugging Face launched the AI Energy Score project to create a standardized way to measure this. They run each model through 10 tasks on the latest GPUs, tracking watt-hours consumed per 1,000 queries. It's the kind of transparency that forces a reckoning: you can't optimize what you don't measure.
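The benchmark's unit is just measured energy normalized by query count. A minimal sketch of that bookkeeping, with made-up measurements (the function name and sample numbers are assumptions for illustration):

```python
# Normalize a measured energy draw to the AI Energy Score reporting unit:
# watt-hours consumed per 1,000 queries. Sample values are invented.
def wh_per_1000_queries(total_wh: float, num_queries: int) -> float:
    """Scale a run's total energy to a per-1,000-query figure."""
    return total_wh * 1000 / num_queries

# e.g. a benchmark run of 250 queries that drew 12.5 Wh in total
print(wh_per_1000_queries(12.5, 250))  # 50.0
```

Normalizing this way lets models benchmarked on runs of different sizes be compared on a single leaderboard.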

"We should be smarter about the way that we use AI," said Hugging Face research scientist Sasha Luccioni. "Choosing the right model for the right task is important." That's the pragmatic insight. Not all problems need a reasoning model. Some don't warrant the energy cost.

But here's the catch. The models that stayed below 500 grams of CO₂ equivalent per 1,000 questions couldn't crack 80 percent accuracy. The powerful reasoning models? They got the hard questions right. Maximilian Dauner, a researcher at Munich University of Applied Sciences (Hochschule München), framed the tension clearly: better performance and lower emissions are currently pulling in opposite directions.

So we're at an inflection point. The research is making the trade-offs visible. Whether that visibility translates into different choices—using lighter models where they suffice, reasoning models only when necessary—depends on what we decide matters more.

Brightcast Impact Score: 75 (Significant: major proven impact)

This article highlights the growing energy demands of AI models that use reasoning to solve problems, which is an important issue to address as AI becomes more prevalent. While the article does not focus on a specific "good" act, it provides constructive information about the need to be more efficient in how we use AI to mitigate its environmental impact. The article is well-researched and provides quantitative data on the energy usage of different AI models.

Hope: 20 (Solid) · Reach: 25 (Strong) · Verified: 30 (Outstanding)



Originally reported by Singularity Hub · Verified by Brightcast
