
AI's Energy Habit Is Bonkers. This New Method Could Slash It by 100x.

AI's power demands are soaring, consuming 415 TWh in the US alone. A new hybrid AI approach promises to drastically cut energy use and boost reliability, tackling AI's growing environmental footprint.

By Elena Voss, Brightcast
3 min read
Medford, United States

Why it matters: This breakthrough promises a sustainable future for AI, benefiting everyone by reducing energy consumption and making advanced technology more accessible.

Artificial intelligence is a lot like that friend who constantly needs to charge their phone, but on a global, grid-straining scale. In 2024, AI systems and their data centers in the U.S. alone guzzled 415 terawatt-hours of electricity. That's over 10% of the entire country's electricity use, and experts expect it to double by 2030. Because apparently, that's where we are now.

This raises a rather inconvenient question: Can AI become more powerful without needing to plug into, well, everything?

Researchers at Tufts University School of Engineering just dropped the mic on that question. They've cooked up a new AI method that could use 100 times less energy than current systems. Oh, and it's also more accurate for certain tasks. Which, if you think about it, is both impressive and slightly terrifying.


Smarter, Not Harder

The secret sauce is called neuro-symbolic AI. Think of it as combining the gut instincts of a neural network (that's the 'neuro' part, like how your brain learns patterns) with the methodical logic of an accountant (that's the 'symbolic' part, using rules and categories). It’s similar to how humans break down complex problems: a little intuition, a lot of step-by-step thinking.
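As a rough sketch of that division of labor (everything here — the names, the scores, the rule — is invented for illustration, not taken from the Tufts system): the "neuro" half ranks guesses, and the "symbolic" half vetoes any guess that breaks a hard rule.

```python
def neural_guess(candidates):
    """Stand-in for the 'neuro' half: rank candidate actions by a
    confidence score (in a real system, these come from a trained network)."""
    return sorted(candidates, key=lambda a: a["score"], reverse=True)

def passes_rules(action, rules):
    """The 'symbolic' half: hard rules that veto impossible actions."""
    return all(rule(action) for rule in rules)

def pick_action(candidates, rules):
    """Take the best-scoring guess that survives every rule."""
    for action in neural_guess(candidates):
        if passes_rules(action, rules):
            return action
    return None

# Hypothetical rule of thumb: never rest a block on a smaller one.
rules = [lambda a: a["base_size"] >= a["block_size"]]

candidates = [
    {"name": "big block on small block", "score": 0.9,
     "block_size": 3, "base_size": 1},
    {"name": "small block on big block", "score": 0.6,
     "block_size": 1, "base_size": 3},
]

# The rule vetoes the confident-but-wrong top guess.
print(pick_action(candidates, rules)["name"])  # → small block on big block
```

The point of the pattern: the network can be as confidently wrong as it likes, but a physically impossible action never makes it out the door.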

This isn't about making ChatGPT write even more convincing poetry. Matthias Scheutz's lab is focused on robots that work with us, specifically on visual-language-action (VLA) models. These are the systems that let a robot hear what you're telling it, see the scene in front of it, and then actually do something, like stack blocks.

Now, stacking blocks sounds simple. But for a robot, it's a minefield of potential errors. Shadows can confuse it. A slightly misaligned block can throw off the whole operation. It’s the robot equivalent of a chatbot confidently giving you a completely made-up answer.

But symbolic reasoning changes the game. It lets the robot use general rules — like "blocks have a center of mass" or "gravity exists" — leading to more reliable planning and way less trial and error. Scheutz explains that traditional VLA models rely purely on statistics, which can lead to those hilarious (or frustrating) errors. A neuro-symbolic VLA, however, uses rules to limit the guesswork, finding solutions much faster.
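To make the "way less trial and error" point concrete, here's a toy search problem (hypothetical — not the Tufts code): order five blocks into a stable stack. Blind guessing tests whole orderings; applying one symbolic rule *during* the search abandons doomed partial stacks early.

```python
from itertools import permutations

sizes = [1, 2, 3, 4, 5]

# Pure guessing: test complete orderings one by one.
blind_tries = len(list(permutations(sizes)))  # 120 full orderings

def rule_guided(stack, remaining, counter):
    """Depth-first search that applies the rule mid-search,
    abandoning any partial stack the rule already forbids."""
    counter[0] += 1
    if not remaining:
        return stack
    for b in sorted(remaining):
        if not stack or b < stack[-1]:  # symbolic rule: smaller goes on top
            found = rule_guided(stack + [b], remaining - {b}, counter)
            if found:
                return found
    return None

counter = [0]
solution = rule_guided([], set(sizes), counter)
print(solution)    # → [5, 4, 3, 2, 1]
print(counter[0])  # states explored — far fewer than the 120 blind tries
```

Same answer, a fraction of the wandering: that's the guesswork-limiting effect in miniature.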

In tests, the new system tackled the classic Tower of Hanoi puzzle, succeeding 95% of the time. Traditional VLA models managed a paltry 34%. For a more complex version? The neuro-symbolic system still hit 78%, while its conventional counterparts failed every single time. Ouch.
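For context, Tower of Hanoi is a puzzle with a clean recursive rule — move the smaller disks out of the way, move the biggest disk, restack — which is exactly the kind of symbolic structure a hybrid system can lean on instead of guessing. The classic textbook solver (not the study's code) looks like this:

```python
def hanoi(n, src, dst, spare, moves):
    """Move n disks from peg src to peg dst, recording each move."""
    if n == 0:
        return
    hanoi(n - 1, src, spare, dst, moves)  # clear the way
    moves.append((src, dst))              # move the largest disk
    hanoi(n - 1, spare, dst, src, moves)  # restack on top of it

moves = []
hanoi(3, "A", "C", "B", moves)
print(len(moves))  # → 7, the provable minimum (2**3 - 1)
```

A statistics-only model has to rediscover this structure from examples every time; a rule-aware one gets it for free.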

And here's the kicker: Training time plummeted from over a day and a half to a mere 34 minutes. Energy consumption during training? One percent of what conventional models needed. During operation? Five percent. Let that satisfying number sink in.

Scheutz points out that current LLMs, like ChatGPT, are basically just predicting the next word or action. This can be wildly inefficient. An AI summary on Google, for instance, can use 100 times more energy than simply showing you a list of websites. We're essentially using a supercomputer to tell us what time it is.

As AI continues its inevitable march into every corner of our lives, the demand for bigger data centers — each needing more power than some small cities — is only going to grow. The researchers believe that our current energy-hungry AI models aren't sustainable. Hybrid neuro-symbolic AI, however, might just be the grown-up solution we need to keep the lights on.




Originally reported by SciTechDaily · Verified by Brightcast
