Artificial intelligence is a lot like that friend who constantly needs to charge their phone, but on a global, grid-straining scale. In 2024, data centers worldwide drew roughly 415 terawatt-hours of electricity — around 1.5% of global demand — and the International Energy Agency expects that figure to more than double by 2030, with AI the biggest driver. Because apparently, that's where we are now.
This raises a rather inconvenient question: Can AI become more powerful without needing to plug into, well, everything?
Researchers at Tufts University School of Engineering just dropped the mic on that question. They've cooked up a new AI method that could use 100 times less energy than current systems. Oh, and it's also more accurate for certain tasks. Which, if you think about it, is both impressive and slightly terrifying.
Smarter, Not Harder
The secret sauce is called neuro-symbolic AI. Think of it as combining the gut instincts of a neural network (that's the 'neuro' part, like how your brain learns patterns) with the methodical logic of an accountant (that's the 'symbolic' part, using rules and categories). It’s similar to how humans break down complex problems: a little intuition, a lot of step-by-step thinking.
This isn't about making ChatGPT write even more convincing poetry. Matthias Scheutz's lab is focused on robots that work with us, specifically visual-language-action (VLA) models. These are the systems that let a robot look at a scene, understand what you're telling it, and then actually do something, like stack blocks.
Now, stacking blocks sounds simple. But for a robot, it's a minefield of potential errors. Shadows can confuse it. A slightly misaligned block can throw off the whole operation. It’s the robot equivalent of a chatbot confidently giving you a completely made-up answer.
But symbolic reasoning changes the game. It lets the robot use general rules — like "blocks have a center of mass" or "gravity exists" — leading to more reliable planning and way less trial and error. Scheutz explains that traditional VLA models rely purely on statistics, which can lead to those hilarious (or frustrating) errors. A neuro-symbolic VLA, however, uses rules to limit the guesswork, finding solutions much faster.
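The core idea is simple enough to sketch in a few lines. This is an illustrative toy, not the Tufts implementation: the world state, the `clear`/`legal_moves` rules, and the random "neural" scorer are all stand-ins. The point is the division of labor — symbolic rules veto impossible moves first, and the learned policy only ranks what's left.

```python
import random

# Hypothetical block world: maps each block to what it sits on (None = table).
state = {"A": None, "B": "A", "C": None}   # B is stacked on A

def clear(block, state):
    """Symbolic predicate: a block is clear if nothing rests on it."""
    return block not in state.values()

def legal_moves(state):
    """Symbolic rules: only move a clear block onto a clear block
    (or the table), and never onto itself."""
    blocks = list(state)
    moves = []
    for b in blocks:
        if not clear(b, state):
            continue
        for target in blocks + [None]:
            if target != b and (target is None or clear(target, state)):
                moves.append((b, target))
    return moves

def neural_scores(moves):
    """Stand-in for a learned policy: random preferences."""
    return {m: random.random() for m in moves}

# Neuro-symbolic selection: the scorer only ever sees rule-approved moves,
# so the robot can't "hallucinate" a physically impossible action.
moves = legal_moves(state)
best = max(moves, key=neural_scores(moves).get)
print(best)
```

Note what the rules buy you: a purely statistical policy has to *learn*, from many failures, that you can't grab a block with another block on top of it, while here that option is never on the menu in the first place.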
In tests, the new system tackled the classic Tower of Hanoi puzzle, succeeding 95% of the time. Traditional VLA models managed a paltry 34%. For a more complex version? The neuro-symbolic system still hit 78%, while its conventional counterparts failed every single time. Ouch.
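Tower of Hanoi is exactly the kind of task where explicit rules crush statistics: the whole puzzle collapses into one classic recursion (shown here as a generic textbook solution, not the lab's code).

```python
def hanoi(n, src="A", aux="B", dst="C"):
    """Return the list of (from, to) moves solving n disks from src to dst."""
    if n == 0:
        return []
    return (hanoi(n - 1, src, dst, aux)   # clear the n-1 smaller disks aside
            + [(src, dst)]                # move the largest disk
            + hanoi(n - 1, aux, src, dst))  # restack the smaller disks on top

print(len(hanoi(3)))  # 7 moves: always 2**n - 1
```

A system that has internalized this rule can't fail the puzzle; a system pattern-matching on pixels and tokens can, and per the Tufts results, usually does.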
And here's the kicker: Training time plummeted from over a day and a half to a mere 34 minutes. Energy consumption during training? One percent of what conventional models needed. During operation? Five percent. Let that satisfying number sink in.
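Back-of-the-envelope, those figures work out like this (numbers from the article; "over a day and a half" is taken here as roughly 36.5 hours, an assumption):

```python
# Reported figures: ~36.5 h of conventional training vs. 34 minutes.
conventional_training_min = 36.5 * 60
neuro_symbolic_training_min = 34

speedup = conventional_training_min / neuro_symbolic_training_min
train_energy_factor = 1 / 0.01   # 1% of conventional energy -> 100x less
run_energy_factor = 1 / 0.05     # 5% during operation -> 20x less

print(f"~{speedup:.0f}x faster training, "
      f"{train_energy_factor:.0f}x less training energy, "
      f"{run_energy_factor:.0f}x less energy at runtime")
```

So "100 times less" isn't marketing rounding on the training side, and even the worse number — operation — is a twentyfold cut.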
Scheutz points out that current LLMs, like ChatGPT, are basically just predicting the next word or action. This can be wildly inefficient. An AI summary on Google, for instance, can use 100 times more energy than simply showing you a list of websites. We're essentially using a supercomputer to tell us what time it is.
As AI continues its inevitable march into every corner of our lives, the demand for bigger data centers — each needing more power than some small cities — is only going to grow. The researchers believe that our current energy-hungry AI models aren't sustainable. Hybrid neuro-symbolic AI, however, might just be the grown-up solution we need to keep the lights on.