Artificial intelligence has a power problem. The International Energy Agency estimates that data centres, including AI workloads, consumed about 415 terawatt-hours of electricity in 2024, the equivalent of roughly a tenth of annual U.S. power generation, and demand is on track to double by 2030.
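For a sense of scale, here is a rough back-of-the-envelope check; the figure of about 4,200 TWh for annual U.S. generation is an assumption used for illustration, not a number from the IEA report:

```python
# Rough scale check: comparing data-centre electricity use with U.S. generation.
# The ~4,200 TWh figure for annual U.S. generation is an illustrative assumption.
data_centre_use_twh = 415          # IEA estimate for 2024
us_generation_twh = 4_200          # assumed approximate annual U.S. generation

share = data_centre_use_twh / us_generation_twh
print(f"Equivalent to {share:.0%} of annual U.S. generation")  # roughly 10%
```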

A team at Tufts University's School of Engineering thinks the fix is to make AI think more like people. Their new "neuro-symbolic" approach, unveiled in research to be presented at the International Conference on Robotics and Automation in Vienna next month, cuts energy use by a factor of up to 100 while actually improving accuracy on real tasks.

Two ways of thinking, working together

Modern AI is dominated by neural networks: huge statistical models that learn patterns from oceans of data. They are powerful but brute-force. Ask a robot to stack blocks and it might run thousands of trial-and-error attempts, getting confused by shadows or knocking the tower over while it tries to figure out what a "block" even is.

Symbolic AI is the older approach. It represents the world with explicit rules and concepts — shape, balance, "if A then B" — and reasons step by step. It struggles with messy real-world inputs, but it's lightweight and explainable.
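As a flavour of how that looks in practice, here is a minimal forward-chaining sketch; the facts and rules are invented for illustration and are not taken from the Tufts system:

```python
# Minimal sketch of symbolic "if A then B" reasoning by forward chaining.
# Facts and rules are illustrative placeholders, not from the Tufts model.
facts = {"clear(A)", "clear(B)", "on(A, table)"}
rules = [
    # (premises, conclusion): if every premise holds, add the conclusion
    ({"clear(A)", "clear(B)"}, "can_stack(A, B)"),
    ({"can_stack(A, B)"}, "plan_step(stack(A, B))"),
]

derived_new_fact = True
while derived_new_fact:            # apply rules until nothing new is derived
    derived_new_fact = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            derived_new_fact = True

print(facts)  # includes plan_step(stack(A, B)), derived step by step
```

Each conclusion follows explicitly from stated rules, which is what makes this kind of reasoning cheap to run and easy to inspect.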

The Tufts team, led by Karol Family Applied Technology Professor Matthias Scheutz, married the two. Their vision-language-action (VLA) model uses neural networks to see and listen, and symbolic reasoning to plan.
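In outline, the division of labour looks something like the sketch below: a neural module turns raw input into symbolic facts, and a symbolic layer only proposes actions whose preconditions hold. The function names and rules here are hypothetical placeholders, not the team's code:

```python
# Illustrative neuro-symbolic loop: neural perception, symbolic action filtering.
# perceive() and the rules below are hypothetical stand-ins for illustration.

def perceive(image):
    """Stand-in for a neural network that turns pixels into symbolic facts."""
    return {"on(red, table)", "on(blue, table)", "clear(red)", "clear(blue)"}

def legal_actions(facts, blocks=("red", "blue")):
    """Symbolic rules: only propose stacking moves whose preconditions hold."""
    actions = []
    for a in blocks:
        for b in blocks:
            if a != b and f"clear({a})" in facts and f"clear({b})" in facts:
                actions.append(("stack", a, b))
    return actions

facts = perceive(image=None)        # neural front end (stubbed out here)
candidates = legal_actions(facts)   # symbolic layer prunes impossible moves
print(candidates)                   # the learner only explores these options
```

Because the candidate list is already filtered by rules, the learning component never has to discover through expensive trial and error that the excluded moves were impossible.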

"A neuro-symbolic VLA can apply rules that limit the amount of trial and error during learning and get to a solution much faster," Scheutz said. "Not only does it complete the task much faster, but the time spent on training the system is significantly reduced."

Smarter robots, less wasted energy

The researchers tested their system on the Tower of Hanoi puzzle, a classic problem that punishes sloppy planning. They also gave it block-stacking tasks, the kind of physical reasoning that has historically tripped up VLA models.

The results: dramatically faster training, far fewer mistakes, and an energy bill up to 100 times lower than a comparable pure-neural system. Because the model uses symbolic constraints to rule out impossible moves, it doesn't have to learn — or guess — its way through every shadow and edge case.
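To see why that pruning helps, take the Tower of Hanoi rule that a larger disc can never sit on a smaller one. A learner that checks the rule before acting never spends compute on moves that are guaranteed to fail. The sketch below is a generic illustration of that constraint, not the team's implementation:

```python
# Tower of Hanoi: a symbolic legality check that prunes impossible moves.
# State maps each peg to its discs, listed from bottom to top (bigger number = larger disc).
state = {"A": [3, 2, 1], "B": [], "C": []}

def legal_moves(state):
    """Only allow moving a top disc onto an empty peg or onto a larger disc."""
    moves = []
    for src, src_discs in state.items():
        if not src_discs:
            continue
        disc = src_discs[-1]
        for dst, dst_discs in state.items():
            if dst != src and (not dst_discs or dst_discs[-1] > disc):
                moves.append((src, dst))
    return moves

print(legal_moves(state))   # [('A', 'B'), ('A', 'C')] from the start position
# Without the rule, a pure-neural learner would also try the illegal moves
# and have to discover their failure through costly trial and error.
```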

Why this matters beyond the lab

AI's appetite for electricity is now a serious infrastructure question. Data centres are reshaping power grids and water systems. Big tech firms are signing deals for nuclear reactors. Every gain in efficiency reduces that pressure.

It also widens who can use the technology. A model that runs on a fraction of the power can fit on a household robot, a hospital cart or a remote agricultural drone — places where dragging a hyperscale data centre around isn't an option.

And there's a quality argument. Pure neural systems hallucinate. Chatbots invent legal cases; image generators draw hands with seven fingers; robots place blocks that won't balance. Adding symbolic guardrails forces the system to check its work against rules about how the world actually behaves.
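The same guardrail idea works after the fact: a proposal from a neural policy can be vetoed if it breaks a known rule, instead of being executed and failing in the world. A hypothetical sketch:

```python
# Hypothetical guardrail: veto a neural proposal that violates a known rule.
def violates_stacking_rule(action, facts):
    """A block may only be stacked on a block that is currently clear."""
    _, _top, bottom = action
    return f"clear({bottom})" not in facts

facts = {"clear(red)"}                    # blue already has something on it
proposal = ("stack", "red", "blue")       # what the neural policy suggests
if violates_stacking_rule(proposal, facts):
    print("rejected: blue is not clear")  # re-plan instead of acting blindly
```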

A trend, not a one-off

Neuro-symbolic AI has been bubbling in research labs for years, but recent work — from MIT, DeepMind and now Tufts — suggests it is starting to deliver on its promise. By combining the pattern-matching power of neural networks with the discipline of explicit reasoning, researchers are sketching a path toward AI that is smaller, cheaper, more reliable and easier to trust.

That sounds like a better deal than building bigger data centres forever.