Artificial intelligence is devouring electricity at a staggering rate. Data centers powering AI systems consumed roughly 415 terawatt-hours of electricity in 2024, an amount equal to about 10% of total U.S. electricity generation, and demand is projected to double by 2030. Now, a team at Tufts University has demonstrated a fundamentally different approach that could cut AI energy consumption by a factor of up to 100 while actually improving how well robots perform tasks.

The research, led by Matthias Scheutz, Karol Family Applied Technology Professor at Tufts' School of Engineering, will be presented at the International Conference on Robotics and Automation in Vienna this May.

The Hybrid Approach

The breakthrough centers on a technique called neuro-symbolic AI. Instead of relying solely on massive neural networks trained on enormous datasets, the system combines pattern recognition with structured logical reasoning — more closely mirroring how humans actually solve problems.

The researchers focused on vision-language-action (VLA) models, a class of AI systems used in robotics. VLA models take in visual data from cameras, process natural language instructions, and translate everything into physical actions, controlling a robot's wheels, arms, or fingers to complete real-world tasks.
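The input/output contract described above can be sketched in miniature. This is an illustrative stand-in only: real VLA models are large neural networks, and `MotorCommand`, `vla_policy`, and the fixed plan below are invented here to show the shape of the interface, not the Tufts system.

```python
# Schematic of a vision-language-action (VLA) pipeline: image + instruction
# in, low-level motor commands out. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class MotorCommand:
    joint: str    # which actuator to drive
    delta: float  # how far to move it, in radians

def vla_policy(camera_image: bytes, instruction: str) -> list[MotorCommand]:
    """Maps (camera image, language instruction) -> motor commands.

    A real model would encode the image and text and decode actions;
    this stand-in returns a fixed plan just to illustrate the contract.
    """
    if "stack" in instruction.lower():
        return [MotorCommand("shoulder", 0.3),
                MotorCommand("gripper", -0.5)]
    return []

commands = vla_policy(b"\x00fake-pixels", "Stack the red block on the blue one")
print([c.joint for c in commands])  # ['shoulder', 'gripper']
```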

Where Standard AI Struggles

Conventional VLA systems learn primarily through trial and error, requiring enormous amounts of data and computing power. If a robot is asked to stack blocks into a tower, for example, it must analyze the scene, identify each block, and figure out placement through repeated attempts. Shadows can confuse it about a block's shape. A misplacement can collapse the structure, requiring the system to start over.

These failures mirror the well-known problems of large generative models: the same statistical guessing that leads chatbots to fabricate legal cases and image generators to draw hands with extra fingers.

Rules Meet Learning

Symbolic reasoning offers a complementary strategy. Instead of relying only on patterns gleaned from data, the system applies explicit rules and abstract concepts like shape, balance, and spatial relationships. This lets it plan more effectively and avoid the brute-force trial and error that wastes time and energy.

"A neuro-symbolic VLA can apply rules that limit the amount of trial and error during learning and get to a solution much faster," Scheutz explained. "Not only does it complete the task much faster, but the time spent on training the system is significantly reduced."

Dramatic Results

The team tested their system using the Tower of Hanoi puzzle, a classic planning problem. The neuro-symbolic system achieved a 95% success rate, compared to just 34% for standard approaches. On a more complex version the system had never encountered, it still succeeded 78% of the time — while traditional models failed every single attempt.
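Tower of Hanoi is a natural benchmark here because its rules admit an exact recursive plan: a symbolic planner can derive the provably minimal move sequence from the rules alone, with no trial and error. A minimal sketch of that classic recursion (the standard textbook algorithm, not the team's code):

```python
def hanoi(n, src, aux, dst, moves=None):
    """Classic recursive Tower of Hanoi plan: move n disks from src to dst.

    The rules alone determine an optimal sequence of 2**n - 1 moves.
    """
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, src, dst, aux, moves)  # clear the top n-1 disks onto the spare peg
    moves.append((src, dst))            # move the largest remaining disk
    hanoi(n - 1, aux, src, dst, moves)  # restack the n-1 disks on top of it
    return moves

plan = hanoi(3, "A", "B", "C")
print(len(plan))   # 7, i.e. 2**3 - 1, the provably minimal number of moves
print(plan[0])     # ('A', 'C')
```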

Training time dropped from over 36 hours to just 34 minutes. Energy consumption during training fell to 1% of what conventional systems require, and operational energy use dropped to just 5% of standard approaches.
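A quick sanity check on what those figures imply, using only the numbers above (and noting that "over 36 hours" makes the speedup a lower bound):

```python
# Back-of-the-envelope factors implied by the reported figures.
training_before_min = 36 * 60        # 36 hours expressed in minutes
training_after_min = 34              # reported neuro-symbolic training time

speedup = training_before_min / training_after_min
print(round(speedup))                # 64 -> roughly a 64x training speedup

training_energy_factor = 100 / 1     # training energy fell to 1% of conventional
operational_energy_factor = 100 / 5  # operational energy fell to 5% of conventional
print(training_energy_factor, operational_energy_factor)  # 100.0 20.0
```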

Beyond the Lab

The implications extend well beyond puzzle-solving. As AI-powered robots increasingly enter warehouses, hospitals, and homes, the ability to learn faster using far less energy could reshape the economics of the entire industry. Scheutz compared the situation to everyday tools: much of the energy today's AI systems burn is out of all proportion to the tasks they perform.

With data centers straining power grids worldwide, a 100-fold reduction in energy use isn't just a technical achievement — it could be a turning point in making AI sustainable for the long haul.