Data centers worldwide are projected to consume roughly as much electricity in 2026 as Japan uses in a year. Researchers at Tufts University in Medford, Massachusetts, have now developed an AI system for robots that uses just one percent of the energy of conventional models while delivering substantially better results. The approach could shift the direction of AI research.
The Energy Problem of the AI Industry
Since the breakthrough of large language models in 2022, global electricity consumption by data centers has nearly doubled. Vision-Language-Action (VLA) models, used in robotics, are particularly energy-intensive. These systems combine image recognition, language understanding, and the control of physical motion in a single model. Training one takes more than 36 hours and consumes enormous computing power, yet does not deliver reliable results: on a standard benchmark, the Tower of Hanoi puzzle, conventional VLA models achieve a success rate of just 34 percent.
Rules Instead of Pattern-Matching
Matthias Scheutz, Karol Family Professor at Tufts University, and his team are pursuing a fundamentally different approach than the dominant AI companies: neurosymbolic AI. Rather than training a model on millions of examples to recognize patterns, the system pairs neural networks with explicit rules and abstract concepts. The approach draws on how humans solve complex problems: through step-by-step reasoning and applying learned principles to new situations.
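The difference from pattern-matching can be illustrated with the Tower of Hanoi benchmark mentioned above. A symbolic solver encodes a single explicit rule and therefore handles any puzzle size, including ones it has never encountered, with no training at all. The sketch below is purely illustrative and is not the Tufts team's system:

```python
# Illustrative sketch only: a purely symbolic Tower of Hanoi solver,
# NOT the Tufts team's neurosymbolic system. It shows why an explicit
# rule generalizes where a pattern-matcher trained on examples may not.

def hanoi(n, source, target, spare, moves=None):
    """Apply one explicit rule: to move n disks from source to target,
    first move the top n-1 disks to the spare peg, then move the largest
    disk to the target, then move the n-1 disks from the spare onto it."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, source, spare, target, moves)
    moves.append((source, target))   # move the largest remaining disk
    hanoi(n - 1, spare, target, source, moves)
    return moves

# The same rule solves a "novel" larger instance with no retraining:
print(len(hanoi(3, "A", "C", "B")))  # 7 moves (2**3 - 1)
print(len(hanoi(4, "A", "C", "B")))  # 15 moves (2**4 - 1)
```

A learned model must infer this structure from examples; the symbolic rule states it once and applies it at every scale, which mirrors the generalization advantage reported in the benchmarks below.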
The benchmark results are striking. The neurosymbolic system achieved a success rate of 95 percent on the Tower of Hanoi test, compared to 34 percent for conventional VLA models. Even more telling was the result on a harder variant with four disks that the system had never seen before. Standard models failed completely, with zero successes; the neurosymbolic system solved the novel task in 78 percent of attempts.
Training time fell from more than 36 hours to 34 minutes. Energy consumption during training was one percent of that required by VLA models. This is the most striking finding: less energy, less time, and still superior performance on tasks that require generalization.
Not a Universal AI Energy Fix
The results apply specifically to robot control systems, not to large language models like ChatGPT or Gemini. Scheutz is working on a defined application: AI systems that must manipulate physical objects in the real world. Whether the approach transfers to language processing or other domains remains open.
Tests so far have been conducted in controlled laboratory settings. How the system performs in messier real-world scenarios, with unknown surfaces, poor lighting, or physical disturbances, has not yet been studied. The authors acknowledge this as a significant limitation.
Still, the research demonstrates that the AI industry's dominant path of recent years (more data, more parameters, more computing power) is not the only viable route. Thomas Dietterich, professor emeritus at Oregon State University and a longtime advocate of hybrid AI approaches, sees neurosymbolic methods as the only realistic path toward making robot AI more efficient and robust. The approach is not new; it simply did not receive the attention it deserved for a long time.
Presentation in Vienna in May
In May 2026, Scheutz's team will present the results at the International Conference on Robotics and Automation in Vienna, with the work appearing in the conference proceedings. There is currently no commercial deployment. Whether the system moves beyond laboratory demonstration depends on further testing in more complex scenarios.