For more than half a century, chip designers have lived with an uncomfortable trade-off: as memory components get smaller, they tend to leak more electricity, run hotter and behave less reliably. A new device out of Japan suggests that bargain may not be permanent.
In findings published April 28, 2026 by the Institute of Science Tokyo, researchers describe a memory device just 25 nanometers across, roughly one three-thousandth the thickness of a human hair, that performs better as it shrinks rather than worse. The result challenges a fundamental assumption that has shaped electronics engineering for decades and points toward a new generation of memory hardware that consumes less energy in smaller packages, exactly the direction the AI era needs hardware to go.
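The size comparison checks out arithmetically, assuming a typical human hair thickness of about 75 micrometers (a representative value, not a figure from the study):

```python
# Sanity check on the size comparison in the article:
# a human hair is roughly 75 micrometers thick (assumed typical value),
# while the device is 25 nanometers across.
hair_m = 75e-6     # 75 micrometers, in meters
device_m = 25e-9   # 25 nanometers, in meters

ratio = hair_m / device_m
print(round(ratio))  # 3000, i.e. the device is one three-thousandth of a hair's width
```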
The team built what is known as a ferroelectric tunnel junction, or FTJ, a type of memory first proposed in 1971. Instead of storing data the way conventional memory does — by holding electric charge in a transistor that has to be constantly refreshed — an FTJ encodes a 0 or a 1 in the orientation of electric polarization inside a special material. Flip the polarization, and you change how easily current passes through the device. The state holds without continuous power, which means the chip can sit idle without draining a battery.
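The storage scheme described above can be sketched as a toy model: the bit lives in the polarization direction, reading it measures how easily current tunnels through, and the state persists with no refresh. The numbers and class below are purely illustrative, not a physical simulation of the device:

```python
# Toy model of a ferroelectric tunnel junction (FTJ) memory cell.
# Illustrative only: resistance values and threshold are made up.
class ToyFTJ:
    LOW_RESISTANCE = 1e3   # ohms; polarization "up"   -> easy tunneling -> logical 1
    HIGH_RESISTANCE = 1e6  # ohms; polarization "down" -> hard tunneling -> logical 0

    def __init__(self):
        # The state is the polarization direction, which holds
        # without continuous power (no refresh cycle needed).
        self.polarization = "down"

    def write(self, bit):
        # Writing flips the polarization; nothing has to hold charge afterward.
        self.polarization = "up" if bit else "down"

    def read(self, volts=0.1):
        # Reading measures tunneling current, which depends on polarization.
        r = self.LOW_RESISTANCE if self.polarization == "up" else self.HIGH_RESISTANCE
        current = volts / r
        threshold = volts / (self.HIGH_RESISTANCE / 10)  # between the two current levels
        return 1 if current > threshold else 0

cell = ToyFTJ()
cell.write(1)
assert cell.read() == 1  # state survives with no refresh between write and read
cell.write(0)
assert cell.read() == 0
```

The contrast with conventional DRAM is the point of the sketch: a DRAM model would need a decay step that slowly loses the stored charge unless a refresh runs, whereas here the read simply inspects a state that does not decay.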
The idea is elegant. The execution has been brutal. For decades, FTJ devices ran into the same wall: as they were miniaturized, electrical current leaked through the boundaries between tiny crystals inside the material, wasting energy and corrupting reads and writes. Generations of researchers proved the concept worked at lab scale and then watched it fail to translate to the densities a real product needs.
The missing ingredient turned out to be hafnium oxide, a material whose ferroelectric properties were discovered only in 2011. Unlike earlier FTJ candidates, hafnium oxide remains well-behaved at the tiny dimensions at which modern chips operate. The Science Tokyo team paired it with a novel fabrication approach that controls how the material crystallizes during manufacturing, suppressing the leakage paths that previously doomed similar devices.
The headline result is not just that the new chip works at 25 nanometers. It is that it works better there. Because the material's ferroelectric switching becomes more efficient at smaller scales, the device draws less energy per write operation as it shrinks, a direct inversion of the usual relationship between size and power. That property would be valuable in any electronics product. In AI hardware, where billions of memory operations happen per inference and energy is the dominant constraint on which models can be deployed, it is potentially transformative.
The applications the team highlights are exactly the ones the next decade of computing depends on: edge AI accelerators that run models on phones and wearables without draining batteries; sensor networks that can sit in the field for years on a coin cell; data-center accelerators that no longer need exotic cooling to keep their memory subsystems in check.
There is, of course, a long road between a working laboratory device and a chip in a phone. Manufacturing yields, integration with logic transistors, write endurance over billions of cycles, and the economics of a new fabrication process all need to be proven at scale. The teams that introduced the original high-k metal-gate breakthroughs in the mid-2000s spent years moving from lab to fab. But the Science Tokyo finding solves the specific problem that has kept FTJs confined to the lab for half a century, and it does so with hafnium oxide, a material the semiconductor industry already knows how to handle in volume.
For a field that has been forced to extract performance gains from increasingly heroic and expensive engineering tricks, the appeal of a memory technology that simply gets better when you make it smaller is hard to overstate. Sometimes progress in computing arrives as an exotic new architecture. This time, it is a 55-year-old idea finally finding its material.


