A 35-inch, 33-pound robotic snowman just upstaged one of the most powerful CEOs in technology. During NVIDIA's annual GTC conference keynote in San Jose on March 16, CEO Jensen Huang was joined onstage by an unexpected guest: a fully autonomous, walking, talking robot version of Olaf — the beloved snowman from Disney's Frozen franchise.
Complete with stick arms, a carrot nose, and a white felt covering, the robotic Olaf didn't just stand there and wave. It walked freely across the stage, engaged in conversation with Huang, and navigated its environment independently. The audience of thousands of AI developers, researchers, and industry leaders watched in delighted astonishment.
Three Giants, One Snowman
The robot is the product of a remarkable three-way collaboration between Walt Disney Imagineering, NVIDIA, and Google DeepMind. Unlike any Disney animatronic before it, this Olaf walks autonomously, balances on its own, and interacts with people around it in real time.
At its core, the robot runs on Newton — a GPU-accelerated, open-source physics simulation engine co-developed by the three companies and now released through the Linux Foundation. Newton is built on NVIDIA's Warp framework and the OpenUSD standard, supporting multiple physics solvers that enable complex contact-rich behaviors like walking on uneven terrain, manipulating objects, and maintaining balance under dynamic conditions.
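Newton's internals aren't described in the keynote beyond this summary, but the core job of any contact-aware rigid-body solver can be illustrated with a toy sketch. The snippet below (pure Python, invented names, not Newton's actual API) shows a semi-implicit Euler step for a falling point mass with a simple ground-contact constraint, the kind of primitive these solvers build on:

```python
# Toy sketch of one contact-aware physics step (NOT Newton's API):
# semi-implicit Euler integration for a point mass above flat ground.

GRAVITY = -9.81  # m/s^2

def step(pos_z: float, vel_z: float, dt: float = 0.01) -> tuple[float, float]:
    """Advance a 1-D point mass by one timestep, resolving ground contact."""
    vel_z += GRAVITY * dt   # integrate acceleration first (semi-implicit)
    pos_z += vel_z * dt     # then integrate velocity
    if pos_z < 0.0:         # contact constraint: ground plane at z = 0
        pos_z = 0.0         # project the body back onto the surface
        vel_z = 0.0         # inelastic contact: cancel downward velocity
    return pos_z, vel_z

# Drop a mass from 1 m; after enough steps it settles on the ground.
z, v = 1.0, 0.0
for _ in range(1000):
    z, v = step(z, v)
```

A production engine like Newton solves this same kind of constrained integration for thousands of coupled bodies per frame, on the GPU, which is what makes behaviors like walking on uneven terrain tractable.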
"It was because of physics, using this Newton solver, that we jointly developed with Disney and DeepMind, that made it possible for you to be able to adapt to the physical world," Huang told the robotic snowman during their onstage exchange.
100,000 Virtual Olafs in Two Days
Disney built a custom simulation layer on top of Newton called Kamino, which runs thousands of parallel training environments simultaneously. Engineers trained 100,000 virtual Olaf instances in just two days on a single NVIDIA RTX 4090 GPU, covering standing, walking, heat management, and noise reduction.
The deep reinforcement learning approach taught the robot to balance on unstable surfaces in a matter of hours — a task that would take traditional robotics approaches far longer. Each virtual Olaf could have a slightly different body structure, teaching the AI to generalize rather than memorize specific movements.
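Disney hasn't published Kamino's code, but the domain-randomization idea described here is straightforward to sketch. The following minimal, hypothetical illustration (all names and parameter ranges invented) shows how each parallel training instance could be assigned a slightly different body, so the policy learns to generalize rather than memorize one specific robot:

```python
import random

def randomized_body(rng: random.Random) -> dict:
    """Sample a slightly perturbed body for one training instance.

    Varying mass, limb length, and friction across instances forces the
    learned policy to generalize instead of overfitting to a single body.
    """
    return {
        "mass_kg":   15.0 * rng.uniform(0.9, 1.1),    # +/- 10% body mass
        "leg_len_m": 0.30 * rng.uniform(0.95, 1.05),  # +/- 5% leg length
        "friction":  rng.uniform(0.6, 1.0),           # ground friction coeff.
    }

# Spawn many parallel instances, each with its own body variation,
# as Kamino reportedly does across environments on one GPU.
rng = random.Random(42)
population = [randomized_body(rng) for _ in range(100_000)]
```

In a real pipeline, each sampled body would parameterize one simulated environment, and the reinforcement-learning policy would be trained against all of them at once.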
Coming to a Theme Park Near You
This wasn't a concept demo. Disney confirmed that the robotic Olaf will debut for park guests at Disneyland Paris's brand-new World of Frozen land on March 29 — just 13 days after its GTC appearance. The robot will perform on a moving boat in the attraction's lagoon, greeting riders as they pass by.
Kyle Laughlin, SVP of Research & Development at Walt Disney Imagineering, described the specific challenges of building a character that must delight guests while operating in unpredictable real-world conditions: variable temperatures, moving platforms, excited children reaching out to touch it.
Disney plans to extend the technology to more characters across its parks and cruise ships worldwide, potentially transforming the theme park experience from one of static, track-mounted animatronics to truly interactive character encounters.
A Glimpse of Physical AI
For NVIDIA, the Olaf demo was the capstone of a keynote focused on what Huang calls "physical AI" — artificial intelligence that operates in and interacts with the real world. From self-driving cars built on NVIDIA's Drive Hyperion platform to warehouse robots and humanoid machines, the message was clear: AI is leaving the screen and entering the physical world.
That the most memorable demonstration of this future came in the form of an animated snowman who just wants a warm hug? That might be the most Disney thing imaginable — and the most human way to introduce a technological revolution.