UPDATED 18:00 EDT / MARCH 18 2024


Nvidia unveils Project GR00T AI foundation model for humanoid robots

Taking its first steps into humanoid robotics, Nvidia Corp. today announced Project GR00T, a general-purpose artificial intelligence foundation model for bipedal humanoid robots, designed to advance research into this new type of embodied AI.

In addition to the announcement made during the company’s GTC 2024 conference in San Jose, Nvidia unveiled a new computer, Jetson Thor, based on the Nvidia Thor system-on-chip for humanoid robots, alongside major updates to the Isaac platform tools for robot embodiment, reinforcement learning and AI foundation models.

“Building foundation models for general humanoid robots is one of the most exciting problems to solve in AI today,” said Jensen Huang, founder and chief executive of Nvidia. “The enabling technologies are coming together for leading roboticists around the world to take giant leaps towards artificial general robotics.”

Bipedal humanoid robots may be the next step toward machines that can work safely in the same environments as humans and take over hazardous tasks in industrial settings. GR00T-powered robots will be able to use the new foundation model to understand natural language, perceive their environments, emulate human actions and quickly coordinate movements using AI learning models to navigate and interact in the real world, Nvidia said.

To make this happen, the company developed Jetson Thor, a new “robot brain” computing platform capable of performing complex tasks and handling multiple sensors using a transformer engine. The SoC includes a next-generation graphics processing unit whose transformer engine delivers 800 teraflops of eight-bit floating point performance for running multimodal generative AI models such as GR00T.

“We are at an inflection point in history, with human-centric robots like Digit poised to change labor forever,” said Jonathan Hurst, cofounder and chief robot officer at Agility Robotics. “Modern AI will accelerate development, paving the way for robots to help people in all aspects of daily life.”

The capabilities of humanoid robots have attracted a great deal of attention and development. To support that development, and the needs of foundation models for all robots in real-world environments, Nvidia also updated its Isaac platform and tools with Isaac Lab for reinforcement learning, and OSMO, a compute orchestration and robot workload service.

Isaac Lab is built on Isaac Sim, Nvidia’s GPU-accelerated, performance-optimized robot simulation tool. It lets developers run thousands of simulations at once, generating massive amounts of synthetic data in simulated environments that match real-world conditions, so robots can learn before they are deployed into real environments.
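Isaac Lab’s actual API is not shown in the announcement, but the core idea behind GPU-parallel robot learning is straightforward: instead of stepping one simulation at a time, many environments advance in lockstep with a single vectorized update. A rough NumPy sketch (every name here is hypothetical, not the Isaac Lab API) illustrates why this produces so much training data so quickly:

```python
import numpy as np

def batched_rollout(num_envs=1024, horizon=50, seed=0):
    """Step many toy point-mass environments in lockstep.

    Each environment's state is a 2-D position; a simple policy nudges
    it toward the origin; reward is negative distance. All num_envs
    environments advance in ONE vectorized update per step, which is
    the essence of GPU-parallel simulation for robot learning.
    """
    rng = np.random.default_rng(seed)
    states = rng.uniform(-1.0, 1.0, size=(num_envs, 2))
    transitions = []
    for _ in range(horizon):
        actions = -0.1 * states + 0.01 * rng.normal(size=states.shape)
        next_states = states + actions          # one update for all envs
        rewards = -np.linalg.norm(next_states, axis=1)
        transitions.append((states, actions, rewards))
        states = next_states
    return transitions

batch = batched_rollout()
# horizon steps, each holding num_envs synthetic transitions
print(len(batch), batch[0][0].shape)
```

With 1,024 environments and 50 steps, this single loop yields 51,200 transitions; on a GPU the same pattern scales to thousands of full physics simulations in parallel.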

Nvidia OSMO scales robotics workloads across distributed environments by coordinating data generation, model training and software and hardware workflows. The platform offers location-agnostic deployment and data management, with model deployment features that make it easy to handle models of any size.
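OSMO’s interface is not detailed in the announcement, but the coordination problem it addresses — running data generation, training and deployment stages in dependency order — can be sketched with Python’s standard-library topological sorter. The stage names below are invented for illustration and are not OSMO’s actual workflow:

```python
from graphlib import TopologicalSorter

# Hypothetical robot-learning pipeline: each stage lists the stages
# it depends on. An orchestrator like this runs stages only after
# their prerequisites have finished.
pipeline = {
    "generate_synthetic_data": set(),
    "train_policy": {"generate_synthetic_data"},
    "evaluate_in_sim": {"train_policy"},
    "deploy_to_robot": {"evaluate_in_sim"},
}

order = list(TopologicalSorter(pipeline).static_order())
print(order)
```

A production orchestrator adds scheduling across machines, retries and artifact tracking on top of this ordering, but dependency resolution is the starting point.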

“Boston Dynamics employs a range of machine learning, reinforcement learning and AI technologies to power our robots,” said Pat Marion, machine learning and perception lead at Boston Dynamics. “To effectively manage the large training workloads, we’re using Nvidia OSMO, an infrastructure solution that lets our machine learning engineers streamline their workflows and dedicate their expertise to tackling the hard robotics problems.”

Nvidia also announced updates to the Isaac platform with Isaac Manipulator and Isaac Perceptor, a collection of pretrained AI models, libraries and reference hardware for industrial robotics. Manipulator provides dexterity and modular AI capabilities for robotic arms, with new foundation models and GPU-accelerated libraries, delivering up to an 80-times speedup in path planning and perception for automating new robotic tasks. Perceptor is a multi-camera, 3D surround-vision capability that lets robot builders replace expensive lidar sensors with the visual cameras already on robots, creating the sense of depth needed to navigate environments.
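Perceptor’s internals are not described beyond “multi-camera surround vision,” but the basic geometry that lets ordinary cameras stand in for lidar is stereo triangulation: the same point seen from two cameras shifts horizontally by a disparity, and depth falls out of the classic pinhole relation depth = focal length × baseline ÷ disparity. A minimal sketch of that relation (not Nvidia’s implementation):

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_length_px, baseline_m):
    """Pinhole stereo relation: depth = f * B / disparity.

    disparity_px: per-pixel horizontal shift between two camera views.
    Larger disparity means the point is closer to the cameras; zero
    disparity (a point at infinity) maps to infinite depth.
    """
    disparity = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth

# A point shifted 20 px between cameras 12 cm apart, 600 px focal length:
print(disparity_to_depth([20.0], 600.0, 0.12))  # → [3.6] metres
```

Real multi-camera systems add dense disparity matching and calibration on top, but this one formula is why adding a second camera turns flat images into a depth map.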

“Incorporating new tools for foundation model generation into the Isaac platform accelerates the development of smarter, more flexible robots that can be generalized to do many tasks,” said Deepu Talla, vice president of robotics and edge computing at Nvidia.

Image: Nvidia
