UPDATED 18:00 EST / JANUARY 05 2026


Nvidia introduces open-source AI models for humanoid robots, autonomous vehicles

Nvidia Corp. has released more than a half-dozen artificial intelligence models designed for autonomous systems such as self-driving cars.

The algorithms, which are all available under an open-source license, made their debut today at the CES electronics show in Las Vegas. They’re rolling out alongside several development tools and a computing module for robots called the Jetson T4000.

Autonomous vehicle software

Nvidia’s new lineup of open-source AI models is headlined by Alpamayo 1 (pictured), a so-called VLA, or vision-language-action, algorithm with 10 billion parameters. It can use footage from an autonomous vehicle’s cameras to generate driving trajectories.

Alpamayo 1 has a chain-of-thought mechanism, meaning it breaks the navigation tasks it receives into smaller steps. According to Nvidia, that approach has two benefits. First, Alpamayo 1 can explain each step of its reasoning workflow, which makes it easier to evaluate the soundness of its navigation decisions. Second, the chain-of-thought mechanism helps the model tackle tricky driving situations.
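As a loose illustration of the idea (none of these names come from Nvidia's actual Alpamayo interface), a reasoning-first driving model can be thought of as returning an inspectable trace alongside its action, so each step can be audited:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of chain-of-thought style output for a driving
# model: reasoning steps are returned alongside the action, so each
# one can be inspected after the fact. This is a toy illustration,
# not Nvidia's API.

@dataclass
class DrivingDecision:
    action: str
    reasoning: list = field(default_factory=list)

def plan(scene: dict) -> DrivingDecision:
    steps = []
    steps.append(f"Detected objects: {scene['objects']}")
    if "pedestrian" in scene["objects"]:
        steps.append("Pedestrian near roadway -> yield required")
        action = "brake"
    else:
        steps.append("Path clear -> maintain lane and speed")
        action = "continue"
    return DrivingDecision(action=action, reasoning=steps)

decision = plan({"objects": ["pedestrian", "traffic_light"]})
print(decision.action)           # brake
for step in decision.reasoning:  # each step is auditable
    print("-", step)
```

The point of the structured trace is the second benefit Nvidia describes: an evaluator can check each intermediate step, not just the final action.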

Alpamayo 1 itself is not designed to run in autonomous vehicles. Instead, Nvidia sees developers using it to train such vehicles’ navigation models. According to the company, the algorithm lends itself to tasks such as evaluating the reliability of autonomous driving software. In the future, Nvidia plans to release larger Alpamayo models that will support a broader range of reasoning use cases.

“Alpamayo brings reasoning to autonomous vehicles, allowing them to think through rare scenarios, drive safely in complex environments and explain their driving decisions — it’s the foundation for safe, scalable autonomy,” said Nvidia Chief Executive Officer Jensen Huang.

Alpamayo 1 is available alongside three additions to Nvidia’s existing Cosmos series of world foundation models. Like Alpamayo 1, the new models can be used to develop software for self-driving cars. They can also power other types of autonomous systems including industrial robots.

The first two models, Cosmos Transfer 2.5 and Cosmos Predict 2.5, are designed to generate training data for robots’ AI software. That training data takes the form of synthetic video footage. Cosmos Transfer 2.5 can, for example, generate a clip that depicts an industrial robot in a car factory. Cosmos Predict 2.5 offers similar features along with the ability to simulate how an object might behave over time. A user could upload a photo of a bus, for example, and ask the model to predict where the bus will be five seconds later.
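The "where will the bus be in five seconds" idea boils down to extrapolating an object's state forward in time. A world model such as Cosmos Predict 2.5 learns far richer dynamics from video; the toy constant-velocity sketch below only illustrates the underlying concept:

```python
# Toy constant-velocity extrapolation: predict an object's future
# position from its current position and velocity. This is a
# conceptual stand-in, not Nvidia's learned dynamics model.

def predict_position(pos, vel, dt):
    """pos and vel are (x, y) tuples in meters and m/s; dt in seconds."""
    return (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)

bus_pos = (0.0, 0.0)   # hypothetical bus at the origin
bus_vel = (10.0, 0.0)  # travelling 10 m/s along the x-axis

future = predict_position(bus_pos, bus_vel, 5.0)
print(future)  # (50.0, 0.0)
```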

The third new addition to the Cosmos model series is called Cosmos Reason 2.0. According to Nvidia, it can equip a robot with the ability to analyze footage of its environment and automatically carry out actions.

Cosmos Reason powers Isaac GR00T N1.6, another new model that Nvidia debuted today. Isaac GR00T N1.6 is a VLA model like Alpamayo 1, but it’s optimized to power humanoid robots rather than autonomous vehicles. Nvidia’s researchers trained the algorithm on a dataset comprising sensory measurements from bimanual, semi-humanoid and humanoid robots.

“Salesforce, Milestone, Hitachi, Uber, VAST Data and Encord are using Cosmos Reason for traffic and workplace productivity AI agents,” Kari Briski, the vice president of generative AI software at Nvidia, wrote in a blog post. “Franka Robotics, Humanoid and NEURA Robotics are using Isaac GR00T to simulate, train and validate new behaviors for robots before scaling to production.”

Nvidia’s robotics-focused algorithms are rolling out alongside a pair of more general-purpose model families called Nemotron Speech and Nemotron RAG. The former series is headlined by a speech recognition model that the company says can provide 10 times the performance of comparable alternatives. Nemotron RAG includes embedding and reranking models. 

Embedding models turn data into mathematical representations, or vectors, that AI applications can process. Reranking is a step in the RAG, or retrieval-augmented generation, workflow: after an AI application uses RAG to retrieve the files needed to answer a prompt, a reranking model orders them so the most relevant files appear first.
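The retrieve-then-rerank pattern can be sketched in a few lines. The embeddings below are hypothetical hand-written vectors; a real pipeline would produce them with an embedding model such as those in the Nemotron RAG family, and the reranker would be a separate model rather than the stand-in scoring function shown here:

```python
import math

# Toy sketch of the retrieve-then-rerank pattern: embed a query,
# fetch the nearest documents by cosine similarity, then reorder
# them with a (here, hypothetical) reranking score.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

documents = {                       # doc name -> hand-written "embedding"
    "gpu_setup.md":    [0.9, 0.1, 0.0],
    "robot_manual.md": [0.2, 0.8, 0.1],
    "changelog.txt":   [0.1, 0.2, 0.9],
}
query_vec = [0.85, 0.2, 0.05]       # embedding of the user's question

# Step 1: retrieval — take the top-2 documents by similarity.
retrieved = sorted(documents, key=lambda d: cosine(query_vec, documents[d]),
                   reverse=True)[:2]

# Step 2: reranking — a stand-in for a reranker model that re-scores
# the retrieved files so the most relevant one comes first.
def rerank_score(doc):
    return cosine(query_vec, documents[doc])

reranked = sorted(retrieved, key=rerank_score, reverse=True)
print(reranked[0])
```

In practice the reranker uses a different, more expensive model than the retrieval step, which is why it is applied only to the small retrieved set rather than the whole corpus.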

Open-source development tools

Nvidia’s AI models are joined by a trio of development tools that are likewise available under an open-source license. The first tool, AlpaSim, enables developers to create simulated environments in which autonomous driving models can be trained. The software makes it possible to customize details such as traffic conditions and a simulated vehicle’s sensor array. For good measure, developers can inject sensor noise to evaluate how well their AI models filter erroneous data.
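Sensor-noise injection is conceptually simple: perturb clean readings before they reach the model and check that its outputs stay stable. A minimal Gaussian-noise sketch of the concept (AlpaSim's own API is not shown here):

```python
import random

# Minimal sketch of sensor-noise injection for robustness testing:
# perturb clean readings with Gaussian noise before feeding them to a
# model. AlpaSim exposes this idea through its own interfaces; this
# code only illustrates the underlying technique.

def add_gaussian_noise(readings, sigma, seed=None):
    rng = random.Random(seed)  # seeded for reproducible test runs
    return [r + rng.gauss(0.0, sigma) for r in readings]

clean_lidar = [5.0, 5.2, 4.9, 6.1]  # hypothetical distances in meters
noisy_lidar = add_gaussian_noise(clean_lidar, sigma=0.05, seed=42)

# The perturbation should stay small relative to the signal.
print(max(abs(n - c) for n, c in zip(noisy_lidar, clean_lidar)))
```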

Nvidia is also rolling out a second simulation framework called Isaac Lab-Arena. It’s designed to ease the task of training AI models for robots. According to the company, Isaac Lab-Arena enables developers to measure AI models’ performance using popular third-party benchmarks such as Robocasa, which is mainly used to evaluate household robots.

Software teams can use Nvidia’s third new tool, OSMO, to manage their simulation workloads. It’s an orchestrator that also lends itself to managing other AI development workflows such as synthetic data generation pipelines and model training jobs. Nvidia says that OSMO can orchestrate workloads across public clouds and developer workstations.

New hardware

Manufacturers can use a new Nvidia computing module called the Jetson T4000 to power their robots. It’s based on the company’s Blackwell graphics processing unit architecture. An industrial robot maker, for example, could use the module to run its systems’ AI-powered factory floor navigation software.

The Jetson T4000 includes 64 gigabytes of memory and can deliver up to 1,200 TFLOPS, or 1,200 trillion floating-point operations per second, when processing FP4 data. That makes it four times faster than Nvidia’s previous-generation robot module. The Jetson T4000 will be available for $1,999 to customers who purchase at least 1,000 units.

Image: Nvidia
