UPDATED 12:00 EDT / MARCH 21 2023


Nvidia unveils new advances in robotics and computer vision AI

Nvidia Corp. is expanding its tools for robotics and the artificial intelligence that powers them, improving the platforms that developers and engineers use to train and deploy autonomous machines in factories, offices and cities.

At GTC 2023, the company’s virtual developer conference starting today, Nvidia said that Omniverse Cloud will be hosted on Microsoft Azure, which will increase access to Isaac Sim, the company’s virtual training environment for developing and managing AI-based robots. The company also announced that its full lineup of Jetson Orin-based modules is now in production. These modules are essentially robot “brains” that act as edge AI computing platforms.

“The world’s largest industries make physical things, but they want to build them digitally,” said Nvidia founder and Chief Executive Jensen Huang. “Omniverse is a platform for industrial digitization that bridges digital and physical.”

Isaac Sim runs on Nvidia Omniverse, a metaverse simulation technology that creates digital twins of the real world so robotics developers can recreate the spaces and situations their robots will operate in. Building and managing robots in the real world means ingesting or creating tremendous datasets from scratch, which can be extremely burdensome.

Omniverse Cloud on Azure provides access to powerful Nvidia OVX servers designed specifically for robotics simulation, making the development of robotics workflows faster and easier. It also comes with a large set of tools that let teams collaborate on training environments, and it eases robotics training data generation, simulation, validation and AI deployment.

Bridging the physical with the digital, Nvidia introduced the latest addition to its Jetson Orin lineup of edge AI compute platforms: the new Jetson Orin Nano Developer Kit. The new kit is much more powerful than the previous-generation Jetson Nano and is aimed at developers looking to create AI-powered robots, smart drones, intelligent vision systems and more.

The Nvidia Jetson Orin family of modules is designed to support a broad range of AI robotics, machine and automation capabilities at the network edge. Built on the Nvidia Ampere architecture, the modules support a wide variety of AI models for any application. They scale from 40 trillion AI operations per second with the entry-level Jetson Orin Nano up to 275 trillion operations per second with the Jetson AGX Orin for advanced uses such as autonomous vehicles.

Nvidia said Jetson is in use by more than 6,000 customers and more than 1 million developers, including Amazon Web Services, Canon Inc., Cisco Systems Inc., Hyundai Robotics Co. Ltd., John Deere, Teradyne Inc. and TK Elevator. Companies adopting the new Orin-based modules include Hyundai Doosan Infracore Co. Ltd., Verdant Robotics, and the drone companies Skydio Inc. and Zipline Inc.

Putting all of this together, Nvidia announced expansions to the Metropolis ecosystem and technology supporting computer vision AI, including its TAO Toolkit 5.0 for developers; updates to the Deepstream software development kit for building computer vision apps; and early access to Metropolis Microservices.

The TAO Toolkit is a low-code AI development framework that helps any developer rapidly create AI models, on any service and for any device. With TAO 5.0, Nvidia is adding features including pre-trained models for vision transformers, the ability to deploy on any platform via ONNX export, automatic tuning with AutoML and AI-assisted data classification and annotation.

Using TAO, developers can dive through data with little or no coding and have it quickly classify what it sees. It can use pretrained models to train classifiers for tasks such as people detection, vehicle classification, pose estimation and object detection. With data augmentation, models can be pruned and fine-tuned through iteration until they work the way engineers need them to, before they’re ready for industry integration and deployment.
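The prune-and-iterate loop described above can be illustrated with a generic magnitude-pruning sketch in plain NumPy. This is not TAO’s actual API — just a minimal illustration of the common heuristic of zeroing out the smallest-magnitude weights before fine-tuning the rest:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights until the requested
    fraction of entries is zero (a common pruning heuristic)."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # Threshold at the k-th smallest absolute value across all entries.
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

# Example: prune a small weight matrix to 50% sparsity, then the
# surviving weights would be fine-tuned and the cycle repeated.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
w_pruned = magnitude_prune(w, 0.5)
print(f"sparsity: {np.mean(w_pruned == 0.0):.2f}")
```

In a real workflow this pruning step alternates with retraining, shrinking the model at each iteration while recovering accuracy.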

Nvidia’s DeepStream software development kit is getting updates that help developers build next-generation vision AI. DeepStream lets developers rapidly produce computer vision AI from streams of video using the pipeline-based, open-source GStreamer framework. The latest update goes further, letting developers create their own low-code, graph-based AI vision pipelines without needing the streaming framework at all.

DeepStream can also now bring in more than just visuals. Through sensor fusion, it can incorporate data from additional sensors such as lidar and radar, along with environmental and process sensors. That opens up a whole new set of potential computer vision-based AI applications across industries, such as quality control and autonomous machines that must respond to changes in their environment.
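At its simplest, fusing a camera stream with a slower sensor like lidar means pairing each frame with the nearest reading in time. The sketch below is a hypothetical illustration of that alignment step, not DeepStream’s API:

```python
from bisect import bisect_left

def nearest_reading(timestamps, target):
    """Return the index of the timestamp closest to `target`
    (timestamps must be sorted ascending)."""
    i = bisect_left(timestamps, target)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    # Pick whichever neighbor is closer in time.
    return i if timestamps[i] - target <= target - timestamps[i - 1] else i - 1

# Camera frames at ~30 fps, lidar sweeps at ~10 Hz (times in seconds).
camera_ts = [0.000, 0.033, 0.066, 0.100]
lidar_ts = [0.000, 0.100, 0.200]

# Pair each camera frame with its nearest lidar sweep.
fused = [(c, lidar_ts[nearest_reading(lidar_ts, c)]) for c in camera_ts]
print(fused)
```

Real pipelines add interpolation and coordinate-frame calibration on top of this, but timestamp alignment is the starting point for any multi-sensor fusion.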

Finally, multiple cameras spread over a wide area can pose a problem for vision AI, and Nvidia has a solution for that: Metropolis Microservices, a cloud-native microservices reference framework for vision AI applications. It lets developers rapidly extend perception across a large area by combining views from many cameras into a single shared understanding for the AI, enabling multicamera tracking applications.
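The core idea — welding overlapping camera views into one understanding — can be sketched in a few lines. Assuming each camera has already projected its detections onto a shared ground plane (a hypothetical setup, not the Metropolis API), detections that land close together can be merged into a single tracked object:

```python
from math import hypot

def merge_detections(per_camera, radius=1.0):
    """Merge ground-plane detections (x, y) reported by multiple
    cameras into global objects: detections closer than `radius`
    are assumed to be the same physical object."""
    merged = []  # each entry is [sum_x, sum_y, count]
    for detections in per_camera.values():
        for x, y in detections:
            for m in merged:
                cx, cy = m[0] / m[2], m[1] / m[2]
                if hypot(x - cx, y - cy) < radius:
                    m[0] += x; m[1] += y; m[2] += 1
                    break
            else:
                merged.append([x, y, 1])
    # Return the averaged position of each merged object.
    return [(m[0] / m[2], m[1] / m[2]) for m in merged]

# Two cameras see the same person near (2, 3); camera B also sees
# a second person at (10, 10).
objects = merge_detections({
    "cam_a": [(2.0, 3.0)],
    "cam_b": [(2.1, 3.1), (10.0, 10.0)],
})
print(objects)
```

Production systems add per-object IDs and track them over time, but the spatial association shown here is what turns many camera views into one picture.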

That has a large number of applications: factory floors with a multitude of cameras watching products move along numerous conveyors, stadiums monitoring hallways for congestion control, retail stores with cameras on shelves for inventory control, and smart cities trying to understand traffic better.

