The top five product announcements from Nvidia’s GPU Technology Conference
Nvidia Corp.’s GPU Technology Conference, being held this week in digital format, has evolved over the years from a hard-core tech event for pure propellerheads to more of an industry event where the latest and greatest innovations in accelerated computing are highlighted.
This includes a wide range of use cases, including ray tracing, autonomous vehicles, artificial intelligence, machine learning and more. Along with providing some industry vision, Nvidia typically uses GTC to take the covers off new products, and this year was no different. Here are the top five product announcements from GTC:
Omniverse Replicator
Virtual worlds are all the rage. In the past couple of weeks, we have seen Facebook Inc. change its corporate name to Meta Platforms Inc. to enter the metaverse and Microsoft Corp. announce its own metaverse vision with 3D avatars for Microsoft Teams.
At GTC, Nvidia made a number of announcements related to Omniverse, its version of the metaverse, that make it easier for customers to train the AI models that bring virtual worlds to life.
Nvidia’s Omniverse Replicator is a synthetic data-generation engine that creates simulated data for training the neural networks that would power a virtual world. It’s easy to have some skepticism around the use of virtual worlds, but Nvidia showcased two applications that I would consider low-hanging fruit for the Omniverse.
Nvidia DRIVE Sim is a virtual world for creating digital twins of autonomous vehicles. Training cars to drive can take tens of thousands of hours and millions of miles of driving to replicate all the possible scenarios, many of which are difficult to stage in the physical world. For example, the sensors on a self-driving car can be flaky when the sun sits right at the horizon.
Car manufacturers can test this scenario in real life for only a few minutes a day. With Omniverse, the sun can be held at that point and the virtual car driven for hundreds of hours, speeding up training time.
Nvidia Isaac Sim is similar but is designed for robots. Training a robot can be an expensive, time-consuming process: it must learn to go up and down stairs, handle slanted surfaces, avoid objects, and recognize what’s moving and what’s fragile, among other scenarios. With Isaac Sim, that training can be done in the virtual world and, once complete, loaded into the robot so it can function.
Synthetic data is important because it augments real-world data, which can be labor-intensive, error-prone, biased and expensive to collect. Omniverse Replicator can also create data that is difficult for humans to capture, such as the sunset example. This might include objects moving at high speeds, at extreme altitudes or depths, or in bad weather.
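The idea behind this kind of synthetic data generation can be illustrated with a small sketch. This is not Nvidia’s Replicator API; it is a hypothetical example of the underlying technique, sometimes called domain randomization: hold one hard-to-capture condition fixed (here, the sun at the horizon) while randomizing everything else, so rare cases are well represented in the training set.

```python
import random

def generate_scenes(n, sun_elevation_deg=0.5, seed=42):
    """Generate n synthetic driving scenes (hypothetical schema).

    The sun elevation is held fixed at a rare, hard-to-capture value,
    while other scene parameters are randomized.
    """
    rng = random.Random(seed)  # seeded for reproducible datasets
    scenes = []
    for _ in range(n):
        scenes.append({
            "sun_elevation_deg": sun_elevation_deg,      # fixed: sun at horizon
            "vehicle_speed_kmh": rng.uniform(20, 120),   # randomized
            "rain_intensity": rng.uniform(0.0, 1.0),     # randomized
            "pedestrian_count": rng.randint(0, 10),      # randomized
        })
    return scenes

scenes = generate_scenes(1000)
```

A real pipeline would render and label each scene with a simulation engine, but the principle is the same: minutes-per-day real-world conditions become arbitrarily large training sets.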
Omniverse Avatar
This product brings together speech AI, computer vision, natural language processing, simulation and recommendation engines to create interactive, intelligent 3D avatars. The training allows them to understand speech and hold actual conversations. Nvidia is positioning the initial use case as customer service, where an order can be taken in a restaurant, an appointment made or a hotel room booked.
Virtual agents are already being used today, but they are primarily text-based and Omniverse now shifts this to a 3D, interactive virtual person. Microsoft showed an example of people collaborating through Teams with avatar-based colleagues, but I’m not sure how much appeal that would have. For personal interactions, the Webex Hologram product Cisco debuted at its recent WebexOne event made more sense because I could see the actual person.
With customer service, though, the use cases I highlighted are fine for an avatar to complete, since they are basic tasks. Anything more substantive, such as when dealing with money or with healthcare, would default to a human. But for fast transactions, avatars could be a cost-effective way to provide faster, better service.
Zero-trust cybersecurity platform
There is no hotter topic in cybersecurity than zero trust. An easy way to think about zero trust is that it flips the entire networking model 180 degrees. Internet Protocol networks were built on the concept that everything can talk to everything, which is why the internet works so well.
Unfortunately, it also lets hackers gain access to everything once they breach any point in the network. Zero trust disallows access to everything unless explicitly allowed.
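That default-deny posture can be sketched in a few lines. This is a deliberately minimal, hypothetical illustration of the zero-trust principle, not any vendor’s implementation: access is denied unless an (identity, resource) pair has been explicitly granted.

```python
# Explicit allowlist of (identity, resource) grants; names are illustrative.
ALLOWED = {
    ("alice", "payroll-db"),
    ("ci-runner", "artifact-store"),
}

def is_allowed(identity, resource):
    # Default-deny: being "inside" the network confers no implicit trust.
    # Access is granted only if this exact pair was explicitly allowed.
    return (identity, resource) in ALLOWED

is_allowed("alice", "payroll-db")      # True: explicitly granted
is_allowed("alice", "artifact-store")  # False: no grant, so denied
```

Contrast this with the traditional model, where anything that reaches the network segment is implicitly trusted; here a breached host gains nothing beyond its explicit grants.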
Although the concept of zero trust is straightforward, the implementation is not. The rise of 5G, Wi-Fi 6, the “internet of things,” working from home and the cloud has greatly increased the enterprise attack surface, making the process of implementing zero trust complex and computationally intensive.
At GTC, Nvidia announced a zero-trust platform that combines its BlueField data processing units, the DOCA software development kit for BlueField, and the Morpheus security AI framework. The DPUs play a key role because they offload processor-heavy tasks from the central processing units on firewalls or servers, tasks that drive up the costs of those devices. The DPU can handle processes such as validating users and isolating data, letting the firewalls and other devices do what they were meant to do.
DOCA 1.2 and Morpheus provide the developer tools and AI frameworks used to analyze traffic, inspect logs and application traffic, and customize zero trust. As part of the launch, Juniper Networks Inc. and Palo Alto Networks Inc. were announced as vendors using the zero-trust platform.
Clara Holoscan
Nvidia Clara is a healthcare application framework for AI-powered imaging, genomics and smart hospitals. Clara Holoscan lets developers build applications that process sensor data, render high-quality graphics and perform AI inferencing to improve medical device technology.
Although medical devices are very diverse, they tend to process data the same way: data is collected, analyzed and then visualized for human decision-making, and Clara Holoscan addresses each phase.
In actuality, Holoscan uses a wide range of Nvidia technology to address the different aspects of medical AI. For example, Omniverse can be used to render visual data that can then be manipulated to run “what if” scenarios, while the Nvidia Triton Inference Server classifies, segments and tracks objects.
During his keynote, Chief Executive Jensen Huang (pictured) provided a number of examples of medical devices that have been infused with AI, such as the Medtronic Hugo robotic-assisted surgery system, Johnson & Johnson robotic endoscopy and the Stryker AIRO intraoperative CT scanner.
Earth Two
The keynote ended with Huang announcing Nvidia will build a simulated Earth, or Earth Two as he called it. The purpose of this isn’t to create a multiverse or science fiction fantasy but to study and predict climate change. Every company of any significant size has laid out net zero plans and pledged to make the world a better place, but how do they know their efforts will lead to change that’s meaningful?
Earth Two can be used to run global simulations and understand that if the world and business leaders get together and agree on certain things, we will see positive results by specific dates. This can help organizational leaders alter plans, if needed.
I can see a tool like this being used heavily at events such as the World Economic Forum where climate change has become a hot topic. Earth Two would allow delegates to make informed decisions instead of ones based merely on hope.
Photo: Nvidia/livestream