UPDATED 13:00 EDT / AUGUST 09 2022

EMERGING TECH

At SIGGRAPH, Nvidia pushes the envelope for virtual worlds and digital humans

Nvidia Corp. took the stage today at the ACM SIGGRAPH computer graphics and technology conference in Los Angeles to announce advances in the tools developers will use to create the metaverse and the digital human avatars that will populate it.

“The combination of artificial intelligence and computer graphics will power the metaverse, the next evolution of the internet,” said Jensen Huang, founder and chief executive of Nvidia.

Updated Omniverse platform

To give developers what they need to deliver the next stage of the metaverse, conceived as interconnected, immersive 3D virtual worlds that would evolve the internet the way the web once did, Nvidia is expanding its Omniverse real-time collaboration and creation platform with a number of new tools.

The expansion of Omniverse includes a number of AI-powered features that allow artists and developers to collaborate closely, build virtual worlds and content faster, and connect with third-party applications.

These include newly released upgrades to Omniverse Kit, a toolkit for building native Omniverse extensions and apps, which add real-time physics simulation for particles, cloth and other objects. That allows artists and engineers to rapidly recreate things from the real world and place them in virtual worlds.
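At its simplest, a real-time particle simulation is a per-frame integration loop. The sketch below is purely illustrative, written in plain PyTorch rather than against any Omniverse Kit API, and shows the kind of update step such physics extensions perform at far larger scale on the GPU.

import torch

# Illustrative only, not Omniverse Kit code: each frame, integrate velocities and
# positions for a batch of particles and handle a simple ground-plane collision.
positions = torch.rand(10_000, 3) * 10.0     # 10,000 particles in a 10 m box
velocities = torch.zeros(10_000, 3)
gravity = torch.tensor([0.0, -9.81, 0.0])
dt = 1.0 / 60.0                              # one frame at 60 frames per second

for frame in range(600):                     # simulate ten seconds
    velocities += gravity * dt               # apply gravity
    positions += velocities * dt             # advance every particle
    hit_ground = positions[:, 1] < 0.0       # particles that fell below the floor
    positions[hit_ground, 1] = 0.0
    velocities[hit_ground, 1] *= -0.5        # damped bounce off the ground plane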

Also added is the AI tool Audio2Face, which creates facial animations directly from audio files and can even reproduce emotional facial expressions realistically. That’s an important component for making virtual avatars, the representations of people in virtual worlds known as “digital humans,” more realistic.
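Conceptually, audio-driven facial animation maps speech features to the controls of a face rig. The sketch below is hypothetical and does not reflect Audio2Face’s internals; the 52-blendshape rig and the mel-spectrogram window size are assumptions made only for illustration.

import torch
import torch.nn as nn

# Hypothetical sketch of the audio-to-animation idea, not Nvidia's Audio2Face:
# a small network maps a window of speech features to per-frame weights for
# facial blendshapes such as jaw-open or lip-pucker.
NUM_BLENDSHAPES = 52                     # assumed rig size for illustration

audio_to_face = nn.Sequential(
    nn.Linear(80 * 16, 256), nn.ReLU(),  # 16 frames of 80-bin mel features per window
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, NUM_BLENDSHAPES),
    nn.Sigmoid(),                        # blendshape weights constrained to [0, 1]
)

mel_window = torch.rand(1, 80 * 16)      # stand-in for real mel-spectrogram features
weights = audio_to_face(mel_window)      # one pose of the face rig for this frame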

Nvidia also announced the physics-based machine learning framework Modulus as an Omniverse extension. With Modulus, developers and engineers can apply machine learning models trained on real-world physics to create digital twins and simulate industrial metaverse applications such as storehouses, warehouses and robotics pipelines, so they can be experimented on safely.
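The technique behind such physics-trained models can be sketched in a few lines. The example below is a generic physics-informed network written in PyTorch, not the Modulus API: a small network learns the motion of a toy damped oscillator purely by being penalized whenever it violates the governing equation, which is how physical knowledge can be baked into a model without labeled data.

import torch
import torch.nn as nn

# Generic physics-informed training sketch (not Modulus): learn u(t) satisfying
# the damped oscillator equation u'' + 0.2*u' + u = 0 with u(0)=1 and u'(0)=0 by
# penalizing the equation residual, computed with autograd derivatives.
model = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                      nn.Linear(64, 64), nn.Tanh(),
                      nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def ode_residual(t):
    t = t.requires_grad_(True)
    u = model(t)
    du = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), t, create_graph=True)[0]
    return d2u + 0.2 * du + u            # close to zero wherever the physics holds

for step in range(2000):
    t = torch.rand(256, 1) * 10.0        # random collocation points in [0, 10] seconds
    t0 = torch.zeros(1, 1, requires_grad=True)
    u0 = model(t0)
    du0 = torch.autograd.grad(u0.sum(), t0, create_graph=True)[0]
    loss = (ode_residual(t) ** 2).mean() \
        + ((u0 - 1.0) ** 2).mean() + (du0 ** 2).mean()   # enforce initial conditions
    opt.zero_grad()
    loss.backward()
    opt.step()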

AI and neural graphics tools

To simplify and shorten the process of modeling the real world, Nvidia released a number of software development kits built on its AI and neural graphics research, which let graphics systems learn from data as they receive it.

The newly released Kaolin Wisp is an addition to Kaolin, Nvidia’s PyTorch library for 3D deep learning, and lets engineers implement new neural field models in a matter of days instead of weeks. Moreover, NeuralVDB, an improvement on OpenVDB, the industry standard for volumetric data storage, can use machine learning to dramatically reduce the memory footprint of 3D volume data.
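The memory savings come from the coordinate-network idea behind neural fields: rather than storing every voxel, a small network learns to answer queries about the volume. The sketch below is a conceptual illustration in plain PyTorch, not the Kaolin Wisp or NeuralVDB API, and the grid size and network width are arbitrary choices.

import torch
import torch.nn as nn

# Conceptual sketch of a neural field: fit a small coordinate network to a dense
# voxel grid so roughly 17,000 network weights stand in for 262,144 voxel values.
grid = torch.rand(64, 64, 64)                      # placeholder dense volume

field = nn.Sequential(nn.Linear(3, 128), nn.ReLU(),
                      nn.Linear(128, 128), nn.ReLU(),
                      nn.Linear(128, 1))
opt = torch.optim.Adam(field.parameters(), lr=1e-3)

for step in range(1000):
    idx = torch.randint(0, 64, (4096, 3))          # random voxel coordinates
    coords = idx.float() / 63.0                    # normalize to the unit cube
    target = grid[idx[:, 0], idx[:, 1], idx[:, 2]].unsqueeze(1)
    loss = ((field(coords) - target) ** 2).mean()  # match the stored voxel values
    opt.zero_grad()
    loss.backward()
    opt.step()

# Once trained, field(coords) reconstructs the volume at any queried point; on
# smooth, sparse real-world volumes the network can be far smaller than the grid.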

“Neural graphics intertwines AI and graphics, paving the way for a future graphics pipeline that is amenable to learning from data,” said Sanja Fidler, vice president of AI at Nvidia.

Nvidia also showed software that allows artists to recreate real-world objects just by scanning them with a camera, using neural graphics to capture their shape and appearance quickly. The software, called Instant NeRF, creates a 3D model of the object or scene from 2D images and allows artists and developers to immediately import it into a virtual world or metaverse.
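The idea behind NeRF-style capture is that a network learns the color and density of every point in a scene, and views are rendered by compositing samples along camera rays until the renders match the input photos. The snippet below sketches only that compositing step in PyTorch; it illustrates the general approach and is not Nvidia’s Instant NeRF code.

import torch

def render_ray(densities, colors, deltas):
    """Composite color and density samples taken along one camera ray.
    densities: (N,), colors: (N, 3), deltas: (N,) distances between samples."""
    alpha = 1.0 - torch.exp(-densities * deltas)               # opacity of each segment
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha[:-1]]), dim=0)
    weights = alpha * trans                                    # contribution of each sample
    return (weights.unsqueeze(1) * colors).sum(dim=0)          # final RGB for the pixel

# Example with placeholder values: in a trained NeRF, densities and colors come
# from a network queried at 3D points along the ray, and the network is fit so
# that rendered rays reproduce the 2D photographs.
densities = torch.rand(64)
colors = torch.rand(64, 3)
deltas = torch.full((64,), 0.05)
pixel_rgb = render_ray(densities, colors, deltas)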

The rise of ‘digital humans’

Interacting in the metaverse will require more than just simulating physical objects such as cars, tables and chairs. The metaverse will also be populated with digital humans, the avatars of people visiting from the real world, who will want those virtual bodies to express them in a consistent way.

There will also be AI-driven avatars in the metaverse that look and act like people, serving as virtual assistants and in customer service roles.

To bring them to life, Nvidia has unveiled the Omniverse Avatar Cloud Engine, or ACE, a collection of AI models and services for building and operating avatars within the metaverse. The tools within ACE will include everything from conversational AI to animation tools that sync an avatar’s mouth with its speech and match its expressions to the emotion being conveyed.

“With Omniverse ACE, developers can build, configure and deploy their avatar application across any engine in any public or private cloud,” said Simon Yuen, director of graphics and AI at Nvidia.

The Avatar Cloud Engine includes technologies such as Audio2Face and Audio2Emotion, which allow complex facial animations, body movements and more to be carried into the metaverse. As a result, avatars will be able to move and speak in realistic ways.

“We want to democratize building interactive avatars for every platform,” added Yuen.

The technology will be generally available in early 2023, and it will run on embedded systems and all major cloud services.

Images: Nvidia
