UPDATED 13:00 EDT / MAY 02 2023


Nvidia to announce breakthrough AI research for computer graphics

Researchers from Nvidia Corp. have announced a number of innovations focused on helping developers and artists combine artificial intelligence with computer graphics software to bring their creative ideas to life.

The company says it will release no fewer than 18 new research papers detailing its innovations at SIGGRAPH 2023, an annual computer graphics conference that runs Aug. 6-10. The papers, produced in collaboration with researchers from dozens of universities in the U.S., Europe and Israel, cover generative AI models that transform text into images, inverse rendering tools that can make 3D versions of still images, physics models that use AI to simulate complex 3D elements, and more.

Nvidia explained in a blog post that creators already have access to various generative AI models that can transform text into images. Such tools are widely used for tasks such as creating concept art and storyboards for movies, video games and 3D virtual worlds. However, they’re still somewhat limited, especially when the artist has something very specific in mind. For example, an advertising executive planning a campaign around a new teddy bear might want to show the toy in a range of very particular situations, such as at a teddy bear tea party.

Existing tools struggle to deliver this level of specificity, so Nvidia’s researchers have come up with a technique that enables generative AI models to use a single example image to customize their output in very specific ways. A second paper describes a highly compact model called Perfusion, which lets users combine multiple personalized elements drawn from a handful of concept images into more specific AI-generated visuals.
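
Nvidia hasn’t published code alongside the announcement, but the general recipe behind this kind of single-image personalization — keep the pretrained generator frozen and optimize only a small concept embedding against the one example photo — can be sketched in a few lines. The toy below illustrates that idea rather than Nvidia’s implementation; every class, variable and hyperparameter in it is a hypothetical stand-in.

```python
# Toy sketch (not Nvidia's method): personalize a *frozen* generative model from a
# single example image by optimizing only a small concept embedding.
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Stand-in for a pretrained text-to-image model; its weights stay frozen."""
    def __init__(self, embed_dim=64, image_size=32):
        super().__init__()
        self.fc = nn.Linear(embed_dim, 3 * image_size * image_size)
        self.image_size = image_size

    def forward(self, concept_embedding):
        out = torch.tanh(self.fc(concept_embedding))
        return out.view(-1, 3, self.image_size, self.image_size)

generator = ToyGenerator()
for p in generator.parameters():          # freeze the "pretrained" model
    p.requires_grad_(False)

example_image = torch.rand(1, 3, 32, 32)  # placeholder for the single reference photo
concept = nn.Parameter(torch.randn(1, 64) * 0.01)   # the only trainable tensor
optimizer = torch.optim.Adam([concept], lr=1e-2)

for step in range(200):                   # fit the embedding to the one example
    optimizer.zero_grad()
    reconstruction = generator(concept)
    loss = torch.nn.functional.mse_loss(reconstruction, example_image)
    loss.backward()
    optimizer.step()

# The learned `concept` can now be reused in new prompts or scenes while the
# generator itself remains unchanged.
```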

Elsewhere, Nvidia’s researchers have been focused on speeding up inverse rendering, the time-consuming process of reconstructing 3D scenes from 2D images. A third research paper centers on new technology that Nvidia says can run on a conventional laptop and generate a photorealistic 3D head-and-shoulders model from a single 2D portrait. According to the company, this is a major breakthrough that will rapidly accelerate 3D avatar creation, with big implications for videoconferencing and virtual reality applications.
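
The paper itself isn’t described in detail in the announcement, but the broad pipeline of lifting a single photo into a renderable 3D representation can be illustrated with a toy: a hypothetical encoder maps the portrait to a coarse RGBA volume, and a crude volume renderer composites that volume from a new viewpoint. None of this reflects the paper’s actual architecture, and all names and sizes are assumptions.

```python
# Toy sketch (not the paper's architecture): map one 2D portrait to a coarse 3D
# RGBA voxel grid, then render it from a new viewpoint by alpha compositing.
import torch
import torch.nn as nn

class ImageTo3D(nn.Module):
    """Hypothetical encoder: a single image -> a small RGBA volume."""
    def __init__(self, grid=16):
        super().__init__()
        self.grid = grid
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 64 * 64, 256), nn.ReLU(),
            nn.Linear(256, 4 * grid ** 3),
        )

    def forward(self, image):
        vol = self.net(image).view(-1, 4, self.grid, self.grid, self.grid)
        return torch.sigmoid(vol)          # RGB + density, each in [0, 1]

def render_front_to_back(volume):
    """Composite the RGBA volume along the depth axis (a crude volume render)."""
    rgb, alpha = volume[:, :3], volume[:, 3:4]
    transmittance = torch.cumprod(
        torch.cat([torch.ones_like(alpha[..., :1, :, :]),
                   1.0 - alpha[..., :-1, :, :]], dim=2), dim=2)
    return (transmittance * alpha * rgb).sum(dim=2)    # (B, 3, H, W) image

portrait = torch.rand(1, 3, 64, 64)        # placeholder single 2D portrait
volume = ImageTo3D()(portrait)
# Rotate the volume about the vertical axis and composite again for a side view.
novel_view = render_front_to_back(volume.rot90(1, dims=(2, 4)))
print(novel_view.shape)                    # torch.Size([1, 3, 16, 16])
```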

In another initiative, Nvidia collaborated with Stanford University researchers to bring lifelike motion to 3D characters. For example, users can feed the model video from tennis matches and transfer that lifelike motion to a 3D tennis-playing character. The simulated player can then sustain extended rallies with other characters, Nvidia said. Notably, the model creates 3D characters with diverse skills and realistic movement without requiring expensive motion-capture data.
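
The announcement doesn’t spell out the training setup, but physics-based character work of this kind typically rewards a simulated character for tracking poses estimated from ordinary video rather than from motion capture. The snippet below sketches one such pose-tracking reward as an illustration only; the weights, joint counts and array shapes are assumptions, not values from the paper.

```python
# Toy sketch (not the Stanford/Nvidia method): a pose-tracking reward of the kind
# used in physics-based character imitation. The simulated character is rewarded
# for matching joint positions estimated from ordinary video, so no motion-capture
# data is required. All constants below are illustrative assumptions.
import numpy as np

def imitation_reward(sim_joints, ref_joints, sim_root_vel, ref_root_vel,
                     w_pose=0.7, w_vel=0.3):
    """Reward in (0, 1]: higher when the simulated pose and root velocity
    track the reference trajectory extracted from video."""
    pose_err = np.mean(np.sum((sim_joints - ref_joints) ** 2, axis=-1))
    vel_err = np.sum((sim_root_vel - ref_root_vel) ** 2)
    return w_pose * np.exp(-2.0 * pose_err) + w_vel * np.exp(-0.1 * vel_err)

# Example with made-up data: 15 joints in 3D for one animation frame.
sim = np.random.rand(15, 3)
ref = sim + 0.05 * np.random.randn(15, 3)          # nearly on-target pose
print(imitation_reward(sim, ref, np.zeros(3), np.zeros(3)))
```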

Nvidia has also applied its AI smarts to neural rendering, which uses AI models to help simulate the physics of light as it reflects through a virtual scene. Its research demonstrates how AI models for textures, materials and volumes can be used to create film-quality, photorealistic visuals of objects in real time for video games and virtual worlds.

The company explained how its latest neural compression techniques can substantially increase the realism of such scenes, capturing far sharper detail than previous formats, in which the text of a book, for example, remains blurry.
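
The blog post doesn’t include implementation details, but the core idea behind neural compression of rendering assets can be illustrated with a toy: overfit a small coordinate network to a single texture so that the network’s weights, rather than the raw pixels, become the stored asset the renderer samples at shading time. The example below is that toy, not Nvidia’s codec, and its layer sizes and training settings are arbitrary assumptions.

```python
# Toy sketch of the idea behind neural texture compression (not Nvidia's codec):
# overfit a small coordinate network to one texture so that the network weights,
# rather than the raw pixels, become the stored asset.
import torch
import torch.nn as nn

H = W = 256
texture = torch.rand(H, W, 3)                        # placeholder "ground-truth" texture

# Pixel-center UV coordinates in [0, 1], one row per texel.
ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
uv = torch.stack([(xs + 0.5) / W, (ys + 0.5) / H], dim=-1).reshape(-1, 2)

decoder = nn.Sequential(                             # ~4.5k parameters vs. ~197k texel values
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-3)

for step in range(500):                              # fit the network to this one texture
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(decoder(uv), texture.reshape(-1, 3))
    loss.backward()
    optimizer.step()

# At render time the material samples the decoder instead of a stored bitmap.
reconstructed = decoder(uv).reshape(H, W, 3).detach()
print(float(nn.functional.mse_loss(reconstructed, texture)))
```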

Finally, Nvidia’s researchers showcased their recent progress in neural materials research. The paper details an AI system that learns how light reflects from photorealistic, multilayered materials, then distills those assets into smaller neural networks that run in real time. The result is up to 10 times faster shading, Nvidia’s researchers said. The company demonstrated the level of realism the approach can achieve with an image of a neural-rendered teapot, which accurately shows the ceramic material and the imperfections of its clear-coat glaze, as well as fingerprints, smudges and dust.
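
Again without access to the paper itself, the underlying pattern — distill an expensive, layered material model into a tiny network that is cheap to evaluate per shading sample — can be sketched as follows. The “reference” material here is a simple Lambert-plus-clear-coat stand-in, and the network shape and sample counts are arbitrary assumptions rather than anything from Nvidia’s system.

```python
# Toy sketch of "neural materials" (not the paper's model): distill an expensive,
# layered analytic shading function into a tiny MLP that is cheap to evaluate.
import torch
import torch.nn as nn

def reference_material(normal, light_dir, view_dir):
    """Pretend-expensive layered material: diffuse base + glossy clear coat."""
    n_dot_l = (normal * light_dir).sum(-1, keepdim=True).clamp(min=0.0)
    half_vec = nn.functional.normalize(light_dir + view_dir, dim=-1)
    n_dot_h = (normal * half_vec).sum(-1, keepdim=True).clamp(min=0.0)
    diffuse = torch.tensor([0.6, 0.5, 0.4]) * n_dot_l        # ceramic-ish base color
    clear_coat = 0.25 * n_dot_h.pow(64)                      # sharp glaze highlight
    return diffuse + clear_coat

neural_material = nn.Sequential(        # the small network that replaces the shader
    nn.Linear(9, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 3),
)
optimizer = torch.optim.Adam(neural_material.parameters(), lr=1e-3)

for step in range(1000):                # distillation: fit the MLP to random samples
    n = nn.functional.normalize(torch.randn(1024, 3), dim=-1)
    l = nn.functional.normalize(torch.randn(1024, 3), dim=-1)
    v = nn.functional.normalize(torch.randn(1024, 3), dim=-1)
    target = reference_material(n, l, v)
    pred = neural_material(torch.cat([n, l, v], dim=-1))
    loss = nn.functional.mse_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(float(loss))                      # the distilled material's remaining error
```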

Nvidia said all of its latest research will be made available at this year’s SIGGRAPH conference. It’s hopeful that developers and enterprises will embrace its new techniques to generate synthetic objects and characters to populate virtual worlds for applications such as robotics and autonomous vehicle training. Moreover, it hopes creatives such as artists, architects, filmmakers and video game designers will adopt its techniques to produce higher-quality visuals than were previously possible.

Images: Nvidia
