UPDATED 14:49 EST / JUNE 17 2024

Runway debuts new Gen-3 Alpha model for generating videos

Startup Runway AI Inc. today debuted a new artificial intelligence model, Gen-3 Alpha, that can generate 10-second videos based on text prompts.

New York-based Runway is backed by more than $190 million in funding from Google LLC, Nvidia Corp. and other investors. Gen-3 Alpha is the third model in a series of video generation algorithms that the company first debuted last February. According to TechCrunch, Runway plans to expand the product line down the road with several more capable versions of its new model.

The company says Gen-3 Alpha can generate higher-fidelity videos than its previous-generation AI. According to Runway, the quality improvements partly stem from the fact that the model is better at depicting motion. Additionally, Gen-3 Alpha is more adept at ensuring that the frames of a video are consistent with one another.

A second set of optimizations reduced the amount of time the model takes to generate videos. According to TechCrunch, Gen-3 Alpha can generate a 10-second clip in 90 seconds.

Runway is developing a new set of safety features for the model to ensure that it’s not used to generate harmful content. As part of the effort, the company will add a provenance system based on the C2PA standard. The system will embed information in videos created using Gen-3 Alpha indicating that they were generated by AI.

The C2PA standard is developed by an industry consortium of the same name that counts Intel Corp., Arm Holdings plc and other tech giants among its backers. The technology makes it possible to equip a multimedia file with metadata that not only indicates whether it was AI-generated, but also provides other information such as when it was created. C2PA stores this metadata in a format designed to block tampering attempts.
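The general idea behind such a provenance system can be sketched in a few lines of Python. The snippet below builds a simplified, C2PA-style manifest that records an AI-generated flag, a creation timestamp, and a hash binding the metadata to the file's exact contents; the field names and structure here are illustrative assumptions, not the actual C2PA wire format, which is JUMBF-encoded and cryptographically signed.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_manifest(video_bytes: bytes) -> dict:
    """Build a simplified, C2PA-style provenance manifest.

    Illustrative only: field names and layout are hypothetical,
    not the real C2PA specification.
    """
    return {
        # Tool that produced the asset.
        "claim_generator": "Gen-3 Alpha",
        "assertions": [
            # Flags the asset as AI-generated.
            {"label": "ai_generated", "data": {"ai_generated": True}},
            # Records when the asset was created.
            {"label": "created", "data": {"when": datetime.now(timezone.utc).isoformat()}},
        ],
        # The hash binds the manifest to the exact video bytes, so any
        # edit to the file invalidates the recorded provenance.
        "asset_hash": hashlib.sha256(video_bytes).hexdigest(),
    }

manifest = build_manifest(b"example video bytes")
print(json.dumps(manifest, indent=2))
```

A verifier would recompute the hash over the received file and compare it with the stored value; a mismatch shows the video was altered after the provenance record was attached.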

Runway reportedly plans to make Gen-3 Alpha available for customers in the coming days. The company will use the model to power three cloud services that can generate videos from images and text, as well as draw images based on user prompts. Down the road, Runway will roll out additional features designed to provide “more fine-grained control over structure, style, and motion.”

The company’s long-term commercialization plans will also see it offer customized versions of Gen-3 Alpha to enterprises. According to Runway, those customized models will enable customers to more closely align the style of AI-generated videos with their project requirements.

Runway’s long-term AI development roadmap focuses on what it refers to as general world models. The company says that such AI systems won’t simply draw objects, but will rather simulate them based on “realistic models of human behavior” and other complex data. Runway claims that Gen-2, the predecessor to Gen-3 Alpha, is an early example of this approach because it has gained “some understanding of physics and motion.”

The company faces competition from several other market players. In February, OpenAI detailed an AI-powered video generation system dubbed Sora. Last week, startup Luma AI Inc. introduced a rival model called Dream Machine that offers a similar set of capabilities.

Image: Runway
