UPDATED 18:07 EDT / MARCH 20 2023

Runway debuts AI model that can generate videos from text

Startup Runway AI Inc. today debuted Gen-2, an artificial intelligence model that can generate brief video clips based on text prompts.

New York-based Runway develops AI models that ease image and video editing tasks for creative professionals. Last year, the startup helped co-create the popular Stable Diffusion generative AI model. In December, it raised a $50 million Series C funding round at a reported valuation of $500 million.

Gen-2, the startup’s new AI model for generating videos, is an improved version of an existing neural network called Gen-1 that debuted in February. The startup says Gen-2 can generate higher-fidelity clips than its predecessor. Moreover, the model provides more customization options for users.

Runway’s original Gen-1 neural network takes an existing video as input along with a text prompt that describes what edits should be made. A user could, for example, supply Gen-1 with a video of a green car and the text prompt “paint the car red.” The model then automatically makes the corresponding edits.

Gen-1 can also modify a video by adapting it to the style of a reference image provided by the user. Gen-2, the new model that Runway debuted today, adds another way of generating clips. It doesn’t require a source video or reference image and allows users to create videos simply by entering a text prompt.

Runway detailed the technology that powers the model in an academic paper published earlier this year. According to the company, its model uses an AI method known as diffusion to generate videos.

With the diffusion method, researchers add a type of error called Gaussian noise to a file. They then train a neural network to remove the Gaussian noise and restore the original file. By repeating this process many times, the neural network learns how to analyze the input data it receives and turn it into a new file that matches the user’s specifications.
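The forward-noising step at the heart of that training process can be illustrated with a small numerical sketch. This is a generic, minimal illustration of the standard diffusion setup (a linear noise schedule and a noise-prediction loss), not Runway’s actual implementation; the array `x0` stands in for an image or video frame, and the “model” here is a dummy that predicts zero noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear noise schedule: the per-step noise level beta_t grows over T steps.
T = 100
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)  # cumulative signal-retention factor

def add_noise(x0, t):
    """Forward diffusion: corrupt clean data x0 with Gaussian noise at step t."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

def training_loss(predicted_eps, eps):
    """Training objective: the model's noise estimate should match the
    Gaussian noise that was actually added."""
    return np.mean((predicted_eps - eps) ** 2)

x0 = rng.standard_normal(8)          # stand-in for an image/video frame
xt, eps = add_noise(x0, t=50)        # noisy version at an intermediate step
loss = training_loss(np.zeros_like(eps), eps)  # dummy "model" predicting zero
```

By minimizing this loss across many noise levels, a real neural network learns to strip noise away step by step, which is what lets it start from pure noise plus a text prompt and denoise its way to a new clip.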

The company developed its model using a training dataset that comprised 240 million images and 6.4 million video clips. Afterwards, it held a series of user studies to evaluate Gen-2’s capabilities and said Gen-2 significantly outperformed two of the most advanced AI models in the same category.

Runway is not the only company developing AI models capable of generating videos. Last year, Meta Platforms Inc. researchers detailed a similar clip generation model called Make-A-Video. Like Gen-2, it can generate clips based on text prompts.

Image: Runway
