Runway AI launches an API to expand access to its most powerful video generation model
Generative artificial intelligence startup Runway AI Inc. said today that it's making its most advanced video generation model available to companies via an application programming interface, now in early access.
With the move, organizations will be able to integrate Runway’s Gen-3 Alpha Turbo model directly into their own applications, platforms and services, making it easier for their developers and other employees to create new video content with the tools they use for everyday work.
New York-based Runway said the new API makes its video creation model more accessible, so that advertising teams, for example, can create marketing videos on the fly within their existing workflows.
The API isn't available to everyone yet, but interested companies can sign up for a waitlist to gain access. The company said in a blog post that it wants to gather feedback from early adopters of the API before rolling it out to everyone in the coming weeks.
The generative AI startup is one of the leading players in video creation, having been founded in 2018 and released a series of increasingly powerful AI models designed for that purpose. Its foundational models power a suite of tools that are meant to simplify the process of creating videos, offering capabilities such as real-time video editing, automated rotoscoping and motion tracking.
Those tools, which are aimed at both professionals and hobbyists, are said to dramatically reduce the time and effort required to generate high-quality videos. For instance, its automated rotoscoping tool enables users to quickly separate foreground elements from the background, a task that has traditionally required considerable effort and familiarity with sophisticated editing software.
Meanwhile, its motion-tracking tool makes it easier to follow moving objects or people in videos and apply effects to them. Users can perform both tasks simply by describing what they want with text-based prompts.
API access and pricing
Runway announced its Gen-3 Alpha Turbo model back in June, saying at the time that it was the most advanced model it had released so far. It enables users to generate higher-fidelity videos than its previous models, with better depiction of motion.
The new API will be offered via two different subscription plans, one for individuals and small teams, and another for enterprises. Depending on which plan customers choose, they’ll gain access to endpoints that allow them to integrate the latest model with various tools they’re already using, so users can initiate video generation tasks without having to disrupt their workflows.
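Runway's article-level description doesn't spell out the API contract, so the endpoint URL, header names and payload fields in the sketch below are placeholders, not the company's published interface. A minimal sketch of how a client might assemble a text-to-video generation request against such an API:

```python
import json
import urllib.request

# Hypothetical base URL -- the real endpoint, auth scheme and payload
# fields are assumptions for illustration, not Runway's documented API.
API_BASE = "https://api.example-runway.test/v1"

def build_generation_request(prompt: str, duration_seconds: int,
                             api_key: str) -> urllib.request.Request:
    """Assemble (but do not send) a hypothetical video-generation request."""
    payload = {
        "model": "gen3a_turbo",        # assumed model identifier
        "prompt_text": prompt,
        "duration": duration_seconds,  # Gen-3 Alpha Turbo tops out at 10 seconds
    }
    return urllib.request.Request(
        f"{API_BASE}/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_generation_request("A drone shot of a coastline at sunset", 10, "YOUR_API_KEY")
print(req.get_method())                   # POST
print(json.loads(req.data)["duration"])   # 10
```

In a real integration, the request would be sent with `urllib.request.urlopen` (or an HTTP client of choice) from wherever the team already works, which is the point of exposing the model through an API rather than only through Runway's own platform.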
Runway said it will charge one cent per credit to access the API, with five credits required for each second of generated video. The Gen-3 Alpha Turbo model can create videos up to 10 seconds long, so a maximum-length clip costs 50 cents.
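The stated rates reduce to simple integer arithmetic, sketched here from the figures in the article (one cent per credit, five credits per second, 10-second cap):

```python
CREDIT_PRICE_CENTS = 1    # Runway charges one cent per credit
CREDITS_PER_SECOND = 5    # five credits per second of generated video
MAX_SECONDS = 10          # Gen-3 Alpha Turbo's maximum clip length

def video_cost_cents(seconds: int) -> int:
    """Cost in cents to generate a clip of the given length."""
    if not 1 <= seconds <= MAX_SECONDS:
        raise ValueError(f"clip length must be between 1 and {MAX_SECONDS} seconds")
    return seconds * CREDITS_PER_SECOND * CREDIT_PRICE_CENTS

print(video_cost_cents(1))   # 5 -- a one-second video costs 5 cents
print(video_cost_cents(10))  # 50 -- a maximum-length clip costs 50 cents
```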
With the launch of the API, Runway is making the Gen-3 Alpha Turbo model more accessible, hoping that leads to wider adoption. Previously, the only way to access it was through the Runway platform.
According to the company, the advertising giant Omnicom Group Inc. is already experimenting with the API, though it didn't say what kinds of videos the firm is producing with it.
The debut of the API moves Runway further ahead of its rivals in the generative AI video industry. Competitors such as OpenAI and Google DeepMind have yet to make their rival video generation models publicly available.
The company has been in the news a lot lately. Days earlier, Runway announced a new capability within its platform called "Video to Video," which gives users another way to introduce more precise movement and expressiveness into their generated videos. They simply upload an existing video that they want to replicate, describe their desired aesthetic direction in a prompt, and the model does the rest.
In July, it was revealed that Runway was holding talks with potential investors about the possibility of raising an additional $450 million in funding at a valuation of $4 billion, with plans to use the capital to accelerate the development of its models and expand its developer, sales and go-to-market teams.
That month, the company also found itself at the center of some controversy when a leaked document emerged showing that it had scraped content from “thousands of videos from popular YouTube creators and brands, as well as pirated films,” in order to train its video generation models.
Image: SiliconANGLE/Microsoft Designer