Startups Runway AI Inc. and DeepSeek today released two foundation models that they say can outperform algorithms developed by the tech industry’s largest players.
Runway’s new algorithm, Gen-4.5, provides text-to-video features. DeepSeek, in turn, has released an updated version of its namesake reasoning model. The Chinese startup says that DeepSeek-V3.2 is better at coding and math-related tasks than its predecessor.
Founded in 2018, Runway is backed by more than $300 million in funding from Nvidia Corp., SoftBank Group Corp. and other investors. The company received a $3 billion valuation in its most recent raise. It provides access to its video generation models through an application programming interface and cloud-based design tools.
Runway says Gen-4.5 has set a new record on the Artificial Analysis Text to Video benchmark, which is used to compare AI video generators’ performance. The model’s score is partly the result of optimizations that make it better at following prompts than its predecessor. Users can ask Gen-4.5 to customize a clip’s camera angle, lighting and a range of other parameters.
The company says the model also produces more realistic clips. Compared with earlier video generators, Gen-4.5 is more adept at rendering physics effects such as motion and collisions. However, the model has certain limitations: It generates some effects too early and occasionally fails to render certain objects specified by the user.
Runway plans to roll out Gen-4.5 to its products by the end of the week. According to the company, the model will offer speed and pricing comparable to those of the previous-generation Gen-4 algorithm. Runway runs the Gen-4.5 deployment that powers its products on an AI cluster equipped with Nvidia’s Blackwell and Hopper chips, which it also used to train the model.
“Together, we are partnering to advance the entire lifecycle of AI from pretraining, to post-training and inference,” said Nvidia Chief Executive Officer Jensen Huang.
DeepSeek-V3.2, the other new frontier model that debuted today, is optimized for reasoning tasks such as debugging code. It outperformed GPT-5 on the SWE Multilingual and Terminal Bench 2.0 programming benchmarks. However, it fell short of the records set by Google LLC’s Gemini model series.
Reasoning models use a module called an attention mechanism to process text. The attention mechanism determines the meaning of a word by reviewing the surrounding text, identifying the most relevant phrases and factoring them into its calculations. That process accounts for a significant percentage of LLMs’ hardware use.
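That review process can be sketched as standard scaled dot-product attention. The sketch below is a generic illustration of the mechanism the paragraph describes, not DeepSeek’s implementation; the function name and toy data are this example’s own. Note that every query token scores every key token, which is why the step dominates hardware use as text grows longer.

```python
import numpy as np

def attention(Q, K, V):
    """Generic scaled dot-product attention sketch: each query scores
    every key, softmax turns the scores into weights, and the output
    is the corresponding weighted sum of values."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # (n_queries, n_keys) similarity scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V  # each output mixes the values of relevant tokens

# Toy self-attention: 4 tokens, 8-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
out = attention(X, X, X)
print(out.shape)  # (4, 8): one context-aware vector per token
```

Because `scores` has one entry per query-key pair, the cost grows quadratically with sequence length, which is the "significant percentage of hardware use" the paragraph refers to.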
According to DeepSeek, DeepSeek-V3.2 includes a new implementation of the attention mechanism that requires less infrastructure. The company calls that implementation DSA. It lowers hardware use by reducing the amount of text that DeepSeek V3.2 must review to determine the meaning of words.
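DeepSeek has not detailed DSA’s internals in this article, but the idea of reducing how much text the model reviews can be illustrated with a generic top-k sparse attention sketch, where each query attends only to its highest-scoring keys. The function name, the choice of top-k selection and the toy inputs are assumptions for illustration, not DeepSeek’s actual design.

```python
import numpy as np

def sparse_attention(Q, K, V, k=2):
    """Generic top-k sparse attention sketch: each query keeps only its
    k highest-scoring keys and ignores the rest, shrinking the amount
    of text factored into each token's calculation."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # full scores, computed here for clarity;
    # a real efficiency-oriented implementation would skip pruned entries
    top = np.argsort(scores, axis=-1)[:, -k:]  # indices of the k best keys
    masked = np.full_like(scores, -np.inf)     # -inf -> zero weight after softmax
    np.put_along_axis(masked, top,
                      np.take_along_axis(scores, top, axis=-1), axis=-1)
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V
```

With `k` fixed, per-query work no longer scales with the full sequence length, which is the kind of saving the paragraph attributes to DSA.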
Companies that prioritize output quality over hardware efficiency can use DeepSeek-V3.2-Speciale, a performance-optimized version of the model that debuted alongside it. DeepSeek measured the LLM’s performance by having it answer question sets from the International Mathematical Olympiad and International Olympiad in Informatics. The model achieved gold-level scores across both tests.