UPDATED 08:00 EDT / OCTOBER 22 2024


Genmo introduces Mochi 1, an open-source text-to-video generation model

Genmo Inc., an artificial intelligence content generation platform, today announced the preview release of its new open-source model Mochi 1, capable of video generation.

The company said Mochi 1 delivers dramatic improvements in state-of-the-art motion quality as well as in adherence to users' text prompts. It's not uncommon for AI models to "daydream" even when given specific written instructions, so Genmo said it trained its model to follow prompts strongly.

In addition to the new model release, Genmo unveiled a new hosted playground, where users can try out Mochi 1 for free. The weights are also available on the AI model hosting site Hugging Face.

Alongside the news, Genmo shared that it has raised $28.4 million in Series A funding led by NEA, with participation from The House Fund, Gold House Ventures, WndrCo, Eastlink Capital Partners and Essence VC. The company said it would use the funding to help unlock what it calls the "right brain of artificial general intelligence."

Mochi 1 represents what the company says is the first step toward building that right brain, which is commonly associated with creativity, whereas the left brain is associated with analytical and logical thinking. Much investment and work has gone into video generation since the launch of high-profile AI video generators such as Runway AI Inc.'s models and OpenAI's Sora.

The company said the new model sets a high bar for realistic motion dynamics by modeling physics such as fluid movement, fur and hair simulation and, most importantly, human motion. The model can generate smooth videos at 30 frames per second for durations of up to 5.4 seconds, which is currently the industry standard for most models on the market.
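Those two numbers together imply a fixed frame budget per clip. A quick sketch of the arithmetic (the frame count is derived here from the article's figures, not stated by Genmo):

```python
FPS = 30          # frames per second, per the article
MAX_SECONDS = 5.4 # maximum clip duration, per the article

# Maximum number of frames the model would need to generate for one clip.
max_frames = round(FPS * MAX_SECONDS)
print(max_frames)  # 162
```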

The model adheres closely to prompts when users are clear and concise about what they want it to display. This ensures it delivers accurate videos that reflect users' instructions, the company said, giving them detailed control over characters, scenes and more.

To build Mochi 1, Genmo used a 10 billion-parameter diffusion model; parameters are the internal variables a model learns during training, and more of them generally allow a model to capture finer detail. Under the hood, the company used its own Asymmetric Diffusion Transformer, or AsymmDiT, architecture, which it said can efficiently process user prompts and compressed video tokens by streamlining text processing to focus on visuals.

AsymmDiT jointly builds video from text and visual tokens, similar to Stable Diffusion 3, but the company said its visual stream has nearly four times as many parameters as the text stream thanks to a larger hidden dimension. The asymmetric design lowers the model's memory use for deployment.
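The general idea of such an asymmetric dual-stream design can be sketched as follows. This is a minimal NumPy illustration of the concept only, not Genmo's implementation: the dimensions, token counts and function names are all hypothetical, and random matrices stand in for learned weights. The key point it demonstrates is that the two streams keep different hidden widths (the visual stream about 4x wider) while being projected into one shared space for joint attention.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration: the visual stream is ~4x wider
# than the text stream, mirroring the asymmetry described in the article.
TEXT_DIM, VISUAL_DIM, JOINT_DIM = 256, 1024, 512
N_TEXT, N_VISUAL = 8, 32  # token counts per modality

def project(x, d_out, seed):
    """Random linear projection standing in for a learned weight matrix."""
    w = np.random.default_rng(seed).normal(0.0, 0.02, (x.shape[-1], d_out))
    return x @ w

def joint_attention(text_tokens, visual_tokens):
    """Project both streams into one shared space, attend over the
    concatenated sequence, then split back so each stream keeps its
    own (asymmetric) hidden width."""
    q = np.concatenate([project(text_tokens, JOINT_DIM, 1),
                        project(visual_tokens, JOINT_DIM, 2)])
    k, v = q, q  # plain self-attention over the joint sequence
    scores = q @ k.T / np.sqrt(JOINT_DIM)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    out = weights @ v
    text_out = project(out[:N_TEXT], TEXT_DIM, 3)
    visual_out = project(out[N_TEXT:], VISUAL_DIM, 4)
    return text_out, visual_out

text = rng.normal(size=(N_TEXT, TEXT_DIM))
visual = rng.normal(size=(N_VISUAL, VISUAL_DIM))
t_out, v_out = joint_attention(text, visual)
print(t_out.shape, v_out.shape)  # (8, 256) (32, 1024)
```

Keeping the text stream narrow is what saves parameters and memory: most of the capacity sits in the visual stream, while the text tokens still participate in every joint attention step.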

The Mochi 1 preview includes a base model that can generate 480p video, but the company said the full version is slated for release before the end of the year. It will include Mochi 1 HD, which will support 720p video generation with enhanced fidelity and smoother motion.

Genmo said it trained Mochi 1 entirely from scratch. At 10 billion parameters, it said, the model is the largest video generation model ever released as open source. The company's existing closed-source image and video generation models already have more than 2 million users. Released under the Apache 2.0 open-source license, Mochi 1's model weights and source code are available for developers and researchers to work with and can be found on GitHub and Hugging Face.

Image: Genmo
