

Moonvalley, a generative artificial intelligence research startup, today announced the launch of a foundation AI video model, trained on clean sources, for Hollywood studios, filmmakers and enterprise partners.
At a time when generative AI, especially generative video, is trending, it's becoming increasingly difficult to stand out from the crowd. However, Naeem Talukdar, co-founder and chief executive of Moonvalley, thinks that "Marey," the model being released by the company, will distinguish itself by being ethically sourced and licensed.
“The industry default to date has been that it can’t be done,” Talukdar told SiliconANGLE in an interview. “That’s where the fair use arguments kind of came from. It was: We don’t have a choice. We have to go and scrape everybody’s art and do this because there’s no other way of doing it.”
Using only fully licensed and commercially available sources can pose a problem, of course, because it reduces the total diversity of training data. However, Talukdar said that hasn't prevented the company from creating a model he believes is on par with, or better than, what's already available.
To do this, Talukdar said, Moonvalley works closely with the entertainment industry, with a vision of creating a model that enhances and augments the creator's work rather than just producing video. The name of the model itself, "Marey," refers to French inventor Étienne-Jules Marey, whose work in sequential high-speed photography helped lead to the development of the first moving picture films.
To develop its model, Moonvalley partnered with Asteria, a generative AI film and animation studio. Led by two-time Oscar nominee Bryn Mooser, Asteria brings significant industry expertise through its ownership of documentary studio XTR and the streaming platform Documentary+, which boasts a reach of more than 120 million households.
In training and developing Marey, Talukdar said, the vision was to prioritize industry collaboration and reject the prevailing approach of commoditizing artwork. That meant completely redefining the nature of the generative video AI model from the perspective of creators. To achieve this, the company collaborated with working filmmakers and editors to create a product that addresses their needs.
“The idea was how to build this technology around the creator,” Talukdar said. “I need to be able to click the camera and drag it around. I need to be able to see my characters, stage and cast them. I can’t do that by looking at a giant text prompt.”
Marey’s asset library enables creators to construct scenes with video game-like flexibility. They can define and personalize characters, and import images of any element — characters, objects and settings — to be generated and incorporated into their compositions.
For example, the AI could generate a Prohibition-era speakeasy and populate it with a group of pubgoers, background characters, main cast and perhaps a specific set of mugs that persist from scene to scene. It can then condition each scene so that it retains continuity from clip to clip. Breaking down and losing coherence across multiple generations is a fundamental problem for many video-generating AI models, but Talukdar said Marey handles it well.
According to Moonvalley, Marey provides native video generation of up to 30 seconds, breaking through the industry average of five- to 10-second clips and allowing users to produce longer high-quality scenes. The average time between cuts in a TV show is about three to eight seconds per shot, with some variation depending on the genre and the purpose of the shot. Having extra time on either side of a shot can make it easier to edit into a full scene.
Thanks to close collaboration with industry filmmakers, Marey offers precision camera controls within generated outputs. That means a user can have the camera press in, linger, pan or move through the scene in various ways. The model also enables nuanced control over in-scene movements, including objects such as individual checker pieces, or animating the wind blowing through a person's hair.
Marey's release comes as developers continue to churn out text-to-video AI models, including Meta Platforms Inc., Google LLC, Haiper Ltd. and Genmo Inc. Earlier this year, creative software developer Adobe Inc. released a public beta test of a new generative video AI tool powered by its Firefly Video Model that it claims is commercially safe.
“Technology has always been the driving force behind the evolution of cinema,” said Bryn Mooser, co-founder and chief executive of Asteria. “AI is the most powerful technological change in our lifetime and we are building a model and tools for filmmakers to be able to create work that until now only studios can afford to make.”