UPDATED 19:11 EDT / SEPTEMBER 25 2024


Allen Institute for AI debuts new Molmo series of open-source multimodal models

The Allen Institute for AI today released Molmo, a family of open-source language models that can process text and images.

The launch came against the backdrop of Meta Platforms Inc.’s Connect 2024 product event. Alongside new mixed reality devices, the company debuted an open-source language model series of its own called Llama 3.2. Two of the models in the lineup have multimodal processing features similar to those offered by Molmo. 

The Allen Institute for AI, or Ai2, is a Seattle-based nonprofit focused on machine learning research. Its new Molmo model series comprises four neural networks. The most advanced model features 72 billion parameters, the most hardware-efficient has 1 billion and the other two contain 7 billion each. 

Alongside the ability to answer natural language prompts, all four algorithms offer multimodal processing features. Molmo can identify objects in an image, count them and describe them. The models are also capable of performing related tasks such as explaining the data visualized in a chart.
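For readers who want to try the models themselves, the sketch below shows roughly how a Molmo checkpoint can be prompted with an image and a question. It follows the usage example Ai2 publishes on the Molmo Hugging Face model cards; the checkpoint name and the custom process and generate_from_batch methods come from that card's remote code rather than from the announcement covered here, so treat them as illustrative assumptions that may change across releases.

```python
# Minimal sketch of prompting a Molmo checkpoint with an image, based on the
# usage example on Ai2's Hugging Face model card. The model ID and the custom
# process()/generate_from_batch() methods are assumptions taken from that
# card's trust_remote_code implementation, not from this article.
import requests
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig

MODEL_ID = "allenai/Molmo-7B-D-0924"  # assumed checkpoint name

processor = AutoProcessor.from_pretrained(
    MODEL_ID, trust_remote_code=True, torch_dtype="auto", device_map="auto"
)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, trust_remote_code=True, torch_dtype="auto", device_map="auto"
)

# Pair an image with a natural language prompt, e.g. a counting question.
image = Image.open(
    requests.get("https://picsum.photos/id/237/536/354", stream=True).raw
)
inputs = processor.process(images=[image], text="How many dogs are in this picture?")

# Make a batch of size 1 and move it to the model's device.
inputs = {k: v.to(model.device).unsqueeze(0) for k, v in inputs.items()}

output = model.generate_from_batch(
    inputs,
    GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
    tokenizer=processor.tokenizer,
)

# Decode only the newly generated tokens.
generated = output[0, inputs["input_ids"].size(1):]
print(processor.tokenizer.decode(generated, skip_special_tokens=True))
```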

In an internal evaluation, the Allen Institute for AI used 11 benchmark tests to compare Molmo against several proprietary large language models. The version of Molmo with 72 billion parameters achieved a score of 81.2, slightly outperforming OpenAI’s GPT-4o. The two Molmo versions with 7 billion parameters trailed the OpenAI model by fewer than five points.

The smallest model in the series, which features 1 billion parameters, has more limited processing capabilities. But the Allen Institute for AI says that it can still outperform some algorithms with 10 times as many parameters. Furthermore, the model is compact enough to run on a mobile device.

One of the contributors to the Molmo series’ processing prowess is the dataset on which it was trained. The dataset comprised several hundred thousand images, each accompanied by a highly detailed description of the depicted objects. According to the Allen Institute for AI, training on those descriptions enabled Molmo to become more adept at object recognition than larger models trained on lower-quality data.

“We take a vastly different approach to sourcing data with an intense focus on data quality, and are able to train powerful models with less than 1M image text pairs, representing 3 orders of magnitude less data than many competitive approaches,” the researchers who developed Molmo detailed in a blog post.

Molmo’s debut today coincided with the release of Llama 3.2, a new family of language models from Meta. Like the Molmo series, the lineup comprises four open-source neural networks.

The first two models contain 11 billion and 90 billion parameters, respectively. They’re based on a multimodal architecture that allows them to process not only text but also images. Meta says that the models can perform image recognition tasks with accuracy comparable to that of GPT-4o mini, a scaled-down version of GPT-4o.

The two other models in the Llama 3.2 series focus on text processing tasks. The more advanced of the two features 3 billion parameters while the other has about a third as many. Meta says that the models can outperform comparably sized algorithms across a wide range of tasks.

Photo: Allen Institute for AI
