Meta introduces Llama 3.1, its biggest and best open-source AI model to date
Meta Platforms Inc. today unveiled its largest open-source artificial intelligence model to date, Llama 3.1 405B, which the company claims can rival even the most powerful closed models on the market, including those from OpenAI and Anthropic PBC.
According to Meta, Llama 3.1 excels at state-of-the-art capabilities such as general knowledge, math, tool use and multilingual translation. On the language front, the company added support for eight new languages, including French, German, Hindi, Italian, Portuguese and Spanish, with more on the way.
“Our experimental evaluation suggests that our flagship model is competitive with leading foundation models across a range of tasks, including GPT-4, GPT-4o, and Claude 3.5 Sonnet,” Meta’s research team said in a blog post. “Additionally, our smaller models are competitive with closed and open models that have a similar number of parameters.”
Llama 3.1 is an upgrade to the Llama 3 large language model the company released in April 2024, which was available only in 8 billion- and 70 billion-parameter versions. The new release adds a staggering 405 billion-parameter size, and the two smaller models are also getting upgrades.
The new 3.1 model has a 128,000-token context window, the maximum amount of input users can feed it before text gets cut off. That is enough for the model to read extremely large reports, medium-sized books, long transcripts and other lengthy documents: roughly 96,000 words, or about the length of a standalone 400-page novel.
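As a rough back-of-the-envelope check, those figures line up with common rules of thumb of about 0.75 words per token and roughly 250 words per printed page (both heuristics, not figures from Meta's announcement):

```python
# Rough estimate of how much text fits in a 128,000-token context window.
# The ratios below are common heuristics, not numbers from Meta's announcement.
CONTEXT_TOKENS = 128_000
WORDS_PER_TOKEN = 0.75    # typical for English text with subword tokenizers
WORDS_PER_PAGE = 250      # rough average for a printed novel page

words = CONTEXT_TOKENS * WORDS_PER_TOKEN   # ~96,000 words
pages = words / WORDS_PER_PAGE             # ~384 pages

print(f"~{words:,.0f} words, roughly a {pages:,.0f}-page novel")
```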
Meta said that the new context window and multilingual support will also come to the 8B and 70B models. That will keep them easy to use in smaller footprints while still providing advanced reasoning and supporting advanced use cases such as long-form summarization, multilingual conversation and coding.
The company also said it’s changing its licensing so that developers may now use the outputs from Llama models, including the new 405B model, to “teach” smaller models. That will allow developers to use larger, smarter models to improve other models through training and fine-tuning.
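In practice, that kind of “teaching” usually takes the form of distillation or synthetic-data fine-tuning: the large model answers a set of prompts, and a smaller model is trained to reproduce those answers. A minimal sketch of the idea follows, with hypothetical helper functions standing in for whatever inference and training stack a developer actually uses; none of these names come from a real Llama API:

```python
# Minimal sketch of output-based distillation. The two helpers below are
# hypothetical placeholders, not part of any real Llama API or toolkit.

def generate_with_large_model(prompt: str) -> str:
    """Placeholder: call the 405B 'teacher' model and return its completion."""
    return "teacher completion for: " + prompt

def fine_tune_small_model(examples: list[tuple[str, str]]) -> None:
    """Placeholder: fine-tune an 8B or 70B 'student' on (prompt, output) pairs."""
    print(f"fine-tuning on {len(examples)} synthetic examples")

prompts = [
    "Summarize the attached contract in three sentences.",
    "Explain recursion to a beginner.",
]

# Step 1: use the large model to generate answers (synthetic training data).
synthetic_data = [(p, generate_with_large_model(p)) for p in prompts]

# Step 2: train the smaller model on the teacher's outputs.
fine_tune_small_model(synthetic_data)
```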
Meta Chief Executive Mark Zuckerberg said the release of Llama 3.1 represents the company’s commitment to open-source innovation.
“Today, Linux is the industry standard foundation for both cloud computing and the operating systems that run most mobile devices — and we all benefit from superior products because of it,” Zuckerberg said in a blog post. “I believe that AI will develop similarly. Today, several tech companies are developing leading closed models. But open source is quickly closing the gap.”
According to Zuckerberg, keeping the Llama models open source preserves each individual's ability to use and train their own models without worrying that some organization could pull the rug out from under them, letting them “control their own destiny.” It also makes models more affordable and efficient in the long run.
Zuckerberg said that inference on Llama 3.1 405B can be run on developers’ own infrastructure at roughly 50% of the cost of large closed-source models such as OpenAI’s flagship model GPT-4o.
“The open-source nature of Llama 3.1 405B represents a significant step forward in democratizing access to AI technology,” Victor Botev, co-founder and chief technology officer of AI research assistant tool developer Iris.ai, told SiliconANGLE. “Meta is enabling researchers and developers worldwide to explore, innovate, and build upon state-of-the-art language AI without the barriers of proprietary APIs or expensive licensing fees. This approach emphasizes transparent development, fosters collaboration and accelerates progress in the field, potentially leading to breakthroughs that benefit society as a whole.”
Botev warned, however, that the extremely large size of the model could work against it. Prioritizing colossal model sizes in AI development comes with pitfalls, such as heavy computational resource and energy requirements, which can lead to both cost and environmental sustainability issues down the road.
“Innovations in model efficiency might benefit the AI community more than simply scaling up to larger sizes,” Botev said.
Everyday users can try out Meta’s new Llama 3.1 405B model right now using the company’s Meta AI app. However, it must be selected manually and is currently in preview, which means users get only a limited number of queries each week before the assistant drops down to the lower-quality Llama 3.1 70B model.