UPDATED 12:20 EDT / APRIL 10 2024

Google updates its Gemma AI model family with variants for coding and research

Google LLC on Tuesday announced the first new additions to its Gemma family of lightweight, open-source artificial intelligence large language models, bringing coding capabilities and new opportunities for research experimentation.

The Gemma models share technical components with Google’s Gemini model, which is the most complex and powerful model that the company has produced to date. Gemini underlies the company’s Gemini AI chatbot, formerly named Bard, which is available on the web and mobile devices, and it’s also used to power many of Google’s AI-based services.

One of the two new variants is CodeGemma, a lightweight model for code completion and generation tasks, combined with instruction-following capabilities. The other is RecurrentGemma, an efficient model designed for research purposes. Google also showcased an updated version of Gemma itself, with performance improvements and bug fixes.

CodeGemma is a powerful but lightweight coding model available in three sizes. The first is a pretrained 7 billion-parameter variant specialized in code completion and code generation tasks, a general-purpose version for everyday development work. The next is a 7 billion-parameter instruction-tuned variant for code chat and instruction-following, able to understand a developer's intent, recommend code changes and supply code blocks. Finally, there's a pretrained 2 billion-parameter model for fast completion, sized to fit on a local machine.

Using these new model variants, CodeGemma can complete lines and even generate entire blocks of code either locally or using cloud resources, Google said. The model was trained on 500 billion tokens, primarily English-language data drawn from web documents, mathematics and code. It is proficient in multiple languages including Python, JavaScript and Java, as well as other popular languages.
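For the pretrained completion variants, Google's CodeGemma model card describes a fill-in-the-middle prompt format built from special tokens. The sketch below illustrates how such a prompt is assembled; the token strings follow the published model card, but treating prompt construction as a standalone helper is an illustrative assumption, not code from Google.

```python
# Minimal sketch of building a fill-in-the-middle (FIM) prompt for
# CodeGemma's pretrained completion variants. The special tokens follow
# the format described in Google's CodeGemma model card; everything else
# here is an illustrative assumption.

def build_fim_prompt(prefix: str, suffix: str) -> str:
    # The model is asked to generate the code that belongs between
    # `prefix` and `suffix`, emitting tokens after <|fim_middle|>.
    return f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"

prompt = build_fim_prompt(
    prefix="def fibonacci(n):\n    ",
    suffix="\n    return a",
)
print(prompt)
```

In practice a prompt like this would be tokenized and passed to a CodeGemma checkpoint (for example through an inference library such as Hugging Face Transformers), with generation stopped at the model's file-separator token.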

RecurrentGemma is a model designed to use recurrent neural networks and local attention to improve memory efficiency. That greatly lowers its memory requirements, allowing it to run on devices with limited memory, such as a single graphics processing unit or central processing unit. Google said it's a technically distinct model designed for research purposes.

Recurrent neural networks, in particular, scale efficiently when generating long sequences, allowing the model to produce more tokens per second at significantly larger batch sizes, even with limited memory.

Chart: RecurrentGemma sustains fast sampling speeds at long sequence lengths, while transformer-based models such as Gemma slow down as samples grow longer. Image: Google

With these improvements, RecurrentGemma can greatly reduce memory usage and increase performance, while still producing benchmark scores similar to the Gemma 2B model.
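The memory advantage comes down to how the two architectures track context: a transformer keeps a key/value cache that grows with every generated token, while a recurrent model carries a fixed-size state. The back-of-the-envelope sketch below illustrates that difference; all the layer counts and dimensions are made-up illustrative numbers, not Gemma's or RecurrentGemma's actual configurations.

```python
# Back-of-the-envelope comparison of transformer KV-cache memory (grows
# with sequence length) versus a fixed-size recurrent state (constant).
# Every size below is an illustrative assumption, not a real Gemma or
# RecurrentGemma configuration.

def kv_cache_bytes(seq_len, n_layers=18, n_kv_heads=1, head_dim=256,
                   bytes_per_value=2):
    # Transformer: 2 cached tensors (keys and values) per layer, each
    # with seq_len * n_kv_heads * head_dim entries.
    return 2 * n_layers * seq_len * n_kv_heads * head_dim * bytes_per_value

def recurrent_state_bytes(n_layers=18, state_dim=2560, bytes_per_value=2):
    # Recurrent model: one fixed-size state per layer, independent of
    # how many tokens have been generated so far.
    return n_layers * state_dim * bytes_per_value

for seq_len in (1_000, 10_000, 100_000):
    print(seq_len, kv_cache_bytes(seq_len), recurrent_state_bytes())
```

Running the loop shows the cache footprint growing linearly with sequence length while the recurrent state stays constant, which is why a model like RecurrentGemma can generate very long sequences on memory-limited hardware.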

“RecurrentGemma showcases a non-transformer model that achieves high performance, highlighting advancements in deep learning research,” Google said in its blog post.

Google also announced the release of Gemma 1.1, which improves overall performance over the previous version and includes bug fixes. The company said it listened to developer feedback in changing its terms of service to give developers more flexibility.

Access to Gemma and the new models is available through the Gemma website, the Vertex AI Model Garden – Google's cloud service for managed AI development – the Hugging Face model repository, Nvidia Corp.'s NIM application programming interfaces, or for download via Kaggle.

Image: Pixabay
