Google releases Gemma family of open-source AI models inspired by Gemini
Google LLC released Gemma today, a family of lightweight open-source large language models built using the same research and technology as Gemini, the company's largest and most powerful AI model to date.
According to Google, the name comes from the Latin word "gemma," meaning "precious stone," which is the root of the English word "gem."
Gemma shares inspiration and technical components with Gemini, the most powerful model the company has produced so far. Gemini also underpins the Gemini AI chatbot, recently renamed from Bard, which is available on the web and on mobile devices.
Gemma is available in two sizes, one with 2 billion parameters and the other with 7 billion. Both come in pretrained and instruction-tuned variants for developers and researchers to use. The company said the models achieve best-in-class performance compared with other models of similar size and can run directly on AI-enabled laptops and desktop computers.
Although Gemini is a fully multimodal model that can take in audio, video, images and text and output text and images, Gemma handles only text input and output. Gemini is also multilingual, whereas Google said that at release Gemma supports only English.
By releasing these models as open source, Google is ensuring that developers and researchers who cannot afford access to Gemini can still experiment with models built on the same technical foundations. Because the models are open, researchers and developers have direct access to the model weights and underlying architecture, allowing them to tune and adjust the models to fit their needs.
The Google team that created Gemma added that the models were designed in line with the company's AI safety principles. "As part of making Gemma pre-trained models safe and reliable, we used automated techniques to filter out certain personal information and other sensitive data from training sets," Google said. The models were also fine-tuned using reinforcement learning from human feedback to encourage responsible behavior, and the company's security teams conducted extensive safety evaluations.
Additionally, Google said it is releasing a responsible AI toolkit alongside the models "to help developers and researchers prioritize building safe and responsible AI." The toolkit will help developers with safety classification, debugging Gemma's behavior and accessing best practices drawn from Google's own experience.
Developers interested in working with Gemma will find it ready to use on Colab and Kaggle notebooks starting today, and can also quickly grab it from repositories such as Hugging Face and Nvidia NeMo.
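For developers who take the Hugging Face route, loading the model looks roughly like the Python sketch below. The repository identifier "google/gemma-2b-it" and any license-acceptance step on Hugging Face are assumptions for illustration rather than details from Google's announcement.

```python
# Minimal sketch of loading a Gemma variant through the Hugging Face Transformers library.
# The "google/gemma-2b-it" identifier is an assumed repository name for the 2B
# instruction-tuned variant; accessing it may require accepting Google's license terms
# on Hugging Face first.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b-it"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short completion from a text-only prompt (Gemma accepts and produces text only).
inputs = tokenizer("Explain what an open large language model is.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```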
Image: geralt/Pixabay