Google merges its DeepMind and Google Brain AI research groups
Google LLC parent Alphabet Inc. is merging its two main artificial intelligence research groups into a single unit.
Alphabet Chief Executive Officer Sundar Pichai announced the move in a blog post published today. The search giant’s DeepMind and Google Brain groups are becoming a single unit known as Google DeepMind. It will be headed by a leadership team that comprises executives from both groups. In conjunction, Google is forming a scientific board to oversee the newly formed unit’s AI research.
DeepMind, one of the two groups involved in the merger, became part of the search giant through a 2014 acquisition. Co-founder Demis Hassabis led the group as CEO after the deal. Hassabis will remain in the CEO role following the group’s merger with Google Brain.
The leadership team of the new Google DeepMind unit will also include Google Brain Vice President Zoubin Ghahramani, as well as Eli Collins and Koray Kavukcuoglu. Collins was previously the vice president of products at Google Research. Kavukcuoglu is DeepMind’s vice president of research and will also lead the scientific board that Google is forming to oversee the unit’s AI work.
“To ensure the bold and responsible development of general AI, we’re creating a unit that will help us build more capable systems more safely and responsibly,” Pichai wrote. “Combining all this talent into one focused team, backed by the computational resources of Google, will significantly accelerate our progress in AI.”
In the years leading up to today’s merger, DeepMind and Google Brain both made major contributions to AI research. They developed a number of groundbreaking neural network models that unlocked new applications for machine learning. In parallel, the two research groups helped advance the technical foundations of AI.
In 2017, Google researchers invented a new way of designing neural networks that they named the Transformer architecture. It allows AI models to interpret text more accurately than was possible before. Today, the Transformer architecture powers many of the most advanced language models on the market, including OpenAI LP’s GPT-4.
Google Brain earlier developed the TensorFlow toolkit for building AI models. DeepMind, in turn, has created an AI system that can predict the structure of proteins, a task essential to understanding how they work. Researchers previously spent decades unsuccessfully attempting to develop a program that could reliably estimate protein structures.
In today’s blog post, Pichai disclosed that one of the new Google DeepMind unit’s priorities will be developing a “series of powerful, multimodal AI models.” A multimodal model is a neural network that can process multiple types of data. Google’s machine learning research roadmap reportedly covers several other areas as well.
Historically, DeepMind and Google Brain pursued research projects separately. Indeed, DeepMind has reportedly been at odds with Google’s AI researchers in recent years. But last month, it was reported that the groups have teamed up to develop a model meant to challenge OpenAI’s GPT-4. It’s believed the model could feature as many as a trillion parameters.
In parallel, Google is said to be working on another AI project that is known as Magi internally. The project reportedly aims to upgrade the company’s flagship search engine with additional machine learning features. It’s believed Google plans to add multiple chatbots, as well as enhancements that will provide more personalized search results for users.
Image: DeepMind