UPDATED 20:14 EDT / MAY 18 2021


Google shows off advances in conversational AI, search and TPU chips

Google LLC today announced some major breakthroughs in its artificial intelligence capabilities, including a next-generation conversational language model that produces far more realistic and interesting dialogue than anything the company has built so far.

Google’s Language Model for Dialogue Applications, or LaMDA, was announced during the company’s virtual I/O conference today and demonstrated some major leaps in AI language understanding too.

LaMDA’s skills were shown off in two separate conversations. In the first, LaMDA pretended to be the dwarf planet Pluto and answered questions on what people could expect to see if they visited. In the second, it played the role of a paper airplane, and discussed what it’s like flying through the air and how to make a plane that travels farther.

“It’s really impressive to see how LaMDA can carry on a conversation about any topic,” Google Chief Executive Sundar Pichai said. “It’s amazing how sensible and interesting the conversation is. But it’s still early research, so it doesn’t get everything right.”

Pichai pointed to LaMDA’s ability to refer to real facts and events in both conversations, such as recalling the New Horizons probe that flew past Pluto in 2015. It also cited the record distance for a paper airplane throw: over a thousand feet.


Pichai explained that LaMDA was built using the Transformer neural network architecture created by Google and open-sourced in 2017. Transformer networks are known to be exceptional at language understanding and are also used by rival conversational AI models such as OpenAI’s GPT-3. LaMDA was also trained using dialogue, so it knows how to carry out free-flowing conversations, Google said.
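As a rough illustration only (not Google's implementation), the scaled dot-product attention operation at the heart of the Transformer architecture can be sketched in a few lines of Python with NumPy:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Toy single-head scaled dot-product attention.

    q, k, v: arrays of shape (seq_len, d_k). Each output row is a
    weighted mix of the value rows, with weights derived from how
    similar each query is to each key.
    """
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)  # pairwise token similarities
    # Numerically stable softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Toy example: 3 tokens with 4-dimensional embeddings, attending to themselves
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(x, x, x)  # self-attention
print(out.shape)  # (3, 4)
```

Real Transformer models stack many such attention layers with multiple heads, learned projection matrices and feed-forward layers; this sketch shows only the core mechanism that lets each token weigh every other token in the sequence.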

Pichai said in his presentation that much of Google’s AI work is related to retrieving information, and that the whole point of LaMDA is to teach AI to understand language better so the technology can be used to improve Google’s products. As a next step, Pichai said, LaMDA’s creators are now working on how to add more insightful, unexpected and witty responses to its repertoire of conversational skills.

More helpful search results

One of the more immediate areas where conversational AI should prove helpful is Google Search. The company announced a new Multitask Unified Model, or MUM, that it says is a thousand times more powerful than its older Bidirectional Encoder Representations from Transformers, or BERT, model.

In a second presentation, Google explained that MUM has been designed to handle more complex searches and is trained using data crawled from the open web, with low-quality content being removed from the equation.

Google gave the example of someone who uploads a photo of a pair of boots and asks if they’re suitable for hiking up Mount Fuji. MUM is able to analyze the photo of the boots to work out exactly which brand and type they are, and then provide a detailed response that includes results that explain what the boots are good for, reviews and so on.

MUM can also generate language itself, so it can even create a narrative similar to what a human subject expert might say, Google explained. And that narrative would come with visual aids in the form of images and videos it finds on the web. MUM would also generate some links to more relevant content, such as what to do after hiking up Mount Fuji and so on.

Google said its users will be able to see MUM’s capabilities for themselves soon via new features and updates to Google Search and other products in the coming months. Though the company said it’s in the “early days” of exploring MUM, it called it an important milestone toward a future where “Google can understand all of the different ways people naturally communicate and interpret information.”

Analyst Holger Mueller of Constellation Research Inc. told SiliconANGLE that Google’s leadership in algorithms on silicon is what enables it to create AI that is far ahead of anything its competitors can come up with.

“Today it is pushing its lead in understanding languages further with LaMDA and also advancing its search capabilities with MUM,” Mueller said. “A marketer would cringe at the naming of both efforts, but that should not deter us from the key takeaway that Google is doing a lot of heavy lifting by humanizing machine interactions and with that changing user experiences.”

More powerful AI chips

Google’s AI is only as good as the hardware that powers it, and more advanced models require ever more powerful processors. To that end, the company also announced its fourth-generation tensor processing units at Google I/O, which it said can handle AI and machine learning tasks in “close-to-record wall clock time.”

Specifically, Google said the new TPUs are geared toward object detection, image classification, natural language processing, machine translation and recommendation engines.

The TPUv4 chips, as Google calls them, are said to be twice as fast at processing AI workloads as the previous-generation TPUv3 chips. They also provide a big boost in bandwidth and benefit from unspecified advances in interconnect technology, Google said. Overall, the company reported an average 2.7-times performance improvement over TPUv3.

Google’s TPUs are application-specific integrated circuits designed to accelerate AI workloads on the Google Cloud platform. The company uses its TPUs to power its own services, including Google Search, Google Assistant, Google Translate, Gmail and many others.

For the biggest computing tasks, Google will offer access to TPUv4 clusters that contain a total of 4,096 chips interconnected with 10 times as much bandwidth as other networking technologies. This means the TPUv4 clusters will be able to deliver over an exaflop of compute power, which is comparable to having 10 million average laptops all running at peak performance.
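A quick back-of-the-envelope check of those cluster figures (the per-chip number below is an inference from the stated pod specs, not something Google disclosed):

```python
# Back-of-the-envelope arithmetic for the TPUv4 pod figures above.
# Assumption: "over an exaflop" is taken as 1e18 operations per second.
pod_flops = 1e18        # one exaflop of compute per pod
chips_per_pod = 4096    # chips in a TPUv4 pod, per Google

per_chip_flops = pod_flops / chips_per_pod
print(f"{per_chip_flops / 1e12:.0f} TFLOPS per chip")  # 244 TFLOPS per chip

# Google's comparison: an exaflop spread across 10 million average laptops
laptops = 10_000_000
per_laptop_flops = pod_flops / laptops
print(f"{per_laptop_flops / 1e9:.0f} GFLOPS per laptop")  # 100 GFLOPS per laptop
```

In other words, the laptop comparison implies an "average laptop" at roughly 100 gigaflops of peak performance, which is in the right ballpark for a consumer CPU of the era.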

“This is a historic milestone for us — previously to get an exaflop, you needed to build a custom supercomputer,” Pichai said. “But we already have many of these deployed today and will soon have dozens of TPUv4 pods in our data centers, many of which will be operating at or near 90% carbon-free energy.”

Google said it will make the TPUv4 pods available on its cloud infrastructure platform later this year.

Images: Google
