UPDATED 15:44 EDT / MARCH 31 2023

AI

Google upgrades Bard with technology from its cutting-edge PaLM language model

Google LLC has enhanced its Bard chatbot’s capabilities using technology from PaLM, an advanced language model that it debuted last year.

Google and Alphabet Inc. Chief Executive Officer Sundar Pichai detailed the update in a New York Times interview published today. PaLM will “bring more capabilities; be it in reasoning, coding, it can answer maths questions better,” Pichai said. Jack Krawczyk, the product manager for Bard at Google, added in a tweet that the update has already been released. 

The new version of Bard is described as being more adept at solving math problems. The chatbot can answer “multistep” text prompts more reliably as well, Krawczyk stated. Further down the line, Google also expects improvements in Bard’s ability to generate software code. 

PaLM, the language model that the search giant used to enhance Bard, was first detailed by its researchers last year. The model features 540 billion parameters, the values a neural network learns during training and uses to process data. Generally, the more parameters a model has, the more tasks it can manage.
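
As a rough illustration, and not PaLM's actual architecture, the sketch below counts the learned parameters of a tiny two-layer network with hypothetical dimensions; PaLM's 540 billion figure is the same kind of tally, summed over a vastly larger transformer.

```python
# A minimal sketch, not PaLM's architecture: counting the learned
# parameters (weights and biases) of a tiny two-layer dense network.

def dense_layer_params(n_in: int, n_out: int) -> int:
    """A dense layer has n_in * n_out weights plus one bias per output."""
    return n_in * n_out + n_out

# Hypothetical toy dimensions, chosen only for illustration.
hidden = dense_layer_params(n_in=512, n_out=2_048)
output = dense_layer_params(n_in=2_048, n_out=512)

print(f"Toy model parameters: {hidden + output:,}")  # 2,099,712
# PaLM's 540 billion parameters are the same kind of count,
# summed across all of its transformer layers.
```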

The PaLM model demonstrated impressive performance in a series of internal evaluations carried out by Google. During one test that involved 28 natural language processing tasks, it achieved a higher score than OpenAI LP’s GPT-3 model. It also set new records in two math and coding benchmarks.

Google trained PaLM on two TPU v4 Pods hosted in its public cloud. Each TPU v4 Pod includes 4,096 chips optimized specifically to run AI workloads. Combined, those chips can provide up to 1.1 exaflops of performance, which equals 1.1 million trillion calculations per second.
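
As a back-of-the-envelope check, assuming the roughly 275 teraflops of peak performance commonly cited for a single TPU v4 chip (a figure not stated in this article), the pod-level math works out as follows:

```python
# Back-of-the-envelope check of the pod-level figure, assuming a peak
# of roughly 275 teraflops per TPU v4 chip (not stated in the article).

chips_per_pod = 4_096
peak_per_chip_flops = 275e12  # assumed ~275 teraflops per chip

pod_peak_exaflops = chips_per_pod * peak_per_chip_flops / 1e18
print(f"Peak per pod: ~{pod_peak_exaflops:.2f} exaflops")  # ~1.13 exaflops

# 1 exaflop = 10**18 operations per second, i.e. a million trillion,
# matching the article's "1.1 million trillion calculations per second."
```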

During the development of PaLM, Google managed the AI training process using an internally developed software system called Pathways. The system distributes the computations involved in training an AI model across multiple chips to speed up the workflow. While training PaLM, Pathways harnessed 57.8% of the underlying chips’ peak processing performance, which Google says set a new industry record.
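
The sketch below shows how a utilization figure like 57.8% is typically computed: achieved arithmetic throughput divided by the hardware's theoretical peak. The inputs used here are hypothetical, chosen only to illustrate the calculation, not Google's published numbers.

```python
# A minimal sketch of how a FLOPs-utilization figure is derived:
# achieved arithmetic throughput divided by theoretical peak.
# The inputs below are hypothetical, not Google's published numbers.

def flops_utilization(achieved_flops_per_s: float, peak_flops_per_s: float) -> float:
    """Fraction of the hardware's peak throughput actually sustained."""
    return achieved_flops_per_s / peak_flops_per_s

util = flops_utilization(achieved_flops_per_s=1.27e18,  # hypothetical
                         peak_flops_per_s=2.2e18)       # hypothetical
print(f"Utilization: {util:.1%}")  # ~57.7%
```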

The original version of Bard that Google introduced last month was based on an AI model called LaMDA. Google first detailed LaMDA in January 2022, about three months before it debuted PaLM. The former model supported up to 137 billion parameters at the time of its introduction, while PaLM features 540 billion.

“We clearly have more capable models,” Pichai told the Times in reference to LaMDA. “To me, it was important to not put [out] a more capable model before we can fully make sure we can handle it well.” 

Image: Google
