UPDATED 18:48 EDT / JANUARY 18 2024


Meta plans to buy 350K Nvidia GPUs to build artificial general intelligence

Meta Platforms Inc. Chief Executive Mark Zuckerberg today revealed that his company is planning to snap up about 350,000 high-end H100 graphics processing units from Nvidia Corp. to provide the computing power needed to advance its goals in a new area of artificial intelligence.

The company is striving to become a leader in artificial general intelligence, or AGI: a hypothetical form of advanced AI that could match the intelligence of humans and perform a vast array of different tasks.

Zuckerberg (pictured) said in an Instagram Reels video posted today that AGI may eventually be able to power a wide range of cutting-edge services and devices, including more advanced digital assistants, augmented reality glasses and more. The company’s aim is to build “the best AI assistants, AIs for creators, AIs for businesses and more,” he said, adding that doing so will require advances in every area of AI.

To get there, Meta will need to make a significant investment in its AI computing infrastructure, hence its plans to acquire mountains of powerful H100 GPUs from Nvidia. The H100 is Nvidia’s flagship data center GPU, and it’s said to be particularly adept at training the large language models that power generative AI applications such as ChatGPT.

“We’re building an absolutely massive amount of infrastructure to support this,” Zuckerberg said. “By the end of the year, we’re going to have around 350,000 Nvidia H100s, or around 600,000 H100 equivalents of compute if you include other GPUs.”

It’s a staggering number, not least because of the enormous expense required to purchase all of that silicon. Although the CEO didn’t say how much the company intends to spend on the GPUs, the bill will clearly run well into the billions of dollars. What’s more, Meta is unlikely to have amassed many of the chips so far, as the H100 only went on sale in late 2022 and has been in limited supply ever since.

Estimates from analysts at Raymond James suggest Nvidia sells the H100 for between $25,000 and $30,000, while on eBay the chips can fetch more than $40,000. Even if Meta gets a discount for buying in bulk, it’s still likely to spend in excess of $9 billion on the GPUs.
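As a rough back-of-the-envelope calculation, 350,000 chips at $25,000 apiece works out to about $8.75 billion, and at $30,000 apiece to roughly $10.5 billion, so even a sizable volume discount would leave Meta’s outlay in that multibillion-dollar range.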

Zuckerberg said the 350,000 H100 GPUs will form part of an infrastructure totaling around 600,000 H100 equivalents of compute, implying roughly 250,000 H100s’ worth of capacity from other chips. That could mean the company is also bulk buying rival accelerators such as Advanced Micro Devices Inc.’s new Instinct MI300X.

 


Nvidia and its shareholders will no doubt be delighted to hear about Meta’s GPU investment plans, but it’s less clear what Zuckerberg’s announcement really means for his own company, Charles King, an analyst with Pund-IT Inc., told SiliconANGLE. He pointed out that it wasn’t so long ago that Zuckerberg was proclaiming the metaverse as the next big thing, plowing billions of dollars into what, at least for now, must be seen as a failed effort. “AI certainly has more support across tech and other industries than the metaverse ever did,” King added. “However, at this point, Meta’s GPU purchases are more indicative of its relative wealth than the potential success of its plans.”

Holger Mueller of Constellation Research Inc. said Meta primarily sees AI as a tool to keep users engaged with its platforms, so it can keep generating advertising revenue from them. “Nvidia was the world’s biggest winner last year as it provides the hardware platform for generative AI, and if other vendors follow Meta’s lead, it will be another very good year for that company,” he said.

Zuckerberg’s comments do at least provide some insight into just how much money big technology companies are spending to stay at the forefront of AI development. Research from Omdia suggests that other leading players, such as Microsoft Corp., Google LLC and Amazon.com Inc., are each believed to have bought between 50,000 and 150,000 H100 GPUs last year, and they likely have similar plans to expand their computing stacks this year.

The importance of GPUs to Meta’s ambitions was stressed by the company’s chief AI scientist, Yann LeCun, during an interview last month. “If you think AGI is in, the more GPUs you have to buy,” he said.

All told, Meta’s total expenses for 2024 are likely to be in the region of $94 billion to $99 billion, the company said in its most recent quarterly financial report. One of the biggest expenses is expected to be computing expansion. “In terms of investment priorities, AI will be our biggest investment area in 2024, both in engineering and computer resources,” Zuckerberg said on a conference call with analysts.

In his Instagram post today, Zuckerberg revealed that Meta’s GPUs are already being put to good use training Llama 3, its alternative to the OpenAI models that power ChatGPT. Unlike its rival, however, Meta plans to open-source Llama 3, just as it did with its predecessor, Llama 2.

“This technology is so important and the opportunities are so great that we should open source and make it as widely available as we responsibly can,” Zuckerberg insisted.

Photo: Anthony Quintano/Flickr
