UPDATED 12:00 EDT / MAY 18 2023

Meta Platforms unveils next-generation AI infrastructure and data center designs

Facebook parent company Meta Platforms Inc. has lifted the lid on what its next-generation artificial intelligence infrastructure will look like, showcasing its newest training and inference accelerator and various other bits of futuristic hardware at its AI Infra @Scale event today.

Meta’s head of global infrastructure, Santosh Janardhan, explained that the company’s AI compute requirements are expected to grow dramatically over the next decade. It will need to boost its processing power considerably in order to support its AI research, its cutting-edge applications and experiences within its family of apps, and its metaverse ambitions.

“We are facing another inflection point for infrastructure,” he said in opening the event. “AI is not just a workload, it’s the workload. We are transforming everything in our infrastructure.”

To that end, the company has been executing on a plan to build the next generation of its infrastructure backbone, and today it’s sharing its most recent progress. Among the updates are the company’s first custom chip for running AI models, a new AI-optimized data center design and phase two of its new supercomputer for AI research, which will be powered by a whopping 16,000 graphics processing units.

The Meta Training and Inference Accelerator, or MTIA, announced today is Meta’s first family of custom accelerator chips designed to power AI inference workloads. The new processor provides much greater compute power and efficiency than traditional central processing units because it’s customized for the company’s specific internal workloads, Janardhan explained. Meta plans to deploy MTIA chips alongside powerful GPUs to deliver better performance, lower latency and greater efficiency for each kind of AI workload.
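The division of labor Janardhan describes, with dedicated silicon handling well-characterized, high-volume inference and GPUs handling everything else, can be pictured as a simple routing policy. The Python sketch below is purely illustrative: the workload names and the run_on function are hypothetical assumptions, not part of Meta’s actual software stack.

    # Purely hypothetical sketch of routing inference work between a custom
    # accelerator and GPUs; the workload names and policy are illustrative only.

    from enum import Enum, auto

    class Backend(Enum):
        MTIA = auto()  # custom silicon for well-understood inference workloads
        GPU = auto()   # general-purpose accelerator for everything else

    def run_on(workload: str) -> Backend:
        """Send high-volume, well-characterized inference (for example,
        recommendation ranking) to the custom chip; everything else to GPUs."""
        accelerator_friendly = {"ranking", "recommendations", "ads_inference"}
        return Backend.MTIA if workload in accelerator_friendly else Backend.GPU

    print(run_on("ranking"))       # Backend.MTIA
    print(run_on("llm_training"))  # Backend.GPU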

Meta also unveiled a revamped data center design to support future generations of AI hardware for both training and inference workloads. According to Janardhan, the new facilities are based on an AI-optimized design that supports liquid-cooled AI hardware and a high-performance AI network connecting thousands of its MTIA chips. The idea is to build “data center-scale” AI training clusters, Janardhan said. The new design is intended to make them faster and more cost-effective to build, while accommodating new hardware such as MSVP, Meta’s first in-house application-specific integrated circuit for video transcoding.

Finally, Meta showcased phase two of its Research SuperCluster AI supercomputer, which is said to be one of the fastest such machines ever built. RSC, as it’s known, is designed to train a new generation of large AI models that will support augmented reality, content understanding systems, real-time translation and other advanced workloads.

The supercomputer is powered by 16,000 GPUs linked by a new three-level Clos network fabric. RSC has been up and running for some time already, Janardhan said, and has been used in research projects such as LLaMA, the large language model the company unveiled earlier this year.
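For context on the networking term, a three-level (three-stage) Clos fabric stitches many modest fixed-radix switches into one large non-blocking network, which is how GPU clusters of this size are typically wired together. The sketch below is a generic illustration of how endpoint capacity scales with switch port count; the radix values are assumptions chosen for illustration, not Meta’s actual fabric parameters.

    # Generic three-stage Clos (fat-tree style) scaling illustration.
    # The switch radix values below are assumptions for illustration,
    # not Meta's actual networking hardware.

    def clos_endpoints(radix: int) -> int:
        """Maximum endpoints (e.g. GPU-facing ports) in a non-blocking
        three-stage fat tree built from switches with `radix` ports.
        A k-port fat tree has k pods, each with k/2 leaf and k/2 spine
        switches; every leaf dedicates k/2 ports to endpoints, giving
        k**3 / 4 endpoints in total."""
        return radix ** 3 // 4

    if __name__ == "__main__":
        for radix in (32, 40, 64):
            print(f"radix {radix:>2}: up to {clos_endpoints(radix):,} endpoints")
        # A radix of 40 already yields 16,000 endpoints, the same order of
        # magnitude as RSC's 16,000 GPUs, though the real fabric differs.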

“We’ve been building advanced infrastructure for AI for years now, and this work reflects long-term efforts that will enable even more advances and better use of this technology across everything we do,” said Meta Platforms founder and Chief Executive Mark Zuckerberg.

Janardhan explained that the goal behind all of this infrastructure work is to train much larger and more sophisticated AI models and deploy them at scale. Meta is at the forefront of AI development, he said, using the technology to enable better personalization and create richer experiences while helping businesses reach their desired audiences.

As part of those efforts, Meta also announced CodeCompose, a new generative AI-powered coding assistant it built internally to make its developers more productive.

“By rethinking how we innovate across our infrastructure, we’re creating a scalable foundation to power emerging opportunities in the near term in areas like generative AI, and in the longer term as we bring new AI-powered experiences to the metaverse,” Janardhan said.

Image: Meta
