UPDATED 13:52 EDT / OCTOBER 06 2023

AI

OpenAI and Microsoft reportedly could each develop custom AI chips

OpenAI LP has reportedly been exploring the idea of developing custom artificial intelligence chips since last year.

Reuters reported the initiative late Thursday, citing people familiar with the matter. It’s believed the chipmaking effort is a response to the fact that the Nvidia Corp. graphics cards OpenAI uses to power its AI models are currently in short supply. According to Quartz, some customers have waited months to receive their GPU orders.

Update: Late today, The Information reported that Microsoft Corp. also might develop its own AI chips for the same reason.

Nvidia’s top-end data center GPUs are not only in short supply but also expensive. According to a Bernstein analysis cited by Reuters, each question that users send to ChatGPT costs four cents to process. At that rate, processing one-tenth of the query volume Google LLC’s search engine receives would require OpenAI to buy $48.1 billion worth of GPUs.
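
To put those figures in rough perspective, here is a minimal back-of-envelope sketch in Python. The Google query volume below is an outside assumption, a commonly cited public estimate rather than a number from the Reuters report, and the script estimates only day-to-day inference spending; the $48.1 billion figure refers to upfront GPU purchases.

```python
# Back-of-envelope estimate of inference costs at one-tenth of
# Google's search volume. The ~8.5 billion searches per day is a
# commonly cited public estimate, not a figure from the article.
COST_PER_QUERY = 0.04            # dollars, per the Bernstein analysis
GOOGLE_QUERIES_PER_DAY = 8.5e9   # assumed public estimate
SHARE = 0.1                      # one-tenth of Google's volume

queries_per_day = GOOGLE_QUERIES_PER_DAY * SHARE
daily_cost = queries_per_day * COST_PER_QUERY
annual_cost = daily_cost * 365

print(f"Queries per day: {queries_per_day:,.0f}")      # 850,000,000
print(f"Daily cost:  ${daily_cost / 1e6:,.0f}M")       # ~$34M
print(f"Annual cost: ${annual_cost / 1e9:,.1f}B")      # ~$12.4B
```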

A custom-designed AI processor could significantly lower the company’s chip costs. It would also reduce OpenAI’s need to compete with Nvidia’s other customers for limited GPU supply, leaving the AI developer less exposed to shortages.

OpenAI has reportedly considered buying an AI chip startup to accelerate the development effort. According to Reuters, the company went as far as performing due diligence on a potential acquisition target. Due diligence is an audit in which a company verifies key details such as the reliability of an acquisition target’s technology.

Buying a chipmaker could be a budget-straining move even for OpenAI, which has raised more than $11 billion from investors. The best-established AI chip startups are backed by hundreds of millions of dollars in venture funding. In 2019, before the recent surge in demand for GPUs, Intel Corp. paid $2 billion to buy AI processor developer Habana Labs Ltd.

A complex undertaking 

AI-optimized data center chips vary significantly in their design. However, almost all such processors share two key technical properties: They have high core counts and contain a large amount of onboard memory. Any chip OpenAI develops would most likely possess the same characteristics. 

High core counts are necessary because of the way AI models crunch data. A neural network processes information using mathematical operations known as matrix multiplications. Those are calculations carried out not on standard numbers but on matrices, which are large collections of numbers organized in rows and columns like a spreadsheet.
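
For readers unfamiliar with the operation, the Python sketch below shows what a matrix multiplication computes: each entry of the output pairs a row of one matrix with a column of the other. Production AI frameworks use heavily optimized libraries rather than loops like these; this is purely illustrative.

```python
# Naive matrix multiplication: each output entry is the dot product
# of a row of A with a column of B.
def matmul(a, b):
    rows, inner, cols = len(a), len(b), len(b[0])
    result = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            for k in range(inner):
                result[i][j] += a[i][k] * b[k][j]
    return result

a = [[1.0, 2.0], [3.0, 4.0]]
b = [[5.0, 6.0], [7.0, 8.0]]
print(matmul(a, b))  # [[19.0, 22.0], [43.0, 50.0]]
```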

With the right algorithm, it’s possible to break up a matrix multiplication into smaller calculations and then run those calculations in parallel. That means they’re performed simultaneously, which is faster than carrying them out one after another. 
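
As a minimal sketch of that decomposition, the example below splits the output rows of a matrix multiplication across worker processes, since each row can be computed independently of the others. Real GPU kernels tile the work far more finely than this, but the principle is the same.

```python
# Row-parallel matrix multiplication: each output row depends only on
# one row of A plus all of B, so rows can be computed simultaneously.
from concurrent.futures import ProcessPoolExecutor

def row_times_matrix(args):
    row, b = args
    return [sum(row[k] * b[k][j] for k in range(len(b)))
            for j in range(len(b[0]))]

def parallel_matmul(a, b, workers=4):
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(row_times_matrix, [(row, b) for row in a]))

if __name__ == "__main__":
    a = [[1, 2], [3, 4]]
    b = [[5, 6], [7, 8]]
    print(parallel_matmul(a, b))  # [[19, 22], [43, 50]]
```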

Practically all AI chips have high core counts to exploit this parallelism. The more cores a chip has, the more of those smaller calculations it can carry out at once rather than one after another. That’s why Nvidia’s flagship H100 data center GPU packs nearly 17,000 cores, while top-end central processing units feature only around a hundred.

The other common feature shared by GPUs and other AI accelerators is that they include a large amount of onboard memory. The reason is that AI models, particularly language models such as GPT-4, take up a lot of space. The more memory a chip has, the larger the AI models it can run locally without having to be linked with additional processors.
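
A quick calculation shows why. GPT-4’s parameter count has not been disclosed, so the sketch below uses GPT-3’s published 175 billion parameters purely for illustration, alongside the 80 gigabytes of onboard memory on an H100.

```python
# Rough memory footprint of a large language model's weights.
# GPT-4's size is not public; GPT-3's published 175B parameters
# are used here purely for illustration.
def weights_memory_gb(num_params, bytes_per_param=2):
    """Memory to hold the weights alone (2 bytes per FP16 parameter)."""
    return num_params * bytes_per_param / 1e9

params = 175e9                      # GPT-3's published parameter count
needed = weights_memory_gb(params)  # ~350 GB
per_gpu = 80                        # GB of onboard memory on an H100

print(f"Weights alone: {needed:.0f} GB")
print(f"H100 GPUs needed just to hold them: {needed / per_gpu:.1f}")
# ~4.4 accelerators must be linked together before the model even runs.
```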

Nvidia’s H100 GPU includes optimizations designed to speed up large language models. Given that ChatGPT is currently OpenAI’s primary revenue source, it’s possible the company will also seek to equip its AI chip with such optimizations. However, building in specialized features could add complexity to the project and extend development times, which may lead OpenAI to opt for a simpler initial design.

Other options 

Building an AI chip in-house is reportedly not the only option OpenAI has weighed to address the GPU shortage. According to Reuters, the company is also considering sourcing AI chips from suppliers other than Nvidia.

OpenAI could buy AI chips from one of the numerous startups active in this market, as well as from Advanced Micro Devices Inc. and Intel. Both companies have been investing heavily in their AI chip portfolios. AMD recently detailed an upcoming machine learning accelerator, the MI300X, that it says includes nearly twice as many transistors as Nvidia’s H100.

OpenAI reportedly also considered the possibility of “working more closely” with chipmakers as part of its effort to address the GPU shortage. Reuters didn’t specify what such a collaboration would involve.

Microsoft, OpenAI’s top backer, previously collaborated with AMD to develop a partly customized chip for its Surface line of consumer devices. A future partnership between OpenAI and a chipmaker could have a similar structure: the AI developer could order a partly customized version of an existing machine learning accelerator, or codevelop a new product from scratch the way Google teamed up with Broadcom Inc. to build its TPU chips.

Image: OpenAI
