UPDATED 12:15 EST / APRIL 18 2023

AI

Microsoft reportedly developing its own AI chip named ‘Athena’

Microsoft Corp. is reportedly forging ahead with efforts to develop its own chips optimized for artificial intelligence in a bid to reduce costs for training generative AI models such as those powering OpenAI LP’s ChatGPT chatbot.

Citing two people familiar with the matter, The Information reported today that Microsoft has been working on the new chip, codenamed “Athena,” since at least 2019. Employees at Microsoft and OpenAI already have access to the new chips and are using them to test their performance on large language models such as GPT-4.

Training large language models requires ingesting and analyzing tremendous amounts of data so the resulting models can generate new outputs that mimic human conversation, the hallmark of generative AI. Doing so requires large numbers of AI-optimized computer chips, on the order of tens of thousands, which can become extremely costly.

Microsoft is said to be developing its new chip for use in its own products to diminish its reliance on third-party chips and reduce those costs. Currently, computer chip designer Nvidia Corp. dominates the AI chip market; its newest H100 GPU was released last year. With this new silicon, Microsoft will join the ranks of Amazon.com Inc., Google LLC and Meta Platforms Inc., which all design and build their own custom chips for AI.

“Microsoft wants to use large-language models across all their applications including Bing, [Microsoft] 365, and GitHub,” SemiAnalysis Chief Analyst Dylan Patel told The Information. “To deploy that at scale using off-the-shelf hardware would cost tens of billions of dollars a year.”

Microsoft invested $10 billion in OpenAI in early 2023 and quickly began integrating the startup’s AI technology into its services, including its Bing search engine via Bing Chat, Microsoft 365, Dynamics 365 for business users and Security Copilot for security professionals.

The report notes that these chips are not intended to replace Nvidia’s chips. Instead, they’re meant to augment Microsoft’s existing infrastructure. The company has multiple future generations of the chip planned, according to The Information.

At this time it’s unknown whether Microsoft plans to offer the chips to Azure cloud AI customers or keep them for internal use only, but the company does provide AI-optimized cloud instances based on Nvidia’s H100 chips. It’s also not known how this might affect the AI supercomputer collaboration between Microsoft and Nvidia announced late last year.

Microsoft reportedly expects that the new chip will be ready to debut sometime in 2024.

Photo: Microsoft
