UPDATED 14:05 EST / AUGUST 05 2024


AI chipmaker Groq raises $640M to meet rising demand for high-speed inference compute

Groq Inc., a startup that makes artificial intelligence and machine learning chips, said today that it has raised $640 million in a late-stage round of funding led by BlackRock Inc.

The startup designs semiconductor chips and software that optimize the running of deployed AI models, a workload known as inference, with a vision to compete with the industry’s biggest names, such as Nvidia Corp. The Series D funding round values the company at $2.8 billion and brings the total raised to date to over $1 billion, including a $300 million Series C in 2021.

The Series D funding round also attracted investments from new and existing investors Neuberger Berman, Type One Ventures, Cisco Investments, Global Brain’s KDDI Open Innovation Fund III and Samsung Catalyst Fund.

The company was founded in 2016 by Chief Executive Jonathan Ross, a former Google LLC engineer who invented the search giant’s TPU machine learning processors. The company’s flagship product is an AI chip called the LPU Inference Engine. The LPU, which stands for Language Processing Unit, is designed to power large language models in production after they have been designed and trained.

During a speed test in November, Groq set an inference speed record while running Meta Platforms Inc.’s Llama 2 70B LLM. In the test, the company’s chips and software stack set the bar for performance and accuracy on the Meta AI model, delivering more than 300 tokens per second per user.

Since then, the company has updated its stack so that customers can run Meta’s largest open model, Llama 3.1 405B, on its hardware, along with other models in the Llama 3.1 family such as 70B Instruct and 8B Instruct.

“You can’t power AI without inference compute,” said Ross. “We intend to make the resources available so that anyone can create cutting-edge AI products, not just the largest tech companies…. Training AI models is solved, now it’s time to deploy these models so the world can use them.”

Ross said the new funding will allow the company to deploy more than 100,000 additional LPUs into GroqCloud, the company’s cloud-based service for AI inference. Developers can use the service to quickly and easily build and deploy AI applications on popular industry LLMs, including Llama 3.1 from Meta, Whisper Large V3 from OpenAI, Gemma from Google and Mixtral from Mistral AI.

Through GroqCloud, developers get on-demand access to LPUs for their AI applications so that they can familiarize themselves with the company’s chips and optimize for the architecture. Groq built the cloud service with the help of Definitive Intelligence, a Palo Alto, California-based analytics provider that the company acquired in March.

“Having secured twice the funding sought, we now plan to significantly expand our talent density,” Ross added. “We’re the team enabling hundreds of thousands of developers to build on open models and we’re hiring.”

Image: SiliconANGLE/Microsoft Designer
