UPDATED 09:00 EDT / AUGUST 22 2023

AI chipmaker d-Matrix announces next-gen compute platform Jayhawk II

D-Matrix Corp., a startup that builds high-efficiency silicon and generative artificial intelligence compute platforms for data centers, today announced the next generation of its Jayhawk platform, which delivers improved performance and efficiency for AI deployment.

The company focuses on making silicon and platforms that allow companies to accelerate the deployment of generative AI applications in the inference stage. That’s when the AI is used on data it has never seen before to provide assistance and information: for example, when an employee or a customer uses an AI such as Meta Platforms Inc.’s Llama 2 to summarize a document or asks a question in natural language to get an answer.

Today, d-Matrix announced Jayhawk II, the second generation of its “chiplet” architecture with enhanced digital in-memory computing, or DIMC. The company announced the first version in January; that release focused on interconnect, or how the chiplets hook up to one another.

This new generation focuses more on the DIMC, which places fully digital programmable memory directly on chip, next to the compute architecture, to reduce latency for inference processing. In-memory compute is not new for chips, but other designs use analog memory, which can be inaccurate. By wedding digital memory with compute, d-Matrix says it has resolved that issue.
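To make the tradeoff concrete, here is a minimal Python sketch that simply counts simulated data movement for the two designs. The function and class names, matrix sizes and byte counts are invented for illustration; they do not represent d-Matrix’s actual architecture or software.

```python
# Toy model of why co-locating weights with compute cuts data movement.
# Everything here is an illustrative assumption, not d-Matrix's design.
import numpy as np

WORD_BYTES = 2  # assume 16-bit weights and activations

def decoupled_matmul(x, weights_in_dram, stats):
    # Traditional design: weights travel from off-chip memory on every call.
    w = weights_in_dram.copy()                    # simulated DRAM-to-chip transfer
    stats["bytes_moved"] += w.size * WORD_BYTES
    return x @ w

class InMemoryComputeUnit:
    # DIMC-style design: weights are loaded once and stay next to the compute.
    def __init__(self, weights):
        self.w = weights

    def matmul(self, x, stats):
        stats["bytes_moved"] += x.size * WORD_BYTES  # only activations move
        return x @ self.w

rng = np.random.default_rng(0)
w = rng.standard_normal((512, 512)).astype(np.float16)
x = rng.standard_normal((1, 512)).astype(np.float16)

decoupled = {"bytes_moved": 0}
for _ in range(100):                              # 100 inference calls
    decoupled_matmul(x, w, decoupled)

unit = InMemoryComputeUnit(w)                     # one-time weight load
colocated = {"bytes_moved": 0}
for _ in range(100):
    unit.matmul(x, colocated)

print(decoupled["bytes_moved"], colocated["bytes_moved"])
# 52428800 vs. 102400: far less traffic once the weights are stationary
```

The gap grows with model size, which is why keeping weights stationary next to compute matters most for large generative models served at high volume.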

Chiplet architecture uses multiple small chips that together form a larger integrated circuit. The approach allows for modular, scalable compute platforms tailored to different needs. When an AI model is deployed, it can be split across the chiplets so its application processing runs more efficiently at different scales.
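As a rough illustration of that splitting, here is a short Python sketch that shards one layer’s weights column-wise across four hypothetical chiplets. The chiplet count, shapes and sharding scheme are assumptions made for the example, not d-Matrix’s published mapping.

```python
# Hypothetical sketch of splitting one layer across chiplets via
# column-wise sharding; all sizes below are invented for the example.
import numpy as np

N_CHIPLETS = 4
rng = np.random.default_rng(0)
w = rng.standard_normal((512, 1024))        # one layer's weight matrix
x = rng.standard_normal((1, 512))           # an incoming activation

# Each chiplet holds one column slice of the weights and computes its share.
shards = np.split(w, N_CHIPLETS, axis=1)
partials = [x @ shard for shard in shards]  # would run in parallel on hardware

# The interconnect gathers the partial outputs back into the full result.
y = np.concatenate(partials, axis=1)
assert np.allclose(y, x @ w)
```

On real hardware the partial products would run concurrently, with the interconnect handling the gather step, which is part of why interconnect was the focus of the first Jayhawk generation.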

Until now, most platform makers have built for generalized use, and their hardware most often ends up working the training side of generative AI, where models are pretrained on giant amounts of data before they’re deployed.

“Training is all about performance, but inference is about efficiency,” Sid Sheth, co-founder and chief executive of d-Matrix, told SiliconANGLE in an interview. “So how do you build a dedicated inferencing solution that can scale out and do it very efficiently? This became the d-Matrix thesis.”

Sheth explained that when it comes to inference, it’s important to provide what the customer needs where they need it. It’s not possible to build a “one size fits all” platform for inference, because inference happens both at the edge and in the cloud. That’s why the chiplets focus so heavily on modularity and interconnect, and why Jayhawk II iterates on that foundation to build out the compute engine.

With a powerful compute engine in each chiplet, all interconnected, generative AI applications get the lowest possible latency as workloads are spread across them: computation can grab data from memory as quickly as possible, process it and send it back without having to wait.

“It is a very different approach,” Sheth said. “It is not like a traditional approach where the memory and the compute are decoupled; here the memory and compute are co-located. It’s very focused on inference. What we did was say, ‘Why not keep all the coefficients inside the memory and compute in the same place?’ It reduces the back-and-forth and the time and energy spent.”

D-Matrix received $44 million in a funding round in April 2022 led by Microsoft Corp.’s M12 venture fund and Korean semiconductor maker SK hynix Inc. The company is currently providing its Jayhawk II compute platform for evaluation to numerous companies and Sheth said d-Matrix plans to go to market sometime in 2024.
