UPDATED 10:57 EDT / APRIL 20 2022

AI

D-Matrix lands $44M to build AI-specific chipsets

Three-year-old startup d-Matrix Corp. said today that it has closed a $44 million funding round to support its efforts to build a new type of computing platform that supports transformer artificial intelligence workloads.

The company also announced its first silicon “chiplet” based on a digital in-memory computing, or DIMC, architecture. Founded by two veterans of microprocessor and network hardware design, the company is building processors that specifically target AI’s inference stage, during which the trained neural network model makes predictions.

Most systems built up to this point have focused more on the earlier training stage. “Training is like the first 20 years of life and inference is like the next 40,” said co-founder and Chief Executive Sid Sheth. “The model is fully trained and you’re feeding it data and asking it to make decisions for you.”
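For readers less familiar with the distinction, here is a minimal PyTorch sketch of what the inference stage looks like in practice. The model and data are purely illustrative stand-ins, not anything from d-Matrix’s stack:

```python
import torch
import torch.nn as nn

# Tiny stand-in for a trained network; in production the weights
# would come from a completed training run, not random initialization.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
model.eval()  # switch to inference behavior

# Inference: feed the model data and ask for a decision. No gradients
# are computed, which is part of why inference-only hardware can be
# leaner than training hardware.
with torch.no_grad():
    x = torch.randn(1, 8)              # one incoming data sample
    logits = model(x)                  # forward pass only
    decision = logits.argmax(dim=-1)   # the model's prediction
print(decision.item())
```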

The transformer architecture was introduced by Google LLC researchers in a 2017 paper, “Attention Is All You Need,” and has become the dominant neural network design for language processing.
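At the heart of that design is the attention operation, which boils down to large matrix multiplications, exactly the kind of math that inference accelerators target. A rough PyTorch sketch of scaled dot-product attention, as a generic illustration rather than d-Matrix code:

```python
import math
import torch

def scaled_dot_product_attention(q, k, v):
    # Core of the transformer: each output is a weighted mix of the
    # values v, with weights derived from query-key similarity.
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    weights = torch.softmax(scores, dim=-1)
    return weights @ v

q = k = v = torch.randn(1, 4, 64)   # (batch, sequence, dimension)
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([1, 4, 64])
```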

D-Matrix says it’s the first company to find a way to create a DIMC engine that doesn’t require conversions between analog and digital signals. Traditional in-memory architectures have had to make that conversion to feed data into flash or nonvolatile memory, Sheth said. “You lose a lot of efficiencies and your calculations aren’t as predictable or accurate,” he said.

The company claims its processor designs can yield between 50 and 100 times better performance than a central processing unit chip and up to 30 times better performance than a graphics processing unit. They are operating system-agnostic and support popular AI software such as the PyTorch framework, the Multi-Level Intermediate Representation or MLIR compiler infrastructure and the Open Neural Network Exchange or ONNX model format. They can augment a variety of computing architectures, including RISC-V and single instruction/multiple data designs.
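As an illustration of what such framework support typically looks like in practice, a PyTorch model can be exported to the ONNX interchange format, which a hardware vendor’s compiler can then consume. This is a generic sketch, not d-Matrix’s actual toolchain:

```python
import torch
import torch.nn as nn

# A trivial model standing in for a real trained network.
model = nn.Linear(8, 2)
model.eval()
example_input = torch.randn(1, 8)

# Export to ONNX, a framework-neutral graph format; a vendor compiler
# can map the resulting graph onto its own hardware.
torch.onnx.export(
    model, example_input, "model.onnx",
    input_names=["input"], output_names=["output"],
)
```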

“End-users see a programming interface that they can program with a Tensor toolchain and additional instructions provided for the in-memory computing engine,” Sheth said. “As long as they can map their workloads to the hardware, it’s very easy.”

The chiplet architecture employs multiple small chips to make up a larger integrated circuit. The approach reduces waste during the fabrication process and allows components to be mixed and matched to make up the microprocessor. Sheth said the company is designing its first product chiplet to be integrated with other processors and offered in both chip and card form. Samples are expected to be available next year.

“We want to fit the entire GPT-3 model – which is very large – on a single card,” he said. Generative Pre-trained Transformer 3 is a language model created by the OpenAI LLC research lab that has applications in a variety of language processing scenarios.
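To put “very large” in perspective: GPT-3 has roughly 175 billion parameters, so its weights alone occupy on the order of hundreds of gigabytes. A back-of-the-envelope calculation, where the byte-per-weight figure is an assumption about quantization rather than anything d-Matrix has disclosed:

```python
params = 175e9          # approximate GPT-3 parameter count
bytes_per_weight = 1    # assumes aggressive 8-bit quantization
gigabytes = params * bytes_per_weight / 1e9
print(f"~{gigabytes:.0f} GB of weights")  # ~175 GB
```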

Funding was led by Playground Global LLC, Microsoft Corp.’s M12 venture fund and Korean semiconductor maker SK hynix Inc. They join existing investors Marvell Technology Group Ltd., Entrada Ventures LLC and Nautilus Ventures Advisors US LLC, which does business as Nautilus Venture Partners.

