UPDATED 08:00 EDT / OCTOBER 20, 2020

EMERGING TECH

Startup Flex Logix launches ‘industry’s fastest’ edge AI inference chip

Semiconductor startup Flex Logix Technologies Inc. today announced the launch of its InferX X1 artificial intelligence chip, which it says can provide up to 10 times more performance than Nvidia Corp. silicon in some scenarios. 

Flex Logix was formed in 2014 and has since raised $27 million from investors. The startup is led by Chief Executive Officer Geoff Tate, a former Advanced Micro Devices Inc. senior vice president.

Flex Logix’s new chip is a fifth the size of a penny and is designed for AI inference, or running fully trained neural networks on live data. The chip is geared toward connected systems that run at the so-called edge of the network, such as industrial robots and autonomous vehicles. The startup describes it as the “industry’s fastest AI inference chip” for edge systems.

According to Flex Logix, the chip runs the widely used YOLOv3 object recognition model 30% faster than an Nvidia Xavier edge computing module. Moreover, the startup says early adopters have found that its chip can provide as much as 10 times higher performance for certain other models. Flex Logix claims that the InferX X1 will be capable of providing these speeds at a lower price point than the competition: High-volume pricing is expected to be as little as a tenth of that of comparable Nvidia silicon.

What allows InferX X1 to outperform rivals, the startup claims, is a proprietary architecture that incorporates more than 25 patents’ worth of technology. At the heart of the chip is a set of 64 AI-optimized circuits dubbed 1-Dimensional Tensor Processors. These circuits are specifically designed to perform matrix multiplications, the mathematical operations that neural networks use to process data, and their configuration can be fine-tuned to optimize performance.
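To make the core operation concrete, here is a minimal sketch of the matrix multiplication at the heart of neural network inference, the computation the X1’s tensor processors are built to accelerate. The shapes, function name and ReLU activation are illustrative assumptions, not details of Flex Logix’s design.

```python
import numpy as np

def dense_layer(x, weights, bias):
    """One inference step of a fully connected layer:
    multiply inputs by weights, add a bias, apply ReLU."""
    z = x @ weights + bias       # the matrix multiplication inference hardware accelerates
    return np.maximum(z, 0.0)    # ReLU nonlinearity (illustrative choice)

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 128))    # one input sample with 128 features (made-up shape)
w = rng.standard_normal((128, 64))   # layer weights
b = np.zeros(64)                     # layer bias

y = dense_layer(x, w, b)
print(y.shape)  # (1, 64)
```

A trained network is essentially a long chain of such multiplications, which is why dedicated matrix-multiply circuits dominate inference chip designs.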

The artificial neurons in neural networks are arranged into groups known as layers that often have different performance characteristics. According to Flex Logix, the InferX X1 can reconfigure itself on the fly to optimize processing speed for the layer it’s currently processing. This real-time optimization is made possible by the fact that the reconfiguration takes just a few millionths of a second to perform.
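The per-layer reconfiguration idea can be sketched as a scheduler that picks a hardware configuration for each layer before running it. Everything here is hypothetical, including the layer shapes, the `choose_config` heuristic and the config fields; it is not Flex Logix’s actual toolchain or API.

```python
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    in_features: int
    out_features: int

def choose_config(layer):
    # Illustrative heuristic: wider layers get more parallel compute lanes.
    lanes = 64 if layer.out_features >= 64 else 32
    return {"lanes": lanes, "tile": min(layer.in_features, 256)}

def run_model(layers):
    """Build a per-layer schedule; on real hardware each config switch
    would be the microseconds-long reconfiguration step described above."""
    schedule = []
    for layer in layers:
        config = choose_config(layer)  # reconfigure before running this layer
        schedule.append((layer.name, config))
    return schedule

# Hypothetical three-layer model
model = [Layer("conv1", 3, 32), Layer("fc1", 512, 256), Layer("fc2", 256, 10)]
for name, cfg in run_model(model):
    print(name, cfg)
```

Because the switch costs only microseconds, tuning the hardware per layer can pay off even when individual layers take only milliseconds to run.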

Flex Logix will bring the chip to market alongside development tools intended to make it easier for customers to deploy software on it. To jumpstart its sales effort, the startup is introducing a series of accelerator cards powered by the InferX X1 that enterprises can attach to servers to speed up their AI applications.

The first card, called the InferX X1P1, is now sampling to customers. It promises to deliver up to a third of the AI performance of Nvidia’s T4 data center chip for less than a fourth of the cost. Flex Logix plans to start mass-producing its chip and server accelerator cards in the second quarter of 2021.

Photo: Unsplash
