DEEPX raises $80.5M in fresh funding to mass-produce its AI chips
DEEPX Co. Ltd., a South Korean developer of artificial intelligence chips, has raised $80.5 million in funding to support its commercialization efforts.
Seoul-based private equity firm SkyLake Equity Partners led the Series C investment, which was announced on Thursday. BNW Investments, AJU IB and returning backer TimeFolio Asset Management also participated. DEEPX said the capital infusion increased its valuation eightfold, but didn't disclose absolute figures.
The company will use the funding to accelerate the mass production of its flagship product line, a collection of four chips optimized for AI workloads. The processors' performance ranges from 5 to 400 TOPS. One TOPS equals a trillion computing operations per second.
DEEPX's entry-level 5 TOPS chip, the DX-V1, combines a set of circuits optimized for machine learning workloads with a four-core central processing unit. There's also a built-in video codec, a compute module that encodes and decodes video streams. The DX-V1 is designed for use in industrial robots' onboard cameras and other connected devices.
The most advanced chip, the DX-H1, provides 400 TOPS of performance in a PCIe card that can be attached to data center servers. According to the company, the chip is also suitable for use at edge locations such as factories. It's primarily optimized for running computer vision algorithms.
While processing user input, AI models generate a significant amount of additional data known as intermediate results. That data has to be frequently moved between the chip running an AI model and the attached DRAM, which consumes a significant amount of power. It also slows down processing: a neural network can only begin a new computation once the necessary data has been retrieved from DRAM, which takes time.
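The scale of the problem can be illustrated with some back-of-the-envelope arithmetic. The numbers below are hypothetical, chosen only to show why intermediate results that don't fit in on-chip memory generate heavy DRAM traffic; they are not DEEPX specifications.

```python
# Illustrative sketch: estimate the DRAM traffic caused by one layer's
# intermediate results when they overflow a chip's on-chip SRAM.
# All figures are assumptions for illustration, not real chip specs.

def activation_bytes(height, width, channels, bytes_per_value=4):
    """Size of one layer's intermediate result (32-bit values by default)."""
    return height * width * channels * bytes_per_value

# A 224x224 feature map with 64 channels, stored as 32-bit floats:
one_layer = activation_bytes(224, 224, 64)
print(f"One intermediate result: {one_layer / 1e6:.1f} MB")

# If on-chip SRAM holds only 2 MB, the overflow spills to DRAM.
# It must be written out and later read back, so it's moved twice:
sram_capacity = 2 * 1024 * 1024
spill = max(0, one_layer - sram_capacity)
print(f"DRAM traffic for this layer: {2 * spill / 1e6:.1f} MB")
```

Multiplied across dozens of layers and run at video frame rates, this kind of traffic dominates both the power budget and the latency, which is why reducing it is a common design goal for AI accelerators.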
DEEPX ships its silicon with a technology designed to reduce the movement of information to and from memory. According to the company, its chips have to access DRAM less frequently than other AI accelerators such as graphics cards. The result is an increase in processing efficiency.
The company provides a software framework, DXNN, that can automatically compile customers’ AI models into a format its chips support. Moreover, the compiler optimizes those models in the process to improve their performance.
The software uses a method called quantization to reduce the amount of memory required by a neural network's parameters. Using the technique, certain parameters that take up 32 bits of space can be compressed into eight bits. Quantization not only lowers AI models' RAM requirements but also improves their performance, because reducing the number of bits the chip must process speeds up computation.
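A minimal sketch of how such a scheme can work is shown below. It implements symmetric per-tensor 8-bit quantization, a common textbook approach; DXNN's actual algorithm is not public, so this is purely illustrative.

```python
# Illustrative sketch of symmetric 8-bit quantization (not DXNN's
# actual scheme): each 32-bit float is mapped to an integer in
# [-127, 127] via a single per-tensor scale factor.

def quantize(weights):
    """Map 32-bit floats to 8-bit integers plus a scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float values from the 8-bit representation."""
    return [q * scale for q in quantized]

weights = [0.82, -1.27, 0.03, 0.5]
q, scale = quantize(weights)
print(q)                      # small integers: 1/4 the storage of fp32
print(dequantize(q, scale))   # close to the original weights
```

The trade-off is a small rounding error per parameter; in practice compilers that quantize models typically recalibrate or fine-tune them to keep accuracy loss negligible.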
In addition to ramping up mass production of its chips, DEEPX will use its newly announced funding round to establish distribution agreements with partners. In conjunction, the company is teaming up with more than 120 product design firms to support its commercialization efforts. Before its chips enter mass production, DEEPX plans to test them with an initial roster of about 100 early adopters including automakers Hyundai Motor Co. and Kia Corp.
Down the road, DEEPX plans to expand its product portfolio with additional chips. The company indicated that those processors will be optimized to run large language models.