Arm unveils ‘Project Trillium’ AI chips for mobile devices
This seems to be the week of new dedicated artificial intelligence chips, from Google LLC adding AI chips to its public cloud to reports that Amazon.com Inc. is developing its own AI chips. The trend continues with U.K.-based semiconductor giant Arm Holdings plc, which today announced a machine learning initiative codenamed Project Trillium.
Project Trillium will include a line of processors specifically designed to accommodate the style of computing needed for machine learning and neural networks. Arm said that the project will focus on mobile chips at first, but the company added that Trillium is meant to be highly scalable, eventually offering “the ability to move up or down the performance curve.”
According to Arm, Project Trillium’s machine learning processors will enable “a new class” of mobile and smart devices.
“The rapid acceleration of artificial intelligence into edge devices is placing increased requirements for innovation to address compute while maintaining a power efficient footprint,” said Rene Haas, president of Arm’s IP Products Group. “New devices will require the high-performance ML and AI capabilities these new processors deliver. Combined with the high degree of flexibility and scalability that our platform provides, our partners can push the boundaries of what will be possible across a broad range of devices.”
The company said that its Arm ML processors can handle more than 4.6 trillion operations per second while drawing very little power, which is important for mobile devices where battery life is a top priority for many users. Project Trillium also includes Arm OD, an object detection chip designed to identify people and objects in real time using a device’s camera. Arm notes that the two chips can work together to deliver “a high-performance, power-efficient people detection and recognition solution.” For consumers, this could translate into better facial recognition features and augmented reality experiences.
Jem Davies, vice president, fellow and general manager of Machine Learning at Arm, said the speed and power efficiency of the new chips make them “equal to the most challenging daily machine learning tasks.”
“That performance can go even higher in real-world use,” Davies wrote in a blog post, using AR-enabled scuba masks as an example use case for the chips. “This means devices using the Arm ML processor will be able to perform ML independent of the cloud. That’s clearly vital for products such as dive masks but also important for any device, such as an autonomous vehicle, that cannot rely on a stable internet connection.”
Davies added, “We now have an ML processor architecture that is versatile enough to scale to any device, so it is more about giving markets what they need, when they need it. This gives us, and our ecosystem partners, the speed and agility to react to any opportunity.”
Arm said Project Trillium is only a codename for its new initiative, and it plans to announce a commercial brand name for its new line of chips at a later date. The company will offer an early preview of the chips in April, and it plans to make them generally available by mid-2018.
“Arm is at the center of nearly all smartphones, tablets and IoT end points, so the company is in a position to increase the machine learning industry footprint,” said Patrick Moorhead, president and principal analyst at Moor Insights & Strategy. “Lots of competition is out there already from Intel, Qualcomm, Nvidia and Xilinx, and Arm wants to differentiate itself on a total solution, scalability and performance per watt.”
Photo: Arm Holdings