

Arm Holdings plc today introduced a new portfolio of chip designs that handset makers can use to develop mobile processors.
The product family is known as Lumex. It includes central processing units, graphics processing units and an interconnect module. Smartphone makers can use the latter component to link together Arm’s new GPUs and CPUs into a single system-on-chip.
Lumex includes four CPU core designs equipped with the company’s SME2 technology. SME2, short for Scalable Matrix Extension 2, is an instruction set extension: a collection of low-level computing operations that a chip can mix and match to perform complex calculations. The operations in SME2 are optimized for running artificial intelligence models.
Mobile developers often compress the AI models they use in apps to improve performance and reduce battery usage. According to Arm, SME2 includes a 512-bit register optimized to power compressed neural networks. A register is a speedy memory pool built directly into a CPU.
SME2 also includes new computing operations for moving and changing data. Additionally, a so-called predicate-as-counter mechanism reduces the amount of data that must be processed to complete computing operations. That reduction helps boost throughput in some cases.
The most capable of the four CPU core designs in the Lumex chip design family is known as the C1-Ultra. According to Arm, the processor provides 25% better single-thread performance than previous-generation silicon. That makes it suitable for compute-intensive AI use cases such as content generation.
The C1-Ultra is rolling out alongside the C1-Premium, which is designed to power simpler AI applications such as voice assistants. The C1-Premium is 26% smaller than the C1-Ultra.
There are also two other C-1 cores that trade off some performance for higher efficiency. The C1-Pro is geared toward apps that prioritize sustained performance, while the C1-Nano has a compact design that allows it to be used in wearables.
Handset makers can mix and match Arm’s new CPU cores. A company could, for example, build a smartphone processor that includes one C1-Ultra and several C1-Pro cores. Demanding apps can run on the C1-Ultra, while less demanding workloads can be sent to the other cores to save energy.
“For generative AI, speech recognition, classic machine learning (ML) and computer vision (CV) workloads, the SME2-enabled Arm C1 CPU cluster achieves up to 5x AI speed-up compared to the previous generation CPU cluster under the same conditions,” Stefan Rosinger, Arm’s senior director of CPU product management, wrote in a blog post. “Through SME2, the CPU cluster delivers up to 3x improved efficiency.”
The C1 cores are designed to work with a new GPU series called the Mali G1 that Arm debuted alongside Lumex. The most advanced GPU in the lineup, the G1-Ultra, can perform inference 20% faster than its predecessor. A new ray tracing module allows it to render lighting, shadows and reflections twice as fast.
The G1-Ultra has 10 shaders, which play a role similar to that of the cores in a CPU. It’s rolling out alongside three other G1 GPUs that include up to nine shaders. Additionally, Arm has a new interconnect module called the S1 L1 that can link together C1 and Mali G1 chips with 75% less static latency than earlier hardware.
Arm says that processors based on Lumex chip designs can be manufactured using a three-nanometer node. The company expects the first Lumex-powered mobile devices to launch later this year or in early 2026.