Xilinx unveils new breed of computer processor designed for AI inference
Chipmaker Xilinx Inc. today lifted the lid on the first product to come out of the adaptive compute acceleration platform project it revealed earlier this year.
Xilinx’s proprietary seven-nanometer ACAP chips are said to have capabilities for artificial intelligence workloads that go far beyond those of its field-programmable gate array, or FPGA, chips: processors used in data centers that can be reprogrammed in real time for different computing tasks.
The new chips combine multiple compute acceleration technologies, including leading edge memory, integrated networking and various software development tools and frameworks that can be used for AI workloads and 5G networking, according to Xilinx.
When the company first spoke of its ACAP chips in March, it said it was trying to create a highly software-programmable architecture that would be better able to keep up with the pace of innovation in the data center. Xilinx’s premise is that AI technologies are advancing too fast for the likes of Intel Corp.’s central processing units and Nvidia Corp.’s graphics processing units to keep up.
And so Xilinx’s new Versal chips, whose name combines the words “versatile” and “universal,” are being billed by the company as an entirely new class of processing unit. They combine Xilinx’s FPGA technology with two higher-performance Arm processor cores, two low-power Arm processor cores and a dedicated AI compute engine.
The AI engine is said to provide higher throughput, lower latency and greater power efficiency than existing hardware, making the Versal chips the best bet for AI inference and advanced signal processing, the company said.
Xilinx makes some big claims, including that the Versal chips offer four times the power efficiency of Nvidia’s leading GPUs, with two to eight times the inference benchmark performance. They’re also 43 to 72 times faster than Intel’s Xeon CPUs when it comes to inference workloads, the company claimed.
Xilinx said emerging trends such as AI and machine learning, the vast amounts of data that need to be collected and analyzed, and 5G networking will create huge demand for its adaptive compute acceleration hardware.
“With the explosion of AI and big data and the decline of Moore’s Law, the industry has reached a critical inflection point. Silicon design cycles can no longer keep up with the pace of innovation,” Xilinx Chief Executive Officer Victor Peng said in a statement. “Four years in development, Versal is the industry’s first ACAP. We uniquely designed it to enable all types of developers to accelerate their whole application with optimized hardware and software and to instantly adapt both to keep pace with rapidly evolving technology.”
With the Versal family of chips, Xilinx is clearly mounting a challenge to Nvidia, which remains the market leader in hardware used to train AI systems. But Xilinx is specifically eyeing the AI inference market, which relates to the application of deep learning models in consumer and cloud environments. Xilinx reckons there’s massive potential here, citing data from Barclays Research projecting that the total addressable market for inference processing will be triple that of AI training by 2023.
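The training-versus-inference split the article leans on is worth making concrete: training iteratively adjusts a model’s weights (the compute-heavy phase Nvidia dominates), while inference is a single, latency-sensitive forward pass with frozen weights, which is the workload Versal targets. Here is a minimal sketch of that distinction using a toy linear model in plain NumPy; the data and model are hypothetical illustrations, not Xilinx code or benchmarks.

```python
import numpy as np

# Toy dataset: 100 samples, 3 features, noise-free targets.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w

# "Training": many gradient-descent passes over the data,
# repeatedly updating the weights -- the compute-heavy phase.
w = np.zeros(3)
for _ in range(500):
    grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
    w -= 0.1 * grad

# "Inference": one cheap matrix-vector product with frozen weights --
# the latency-sensitive phase that inference accelerators target.
def predict(x, weights=w):
    return x @ weights
```

After training, `predict` involves no weight updates at all, which is why inference hardware can trade flexibility for throughput and power efficiency in ways training hardware cannot.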
The Versal chips do have some disadvantages, however. The main downside is that FPGAs remain difficult to program for specific tasks, said Holger Mueller, principal analyst and vice president at Constellation Research Inc.
“The battle for the hardware that runs AI-powered applications is in full swing. And while GPUs have certainly won round one in this game against FPGAs, round two is wide open,” Mueller said. “It’s well possible for the FPGAs to strike back and the planned specs of Xilinx look remarkable. But hardware power needs to be tamed with software platforms, and that’s where Xilinx has more work to do, as FPGA clusters have been traditionally difficult to program.”
Another problem for Xilinx is that for all its talk, the Versal chips won’t actually be released until the second half of 2019, by which time Nvidia could well have released new products of its own addressing the same kinds of workloads.
Image: Xilinx