

Intel Corp. is working with Baidu Inc. to optimize its latest Nervana Neural Network Processor for the Chinese internet giant’s PaddlePaddle deep learning framework, with the aim of speeding up the training of artificial intelligence models.
Intel said its NNP-T processor, introduced late Tuesday, is a “new class of efficient deep learning system hardware designed to accelerate distributed training at scale.”
Meanwhile, PaddlePaddle, which stands for “PArallel Distributed Deep Learning,” is the main deep learning platform used by Baidu to power its AI services.
The chipmaker said its collaboration with Baidu, announced at the Baidu Create AI developer conference Tuesday, is important because AI has advanced to the point where it has become a “pervasive capability” that will be used to enhance just about every kind of computing application in smartphones, computers and data centers. As a result, there’s a need for AI-specific hardware such as the NNP-T chip to be optimized for the most commonly used frameworks.
“The next few years will see an explosion in the complexity of AI models and the need for massive deep learning compute at scale,” said Naveen Rao, corporate vice president and general manager of Intel’s AI Products Group. “Intel and Baidu are focusing on building radical new hardware, co-designed with enabling software, that will evolve with this new reality – something we call ‘AI 2.0.’”
The collaboration extends a partnership between the two firms that stretches back almost a decade. In recent years, the companies have partnered to optimize Intel’s previous-generation Xeon Scalable processors on Baidu’s PaddlePaddle framework, so the move to optimize NNP-T is a logical next step.
“Processor architectures and platforms need to be optimized for developers in order to be meaningful,” said Holger Mueller, principal analyst and vice president at Constellation Research Inc. “This is even more critical for new and upcoming AI architectures, and that explains why this partnership is important for Intel. But partnerships are one thing, real developer adoption is another, and so we will have to wait and see in a few quarters what kind of uptake this will yield.”
Intel has also worked with Baidu in the past to optimize its Optane DC Persistent Memory for Baidu’s AI framework, enabling the Chinese company to take advantage of its superior memory performance to deliver personalized content to users through its AI recommendation engine.
Not least, the companies are working together on something called “MesaTEE,” a “memory-safe function-as-a-service” framework that allows security-sensitive services such as banking, autonomous driving and healthcare to process data more securely on platforms such as public cloud infrastructure and blockchains.