Doubling down on an already big bet on artificial intelligence, Intel Corp. today announced a series of new hardware and software products and partnerships intended to position the company as the key driver of what it believes is the next wave of computing.
“We want to drive the AI computing era,” Intel Chief Executive Brian Krzanich said at an “AI day” Intel held in San Francisco today. “We want to democratize access to AI. [And] we want to be the trusted AI guide in the industry.”
Nervana AI platform
For one, the chipmaking giant announced the Intel Nervana platform, a set of technologies based on its $400 million acquisition of Nervana Systems Inc. in August. Nervana had been designing chips and software to accelerate deep learning, the branch of AI responsible for big advances in speech recognition, language translation and image recognition in recent years.
Chips code-named Lake Crest, based on the Nervana technology, will be tested in the first half of next year, with availability to “key” customers later in the year. Intel also announced a chip called Knights Crest that will merge its Xeon processors with Nervana technology. Diane Bryant, executive vice president and general manager of Intel’s Data Center Group, promised that the company will deliver a 100-fold increase in deep learning performance within the next three years.
Bryant took particular aim at the graphics processing units, made by Nvidia Corp., Advanced Micro Devices Inc. and others, that have driven much of the progress in deep learning in recent years. Although Intel processors, or CPUs, run more than 90 percent of the servers used in AI work, deep learning neural networks, which attempt to mimic the brain in a primitive way, usually run on GPU add-ons to those systems.
Intel also served notice that it plans to lead the way on whatever form of AI develops going forward. Partly to that end, the company announced a partnership with Google Inc., a leader in machine learning, a technology the search giant uses in virtually all its apps and services and offers via its cloud computing platform. The partnership is aimed at making AI easier to implement across multiple cloud computing systems.
Intel has already begun tweaking its chips to run the TensorFlow machine learning software library that Google developed and made available through open source. It plans to complete those tweaks by the start of 2017.
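The reason TensorFlow matters to a chipmaker is that it expresses computations as a graph of operations, and it is that graph, not arbitrary code, that a vendor can map onto its hardware. A rough sketch of the dataflow-graph idea in plain Python (an illustration of the concept only, not Intel’s or Google’s actual code):

```python
# Toy dataflow graph in the spirit of TensorFlow: each node is an
# operation whose inputs are other nodes. A backend (CPU, GPU or a
# specialized chip) only needs to know how to execute each op type,
# which is what makes hardware-specific optimization possible.

class Node:
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, inputs, value

def constant(v):
    return Node("const", value=v)

def add(a, b):
    return Node("add", (a, b))

def mul(a, b):
    return Node("mul", (a, b))

def evaluate(node):
    """Recursively execute the graph; a real engine would instead hand
    whole subgraphs to an optimized kernel for the target chip."""
    if node.op == "const":
        return node.value
    args = [evaluate(n) for n in node.inputs]
    if node.op == "add":
        return args[0] + args[1]
    if node.op == "mul":
        return args[0] * args[1]
    raise ValueError(f"unknown op {node.op}")

# y = (2 * 3) + 4
y = add(mul(constant(2), constant(3)), constant(4))
print(evaluate(y))  # → 10
```

Because the graph is built before it runs, a backend is free to fuse operations or reroute them to whatever silicon executes them fastest, which is the opening Intel is targeting.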
The two companies also are working jointly to make Kubernetes, the Google-developed open-source software that organizes “containers” for running software across many computers, work better on Intel chips. Diane Greene, senior vice president of Google’s cloud and enterprise operations, said the collaboration can help ensure AI workloads can run across multiple public and private clouds.
Not least, Intel announced a raft of tools for developers working with AI. They include commonly used deep learning frameworks, such as Theano, Caffe and Torch, that Intel plans to make run better on its chips, as well as software tools to help build deep learning models and train them with data. And by the end of the year, Intel said, it will release “BigDL” data analytics software to bring deep learning capabilities to the Spark data processing engine.
Overall, Intel sought to show how it has the widest range of hardware and software for running AI systems, whether they are in the cloud, private data centers, or devices such as personal computers, mobile phones and millions of sensor-equipped devices often called the Internet of Things. “We’re trying to be a company that powers all smart and connected devices,” Krzanich said. “Devices become noise without some artificial intelligence.”
Along with Intel’s recent push to make sure its chips continue to dominate the servers behind virtually all public and private clouds, the company’s executives hope this will position Intel as the foundation for whatever new AI developments come along. “All methods of AI will run best in Intel architecture,” Bryant declared.
A lot to prove
Intel has a lot to prove, though. Nvidia’s growth has been rocketing lately as its chips move from their traditional gaming niche into machine learning applications in servers. That momentum likely won’t shift until Intel produces working silicon and gets it built into dedicated servers.
Meantime, Google, despite its new partnership with Intel, has been making its own chips, called Tensor Processing Units, to run its own machine learning work. And IBM on Monday announced a “PowerAI” initiative for the enterprise that combines its Power chips with Nvidia GPUs.
Intel’s aggressive announcements looked promising to at least some observers. “Intel threw their formal AI strategy axe into the ocean, which was very important given the general tech industry sees GPUs as the current driver of AI compute,” said Patrick Moorhead, president and principal analyst at Moor Insights & Strategy, who noted that Intel has appeared to pull together several AI-related acquisitions more quickly than expected. “It’s impressive for such a large company. It’s now up to Intel to flawlessly execute.”