UPDATED 17:10 EDT / AUGUST 17 2016

NEWS

Intel bets on 5G wireless networks and AI to power its future

Intel today doubled down on two big bets on the future of computing: faster wireless networks and artificial intelligence.

At its annual developer conference in San Francisco, the chipmaker made the case that its own future will depend as much as anything on ultra-fast wireless networks that can handle the explosion of data created by billions of smartphones and by the wireless sensors in everyday devices now known as the Internet of Things. Current networks simply can’t handle all those devices and data; doing so will require not only more bandwidth but a new network design.

To that end, Intel announced an even closer partnership with longtime ally AT&T. The wireless carrier will work with Intel to help bring the essence of cloud computing to wireless networks, particularly the much faster 5G networks that will start to roll out in the next year or two. AT&T will use Intel’s powerful Xeon processor chips in its data centers to make it easier to run applications on software-based virtual networks rather than on custom silicon chips. “We intend for software-defined networking and analytics to be central to our launch,” said John Donovan, chief strategy officer and group president of AT&T’s technology and operations.

Intel signaled its belief that its future depends on the new applications enabled by networks like AT&T’s, elevating the carrier into the same elite group of public and large private cloud computing companies it calls the “Super 7”: Amazon.com, Microsoft, Google, Facebook, Baidu, Alibaba and Tencent. Like those companies, AT&T will get early access to Intel silicon and other technologies. Patrick Moorhead, president of Moor Insights & Strategy, said it was odd to include AT&T in the group of the largest cloud companies, but added that doing so could help Intel ensure that its chips and devices will run well on 5G networks.

And Intel’s carrier relationships may not end there. “All the big carriers are doing makeovers to their networks,” said Jean Bozman, vice president and principal analyst at Hurwitz & Associates.

Intel’s other big focus at the conference was artificial intelligence, specifically deep learning neural networks, which attempt to emulate how the brain works to improve image and speech recognition and other tasks. Diane Bryant (pictured above), executive vice president and general manager of Intel’s Data Center Group, said analytics enabled by AI is driving an increasingly large portion of data center computing. “Analytics is the fastest-growing workload in the data center,” she said in her keynote at IDF, where she introduced a new high-end chip aimed at AI.

The two seemingly disparate technology bets on 5G and AI actually connect, though in a somewhat roundabout way. The vastly greater network speeds that will come with 5G networks in coming years, as much as 100 times today’s bandwidth, will change the nature of computing and communications much more than a simple speed increase would suggest.

Venkata “Murthy” Renduchintala, president of Intel’s Client and Internet of Things Businesses and Systems Architecture Group

“It will make computing in our world truly pervasive and ambient,” said Venkata “Murthy” Renduchintala, president of Intel’s Client and Internet of Things Businesses and Systems Architecture Group. “Compute and communications will come together.”

Essentially, Bridget Karlin, managing director of IoT strategy at Intel, added in a later technical session, “wireless goes from a communications platform to a compute platform.” That means long-predicted new applications, ranging from 3D video and hologram calls to immersive experiences such as virtual reality to smart traffic and parking solutions, can finally happen.

Important as 5G networks may prove to Intel, it isn’t betting just on public wireless networks. It’s also continuing a long-term push to supply chips and other technologies to improve backroom data centers operated by large companies, which increasingly are remaking them into more flexible private cloud networks. Intel is betting those networks ultimately will be a much larger market than the “Super 7+1,” as it now dubs the seven cloud companies plus AT&T.

“The future is thousands of clouds,” said Bryant, who noted that private cloud hardware and software spending is now 20 percent of the cloud computing total, growing 20 percent annually. “No one cloud solution is going to solve all needs.”

Intel’s cloud focus is a defensive as well as offensive move. Public cloud computing has started on a steep growth curve, as large companies such as Netflix and Apple use it for some or all of their computing and storage needs. But given the small number of “hypercloud” companies, Intel has little of the leverage over them that it once had over hundreds of PC makers. So it needs a healthy, growing population of private clouds to provide another boost for its data center chips.

But Intel isn’t alone in this quest either, and that’s where AI comes in. Companies such as Nvidia Corp., which makes graphics processing units (GPUs), and Google, which designed its own “Tensor Processing Unit” chip, are using them in servers for emerging workloads such as machine learning, and so is a wave of startups. Seeming to concede as much, Intel recently said it would acquire Nervana Systems Inc., a designer of chips and systems for the fast-emerging branch of artificial intelligence called deep learning.

Intel's new "silicon photonics" chip module for speeding up networks

Intel’s new “silicon photonics” chip module for speeding up networks

But today, Intel also announced new chips intended to boost its presence in AI and beyond. For one, it officially unveiled the next generation of its Xeon Phi processor chips, code-named Knights Mill, which will focus on machine learning when they become available next year. Intel touted the chips’ expected performance on AI tasks, thanks partly to their ability to access memory much faster.

Nvidia shot back with claims on its corporate blog that Intel’s performance benchmarks are outdated. Still, Jing Wang, a senior vice president at Baidu, which has depended heavily on GPUs for its AI research, indicated the Chinese company is broadening its bets. “Xeon Phi processors are a great fit for running our machine learning systems,” he said.

Intel also announced the commercial availability of a new “silicon photonics” chip module intended to vastly speed up data center networks generally. Sixteen years in development, the module connects network switches via photons, or units of light, rather than much slower electrons, removing a longtime bottleneck in corporate networks by enabling 100-gigabit-per-second speeds, even over several kilometers. Microsoft’s Azure cloud operation will start using it soon.

Photos by Robert Hof

