Intel doubles down on cloud push, but will it be enough?
As enormous as cloud computing has become, Intel thinks it’s only getting started–and the chip maker wants a bigger piece of what’s coming next.
At an event in San Francisco today, Intel doubled down on the cloud bet it launched last July. It debuted widely expected Xeon server chips in a bid to speed up the cloud, along with two faster solid-state drives. And alongside a raft of new and improved technologies such as faster storage, plus expanded partnerships meant to make cloud computing easier for many more companies, it revealed plans for what it called the world’s largest cloud application testing cluster, with up to 1,000 servers.
Intel’s big goal is to spur the creation of tens of thousands of private and hybrid clouds, going well beyond today’s huge public cloud providers such as Amazon Web Services, Microsoft and Google as well as massive private clouds from the likes of Facebook and China’s Alibaba.
“It’s very much a multi-cloud world,” Diane Bryant (pictured), Intel’s senior vice president and general manager of its data center group, said at the all-day event for customers and partners.
Selling the silicon brains of private data centers is critical for Intel, whose core personal computer market is in decline. The company also has struggled to get its chips into smartphones.
That’s not Intel’s only motivation. Public cloud computing has taken off, with even large companies such as Netflix and Apple using public clouds for some or all of their computing and storage needs. But the rise of super-clouds from the likes of Amazon, Microsoft and, more recently, Google means a very limited set of companies is building its own servers. And Google in particular, whose data centers run on Intel chips, is also looking at alternatives such as ARM-based processors.
As a result, Intel has limited leverage with what it calls the “Super 7” cloud companies. “Just like every hardware vendor, Intel’s greatest fear is that the number of buyers becomes very small and they lose the ability to influence pricing,” said Brian Gracely, an analyst with Wikibon (owned by the same parent company as SiliconANGLE).
Intel’s data center push also highlights a fundamental shift in its perennial focus on maintaining Moore’s Law, the decades-long doubling of the number of transistors in the same silicon space every couple of years. Intel has signaled that it’s slowing the pace of raw performance gains in favor of chips that don’t use as much power, a quality prized not only in small devices like smartphones but also in servers installed by the hundreds or thousands inside air-conditioned data centers.
Long a company that has tried to forge standards in new technology products, Intel is attempting to do the same in cloud computing. It announced a partnership with two startups, CoreOS and Mirantis (the latter an Intel investment), to make it simpler for businesses to move data and computing jobs freely among various public clouds and between those clouds and their own data centers.
CoreOS distributes a version of Kubernetes, open-source software originally developed by Google that helps companies manage containers, the software shells that let multiple applications run on a single installation of an operating system and so reduce the number of servers needed. Mirantis sells a commercial version of OpenStack, a set of software tools for managing public and private clouds. The Intel agreement aims to make it easier for companies to use the two technologies together.
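To make that consolidation concrete, here’s a minimal sketch, not something from Intel or CoreOS, that uses the official Kubernetes Python client to count how many containerized workloads share each server in a cluster; the cluster itself and the kubeconfig credentials it reads are assumptions for illustration.

```python
# Minimal sketch: group a cluster's pods (sets of containers) by the server they
# run on, to see how many applications share each machine. Assumes a reachable
# Kubernetes cluster and credentials in ~/.kube/config.
from collections import defaultdict

from kubernetes import client, config

config.load_kube_config()              # read cluster credentials from kubeconfig
core_api = client.CoreV1Api()

pods_per_node = defaultdict(list)
for pod in core_api.list_pod_for_all_namespaces(watch=False).items:
    pods_per_node[pod.spec.node_name].append(pod.metadata.name)

for node, pods in sorted(pods_per_node.items()):
    print(f"{node}: {len(pods)} pods")  # many applications, far fewer servers
```

The tally makes the article’s point: many applications, each in its own container, end up packed onto far fewer machines than they would need as standalone servers.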
Also today, Intel announced plans with VMware to create a network of “centers of excellence” intended to speed up cloud deployments. And it’s expanding its “cloud builders” program to include training in the software-defined infrastructure technologies that underlie cloud computing setups. Not least, it revealed “Resource Director” technology that can automatically allocate shared resources on a Xeon processor, such as cache, to high-priority jobs.
The announcements extend Intel’s nearly year-long effort to make it easier and faster to set up and maintain cloud computing centers inside companies. Bryant said that any company with more than about 1,200 to 1,500 servers is a good candidate to set up its own cloud at a lower cost than it can obtain from public cloud companies.
In particular, Intel highlighted several of the next 50 or so companies it views as prime candidates to drive the next wave of cloud data centers: Naver (Korea’s top search engine), Flipkart (the No. 1 e-commerce company in India, whose data center capacity is doubling every year), and Swisscom (Switzerland’s leading telecommunications company, which is seeing 30 percent growth every month). These 50 companies, Bryant said, are growing at nearly twice the rate of the Super 7.
Mainly, Intel sought to counter the perception that operating cloud computing networks is too difficult for all but the largest companies, while acknowledging that for many it remains difficult today. “It’s not easy and quick to set up a cloud today,” Jason Waxman, vice president and general manager of Intel’s Cloud Platforms Group, said in an interview during the event. “But it should be.”
At the same time, Intel said its efforts and those of others in the industry have started to break down the barriers. “They are not big scary architectures anymore,” Jonathan Donaldson, vice president of software-defined architecture in Intel’s Cloud Platforms Group, said in an interview. “The perception that this is too hard is long past its expiration date.”
It’s not clear how quickly Intel’s cloud efforts will pay off, and to what extent they will convince more companies to jump into building their own clouds. Gracely said most of the announcements sounded like incremental improvements and updates.
But Gracely also said the new Resource Director, already being used by companies such as Alibaba, Facebook and China Mobile, looks interesting. “This is important for public cloud applications because it can be difficult to maintain performance levels when you have no visibility about what other applications are running on a server or CPU,” he said. “This seems to allow software-level visibility into chip-level performance in real-time.”
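As a rough illustration of that kind of software-level visibility, here’s a minimal sketch, assuming a Linux server that exposes Resource Director counters through the kernel’s resctrl filesystem (an assumption about the setup, not something described at the event), that reads how much last-level cache is currently in use on each cache domain.

```python
# Minimal sketch: read last-level-cache occupancy counters that Intel Resource
# Director Technology surfaces to software through Linux's resctrl interface.
# Assumes resctrl is mounted at /sys/fs/resctrl on a supported kernel and CPU.
from pathlib import Path

MON_DATA = Path("/sys/fs/resctrl/mon_data")

def llc_occupancy_bytes() -> dict:
    """Return bytes of L3 cache currently occupied, keyed by cache domain."""
    return {
        domain.name: int((domain / "llc_occupancy").read_text())
        for domain in sorted(MON_DATA.glob("mon_L3_*"))
    }

if __name__ == "__main__":
    for domain, occupied in llc_occupancy_bytes().items():
        print(f"{domain}: {occupied} bytes of L3 cache in use")
```

A scheduler with readings like these could, in principle, keep latency-sensitive jobs away from cache-hungry neighbors, which is the visibility problem Gracely describes.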
Whether hybrid clouds for most companies are more than a transition to the undeniable efficiency of using public cloud services remains a point of contention in the industry. Jay Kreps, cofounder and chief executive of the cloud streaming-data startup Confluent, told SiliconANGLE that while hybrid setups may be necessary for regulatory and other corporate compliance reasons, ultimately the public cloud will be the most efficient even for very large enterprises. The transition may be slow, he said, but “it’s going to happen.”
Photo by Robert Hof