Is the x86 server ready for open source? #CIOAngle, #OCPSummit
This week theCUBE covered the Open Compute Project Summit (#OCPSummit). As the name implies, this conference is part of the open source movement, but with a twist. When most people hear “open source” they think software — Linux, OpenStack, KVM and other major open source projects. This conference is about open source hardware, and in particular, x86 servers.
That may take a moment to sink in, but it is real: there are many indications that the x86 architecture is ready for open source, and something resembling it is already happening at large scale among the big Web service companies.
In general, standards emerge when a technology has matured to the point that competing vendors can no longer differentiate on features or performance. At that point the only source of differentiation left is price, and vendors end up locked into a “race to the bottom.” The technology becomes a commodity with razor-thin margins, and only the lowest-cost providers survive. That is what happened in the PC industry in the early 1990s, leading to the rise of Dell and, later, Lenovo.
One good example of this is the rise of Unix and, later, Linux as production operating systems. Around 1990 the server vendors — IBM, HP, etc. — found that their proprietary operating systems no longer provided any real market differentiation. They refocused competition further up the stack and began migrating to a standard industry operating system, Unix, to save money. However, they discovered that Unix, which was not designed as a production system, did not have everything they needed, and each vendor ended up building a semi-proprietary operating system — AIX, HP-UX, Solaris, etc. — on top of Unix.
At about the same time (1991) Linus Torvalds created Linux, which grew into the flagship project of the open source movement. As that movement took hold and Linux matured through the 1990s, it became the better choice for a standard, low-cost OS. Today it ships on every kind of hardware from low-end servers to mainframes and is fast becoming the dominant compute environment in the data center.
Signs of x86 commoditization
Of course the rule that “open source always wins” is by no means absolute, or we would all have Linux laptops today. And that rule comes from software, not hardware. However, several major market signs indicate that the x86 architecture is reaching commoditization. The most dramatic of these was IBM’s announcement last week that it is selling to Lenovo not just its entire System x server line but every device it builds on top of x86 servers, including its network switches.
The Chinese manufacturer, which bought IBM’s PC line and grew it into the leader of the PC market, is the definition of a commodity-market company. It knows how to drive every extra penny of cost out of its manufacturing and supply chain while maintaining quality and staying at the forefront of design. IBM, by contrast, is very much a leading-edge company with a huge R&D budget, focused on technologies and markets where differentiation is defined by capabilities and services rather than price.

Another major sign of x86 commoditization is the hyperscale architectures of the big Web service companies, which are built on white-box hardware manufactured, for the most part, in Taiwan or China. A third is HP’s Moonshot and its approach to bringing hyperscale into the enterprise.
Real benefits of standardized hardware
So the question is: can standardization on one or more open source architectures, which benefit from development by a large community rather than the much smaller teams any individual company can muster, provide more value than those companies can realize from proprietary products? The huge and growing market for white-box servers, which by definition have no proprietary identity, says the answer is yes. And that answer is not confined to the huge Web service companies running hyperscale architectures.
“Anybody who has a few racks of servers is wondering ‘How do I squeeze efficiency out of that?’” Gary Orenstein, CMO of Fusion-io, said in an interview on theCUBE from the OCP Summit on Tuesday. “Everybody wants a smaller, more efficient, more effective data center which can serve more compute power with less energy. This mission of openness is a great way to get to this level of efficiency. As the Open Compute mission expands, it’s the payday of the efficiency that’s going to keep this wheel turning more and more, not just openness.”
The next question is: who among the big server vendors is paying attention? Wikibon CTO David Floyer provided dead-on analysis of the situation on theCUBE: “Every time you add an SME (single manageable entity) to the infrastructure, you approximately double the cost of management. By pulling together hardware, software and applications into SMEs you manage that cost. By having those [Open Compute] standards, you can then go on up the stack and add on middleware, databases and applications to reach economies of scale by having single SMEs as high up the stack as you can. This drives innovation in multiple ways.”
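Floyer’s rule of thumb implies that management cost grows roughly exponentially with the number of SMEs in the stack. Here is a minimal back-of-the-envelope sketch in Python; the baseline cost and the exact doubling factor are illustrative assumptions, not Wikibon figures:

# Illustrative model of Floyer's rule of thumb: each additional
# single manageable entity (SME) roughly doubles management cost.
BASELINE_COST = 100_000  # assumed annual cost of managing one SME
DOUBLING_FACTOR = 2.0    # "approximately double" per added SME

def management_cost(num_smes: int) -> float:
    """Estimated annual management cost for a stack of num_smes entities."""
    return BASELINE_COST * DOUBLING_FACTOR ** (num_smes - 1)

for n in (1, 2, 4, 8):
    print(f"{n} SMEs -> ${management_cost(n):,.0f}")

On these assumed numbers, going from one SME to eight multiplies the management bill by 128, which is why consolidating hardware, software and applications into as few SMEs as possible pays off so quickly.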