

Following IBM Corp.’s surprise acquisition of Red Hat Inc. announced Sunday, is “Big Purple” in your future?
Well, that depends on what you think are the answers to these questions:
That’s a lot of questions and I don’t have precise answers for all of them, but let’s run through them quickly and then get to the answer for that last one.
Let’s clump the answers for 1), 2) and 3) – which consider the importance of open source in the future – together. The world is going to need more software than ever, and it can only acquire that software through an open-source model. Why? Because the lowest-cost, highest-quality approach to software development is reuse combined with a commons-based licensing model. The population of software developers may be growing by 15 percent per year, and better tools may be boosting developer productivity significantly, but there is no way the specificity and complexity of edge and increasingly integrated core solutions can be developed in a few software houses or through a bespoke, soup-to-nuts approach inside enterprises.
Will we see better “edge-to-core” frameworks for building software? Absolutely. The Kubernetes ecosystem, for example, has that problem in its sights. Can we add intellectual-property protections on top of open source so that enterprises can capture returns on edge-related invention? Blockchain is one example of a technology that could be used to track and manage contributions of code to the open-source commons. How will we price software for trillions of devices? Well, let’s claim that the “real world”-to-“digital world” software interface is going to catalyze the greatest jump in code production in history, and that the vast majority of that code will be based on open source.
What about questions 4), 5) and 6), which focus on how cloud-based solutions will be written and operated (given that open-source software will be the greatest contributor to those solutions)?
Wikibon strongly believes that the default approach to cloud will not be “move all data to the cloud,” but rather “move the cloud experience to the data, both at the edge and in data center cores.” Why? Physics (e.g., latency and data locality), economics (e.g., bandwidth costs and IP protection) and regulations (e.g., privacy and governance). This guarantees that hybrid cloud will be the dominant framework for enterprise cloud architectures.
But the enterprise goal should be to distribute data and processing to where they’re required while limiting the complexity of the hybrid cloud platform employed. The likely scenario is that enterprises choose a relatively simple but powerful hybrid cloud platform that is plastic (e.g., it scales and easily reconfigures in response to workload changes), can be managed from within the business’s asset portfolio, is highly secure, offers predictable costs and can easily acquire value-adding cloud services at the SaaS and PaaS levels. Simply put, the more common – and open – the platform across cloud, core and edge, the better for the enterprise.
However, a simpler platform doesn’t imply simpler applications. The solutions being built to traverse the real and digital worlds require unprecedented combinations of deep technical (i.e., evolving cloud, security and AI technology), physical (i.e., real-world events being tracked and acted upon) and business (i.e., emergent and contingent processes and agency regimes) expertise. These solutions certainly will be delivered using significant service resources, both of the professional-service and cloud-service kind. Why? Because expertise will accrete fastest in professional service companies, and computing scale will accrete fastest in cloud service companies.
The big trick for service providers, of course, will be how fast technical, physical and business expertise can be translated into software. As expertise is coded, the relationship between service types will shift from lower-margin professional services to higher-margin cloud services. That shift will fuel more technical, physical and business research and development, which in turn will become more software – as long, of course, as that software mostly follows an open-source lifecycle. Otherwise, the wheels of progress will gum up.
So, that suggests that combining a company such as Red Hat with an extensive open source presence and a company such as IBM with extensive core-to-edge technology and business expertise should be a winner, right?
Well, the devil is in the details and my colleague David Vellante has posted an extensive review of those details. His conclusion? Probably, but we’ll see.
That leads to the answers for 7) and 8). I think I’ve shown that the vast majority of software hasn’t been written yet, that an open-source approach (like Red Hat’s) will be essential for it to be generated and adopted, and that the solutions to be built will create unprecedented and fungible opportunities for turning expertise into code (which IBM does well) — much of which will run at the edge and in core systems on-premises (which sustains the relevance of IBM’s legacy customer relationships).
If the market trusts IBM and Red Hat (and IBM doesn’t abuse that trust), then it’s a good deal for users and partners. Certainly, we have to wait and see, but at Wikibon we’ll focus on a few questions over the course of the next year as we wait for the deal to close, including:
Action Item. IBM’s acquisition of Red Hat should be a great deal for users, but the execution of the merger and IBM’s resultant business model will determine success or failure. Nonetheless, IBM should now be a far better partner for providing technologically advanced, and economically practical, edge solutions. Also, users should start planning to evolve legacy applications to hybrid cloud options, testing IBM’s resolve to embrace and exploit Red Hat’s cloud technology and approach to business.