Not So Fast, Public Cloud: Big Players Still Run Privately

I'm a fan of 'public clouds'; I believe they bring the disinfectant of light to operational processes and computational architectures. If your cloud is the public market leader, it's unlikely to resemble the sluggish, bureaucratic internal monopolies I've encountered in many large enterprises (but thanks for the POs, guys). It's the difference between competing at a county track meet and competing at the Olympics: there is a difference between local and global leadership. What is the compelling economic reason for thousands of architecture committees to deliberate for months when you can solve the problem over the network in a day?

But let's be honest: that is a philosophical and revolutionary argument, and it's epistemologically weak. Even though cloud computing is clearly in a transitional phase (Tiktaalik computing), that doesn't excuse all of the data from the conversation.

Two Big Private Cloud Data Points: Google and Facebook

Any 'it will all be public' cloud fan-boy must confront a simple fact: the world's highest-demand computing architectures are run by their respective companies' private IT departments. Even web giant Amazon was only able to offer cloud resources because of the success of its internal IT efforts. Yes, demand from the public now outpaces its internal requirements (Tiktaalik era and all), but is there a top-10 web company built on a public cloud? No. Amazon and Microsoft heavily consume Akamai (so much so that Akamai has to declare them material in its SEC filings) and other CDNs, but both are building massive data-centers of their own as well.

If I'm JP Morgan Chase, I examine this scenario and conclude that private cloud may be the way to go for the core, high-value applications specific to my own industry. If I want to be the computational-workload king of the hill in my industry, perhaps it pays to go private and roll my own. Simplistic perhaps, but where is the large-company counterpoint so far? The real success of cloud-like compute services so far has been in bringing small and midsize companies to market incredibly fast. Here I'm thinking YouTube and CDNs.

Hardware is cheap. If you want to buy ten thousand two-socket, quad-core servers, you can get top-of-the-line gear installed for less than three thousand dollars a server. So for thirty million bucks you can buy an eighty-thousand-core cloud. Spread over an aggressive three-year refresh cycle and you are looking at only $10M a year in server charges for a sick, sick compute farm, or precisely mouse nuts to JPMC. (The average enterprise hardware sales guy carries a $25M+ quota.)
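The back-of-the-envelope math above can be sketched in a few lines (2009-era prices from the figures in this post, purely illustrative):

```python
servers = 10_000
cores_per_server = 2 * 4      # two sockets x quad core
cost_per_server = 3_000       # dollars, installed
refresh_years = 3             # aggressive refresh cycle

total_cores = servers * cores_per_server
capex = servers * cost_per_server          # one-time hardware spend
annual = capex / refresh_years             # amortized over the refresh

print(f"{total_cores:,} cores for ${capex / 1e6:.0f}M, ~${annual / 1e6:.0f}M/year")
# 80,000 cores for $30M, ~$10M/year
```

That annual figure is what gets compared against a public cloud bill in the argument that follows.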

Throw the evolving Ubuntu/Eucalyptus stack on those servers for free, use the AWS API internally and externally if you need to, and voilà: a no-big-deal private-public hybrid of Amazon.
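A minimal sketch of what that hybrid looks like in practice, assuming EC2-compatible tooling such as euca2ools; the endpoint host name, credentials, and image ID below are hypothetical:

```shell
# Point EC2-compatible tools at a private Eucalyptus front end.
# The host, keys, and image ID are made up for illustration.
export EC2_URL="http://cloud.internal.example:8773/services/Eucalyptus"
export EC2_ACCESS_KEY="..."   # issued by your private cloud, not AWS
export EC2_SECRET_KEY="..."

# The same commands work against Amazon by swapping EC2_URL back to
# the public endpoint; that is the whole hybrid trick.
euca-describe-availability-zones
euca-run-instances emi-12345678 -t m1.large
```

Because the API is the same on both sides, workloads can burst to the public cloud or stay private without rewriting the deployment scripts.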

In today's tight environment, with data-centers maxed out from growth, it's much harder than just buying the hardware, no doubt. But in the long term, companies that want their own hardware in their own data-centers will always have decent economics behind that call. Even if the hardware build were 2x the charge from a public cloud, $10M is nothing to highly scaled enterprises. They call it a commodity now for a reason: merely owning it is easy.

The real threat is not a disadvantage in hardware or data-center economics, but in organizational alacrity, human costs, and software. More on the massive human costs another day.

It's not about the hardware

Public clouds need to add more data-handling and ease-of-use value before they have a killer, impossible-to-replicate value proposition. The Google and Facebook cloud applications are great examples of software as the defining value of any cloud.

If you are unfortunate enough to follow my @wattersjames cloud chatter on Twitter, you know I took a little heat from @samj and others for saying Google and Facebook were private clouds. Some argued they were just a giant grid and a LAMP cluster. My answer back is that Google isn't all that batchy, so I don't love the grid distinction; and if a massive LAMP cluster creates a very on-demand, responsive, and massively scalable cloud platform like Facebook, then I welcome it to the cloud tent. Not to mention that Facebook has its own internal Hadoop cloud it often discusses.

Once you write your own massively user- and data-scalable software IP, deployable on a highly elastic basis across a huge quantity of servers, you've created cloud magic. Google and Facebook have both done this and deployed it on private hardware.

The highest value work in cloud computing is writing the software and scaling it.

Because this work is hard and high-value, companies need to get very selective about where they spend their development minds and dollars. Unlike buying hardware, creating unique, cloud-native software IP and value is risky and heavy in rare human-talent costs. Buy as many servers as you like, but run standards-based cloud infrastructure on them (we'll get there from both proprietary and open-source platforms). Kick the wasteful addiction of doing custom hardware integration and attempting to create software outside of your core.


With over $200B a year spent customizing IT (with outside vendors alone), I'd say we are doing a pretty horrible job of it. We were such addicts that we went on a body-count bender and tried to train every able mind in India to customize code for us, like a junkie breaking into his neighbor's house for cash.

Cloud computing is enterprise and government IT's intervention moment, their last chance to go clean. It is the architectural adrenaline injection that will either save their life or kill them off.

Yes, I'm afraid of sending enterprise IT back to its old bad-influence friends of internal process architecture, customization, and private clouds, but maybe there is a chance for them all to go clean together?

For very large companies, with a third of developers claiming to be writing private cloud-native applications, the answer seems to be yes.