UPDATED 14:00 EDT / MARCH 05 2014

The Evolution of the Data Center: A Timeline from the Mainframe to the Cloud

As mentioned in Wikibon’s “The Data Center: Past, Present and Future” post, “Data centers are at the center of modern software technology, serving a critical role in the expanding capabilities for enterprises.” The concept of the “data center” has been around since the late 1950s, when American Airlines and IBM partnered to create Sabre, a passenger reservations system that automated one of the airline’s key business areas. The idea of a data processing system that could create and manage airline seat reservations and instantly make that data available electronically to any agent at any location became a reality in 1960, opening the door to enterprise-scale data centers.

Since then, physical and technological changes in computing and data storage have led us down a winding road to where we are today. Let’s take a brief look at the evolution of the data center, from the mainframe of yesterday, to today’s cloud-centric evolution, and some impacts they’ve had on IT decision-making.

1946

The Electronic Numerical Integrator and Computer (ENIAC) was completed in 1946 for the U.S. Army to calculate artillery firing tables, and is widely regarded as the first general-purpose electronic digital computer.

Early 1960s

The first fully transistorized computer, TRADIC, had been introduced in 1954; it was the first machine to use only transistors and diodes, with no vacuum tubes. Serious commercial transistorized systems did not arrive until the 1960s, when mainframes such as IBM’s System series delivered a substantial jump in computing capability.

1971

Intel introduced its 4004 processor, becoming the first general-purpose programmable processor on the market. It served as a “building block” that engineers could purchase and then customize with software to perform different functions in a wide variety of electronic devices.

1973

The Xerox Alto becomes the first desktop computer to use a graphical user interface, featuring a bit-mapped high-resolution screen, large internal memory, and special software.

1977

ARCnet is introduced as the first commercially available LAN, put into service at Chase Manhattan Bank. It supports data rates of 2.5 Mbps and connects up to 255 computers on a network.

1978

SunGard develops and establishes the business of commercial disaster recovery.

Note: Prior to the introduction of PC servers, IT decisions revolving around the mainframe had to be made at enterprise scale for everything: operating system, hardware, and applications alike. All of it ran on one device for the entire enterprise, offering limited flexibility and forcing difficult IT decisions.

1980s

Personal computers (PCs) were introduced in 1981, leading to a boom in the microcomputer industry.

Sun Microsystems developed the Network File System (NFS) protocol, allowing a user on a client computer to access files over a network in a manner similar to how local storage is accessed.
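That idea — remote files behaving like local ones — still works the same way on a modern Linux client. A minimal sketch, assuming a hypothetical NFS server named “fileserver” exporting /export/home:

```shell
# One-off mount of the remote export (server and path are hypothetical):
sudo mount -t nfs fileserver:/export/home /mnt/home

# Or make it permanent with an /etc/fstab entry:
#   fileserver:/export/home  /mnt/home  nfs  defaults  0  0

# Once mounted, the remote files are read and written like local storage:
ls /mnt/home
```

The point of the protocol is exactly this transparency: applications above the mount point need no changes to use storage that physically lives elsewhere on the network.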

Computers were being installed at a rapid rate everywhere we turned, but minimal attention was paid to their environmental and operating requirements.

Early 1990s

Microcomputers began filling old mainframe computer rooms as “servers,” and the rooms became known as data centers. Companies then began assembling these banks of servers within their own walls.

Mid 1990s

The “.com” surge drove companies to demand fast Internet connectivity and nonstop operation. This resulted in enterprise construction of server rooms and, in turn, much larger facilities housing hundreds or thousands of servers. The data-center-as-a-service model became popular at this time.

Note: Thanks to PC servers, IT decisions started being made in two separate ways. Servers allowed application-level decisions, while hardware (data center) decisions remained enterprise-level decisions of their own.

1997

Connectix released Virtual PC for the Macintosh. Like SoftPC before it, Virtual PC allowed users to run a copy of Windows on a Mac in order to work around software incompatibilities.

1999

VMware began selling VMware Workstation, which was similar to Virtual PC. Initial versions ran only on Windows; support for other host operating systems was added later.

Salesforce.com pioneered the concept of delivering enterprise applications via a simple website.

2001

VMware ESX is launched – a bare-metal hypervisor that runs directly on server hardware without requiring an underlying operating system.

2002

Amazon Web Services begins development of a suite of cloud-based services, including storage, computation, and human intelligence via “Amazon Mechanical Turk.”

2006

Amazon Web Services begins offering IT infrastructure services to businesses in the form of web services, now commonly known as cloud computing.

2007

Sun Microsystems introduces the modular data center, transforming the fundamental economics of corporate computing.

2011

Facebook launches the Open Compute Project, an industry-wide initiative to share specifications and best practices for building the most energy-efficient and economical data centers.

About 72 percent of organizations said their data centers were at least 25 percent virtual.

2012

Surveys indicated that 38 percent of businesses were already using the cloud, and 28 percent had plans to either initiate or expand their use of the cloud.

2013

Telcordia introduces generic requirements for telecommunications data center equipment and spaces. The document presents minimum spatial and environmental requirements for data center equipment and spaces.

Google invested $7.35 billion in capital expenditures on its Internet infrastructure during 2013. The spending was driven by a massive expansion of Google’s global data center network, representing perhaps the largest construction effort in the history of the data center industry.

Today and Beyond

Today’s data centers are shifting from a model of infrastructure, hardware, and software ownership toward a subscription and capacity-on-demand model.

To support application demands, especially those delivered through the cloud, today’s data center capabilities need to match those of the cloud. The entire industry is now being reshaped by consolidation, cost control, and cloud support. Paired with cloud computing, today’s data centers allow IT decisions about how resources are accessed to be made on a call-by-call basis, even as the data centers themselves remain distinct entities.
