UPDATED 12:32 EDT / MARCH 13 2014

Peeling back the layers of the software-defined data center

Many companies today are adopting the term ‘Software-Defined Data Center.’ Some describe it as extending the notions of abstraction, pooling and automation to all data center resources – the traditional areas of compute, network and storage, as well as power and cooling. Equally important is the ability to manage all of this via a set of policies.

But if we go beneath all of the layers, what really is going on?

These concepts – abstraction, pooling and automation – are all related. For example, an abstraction is provided when you create a logical description of your resources, like a network router. This can be done in many ways. Sometimes there may be an actual physical device in the network, but the system presents it as a virtual device by associating a logical device with the physical one. An example of this is network slicing, such as FlowVisor. When you manipulate the virtual device, the system translates those requests to the actual physical device, which handles the request.
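
Here is a minimal Python sketch of that translation step. It is not FlowVisor itself, and every class and method name is invented for illustration: a tenant manipulates a logical router, and each operation is mapped onto a shared physical device, tagged with a slice identifier so tenants stay separated.

```python
class PhysicalRouter:
    """Stands in for the real hardware; names here are illustrative."""
    def __init__(self):
        self.routes = {}  # (slice_id, prefix) -> next_hop

    def program_route(self, slice_id, prefix, next_hop):
        self.routes[(slice_id, prefix)] = next_hop


class LogicalRouter:
    """What the tenant sees: a private router that is really a slice."""
    def __init__(self, slice_id, physical):
        self.slice_id = slice_id
        self.physical = physical

    def add_route(self, prefix, next_hop):
        # The logical operation is translated onto the shared device,
        # tagged with the slice so tenants cannot see each other.
        self.physical.program_route(self.slice_id, prefix, next_hop)


hw = PhysicalRouter()
tenant_a = LogicalRouter("tenant-a", hw)
tenant_a.add_route("10.0.0.0/24", "192.168.1.1")
```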

However, this approach has not gained broad market adoption. A more interesting question is whether you can eliminate the dedicated physical device entirely. For example, in some network virtualization systems, the activities of a logical network switch are simulated completely in software, and the physical network just does the journeyman job of transmitting the bits across the wire.

You can ask, “How can that happen? Doesn’t a network packet ultimately need to get handled by some physical device?” The answer is “Yes, but only part of the time.” The reason is that if you look inside a physical network router, it’s basically a computer, and network packets are just bits on the wire. So if you can manipulate the bits, then as far as the users are concerned (the “user” may itself be another computer program or another network device), it doesn’t matter whether the work is done by a software simulation or by a physical device underneath, as long as the packets go through.
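
To make that concrete, here is a toy learning switch written entirely in software. It is a deliberately simplified sketch, not how a production virtual switch such as Open vSwitch is built, but it shows that the core of switching is just bookkeeping over bits: learn which port a source address lives on, then forward or flood.

```python
class SoftwareSwitch:
    def __init__(self, num_ports):
        self.mac_table = {}          # mac -> port it was last seen on
        self.ports = range(num_ports)

    def receive(self, in_port, src_mac, dst_mac, payload):
        self.mac_table[src_mac] = in_port          # learn where src lives
        if dst_mac in self.mac_table:
            return [(self.mac_table[dst_mac], payload)]   # known: unicast
        # Unknown destination: flood to every port except the ingress.
        return [(p, payload) for p in self.ports if p != in_port]


sw = SoftwareSwitch(num_ports=4)
sw.receive(0, "aa:aa", "bb:bb", b"hello")    # floods, learns aa:aa -> 0
out = sw.receive(1, "bb:bb", "aa:aa", b"hi")  # delivered only to port 0
```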

Layering flexibility


Using this reasoning, the other attributes mentioned earlier, like pooling and automation, fall into place. The layer of indirection is what gives the software-defined data center its software-based flexibility.


Let’s look at automation. Here the use cases are much broader and include items such as cooling and power. If you insert a layer of software to present an abstraction, you have a path toward automating these devices. For traditional server-related resources, you can construct devices such as virtual machines, storage volumes or network devices as computer programs that run in your system, mapped to the physical devices that implement them. That isn’t possible with cooling or power, because we’re talking about real things like airflow and electrons, but you can certainly control them as long as they are under computer control.

Automation means you are no longer relegated to manually performing tasks such as changing settings, pulling and inserting wires, or flipping physical switches. You can create an API layer that represents the data center’s devices. Once you abstract them behind a programming interface, you not only separate the logical world from the physical world, but you can start to be clever about changing devices and their configurations to meet higher-level user and application needs. This ranges from operational items, like self-provisioning of devices by end users, onward: you are no longer bound to configuring physical devices, which are often under the control of a data center IT team.

That team is either very busy or reluctant to make frequent changes, so a self-provisioning system is attractive. Even if you are not providing self-provisioning of services, automation has many advantages. One is the mapping of policies, such as security policies, into the logical infrastructure. If there is a need to cordon off a segment of your infrastructure for security reasons, it may be extremely painful to rapidly reconfigure the physical devices. But if they are under programmatic control, you can tie the policies of the application owners and their workloads directly into the underlying devices and their configuration.
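
A minimal sketch of what such a policy-aware provisioning API might look like, with all names (SECURITY_POLICIES, provision_vm and so on) hypothetical: the security policy is evaluated in software at request time and flows straight into the logical configuration, with no hardware reconfiguration in the loop.

```python
SECURITY_POLICIES = {
    "pci":     {"isolated_network": True,  "encryption": True},
    "default": {"isolated_network": False, "encryption": False},
}


class Datacenter:
    def __init__(self):
        self.inventory = []

    def provision_vm(self, owner, cpus, memory_gb, policy="default"):
        rules = SECURITY_POLICIES[policy]
        vm = {
            "owner": owner,
            "cpus": cpus,
            "memory_gb": memory_gb,
            # The policy flows straight into the logical network config.
            "network": "isolated-segment" if rules["isolated_network"]
                       else "shared-segment",
            "disk_encryption": rules["encryption"],
        }
        self.inventory.append(vm)
        return vm


dc = Datacenter()
vm = dc.provision_vm("app-team", cpus=4, memory_gb=16, policy="pci")
```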

Systems infrastructure can also work in concert with cooling, which is not intuitive. A real example: cooling starts to be strained on one side of the data center floor, and a system administrator preemptively live-migrates virtual machines to a cooler section in order to escape a growing wavefront of hot air that may imminently cause servers to overheat. If the cooling system’s sensors are integrated with the virtual machine orchestration system, those live migrations can be automated, coordinating cooling requirements and power consumption against virtual machine placement. Similar work can be done by monitoring the status of the power grid and shifting workloads based on a data center’s power stability or capacity.
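
Here is a hedged sketch of that feedback loop, assuming hypothetical stand-ins (read_zone_temps, live_migrate) for a real sensor feed and an orchestrator’s migration API: when a zone runs hot, its virtual machines are moved to the coolest zone available.

```python
HOT_THRESHOLD_C = 32.0


def read_zone_temps():
    # Placeholder for a real sensor feed: zone -> inlet temperature (C).
    return {"row-1": 34.5, "row-2": 24.0}


def live_migrate(vm, dest_zone):
    # Placeholder for an orchestrator's live-migration call.
    print(f"migrating {vm} to {dest_zone}")


def rebalance(placements):
    """placements: vm -> zone. Move VMs out of overheating zones."""
    temps = read_zone_temps()
    cool_zones = [z for z, t in temps.items() if t < HOT_THRESHOLD_C]
    for vm, zone in placements.items():
        if temps[zone] >= HOT_THRESHOLD_C and cool_zones:
            # Pick the coolest zone (capacity checks omitted for brevity).
            dest = min(cool_zones, key=lambda z: temps[z])
            live_migrate(vm, dest)
            placements[vm] = dest


rebalance({"web-01": "row-1", "db-01": "row-2"})
```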

Another example is pooling of resources. In the traditional world of virtualization and server consolidation, one can either slice and dice physical resources into multiple pieces, or do the reverse by combining multiple devices and presenting them as though they were one. This kind of aggregation was always possible in the network world through concepts like link aggregation or bonding, and we see similar concepts in storage, such as logical volumes that span multiple disks. But rather than using pooling methods that are specific to an operating system or a particular physical device, it’s much more convenient to have a consistent view of the world via a software abstraction. Now I have the freedom to choose among different hardware vendors and create a single set of pooled resources. Even in the area of power, it’s possible to consider multiple data centers on different power systems to be part of a single pool: if you think there’s a high chance of a brownout at one data center, you can move workloads to another based on the stability of each power grid.
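
As a sketch of the storage case, with illustrative names only rather than any particular product’s API, here is a pool that hides vendor differences and carves logical volumes out of aggregate capacity, spanning devices when necessary.

```python
class StorageDevice:
    def __init__(self, vendor, capacity_gb):
        self.vendor = vendor
        self.free_gb = capacity_gb


class StoragePool:
    def __init__(self, devices):
        self.devices = devices

    @property
    def free_gb(self):
        return sum(d.free_gb for d in self.devices)

    def allocate(self, size_gb):
        """Carve a logical volume that may span multiple devices."""
        if size_gb > self.free_gb:
            raise ValueError("pool exhausted")
        extents, remaining = [], size_gb
        for dev in self.devices:          # vendor no longer matters
            take = min(dev.free_gb, remaining)
            if take:
                dev.free_gb -= take
                extents.append((dev.vendor, take))
                remaining -= take
            if remaining == 0:
                break
        return extents                    # the "volume": a list of extents


pool = StoragePool([StorageDevice("vendor-a", 100),
                    StorageDevice("vendor-b", 200)])
vol = pool.allocate(150)   # spans both vendors' devices
```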

It’s all about abstraction


So the upshot is that a software-defined data center is, as its name implies, driven by software. But more importantly, it is the abstraction that really matters. By separating the physical devices from the way you want to view the world, you make computer systems far more flexible and agile. It enables you to manipulate the world based on end-user requirements, which is a top-down view, as opposed to a device-centric, bottom-up view.

This is a natural evolution for most computer systems. In summary, what I believe matters most is to respect the needs of end users and the policies under which they run their workloads. By defining the data center as software abstractions, it becomes much easier to break down the traditional divide between application owners and infrastructure owners. This is not to say that the data center world will be defined solely by app owners. Both application and networking teams will participate, and the benefits already seen in areas such as server virtualization will extend to the entire data center, including network systems.


About the Author

Dan Conde likes to work in system and infrastructure software and has previously worked at VMware, Rendition Networks, NetIQ and Microsoft. Dan received his Computer Science degree from the University of California at Berkeley and an MBA from the Haas School of Business at UC Berkeley.


feature image: gualtiero via photopin cc
