UPDATED 13:00 EDT / APRIL 28 2011

Interop Preview: What Is All the Hype About OpenFlow?

I have been keeping tabs on OpenFlow for the past few years, and I am excited to see increased awareness of and interest in this technology as we approach Interop. Despite the strength of the Open Networking Foundation’s industry partners, OpenFlow and Software-Defined Networking have only really become well known among academics, researchers and industry strategists; there is little awareness of them among business leaders, or even among IT and networking professionals. I view the strong showing for OpenFlow at Interop this year as something of a coming-out party, one that will significantly raise industry awareness and anticipation and make OpenFlow among the most well-known and highly anticipated technologies in the entire computing industry.

So what is OpenFlow?

In 1984, a team of engineers from Phoenix Technologies worked to replicate the BIOS of the IBM Personal Computer. The result was the PC clone, which commoditized computing and opened it to millions of new applications that have shaped the world as we now know it. Because of this innovation, Windows, Linux, VMware, the open-source movement and cloud computing as we know it all became possible. While this has been revolutionary for computing, computer networking has not been able to benefit from the same open paradigm. Prior to 1984, compute systems were vertically integrated: the hardware, operating system and application environment were controlled by a single vendor, and that vendor alone set the pace of innovation and limited the possible uses and applications of the system. Today, the networking industry is still completely vertically integrated, with hardware, operating systems and application development capabilities controlled entirely by the network vendor. As a result, network technologies have matured to the point that for many years there has been little technological differentiation between networking vendors, yet these technologies never became commoditized, leaving enterprises to pay a significant premium for commodity technology. The pace of innovation in networking has slowed to a crawl, as the barriers to entry have been far too high for startups and innovators to overcome, with few exceptions. OpenFlow promises to be the technology that solves this challenge.

Like the Phoenix BIOS, OpenFlow creates a standard interface that abstracts network hardware from the operating system and applications. Beyond the BIOS analogy, the OpenFlow standard also allows for virtualization, in effect providing a hardware-based hypervisor for networking gear. In a nutshell, OpenFlow has the potential to impact networking technology the way the PC clone and the hypervisor together impacted computing: creating opportunities for new network operating systems, new networking technologies, and robust application development environments that allow rich interaction between applications and networks.
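At the heart of this abstraction is a simple idea: a switch keeps a table of match/action rules, and an external controller installs those rules. Here is a minimal Python sketch of that flow-table idea. The class and field names are my own illustrations, not the actual OpenFlow wire protocol or any vendor's API.

```python
# Minimal sketch of OpenFlow's core abstraction: a flow table of
# prioritized match/action rules, consulted for each packet.
# Names and rule formats are illustrative, not the real protocol.
from __future__ import annotations
from dataclasses import dataclass

@dataclass(frozen=True)
class Match:
    in_port: int | None = None   # None acts as a wildcard
    dst_ip: str | None = None

    def covers(self, pkt: dict) -> bool:
        return ((self.in_port is None or pkt["in_port"] == self.in_port) and
                (self.dst_ip is None or pkt["dst_ip"] == self.dst_ip))

@dataclass
class FlowEntry:
    match: Match
    actions: list                # e.g. [("output", 2)] or [("drop",)]
    priority: int = 0

class FlowTable:
    def __init__(self):
        self.entries: list[FlowEntry] = []

    def install(self, entry: FlowEntry) -> None:
        # The controller installs rules; the switch just stores and
        # applies them, highest priority first.
        self.entries.append(entry)
        self.entries.sort(key=lambda e: -e.priority)

    def lookup(self, pkt: dict) -> list:
        for entry in self.entries:
            if entry.match.covers(pkt):
                return entry.actions
        # Table miss: punt the packet to the controller for a decision.
        return [("send_to_controller",)]

table = FlowTable()
table.install(FlowEntry(Match(dst_ip="10.0.0.5"), [("output", 2)], priority=10))
table.install(FlowEntry(Match(), [("drop",)], priority=1))

print(table.lookup({"in_port": 1, "dst_ip": "10.0.0.5"}))  # [('output', 2)]
print(table.lookup({"in_port": 1, "dst_ip": "10.0.0.9"}))  # [('drop',)]
```

The key design point is that the switch itself contains no policy at all: everything interesting lives in the rules the controller chooses to install, which is exactly what opens the door to new network operating systems.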

What challenges can OpenFlow solve?

While some of the applications may have been apparent, in 1984 I don’t think we could have imagined most of the ways in which cheap, personal computing would impact the economy and our lives, and I view the potential of OpenFlow in a similar regard. I have listed some of the applications that I foresee, but once networking development is opened to the world, I hope to see possibilities beyond what I can imagine today.

Controller-Based Networking

I anticipate that one of the first widespread applications of OpenFlow will be controller-based networking. One way to think of this is to compare wired local area networks to the way most wireless LANs are implemented today. A wireless LAN can consist of hundreds or thousands of wireless switches (access points), all managed and controlled from a single interface. Applied to wired LANs, this paradigm could radically reduce capital expenditures, because the thousands of elements that make up a network could become much simpler, commoditized and correspondingly far less expensive. The impact on operational expenditures would be even more drastic, as both the expertise and the manpower required to operate such a network shrink considerably. A controller-based model centralizes network intelligence and can be a significant enabler for IT management systems, which would no longer have to interact directly with thousands of network elements.
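The operational win is easy to see in miniature: one policy decision fans out from the controller to every element, instead of being configured box by box. A toy sketch, with made-up Switch/Controller classes standing in for real controller software:

```python
# Hedged sketch of the controller-based model: one central controller
# pushing a single policy to many simple, commodity switches.
# These classes are illustrative, not any real product's API.
class Switch:
    def __init__(self, dpid: int):
        self.dpid = dpid            # datapath identifier
        self.rules = []             # rules installed by the controller

    def install_rule(self, rule: dict) -> None:
        self.rules.append(rule)

class Controller:
    def __init__(self):
        self.switches = {}

    def register(self, switch: Switch) -> None:
        self.switches[switch.dpid] = switch

    def push_policy(self, rule: dict) -> None:
        # One administrative action reaches every network element.
        for sw in self.switches.values():
            sw.install_rule(rule)

ctrl = Controller()
for dpid in range(1, 1001):         # a thousand simple, cheap switches
    ctrl.register(Switch(dpid))

# Block telnet everywhere with a single operation, instead of logging
# into a thousand devices one at a time.
ctrl.push_policy({"match": {"tcp_dst": 23}, "action": "drop"})
print(len(ctrl.switches[500].rules))  # 1
```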

IT Management and Automation

As OpenFlow centralizes network intelligence, it removes the need for network management systems to interact directly with thousands of network elements, greatly simplifying the job of the NMS. Beyond this simplification, the Open Networking Foundation (ONF) is working to enrich and standardize northbound APIs for network operating systems. The ONF API promises to allow a significant increase in visibility and control between external applications and the network.

Network Security and Performance

Organizations are building their data center environments to provide a virtual machine at the touch of a button, dynamically provisioning the required compute, storage and network elements. But networking was not designed to be dynamic and agile, and efforts to make traditional networking policy more agile simply cannot keep up with application needs. Networks play an important role in ensuring that applications perform properly and are secure, but the way networks provision these features today is growing increasingly irrelevant. In the traditional model for QoS and security, the network administrator must learn each application's required performance characteristics, the IP addresses of the application and its clients, and its TCP/UDP port numbers, and then manually create a static policy to enforce them. This is clearly not a feasible model, not only in terms of time and manpower but, more importantly, because these elements are increasingly dynamic: the IP addresses of servers, clients and peers change constantly, and most applications use a wide and shifting range of ports.

But what if the application could talk to the network? The application could dynamically deliver its security and performance requirements to the network, allowing compute elements to be completely dynamic and mobile while retaining the benefits of network services. It would also let the application express much more sophisticated quality and security requirements on the fly, enabling the network to provide far better application security and performance than it can today.
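To make that concrete, here is a small sketch of what "the application talks to the network" might look like: the app states its intent, and a function translates that intent into flow-rule fragments a controller could install wherever the app's traffic shows up. The rule format and function names are my own assumptions, not an ONF standard.

```python
# Sketch: an application declares its needs to the network through a
# hypothetical northbound interface, instead of an administrator
# hand-writing static QoS/ACL policy per IP address and port.
def compile_intent(app_name: str, requirements: dict) -> list:
    """Translate app-level intent into flow-rule fragments a controller
    could install. Policy follows the app, not a static address list."""
    rules = []
    if "max_latency_ms" in requirements:
        # Steer the app's flows into a low-latency queue.
        rules.append({"app": app_name, "action": "enqueue",
                      "queue": "low_latency"})
    if requirements.get("encrypted_only"):
        # Security requirement travels with the application.
        rules.append({"app": app_name, "action": "drop_unencrypted"})
    return rules

controller_rules = []   # stand-in for a real controller's rule store
controller_rules += compile_intent("voip", {"max_latency_ms": 20,
                                            "encrypted_only": True})
print(controller_rules)
```

Because the rules are keyed to the application rather than to addresses and ports, the policy survives the server moving, its IP changing, or its clients coming and going.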

Multi-tenant networks

PaaS, IaaS, SaaS and other cloud services use a single compute environment to serve multiple customers. Security and isolation of customer data are absolute requirements, and improvements in security need to be made before many customers and applications with rigid security requirements can move to the cloud. MPLS and Virtual Routing and Forwarding (VRF) provide logical isolation in multi-tenant environments today, but these technologies can be expensive, difficult to implement and administer, and limited in the features and security they can provide. OpenFlow completely abstracts network intelligence from the forwarding hardware, which not only promises to simplify provisioning and management but could also offer a level of security and isolation not possible with MPLS and VRF alone.

Cloud-Based Application/Job scheduling

Over time I expect the one-to-one coupling between applications and operating systems to evolve into true cloud-based paradigms, in which application processing jobs are sent to the nearest compute element with available capacity. MapReduce and Hadoop already operate on this paradigm: the application (the JobTracker) sends a compute job to a nearby node for processing. The network performance between the JobTracker and the compute node is a significant factor, yet today MapReduce must make placement decisions with very limited knowledge of the network. Again, what if the application could talk to the network?
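A sketch of what network-aware placement might look like: a scheduler choosing among candidate nodes using live link metrics that a controller could expose. Today's Hadoop only knows static rack locality; the metrics dictionary and cost ordering below are hypothetical.

```python
# Sketch: picking a compute node for a MapReduce-style task using live
# link metrics a controller could expose. The metric names and the
# cost ordering are illustrative assumptions, not Hadoop's scheduler.
def pick_node(candidates: list, link_metrics: dict) -> str:
    """Prefer data-local nodes, then lower latency, then more free
    bandwidth -- information only the network can supply in real time."""
    def cost(node):
        m = link_metrics[node]
        return (0 if m["data_local"] else 1,   # locality first
                m["latency_ms"],               # then latency
                -m["free_bw_mbps"])            # then available bandwidth
    return min(candidates, key=cost)

metrics = {
    "node-a": {"data_local": False, "latency_ms": 2.0, "free_bw_mbps": 400},
    "node-b": {"data_local": True,  "latency_ms": 5.0, "free_bw_mbps": 100},
    "node-c": {"data_local": False, "latency_ms": 0.5, "free_bw_mbps": 900},
}
print(pick_node(list(metrics), metrics))  # node-b: data locality wins
```

With no local replica available, the same scheduler would fall back to the lowest-latency link, a decision it simply cannot make today without asking the network.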

Other applications?

There is a lot of important research currently being done on OpenFlow, and I would encourage you to visit http://www.openflow.org/videos/ to see some of the work under way. Applications in development show promise in drastically simplifying MPLS provisioning and management, incorporating advanced load balancing into the network operating system, allowing the network to forward based on real-time link characteristics, improving convergence, reducing power consumption, allowing mobile devices to seamlessly transition between numerous types of networks, and much more.

Is OpenFlow just hype?

Over the years, there have been many initiatives to standardize and open network technologies, and most have completely failed or had very limited impact, so what is different about OpenFlow?

I think there are many proof points indicating that OpenFlow will live up to its promise, but I see the most important as the growing demand and need for robust application/network interaction. The destiny of the network is no longer controlled by networking companies alone. Google, Yahoo, Facebook and other prominent technology companies have realized that applications need to interact directly with the network to enhance and enable new levels of functionality in cloud and mobile applications, and these companies are actively driving the success of OpenFlow. Add to that the fact that nearly all of the most powerful networking companies, including Juniper, Cisco and Brocade, have gotten behind the initiative.

For these reasons, I view OpenFlow's success and impact as a question of when, not if. OpenFlow, and its parent architecture, Software-Defined Networking, are here. In the coming years they will revolutionize and transform computer and communications networks, and bring a pace of innovation in networking that we have not seen since the introduction of the internet.
