DARPA-backed project aims to dramatically cut WAN provisioning times
Despite the tremendous progress made in recent years toward injecting more programmability into the stack, the wide-area network (WAN) remains a universal bottleneck that affects companies and cloud service providers alike. While companies like Amazon and Google have figured out how to scale across commodity servers within individual data centers, the process of connecting data centers to each other continues to be time-consuming and rigid.
But it doesn’t have to be, according to the group of vendors working to lower the bandwidth barriers in multi-location cloud environments under the auspices of DARPA’s Dynamic Multi-Terabit Core Optical Networks (CORONET) program. The participants recently demonstrated a proof-of-concept that reduced the time it takes to provision links between disparate facilities from days to mere seconds. That speed-up holds potentially major ramifications for how cloud vendors and large enterprises manage their environments.
The prototype is based on IBM’s OpenStack-powered public cloud, which runs on hardware managed by SoftLayer, the hosting company it acquired in June 2013. Virtual machines deployed across two SoftLayer facilities were hooked up to a WAN orchestration engine developed by fellow CORONET contributors AT&T Corp. and Applied Communication Sciences that automatically routed requests to the most appropriate tier of the carrier’s network: IP/MPLS, the optical transport layer, or a wavelength.
The aptly named AT&T SDN WAN Orchestrator managed to configure links in as little as 40 seconds, the companies claimed, after which transport capacity could be allocated in less than a second. They further said the software handled everything from the initial setup to the termination of individual connections based on application requirements.
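The companies didn’t describe the orchestrator’s interface, but for a rough sense of how an application-driven request might map onto the three network tiers, here is a minimal Python sketch. Everything in it (the names, the bandwidth thresholds, the data center identifiers) is hypothetical and illustrative only, not drawn from AT&T’s actual software.

```python
# Hypothetical sketch of tier selection in an SDN WAN orchestrator.
# None of these names come from the AT&T SDN WAN Orchestrator itself.

from dataclasses import dataclass
from enum import Enum


class Tier(Enum):
    """The three network tiers named in the article."""
    IP_MPLS = "ip/mpls"
    OPTICAL_TRANSPORT = "optical transport"
    WAVELENGTH = "wavelength"


@dataclass
class LinkRequest:
    src_datacenter: str
    dst_datacenter: str
    bandwidth_gbps: float
    duration_seconds: int  # connection is torn down once this elapses


def select_tier(req: LinkRequest) -> Tier:
    """Toy policy: small flows ride IP/MPLS, medium flows the optical
    transport layer, and the largest get a dedicated wavelength.
    The thresholds here are made up for illustration."""
    if req.bandwidth_gbps < 10:
        return Tier.IP_MPLS
    if req.bandwidth_gbps < 100:
        return Tier.OPTICAL_TRANSPORT
    return Tier.WAVELENGTH


if __name__ == "__main__":
    req = LinkRequest("softlayer-east", "softlayer-west", 40.0, 3600)
    print(f"Provisioning over {select_tier(req).value} "
          f"for {req.duration_seconds} seconds")
```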
DARPA plans to make the technology available commercially in the telecommunications industry with the goal of extending the usefulness of bursting services that cloud providers and enterprises depend upon to handle traffic spikes. Taking out the overhead required to tap that extra bandwidth could make it possible to transfer large amounts of information among data centers as needed and thus eliminate many of the trade-offs currently involved in keeping geographically distributed environments properly backed up and synced.