Facebook redefines data center design to support the next billion users
Facebook Inc. is marking the launch of its latest data center in the central Iowa town of Altoona by providing the public with a rare glimpse into the inner workings of the 140-megawatt facility, which implements a brand-new network topology that replaces clusters with “pods” as the basic building block.
The move to release the details of the network continues the social networking titan’s tradition of transparency about infrastructure. Whereas other web-scale giants have tended to keep their data center secrets close to the vest, Facebook is playing a leading role in bringing those lessons to the enterprise with its Open Compute Project. The company is not releasing the technology behind its new network to the open-source community just yet, but it has disclosed a wealth of information about the new setup that could help push organizations into revisiting their legacy architectures.
The Altoona facility puts a unique twist on the traditional method of aggregating servers into clusters in a way that works around the inherent limitations of that model. Instead of splitting up the network into massive groups of racks, Facebook uses uniform pods containing only 48 cabinets each.
That modular approach offers several benefits over cluster-based topologies, the company said. First, a smaller group of servers requires fewer switch ports, which in turn eliminates the need for the monolithic switches of the kind Cisco Systems Inc. sells. That kills two birds with one stone: it sidesteps the risk of becoming locked into a proprietary technology while also reducing the number of servers that can go offline in the event of a failure.
Second, and more significantly, capping the number of machines in a pod avoids the trade-off that emerges as a cluster grows and traffic among its ever-growing number of racks starts to eat into the bandwidth available for connecting to the rest of the data center. That is the biggest sticking point for Facebook, which supports so many users that services are often distributed across multiple clusters.
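To make that trade-off concrete, here is a minimal back-of-the-envelope sketch in Python. The 48-rack pod size comes from Facebook's description; the cluster size, uplink pool, and per-rack uplink figures are purely illustrative assumptions, not numbers the company has published.

```python
# Illustrative oversubscription math for clusters vs. pods.
# All specific figures below are hypothetical assumptions for the sake
# of example, except the 48-rack pod size cited in the article.

def uplink_share_per_rack(racks, uplink_gbps_total):
    """Upstream bandwidth each rack gets when a fixed uplink pool is shared."""
    return uplink_gbps_total / racks

# A large traditional cluster: hundreds of racks behind a big chassis switch
# sharing a fixed pool of uplink capacity toward the rest of the data center.
cluster_racks = 250                   # assumed cluster size
cluster_uplink_gbps = 10_000          # assumed aggregate cluster uplink

# A Facebook-style pod: only 48 racks competing for the pod's upstream pool.
pod_racks = 48
pod_uplink_gbps = 48 * 80             # assumes half of each rack's 160 Gb/s goes upstream

print(f"cluster: {uplink_share_per_rack(cluster_racks, cluster_uplink_gbps):.1f} Gb/s per rack upstream")
print(f"pod:     {uplink_share_per_rack(pod_racks, pod_uplink_gbps):.1f} Gb/s per rack upstream")
# As racks are added to the cluster, each rack's upstream share keeps shrinking;
# the pod keeps its ratio fixed because it never grows past 48 racks.
```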
Thanks to the use of pods, the Altoona facility suffers from no such limitation, according to Facebook. Instead of a large proprietary device, each group of racks is served by four medium-sized commodity switches that divide the available 160 gigabits per second of bandwidth equally between internal and upstream channels. That adds up to a highly scalable topology that Facebook claims can accommodate hundreds of thousands of servers while making optimal use of data center floor space. Underpinning it all is a complete separation of the management layer from the underlying hardware, which is what the software-defined networking movement is trying to bring into the enterprise today.
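For readers who want to see how those numbers fit together, the short Python sketch below models a single pod using the figures in this article: 48 racks, four fabric switches and 160 Gb/s split evenly between intra-pod and upstream traffic. Treating the 160 Gb/s as each rack's uplink capacity, spread across the four switches, is an assumption rather than something Facebook spells out here.

```python
# Minimal sketch of the pod building block described above.
# The 48-rack pod size, the four switches per pod and the 160 Gb/s figure
# come from the article; interpreting 160 Gb/s as per-rack uplink capacity
# spread across the four switches is an assumption.

from dataclasses import dataclass

@dataclass
class Pod:
    racks: int = 48
    fabric_switches: int = 4
    rack_uplink_gbps: int = 160       # total uplink per rack, split across switches

    @property
    def uplink_per_switch_gbps(self) -> float:
        # Each rack spreads its uplink evenly over the pod's switches.
        return self.rack_uplink_gbps / self.fabric_switches

    @property
    def intra_pod_gbps(self) -> float:
        # Half of the capacity stays inside the pod ...
        return self.racks * self.rack_uplink_gbps / 2

    @property
    def upstream_gbps(self) -> float:
        # ... and the other half goes upstream toward the rest of the fabric.
        return self.racks * self.rack_uplink_gbps / 2

pod = Pod()
print(pod.uplink_per_switch_gbps)     # 40.0 Gb/s from each rack to each switch
print(pod.upstream_gbps / 1000)       # 3.84 Tb/s of upstream capacity per pod
```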
Facebook is seeing tremendous success with the implementation. In its first iteration, the environment already provides ten times as much transport capacity between pods as the social networking giant’s other facilities, a figure the company expects it will be able to easily quintuple in the future. Connecting the data center to the social network’s users is a set of specially designated edge pods, each of which provides 7.68 terabits per second of transport capacity with the option to add more.
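As a quick sanity check of the edge-pod figure, the quoted 7.68 Tbps works out to 192 links per edge pod if one assumes 40-gigabit links, which is an assumption on our part rather than a disclosed detail. The arithmetic, along with the claimed scaling headroom, is sketched below.

```python
# Back-of-the-envelope check on the edge-pod figure quoted above.
# The 7.68 Tb/s number is from Facebook; the 40 Gb/s link speed used to
# decompose it is an assumption for illustration only.

EDGE_POD_TBPS = 7.68
LINK_GBPS = 40                         # assumed per-link speed

links_per_edge_pod = EDGE_POD_TBPS * 1000 / LINK_GBPS
print(f"{links_per_edge_pod:.0f} x {LINK_GBPS} Gb/s links per edge pod")   # 192 links

# The fabric launched with roughly 10x the inter-pod capacity of Facebook's
# earlier designs and, per the article, is expected to quintuple from there.
current_multiple = 10
future_multiple = current_multiple * 5
print(f"{current_multiple}x today, up to {future_multiple}x later")
```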
Photo credit: Beraldo Leal via photopin cc