UPDATED 00:37 EDT / AUGUST 24 2016

NEWS

Multiple updates led to Google Cloud outage

Google has provided an explanation for the two-hour outage affecting its Google Cloud Platform on Aug. 11, saying the error was entirely its own fault.

The incident affected Google App Engine’s APIs, which were left “unavailable” for short periods. According to Google, 18 percent of the apps hosted in its US-Central region saw error rates between 10 percent and 50 percent during the outage, another 3 percent of applications saw error rates exceeding 50 percent, and users experienced a “median latency increase of just under 0.8 seconds per request.”

In an explanation posted to its Cloud Status Dashboard, Google said that the outage occurred following a “periodic maintenance procedure in which Google engineers move App Engine applications between datacenters in US-CENTRAL in order to balance traffic more evenly.”

Google explained how it performs this kind of delicate balancing act:

“As part of this procedure, we first move a proportion of apps to a new datacenter in which capacity has already been provisioned. We then gracefully drain traffic from an equivalent proportion of servers in the downsized datacenter in order to reclaim resources. The applications running on the drained servers are automatically rescheduled onto different servers.”

All well and good, but the problem was that Google carried out this procedure at the same time as a software update on its traffic routers, which triggered a “rolling restart of the traffic routers” and temporarily “diminished the available router capacity.”

The resulting “server drain” led to the “rescheduling of multiple instances of manually-scaled applications,” Google said. App Engine immediately tried to create new instances of those manually scaled apps by sending a start request, via the traffic routers, to the server hosting each new instance. Unfortunately, because the routers were in the middle of their restart, some of those instances started up too slowly. App Engine responded by firing off repeated start requests, which caused a spike in CPU load on the routers, overloading them and causing some incoming requests to be dropped.
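What Google describes is essentially a retry storm: every slow response triggered another immediate start request, multiplying the load on the already-constrained routers. For illustration only, here is a minimal sketch of the kind of capped, jittered backoff that generally damps such surges. The start_instance call and all of its parameters are hypothetical stand-ins, not App Engine’s actual scheduling API, and Google’s own fix (described below) was to add router capacity rather than change its retry behavior.

import random
import time


class InstanceStartTimeout(Exception):
    """Raised when a (hypothetical) start request doesn't complete in time."""


def start_instance(app_id: str) -> None:
    # Hypothetical stand-in for sending a start request through the routers;
    # here it always times out, to mimic the overloaded routers.
    raise InstanceStartTimeout(app_id)


def start_with_backoff(app_id: str, max_attempts: int = 5) -> bool:
    """Retry a failed start with capped exponential backoff plus jitter,
    rather than hammering the routers with immediate retries."""
    delay = 0.5  # seconds
    for attempt in range(1, max_attempts + 1):
        try:
            start_instance(app_id)
            return True
        except InstanceStartTimeout:
            if attempt == max_attempts:
                return False
            # Sleep a randomized fraction of the current delay so retries
            # from many apps don't line up into a synchronized surge.
            time.sleep(random.uniform(0, delay))
            delay = min(delay * 2, 30.0)  # cap the backoff
    return False


if __name__ == "__main__":
    print(start_with_backoff("example-app"))  # prints False after backing off

The specific policy matters less than the damping itself: without it, every slow response turns into several more requests, which is exactly the CPU spike Google described on its routers.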

Google reckoned it had enough routing capacity to handle the load, but its routers weren’t able to absorb the unexpected surge in retry requests. As a result, requests started getting dropped and latency climbed across the region.

Google later rolled back the changes and restored its services, and has promised to try to ensure the same thing doesn’t happen again.

“In order to prevent a recurrence of this type of incident, we have added more traffic routing capacity in order to create more capacity buffer when draining servers in this region,” the company said.
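The company didn’t quantify the new buffer, but the principle is easy to sketch: before draining servers, check that the routers’ spare capacity comfortably exceeds the traffic about to be shifted, including any retry amplification. The check below is a rough, hypothetical illustration; none of its names or numbers come from Google’s postmortem.

def safe_to_drain(current_rps: float,
                  router_capacity_rps: float,
                  drain_fraction: float,
                  retry_multiplier: float = 3.0,
                  buffer_factor: float = 1.5) -> bool:
    """Hypothetical headroom check before draining servers.

    current_rps         -- requests per second the region serves today
    router_capacity_rps -- total rated capacity of the traffic routers
    drain_fraction      -- share of servers about to be drained
    retry_multiplier    -- worst-case amplification from retried requests
    buffer_factor       -- extra safety margin on top of the projection
    """
    # Traffic that will be rescheduled, and possibly retried, while it moves.
    shifted_rps = current_rps * drain_fraction * retry_multiplier
    projected_peak = current_rps + shifted_rps
    return projected_peak * buffer_factor <= router_capacity_rps


# Example: 100,000 rps served, routers rated for 400,000 rps, draining 10% of servers.
print(safe_to_drain(100_000, 400_000, 0.10))  # True: enough buffer to proceed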

Google didn’t make any promises about scheduling upgrades one at a time in future, however, so it’s not entirely clear that a similar problem won’t occur further down the line.

Image credit: Humusak via pixabay.com
