UPDATED 00:22 EDT / APRIL 13 2017

CLOUD

Google tests a collaborative approach to machine learning

Google Inc. is aiming to speed up machine learning with a new “federated learning” approach that sees training data spread across millions of individual Android devices.

The approach is different from traditional machine learning techniques, where datasets are distributed across multiple cloud servers. It enables machine learning models to be trained from actual users’ interactions with their Android devices, Google researchers Daniel Ramage and Brendan McMahan said in the company’s research blog.

The method also allows models to be trained faster and with less power consumption than traditional methods. In addition, users can benefit immediately from any improvements made to the machine learning models.

For starters, Google is testing out its federated learning approach via the Gboard keyboard for Android’s query suggestion feature. Gboard suggests queries for users as they type, and with federated learning it stores information on the device about the context in which each query was suggested and whether the suggestion was selected or ignored. The history on each device is then processed by federated learning to improve Gboard’s query suggestions. Only a small, encrypted summary of the resulting model changes, rather than the interaction data itself, is sent back to Google’s cloud servers, so the improved query suggestions can be applied to everyone.
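The on-device half of this process can be illustrated with a small sketch. This is not Google’s actual implementation; it is a toy linear model in Python, and the function name and learning rate are assumptions chosen for illustration. The key property it demonstrates is that the local interaction history is consumed on the device and only a weight delta is returned for upload:

```python
import numpy as np

def local_update(global_weights, examples, labels, lr=0.1):
    """Hypothetical on-device step: fit a tiny linear model to the
    local interaction history and return only the weight delta.
    The raw examples never leave this function (i.e., the device)."""
    w = global_weights.copy()
    for x, y in zip(examples, labels):
        pred = float(np.dot(w, x))
        w -= lr * (pred - y) * x  # one SGD step per local example
    return w - global_weights     # the small, focused update to upload
```

In a real deployment the update would be encrypted before leaving the device; only the delta, never the examples or labels, is transmitted.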

Privacy will obviously be a concern here for many users, so Google has designed its servers to process the encrypted updates only in batches drawn from several thousand devices at once. The updates are then decrypted and aggregated together before being used to improve the machine learning model in the cloud.
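The server-side aggregation step can be sketched in the spirit of federated averaging. This is a simplified Python illustration, not Google’s production code: it assumes the batch of device updates has already been decrypted, and weights each device’s contribution by how many local examples it trained on, so no single device’s update is used on its own:

```python
import numpy as np

def aggregate(updates, example_counts):
    """Hypothetical server-side step: combine a batch of decrypted
    device updates into one averaged update, weighting each device
    by the number of examples it trained on."""
    total = sum(example_counts)
    return sum(u * (n / total) for u, n in zip(updates, example_counts))
```

The averaged result is then applied to the shared model in the cloud, and the improved model is distributed back to all devices.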

“Federated Learning enables mobile phones to collaboratively learn a shared prediction model while keeping all the training data on device,” said McMahan and Ramage. It separates “the ability to do machine learning from the need to store the data in the cloud,” they added.


Google had to do a lot of work under the hood to implement federated learning. In their blog post, McMahan and Ramage explain that with traditional machine learning, the datasets are partitioned homogeneously across multiple cloud servers, while the algorithms are designed to work with high-throughput, low-latency connections.

With federated learning, however, the training data sets are spread unevenly across millions of Android devices, where connections are generally high-latency with lower bandwidth. Google also had to ensure minimal disruption for users, which means the on-device training only occurs when devices are idle, on a free wireless connection. “Applying Federated Learning requires machine learning practitioners to adopt new tools and a new way of thinking,” the two researchers said.
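The “minimal disruption” constraint above amounts to a simple eligibility gate before a device joins a training round. The sketch below is an assumption about how such a check might look, using only the two conditions the researchers describe (idle, on a free wireless connection); the `DeviceState` type and field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class DeviceState:
    is_idle: bool            # screen off, no foreground activity
    on_unmetered_wifi: bool  # free wireless connection

def should_train(d: DeviceState) -> bool:
    # Only participate in a training round when it won't
    # disturb the user or consume metered data.
    return d.is_idle and d.on_unmetered_wifi
```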

Still, they claim the benefits to users make it all worthwhile. Google is also using the approach to improve other applications, including language models and image search rankings.

Image: Piyushgiri Revagar/Flickr
