UPDATED 14:38 EST / MARCH 06 2019

AI

Google launches TensorFlow 2.0 with tools for building privacy-conscious AI

Google LLC today launched a new iteration of TensorFlow, its popular artificial intelligence framework, and a pair of complementary modules aimed at enabling algorithms to process user data more responsibly.

TensorFlow 2.0 focuses primarily on improving usability. The release brings a streamlined application programming interface built around Keras, an open-source tool designed to make AI development frameworks easier to use. It consolidates features that were previously spread across multiple APIs into a single place and provides more options for customizing the development workflow.
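To illustrate the consolidated workflow, here is a minimal sketch of how model building, training and inference all run through the single tf.keras API in TensorFlow 2.0 (the layer sizes and random data are illustrative only):

```python
import numpy as np
import tensorflow as tf

# A toy classifier: in TF 2.0, tf.keras is the one high-level API,
# covering model definition, compilation, training and prediction.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Train briefly on random data just to exercise the unified workflow.
x = np.random.rand(32, 4).astype("float32")
y = np.random.randint(0, 3, size=(32,))
model.fit(x, y, epochs=1, verbose=0)
probs = model.predict(x, verbose=0)
```

The same `model` object handles every stage, where earlier TensorFlow versions split this across several overlapping APIs.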

Another key enhancement is that eager execution is now the default mode. Rather than requiring engineers to define a computational graph and then run it in a session, TensorFlow 2.0 executes operations immediately and returns concrete values, which lets engineers try out different model variations with shorter delays between test runs. This has the potential to save a considerable amount of time given the highly iterative nature of machine learning development.
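The difference is easiest to see in a tiny example: under eager execution, an operation like a matrix multiply produces a usable result the moment it is called, with no session or graph-build step in between.

```python
import tensorflow as tf

# Eager execution is on by default in TF 2.0: ops run immediately.
x = tf.constant([[1.0, 2.0]])
w = tf.constant([[3.0], [4.0]])
y = tf.matmul(x, w)        # executes right away, no session needed
result = float(y.numpy())  # 1*3 + 2*4 = 11.0
```

In TensorFlow 1.x, the same computation would only yield a value after being wrapped in a `tf.Session` and explicitly run.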

Yet even with the significant improvements in TensorFlow 2.0, it’s the two accompanying tools Google rolled out alongside the release that have drawn the most industry attention. They’re meant to help developers build privacy controls directly into their AI software to provide better protection of user information.

The first module, TensorFlow Privacy, helps ensure that machine learning models don’t memorize potentially sensitive details they’re not supposed to retain. It achieves that by training models with differential privacy, which prevents rare inputs unlike the information an algorithm typically ingests from leaving a recognizable imprint on the finished model. An AI-based spell checking tool, for instance, would mostly take ordinary words as input, which means uncommon digit sequences such as credit card numbers wouldn’t be memorized and couldn’t later be extracted.

“To use TensorFlow Privacy, no expertise in privacy or its underlying mathematics should be required: those using standard TensorFlow mechanisms should not have to change their model architectures, training procedures, or processes,” Google engineers Carey Radebaugh and Ulfar Erlingsson detailed in a blog post.
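The core mechanism behind differentially private training can be sketched in plain Python: each training example's gradient is clipped so no single record dominates, and calibrated noise is added before the update is applied. The function below is an illustrative sketch of that idea, not TensorFlow Privacy's actual API (the names and simplified 1-D gradients are assumptions for clarity):

```python
import random

def dp_average_gradient(per_example_grads, clip_norm=1.0,
                        noise_mult=1.1, seed=None):
    """Illustrative sketch of a differentially private gradient step."""
    rng = random.Random(seed)
    # 1. Clip each example's gradient so no single record dominates.
    clipped = [max(-clip_norm, min(clip_norm, g)) for g in per_example_grads]
    # 2. Average, then add Gaussian noise calibrated to the clip norm,
    #    masking any individual example's contribution.
    avg = sum(clipped) / len(clipped)
    noise = rng.gauss(0.0, noise_mult * clip_norm / len(clipped))
    return avg + noise
```

Because every individual contribution is bounded and noised, an outlier record, such as a lone credit card number in a text corpus, cannot measurably shift the trained model.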

Google’s other new privacy module is called TensorFlow Federated. The software is aimed at the growing number of mobile services that rely on AI to support core features.

Because of mobile devices’ limited processing power, apps usually handle the learning aspect of machine learning by sending user data to a cloud-based backend for analysis. TensorFlow Federated enables apps to perform the analysis directly on the user’s handset. Developers can then collect the resulting insights and use them to improve their AI algorithms without having to access the underlying data, which increases privacy for consumers.

“With TFF [TensorFlow Federated], we can express an ML model architecture of our choice, and then train it across data provided by all writers, while keeping each writer’s data separate and local,” Alex Ingerman and Krzys Ostrowski, two of the engineers who helped develop the project, wrote in a separate post.
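The training scheme TFF builds on, often called federated averaging, can be sketched in plain Python: each device takes a gradient step on its own data, and the server averages the resulting weights without ever seeing the raw records. The 1-D linear model below is purely illustrative and not TFF's actual API:

```python
def local_update(w, data, lr=0.1):
    # One on-device gradient step for a toy linear model y ≈ w * x.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_average(w, device_datasets, rounds=100):
    for _ in range(rounds):
        # Each device computes an update on its own local data...
        local_ws = [local_update(w, d) for d in device_datasets]
        # ...and the server averages the weights, never the raw data.
        w = sum(local_ws) / len(local_ws)
    return w

# Two "devices", each holding private samples of the relation y = 2x.
devices = [[(1.0, 2.0), (2.0, 4.0)], [(1.5, 3.0), (3.0, 6.0)]]
trained_w = federated_average(0.0, devices)
```

The global model converges toward w = 2 even though no device's dataset ever leaves the device, which is the privacy property the article describes.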

Much like TensorFlow itself, the new modules are available under an open-source license.

