UPDATED 14:17 EST / OCTOBER 28 2016

NEWS

Google Brain researchers teach AI to make its own encryption

Researchers at Google Brain, Google’s deep learning project, have worked out a way to teach neural networks to create their own encryption schemes.

In a research paper by Martín Abadi and David Andersen, “Learning to Protect Communications with Adversarial Neural Cryptography,” the researchers pit three neural networks against one another: two that attempt to communicate secretly (Alice and Bob) and a third that tries to spy on that communication (Eve).

According to the paper, Alice’s job is to encrypt messages using a scheme of her own devising, while Bob’s is to learn how to decrypt them. On the other side of the divide, Eve listens in on the traffic between Alice and Bob and tries to read each message sent.

The objective of the entire process is to have Alice and Bob come up with a communication scheme that Eve cannot easily break, all without teaching Alice, Bob or Eve any particular encryption scheme.

The only thing Alice and Bob shared in advance was a predetermined cryptographic key to which Eve did not have access. From there, Alice would iterate on methods of encrypting a message and send the result to Bob. How well Bob decrypted that message, and how poorly Eve read it, set the parameters for the next attempt.
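The incentives at work can be sketched as a pair of loss functions. The sketch below is a simplified reading of the paper’s objectives, not the authors’ code, assuming errors are counted as wrong bits per 16-bit message: Eve simply wants fewer errors, while Alice and Bob want Bob error-free and Eve stuck at chance level.

```python
# A minimal sketch (not the authors' code) of the training objectives,
# assuming bit errors are counted per 16-bit message.

N_BITS = 16  # message length used in the experiment

def eve_loss(eve_bit_errors: float) -> float:
    """Eve simply minimizes how many bits she gets wrong."""
    return eve_bit_errors

def alice_bob_loss(bob_bit_errors: float, eve_bit_errors: float) -> float:
    """Alice and Bob minimize Bob's errors while pushing Eve toward
    chance level (N_BITS / 2 bits wrong)."""
    chance = N_BITS / 2
    return bob_bit_errors + (chance - eve_bit_errors) ** 2 / chance ** 2

# When Bob decodes perfectly and Eve sits at chance (8 wrong bits),
# Alice and Bob's loss bottoms out at zero.
print(alice_bob_loss(0.0, 8.0))  # 0.0
# If Eve reads everything perfectly, Alice and Bob are penalized.
print(alice_bob_loss(0.0, 0.0))  # 1.0
```

The design choice to aim Eve at chance, rather than maximize her errors, matters: an Eve who gets every bit wrong is just as informative as one who gets every bit right.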

Each message was only 16 bits long. That’s not much, but it was sufficient for the simple encryption learning the researchers wanted to demonstrate.

“Neural networks are generally not meant to be great at cryptography,” the researchers wrote. “Famously, the simplest neural networks cannot even compute XOR, which is basic to many cryptographic algorithms. Nevertheless, as we demonstrate, neural networks can learn to protect the confidentiality of their data from other neural networks: they discover forms of encryption and decryption, without being taught specific algorithms for these purposes.”
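To see why XOR matters here: XORing a message with a shared key encrypts it, and XORing the ciphertext with the same key recovers the message, the core idea behind a one-time pad. A minimal illustration (background for the quote, not something the networks were taught):

```python
# XOR as a cryptographic building block: XORing with a shared key
# encrypts, and XORing again with the same key decrypts.

def xor_encrypt(message: int, key: int) -> int:
    """Encrypt (or decrypt) a message by XORing it with a key."""
    return message ^ key

msg = 0b1010110011110000  # a 16-bit message, like those in the experiment
key = 0b0111001010101011  # a 16-bit key shared by both parties

ciphertext = xor_encrypt(msg, key)
assert ciphertext != msg                    # the message is scrambled
assert xor_encrypt(ciphertext, key) == msg  # the same operation decrypts
```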

During the experiment, the researchers found that Alice was able to devise methods for communicating secretly with Bob. Eve, however, was not easily thwarted, and the presence of the eavesdropper drove the other two to keep refining their secret messages.

At first, the AIs were not very good at sending messages to one another, but over time they improved. After 15,000 iterations, the researchers found that while Bob was able to decrypt the messages every time, Eve could only decode 8 of the 16 bits. Since each bit can only be a 1 or a 0, a random guesser would get about 8 bits right on average, so Eve was doing no better than pure chance.
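To see why 8 of 16 bits is chance level, imagine a guesser who flips a coin for every bit. A quick simulation (mine, not the researchers’) shows the average landing at about 8 correct bits per message:

```python
import random

random.seed(0)  # make the simulation repeatable

BITS, TRIALS = 16, 10_000

# For each trial, count how many bits of a random 16-bit message
# a purely random guess happens to match.
correct = sum(
    sum(random.getrandbits(1) == random.getrandbits(1) for _ in range(BITS))
    for _ in range(TRIALS)
)

print(correct / TRIALS)  # averages close to BITS / 2, i.e. about 8
```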

Human-made encryption is still far beyond the reach of the cryptography learned by the AI systems in this experiment. However, the way encryption breaking is typically done, by seeking out patterns in large sets of intercepted encrypted messages, is exactly the kind of task big data and machine learning systems are built for.

In an era when security on the Internet is built on the power of cryptography, powerful interests turning machine learning towards breaking encryption could become the next security arms race.

Image via Pixabay
