Teaching a Neural Network to Encrypt

Researchers have trained a neural network to encrypt its communications.

In their experiment, computers were able to make their own form of encryption using machine learning, without being taught specific cryptographic algorithms. The encryption was very basic, especially compared to our current human-designed systems. Even so, it is still an interesting step for neural nets, which the authors state “are generally not meant to be great at cryptography.”

This story is more about AI and neural networks than it is about cryptography. The algorithm isn’t any good, but is a perfect example of what I’ve heard called “Schneier’s Law”: Anyone can design a cipher that they themselves cannot break.

Research paper. Note that the researchers work at Google.

Posted on November 3, 2016 at 6:05 AM

Comments

Dr. I. Needtob Athe November 3, 2016 8:41 AM

Re: “Anyone can design a cipher that they themselves cannot break”:

My favorite computer game is a puzzle game called Portal 2, which includes a tool to design your own puzzles (called “tests”) and share them online.

The same overconfidence, the belief that if you can’t see something then it’s not there, applies to beginners who try their hand at designing Portal 2 tests. They typically publish a test with assurance that there really is a solution even though it seems impossible, but then they’re humbled when they’re shown that it’s actually much easier to solve than they intended.

Clive Robinson November 3, 2016 9:34 AM

@ Bruce,

You are right about the cipher not being of any real value but…

We humans have taken several thousand years to get to where we currently are with our crypto. And for at least a couple of thousand years of that we thought sticks of a set diameter for simple transposition and a shifted alphabet for simple substitution were really neat.

So the question that arises is how long before AI gets to the point of progressing beyond the simple, even the difficult and possibly beyond our abilities to comprehend?

Whilst I do not believe in the “AI Singularity” idea[1], computers with even rudimentary “Soft AI” are doing better than humans in constrained fields of endeavor already. And crypto is a constrained field of endeavor.

[1] If you are not an AI person you might be mildly amused by Linus Torvalds’ take on the singularity idea 😉

hawk November 3, 2016 11:46 AM

Dear Echo Chamber:

The problem is that it is invariably interpreted as “any system you invent is broken.”

I have proven this has nothing to do with cryptography. Instead, it is entirely about PERMISSION. If someone unfamiliar to the group presents a design, it is immediately dismissed as snake oil. But if you belong to the group, you can freely and openly discuss designs without fear of ridicule, even if the design being discussed is ridiculous.

In industry, money talks. Junk gets sold all the time, even after receiving wide-ranging accolades. If you’re not part of the elite group, your design is rejected without review. It goes like this: “We’re the experts and we have a lot of money. If anyone was going to come up with an improvement it would be US, and since you ain’t US your design is by definition junk.”

The ONLY place where this is not true is among criminal hackers.

Martin Walsh November 3, 2016 11:54 AM

Do you mean

Schneier’s Law = “Anything YOU invent is bad but anything WE invent is good” ?

Martin Walsh November 3, 2016 12:30 PM

Here’s an example of industry outcry:

http://reports.informationweek.com/abstract/21/4478/Security/security-epic-fail.html

Dozens more like it actually led to CEOs, sick of getting spun around by “Feistel networks” and “rounds” and “finite fields” they didn’t know or care about, asking if experts were secretly motivated to keep things broken.

If you go back and read what experts touted over the past five years, you will be horrified by what they maintained then. And now everything is broken; it’s too late. You can pay thousands to send someone through the SANS Institute and get them certified in something, but they will never be able to, or be permitted to, design and develop solutions. You must PAY A LOT OF MONEY TO CONSULTANTS.

Why can’t they actually fix anything? It’s because they can’t. The only thing that matters is having someone to blame it on when it breaks. But whatever you do, don’t change anything. You’re not an expert.

hawk November 3, 2016 1:18 PM

In the ’70s and ’80s the popularity of sport diving exploded. One major equipment manufacturer conducted a marketing study to learn why men entered the (dangerous) sport. Many would spend a lot of money on equipment but dive only a few times before giving it up. What they found was that as many as a quarter of new entrants did so to impress their girlfriends. Of course, ads for wetsuits would follow depicting beautiful women clinging to a man in his wetsuit.

In years to come, studies of information security industry will show that the number one attribute associated with expertise was the immediate and categorical rejection of anything different. Some evidence of this can be found in “contests” and in the awarding of grants. In the future, they will want to know why nothing got fixed.

David Leppik November 3, 2016 3:19 PM

While they used neural nets as their primary tool, the overall setup has more of a genetic algorithm feel. They trained one system to encrypt, then had a second system trained to break the encryption. Then they repeated until the second system couldn’t break it. It’s a cute experiment, but it’s more like schoolkids playing with secret codes than like modern cryptography.
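A minimal sketch of that alternating loop in PyTorch, assuming small fully connected nets and a simplified loss (the paper’s actual architecture mixes convolutional and fully connected layers and its loss terms differ, so the sizes and constants here are illustrative only):

```python
import torch
import torch.nn as nn

N = 16  # bits per plaintext and per key, encoded as +/-1 floats

def mlp(in_dim, out_dim):
    # Small fully connected net with outputs in (-1, 1).
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                         nn.Linear(64, out_dim), nn.Tanh())

alice = mlp(2 * N, N)  # (plaintext, key) -> ciphertext
bob = mlp(2 * N, N)    # (ciphertext, key) -> plaintext guess
eve = mlp(N, N)        # ciphertext alone -> plaintext guess

opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()))
opt_e = torch.optim.Adam(eve.parameters())
l1 = nn.L1Loss()

def batch(size=256):
    # Uniformly random +/-1 plaintexts and keys.
    return (torch.randint(0, 2, (size, N)).float() * 2 - 1,
            torch.randint(0, 2, (size, N)).float() * 2 - 1)

for step in range(5000):
    # Phase 1: train Alice and Bob to reconstruct the plaintext
    # while keeping Eve at chance.
    p, k = batch()
    c = alice(torch.cat([p, k], dim=1))
    bob_err = l1(bob(torch.cat([c, k], dim=1)), p)
    eve_err = l1(eve(c), p)
    # For +/-1 bits, a blind guesser's L1 error averages 1.0, so push
    # Eve's error toward that value instead of simply maximizing it.
    loss_ab = bob_err + (1.0 - eve_err) ** 2
    opt_ab.zero_grad()
    loss_ab.backward()
    opt_ab.step()

    # Phase 2: train Eve alone against the current (frozen) Alice.
    p, k = batch()
    c = alice(torch.cat([p, k], dim=1)).detach()
    loss_e = l1(eve(c), p)
    opt_e.zero_grad()
    loss_e.backward()
    opt_e.step()
```

Pushing Eve’s error toward chance rather than to its maximum mirrors the spirit of the paper’s loss for Alice and Bob: a maximally wrong Eve would itself leak information about the plaintext.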

AI does well with well-defined problems. You tell the system the rules for the problem space, you give it enough training data, and you get something out that matches those parameters.

Computer security is the opposite. The only rules that matter are the ones the attacker decides to follow; all the rest are distractions. While there’s nothing in principle to keep an AI from designing an encryption algorithm, it would have no guards against any attack it didn’t anticipate, nor would it be designed to make it easy to incorporate fixes against side-channel attacks. For that you need code that was written to be easy for humans to read, understand, and modify. Which pretty much means humans writing it for themselves.

foo November 4, 2016 1:21 AM

Let’s talk about another law I’ve often stated: “Anyone can learn to do anything; it just takes some people more time and effort than others, and some decide to be more patient than others and actually spend that required time and effort.”

When you put that together with “Anyone can design a cipher that they themselves cannot break,” you get the realization that anyone could also design a really good cipher; they just have to put in the difficult grunt work to grow out of being a novice first (and that means practice: breaking lots of easier ciphers, and thus learning what makes a good one).

All you guys who interpret this another way are just pessimistic naysayers who don’t want to learn anything. Just be quiet and go do better! The effort will teach you; if you keep going, you may become an expert too, eventually. Just please don’t sell snake oil in the meantime.

otherwise November 4, 2016 8:38 AM

You say they work at Google?

Those researchers are from the insurance industry.

The “loss functions” they talk about in their paper are straight-up actuarial mathematics.

ai_noob June 21, 2017 6:55 AM

Okay, this is pretty nice. On the other hand, I think it is no big news.

One-time-pad encryption is still used nowadays and is unbreakable if the key is truly random, used only once, and communicated securely.

This just uses neural networks to reproduce one-time-pad encoding, since the number of bits in the message and in the key is the same. Therefore, you are able to “mask” the message 1:1; no one will ever be able to decrypt it without the correct key. There is no real value in using neural networks here, in my opinion. Or am I missing some essential points?

More information on One-Time-Pad at https://de.wikipedia.org/wiki/One-Time-Pad.
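For comparison, the classical construction needs no neural network at all; a few lines of Python capture the whole idea (the sample message is arbitrary):

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))  # random key, same length as message

ciphertext = xor_bytes(message, key)     # encrypt
recovered = xor_bytes(ciphertext, key)   # decrypt with the same key
assert recovered == message
# Without the key, every plaintext of this length is equally likely,
# which is why the pad must be truly random and never reused.
```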

P.S.:
I’m currently trying to reproduce the networks from the information given, to test whether they can achieve encryption when using a key smaller than the message, e.g. key length = 1/2 message length.

Best regards

Leslie May 5, 2018 2:57 AM

Most likely not. Present-day encryption systems are designed around cryptographic random number generators; their output is meant to be statistically indistinguishable from true randomness. Machine learning is generally based on finding statistical patterns in data, and truly random data has none. Even for flawed crypto where there is some small pattern to be found, the amount of randomness in the data will overwhelm any direct attempt to decrypt the ciphertext.
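That claim is easy to check on your own machine. A small sketch, where the window size, model choice, and sample count are all arbitrary: try to predict each bit of a cryptographic RNG’s output from the bits preceding it, and held-out accuracy should sit at chance.

```python
import os
import numpy as np
from sklearn.linear_model import LogisticRegression

# 25,000 random bytes -> 200,000 random bits from the OS CSPRNG.
bits = np.unpackbits(np.frombuffer(os.urandom(25000), dtype=np.uint8))

W = 16  # predict bit i from the W bits before it
X = np.stack([bits[i:i + W] for i in range(len(bits) - W)])
y = bits[W:]

split = len(X) // 2
model = LogisticRegression(max_iter=1000).fit(X[:split], y[:split])
print("held-out accuracy:", model.score(X[split:], y[split:]))  # ~0.5
```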
