Deep Learning is an approach inspired by the workings of the human nervous system, applied to artificial intelligence systems so that they can learn to do things on their own. It has become a key technology for processing the massive amounts of data produced by Big Data, but its evolution is starting to reach milestones that are surprising, and even a little disturbing.
A team at Google Brain, the search company's deep learning project, has managed to teach its machines to create their own encryption without human intervention. In other words, the machines are learning to keep secrets under an encryption scheme whose workings we do not necessarily understand, and may not know how to decipher.
The advance has been published in a scientific paper by researchers Martin Abadi and David Andersen. In it they explain the method they followed so that their neural networks could find a way to use simple encryption techniques without having been taught any specific cryptographic algorithm.
How did they do it?
To get their artificial intelligences to accomplish this, the Google Brain team ran an experiment several times using three neural networks. They named them Alice, Bob and Eve, and each was assigned a specific role to simulate a conversation over the network.
Alice was responsible for sending messages to Bob, while Eve tried to spy on them and find out what they were saying to each other. The messages started out as plain text, and Alice's mission was to encrypt them so that outside agents like Eve could not learn what they said, even with access to them.
All this had to be done in a way that allowed Bob to reconstruct the message that reached him. To make that possible, Alice and Bob were given a set of predefined numbers, a shared key, to use when encrypting and decrypting the message. Eve did not have these numbers, so the other two had to learn to combine them with the original message in a way she could not understand.
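To make the information flow concrete, here is a minimal, hypothetical sketch of the three roles. This is not the networks from the paper: in the experiment, Alice, Bob and Eve are neural networks that learn their own transformations. Below, a simple XOR with the shared key stands in for whatever function Alice might learn, purely to illustrate why Bob, who holds the key, can recover the plaintext while Eve cannot.

```python
import random

def alice(plaintext_bits, key_bits):
    # Stand-in for Alice's learned transformation: combine message with key.
    # (The real Alice is a neural network; XOR is only an illustration.)
    return [p ^ k for p, k in zip(plaintext_bits, key_bits)]

def bob(ciphertext_bits, key_bits):
    # Bob also has the key, so he can invert Alice's combination.
    return [c ^ k for c, k in zip(ciphertext_bits, key_bits)]

def eve(ciphertext_bits):
    # Eve sees only the ciphertext; with no key, she can do no better
    # than guess each bit.
    return [random.randint(0, 1) for _ in ciphertext_bits]

random.seed(0)
plaintext = [random.randint(0, 1) for _ in range(16)]
key = [random.randint(0, 1) for _ in range(16)]

ciphertext = alice(plaintext, key)
assert bob(ciphertext, key) == plaintext  # Bob reconstructs the message
```

The asymmetry is the point: any invertible combination of message and key gives Bob an exact reconstruction, while Eve is left facing ciphertext alone.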
In the first runs of the test, Alice's encryption of the messages was quite poor, and Eve had no trouble breaking it. But after several attempts, Alice autonomously developed her own technique for encrypting the data, and Bob managed to develop another for deciphering what she said.
After 15,000 repetitions of the exercise, Alice was able to produce messages that Bob could reconstruct without Eve guessing more than 8 of the 16 bits they contained. Since each bit is either a 1 or a 0, that success rate is no better than what could be obtained by chance.
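That "no better than chance" claim is easy to check: a random guess on each of 16 independent bits is right half the time, so the expected number of correct bits per message is 8. A quick simulation (my own illustration, not taken from the paper) confirms it:

```python
import random

random.seed(42)
N_BITS = 16
TRIALS = 100_000

total_correct = 0
for _ in range(TRIALS):
    # A random 16-bit message and a blind random guess at it.
    message = [random.randint(0, 1) for _ in range(N_BITS)]
    guess = [random.randint(0, 1) for _ in range(N_BITS)]
    total_correct += sum(m == g for m, g in zip(message, guess))

avg_correct = total_correct / TRIALS
# The average converges to N_BITS / 2 = 8 bits correct per message.
print(round(avg_correct, 2))
```

So an eavesdropper who recovers only 8 of 16 bits has, in effect, learned nothing about the message.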
We do not know how they developed this method
The magical (or disturbing) part of this breakthrough in neural networks is that the researchers do not know exactly how Alice's encryption method works. Moreover, although they could see that Bob managed to decrypt the messages, they have not been able to get an easy-to-understand picture of how he does it.
And this is bad news, because it means we will not always be able to understand how the encryption created by these machines works, which is a danger when it comes to guaranteeing the security of messages. For now, this leaves the technique with very few practical applications.
Still, the Google scientists are excited about the results, and in their paper they already talk about trying different network configurations and different training procedures to better understand the process.
The biggest doomsayers might say that it is dangerous to teach machines to keep secrets. Personalities like Bill Gates, Mark Zuckerberg, Stephen Hawking and Elon Musk have long been positioning themselves for and against the development of artificial intelligence, debating the possible dangers it may involve. But stay calm, because Google is perhaps already creating a mechanism to assure us that we can turn these systems off if something goes wrong.