Clever Artificial Intelligence Hides Information to Cheat Later at Given Task


Artificial Intelligence has become so intelligent that it is learning when to hide some information which can be used later. 

Research from Stanford University and Google discovered that a machine learning agent tasked with transforming aerial images into map was hiding information in order to cheat later. 

CycleGAN is a neural network that learns to transform images. In the early results, the machine learning agent was doing well but later when it was asked to do the reverse process of reconstructing aerial photographs from street maps it showed up information which was eliminated in the first process, TechCrunch reported.

For instance, skylights on a roof that were eliminated in the process of creating a street map would reappear when the agent was asked to reverse the process.

While it is very difficult to check into the inner workings of a neural network’s processes, the research team audited the data the neural network was generating, added TechCrunch.

It was discovered that the agent didn’t really learn to make the map from the image or vice-versa. It learned how to subtly encode the features from one into the noise patterns of the other. 

Although it may seem like the classic example of a machine getting smarter, it is in fact the opposite of that. In this case, the machine is not smart enough to do the difficult job of converting image types found a way to cheat that humans are bad at detecting.

Written with inputs from ANI

Articles You May Like

We might finally know how supermassive black holes get so impossibly huge
What Kind of Liquid Lets You Run Across Its Surface? | Street Science
Math-Based Mosquito Control To Prevent Human Diseases
No, a NASA scientist did not reveal a ‘secret sign’ of the apocalypse
Mega tube under Geneva enters race to succeed CERN collider

Leave a Reply

Your email address will not be published. Required fields are marked *