Just Like People, Google’s Robots Can See Things That Aren’t There If They Look Hard Enough

[Image: inceptionism-1]

Stare at the clouds hard enough and you’ll eventually make out shapes that only you can see. It’s not that uncommon to recognize silhouettes of houses, animals, and whatever images preoccupy your mind. Turns out, robots can do the same thing. So much so, in fact, that Google engineers even coined a word for it: inceptionism.

Basically, Google engineers trained an artificial neural network to recognize specific types of objects to the point where it could take a new image and label it correctly. All fine, right? Then, they had the software alter the input image to amplify the exact object it was looking for, tweaking the pixels until that object stands out. Still good, of course. But then they decided to make it amplify objects that aren’t actually in the picture, leading to… well… interesting results.
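For the curious, here’s roughly what that amplification step looks like in code. This is a minimal sketch, assuming PyTorch and torchvision are installed; the pretrained GoogLeNet stands in for Google’s own Inception network, and the layer choice, step count, and the "clouds.jpg" filename are all illustrative assumptions, not details from Google’s post.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained GoogLeNet as a stand-in for Google's Inception network (an assumption).
model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT).eval()

# Capture a mid-level layer's activations with a forward hook; "inception4c"
# is an illustrative choice, not necessarily the layer Google used.
activations = {}
model.inception4c.register_forward_hook(
    lambda module, inp, out: activations.update(target=out))

def amplify(img, steps=20, lr=0.05):
    """Gradient ascent on the image itself: tweak the pixels so the hooked
    layer's activations get stronger, i.e. boost whatever the net 'sees'."""
    img = img.clone().requires_grad_(True)
    for _ in range(steps):
        model(img)
        activations["target"].norm().backward()
        with torch.no_grad():
            # Normalize the gradient so the step size stays stable.
            img += lr * img.grad / (img.grad.abs().mean() + 1e-8)
            img.grad.zero_()
            img.clamp_(0, 1)  # keep it a displayable image
    return img.detach()

# Usage: load a photo (hypothetical filename) and amplify it once.
photo = T.Compose([T.Resize(224), T.ToTensor()])(
    Image.open("clouds.jpg").convert("RGB")).unsqueeze(0)
dreamed = amplify(photo)
```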

[Image: inceptionism-2]

As it turns out, making a neural network look for animals and enhance them in a photo of nothing but sky and clouds will cause it to, essentially, make up animals in the photo. Seriously. Then, they ran the same look-and-enhance pass on the output from the first time around, leading the network to see even more. Yeah, you get the drift. The network over-interprets the images, seeing things that aren’t there.

[Image: inceptionism-3]

To push the concept of inceptionism further, the team decided to send their networks into an endless feedback loop: enhance an image, take the result as the new input, enhance it again, and repeat until the engineers get bored. The results are pretty astounding, with the software creating new objects out of thin air and producing final images that look like trippy artwork in hypnotizing colors.
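In code, that loop is about as simple as it sounds. Here’s a hedged sketch reusing the hypothetical amplify() function from the earlier snippet; the slight zoom between passes is a common trick for this kind of loop, assumed here rather than described in Google’s post.

```python
import torchvision.transforms.functional as TF

def dream_loop(img, rounds=10, zoom=1.05):
    """Enhance, feed the result back in as input, repeat (until you get bored)."""
    for _ in range(rounds):
        img = amplify(img)  # the amplification sketch from above
        h, w = img.shape[-2:]
        ch, cw = int(h / zoom), int(w / zoom)
        # Crop the center slightly and scale back up, so each pass zooms
        # in a little and the network keeps finding fresh detail.
        img = TF.resized_crop(img, top=(h - ch) // 2, left=(w - cw) // 2,
                              height=ch, width=cw, size=[h, w])
    return img

trippy = dream_loop(dreamed)  # the once-amplified photo, amplified over and over
```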

[Image: inceptionism-4]

Check out the original post over at Google’s blog to learn more.

[Image: inceptionism-5]

Check It Out