Google’s image recognition programs are usually trained to look for specific objects, like cars or dogs.
But in a process Google’s engineers call “inceptionism,” these artificial intelligence networks were instead fed images of landscapes and random static noise. What the engineers got back sheds light on how AI networks perceive the world, and on the possibility that computers can be creative too.
The AI networks churned out some insane images and took the engineers on a hallucinatory trip full of knights with dog heads, a tapestry of eyes, pig-snails, and pagodas in the sky.
Engineers trained the network by 'showing it millions of training examples and gradually adjusting the network parameters,' according to Google's research blog. The image below was produced by a network that was taught to look for animals.
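The "show examples, gradually adjust parameters" loop described above can be sketched in miniature. This is an illustrative toy only, not Google's actual setup: a single logistic-regression neuron trained by gradient descent on a made-up two-class dataset, where the real networks are deep convolutional models trained on millions of images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "training examples": two Gaussian blobs standing in for two classes.
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

w = np.zeros(2)   # the "network parameters" we gradually adjust
b = 0.0

def loss(w, b):
    # Cross-entropy loss of a logistic neuron on the toy dataset.
    p = 1 / (1 + np.exp(-(X @ w + b)))
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

initial_loss = loss(w, b)
for _ in range(200):                      # repeated small adjustments
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / len(y)     # gradient step on the weights
    b -= 0.1 * np.mean(p - y)
final_loss = loss(w, b)
```

After enough passes over the examples, the loss falls and the neuron separates the two blobs, which is the same "gradually adjusting" principle at a vastly smaller scale.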
Each of Google's AI networks is made of a hierarchy of layers, usually about '10 to 30 stacked layers of artificial neurons.' The first layer, called the input layer, can detect very basic features like the edges of objects. The engineers found that this layer tended to produce strokes and swirls in objects, as in the image of a pair of ibis below.
As an image progresses through each layer, the network will look for more complicated structures, until the final layer makes a decision about the objects in the image. This AI searched for animals in a photo of clouds in a blue sky and ended up creating animal hybrids.
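The feedback loop behind these hybrid images can be sketched as follows. This is a hypothetical stand-in, not Google's code: the trained "layer" is just a fixed random linear map `W`, and instead of adjusting the network we freeze it and run gradient ascent on the *image* `x` to amplify whatever the layer responds to.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 16))      # frozen "layer" weights (hypothetical)
x = rng.normal(size=16) * 0.01    # starting image (nearly blank)

def response(x):
    # How strongly the layer fires on this image: ||W x||^2 / 2.
    return 0.5 * np.sum((W @ x) ** 2)

before = response(x)
for _ in range(100):
    grad = W.T @ (W @ x)          # gradient of ||W x||^2 / 2 w.r.t. x
    x += 0.01 * grad / (np.abs(grad).mean() + 1e-8)  # normalized ascent step
after = response(x)
```

Each step nudges the pixels so the layer fires harder, so faint cloud-like structure gets exaggerated into whatever the layer was trained to detect, which is how a blue sky ends up full of animals.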
Here's the same image of a blue sky put through a network trained to search for buildings, specifically pagodas. Trippy!
These examples show how AI networks trained to recognise towers, buildings, and birds interpreted images of landscapes, trees, and leaves.
The engineers also found that the AI networks were able to generate, or 'see', objects in images of pure static noise.
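The same amplification idea explains how patterns emerge from static. In this toy sketch (a hypothetical one-detector model, not the real networks), we start from pure noise and repeatedly nudge the image toward whatever a single feature detector `w` responds to; the noise gradually organizes into the detector's preferred pattern.

```python
import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(size=64)
w /= np.linalg.norm(w)            # the detector's preferred pattern
x = rng.normal(size=64)           # static noise as the starting image

def alignment(x):
    # |cosine similarity| between the image and the preferred pattern.
    return abs(x @ w) / (np.linalg.norm(x) + 1e-12)

start = alignment(x)
for _ in range(200):
    x += 0.05 * (x @ w) * w       # ascend the detector's response (w.x)^2 / 2
end = alignment(x)
```

Each step grows only the component of the noise that already resembles the pattern, so structure the detector "wants" to see gets pulled out of randomness, much like spotting shapes in a Magic Eye poster.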
Google's engineers used this process to verify that the AI networks were learning the right features of the objects they were meant to recognise. It's hard to tell what this AI was looking for: cupcakes, flowers, or oranges.
AI networks trained to identify places and building features spat out the trippiest images, like these blue and green arches.
The engineers found that the AI tended to populate specific features with the same object. For example, horizons tended 'to get filled with towers and pagodas' and 'rocks and trees turn into buildings.'
The AI also produced strange images reminiscent of the early-90s 'Magic Eye' books -- just from static noise. Look closely and you might be able to find something here.
One AI network turned an image of a red tree into a tapestry of dogs, birds, cars, buildings and bikes.
The engineers also fed works of art to the AI networks. One network superimposed a dog on the screaming figure in 'The Scream,' a painting by Edvard Munch.
The majority of the AI networks were trained with images of animals. One AI network populated an image of a waterfall with dogs, birds, pigs and goats.
The engineers believe that 'inceptionism' may inspire artists to use AI as a 'new way to remix visual concepts.'