Google’s AI has made some really trippy images, and now you can buy some of them!
In an effort to understand how its artificial intelligence interprets the world, Google began a process it dubbed “inceptionism” in June.
The purpose of inceptionism was to see how Google’s AI neural networks carried out classification tasks so engineers could further improve the system. But a quirky result of the project was the production of images that look like a serious acid trip.
The project quickly garnered a lot of interest among programmers and artists, so Google decided to open-source its code, dubbed DeepDream, so that anyone could make their own funky images.
To celebrate this new branch of art, Google will auction more than two dozen of its computer-generated images at the Gray Area Foundation for the Arts in San Francisco on Friday.
Here’s how inceptionism works and a look at some of the images that will be available at the auction.
Inceptionism can work one of two ways. The first way is to feed Google’s neural network an image and ask it to look for something specific.
Google’s own breakdown of the process shows how this works: engineers fed the neural network an image of a tree. Because that network was trained to look for buildings, it spat back a squat, green building.
But the second, perhaps more fun, way to use inceptionism is to feed the neural network an image and let it decide what it sees.
Google fed a neural network an image of a sky, and it saw birds!
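The two modes differ only in what gets amplified as the image is nudged, step by step, toward whatever the network responds to. Here is a rough sketch of that idea in NumPy — a toy linear “network,” not Google’s actual DeepDream code — where targeted mode climbs the score of one chosen class, while free-running mode amplifies whatever the network already sees most strongly:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a trained classifier: maps a 6-"pixel" image to 3 class scores.
# (Real DeepDream uses a deep convolutional network; this only shows the idea.)
W = rng.standard_normal((3, 6)) * 0.5

def scores(x):
    return W @ x

x = rng.standard_normal(6) * 0.1  # the starting "image"

# Mode 1 (targeted): nudge the image so one chosen class's score climbs.
# For this linear toy, the gradient of scores(x)[target] w.r.t. x is just W[target].
target = 0
x_targeted = x.copy()
for _ in range(20):
    x_targeted += 0.1 * W[target]

# Mode 2 (free-running): amplify whatever the network already responds to,
# by ascending on the overall response 0.5 * ||scores(x)||^2.
x_free = x.copy()
for _ in range(20):
    x_free += 0.1 * (W.T @ scores(x_free))

print(scores(x)[target], scores(x_targeted)[target])  # targeted class score rises
```

In the real system the same loop runs on image pixels through a deep convolutional network, with the gradient computed by backpropagation — the tree-into-building example is the targeted mode, while the sky-into-birds example is the free-running one.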
When taking that second approach, what the neural network produces depends on how many layers the image goes through. Google’s networks are made up of 10 to 30 stacked layers. The first layers look for low-level features, such as the edges or corners of an image.
Images that go through the first layer will tend to come back with some added swirls or strokes, but look more or less like the original image.
The intermediate layers look for basic shapes or components, such as leaves. The image below (and the rest of the images from this point on) is up for auction.
The more neural network layers the original image goes through, the more trippy the end result will be.
But it’s when an image goes through the final layers that the output gets really weird. These layers look for complex objects, such as entire buildings.
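The effect of layer depth can be sketched the same way. Below is a hedged toy in NumPy — two random ReLU “layers” standing in for Google’s 10-to-30-layer networks — where the dream loop does gradient ascent on the response of a chosen deep layer, so the features being amplified are compositions of the earlier ones:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy layers: fixed random weights plus ReLU, standing in for a trained net.
W1 = rng.standard_normal((16, 8)) * 0.5   # early layer: "edges and corners"
W2 = rng.standard_normal((8, 16)) * 0.5   # deeper layer: "shapes built from them"

def forward(x):
    a1 = np.maximum(W1 @ x, 0.0)  # early-layer activations (ReLU)
    z2 = W2 @ a1                  # deeper-layer response
    return a1, z2

def dream_step(x, lr=0.1):
    """One gradient-ascent step on 0.5 * ||deep response||^2, via hand-rolled backprop."""
    a1, z2 = forward(x)
    g1 = (W2.T @ z2) * (a1 > 0)   # backpropagate through the ReLU
    return x + lr * (W1.T @ g1)

x = rng.standard_normal(8) * 0.1  # the starting "image" (could also be pure noise)
before = np.linalg.norm(forward(x)[1])
for _ in range(50):
    x = dream_step(x)
after = np.linalg.norm(forward(x)[1])
print(after > before)  # the deep layer's response has been amplified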
The neural network produced a building that is a blend of green and blue arches.
This image is pretty incredible. Google fed its AI network an image of random noise, and this is what it spat out.
Here’s another image that was spat out after Google gave its AI an image of random noise.
And here’s another! This one came back filled with a sea of blue pagodas.
After processing all 30 layers, the final images can be super wacky.
Or fairly tame.
Memo Akten, an artist from Istanbul, is putting this image up for auction. It was originally a satellite view of the Government Communications Headquarters in the UK before going through Google’s artificial neural networks.
This one is titled “Saxophone Dreams” and was made by sculptor and Google employee Mike Tyka.
This kind of artwork is only going to become more common. The University of London is now offering a course on Machine Learning and Art, and NYU is offering something similar.
In fact, the Tate Modern’s IK Prize 2016 topic is Artificial Intelligence.
All proceeds collected from the art auctioned will go to the Gray Area Foundation.
That foundation “has been active in supporting the intersection between arts and technology for over 10 years,” Google wrote on its research blog.
In total, there will be 29 pieces put up for auction.
Google declined to comment on how much it expects each piece to sell for.
Happy art hunting!
This article originally appeared on Tech Insider. Read the original here.