This Is What It Looks Like When a Computer "Hallucinates"

Artificial neural networks—systems of interconnected artificial neurons that loosely mimic the structure of the human brain—are usually used for tasks like facial recognition. Feed a neural network enough photos of your own face, and soon enough, it will learn your dimples, your chin, the distance between your eyes—and be able to recognize those features the next time it sees you, just like a toddler would. But as a handful of researchers have shown recently, there’s no reason neural networks can’t also approximate the brain’s weirder and more creative processes—processes like dreaming.
Below, watch a computer dream in real time. Just zone out for a while—we’ll explain what’s happening in a second.
Earlier this month, researchers at Google showed what happened when they turned one of their image-recognition neural networks inside out. When you show Google Images a picture of a house in a reverse image search, it can tell you it’s a house—so why not ask the network to show you what it thinks a house looks like?
The results were swirling and vaguely creepy mosaics that showcased the strengths and the pitfalls of the technology. The Google neural network’s idea of a starfish looks a lot like an actual starfish, but when visualizing a dumbbell, it accidentally incorporated a flexed human arm as well. (Probably because many of the internet’s dumbbell pictures also show the musclemen who wield them.) Taking the idea a step further, the researchers had their neural network look at a preexisting image, then amplify any patterns it recognized in that image—in other words, the computer was finding pictures in the clouds.

Image via Google Research
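
If you're curious what "turning the network inside out" actually involves, here is a minimal sketch of the idea in Python, using the PyTorch library and a pretrained ImageNet classifier. The particular layer, step size, and step count below are illustrative assumptions on my part, not the Google team's actual settings; the gist is that gradient ascent nudges the pixels of an image so that whatever patterns an intermediate layer already responds to get stronger and stronger.

```python
# Sketch of the "amplify whatever you see" trick, assuming PyTorch and
# a pretrained GoogLeNet; layer choice and hyperparameters are illustrative.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.googlenet(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)  # we only want gradients on the image itself

# Capture the activations of an intermediate layer with a forward hook.
activations = {}
model.inception4c.register_forward_hook(
    lambda module, inputs, output: activations.update(target=output)
)

def dream(image_path, steps=20, lr=0.05):
    img = Image.open(image_path).convert("RGB")
    x = T.Compose([T.Resize(224), T.CenterCrop(224), T.ToTensor()])(img)
    x = x.unsqueeze(0).requires_grad_(True)

    for _ in range(steps):
        model(x)
        # Gradient ascent: make the layer's activations stronger, which
        # amplifies whatever patterns the network "sees" in the picture.
        loss = activations["target"].norm()
        loss.backward()
        with torch.no_grad():
            x += lr * x.grad / (x.grad.abs().mean() + 1e-8)
            x.clamp_(0, 1)
            x.grad.zero_()
    return x.detach()
```

To "hallucinate" a specific term rather than amplify existing patterns, you maximize the network's output score for that class instead of an intermediate layer's activations, which is presumably what the Twitch stream described below is doing with viewers' suggestions.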
Inspired by the team at Google, a group of Belgian PhD students led by Jonas Degrave built the interactive visualization that’s embedded at the top of this post—which they call an “LSD neural network,” or an “AI that hallucinates.” On the livestreaming platform Twitch, users can submit suggestions for “hallucinations” in a chat room. Every 45 seconds, the neural network picks a suggestion at random, then starts “hallucinating” that suggestion, just as Google’s network hallucinated those mutant animals in the clouds.
I spent way longer than I should have watching the thing work this morning. Many of the most vivid examples came from animal names, which I’ve screenshotted below.
Tree frog

Hammerhead shark

White wolf

The network has a list of about 1000 terms that it’s familiar with, meaning my attempts to get it to generate art in the style of Mondrian and Keith Haring were for naught. But many of the seemingly ordinary objects in its vocabulary produced super trippy tableaux as well.
Lipstick

Schooner

Chocolate sauce

Jigsaw puzzle

Matchstick

Castle

Ten-gallon hat

Even the neural network’s failures were interesting. When one viewer asked for pizza, it generated some pepperoni-esque discs and a bunch of gross little gobbly mouths—just like Google’s dumbbell and arm. (I didn’t get a screenshot of that one.) When I asked it to hallucinate a website, it generated this formless array—maybe because websites don’t have any defining physical characteristics like edges and shapes.
Website

If you’re interested in participating, head here. You’ll need a Twitch account.
Screenshots via Twitch. Contact the author at andy@gawker.com.