Cloud gazing seems like it should be a uniquely human activity: who else would stare up at the sky, turning wisps of clouds into shapes and faces? But now a robot can do that, too. What's happening there? Is the robot…imagining?
This robot is named Cloud Face, and it's a face-detection algorithm that scans pictures of clouds and flags the ones that look like human faces. The project comes from the art duo Shinseungback Kimyonghun, who describe it this way:
‘Cloud Face’ is a collection of cloud images that are recognized as human face by a face-detection algorithm. It is a result of computer’s vision error, but they look like faces to human eyes, too. This work attempts to examine the relation between computer vision and human vision.
The duo hit on the idea mostly by accident. Kim Yong Hun told Fast Company that the whole thing started with a webcam that was supposed to capture human faces:
“One day, I hooked a webcam and a snack bag, and cast the fishing rod out to the window of my studio,” the studio’s Kim Yong Hun explains. “I expected that it would capture faces of passersby when they look at the bait. After a few hours later, it actually got some faces of people staring at it. However, there were also many images that were not faces. That was because the face-detection algorithm often found patterns of building walls and streets as faces.”
Fast Company pitches the project as “an imaginative robot.” Mark Wilson writes:
Looking through the images almost kicks you in the gut. Because it’s one thing if Facebook can auto-tag my friend’s faces on my uploaded photographs, but it’s a whole other thing if some snippet of code can lay beside me on a grassy knoll, point to the sky, and make a convincing argument as to why a bit of puffy condensation resembles a dude on a train eating a donut.
But is this robot really “imaginative”? Can robots imagine?
It depends on how you define imagination. In one paper, computer scientists discuss building a robot with “functional imagination,” which they describe as “the purposeful manipulation of information that is not directly available to the senses – references to imagination always point to something that in reality is not there.” Other researchers are teaching robots to imagine what humans might want: in this case, how humans might want to arrange furniture in a room. Ashutosh Saxena at Cornell is trying to figure out how to get robots to put themselves in humans’ shoes. IEEE Spectrum explains:
Essentially, what Saxena’s group is doing is teaching robots to use their imaginations by placing virtual humans in the environment that they want to organize, and then figuring out what those virtual humans are likely to do.
So Cloud Face isn’t the only project in which computers conjure fantasy images and scenarios. And there’s another project quite like it, called Google Face. Created by Onformative, Google Face scours Google Earth for terrain that looks like faces.
This phenomenon, the tendency to see faces in blobs (like the face on the moon), is called “pareidolia.” To humans, the world is full of faces: in clouds, earth, grilled cheeses and oil slicks. We see them everywhere. And now, apparently, we’ve taught robots to see them, too.