
Building a robot that humans can love is pretty ambitious. But Javier Movellan (in his San Diego lab with RUBI) says he would like to develop a robot that loves humans. (Timothy Archibald)

Robot Babies

Can scientists build a machine that learns as it goes and plays well with others?

"We have learned a lot from this little baby," Movellan said, giving the robot an affectionate pat on its square cheek.

For the past several years he has embedded RUBI at a university preschool to study how the toddlers respond. Various versions of RUBI (some of them autonomous and others puppeteered by humans) have performed different tasks. One taught vocabulary words. Another accompanied the class on nature walks. (That model was not a success; with its big wheels and powerful motors, RUBI swelled to an intimidating 300 pounds. The kids were wary, and Movellan was, too.)

The project has had its triumphs—the kids improved their vocabularies playing word games displayed on RUBI's stomach screen—but there have been setbacks. The children destroyed a fancy robotic arm that had taken Movellan and his students three months to build, and RUBI's face detector consistently confused Thomas the Tank Engine with a person. Programming in incremental fixes for these problems proved frustrating for the scientists. "To survive in a social environment, to sustain interaction with people, you can't possibly have everything preprogrammed," Movellan says.

Those magic moments when a machine seems to share in our reality can sometimes be achieved by brute computing force. For instance, Einstein's smile-detection system, a version of which is also used in some cameras, was shown tens of thousands of photographs of faces that had been marked "smiling" or "not smiling." After cataloging those images and discerning a pattern, Einstein's computer can "see" whether you are smiling, and to what degree. When its voice software is cued to compliment your pretty smile or ask why you look sad, you might feel a spark of unexpected emotion.
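Under the hood, such a detector is just a pattern classifier fit to labeled examples. Here is a minimal sketch in Python, assuming a hypothetical stack of grayscale face crops and their "smiling"/"not smiling" labels; it is a generic illustration, not Einstein's actual software:

```python
# Supervised learning in miniature: fit a classifier to face images that
# humans have already labeled, then score new faces. The dataset and the
# raw-pixel features are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_smile_detector(faces, labels):
    # faces: (n_images, height, width) grayscale crops; labels: 1 = smiling
    X = faces.reshape(len(faces), -1) / 255.0   # flatten pixels into features
    model = LogisticRegression(max_iter=1000)
    model.fit(X, labels)                        # learn the labeled pattern
    return model

def smile_degree(model, face):
    # A probability between 0 and 1: whether, and how confidently,
    # the system "sees" a smile in a new face
    return model.predict_proba(face.reshape(1, -1) / 255.0)[0, 1]
```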

But this laborious analysis of spoon-fed data—called "supervised learning"—is nothing like the way human babies actually learn. "When you're little nobody points out ten thousand faces and says 'This is happy, this is not happy, this is the left eye, this is the right eye,'" said Nicholas Butko, a PhD student in Movellan's group. (As an undergraduate, he was sentenced to labeling a seemingly infinite number of photographs for a computer face-recognition system.) Yet babies are somehow able to glean what a human face is, what a smile signifies and that a certain pattern of light and shadow is Mommy.

To show me how the Project One robot might learn like an infant, Butko introduced me to Bev, actually BEV, as in Baby's Eye View. I had seen Bev slumped on a shelf above Butko's desk without realizing that the Toys 'R' Us-bought baby doll was a primitive robot. Then I noticed the camera planted in the middle of Bev's forehead, like a third eye, and the microphone and speaker under its purple T-shirt, which read, "Have Fun."

In one experiment, the robot was programmed to monitor noise in a room that people periodically entered. They'd been taught to interact with the robot, which was tethered to a laptop. Every now and then, Bev emitted a babylike cry. Whenever someone made a sound in response, the robot's camera snapped a picture; occasionally it also snapped one when its cry went unanswered, whether or not a person was in the room. The robot processed those images and quickly discerned that some pictures—usually those taken when it heard a response—included objects (faces and bodies) not present in other pictures. Although the robot had previously been given no information about human beings (not even that such things existed), it learned within six minutes how to tell when someone was in the room. In a remarkably short time, Bev had "discovered" people.
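The logic of that session fits in a few lines. In this sketch the robot interface is a hypothetical stand-in (cry, listen and snap are invented names), and a simple classifier plays the role of Bev's learning machinery:

```python
# Bev's discovery loop, sketched: the labels come from the robot's own
# ears, not from a human teacher. The robot object and its methods are
# hypothetical stand-ins for the doll's hardware.
import numpy as np
from sklearn.linear_model import LogisticRegression

def discovery_session(robot, n_trials=60):
    images, heard = [], []
    for _ in range(n_trials):
        robot.cry()                            # emit the babylike cry
        answered = robot.listen(seconds=2.0)   # did anything respond?
        picture = robot.snap()                 # photograph the scene
        images.append(picture.ravel() / 255.0)
        heard.append(1 if answered else 0)
    # Pictures that follow a response tend to contain the "extra" objects
    # (faces, bodies). Fitting a classifier to the robot's own labels
    # yields a person detector no one explicitly taught it.
    model = LogisticRegression(max_iter=1000)
    model.fit(np.array(images), heard)
    return model   # model.predict(...) ~ "is someone in the room?"
```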

A similar process of "unsupervised learning" is at the heart of Project One. But Project One's robot will be much more physically sophisticated than Bev—it will be able to move its limbs, train its cameras on "interesting" stimuli and receive readings from sensors throughout its body—which will enable it to borrow more behavior strategies from real infants, such as how to communicate with a caregiver. For example, Project One researchers plan to study human babies playing peekaboo and other games with their mothers in a lab. Millisecond by millisecond, the researchers will analyze the babies' movements and reactions. This data will be used to develop theories and eventually programs to engineer similar behaviors in the robot.

It's even harder than it sounds; playing peekaboo requires a relatively nuanced understanding of "others." "We know it's a hell of a problem," says Movellan. "This is the kind of intelligence we're absolutely baffled by. What's amazing is that infants effortlessly solve it." In children, such learning is mediated by the countless connections that brain cells, or neurons, form with one another. In the Project One robot and others, the software itself is formulated to mimic "neural networks" like those in the brain, and the theory is that the robot will be able to learn new things virtually on its own.
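A neural network, in the sense meant here, is layers of simple units joined by adjustable connection weights; learning means nudging those weights in response to experience. The toy below, in plain NumPy, is a generic illustration rather than Project One's software; it adjusts its connections a little each time it is shown an example:

```python
# A tiny two-layer neural network: the software analogue of neurons
# connected by synapses that strengthen or weaken with experience.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.1, (16, 8))   # input-to-hidden connections
W2 = rng.normal(0.0, 0.1, (8, 1))    # hidden-to-output connections

def forward(x):
    h = np.tanh(x @ W1)                    # hidden units respond
    y = 1.0 / (1.0 + np.exp(-(h @ W2)))    # output unit, between 0 and 1
    return h, y

def learn(x, target, lr=0.1):
    # One step of experience: measure the error, then adjust every
    # connection slightly so the same input produces a better answer.
    global W1, W2
    h, y = forward(x)
    err = y - target                       # how wrong was the output?
    dh = (err @ W2.T) * (1.0 - h**2)       # share the blame backward
    W2 -= lr * np.outer(h, err)
    W1 -= lr * np.outer(x, dh)
```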

The robot baby will be able to touch, grab and shake objects, and the researchers hope that it will be able to "discover" as many as 100 different objects that infants might encounter, from toys to caregivers' hands, and figure out how to manipulate them. The subtleties are numerous; it will need to recognize that, say, a red rattle and a red bottle are different things and that a red rattle and a blue rattle are essentially the same. The researchers also want the robot to learn to crawl and ultimately walk.
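One way to picture the rattle-and-bottle problem: if the robot comes to describe objects by shape rather than color, simple grouping does the right thing. The feature values below are invented for illustration:

```python
# Grouping objects by (made-up) shape features: the two rattles land
# together and the bottle lands apart, regardless of color.
import numpy as np
from sklearn.cluster import KMeans

objects = {
    "red rattle":  [0.3, 0.2, 0.9],   # [elongation, heft, roundness]
    "blue rattle": [0.3, 0.2, 0.9],   # same shape, different color
    "red bottle":  [0.8, 0.5, 0.3],
}
X = np.array(list(objects.values()))
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for name, g in zip(objects, groups):
    print(f"{name} -> group {g}")
```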

About Abigail Tucker

A frequent contributor to Smithsonian, Abigail Tucker is writing a book about the house cat.
