Robots Get the Human Touch

Robots can do a lot of things. But now they’re taking on the biggest challenge of all: figuring out how humans work

Honda's Asimo robot

I’ve always thought the Tin Man got stiffed.

At the end of The Wizard of Oz, when the wizard rewards Dorothy and her friends for turning the Wicked Witch of the West into a puddle, he hands the Cowardly Lion an epic medal and the Scarecrow a diploma—which today may not seem like much more than a license to embrace debt, but back in the day was a big deal.

And what did he give the Tin Man? A ticking heart trinket that looked like something he picked up at the Oz Walmart.

With robots we’re still struggling with the heart thing. Some can do remarkable physical feats, such as sprint. Others have been programmed to teach themselves how to control their own bodies.

But when it comes to expressing feelings, or even reading our feelings, robots are pretty clueless. Not that they’re not trying. On Tuesday, Honda trotted out an upgraded version of Asimo, the hobbit-sized robot who became a YouTube star a few years ago when he conducted the Detroit Symphony through “The Impossible Dream.” The new Asimo is reportedly a lot smoother, runs faster and can pour you a drink. At the demo, it was also able to distinguish the voices of three people speaking at once, using face recognition and sound analysis to figure out that one woman wanted hot coffee, another orange juice, and still another tea.

Then there are the robots at the University of Tsukuba in Japan that have learned to distinguish between a human smile and a frown and then adapt their behavior to what they think they’ve seen. They apparently get it right 97 percent of the time.

From the opposite perspective, scientists in Munich have created something called Mask-bot, which uses 3-D rear projection to create amazingly human-looking robot faces.

Learning to live with humans

The field is called social robotics, and it remains a tricky business. The goal is to get robots to understand us, all our quirks and little nuances, and to get us to trust them. And yet, if they seem too human, people can find them weird and unsettling. Scientists in social robotics often say they’re always learning about what it means to be human and to live with humans. For instance, researchers found that people like robots more if they don’t blurt out information right away: adding just a one-second delay before a robot responded made it more likeable.

Scientists at Keio University in Japan have gone a step further. They’ve turned robots into avatars of sorts (although they call them “telexistence robots”). Humans wearing a 3-D head-mounted display can see, hear and feel what a robot does, and operate it remotely with their own movements.

Cynthia Breazeal, who designed one of the first sociable robots, a talking head named Kismet, at MIT in the 1990s, thinks the big challenge is simply making us comfortable living with robots. “It really struck me when we sent a robot to Mars,” she told the BBC. “I thought, ‘We’ve sent robots to the depths of the oceans. We’ve sent robots into volcanoes. Now we’ve sent robots to Mars. But they’re not in our homes. Why aren’t they in our homes? Because the human environment is so much more complex.’

“We need to understand how robots are going to interact with people and how people are going to react to robots. And you have to design robots with that in mind.”

Model behavior

Here are more ways robots are evolving:

Bonus Video: See how a robot learns to fold a towel by watching humans. It’s not nearly as boring as it sounds.

Today’s question:  Was there any time today when you could have used a robot?
