When Robots Get Morals

The rapid development of artificial intelligence is bringing us closer to the day when machines will be able to think for themselves

How long before robots show a full range of emotions? Image courtesy of Flickr user Solo

It’s been a humbling year since the total beatdown of two former Jeopardy champions on national TV by a supercomputer named Watson. Sure, the machine gave an occasional lame answer, but in the land of game shows, we were a conquered species.

Last weekend we had our revenge.

At the American Crossword Puzzle Tournament in Brooklyn, a computer program named Dr. Fill went up against a roomful of puzzle masters, and this time it was the machine that proved fallible. It finished 141st among 600 contestants, disappointing its inventor, Matthew Ginsberg, who had expected it to land in the top 50.

Our glory, however, will likely be fleeting. Ginsberg, an expert in both artificial intelligence and crossword construction, said Dr. Fill simply had a bad day–largely because it wasn’t prepared for one puzzle in which some words had to be spelled backwards and another in which a few had to be arranged diagonally. It still thinks too logically. But Ginsberg promises to be back, and the next Dr. Fill will be wired wiser.

There’s little question, in fact, that the pace of complex and nuanced thinking by machines will only accelerate in the coming decade. Listen to Judea Pearl, one of the pioneers of artificial intelligence, who was interviewed last week after winning the A.M. Turing Award, considered the Nobel Prize of computing.

“I think there will be computers that acquire free will, that can understand and create jokes… There will be computers that can send jokes to the New York Times that will be publishable.”

Pearl, now 75, is still at it. He’s working on what he calls “the calculus of counterfactuals”: sentences based on something that didn’t happen. The goal is to give machines the knowledge to reason through hypothetical situations, such as “What would have happened if John McCain had been elected president?” That, he contends, is a big step toward computers gaining autonomy and, one day, developing a kind of morality.

“This allows them to communicate to themselves, to take responsibility for one’s actions, a kind of moral sense of behavior,” Pearl said. “These are issues that are interesting–we could build a society of robots that are able to communicate with the notion of morals.”
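The counterfactual reasoning Pearl describes is usually broken into three steps: abduction (infer the unobserved background facts from what actually happened), action (intervene to change the variable of interest), and prediction (recompute the outcome with the background facts held fixed). Here is a minimal sketch of that recipe using a toy model; the election-themed variables and equations are invented for illustration, not taken from Pearl’s work.

```python
# Toy structural causal model illustrating counterfactual reasoning.
# All variable names and structural equations here are illustrative
# assumptions, not Pearl's actual examples.

def model(campaign_strength, noise):
    # Structural equation: votes depend on campaign strength plus
    # unobserved background factors, lumped together as "noise".
    votes = 40 + 10 * campaign_strength + noise
    won = votes > 50
    return votes, won

# Observed world: a weak campaign (strength 0) received 45 votes and lost.
observed_votes = 45

# Step 1 - abduction: infer the noise consistent with the observation.
noise = observed_votes - (40 + 10 * 0)   # noise = 5

# Step 2 - action: intervene, setting campaign strength to 1 instead.
# Step 3 - prediction: recompute the outcome under the same noise.
cf_votes, cf_won = model(1, noise)
print(cf_votes, cf_won)  # 55 True
```

The key point, which the third step captures, is that a counterfactual is not a fresh prediction: the background circumstances inferred from the actual world are carried over unchanged into the hypothetical one.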

From the brains of babes

Sounds like a brainy new world, but the key is to teach robots to think in more sophisticated ways–and that doesn’t mean like adult humans. Computers already handle task-focused, goal-oriented work pretty well. What they really need is to think like babies.

A growing number of AI researchers believe that. As Alison Gopnik, a scientist at the University of California, Berkeley, put it, “Young children are the greatest learning machines in the world.” Not only do they learn language, but they figure out causal relationships, notice patterns and adapt to a world in which, at first, nothing makes sense.

The big challenge, obviously, is to figure out how babies do those things, break the process down into motivations and reactions, and then program it. Only then will machines be able to make connections without being told.

But that may be the toughest puzzle of all to solve. And, sadly, even all those smart babies can’t explain it.

Learning curves

Here’s the latest on what’s happening with artificial intelligence:

  • Brad must be so jealous: It needed help with the graphics and sound, but an artificial intelligence program named Angelina has created its own video game from scratch. Says Michael Cook, the London computer scientist who created Angelina: “In theory, there’s nothing to stop an artist from sitting down with Angelina, creating a game every 12 hours, and feeding it into the Apple apps store.”
  • Motion slickness: A team of researchers at MIT is developing a system in which drones use 3D vision to read human body signals so the robot planes can land on aircraft carriers.
  • This is a movie waiting to happen: If all this talk about smart robots is making you nervous, University of Louisville computer scientist Roman Yampolskiy is already way ahead of you. He’s advocating the creation of “virtual prisons” to contain AI if it gets too smart. And even with that, he worries that particularly clever artificial intelligence programs will be able to “attack human psyches, bribe, blackmail and brainwash those who come in contact with it.”
  • Buried past: A Harvard-MIT team has combined artificial intelligence and satellite photos to identify thousands of places where ancient humans may have lived in settlements.
  • Watson makes nice: We’ve come full circle. IBM and the Memorial Sloan-Kettering Cancer Center in New York announced that they will use the supercomputer Watson’s ability to mine massive amounts of data and research to help doctors weigh cancer diagnoses and treatment options.

Video bonus: Okay, sometimes AI can feel a little creepy. Here’s a clip on Bina 48, the talking head that’s the face of LifeNaut, a project where people have started uploading digital files about themselves (videos, pictures, audio recordings), with the goal of creating a digital clone that can live forever.
