Can Machines Learn Morality?

The debate over drones stirs up questions about whether robots can learn ethical behavior. Will they be able to make moral decisions?

Can drones be taught the rules of war? Photo courtesy of the Department of Defense

When John Brennan, President Obama’s choice to be the next head of the CIA, appeared before a Senate committee yesterday, one question supplanted all others at his confirmation hearing:

How are the decisions made to send killer drones after suspected terrorists?

The how and, for that matter, the why of ordering specific drone strikes remain largely a mystery, but at least one thing is clear: the decisions are being made by humans who, one would hope, wrestle with the thought of sending a deadly missile into an occupied building.

But what if humans weren’t involved? What if one day life-or-death decisions were left up to machines equipped with loads of data, but also a sense of right and wrong?

Moral quandary

That’s not so far-fetched. It’s not going to happen any time soon, but there’s no question that as machines become more intelligent and more autonomous, a pivotal part of their transformation will be the ability to learn morality.

In fact, that may not be so far away. Gary Marcus, writing recently in The New Yorker, presented the scenario of one of Google’s driverless cars being forced to make a split-second decision: “Your car is speeding along a bridge at 50 miles per hour when an errant school bus carrying 40 innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all 40 kids at risk? If the decision must be made in milliseconds, the computer will have to make the call.”

And what about robotic weapons or soldiers? Would a drone be able to learn not to fire on a house if it knew innocent civilians were also inside? Could machines be taught to follow the international rules of war?

Ronald Arkin, a computer science professor and robotics expert at Georgia Tech, certainly thinks so. He’s been developing software, referred to as an “ethical governor,” which would make machines capable of deciding when it’s appropriate to fire and when it’s not.
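
Arkin has described the governor as a constraint check that sits between a weapon system’s targeting logic and its trigger, suppressing any action that violates a coded rule. The toy Python sketch below illustrates only that veto idea; the field names, constraints and examples are invented for this article, not taken from Arkin’s software.

    # Hypothetical, greatly simplified sketch of an "ethical governor":
    # a rule layer that can veto a proposed strike before it is executed.
    # All field names and constraints here are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class ProposedStrike:
        target_is_combatant: bool   # target positively identified as a combatant
        civilians_in_range: bool    # noncombatants believed to be in the blast radius
        protected_site: bool        # hospital, school or other protected structure

    def ethical_governor(strike: ProposedStrike) -> bool:
        """Return True only if no coded constraint forbids the strike."""
        constraints = [
            strike.target_is_combatant,    # rules of engagement: a valid target is required
            not strike.civilians_in_range, # distinction: no known civilians at risk
            not strike.protected_site,     # protected sites are off limits
        ]
        return all(constraints)

    # A strike that fails any constraint is suppressed rather than carried out.
    print(ethical_governor(ProposedStrike(True, False, False)))  # True  -> permitted
    print(ethical_governor(ProposedStrike(True, True, False)))   # False -> vetoed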

Arkin acknowledges that this could still be decades away, but he believes that robots might one day be both physically and ethically superior to human soldiers, not vulnerable to the emotional trauma of combat or desires for revenge. He doesn’t envision an all-robot army, but one in which machines serve with humans, doing high-risk jobs full of stressful snap decisions, such as clearing buildings.

Beware of killer robots

But others feel it’s time to squash this type of thinking before it goes too far. Late last year, Human Rights Watch and Harvard Law School’s Human Rights Clinic issued a report, “Losing Humanity: The Case Against Killer Robots,” which, true to its title, called on governments to ban all autonomous weapons because they would “increase the risk of death or injury to civilians during armed conflict.”

At about the same time, a group of Cambridge University professors announced plans to launch what they call the Centre for the Study of Existential Risk. When it opens later this year, it will push for serious scientific research into what could happen if and when machines get smarter than us.

The danger, says Huw Price, one of the Centre’s co-founders, is that one day we could be dealing with “machines that are not malicious, but machines whose interests don’t include us.”

The art of deception

Shades of Skynet, the rogue artificial intelligence system that spawned a cyborg Arnold Schwarzenegger in The Terminator movies. Maybe this will always be the stuff of science fiction.

But consider other research Ronald Arkin is now doing as part of projects funded by the Department of Defense. He and colleagues have been studying how animals deceive one another, with the goal of teaching robots the art of deception.

For instance, they’ve been working on programming robots so that they can, if necessary, feign strength as animals often do. And they’ve been looking at teaching machines to mimic the behavior of creatures like the eastern gray squirrel. Squirrels hide their nuts from other animals, and when other squirrels or predators appear, the gray squirrels will sometimes visit places where they used to hide nuts to throw their competitors off the track. Robots programmed to follow a similar strategy have been able to confuse and slow down competitors.
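
The squirrel-inspired tactic reduces to a simple rule: while a competitor is watching, spend some visits on empty decoy sites instead of the real cache, so an observer cannot reliably learn where the cache is. A toy Python sketch of that rule follows; the locations and probabilities are invented for illustration and are not the Georgia Tech code.

    import random

    # Toy sketch of the squirrel-style decoy behavior described above:
    # when a competitor is observing, the robot sometimes patrols empty
    # decoy locations instead of its real cache. Purely illustrative.
    REAL_CACHE = (4, 7)
    DECOY_SITES = [(1, 2), (9, 3), (6, 8)]

    def next_patrol_site(competitor_watching: bool, decoy_rate: float = 0.6):
        """Pick the next site to visit, favoring decoys while being observed."""
        if competitor_watching and random.random() < decoy_rate:
            return random.choice(DECOY_SITES)  # mislead the observer
        return REAL_CACHE                      # otherwise tend the real cache

    # An observer tallying these visits sees decoys often enough to be misled.
    print([next_patrol_site(competitor_watching=True) for _ in range(10)])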

It’s all in the interest, says Arkin, of developing machines that won’t be a threat to humans, but rather an asset, particularly in the ugly chaos of war. The key is to start focusing now on setting guidelines for appropriate robot behavior.

“When you start opening that Pandora’s box, what should be done with this new capability?” he said in a recent interview. “I believe that there is a potential for non-combatant casualties to be lessened by these intelligent robots, but we do have to be very careful about how they’re used and not just release them into the battlefield without appropriate concern.”

If New Yorker writer Gary Marcus is right, ethically advanced machines offer great potential beyond the battlefield.

“The thought that haunts me the most is that human ethics themselves are only a work-in-progress. We still confront situations for which we don’t have well-developed codes (e.g., in the case of assisted suicide) and need not look far into the past to find cases where our own codes were dubious, or worse (e.g., laws that permitted slavery and segregation).

“What we really want are machines that can go a step further, endowed not only with the soundest codes of ethics that our best contemporary philosophers can devise, but also with the possibility of machines making their own moral progress, bringing them past our own limited early-twenty-first century idea of morality.”

Machines march on

Here are more recent robot developments:

  • Hmmmm, ethical and sneaky: Researchers in Australia have developed a robot that can sneak around by moving only when there’s enough background noise to cover up its sound.
  • What’s that buzzing sound?: British soldiers in Afghanistan have started using surveillance drones that can fit in the palms of their hands. Called the Black Hornet Nano, the little robot is only four inches long, but has a spy camera and can fly for 30 minutes on a full charge.
  • Scratching the surface: NASA is developing a robot called RASSOR that weighs only 100 pounds, but will be able to mine minerals on the moon and other planets. It can move around on rough terrain and even over boulders by propping itself up on its arms.
  • Ah, lust: And here’s an early Valentine’s Day story. Scientists at the University of Tokyo used a male moth to drive a robot. Actually, they used its mating movements to direct the device toward an object scented with female moth pheromones.

Video bonus: So you’re just not sure you could operate a 13-foot-tall robot? No problem. Here’s a nifty demo that shows you how easy it can be. A happy model even shows you how to operate the “Smile Shot” feature. You smile, it fires BBs. How hard is that?
