Can Killer Robots Learn to Follow the Rules of War?

Researchers have set out to learn whether military machines can be programmed to behave morally, and, if so, whether they should have the authority to kill on their own

Military robots are being built with plenty of firepower. But should they be trusted to kill? Courtesy of iRobot.

As Memorial Day reminds us every year, war doesn’t go away. 

But it does change. And one of the more profound shifts we'll see in coming years is to a military increasingly dependent on robots. Drones get most of the attention now, but more and more of the Defense Department's innovations are other kinds of unmanned machines, from experimental aircraft to robotic soldiers on the ground.

It’s easy to understand the attraction. Using robots is potentially more efficient, more precise and less expensive than relying solely on humans in warfare. And it would also, of course, mean fewer human casualties.

But this transformation brings with it a complex challenge: Can military machines be programmed to make decisions?  Which leads to a stickier question: Can robots learn morality?

The U.S. Office of Naval Research thinks now is the time to find out. Earlier this month, it announced a five-year, $7.5 million grant to fund research at Tufts, Brown, Yale and Georgetown universities and Rensselaer Polytechnic Institute (RPI) on whether machines might one day be able to choose right from wrong.

The Navy wants to avoid the situation Google’s driverless car now faces: having a technology that's released far ahead of any clarity on the legal and ethical issues it raises.  Before autonomous robots go out into the field, the military wants to know if they can actually learn to do the right thing.   

As Selmer Bringsjord, head of RPI's Cognitive Science Department, sees it, a robot's artificial intelligence could be designed to function at two levels of morality. The first would be based on a checklist of clear ethical choices, such as "if you come upon a wounded soldier, you should help him or her." But what if that action conflicts with the robot's primary mission, such as delivering ammunition critically needed by other soldiers? At that point, says Bringsjord, the robot would need the capability to engage in "deeper moral reasoning" to make decisions in situations its programmers might not have been able to anticipate.

Researchers would need to develop unique algorithms and computational mechanisms that could be integrated into the existing architecture of autonomous robots, a "thinking process" that would allow a machine to override planned behavior based on its ability to apply moral reasoning. 
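To picture what that two-level idea might look like, here is a minimal, purely hypothetical sketch in Python. The Situation fields, the single checklist rule and the stand-in deep_moral_reasoning heuristic are illustrative assumptions, not code from the funded research; the whole difficulty, of course, is that the second level cannot really be reduced to a one-line heuristic.

```python
# Hypothetical sketch of a two-level moral architecture, loosely based on
# Bringsjord's description. All names, fields and rules are illustrative only.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Situation:
    wounded_soldier_present: bool
    mission: str              # e.g. "deliver_ammunition"
    mission_is_critical: bool


def checklist_action(s: Situation) -> Optional[str]:
    """Level 1: clear-cut ethical rules, applied as a simple checklist."""
    if s.wounded_soldier_present:
        return "render_aid"
    return None


def deep_moral_reasoning(s: Situation, rule_action: str) -> str:
    """Level 2: invoked only when a rule conflicts with the planned mission.

    In a real system this would be the genuinely hard part; here it is just
    a placeholder that weighs the rule against mission criticality.
    """
    return s.mission if s.mission_is_critical else rule_action


def decide(s: Situation) -> str:
    rule_action = checklist_action(s)
    if rule_action is None:
        return s.mission          # no ethical rule triggered
    if rule_action == s.mission:
        return rule_action        # rule and planned behavior agree
    # The checklist conflicts with the planned behavior: escalate to level 2.
    return deep_moral_reasoning(s, rule_action)


if __name__ == "__main__":
    # A robot carrying critically needed ammunition comes upon a wounded soldier.
    print(decide(Situation(True, "deliver_ammunition", mission_is_critical=True)))
```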

If this seems extraordinarily ambitious, well, it is. To begin with, the researchers will need to isolate the basic elements of human morality—on which principles do we all agree?—and then figure out how to incorporate them into algorithms that would give robots some level of moral competence.

That's no small undertaking. For that reason, it's likely that machines not controlled by humans will, for the foreseeable future, be limited to non-combat roles, such as surveillance, security, search and rescue or medical care. But inevitably, military planners will want an answer to the question that hangs over all of this:  Should robots, acting on their own, ever be allowed to kill a human?  

If a machine is 90 percent sure that everyone in a vehicle it intends to shoot is a terrorist, is that sure enough? Would a robot be able to fire a weapon at a child if it determines the child is an enemy? If something goes wrong and an autonomous robot mistakenly blows up a village, who's responsible? The commander who ordered the operation? The person who programmed the robot? Or no one?

If you think this is still the stuff of science fiction, consider that earlier this month, experts on subjects ranging from artificial intelligence to human rights to international law weighed in on “killer robots” at a United Nations conference in Geneva. Groups like Human Rights Watch and officials from a handful of countries, including Pakistan, Egypt, Cuba and The Vatican, called for an outright ban on robots with the authority to kill.  But most countries, particularly the ones with the most advanced robots, aren’t ready to go that far. 

Take a look, for instance, at the WildCat, a four-legged robot being developed to run at high speeds on all types of terrain.  

Video: "Wildcat: the new fast-running military killer robot that runs faster than you" (MilitarySkynet.com)

For now, the U.S. military follows a 2012 Defense Department directive that no machine with the power to kill can be fully autonomous. A human has to, literally, call the shots. But that's not necessarily the case everywhere: In March, New Scientist quoted a Russian official saying that robot sentries at five ballistic missile installations will be able to detect and destroy targets without a human giving the go-ahead.

The Foster-Miller Talon, seen below, is one of the many machines around the globe being developed for combat. 

 

Foster-Miller TALON

Rules of War

A report on the killer robot conference will be presented to the U.N.'s Certain Conventional Weapons committee in November. But it doesn't appear that a ban is coming any time soon.

Among those joining the discussion in Geneva was Ronald Arkin, an artificial intelligence expert from Georgia Tech, who has long been an advocate for giving machines the ability to make moral decisions.  He believes that if robots can learn to follow international law, they could actually behave more ethically than humans in warfare because they would be unfailingly consistent. They wouldn’t be able to respond in anger or with panic or prejudice. So Arkin opposes a ban, though he is open to a moratorium on autonomous weapons until machines are given a chance to see if they can master the rules of war.

Another AI scientist, Noel Sharkey, who debated Arkin in Geneva, is of a very different mind.  He doesn’t think robots should ever be given the authority to make life or death decisions.

Recently he told Defense One, "I don't think they will ever end up with a moral or ethical robot. For that we need to have moral agency. For that we need to understand others and know what it means to suffer."

“A robot may be installed with some rules of ethics, but it won’t really care," he says.
