How Close Are We to Creating a Real-Life Chappie?

Despite the potential danger, some scientists believe it’s only a matter of time before autonomous sentient robots walk among us

The robot Chappie (Columbia Pictures)

In the not-too-distant future, Johannesburg, rotten with crime, becomes the first city ever to deploy a fleet of autonomous robot police droids. At first, the machines seem like an effective solution. Crime drastically dips and the project is hailed as a success—until something goes wrong. Rap-rave gangsters looking for a quick payday hijack a damaged droid scheduled for demolition. With the help of an engineer coerced at gunpoint, they reprogram the droid, named Chappie, as an autonomous agent, effectively creating the singularity, the point at which artificial intelligence surpasses human intelligence.

Like most good science fiction, director Neill Blomkamp’s new film, Chappie, acts as a commentary on human nature and current problems, including poverty, crime, discrimination, bullying and police brutality. But it also raises pressing questions about what many think—for better or for worse—will be the inevitable emergence of sentient artificial intelligence.

“In the past, across the board, everything humans have conceived of, regardless of ethics and morals, has been tried out and, if possible, done,” says Wolfgang Fink, a physicist at the University of Arizona and the California Institute of Technology. “Autonomous systems will emerge if someone figures out how to create them—that’s a given.”

Indeed, numerous scientists, including Fink, are now feverishly pursuing this line of research, and progress has already been made. Aside from Chappie itself, much of the robotics depicted in the film is already available or very close to it. Remotely operated robots similar to the movie’s Moose—the hulking death machine reminiscent of RoboCop’s ED-209 that is operated by Hugh Jackman’s deranged policeman—exist today.

Likewise, robots like the Chappie police droids—ones that are programmed as rule-based systems and are artificially intelligent, but that lack self-awareness or autonomy—are nearly ready, although their battery life and agility do not yet match the models shown patrolling the streets of Johannesburg. If and when such machines are deployed, though, it might not be such a radical thing for us to accept. “We’re very good at habituating and getting used to changes in the environment, including technological ones,” says Ali Mattu, a clinical psychologist at Columbia University Medical Center and creator of the science fiction psychology blog Brain Knows Better. “As robots become part of our daily life, I think in some ways it might feel seamless.”

A police droid, however, does not an autonomous sentient being make. A truly self-governing, self-aware being like Chappie would be a departure from anything ever seen before. “In the time since Neanderthals, we haven’t ever really had the potential to work collaboratively with a whole new species that is intelligent,” Mattu says. “If we can overcome barriers to sharing empathy with an artificial life form, then this could lead to an amazing age for humanity.”

Hardware is not the obstacle preventing such a being from emerging—that’s simply an engineering task, Fink says. Instead, creating the software—the ghost in the machine—is the real challenge. Researchers are taking two different approaches toward this problem. Some are trying to create a ready-to-load sentient being from scratch, while others think that writing a basic program equipped with the tools it needs to learn, adapt and modify itself through experience—as seen in Chappie—is the way to go. As Fink explains: “It’s a case of either already baking the pie and putting the pie into the system, or giving the system the ingredients for the pie and leaving it up to the system to bake it.”
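To make that pie metaphor concrete, here is a deliberately toy sketch in Python. Every name in it is hypothetical, invented purely for illustration, and these few lines stand in for systems vastly more complex than this: one agent ships with its behavior fully baked in, while the other starts nearly empty and gradually learns which actions pay off from the rewards its experience provides.

```python
import random

# Toy illustration of the two approaches described above. This is nowhere
# near real AI; it only makes the "baked pie vs. ingredients" contrast
# concrete. All names here are hypothetical.

class PreBakedAgent:
    """The already-baked pie: behavior is fully specified up front."""
    RULES = {"threat": "retreat", "stranger": "greet", "obstacle": "go_around"}

    def act(self, situation):
        # No learning: unknown situations fall back to a fixed default.
        return self.RULES.get(situation, "wait")

class LearningAgent:
    """The ingredients for the pie: starts with no rules and learns
    the value of each action from the rewards experience provides."""

    def __init__(self, actions=("retreat", "greet", "go_around", "wait")):
        self.actions = actions
        self.values = {}  # (situation, action) -> estimated value

    def act(self, situation, explore=0.1):
        if random.random() < explore:
            return random.choice(self.actions)  # occasionally try something new
        # Otherwise pick the action with the highest learned value so far.
        return max(self.actions, key=lambda a: self.values.get((situation, a), 0.0))

    def learn(self, situation, action, reward, rate=0.5):
        # Nudge the stored estimate toward the reward actually received.
        old = self.values.get((situation, action), 0.0)
        self.values[(situation, action)] = old + rate * (reward - old)
```

The point of the contrast: the first agent can never do anything its programmers did not anticipate, while the second agent’s eventual behavior depends on the particular history of rewards it happens to encounter, which is one reason the final form of a learning system is so hard to predict.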

When the breakthrough does come, most likely it will happen not incrementally but suddenly, as depicted in Chappie. Fink predicts that—rather than a university, government or corporation taking the credit—the disruption that leads to the singularity will be delivered by a small team of researchers or even a single individual, likely backed by a wealthy private funder and spurred by the freedom to experiment and break free of conservative mainstream research. Indeed, that is nearly the scenario in the film: Chappie is created not by a massive corporation but by a self-driven engineer, played by Dev Patel, who uses the resources provided by his cushy day job to support his own vastly different work on autonomous artificial intelligence by night.

Once truly self-governing artificial intelligence emerges, however, it will be impossible to anticipate how complex the system will become, or in what direction it will evolve. Unlike organic systems such as ourselves, it would not be constrained by the slow slog of biological evolution; its development would be explosive. There is no guarantee that such a system would adopt or retain a set of moral or ethical values—or that those values would extend to humans. In the case of Chappie—who develops much like a human child, learning and maturing as time passes—an early promise not to harm humans goes a long way in keeping the robot’s actions in check. But when threatened with annihilation, Chappie, like many humans, largely sets aside its morals and acts out of sheer self-interest, defending itself even if that means hurting others.

And while Chappie comes across as relatable and human-like, a real-world sentient robot quite possibly would not abide by human-like reasoning, diminishing our ability to anticipate its actions or understand its motivations. “Once you reach that level, you’ve essentially lost control over the system,” Fink says. “It’s both exciting and scary, because it won’t be human-like.”

Additionally, whether the software initially inhabits the shell of a Chappie-like humanoid robot, a spaceship or even an implant in your body, once it escapes the confines of that physical form—as depicted (albeit cringe-worthily) in Transcendence—it would be nearly impossible to contain.

While science fiction books and films have mulled over the question of artificial intelligence for decades, Fink points out that the necessary breakthrough “could literally happen any time now.” Not everyone is comfortable with this. Last January, PayPal co-founder and Tesla Motors CEO Elon Musk donated $10 million to the Future of Life Institute to fund research aimed at preventing robot overlords from someday taking over the planet and eliminating us. By attempting to create autonomous artificial intelligence, he warns, we are “summoning the demon.”

Musk is not alone in his trepidation about this line of research. In a recent Reddit “Ask Me Anything” thread, Microsoft co-founder and philanthropist Bill Gates wrote that he is “in the camp that is concerned about super intelligence” and “doesn’t understand why some people are not concerned” about man-made artificial beings that exceed our own cognitive abilities and gain autonomy. Physicist Stephen Hawking shares that concern and articulates it even more bluntly: “The development of full artificial intelligence could spell the end of the human race,” he told the BBC.

Even blockbuster comic book movies are treading this well-worn path, once reserved for the science fiction genre. As hinted in the latest trailer for Marvel’s Avengers: Age of Ultron, the film’s heroes battle a robot that was originally created to save the planet from war but instead decides to exterminate humanity.

On the flipside, however, humans could be the aggressors, threatening the existence of the autonomous beings we ourselves create, as depicted in Chappie. Whether we humans pursue a peaceful path largely depends on two things, Mattu says: whether we can communicate with the robots, and whether we have empathy for them. The latter, he explains, depends on seemingly shallow factors such as what the robots look like—Do they have eyes? Do they look similar, but not too similar, to us?—and whether they come across as generally likable. Sharing the same goals as us would also help their case. These factors tap into our innate neurological tendency to categorize others as either part of our trusted in-group or part of a potentially threatening out-group. Roboticists are taking such psychological factors into account in designing their latest machines.

Mattu points out, however, that even if the robots do satisfy all of these prerequisites, there’s still no guarantee that things will go well. “Humans have a hard enough time seeing each other as human, let alone AI or alien life,” he says. “We also have a history of first contacts going very poorly.”

So it could be that we destroy our creation before we even get to know it, or that the reverse comes true—that our software offspring makes slaves of us all, as seen in The Matrix, or decides, Skynet-style, that humanity simply isn’t worthy of existence. On the other hand, humans and autonomous robots could embrace one another, agreeing to work together on interesting pursuits like space exploration. The only way to find out whether things end in flames and tears or in progress and friendship, however, is to create those beings in the first place. And if the past is any indicator, that will only be a matter of time. “We’re always driven by curiosity, by a desire to explore and to discover the unexpected,” Fink says. “And scientific ethics tend to lag behind our progress.”

 “Sometimes we get atomic power, and sometimes we get atomic weapons,” Mattu adds. “We don’t know what direction it will take, but we can’t stop science.”  
