Half a century ago, 2001: A Space Odyssey imagined a future fueled by high-tech computers that thought, learned and adapted. Central to this vision was HAL (Heuristically programmed ALgorithmic computer) 9000, the “sentient” computer that ran the crew’s ship, Discovery One. In the film, HAL stood in as mission control center, life support and sixth member of the crew, making an ambitious Jupiter mission possible for the ship’s five astronauts.
Today, as we look toward sending the first humans to Mars, the idea of HAL is shimmering once more at the forefront of researchers’ minds. Roughly 15 years from now, NASA plans to put the first humans in orbit around the red planet, which will mean traveling farther from Earth than ever before. Unlike moon-goers, these astronauts won’t be able to rely on ground control for a quick fix. If something goes wrong, they’ll be up to 40 minutes away from getting a reply from Earth.
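That delay figure follows from simple light-travel arithmetic. As a back-of-the-envelope sketch (the distances here are rounded textbook values, not figures from the article), the round-trip signal time depends on how far apart Earth and Mars happen to be in their orbits:

```python
# Round-trip radio delay between Earth and Mars.
# Mars ranges from roughly 54.6 million km at closest approach
# to roughly 401 million km when the planets are on opposite
# sides of the Sun; radio waves travel at the speed of light.

SPEED_OF_LIGHT_KM_S = 299_792.458  # km per second

def round_trip_delay_minutes(distance_km: float) -> float:
    """Time for a signal to go out and a reply to come back, in minutes."""
    return 2 * distance_km / SPEED_OF_LIGHT_KM_S / 60

closest = round_trip_delay_minutes(54.6e6)
farthest = round_trip_delay_minutes(401e6)
print(f"Round trip at closest approach: {closest:.1f} min")   # about 6 min
print(f"Round trip at farthest point:   {farthest:.1f} min")  # about 45 min
```

So even in the best case, a cry for help takes several minutes to reach Earth and several more for advice to come back; at the worst geometry, the exchange stretches past 40 minutes.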
"'Houston, we have a problem' is not really a great option, because the response is too slow," as Ellen Stofan, former NASA chief scientist, put it last month at a summit on deep space travel hosted by The Atlantic. "I keep saying, we need a nice HAL."
When it hit theatre screens in 1968, 2001 swiftly became an iconic thought-experiment on the future of humanity in space. Praised for its innovative vision and attention to scientific detail, the film was hailed in WIRED magazine as “a carefully wrought prediction for the future.”
HAL, by extension, became an important cultural reference for anybody thinking about artificial intelligence and the future of computers. It can speak, listen, read faces and (importantly) lips, interpret emotions, and play chess. In 2015, WIRED referred to it as a "proto-Siri." The crew depends on it for everything—which becomes a problem when, 80 million miles from Earth, HAL begins to behave erratically.
That's because 2001's HAL wasn't nice. As the main antagonist of the film, it ended up turning on the crew in an attempt to “save” the mission.
Still, "many scientists are themselves a part of HAL's legacy," wrote David Stork, now a computer scientist at the technology company Rambus, in his 1996 book HAL's Legacy. For the book, Stork interviewed some of those scientists on the occasion of HAL's "birthday"—the date it first became operational in the timeline of the 2001 novelization.
"You can't help but be inspired," says Jeremy Frank, a computer scientist who is leading development on AI and other automated technology for future human NASA missions, of 2001 and other sci-fi depictions of AI. He agrees with Stofan that AI will be vitally important for human deep space missions. "We're absolutely going to have to have something."
What that something will be isn’t clear yet, Frank says. A real-life HAL might be expected to monitor life-support systems at all times to avoid any disasters, manage power generation, perform basic autopilot navigation, keep an eye on sensors for any errors and more. But whatever it entails, this AI will help free astronauts from the day-to-day details so they can keep their focus on the mission and the science.
"The immense role for AI is to enable the humans to stay out of the trenches," says Steve Chien, leader of the artificial intelligence group at NASA's Jet Propulsion Laboratory that helps rovers and probes choose which data to send back to Earth, and even select objects and areas to study on their own. For AI, this means taking over many of the more mundane maintenance and operations tasks of the spacecraft (and potentially a Mars base) to allow human astronauts to focus on more abstract tasks like scientific experiments.
"That's a much more effective way of doing science," says Chien, whose team helped develop AI technology that's been used for the Curiosity rover on Mars. "We don't want the astronaut spending all their time making sure the life support system works."
But asking an AI system to perform all those tasks is no small feat, Frank warns. Even during normal operations, real-life HAL would have to manage many independent systems, some of which are complex to operate on their own. For AI to respond to various situations, its creators would have to anticipate and map out all of those situations. "It just takes a huge amount of time and energy to even describe the problem," says Frank.
"There are going to be many complicated things, from temperature and pressure, to food and navigation," says Stork of the challenges an AI would face on every minute of a space mission. In past space missions, these challenges have been handled by ground-based computers, diligent astronauts and even NASA staff with slide rules.
"You need extremely sophisticated computer systems," Frank says. "We're past the days of going to the Moon with the sort of computing power that's in my iPhone."
Anything used on a space mission has to be hauled out to space and work in the tight quarters of a spacecraft, Frank says, not to mention be able to run on a limited source of power, usually from a small nuclear generator. In short, the more sophisticated a space mission's AI, the more computing power it will require. Despite how far technology has come, Frank points out, "software has mass."
Integrating all of that software will be one of the biggest challenges of creating a spacecraft AI computer, Frank says. Simply bolting together separate computer systems, each focused on a different aspect of the mission, won't work; the result could be something like a boat crewed by rowers all pulling out of sync.
"Those tools were never built to be integrated with each other," Frank says, "never mind on a spacecraft that was built to run on limited computing."
In 2001, the problem isn’t HAL’s ability to process and perform his designated tasks. Rather, when the astronauts try to disable some of HAL’s processing functions, he sets out to kill the humans to preserve himself. The concern that such a powerful computer could go rogue might sound like strictly the province of sci-fi. But in fact, it’s no small challenge in researchers’ minds.
"That question exists in every system that we build," Chien says. "As we build more and more complex systems, it becomes harder and harder for us to understand how they will interact in a complex environment."
It's next to impossible to know how a complex artificial intelligence actually arrives at its conclusions. In fact, many computer scientists still describe the way machines learn as a "black box." Artificial neural networks are loosely modeled on the human brain. “Unfortunately, such networks are also as opaque as the brain,” writes Davide Castelvecchi for Nature. “Instead of storing what they have learned in a neat block of digital memory, they diffuse the information in a way that is exceedingly difficult to decipher.”
This makes it difficult to program in fail-safes, Chien says, because it's impossible to imagine how a learning, growing, adapting AI will react to every single situation.
Frank believes it will come down to properly programming both the computers and the astronauts working with them. "You have to just consider the AI as just another part of the system, and sometimes your system lies to you," Frank says. In 2001, HAL announces himself “foolproof and incapable of error”—but even today’s computers aren’t infallible. People working with an AI computer should know not to trust it reflexively, but to treat it like any normal computer that could occasionally get things wrong.
Now, 50 years since the release of 2001: A Space Odyssey, how close is HAL's legacy to Stofan's vision for deep space travel?
"We have it in little bits and pieces now," says Stork. Some of our advancements are remarkable—for example, a form of AI sits in many of our pockets with voice-recognition technology like Siri that we can talk to conversationally. There’s AlphaGo, the AI computer that beat a human champion of the intricate strategy game Go. AI computers have even written literature. But these efforts all took specially tailored machines and years of work to complete these singular tasks.
"AI is doing a lot of incredible things in a lot of focused tasks, but getting AI to be as strategic as a smart human?" Chien says. "That is the challenge of tomorrow."
This prospect is made more challenging by the fact that NASA, unlike Silicon Valley, tends to be averse to the risks of trying new technology, Chien says. When it comes to spaceflight, he adds, this is understandable. "A million things have to go right for it to work," Chien says. "Just a few things have to go wrong for it to not work."
For Frank, it seems extraordinarily difficult to imagine an AI computer ever doing what HAL could: replacing all of the functions of the people working in NASA's ground control center, which is staffed with at least six people, 24 hours a day, seven days a week. "But the good news is that we don't think you actually need to replace them all," Frank says. For a mission to Mars, he points out, astronauts would still be able to rely on regular, though not instantaneous, contact with Earth.
In reality, AI will be even more crucial for missions beyond Mars, where human astronauts aren’t part of the picture, says Chien. He and other scientists meet regularly to speculate about these kinds of far-out futures, for instance: How would you send a probe to explore the deep seas of Europa, where no radio contact with Earth is possible? What about sending an automated spacecraft to an entirely different solar system?
"NASA wants to go and do things in places where you can't send people," Chien says. "These are just crazy ideas—that would really require AI."