The ‘Godfather of A.I.’ Now Warns of Its Dangers

Geoffrey Hinton quit Google this week to speak his mind on artificial intelligence, which he says may soon grow smarter than—and even manipulate—humans

Computer scientist Geoffrey Hinton in 2015. AP Photo / Noah Berger

Artificial intelligence pioneer Geoffrey Hinton announced on Monday that he was leaving his part-time job at Google so that he could speak more freely about his concerns over the rapidly developing technology.

Hinton’s work on neural networks—the method that teaches A.I. to process data in a way similar to the human brain—underpins how modern chatbots like ChatGPT and Google Bard function. But now, he partly regrets making this advancement, writes Cade Metz for the New York Times, which first reported the story.

“I’m just a scientist who suddenly realized that these things are getting smarter than us,” Hinton tells CNN’s Jake Tapper. “I want to sort of blow the whistle and say we should worry seriously about how we stop these things getting control over us.”

Hinton has been working on neural networks since he was a graduate student at the University of Edinburgh in the 1970s. At the time, “few researchers believed in the idea,” per the Times. Even Hinton’s Ph.D. advisor had his doubts. “We met once a week,” Hinton told the Times in 2019. “Sometimes it ended in a shouting match, sometimes not.”

But Hinton persisted with his work. In the 1980s, he and his colleagues proposed a technique called backpropagation, an algorithm for training neural networks, reports Will Douglas Heaven for MIT Technology Review. In 2012, Hinton had a big breakthrough: he and two of his students created a neural network that could analyze photos and teach itself to identify objects. The next year, Google bought the team’s neural network startup, and Hinton went on to split his time between the tech giant and the University of Toronto. He and two other A.I. pioneers won the 2018 Turing Award, often called the “Nobel Prize of computing,” and Hinton is now known as the “Godfather of A.I.”

In a tweet, Hinton clarified he left Google not to criticize it, but to discuss the dangers of A.I. without it reflecting on the company, which he writes “has acted very responsibly” with the technology.

For years, Hinton believed neural networks were inferior to the human brain, but he tells the Times he has recently started thinking differently after seeing how far A.I. has come in the last five years alone. He once thought it would take up to 50 years for the technology to outsmart humans, but he now tentatively predicts it will take just five to 20.

“I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they’re very close to it now and they will be much more intelligent than us in the future,” he tells MIT Technology Review. “How do we survive that?” 

Hinton’s immediate concern is that the internet will soon be flooded with fake text, pictures and videos that ordinary people won’t be able to distinguish from reality. Eventually, he says, the technology could be used by humans to sway public opinion in wars or elections. And by learning how humans manipulate one another, he tells CNN, A.I. could sidestep its restrictions and begin “manipulating people to do what it wants.”

When asked by CBS News in March what the chances were that the technology could wipe out humanity, he responded, “It’s not inconceivable. That’s all I’ll say.”

Around that time, about 1,400 tech leaders, including Elon Musk, signed an open letter calling for companies to pause work on A.I. systems more powerful than GPT-4 for at least six months, citing “profound risks to society and humanity.” The letter stated that A.I. developers should work with policymakers to “dramatically accelerate development of robust A.I. governance systems” that would oversee use of the technology.

Yann LeCun, Meta’s chief A.I. scientist, who won the 2018 Turing Award alongside Hinton, did not sign the letter, and he doesn’t share Hinton’s bleak assessment of the technology.

“I believe that intelligent machines will usher in a new renaissance for humanity, a new era of enlightenment,” LeCun tells MIT Technology Review. “I completely disagree with the idea that machines will dominate humans simply because they are smarter, let alone destroy humans.”

Despite his concerns, Hinton tells CNN he did not sign the letter because he doesn’t believe the United States should stop its progress when other nations, such as China, would continue with it. 

“It’s not clear to me that we can solve this problem,” Hinton tells the publication. “I believe we should put a big effort into thinking about ways to solve the problem. I don’t have a solution at present.”
