Man or Computer? Can You Tell the Difference?

Could you be fooled by a computer pretending to be human? Probably


It’s not every day you have to persuade a panel of scientists that you’re human. But this was the position I found myself in at the Loebner Prize competition, an annual Turing test, in which artificial intelligence programs attempt to pass themselves off as people.

The British mathematician Alan Turing probed one of computing’s biggest theoretical questions: Could machines possess a mind? If so, how would we know? In 1950, he proposed an experiment: If judges holding typed conversations with a person and a computer program couldn’t tell them apart, we would have to concede that the machine was “thinking.” He predicted that by the year 2000, programs would be able to fool judges 30 percent of the time after a few minutes of conversation.

They came closest at the 2008 Loebner Prize competition when the top chatbot (as a human-mimicking program is called) fooled 3 of 12 judges, or 25 percent. I took part in the next year’s test while doing research for a book on how artificial intelligence is reshaping our ideas about human intelligence.

The curious thing is that Turing’s test has become part of daily life. When I get an e-mail message from a friend gushing about pharmaceutical discounts, my response isn’t: No, thanks. It’s: Hey, you need to change your password. Computer-generated spam has changed not only the way I read e-mails, but also the way I write them. “Check out this link” no longer suffices. I must prove it’s me. Personalization has always been a part of social grace, but now it is a part of online security. Even experts sometimes get fooled. Psychologist Robert Epstein—the co-founder of the Loebner Prize competition—was duped for four months by a chatbot he met online. “I certainly should have known better,” he wrote in an essay about the encounter.

Chatbots betray themselves in many ways, some subtle. They’re unlikely to gracefully interrupt or be interrupted. Their responses, often cobbled together out of fragments of stored conversations, make sense at a local level but lack long-term coherence. A bot I once chatted with claimed at one point to be “happily married” and at another “still looking for love.”
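To make that failure mode concrete, here is a minimal, purely illustrative sketch (not any actual contestant’s code) of a fragment-retrieval chatbot. The stored fragments and the reply function are hypothetical; the point is only that each reply is chosen by matching the current message alone, with no memory of earlier turns, so individual answers sound plausible while the conversation as a whole can contradict itself.

```python
import re

# Hypothetical corpus of stored conversational fragments.
FRAGMENTS = [
    "I'm happily married, actually.",
    "I'm still looking for love, to be honest.",
    "The weather here has been lovely all week.",
    "I work from home most days.",
]

def tokens(text: str) -> set:
    """Lowercase a string and break it into a set of word tokens."""
    return set(re.findall(r"[a-z']+", text.lower()))

def reply(message: str) -> str:
    """Return the stored fragment sharing the most words with the current message.

    No conversation state is kept, so nothing ties this answer to the last one.
    """
    words = tokens(message)
    return max(FRAGMENTS, key=lambda fragment: len(words & tokens(fragment)))

if __name__ == "__main__":
    # Two questions on the same topic can pull contradictory fragments.
    print(reply("Are you married?"))            # -> "I'm happily married, actually."
    print(reply("Are you looking for love?"))   # -> "I'm still looking for love, to be honest."
```

Each reply is locally sensible, yet the bot claims to be married in one breath and single in the next, exactly the kind of slip a human judge can catch.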

At the Loebner Prize, I laced my replies with personal details and emphasized style as much as content. I’m proud that none of the judges mistook me for a computer. In fact, I was named the “Most Human Human” (which became the title of my book), the person the judges had the least trouble identifying as such. With the Turing test moving from the realm of theory into the fabric of daily life, the larger question—What does it mean to act human?—has never been more urgent.
