How Machines Are Getting Better at Making Conversation

Digital assistants are developing personalities, with some help from poets and writers

Will digital assistants replace both Google searches and mobile apps? (Image: Microsoft)

Remember when just about everything in the digital world revolved around "search," when a high Google ranking was the Holy Grail?

Then there was that stretch when every company felt it needed to have its own mobile app. That was the key to keeping up with the mass migration to smartphones.

But it’s time to move on, say the seers of Silicon Valley, to a new phase, one that, believe it or not, is built around conversation.

Microsoft CEO Satya Nadella said as much a few weeks ago at the big developers’ conference known as Microsoft Build, describing conversation as the next digital “platform.” He went so far as to suggest that chatbots—software that allows you to communicate directly with devices, when making a purchase, for instance—will have the same kind of profound impact as did browsers that first enabled us to search the Web and Apple’s use of the touchscreen on phones.

More recently, Facebook founder Mark Zuckerberg weighed in, announcing that chatbots will become a key feature of Facebook Messenger. Not only will they be able to understand what a person is saying—whether through voice recognition or by parsing a typed sentence—but they will also learn enough about your preferences from past conversations to take actions on their own, such as sending flowers or ordering dinner.

The thinking is that instead of searching the Web to get information, we will increasingly rely on bots to do it for us, whether it’s through conversational digital assistants like Apple’s Siri or Microsoft’s Cortana, or through more specialized software, such as a restaurant bot that uses artificial intelligence to get to know a person’s tastes and habits so well that it can anticipate their needs.

Getting to know you

It’s too soon to predict how all this will shake out, but Microsoft’s Nadella shared a vision of people interacting regularly with their digital assistants, which could, in turn, connect them with “worker bots” that actually handle tasks like booking flights or scheduling meetings. And Amazon has just unveiled software that will allow us to communicate with thermostats, light switches and other devices through its digital assistant, Alexa.

That suggests a different kind of relationship with our machines, one that’s more personal and engaging. Not that digital assistants will necessarily become our virtual sidekicks, but if we grow to like and trust them—and forgive their mistakes—we’re much more likely to make them part of our daily lives.

So more effort is going into making them feel like part of our world, even plugged into pop culture. To coincide with the start of the new season of “Game of Thrones,” Siri was programmed with a batch of snappy responses that showed off her knowledge of the show. And after a commercial featuring Cookie Monster interacting with Siri drew more than 10 million views on YouTube, Apple followed up with a silly behind-the-scenes version earlier this month.

Shaping a personality

For its part, Microsoft’s Cortana has a nice singing voice (I’ve heard “her” do abbreviated versions of “Danny Boy” and “Auld Lang Syne” when asked to sing a song) and gracious responses to personal questions (when asked her age, she replied, “By your calendar, I’m still in infancy. In bot years, I’m quite mature.”).

As it turns out, Cortana has a small team—including a poet, a novelist and a playwright—putting words in her mouth, according to a recent report by Elizabeth Dwoskin in the Washington Post. Their job is not just to make her sound more human, but also to add layers to her personality. As the writers shape her as a character, they wrestle with how she should respond to different types of questions. How solicitous should she be? How coy? How knowledgeable does she need to be about current events? When should she express an opinion?

Then there’s another consideration. Just as having Cortana sound too robotic could keep people from connecting with her, making her seem too human can feel creepy. The dip in empathy people feel when a machine seems almost, but not quite, human is what’s become known in robotics as the “uncanny valley.” It’s thought that a little quirkiness, even a flaw, in a digital assistant isn’t a bad thing. It can make the assistant seem less threatening and more endearing.

But Microsoft also knows the risks of inviting humans into a bot’s learning process. Late last month, it unveiled a conversational bot named Tay on Twitter and a few other chat apps, an experiment to see what the audience could teach it. It didn’t go well. In a matter of hours, Tay had learned to be outrageously inappropriate, spewing racist comments and suggesting at one point that the Holocaust never happened. Microsoft issued a quick apology and Tay went away.

Life guides?

In some ways, Google has the most to lose from the bot boom. As it becomes easier for bots to retrieve information, answer questions and perform tasks, why would we even do Google searches? Google, of course, knows this, so it’s betting big on its own digital assistant, Google Now.

By combining voice search with the massive amount of user data it has collected over the years, Google hopes to develop the ultimate virtual valet, one that knows so much about you that it can be several steps ahead in addressing your needs. The search company's goal is to make the leap, through Google Now, from purveyor of instant information to trusted life guide. 

Google's bot ideally will be able to let you know, based on real-time data, when you need to leave for work, or clue you in on cool things you can do with your kids over the weekend, or suggest vacation options based on places you’ve enjoyed in the past. As Amit Singhal, Google’s senior vice president of search products, put it during a recent interview with Time, “I want Google Now to help me not only just do the next thing. I want it to enable a better experience in this beautiful journey that we call life.”

Singhal also pointed out that Google, at least for now, is not expending a lot of energy on giving Google Now a winning personality. He contends that teaching it to tell jokes would suggest that these bots are more sophisticated than they actually are at this point. Better, he says, to concentrate on mining vast amounts of data so that Google Now learns to form the kinds of connections that make human speech understandable.

That’s the real tipping point for bots. Software is much more effective now at recognizing human words than even a few years ago, with an accuracy rate of more than 90 percent, according to most estimates. But truly understanding what those words mean in context remains a hurdle.

At the same time, we tend to raise our expectations. Real conversation moves the interaction well beyond typing a question into a search box. When speaking with a chatbot, we're more likely to feel like we’re speaking to another person, to be more open-ended and talk in multiple sentences spiced with double meanings and colloquialisms. All of which makes it that much harder for bots to figure out just what it is we want.

Still, bots are increasingly viewed as a natural extension of how we already communicate and use our mobile phones. And they’re being seen as the way we’ll talk to our cars, our TVs and all our other appliances.

Clearly, digital assistants and bots still have a way to go before they can interpret everything we say with precision and express themselves in language that sounds natural. But it seems only a matter of time before they’re less novelty and more companion. 

Some very big companies are banking on it.  