By Reading Brainwaves, an A.I. Aims to Predict What Words People Listened to

The research is a long way from practical use, but researchers hope it might one day aid communication for people who have experienced brain injuries

The artificial intelligence has looked for patterns between audio recordings and the brain activity of people listening to those recordings.  John M Lund Photography Inc / Getty Images

Scientists are trying to use artificial intelligence to translate brain activity into language.

An A.I. program analyzed snippets of brain activity from people who were listening to recorded speech. It tried to match these brainwaves to a long list of possible speech segments that the person may have heard, writes Science News’ Jonathan Moens. The algorithm produced a prediction of the ten most likely possibilities, and more than 70 percent of the time, its top-ten list contained the correct answer.

The study, conducted by a team at Facebook’s parent company, Meta, was posted in August to the preprint server arXiv and has not been peer reviewed yet.

In the past, much of the work to decode speech from brain activity has relied on invasive methods that require surgery, writes Jean-Rémi King, a Meta A.I. researcher and a neuroscientist at the École Normale Supérieure in France, in a blog post. In the new research, scientists used brain activity measured with non-invasive technology.

The findings currently have limited practical implications, per New Scientist’s Matthew Sparkes. But the researchers hope to one day help people who can’t communicate by talking, typing or gesturing, such as patients who have suffered severe brain injuries, King writes in the blog post. Most existing techniques to help these people communicate involve risky brain surgeries, per Science News.

In the experiment, the A.I. studied a pre-existing database of 169 people’s brain activity, collected as they listened to recordings of others reading aloud. The brain waves were recorded using magnetoencephalography (MEG) or electroencephalography (EEG), which non-invasively measure the magnetic and electrical components of brain signals, respectively, according to Science News.

The researchers gave the A.I. three-second segments of brain activity. Then, given a list of more than 1,000 possibilities, they asked the algorithm to pull the ten sound recordings it thought the person had most likely heard, per Science News. The A.I. wasn’t very successful with the activity from EEG readings, but for the MEG data, its list contained the correct sound recording 73 percent of the time, according to Science News.
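As a rough illustration of that matching step, here is a minimal sketch in Python. It assumes a model has already mapped a three-second brain-activity segment and each candidate speech clip into a shared embedding space; the study’s actual model and data are not reproduced here, and every name, shape and number below is invented for the example.

import numpy as np

# Toy setup: 1,000 candidate speech clips, each represented by a vector in a
# shared embedding space (all values here are random stand-ins for the sketch).
rng = np.random.default_rng(0)
n_candidates, embedding_dim = 1000, 256
candidate_embeddings = rng.normal(size=(n_candidates, embedding_dim))

# Pretend clip 42 is the one the listener actually heard, and that the model's
# brain-derived embedding lands near it, plus some noise.
true_index = 42
brain_embedding = candidate_embeddings[true_index] + 0.5 * rng.normal(size=embedding_dim)

# Score every candidate by cosine similarity to the brain-derived embedding.
def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

scores = np.array([cosine(brain_embedding, c) for c in candidate_embeddings])

# Keep the ten highest-scoring clips and check whether the true clip is among
# them; the study reports this kind of top-ten hit rate at 73 percent for MEG.
top_ten = np.argsort(scores)[::-1][:10]
print("Correct clip in top ten:", true_index in top_ten)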

The A.I.’s “performance was above what many people thought was possible at this stage,” Giovanni Di Liberto, a computer scientist at Trinity College Dublin in Ireland who was not involved in the study, tells Science News. Of its practical use, though, he says, “What can we do with it? Nothing. Absolutely nothing.”

That’s because MEG machines are too costly and impractical for widespread use, he tells Science News. Plus, MEG scans may never capture enough detail of brain activity to improve upon the findings, Thomas Knöpfel, a neuroscientist at Imperial College London in England who didn’t contribute to the research, tells New Scientist. “It’s like trying to stream an HD movie over old-fashioned analogue telephone modems,” he tells the publication.

Another drawback, experts say, is that the A.I. required a finite list of possible sound snippets to choose from, rather than coming up with the correct answer from scratch. “With language, that’s not going to cut it if we want to scale it to practical use, because language is infinite,” Jonathan Brennan, a linguist at the University of Michigan who didn’t contribute to the research, tells Science News.

King notes to Time’s Megan McCluskey that the study examined only speech perception, not production. To help people, future technology would need to figure out what they are trying to communicate, which King says will be extremely challenging. “We don’t have any clue whether [decoding thought] is possible or not,” he tells New Scientist.

The research, which is conducted by the Facebook Artificial Intelligence Research Lab rather than directed top-down by Meta, is not currently designed for a commercial purpose, King tells Time.

To the critics, he says there is still value in this research. “I take this more as a proof of principle,” he tells Time. “There may be pretty rich representations in these [brain] signals—more than perhaps we would have thought.”
