A computer model has reconstructed a snippet of a Pink Floyd song by reading the brain activity of people listening to the tune.
In a new study, published Tuesday in the journal PLOS Biology, participants with electrodes on their brains listened to “Another Brick in the Wall, Part 1,” from the rock band’s 1979 album, The Wall. Researchers then used the computer model to convert the electrode signals into audio.
The recreated bars sound garbled and hazy—a distorted echo of the original track. But unmistakable elements of the song’s rhythm, melody and harmony shine through.
“These exciting findings build on previous work to reconstruct plain speech from brain activity,” Shailee Jain, a neuroscientist at the University of California, San Francisco, who was not involved in the research, tells Scientific American’s Lucy Tu. “Now, we’re able to really dig into the brain to unearth the sustenance of sound.”
Researchers have been working on decoding brain activity with artificial intelligence for years. They’ve tried reading brain scans to determine which words people are listening to, and they’ve even attempted to translate entire stories. Another study aimed to reproduce images that participants looked at.
Such technology could one day be used to help people who are unable to communicate with spoken words. Ludovic Bellier, a neuroscientist at the University of California, Berkeley, and co-author of the new study, tells Science’s Phie Jacobs that he hopes the findings could eventually help people who have trouble speaking due to strokes, injuries or diseases, by making sense out of their brain activity.
For the new study, researchers played the Pink Floyd song to 29 participants with epilepsy. As treatment for their epilepsy, the participants already had electrodes implanted in their brains, per the Times. The song played in the operating room while the patients underwent surgery meant to prevent seizures, according to Fortune’s Erin Prater.
The researchers trained a computer model on the brain data from participants as they listened to about 90 percent of the Pink Floyd song. But the remaining 10 percent—a 15-second clip from the middle of the track—was left out of the training data, writes Science. Instead, the team asked the algorithm to recreate this section of the music from the brain activity based on patterns it had learned. The team trained 128 models, each operating at a different frequency, and together, they matched specific electrode signals to certain characteristics of music, per the Times.
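The train-and-hold-out procedure described above can be sketched in code. This is a minimal illustration with synthetic stand-in data, not the study's actual pipeline: the electrode signals and song spectrogram here are randomly generated, and ridge regression is used as a simple stand-in decoding model. It keeps the study's structure of one model per frequency band, trained on roughly 90 percent of the song and asked to reconstruct the held-out middle segment.

```python
# Sketch of held-out spectrogram reconstruction from brain activity.
# Synthetic data and ridge regression are illustrative assumptions,
# not the method used in the study.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_samples = 1000     # time points across the song
n_electrodes = 64    # simulated electrode channels
n_bands = 128        # frequency bands, one model per band (as in the study)

# Simulated electrode activity, and a song spectrogram that is
# (noisily) linearly related to it.
X = rng.normal(size=(n_samples, n_electrodes))
W = rng.normal(size=(n_electrodes, n_bands))
spectrogram = X @ W + 0.1 * rng.normal(size=(n_samples, n_bands))

# Hold out a contiguous ~10 percent segment from the middle of the song;
# train only on the remaining ~90 percent.
hold = slice(450, 550)
mask = np.ones(n_samples, dtype=bool)
mask[hold] = False

# Fit one model per frequency band, then predict the held-out clip.
reconstruction = np.zeros((100, n_bands))
for band in range(n_bands):
    model = Ridge(alpha=1.0).fit(X[mask], spectrogram[mask, band])
    reconstruction[:, band] = model.predict(X[hold])

# How well does the reconstructed spectrogram match the real one?
corr = np.corrcoef(reconstruction.ravel(), spectrogram[hold].ravel())[0, 1]
print(f"held-out correlation: {corr:.2f}")
```

In the real study, the reconstructed spectrogram would then be converted back into audio, which is why the result sounds garbled but recognizable: the model recovers the broad frequency-over-time structure, not a clean waveform.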
Beyond creating a haunting piece of music, the study also provided insights into which specific parts of the brain are involved in music perception. It found that both hemispheres play a role, but the right hemisphere is engaged more than the left, supporting findings from previous work, according to the study.
The superior temporal gyrus, located in the temporal lobe, seemed to be heavily involved in musical perception, with a particular subregion connected to rhythm. Previous research has connected different parts of the brain to perceiving specific aspects of music, including pitch, rhythm and the texture of the sound, called timbre, according to the study.
In the future, the researchers hope their insights could help devices that translate brain signals into words incorporate the more musical elements of speech. Language, like music, includes changes in pace, pitch and volume that are a vital part of communicating, per Scientific American.
“These elements, which we call prosody, carry meaning that we can’t communicate with words alone,” Bellier tells the publication.
The researchers chose the Pink Floyd song for this study, in part because it contains a mix of sung words and instrumental sections, according to the Times. But they had another reason, too: The participants “just love Pink Floyd,” Bellier tells Science.