No matter where you are, a bop is a bop. Whether a melody makes people get up and dance, soothes their sadness, helps them fall in love, or lulls them to sleep, similar rhythms and tones make music a universal language, as the saying goes. Now, there might be science to back it up.
To better understand the similarities in music that could provide insight into its biological roots, a team of researchers focused on music with lyrics. They started by looking at ethnographic descriptions of music in 315 cultures worldwide, all of which featured vocal music, before analyzing musical recordings from 60 well-documented cultures, according to a study published in the journal Science.
W. Tecumseh Fitch, a cognitive biologist at the University of Vienna who was not involved in the study, writes in a commentary that accompanied the research in Science:
The authors find that not only is music universal (in the sense of existing in all sampled cultures) but also that similar songs are used in similar contexts around the world.
“Music is something that has bedeviled anthropologists and biologists since Darwin,” Luke Glowacki, an anthropologist at Pennsylvania State University and a co-author on the paper, tells the Wall Street Journal’s Robert Lee Hotz. “If there were no underlying principles of the human mind, there would not be these regularities.”
Basically, the team found that humans share a “musical grammar,” explains the study’s lead author Samuel Mehr, a psychologist at Harvard University. He tells Jim Daley at Scientific American, “Music is built from similar, simple building blocks the world over.”
The team used a combination of methods—including machine learning, expert musicologists and 30,000 amateur listeners from the United States and India—to analyze a public database of music. In one part of the study, online amateur listeners were asked to categorize random music samples as lullabies, dance songs, healing songs, or love songs. Dance songs were the easiest to identify. In other parts of the study, the music samples were annotated by listeners and transcribed onto a musical staff, the standard form of Western musical notation. When this data was fed to a computer, it was able to tell different kinds of songs apart at least two-thirds of the time.
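The idea of a computer sorting songs into categories can be pictured with a toy sketch like the one below. This is not the authors’ actual pipeline—the feature choices (tempo and average pitch) and every number here are invented for illustration—but it shows the general principle: summarize each song as a few measurable features, then assign a new sample to whichever category it most resembles.

```python
import math

# Toy feature vectors: (tempo in beats per minute, average pitch in Hz).
# All values are invented for illustration; the real study used much
# richer annotations and transcriptions.
training = {
    "lullaby":    [(60, 220), (72, 240), (66, 230)],
    "dance song": [(128, 300), (140, 320), (120, 310)],
}

def centroid(points):
    """Average each feature dimension across a category's examples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

# One "typical profile" per song category.
centroids = {label: centroid(pts) for label, pts in training.items()}

def classify(sample):
    """Assign a sample to the category whose profile it is closest to."""
    return min(centroids, key=lambda label: math.dist(sample, centroids[label]))

print(classify((130, 315)))  # a fast, high-pitched sample
print(classify((65, 225)))   # a slow, low-pitched sample
```

A nearest-centroid rule like this is about the simplest possible classifier; the appeal for a cross-cultural study is that it makes the basis for each decision easy to inspect.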
Critics have questioned the use of machine learning algorithms and Western notation because of the biases that come with both.
“Using Western notation to notate examples and then drawing conclusions from those notated scores is a really problematic practice,” Shannon Dudley, an ethnomusicologist at the University of Washington, who was not involved in the study, tells Scientific American. “Subtleties of rhythm, subtleties of pitch differentiation, articulation and timbre—there are a lot of things that have a huge impact on the way people hear music that aren’t there in [Western] notation.”
Ethnomusicologist Elizabeth Tolbert of Johns Hopkins’ Peabody Institute, who wasn’t involved in the study, tells the Wall Street Journal that the research team “may be over-interpreting their results” by searching for common patterns in such a diverse variety of music.
Regarding staff notation, Mehr points out to Scientific American that it was only one of five analysis methods that the team used. “We find the same result each of the five ways—that form and function are linked worldwide,” he says. So while the staff transcriptions are missing details like timbre and words, “they are nonetheless capturing meaningful information about the vocalizations in the songs.”
Co-author Manvir Singh, a cognitive and evolutionary anthropologist at Harvard University, also tells Scientific American that the music database is open access. “We’d be glad for anyone to test our conclusions using an alternative method,” he says.