Can Biomusic Offer Kids With Autism a New Way to Communicate?

Biomedical engineers are using the sound of biological rhythms to describe emotional states

The emotional interface tracks physiological signals associated with emotional states and translates them into music. joey333/iStock

An ethereal sound, with a smooth, rangy melody that shuffles through keys, and a soft tap for a beat, fills a lab at Toronto’s Holland Bloorview Kids Rehabilitation Hospital. Made possible by wearable sensors on a child’s fingertips and chest that track pulse, breathing, temperature and sweat, and an algorithm that interprets that data as sound, the electronic output isn’t really danceable. Instead, the changes in tempo, melody and other musical elements provide insight into the child’s emotions.

This is biomusic, an emotional interface that tracks physiological signals associated with emotional states and translates them into music. Invented by a team at Holland Bloorview led by biomedical engineers Stefanie Blain-Moraes and Elaine Biddiss, the technology is intended to offer an additional means of communication to people who may not express their emotional state easily, including but not limited to children with autism spectrum disorder or with profound intellectual and multiple disabilities. In a 2016 study in Frontiers in Neuroscience, Biddiss and her coauthors recorded the biomusic of 15 kids around the age of 10 — both kids with autism spectrum disorder and typically developing kids — in anxiety-inducing and non-anxiety-inducing situations and played it back to adults to see if they could tell the difference. They could. (At the bottom of the study, you can download and listen to the biomusic.)

“These are children who may not be able to communicate through traditional pathways, which makes things a little bit difficult for their caregivers,” says Stephanie Cheung, a PhD candidate in Biddiss’ lab and lead author of the study. “The idea is to use this as a way for caregivers to listen to how those signals are changing, and in that way to kind of determine the feeling of the person they’re communicating with.”

While Biddiss’ studies employed that atmospheric sound, it need not be a particular type of music, points out Blain-Moraes, an assistant professor of physical and occupational therapy who runs the Biosignal Interaction and Personhood Technology Lab at McGill University. A former graduate student with Biddiss at Holland Bloorview who helped invent the original system, Blain-Moraes is working to further develop the technology. Among her modifications is the option to use different “sound skins” that render the signals in sounds the user finds pleasant. The goal is not to design a technology for a single group.

“We look a lot for what we call resonant design,” she says. “We’re not trying to design for a condition, we’re looking to design for a need, and often those needs resonate across conditions.” This could be a caregiver who wants more information from her patient, or a mother who wants an alternative way to monitor a baby in another room. It could apply to an individual who wants to track his own emotional state, or someone with an aging parent who has become less able to express him or herself.

In its original form, the technology featured a fingertip sensor that tracked heart rate, skin temperature and electrodermal activity (perspiration). These were expressed, respectively, in the beat, key and melody of the music. An additional chest strap tracked chest expansion, which was integrated into the music as a sort of whooshing sound. Each of these physiological features changes when a person is feeling anxious: Perspiration, heart rate and respiration all increase, while the blood vessels contract, making the skin temperature decrease.
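As a rough illustration, that mapping can be sketched in a few lines of Python. The ranges, scale choices and function names below are illustrative assumptions, not the published Holland Bloorview algorithm:

```python
# Hypothetical sketch of the signal-to-music mapping described above:
# heart rate -> tempo (beat), skin temperature -> key, electrodermal
# activity -> melody. All ranges and scales are assumed for illustration.

KEYS = ["C", "D", "E", "F", "G", "A", "B"]   # candidate musical keys, low to high
SCALE = [0, 2, 4, 5, 7, 9, 11]               # major-scale semitone offsets

def clamp01(x, lo, hi):
    """Normalize x into [0, 1] across the range [lo, hi], clamping outliers."""
    return max(0.0, min(1.0, (x - lo) / (hi - lo)))

def biomusic_params(heart_rate_bpm, skin_temp_c, eda_microsiemens):
    """Map three physiological readings to musical parameters."""
    tempo = heart_rate_bpm                   # the beat follows the pulse directly
    # Cooler skin (vasoconstriction under anxiety) shifts toward lower keys.
    key = KEYS[int(clamp01(skin_temp_c, 28.0, 36.0) * (len(KEYS) - 1))]
    # Higher electrodermal activity pushes the melody up the scale.
    degree = SCALE[int(clamp01(eda_microsiemens, 0.5, 20.0) * (len(SCALE) - 1))]
    return {"tempo_bpm": tempo, "key": key, "melody_degree": degree}
```

An anxious reading (faster pulse, cool skin, high perspiration) would then yield a quicker tempo in a lower key with a higher melodic line, which is the kind of shift a listening caregiver could pick up on.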

But there are still a lot of hurdles to overcome, technological and otherwise. Ideally, the system would be less obtrusive. Blain-Moraes implemented a method to estimate breathing from the amount of blood in the finger, replacing the chest strap, and placed the other sensors in a Fitbit-like wristband. Fitting it all into a consumer product like an Apple Watch, while not inconceivable, will require smaller, better sensors than are available now.
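The idea of estimating breathing from blood volume in the finger rests on a well-known property of pulse (photoplethysmography) signals: breathing modulates the amplitude of the pulse waveform. A toy sketch, using a simulated signal rather than real sensor data and assumed sampling and band parameters, not Blain-Moraes’ actual method:

```python
# Hypothetical sketch: recover breathing rate from the amplitude
# modulation of a fingertip pulse (PPG) trace. The signal below is
# simulated; the sampling rate and frequency band are assumptions.
import numpy as np

def breaths_per_minute(ppg, fs):
    """Estimate respiratory rate from the amplitude envelope of a PPG trace."""
    envelope = np.abs(ppg - np.mean(ppg))            # crude amplitude envelope
    spectrum = np.abs(np.fft.rfft(envelope - np.mean(envelope)))
    freqs = np.fft.rfftfreq(len(envelope), d=1.0 / fs)
    # Resting breathing typically falls between 0.1 and 0.5 Hz (6-30 breaths/min).
    band = (freqs >= 0.1) & (freqs <= 0.5)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Simulated 60-second trace: a 1.2 Hz (72 bpm) pulse whose amplitude
# rises and falls with breathing at 0.25 Hz (15 breaths per minute).
fs = 50.0
t = np.arange(0, 60, 1 / fs)
pulse = np.sin(2 * np.pi * 1.2 * t)
ppg = (1.0 + 0.3 * np.sin(2 * np.pi * 0.25 * t)) * pulse
```

Running `breaths_per_minute(ppg, fs)` on the simulated trace recovers a rate close to the 15 breaths per minute baked into the signal; real fingertip data is far noisier, which is part of why better sensors matter.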

“There’s an important distinction that you need to make between changes in your body that happen to maintain homeostasis and changes in your body that are specific to emotional and mental states,” says Blain-Moraes. “You need sensors that are sensitive enough to be able to pick up these changes — and they tend to be a lot smaller scale and faster — that are related to physiological, mental and emotional states.”

Then there are the scientific challenges. Detecting anxiety, as compared with a relaxed state, seemed to work. But how would the technology fare when comparing anxiety to excitement, two states that share many of the same physiological signals, let alone complex and overlapping emotions? The context of the situation may help, but the process is further complicated by the users themselves — kids with autism spectrum disorder don’t always show the same physiological signals, sometimes exhibiting increased heart rate in non-anxiety states, a narrower range of electrodermal activity and differing skin temperature responses.

"Biomusic and sonification technologies are an interesting approach to communicating emotional states," says Miriam Lense, a clinical psychologist and research instructor at Vanderbilt University Medical Center in the Program for Music, Mind and Society. "It remains to be seen how well this technology can distinguish states that have overlapping physiological output—for example, both excitement and anxiety involve heightened arousal—as well as mixed and fluctuating states. In different populations and for different individuals, there may be differences in how states are manifested physiologically."

Finally, and most problematically, there are ethical dilemmas. What biomusic is doing is broadcasting very personal information — one’s emotional state — publicly. In many of the use cases, the people in question don’t have the ability to communicate consent. And when a person is unable to verify the accuracy of that information — say, that they are in fact feeling anxious — that person may not be able to correct a misunderstanding.

“It’s like with many ethical issues, there isn’t a right or there isn’t a wrong,” says Biddiss. “It could equally be considered wrong to deny a person a communication pathway with their loved ones.”

In a worst-case scenario, this could play out in a feedback loop of embarrassing biomusic. Once, during a lecture, Blain-Moraes wore a biomusic system. When she was asked a difficult question, the biomusic intensified, causing everyone to laugh, which made her embarrassed, so it intensified further, and everyone laughed more — and so on.

Despite these issues, biomusic is progressing as a technology. It’s simple to interpret and doesn’t require undivided, visual attention. Blain-Moraes’ team at McGill is working toward an app, with companion sensors. They’re in the research and design stages, she says, sharing prototypes with caregivers and patients with dementia or autism to ensure that it’s a participatory process. In a previous study in Augmentative and Alternative Communication by Blain-Moraes, Biddiss, and several others, parents and caregivers viewed biomusic as a powerful and positive tool, calling it refreshing and humanizing.

“This is really meant to be a ubiquitous tool, that can be used to make people more aware of their emotions,” Blain-Moraes says.
