When my 8-month-old cries, I ask him if he’s hungry, or wet, or just needs a cuddle.
“Babububuu,” he says.
What I need is a baby cry translator. And that’s just what a team of researchers say they’ve developed.
“Experienced nurses or pediatricians can identify why a baby is crying because they have experience,” says Lichuan Liu, a professor of electrical engineering at Northern Illinois University, who directs the Digital Signal Processing Laboratory where the research was conducted. “We talked to them, and they mentioned that based on the cry’s sound there are some clues.”
So Liu set out to identify the features of cries that can help mark them as expressions of pain or discomfort. These features include differences in pitch and frequency. The team then developed an algorithm based on automatic speech recognition to detect and identify these features. This “cry language recognition algorithm” was trained on recordings of baby cries taken from a hospital’s neonatal intensive care unit. It uses compressed sensing, a process that reconstructs a signal based on incomplete data, which is necessary for identifying sounds taking place in noisy environments. It can identify a baby cry against a background of, say, adult speech or loud television sounds or babbling toddlers—that is to say, the actual environments where babies live. By classifying different cry features, like pitch, the algorithm can suggest whether the cry is due to sickness or pain, and identify the degree of urgency.
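To make the idea concrete, here is a minimal sketch of the kind of pipeline described above: extract an acoustic feature (fundamental pitch, estimated here with a simple autocorrelation peak) from an audio signal and map it to a coarse label. The thresholds, the pitch range, the `classify_cry` rule, and the synthetic "cry" are all illustrative assumptions, not the team's actual algorithm, which also involves compressed sensing and many more features.

```python
import numpy as np

SAMPLE_RATE = 16000  # Hz, a common rate for speech-style audio

def estimate_pitch(signal, sample_rate=SAMPLE_RATE):
    """Estimate fundamental frequency (Hz) via a simple autocorrelation peak."""
    signal = signal - np.mean(signal)
    corr = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    # Look for the strongest peak within a plausible pitch range
    # (roughly 250-1000 Hz here; purely an illustrative assumption).
    min_lag = sample_rate // 1000   # highest pitch considered: 1000 Hz
    max_lag = sample_rate // 250    # lowest pitch considered: 250 Hz
    lag = min_lag + np.argmax(corr[min_lag:max_lag])
    return sample_rate / lag

def classify_cry(pitch_hz):
    """Hypothetical rule: unusually high pitch suggests pain or urgency."""
    return "urgent" if pitch_hz > 600 else "routine"

# Synthetic stand-in for a cry recording: a 440 Hz tone with mild noise.
np.random.seed(0)
t = np.arange(0, 1.0, 1.0 / SAMPLE_RATE)
cry = np.sin(2 * np.pi * 440 * t) + 0.05 * np.random.randn(len(t))
pitch = estimate_pitch(cry)
print(round(pitch), classify_cry(pitch))  # pitch near 440 Hz -> "routine"
```

A real system would of course extract many features per frame (pitch contour, energy, spectral shape) and feed them to a trained classifier rather than a single threshold.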
The team had experienced pediatric care providers assess 48 baby cry recordings for probable cause of crying—hunger, tiredness, gas pain, etc. They then compared these to the algorithm’s assessments. The algorithm agreed with the humans 70 percent of the time.
Liu hopes to partner with industry to develop a baby cry recognition machine for new parents. She and her team have applied for a patent on their technology.
“You can figure out why [the baby’s] crying and use appropriate techniques to soothe your baby,” she says. “Then if it’s something really special, you can understand maybe it’s an emergency.”
So-called “uncommon cry signals”—signs of pain or sickness—are often very high-pitched and very loud compared to ordinary crying, Liu says. Identifying these cries could also be helpful in a hospital setting, to help doctors and nurses quickly figure out which babies need immediate attention. Cries can also be used as preliminary diagnostic tools for problems like chromosomal abnormalities, or simply to identify common issues like colic. The research was published last month in the IEEE/CAA Journal of Automatica Sinica.
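The “uncommon cry signal” idea above can be sketched as a simple outlier check: compare a cry’s pitch and loudness against a baseline of ordinary cries and flag recordings that are unusually high on both. The baseline measurements, the z-score cutoff, and the function names below are made up for illustration; they are not values from the study.

```python
import statistics

# Hypothetical (pitch_hz, rms_loudness) measurements for ordinary cries.
baseline = [(420, 0.30), (450, 0.35), (400, 0.28), (470, 0.33), (440, 0.31)]

pitch_mean = statistics.mean(p for p, _ in baseline)
pitch_sd = statistics.stdev(p for p, _ in baseline)
loud_mean = statistics.mean(l for _, l in baseline)
loud_sd = statistics.stdev(l for _, l in baseline)

def is_uncommon(pitch_hz, loudness, z_cutoff=2.0):
    """Flag a cry whose pitch AND loudness are both well above the baseline."""
    z_pitch = (pitch_hz - pitch_mean) / pitch_sd
    z_loud = (loudness - loud_mean) / loud_sd
    return z_pitch > z_cutoff and z_loud > z_cutoff

print(is_uncommon(445, 0.32))  # an ordinary cry -> False
print(is_uncommon(900, 0.80))  # very high-pitched and very loud -> True
```

In a triage setting, a flag like this would only prioritize which babies a clinician looks at first, not replace the clinical judgment the article emphasizes.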
Developing AIs to detect human emotions is challenging, says Julia Rayz, a professor of computer and information technology at Purdue University, where she studies human-computer communication.
“Think how difficult it is for a human to recognize emotions in somebody that they don’t know,” she says. “Compare it to how much easier it is to recognize an emotion when we know a person. A computer has to go through the same thing, except that it usually generalizes the information across populations. So, for somebody who seems like they are smiling in their neutral phase, a computer may say that the person’s face shows a genuine smile—correlation with happiness—while it is not true. Same with unhappiness.”
Liu and her team continue to train the technology for greater accuracy. They also plan to add more features, such as the ability to identify and classify movement and facial expressions. This could help give a more detailed reading of a baby’s emotional and physical state. They also hope to begin human trials; so far the algorithm has only been tested on recorded cries.
“My boys are 10 and 4, so they’re not babies anymore, but I still remember,” says Liu. “So if there is anything I could do to help new parents like my husband and myself.... We really want this to be a real product that people can use when they need it.”