The ability to make and interpret facial expressions plays a central role in being human. As one research team put it, "The face is a visible signal of others’ social intentions and motivations, and facial expression continues to be a critical variable in social interaction." That makes it all the more surprising to find that not only are we lousy at correctly interpreting expressions, but that computers are significantly better at it.
In a new study, researchers from the University of California, San Diego, created a program that allows computers to interpret expressions of pain on videos of people's faces, The New York Times reports. Researchers already knew that people tend to perform poorly at determining whether someone is lying to or deceiving them. To put the computer to the test, the researchers recruited volunteers and filmed their facial expressions as they performed two different tasks: one in which they stuck a hand in an ice bath for a minute (painful) and another in which they submerged a hand in a soothing vat of warm water while producing their best faked expressions of pain.
Then, they asked other human volunteers to look at 50 of those videos and determine which ones showed genuine expressions of pain, the Times says. The computer also analyzed those videos. The human judges only guessed right about half the time, the researchers found. Even with an hour of special training, the humans scarcely improved their ability to identify the fakes, performing at about 55 percent accuracy. (You can flex your own pain-reading skills by taking the Times' online quiz.)
The computer, meanwhile, scored 85 percent accuracy when assessing those same recordings. The machine's software pulled this off by measuring "the presence, absence and frequency of 20 facial muscle movements in each of the 1,800 frames of one-minute videos," the Times reports.
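To give a rough sense of the kind of feature the Times describes, here is a minimal sketch in Python. It assumes per-frame detections of 20 facial muscle movements and computes, for each movement, the fraction of frames in which it appears; the sample data and the `movement_features` helper are hypothetical stand-ins, not the researchers' actual system.

```python
# Illustrative sketch only. Per the Times, the study's software measured
# "the presence, absence and frequency of 20 facial muscle movements in
# each of the 1,800 frames of one-minute videos." The data and helper
# below are hypothetical, for illustration.

NUM_MOVEMENTS = 20        # facial muscle movements tracked per frame
FRAMES_PER_VIDEO = 1800   # one-minute video at 30 frames per second

def movement_features(frames):
    """Turn per-frame detections into per-video frequency features.

    `frames` is a list of sets; each set holds the indices (0-19) of the
    muscle movements detected in that frame. Returns, for each movement,
    the fraction of frames in which it was present.
    """
    counts = [0] * NUM_MOVEMENTS
    for detected in frames:
        for movement in detected:
            counts[movement] += 1
    return [c / len(frames) for c in counts]

# Toy example: a genuinely pained face might activate movement 4 only
# intermittently, while a deliberately faked expression holds it nearly
# constantly -- an assumed pattern, purely for illustration.
genuine = [{4} if i % 3 == 0 else set() for i in range(FRAMES_PER_VIDEO)]
faked = [{4} for _ in range(FRAMES_PER_VIDEO)]

g = movement_features(genuine)
f = movement_features(faked)
print(round(g[4], 2))  # 0.33: intermittent activation
print(f[4])            # 1.0: constant activation
```

A real classifier would then learn, from many labeled videos, which patterns of these per-movement frequencies separate genuine pain from faked pain.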
This is the first time a computer has outperformed humans at reading facial expressions. And if computers get good enough at such expression-reading tasks, the Times points out, they could eventually be deployed for everything from lie detection to job interviews to medical diagnostics.