Here’s Why A.I. Can’t Be Taken at Face Value

Cooper Hewitt’s new show drills down into the inherent biases lurking within computer intelligence systems

Expression Mirror (detail) by Zachary Lieberman. David Levene, Cooper Hewitt

At a moment when civil rights groups are protesting Amazon’s offering its face-matching service Rekognition to the police, and Chinese authorities are using surveillance cameras in Hong Kong to try to arrest pro-democracy campaigners, the Cooper Hewitt, Smithsonian Design Museum offers a new show that could not be more timely.

The exhibition, “Face Values: Exploring Artificial Intelligence,” is the New York iteration of a show the museum organized, as the official representative of the United States, for the 2018 London Design Biennale. It includes original works the museum commissioned from three Americans, R. Luke DuBois, Jessica Helfand and Zachary Lieberman, as well as a new interactive video experience about A.I. by the London filmmaker Karen Palmer of ThoughtWorks. The imaginative installation, which includes a screen set into a wall of ceiling-high metal cattails, was designed by Matter Architecture Practice of Brooklyn, New York.

“We are trying to show that artificial intelligence is not all that accurate, that technology has bias,” says the museum’s Ellen Lupton, senior curator of contemporary design.

R. Luke DuBois’s installation, Expression Portrait, for example, invites a museumgoer to sit in front of a computer and display an emotion, such as anger or joy, on his or her face. A camera records the visitor’s expression and employs software tools to judge the sitter’s age, gender and emotional state. (No identifying data is collected, and the images are not shared.) We learn that such systems often make mistakes when interpreting facial data.

“Emotion is culturally coded,” says DuBois. “To say that open eyes and raised corners of the mouth imply happiness is a gross oversimplification.”

DuBois wants the viewer to experience the limits of A.I. in real time. He explains that systems often used in business or governmental surveillance can make mistakes because they have built-in biases: they are “learning” from databases of images of certain, limited populations but not others. Typically, the systems work best on white males and less well for just about everybody else.

Machine-learning algorithms seek patterns in large collections of images, and their judgments are only as good as the collections they learn from. To calculate emotion for Expression Portrait, DuBois used the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), which comprises video files of 24 young, mostly white drama students, as well as AffectNet, which includes celebrity portraits and stock photos. To estimate age, he used the IMDB-WIKI dataset, which relies on photos of famous people. Knowing the sources of DuBois’s image bank, and how such databases can be skewed, makes it easy to see how digital systems can produce flawed results.
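Expression Portrait’s judgments are the output of models trained on collections like these. As a rough sketch of what such a pipeline does, the open-source DeepFace library, named here purely as an illustration and not the tool DuBois used, will guess age and emotion from a single photograph; its guesses inherit whatever skew sits in its own training data.

```python
# Illustrative sketch only: DeepFace is an open-source face-analysis library,
# not the software behind Expression Portrait. The image path is a placeholder.
from deepface import DeepFace

results = DeepFace.analyze(
    img_path="visitor.jpg",                 # placeholder photo of a sitter
    actions=("age", "emotion"),             # attributes to estimate
    enforce_detection=False,                # don't raise an error if no face is found
)

# Recent versions return a list (one entry per detected face); older ones a dict.
face = results[0] if isinstance(results, list) else results

# These numbers are model guesses, not facts: the underlying estimators were
# trained on collections such as IMDB-WIKI and AffectNet, so their errors
# track whoever is over- or under-represented in that data.
print("estimated age:", face["age"])
print("dominant emotion:", face["dominant_emotion"])
print("emotion scores:", face["emotion"])
```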

DuBois is director of the Brooklyn Experimental Media Center at New York University’s Tandon School of Engineering. He trained as a composer and works as a performer and conceptual artist. He combines art, music and technology to foster greater understanding of the societal implications of new technologies.

He is certainly on to something.

Installation view of the exhibition, designed by Matter Architecture Practice of Brooklyn, New York, with a screen set into a wall of ceiling-high metal cattails. Matt Flynn, Cooper Hewitt

Last week the creators of ImageNet, the decade-old database widely used to train A.I. machine-learning systems, announced the removal of more than 600,000 photos from its system. The project had pulled millions of photos from the Internet and then hired 50,000 low-paid workers to attach labels to the images. Those labels included offensive, bizarre words like enchantress, rapist, slut, Negroid and criminal. After being exposed, the project issued a statement: “As AI technology advances from research lab curiosities into people’s daily lives, ensuring that AI systems produce appropriate and fair results has become an important scientific question.”

Zachary Lieberman, a new-media artist based in New York, created Expression Mirror for the Cooper Hewitt show. The piece invites the visitor to use his or her own face in conjunction with a computer, camera and screen. Lieberman’s software maps 68 landmarks on the visitor’s face, then mixes fragments of the viewer’s expression with those of previous visitors to produce unique composite portraits.
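The 68-landmark layout Lieberman describes is a common convention in face-tracking software. Here is a minimal sketch of the same idea using the open-source dlib library and its pretrained 68-point shape predictor, offered as a generic illustration rather than Expression Mirror’s actual code.

```python
# Minimal sketch of 68-point facial landmark detection with dlib.
# Generic illustration of the technique, not Expression Mirror's code.
# The model file (shape_predictor_68_face_landmarks.dat) must be downloaded
# separately from dlib.net; the image path is a placeholder.
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = dlib.load_rgb_image("visitor.jpg")

for face in detector(img):
    shape = predictor(img, face)
    # 68 (x, y) points tracing the jaw, brows, eyes, nose and mouth.
    landmarks = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
    print(f"found {len(landmarks)} landmarks; first point: {landmarks[0]}")
```

An installation like Lieberman’s could compare such landmark sets between visitors to find matching expressions, as he describes below.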

“It matches the facial expression with that of previous visitors, so if the visitor frowns, he or she sees other faces with frowns,” Lieberman says. “The visitor sees his expression of an emotion through those on other people’s faces. As you interact you are creating content for the next visitor.”

“He shows it can be fun to be playful with data,” Lupton says. “The software can ID your emotional state. In my case, it reported I was 90 percent happy and 10 percent sad. What is scary is when the computer confuses happy and sad. It’s evidence the technology is imperfect even though we put our trust in it.”

Lieberman co-founded openFrameworks, a tool for creative coding, and is a founder of the School for Poetic Computation in New York. He helped create EyeWriter, an eye-tracking device designed for people who are paralyzed. In his Expression Mirror, white lines produce an abstract, graphic interpretation of the viewer’s emotional state. “If you look happy you might see white lines coming out of your mouth, based on how the computer is reading your expression,” he says.

Jessica Helfand, a designer, critic, historian and a founder of the blog and website “Design Observer,” has contributed a visual essay (and soundtrack), titled A History of Facial Measurement, on the long history of facial profiling and racial stereotyping.

“It’s a history of the face as a source of data,” Lupton says. Helfand tracks how past and present scientists, criminologists and even beauty experts have tried to quantify and interpret the human face, often in the belief that moral character can be determined by facial features.

Karen Palmer, the black British filmmaker, calls herself a “Storyteller from the Future.” For the show, she created Perception IO (Input Output), a reality simulator film.

The visitor takes the position of a police officer watching a training video that portrays a volatile, fraught scene: a person is running toward the officer, who must try to de-escalate the situation. How the visitor responds has consequences. A defensive stance leads to one outcome, while a calm, unthreatening one leads to another.

Perception IO tracks eye movements and facial expressions. Thus, the visitor is able to see his or her own implicit bias in the situation. If you are a white policeman and the “suspect” is black, do you respond differently? And vice versa. Palmer’s goal is for viewers to see how perceptions of reality have real-life consequences.

The takeaway from the show?

“We need to understand better what A.I. is and that it’s created by human beings who use data that human beings select,” Lupton says. “Our aim is to demystify it, to show how it’s made.”

And the show is also meant to be entertaining: “We are trying to show what the computer thinks you are.”

“Face Values: Exploring Artificial Intelligence” is on view at the Cooper Hewitt, Smithsonian Design Museum in New York City through May 17, 2020. The museum is located at 2 East 91st Street (between Fifth and Madison Avenues).
