Facebook’s Mood-Manipulation Study Wasn’t Nice But It Also Wasn’t Very Good Science
Facebook tried to deliberately change the moods of some of its users
For a week-long period in January 2012, researchers at Facebook were trying to directly manipulate the moods of hundreds of thousands of people. The results, published in Proceedings of the National Academy of Sciences, seemed to show that, when a person posts a sad (or happy) status update, that person's friends start posting sadder (or happier) subsequent updates.
The study has been out for a few weeks, and at first only a few people seemed to notice. This past weekend, though, that changed. Journalists and scientists have been attacking the study from all sides, saying not only that trying to manipulate people's emotions without their approval is a huge breach of research ethics, but that the study itself was just bad science.
The study was meant to test what social scientists (including Facebook's Adam Kramer, who led the study) call "mood contagion"—how happiness and sadness can spread from person to person. Facebook's algorithms already determine what users see on their news feeds; for the study, Kramer and his team took this a step further. They tweaked the feeds of about 689,000 people to show either more positive posts or more negative posts. Then, they watched to see how this affected the subsequent posts of those hundreds of thousands of people.
But not getting “informed consent” from people before engaging in psychological research is a huge misstep, says Robinson Meyer for the Atlantic.
"[T]he study has come in for severe criticism,” says Charles Arthur for the Guardian, “because unlike the advertising that Facebook shows - which arguably aims to alter peoples' behaviour by making them buy products or services from those advertisers - the changes to the news feeds were made without users' knowledge or explicit consent.”
On Sunday, Kramer posted to Facebook saying that the study was designed to have as little impact on people's emotions as possible while still producing statistically significant results.
Having written and designed this experiment myself, I can tell you that our goal was never to upset anyone. I can understand why some people have concerns about it, and my coauthors and I are very sorry for the way the paper described the research and any anxiety it caused. In hindsight, the research benefits of the paper may not have justified all of this anxiety.
Ethical breaches aside, psychologist John Grohol at Psych Central says that the Facebook study has some glaring scientific problems, too.
The tool that Kramer and colleagues used to determine whether a status update was happy or sad isn't really cut out for the job, says Grohol. The Facebook researchers used an automated text analysis approach that scans a body of text and counts the number of positive and negative words. This is fine for books and essays and longer articles, says Grohol, but fails spectacularly when applied to short bits of text like Facebook status updates. The tool also misses other important aspects of Facebook communication, things like emojis and sarcasm. Grohol:
[E]ven if you believe this research at face value despite this huge methodological problem, you’re still left with research showing ridiculously small correlations that have little to no meaning to ordinary users.
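To see why word counting breaks down on short posts, here is a minimal sketch of that kind of sentiment scorer. The word lists and the function are illustrative assumptions, not the actual lexicon or code the researchers used:

```python
# Toy word-counting sentiment classifier (illustrative word lists only,
# not the actual lexicon used in the Facebook study).
POSITIVE = {"happy", "great", "love", "wonderful"}
NEGATIVE = {"sad", "terrible", "hate", "awful"}

def classify(text):
    """Label text by comparing counts of positive vs. negative words."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

# Over a long essay, stray words tend to average out. In a short status
# update, a single word decides the label, and negation and sarcasm are
# invisible to the counter:
print(classify("I am not having a great day"))           # "positive"
print(classify("Oh wonderful, my flight is delayed"))    # "positive"
```

Both example updates are plainly negative to a human reader, but each gets scored positive on the strength of one word, which is the failure mode Grohol describes.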
This isn't the first test of “mood contagion” to be done using Facebook, but it is the first that we know of where people were manipulated rather than just observed. In most cases, an “intervention” study like this would be better than a strict “observation” study, but that's assuming the study is well designed and ethically sound.
In his Facebook post, Kramer says that the social science team at the company has been working on “improving our internal review practices.” Kramer's assurances that Facebook is changing may make you feel a little better now, but if you remember feeling very, very, very, very vaguely more sad for a week in January 2012, maybe now you know why.