Adam Kramer, one of the three Facebook data scientists behind a controversial study showing that positive and negative moods can spread across the social network, responded to the backlash with a public Facebook post. In it, he defended the goal of the study but admitted that the research benefits of the paper may not have justified all of the anxiety it caused.
In the experiment, Facebook tweaked the News Feed algorithm for nearly 700,000 unwitting users, causing them to see an abnormally low number of either positive or negative posts. People who saw more negative posts went on to write more negative posts themselves, and vice versa, suggesting that their moods were affected by the kinds of things they were reading on Facebook.
The experiment became controversial because people didn’t like that they could have been part of it without their knowledge or explicit permission — though everyone implicitly gave permission just by agreeing to Facebook’s terms of service. People also found it unsettling that Facebook would be willing to manipulate the emotions of its users, especially by making them feel more negative than usual. Even the woman who edited the paper said she found it creepy.
Kramer says he and the other researchers conducted the study because they wanted to test the common worry that seeing friends post positive comments causes people to feel left out or negative, or that seeing too many negative posts might stop them from using the site.
He emphasises that the study affected only 0.04% of users, for only a single week in early 2012, and only in a very small way.
“And at the end of the day, the actual impact on people in the experiment was the minimal amount to statistically detect it,” he writes. “The result was that people produced an average of one fewer emotional word, per thousand words, over the following week.”
He also says, however, that he understands why people are upset, and that he is sorry the paper didn’t describe the researchers’ motivations. “In hindsight, the research benefits of the paper may not have justified all of this anxiety,” he writes.
Finally, he notes that the experiment was run more than two years ago and that Facebook has since been working on improving its internal review practices.
Judging by some of the top comments on Kramer’s thread, some users still aren’t satisfied.
“I appreciate the statement, but emotional manipulation is still emotional manipulation, no matter how small of a sample it affected,” one woman writes.
Here’s the full post from Kramer:
OK so. A lot of people have asked me about my and Jamie and Jeff’s recent study published in PNAS, and I wanted to give a brief public explanation. The reason we did this research is because we care about the emotional impact of Facebook and the people that use our product. We felt that it was important to investigate the common worry that seeing friends post positive content leads to people feeling negative or left out. At the same time, we were concerned that exposure to friends’ negativity might lead people to avoid visiting Facebook. We didn’t clearly state our motivations in the paper.
Regarding methodology, our research sought to investigate the above claim by very minimally de-prioritizing a small percentage of content in News Feed (based on whether there was an emotional word in the post) for a group of people (about 0.04% of users, or 1 in 2500) for a short period (one week, in early 2012). Nobody’s posts were “hidden,” they just didn’t show up on some loads of Feed. Those posts were always visible on friends’ timelines, and could have shown up on subsequent News Feed loads. And we found the exact opposite to what was then the conventional wisdom: Seeing a certain kind of emotion (positive) encourages it rather than suppresses it.
And at the end of the day, the actual impact on people in the experiment was the minimal amount to statistically detect it — the result was that people produced an average of one fewer emotional word, per thousand words, over the following week.
The goal of all of our research at Facebook is to learn how to provide a better service. Having written and designed this experiment myself, I can tell you that our goal was never to upset anyone. I can understand why some people have concerns about it, and my coauthors and I are very sorry for the way the paper described the research and any anxiety it caused. In hindsight, the research benefits of the paper may not have justified all of this anxiety.
While we’ve always considered what research we do carefully, we (not just me, several other researchers at Facebook) have been working on improving our internal review practices. The experiment in question was run in early 2012, and we have come a long way since then. Those review practices will also incorporate what we’ve learned from the reaction to this paper.