Facebook’s emotion study mess, summarized: still awful [Updated]

by Sunita

A couple of days ago I broke my hiatus to talk about Facebook’s emotion study. I found myself updating that post almost immediately, and the updates have continued unabated as more information has trickled out over the weekend, sometimes contradicting, sometimes reinforcing what we thought we knew. Rather than endlessly updating that post, I thought it was time to write a new one summarizing where we are now. I’m not linking a lot in this summary, but I encourage you to read and follow Kashmir Hill, who has been doing superb reporting (and updating) at Forbes.

A quick reminder of the main points: Two academics and a Facebook data scientist published a paper in the Proceedings of the National Academy of Sciences. They reported the results of a study that attempted to manipulate the emotional content of users’ Facebook News Feeds. Roughly 690,000 users were included in the study, some in treatment groups (their feeds were manipulated according to the study protocol) and some in control groups (their feeds were not).

The two requirements for inclusion were: (1) posting in English; and (2) having posted within the previous week. The study took place over seven days in January 2012.

US government-funded research that involves human subjects is required to conform to the Common Rule, which protects subjects from mistreatment and stipulates that they give informed consent before they participate. Informed consent is a technical requirement that places certain burdens of explanation and disclosure on the researcher. Not all studies can provide complete information to subjects before the experiment takes place, because that knowledge may alter behavior in ways that invalidate the study. In these cases, deception is permitted in initially informing the subject and carrying out the data collection, but the subject must be debriefed after participation, ideally immediately afterward but in all cases before the data are analyzed. Following the debriefing, the subject has the right to have her data removed from the study.

Studies that do not receive federal funding are not required to conform to the Common Rule, but many private and public institutions (and some companies) voluntarily choose to meet these requirements. Versions of the Common Rule are in force in many other countries, but by no means all.

The data collection and analysis for this emotion study were conducted by the Facebook data scientist. The academic authors, according to the notes provided by PNAS, limited their participation to designing the research and writing up the paper (together with the Facebook data scientist). This means that the academics were effectively siloed from the human-subjects portion of the research.

Facebook asserts that it received informed consent from its human subjects. How? Because all Facebook users agree to Facebook’s terms of service when they set up their accounts, and those terms include assent to research for internal operations purposes.

This assertion is completely untenable in the US academic sphere. Informed consent requires affirmative, study-specific consent, not a blanket “I agree” checkbox, and it requires the researcher to provide information about the specifics of the project and its possible ramifications both before and after the subject participates. I am confident that no IRB at a US research institution would consider agreement to Facebook’s TOS to constitute informed consent to any and all research projects.

But university IRB approval wasn’t necessary, because Facebook didn’t receive federal funds, and the academics (who are responsible to their universities) didn’t participate in the collection and analysis of data that failed to meet human subjects treatment standards. The study, to use an apt phrase, engaged in IRB laundering. Cornell University either did not need to grant IRB approval or granted IRB approval for data analysis on an existing dataset. Update: Cornell has released a statement asserting that

Because the research was conducted independently by Facebook and Professor Hancock had access only to results – and not to any data at any time – Cornell University’s Institutional Review Board concluded that he was not directly engaged in human research and that no review by the Cornell Human Research Protection Program was required.

Facebook apparently has an internal IRB process, but if it is satisfied with the argument that TOS agreement provides informed consent, that review presumably focuses on other aspects of the research.

[Update: Kashmir Hill at Forbes has just reported that Facebook’s TOS in January 2012 made no mention of research. So the TOS justification falls apart.]

As I said in the previous post, this is not the first time Facebook and academics have joined forces to manipulate user behavior for non-commercial (or not solely commercial) purposes. In 2010 they conducted a political experiment to increase voter turnout and claimed that the manipulation resulted in up to 340,000 more votes cast. It is extremely unlikely that Facebook’s “informed consent” process was any different for that study.

Where does this leave us? I’ve read the criticisms that

  1. the effects were small, so who cares?
  2. it was a crappy study and it probably didn’t affect users at all, so who cares?
  3. Facebook does this all the time, so who cares?
  4. users should know what they’re getting into with Facebook, so it’s on them, not Facebook.

These arguments have been cogently addressed elsewhere and I’m not interested in rehashing them. My current and greatest concern is about the effects of large-scale experiments on populations who clearly, obviously, have not consented to participate, experiments whose results are then disseminated publicly with the imprimatur of academic legitimacy.

There is nothing academically legitimate about this study. The academics involved should be ashamed of themselves for lending scholarly credibility to an ethically indefensible process. And PNAS should be ashamed of itself for lending the pages of a respected journal to ethically flawed data collection and analysis.

I don’t love IRBs. I complain about my university’s IRB every time I have to recertify myself, every time I advise students on their IRB applications, and every time the subject of IRBs comes up in faculty conversations and meetings.

This is not about how well IRBs do their job. It’s about the fact that, imperfect or not, IRBs are much, much better than the alternatives. IRBs are part of the institutional checks that keep us from repeating debacles like the Tearoom Trade study, the Stanford Prison Experiment, and Willowbrook.

As Susanna Fraser noted in comments to my previous post, we have enough suspicion around science and scientific research these days. When academics contribute to research that violates the protocols designed to protect the dignity of human subjects, everyone loses. Even, in the long run, Facebook.

 
