Facebook’s indefensible, unscholarly research [Updated]

by Sunita

Note: This blog post has been updated 5 times since I first published it less than 24 hours ago. The updates are inserted where they make the most sense, so read through the post to get them all. They are bolded to make them easier to spot.

Note #2: I’ve summarized the latest information in a new post.

 

I’m supposed to be on hiatus. And I was, until I woke up this morning and clicked a link to an A.V. Club article describing a Facebook-based research paper on contagious emotions. I read the article and was dumbfounded. Then I thought, well, I should make sure it’s being reported accurately, so I downloaded the paper and read it. That didn’t help. Indeed, it made things worse.

Facebook has more than a billion users and is now a publicly traded company whose financial health depends on keeping shareholders happy. One of the ways it does this is by constantly tweaking its presentation of personal, public, and commercial information in order to increase participation and maximize ad revenues. Facebook has never been demonstrably interested in its users’ privacy, and unlike Google, it has never had “don’t be evil” as a maxim. Its founder is on record as saying that wanting to have more than one online/offline identity displays a lack of integrity. For Facebook, information is money, i.e., your information mints their money.

I dumped my Facebook account years ago after barely using it, because I got tired of having to be vigilant about my privacy settings every time they tweaked the site (and they tweak the site a lot). When I teach a privacy course, Facebook is one of the obvious examples of an online world that makes privacy control extremely difficult. I tell my students and anyone else who will listen, if you’re going to use Facebook, set your privacy options to maximum and hope for the best.

But the story that broke today is not about privacy. It’s about integrity, trust, and informed consent. Briefly stated, one of Facebook’s “data scientists” coauthored a paper with two academics reporting the results of a large-scale experiment in which people’s feeds were manipulated in order to test whether adding and subtracting positive and negative messages affected readers’ emotional responses, both at the time they read the posts and in the days following. The A.V. Club story has the guts of the research and links to the paper. The Atlantic has two articles, one an interview with the psychologist who edited the article for the journal and the other summarizing some of the key points. Slate also has an excellent article on the ethics of the research, with contributions from ethicists and lawyers.

As the Slate and Atlantic articles suggest, there is no way that what the researchers claim to have obtained qualifies as “informed consent” under either the Common Rule or most universities’ IRB (Institutional Review Board) procedures. I find it interesting that in the article’s author credits, the Facebook researcher is credited with the data collection and data analysis, while the two academic authors (one from UCSF and the other from Cornell University) are responsible only for the research design and for writing up the paper. This seems to silo them from IRB requirements (which understandably focus on the treatment of human subjects during data collection and analysis). Here is Cornell University’s FAQ on IRB procedures. Under those procedures, there is no way these data could be collected and analyzed without IRB scrutiny, and that scrutiny has to take place before the study begins.

[Update: Lee (@ZLeeily) contacted Fiske (the journal editor interviewed by The Atlantic) directly and has posted the results of her email exchange. It is unclear which of the two universities’ IRBs approved this, since Fiske doesn’t name the university. [ETA: but see Update #4 below.] The board seems to have treated this manipulation as functionally equivalent to the editorial and advertising-oriented curation Facebook already applies to users’ feeds, which is dumbfounding to me. It certainly doesn’t seem to fit the points in Cornell’s FAQ, but individual decisions can obviously diverge from these.]

[Update #2: The editor of this article also edited the controversial and highly criticized article on “female” hurricanes.]

In the article, the authors assert that the data collection and analysis technique

“was adapted to run on the Hadoop Map/Reduce system (11) and in the NewsFeed filtering system, such that no text was seen by the researchers. As such, it was consistent with Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research.”

This is hogwash, for two reasons.

  1. Whether data are collected by humans or by automated procedures has no bearing on informed consent conditions. This may matter to Facebook in terms of what it promises its users, but it is irrelevant to academic human-subject rules. Anonymization is relevant and was followed here, but that’s a separate issue.
  2. Facebook’s own Data Use Policy stipulates that “in addition to helping people see and find things that you do and share, we may use the information we receive about you … for internal operations, including troubleshooting, data analysis, testing, research and service improvement.”

Note that it says “information we receive about you.” Also note that this information will be used “for internal operations.”

A publicly accessible journal article is a long way from “internal operations.” And these data were not passively received; they were experimentally manipulated. Some people received more positive messages, some more negative. The entire experiment was designed to test whether users’ emotional states appeared to be altered as a result of this manipulation. (I say “appeared” because the data are collected from users’ Facebook posts; there are no direct measures of individual emotional states.)
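
To make the point about indirect measurement concrete, here is a minimal sketch, in Python, of the kind of word-counting proxy this sort of design implies: it scores the words people typed, not how they actually felt. The word lists and function below are hypothetical stand-ins of my own; the study’s actual classification ran inside Facebook’s systems and is not reproduced here.

```python
# A toy illustration of an *indirect*, text-based emotion measure.
# The lexicons below are hypothetical stand-ins, not the study's actual word lists.

POSITIVE_WORDS = {"happy", "great", "love", "wonderful", "excited"}   # hypothetical
NEGATIVE_WORDS = {"sad", "awful", "hate", "terrible", "miserable"}    # hypothetical


def post_emotion_counts(post_text: str) -> dict:
    """Count positive and negative words in a single post.

    This is a proxy measure: it describes the words a user typed,
    not the user's underlying emotional state.
    """
    words = post_text.lower().split()
    return {
        "positive": sum(w.strip(".,!?") in POSITIVE_WORDS for w in words),
        "negative": sum(w.strip(".,!?") in NEGATIVE_WORDS for w in words),
        "total": len(words),
    }


if __name__ == "__main__":
    sample = "I love this wonderful day, even if the weather is awful."
    print(post_emotion_counts(sample))
    # -> {'positive': 2, 'negative': 1, 'total': 11}
```

Even in this toy version, the gap between “emotional words in a post” and “how a person actually feels” is plain, which is why I say “appeared.”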

Look, academics experimentally manipulate emotions in studies on a regular basis. Academics are also doing Facebook-based studies of politics and other topics. I’ve personally conducted dozens of interviews for two different projects, both of which ask difficult personal questions of my subjects, and those interviews can have emotional implications as well. The problem here isn’t what the study wanted to measure. It’s that they didn’t ask people whether they wanted to participate.

Even in “deceptive research,” i.e., research where the participants cannot be told of the goals of the research in advance because it may taint their behavior and therefore the results, there are protocols. These include debriefing the subjects as soon as possible after the manipulation and allowing them to withdraw their data if they so desire. None of these protocols appear to have been followed.

Facebook is not required to conform to the Common Rule, which covers federally funded research, because it is unlikely to have received government funds, given that it has money to burn on studies like this.

[Update #4: Law professor James Grimmelmann notes (via @jonpenney) that the study seems to have had at least some federal support: a Cornell Chronicle story reports that it was funded in part by the James S. McDonnell Foundation and the Army Research Office. The latter is a federal agency, so research it funds is covered by the Common Rule. One of the academic co-authors is currently at Cornell and the other, while now on the UCSF faculty, was a postdoc at Cornell when the study began. That makes it more likely that Cornell's IRB approved the research.]

[Update #5: According to Forbes reporter Kashmir Hill, who has been doing stellar reporting on this story, the review was not conducted by a university IRB but internally at Facebook itself. So we're back to where we started. After 5 updates, I think it's time for a new post. This is getting more confusing, not less.]

But unlike some writers, I’m not completely convinced this study qualifies as legal in every subject’s country. The TOS are vague and broad, and in this case not only is there no reason for Facebook users to think “research” includes “emotional manipulation,” but Facebook also seems to have violated its own TOS assurances by reporting the results to the world. And again, these data weren’t just “received”; they were created by the experiment. In the absence of the experiment, the statistically significant results would not have obtained. Given that users were selected on the basis of using English as their language, the laws of multiple countries are relevant, including countries with varied legal definitions of privacy and consent.

[Update #3: Also from @ZLeeily: we don't know whether the researchers excluded minors from their sample. IRB rules often require that researchers treat minor and adult human subjects separately.]  

But even if this research was legal (and that’s a big if), it wasn’t ethical. I don’t know anyone who loves IRB reviews, but I know very few scholars who seriously want to go back to the bad old days of every researcher abiding by his own code. IRBs frequently overreach and are tone deaf, but so are researchers.

The history of human subject research is full of horrible examples of abuse and exploitation. As an empirical social scientist, I know how lucky I am that people are willing to spend their time, effort, and emotions, and even risk their reputations (however hard we try to anonymize their identities) to increase human knowledge. Their generosity is a gift that should be honored and respected.

Facebook just told us how badly they fucked with that gift.

 
