Facebook, Manipulation, and Data Protection – part 2

Right. Having gotten some day job work out of the way, I return to this topic to tease out the issues further.

One aspect that I didn’t touch on in the last post was whether Data Protection exemptions exist for research and, if so, whether they apply in this case. This discussion starts from the premise that EU Data Protection law applies to this Facebook research and that Irish Data Protection law is the relevant legislation.

The Exemption

Section 2(5) of the Data Protection Acts 1988 and 2003 provides an exemption for processing for research purposes:

(a) “do not apply to personal data kept for statistical or research or other scientific purposes, and the keeping of which complies with such requirements (if any) as may be prescribed for the purpose of safeguarding the fundamental rights and freedoms of data subjects.”

And

(b) “the data or, as the case may be, the information constituting such data shall not be regarded for the purposes of paragraph (a) of the said subsection as having been obtained unfairly by reason only that its use for any such purpose was not disclosed when it was obtained, if the data are not used in such a way that damage or distress is, or is likely to be, caused to any data subject.”

The key elements of the test therefore are:

  1. The data is kept for statistical, research, or other scientific purposes
  2. The processing of the data complies with any requirements that may be prescribed for safeguarding fundamental rights and freedoms

This means that the exemption can apply to research undertaken for scientific purposes with an appropriate ethics review that has identified appropriate controls to safeguard the fundamental rights of Data Subjects. Those rights, since the enactment of the Charter of Fundamental Rights in the EU, include a distinct right to personal data privacy, as reaffirmed by the Digital Rights Ireland case earlier this year.

The question arises: was the Facebook study undertaken for a scientific purpose? It would appear to be so, and in that context we need to examine whether any processing requirements were set out to safeguard the fundamental rights and freedoms of Data Subjects. That is a function of the IRB or Ethics committee overseeing the research. Cornell University are clear that the issues of personal data processing were not considered in this case: their scientists were engaged in a review and analysis of processed data, and they did not believe that human research was being undertaken.

Whether you consider that line of argument to be Jesuitical bullshit or not is secondary to the simple fact that no entity set out specific requirements regarding the controls that needed to be put in place to protect the fundamental rights and freedoms (such as freedom of expression) that the Data Subjects should enjoy.

Legally, this means that the two-stage test is passed. Data is being processed for a scientific purpose, and there has been no breach of any provision set down for the processing of the data to safeguard fundamental rights. Consent is therefore not required to justify the processing, and the standard around fair obtaining is looser.

Apparently, if your review doesn’t consider your research to be human research, then you are in the clear.

Ethically that is problematic, as it suggests that careful parsing of the roles of different participants in a research activity can bypass the need to check whether you have safeguarded the fundamental rights of your research subjects. That is why ethics reviews are important, especially when it comes to the ethics of “Big Data” research. Rather than assessing whether a particular research project is human research, we should be asking how it isn’t, particularly when the source of the data is identifiable social media profiles.

A Key Third Test…

The third part of the test is whether or not the data is being used in a way that would cause damage or distress to the data subject. This is a key test in the context of the Facebook project and the design of the study. Consent and fair obtaining requirements can be waived where there is no likelihood of damage or distress being caused to the research subject.

However, this study specifically set out to create test conditions that would cause distress to data subjects.

It may be argued that the test is actually whether the distress caused is additional, over and above the normal level of distress that the subject might suffer. But given that the Facebook study was creating specific instances of distress to measure a causation/correlation relationship between status updates and emotional responses, it is hard to see how this element of the exemption would apply.

Had Facebook adopted a passive approach to monitoring and classifying the data rather than a directed approach, their processing would not have caused distress (it would simply have monitored and reported on it).

The Upshot?

It looks like Facebook/Cornell might get off on a technicality under the first two stages of the test: they were conducting scientific research, and no Ethics committee prescribed any controls to protect fundamental rights. However, that is simply a technicality, and it could be argued that, in the absence of a positive decision that no controls were needed, the mere absence of prescribed requirements may not be sufficient to avail of the Section 2(5) exemption.

However, the direct nature of the manipulation, and the fact that it was intended to cause distress to members of the sample population, might negate the ability to rely on this exemption in the first place. That would mean that consent and all the other requirements of the Data Protection Acts apply and should have been considered in the conduct of the research.

The only saving grace might be that the level of distress detected was not found to be statistically large. But to find that out, they had to conduct the questionable research in the first place.

And that brings us back to the “wibbly-wobbly, timey-wimey” issues with the consent relied upon in the published paper.

Ultimately, this highlights the need for a proactive approach to ethics and data privacy rights in Big Data research activities. Rather than assuming that the data is not human data or identifiable data, Ethics committees should be invoked and required to assess whether it is, and to ensure that appropriate controls are defined to protect fundamental rights. Finally, the question of whether distress will be caused to data subjects in the course of data gathering needs to be a key ethical question, as it can trigger Data Protection liability in otherwise valuable research activities.