Beki Grinter

The Right to be Forgotten and the Right to be Equal

In computer science, empirical, European Union, social media on July 16, 2014 at 4:48 am

I’ve said this before: the Internet can be a mean, misogynistic place. Could the Right to be Forgotten help with this?

The Right to be Forgotten is an EU ruling that gives people the means to ask search engine companies to remove links from their search results when the information is inadequate, irrelevant, or no longer relevant. It’s sparked a lot of controversy as well as questions.

The controversy could be characterized as pitting freedom of expression and information against individual privacy rights. Additionally, people have argued that it creates an unfair burden on intermediaries such as Google.

While I am open to these arguments, I find myself thinking about how freedom of expression and misogyny interact. Some of the things that are written about women on the Internet are vile, abusive, full of bile and hatred. Freedom of expression has always had limitations: libel (making false and damaging statements) and obscenity. Freedom of expression on the Internet seems never to have had these limitations, and so obscene, libelous statements directed at women exist in perpetuity on the Internet. Perhaps some might argue that it’s the responsibility of the person they are targeted at to take it up through the courts. But how, when the authors of these remarks are hidden? Which makes me think there is a role for corporations. Or at least a responsibility.

Some advocates for the right to be forgotten have argued that it reflects a social value of forgiveness. We all have the right to make mistakes and then over time have those mistakes disappear into a forgotten history. I agree.

But what I am asking and suggesting here is that the Right to be Forgotten may be a means to finally have an Internet that is fair to all. For a long time visions of the Internet have championed it as a platform welcoming anyone and everyone. The Right to be Forgotten may have a role in actually ensuring that it welcomes minorities by proving once and for all that it will not tolerate discrimination.

538, the World Cup, and Facebook: Telling Stories about Data

In computer science, discipline, empirical, research, social media on July 15, 2014 at 6:49 am

As many of you already know, I’ve been following the World Cup. My team, Germany, won. Watching the World Cup has always involved reading news reports and commentary about the matches. This year I decided to include 538 in my reading.

538 is Nate Silver’s website. Silver became famous predicting US elections, and he is a master of analyzing big data to make predictions. It works well for elections. But it doesn’t work so well for the World Cup, at least not for me. For one thing, the site predicted Brazil to win for a long time.

But it’s not just that 538 did not accurately predict the winners. I think that 538 misses the point of a World Cup. Crunching data about the teams doesn’t tell the whole story. And the World Cup is stories. Many stories. As a fan you learn the stories of your team and its history. You might start with world history—this is very salient as a Germany fan. England versus Argentina similarly (1986). It also involves stories about the teams’ previous encounters. Germany versus Argentina has happened before, even in finals. And those stories are recounted, and reflected on, in the build-up to a game. You might tell stories about strategy. Certainly the Germans have been telling those, about a decade-long commitment to raising German players: how you structure a league to encourage more domestic players who can also play for the national side, and how you balance the demands of a national league and a national team.

In a nutshell, context matters. These stories of world politics, former World Cups, and the arc of time turn statistics about the players into something richer. 538 tells none of those stories. And I suppose that’s exactly what it wants to be: a “science” of the World Cup. But my World Cup isn’t statistics; it’s larger, more discursive, and has a multi-decade narrative arc.

Reflecting on this caused me to revisit the Facebook study. Yes, that Facebook study. The study reported data. But it was data about people. At the same time, I think some of the response could be interpreted as people feeling that there was more to the story than just statistical reporting of the outcomes. Is what was missing a similar kind of human dimension, an infusion of humanity? This is the question I’ve kept wondering about since reflecting on the problems of both of these data-driven reports. 538 reduces football to data, and in so doing it loses the human dimension. The Facebook study started as data, and the public raised human concerns and considerations. If I have a takeaway, it is that fields like social computing, or any data science of humans, need to pay serious attention to the stories that we tell about people. How we frame, or potentially reduce, people is something that the public will care about, for it is their humanity, their stories, that we seek to tell.

That Facebook Study

In academia, computer science, discipline, empirical, European Union, research, social media on July 8, 2014 at 8:07 am

This post follows Michael Bernstein’s suggestion that Social Computing researchers join the conversation.

Facebook, with colleagues at Cornell and the University of California, San Francisco, published a study revealing that ~600,000 people had their Newsfeeds curated to reduce their exposure to either positive or negative posts. The goal was to see how exposure to happy or sad posts influenced those users. Unless you’ve been without Internet connectivity, you have likely heard about the uproar it’s generated.
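To make the design concrete, here is a minimal sketch of how a feed-curation experiment of this shape might work. Everything here is hypothetical: the function names, condition labels, and omission probability are invented for illustration, and the actual Facebook system (which, per the PNAS paper, scored posts by word counting with LIWC) is not public.

```python
import hashlib
import random

# Hypothetical sketch only: condition names, probabilities, and functions
# below are invented for illustration, not Facebook's actual pipeline.

CONDITIONS = ["reduce_positive", "reduce_negative", "control"]
OMISSION_PROBABILITY = 0.5  # assumed chance a matching post is withheld

def assign_condition(user_id: str) -> str:
    """Deterministically bucket a user into one experimental condition."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return CONDITIONS[int(digest, 16) % len(CONDITIONS)]

def curate_feed(user_id: str, posts: list[dict]) -> list[dict]:
    """Withhold some emotionally charged posts, per the user's condition.

    Each post is a dict like {"text": ..., "sentiment": "pos"|"neg"|"neutral"}.
    """
    target = {"reduce_positive": "pos", "reduce_negative": "neg"}.get(
        assign_condition(user_id))
    curated = []
    for post in posts:
        if post["sentiment"] == target and random.random() < OMISSION_PROBABILITY:
            continue  # silently omit this post from the user's Newsfeed
        curated.append(post)
    return curated

# Example: two users may see different subsets of the same posts.
posts = [{"text": "Great day!", "sentiment": "pos"},
         {"text": "Awful news.", "sentiment": "neg"},
         {"text": "Lunch.", "sentiment": "neutral"}]
print(assign_condition("alice"), curate_feed("alice", posts))
```

The point of the sketch is simply that the manipulation happens silently at the filtering step; the users being bucketed never see or consent to it, which is where the ethical questions below begin.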

Much has been said; Michael links to a list and to some more essays that he’s found. Some people have expressed concerns about the role that corporations play in shaping our views of the world (via their online curation of it). Of course they do that every day, but this study focused attention on that curation process by telling us, at least for one week, how it was done for the subjects of the study. Others have expressed concern about the ethics of this study.

What do I think?

I’ve been dwelling on the ethical concerns. It helps that I’m teaching a course on Ethics and Computing. And that I’m doing it in Oxford, England. So I’m going to start from here.

First, this study has caused me to reflect on the peculiar situation that exists in the United States with regard to ethical review of science, and the lack of protection for individuals who participate in it.

In the United States, only institutions that take Federal Government research dollars are required to have Institutional Review Boards (IRBs). The purpose of an IRB is to review any study involving human subjects to ensure that it meets certain ethical standards. The IRB process has its origin in appalling abuses conducted in the name of science, like the Tuskegee Experiment. Facebook does not take Federal research money, and is therefore not required to have an IRB. The venues through which research gets published are also not required to perform ethical reviews of the work they receive.

I find myself asking whether individuals who participate in a research study, irrespective of who funds that work, have the right to be protected. Currently there’s an inconsistency: in some research the answer is yes, and in others it is no. It seems very peculiar to me that who funds the work determines whether the research is subject to ethical review and whether the people who participate have protection.

Second, most of the responses I’ve read have been framed in American terms. But social computing, including this study, aspires to be a global science. What I mean is that nowhere did I read that these results only apply to a particular group of people from a particular place. And with the implication of being global comes a deeper and broader responsibility: to respect the values of the citizens it touches in its research.

The focus on the IRB is uniquely American. Meanwhile I am in Europe. I’ve been learning more about European privacy laws, and my understanding is that they provide a broader protection for individuals (for example, not distinguishing based on who pays for the research), and also place a greater burden on those who collect data about people to inform them, and to explicitly seek consent in many cases. I interpret these laws as reflecting the values that the 505 million European Union citizens have about their rights.

I’ve not been able to tell whether European citizens were among the 600,000 people in the study. The PNAS report said that it was focused on English speakers, which perhaps explains why the UK was the first country to launch an inquiry. If European citizens were involved, we might get more insight into how the EU and its member nations view ethical conduct in research. If they were not, there is still some possibility that we will learn more about what the EU means when it asks “data controllers” (i.e., those collecting, holding, and manipulating data about individuals) to be transparent in their processes.

I’ve read a number of pieces that express concern about what it means to ask people to consent to a research study. Will we lose enough people that we can’t study network effects? How do we embed consent into systems? These are really good questions. But at the same time, I don’t think we can or should ignore citizens’ rights, and this will mean being knowledgeable about systems that do not just begin and end with the IRB. It’s not just because it’s the law, but because without it I think we demonstrate a lack of respect for others’ values. And I often think that’s quite the point of an ethical review: to get beyond our own perspective and think about those we are studying.
