Beki Grinter

Archive for the ‘social media’ Category

The Trouble with being a Loser

In social media, women on July 22, 2015 at 8:04 am

In the last couple of days I’ve seen a report from the Washington Post about a study that finds that men who are not so good at video games are more likely to be abusive to women. The study recognizes that women, far more than men, are likely to be targets of abuse online. It sought to find a reason for this sexist behaviour and used video game play as an example domain. Their conclusion is that men who are not good at those games feel threatened by women, but not (or much less so) by other men. Hence the lashing out.

It’s been bothering me. I saw it shared several times, and each time, I wondered why I felt bothered by it. I’ve just listened to Mary Beard’s lecture for the London Review of Books, and so I tried to think through this study using her lens. Her explanation of why women are subject to so much abuse explores how women’s voices have been silenced in the public sphere for the better part of 2,000 years: how the practices of oratory in use today build on a lengthy tradition of association with male voices, and how we are still culturally raised to accept the voice of authority as masculine rather than feminine.

In putting the study and Mary Beard’s lecture in dialogue, though, I came to see points of intersection. The idea that men feel threatened comes out strongly in both Dr. Beard’s lecture (“it’s not what you say, it’s the fact that you’re saying it”) and of course the loser gamer. But I find myself preferring Dr. Beard’s explanation. I think we can potentially feel sorry for a loser, but that risks dismissing or excusing their actions as those of a pathetic fool. And if we do that, we continue to reinforce patterns that make the public silencing of women’s voices an acceptable response to something that threatens a man.

The Right to be Forgotten and the Right to be Equal

In computer science, empirical, European Union, social media on July 16, 2014 at 4:48 am

I’ve said this before: the Internet can be a mean, misogynistic place. Could the Right to be Forgotten help with this?

The Right to be Forgotten is an EU ruling that gives people the means to ask search engine companies to remove data from their searches if it is irrelevant. It’s sparked a lot of controversy as well as questions.

The controversy could be characterized as pitting freedom of expression and information against individual privacy rights. Additionally, people have argued that it creates an unfair burden on intermediaries such as Google.

While I am open to these arguments, I find myself thinking about how freedom of expression and misogyny interact. Some of the things that are written about women on the Internet are vile, abusive, full of bile and hatred. Freedom of expression has always had limitations: libel (making false and damaging statements) and obscenity. Freedom of expression on the Internet seems never to have had these limitations, and so obscene, libelous statements directed at women exist in perpetuity on the Internet. Perhaps some might argue that it’s the responsibility of the person they are targeted at to take it up through the courts. But how, when the authors of these remarks are hidden? Which makes me think there is a role for corporations. Or at least a responsibility.

Some advocates for the Right to be Forgotten have argued that it reflects a social value of forgiveness. We all have the right to make mistakes and then over time have those mistakes disappear into a forgotten history. I agree.

But what I am asking and suggesting here is that the Right to be Forgotten may be a means to finally have an Internet that is fair to all. For a long time visions of the Internet have championed it as a platform welcoming anyone and everyone. The Right to be Forgotten may have a role in actually ensuring that it welcomes minorities, by proving once and for all that it will not tolerate discrimination.

538, the World Cup, and Facebook: Telling Stories about Data

In computer science, discipline, empirical, research, social media on July 15, 2014 at 6:49 am

As many of you already know, I’ve been following the World Cup. My team, Germany, won. Watching the World Cup has always involved reading news reports and commentary about the matches. This year I decided to include 538 in my reading.

538 is Nate Silver’s website. Nate Silver became famous for predicting US elections. He is a master of analyzing big data to make predictions. It works well for elections. But it doesn’t work so well for the World Cup, at least not for me. For one thing, the site predicted Brazil to win for a long time.

But it’s not just that 538 did not accurately predict the winners. I think that 538 misses the point of a World Cup. Crunching data about the teams doesn’t tell the whole story. And the World Cup is stories. Many stories. As a fan you learn the stories of your team and its history. You might start with world history; this is very salient as a Germany fan. England versus Argentina similarly (1984). It also involves stories about the teams’ previous encounters. Germany versus Argentina has happened before, even in Finals. And those stories are recounted, and reflected on, in the build-up to a game. You might tell stories about strategy. Certainly the Germans have been telling those, about a decade-long commitment to raising German players: how you structure a league to encourage more domestic players who can also play for the national side, and how you balance the demands of a national league and a national team.

In a nutshell, context matters. These stories of world politics, former World Cups, and the arc of time turn statistics about the players into something richer. 538 tells none of those stories. And I suppose that’s exactly what it wants to be, a “science” of the World Cup. But my World Cup isn’t statistics, it’s larger, more discursive and has a multi-decade narrative arc.

Reflecting on this caused me to revisit the Facebook study. Yes, that Facebook study. The study reported data. But it was data about people. At the same time, I think some of the response could be interpreted as people feeling that there was more to the story than a statistical report of the outcomes. Is it a similar kind of human dimension, an infusion of humanity? This is the question I’ve kept wondering about since reflecting on the problems of both of these data-driven reports. 538 reduces football to data, and in so doing it loses the human dimension. The Facebook study started as data, and the public raised human concerns and considerations. If I have a takeaway, it is that fields like social computing, or any data science of humans, need to pay serious attention to the stories that we tell about people. How we frame, or potentially reduce, people is something that the public will care about, for it is their humanity, their stories, that we seek to tell.

That Facebook Study

In academia, computer science, discipline, empirical, European Union, research, social media on July 8, 2014 at 8:07 am

Following Michael Bernstein’s suggestion that Social Computing researchers join the conversation.

Facebook and colleagues at Cornell and the University of California, San Francisco published a study in which it was revealed that ~600,000 people had their News Feeds curated to see either positive or negative posts. The goal was to see how seeing happy or sad posts influenced the users. Unless you’ve been without Internet connectivity, you have likely heard about the uproar it’s generated.

Much has been said; Michael links to a list and some more essays that he’s found. Some people have expressed concerns about the role that corporations play in shaping our views of the world (via their online curation of it). Of course they do that every day, but this study focused attention on that curation process by telling us, at least for one week, how it was done for the subjects of the study. Others have expressed concern about the ethics of the study.

What do I think?

I’ve been dwelling on the ethical concerns. It helps that I’m teaching a course on Ethics and Computing. And that I’m doing it in Oxford, England. So I’m going to start from here.

First, this study has caused me to reflect on the peculiar situation that exists in the United States with regards to ethical review of science, and the lack of protection for individuals that participate in it.

In the United States, only institutions that take Federal Government research dollars are required to have Institutional Review Boards (IRBs). The purpose of an IRB is to review any study involving human subjects to ensure that it meets certain ethical standards. The IRB process has its origin in the appalling abuses conducted in the name of science like the Tuskegee Experiment. Facebook does not take Federal research money, and is therefore not required to have an IRB. The institutions by which research gets published are also not required to perform ethical reviews of work that they receive.

I find myself asking whether individuals who participate in a research study, irrespective of who funds that work, have the right to be protected. Currently there’s an inconsistency: in some research the answer is yes, and in others it is no. It seems very peculiar to me that who funds the work determines whether the research is subject to ethical review and whether the people who participate have protection.

Second, most of the responses I’ve read have been framed in American terms. But social computing, including this study, aspires to be a global science. What I mean is that nowhere did I read that these results only apply to a particular group of people from a particular place. And with the implication of being global comes a deeper and broader responsibility: to respect the values of the citizens that it touches in its research.

The focus on the IRB is uniquely American. Meanwhile I am in Europe. I’ve been learning more about European privacy laws, and my understanding is that they provide a broader protection for individuals (for example, not distinguishing based on who pays for the research), and also place a greater burden on those who collect data about people to inform them, and to explicitly seek consent in many cases. I interpret these laws as reflecting the values that the 505 million European Union citizens have about their rights.

I’ve not been able to tell whether European citizens were among the 600,000 people in the study. The PNAS report said that it was focused on English speakers, which perhaps explains why the UK was the first country to launch an inquiry. If European citizens were involved, we might get more insight into how the EU and its member nations view ethical conduct in research. If they were not, there is still some possibility that we will learn more about what the EU means when it asks “data controllers” (i.e. those collecting, holding, and manipulating data about individuals) to be transparent in their processes.

I’ve read a number of pieces that express concern about what it means to ask people to consent to a research study. Will we lose enough people that we can’t study network effects? How do we embed consent into systems? These are really good questions. But at the same time I don’t think we can or should ignore citizens’ rights, and this will mean being knowledgeable about systems that do not just begin and end with the IRB. It’s not just because it’s the law, but because without it I think we demonstrate a lack of respect for others’ values. And I often think that’s quite the point of an ethical review: to get beyond our own perspective and think about those we are studying.

MOOC Participation: Diversity and Assumptions of Development

In computer science, discipline, empirical, research, social media on February 12, 2013 at 11:30 am

Continuing my series of posts about MOOCs. Today’s is about a type of open/development rhetoric I keep hearing associated with MOOCs. It’s well meant, I am quite sure, but I’ve heard the following sentiment: MOOCs will allow anyone from any continent to access content, and that in turn leads to increased education and skills for all.

I have a number of problems with this argument.

Starting with the obvious, this sentiment makes important assumptions about access: that access to the Internet and its content is uniform across the world. But it’s not. The Internet is a very different experience if you have a smartphone as your only means of access versus if you have a laptop. Behind the hardware, there are questions of corporate policies and pricing mechanisms that influence access. Bandwidth caps and bandwidth pricing can influence how people use their phones, and in many parts of the world also how they use the wired network.

Behind these crucial practical questions of access lurk other assumptions, which warrant questioning. Is the content we create relevant or useful for everyone? What assumptions do the producers of content make about, say, what has been previously taught? What assumptions are made about the types of hardware and software the students have access to? And most critically, what assumptions get made about why the person is taking the course and whether that content will ultimately be most useful?

Although it’s not used too much, I have heard the word “Africa” used to describe diversity. I do think it’s well meant, but it risks collapsing all of these questions into a stereotype of a person. Africa is not a person, nor is it a country; it’s a continent of great diversity in all senses. A person from Africa may well contribute to diversity in a MOOC setting, but so might a person from America.

Like others, I see this as part of understanding the participation divide that shapes the Internet today. Some of that divide is the question of access, its costs, modalities, and so forth. But that’s not all that shapes the participation divide. When we overly simplify an entire continent we close down the question of what shapes participation in very problematic ways. If we are really committed to understanding how online education might help more people learn, the participation divide is precisely the question we ought to open up, to really take account of the highly diverse population of people who have some access to the Internet. Because it’s only when we actually take diversity seriously that we have any shot at getting to something better than more education for the already well educated.

Romney’s binders

In social media on October 18, 2012 at 7:13 pm

Something curious is going on on Amazon.com.

In the wake of Mitt Romney’s comment on Tuesday night:

“I said, ‘Well, gosh, can’t we — can’t we find some — some women that are also qualified? I went to a number of women’s groups and said, ‘Can you help us find folks,’ and they brought us whole binders full of women.”

People have taken to Amazon to write reviews of binders. Ones that take up the question of whether you can fit women into binders, whether binders come with women, whether they appeal to the 47%, and so forth. It was first picked up about a day ago, as a story retweeted on Twitter, which I suspect led to more people writing the reviews.

These reviews are a vehicle to express the reviewer’s dislike of Mitt Romney. But what an interesting place to do it. The night of the debate I watched my Facebook stream (mostly left-leaning people, but not exclusively) and Mitt’s comments about binders came up there. Indeed, it’s come up over and over again in the last few days. I am sure that Facebook and Twitter are being used by supporters of both candidates, and that doesn’t surprise me.

So what to make of Amazon reviews as a site of political expression? It’s certainly not the first time that Amazon.com reviews have been used for purposes beyond a recommendation. The Wolf t-shirt is a famous example. You could say that the binder reviews have elements of the same humour (at least for those of a non-Republican or non-Romney persuasion). But you see other turns there: expressions of anger about the status of women, other types of expression. And at least to me that is what makes these reviews fascinating.

Sharing Instruments: SMS Logging

In computer science, discipline, empirical, research, social media on October 5, 2011 at 8:33 am

This is a second post sharing instruments to help others with their empirical research.

One of my most cited papers is “y do tngrs luv 2 txt msg?”, which was a study that I did with Marge Eldridge when we both worked at EuroPARC in Cambridge, UK. What interested us both was how rapidly text messaging had been adopted by teens. What were they using it for? Why?

In the spirit of making more of my materials available I wanted to share the diaries that we asked the teens to keep. There are short excerpts in the paper, but here they are in full. I should say that we were trying to balance portability and privacy against collecting the type of data that would allow us to gain insight into how the technology was being used. This is why the diaries look the way they do.

We asked them to log all the messages that they sent and received, and provided instructions for how to use both the sent and received forms.

Irene, Turks and Caicos and Google Search

In empirical, HCI, ICT4D, social media on August 24, 2011 at 10:22 am

A few weeks ago Hurricane Irene hit Turks and Caicos. I wanted to know how bad it had been because I went there a couple of years ago and thought it was beautiful.

So, I did what I often do: I typed in Irene and Turks and Caicos. Back came all these reports about the fact that it had passed through the country, reported by American newspapers and media outlets who mentioned it in passing in their lengthy stories about how it was going to impact the United States. There was one exception on the second page, a report from Cuba. This focused on how Irene had hit Turks and Caicos as part of reporting on the storm’s general track and the fact that it was not headed to Cuba. One damage report did eventually surface, from the Bahamas.

I was reminded once again how search is not equal. Type in those three words and what you predominantly get are reports from the United States about the United States. I use Google each day for a variety of information needs. This was an opportunity to recalibrate, to reflect on where information comes from and the implications of that for what gets known.

Twitter before Shockwaves

In empirical, HCI, research, social media on August 23, 2011 at 2:30 pm

About 30 minutes ago I felt, here in Atlanta, GA, the shockwaves from the earthquake that happened in Virginia. Since earthquakes are rare on the Eastern Seaboard, when I felt them I actually wondered whether I was having a mild dizzy spell. I logged on to the US Geological Survey website to find out whether I had actually experienced an earthquake.

My first clue that I was right was that the website was temporarily unavailable, and then it took a long time to load. My working hypothesis is that a lot of other people were checking it too. Then of course I went to Twitter.

In Atlanta it’s Will and Jada (Smith and Pinkett, who’ve announced that they are separating) who are trending on Twitter. A first for me: to use Twitter as a resource and not know what hashtag to look for. But after a bit of searching I found something even more intriguing. Several people in New York City were reporting that they learnt about the earthquake in Virginia before they felt the shockwaves.

I guess Twitter beat the shockwaves for some people. I would love to know more about how that works. Did they happen to have active Twitter friends from Virginia whom they were following (I suppose most likely), or was it breaking news services, or even retweets?
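For a sense of why that is even possible, here is a rough back-of-the-envelope sketch in Python. The numbers are my own assumptions for illustration (a typical seismic surface-wave speed of a few kilometres per second and a central-Virginia-to-New-York distance on the order of 450 km), not measurements from this earthquake.

# Back-of-the-envelope: can a tweet plausibly outrun the shaking?
# The speed, distance, and posting delay below are assumed values,
# chosen only to illustrate the orders of magnitude involved.
wave_speed_km_s = 3.5      # assumed seismic surface-wave speed
distance_km = 450.0        # assumed distance from the epicentre to NYC
wave_travel_s = distance_km / wave_speed_km_s   # roughly 130 seconds

tweet_delay_s = 30.0       # assumed time to type, post, and read a tweet

print("Shaking arrives after ~%.0f s; a tweet could arrive after ~%.0f s."
      % (wave_travel_s, tweet_delay_s))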

And in the oddest turn, the University of Toronto Press has announced a 20% discount on their books today. Code: Earthquake20.

Ah, so a partial answer. Some people saw it trending on Twitter before they felt the shockwaves. Not true for me, as I said. And another: some people in New York were following people in DC and saw the tweets coming in from there (I’ve seen that multiple times now). Someone in South Carolina read a tweet from DC and then felt it. It’s a fascinating way to build up a map of the spread.

An Update from My Scales: Why I Won’t Tweet My Weight and Persuasive Computing

In empirical, HCI, research, social media on August 3, 2011 at 9:59 am

I’ve just read the fabulous Fit4Life paper from this year’s CHI, which presents a fictional system called Fit4Life designed following Persuasive Computing principles, and then uses it to critically reflect on persuasion in design. I think it’s important to say from the outset that one fairly common critique of Persuasive Computing is that the interaction is typically framed as being between a tool and the user it is trying to persuade. But when you use terms like tool, the agency embedded in the machine gets divorced from the person who put it there. It’s designed in.

And now I have a real example.

I recently purchased a wi-fi scale. I like technology, and this scale not only computes my weight but also my BMI and the amount of fat versus muscle I have, and then sends that information to a web app (it also has an iPhone and iPad app). Each morning I step on and learn my fate. I believe that scales can compute weight relatively accurately. I think I even understand how BMI is computed, although I note that webpages that describe BMI also describe other variables and uncertainties in the measure. My scale does not come with a range. It’s definitive. It’s also definitive about my fat and lean ratios. I have no idea how those are computed, how accurate they are, or what variability may exist in making these computations.
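For what it is worth, the BMI part at least is a standard formula: weight in kilograms divided by the square of height in metres, usually bucketed into the commonly cited WHO categories. Here is a minimal sketch in Python; the example numbers are purely illustrative, not my readings, and this says nothing about how the scale arrives at fat and lean ratios.

# Standard BMI formula: weight (kg) / height (m)^2, with the commonly
# cited WHO category cut-offs. The example values are illustrative only.
def bmi(weight_kg, height_m):
    return weight_kg / (height_m ** 2)

def bmi_category(value):
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal"
    if value < 30:
        return "overweight"
    return "obese"

example = bmi(70.0, 1.70)   # about 24.2
print("BMI %.1f (%s)" % (example, bmi_category(example)))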

On the web I can review my data. And there someone has made some fascinating design decisions. First, I can choose to tweet my weight. It will be a cold day in hell the first time I decide to do that. What were the designers thinking? Then I wondered whether it was meant as social encouragement. I’ve noticed people’s FitBits letting the Facebook world know of their step activity for the week. Is the idea that I could get encouragement from others if I posted my weight? What would I do on days it went up: immediately tweet that it’s due to muscle gain, admit that the Carbonara was good last night, or just be ashamed? I find myself thinking that step activity is a bit different from weight; there’s a relationship but also an ambiguity. I also understand that WeightWatchers groups share their weights, for good or bad. But that’s among a group who are sharing together, and that seems different from simply broadcasting it on Twitter.

Second, the visualizations of weight and the fat-lean ratios also come with ideals or “objectives”. They tell you how much you have to lose or gain in order to be ideal. Since I don’t understand how my fat-lean ratios were computed, or how the fat, lean, and weight ideals were computed, or what variables were not taken into account, I’m left with a message about my body that consists of three numbers. There’s nothing I can manipulate except of course my weight (and the fat and lean within that, and I’m not even quite sure how to manipulate those: strength training and avoiding fatty foods would be my guesses).

These are problems of over-quantification and rationalization of the body: of being told by a tool what is wrong or right, without any account of its measures and the weaknesses they might have, and with relatively little control for the user over their own data. And they are all things discussed in the Fit4Life paper. Thanks to Fit4Life, I’ve been reminded to keep an eye on my responses to my scale. Thanks of course to media pressure, I’m hardly immune to body ideals though.