Beki Grinter

Archive for the ‘ICT4D’ Category

Indigenous Weather Forecasting in the United Kingdom

In discipline, ICT4D on July 25, 2012 at 12:34 pm

At ICTD 2012 I saw a paper about a system for Kenyan farmers that combines weather forecasts provided by a meteorological office with the indigenous knowledge that locals use. One of the arguments made in the paper was that the forecasts coming from the office were too broad in scope: they covered too much terrain to be useful to the farmers. But because those forecasts were scientific, the other ways of knowing were sometimes characterized as primitive. The paper attempted to integrate both ways of knowing into a single system for helping the farmers.

Setting aside the paper’s system, I wanted to return to the idea of indigenous weather forecasting. Someone in the audience stood up and commented that this system might be interesting to try in Yorkshire, a county in the UK. People laughed a bit, perhaps uneasily. And I could only think of the rhyme I learned as a child: red skies at night, shepherd’s delight; red skies at morning, shepherd’s warning (which, in my experience, doesn’t appear to work in Georgia). I wondered what sorts of knowledge Yorkshire men and women used in the Dales and on the moors to make sense of their weather.

Last night I was watching a program about St Kilda, an abandoned island about 100 miles off the coast of Scotland. On the program a historian described at length some of the indigenous weather forecasting practices that islanders used. The weather is extremely changeable, so the locals would watch which shore the waves were breaking on (to one side of the bay, good weather; to the other side, a storm coming in). They would watch where the birds settled on the islands, using their landing points as knowledge about what was to come. And then the program showed us the waves and the birds doing things that signaled poor weather, and the poor weather came in (of course it could have been a bit of video trickery; I hope not, though). The historian also explained the point of reference: big Atlantic storms, some bringing winds as strong as 100 miles per hour, dangerous on an island that has some of the highest sea cliffs in the UK.

I think it’s easy to see indigenous knowledge as something other people have, a foreign concept, perhaps especially while I write from my desk in an institute of higher learning that is entirely devoted to the production of scientific knowledge (if I had a dollar for every time the word science was used…). But unless it’s just the Kenyans and the British, and I don’t think it is, I suspect indigenous knowledge is all around us.

And now I’m really curious what sorts of indigenous weather forecasting methods Georgians use, and what their points of reference are when they use them.


A community for HCID/4D

In discipline, HCI, ICT4D, research on May 30, 2012 at 1:12 pm

I had a number of conversations with people about my last post on HCI4D (for those of you who didn’t read it, it was a short reflection on the role that the 4 plays in ICT4D and its implications for HCID/HCI4D). I’d like to begin by thanking everyone who wrote to me and engaged me in these discussions; as usual, I learnt a lot. This post reflects some of what I learnt, and some thoughts about what might be done.

I learnt that there has been some discussion about forming a community (a la UIST and CSCW) within SIGCHI focused on HCI4D. As part of that, the name was discussed: several things about it, including the term Development and the question of its relationship to HCI. I will try to summarize the debates as I understand them. Development is a concept with a long, complicated, and problematic history. Development for whom, by whom, how, with what objective? (Asking any of these questions when faced with the term gets at some of the issues.) The relationship concern is about what it means to separate HCI4D from HCI, which a community does in a way: it marks a set of things as being somehow the same as each other and different from everything else. There are also definitional boundary challenges: what is the set? For example, as I have mentioned before, I still find it strange that ICTD can only happen in some countries and not others.

These are real concerns. And I wonder whether rather than engaging them as a set of things that make the formation of a community complicated, they might be precisely the reasons to create a community.

As I understand it, communities are organizational tools. They support the growth and awareness of a collection of concerns. And HCI4D is a really interesting space in which to discuss the types of issues that the very discussions about its name have raised. Development is a historically complicated term, and that history has an impact on contemporary practice. But that doesn’t mean contemporary practice shouldn’t be explored, its lessons understood and used to reflect back onto Development (whether what is being done qualifies, whether the agenda is different, whose agenda it is, and the role of location, partnerships, and so forth). A community could give Development a central place in its agenda.

Concerning the relational boundaries, I also think the value a community could bring is reflection on why the distinction was drawn, whether it’s the right distinction, and so forth. In other words, use the feeling that it is complicated to split whatever it is we do into HCI and HCID/4D as a point of reflection on what we are attempting to accomplish, or how the split might have arisen. One thing that seems different to me is failure. There seems to be a lot more willingness to discuss failures in the HCID/4D context. Is that because there are more of them? Or has HCI constructed a discipline in which failure is not a learning opportunity but a paper that could not be published? Another lesson that might come out of comparing the two is methodological: I think some of the methods we’ve developed in HCI do not import straightforwardly into HCID/4D, but they were never described as having limitations.

Finally, I think the project of examining the constituencies served by HCI could be done through HCI4D. Its presence suggests that HCI has focused on a subset of people (hence the problems with methods), and it makes that focus more visible. In the end I think a regional grouping of people is tricky, and it will be tricky for ICTD/HCID too. But a community makes the issue more visible and opens it up for discussion, and I feel that is something worth doing.

I suppose what I am saying, in a rather clumsy way, is that it’s the very concerns about HCID/4D that make it interesting to create a community around: not a community that is awkward about its presence, but one that uses these challenges as motivating concerns for reflection and discussion.


In discipline, HCI, ICT4D, research on May 24, 2012 at 11:05 am

One of the many things I’ve learnt as I have learnt more about ICTD is that there is an intentionality to the presence of the 4 in some formulations of the name. In other words, Information and Communications Technologies and Development is different from Information and Communications Technologies for Development. And it’s not just a difference in words; the choice means something.

Information and Communications Technologies and Development concerns the relationship between technologies (whether in use or being built) and development. By contrast, Information and Communications Technologies for Development is the study of what should be done and how it should be done. It ties research to practice and takes a stronger moral stand about the outcome: that something should actually happen.

I like this because of the degree of intentionality it gives to the process of doing research and its outcomes for the people who participate in that work. Of course you can see the same type of intentionality in participatory design, action research, and some of the recent discussions about Value Sensitive Design. But there the intentionality is tied to the methods used; here it’s about the discipline itself, the corpus of knowledge, and the common shared values of the community.

In HCI the term HCI4D has been gaining traction (I have not seen the term HCID in use), but perhaps it’s time to have the same type of discussion about whether we are for or and. And this discussion would come at an interesting time in HCI, as I have heard other discussions about whether there is a common core in the field, and if so, what it is that unites the collection of very diverse activities in HCI.

The Role of Free Will in HCI

In computer science, empirical, HCI, ICT4D, research on April 17, 2012 at 10:33 am

I have been wondering about the role of free will in HCI research lately. It’s a statement of the obvious to say that there are many different theories that inform HCI research, and those theories make different assumptions about knowledge and truth. And sometimes when I read or listen to conversations about those theories, and the methods associated with them, I hear talk about choice. Most specifically that we can choose the most appropriate theory for the research that we want to conduct.

But can we? Can we really choose among them? Is it that simple? I am not so sure. Perhaps it’s just me, but I find myself drawn to theories and methods that are commensurate with the values I hold. I tend to choose things that produce results (even surprising ones) that I find compelling.

I should say that I am not opposed to others using methods that do not align with my values. In fact, I find the resulting scholarship quite interesting. But I also think I tend to be drawn to those papers in ways that take the results and use them to ask questions that are answerable using methods and theories that align with my values.

As HCI reflects on its methodological and theoretical plurality, I would like the field to reflect on how it talks about those methods and theories, whether we are in fact free to choose among them, and if so, how free we are.

It’s all in the way you say it…

In academia, discipline, empirical, HCI, ICT4D, research on April 9, 2012 at 7:02 pm

I was reading a paper when I came across the following sentence…

We intentionally biased our data sample in terms of type and size

There’s so much going on here, but let’s start with the high-order bit: saying this in a paper might as well be accompanied by the following sentence.

Please reject this now.

Let’s start with the statement itself: what could sound good about intentionally biasing a data sample? Well, I did have one thought: it’s better than unintentionally biasing it (which just seems careless). No, the authors knew what they were doing. And, also a plus, they admitted it in case the reviewers didn’t notice. Whoo hoo. As a reviewer I would have given up and just written, “The authors admit that they’ve conducted a flawed experiment.”

Moving past the idea that the authors are flouting the rules of experimental design, this phrase raised other questions.

The paper in which I found this sentence was a qualitative piece of work. So, one question: what is bias in qualitative sampling? In fact, some forms of sampling are quite intentionally the pursuit of particular people, people with a particular expertise for example. (Imagine you’re three cycles into your Grounded Theory and you have some very particular questions that only a few people in the corporation you’ve been studying can answer, because those questions fall within their job responsibilities. Well, then you’re either going to select those people to interview or you’re going to waste a lot of time trying not to be selective in whom you select to talk to.)

Questions about size can be complicated as well. Size often suggests a numerical size but as I’ve said before, 12 does not equal theoretical saturation (tip: having a fully worked out theory does).

Behind these questions lies a need for care with terminology. The authors talk about data samples, bias, type, and size: words usually applied to experimental design. These are not the right ways to talk about qualitative research. Sure, you want to talk about who you interviewed or observed, your participants, but they are rarely a data sample. They are the people who led you to a particular set of data. The logic of who they are is not about sampling from a population to ensure coverage, but about selecting people who can help develop the theory or analysis; and the size depends on different ways of determining completion.

I know that this sentence was written without much thought. It was honest, but it sets up the reviewers in a variety of ways, as I hope I’ve pointed out. To put it crassly: don’t use experimental terms to write about non-experimental ways of conducting empirical research. It’s just ugly.

ICTD: Talking about talking about Kenya

In academic management, discipline, empirical, ICT4D, research on March 29, 2012 at 12:38 pm

My final ICTD post has taken a while to write; I wanted some time to reflect upon the experience. The experience in question was watching a series of talks focused on Kenya from both Kenyan and non-Kenyan researchers. Here’s a post that summarizes one concern that was raised for non-Kenyan researchers: the “researcher effect.”

But I think there was something else going on (it might be the “othering effect,” and I would welcome feedback on that). I sensed a significant difference in the way Kenya was being discussed when I compared foreigners talking about Kenya with Kenyans talking about Kenya. At ICTD, I heard from Kenyans about the excitement surrounding a vibrant technology innovation culture, one that’s having an impact not just on Kenya but around the world; Ushahidi is a great example of this. By contrast, the talks from foreigners seemed to focus more on problems, ones that needed to be, and could be, addressed through technology. It was well meant, of course.

But these two discourses are very different: one is a discourse of opportunity and the other a discourse of problems. And I think reflecting on these differences is very important, because the United States (and countries like it) have an abundance of the types of institutions that produce and control the production of scholarly discourse. This gives us disproportionate control over it, including over what constitutes knowledge about places and people who are not in the United States. We have great power to amplify perceptions of other places and people and to give those perceptions value through the legitimacy conferred on all scholarship.

Also, I think it’s a real strength of the ICTD conference that there are enough people here who are not foreign to remind us of how foreigners talk about their home. I’ve been wondering what it would be like if more of our participants came to CHI. What would they think about the ways we talk about them?

ICTD: Is it the Right Categorisation

In C@tM, computer science, discipline, HCI, ICT4D, research on March 14, 2012 at 9:42 am

Erik Hersman writes about why he doesn’t like the term ICT4D. The opening lines really resonate with me: if a project involving technology is done in poor parts of places like the United States or Europe, it does not get the label ICTD. So why does it get that label if it’s done in sub-Saharan Africa?

It echoes the remarks I blogged about from the opening talk, where the speaker asked how we would feel if the focus of One Laptop Per Child was Alabama, or, I think, many other parts of the United States. What would we be saying? I’m already aware that people in low-income neighborhoods can and do feel that they are the ongoing, and unfair, target of the United States’ medical community’s criticism. And they resist the messaging, viewing it as discriminatory.

Erik Hersman goes on to write about a variety of African start-ups. Are they ICT4D? MixIt, for example. What about technologies like Ushahidi, which started in Africa but has been used in settings that are not ICT4D?

At one level you could view this as a labeling problem. But there is also a research community gathering around the label. As this field gains traction and matures, it seems like a good time to ask whether it’s the right grouping. I’ve long held the view that we ought to look for common points of intersection among ICT interventions in any economically disadvantaged community. We’ve called this Computing at the Margins here at Georgia Tech. I’m not sure that’s the right label either, but the grouping is broader, and the idea is that what these groups might share is a need not for more access to the same technologies, but for technologies that speak more specifically to the values these groups hold (i.e., systems designed for them).

But here at ICTD 2012 I’m asking myself a second set of questions, fueled by the blog post referenced in the ICTD 2012 Twitter stream, the plenary, and other remarks I think I have heard during the sessions. The questions are:

If the people who live in these places, who are technical innovators (colleagues and partners), find this term problematic, should we use it?

Is ICTD a form of “othering”? (Of course you can ask this about Computing at the Margins too.)

P.S. More has been written about ICT4D/ICTD, with resources gathered here, and there are many more dimensions to the debate about the name and the goals of the enterprise than those I’ve blogged about.

Talking about Failures in Research

In computer science, discipline, ICT4D, research on March 13, 2012 at 10:23 am

The last open session I attended yesterday focused on failures in ICTD. What I learned is that there are clearly a number of different ways in which projects can fail in their deployed environment (i.e., not failures in the laboratory), but that it is not clear that writing about that type of failure is accepted. So one reason to have this open session was to openly account for failure, and for the role that failure plays in terms of the knowledge it generates.

There’s a lot to say about this. Here are my thoughts, with a caveat: if I sound vaguer than usual, it’s because some of the speakers asked that their talks be off the record. Not only does that suggest the magnitude of the difficulties associated with talking about failure, but it also means I’m trying to preserve privacy here.

Study Design and Values. I’ve heard this before, but the session provided new examples of how methods designed in the West and for Western settings just don’t translate well, because they make all sorts of assumptions. Ideas about time yielded some fabulous examples: assuming that people’s orientations towards time are the same as Western notions of what it means for an activity to start, for the day to start, for the academic year to start. It’s easy to see how beliefs about time can be highly problematic for study design.

Interestingly, one thing that came up during this session was how school can compete with the harvest, i.e., people will stop sending their children to school when it is time to harvest crops. That reminded me of my grandfather, who had very similar experiences as a child in rural England. I guess he wouldn’t have participated in research that took place in school during the harvest either.

Methods and Foundations. I have written about the above as a practical problem: constraints that have to be accounted for and worked into the study design so that the study doesn’t fail. That would be a fair reading, but I think I heard something else too. Again, an example: individual assessment, the evaluation of how an individual does with something (a test, a system, both, or more). Individual assessment makes two assumptions: that it is individuals (rather than, say, groups) that should be assessed, and that assessment is a legitimate and useful outcome. This is not just a methods challenge; it is rather more. Assessment is core to any discipline whose knowledge outcomes have to be proved through evaluation. Problematizing assessment problematizes that type of knowledge production.

Sponsors. In more than just this session there have been discussions about the role of sponsors. As an outsider, it seems to me that there is a wider array of potential funders for this research. But that wider array is matched by a wider array of desired outcomes. What happens when the actual experience doesn’t match the desired outcomes? Sometimes it’s easy to see the influence of sponsors. I’ve written about my own experiences in industrial research: understanding why the corporation pays for research and what implications that has for your research. Applying for grants is also writing about outcomes to sponsors; sponsors do shape outcomes. Even if the sponsor asks for “good science” as an outcome, that’s still a value, and with respect to failure it’s worth asking whether failure constitutes “good science” and, if it does, why it’s largely hidden from the outputs of current “good science.”

Taboo Topics. Another failure mode seems to be ignoring topics that are pervasive in practice and central to the experience of ICTs, but difficult to put into explicit research focus. My experiences with the study of religion and ICTs give me some insight into what it’s like to take up these taboo research topics. What can I say other than to thank all the people who made that work possible, most of all Susan Wyche, but also the reviewers of the papers, those who came to talks, those who built on the work, and those who wrote letters in support of both myself and Susan; in other words, the entire community that it takes to assess and determine the legitimacy of scholarship. I also thought I heard that by avoiding these topics, we don’t just risk missing major causes of the appropriation and rejection of technologies; given that these influences would be at work anyway, their presence in practice but absence in scholarship would lead to very problematic, under-explained results.

Finally, I wondered about my intellectual roots. Do HCI and the other fields I come from do any better? What is our culture of discussing failure? I can think of examples where researchers reported on deployed systems and turned up to discuss what worked and what didn’t. But those weren’t systems that the community built, and I don’t have a good answer to how we talk about failure closer to home. Now I know it’s a good question to ask.

Open Sessions, Othering and ICTD

In academia, computer science, discipline, ICT4D, research on March 13, 2012 at 9:20 am

ICTD has a new-to-me idea: the open session. People volunteer to coordinate a session around a particular topic, and conference attendees are free to go to any that interest them. Yesterday I attended one on anthropological debates and how they pertain to ICTD. The topic of “othering” was discussed. And one of the things I like about ICTD is that there were plenty of opportunities to hear from those who live in places that are much more likely to be othered. I was reminded of the keynote I blogged about earlier, and the remark about why Alabama is not the focus for One Laptop Per Child, made to make a point about how we talk about those who are the object of that focus.

I think I would have felt like an outsider at ICTD anyway. I’ve not published here, and watching my co-chairs interact with their community, I am aware that I don’t have the same history with it (although I plan on working to change that!). But what I am most enjoying about ICTD are these other ways in which I am being sensitized to a variety of issues. Fabulous stuff, keep it up ICTD!

ICTD: The first post

In ICT4D, research on March 12, 2012 at 10:59 am

I just listened to the keynote by David Kobia of Ushahidi.

There’s lots to say about the keynote, but here are just a few thoughts. He began by talking about his time in Alabama, where he studied Computer Science. Along the way he pointed out that Alabama faces challenges, challenges that are sometimes attributed to the developing world as if they were not present in the developed world. Developed countries are economically uneven. Then he asked how people would feel if they heard One Laptop Per Child people talk about deploying those machines in Alabama. He left it unsaid, but I presumed the question was how we would feel if we were talked about in those terms. He also reminded me of the curious artificial split that this conference itself makes: the D does not stand for Developed but rather Developing, so not Alabama then.

If I understand the history correctly, Ushahidi (and other things like it) has been very influential in creating a tech innovation culture in Kenya. This has morphed into things like the iHub, a space in Nairobi where people can come together to build systems, share ideas, and so forth. iHub has spawned research @ ihub, promoting African-based, African-focused research. And he said, several times, that it was going to be important. I believe him. All their sponsors are technology companies from the developed world, and I wondered how that worked.