The last open session I attended yesterday focused on failures in ICTD. What I learned is that there are clearly a number of ways projects can fail in their deployed environment (i.e., not failures in the laboratory), but it is not clear that writing about that type of failure is accepted. So one reason to hold this open session was to account openly for failure, and for the role failure plays in terms of the knowledge it generates.
There’s a lot to say about this. Here are my thoughts; one caveat: if I sound vaguer than usual, it’s partly because some of the speakers asked that their talks be off the record. Not only does this suggest the magnitude of the difficulties associated with talking about failure, but I’m also trying to preserve privacy here.
Study Design and Values. I’ve heard this before, but once again there were new examples of how methods designed in and for Western settings just don’t translate well, because they make all sorts of assumptions. Ideas about time yielded some fabulous examples: assuming that people’s orientations towards time are the same as Western notions of what it means for an activity to start, for the day to start, or for the academic year to begin. It’s easy to see how such beliefs about time are highly problematic for study design.
Interestingly, one thing that came up during this session was how school can compete with the harvest: people will stop sending their children to school when it is time to harvest crops. That reminded me of my grandfather, who had very similar experiences as a child in rural England. I guess he wouldn’t have participated in research that took place in school during the harvest either.
Methods and Foundations. So far I have written about the above as a practical problem: constraints that have to be accounted for and worked into a study design so that it doesn’t fail. That would be a fair reading, but I think I heard something else too. Again, an example: individual assessment, the evaluation of how an individual does with something (a test, a system, or both). Individual assessment makes two assumptions: that it is individuals (rather than, say, groups) that should be assessed, and that assessment is a legitimate and useful outcome. This is perhaps not just a methods challenge but something more. Assessment is core to any discipline whose knowledge outcomes have to be proved through evaluation. Problematizing assessment problematizes that type of knowledge production.
Sponsors. In more than just this session there have been discussions about the role of sponsors. As an outsider, it seems to me that there is a wider array of potential funders for this research. But that wider array is matched by a wider array of desired outcomes. What happens when the actual experience doesn’t match the desired outcomes? Sometimes it’s easy to see the influence of sponsors. I’ve written about my own experiences in industrial research: understanding why the corporation pays for research and what implications that has for your research. Applying for grants is also writing about outcomes to sponsors: sponsors do shape outcomes. Even if the sponsor asks for “good science” as an outcome, that’s still a value, and with respect to failure it’s worth asking whether failure constitutes “good science” and, if it does, why it’s largely hidden from the outputs of current “good science.”
Taboo Topics. Another failure mode seems to be ignoring topics that are pervasive in practice and central to the experience of ICTs but are difficult to put into explicit research focus. All of my experience with the study of religion and ICTs gives me unique insight into what it’s like to take up these taboo research topics. What can I say other than to thank all the people who made that possible, most of all Susan Wyche (but also the reviewers of the papers, those who came to talks, those who built on the work, and those who wrote letters in support of both myself and Susan: in other words, the entire community it takes to assess and determine the legitimacy of the scholarship). I also thought I heard that by avoiding these topics, not only might major causes of the appropriation and rejection of technologies be missed, but, given that these influences would be at work anyway, their presence in practice and absence in scholarship would lead to very problematic, under-explained outcomes.
Finally, I wondered about my intellectual roots. Do HCI and the other fields I come from do any better? What is our culture of discussing failure? I can think of examples where researchers turned up to report on deployed systems and discuss what worked and what didn’t, but those weren’t systems that the community itself built, and I don’t have a good answer to how we talk about failure closer to home. But now I know it’s a good question to ask.