Some time ago, my colleague Mark Guzdial wrote eloquently about his concerns with Computing4Good. I also have concerns, and after failing to make my case verbally, I’ve decided to explain here.
Computing4Good (C4G) is an initiative that was started at the College of Computing about 2 years ago. It was well-intentioned, an expression of how some of the work we do within the College can have societal impact as well as intellectual impact. At first blush, Computing4Good is a very appealing phrase, especially for branding. My concerns are with the research that we describe as Computing4Good.
In his post, Mark observes that even though education was not included as part of Computing4Good, it is in fact a public good. Far more problematically, it is the public good that an institution of higher education must stand for, in everything it does.
If education is not a public good, we don’t exist.
(NB: I also believe that education is a public good. Education made me what I am, because it not only taught me, it also gave me wings to fly. Computer Science is responsible for my emigration. I value it so highly that I dedicated my hard-earnt Ph.D. thesis to the British taxpayers who made it possible. I choose not to go into more depth on this because Mark has already done an excellent job).
My own concerns about C4G also turn on defining what’s good. The best example I can give is that I supervise research on the uses of Information and Communications Technologies (ICTs) for religious purposes. For roughly 4 billion people, religion (of one type or another) is probably something that they would describe as good. For the people I’ve met through this project, good seems like an understatement: religious belief brings them a sense of wellbeing, of purpose, of peace, and so forth.
But as a scholar, I find answering the question of whether religion is good far more difficult. I don’t think religion can be reduced to terms like good. In describing religion as good or not, we lose its rich human complexity (and the nuance of its relationship to technologies). While I don’t believe that it was intended, a consequence of Computing4Good is that it forces us to ask and answer the question of whether religion is good, because we have to decide whether it is included in this initiative. Indeed, it asks us to do that for everything we do within the CoC. Further, because this is a public-facing initiative, we communicate to people what C4G includes, and by absence what it does not include. This means that we can’t easily defer or ignore answering whether every research project in the College is computing for good or not.
You don’t have to be a postmodernist to understand the problem of the category of the other. Two frequent “others” that come up in relation to good are bad and evil. This means that we could be perceived not just as deciding what is C4G, but, through omission, what’s bad and what’s evil. Good is a value judgement, and not only is it too simple for the complex socio-technical world we live in; by being publicly visible with this type of assessment of our research products, I think we have taken several related risks.
First, I wonder how well qualified we are to make that judgement. If it seems “good” to us, is that (I hate to say it) good enough? How are we going to know whether we represent all of the potential stakeholder perspectives on the problem? How do we, the decision-makers, make that decision, and what are the criteria that make something good? The example I would use is ICTs for women’s rights. Is that good? The answer probably depends on understanding an individual’s religious, political and cultural contexts and values, not to mention the values of those who are the target of the intervention (bringing it closer to home, how about ICTs that help women find abortion clinics? Answering the question of whether that’s good or not brings you into an extremely charged debate in the United States). I’ve not seen any description of our criteria, other than Technology + Social Activism = Computing 4 Good. Of course, who is even permitted to be socially active (i.e. to be a producer of ICTs), particularly in the public arena in which C4G takes place, also depends on religious, cultural, economic, and political contexts.
Second, the empirical risk. Computing4Good implies that people will be involved in the outcomes of our research. And that in turn raises the question of what they think. They are stakeholders in the outcomes of our research, and bring their own value systems to bear on our products. An example. In the course of the religious ICTs project I met a minister, and while I have always been on the fence about the question of religion, he makes me consider belief very seriously. Each time we interact, I find what he does, and why he does it, so impressive that it is humbling. And much of this is focused on his outreach work. We’ve talked about how he uses ICTs in his outreach work, and I know from the way that he talks about his work that he thinks this is Computing4Good.
I don’t look forward to the day he asks me whether his work (and my study of it) is within the Computing4Good agenda. I’ll tell him no, because we’re not comfortable putting religion into this category. And say we decided to include it: would other people under the Computing4Good umbrella be comfortable with that? Computing4Good is not just what we decide and how we feel about the products of our work, but also what people who are involved with, or rejected by, our work think. Computing4Good is so difficult to define that we risk leaving people on the outside who want to be inside, and of course if we bring them in we risk alienating those already there. Not only does this jeopardize what we might do with them now, today, but it also risks alienating them from being involved in the future. We may even turn people off working with us entirely, if they decide that Computing4Good does not include them.
Third, there’s an intellectual risk. Words like good (and modern, FWIW) suggest a naivety about the intellectual agendas that frame our research. The research communities who are the targets for the products of our intellectual efforts, as well as the source of our intellectual inspirations, have developed a rich understanding of the transfer of technologies from one place to another. They show how cultural, social, economic, political and historical contexts create very different value systems between those who produce and those who consume technologies, not to mention the directions of technology flows and the power relations that those migrations can constitute. Some of the communities I would place into this category are Information and Communications Technologies for Development (ICT4D), Human-Computer Interaction (HCI), Anthropology, Sociology, Science and Technology Studies, Postcolonial Studies, and likely more. Intellectual discussion within these communities does not begin with, or include, good (or bad, or evil), but focuses on the rich, detailed interactions of these contexts and how they are embodied in technologies and in the methods, practices, theories and commercial contexts in which those systems are made, as well as how they flow from their source to their destination, and then how they are not just adopted but appropriated into people’s lives.
In conclusion, C4G was a well-intentioned idea, and very attractive as a brand. But I think that it carries sufficient risks in a) the problem of scoping what’s included, b) the complexity of whom we include and the risk of alienating the very people we seek to serve now and in the future, and c) the damage it may do to some of our research reputations. The socio-technical world that we inhabit, and that we in the CoC seek to understand deeply and influence, is a complex space of values. It’s not good, it’s not bad (or even evil), it’s far more serious than that. And it’s that more (the details, the value interactions and so forth) that I believe is where the most important research problems lie, and where the most significant impact through results is to be had.