Beki Grinter


Growth of ICT4D research

In ICT4D, research on February 13, 2010 at 4:45 pm

Richard Heeks recently posted data showing the growth of ICT4D research.

I find this interesting.

I am sure that you could easily produce similar data about Biomedical Informatics research. That would not pique my interest as much.

Because there’s an important difference between Biomedical Informatics and ICT4D: a battle for legitimacy. From what I understand of the history of Biomedical Informatics, in addition to being a history of growth it is one of finding a name. Biomedical Informatics appears to supersede Health Informatics (although that’s still very much used). It’s also meant to imply more than Bioinformatics. And then there’s a history of different names being used in North America and Europe (I think that accounts for some of the difference between Health and Medical Informatics). But things do largely seem to be converging on Biomedical Informatics as the right name for the discipline, with specialities in all sorts of things such as public health informatics, clinical informatics, bioinformatics, and so forth.

But, that’s the name… and the name has been changed and discussed to reflect what should be included in the field. (I have my own opinions of course, which turn on doing the thing that I find myself frequently doing, which is to inspect assumptions… that’s how the idea of Wellness Informatics started, as a means to organize that type of inspection… and I still don’t know where it stands, but I have and continue to enjoy the conversations and people that that process has facilitated).

By contrast, ICT4D has been growing while the people doing the research are still discussing what the research in the field actually is. That, at least to me, seems quite unusual: sustained growth and increased commitment to a field of research for which the case for that research is not clear, even to some of those who work in the field.

Now that’s interesting. There seems to be a collective “gut sense” that this is an area with rich possibility even though the nature of that possibility is hard to pin down. I wonder whether some of it can be explained by the low morale in CS, and what I see as some of the differences that this area supports… But I don’t know.

All I do know is that smart people, and increasing numbers of them, are putting their bets on ICT4D. And perhaps that’s as it should be. Some of my management are fond of the idea that high risk equals high reward. Well, I’d say it’s pretty risky to take on things for which the research reward is not clearly understood.


Going Beyond Good: Reflections on Rob Kling

In C@tM, computer science, discipline, HCI, ICT4D, research on December 29, 2009 at 3:47 pm

Previously I’ve written about why I’m concerned about Computing4Good.

In writing that blog post, as has also been the case with others that I’ve written, I spent some time reflecting on my career in what I might call “going beyond good” through the careful empirical study of Information and Communications Technologies as human-built and human-used machines.

The person who introduced me to this world was Rob Kling. Rob recruited me to graduate school at the University of California, and there I joined the Computers, Organizations, Policy and Society (CORPS) group. The research focus of this group was the societal (be it individual, group, organization or society) impacts of technologies. Some of Rob’s ideas (Rob was a voluminous thinker and writer, so I don’t feel terrible when I say some) have remained with me ever since that time, and I thought I would write them down here. It was these ideas that began to shift my thinking about computerization in so many ways.

Technological Utopianism (Rob and Suzi Iacono)

A concept that refers to visions of computerization as being nothing but good. These visions may in fact involve increased control, shifts in the balance of labour between people and machines (with the potential for deskilling), and a myriad of other potentially problematic situations, but all of that is largely lost in the vision being offered. Technology is simply going to make things better. Technological utopianism suffers not just from under-estimating the potential for problems, but also from usually reducing computing to a “swap” rather than treating it as part of a complex socio-technical system (more on that later).

A second component to technological utopianism is that it’s a rhetorical strategy, i.e. it’s a way of selling a vision of computing. Before the outcome there is the positioning that creates the motivation and movement towards the outcome. Technological utopianism is powerful.

I find myself thinking about technological utopianism each time I hear an account that System X will improve our processes. Whether it’s innovation in technologies to increase airport security, or purchasing an expensive customer relationship management system to improve workflow (Harvard Business Review says that 51% of all CRM efforts fail, FYI). Arguments focused on technology, which give it agency for good, omit all the other things that will have to happen. Will training be sufficient so that the machines’ human operators can use them effectively? Will the machine be deployed into an environment like the one it was tested in? Do we know our current processes well enough to know whether they are aligned with the management philosophy built into the system itself? All of these questions and more come popping into my mind.

And technological utopianism is at the front of my mind when I think about ICT4D. I should stress that I think it’s an awareness that researchers within the field already have, but as I enter it, I am reminded that the proposition that computerization will make things “better” is extremely problematic. It’s naive at best. It underestimates the socio-technical system, and it stubbornly ignores where the technology comes from and who profits. I think that as ICT4D becomes increasingly “popular” as a field of practice, technological utopianism will/can/should undergo a renaissance of scrutiny.

The Web of Computing

This was Rob’s (and Walt Scacchi’s) term for all the things it takes to make computing work in any form. It was their argument for empirical research and understanding of all the things that computing depends on. Minimally, since computers are human-used, it is all the people involved in the use of the machine, and what they do with each other as well as with the machine and any other system elements. It is not limited to computer users, but also includes those who support the users (i.e. technical support, those who do or do not face customers in the organization that built the system, and those who depend on the outputs of the system’s usage).

It’s also all the hardware it takes to make the system work, which is not limited to the machine itself, but extends to the networked connections and other infrastructural technologies, like, say, the power grid. (Again, well illustrated by ICT4D, where many of those infrastructures are missing.) And within the computer itself, it’s not just the application in use, but all of the things that it in turn depends on: the dependencies on operating systems and other applications and so forth. (I think this is well illustrated by the “lock-in” created once a system is deployed: while all the applications and platforms may make sense at the time of the original investment, over the course of time these dependencies can become extremely challenging, particularly if they are outside the application provider’s scope of control.)

Then there are also the organizational, legislative, and other environments in which all computers are used, and which govern how, and by whom, applications are used. (Consider the household as a microcosm of such operating contexts: how do families create “rules” about how children use the computer?)

Social Informatics

Rob’s legacy culminates in social informatics. He’d left Irvine by the time he started using this term more frequently, but to me it describes everything he, his students, and many of his colleagues did and continue to do. And I’ve been thinking a lot about the devolution of Computing as a discipline. Social Informatics, in Rob’s formulation, begins with some statements about the nature of the discipline: “It is defined by its topic (and fundamental questions about it), rather than by a family of methods…” Perhaps he should have added “or by some property of the technology itself” (like, say, areas defined as networking, many-core computing, …). Then, in a classic Rob move, he offered a rich example; this was a hallmark of his writing and what he taught others to do.

Rob was the first person to present me with a vision of social realism: a way to describe computerization and change as a complex endeavour, and one worthy of study. It was that vision that lured me to graduate school to work, initially, with Rob as my advisor. It was in the course of working with Rob that I learnt another lesson, from him in a way, which was that I should never do research about something I don’t care about at all. It turned out that while Rob cared deeply about Digital Libraries, which were gaining considerable ground in the pre-Web, gopher-based Internet, I thought they were duller than dishwater. I saw nothing of consequence there, and I was almost certainly wrong about that, but without being able to see the ability to make a difference I couldn’t work up the energy needed to do scholarship.

Going Beyond Good: Computing4Good Considered Harmful

In C@tM, computer science, discipline, HCI, ICT4D, research on December 21, 2009 at 12:56 pm

Some time ago, my colleague Mark Guzdial wrote eloquently about his concerns about Computing4Good. I also have concerns, and after failing to make my case verbally, I’ve decided to explain here.

Computing4Good (C4G) is an initiative that was started at the College of Computing about 2 years ago. It was well-intentioned, an expression of how some of the work we do within the College can have societal impact as well as intellectual impact. At first blush, Computing4Good is a very appealing phrase, especially for branding. My concerns are with the research that we describe as Computing4Good.

In his post, Mark observes that even though education was not included as part of Computing4Good, it is in fact a public good. Far more problematically, it is the public good that an institution of higher education must stand for, in everything it does.

If education is not a public good, we don’t exist.

(NB: I also believe that education is a public good. Education made me what I am, because it not only taught me, it also gave me wings to fly. Computer Science is responsible for my emigration. I value it so highly I dedicated my hard earnt Ph.D. thesis to the British taxpayer who had made it possible. I choose not to go into more depth on this because Mark has already done an excellent job).

My own concerns about C4G also turn on defining what’s good. The best example I can give is that I supervise research on the uses of Information and Communications Technologies (ICTs) for religious purposes. For roughly 4 billion people, religion (of one type or another) is probably something that they would describe as good. For the people I’ve met through this project, good seems like an understatement, religious belief brings them a sense of wellbeing, of purpose, of peace, and so forth.

But as a scholar, I find answering the question of whether religion is good far more difficult. I don’t think religion can be reduced to terms like good. In describing religion as good or not, we lose its rich human complexity (and the nuance of its relationship to technologies). While I don’t believe it was intended, a consequence of Computing4Good is that it forces us to ask and answer the question of whether religion is good, because we have to decide whether it is included in this initiative. Indeed, it asks us to do that for everything we do within the CoC. Further, because this is a public-facing initiative, we communicate to people what C4G includes, and by absence what it does not include. This means that we can’t easily defer or ignore answering whether every research project in the College is computing for good or not.

You don’t have to be a postmodernist to understand the problem of the category of the other. Two frequent “others” that come up in relation to good are bad and evil. This means that we could be perceived not just as deciding what is C4G, but, through omission, what’s bad and what’s evil. Good is a value judgement, and I think it’s simply too simple for the complex socio-technical world we live in. Further, by being publicly visible with this type of assessment of our research products, I think we have taken several related risks.

First, I wonder how well qualified we are to make that judgement. If it seems “good” to us, is that (I hate to say it) good enough? How are we going to know whether we represent all of the potential stakeholder perspectives on the problem? How do we, the decision-makers, make that decision? What are the criteria that make something good? The example I would use is ICTs for women’s rights. Is that good? It probably depends on understanding an individual’s religious, political and cultural contexts and values, not to mention the values of those who are the target of the intervention. (Bringing it closer to home, how about ICTs that help women find abortion clinics? Answering the question of whether that’s good or not brings you into an extremely charged debate in the United States.) I’ve not seen any description of our criteria, other than Technology + Social Activism = Computing 4 Good. Of course, who is even permitted to be socially active (i.e. to be the producer of ICTs), particularly in the public arena in which C4G takes place, is also dependent on religious, cultural, economic, and political contexts.

Second, the empirical risk. Computing4Good implies that people will be involved in the outcomes of our research. And that in turn raises the question of what they think. They are stakeholders in the outcomes of our research, and bring their own value systems to bear on our products. An example: in the course of the religious ICTs project I met a minister, and while I have always been on the fence about the question of religion, he makes me consider belief very seriously. Each time we interact, I find what he does, and why he does it, so impressive that it is humbling. And much of this is focused on his outreach work. We’ve talked about how he uses ICTs in his outreach work, and I know from the way that he talks about his work that he thinks this is Computing4Good.

I don’t look forward to the day he asks me whether his work (and my study of it) is within the Computing4Good agenda. I’ll tell him no, because we’re not comfortable putting religion into this category. And say we decided to include it: would other people under the Computing4Good umbrella be comfortable with that? Computing4Good is not just about what we decide and how we feel about the products of our work, but also about what people who are involved with, or rejected by, our work think. Computing4Good is so difficult to define that we risk leaving people on the outside who want to be inside, and of course if we bring them in we risk alienating those already there. Not only does this jeopardize what we might do with them now, today, but it also potentially risks alienating them from being involved in the future. We may even turn people off working with us, if they decide that Computing4Good doesn’t include them.

Third, there’s an intellectual risk. Words like good (and modern, FWIW) suggest a naivety about the intellectual agendas that frame our research. The research communities who are the targets for the products of our intellectual efforts, as well as the source of our intellectual inspirations, have developed a rich understanding of the transfer of technologies from one place to another. They show how cultural, social, economic, political and historical contexts create very different value systems between those who produce and those who consume technologies, not to mention the directions of technology flows and the power relations that those migrations can constitute. Some of the communities I would place into this category are Information and Communications Technologies for Development (ICT4D), Human-Computer Interaction (HCI), Anthropology, Sociology, Science and Technology Studies, Postcolonial Studies, and likely more. Intellectual discussion within these communities does not begin with or include good (bad or evil), but focuses on the rich, detailed interactions of these contexts and how they are embodied in technologies and in the methods, practices, theories and commercial contexts in which those systems are made, as well as how they flow from their source to their destination, and then how they are not just adopted but appropriated into people’s lives.

In conclusion, C4G was a well-intentioned idea, and very attractive as a brand. But I think that it carries sufficient risks in a) the problem of scoping what’s included, b) the complexity of whom we include and the risk of alienating the very people we seek to serve now and in the future, and c) the damage it may do to some of our research reputations. The socio-technical world that we inhabit, and which we in the CoC seek to understand deeply and influence, is a complex space of values. It’s not good, it’s not bad (or even evil); it’s far more serious than that. And it’s that “more” (the details, the value interactions and so forth) that I believe is where the most important research problems lie, and where the most significant impact through results is to be had.