Beki Grinter

Posts Tagged ‘disciplinary devolution’

A Name for Computer Science

In academia, computer science, discipline on March 19, 2010 at 10:24 am

This post continues a series of reflections on the discipline that is sometimes known as Computer Science.

A while ago I wrote about some of the naming conventions we use in our discipline. One is to name around function: networking, security, operating systems. Here we choose a function of the machine itself, suggesting a separation of parts. I can’t help thinking that this separates theory from practice: in theory these are distinct, but in practice all these things must work together, and in practice there are collaborations across the fields. Another naming scheme we use emphasizes the greatness of the machine or its complexity; I think of high-performance computing and many-core computing as two examples. Rhetorically, we choose abundance over scarcity.

But naming also seems to be part of our disciplinary discussion right now.

One question turns on Computing versus Computer Science. There is, for example, the “Computing Community Consortium”: an entity designed to promote, well, Computing or Computer Science? The College of Computing uses Computing distinctly from Computer Science, but I don’t know whether the CCC or the Computing Research Association sees the two as distinct. If there is a distinction, what is it?

The College has three schools, Computational Science and Engineering, Computer Science, and Interactive Computing, like other free-standing Schools (Colleges) at Georgia Tech (Georgia Tech has other naming issues that are beyond the scope of this particular post). We adopted this structure recently, and I’ve always assumed that one reason we separated into Schools was that the College was getting a little unwieldy: it was a solution to our increased size, and also a response to what I call the devolution of Computer Science.

But, to share another example, UC Irvine has a free-standing Computing school, the Donald Bren School of Information and Computer Sciences, which comprises three departments: Computer Science, Informatics, and Statistics. At a point, and this seems to be it, free-standing schools/colleges are moving towards a structure that has internal divisions. And of course, other Universities are also working with a division, and working out what’s housed where. The University of Washington strikes me as an example, where you can find people with a research interest in HCI who have affiliations in Computer Science and Engineering, the Information School, and Human Centered Design and Engineering.

What will this be? It is clearly a work in progress, not just here at Georgia Tech but more broadly. And as for me? I think that we should postpone the discussions about what Computer Science is until that’s decided nationally, and simultaneously we should participate in those discussions, since Georgia Tech has such a stake in them.


Education: those next 20 years…

In academia, computer science, research on March 5, 2010 at 9:49 am

I’m getting closer to my colleague Mark Guzdial’s intellectual turf here, but I saw two things recently that made me pause and take a moment.

The first was an update on a fluid situation: the question of how much of a budget cut the University System of Georgia will take next year. It’s still a work in progress; the article notes that another meeting on Wednesday concluded with the situation still fluid. The list of cuts from each University appears towards the end of the article, and it makes for sobering reading. I say that as a State employee myself, one who works for Georgia Tech.

The second was the announcement that Hawaii’s schools will be moving to a four-day school week. I’m not a parent, but I can imagine that this is going to be something else for the dual-income parents of Hawaii. There has always been an alignment between the school week and the work week, not perhaps a complete one, but this is certainly going to expose more of the assumptions that come with that alignment. And that’s not even a thought about education itself. As the article comments, people are concerned about the educational impact on Hawaii’s children of less instruction time.

Together they paint a picture of an educational system in transformation. No one is predicting that this recession will end soon. Many suggest that there will be a recovery, but that it will be slow. The effects of this could be as long-lasting as the generation experiencing it (those in education at this time). Intriguingly (and somewhat politically, I’ll observe), it is people who are largely not in education who are deciding the fate of those who are.

Recently, Dick Lipton posted an article about education asking whether Universities could become extinct in the next 25 years, one that Mark and I built on. I have to wonder whether these are just two data points, in a sea of others, that set up the conditions for just such an event. If we’re optimistic, we can hope that at the end of this time we’ll look back and be able to see innovations that set up better conditions for the next generation. I’m trying to keep focused on that. And, one bright spot: the University of Michigan.

Low Morale in CS?

In academic management, computer science, discipline on January 24, 2010 at 5:56 pm

I’ve heard that there’s a malaise within Computer Science. Recently, I’ve heard this more and more. And Mark Guzdial wrote about his own encounters with this sentiment on his blog.

Mark highlights some excellent reasons why this might be the case.

Senior faculty today spent their whole careers defining and defending their turf — “This is computer science, and that isn’t.”  At the same time, computer science has had dramatic change: From computer time being more expensive than human time, to the reverse; from memory being dear, to memory being plentiful; from sequential processing being the assumption, to today’s world where parallel processors are all that we can see going forward.  How often does a discipline change so many of its base assumptions in the lifetime of a faculty member?  Change is hard for anyone, and particularly so when you’ve spent your career making arguments that are weakened or changed by time.

And I of course see it as another opportunity to inspect assumptions.

The first thing that struck me reading this was how peculiar it is to participate in the development of a machine-focused discipline that would ultimately give rise to a different class of machines. Computer Scientists develop methods, theories, and tools to facilitate all aspects of machine production. And those innovations, along with accompanying ones in the business world, have transformed the very object of study, the machine, from one of scarcity into one of abundance, from one of expense to one of relative cheapness.

Perhaps this is one unique challenge for a discipline of machinery.

I also wonder whether a focus on production makes as much sense as it used to.

Mark and I come from disciplines within Computer Science that you could say focus on consumption as well as production. Human Computer Interaction does focus on the methods to build better machines, more human-usable and human-useful ones, but we focus on the consumer, the user of said machine, and use that to inform the methods of system production. And because Computing Education focuses on computational literacy, I think its impact is equally to create the next generation of builders and the next generation of users of the technology. You need a type of literacy to work the Computer into daily life. And I think a focus on consumption, on a novel experience, was partially what Rob Pike was arguing when he made an early case for reformulating Systems research. (I now wonder whether Rob’s message was an early warning of what was to come.)

Perhaps broadening our focus to consumption offers possibilities for framing production-focused research.

I’m increasingly interested in understanding how computational solutions may not be one-size-fits-all. This is very visible in considering technologies for developing nations. There is now widespread recognition that what has worked well for the middle and upper classes of Western industrialized nations is not going to work elsewhere, for a host of reasons. It’s not just that cultural differences impact the human-centered aspects of consumption (as a colleague of mine, Mike Best, puts it: what use is the desktop metaphor in cultures where there is no history of using desks?). Simultaneously, others write about exclusively technical challenges presented by the very different infrastructural configurations that govern the conditions of use. Not to mention the role of nation-states’ and international bodies’ laws and regulations that dictate conditions of use.

Another way to say this is that focusing on domains of consumption seems to give purchase in Computing. And I wonder whether this also explains the rapidly growing areas of Computational Science and Engineering, Health Informatics, etc. But it also requires something else, potentially: giving up notions of universal generalisability in all circumstances. Let me say, I think that there has to be room for scoped generalisability, but letting the domain drive suggests an openness to having some solutions that are domain-focused. And maybe there are solutions that will carry from domain to domain, but we probably ought to be OK if there are not.

Domain foci, and domain generalisability, also continue to push on something else I am not sure we’re always ready to accept, which is the increased diversity of knowledge in the discipline of Computer Science. Our methods, tools, and theories come from a vast number of sources, and I can only see domain specialization continuing to push that further. So, perhaps it’s time to abandon the preconceived notions that (a) you can learn all of Computer Science, and (b) there is a small set of things that must be taught in Computer Science. Domain specialization presses on that because we face a dilemma: with finite teachable time, can you teach all that is required to make a domain expert? What is worth teaching should likely also include an emphasis on different ways of knowing; behind our diversity of methods is a fairly important set of philosophical differences.

I am lucky: I find myself more excited about the times in CS right now because I can’t help thinking how amazing it is that this discipline of the machine has given rise to a machine that can now take us on new voyages of discovery. But I’ve long had a passion for both consumption and production, so perhaps I was always bound to find this particular juncture more compelling.

New Paradigms for Old Business?

In computer science, discipline on December 14, 2009 at 5:48 pm

I just finished reading Denning and Freeman’s piece in Communications of the ACM about Computing’s Paradigm. Their argument is that Computing

  • embodies science, engineering, and mathematics but cannot completely be defined by any of those disciplines
  • is in fact a hybrid of these, applied to information processes; because of that application, the result is a new paradigm, one that has roots in science, engineering, and mathematics but cannot be defined by them

Additionally, they also argue that computing has five characteristics

  • initiation: determine whether the system to be built can be built
  • conceptualization: design a computational model that generates the system’s behaviours
  • realization: implement in a medium capable of providing those behaviours
  • evaluation: test the result for a variety of properties (correctness, consistency with hypotheses, etc.)
  • action: put the results into the world

They also argue that Computing was originally dominated by the engineering approach, when any system was hard to build. Two other views later emerged that challenged the engineering view: one took information processes to be the object of study that makes computing unique (of course, this was before the widespread abundance of Information Schools, though not before the presence of Library Schools, arguably also in the information business), and the other took the science to be the art of designing information processes.

There’s quite a lot I like about this, but it also made me think about the challenges ahead. I was smiling as I read it and thinking, “has everyone got the memo?” — that Computing is not one-size-fits-all.

This caused me to reflect on an interdisciplinary meeting I attended on the topic of usable home networking. Now, I don’t want to say that everyone participated in one of two particular styles, but my notes support one of the striking recollections I have of the meeting, which was how differently people oriented to the problem and the charge of the workshop.

At one point I heard someone explain that networking research had solved the problems of home networking in theory, just not in practice. I was perplexed by this remark; I still am. It reminds me of the quip that “in theory there’s no difference between theory and practice; in practice there is.” From this person’s perspective, the work of networking research had been done, and it could be done conclusively in theory.

The idea that work can be done in theory but not in practice (which I think omits at least the last two characteristics of Denning and Freeman’s description of computing) was puzzling. Perhaps particularly for me in HCI, where people are not a theoretical construct but a practical, living set of entities.

A second way the paradigmatic challenge emerged was during the working groups. One group spent most of its time discussing why the problem was hard. One might even suggest a little defensively, as if it had to be worked up as a hard problem to establish its legitimacy in a space of difficult research challenges. Note, the workshop posed usable home networking as an important research challenge, one worthy of our five days of time.

Another group came back with a solution, a network architecture, and then proceeded to lead a discussion about the requirements for an editor that the user would use to write the policy specifying their home network. Cynically, I wondered whether someone could show me an example of a successful end-user policy language. And because I don’t come from this particular disciplinary orientation, I was confused about how we could already be at a solution; I did not understand where the problem had gone.

Now I realise that it was just a mismatch between the ways we were doing disciplinary business. I wanted to spend time in problem discovery, while others took the problem as given and were moving to a solution. And I think this is going to be the hardest challenge for Computing. The legacy of the multiple paradigms and disciplinary origins that comprise computing is not just in our history but in our value systems. Perhaps Computing is not one entity; perhaps it needs dividing (something I’ll note a number of Universities are experimenting with), and perhaps the “right” split is one that follows these disciplinary origin lines?

Thoughts on Systems Software Research is Dead

In computer science, discipline, empirical, research on October 20, 2009 at 7:31 am

I’ve just finished reading Rob Pike’s Systems Software Research is Dead talk, which he gave in 2000 (right before I left Bell Labs). It’s a provocative piece, but then that’s Rob Pike.

The piece made me think several different things.

First, he claims that the Systems research community has abandoned the development of operating systems and languages in favour of measuring things about existing systems. He describes measurement as a “misguided” focus on science, but then he adds:

“By contrast, a new language or OS can make the machine feel different, give excitement, novelty. But today that’s done by a cool Web site or a higher CPU clock rate or some cute little device that should be a computer but isn’t.

The art is gone.

But art is not science, and that’s part of the point. Systems research cannot be just science; there must be engineering, design, and art.”

I’ve been thinking a lot recently about what Computer Science is all about, and what (who 😉) it should embrace. And I have to say this just sounds a lot more exciting to me; this statement draws me into an exploration of systems and machines as a holistic activity. Do we have to strip out the design, art, and engineering so that we can live up to the name Computer Science?

It also got me thinking about impact. There’s a lot of attention given to having research that has impact. Impact. One way, although probably not the only way, to have impact is to have industrial/commercial impact. Having studied commercial software production processes, I’m somewhat cynical. I used to think, especially when I was attempting to make change, that it was a miracle any software ever got built, let alone shipped and used. This type of impact, I firmly believe, requires patience and intelligence, and also a degree of luck. I suppose that’s true of many things, but impact and luck are an interesting pair.

Setting aside luck, one route to impact is to have success in American Industry. I’ve said before that I think this raises questions for some research areas, ones where there are interesting collisions between profits and innovation.

But, as I was reading Pike’s talk, it also occurred to me that Computer Science has a peculiar relationship with industry. While we, as researchers, approach it as a way to have impact, it is this same industry that has simultaneously closed off research opportunities.

He says “Even into the 1980s, much systems work revolved around new architectures (RISC, iAPX/432, Lisp Machines). No more. A major source of interesting problems and, perhaps, interesting solutions is gone.”

I’m trying to think of another discipline that has had commercial impact as central to its sense of self-value as Computer Science has. And there was a time when commercial systems, in their biodiversity, gave rise to challenges. But the Computer Industry seems to have shut down opportunities as it has focused on the creation of hardware and software standards that, at least according to Pike, may have ended the best of Systems Software Research. I can’t help thinking that the relationship of industry and academia in Computer Science is at best more complicated than I recall ever having had discussions about. FWIW, and since it’s my blog, I think there are other problems with what I increasingly see as a play toward equating impact, success, and business in general with industry, because the two organizational types are not the same.

And of course Pike agrees with me. He ends his talk with “The community must separate research from market capitalization.”

Computer Science: Why I care

In academia, computer science, discipline, HCI, research on October 14, 2009 at 6:23 am

As I’ve said before, I’m very interested in disciplinary evolution. There are many reasons, but one of them is that I’ve been discussed as an example of someone who is not a Computer Scientist. At least three things bother me about this discussion. First, these criticisms are largely said about me and not to me. Second, it assumes that the discipline of Computer Science can be defined, and I don’t think the evidence supports that. While I don’t completely agree with Eden’s arguments (as an example of writing about multi-paradigmatic behaviour in CS), I do concur that we’re proceeding in multiple distinct paradigms that come with different, possibly irreconcilable methodological, ontological, and epistemological assumptions, which makes me wonder whether we collectively know what the discipline of Computer Science is all about. Third, the criticism also dismisses the commitments I’ve made to my profession, as well as the assessments others have made of the role of my research in the field of Computer Science (an obvious example: I publish in conferences that are mostly sponsored by the ACM, the professional association for Computer Science researchers, and others cite my work in other Computer Science conferences).

I have three degrees, all in Computer Science. While degrees do not make a Computer Scientist, I would suggest that they give me many years of training in understanding what is included in Computer Science. But degrees cannot define a Computer Scientist. After all, some of the most significant innovations come from people who don’t have degrees in Computer Science. No one is what their degrees say they are; it’s what they choose to do and why.

So, my commitment to Computer Science was cemented in graduate school. I went to graduate school at UC Irvine. The other day I found a paper that discussed the program I was in there (the Computers, ORganizations, Policy and Society (CORPS) group). It was not HCI, although it was similar; it was focused on Computing as an empirical science, combining a priori theories that can explain technologies in use-context with a posteriori empirical analysis of what happened when technologies were deployed in particular contexts. I was hooked; this made the Computer Science of numerical analysis, formal methods, and graphics make sense to me.

Three and a half years later I graduated with an MS and PhD. My thesis work explained how dependencies in code reflected dependencies in the division of labor, and showed how these labor relationships were not being accounted for in the processes used to develop software. Because of this, I received an offer of employment at Bell Labs, and I joined the Computer Science research division of Bell Labs. My job description: continue to do Computer Science research on the human-centered problems that continue to plague software development (in the 1960s it was a crisis, in the 1990s it became a chronic crisis, and apparently hell). I’ve written about how amazing this time was, how much I learnt. Bell Labs demanded excellence in science; it was a world-class research laboratory, and so it held us all to the highest standards of research in our discipline: Computer Science. So, each year I continued to do research in this space and had the honour (it was terrifying at times) of having my performance assessed by the type of people whose contributions to Computer Science are central to the discipline. But of course this was simultaneously the privilege of working at Bell Labs: to have your own standards set by people who made Computer Science.

Four years later it was clear that Bell Labs was going to go through what many nationally acclaimed scientific laboratories go through: downsizing. I joined the Computer Science Laboratory at Xerox PARC, as a member of the Distributed Systems area (why this comes as a surprise to people, I do not know). CSL was very similar to Bell Labs, but PARC is physically smaller than Bell Labs was. That made it more intense; the evidence of PARC’s contributions to Computer Science was everywhere, and you could physically see it (like the Ethernet). Again, what I was responsible for doing was advancing Computer Science; that’s how I was judged.

So, my entire career through Bell Labs and Xerox PARC was as a practitioner of the research of Computer Science. That’s who mentored me, set the standards, and evaluated my contributions, with the help of external communities of researchers who accepted my papers into journals and conferences in the discipline of Computer Science.

From there I joined Georgia Tech, and one day I discovered that I was in the School of Interactive Computing. And I like it very much. But I think there’s some confusion about whether Interactive Computing is Computer Science. To me the answer is obvious: it’s the third paradigm of Computer Science. It’s an empirical, experimental discipline, drawing on a priori theory to inform the design of computer programs, some of which are designed to push into new computational space (such as robotics), others of which are designed to probe phenomena (like learning and how people do so). We use empirical scientific investigation to determine whether we have been successful, and if we have not, what has failed. It is the science of computing that is the raison d’être for Interactive Computing.

To those who have told someone, but not me, that I don’t do Computer Science, this is my response. Computer Science is complicated to define, and we’d all be better served by understanding it more deeply. And I am lucky to have had a career where the standards of engagement and assessment were set by people whose contributions to Computer Science are clear: people who have collectively done the important work of defining the field. I will also note here that I never heard any of those people discussing who was not a Computer Scientist; they were far too busy trying to actually develop the field. Finally, I want to close with the comment that I am categorized as a minority in Computer Science because I am a woman. I struggle with that categorization, but I believe that some of the choices I made professionally have come with higher costs for me than they would have if I had been a man. So, one reason I am very committed to Computer Science is that I’ve given a lot to it, and it came with costs: things I reluctantly gave up to pursue a career in Computer Science.

Three Paradigms of Research in Computer Science

In academia, computer science, discipline on October 13, 2009 at 12:25 pm

Recently I wrote about some of the challenges that the new discipline of ICT4D faces (based on my reading of others’ scholarly discussions), and what the discussion of those challenges tells us about Computer Science. I suggested that new fields provide an opportunity to look under the disciplinary hood of Computer Science, because disciplinary challenges are usually reflections of previously hidden assumptions.

But there’s another way to examine the assumptions of a discipline, which is to read papers that discuss them openly. I recently read Amnon H. Eden’s “Three Paradigms of Computer Science”, which does just that. He suggests that Computer Science is “unusual” in that it has three mutually exclusive paradigms that guide research in the discipline. The paradigms reflect three questions that, in my own experience, are asked about Computer Science: is it a branch of Mathematics, Engineering, or the Sciences? He suggests that currently all three paradigms are at work in the methods and results being produced under the banner of Computer Science. So what are the three models?

Before turning to each of the paradigms, note that for Eden, activity in Computer Science is organised around the program (including databases, WWW applications, OSes, device drivers, viruses, etc.), both as it is written and as it is run. He then compares the paradigms based on how they treat the program methodologically, ontologically, and epistemologically.

Rationalist Paradigm: Computer Science as a Branch of Mathematics (uses Theoretical CS as example)

As a branch of mathematics, writing programs is treated as a mathematical activity, and “deductive reasoning is the only accepted method of investigating problems” (p. 144). Programs are mathematical expressions. Research results, i.e., knowledge, focus on understanding programs in their completeness (full and formal specification) and emphasize a priori reasoning.

Technocratic Paradigm: Computer Science as a Branch of Engineering (uses Software Engineering as example)

The technocratic paradigm, Eden argues, evolved in the face of arguments that the complexity of systems put the rationalist paradigm out of reach for whole classes of programs. Eden draws on DeMillo, Lipton, and Perlis (1979) as early evidence of this paradigm. As a branch of engineering, methods emphasize the production of reliable programs. The discipline draws on established engineering methods, as well as demonstrating through rigorous testing that programs exhibit reliable behaviours. It’s impractical (or impossible?) to formally specify a program, so we turn to a posteriori knowledge (i.e., results from experience). And in this paradigm, he argues that the ontology is one of nominalism: programs do not exist in the abstract but only in the concrete. But he’s also quick to point out that there’s actually no clear theoretical commitment to the concept of a program within this paradigm.

Scientific Paradigm: Computer Science as a Natural/Empirical Science (uses Artificial Intelligence as example)

This paradigm draws from Newell and Simon (1976). It’s an orientation to Computer Science as an empirical and experimental science, and it includes the experimental science of human-built entities, since programs are made by people. Eden argues that this paradigm differs from the Technocratic paradigm because the focus is not on reliability but on scientific experimentation that is hypothesis-driven, and it also includes the use of programs as a tool in a hypothesis-driven examination of phenomena that exist in the human or natural world. Methodologically, the scientific paradigm relies on deduction and empirical validation to explain, model, and predict program behaviour. The difficulty, in practice, of always being able to deduce program properties means that the paradigm relies on both a priori and a posteriori knowledge. And the ontological assumption made is that programs in execution are similar to mental processes.

Beki’s take-away. I’ve been hearing discussions about whether Computer Science is math, engineering, or science for a long time now. This piece helps me understand that the discipline is actually all three. But now I wonder whether it can survive as all three. Perhaps these are the cleaving points for a future Computer Science? I also wonder whether my colleagues would subscribe to these paradigms; I’m guessing not all of them do. But I can’t help feeling that within all of this, and perhaps not entirely characterised by this piece, are some important things to understand about Computer Science. It’s definitely got me thinking, and a paper that does that is worth its weight in gold.

From Newell and Simon’s Turing Award speech of 1976:

“Computer Science is an empirical discipline. We would have called it an experimental science, but like astronomy, economics, and geology, some of its unique forms of observation and experience do not fit a narrow stereotype of the experimental method. Nevertheless they are experiments. Each new machine that is built is an experiment. Actually constructing the machine poses a question to nature and we listen carefully for the answer by observing the machine in operation and analysing it by all analytical and measurement means possible.”

and

“We build computers and programs for many reasons. We build them to serve society and as tools for carrying out the economic tasks of society. But as basic scientists we build machines and programs as a way of discovering new phenomena and analyzing phenomena we already know about. Society often becomes confused about this, believing that computers and programs are to be constructed only for the economic use that can be made of them (or as intermediate items in a development sequence leading to such use). It needs to understand that the phenomena surrounding computers are deep and obscure, requiring much experimentation to assess their nature. It needs to understand that, as in any science, the gains that accrue from such experimentation and understanding pay off in the permanent acquisition of new techniques; and that it is these techniques that will create the instruments to help society in achieving its goals.”

Reflections on ICT4D

In C@tM, computer science, discipline, ICT4D, research on October 7, 2009 at 9:13 am

This is a series of reflections about a new area of research that’s rapidly gaining traction within Computer Science. It has various names (a sign of its youth), but many call it ICT4D. A quick web surf finds the following class websites. At CMU you can take Human Computer Interaction in the Developing World; at Washington and Berkeley you can take classes on Computing or Information and Communications Technologies for the Developing World. At Stanford you can take a class on Technologies for Liberation, which is part of a broader program on Liberation Technology; it takes a different perspective but clearly has a relationship to the focus on emerging nations, where many of these problems are particularly acute. And here at Georgia Tech we offer multiple classes in this space, including Computing4Good and Computers, Communications and International Development.

Classes are not just a necessity but also a reflection of topics that faculty think are important or interesting to teach. From this, and from other evidence such as the growth in HCI4D work at the ACM CHI conference and the NSDR workshop held at SOSP and previously at SIGCOMM, it seems clear that the area is gaining momentum.

I’m interested in ICT4D, or at least something a bit different from it, for two reasons.

First, I supervise students who have contributions to make to the emerging body of scholarship in this area. One student, Susan Wyche, has used multi-sited ethnography to understand how ICTs are used in Kenya, Brazil, and the United States. Another student, Andrea Grimes, is interested in ICTs for underserved communities within the U.S. Both do work that pushes on the definition of ICT4D. Traditionally, by virtue of being in the U.S., Andrea’s work would not be part of ICT4D. And yet, questions of design, implementation, and evaluation in under-resourced communities cut across her work in ways that are not dissimilar to discussions within ICT4D. What makes this all the more interesting is that Susan’s work, which by its multi-sited nature is very central to ICT4D, also pushes on definitions by showing how somewhat-resourced communities already appropriate ICTs, challenging the idea that ICT4D is only something we will do rather than also something that is already happening.

Second, I’m interested in ICT4D because it affords an opportunity to look at the discipline of Computer Science. I’m very interested in the formation of disciplines, how Science is a human-organized process, and how its organization affects what we do. ICT4D is going through some struggles with identity and legitimacy, and the questions that it raises give us a rare opportunity to examine the assumptions implicit in the discipline that it is trying to find its home within.

Here is some of what I have learnt so far, drawn from the CCC‘s Global Development meeting (and of course my own reading of the materials).

Some participants, i.e. those who come from CS orientations, struggle to answer the question “where’s the Computer Science in ICT4D?” And others list numerous opportunities (to empirically show what the potential might be for areas that span the fields of Computer Science, such as low-cost connectivity, getting content into developing regions via novel networking architectures and caching systems, mobile and low-OS footprint applications, power management, computer vision for detection problems in health).

But others have observed that the question is also an opportunity to inspect the assumptions that underlie the production of knowledge within Computer Science. Some people observe the following. First, that CS has been focused on problems that are experienced by those solving them. Second, that in publication, and the review that leads to it, Computer Scientists prioritize the solution to the problem rather than the problem itself. And these are related. Clearly, if you pick problems you have first-hand experience with, then the balance of time spent on problem discovery versus solution would likely emphasize the solution. ICT4D problems are not ones that many (but not all) in Computer Science have spent time experiencing, so problem discovery and exploration take considerably more time. Some observe that HCI has done a good job of making problem explication a part of the science, but also note the difficulty that HCI continues to have in establishing its legitimacy in Computer Science.

Another argument that I’ve seen is that Computer Science tends to prioritize the complex technological solution over the simpler technological solution. One manifestation of this is to value high-end over low-end. This made me reflect on various research programs within Computer Science, including the relatively new “many-core” area. Many-core, peta-scale, and high-performance computing all emphasize in their very titles the high-endness of the technology platforms that are at once both problem and solution. I’ve wondered whether, when many-core is not enough, we’ll move to an almost Seussian “many-more many-core” agenda.

And my point here is that it seems quite “natural” within Computer Science to organize an agenda around an abundance of complex technologies. ICT4D may have an abundance of cellphones, particularly low-end cellphones, but even that’s not always the case. The absence of complex technology makes the agenda harder to express. This is compounded by the fact that many other areas of Computer Science organize around machine components: databases, compilers, even networking, computer architecture, programming languages. Areas like Software Engineering and HCI are different; perhaps that also contributes to the difficulty they sometimes have in being treated as legitimate areas of activity. Like Software Engineering and HCI, ICT4D, as people note, is not organized around abundance; it’s organized around a domain, and even that domain is contested and complicated.

ICT4D is truly interdisciplinary. It involves bringing people from multiple disciplines together, and the argument is made that the range of disciplines is bigger than in HCI (also posited as an interdisciplinary field of Computer Science). But I think interdisciplinary teams are needed not just in the research process but also in the ways that solution success is measured. The objective of ICT4D is to solve hard research problems that simultaneously make a difference in the lives of people underserved by ICTs. We don’t measure CS by the good that it’s created for the middle class of America; we measure it by the complexity of the solution.

Actually, we do also measure the impact on the middle (and upper) classes of America, impact being the favoured keyword, when we talk about the innovations that Computer Science has provided and the ubiquity of those solutions in society (through, of course, largely corporate channels). So our measure has been economic success for corporate America. But does that seem like the right measure for ICT4D? Particularly since the business in a position to take advantage of ICT4D innovations is likely in the United States or another industrialized nation. And even when we draw on this kind of impact, people are still conducting research on how to measure the impact of technologies.

ICT4D causes me, at least, to reflect on economic impact (which favors those who create successful start-ups, since they are likely the only people who can easily draw a line between what they’ve done and how many people have purchased or use it) as a metric for Computer Science’s impact. Additionally, given the difficulties of finding appropriate measures, I can’t help wondering whether ICT4D is being asked to put the cart before the horse: if we’re still learning how to measure productivity gains from computer use in corporate America (which has had computers in place for decades), is it perhaps unrealistic to expect well-understood metrics for settings where getting the computer in at all is going to be a significant first challenge?

Measuring the impact of the solution seems to be further compounded by the goals of those who sometimes fund ICT4D research: NGOs, philanthropic agencies, and so forth. Their goals and research goals can be hard to line up. This is not uncommon in research; all funding agencies have goals for the work that they support. But the interaction between the traditional funding agencies for Computer Science and the emphasis on complex solutions seems to be a better match than the match between complex solutions with little problem discovery and the goals of NGOs to understand long-term sustained improvement. The latter, at least I think, emphasizes solving the right problem, the one that can have the most impact, and given the lack of experience with the domain, that in turn means that the problem discovery phase is inherently going to be longer than the measures of good Computer Science research support. There’s an interaction between the way the science is rewarded and the way the funding agencies reward it, and I think the gulf is wider in ICT4D because the science and the funding agencies didn’t evolve together.

In other words having a real-world, timely-measurable, good impact on a group of people for whom their problems and the relevance of technologies for solutions are open and ill-defined questions at the beginning of study, raises significant challenges and ones that seem at odds with the ways that Computer Science does disciplinary business. Further, it’s not clear that this situation improves when ICT4D is funded from traditional funding agencies.

Another assumption that comes to light when reading within the ICT4D literature concerns the place of abstraction in Computer Science. A solution that is generalisable, i.e. abstract enough that it works in all cases, is highly valued. In this way, Computer Science is perhaps no different from other sciences that seek fundamental principles. But ICT4D is either a considerable distance from having those general solutions, or perhaps, as some think, it is not a field of abstractions but of instances, and of understanding how instances differ as part of understanding what the impact on people’s very different cultural, social, economic, geographical, and political lives might be.

Finally, two other challenges for ICT4D. What impact means also turns on the sustainability of the solution: it has to be something that works. Works in place, after the research team leaves, for the people for whom it was designed. In traditional CS, if we do give our results to end-users (although frequently we let the marketplace do that for us), they are supported by a reliable power infrastructure, an educational infrastructure that gives many the knowledge to operate and manipulate the system, and so forth. So much less exists, and therefore so much more is required, in ICT4D and of ICT4D practitioners. Also, these infrastructural absences appear to challenge our processes. HCI has many accounts of how participatory design failed because the people working with the researchers didn’t understand why the researchers didn’t know the answers, or that software was malleable enough to be the subject of redesign, or what the relationship between a paper prototype and the final system might be. Are we ready to have our methods turned over because actually they weren’t general enough? I think we should risk it, because ICT4D will clearly bring to the forefront the assumptions we make about access, wealth, market systems, education, power, and so on.

So, I’d like to thank ICT4D for giving me an opportunity to look under the hood of Computer Science. As the area continues to grow, these questions will be answered in some way; how is the open question right now.