Beki Grinter

Archive for the ‘C@tM’ Category

ICTD: Is it the Right Categorisation?

In C@tM, computer science, discipline, HCI, ICT4D, research on March 14, 2012 at 9:42 am

Erik Hersman writes about why he doesn't like the term ICT4D. The opening lines really resonate with me. If a project involving technology is done in poor parts of places like the United States or Europe, it does not get the label ICTD. So, why does it get that label if it's done in sub-Saharan Africa?

It echoes the remarks I blogged about from the opening talk, where the speaker asked how we would feel if the focus of One Laptop Per Child were Alabama, or, I think, many other parts of the United States. What would we be saying? I'm already aware that people in low-income neighborhoods can and do feel that they are the ongoing target of the United States medical community's criticism, and unfairly so. And they resist the messaging, viewing it as discriminatory.

Erik Hersman goes on to write about a variety of African start-ups. Are they ICT4D? MixIt, for example. What about technologies like Ushahidi, which started in Africa but has been used in settings that are not ICT4D?

At one level you could view this as a labeling problem. But there is also a research community gathering around it. As this field gains traction and matures, it seems like a good time to ask whether it's the right grouping. I've long held the view that we ought to look for common points of intersection for ICT interventions in any economically disadvantaged community. We've called this Computing at the Margins here at Georgia Tech; I'm not sure that's the right label either, but the grouping is broader, and the idea is that what these groups may share in common is a need not for more access to the same technologies, but for technologies that speak more specifically to the values these groups hold (i.e., systems designed for them).

But here at ICTD 2012 I'm asking myself a second set of questions, fueled by the blog post referenced in the ICTD 2012 Twitter stream, the plenary, and other remarks I think I have heard during the sessions. The questions are:

If the people who live in these places, who are the technical innovators (colleagues and partners), find this term problematic, should we?

Is ICTD a form of “othering”? (Of course you can ask this about Computing at the Margins too.)

p.s. More has been written about ICT4D/ICTD, with resources gathered here, and there are many more dimensions to the debate about the name and the goals of the enterprise than those I've blogged about.

A Note from the Hyperdeveloped World

In C@tM, research on March 2, 2011 at 1:23 pm

This month's Interactions magazine had several articles that reminded me once again how strange the world I inhabit really is. Three of the articles took up what, to me, seems normal: the immediate delivery of email, and a world in which a response to that delivery is often expected.

Phoebe Sengers wrote about email in her discussion of her time spent living in Change Islands, Newfoundland, an experience that caused her to reflect on a variety of values that frame her life in Ithaca but that differ from those abundant in Change Islands (she says this far more gracefully than I do). She describes a vision of email where users can control the speed at which it is sent, slowing down conversations and making them potentially both more manageable and more meaningful.

Susan Wyche's piece on HCI4D and design takes up email again, as it is exchanged by Kenyans working in Nairobi with American co-workers. As she reports, Kenyan workers were concerned that their American colleagues would perceive a delay in responding to email as a poor work ethic, rather than attributing it to their lack of Internet connectivity. They felt, and struggled with, the burden of trying to manage the expectations of their American colleagues. In this case one side was slowed down by infrastructure, but the other party in the exchange was not, and the values associated with managing email correspondence favoured (and derived from) infrastructure-rich environments.

Marshini Chetty et al.'s piece does not take up email directly, but highlights how business arrangements influence the use of infrastructure and the applications atop it. In countries where Internet plans for home users are sold not by the speed of the pipe (as they are in the United States) but by how much people upload and download over the course of the month, that pricing shapes how people choose to use the Internet. Managing those figures so that access to the Internet is preserved for an entire month (if caps are exceeded and there is no option to buy more data, it's a shift to slower bandwidth or none at all) influences what people choose to do with their home Internet access, which must include email (perhaps especially the attachments).
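To make that cap arithmetic concrete, here is a minimal sketch of the kind of budgeting a capped household might do. The cap, usage, and date below are made-up numbers for illustration, and the helper is mine, not something from Chetty et al.'s article.

```python
# Illustrative only: the cap, usage so far, and date are assumptions,
# not figures from the article.
from datetime import date
import calendar

def daily_budget_mb(cap_gb, used_gb, today):
    """MB per day left to spend if the monthly cap is to last the whole month."""
    days_in_month = calendar.monthrange(today.year, today.month)[1]
    days_left = days_in_month - today.day + 1
    remaining_mb = max(cap_gb - used_gb, 0) * 1024
    return remaining_mb / days_left

# e.g. a 3 GB cap, 1.8 GB already used, ten days into the month
print(round(daily_budget_mb(cap_gb=3.0, used_gb=1.8, today=date(2011, 3, 10))), "MB/day")
```

Even this toy version shows why a single large attachment can loom so large: it eats a visible fraction of what remains for the rest of the month.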

In 2002 a paper by Lucy Suchman appeared, subtitled "Notes from the Hyperdeveloped World." Although she focuses on a far broader set of flows, I can't help feeling that these three articles provide an example of a small part of her argument: how values framed in contexts where the ability to send email is effectively unlimited have been exported, by us, to places where they do not hold. And this reminds me of something Gary Marsden once said: isn't it about time we started designing for normal people?

 

An Agenda of Abundance

In C@tM, computer science, discipline, HCI, research on January 4, 2011 at 2:59 pm

As the reader knows, I think the discipline of Computer Science has abundance, both explicitly and implicitly, built into its research agenda. For example, the focus on Cloud Computing does not make sense without an abundance of machines, disks in the sky, and network connectivity. Recently, I’ve read several articles that have raised new questions and caused me to reformulate my initial position on a research agenda devoted to scarcity.

I finally read the 1965 article by Gordon Moore, "Cramming More Components onto Integrated Circuits." This was the article in which he expressed what would come to be known as Moore's Law, which states that the number of transistors that can be placed on a silicon chip will continue to double at regular intervals. In 2005, there was a celebration of 40 years of Moore's Law holding true. (An aside: Douglas Engelbart argued a similar position in 1960 in a paper called "Microelectronics and the Art of Similitude.") The accuracy of these predictions has been an important driver in making Information and Communications Technology (ICT) more abundant.
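As a rough illustration of what that doubling implies, here is a minimal sketch of the compounding. The base year, base count, and two-year doubling period are my assumptions for the sake of illustration, not figures from Moore's article (his 1965 projection used a different interval).

```python
# Illustrative sketch of compound doubling; base_year, base_count, and the
# two-year doubling period are assumptions, not claims from Moore's article.

def transistor_estimate(year, base_year=1971, base_count=2300, doubling_period=2.0):
    """Estimated transistor count in `year`, doubling every `doubling_period` years."""
    return base_count * 2 ** ((year - base_year) / doubling_period)

for y in (1971, 1985, 1995, 2005):
    print(y, round(transistor_estimate(y)))
```

The point of the sketch is simply the shape of the curve: a fixed doubling interval turns modest beginnings into abundance within a few decades.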

I also read a piece by Bob Lucky called "Electrical Engineering: A Diminishing Role?" In this essay he argues that the abundance of technology is changing the distribution of jobs within the profession from Electrical Engineering to Computer Science; the majority of the work to be done exists at layers above those that are the province of the Electrical Engineer. The research version of this argument would be that the abundance of ICTs changed the distribution of knowledge required, opening up problems in Computer Science at a greater rate than it did problems in Electrical Engineering. He paints a bleak picture of a single Electrical Engineer, the last one, as the only person required to understand how the chip works, holding the entire ICT industry up on his or her shoulders.

Some wonder whether abundance will have the same effect on Computer Science itself. Will the continually increasing abundance of technology eventually require a re-distribution of skills? Commercialisation, while not given its name, seems to be the crux on which this argument turns. Abundance turns on the ability of companies to manufacture in high and ever-increasing volumes (the argument has a temporal quality, of abundance continuing to change the equation of the distribution and nature of the skills required).

One example of this argument applied to Computer Science comes from Rob Pike, who gave a talk titled "Systems Software Research is Irrelevant." He used all sorts of examples of the effect of abundance on Systems research, and proposed some solutions for reframing the Systems agenda (largely by not trying to compete with industrial innovations and by exploring alternatives). I wonder whether a version of this argument drives the Networking community. Their abundance is the Internet, and there are debates within the community about whether to go Clean Slate (i.e., set the Internet aside and pursue alternative network architectures and protocols) or whether to continue to push on new research questions that stem from the very success of the Internet itself.

For the pessimist, the last Computer Scientist might be seen on the horizon. I remain less convinced. I do think that inspecting agendas of abundance can be a helpful exercise for individuals and communities. I am a big fan of inspecting assumptions.

I've made a mistake in my own thinking, though. I previously wrote that Computing at the Margins was an agenda of scarcity. I said:

Abundance is the set of problems that we have made, largely for ourselves.

Scarcity is the set of problems that we have made, largely for others.

And I was partially right, but not surprisingly the situation is more complicated than that formula implies. Computing at the Margins is simultaneously an agenda of abundance and scarcity. The abundance, particularly of cellphones, is one catalyst for this research agenda. Yes, people have worked on the problems of how you design technologies for the Global South for a long time, but this area of research has gained more traction lately, and I suspect that trend is associated with the increased access to ICTs in these environments.

But simultaneously it is also about scarcity. One form of technological scarcity is the infrastructure. One of the reasons that mobile phones are frequently posited as the platform on which an application will be built is that their infrastructure is the best developed. And it's not that well developed; it's just the most developed among the options.

So, in my first formulation I erroneously created a binary opposition. I still think there's a useful framing of this research around scarcity, one reason being that it provides such a contrast to the traditional modes of working. But the interplay between abundance and scarcity is, well, more complicated than I originally described. No surprise to you all, but I'm still thinking it all through. And I think we should inspect abundance: is the mobile phone the solution? Probably not for all problems.

More on an Academic Blog

In academia, academic management, C@tM, computer science, crafts and craftiness, discipline, empirical, European Union, France, HCI, ICT4D, research, social media, wellness informatics on September 14, 2010 at 9:27 pm

I’ve written about academic blogging before, but recently I was asked some questions.

1) How did you get into doing a blog?
It was quite by accident. A colleague of mine created a private blog to capture her experiences of conducting fieldwork. She was using her blog to create a forum where she could get feedback from others and reflect on what she was learning. So I received an invitation to create an account and I did, and then I thought it would be an interesting experiment. It’s turned out to be an interesting experiment indeed.

Early on, my blog was unread and largely just a private (although entirely public) experiment. When I started pushing my posts to Facebook and Twitter it got more public. Another way I acquired an audience was through timely posts that just happened to rank early in Google searches. Another way, and this turns on my research interests, was to prepare a commentary on a Facebook meme, using my research expertise to comment on its significance.

2) What is your blog about?
My blog is a mixture of topics. I’m aware that this is rather different from other blogs and I wonder whether it affects the readership. On the other hand, it’s a creative outlet and also within the scope of my research, so exploration is important.

Two persistent non-work themes:

  • Cross-cultural adventures, for example, being British in the U.S. and encounters with my accent, and living in France and coping with culture shock.
  • My family, from whom I learnt skills that have morphed into my off-script crafting hobbies, and a passion for family history and the way it transforms history from stories of monarchy and war into ones of poverty and survival.

Work-related topics fall into four categories.

3) How much work is doing a blog?
As much as you want it to be!

When I'm writing about non-work related topics, the posts come pretty quickly and the only thing they do is share something with colleagues and friends. Although, like Facebook, they start very interesting conversations. For example, the one about the convict in my family started discussions with several work colleagues at Georgia Tech and beyond. I'd written about it partially to document the journey of discovery and detective work that is genealogy, but by sharing it broadly I got not just advice on how to learn more, but also pointers to literature that would help set the context.

The work-related ones take longer. Some of them do double duty: for example, I needed to synthesize the literature in ICT4D, and since I was going to give a report about the workshop I needed some means to collect all that information together. My blog helps me think about making arguments; it complements and extends my two decades of research experience. It's not just a set of notes I draw on: because it's simultaneously unreviewed and read by scholars, it improves my arguments.

4) What impact has it had on your professional life?
My colleagues in Computer Science and beyond have responded enthusiastically to my blog. The strength in the diversity of topics has been that people have asked me to write on a variety of issues. I've been asked to discuss disciplinary devolution, and asked to review manuscripts on this topic. I've written posts on writing for conferences and had others, not explicitly invited, picked up by the conference organization. I've been tweeted and retweeted. While I have not been asked to write about my cross-cultural experiences, I've had face-to-face conversations about them. This is also true of the sexual harassment post; it generated lots of community support.

5) How would you advise a student concerning the advantages and disadvantages of academic blogging?
I tried to answer this, and then decided that I would answer it in the form of some different questions.

What do I write about?
Things you'd feel comfortable with an audience of a) your Dad, who's an academic, b) your Mum, who started her own business (an intelligent layman with an interest in "application"), c) your community of practice, and d) anyone else reading. Perhaps you could explain a paper in your field? Assume that the authors are in your audience and that, since it's been published, the members of your community have deemed it to be serious work.

Perhaps you could write about the related work in your area. Synthesis is a challenge in academic writing. Related work is not a stream of text that describes each paper in turn; it synthesizes the results from multiple papers, grouping them into pro and con arguments that help make your case. The case is a) the aggregate findings that your research builds on and extends, b) the novelty of your approach, and c) the contribution of your research. Synthesis is also an exercise in being inclusive and humble: how else do you engage and invest a community in your results?

What about your experiences in graduate school? What are your time management strategies? What do you know about the Ph.D. program at various points along the way?

Anonymous versus known?
There are good reasons to write an anonymous blog. Anonymity supports candor. Career experiences can fit into this category. The downside of anonymity is that no-one knows you. When it comes to your research, it’s good to be associated with it! Academic branding requires being able to associate a name to the research brand.

Snowbird: Local Global Development

In academia, C@tM, computer science, discipline, ICT4D on July 22, 2010 at 3:43 pm

I was one of three invited panelists in a session on Global Development.

I presented the case for Computing at the Margins and the results of an NSF-sponsored workshop. The key point I made was to argue that Global Development has a domestic component focused on those who have been at the margins of technological innovation. The solutions won't be identical of course, but there are classes of problems that span underserved parts of Industrialized nations and Emerging nations alike. And from a scientific perspective, sharing knowledge across these boundaries ensures that we understand what's generalizable and what is locally specific.

I also used my talk to connect this effort to other problems discussed. For example, DARPA and the NSF both have an interest in socio-computational systems (think Wikipedia, North Korea Uncovered). So does Computing at the Margins: what would happen if more people could participate in and benefit from these collaborative content creation experiences, and what new ones would they create? How would it change or reveal the edges that DARPA seeks to understand? Nomadicity, the reflection that the global population is increasingly migratory, may also contribute to understanding why the edges no longer conform to national borders.

My two co-presenters, Lakshminarayanan Subramanian and Tapan Parikh, talked about a variety of the technical challenges in Global Development and about the role it can play in education, as well as in the future of Computing. I was delighted to be in such good company. In this post I want to make notes about the things that got me thinking some more.

Tapan mentioned an article by Amnon Eden that articulated three paradigms of Computer Science research. I saw lots of people writing down the title of the paper, and I have previously blogged about it myself (that post includes a link to the paper). Global Development, he argued, fits into the Science definition. I'm inclined to agree. But, as I've written about before, I think global development exposes an interesting set of assumptions in Computing, so even if it fits paradigmatically, it's not without challenges (or, more optimistically, game changers that will productively extend the field of Computer Science). One other that now springs to mind is the co-evolution of Computer Science, the discipline, with the National Science Foundation. Given how central the NSF is to Computing, and how long the two have co-evolved, it is even clearer to me that the NSF's support is crucial in advancing this field. Now that I understand this, the question I have is what to do about it. I'll take answers from readers, please.

Solving the right problem is important. It's always important. But in much of Computer Science the problem discovery phase takes far less time than the problem solution phase. Global Development, rather like HCI and Software Engineering I think, requires attention to problem discovery.

We talked about whether industry could make progress on these problems. We varied somewhat in how much we thought that industry was engaged in this space, but it is clear that corporate America is paying attention to emerging nations, as emerging markets. We also discussed the role that basic research, unfettered by the need to begin with existing platforms and solutions, could contribute.

The phrase end-to-end came up in two distinct ways. First, there was agreement that this area of research requires solutions that span the distinct sub-fields of computer science. Simply put, you need people who understand networking, operating systems, and HCI (and much more) in order to create a workable solution. That was dubbed end-to-end systems. Then there was also an end-to-end methodological discussion, about how both problem discovery and evaluation require empirical research that wraps around the system development.

I've wondered this before, but I'll wonder out loud on my blog: is HCI-style research rather uniquely positioned to take advantage of these end-to-end requirements? Methodologically, HCI already has practices in place (I think some of these practices will not work in these settings and that innovation is required there, but that's a different problem from not having any practices in place). And while this may be controversial, perhaps it's a good time to let people know that HCI doesn't just concern the interface, nor do HCI researchers limit themselves to toolkits for it. HCI researchers partner with, or engage directly in, a variety of technical concerns that go down the stack. When I'm being uncharitable I tend to think that CS sometimes thinks HCI is rather superficial and afraid of the machine; I disagree intensely. (Which also reminds me that I heard someone talk about the field of database usability for the first time. Is there any part of CS to which HCI doesn't have something to offer? No, of course not!)

Colorado has just started a Master's Program in ICT4D.

We heard from a number of people about how working in this space had been a personally life-changing event. The rewards of this research space are very significant. While agreeing, I also observed that the broader impacts of this work are so blinding that they overwhelm the question of what the science is in this space. I feel strongly that the Computing community is going to have to work together to make the scientific case for this space, to ensure funding, and also tenure and promotion rewards, for people who engage in it.

We were invited to create a layered diagram that illustrates the types of challenges for Computer Science across the spectrum when pursuing global development. And another person asked us whether you could create an introductory course on Computer Science using problems from global development as the examples. That’s a fascinating question.

And then the name discussion came up. The name for this field is extremely complicated and loaded. I suggested Computing for Normal People, a riff on Gary Marsden’s observation that computing has largely served the hyper-developed world, and that the next 5 billion constitute what is normal.

I heard of “the last electrical engineer” phenomenon. The idea is that once everything is known you only need one person who knows it to ensure that the knowledge is not lost.

Oh, one other thing. There was a session on HealthIT. Health and wellness is a significant target for investment, including technological investment. People who seek to stay well and who have health issues come from all walks of life. Health is also a global issue: what starts in one place can easily spread to many. And what it means to be well, what it means to treat someone, also varies culturally. Development confronts and deals with issues of cultural variance and its implications for technological relevance all the time. There are also more technology-centric challenges, such as getting care to everyone that needs it, wherever they are and whatever access to bandwidth and health care they may have. Access and empowerment through technologies seems like a crucial part, a domain, for Development. Conversely, dealing with underserved groups is a target domain for HealthIT.

Snowbird: Thinking Big in Computer Science

In academia, C@tM, computer science, discipline, research on July 20, 2010 at 12:56 pm

I’m in a session at the CRA Snowbird conference focused on thinking big in Computer Science as a means to pursue large grants. The session is organized by Debbie Crawford at the NSF. There were a range of speakers who each took a turn to provide their thoughts on pursuing large projects.

The first project is about robotic bees and the research to create them (it's an NSF Expeditions). The problem set-up is lovely: 30% of the world's food requires pollination by bees, but bee colonies are dying. Can robotic bees help? It is simple to explain (and not to answer) and very compelling. Then there's the team structure. They have 10 or so faculty in different research areas/disciplines, but all with core interests focused on robotic bees and other insects (I think that's what I took away). Of course this suggests lots of related and prior work by the team members. Additionally they are all collocated in Boston, with one person in Washington DC. Finally, collaborations already existed among various team members, so although the whole team had not worked together, they all had some experience of working with other members of the team.

The process the Robobees team used to create the grant was a collocated brainstorm meeting, the purpose of which was to generate the outline for the Expeditions proposal. They used the outline to divide the work, with each PI contributing text and figures where appropriate. Then a smaller number (I'm guessing the lead PIs) integrated the text and circulated the document for feedback.

The next person to speak was from the DoE. I didn't personally get quite as much out of this talk as the others; I am sure that was due to my interests. What I did take away was that the DoE has lots of opportunities for computing, ranging from architectures, systems software, operating systems, and programming languages to the fields that make up computational science and engineering. What did surprise me, and perhaps it shouldn't have, is that the DoE is also focused on networks and remote collaboration tools to support distributed science.

The next person to speak was from DARPA. He talked about how to win (a DARPA contract).

In order of priority, he began with: ideas matter. There's a paper called the Army Capstone Concept that potential investigators should read. He asked the community to aim higher and bolder. He didn't speak to this point, but I thought I saw on that slide that the idea must also be doable. The previous DARPA plenary said that it was alright to aim high and fail (at least initially), so I'd have liked to know more about what doable means. Second, it must fit the DARPA mission. Third and fourth were cost realism and the proposers' capabilities and related experience. With respect to related experience, he emphasized more than once that it should not just be your stature, but actual experience. He also said that it sometimes helps to write your proposal in parts, with budgets for the various parts, because that can help in contracting (they may ask for some but not all of the parts, I inferred, so modularity is advisable). Finally, he emphasized engagement with the Program Managers, before the BAA and after the grant is awarded. He also reminded the audience that they read a lot of proposals, which I took as a reminder to make yours engaging and interesting to read.

The next speaker came from the University of Michigan. He provided the experience of someone who has run large centers, and therefore has been successful in raising money for them. He did a great job of providing the faculty/lead PI perspective as well as suggesting what department chairs should do to help faculty who want to write large grants. He began by saying that not all faculty are interested in writing large grants and that in his opinion it’s pointless trying to encourage everyone to do so. Instead, find those who are willing and support them in doing it.

He argued that the reason to write large grants is the visibility and impact for both the individual and the institution. Another value he highlighted was that a large effort can create a locus for other activity; a large center can spawn and facilitate related research efforts. But there are challenges. One is interdisciplinarity, not just external to CS but also among the specialities of CS, so all the challenges of doing interdisciplinary work apply. Another challenge is that you have to have complete coverage of the space you are proposing around: you have to plan for it from the beginning, and the PI has to ensure that any gaps are filled, even if the gaps that need filling are not attractive research to the person who takes the ultimate responsibility. I had the impression that what he was saying was that one of the responsibilities of the PI was to fill those gaps.

So what can department chairs do to help? Reductions in teaching load, staff support, and institutional support. And recognize that the time spent can diminish ongoing research activities. Also, since it requires resources, pick the best opportunities. And if you are successful, recognize that you get only a part of the action, because the large projects will span units and institutions.

Finally a CISE NSF person spoke. She mentioned two large center programs, the ERC and the STC, as well as a center-like entity, university-industry partnership centers. CISE has a center-like program called Expeditions, the current round of which will be announced next month. There's no restriction on topic, but it should have impact on CISE, society, and possibly the economy. They look for something where the whole is greater than the sum of the parts; if they could have funded ten small projects instead of the one large one, it is not compelling. Expeditions was partially designed to fill a gap created by DARPA (but there was an observation that it might now be a gap that the new DARPA is filling).

Expeditions is also a mechanism to encourage the Computer Science community to engage more in center-like activities. The NSF representative observed that Computer Scientists participate in (I presume this means lead) fewer ERC and STC centers than other disciplines. Expeditions is a launch pad for potentially taking things to a center activity when the Expedition is done. An interesting note: the number of submissions to Expeditions has dropped massively, from 68 to 48 to 23 in the three years it's been active. Finally she noted that some Expeditions had lead PIs who were not Full Professors; Assistant and Associate Professors did succeed with these efforts.

Snowbird: Democratizing Innovation

In academia, C@tM, computer science, discipline, research on July 19, 2010 at 11:35 am

Just finished listening to a talk by the deputy director of DARPA. It was one in a series of talks about the “new” DARPA, which in this case was positioned as one that’s going to align more effectively with the culture of Universities.

Much could be said, but I want to focus on one aspect of the talk. One of the thrusts within DARPA is focused on understanding what social networks make possible. He talked about the Iranian election and how technologies were used to mobilize people in protest. This was part of a discussion about democratizing innovation.

What is that? My understanding is that it’s a focus on how social networking technologies make it possible for large groups to mobilize around shared interests (ideological, political, religious, entertainment) that are not related to geographic borders. Technologies are creating new borders, new edges, that DARPA needs to understand.

And this reminds me of Computing at the Margins and Global Development. We need to understand these edges too… It's not just building technologies for those who have none, but leveraging what's already in use to develop it further. But I think that we're also very actively attempting to change the boundaries, by bringing more people into the digital society. I'm still pondering the implications of this, while listening to the DARPA director discuss human motivations and the need for sociologists and so forth to understand how social networks work.

Local Health Systems

In C@tM, empirical, ICT4D, research, wellness informatics on April 26, 2010 at 12:43 pm

Browsing Ghana’s Ashesi University College website I found the following course.

Sociology: Traditional Medicine
The course is intended to throw light on the structure, function and practice of African traditional medicine and its relationship with modern (scientific/western) medicine. By the end of this course, students will be conversant with African traditional medicine and attempts made to incorporate it in primary health care. The course will:

  • Give an insight into African Traditional Medicine
  • Elucidate the pattern of articulation between different persuasions/types of traditional medicinal practice.
  • Determine the nature of interrelationships between traditional and modern medicine.

This caused me to reflect on a question I asked at the CHI WISH workshop in the session on addressing disparities in health systems for low-income contexts. The panel was composed of a number of speakers who work in African contexts, and so I asked them how they integrated indigenous systems of medicine/health and wellness practice into their research.

I asked the question, because I think it’s one that has not received enough attention. And yet, it’s going to turn out to be crucial. It shows itself to be crucial in the United States, because not everyone responds to the medical establishment in the same way. Public Health researchers argue that culturally focused interventions and information are crucial to having people take up and apply the knowledge in their daily lives. It’s even as simple as making nutrition advice relevant to the food consumption practices and traditions of a particular community.

The answers I got were interesting. The one that most interested me was learning that South Africa is integrating indigenous medicine into the offerings of the health service. I can't find a good reference for this, but I did learn that part of the aim is also to prevent illegal and harmful medications being sold. The others focused on seeing it as part of the context of health and wellness, part of the overall picture of what it means to be in good health. And as yet, I do not hear any conversation about integrating these types of practices into Health ICTs. And it's not just the practices themselves; it's the people, organizations, and institutions of indigenous medicine that also need to be integrated in order to be holistic and representative.

I can't help thinking that this will be an area that presses very hard on the definition of health. And it may challenge us to design systems that we don't entirely agree with, because that's what the end-user wants. And of course it will work against generalisable solutions. Health is cultural and local, and indigenous medical systems, and all that they imply, highlight this property of health.

Twitter and Earthquakes: Haiti, Chile, and now Baja California, Mexico

In C@tM, computer science, empirical, HCI, research, social media on April 4, 2010 at 9:52 pm

As many of you know, there's been another large earthquake, this time in Baja California, Mexico. Original estimates were that it was a magnitude 6.9 earthquake (it has been revised to 7.2 by the USGS, and I now read reports of 7.4 on Twitter). I was in Irvine when Northridge occurred, so first and foremost, and as always, my thoughts go out to those who went through it.

It's inevitable that comparisons will be drawn. They were drawn between the Chile and Haiti earthquakes: the differences in responses and so forth. Hopefully, if one good thing can come out of a large earthquake, it is that it teaches us what we need in order to respond better and more effectively to the next one.

I have a smaller question, which focuses on the role of social media; I've blogged about my perceptions of social media use in response to the Chile earthquake here. I'm finding it harder, personally, to find the streams on Twitter associated with this latest earthquake. That's interesting to me, since it happened to be felt in a place that's very technologically enabled, Southern California (reports of tall buildings swaying in San Diego, for example).

So my questions are as follows:

  1. How do the uses of various media streams, Facebook, Twitter, etc., vary across the different earthquakes? I know that my colleagues at Colorado are working to try to unify the syntax of the responses to aid in search and rescue, but I'm also curious about the volume of responses and the media used. Why? Because I think we can understand the cultural differences in social media uptake through these sorts of events, and I think that's not just an important and interesting research question, but also a crucial piece of the puzzle for understanding how to respond.
  2. Are there differences in how people use them? Again, this turns on cultural concerns. I know we have a strong focus on the basics: Where are my friends and family? Where can I get food and water? What is the whole situation? But what, if any, completely local responses are important? I recently watched a program that focused on recovery efforts in Haiti and included discussion of the role of voodoo leaders in shaping some people's understanding of what had happened. I know what you're thinking: yes, it pertains to some of the research that Susan Wyche does. That's true, but it's also important to understanding how to frame response, who might be involved, and what the parameters of culturally appropriate action and interaction are… and surely that's got to matter.

I am sure that there are better and more questions to ask. These are mine as I watch a few Twitter stream counters go up, but so slowly in comparison with Chile. Of course it was said then that Chile had really taken to Twitter, and now I'm going just from my sample (I sampled the hashtags I could find, but of course that includes at this point several hundred different individuals and institutions…).

And so my thoughts are mostly with the people of Mexico and California. But my thoughts are also with the people of crisis informatics and my colleagues in Colorado. They have so much to do, so many possibilities, and, I suspect, a sleepless night tonight while they gather data and begin their process of tweaking the tweet once again.

Development: A Case for Human-Centered Economics

In C@tM, computer science, discipline, empirical, HCI, ICT4D, research on March 31, 2010 at 10:13 am

I was reading a paper by Dorothea Kleine. (ICT4What? from the ICTD 2009 conference)

She observes the following paradox: the degree to which ICT4D researchers have to legitimise the impact of their research to funding agencies seems paradoxical given the widespread belief, and supporting rhetoric, about how much ICTs (most notably the internet) have changed the lives of millions of people.

I should immediately say that I'm not entirely surprised that this situation exists, nor am I certain that it's a paradox. I think it exists because the impact of ICTs has largely not been for those who are the focus of ICT4D, but rather for those who are the focus of ICT4Me (i.e., a middle-class Caucasian living in a city in the richest nation in the world).

However, she has a thesis about why this situation exists, with two reasons. First, it turns on what she, and others, see as a definition of development that's too closely coupled to economic growth. Second, the impact of ICTs is measured against a particular outcome, rather than against the possibility that technology empowers people in a variety of ways. Briefly, the second seems very plausible based on a variety of accounts of impact I've read over the decades. Technology is unpredictable, users are surprising, and outcomes are difficult to tie to a particular technology. All of these things are topics of research, which certainly suggests that measuring impact against specific a priori outcomes is a non-trivial process.

So back to the first point: defining development as economic growth. This is particularly interesting given that uneven growth, dependency, and inequality are all features of the most prevalent type of economic system: capitalism. Is it just me, or does thinking about things this way make development seem almost menacing: the idea that intervention would potentially increase wealth, but at the expense of someone else?

But, I feel times are changing. Not just there, but also here, there are questions of how we’re all going to manage economics in balance with other concerns such as the pressing social problems that increasingly seem to be defining the mission of Georgia Tech, or the ecological challenges that are increasingly confronting the United States.

In the School of Interactive Computing we've been having a discussion about a human-centered approach to Economics. It was triggered by Brooks' Op-Ed piece on the future of the discipline of Economics, but it's also in Paul Krugman's piece. In a nutshell, both of them are calls to reexamine economics, to move it away from being a mathematical, rational or quasi-rational discipline (as an explanation for why Economists were the last to predict the recent economic downturn). Our school-wide discussions turned on what we, as faculty, thought a combination of people and machines might be able to bring to Economics. And then I was reading Kleine's piece about development. It seems to me that there's an important connection here. Is development the point of economics? What I mean is: is development the reason that individuals and nation-states practice any form of economic reasoning? In other words, is it to somehow move towards a desired state, either personally or as a nation, that is growth… by some definition…

And so what is development (a question that people in the development studies community, the ICT4D community and no doubt others, but perhaps less so economists, have been wrestling with)?

Kleine introduces Amartya Sen's approach to development, which focuses on freedom of choice for people in the personal, social, economic, and political spheres. To capture this, Sen distinguishes functionings, the outcomes that people desire, from capabilities, the sets of functionings that a person can currently achieve. The goal is to provide more capabilities, which will in turn let people reach more of their desired functionings. Economics may factor into this, but it is only one component.

She suggests that ICT4D is potentially a test case for something better. And that something better is understanding how technology may play a role in a notion of development where the human is squarely at the centre, making their own choices. And I wonder whether she's right.