Beki Grinter

Posts Tagged ‘bell labs’

Teenagers and Telephone Switches: A View of Consumption from Down in the Infrastructure

In research on February 3, 2012 at 11:40 am

Recently, Apple’s iOS 5 release created the greatest demand ever seen on the UK broadband infrastructure. Since reading this, I have wondered what implications it may have, or already be having, for broadband providers and for their requests to the builders of the technologies that comprise the broadband infrastructure.

When I worked at Bell Labs, I worked with people in the switching division. It was the mid-90s, and the average call time had shifted from 20 minutes to 3 hours because of dial-up modems and the Internet. This was a really significant change because a series of switching design decisions had been based on that 20-minute average. Suddenly those decisions were wrong, and that was generating all sorts of new work. The far-away world of end-user use was changing Lucent’s business, and rapidly so, changing the very infrastructure of networked activity by putting it under increasing pressure. Hence my curiosity about Apple’s similar move recently.
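To give a feel for why a shift in average call time matters so much to switch designers, here is a minimal sketch using the Erlang-B formula, the classic model telephone engineers use to dimension trunks from arrival rates and holding times. The specific numbers (4 call attempts per minute, 100 trunks) are my own illustrative assumptions, not details from the story, and I am not claiming this is the model Lucent's engineers actually used:

```python
def erlang_b(offered_load, trunks):
    """Probability a call attempt is blocked on `trunks` circuits
    carrying `offered_load` erlangs, via the Erlang-B recurrence:
    B(0) = 1;  B(n) = E*B(n-1) / (n + E*B(n-1))."""
    b = 1.0
    for n in range(1, trunks + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# Offered load (erlangs) = arrival rate * average holding time.
calls_per_min = 4
load_20min = calls_per_min * 20    # 80 erlangs: voice-era assumption
load_3hr = calls_per_min * 180     # 720 erlangs: same callers, 9x hold time

# With trunks sized for the 20-minute average, blocking is tiny;
# at the dial-up holding time, most attempts cannot get a circuit.
print(erlang_b(load_20min, 100))   # small blocking probability
print(erlang_b(load_3hr, 100))    # blocking dominates
```

The point of the sketch: the same number of callers, dialing the same numbers, can overwhelm a switch simply by holding their circuits nine times longer, which is why a changed usage pattern rippled all the way down into switch engineering.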

In HCI, the ideas of network complexity and network cuts are receiving attention. By network complexity, I mean the idea that behind our acts are large and complex networks of people and technologies, extending in time and space beyond the moment of encounter: back to the inception of each technology’s design, and through the technical experiences of each individual and the way those shape all future encounters. Alex Taylor, in his CHI 2011 paper, introduced me to the idea that we cut these networks. In order to make an analysis, we typically foreground some of the people and technologies involved in human-computer interaction while hiding others. Lucy Suchman argues that one party who is often hidden, at least partially, is the developer.

I always thought that seeing the work to rework the infrastructure was a unique opportunity. Now, I think the uniqueness of it was that I saw a piece of the network complexity typically cut from analyses, particularly those starting with the end-user. Instead of seeing dial-up traffic and its objectives as the human-computer interaction, I saw engineers architecting, building, and testing switches that a distant arm of the company sold to telephone operators, who in turn had marketing departments who sold plans to end-users, who were using those plans to dial numbers that connected them to a completely new network that leveraged the telephone network but changed it. My position of privilege was to see deep into the infrastructure, to have that piece of the network not be cut (likely, of course, at the cost of cutting others). To see the world of end-use from the point of infrastructure is, I think, somewhat more unusual.

New Insights into Familiar Friends

In computer science, research on September 23, 2011 at 12:55 pm

I’ve always felt lucky that I’ve had multiple jobs in the course of my career. Modulo the research statements, the recruiting processes, and the visa issues that all came with them, it’s been a good thing 🙂 For example, I can compare and contrast experiences and learn from that. I feel so lucky to have met so many talented and creative researchers, and to have worked alongside them in different contexts.

So, today I want to describe another reason, which is the new insights I’ve gained into the technologies that had seemed like familiar friends.

At Bell Labs, I learned that the company had a marine biology research focus. I knew that Bell Labs had an amazing history of innovation in areas unlikely for a telephone company (for example, the creation of the first synthetic version of the B12 vitamin). So, when I learnt about the marine biology, I thought how wonderful it was that the Labs continued this tradition of commitment to science. But I was soon corrected. Marine biologists are very central to telephony, because a surprising percentage of underwater cables break because they are a tasty meal for aquatic life. The Internet was, and remains, vulnerable to shark attacks.

At Xerox, it was high-speed printers. Truthfully, they were a source of resentment for me after one of them printed my dissertation so quickly: my thesis took months to write and less than three minutes to print. But one day, while I was standing by one of these high-speed printers, someone explained to me how they were designed to achieve high speed while minimizing flammability. It had never occurred to me how significant it is to design a high-speed printer so that the paper is pushed quickly, but in ways that ensure it does not catch fire.

I have no idea how you protect the Internet from the living or ensure that a printer won’t catch fire, but both of these encounters briefly opened up these technologies for me in ways that I had not considered. Familiarity was challenged by a realization that there was “a lot more going on here” than I had ever realized. Having that experience multiple times, shaped by companies whose staff had to hold the collective knowledge that “there’s a lot more going on here than you realize,” is another reason I am so glad to have spent time in different places. Bell Labs and Xerox taught me, through these sorts of encounters, that technology is all about complexity.

Computer Science: Why I care

In academia, computer science, discipline, HCI, research on October 14, 2009 at 6:23 am

As I’ve said before, I’m very interested in disciplinary evolution. There are many reasons, but one of them is that I’ve been discussed as an example of someone who is not a Computer Scientist. At least three things bother me about this discussion. First, these criticisms are largely said about me and not to me. Second, it assumes that the discipline of Computer Science can be defined, and I don’t think the evidence supports that. While I don’t completely agree with Eden’s arguments (as an example of writing about multi-paradigmatic behaviour in CS), I do concur that we’re proceeding in multiple distinct paradigms that come with different, possibly irreconcilable methodological, ontological, and epistemological assumptions. That makes me wonder whether we collectively know what the discipline of Computer Science is all about. Third, the criticism also dismisses the commitments I’ve made to my profession, as well as others’ assessments of the role of my research in the field of Computer Science (an obvious example: I publish in conferences mostly sponsored by the ACM, the professional association for Computer Science researchers, and others cite my work in other Computer Science conferences).

I have three degrees, all in Computer Science. While degrees do not make a Computer Scientist, I would suggest that they gave me many years of training in understanding what is included in Computer Science. But degrees cannot define a Computer Scientist; after all, some of the most significant innovations come from people who don’t have degrees in Computer Science. No one is what their degrees say they are; it’s what they choose to do and why.

So, my commitment to Computer Science was cemented in graduate school. I went to graduate school at UC Irvine. The other day I found a paper that discussed the program I was in: the Computers, ORganizations, Policy and Society (CORPS) group. It was not HCI, although it was similar; it was focused on Computing as an empirical science, combining a priori theories that can explain technologies in their contexts of use with a posteriori empirical analysis of what happened when technologies were deployed in particular contexts. I was hooked; this made the Computer Science of numerical analysis, formal methods, and graphics make sense to me.

Three and a half years later, I graduated with an MS and PhD. My thesis work explained how dependencies in code reflected dependencies in the division of labor, and showed how these labor relationships were not being accounted for in the processes used to develop software. Because of this, I received an offer of employment at Bell Labs, and I joined its Computer Science research division. My job description: continue to do Computer Science research on the human-centered problems that continue to plague software development (in the 1960s it was a crisis, in the 1990s it became a chronic crisis, and apparently hell). I’ve written about how amazing this time was and how much I learnt. Bell Labs demanded excellence in science; it was a world-class research laboratory, and so it held us all to the highest standards of research in our discipline: Computer Science. Each year I continued to do research in this space and had the honour (it was terrifying at times) of having my performance assessed by the type of people whose contributions to Computer Science are central to the discipline. But of course this was simultaneously the privilege of working at Bell Labs: to have your own standards set by people who made Computer Science.

Four years later, it was clear that Bell Labs was going to go through what many nationally acclaimed scientific laboratories go through: downsizing. I joined the Computer Science Laboratory at Xerox PARC as a member of the Distributed Systems area (why this comes as a surprise to people, I do not know). CSL was very similar to Bell Labs, but PARC is physically smaller than Bell Labs was, which made it more intense: the evidence of PARC’s contributions to Computer Science was everywhere, and you could physically see it (like the Ethernet). Again, what I was responsible for was advancing Computer Science; that’s how I was judged.

So, my entire career through Bell Labs and Xerox PARC was as a practitioner of the research of Computer Science. That’s who mentored me, set the standards, and evaluated my contributions, with the help of external communities of researchers who accepted my papers into journals and conferences in the discipline of Computer Science.

From there I joined Georgia Tech, and one day I discovered that I was in the School of Interactive Computing. And I like it very much. But I think there’s some confusion about whether Interactive Computing is Computer Science. To me the answer is obvious: it’s the third paradigm of Computer Science. It’s an empirical, experimental discipline, drawing on a priori theory to inform the design of computer programs, some of which are designed to push into new computational space (such as robotics), and others of which are designed to probe phenomena (such as how people learn). We use empirical scientific investigation to determine whether we have been successful and, if we have not, what has failed. It is the science of computing that is the raison d’être for Interactive Computing.

To those who have told someone, but not me, that I don’t do Computer Science, this is my response. Computer Science is complicated to define, and we’d all be better served by understanding it more deeply. And I am lucky to have had a career where the standards of engagement and assessment were set by people whose contributions to Computer Science are clear: people who have collectively done the important work of defining the field. I will also note here that I never heard any of those people discussing who was not a Computer Scientist; they were far too busy trying to actually develop the field. Finally, I want to close with the comment that I am categorized as a minority in Computer Science because I am a woman. I struggle with that categorization, but I believe that some of the choices I made professionally have come with higher costs for me than they would have if I had been a man. So, one reason I am very committed to Computer Science is that I’ve given a lot to it, and it came with costs: things I reluctantly gave up to pursue a career in Computer Science.

Three Paradigms of Research in Computer Science

In academia, computer science, discipline on October 13, 2009 at 12:25 pm

Recently I wrote about some of the challenges that the new discipline of ICT4D faces (based on my reading of others’ scholarly discussions), and what the discussion of those challenges tells us about Computer Science. I suggested that new fields provide an opportunity to look under the disciplinary hood of Computer Science, because disciplinary challenges are usually reflections of previously hidden assumptions.

But there’s another way to examine the assumptions of a discipline, which is to read papers that discuss them openly. I recently read Amnon H. Eden’s “Three Paradigms of Computer Science,” which does just that. He suggests that Computer Science is “unusual” in that it has three mutually exclusive paradigms that guide research in the discipline. The paradigms reflect three questions that, in my own experience, are asked about Computer Science: is it a branch of Mathematics, of Engineering, or of the Sciences? He suggests that all three paradigms are currently at work in the methods and results being produced under the banner of Computer Science. So what are the three paradigms?

Before turning to each of the paradigms, note that for Eden, activity in Computer Science is organised around the program (including databases, WWW applications, operating systems, device drivers, viruses, etc.), both as it is written and as it is run. He therefore compares the paradigms based on how they treat the program methodologically, ontologically, and epistemologically.

Rationalist Paradigm: Computer Science as a Branch of Mathematics (uses Theoretical CS as example)

As a branch of mathematics, writing programs is treated as a mathematical activity, and “deductive reasoning is the only accepted method of investigating problems” (p. 144). Programs are mathematical expressions. Research results, i.e., knowledge, focus on the complete (full and formal) specification of programs and emphasize a priori reasoning.

Technocratic Paradigm: Computer Science as a Branch of Engineering (uses Software Engineering as example)

The technocratic paradigm, Eden argues, evolved in the face of arguments that the complexity of systems put the rationalist paradigm out of reach for whole classes of programs. Eden draws on DeMillo, Lipton, and Perlis (1979) as early evidence of this paradigm. As a branch of engineering, its methods emphasize the production of reliable programs. The discipline draws on established engineering methods, as well as demonstrating through rigorous testing that programs exhibit reliable behaviours. It’s impractical (or impossible?) to formally specify a program, so we turn to a posteriori knowledge (i.e., results from experience). In this paradigm, he argues, the ontology is one of nominalism: programs do not exist in the abstract but only in the concrete. But he’s also quick to point out that there’s actually no clear theoretical commitment to the concept of a program within this paradigm.

Scientific Paradigm: Computer Science as a Natural/Empirical Science (uses Artificial Intelligence as example)

This paradigm draws from Newell and Simon (1976). It is an orientation to Computer Science as an empirical and experimental science, and it includes the experimental science of human-built entities, since programs are made by people. Eden argues that this paradigm differs from the technocratic paradigm because the focus is not on reliability but on hypothesis-driven scientific experimentation, including the use of programs as tools in hypothesis-driven examinations of phenomena that exist in the human or natural world. Methodologically, the scientific paradigm relies on deduction and empirical validation to explain, model, and predict program behaviour. The difficulty, in practice, of always being able to deduce program properties means that the paradigm relies on both a priori and a posteriori knowledge. Its ontological assumption is that programs in execution are similar to mental processes.

Beki’s take-away. I’ve been hearing discussions about whether Computer Science is math, engineering, or science for a long time now. This paper helps me understand that the discipline is actually all three. But now I wonder whether it can survive as all three. Perhaps these are the cleaving points for a future Computer Science? I also wonder whether my colleagues would subscribe to these paradigms; I’m guessing not all of them do. But I can’t help feeling that within all of this, and perhaps not entirely characterised by this piece, are some important things to understand about Computer Science. It’s definitely got me thinking, and a paper that does that is worth its weight in gold.

From Newell and Simon’s 1976 Turing Award lecture:

“Computer Science is an empirical discipline. We would have called it an experimental science, but like astronomy, economics, and geology, some of its unique forms of observation and experience do not fit a narrow stereotype of the experimental method. Nevertheless they are experiments. Each new machine that is built is an experiment. Actually constructing the machine poses a question to nature and we listen carefully for the answer by observing the machine in operation and analysing it by all analytical and measurement means possible.”


“We build computers and programs for many reasons. We build them to serve society and as tools for carrying out the economic tasks of society. But as basic scientists we build machines and programs as a way of discovering new phenomena and analyzing phenomena we already know about. Society often becomes confused about this, believing that computers and programs are to be constructed only for the economic use that can be made of them (or as intermediate items in a development sequence leading to such use). It needs to understand that the phenomena surrounding computers are deep and obscure, requiring much experimentation to assess their nature. It needs to understand that, as in any science, the gains that accrue from such experimentation and understanding pay off in the permanent acquisition of new techniques; and that it is these techniques that will create the instruments to help society in achieving its goals.”