I was drafting a reply, and decided I'd rather write it here.
The gist, as I read it, is that he asks why academic disciplines are organised by outcome rather than by method. Asking the question this way lets you explore other connections. In the case of Computer Science Education, it turns the focus away from outcomes (measures of learning success) and towards the experiences that will create those desired outcomes: what is the experience of good education?
This got me thinking about what I see as an interesting difference between some of the sciences and others, a difference that has its origins in methods and theories.
I’ve spent most of my research career as a practicing Computer Scientist. My education is reasonably traditional, and my career has been entirely within institutions focused on the advancement of Computer Science. That said, I’ve spent my research career as a user of methods and theories that hail not from Computer Science but from Sociology and Anthropology, and to do that I’ve done my best to learn about those disciplines. In that journey, I’ve been continuously struck by the volume of debate within them.
Specifically, I’m struck by how much discussion and difference there is in methodology and theory within both disciplines. My analogy: what would it mean if we had multiple, competing approaches to Computer Science? And I suppose we do. I understand that there are significant philosophical differences within AI. But I don’t think we teach Computer Science in ways that amplify and centralise those philosophical differences. I’m aware that these differences exist, but I’ve never had a class or seen a book that discusses them, why they exist, and what their origins are.
Are we poorer for that? I increasingly think so…
Another example. I used to be a Software Engineer (which explains why I still review papers for ICSE, I suppose, and why I can’t stop subscribing to Software Engineering Notes). There are a variety of approaches to organizing the work of software development. Some of the newer Agile or Pair-Programming techniques contrast with the Chief-Surgeon model. And I have read arguments about the differences, and about the outcomes and experiences they make possible (in pair programming two people share a machine, so we say it’s a good way to learn and a good way for the person watching to catch the mistakes of the person typing; we argue that this comes with a certain productivity hit because half as many machines are in operation; and so we continue…).
So we have those debates, based on outcomes and on elements of the experience (which we conveniently blur into the debate), but we never really systematically unpack and discuss the many different ways that work can be divided. (My first advisor, Rob Kling, told me never to use the word organization as a noun; it was a convenient gloss over the vast array of organizational types, and I think he was right.) Organization is a verb: it is the division of labor and the assumptions that frame a particular set of institutional arrangements.
And I think in disciplines where there is lots of debate about the philosophical nature of the world, there’s far more explicit discussion of theories and methods and their explanatory power as they relate to a particular set of philosophical commitments. I think Computer Science could benefit from the same approach. Why do we disagree? What does the nature of the disagreement tell us about the nature of the world?
Perhaps we don’t because we focus on the machine (for example, explaining differences as technical tradeoffs, or treating the field as a science of the innards of the machine itself). But those machines do not exist in the world apart from humans. The computer is a human creation. It is imagined and built for humans, with human-centered goals (such as faster machines capable of solving more complex problems, relying on novel algorithms, protocols, and architectures). Our philosophy turns in significant part on the belief that what is done in the machine is justifiable because it makes advances possible, but those advances are human advances.