Beki Grinter

Posts Tagged ‘practice’

The Case for Building on Your Strengths

In academic management, computer science, research on January 19, 2011 at 7:57 pm

I just saw Jim Coplien’s blog post on building on your strengths. In addition to recommending that you do that, he also highlighted the value of creating a team around you of people who complement your weaknesses. I mostly like the article. I think it is good and important to pay attention to your strengths. I think it’s a great idea to build a team of people who have the skills you lack.

I also think it can be rewarding to try to work on your weaknesses, particularly as you see improvements. Or to tackle systemic weaknesses: to put yourself in situations that challenge things that are hard to change but nevertheless are not as helpful as you would like in a career. The reward is different, either seeing those improvements or just knowing that you can. Perhaps I’m alone, but it’s one way I tackle my weaknesses.

And while the University can perhaps appear to be an individualistic culture, I find myself most impressed by leaders in academia who know how to create a team that is recognized and valued for its contributions.

The Difference between Theory and Practice

In computer science, discipline, empirical, research on February 24, 2010 at 3:04 pm

There’s a joke that goes like this:

In theory there’s no difference between theory and practice. In practice there is.

I was reminded of this saying when I read this piece in the New York Times (OK, yes, sometimes I have a backlog of reading).

This is a very strongly worded piece, and I am sure that some people will disagree with it. I find myself resonating with parts of it. That’s not terribly surprising of course. Perhaps what really resonates with me is the caution about separating economic systems from the context that surrounds them. The author says:

As I see it, the economics profession went astray because economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth.

And I resonate with that because I feel that sometimes this very same thing happens within Computer Science. Sometimes, the human-centered aspects of the discipline are articulated as not being a part of the discipline. I did say sometimes. The author is arguing that having humans in the loop makes economics more complicated. The financial collapse is a story of humanness in all its forms… it doesn’t mean that understanding the mathematical underpinnings is not valuable, it just means it’s not enough.

And the same is true of a discipline of Computer Science. The Computer is a human-designed, human-built artifact. The principles upon which it is based are human-generated (by Computer Scientists). Computers are also human-used, human-consumed. And we use these facts to talk about our solutions. Our visions of what is made possible through scientific and technological innovation in Computer Science are human-motivated. For example, people want to mine, search, and access data, stream video around their house, debug their code, connect their laptops securely to wireless networks, be supported by intelligent machines in office and house work… the list goes on.

So, I vote for a Computer Science that pursues theory and practice, because they matter equally to our discipline.

OOPSLA becomes SPLASH

In computer science, discipline, research on February 14, 2010 at 2:59 pm

OOPSLA is changing its name to SPLASH.

SPLASH stands for Systems, Programming, Languages, and Applications: Software for Humanity. I was very excited when I saw the last two words, and curious.

Unfortunately the SPLASH website does not say much about the choice of name. Reading around, I see that there are discussions about how much OO there was left in OOPSLA. But I wish I knew what caused the last S and H.

I read, though, that OOPSLA attendance has been declining. And in another post, Ralph Johnson describes how other conferences will be co-located with OOPSLA (which will remain as a research track). I found the history of Onward! and its growth from within OOPSLA to becoming a stand-alone event interesting.

I participate in CHI. Sometimes we talk about the many different tracks as being confusing (for example, does a Work-in-Progress count as a publication, thereby preventing the authors from republishing the work later?). More generally, what work do all the different tracks at CHI do? I know that they are hard to organize, so it had better be worth it.

I wish there were more on the OOPSLA change; I can’t help thinking that some of it makes the conference look more like CHI in terms of the variety of tracks. And I would love to know more about the conversations that led the community to decide that this name change was the right thing to do.

Growth of ICT4D research

In ICT4D, research on February 13, 2010 at 4:45 pm

Richard Heeks recently posted data showing the growth of ICT4D research.

I find this interesting.

I am sure that you can produce similar data about Biomedical Informatics research. Easily. That would not pique my interest as much.

Because there’s an important difference between Biomedical Informatics and ICT4D: a battle for legitimacy. From what I understand, the history of Biomedical Informatics, in addition to being a history of growth, is one of finding a name. Biomedical Informatics appears to supersede Health Informatics (although that’s still very much used). It’s also meant to imply more than Bioinformatics. And then there’s a history of different names being used in North America and Europe (I think that’s some of the difference between Health and Medical Informatics). But things do largely seem to be converging on Biomedical Informatics as the right name for the discipline, with specialities in all sorts of things such as public health informatics, clinical informatics, bioinformatics, and so forth.

But, that’s the name… and the name has been changed and discussed to reflect what should be included in the field. (I have my own opinions of course, which turn on doing the thing that I find myself frequently doing, which is to inspect assumptions… that’s how the idea of Wellness Informatics started, as a means to organize that type of inspection… and I still don’t know where it stands, but I have enjoyed, and continue to enjoy, the conversations and people that that process has facilitated.)

By contrast, ICT4D has been growing while the people doing the research have been discussing what the research in the field is. That, at least to me, seems quite unusual: to have sustained growth in, and increased commitment to, a field of research whose case is not clear, even to some of those who work in the field.

Now that’s interesting. There seems to be a collective “gut sense” that this is an area with rich possibility even though the nature of that possibility is hard to pin down. I wonder whether some of it can be explained by the low morale in CS, and what I see as some of the differences that this area supports… But I don’t know.

All I do know is that smart people, and increasing numbers of them, are putting their bets on ICT4D. And perhaps that’s as it should be. Some of my management are fond of the idea that high risk equals high reward. Well, I’d say it’s pretty risky to take on things whose research reward is not clearly understood.

Media in the Modern University

In academia, academic management on February 5, 2010 at 6:12 pm

Does talking to media outlets have a place in the modern University?

The answer to this question is one that I have heard debated among my colleagues. It’s a complicated proposition, but I have long thought that the media does have a role to play in the life of a faculty member.

I understand the concerns.

It’s not usually the case that media outlets care much about a particular finding or result. There are exceptions of course, but I think what the media usually want is an expert to comment on a topic of their interest. This gulf takes some time to cross. I was once interviewed about the World Wide Web. It was when the 100 millionth web site was reached. I did not realise it was for television at the time I agreed to comment. I was asked a variety of questions, and luckily I know my web history since one of them was about the origins and intent of the original WWW site.

Why did they ask me? I’ve not done much research on the web. I mean, I use the web, and certainly some of the things I have written about imply the presence of the web, but I would not describe the web itself as a central interest. They asked me because, I think, they wanted someone who could answer their questions. I could tie the uptake of the Web to the Internet explosion, as both a driver of it and a reason for it; I could offer some reasons why people would use it (to form online communities); and so forth. I’m certainly not alone in being able to answer these questions, but I didn’t mind spending the time preparing for the interview so that I would provide the answers that they wanted and that would be useful.

Another concern that people have about engaging the media is that they may mis-characterise or miss the significance of the topic being discussed. I’ve certainly felt that sometimes the best things I have said during an interview were not picked up, and some of the more obscure things were. I think this is another type of translation work that I, as much as the reporter, own the responsibility for doing. What matters to me, or to my research community is not always what a reporter thinks will sell papers/adverts/etc… so I have to either decide that what they think is important is OK or work harder to explain the relevance clearly.

I wondered myself whether having my research appear in some places was potentially damaging to my reputation. I decided some time ago that it was likely not the thing I feared. So, when my research featured as the number 1 Threat Down on the Colbert Report, I was actually quite proud. Some of it, especially the robot work, has been picked up in a variety of ways, all featuring the rather entertaining side of the work. What I think is most important now is a) to protect and explain the subjects of the research (we don’t want to make fools of the people who have given us the data that we’re discussing) and b) to ensure that students involved in this work also don’t feel that their research has been subject to derision. Again in the case of the robot work, it is the case that people dress up their vacuums, indeed there’s a substantial revenue-generating industry in this area, and people do it for reasons that are similar to blinging phones and so forth. It’s very human, and it is, when you think about it, something that makes you smile. No-one was harmed; it’s actually a discussion about good feelings… heartwarming in a world that is full of bad news.

And good news is one reason to talk to the media. I was asked to comment on Atlanta’s increased wi-fi density. Without hesitation I said that I thought Atlantans should be proud of this because it reflects an orientation towards being experimental with technology (this was back when it was fairly new, and San Francisco was the type of place people put in this genre of wi-fi-dense community network). That story generated a lot of positive feedback. A few people talked to me about it directly, but the Institute also collects metrics about the role of media and positive attributions of Georgia Tech. That story, which was not much longer than this paragraph, with that one quote, was ranked one of the highest stories that year. Surprised? I was too. But I was entirely sincere (not just trying to boost GT’s reputation, that never occurred to me; usually I just think, what do I say that reflects well and doesn’t embarrass 🙂), and it seems a lot of good came from just a little good phrase.

Something I don’t like is when I cannot get the reporter to include all the names of the people who worked on the work. I have had stories that attributed pieces of research to me that were actually products of groups. In fact that’s often the case. It always makes me sad when that happens. The only way I’ve ever found to deal with it is to attempt to share the work of talking to reporters around, and assume that the person who does the talking will be the one who appears in print. But you can’t predict which stories will take off, which will be the ones picked up by many outlets. So it’s an imperfect solution. The difficulty of this situation is only mitigated by the fact, at least in my experience, that the majority of what has appeared in the papers is not actually about my research, but about things that I have the expertise to comment on.

Of course, the arguments for engaging with the media turn on the value of translating the work that we do at Georgia Tech into things that potentially help inform the public, help people make decisions, or help them understand the significance of things that they (in the case of technology) already use or will use in their daily lives. In so doing, it’s a way to communicate the worth of an institution of higher education: what people get for the time that faculty are not in the classroom teaching, which is the result of our research. Breaking down the walls of the ivory tower…

And then there is the question of continuing to promote and position Georgia Tech in what will be an increasingly competitive market. As my colleague Dick Lipton recently discussed, there is stiff competition for education from a new type of University, exemplified by the University of Phoenix. Media, and the visibility that Georgia Tech gets through the stories, is one way to explain our value, by making the products of what we do, as well as the processes by which we do them, clearer to the groups of people who support us. So that’s why, despite the concerns and sometimes the difficulty of attribution, I think it’s worthwhile.

An Academic Blog

In academia, academic management on February 3, 2010 at 10:01 am

I’ve just created a new section in my Georgia Tech Vita. The Georgia Tech Vita is the official format for an academic resume. It’s long and detailed, but it is a very accurate reflection of the values of the Institute (what Georgia Tech thinks matters for academics). But it has some curious omissions. There’s nowhere to put the media hits that your research gets (and mine, for example, has been on the Colbert Report; it was the number 1 Threat Down, so pretty serious stuff) and there’s nowhere to put blog posts. And yet, both have a place in the modern University, although here I will focus solely on blogs.

So, what then is the role of an academic blog? Why do I think that they matter? I’ve been doing some thinking about what you can accomplish with an academic blog. This is based on my own experiences as well as observing those of my colleagues with blogs. Thank you Mark, Rich, Dick, Andrea, Henrik and Amy.

Like other blogs it’s a way to get the word out. Quickly, especially by academic standards!

One of my most popular posts, about Facebook and social networking, was a timely (and thoughtful, I hope) response to a meme that had circulated around Facebook earlier in the day. You can’t get an article submitted and reviewed in a single day, but you can offer an analysis via the blog. I’d like to think that even though the response is not peer reviewed, and would certainly require more if it were, the process of sharing any thoughts on a blog is reviewed in a different way. People don’t have to read the blog, after all. And it’s also your reputation that’s wrapped up in the blog, so what you say is a reflection of your expertise as a researcher.

And I think this matters because as a researcher, especially one employed by the State of Georgia, I feel a responsibility to communicate not just with my academic colleagues but with anyone who happens to read my blog. I’m not always sure whether I do manage to communicate broadly, but I view the blog as part of that opportunity. And I see it in my colleagues’ posts too, a sense of explaining why some event or discussion has more significance than it might appear to at first blush. The readership probably is largely colleagues, but I’d be happy if it were broader than that too. I’ve always thought that it’s a responsibility, particularly of a tenured professor, to use their knowledge derived from the research process to help explain and inform broadly.

Blogging also helps me think, not just about my research, but about my professional experiences. I frequently think things out in my writing. So the blog is another forum for this type of thinking. I can see my thinking reflected not just in all the dead text that I carry through each posting (I delete it right before I publish). It’s also visible in the number of draft postings I have. Each accretes notes as I do more thinking about what the post is going to be about, unless I write the post very quickly. Most posts are written over several sessions. This one, for example, has been in the works for a few months now. Each time I think about blogs, or see something going on in one of my colleagues’ blogs, I add it to the list.

And then there’s the thinking that happens as I write the post up in full text. I have found myself coming to conclusions that were different from what I initially thought. This makes for considerable time spent writing, with less to show for it. But, hey, I’m an academic; this is not an entirely unknown phenomenon. And this process, even when it ends up being contradictory, is a great way of clearing my head and getting my ideas together clearly. I think it’s helped me with my research. I use my blog in part to write about cross-cutting themes I see in the research that my students do, to synthesize the products of other research and workshops, to think out assumptions, and to examine disciplinary business.

A colleague of mine suggested that a blog might be a way to kick-start the process of writing a book. I’m still not sure what book my blog is (I’ll take suggestions). But I view that as emblematic of the thinking work that a blog can provide. And it spills out into other places. I recently gave a talk where I was asked to reflect on futures for the field of Human-Centered Computing. My blog had helped me to think through a number of the ideas that I talked about. Perhaps you should never talk about things you have not published about, but for some types of topic (such as, say, the disciplinary business of Computing), it’s not precisely clear to me what I would have published, or where, if I were constrained to the more traditional writing mediums.

Some of my colleagues post fairly frequently to their blogs. I find this both inspirational and vaguely terrifying (Where do they find the time? Where they always have: through discipline). But what fairly frequent postings do, particularly when they are all focused on the same topical area, is show the importance and prevalence of a topic. I’ll borrow from Mark’s blog here, to which he posts multiple times a week. Even before his blog I knew that Computing Education mattered. Can you be a professor in Computing and not know this? Anyway, what I get from Mark’s blog is not just updates on critical issues in Computing Education but a sense of how important it is, how Computing Education is connected to other worlds of education, practice, and learning, and so forth. It’s not just the content, it’s the frequency of that content, that communicates this to me in a profound way.

It’s also a place to have an opinion, to take a position. In the reporting of research results, it is often best not to have an opinion beyond those supported by the facts. The blog encourages the development of ideas beyond the scope of research. This can of course be abused, taken to extremes. But I think it’s useful in measure. I do have opinions, quite a lot of them. Some, in no particular order, are: that Computing4Good problematises human-centered computing research; that I am a Computer Scientist, and that discussions about who/what counts are premature in a discipline that draws on methods/theories that are philosophically contradictory; that the University is an unusual organizational entity and as a result should not be managed with tools imported wholesale from the business world; and that recognising and understanding the human-built nature of computing systems is as important as understanding the patterns of human-used systems. You can find all of these opinions in my posts, perhaps not quite so succinctly stated, and while I would like to tell you that these are all opinions I have arrived at carefully and thoughtfully (and frequently through research), the blog is a place where I find it easier to express them.

Georgia Tech is going through the strategic planning process (I have opinions about strategy too). And clearly this is way down on the list, but if Georgia Tech claims to value things (like media) then it should make sure that they are reflected in the GT Vita format. What better way to incent desirable behaviour than providing an appropriate reporting method, the ability to demonstrate achievement in that area? And while they’re reconsidering how the GT Vita reflects the values of the Institute, I call on them to put in a section for blogs. Highly read posts, posts that whiz around the blogosphere, demonstrate a type of reason for existence. And I can’t help thinking that the personal-professional rewards of blogging are well worth the effort.

The Future of the University

In academia, academic management, computer science, discipline on January 29, 2010 at 1:00 pm

My colleague, Dick Lipton, has just written a piece about the future of the University.

Inspired by Georgia Tech’s strategic planning, which was started by Bud Peterson, our 11th and newest President, but also by Rich DeMillo and others (as he mentions), Dick asks the hardest question, and one that’s currently not in the GT strategic planning process.

Will Georgia Tech be around in 2035?

He posits two reasons why this is not being asked. One is that of course it’s obvious we’ll be around, and the other is that it’s a really scary question to ask. He then goes on to contrast traditional brick-and-mortar institutions (like Georgia Tech, which he calls Uns) with those that are online (like the University of Phoenix, which he calls Ons) over the basic functions that a University provides.

Educate students. I see no reason that On could not do as good a job as Un’s with this basic goal. The usual response is that there would be a loss of interaction with the professors and with fellow students. In 25 years perhaps there could be much more interaction with the On model. Imagine that they have a virtual world where you can talk to my avatar—when ever you like and for as long as you like?

Socialize students. This is perhaps one of the places where Un have an advantage. But, On may already, or could in the next few years create mechanisms that help in this important area. Again 25 years is a very long time, in which huge changes could occur in how humans interact with each other.

Network students. This is one where Un think they have a lead, but I think that is unclear. The rise of net based communities of all kinds may make this a tie at best. One could imagine On putting enough resources into on-line communities of all kinds that give them a lead here.

Research and innovation. This is the place that I think Un have and will continue to have a unique advantage. I will come back to this in a moment.

Perhaps it’s because I am a human-centered computing researcher, but I want to riff on the role of technology in these settings.

I think technology will continue to improve, but distance learning, virtual worlds, etc… are the subject of considerable study. A common finding is that those worlds are not the same as “the real world” and that social dynamics take new forms. Beki’s prediction: while distance learning will improve, the Ons are going to have to figure out how to adapt their methods of learning to leverage the best of online opportunities. Now clearly they are off to a great start, and that’s one reason we have to seriously consider what the future of Uns in an Ons world is. But to create an experience online that matches the offline one, or that is as capable of providing socialization and networking in particular, is really going to require innovation. Oddly, it might be the research and innovation done in an Un setting that helps them to do that. That would be ironic.

And, while technologies may provide new opportunities, I don’t think humans change as much as we sometimes think they do. I’ve had a good career to date studying how people use technologies in novel ways. Like, for example, the very rapid uptake of the Short Message Service among teens. There was no doubt that the technology enabled teenagers to message and communicate across temporal and spatial divides, but what they did with it was what they enjoy doing: communicating, socialising, and getting help with their homework. These types of communication pre-date SMS, but they are greatly facilitated by it. So the Ons don’t have to rely on human change here; they have to hope for innovations in understanding how to make virtual worlds etc… as rich and enjoyable as the experiences that people can have in the physical world, or something better.

Perhaps the Ons can also look to the most traditional of the Ons, places like the Open University in the UK, which has done distance learning since 1971. Ons are older than they appear.

But, while I have a slightly different take on Dick’s post, I agree that it’s a question worth asking. Scary as it is.

Academic Organisation

In academia, academic management, computer science, discipline on July 12, 2009 at 1:56 pm

A colleague of mine, Mark Guzdial, has written a thoughtful and thought-provoking post on his Computer Science Education blog.

And I was drafting a reply, and I decided that I’d like to write it here.

The gist, as I read it, is that he asks why academic disciplines are organised by outcome rather than by methods. By asking this question you can explore other connections. In the case of Computer Science Education, it turns the focus away from outcomes (measures of learning success) and towards the experiences that will create those desired outcomes (what is the experience of good education?).

This got me thinking about what I see as an interesting difference between some of the sciences and others, which has some origins in methods, and theories.

I’ve spent most of my research career as a practicing Computer Scientist. My education is reasonably traditional, and my career has been entirely within institutions focused on the advancement of Computer Science. But, that said, I’ve spent my research career as a user of methods/theories that do not hail from Computer Science, but from Sociology/Anthropology.  And to do that I’ve done my best to learn about the disciplines. And in that journey, I’ve been continuously struck by the volume of debate within those disciplines.

Specifically, I’m struck by how much discussion and difference there is in methodology and theory within both disciplines. My analogy: what would it mean if we had multiple and competing approaches to Computer Science? And I suppose we do. I understand that there are significant philosophical differences within AI. But I don’t think we teach Computer Science in ways that amplify and centralise those philosophical differences. I am aware that these differences exist, but I’ve never had a class or seen a book that talks about these philosophical differences, why they exist, and what their origins are.

Are we poorer for that? I increasingly think so…

Another example. I used to be a Software Engineer (which explains why I still review papers for ICSE, I suppose, and why I can’t stop subscribing to Software Engineering Notes). There are a variety of different methods for organizing the work of software development. Some of the new Agile or Pair-Programming techniques contrast with the Chief-Surgeon model. And I have read arguments about the differences, and the outcomes and experiences that they make possible (in pair programming people share a machine, so we say that it’s a good way to learn and a good way for the person watching to catch the mistakes of the person typing; we argue that this comes with a certain productivity hit because there are half the number of machines in operation; and so we continue…).

So we have those debates, based on outcomes, and elements of the experience (which we conveniently blur into the debate), but we never really systematically unpack and discuss the many different ways that work can be divided. (My first advisor Rob Kling told me never to use the word organization as a noun—it was a convenient gloss over the vast array of organizational types—I think he’s right). Organization is a verb, and it is the division of labor and the assumptions that frame that particular set of institutional arrangements.

And I think in disciplines where there is lots of debate about the philosophical nature of the world, there’s far more explicit discussion of the theories and methods and their explanatory power as it relates to that particular set of philosophical commitments. I think Computer Science could benefit from the same approach. Why do we disagree? What does the nature of the disagreement tell us about the nature of the world?

Perhaps we don’t because we focus on the machine (for example, explaining differences as technical tradeoffs, or as a science of the innards of the machine itself). But I think that those machines do not exist in the world in any meaningful way without the presence of humans. The computer was a human creation. It is imagined and built for humans, with human-centered goals (such as faster machines capable of solving more complex problems, relying on novel algorithms, protocols, and architectures). Our philosophy turns in significant part on a belief that what is done in the machine is justifiable because it makes advances possible, but those advances are human.