Beki Grinter

Posts Tagged ‘CRA’

Post Docs in Computing

In academia, computer science, discipline, research on March 9, 2012 at 6:25 pm

The CRA has released its report on the status of post docs in Computing. It should not come as a surprise that it is worried about the increase in the number of post docs, and the potential expectation that one is required prior to taking a full-time job. The report offers some nice advice on when a post doc is optimal.

But the report avoids asking some of the questions that will have to be asked and answered if we are, as the report urges, to avoid the situation that has occurred in some other fields, where post docs are now required.

Most importantly, the question that I think they have to ask is: why has the number of post docs gone up? I suspect the answer is that there has been a decrease in the number of permanent positions in academia, industry, and government. The post doc is actually a unique situation in the labor market. It's the place where the difference between the number of people admitted into graduate school and the number of employment opportunities after graduate school shows up. And I think as the gap between those two numbers has increased, so the number of post docs has increased.
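
To make that concrete, here's a back-of-the-envelope sketch in Python. This is entirely my own toy model with made-up numbers, not anything from the CRA report: if PhD production outpaces permanent openings year after year, the surplus has to pool somewhere, and the post doc is where it pools.

    # Toy model: the post doc pool as the accumulated gap between PhD
    # production and permanent openings. All numbers are hypothetical.
    def postdoc_pool(phd_grads_per_year, openings_per_year, years):
        pool = 0
        for _ in range(years):
            pool += max(0, phd_grads_per_year - openings_per_year)
        return pool

    # e.g., 1500 graduates a year against 1200 permanent jobs a year:
    print(postdoc_pool(1500, 1200, 5))  # 1500 people parked in post docs

Of course people eventually leave the pool (for industry, for other fields), so this overstates things, but the direction of the effect is the point.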

Thinking about this makes me think about incentives and rewards. Where in the system can we change the incentive and reward structure in such a way that we reduce the difference between those two numbers?

Another observation I'll make concerns their remark about post docs being useful but not optimal for dual bodies. I think it would be instructive to collect data about whether a choice to optimize for location is actually, simultaneously, a choice not to optimize for career. What are the long-term consequences for those who choose to do this in the short term? The reason I ask is that, as much as it sucks to be apart, if it turned out to be more advantageous in the long run for securing, say, two faculty positions, then people might factor that in. I say this as someone who did spend two years apart. I know how much it sucks. I also know that I like that we have great balance in our careers, and I think some of that came from the time we spent apart when we were both pursuing careers as industrial researchers.

Snowbird: Working with your Dean

In academia, academic management, computer science, discipline, women on July 22, 2010 at 4:03 pm

The final session I attended was called Working with your Dean. Eight Deans offered their perspectives on this. It was a session targeted at Department Chairs, but even though I am not one I found it very useful, and really enjoyed hearing what the Deans had to say. The majority of the session was small group discussions, and I found the dialogs very productive and illuminating.

Some takeaways I got from this session:

I continue to develop an understanding of the partnership between Deans and Chairs: how they work together towards the execution of the University's mission and strategy. That sounds obvious as a single sentence, but what emerged during the discussions were processes and details for going about this. Differences also emerged between Deans with Chairs and those without, and between Deans who have many departments in their School and those who have fewer. And of course, Computer Science can be free-standing, within Engineering, within Sciences, etc., and each arrangement comes with different sets of constituents and concerns.

I learnt that there is scholarly evidence showing that women are far more likely to leave if they get an alternate offer. This led to a discussion of how retention policies that force the acquisition of an alternate offer are likely to have a negative effect on diversity. I also learnt that some universities have policies on how many times someone can be a retention case, usually expressed as a number of years required between each retention bid.

Dual body opportunities also came up. I was delighted to learn that Deans increasingly see dual body opportunities as the normal hiring mode. Some partners are academics, but there's also a large class of professional partners; so whether the partner happens to be an academic or not, hiring increasingly has to account for the presence of another person who needs support.

Clear, open communications came up over and over again. While I think this is true in any working environment, I have the impression that it is especially true in academia. Given the role of the faculty in raising funds that support the University operation, tenure and academic freedom, and a variety of other features that are unique to the University setting, communication seems to take on a particularly crucial role.

I heard people talk about the multiple constituencies that a leader interacts with including faculty, staff, students, the educational mission of the University, and the other University leadership.

Snowbird: Local Global Development

In academia, C@tM, computer science, discipline, ICT4D on July 22, 2010 at 3:43 pm

I was one of three invited panelists in a session on Global Development.

I presented the case for Computing at the Margins and the results of an NSF-sponsored workshop. The key point I made was that Global Development has a domestic component, focused on those who have been at the margins of technological innovation. The solutions won't be identical, of course, but there are classes of problems that span the underserved parts of industrialized nations and emerging nations alike. And from a scientific perspective, sharing knowledge across these boundaries ensures that we understand what's generalizable and what is locally specific.

I also used my talk to connect this effort to other problems discussed. For example, DARPA and the NSF both have an interest in socio-computational systems (think Wikipedia, North Korea Uncovered). So does Computing at the Margins: what would happen if more people could participate in and benefit from these collaborative content creation experiences, and what new ones would they create? How would it change or reveal the edges that DARPA seeks to understand? Nomadicity, the reflection that the global population is increasingly migratory, may also contribute to understanding why the edges no longer conform to national borders.

My two co-presenters, Lakshminarayanan Subramanian and Tapan Parikh, talked about a variety of the technical challenges in Global Development and about the role it can play in education, as well as in the future of Computing. I was delighted to be in such good company. In this post I want to make notes about the things that got me thinking some more.

Tapan mentioned an article by Amnon Eden that articulated three paradigms of Computer Science research. I saw lots of people writing down the title of the paper, and I have previously blogged about it myself (my post includes a link to the paper). Global Development, he argued, fits into the Science definition. I'm inclined to agree. But, as I've written about before, I think global development exposes an interesting set of assumptions in Computing, so even if it fits paradigmatically, it's not without challenges (or, more optimistically, game changers that will productively extend the field of Computer Science). One other challenge that now springs to mind is the co-evolution of Computer Science the discipline with the National Science Foundation. Given how central the NSF is to Computing, and how long the two have co-evolved, it is even clearer to me that the NSF's support is crucial in advancing this field. Now that I understand this, the question I have is what to do about it. I'll take answers from readers, please.

Solving the right problem is important. It's always important. But in much of Computer Science the problem discovery phase takes far less time than the problem solution phase. Global Development, rather like HCI and Software Engineering I think, requires attention to problem discovery.

We talked about whether industry could make progress on these problems. We varied somewhat in how much we thought that industry was engaged in this space, but it is clear that corporate America is paying attention to emerging nations, as emerging markets. We also discussed the role that basic research, unfettered by the need to begin with existing platforms and solutions, could contribute.

The phrase end-to-end came up in two distinct ways. First, there was agreement that this area of research requires solutions that span the distinct sub-fields of computer science. Simply put, you need people who understand networking, operating systems, and HCI (and much more) in order to create a workable solution. That was dubbed end-to-end systems. Then there was also an end-to-end methodological discussion, about how both problem discovery and evaluation require empirical research that wraps around the system development.

I've wondered this before, but I'll wonder out loud on my blog: is HCI-style research rather uniquely positioned to take advantage of these end-to-end requirements? Methodologically, HCI already has practices in place (I think some of these practices will not work in these settings and that innovation is required there, but that's a different problem from not having any practices in place). And while this may be controversial, perhaps it's a good time to let people know that HCI doesn't just concern the interface, nor do HCI researchers limit themselves to toolkits for it. HCI researchers partner with, or engage directly in, a variety of technical concerns that go down the stack. When I'm being uncharitable I tend to think that some computer scientists regard HCI as rather superficial and afraid of the machine; I disagree intensely. (Which also reminds me that I heard someone talk about the field of database usability for the first time. Is there any part of CS to which HCI doesn't have something to offer? No, of course not!)

Colorado has just started a Masters Program in ICT4D.

We heard from a number of people about how working in this space had been a personally life-changing event. The rewards of this research space are very significant. While agreeing, I also observed that the broader impacts of this work are so blinding that they overwhelm the question of what the science is in this space. I feel strongly that the Computing community is going to have to work together to make the scientific case for this space, to ensure funding and also tenure and promotion rewards for people who engage in it.

We were invited to create a layered diagram that illustrates the types of challenges for Computer Science across the spectrum when pursuing global development. And another person asked us whether you could create an introductory course on Computer Science using problems from global development as the examples. That’s a fascinating question.

And then the name discussion came up. The name for this field is extremely complicated and loaded. I suggested Computing for Normal People, a riff on Gary Marsden’s observation that computing has largely served the hyper-developed world, and that the next 5 billion constitute what is normal.

I heard of “the last electrical engineer” phenomenon. The idea is that once everything is known you only need one person who knows it to ensure that the knowledge is not lost.

Oh, one other thing. There was a session on HealthIT. Health and wellness is a significant target for investment, including technological investment. People who seek to stay well, and people who have health issues, come from all walks of life. Health is also a global issue: what starts in one place can easily spread to many. And what it means to be well, and what it means to treat someone, also varies culturally. Development confronts and deals with issues of cultural variance and its implications for technological relevance all the time. There are also more technology-centric challenges, such as getting care to everyone that needs it, wherever they are and whatever access to bandwidth and health care they may have. Access and empowerment through technologies seems like a crucial domain for Development. Conversely, dealing with underserved groups is a target domain for HealthIT.

Snowbird: CRA in Washington D.C.

In computer science on July 22, 2010 at 2:35 pm

How do you make the case that the Federal Government should allocate discretionary funds for Computer Science research? This is a question that I’ve wondered about, but Peter Harsha’s talk at Snowbird was the first time I felt I understood what an answer to it would be. His talk, in short, was amazing. I enjoyed it, not just because he’s an exceptionally engaging speaker, but also because I felt that it was a useful combination of explanatory and fascinating.

Peter Harsha represents the Computer Research Association in Washington D.C. (he has a great blog also). I can’t possibly capture all of his talk, but I will put some thoughts down here. I want to also direct the reader to the article “Making the Case for Computing” which also discusses how Peter and Cameron Wilson, from the ACM, make their case.

First, I learnt about the type of work that people like Peter do for Computing, in other words what the CRA's policy foci comprise. They work on raising funds for, and setting priorities for, Computing research. They focus on access to computing talent, which includes focusing on STEM and also understanding how immigration policies affect Computing occupations (speaking as a visa holder, thank you). And they look at impediments to research; one example might be any changes in export control rules.

Perhaps it is because I am a foreigner that I did not know, and consequently learnt, that the Federal budget comes in two forms: mandatory and discretionary. Research funding for agencies like the NSF, DARPA, and Homeland Security all comes out of the discretionary piece. There are a number of appropriations bills (i.e., the documents that specify the allocation of funds among the various pieces); I think that number is 12. Homeland Security, which includes HS research, has its own appropriation. The National Science Foundation's budget is located within the Commerce, Justice and Science appropriation. DoD's budget lives in the Defense appropriation, while the NIH's budget is to be found in the Labor/HHS/Education appropriation. If I understand the process correctly, the President makes a budget that he sends to Congress, who then review and change it until an agreement is reached. So, what I learnt about changing the budgets is that if, for example, someone wants to up the NSF's budget (yay), then the money must come from somewhere else within that appropriation. It's not possible to, for example, take from the NSF's budget to up the NIH's budget, because these agencies are in different appropriations.

Next, Peter talked about how they make the case for Computing, what’s the story?

Simply put the story is that Computing changes everything. The history of the field is compelling, not just because of the sheer number of scientific advances, but because of their role in advancing other sciences, business, society and so forth. Computing matters because its innovations reach beyond the discipline itself and into every part of human existence. That’s not what he said, that’s my paraphrasing.

Looking forward he suggested Global Development as one area of advancement that Computing could play a role in. That really cheered me up, I agree completely!

He also made the observation that Federally funded research is at the center of the IT R&D ecosystem. I was reminded of another presenter at Snowbird who said that the NSF funds about 86% of all research in Computing in the United States. That's much higher than in other sciences, which have more distributed models of funding (ones spread across more agencies, and possibly other sources). Taken together, it seems that the IT R&D ecosystem relies on the NSF in particular.

He also explained the processes by which they make the case for Computing. One way is to provide Congressional Testimony. But they also host events, and partner with other people who are also vested in making the case for Computing to host events (which also includes being part of larger science advocacy committees). They also use the press to help make the case. He said that CRA has a good brand, which helps. Finally he invited the audience to get involved, and explained how important it was for the Computing community to be involved in making the case.

I can’t possibly cover all the details, but he also provided the audience with detailed insight into how the appropriations and bill-setting processes work. It was mind-boggling. I think the key take away I got from this is that the government is where Politics meets politics. I don’t mean that pejoratively, rather I accept that all human organizations are comprised of people with goals that drive their actions and that in this case those goals are Political (on behalf of the citizenry that they represent) and their pursuit creates politics as inevitably there are collisions of belief and objectives. Winners and losers you might say. It was immensely helpful to understand the processes through which these agendas are executed over appropriations. I now know about the processes of motion to recommit with instructions, and line item voting.

He ended by highlighting how key members of various science and technology committees are retiring or likely to be replaced. That was sad, but perhaps not as sad as the observation that the Federal budget is tight and looks like it will get tighter over time. He asked us to be involved, to help diversify the resources that the community draws on, and to participate on more national advisory boards and so forth. I wish he had shared thoughts on how to be visible enough to be invited, but perhaps that's just a silly question on my part.

He also asked us to sign up for action alerts, occasions for petitioning our representatives and senators to make the case for Computing. The CRA has an alert system here. Another way to be involved is to attend Congressional Visits, where community members go to DC and meet their Congress members at a hosted event (I think that's what I understood it to be).

Snowbird: Faculty Hiring Gridlock

In academia, academic management, computer science on July 22, 2010 at 1:23 pm

Day 2 at Snowbird included attending a panel on faculty hiring processes. The concern that triggered this panel is that there's a gridlock associated with faculty hiring, and that this is not good for departments or for the candidates themselves. The problem, as explained, is that faculty slots actually go unfilled, even in a tight market, because late offers mean that candidates accumulate offers (while waiting, hopefully, for that last late offer). By the time a candidate decides and the other offers are turned down, it's too late in the hiring season for the Universities with unfilled slots to recruit that year.

One presenter showed evidence that last year, of the 114 slots that departments had (113 departments interviewed), only 71 were filled. There was a widespread belief, one that I concur with, that this is detrimental to the Ph.D.s searching for appointments.

There's a solution on the table; it has several parts. First, move all the deadlines earlier: the submission of applications, and the time of first offers to April 1. There was some discussion of which was more important, and the back-end date seems to be the more important one. Second, inform candidates who will not be interviewed early, so that they can make alternate plans rather than waiting for things that never come. Third, have deadlines for telling candidates who were interviewed that they will not receive an offer, and also deadlines for how long offers stay open. I should add that the solution was proposed not as "law" but more as a set of guiding principles…

But there are logistical constraints. One challenge is that earlier application deadlines can be difficult because sometimes Deans/Departments don't know whether they have positions. This is especially true in difficult budget times. But during the discussion several other fascinating deadlines and complications emerged. Semesters versus quarters seem to have a significant effect on the hiring schedule. For example, a May 1 first-offer deadline is better for people on quarters, whose faculty are around until the middle of June, than it is for those on semesters, whose faculty disappear by the end of May. That such differences exist makes it hard to lock down certain dates. And of course, it's interesting how the summer arrangement (i.e., where faculty are not paid by the institution but through their own grants) also complicates the hiring process, by reducing the portion of the year in which it can be conducted.

Another piece of the proposed solution was to tell candidates earlier that they are not going to be interviewed, or, for those who do interview, that they will not receive an offer. This runs up against legal concerns in some Universities, which do not permit rejection letters to go out until the slot is filled. I did not know, but I learnt, that Universities in the AAU are required to make offers to tenured faculty by April 1, and some pointed out that perhaps we should take that deadline and make it a goal for all offers.

One final observation that fits into the "you can tell you're working with Computer Scientists" category was the number of people who described this as a game theory problem, and applied such approaches to understanding and resolving it.
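
Since I can't resist that framing, here's a minimal sketch of the timing game. This is my own illustration with invented numbers, not anything presented at the panel: if every candidate waits until the end of the season to accept, every department whose offer is declined learns too late to re-recruit, and slots go unfilled even when there are more slots than candidates.

    # Toy simulation of hiring gridlock: each department makes one offer,
    # candidates accept exactly one offer at season's end, and declined
    # offers expire too late to be re-made to anyone else.
    import random
    random.seed(0)

    N_CANDIDATES, N_SLOTS = 100, 114

    offers = {}  # candidate -> list of departments offering
    for dept in range(N_SLOTS):
        candidate = random.randrange(N_CANDIDATES)
        offers.setdefault(candidate, []).append(dept)

    filled = len(offers)  # one acceptance per candidate holding offers
    print(f"slots: {N_SLOTS}, filled: {filled}, unfilled: {N_SLOTS - filled}")

With these made-up parameters a typical run leaves roughly 40 of the 114 slots unfilled, eerily close to the 114-versus-71 figure above. Earlier deadlines and expiring offers change the game by forcing decisions before all the information is in.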

I attended since I was curious about what the problem was, and how one might propose a solution that needs to be coordinated across institutions, and this panel was valuable for understanding that process.

Snowbird: Peer Reviewing

In academia, computer science, discipline, research on July 20, 2010 at 1:17 pm

The third session I attended at CRA was on peer review, it was a panel organized by Moshe Vardi.

Computer Science is unique. We rely heavily on conferences as the means of publication, more so than other fields. Additionally, we have a model of specialized conferences, unlike other sciences that have an Annual Meeting; the last ACM Annual Meeting was in 1984.

Someone quipped that “a computer science conference is just a journal that meets at a hotel.”

So recently there have been concerns about the number of conferences, the quality of those conferences, and what it means to be driven by conference deadlines. Jeannette Wing pointed out that this also applies to funding deadlines. Another concern she raised was how this taxes the community of reviewers. She also said something I liked, a well-put reminder that conferences and journals are a means of documenting the discovery of scientific truth, building on past knowledge in order to share it with others. Finally, it was observed that conferences cost time and money.

Perhaps the most troubling concern was the implications of the profusion of conferences for the field of Computer Science. The concerns raised included a tendency towards incrementalism, conservatism (in submission and review, I believe), a splintering of the field, and missed big ideas. Computer Science would lose its vibrancy and excitement.

But why does this happen, why do we continue to submit to conferences? That led to a discussion of how we understand impact. Not surprisingly, given that this was largely a crowd of department heads and Deans, it led to a discussion of how impact is measured on the academic vita at those crucial points: admission into graduate programs, faculty hiring, tenure, and promotion to Full.

So this raises two questions for me.

First, how do we change this, if we think we should? The scale of the change required seems vast to me, requiring both procedural and cultural changes. It requires changing the behaviors of the thousands of people collectively involved in Computer Science. It also requires convincing those at the earliest steps (the undergraduates who are considering graduate school and working on publications) that they still have a chance of participating in those later steps. Someone just mentioned that it's going to involve ensuring that every single review letter changes accordingly…

Second, what about considering the production process? We spent our time focused on the outputs, but what about looking at the inputs into the system, i.e., the number of people we've trained? Specifically, a focus on PhD production. If a faculty member can produce 14 students in 20 years, all of whom are trained in the process and seek to continue to publish, well, that seems like a scaling up.
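
As rough arithmetic (my own, using the hypothetical 14-students-per-career figure above): if each graduate in turn advises at the same rate, the population of conference-submitting researchers compounds geometrically.

    # Back-of-the-envelope: geometric growth of the publishing population,
    # assuming (generously) every graduate goes on to advise 14 students.
    advisors, researchers = 1, 1
    for generation in range(3):  # three successive 20-year careers
        new_phds = advisors * 14
        researchers += new_phds
        advisors = new_phds
    print(researchers)  # 1 + 14 + 196 + 2744 = 2955

In reality most graduates don't end up advising, but even a small multiplier sustained over decades suggests why the submission and reviewing load scales the way it does.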

Snowbird: Thinking Big in Computer Science

In academia, C@tM, computer science, discipline, research on July 20, 2010 at 12:56 pm

I'm in a session at the CRA Snowbird conference focused on thinking big in Computer Science as a means to pursue large grants. The session was organized by Debbie Crawford of the NSF. A range of speakers each took a turn to provide their thoughts on pursuing large projects.

The first project is about robotic bees and the research to create them (it's an NSF Expeditions project). The problem setup is lovely: 30% of the world's food requires pollination by bees, but bee colonies are dying. Can robotic bees help? It is simple to explain (though not to answer) and very compelling. Then there's the team structure. They have 10 or so faculty in different research areas/disciplines, but all with core interests focused on robotic bees and other insects (I think that's what I took away). Of course this suggests lots of related and prior work by the team members. Additionally they are almost all collocated, in Boston, with one person in Washington DC. Finally, collaborations already existed among various team members, so although the whole team had not worked together, they all had some experience of working with other members of the team.

The process the Robobees team used to create the grant was a collocated brainstorming meeting, the purpose of which was to generate the outline for the Expeditions grant. They used the outline to divide the work, with each PI contributing text and figures where appropriate. Then a smaller number (I'm guessing the lead PIs) integrated the text and circulated the document for feedback.

The next person to speak was from the DoE. I didn't personally get quite as much out of this talk as the others; I am sure that was due to my interests. What I did take away was that the DoE has lots of opportunities for computing, ranging from architectures, systems software, operating systems, and programming languages, to the fields that make up computational science and engineering. If anything surprised me, and perhaps it shouldn't have, it was that the DoE is also focused on networks and remote collaboration tools to support distributed science.

The next person to speak was from DARPA. He talked about how to win (a DARPA contract).

In order of priority, he began with "ideas matter." There's a paper called the Army Capstone Concept that potential investigators should read. He asked the community to aim higher and bolder. He didn't speak to this point, but I thought I saw on that slide that the idea must also be doable. The previous DARPA plenary said that it was alright to aim high and fail (at least initially), so I'd have liked to know more about "doable." Second, it must fit the DARPA mission. Third and fourth were cost realism and the proposers' capabilities and related experience. With respect to related experience, he emphasized more than once that it should not just be your stature, but actual experience. He also said that it sometimes helps to write your proposal in parts, with budgets for the various parts, because that can help in contracting (they may ask for some but not all of the parts, I inferred, so modularity is advisable). Finally, he emphasized engagement with the Program Managers, both before the BAA and after the grant is awarded. He also reminded the audience that they read a lot of proposals, which I took as a reminder to make it engaging and interesting to read.

The next speaker came from the University of Michigan. He spoke from the experience of someone who has run large centers, and therefore has been successful in raising money for them. He did a great job of providing the faculty/lead-PI perspective, as well as suggesting what department chairs should do to help faculty who want to write large grants. He began by saying that not all faculty are interested in writing large grants and that, in his opinion, it's pointless trying to encourage everyone to do so. Instead, find those who are willing and support them in doing it.

He argued that the reason to write large grants is the visibility and impact for both the individual and the institution. Another value he highlighted was that a large effort can create a locus for other activity; a large center can spawn and facilitate related research efforts. But there are challenges. One is interdisciplinarity, not just external to CS but also among the specialties of CS, so all the challenges of doing interdisciplinary work apply. Another challenge is that you have to have complete coverage of the space you are proposing around: you have to plan for it from the beginning, and the PI has to ensure that any gaps are filled, even if the gaps that need filling are not attractive research to the person who takes ultimate responsibility. I had the impression that what he was saying was that one of the responsibilities of the PI is to fill those gaps.

So what can department chairs do to help? Reductions in teaching load, staff support, and institutional support. And recognize that the time spent can diminish ongoing research activities. Also, since it requires resources, pick the best opportunities. And if you are successful, recognize that you only get a part of the action, because large projects will span units and institutions.

Finally, a person from NSF CISE spoke. She mentioned two large center programs, the ERC and the STC, as well as a center-like entity, the university-industry partnership centers. CISE has a center-like program called Expeditions, the current round of which will be announced next month. There's no restriction on topic, but it should have impact on CISE, society, and possibly the economy. They look for something where the whole is greater than the sum of the parts; if they could have funded ten small projects instead of one large one, it is not compelling. Expeditions was partially designed to fill a gap created by DARPA (though there was an observation that it might now be a gap that the new DARPA is filling).

Expeditions is also a mechanism to encourage the Computer Science community to engage more in center-like activities. The NSF representative observed that Computer Scientists participate in (I presume this means lead) fewer ERC and STC centers than other disciplines. Expeditions is a launch pad for potentially taking things to a center activity when the Expedition is done. An interesting note: the number of submissions has dropped massively for Expeditions, from 68 to 48 to 23 in the three years that it's been active. Finally, she noted that some Expeditions had lead PIs who were not Full Professors; Assistant and Associate Professors did succeed with these efforts.

Snowbird: Democratizing Innovation

In academia, C@tM, computer science, discipline, research on July 19, 2010 at 11:35 am

Just finished listening to a talk by the deputy director of DARPA. It was one in a series of talks about the “new” DARPA, which in this case was positioned as one that’s going to align more effectively with the culture of Universities.

Much could be said, but I want to focus on one aspect of the talk. One of the thrusts within DARPA is focused on understanding what social networks make possible. He talked about the Iranian election and how technologies were used to mobilize people in protest. This was part of a discussion about democratizing innovation.

What is that? My understanding is that it’s a focus on how social networking technologies make it possible for large groups to mobilize around shared interests (ideological, political, religious, entertainment) that are not related to geographic borders. Technologies are creating new borders, new edges, that DARPA needs to understand.

And this reminds me of Computing at the Margins and Global Development. We need to understand these edges too… It's not just building technologies for those who have none, but leveraging what's already in use to develop it further. But I think that we're also very actively attempting to change the boundaries, by bringing more people into the digital society. I'm still pondering the implications of this, while listening to the DARPA director discuss human motivations and the need for sociologists and so forth to understand how social networks work.