Beki Grinter

Posts Tagged ‘Grounded Theory’

If I Can’t Use Grounded Theory What Should I Use Instead?

In academia, discipline, empirical, HCI, research on April 3, 2012 at 3:57 pm

I’ve been posting about the problems I see when people do not apply Grounded Theory properly. A consequence of this is that I’ve been asked for alternate recommendations. This presents me with a dilemma.

On the one hand, I've potentially put people in the situation of asking me for this type of advice. But on the other hand, I find it troublesome. I feel that becoming a researcher is about taking on the responsibility for opening up a set of alternatives that might be viable candidates. I feel that the methods, as much as the research questions, outcomes, and write-up, belong to the researcher.

But, trying to balance that belief I have about the nature of research and those who do it against alternative world views, I would suggest the following sets of resources for finding alternative approaches to qualitative data analysis.

Miles and Huberman’s Qualitative Data Analysis: An Expanded Sourcebook, while somewhat dated now, contains a variety of qualitative method approaches. Another book in a similar vein is Creswell’s. But truthfully, there are a lot of these types of books that survey the breadth of qualitative methods. Just try typing “qualitative data analysis” into Google Scholar… One thing you’ll notice is that Sage produces a lot of texts in this area. You might try looking at a few of those online and see whether they look helpful.

Another place to look is the literature related to the problem that you’re interested in. What did they do? I would look at empirical studies that did not include technology as well. While HCI is theoretically diverse, other fields have other traditions of scholarship. Look at qualitative studies that sought to understand the context in which you want to operate or which you want to support technologically: how did they analyze their data? What are the sources they refer to? Follow those sources.

But, I am not going to recommend a particular alternative approach. It’s very difficult to do this without understanding the research in detail. Let me give you an example. At Georgia Tech, thesis proposals are scheduled for 3 hours. Prior to that time, a document of considerable length is read; typically I’ve seen them range from about 50 to 120 pages. Only after taking the 2-3 hours to read that, spending time reflecting on it, and then having 3 hours of discussion am I in what I feel to be a reasonably good place to make recommendations.

And I believe it is the responsibility of the researcher to pick their methods, even if that sometimes results in trial and error. Trial and error is what research is about; it’s the process of developing expertise. And that includes methods as well as the domain and the technology.

Grounded Theory Equals More Than Just Open Coding

In empirical, HCI, research on March 28, 2012 at 5:56 pm

Some time ago I wrote a lengthy post on some of the abuses I see of Grounded Theory.

Today I want to focus on one of the most frequent forms of abuse, one that I forgot to single out for attention in that post and yet is common. Grounded Theory (the Straussian version) has three coding steps. Open Coding is the process of breaking down the data from observation or interviews into categories. Axial Coding develops these categories in a variety of ways, including drawing connections among them by coming to see them as causes, consequences, and more. Finally, Selective Coding is the process of selecting a single category that becomes the foundation of the Grounded Theory, which happens through continued development of the multiple categories until they are fully connected (or eliminated) as part of the explanation (which is the theory). To follow the method of Grounded Theory means that you follow all of these steps.

And yet, one of the most frequent abuses of Grounded Theory I see is people citing that they have used Grounded Theory but they have in fact only done Open Coding. Imagine what would happen if you only completed part of the experiment. Would anyone believe that the results were valid or accurate? I have a hypothesis that analysis stops after Open Coding because usually the first time one starts on Axial Coding and Selective Coding the process raises more questions than it answers. These questions are designed to trigger more time in the field gathering data. The next round of Grounded Theory analysis would consist of further Open Coding, but also begin to address the gaps in the analysis at the Axial and Selective Coding phases. The process of Grounded Theory analysis is iterative with fieldwork, and cycles throughout these three steps during analysis (the balance of amount of time in each step varies over time).
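
To make the shape of that cycle concrete, here is a toy sketch in Python. Everything in it, the “category: excerpt” tagging, the proposed links, the way a core category is picked, is my own invention for illustration, not anything prescribed by Strauss and Corbin; the only point is the control flow, in which selective coding leaves open questions that drive the next round of fieldwork.

```python
def one_round(excerpts, proposed_links):
    """One analysis round over excerpts tagged as 'category: text' strings."""
    # Open coding: break the data down into categories.
    categories = {e.split(": ", 1)[0] for e in excerpts}
    # Axial coding: keep the proposed cause/consequence links whose
    # categories actually appear in the data.
    connections = {(a, b) for a, b in proposed_links
                   if a in categories and b in categories}
    # Selective coding: the most-connected category is the candidate core;
    # categories left unconnected are the open questions that send you
    # back to the field for another round.
    connected = {c for pair in connections for c in pair}
    core = max(connected,
               key=lambda c: sum(c in pair for pair in connections),
               default=None)
    open_questions = sorted(categories - connected)
    return core, connections, open_questions

# Hypothetical excerpts from a software-development field study.
excerpts = ["handoff: 'we wait on the build team'",
            "tooling: 'the CM system flags the conflict'",
            "ownership: 'nobody owns that module'"]
core, links, todo = one_round(excerpts, [("handoff", "tooling")])
print(todo)  # -> ['ownership']: unconnected, so back to the field
```

Stopping after open coding amounts to returning `categories` from the first line of the function and discarding everything below it, which is exactly the abuse described above.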

Iterative analysis, repeated visits to the field, and using all three steps of Grounded Theory are what it means to use the method. You can’t just pick the bits of the method that are convenient and ignore the rest. Methods aren’t decomposable in this way. What results has no value because it’s not complete. There are methods that do stop with themes rather than a Grounded Theory; they’ve been tried and tested, and they were designed to do that job. But they are not Grounded Theory. And if I seem a bit frustrated, it’s because this strikes me as really undermining the enterprise of qualitative research, and of research generally. If you are going to do something, surely it’s worth doing properly.

Writing about Methods

In discipline, empirical, research on July 6, 2011 at 6:22 am

I’ve been meaning to read Kathy Charmaz’s book on Grounded Theory for a while, and now that I have, I want to blog about something she drew my attention to: a paper by Howard Becker. In this paper he describes some of the discussions he had with Erving Goffman. One was about elaborating on research methods. Becker writes:

I don’t remember, though I haven’t made an exhaustive search through his works to verify this, Goffman ever writing about any of the standard questions that inevitably arise in doing field research, such questions as access to research sites, relations with the people studied, ways of recording or analysing data, problems of reliability. All of these were much discussed at the time, and many of us (I was among them) wrote about them, in an effort to clarify for ourselves what we were doing. Goffman never did.

This was a principled refusal, which he and I discussed a number of times. He felt very strongly that you could not elaborate any useful rules of procedure for doing field research and that if you attempted to do that, people would misinterpret what you had written, do it (whatever it was) wrong, and then blame you for the resulting mess. He refused to accept responsibility for such unfortunate possibilities.

I find this really interesting. The rest of the paper is also a fascinating read, but I want to pause here. I’ve written before that I think one of the reasons Grounded Theory is so popular in HCI is that it has well-specified methods. It tells someone what to do and how. In so doing it provides a justification. And as Charmaz argues, that was quite intentional: at the time Grounded Theory was being developed, qualitative sociology was in decline and not taken seriously.

Writing about our methods is common in HCI. A common genre of reporting empirical HCI research is to have a section on Methods and Participants. And I’ve heard people discuss in committee meetings whether a paper is clear enough about its methods. Once, a long time ago, I tried something somewhat different: I wrote a methods section with one part on the Methods I had followed and another, called Practice, on how they actually worked out. I would have done this again, but I never got any feedback, positive or negative, from anyone about whether this was valuable.

One major argument for writing about methods in HCI is so that we, the reviewers and audience, can assess the results based on the methods. But I am now reminded of the arguments about inter-rater reliability: for some types of analysis, will knowing the methods actually lead to an understanding of whether the analysis is correct? For now, I’ll continue to write about methods when I write about HCI. But I think it’s worth asking: does what you read about the methods actually explain the analysis?

Why Theory Matters in Grounded Theory

In discipline, empirical, HCI, research on June 27, 2011 at 8:45 am

I’ve written about Grounded Theory before. I’ve written about how it’s not an excuse to not know what you are doing. This is true of any research, of course. I’ve also written about theoretical saturation, in which I commented on the importance of doing the theory-development part of Grounded Theory.

Kathy Charmaz’s book reminded me of just how important theory development is in Grounded Theory. She discusses the history of Grounded Theory. At the time when Glaser and Strauss developed it, qualitative research was suffering from its lack of connection to theory. They argued that while many quantitative methods were empirically verifying or exposing problems in existing theory, what Grounded Theory could do was develop theory: create theory by iterating between data collection and analysis, with the goal of converging on a theoretical understanding grounded in the research data.

I took the following away: the reason to use Grounded Theory is to develop a theory. Anything less or else is not Grounded Theory.

There are many reasons to read Charmaz’s book, but one of them is to understand the contexts in which Grounded Theory emerged. That seems appropriate for an approach that argues that you should pay attention to the data.

Qualitative Research in Software Engineering

In computer science, discipline, empirical, research on June 21, 2011 at 1:05 pm

A recent volume of Empirical Software Engineering was devoted to Qualitative Research in Software Engineering. Although it’s been over ten years since I did any qualitative research in software engineering myself, I find myself drawn to knowing what’s going on.

And I was surprised.

Of the four articles that appeared in the journal, two used Grounded Theory. That wouldn’t shock me normally, but both of them used Glaserian Grounded Theory (referred to as “classical”), as opposed to the far more common Straussian Grounded Theory seen in HCI papers. And both discussed some of the differences between the two, also something not seen in many HCI papers using Grounded Theory. I was very pleased at the level of detail at which they discussed the method that they were using, although surprised that in both cases they had more access to people who knew the methods of “classical” Grounded Theory than Straussian Grounded Theory (which one paper claimed was also more prevalent in Software Engineering).

Just struck me as interesting and different.

Inter-Rater Reliability

In discipline, empirical, HCI, research on September 9, 2010 at 9:13 am

So, continuing my series of posts on methods. I’d like to offer my thoughts on inter-rater reliability.

What is inter-rater reliability? It’s when two (or more) people independently code qualitative data and then compare their codes. The more codes that match, the more confidence you have in the data analysis produced.
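
As an aside on the mechanics: the two common ways to quantify that matching are simple percent agreement and Cohen’s kappa, which corrects for agreement expected by chance. A minimal sketch in Python, with entirely made-up codes from two hypothetical coders:

```python
from collections import Counter

def percent_agreement(a, b):
    """Fraction of items the two coders labelled identically."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(a)
    p_o = percent_agreement(a, b)
    # Chance agreement from each coder's marginal label frequencies.
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[k] * cb[k] for k in ca.keys() | cb.keys()) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical coders labelling the same ten interview excerpts.
coder_1 = ["praise", "praise", "critique", "question", "praise",
           "critique", "question", "praise", "critique", "praise"]
coder_2 = ["praise", "critique", "critique", "question", "praise",
           "critique", "praise", "praise", "critique", "praise"]

print(percent_agreement(coder_1, coder_2))       # -> 0.8
print(round(cohens_kappa(coder_1, coder_2), 3))  # -> 0.672
```

Note the gap between the two numbers: 80% raw agreement shrinks to a kappa of about 0.67 once chance agreement on the frequent codes is accounted for, which is one reason kappa is usually the reported figure.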

I have mixed feelings about inter-rater reliability, I think it has a place, but like many things involving research methodology it involves judgement. And I think that judgement turns on the relationship between those codes and the final analysis.

What do I mean by that?

Well, in some circumstances, interviews or some other qualitative procedure, the development of codes may be a very close approximation to the analysis. Perhaps, for example, the interview is a small component of the overall method or study, and there is a desire to generate codes that illustrate common themes: what recurred and what did not. Of course that’s not statistically generalizable, but perhaps here inter-rater reliability provides some assurance to the reader that what is being presented has been seen by multiple sets of eyes.

Where I find inter-rater reliability less compelling is when there is some distance between the codes and the analysis, in other words, where analysis or further cycles of data collection take substantial time and energy after the codes are developed. The problem with inter-rater reliability in those situations is that the codes are an interim product, the first or an early step in the analytic process. And there’s a nice study that suggests that even when coders agree, their analyses are framed differently.

For example, to return to Grounded Theory, there’s no mention of inter-rater reliability in any of the theoretical or practical elaborations. Why? Well because codes are merely an interim product, and they are not the only interim product generated (e.g., the memoing). And there’s a substantial distance that must be travelled during analysis between the time codes are generated and the end result. Codes for example may be incomplete, particularly in the process of selective coding. That triggers another round of data collection, and more code development. But most crucially, the analysis is more than the sum of the codes, it’s an interpretation, an explanation grounded in them, but accompanied by other scholarship, related work, analytic insight, etc… and it’s that piece of the process in addition to the codes that generates the final analysis, or grounded theory. And, knowing that two people could generate the same set of codes just isn’t a measure of whether the theory is compelling.

A set of criteria that I like comes from Christine Halverson (although I take application very broadly; a practical outcome for me might be that I understand something about the relationship between people and technology better).

  • Descriptive power: make sense of and describe the world.
  • Rhetorical power: describe the world by naming important aspects clearly and mapping them to the real world. Should help us communicate and persuade others.
  • Inferential power: inferences about phenomena that we do not yet completely understand. Predict the consequences of deploying a technology into the environment.
  • Application: can we apply the theory in such a way that we get design, or some other practical outcome.

12 does not equal Theoretical Saturation

In discipline, empirical, HCI, research on September 1, 2010 at 9:18 am

Since I’ve got a Grounded Theory focus right now, there’s something else I want to clear up.

12 does not equal theoretical saturation. Full theoretical development leads to theoretical saturation. And that is, of course, the stopping point for Grounded Theory research.

In my own experience, it was approximately 6 months in one field site, where I conducted approximately 200 interviews (mostly without a guide) and then visits to a number of other field sites. At these sites, I added another nearly 100 interviews, and the hours of observation in total are still measurable in months. In the end I visited seven different companies, although in my thesis I wrote about just three. At the seventh and last company, I heard nothing new with respect to my theory (I heard other things that were new but they concerned issues not relevant to the explanation I was attempting to build).
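
One crude way to picture the stopping rule in that last sentence, hearing nothing new with respect to the theory, is as a loop over field visits that halts only when a visit contributes no new categories. This is purely my own toy illustration, not a Grounded Theory procedure:

```python
def rounds_to_saturation(field_visits):
    """Return the 1-based round at which a visit adds no new categories,
    a crude stand-in for 'nothing new with respect to the theory'."""
    categories = set()
    for round_no, visit in enumerate(field_visits, start=1):
        new = set(visit) - categories
        if not new:
            return round_no   # saturation: this visit taught us nothing new
        categories |= new     # fold the new categories into the analysis
    return None               # never saturated within the data collected

# Hypothetical visits, each yielding the categories observed there.
visits = [{"handoff", "tooling"}, {"tooling", "ownership"}, {"ownership"}]
print(rounds_to_saturation(visits))          # -> 3
print(rounds_to_saturation([{"a"}, {"b"}]))  # -> None
```

The second call is the point of the post: if every round is still turning up something new, you have not saturated, no matter how many interviews you have counted.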

Since I was studying the relationship between technical and human dependencies in software development, it seemed crucial that I sample among different types of development, so I looked at companies who contracted, those who worked in a monopsony market, and others who sold their software in the commercial marketplace. I wanted to understand whether the market conditions had influence on my theory. I also tried to sample across size of company, from small, growing start-ups to large, stable organizations. Did size matter in coordination? I sampled across companies that built exclusively software for commercially available platforms, those that built on non-commercially available platforms, and those who built hardware. Were there differences based on the relationship to hardware, and did building hardware itself have any effect? Finally, I tried to get different types of product: systems built for real-time operation, those for high reliability, and others to address perceived or real consumer needs. In other words, to see whether the type of code base and the prevailing concerns about its nature influenced my theory.

Throughout the six months at the field site, and throughout the remainder of the scheduling, visiting, and meeting people at the other six sites, I conducted analysis. Data collection iterated with analysis. How many rounds did I do? I can’t even tell you. At first, I felt lost and bewildered: what on earth was I doing? Analysis generated more questions. Over time, the questions got more focused, and so the rounds of analysis and collection began to converge, and eventually I arrived at fairly specific questions.

I had gone in with a question about how software tools, specifically configuration management, structure the coordination of work that has an intangible quality i.e. software. Grounded Theory seemed like a good fit. First, I’d read a number of pieces about Articulation Work and knew that it was derived from Grounded Theory. So, thanks to Strauss I would be able to leverage the products of that theorising to give me direction in the form of a plan for my research questions, my interview questions, and some ideas of what I might find in my analysis and even some extra concepts to work with during my analysis (I looked for things that were similar, which is not hard given the nature of Grounded Theory analyses anyway).

There are other reasons, non-Grounded-Theory reasons, to conduct research that may involve less empirical data than I collected. You may be evaluating a deployment (perhaps baselining and then evaluating). My point here is that that’s different from Grounded Theory, and should be treated as such, explicitly. As a colleague of mine says, when reading Grounded Theories, they always want to know what the theory is. If you don’t have one, how does it qualify?

Grounded Theory

In empirical, HCI, research on August 30, 2010 at 8:49 pm

Right, this has been coming for a while.

Grounded Theory is not an excuse to go out and study something when you have no idea what’s going to happen. That’s just madness.

Stepping back. Sometimes I hear that Grounded Theory allows you to go into the field, collect data, and only develop questions during analysis. That’s the part that worries me. Research is very expensive, not just financially but, far more importantly, in terms of time; hence the madness described above.

So let’s clear up some misconceptions.

1) It’s impossible not to have research questions. Perhaps they are not very well formed ones (this is something people could easily say of me; I tend to work by instinct as much as by questions), but it’s pretty important to have questions: a sense that something is of interest. I’d go further, though: I think it’s impossible not to have a particular set of hopes and interests, and even desires for the outcomes. Grounded Theory suggests that you capture these prior to going into the field. They are a valuable resource and an important check (to verify, to the extent possible, that you’re not leading the analysis towards the assumptions that you had before going in).

2) If you interview someone, you almost certainly have to have expressible questions. Just saying.

So, Grounded Theory is a balance between exploring the data and being open to developing new lines of questioning based on ill-understood or not-yet-understood phenomena captured in the data.

Now, I also think Grounded Theory is tempting because it comes with a series of steps. Open coding, axial coding, and selective coding suggest that, if appropriately followed, a Grounded Theory will result. Many other interpretivist approaches do not come with those steps. Instead, the reader has to pay close attention to the theory that drives the empirical work. One has to understand why, say, accounts matter to the ethnomethodological agenda, and then understand that the study of phenomena is likely driven in part by a desire to further illuminate the concept with respect to the particular setting.

If you want an example of something that also has a “steps” like feel to it, but is not Grounded Theory, try the Thinking Topics approach by Lofland and Lofland.

And if you want to understand the theory of Grounded Theory, try reading The Discovery of Grounded Theory.

So, what I am about to say is open to discussion (as if the rest is not 🙂), but it’s open to debate whether and how much Grounded Theory is driven by data. Google if you will.

I have reasons to believe that it’s not entirely driven by the data, but that other factors come into play. First, you can structure a grounded theory using any other theory developed by the method. Strauss says so, although Glaser may disagree (most people follow Straussian Grounded Theory as opposed to Glaserian Grounded Theory, and the two differ).

Second, it seems to me that the questions you ask of the data during Straussian coding suggest certain types of outcome. The analysis, and the things it seeks to explain, tends to have a temporal quality; it promotes an understanding of an arc of time. Causes, consequences, who did what to whom: all very temporal indeed. Many of the Grounded Theories I’ve read explore trajectories of work, of people interacting and acting towards an outcome (whether predicted or not). If you read enough of them you begin to get a feel for some commonalities, for what they may work well at explaining. And since you’re not supposed to be reading any related work (oh yes, you should), you start to get a feel for the occasions when Grounded Theory might be most useful.

Third, surely the fact that data collection is interspersed with analysis is also a reason that Grounded Theory is not entirely data driven. Data collection that follows a period of analysis must surely be driven by analytic concerns as well as data concerns. Gaps in the analysis that need to be addressed fuel the generation of further questions. I don’t think you can do Grounded Theory, at least not completely, on one round of data collection; there must be cycles of collection and analysis, collection and analysis. This is also my defense for knowing when the process ends, when the analysis is complete: when there is nothing else left to explain. Surely then, and only then, you have a Grounded Theory of which you can say the following: that it describes the world, that it is rhetorically powerful by being clear and persuasive, that it has inferential power (if a similar phenomenon is encountered, the theory helps understand what may result), and that it has application.

While I’m here, let me clear up something else. I don’t want to read a Grounded Theory that doesn’t present what the theory is a theory about. I have a theory in mind, it’s a theory about why the division of labor among software developers, despite the goals of modularity, leads to the creation of dependencies that must be coordinated in order for code to successfully compile and run. Further, this theory shows how organizational hierarchies create distance that exacerbates the types of dependencies that exist and their ability to be coordinated. It enumerates dependencies that exist among individuals, between groups and divisions of a corporation, and those that span multiple corporations, and offers strategies for their coordination.