Beki Grinter

Posts Tagged ‘qualitative’

If I Can’t Use Grounded Theory What Should I Use Instead?

In academia, discipline, empirical, HCI, research on April 3, 2012 at 3:57 pm

I’ve been posting about the problems I see when people do not apply Grounded Theory properly. A consequence of this is that I’ve been asked for alternate recommendations. This presents me with a dilemma.

On the one hand, I've put people in the position of asking me for this type of advice. But, on the other hand, I find it troublesome. I feel that becoming a researcher means taking on the responsibility of opening up a set of alternatives that might be viable candidates. The methods, as much as the research questions, outcomes, and write-up, belong to the researcher.

But, balancing that belief about the nature of research and those who do it against alternative world views, I would suggest the following resources for finding alternative approaches to qualitative data analysis.

Miles and Huberman’s Qualitative Data Analysis: An Expanded Sourcebook, while somewhat dated now, covers a variety of qualitative approaches. Another book in a similar vein is Creswell’s. But truthfully, there are a lot of books of this type that survey the breadth of qualitative methods. Just try typing Qualitative Data Analysis into Google Scholar… One thing you’ll notice is that Sage produces a lot of texts in this area. You might try looking at a few of those online and see whether they look helpful.

Another place to look is the literature related to the problem you’re interested in. What did those researchers do? I would look at empirical studies that did not include technology as well. While HCI is theoretically diverse, other fields have their own traditions of scholarship. Look at qualitative studies that sought to understand the context you want to study or technologically support: how did they analyze their data? What sources do they refer to? Follow those sources.

But, I am not going to recommend a particular alternative approach. It’s very difficult to do this without understanding the research in detail. Let me give you an example. At Georgia Tech, thesis proposals are scheduled for 3 hours. Before that, a document of considerable length is read; typically I’ve seen them range from about 50 to 120 pages. Only after taking the 2–3 hours to read that, spending time reflecting on it, and then having 3 hours of discussion am I in what I feel to be a reasonably good place to make recommendations.

And I believe it is the responsibility of the researcher to pick their methods, even if that sometimes results in trial and error. Trial and error is what research is about; it’s the process of developing expertise. And that includes expertise with methods as well as with the domain and the technology.

Writing Up Qualitative Data for an Interdisciplinary Audience

In discipline, empirical on September 14, 2010 at 7:26 am

I was asked for my thoughts on how to write up qualitative data and analysis for an interdisciplinary crowd.

I am an enthusiast of qualitative research. Well, actually, I’m an enthusiast of many types of research; I practice qualitative research.

And I would say that some of the communication begins long before you’ve ever started writing. Knowing that the audience is open to what you might have to say is going to make a world of difference. I’ve been lucky, I’ve encountered many audiences that have found something of worth in what I have had to say. I’ve been employed by people from those audiences and invited to participate in interdisciplinary research. Working with, or just talking with, people who represent the target audience can be immensely helpful.

An Aside: I’ve also encountered a few audiences who are not receptive, and in a very few cases people who are openly dismissive and hostile. I have never encountered this sort of hostility from people with vibrant research programs, those who seek out interesting problems, and who in the end care more about getting something solved than about whether it is “appropriate.” Also, in my experience this type of circling the wagons has always begun long before I’ve arrived. But once the wagons have been discovered I’ve tended to walk away and stay away. (I guess there are some audiences that are not worth the effort).

Listening and learning is a central part of the interdisciplinary experience. My advisor taught me that. He was interested in how the same words mean very different things depending on who you talk to. My experience bears this out. Whether you are participating in a project, or just going to a conference where your potential audience hangs out, attending closely to how they define terms is an essential part of the interdisciplinary communications experience.

Tell it how it is. I am sure this is true in any discipline, and especially important when you are new to the experience: it’s crucial to tell it how it is. Qualitative research is not usually generalizable. It might be, but frequently that’s not what it’s intended to do, nor is it the best empirical choice. So in addition to explaining what you did, explain why you chose that method (strengths) and what the limitations are (weaknesses). It should be apparent in the write-up why this method was the best for addressing the problem, despite its weaknesses. That’s a type of honesty that reviewers respect. The tiny minority who don’t respect that don’t respect the time it took to do the research, and so they are lost before the enterprise began.

I think those are the principles that have guided me in my research and its presentation. I’ve had so many different kinds of interdisciplinary opportunity, and I count them among the best research experiences I’ve had. I find the voyages into other research terrains fascinating; perhaps that’s because I find myself oriented towards thinking about them as human-centered research problems.

I’m not sure whether that answered the question, but that’s the best answer I have. Anyone else?

12 does not equal Theoretical Saturation

In discipline, empirical, HCI, research on September 1, 2010 at 9:18 am

Since I’ve got a Grounded Theory focus right now, there’s something else I want to clear up.

12 does not equal theoretical saturation. Full theoretical development leads to theoretical saturation. And that is, of course, the stopping point for Grounded Theory research.

In my own experience, it took approximately six months at one field site, where I conducted approximately 200 interviews (mostly without a guide), followed by visits to a number of other field sites. At these sites, I added nearly another 100 interviews, and the total hours of observation are still measurable in months. In the end I visited seven different companies, although in my thesis I wrote about just three. At the seventh and last company, I heard nothing new with respect to my theory (I heard other things that were new, but they concerned issues not relevant to the explanation I was attempting to build).

Since I was studying the relationship between technical and human dependencies in software development, it seemed crucial that I sample among different types of development, so I looked at companies that contracted, those that worked in a monopsony market, and others that sold their software in the commercial marketplace. I wanted to understand whether the market conditions had influence on my theory. I also tried to sample across company size, from small, growing start-ups to large, stable organizations. Did size matter in coordination? I sampled across companies that built software exclusively for commercially available platforms, those that built on non-commercially available platforms, and those that built hardware. Were there differences based on the relationship to hardware, and did building hardware itself have any effect? Finally, I tried to get different types of product: systems built for real-time operation, those for high reliability, and others to address perceived or real consumer needs. In other words, I wanted to see whether the type of code base and the prevailing concerns about its nature influenced my theory.

Throughout the six months at the field site, and throughout the remainder of the scheduling, visiting, and meeting people at the other six sites, I conducted analysis. Data collection iterated with analysis. How many rounds did I do? I can’t even tell you. At first, I felt lost and bewildered: what on earth was I doing? Analysis generated more questions. Over time, the questions got more focused, so the rounds of analysis and collection began to converge, and eventually I had fairly specific questions.

I had gone in with a question about how software tools, specifically configuration management, structure the coordination of work that has an intangible quality, i.e., software. Grounded Theory seemed like a good fit. I’d read a number of pieces about Articulation Work and knew that it was derived from Grounded Theory. So, thanks to Strauss, I could leverage the products of that theorising to give me direction: a plan for my research questions, my interview questions, some ideas of what I might find in my analysis, and even some extra concepts to work with during the analysis (I looked for things that were similar, which is not hard given the nature of Grounded Theory analyses anyway).

There are other, non-Grounded-Theory reasons to conduct research that may involve less empirical data than I collected. You may be evaluating a deployment (perhaps baselining and then evaluating). My point here is that that’s different from Grounded Theory, and should be treated as such, explicitly. As a colleague of mine says, when reading Grounded Theories, they always want to know what the theory is. If you don’t have one, how does it qualify?

Grounded Theory

In empirical, HCI, research on August 30, 2010 at 8:49 pm

Right, this has been coming for a while.

Grounded Theory is not an excuse to go out and study something when you have no idea what’s going to happen. That’s just madness.

Stepping back. Sometimes I hear that Grounded Theory allows you to go into the field, collect data, and only develop questions during analysis. That’s the part that worries me. Research is very expensive, not just financially but, far more importantly, in terms of time; hence the madness described above.

So let’s clear up some misconceptions.

1) It’s impossible not to have research questions. Perhaps they are not very well formed ones (this is something people could easily say of me; I tend to work by instinct as much as by questions), but it’s pretty important to have questions, a sense that something is of interest. I’d go further, though: I think it’s impossible not to have a particular set of hopes and interests, and even desires for the outcomes. Grounded Theory suggests that you capture these prior to going into the field. They are a valuable resource and an important check (to verify, to the extent possible, that you’re not leading the analysis towards the assumptions you had before going in).

2) If you interview someone you almost certainly have to have expressible questions. Just saying.

So, Grounded Theory is a balance between exploring the data and being open to developing new lines of questioning based on previously ill-understood or not-understood phenomena captured in the data.

Now, I also think Grounded Theory is tempting because it comes with a series of steps. Open coding, axial coding, and selective coding suggest that, if appropriately followed, a Grounded Theory will result. Many other interpretivist approaches do not come with such steps. Instead the reader has to pay close attention to the theory that drives the empirical work. One has to understand why, say, accounts matter to the ethnomethodological agenda, and then understand that the study of phenomena is likely driven in part by a desire to further illuminate the concept with respect to the particular setting.

If you want an example of something that also has a “steps”-like feel to it, but is not Grounded Theory, try the Thinking Topics approach by Lofland and Lofland.

And if you want to understand the theory of Grounded Theory, try reading The Discovery of Grounded Theory.

So, what I am about to say is open to discussion (as if the rest is not 🙂), but it’s open to debate whether, and how much, Grounded Theory is driven by data. Google if you will.

I have reasons to believe that it’s not entirely driven by the data, but that other factors come into play. First, you can structure a Grounded Theory study using any other theory developed by the method. Strauss says so, although Glaser may disagree (most people follow Straussian Grounded Theory as opposed to Glaserian Grounded Theory, and the two differ).

Second, it seems to me that the questions you ask of the data during Straussian coding suggest certain types of outcome. The analysis, and the things it seeks to explain, tends to have a temporal quality; it promotes an understanding of an arc of time. Causes, consequences, who did what to whom: all very temporal indeed. Many of the Grounded Theories I’ve read explore trajectories of work, of people interacting and acting towards an outcome (whether predicted or not). If you read enough of them you begin to get a feel for some commonalities, for what they may work well at explaining. And though supposedly you’re not reading any related work (oh yes, you should), you start to get a feel for the occasions when Grounded Theory might be most useful.

Third, surely the fact that data collection is interspersed with analysis is also a reason that Grounded Theory is not entirely data driven. Data collection that follows a period of analysis must surely be driven by analytic concerns as well as data concerns. Gaps in the analysis that need to be addressed fuel the generation of further questions. I don’t think you can do Grounded Theory, at least not completely, in one round of data collection; there must be cycles of collection and analysis, collection and analysis. This is also my answer to how you know when the process ends, when the analysis is complete: when there is nothing else left to explain. Surely then, and only then, you have a Grounded Theory of which you can say the following: that it describes the world, that it is rhetorically powerful by being clear and persuasive, that it has inferential power (if a similar phenomenon is encountered, the theory helps you understand what may result), and that it has application.

While I’m here, let me clear up something else. I don’t want to read a Grounded Theory that doesn’t present what the theory is a theory about. I have a theory in mind: it’s a theory about why the division of labor among software developers, despite the goals of modularity, leads to the creation of dependencies that must be coordinated in order for code to successfully compile and run. Further, this theory shows how organizational hierarchies create distance that exacerbates the types of dependencies that exist and their ability to be coordinated. It enumerates dependencies that exist among individuals, between groups and divisions of a corporation, and those that span multiple corporations, and it offers strategies for their coordination.