
Beyond Anecdotes: HCI 2009 Tutorial Review

December 21, 2009

Given the choice, how many people would swap a gloriously sunny Saturday in Cambridge, England, for a seven-hour tutorial about—wait for it—qualitative user research and analysis methods? Yet thirty-odd people did just that, electing to closet themselves in one of the nicer rooms at Churchill College to listen to what UCD researcher David Siegel had to say. The tutorial turned out to be a highly motivating, fast-paced, and anecdote-rich journey through the process of designing and analyzing qualitative field work in a user-centered design (UCD) context.

As anybody involved in UCD, UX, or related work probably knows, field work—whether in the form of usability tests, interviews, or focus groups—is an essential tool of our trade. Yet making sense of the data and the field notes we collect can often be a non-trivial task. The material can easily build up into a tall stack of notes, transcripts, and visuals. Even more challenging is the task of communicating the results to our clients in a compelling and authoritative way. Drawing on extensive real-world examples, including his work on the Microsoft Tablet PC OS and other projects for similarly high-profile clients, David attacked these problems with vigor and enthusiasm. Here are the top five takeaways.


1. Dealing with the Quantitative/Qualitative Divide

People often criticize qualitative research for being sloppy, ad hoc, unscientific, subjective, and rather inconclusive when it’s all done. Quantitative studies, on the other hand, tend to garner more respect—in large part, because it’s so much easier to remember a conclusion that takes a form like 10% of the market want this feature. David tackled this issue extensively, but the essence of what he told us was this:

  • Quantitative studies must, by definition, include a qualitative element, because, at some point, somebody decides what questions to ask, what constitutes enough questions, and how to get responses. Very often, this qualitative aspect is completely ignored and hidden away. David argues that quantitative research is much more effective when we determine the parameters of research through a qualitative, or ethnographic, approach. The perfect example was a qualitative exercise in which just 43 participants debunked the so-called scientific results of a quantitative study with a sample size of 17,000.
  • To make comprehension easier, you can interpret qualitative research in a quantitative way—by finding things to count or describing measurable data, using terminology such as this is rare behavior, this is a routine action, or all interviewees had an identical set of circumstances.

2. Building Scientific Rigor, Reliability, and Validity into Qualitative Field Studies

Scientific does not mean numeric. We need rigor in planning and carrying out a qualitative study to infuse qualitative data with an element of reliability—which directly impacts a study’s validity. Doing qualitative research reliably means paying attention to what contributes to scientific credibility, including such things as

  • paying attention to—and removing—personal bias or error
  • cross-checking results
  • ensuring data is accurate and traceable
  • keeping in mind the limitations of a research method

If you need a survey to find something out, do one. (This is the flip side of the argument David made in point 1.) David presented several practical techniques for doing all of this—mostly based on judgmental heuristics.

However, David was also keen to stress that, while reliability is a necessary prerequisite for validity, it does not imply validity, which often depends on a researcher’s interpretation of the data. This is quite a common pitfall for qualitative researchers, so we must take care not to jump to conclusions, inventing cause-and-effect relationships or spurious correlations on the basis of personal bias or gut feeling.

3. Blank Slates Don’t Work

A commonly held misconception is that qualitative researchers should go into the field with a blank slate and be totally open to whatever they may find. While being open is admirable, trying to create a blank slate is an exercise in self-deception. Nobody ever has a blank slate—least of all the companies that commission research. It is much better to recognize that fact and, instead, make use of the baggage and domain knowledge that researchers and organizations bring with them.

In fact, David strongly advocates going into the field with a rather detailed focus structure that includes a number of categories of things to look out for. (He dedicated almost an entire illuminating hour to how to create such a focus structure.) David argues that, rather than limiting a researcher’s openness, doing this actually makes it easier to notice the unexpected, because what you didn’t plan for tends to stick out like a sore thumb, forcing you to pay attention to it.

On a practical level, having such a focus structure also expedites the research and analysis process without compromising research integrity. In industry—as opposed to academia—that speed may be the more important factor.

4. Preserving the Richness of Qualitative Data

The most wonderful thing about qualitative data can also be its undoing. One of the big advantages of qualitative field research—especially in the context of product design—is the richness of the data you can capture—including images, sound, text, quotations, ambience, artifacts, emotion, and humor. But how can we preserve that richness when distilling an extensive research exercise into a short and readable executive report?

David tackled this thorny problem head on, dedicating a whole afternoon to coding techniques, sprinkled with a few practical exercises to help us get our hands dirty. Diligence and consistency are a researcher’s greatest assets when it comes to coding up the data, but just as important is putting some significant group effort into clustering and dimensioning the codes you generate. There are various ways of doing this, but the most important thing to emerge from David’s talk was the importance of using more than one technique on the same data set, ideally with different people participating, to truly explore the best possible ways of characterizing your findings.
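To make the coding discussion a little more concrete, here is a minimal, purely illustrative Python sketch of the kind of tallying that consistent coding enables. The participant IDs, code names, and theme clusters are all hypothetical, and this is not a tool David presented—just one way of turning coded transcript excerpts into the qualitative-but-countable statements mentioned in point 1.

    from collections import defaultdict

    # Hypothetical coded excerpts: (participant, code) pairs that a
    # researcher might produce while tagging interview transcripts.
    coded_excerpts = [
        ("P01", "workaround"), ("P01", "trusts-autosave"),
        ("P02", "workaround"), ("P02", "prefers-pen-input"),
        ("P03", "trusts-autosave"), ("P03", "workaround"),
    ]

    # Researcher-defined clusters that group low-level codes into themes.
    clusters = {
        "coping strategies": {"workaround"},
        "confidence in the system": {"trusts-autosave"},
        "input preferences": {"prefers-pen-input"},
    }

    # Tally the distinct participants who exhibit each code, supporting
    # statements like "this is routine behavior" rather than raw percentages.
    participants_per_code = defaultdict(set)
    for participant, code in coded_excerpts:
        participants_per_code[code].add(participant)

    total = len({participant for participant, _ in coded_excerpts})
    for theme, codes in clusters.items():
        participants = set().union(*(participants_per_code[c] for c in codes))
        print(f"{theme}: {len(participants)} of {total} participants")

Running the same tally over a second, independently generated set of codes—in the spirit of David’s advice to apply more than one technique to the same data set—would be one simple way of checking how stable the resulting themes really are.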

5. Communicating Field Findings

David concluded his day-long session with a discussion of the all-important issue of how to communicate field findings in a way that satisfies all four of the preceding points—in other words, how to make a compelling, memorable argument that people won’t dismiss as unscientific, unproven, or simply your own opinion. He provided these important pointers:

  • triangulate—In any discussion, a point of view is strengthened if more than one person advocates it. Similarly, in presenting research results, demonstrating that you can arrive at the same conclusion from different directions makes your argument far more powerful. Triangulation is key to achieving this, so take the extra time to analyze your data using three or more methods. The result of triangulation will either
  1. demonstrate to you that there might be a flaw in your argument, thus saving you from embarrassment, or
  2. ideally, make your argument much more defensible
  • behavioral characterization—Once again, keep the core element of qualitative research at the forefront of your work. Quantitative work characterizes and segments a market or demographic; qualitative work characterizes and tries to explain behavior. While things like network or process-flow diagrams can be an effective way of illustrating your results—in part, because they look suitably scientific!—they are, at the same time, so far removed from the real, raw behavioral data that it is almost impossible to have a proper discussion about them.
  • conditions, not frequency—Finally, don’t fall into the qualitative researcher’s standard pitfall of trying to build credibility by justifying your conclusions on the basis of statements like 8 out of the 12 users we interviewed did such and such. That is precisely the kind of statement that prompts a smart-ass quantitative dissenter to pipe up—and she would be right! Showing frequency in this manner is not your job. Instead, discuss the conditions under which something happens, how likely those conditions are to be ubiquitous—or not—and what’s at stake when they do occur.

This short review barely scratches the surface of the wealth of information and examples David brought to the discussion. I strongly recommend keeping an eye out for sessions that David Siegel and his colleague Susan Dray present at any HCI conference coming to a town near you.

Thanks to Red Gate for both sponsoring the 23rd British HCI Conference and sending me along.

Richard A. Muscat

Growth Engineer at Automattic

Cambridge, UK

Part of the UX team at Red Gate, Richard specializes in Web user experience. At Red Gate, UX designers and usability testers take a central role in product development, trying to ensure the company lives up to its motto: Ingeniously Simple Tools. In his spare time, Richard researches how startup companies can successfully apply user-centered design principles to early-stage business modeling. He has previously worked with several mobile and technology startups and was a Senior Web Specialist at Uniblue Systems. Richard completed an MA in Creativity and Innovation at the Edward de Bono Institute in Malta.

