In recent months, there has been an interesting dialogue on the IxD Discussion mailing list, in which some participants have questioned the need for and benefits of doing user research rather than relying on the experience and intuition of designers. These comments led others to voice concerns about the actual quality of the user research companies are undertaking and the validity of any conclusions they have drawn from the resulting data.
Three articles or posts have been particularly influential in sparking interest in and debate on this topic. The first was Christopher Fahey’s excellent five-part series “User Research Smoke & Mirrors,” which laid out some of the problems that Chris sees with user research and discussed where UX professionals go awry during research and analysis. Of special interest to me was the following statement in Part 1 of the series:
“Many Web designers and consultancies, however, feel it’s not enough to use research to inform their design process. They go further: They try to make “scientific” user research the very foundation of their design process.”—Christopher Fahey
The second was a post to IxD Discussion by Maria Romera titled “On research,” in which Maria proclaimed:
“I do think that if more people had an understanding of quantitative and qualitative statistics and what inferences you can and cannot draw from them, there would be far-reaching consequences not just for research and design, not just for science, but for society and politics as well!”—Maria Romera
Third, I recently came across again Jakob Nielsen’s March 2004 article “Risks of Quantitative Studies,” in which Jakob sets out some of the pitfalls of conducting quantitative usability studies. Starting with high-level concerns about numerical analysis, Jakob moves on to more concrete examples:
“It’s a dangerous mistake to believe that statistical research is somehow more scientific or credible than insight-based observational research. In fact, most statistical research is less credible than qualitative studies.”—Jakob Nielsen
For me, this collection of articles and the discussion surrounding these issues provoked a great deal of thought regarding
the benefits of user research
our unquestioning acceptance of numerical data as being authoritative
the quality of the design decisions we regularly make in the presence—or absence—of such research
User Research in Context
Before discussing the relative merits of these points of view with respect to user research, I want to review what we typically mean by user research within the context of UX design projects, which involve the design of the ways people interact with information and things—whether the focus is on information architecture, interaction design, or user experience design.
In user research, the most important broad distinction is between quantitative research—through which we derive specific counts or measurements—and qualitative research—involving qualities or characteristics. Examples of quantitative studies include
measurements of usability, including task-completion rates and time to completion
Web analytics, including traffic patterns, traffic volume, and abandonment and completion rates
analyses of click-through data
Qualitative studies include
contextual user observations
broader ethnographic enquiries
usability testing that doesn’t measure performance
A complete list of quantitative and qualitative research techniques would be much longer, but hopefully these examples serve to distinguish one from the other.
In a quantitative study, our aim is to measure something, then draw specific conclusions from the measurements—either by comparing them to other measurements or estimating performance across a broader audience. In a qualitative study, our goal is to capture the subjective responses of our test subjects in regard to something’s characteristics.
When carried out correctly, both quantitative and qualitative methods are equally valid and equally valuable. Where one seeks to quantify some aspect of use or behavior, the other seeks to gain an understanding of that behavior in a natural setting. In this sense, qualitative research is perceptual and personal, and thus, is bound to the researcher as much as it is to the subject.
Subjective Objectivity and So-Called Scientific Enquiry
Note that, contrary to popular belief, quantitative is not synonymous with objective, and qualitative is not synonymous with subjective. The belief that these terms are synonymous is actually at the root of many disagreements over whether one type of research is more scientific, or valid, than another. This is a central issue: Does it matter whether our research is scientific, and what do we mean by that? Let’s deal with the second part of this question first, since the first part will make more sense that way.
Without getting too deeply into the philosophical debate about scientific enquiry—for that, read Hume, Kant, or Bertrand Russell—generally speaking, we are talking about a process of observation, postulation, prediction, and experiment. However, throughout the recent debate on the benefits and purpose of user research, the tacit definition of scientific was objective—by which we mean that the observed or reported results arise independently of the researcher, who acts purely as observer and archivist. So, rather than perpetuating this popular, but not wholly accurate definition of quantitative research, I will simply refer to objective research throughout the remainder of this column.
The second tacit implication in the debate about user research is that objective research is somehow more acceptable than subjective research as the basis for design decisions—and even that subjective research is anathema.
It seems fairly clear that user research can serve different purposes—depending on the stage of a design project at which it occurs, the types of stakeholders involved, the strategic importance of a project, and the quality and experience of a design team.
The Role and Practice of User Research
For the seasoned and accomplished professional interaction designer or information architect, I suspect that primary user research plays the part of providing initial context and insights about users, and thereafter, acts as a safeguard against complacency and mental blind spots on the part of the designer. The designer’s experience, empathy, and intuition lead him or her toward a design solution that is very likely to be on target, and any user research a product team carries out on that solution will provide fine-tuning rather than requiring major revision and will validate design decisions they’ve made rather than assist with decision making.
For other, perhaps less experienced designers, a more methodical use of user research aids in the decision-making process itself. Analysis of results directly influences the design of the solution and likely plays a more prominent role in the early stages of the design process than it would for more experienced designers. This is right and proper, because user research can assist designers who are less experienced, empathetic, or intuitive in fashioning elegant solutions to design problems.
In neither of these cases is there a clear argument for using quantitative or qualitative research to inform or evaluate design decisions. Depending on the enquiry a team is undertaking and its aims, specific research techniques will be more or less suited to a given project.
Focus on Good Research Practices, Not Good Types of Research
Of more concern to me is the use of poorly designed user research, poor analysis, and invalid conclusions in making design decisions, as in the following examples:
measuring the effectiveness of a new process in a laboratory setting without taking into account the user’s typical work context
using leading survey questions or closed questions where the options will clearly skew the user’s selection—for example, providing satisfaction ratings of Awful, Very Bad, Okay, Good, and Very Good, where the use of the pejorative term Awful will bias respondents
comparing mean, or average, values for variable data without incorporating a measure of variability into the analysis
ignoring time-series trends—for example, the seasonal changes in site traffic leading up to Christmas—when comparing data across time periods
comparing raw counts of changes in populations of different sizes—for example, an increase of 50 click-throughs from a small base might represent a greater percentage increase, and thus greater effectiveness, than an increase of 2,000 from a much larger base
reporting average values for multiple attributes as being representative of some typical object rather than analyzing multiple attributes simultaneously using clustering techniques to identify groups of similar responses
not reporting confidence intervals for completion rates—that is, a plus/minus range around an estimated value—particularly in quantitative studies with small sample sizes; a confidence interval expresses how confident we can be that the true value for the entire population falls within that range
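Two of the quantitative pitfalls in the list above lend themselves to a quick numerical sketch. The figures below are hypothetical, and the simple Wald interval shown is only a rough normal approximation, but together they illustrate why raw counts and bare completion rates can mislead:

```python
import math

# Pitfall: comparing raw count increases across populations of different
# sizes. Hypothetical figures: 50 extra click-throughs on a base of 200 is
# a 25% lift, while 2,000 extra on a base of 100,000 is only a 2% lift.
def percent_increase(before, after):
    return (after - before) / before * 100

small_site = percent_increase(200, 250)          # 25.0
large_site = percent_increase(100_000, 102_000)  # 2.0

# Pitfall: reporting a completion rate without a confidence interval.
# This is the basic normal-approximation (Wald) interval for
# p = successes / n; with small samples it is crude, which is exactly
# why reporting some interval matters.
def wald_interval(successes, n, z=1.96):  # z = 1.96 gives ~95% confidence
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - margin), min(1.0, p + margin)

# 7 of 10 users completed the task: a "70% completion rate" actually
# spans roughly (0.42, 0.98) at 95% confidence.
low, high = wald_interval(7, 10)
```

The point of the sketch is not the particular formula but the habit: a completion rate from ten participants carries a very wide range of plausible population values, and that range should travel with the number.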
This list of common user research errors represents just a few of the possible errors that quantitative analysis in particular can introduce and illustrates why Jakob Nielsen argues against the use of quantitative studies. However, such mistakes are by no means restricted to quantitative analysis. Observers in qualitative studies can just as easily skew results through poor interpretation of user behaviors and by introducing their own unqualified and untested biases into their reports.
However, the fact that all forms of user research require careful preparation and implementation is no reason to avoid or gainsay the benefits of such research. Some have argued that only those with academic training in research methods can carry out user research correctly. This attitude is a form of intellectual snobbery and fails, perhaps deliberately, to recognize the skills of the many self-taught professionals who practice in our field.
Others have argued that quantitative research is the only valid form of user research. This viewpoint has its basis in the categorization of the many specialized observation techniques that come from the social sciences as subjective—for example, ethnographic studies. Such qualitative research techniques have distinct scientific validity and are often the only appropriate forms of research for a project. Then again, Jakob Nielsen argues that, because of the relative difficulty of carrying out quantitative studies correctly, usability professionals should lean toward qualitative studies instead.
Discussing the relative merits of specific techniques in researching and analyzing specific types of issues within specific contexts would be of greater benefit to practitioners and allow all of us to progress in our endeavors. For example, a recent article by James Lewis and Jeff Sauro, of IBM and Oracle, respectively, in the Journal of Usability Studies (Vol. 1, Issue 3), entitled “When 100% Isn’t Really 100%: Improving the Accuracy of Small-Sample Estimates of Completion Rates,” provides a review of available techniques for estimating the likely value of success or failure rates for tasks in usability studies and evaluates their relative strengths and appropriate applications for practitioners.
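As a rough sketch of two of the small-sample adjustments reviewed in that literature, the Laplace point estimate and the adjusted Wald interval, the following uses hypothetical figures; consult the article itself for guidance on when each technique is appropriate:

```python
import math

# Laplace point estimate: add one success and one failure before dividing,
# which tempers extreme small-sample rates like 5 of 5 = "100%".
def laplace_estimate(successes, n):
    return (successes + 1) / (n + 2)

# Adjusted Wald (Agresti-Coull) interval: add z^2/2 successes and z^2
# trials, then compute the usual Wald interval on the adjusted rate.
def adjusted_wald(successes, n, z=1.96):  # z = 1.96 gives ~95% confidence
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Five of five users completing a task is weak evidence of a 100% rate:
point = laplace_estimate(5, 5)   # ~0.86, not 1.0
low, high = adjusted_wald(5, 5)  # roughly (0.51, 1.0)
```

Even a perfect score from five participants leaves a plausible population completion rate anywhere above about fifty percent, which is precisely the kind of nuance that gets lost when a raw percentage is reported on its own.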
By addressing the issues surrounding user research and discussing the best methods of applying research results to design, we can avoid an over-reliance on what Christopher Fahey describes as a “faux-scientific” approach to our work and instead engage in a dialogue with our users that informs, illuminates, and validates the designs we produce.
Focusing on the business side of the user experience equation, Steve has over 14 years of experience as a UX design and strategy practitioner, working on Web sites and Web applications. Leading teams of user experience designers, information architects, interaction designers, and usability specialists, Steve integrates user and business imperatives into balanced user experience strategies for corporate, not-for-profit, and government clients. He holds Masters degrees in Electronic Commerce and Business Administration from Australia’s Macquarie University (MGSM), and a strong focus on defining and meeting business objectives carries through all his work. He also holds a Bachelor of Science in Applied Statistics, which provides a strong analytical foundation that he further developed through his studies in archaeology. Steve is VP of the Interaction Design Association (IxDA), a member of the IA Institute and UPA, founder of the UX Book Club initiative, Co-Chair of UX Australia, and an editor and contributor for Johnny Holland.