
Identifying and Validating Assumptions and Mitigating Biases in User Research

Ask UXmatters


A column by Janet M. Six
October 19, 2015

In this edition of Ask UXmatters, our panel of UX experts discusses how to identify and test assumptions and mitigate biases in user research. First, the panel considers whether we need to eliminate assumptions or whether they are actually an integral part of most projects. Then, they discuss how to ensure researchers’ and observers’ objectivity and how to determine what assumptions and biases exist on particular projects. Finally, they look at best practices for mitigating bias in user research.

In my monthly column, Ask UXmatters, our panel of UX experts answers our readers’ questions about a broad range of user experience matters. To get answers to your own questions about UX strategy, design, user research, or any other topic of interest to UX professionals in an upcoming edition of Ask UXmatters, please send your questions to: [email protected].


The following experts have contributed answers to this edition of Ask UXmatters:

  • Carol Barnum—Director of User Research and Founding Partner at UX Firm; author of Usability Testing Essentials: Ready, Set … Test!
  • Pabini Gabriel-Petit—Head of UX for Sales & Marketing IT at Intel; Principal Consultant at Strategic UX; Founder, Publisher, and Editor in Chief of UXmatters; Founding Director of Interaction Design Association (IxDA); UXmatters columnist
  • Steven Hoober—Mobile Interaction Designer and Owner at 4ourth Mobile; author of Designing Mobile Interfaces; UXmatters columnist
  • Jordan Julien—Independent Experience Strategy Consultant
  • Cory Lebson—Principal Consultant at Lebsontech; author of UX Careers Handbook (forthcoming); Past President, User Experience Professionals’ Association (UXPA)
  • Daniel Szuc—Principal and Co-Founder of Apogee Usability Asia Ltd.
  • Jo Wong—Principal and Co-Founder of Apogee Usability Asia Ltd.

Q: How do you identify and weed out your assumptions and prejudices when facilitating user research?—from a UXmatters reader

“Every project begins with ideas that are based on assumptions—what we think we know,” answers Pabini. “Declaring our assumptions up front and testing those that present the highest risk are key steps in defining the problems a product or service team must solve. Only once we’ve done that can we form useful hypotheses regarding design solutions for those problems.

“However, it is essential that we validate our hypotheses with users. Doing so requires that we create and test a minimal prototype of our hypothetical solutions. User research and user-centered design approaches enable us to validate prototypes that are based on our assumptions and hypotheses. By validating our assumptions, we can reduce project risk very early in a product-development lifecycle—during the exploration phase, before a business has made any significant investment in a design solution. Doing so also helps ensure good experience outcomes for our users.”

“Lately, I’ve run across a number of people who think that doing usability testing—or other user research—means we’ll test every single thing,” replies Steven. “But we don’t need to do that. As Stephen Hawking says, ‘As scientists, we step on the shoulders of science, building on the work that has come before us.’ We do not need to learn everything anew, but should build on previous principles and patterns. Instead of weeding out our assumptions, let’s define, codify, and validate them.

“But that absolutely means we need to identify what we already know, though I tend not to call these assumptions. For this exercise, let’s call this reasoning. A design addresses a number of features, and design elements present those features. Presumably, these features weren’t randomly assigned to a designer; there are reasons you’re implementing them. Ideally, we’d all document our reasoning as we design, but I’ll admit that I don’t actually do this myself. Using pattern libraries helps compensate for this because, for anything that follows a pattern, we can rely on the explanation in the pattern.

“So, as part of your research planning, go through the design with your team and identify what your product does, what it should do, what parts are well understood because they follow established principles and patterns, and what parts might be an issue. I did this just the other day with a project team. They had given me a list of features they wanted to test in a prototype, so I worked with the designers to turn it into a test plan, explaining, point by point, why some things were not worth testing and how to explore user interactions for the rest.

“Thus, we should test only to learn what we do not know and to clarify what we have previously observed, but not fully understood. Taking this approach is cheaper, quicker, and, most important, more effective because we do not get lost in a sea of data and can instead focus on what we need to know, then iterate to improve our understanding further.”

Ensuring Researchers’ and Observers’ Objectivity

“First and foremost, if the design, product, or service is your baby, do your best not to be the person moderating the research,” recommends Cory. “Even being a visible observer when another person is moderating can present difficulties.

“Several years ago, I was doing field research, and neither a dedicated observer room nor the remote transmission of sessions was feasible. So one stakeholder at a time observed the research live. At one point, I noticed that one of the observers was unconsciously nodding whenever a participant started down the right path. I sent a quick instant message to that observer, then rearranged the seating before the next session so the observer would be out of the participant’s view. While this helped solve the problem, it also made me acutely aware of how easy it is to introduce potential bias into your research.

“While I’m an advocate of flexibility in research—which means, as a moderator, not necessarily following a script exactly and adding tangential probes whenever you think it’s appropriate—you must be mindful of what you are saying at all times. For example, during usability testing, it’s all too easy to find yourself saying things like ‘Great!’ when you observe task success or inadvertently giving clues when you introduce a task.”

“Start by not being a stakeholder!” exclaims Carol. “If you don’t have a stake in the game, you’re not likely to be biased in any direction. If your goal is to discover the user experience—the good, the bad, and especially the ugly—your main motivation will be to make the user comfortable, put the user into scenarios that match the user’s real goals, then watch, listen, and learn from what users show and tell you.

“You can also manage the design team’s temptation to jump to conclusions too quickly or make assumptions on the basis of preliminary findings. Encourage observers to stay in listening mode until they reach consensus that they’ve seen enough to know a problem exists and don’t need to hear from more users to confirm it. If the solution is at hand, perhaps you can implement it on the spot. Or you can change or eliminate a scenario, perhaps dropping a different one into its place. That way, you can keep learning from users as they engage with a user interface.

“At the conclusion of user research or usability testing, you can get at assumptions and prejudices by directly questioning the stakeholders who observed your research about what they thought they would see or learn and whether the research confirmed or exposed their prejudices. One of my favorite questions—which is not unique to my research methods—is asking the observers for their Aha! moment. This question often elicits the hidden assumptions people had going in and how those assumptions changed as they observed users.”

Identifying Your Team’s Assumptions

“Assumptions, prejudices, and biases are always present in projects,” reply Dan and Jo, who recommend reading “7 Cognitive Biases That Are Holding You Back,” by Sam McRoberts, and “The Myth of Rational Decision-Making,” by Vivian Giang. “It’s healthy, but rare, for a team to be able to list out their assumptions so they can determine what questions they need to ask along the way, better define user needs as part of user understanding, and discover insights that inform product and service design.

“Challenging people’s assumptions is not about challenging any individual’s ego. The primary reason for challenging assumptions is to ensure that, over time, you refine the core meaning of the product or service you’re designing. This minimizes wasting the team’s time and resources on designing features that are based on false assumptions and do not really matter to users.

“The main ways to identify assumptions are as follows:

  1. Have a conversation and list out everyone’s assumptions.
  2. Determine how you can turn those assumptions into questions.
  3. Have the team map their questions to research approaches that would enable the team to discover answers and validate or challenge specific assumptions.
  4. As your understanding of users deepens, determine whether these assumptions lead to other assumptions that raise additional questions, driving further discovery over time.

“Allowing teams to challenge their assumptions in an open, respectful, and caring way helps them open up to learning from users, leave their egos behind, and gather evidence that leads to smarter product and service design decisions over time.”

Double-Blind Testing Versus Mitigating Bias

“Researchers have done a lot of work on experimenter bias over the years,” answers Jordan. “There are essentially two schools of thought on this: those who prefer double-blind testing and those who prefer proactive mitigation of experimenter bias.

“Depending on the double-blind method a researcher uses, it’s theoretically possible to completely remove experimenter bias from the equation by keeping certain details from the experimenter. This process generally involves a committee that has full knowledge of the test or research being conducted. In my opinion, this just shifts the potential bias to another person or group of people.

“The other approach is to proactively mitigate experimenter bias. This approach assumes that we can never completely remove all biases. Thus, the goal should be to mitigate any biases that could have a significant impact on the results of our research. This is where processes and methods come into play—both of which we generally develop to achieve consistent standards for quality and efficiency.

“These methods may include guidelines or processes for avoiding experimenter bias. For instance, facilitating most types of user research requires that we do some background research up front. Several cognitive biases may influence the depth and breadth of this research. Therefore, you can include certain steps in your research methods whose explicit purpose is mitigating potential experimenter bias—for example, always collecting background research from at least three trusted sources.”

Product Manager at Tom Sawyer Software

Dallas/Fort Worth, Texas, USA

Dr. Janet M. Six helps companies design easier-to-use products within their financial, time, and technical constraints. For her research in information visualization, Janet was awarded the University of Texas at Dallas Jonsson School of Engineering Computer Science Dissertation of the Year Award. She was also awarded the prestigious IEEE Dallas Section 2003 Outstanding Young Engineer Award. Her work has appeared in the Journal of Graph Algorithms and Applications and the Kluwer International Series in Engineering and Computer Science. The proceedings of conferences on Graph Drawing, Information Visualization, and Algorithm Engineering and Experiments have also included the results of her research.
