Recruiting the Right Participants for User Research

Ask UXmatters

Get expert answers

A column by Janet M. Six
September 21, 2015

In this edition of Ask UXmatters, our panel of UX experts discusses how to determine whether you have recruited the right participants to conduct effective user research. Not only is it important to match the salient demographics and contexts of participants with those of a product or service’s actual users, it is also essential that we understand users’ real motivations for using a product or service. Therefore, a good screening process is a must. Our panelists also discuss what to do in low-budget situations. We must always remember that research participants are real people, with real feelings, who are contributing to the excellence of our designs.

Every month in Ask UXmatters, our panel of UX experts answers our readers’ questions about a broad range of user experience matters. To get answers to your own questions about UX strategy, design, user research, or any other topic of interest to UX professionals in an upcoming edition of Ask UXmatters, please send your questions to: [email protected].

The following experts have contributed answers to this edition of Ask UXmatters:

  • Carol Barnum—Director of User Research and Founding Partner at UX Firm; Author of Usability Testing Essentials: Ready, Set … Test!
  • Dana Chisnell—Principal Consultant at UsabilityWorks; Co-author of Handbook of Usability Testing
  • Cory Lebson—Principal Consultant at Lebsontech; Author of UX Careers Handbook (forthcoming); Past President, User Experience Professionals’ Association (UXPA)
  • Gavin Lew—Executive Vice President of User Experience at GfK
  • Daniel Szuc—Principal and Co-founder of Apogee Usability Asia Ltd.
  • Jo Wong—Principal and Co-founder of Apogee Usability Asia Ltd.

Q: How can we determine whether we have recruited the right participants to conduct effective user research?—from a UXmatters reader

“First, you need to understand some key demographics of your user base,” replies Cory. “This does not mean that your sample of research participants needs to match each demographic category, item for item. For example, for any usability study I’ve done, I’ve never seen screening categories such as gender, race, ethnicity, or other similar demographics matter at all. But understanding broad age categories—which can sometimes act as a proxy for tech savviness—education and literacy, actual usage of a particular technology platform, or knowledge of a particular domain and its associated jargon can make a difference. In participants, these are the sorts of demographics that should typically reflect those of the types of users who would actually use a product.

“Assuming that time and budget constraints limit the number of participants in your study, focus on average or typical use, not edge cases. Thus, you should avoid participants who are guaranteed to have difficulty with a product, no matter what, as well as those who are super-knowledgeable about the product. Expert users may have a lot to say, but they won’t give you any data about how typical users would use a product.”

Screening Participants

“As researchers, it’s our obligation to tap the right participants to enable us to get insights that are meaningful and actionable,” responds Gavin. “This makes our participants—our data—the Achilles’ heel of research. The first step is to get recruiters you can trust. At GfK, we have a lot of them because we use recruiters with particular specialties.

“The second step is to prepare a recruiting screener that asks the right questions in a way that would prevent potential participants from being able to discern what answers would get them into a study. Let’s face it, there are professional participants out there, and we want to avoid them. As part of our training for researchers, a whole section covers writing screeners. One example of how to do this is to limit the number of yes/no questions. For example, if you are targeting a group of participants who haven’t done X in the last six months, instead of asking, ‘Have you done X in the last six months?’ you should ask them the date on which they last did X. They won’t know the answer that would get them through.
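Gavin’s date-question technique amounts to a simple eligibility rule: admit only candidates whose last use of X falls outside the window of interest. Here is a minimal sketch of that rule in Python; the function name, cutoff arithmetic, and six-month default are illustrative assumptions, not part of GfK’s actual screening tooling.

```python
from datetime import date, timedelta

def qualifies(last_did_x: date, today: date, months: int = 6) -> bool:
    """Screen in only candidates who have NOT done X within the window.

    Asking for a concrete date, rather than a yes/no question, makes it
    harder for a candidate to guess which answer admits them to the study.
    """
    cutoff = today - timedelta(days=30 * months)  # approximate month length
    return last_did_x < cutoff

# A candidate who last did X eight months ago qualifies;
# one who did it three weeks ago does not.
today = date(2015, 9, 21)
print(qualifies(date(2015, 1, 10), today))  # True
print(qualifies(date(2015, 8, 30), today))  # False
```

The same open-ended question serves double duty: the date you record here can be re-asked later to catch inconsistent answers.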

“Third, when asking warm-up questions at the beginning of a session or as a question in the pre-session questionnaire, ask your core questions again to verify that you have the right participants. We have found that people sometimes change their mind. Participants who lie often don’t remember their lies. Finally, if we ask participants about their experience with something, we ask them to bring that thing into the session and check their product IDs. Despite all of this, you’ll still sometimes get a bad participant or two. Tag them in your database to filter them out of future studies, and be sure to provide feedback about them to your recruiter.”

Relying on User Profiles and Prior User Research

“Recruiting the right participants is the most important part of conducting effective user research—especially when the number of participants is small,” says Carol. “If you don’t recruit actual target users, you won’t be able to confirm that the research findings are relevant. The way to determine whether you have recruited the right participants is to create a user profile that you base as much as possible on information about real users that you’ve gathered from prior research and other data sources. Sources of real information about your users can come from any available market research, customer service phone logs, sales information, and other internal resources.

“If no data is available, your best option, if budget permits, is to conduct your own research and create formal or informal personas of your users. If no data is available and there is no budget for conducting your own research about users, you’ll have to do the best you can on the basis of anecdotal information. Gather what information you can by talking informally to the sales and marketing team, the customer service team, and any others who have access to target users. Then, schedule a planning meeting with the core team to determine who they think the users are and what they think users want to do with the product or service you are testing.

“Don’t let the answer that ‘everyone is a potential user’ stand. You have to break down ‘everyone’ into subgroups of potential users, and each subgroup needs its own list of characteristics that make up that subgroup’s user profile. Depending on the size of your research budget, you should recruit participants from one or more of these subgroups of users. For each subgroup, write a detailed screener based on what you have determined to be necessary characteristics, as well as other characteristics that are nice to know about, but not essential to determining the eligibility of a participant for your study.
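One way to keep Carol’s distinction between must-have and nice-to-know characteristics explicit is to encode each subgroup’s screener as data. This is a hypothetical sketch; the subgroup name and trait labels are invented for illustration and are not from Carol’s process.

```python
# Each subgroup gets its own screener: required characteristics decide
# eligibility; nice-to-know traits are recorded but never disqualify.
screeners = {
    "new-parent shoppers": {
        "required": {"shops_online", "has_child_under_2"},
        "nice_to_know": {"uses_coupons"},
    },
}

def eligible(candidate_traits: set, subgroup: str) -> bool:
    """A candidate qualifies only if every required trait is present."""
    return screeners[subgroup]["required"] <= candidate_traits

print(eligible({"shops_online", "has_child_under_2"}, "new-parent shoppers"))  # True
print(eligible({"shops_online", "uses_coupons"}, "new-parent shoppers"))       # False
```

Keeping the two lists separate in this way prevents a nice-to-know trait from silently becoming a disqualifier as the screener evolves.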

“Be prepared to screen many more people than you will need for your research. Working with a detailed screener, you’ll find that you have to eliminate many prospects who do not qualify for your study. Even prospects who meet many of your criteria may not make good participants if they do not have a real and current need for your product or service. Above all else, you must identify motivated participants who share the goals of real users or customers, so their actions and feelings reflect your target audience in a realistic way.

“The challenge of understanding who your users actually are is huge. The challenge of successfully recruiting such users for your studies is another big one. So think about this activity as a two-stage process:

  1. Understanding your users and creating a screener to match them
  2. Recruiting and scheduling participants that match them for your research studies

“If you complete both of these activities, you’ll be able to manage and control the whole recruiting process to ensure that you get the participants you want and need. However, be prepared to invest a lot of time in recruiting and scheduling participants. If you will not be doing the recruiting yourself, see whether you can build some sort of applicant-review process into the screening process to provide quality control. If such a review is not possible, make sure that your screener is as detailed as you can make it, so your recruiting agency understands how to make a good match in selecting study participants.”

Understanding Users and Their Motivations

“This question really calls for a two-part answer,” replies Dana. “The first part is about making sure that participants are the kind of people who would normally use the product or service that you’re designing. This means they would practice the behavior you’re trying to understand and support. The second part is to ensure that they’re truly motivated to participate before you get them in a room with you.

“One way to know whether you’ve recruited the right people for your study is to focus on behavior. What behavior do you want participants to take in your study? Have they done this in real life—or are they about to do it in real life? For example, one terrific method of research for ecommerce Web sites is to conduct compelled shopping studies. User Interface Engineering (UIE) pioneered this approach in the 1990s. The idea is that you find people who are in the market for what you’re selling—cars, digital cameras, business cards, or whatever—and give them the money to buy it, then watch them use the Web site to make the purchase. They can either use the money to pay for the thing that they came to the site to buy, or they can keep the money if the design of the site fails to help them find and purchase what they want. There’s no pretending. This is not a hypothetical situation.

“The important thing is that participants have to be people who are in the market for the thing you’re selling. You can learn about this through careful sourcing and interviewing of candidates. At UsabilityWorks, we don’t use screening questionnaires. We do voice-to-voice interviews on the phone and start with open questions. For an ecommerce study, for example, we might begin by asking about the last time the person bought a digital camera—if that’s what the study is about—and why they’re looking for a new digital camera now. This tells us a lot about what their experience was like, whether they’re truly in the market now, and whether they’ve done any research about the product yet. It’s harder for people to game their way into a study when they can’t try to pick the right answer from a list of multiple-choice options.”

Understanding Participants’ Feelings

“When recruiting participants, you must first and foremost understand that they are people with their own lives, emotions, frustrations, and needs,” answer Dan and Jo. “Often, clients give us a set of criteria for participants that describes people as belonging to particular market segments. But this has little to do with people’s willingness to participate or their interest in the domain you are studying. Participants need to feel that they can make an important and active contribution toward improving a product’s design. So our first step in determining whether we have recruited the right people for our user research is to stop describing them or treating them as lab rats in experiments.” 

Principal at Lone Star Interaction Design

Dallas/Fort Worth, Texas, USA

As Principal of Lone Star Interaction Design in Dallas, Texas, Dr. Janet M. Six helps companies design easier-to-use products within their financial, time, and technical constraints. For her research in information visualization, Janet was awarded the University of Texas at Dallas Jonsson School of Engineering Computer Science Dissertation of the Year Award. She was also awarded the prestigious IEEE Dallas Section 2003 Outstanding Young Engineer Award. Her work has appeared in the Journal of Graph Algorithms and Applications and the Kluwer International Series in Engineering and Computer Science. The proceedings of conferences on Graph Drawing, Information Visualization, and Algorithm Engineering and Experiments have also included the results of her research. Janet is the Managing Editor of UXmatters.