Recruiting Better Research Participants

Practical Usability

Moving toward a more usable world

A column by Jim Ross
July 5, 2010

Recently, at the end of a busy day of usability testing, I looked at the list of participants we had recruited for another client’s usability test the following week. That’s interesting, I thought. Here’s another guy with the same unusual name as the participant we had earlier this morning. Wait a second! That’s the same guy! Somehow he had slipped through the recruiting process and had gotten on two different tests in consecutive weeks. Luckily, I noticed this ahead of time, and we were able to replace him with someone who wasn’t looking to supplement his income as a professional user research participant.

Recruiting the right participants is the foundation of effective user research, because your research results are only as good as the participants involved. Representative, well-spoken, and thoughtful research participants can provide invaluable feedback. Yet finding and recruiting such ideal participants and getting them to show up for their sessions is sometimes difficult.

Over my ten years of experience in user research, I’ve recruited my share of no-shows, professional user research participants, and oddballs—as well as unrepresentative, uncommunicative participants. Despite your best intentions, less-than-ideal people can slip through a seemingly sound screening process.


Fortunately, there are ways to improve your recruiting success. In this column, I'll share tips for writing a better screener, eliminating professional user research participants, minimizing no-shows, deciding who should do the recruiting, and handling the wrong people when they slip through your screening process. These tips are valuable to those who are new to user research, as well as to experienced researchers as a refresher on a part of the user research process that people often take for granted.

Write a Better Screener

Because user research is only as good as the participants you involve, the screener is one of the most important documents for a user research project. Yet it’s often the one people tend to take for granted. Few books or schools detail how to write an effective screener, and as a result, we often learn to write screeners through trial and error, rely on screeners the marketing department has created, or reuse screeners from previous projects without considering whether they are still appropriate. The following tips will help you create a better screener.

Prevent People from Hanging up on You

Unless you have good connections to potential participants—for example, through your client or a participant database—either you or your recruiter will be making cold calls to them. When you make cold calls, people will assume you’re a telemarketer, so you need to get right to the point and quickly establish that you’re not selling anything. Say you’re recruiting people for a paid study—mentioning the money up front gets people’s attention and keeps them on the phone.

Don’t Use the Screener to Gather Information

The purpose of a screener is to select the best participants for a study. The more questions you ask, the longer and more cumbersome a screening call becomes. So don’t include information-gathering questions unless they also serve a screening purpose. Asking participants to fill out a questionnaire at the beginning of a research session is a much better way to gather information.

Ask the Elimination Questions First

Don’t waste the time of either the recruiter or potential participants by making them go through a lengthy screener before getting to the questions that eliminate the most people. Ask those questions first, so only the most likely candidates must go through all of the questions.

Eliminate Conflicts of Interest

Eliminate potential participants who may have a conflict of interest with your client or who have too much insider knowledge. For example, if you were conducting usability testing on an airline’s Web site with travelers, you should screen out people who work for that airline or a competitor.

Recruit Based on Behavior and Attitudes

When recruiting for small-sample user research studies, behavior and attitudes matter more than demographics. For example, when doing user research on a flooring company's Web site, it is far more useful to recruit homeowners who are in the market for flooring than it is to simply replicate the company's customer demographics.

Consider what attitudes should qualify or disqualify someone from participating in your study. For example, a person who has strong loyalty to mom-and-pop shops and a dislike of big-box stores would not be a good participant for usability testing on Walmart’s Web site. In such a case, you should ask potential participants about the stores they frequent and their attitudes toward certain retailers.

Screen for Computer and Web Experience

Ensure that the participants’ experience with the computer and the Web matches that of your user groups. You’ll often want to eliminate people with too little or too much computer and Web experience, unless their level of experience is appropriate for your project. For example, in a usability test, you don’t want participants to confuse user interface problems with the problems new computer users face in general. Similarly, you won’t usually want to test with participants who are Web developers or UX designers. They bring a level of expertise and a focus that is not representative of a typical user.

When you are screening, don’t ask participants to assess their own computer and Web experience. Self-assessments are very subjective, and some people are reluctant to admit their inexperience. Instead, ask specific questions like whether they have a computer at home, their Internet access, how many hours they spend on the Internet per week, how many years they have been using a computer, and the type of computer activities they perform regularly. Then make your own assessment of their experience.
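As a sketch of this approach, you could score those specific answers and map the total to an experience level, rather than trusting a self-assessment. The weights and cutoffs below are hypothetical illustrations, not values from this column:

```python
# Hypothetical scoring sketch: infer computer/Web experience from
# specific screener answers rather than from self-assessment.
# All weights and cutoffs here are illustrative assumptions.

def experience_level(answers: dict) -> str:
    """Map concrete screener answers to a rough experience level."""
    score = 0
    if answers.get("has_home_computer"):
        score += 1
    if answers.get("has_internet_access"):
        score += 1
    score += min(answers.get("hours_online_per_week", 0) // 5, 3)  # cap at 3 points
    score += min(answers.get("years_using_computer", 0) // 3, 3)   # cap at 3 points
    score += min(len(answers.get("regular_activities", [])), 3)    # e.g., email, shopping

    if score <= 3:
        return "novice"
    if score <= 7:
        return "intermediate"
    return "experienced"

print(experience_level({
    "has_home_computer": True,
    "has_internet_access": True,
    "hours_online_per_week": 12,
    "years_using_computer": 10,
    "regular_activities": ["email", "shopping", "banking"],
}))  # → experienced
```

Tuning the thresholds is the researcher's judgment call; the point is that the assessment comes from observable facts, not the participant's opinion of themselves.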

Eliminate the Strong, Silent Types

In a qualitative study, there are few things worse than a participant who gives only one-word answers. It requires a lot of work to drag useful information out of such people. To determine how expressive people are, ask a few open-ended questions that relate to the topic of your study.

Ensure That People Are Physically Able to Participate

It’s obvious—but easily overlooked—that participants must be physically able to participate in a study. For example, in most cases, unless you’re doing usability testing for people with disabilities, participants need to be able to read a computer screen. In this case, you should ask, “Can you read a computer screen, using contact lenses or eyeglasses if necessary, without difficulty?” And remind scheduled participants to bring their contacts or glasses.

Eliminate the Usual Suspects

Recruiting companies sometimes rely on their participant databases before calling fresh participants. While many of these people are great participants who are very good at providing insightful feedback, others volunteer a little too often. They may be supplementing their income as a professional user research participant. (I’ll discuss this in greater depth in the next section.) To avoid the usual suspects, eliminate those who have recently participated in a study—for example, within the last six months.

Screen Out Professional User Research Participants

People who frequently supplement their income by participating in user research will say and do whatever it takes to get into a study. It’s often all too easy to figure out the correct responses and avoid being eliminated. However, there are ways to improve screener questions and eliminate those who stretch the truth to get included in a study.

Ask Open-Ended Questions

With multiple-choice questions, it’s often easy to guess which answers to choose. For example, consider this question:

“When was the last time you participated in a focus group or usability study?”

[   ] 6 months ago or less
[   ] More than 6 months ago
[   ] Never

It’s obvious the recruiter is looking for people who have not participated in a study within the last six months. Instead, ask this as an open-ended question, without reading the choices to potential participants, making it more difficult for them to determine the elimination criteria.

Include Multiple Elimination Answers for Questions

When it is necessary to ask multiple-choice questions, it’s possible to make the elimination answers less obvious by including multiple elimination answers for a single question. For example, if you’re trying to recruit people who watch 13 or more hours of TV per week, the safe answers to the following question seem obvious.

Approximately how many hours of TV do you watch per week?

[   ] 12 or less TERMINATE (eliminate the person from the study)
[   ] 13–18 CONTINUE
[   ] 19–24 CONTINUE
[   ] 25–30 CONTINUE
[   ] 31 or more CONTINUE

Because the middle three answers are so specific, they seem like safe choices to avoid being eliminated. It’s obvious that 12 or less or, possibly, 31 or more are the cutoff points. The wording of 12 or less and 31 or more implies that, if you watch that little or that much TV, the recruiter doesn’t care about the exact number of hours you watch, and you’ll be eliminated.

Instead, provide more than one elimination answer to hide the obvious cutoffs. For example, in the following question, it’s much less obvious which answers will result in elimination.

Approximately how many hours of TV do you watch per week?

[   ] 0–3 TERMINATE (eliminate the person from the study)
[   ] 4–6 TERMINATE (eliminate the person from the study)
[   ] 7–9 TERMINATE (eliminate the person from the study)
[   ] 10–12 TERMINATE (eliminate the person from the study)
[   ] 13–16 CONTINUE
[   ] 17–20 CONTINUE
[   ] 21–25 CONTINUE
[   ] 26–30 CONTINUE
[   ] 31–35 CONTINUE
[   ] 36–40 CONTINUE
[   ] 40 or more CONTINUE
However, be careful when asking which-of-the-following questions. Professional participants sometimes play it safe by selecting all or most of the answers, knowing there’s one expected answer that will get them in the study. Again, an open-ended question like the following is even better:

Are you planning on making any major purchases within the following year?

[   ] No TERMINATE (eliminate the person from the study)
[   ] Yes CONTINUE

If Yes, ask this follow-up question:

What are you planning to purchase?

A potential participant must mention a car; otherwise, TERMINATE.
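If you administer a screener online, elimination logic like this can be encoded directly. The sketch below mirrors the TV-hours and major-purchase questions above; the function names and the out-of-range default are my own illustration:

```python
# Sketch of screener elimination logic for the questions above.
# Multiple TERMINATE ranges hide the real cutoff (13+ hours per week)
# from professional participants guessing at "safe" answers.

TV_HOURS_RULES = [
    (range(0, 13), "TERMINATE"),   # 0-12 hours: split across four choices on the screener
    (range(13, 200), "CONTINUE"),  # 13 or more hours qualifies
]

def screen_tv_hours(hours: int) -> str:
    for bucket, decision in TV_HOURS_RULES:
        if hours in bucket:
            return decision
    return "TERMINATE"  # assumption: out-of-range answers default to elimination

def screen_major_purchase(planning: bool, items: list[str]) -> str:
    """Open-ended follow-up: only a planned car purchase qualifies."""
    if planning and "car" in (item.lower() for item in items):
        return "CONTINUE"
    return "TERMINATE"

print(screen_tv_hours(12))                         # → TERMINATE
print(screen_tv_hours(20))                         # → CONTINUE
print(screen_major_purchase(True, ["Car", "TV"]))  # → CONTINUE
```

Keeping the rules in data rather than scattered through the form also makes it easy to review the cutoffs with your client before fielding the screener.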

Test Your Screener with a Colleague

To find the holes in your screener, test it out with a colleague. Ask this person to do whatever it takes to get through the screener without being eliminated. This will give you a good sense of which questions you should improve.

Minimize No-Shows

There will always be no-shows, but there are steps you can take to make it more likely that your participants will actually show up for their sessions.

Locate Your Research Session Conveniently

Obviously, the more convenient the location of your user research session, the more likely people will want to participate. Consider the typical locations of your participants during your session times. Is your location easily accessible? Is there public transportation and convenient parking? Would it be more convenient for your participants if you went to them? Or consider conducting remote sessions through a phone call and Web conferencing software.

Schedule Your Research Conveniently

Conduct your user research sessions at times that are convenient to the participants. Of course, that sounds obvious, but often user researchers schedule sessions in the daytime on weekdays, because office hours on workdays are most convenient for them and the stakeholders observing the sessions. If you do this, you may run the risk of getting too many participants who have nothing else to do during the day—students, the unemployed, people on disability, and retired people.

Avoid Days That Produce More No-Shows

Fridays and the days before holidays tend to produce more no-shows. People suddenly realize they could be doing something other than attending your user research session. People tend to be more reliable earlier in the week, when they are less tempted by other options.

Provide the Proper Incentives

Let’s face it, for most people, it’s the money that encourages them to keep their commitment to participate in a user research session. If you don’t offer participants the right incentive for the effort involved, your no-show risk increases. The appropriate incentive depends on the people participating and what you’re asking them to do. For example, it would cost you more money to encourage a brain surgeon to participate in a one-hour usability test than it would to enlist the average consumer to participate in a session. Currently, incentives for average consumers for a one-hour activity are around $100, while incentives for highly specialized professionals can be $250 or more. Payment can be in the form of cash, checks, or even gift cards.

For online, unmoderated activities such as card sorting or usability testing, offering an entry in a drawing for a valuable prize is often more effective than paying a small incentive to each participant. For example, a chance to win an iPad can entice hundreds of people to complete your study.

However, participating is not always about the money. Some people are more motivated by the chance to help improve a user interface for a product they use regularly. For example, employees using poorly designed business systems are often grateful that someone is finally asking for their opinions. The chance to improve the application is incentive enough.

Contact Participants to Confirm

Contact your scheduled participants a day or two before their user research sessions to make sure they are still planning to participate. Calling to confirm gives you an early warning regarding those who cannot participate, so you’ll have a head start on recruiting replacements.

Recruit Additional Backup Participants

Recruit backup participants to replace no-shows and problem participants. There are two effective approaches to securing backup participants: using floaters and overrecruiting.

Floaters

Floaters are people who are willing to wait around until they’re needed to fill in for a no-show or replace a scheduled participant who just isn’t a good fit for a study. Typically, each floater covers several session times, so they receive greater incentive payments. To eliminate risk, you could have a different floater cover each session time, but doing this is very expensive. Instead, balance cost and risk by having each floater cover multiple sessions.

Use floaters in the following circumstances:

  • You have people observing the sessions, and you want to ensure there is always a participant for them to observe rather than their having to wait around if a session gets cancelled.
  • You don’t have time for an extra day of backup sessions.
  • You’re renting a research facility, and you don’t want to pay for an extra day for conducting backup sessions.

Overrecruiting

Overrecruiting means recruiting more participants than you actually need. For example, if you need 12 participants, you could recruit 15 and schedule the three extra sessions at the end of the last day or on the following day. If everyone shows up for their scheduled sessions, you can dismiss the extra participants—after paying them their incentives, of course. Overrecruiting is less expensive than using floaters, because you’ll need fewer backup participants and can pay them standard incentives rather than the higher incentive payments floaters receive.
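To make the cost trade-off concrete, here's a back-of-the-envelope comparison. The standard incentive is the $100 consumer rate mentioned earlier; the floater premium is an assumed multiple, not a figure from this column:

```python
# Back-of-the-envelope cost comparison: floaters vs. overrecruiting.
# Assumes the $100 standard consumer incentive from earlier in the
# column and a hypothetical 2.5x premium for floaters, who must wait
# around to cover several session times.

STANDARD_INCENTIVE = 100
FLOATER_PREMIUM = 2.5  # assumption, not a rate from this column

def floater_cost(num_floaters: int) -> int:
    return int(num_floaters * STANDARD_INCENTIVE * FLOATER_PREMIUM)

def overrecruit_cost(extra_participants: int) -> int:
    # Extra participants receive the standard incentive even if dismissed.
    return extra_participants * STANDARD_INCENTIVE

# Covering 12 sessions: two floaters vs. three extra recruits.
print(floater_cost(2))      # → 500
print(overrecruit_cost(3))  # → 300
```

Under these assumed rates, overrecruiting is the cheaper option, which matches the trade-off described above; floaters buy you coverage of every session slot at a higher price.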

Use overrecruiting instead of floaters when the following conditions apply:

  • You want to save money on incentives.
  • You have time for an extra day of backup sessions.
  • You’re not paying for a research facility.
  • You don’t have people observing every session.
  • None of your participants would want to wait around as floaters.

Prepare for People Slipping Through Screening

Occasionally, despite your team’s best efforts, your recruiter will recruit a participant who is a poor fit for your user research. Reviewing the list of participants as they are recruited is helpful, but often, you won’t know a problem exists until you begin a research session. At that point, you have two choices:

  • You can complete the session anyway and, if necessary, eliminate the results from your findings later.
  • You can dismiss the participant and conduct the session with a floater or backup participant instead.

It can be awkward to reject a participant at the beginning of a research session, but it’s best to be honest about the situation. Thank these people for their time, and give them their incentive payment.

Decide Who Will Do the Recruiting

Someone has to do the difficult job of finding, contacting, screening, and scheduling participants. You can do it yourself, leave it up to your clients, or hire a recruiting company. There are advantages and disadvantages to each of these approaches.

Doing the Recruiting Yourself

Although doing the recruiting yourself gives you a lot of control, it can be very difficult and time consuming. If you regularly do your own recruiting, track participants in a database—both so you can use them in future studies and to ensure you don’t use the same people too often.

Recruiting visitors from your Web site can be an easier way of getting representative participants. People visiting your site are often the exact people you want to reach. Tools like Ethnio let you set up an online intercept that displays an invitation: “Would you like to participate in a study?” The intercept leads to a screening questionnaire. You can either contact volunteers immediately or add them to a database of participants, then contact them later when you need them. Online recruiting is far less expensive over time than using a recruiting company.

Do your own recruiting in the following circumstances:

  • You don’t have money to hire a recruiting company.
  • You have a lot of time.
  • You’re recruiting internal employees, and you have access to detailed lists of contacts such as an employee directory.
  • You have access to a list of potential participants such as a customer or membership list.
  • You can intercept users of your Web site and invite them to be participants.
  • You don’t mind doing cold calling or are particularly effective at recruiting prospective participants.

Having Your Client Do the Recruiting

Sometimes it’s easier for your client to recruit participants, especially if they can use their own employees or their customers. Employees and customers are more likely to accept an invitation to participate if it comes from someone they know—in this case, your client.

Provide your client with a participant profile, a session schedule, and a script detailing what they should tell participants about the study. Monitor the list of participants as your client recruits them to ensure they are the right fit.

Have your client do the recruiting in the following cases:

  • You need to recruit your client’s employees or customers.
  • Your client has detailed employee or customer contact lists.
  • Your client has the time to do the recruiting.
  • Your client has successfully done recruiting in the past.

Hiring a Recruiting Company

A good recruiting company can save you a lot of time and effort, often while providing you with the best participants. Just give the recruiting company a screener, and they’ll do the difficult work of calling, screening, and scheduling. It’s important to check on the participants as the recruiting company recruits them and let the recruiter know if any of the participants are unacceptable. To find a good recruiting company, ask other user researchers for referrals to the companies they trust.

Use a recruiting company when any of the following circumstances apply:

  • You have the money to hire a recruiting company—that is, approximately $100 to $150 per participant, in addition to participant incentives.
  • You don’t have time to do the recruiting yourself.
  • You’re not recruiting internal employees.
  • You’re not recruiting a client’s existing customers.
  • You can’t easily find potential participants to contact yourself.
  • You don’t want to have to cold call and screen potential participants.

When working with a recruiting company, give them feedback about particularly good or bad participants once you’ve completed your study. Most recruiters appreciate having this feedback on their service and knowing about the quality of their participants for future studies.

Conclusion

In summary, remember the following points:

  • Sequence your screener to get to the point quickly, asking the elimination questions first and avoiding questions that are strictly for information gathering.
  • Eliminate participants who have conflicts of interest or inappropriate computer and Web experience and those who are not very expressive.
  • Recruit predominantly based on participants’ attitudes and behavior rather than merely by demographics.
  • Screen out professional user research participants by asking open-ended questions, including multiple elimination answers, and avoiding giving away what you’re looking for. And test your screener with a colleague.
  • Minimize no-shows by locating and scheduling your research sessions conveniently for participants, providing proper incentives, confirming participants’ availability prior to your sessions, and recruiting backup participants.
  • Weigh the pros and cons of whether it is better to handle recruiting yourself, have your client do the recruiting, or use a recruiting company.

Recruiting the right participants for a study is a difficult task, but it is an essential part of effective user research. It’s well worth the extra time and effort to ensure you get representative participants who can provide useful qualitative feedback. 

Principal UX Researcher at AnswerLab

Philadelphia, Pennsylvania, USA

Jim Ross has spent most of the 21st Century researching and designing intuitive and satisfying user experiences. As a UX consultant, he has worked on Web sites, mobile apps, intranets, Web applications, software, and business applications for financial, pharmaceutical, medical, entertainment, retail, technology, and government clients. He has a Master of Science degree in Human-Computer Interaction from DePaul University.
