If you’ve ever struggled to find user-research participants, you may have wished you had a list of people who have expressed an interest in taking part in future user-research activities. A user-research panel is exactly that: a list or database of potential research participants—who have given you their contact details and maybe some other information about themselves—that you can recruit for specific research activities as they come up.
We’re enthusiastic about user-research panels, but we’re also realistic about the amount of work they involve. So, in this column, we’ll briefly touch on the benefits of user-research panels, then present seven questions you should consider to ensure that your user-research panels are successful.
The seven questions to ask about user-research panels are as follows:
Who will manage the panel?
When you recruit people to the panel, what do you promise them?
How representative are the panelists of some particular user population?
Once you use a panelist as a participant, does that limit the panelist’s future opportunities to participate?
How much data will you capture at the start versus along the way?
Should you link research data from other studies?
What technology will you use?
The Benefits of User-Research Panels
We’re enthusiastic about user-research panels because they offer two important benefits:
Reducing recruiting time—Because the people on your user-research panel have already said they are interested in participating in your research, the panel can greatly reduce recruiting time—and either cut out a possibly lengthy procurement process or allow procurement to happen separately, in advance of a specific user-research activity. Therefore, using a user-research panel fits well with agile ways of working.
Helping you to recruit people with specific skills, who belong to particular professions, or who use particular technologies—A user-research panel can also be a great way of finding people who are hard to reach for any reason—for example, people who have specific skills, are from particular professional groups, or use particular technologies. Instead of trying to find these specialist participants in a hurry, you can work on finding them ahead of time and add them to your panel, knowing they are willing for you to contact them when you need them.
1. Who will manage the panel?
User-research panels don’t manage themselves. As your panel grows, it will likely require a dedicated person to run and manage it—in addition to the people who actually design and run research activities. Someone has to do all the following things and more:
Recruit panel members.
Answer panelists’ questions.
Keep panelists engaged.
Control access to the panel.
Refresh the panel when panelists drop out.
Recruit people for specific research activities.
Secure the private data that panelists have provided.
Have you identified the person who will do all of this for your organization and ensured that person has the budget and time necessary to do the job?
2. When you recruit people to the panel, what do you promise them?
When you recruit a potential participant to your panel, you are creating an expectation that you will indeed contact them. You’re also probably promising them some sort of reward—whether it’s a non-financial benefit such as feeling good about contributing to something that matters to them, a straight financial incentive of some sort, or perhaps points that accrue each time they take part in a research activity, then get converted to an actual reward.
Here are some things to consider regarding the promises you make to panelists participating in research activities:
What happens if a panelist happens to be unavailable for an activity you’ve planned?
How often do panelists expect to take part in research activities?
What if a panelist is never a match for the types of people you need for specific research activities?
The following questions relate to incentives and other rewards:
Is it ethical to expect panelists to be volunteers when everyone else on your project is being paid?
If you offer an incentive, what should the incentive be? Should you offer an incentive simply for being on the panel or only when panelists take part in a research activity?
Which incentives should you tell panelists about?
If you don’t offer an incentive, are you skewing your panel toward people who can afford to spend their time participating in research activities without being paid?
Let’s also think a bit about panelists who will participate in research relating to their professional roles or who belong to a specialized, niche audience. Here are some examples that complicate the concept of a financial incentive:
Policies may bar government employees from accepting any reward at all in connection with their participation on a panel, even a cup of coffee.
Highly paid professional or technical users—for example, a doctor or a system administrator with expertise in a particular technology—might be willing to contribute their time to research activities relating to a product they really care about, but might feel insulted if you offered them a relatively trivial amount of money.
3. How representative are the panelists of some particular user population?
People who are willing to sign up for a panel might be different from the people who wouldn’t, in ways that could affect their participation in a research activity. Here are some examples:
Panelists may be more positive about your organization than people in general would be.
Their personal reason for joining your panel might be that they consider themselves advocates, with strongly held views on certain topics such as privacy or the rights of the group they represent. While you must respect and welcome their views, the strength of their opinions may make these panelists somewhat different from the full range of people who might use your product or service.
Panelists may have specialized knowledge about your product or organization that makes them more interested in participating, but that knowledge differs from a typical user's.
In the language of survey methodology, this problem is similar to non-response error, where the people who don’t respond differ from the people who do respond, in ways that affect the results of the survey.
Some things to consider about the representativeness of your panelists include the following:
Are the people who join your panel different from the people who don’t?
If they are different, will those differences affect your specific research activities? Or won’t they matter?
Is a user-research panel the best way to reach people who see themselves as advocates? Or would some other approach be more appropriate?
4. Once you use a panelist as a participant, does that limit the panelist’s future opportunities to participate?
How frequently would people typically encounter your product or service? If it supports activities that most people do only occasionally, consider whether you should limit the number of times a panelist can take part in your research—possibly to just a single research activity—ensuring that each panelist won’t become more familiar with the product or service than a typical user might be.
If you think back to matters relating to Question 1, which included refreshing the panel, you'll realize that limiting panelists' participation would add quite a bit of extra work for the person who manages the panel, as well as extra cost. Would the panel still be cost effective?
Perhaps you really don't want to limit panelists' participation; after all, it might seem harsh to expel them from the panel after they've taken part in just a single research activity. But if they take part in your research repeatedly, will that make them too familiar with the product or service?
In contrast, there’s another level of usage you should also consider. If most people who use your product or service spend large amounts of time using it, perhaps every day, how can you ensure that your panelists have enough experience to reflect that?
5. How much data will you capture at the start versus along the way?
The process of signing up for your panel is another key consideration. It’s tempting to approach this as an opportunity to construct a definitive profile of each panelist, asking every possible question that might help you to decide whether and when to invite a panelist to participate in a specific research activity. The problem is that asking every possible question is likely to overload potential participants, resulting in a higher drop-off rate during signup.
Another approach is to ask panelists more questions progressively. For example, you might decide to ask for additional details on a monthly cycle, changing the topic on which your questions focus each month. Perhaps you might ask about panelists' social interests, families, or the technologies they use. Or you might ask about something that's relevant to the decisions you'll be making during that particular month. Of course, when using this approach, it will take a lot longer to build up a complete profile of each panelist.
Here are some things to think about when signing up new panelists:
What is the least information you could request that would be useful and allow you to contact the panelist again?
Would panelists expect you to contact them regarding anything other than an invitation to participate in a user-research activity?
What sort of contact would panelists welcome? What questions are they interested in answering?
6. Should you link research data from other studies?
If you treat panelists as brand-new recruits each time they participate in a research activity, you risk asking them questions that they've already answered—questions they expect you to know they've answered. However, if you link panelists' data across studies, panelists will see those who are running a research activity as being up to date with everything they've previously told people in your organization.
For example, in a specific research activity, you might have asked a participant, “Do you have children?” If you don't link the data from that prior research, you might get this answer from the participant: “I told you last time that I’ve got three.” Or even worse, the participant may weep because, as she told your colleague last time, her child recently died.
If you link the data that you capture during each specific research activity, you may build up a considerable body of very private data.
Here are some questions to consider:
Who will look after that data?
What are the information-security requirements?
Who should have access to the data?
Can you fulfill promises of anonymity and confidentiality regarding specific research activities?
7. What technology will you use?
There is a broad spectrum of available technologies that you can use to manage user-research panels—ranging from a pen and a paper notebook, in which you write the contact details of people who are willing for you to contact them, to megabucks Customer Relationship Management (CRM) systems, to market-research panel solutions. (This sort of thing is always called a solution, isn’t it?)
One possibility would be to start with the simplest possible technology—perhaps that paper notebook—then work up to fancier technologies only once you’ve addressed some of the other issues—especially and maybe most importantly, the challenges relating to Question 1: Who will manage the panel?
In a larger organization, a paper notebook simply may not be appropriate. What would happen if it went missing or a colleague at another location needed to use it? So you may need to think about a more robust technology straight away, to confront and explore solutions to information-security and procurement issues from the start. The risk is that you might become overly focused on the technology and fail to put enough effort into the people processes around it.
Some questions to consider include the following:
What technology could you use to manage the panel?
If you must procure new technology, what is your budget and who will decide how to spend it? Who will set up the system, and who will maintain it?
If you plan to use some technology you’ve already got, what will be the impact on that technology? Is it suitable for the purpose?
In this column, we’ve taken a good look at seven questions that you should consider when setting up a user-research panel. At this point, you may be thinking, “This is too much to think about, so I’ll just keep using my current recruitment methods.”
Don’t be put off by the need to consider all these issues. Using a user-research panel can be a wonderful way of keeping in touch with a wide selection of potential research participants, recruiting hard-to-find people, and ensuring participants are available when you need them. This is certainly preferable to having to time your user-research activities around the demands of a recruiter.
Our aim for this column was to get you started putting together your user-research panel, while ensuring that the issues are out in the open right from the start. We hope this will help you to think through all the issues and create a successful panel that meets your needs.
Caroline became interested in forms when delivering OCR (Optical Character Recognition) systems to the UK Inland Revenue. The systems didn’t work very well, and it turned out that problems arose because people made mistakes when filling in forms. Since then, she’s developed a fascination with the challenge of making forms easy to fill in—a fascination that shows no signs of wearing off over 15 years later. These days, forms are usually part of information-rich Web sites, so Caroline now spends much of her time helping clients with content strategy on huge Web sites. Caroline is coauthor, with Gerry Gaffney, of Forms that Work: Designing Web Forms for Usability, the companion volume to Ginny Redish’s hugely popular book Letting Go of the Words: Writing Web Content That Works.
Naintara has over 18 years of experience as a user researcher and has spent the last five years working to make UK government services more user centered. In 2012, she set up the first program of iterative user research within an agile delivery cycle at the Government Digital Service (GDS). Later, she was the User Research Lead for GOV.UK, the government’s largest digital transformation project, then went on to lead the user-research community across GDS. Previously, Naintara had led the Usability Team at Orange and was a consultant at Flow Interactive, working with clients such as Royal Mail, BBC, Department for Education & Skills, Nokia, and Vodafone. She has a bachelor’s degree in Psychology and a Master’s in Research, specializing in Human-Computer Interaction, both from the University of Manchester.