To clarify, many tools label themselves as usability testing tools but don’t actually let you do usability testing with users through task elicitation. Some of these tools are nothing more than survey tools, Web analytics tools with new and improved visuals—such as CrazyEgg and clickdensity—or Web analytics tools that turn analytics data into videos of actual user sessions—such as Userfly, ClickTale, TeaLeaf, and Clixpy. All of these tools provide a wealth of data about your Web site’s users. However, such tools are not the focus of this article. Instead, this article focuses on unmoderated usability testing tools that actually simulate traditional usability testing by asking participants to complete a series of tasks using your user interface and answer questions about their experience.
What You Can Learn
Unmoderated usability testing lets you do test sessions with hundreds of people simultaneously, in their natural environment, which, in turn, provides quantitative and even some qualitative data. The exact metrics and feedback you can collect vary, depending on the tool you use. (I’ll provide a list of unmoderated usability testing tools later.) Most unmoderated testing tools can gather the following quantitative data:
- task-completion rate
- time on task
- time on page
- clickstream paths
- satisfaction ratings or opinion rankings
- Web analytics data—such as browser, operating system, and screen resolution
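To make these metrics concrete, here is a minimal sketch of how you might compute task-completion rate and time on task from exported session data. The record format and field names are assumptions for illustration, not any particular tool’s export schema.

```python
# Hypothetical session records, one per participant per task.
# Field names are illustrative; real tools export their own formats.
sessions = [
    {"task": "find-smartphones", "completed": True,  "seconds": 48},
    {"task": "find-smartphones", "completed": True,  "seconds": 95},
    {"task": "find-smartphones", "completed": False, "seconds": 210},
]

completed = [s for s in sessions if s["completed"]]

# Task-completion rate: the share of participants who finished the task.
completion_rate = len(completed) / len(sessions)

# Time on task: conventionally averaged over successful attempts only.
mean_time = sum(s["seconds"] for s in completed) / len(completed)

print(f"Completion rate: {completion_rate:.0%}")  # 67%
print(f"Mean time on task: {mean_time:.1f} s")    # 71.5 s
```

Averaging time on task over successful attempts only is a common convention, since failed attempts often end arbitrarily; whichever convention you choose, apply it consistently across tests.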
Most of these tools can also capture qualitative feedback as users complete their tasks—such as users’ suggestions and comments. This is where the true value of unmoderated usability testing can come into play.
Some unmoderated testing tools can recruit users for tests by intercepting them on your live Web site. This lets you collect invaluable data on participants’ true intent and motivation for visiting your Web site.
How Actionable Is the Data?
How actionable your data is depends heavily on the types of tasks you ask participants to perform. If you have participants perform scavenger-hunt tasks—asking them to find specific content on a Web site—you may miss out on important feedback. Just because someone was able to find the information you requested doesn’t mean they understood it. To elicit more valuable information, make finding tasks more meaningful by having participants answer a question about the information they find. For example: Using the Web site, please find out which smartphones are available on Verizon Wireless. Where can you purchase these phones locally?
The self-reported feedback and comments you get in response to open-ended questions can be the most valuable data you collect during an unmoderated test. Sometimes users’ direct quotations can be just as impactful as videos, especially when you start to see a consensus building among different participants.
Take satisfaction ratings and opinions with a grain of salt. Pay closer attention to what users actually do—not what they say they do. Participants can have a terrible experience using a user interface and still give it a high satisfaction rating. For this reason, I suggest asking participants open-ended questions about their experience rather than having them rate it.
Also, keep in mind that Web analytics alone cannot paint the full picture. Just because it took someone longer to complete a task doesn’t mean the task was harder to complete. They could simply have been more interested in the content. Without asking participants, you don’t really know for sure. Be careful not to make hasty assumptions based solely on the quantitative data you’ve collected.
Conducting Unmoderated Usability Tests
Creating and administering an unmoderated usability study is similar to the process of creating and administering an online survey, but with the additional steps of a traditional usability study, as follows:
- Define the study. Decide what tasks you are going to ask participants to perform, the order of the tasks, and what follow-up questions you want to ask them about their experience. Unfortunately, since you are not observing the tests, you can’t ask probing or follow-up questions on the fly, depending on what participants do. However, some unmoderated usability testing tools let you structure tests to ask probing questions after users perform specific interactions with a user interface.
- Recruit participants. You can choose to do the recruiting yourself or hire a recruiter. As I mentioned earlier, some unmoderated testing tools offer you the options of either intercepting users on your live Web site or recruiting them from the tool developers’ own panels of participants—which are pools of test participants they’ve recruited in advance. You should be careful when choosing participants from such panels as your representative users. Who participates in your test is just as important for an unmoderated usability test as it is for a moderated test. Your team will base important design decisions on the data you obtain, so participants should be real or prospective users of a product.
- Launch your test and send email invitations. Typically, an unmoderated test should be only 15–30 minutes in duration—comprising approximately 3–5 tasks—because the dropout rate tends to increase if a test takes longer.
- Analyze your results. Most unmoderated testing tools offer live, real-time reporting during tests.
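The study definition from the first step above can be captured as simple structured data before you enter it into a tool. This is a hypothetical sketch; the field names are my own, not any tool’s schema. Note how each finding task pairs with a follow-up question, per the earlier advice on making tasks meaningful.

```python
# Hypothetical study definition; no real tool's schema is implied.
study = {
    "title": "Smartphone shopping on Verizon Wireless site",
    "max_minutes": 30,  # keep tests to 15-30 minutes to limit dropout
    "tasks": [          # roughly 3-5 tasks, presented in order
        {
            "prompt": "Find out which smartphones are available "
                      "on Verizon Wireless.",
            "follow_up": "Where can you purchase these phones locally?",
        },
        {
            "prompt": "Find the return policy for online orders.",
            "follow_up": "In your own words, what does the policy say?",
        },
    ],
}

# A quick sanity check before launch: every task needs a follow-up
# question, since you won't be there to probe in person.
assert all(t["follow_up"] for t in study["tasks"])
```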
Benefits and Drawbacks
Before choosing to conduct unmoderated usability tests, it’s best to take a look at their benefits and drawbacks in comparison to traditional moderated testing.
Benefits of unmoderated usability testing include the following:
- You can test hundreds of people simultaneously—while keeping them in their own natural environment.
- You can test multiple Web sites simultaneously—for example, competitor Web sites, different brands, or Web sites for different countries.
- You can test at a reduced cost—depending on the tool you use. There are definitely unmoderated usability testing tools that have ridiculously high prices, but some recent tools are very affordable, which can make unmoderated usability testing a less expensive option. (See my list of unmoderated usability testing tools.) Also, the participant honorariums for unmoderated tests are typically a lot lower.
- Doing unmoderated usability testing is a great way to plant the seed of UCD methodologies and introduce usability testing into a company on limited resources and budget—assuming you can use one of the less expensive testing tools.
- There are fewer logistics to manage, with no need to set up testing schedules, set up and moderate individual test sessions, or worry about no-shows and getting last-minute replacements.
Drawbacks of unmoderated usability testing include the following:
- Nothing beats watching participants in real time and being able to ask probing questions about what they are doing as it’s happening—and you’ll miss out on this opportunity.
- Some participants may be interested only in earning the honorarium you’ve provided as an incentive. So, rather than taking the time to really perform each task and provide feedback, they’ll just click through the tasks without much thought. Luckily, you can filter such participants out of your findings by looking at their time on task or open-ended feedback. Depending on the capabilities of your chosen testing tool, this task can be either time consuming or quite painless.
- You cannot conduct interview-based tasks, in which you derive tasks from a participant’s own goals for visiting a site. Participants who are passionate about the tasks they are performing interact with a user interface differently from those who are just doing what they are told.
- Web analytics can mislead you by giving a wrong impression of a user’s experience. Also, what participants report on surveys can differ greatly from what they actually do. You can’t rely solely on rankings and satisfaction ratings to create an accurate picture of what your users actually need and want. Therefore, you should always include qualitative research questions in your unmoderated studies and analyze the self-reported feedback. If necessary, follow up with participants after a study to discuss their feedback.
- It’s possible for participants to think they’ve successfully completed a task when they haven’t. To move on to the next task, participants must be able to decide whether they’ve completed their current task. For this reason, you need to develop straightforward tasks that have well-defined end states.
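As a rough illustration of the participant-filtering step described in the drawbacks above, the following sketch flags likely click-through participants by combining time on task with open-ended feedback. The threshold and field names are assumptions you would tune for your own study and tool.

```python
# Hypothetical participant records; field names are illustrative.
participants = [
    {"id": 1, "seconds_on_task": 65, "comment": "Navigation labels confused me."},
    {"id": 2, "seconds_on_task": 4,  "comment": ""},  # likely clicked straight through
    {"id": 3, "seconds_on_task": 91, "comment": "Found it via search."},
]

MIN_SECONDS = 10  # assumed cutoff; tune per task complexity

def looks_genuine(p):
    """Keep participants who spent a plausible amount of time
    on the task or left substantive open-ended feedback."""
    return p["seconds_on_task"] >= MIN_SECONDS or len(p["comment"].strip()) > 0

valid = [p for p in participants if looks_genuine(p)]
print([p["id"] for p in valid])  # [1, 3]
```

Reviewing the excluded records by hand before discarding them is worthwhile: a very fast completion with a thoughtful comment may still be a genuine session.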