Capturing Meaningful and Significant User Experience Metrics

May 23, 2011

How many times have you wondered how you can collect meaningful and significant metrics to validate your research? Many researchers struggle with this dilemma on a daily basis. For example, how can we know the magnitude of the issues we detect in a traditional usability lab study? Surprisingly, there are many ways to capture useful UX metrics if you know what solutions to use and how to use them.


What Solutions Are Available?

Some of the available solutions include

  • survey tools
  • Voice of the Customer (VOC) solutions
  • online usability testing tools

Survey Tools

The first and most obvious means of capturing such metrics is a survey tool, which lets you capture data across large numbers of participants, using a question format. Surveys employ various types of questions, including multiple choice, single choice, Likert scales, semantic differential scales, open-ended comments, and Net Promoter Scores (NPS). Surveys are valuable for quickly gathering large amounts of data to understand customers and their behaviors, validate ideas, and gather general or specific feedback. There are many survey tools available on the market.
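The Net Promoter Score mentioned above has a simple arithmetic definition: the percentage of promoters, those giving ratings of 9 or 10 on a 0–10 likelihood-to-recommend scale, minus the percentage of detractors, those giving ratings of 0–6. Here is a minimal sketch in Python; the sample ratings are hypothetical:

```python
# Sketch: computing a Net Promoter Score (NPS) from 0-10 survey responses.
# The ratings list below is hypothetical sample data.

def net_promoter_score(ratings):
    """NPS = % promoters (9-10) minus % detractors (0-6); 7-8 are passives."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

ratings = [10, 9, 9, 8, 7, 6, 10, 5, 9, 3]
print(net_promoter_score(ratings))  # 5 promoters, 3 detractors of 10 -> 20.0
```

Note that passives (ratings of 7 or 8) count toward the total but toward neither group, so NPS can range from -100 to +100.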

Voice of the Customer Solutions

Voice of the Customer solutions, which are in wide use, let you collect feedback while customers are visiting your Web site. An invitation layer on your site can ask customers to answer a few questions once they have accomplished whatever prompted their visit. Alternatively, you can provide a Feedback tab or icon that customers can click whenever they would like to leave feedback about their experiences on your site.

Online Usability Testing Tools

User researchers and marketers employ tools for online usability studies, also known as unmoderated, remote usability testing solutions. These powerful tools enable participants to follow task-based scenarios, letting researchers collect both behavioral and quantitative data through clickstreams. They also let researchers conduct surveys and Voice of the Customer studies to gather significant metrics. There are various remote usability testing solutions available on the market. Kyle Soucy reviewed a number of them in her UXmatters article “Unmoderated, Remote Usability Testing: Good or Evil?”

UX Metrics

What type of UX metrics can—and should—you collect?

  • effectiveness ratios—You can easily gather effectiveness ratios such as success, error, abandonment, and timeout ratios through your task-based studies. These are excellent metrics for helping you to understand where you need to make improvements to your Web site.
  • efficiency ratios—Mean clicks per task, mean time per task, and unique page views per task are efficiency ratios that let you gauge how easy or complex various activities on your Web site are.
  • top box and bottom box metrics—You can gather these metrics using either 5-point or 7-point Likert scales. These metrics let you determine what percentage of your participants agree or disagree with statements such as the following: The site was easy to navigate. I liked the look and feel of the site. I would likely return to this Web site again.
  • learning curve, or learnability, metrics—Through testing that involves repetitive tasks, you can determine how many repetitions it takes participants to learn to accomplish a task successfully or efficiently. You can obtain this information from metrics your remote usability testing tool has captured: decreasing time on task and clicks per task, along with increasing success ratios, indicate improved usability through learning.
  • frequency by category or iteration—Through iterative design and usability testing, you can evaluate the overall number of issues or number of unique issues to determine whether they are decreasing over time. In addition, you can capture the frequency with which participants encounter a specific issue, which can help you determine the magnitude of the issue.
  • expectation metrics—Using pre-test and post-test surveys comprising Likert scales, you can compare the perceived difficulty of a task versus its true difficulty.
  • filtering—Filtering lets you easily determine whether metrics differ across various user profiles. When using satisfaction ratings, Net Promoter Scores, or multiple-choice or single-choice survey questions, cross-tabulate the data by effectiveness ratios such as success, error, or abandonment ratios. Comparing the resulting percentages can give you great insights into how participants who represent different user profiles perceive your Web site. You can find out whether participants are satisfied with your site, and how successful participants compare to those who experienced errors or abandoned tasks, in terms of their satisfaction, their likelihood of promoting your site, or their perceptions of it. For example, did they find your site cluttered or visually unappealing?
  • brand-perception metrics—Collect participants’ pre-test and post-test brand perceptions and compare them to determine how using your site affects your customers’ opinion of your brand in positive or negative ways. You can accomplish this by asking participants to select brand adjectives from a list of positive and negative adjectives, then assessing what percentage of positive and negative adjectives they choose.
  • behavioral metrics—Record participants’ navigation paths to see how successful paths or paths that deviate affect satisfaction, abandonment rates, error ratios, time on task, and participants’ likelihood of promoting or returning to your Web site.
  • Net Promoter Scores—Determine how likely customers are to recommend your site to a friend or colleague.
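To make the arithmetic behind a few of these metrics concrete, here is a minimal Python sketch that computes effectiveness ratios, efficiency ratios, and top-box/bottom-box percentages from per-participant task results. The records and field names are hypothetical; real testing tools export similar per-task data:

```python
# Sketch: deriving effectiveness, efficiency, and top-box / bottom-box
# metrics from per-participant task results. The task_results records
# below are hypothetical sample data.

task_results = [
    # (outcome, seconds on task, clicks, ease rating on a 5-point Likert scale)
    ("success", 42, 6, 5),
    ("success", 58, 9, 4),
    ("error",   75, 14, 2),
    ("success", 50, 7, 4),
    ("abandon", 30, 3, 1),
]

n = len(task_results)

# Effectiveness ratios: the share of participants with each task outcome.
success_ratio = sum(1 for r in task_results if r[0] == "success") / n
error_ratio   = sum(1 for r in task_results if r[0] == "error") / n
abandon_ratio = sum(1 for r in task_results if r[0] == "abandon") / n

# Efficiency ratios: mean time per task and mean clicks per task.
mean_time   = sum(r[1] for r in task_results) / n
mean_clicks = sum(r[2] for r in task_results) / n

# Top box / bottom box on a 5-point scale: the share of participants
# giving the highest (5) or lowest (1) rating.
top_box    = sum(1 for r in task_results if r[3] == 5) / n
bottom_box = sum(1 for r in task_results if r[3] == 1) / n

print(f"success {success_ratio:.0%}, mean time {mean_time:.0f}s, "
      f"top box {top_box:.0%}")  # success 60%, mean time 51s, top box 20%
```

For filtering, you would apply the same calculations to subsets of the records, segmented by user profile, and compare the resulting percentages side by side.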

Combining UX Metrics from Online Research Studies and Lab Studies

Combining online methods of research with lab studies allows you to gain a complete understanding of your Web site’s user experience. By first gathering the issues that are low-hanging fruit in a lab study, you can iteratively develop new and improved designs. Next, evaluate those designs in a more robust testing situation such as an online research study. You can collect metrics such as success rates, error rates, user satisfaction, and time on task to validate the new designs.

Alternatively, you can start with an online research study to detect critical issues and determine their magnitude, then conduct a traditional lab usability study to explore those issues in depth with participants.

Traditional lab research allows you to ask participants probing questions to help you detect and understand issues; online research lets you round out your data by quantifying those issues.

Collecting metrics is easy—so definitely not beyond the scope of what you can accomplish—if you know the right tools and solutions to use. Using remote, unmoderated usability research solutions on the Web, you can satisfy your research needs by collecting the metrics I’ve described here and many more. A research solution that supports task-based, true-intent, and Voice of the Customer studies, as well as surveys, empowers you, as a user researcher, to meet business goals and make effective decisions.

VP of Global Customer Success at Whisbi

San Francisco Bay Area, California, USA

Kim Oslob

Kim has worked in the field of user experience research for over 10 years and has extensive experience with both qualitative and quantitative UX research for Web sites, software, and mobile devices and applications. She is skilled at developing and managing usability research plans from concept through rollout. Since joining UserZoom four years ago, Kim has worked closely with high-profile clients, helping them to design and implement top-notch UX research and ensuring their success using UserZoom. She has also helped define UserZoom product strategy.
