
Choosing the Right Metrics for User Experience

June 2, 2014

Metrics are the signals that show whether your UX strategy is working. Using metrics is key to tracking changes over time, benchmarking against iterations of your own site or application or those of competitors, and setting targets.

Although most organizations are tracking metrics like conversion rate or engagement time, often they do not tie these metrics back to design decisions. The reason? Their metrics are too high level. A change in your conversion rate could relate to a design change, a promotion, or something that a competitor has done. Time on site could mean anything.


UX strategists need to take charge of the metrics for online experiences. First, we’ll look at the current state of metrics in most organizations and some of the problems in defining metrics for user experience. Then, we’ll focus on three key types of metrics for user experience, how to track them, and how to integrate them into an organization’s measurement framework.

The Signal Problem

There is so much data available on sites and applications that it seems as though amazing insights would be sure to surface, yet that does not happen without smart decisions. The data that is available from off-the-shelf analytics, A/B tests, and even follow-up surveys does not always result in insights that inform the user experience.

  • The signals that are easiest to track don’t show what is really important. Pageviews, for example, are easy to collect, but do not tell you much about the experience people have actually had on your site or when using your application. Your organization’s goal is probably not to make sure that every user views a certain number of pages. So, while pageviews may be a great metric for ads, they are not a good way to track or encourage engagement.
  • Signals can be ambiguous. Think about time on site, which people often equate with engagement. More time could be positive, but it could also be negative—time that the user spent feeling confused, distracted, or frustrated. Even if you track engagement by looking at time spent on your site and pages per visit together, it’s still not clear how that equates to engagement.
  • The signals don’t always map to design. Maybe after the launch of a new feature, traffic starts going up. The product team might think this is because of the new feature, sales may link the increase in traffic to a new promotion, and the UX team might assume that it’s connected with their new design. But the increased traffic might not relate to any of these potential causes. A/B tests do connect data with design, but the data is granular. A metric like the number of clicks for image A versus image B can help you to make tactical decisions regarding how to implement a user-interface element, but it does not work well for bigger design decisions.
  • There may be too many signals to act on. A lot of metrics get reported simply because they are flowing in from analytics tools that can track hundreds of metrics and are endlessly customizable. It is tempting to measure everything and hope that insights will emerge on their own, but usually, they won’t.
  • The right signals might not get captured at all. Companies usually track metrics post launch. Metrics that you could use to inform design need to be captured while you’re developing new ideas, but often those metrics are neither quantified nor tracked.

So, it’s difficult to find the right signals for user experience amidst all the noise in your data. Further complicating matters, the metrics that could help you to understand how you’re doing against your UX objectives—the key performance indicators (KPIs)—are often identified and tracked elsewhere in the organization, without involving the UX team.

The Early, Modern History of UX Metrics

Most metrics are marketing oriented, not experience oriented. Unique visitors can tell you whether your marketing campaign worked and social mentions can tell you whether you’ve got a great headline, but these metrics do not reveal much about the experience people have had using a site or application.

Table 1 does not provide an exhaustive list of metrics, but it illustrates some of the differences between what marketing may be tracking and the types of metrics UX teams currently track.

Table 1—Comparison of marketing and usability metrics

| Marketing Metrics | User Experience Metrics |
| --- | --- |
| Conversion rate (site, campaign, social) | Task success rate |
| Cost per conversion (CPC) | Perceived success |
| Visits to purchase | Time on task |
| Share of search | Use of search or navigation |
| Net Promoter Score (NPS) | Ease of use rating, or SUS |
| Pageviews | Data entry |
| Bounce rate | Error rate |
| Clicks | Back-button usage |

So far, user experience metrics have focused on ease of use. Usability is familiar territory—and something that UX teams do well—so this makes sense as a starting point. The most commonly used metrics are performance measures such as time on task, success rate, or user errors. These are objective measurements that record what people actually do—though success rate can have an element of subjectivity, depending on how it’s measured.

Subjective measures like the System Usability Scale (SUS) or simple satisfaction or ease-of-use rating scales gauge how people perceive an experience after using it. Sometimes UX teams capture both objective and subjective metrics to get a better picture of a site’s or application’s usability.

All of the UX metrics that I’ve listed in Table 1 reflect what UX teams would typically capture from a usability study rather than from analytics. This is another big difference between marketing metrics and user experience metrics. However, many organizations either do not quantify study data at all or track it inconsistently, usually because studies relate to a particular problem or are just exploratory and the study size is small. So they typically capture usability metrics like these only once in a while, if at all.

User experience is about more than just ease of use, of course. It is about motivations, attitudes, expectations, behavioral patterns, and constraints. It is about the types of interactions people have, how they feel about an experience, and what actions they expect to take. User experience also encompasses more than just the few moments of a single site visit or one-time use of an application; it is about the cross-channel user journey, too. This is new territory for UX metrics.

The (U)X Factor

Even though most metrics measure what people have done or said, they seem a little abstract. Marketing metrics focus on customer acquisition, so their emphasis is on getting attention and closing the deal. The language of conversion funnels and landing-page optimization plays down the human factor. Plus, these metrics miss all of the messy, in-between stuff that makes up the user’s actual experience with a site or application. Hesitation over a navigation category, using search because it’s too hard to deal with the site navigation, frustration over losing one’s place in the feed—all of these sorts of details are critical to understanding the user experience. This is also where UX teams excel.

The goal of UX metrics should be to bring the people—and maybe a little of the messiness of the actual experience—back into the mix. Contexts and connections are the missing links.

  • contexts—UX teams can already supply context for the numbers from analytics and show what has happened by filling in the how and the why through UX research and creating personas and journey maps. If you start tracking things like the relationships between interactions and the different goals and behaviors that are associated with users who start from different channels, you can also provide metrics on these contexts.
  • connections—In addition to filling in the blanks between what has happened and why, UX metrics can bridge the gap between the insights that emerged during the development process and what you track once a site has launched. More organizations are moving toward connecting and sharing data from quantitative and qualitative studies, customer service, and site analytics. But without metrics, it’s difficult to understand changes over time or to assign priorities to issues.

UX professionals can learn some things from marketing metrics, of course. Marketing teams have spent more time aligning KPIs with data collection and metrics. Assigning value to metrics is a given for marketing, but atypical for UX metrics—at least for now.

Three UX Metrics to Track

Focusing on high-level business value may not translate to a better user experience. But UX metrics can complement metrics that companies track using analytics—such as engagement time or bounce rate—by focusing on the key aspects of a user experience.

Table 2 lists three categories of big-picture UX metrics that correlate with the success of a user experience: usability, engagement, and conversion. While you may decide that just one metric in each category is key, you can combine several metrics in a category into a compound score that you can then track through your studies. The metrics shown in Table 2 help us to get a more nuanced understanding that provides a basis for making improvements to a user experience.

Table 2—Examples of three categories of UX metrics

| Usability | Engagement | Conversion |
| --- | --- | --- |
| Time on task | Attention minutes | Micro-conversion count |
| Task success | Happiness rating | Brand attribute |
| Perceived success | Flow state | Conversion rate |
| Confusion moment | Total time reading | Likelihood to recommend, or NPS |
| Cue recognition | First impression | Trust rating |
| Menu/navigation use | Categories explored | Likelihood to take action |

Usability

Usability metrics focus on how easily people can accomplish what they’ve set out to do. This category of metrics includes all of the usability metrics that some UX teams are already tracking—such as time on task, task success rate, and an ease-of-use rating. It may also include more granular metrics such as icon recognition or searching versus navigating. Plus, it could include interaction patterns or event streams that show confusion, frustration, or hesitation.
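As a sketch of how several usability metrics might be rolled into the kind of compound score mentioned earlier, consider this minimal Python example. The weights, the 300-second cap on task time, and the 1-7 ease-of-use scale are all my own illustrative assumptions, not values from this article:

```python
def usability_score(time_on_task_s, task_success_rate, ease_rating):
    """Roll three usability metrics into one 0-100 compound score.

    Assumptions (illustrative, not from the article):
    - time_on_task_s: median seconds per task; capped at 300 s, lower is better
    - task_success_rate: fraction of tasks completed, 0.0-1.0
    - ease_rating: mean ease-of-use rating on a 1-7 scale
    - weights: 30% speed, 40% success, 30% perceived ease
    """
    speed = 1.0 - min(time_on_task_s, 300) / 300   # invert: faster is higher
    ease = (ease_rating - 1) / 6                   # rescale 1-7 to 0-1
    score = 0.3 * speed + 0.4 * task_success_rate + 0.3 * ease
    return round(100 * score, 1)

print(usability_score(time_on_task_s=90, task_success_rate=0.85, ease_rating=5.5))  # 77.5
```

Capturing the same inputs in every study makes this score comparable across iterations of a design, even when each individual study is small.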

Engagement

Engagement is the holy grail for many sites and is a notoriously ambiguous category of metrics. But UX teams could make a real contribution to understanding how much people interact with a site or application, how much attention they give to it, how much time they spend in a flow state, and how good they feel about it. Time might still be a factor in engagement metrics, but in combination with other metrics like pageviews, scrolling at certain intervals, or an event stream. Because engagement metrics are tricky to read, they yield better results in combination with qualitative insights.
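One common event-stream approach to attention time is to count only the seconds between consecutive interaction events and discard long idle gaps. Here is a minimal sketch; the 30-second idle threshold and the flat list of timestamps are assumptions for illustration:

```python
IDLE_TIMEOUT_S = 30  # assumed threshold; longer gaps count as inattention

def attention_seconds(event_timestamps):
    """Estimate attention time from a sorted list of interaction
    timestamps (seconds since page load): scrolls, clicks, key presses.
    Only gaps no longer than IDLE_TIMEOUT_S between consecutive events
    count as attention."""
    total = 0
    for prev, curr in zip(event_timestamps, event_timestamps[1:]):
        if curr - prev <= IDLE_TIMEOUT_S:
            total += curr - prev
    return total

# Gaps of 5 s and 7 s count; the 108 s gap is idle; the final 5 s counts.
print(attention_seconds([0, 5, 12, 120, 125]))  # 17
```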

Conversion, or Likelihood to Convert

Conversion is the metric that everyone cares about most, but its use can mean focusing on a small percentage of users who are ready to commit at the expense of other people who are just becoming aware of your site or thinking about increasing their engagement with it. You can use UX metrics to design solutions for these secondary scenarios, too—for example, by looking at users’ likelihood of taking action on micro-conversions, in addition to considering conversion rate and Net Promoter Score (NPS).
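Likelihood to recommend is typically rolled up as a Net Promoter Score: the percentage of promoters (ratings of 9-10) minus the percentage of detractors (ratings of 0-6). A quick sketch of that standard calculation:

```python
def nps(ratings):
    """Net Promoter Score from 0-10 likelihood-to-recommend ratings:
    % promoters (9-10) minus % detractors (0-6), rounded to an integer."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

print(nps([10, 9, 8, 7, 6, 3]))  # 2 promoters, 2 detractors of 6 -> 0
```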

The metrics in this category can help us to spot trends and get past the So what? question that applies to all data. The big metrics give us the big picture, showing how a site or application changes over time and how it lives in the world or the broader context of other experiences.

More Meaningful Metrics

Organizations across the board are seeking more meaningful metrics that go beyond the defaults that off-the-shelf analytics provide. This is a good thing for user experience because, while what UX teams need to track is more complicated, it ultimately provides more long-term value.

The newest developments in metrics are leveraging more complex signals. Rather than relying on single signals—for example, using the number of pageviews as a proxy for engagement or the number of clicks for a landing-page call to action to determine conversion—the trend is now toward using metrics that draw from multiple signals or event streams.

  • Multi-signal metrics look across data types or channels—for example, combining social sentiment with interactions. A multi-signal metric lets Modcloth identify and understand underserved audiences through a combination of Net Promoter Score, product reviews, and social-media posts. Trust factors are another possibility here, combining recognition of credibility cues with a trust rating or a likelihood-to-recommend metric.
  • Event-stream metrics follow interactions in time—for example, the length of time a browser tab has been open, how long a video player has been running, and the movement of the mouse on a user’s screen. Medium looks at scroll positions in the event stream to track total time reading, its KPI. Attention time, signaled by a sequence of events, is another metric that gives greater insight into engagement. You could track a user’s flow state using a similar technique.

The right signals for user experience are usually a combination of interactions and perceptions. Interactions are what people actually do, including clicking, scrolling, and filling out a form. Perceptions are what people think about a user experience and how they feel about it. And if you track interactions and perceptions together, they become more meaningful.

Event streams, where a certain sequence of interactions is meaningful, share much in common with the types of qualitative research that UX teams do so well. In both online and lab studies, a behavior that I see all the time when people encounter a new site is their quickly scrolling down, then up again. You can track this as a first impression metric. Another metric that is based on an event stream is a confusion moment, when someone tries a quick sequence of interactions, then bounces.
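Patterns like these can be detected in a logged event stream. In the sketch below, the event names, time windows, and interaction thresholds are all illustrative assumptions rather than conventions of any particular analytics tool:

```python
def detect_first_impression(events, window_s=3.0):
    """Flag the quick scroll-down-then-up pattern people often show when
    they first land on a page. `events` is a list of (timestamp_s, name)
    tuples in time order; names and window are assumed for illustration."""
    for (t1, name1), (t2, name2) in zip(events, events[1:]):
        if name1 == "scroll_down" and name2 == "scroll_up" and t2 - t1 <= window_s:
            return True
    return False

def detect_confusion_moment(events, window_s=5.0, min_interactions=4):
    """Flag a rapid burst of interactions that ends in leaving the page,
    a possible confusion moment. Thresholds are illustrative."""
    if not events or events[-1][1] != "exit":
        return False
    exit_time = events[-1][0]
    burst = [t for t, _ in events[:-1] if exit_time - t <= window_s]
    return len(burst) >= min_interactions

session = [(0.0, "scroll_down"), (1.2, "scroll_up"), (3.0, "click")]
print(detect_first_impression(session))  # True
```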

Another potential input for UX metrics is aggregated data from qualitative studies. Rather than running a study expressly for the purpose of gathering quantitative data, the goal is instead to quantify the existing data from prior studies. You can do this by making a decision to capture certain metrics such as success rate, time on task, or trust rating in every study and also tallying the findings relating to features or design patterns from your data logs—for example, X% clicked a carousel item or Y% filtered search results. Then you can map these metrics to the big-picture metrics: usability, engagement, and conversion.
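Quantifying findings from data logs can be as simple as tallying feature use per participant across prior studies. The study data and feature names below are invented for illustration:

```python
from collections import Counter

# One row per participant, listing the features they used during a
# session; the data and feature names are invented for illustration.
logs = [
    ["carousel", "search_filter"],
    ["carousel"],
    ["search_filter"],
    ["carousel", "search_filter"],
]

def feature_usage(logs):
    """Share of participants (as a percentage) who used each feature,
    so small-study findings can be tracked consistently over time."""
    counts = Counter(f for participant in logs for f in set(participant))
    return {f: round(100 * n / len(logs)) for f, n in counts.items()}

print(sorted(feature_usage(logs).items()))  # [('carousel', 75), ('search_filter', 75)]
```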

UX Metrics in the Organization

It’s not enough just to choose metrics that provide actionable insights and help us to improve the user experience. For UX metrics to have an impact, you should consider them in conjunction with an organization’s other metrics.

The three big-picture metrics also map easily to the KPIs that most organizations already track. For example, all organizations track engagement and conversion, so tracking UX-focused metrics adds context and depth that the analytics or survey numbers lack. While many organizations do not have metrics for usability, it’s often on their radar as a gap that they need to fill.

Usability, engagement, and conversion metrics fit very well into some commonly used measurement frameworks. Forrester’s CX measurement framework groups metrics into three categories:

  • descriptive metrics that tell what happened
  • perception metrics that focus on how customers perceived what happened
  • outcome metrics that describe what customers did or expect to do based on their perceptions

These categories of metrics essentially match up with the three categories of UX metrics: usability, engagement, and conversion.

Another popular framework, Avinash Kaushik’s See, Think, Do Framework, groups metrics by awareness, consideration, and action. However, the three categories of UX metrics do not fit neatly with this framework, so this is where tracking metrics in each category becomes important.

The Future of UX Metrics

Measuring UX metrics is great—if you are measuring the right thing: the thing that is going to change the relationship your organization has with its customers. Developing better metrics for understanding and managing the user experience will help organizations to focus on the user first. 

Pamela Pavliscak

Founder of Change Sciences

New York, New York, USA

Pamela is founder of Change Sciences, a UX research and strategy firm for Fortune 500s, startups, and other smart companies. She’s got credentials—an MS in Information Science from the University of Michigan—and has worked with lots of big brands, including Ally, Corcoran, Digitas, eMusic, NBC Universal, McGarry Bowen, PNC, Prudential, VEVO, Verizon, and Wiley. Plus, Pamela has UX street cred: She’s logged thousands of hours in the field, trying to better understand how people use technology, and has run hundreds of UX studies on almost every type of site or application you could imagine. When she’s not talking to strangers about their experiences online or sifting through messy data looking for patterns, she’s busy writing and speaking about how to create better user experiences using data of all shapes and sizes.
