UX Analytics, Part II: Getting a Quick Win

May 10, 2011

In my first article in this series, “UX Analytics, Part I: A Call to Action,” I discussed the benefits of synthesizing qualitative user experience research insights with quantitative Web analytics data to generate deeper insights into a Web site’s user experience. I finished that article with a list of steps you could take toward building the partnerships and skill sets you’d need to quantify the impact of usability or user experience issues visitors might encounter on a Web site. Such issues will generally come to light as you start planning your first UX analysis win.

First, consider the most critical action you want your customers to accomplish on your site: what is your primary conversion? For an ecommerce site, the purchase that a thank-you confirmation represents is commonly the key conversion. From there, work backward to determine the key steps a user takes to reach that conversion point. In checkout, these steps might be, in reverse order, order confirmation, order review, shipping/billing/payment information, and adding a product to the shopping cart.


It’s likely that not all customers would take all of these steps to convert, so determine the key steps that all users must take—that is, shopping cart, order review, and order confirmation—which represent your site’s core business process. Next, consider the subflows that might exist within that process, depending on whether a user has an existing account.
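As a concrete way to think about this, here is a minimal sketch, in Python, of how you might represent such a funnel. The step and subflow names are hypothetical examples, not a prescription:

# A hypothetical checkout funnel: the core steps every converting
# visitor must pass through, in order.
CORE_FUNNEL = [
    "shopping_cart",
    "order_review",
    "order_confirmation",
]

# Subflows only some visitors take, keyed by a hypothetical segment name.
SUBFLOWS = {
    "returning_customer": ["sign_in"],
    "new_customer": ["create_account"],
    "guest": [],  # guest checkout skips account steps entirely
}

def funnel_for(segment):
    """Insert a segment's subflow steps between cart and order review."""
    cart, *rest = CORE_FUNNEL
    return [cart, *SUBFLOWS[segment], *rest]

print(funnel_for("returning_customer"))
# ['shopping_cart', 'sign_in', 'order_review', 'order_confirmation']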

Analysis: Start Broad

Next, meet with your Web analyst partner for an hour to collaboratively examine the abandonment rates for your various flows. Ask that person to first build a fallout, or abandonment, funnel report for your key business process, comprising its essential steps over perhaps the past quarter, assuming there were no significant enhancements or redesigns of those pages during that timeframe. Recognize that this data means very little right now, because you have no context or reference point from which to derive meaning; and while the raw numbers mean little, the percentages are more interesting. (The sketch following this list of questions shows how such a report boils down to simple arithmetic.) Sit back and take it in for a few moments. Think about these questions:

  • Does the abandonment rate from the first to the last step surprise or concern you? If so, why?
  • Does the abandonment rate after any given step surprise or concern you? Why?
  • Do you think the data would surprise or concern business stakeholders? Why?
  • Based on your usability assessment of this business process, what are your preliminary hypotheses around this data?
  • What factors outside the user experience might account for the results?
    • Were any enhancements or bug fixes deployed to those pages during that timeframe?
    • Were there any marketing efforts during that timeframe that directed customers into the business process? (This is very unlikely for a checkout flow.)
    • Were there any unique seasonal issues—for example, seasonal customer buying patterns—during that timeframe that might account for unique fluctuations? (This is also fairly unlikely for a checkout flow.)
  • Where are customers going when they abandon at a given step?
    • Are they going to a Help or Contact Us page? Does that matter?
    • Are they exiting the site?
    • Are they backing up within the flow?
    • Are they returning to another process—for example, browsing products when they had already been in checkout?
  • What does your partner think?
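To make the arithmetic behind a fallout report concrete, here is a minimal sketch, assuming you can export the count of visitors reaching each step from your analytics tool. The step names and counts are hypothetical:

# Hypothetical visitor counts at each step of the core checkout
# funnel, exported from a Web analytics tool for one quarter.
step_counts = [
    ("shopping_cart", 10_000),
    ("order_review", 6_500),
    ("order_confirmation", 4_200),
]

# Abandonment after each step: the share of visitors who reached a
# step but never reached the next one.
for (step, count), (_, next_count) in zip(step_counts, step_counts[1:]):
    print(f"{step}: {count} visitors, {1 - next_count / count:.1%} abandon")

# Overall abandonment from the first step to the last.
overall = 1 - step_counts[-1][1] / step_counts[0][1]
print(f"Overall abandonment: {overall:.1%}")

Note how little the raw counts tell you on their own; the percentages are what invite the questions above.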

In a larger organization, you might benefit from printing the report and using it as a springboard for discussion with others who are closer to the activity around site conversion. Use that time to understand the Key Performance Indicators (KPIs) for the site’s key conversion points.

It’s also worthwhile to keep a copy of the report for your own reference, although your Web analytics partner could re-create it at any time. You may later benefit from noting trends in this data over time: does the data remain stable, or are there fluctuations? Also, try to track down industry benchmarks for this business process. Where does your site’s performance fall in relation to those benchmarks?

Narrow In

More often than not, your first pass is a springboard for more questions that then lead to actionable insights. You and your partner might still have another 45 minutes or so of your one-hour meeting, so ask your partner to add one of the subflows to the funnel to see how it alters the data. The same questions I asked earlier still apply, but now there is an additional layer of questions around the specific test case the flow reflects. What new insights did you gain from this deeper examination? Does it support or shed light on any of your earlier hypotheses?

Spend the rest of the hour plugging in various subflows and variations of subflows to see what interesting patterns appear and help you evolve—or kill—your hypotheses.
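One simple way to plug in a subflow is to recompute the core funnel over only the sessions that entered that subflow. A minimal sketch, with hypothetical session records:

# Each hypothetical session lists the steps it reached, in order.
sessions = [
    ["shopping_cart", "sign_in", "order_review", "order_confirmation"],
    ["shopping_cart", "sign_in", "order_review"],
    ["shopping_cart", "create_account"],
    ["shopping_cart", "create_account", "order_review", "order_confirmation"],
]

def funnel(sessions, steps):
    """Count how many sessions reached each step."""
    return {step: sum(step in s for s in sessions) for step in steps}

core = ["shopping_cart", "order_review", "order_confirmation"]
sign_in_only = [s for s in sessions if "sign_in" in s]

print("All sessions:   ", funnel(sessions, core))
print("Sign-in subflow:", funnel(sign_in_only, core))

Comparing the two funnels side by side is often what surfaces a pattern worth chasing.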

What’s Your Story?

From the above exercise, you’ve gathered data and built some hypotheses. Perhaps you want to let your hypotheses simmer over time to give yourself the opportunity to gather historical data for trending or internal benchmarking. Or, perhaps you want to gather data from alternative yet complementary data sources—such as VOC (Voice of the Customer) reports, customer satisfaction survey comments, call-center reports, or usability testing—that would enable data triangulation or data synthesis for even more fine-tuned insights.

What recommendations might you make based on your hypotheses? Your recommendations could be as small as changing the copy for a call to action or as big as redesigning and re-architecting the logic behind a form. Or, your recommendations might suggest that you should conduct further user research such as moderated or unmoderated remote usability tests.

As an aside: the closer you are to understanding the level of effort that implementing your recommended changes would require, the better your prospects of making a quick win. Some recommendations, like re-architecting the sign-in process, might require a team of people to examine the current implementation in detail, then determine a better solution to the problem. Other recommendations might be simple to implement: for example, tweaking error-message copy to clarify what a user can do to recover from an error. Both options are perfectly viable, but the more awareness you have of the level of difficulty, the better you can set expectations around when stakeholders might see results.

What’s the Impact?

At some point, you may be prepared to claim that, if your recommendations get implemented, you could, for example, decrease the abandonment rate on your site or increase the conversion rate for a given process or step. The rigor you can achieve in projecting how much you can improve these rates depends on your organization and its readiness to make rapid changes, or reverse changes that produce undesirable effects. For small, agile organizations, a let’s-try-it-and-see approach should work fine for enhancements that cost very little effort to implement, assuming you follow up with post-implementation analysis to ensure the changes haven’t had a negative effect.

For larger organizations, even on your first attempt, you may want to partner with a trusted internal statistician (who might well be your Web analytics partner) to build out a revenue-projection model. For example: if we decrease abandonment at login by 25%, we may realize an additional $300K in revenue. This becomes increasingly important as a solution becomes more difficult to implement.
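As a back-of-the-envelope version of such a model, here is a minimal sketch. Every input is a hypothetical placeholder, chosen here so the output echoes the $300K example above; a statistician’s model would also account for variance and confidence:

# All inputs below are hypothetical placeholders.
monthly_logins = 20_000        # sessions reaching the login step per month
login_abandon_rate = 0.20      # share of those sessions abandoning at login
downstream_conversion = 0.50   # share of post-login sessions that buy
avg_order_value = 50.00        # dollars per completed order

improvement = 0.25             # proposed 25% relative drop in abandonment
recovered = monthly_logins * login_abandon_rate * improvement
added_orders = recovered * downstream_conversion
added_revenue_per_year = added_orders * avg_order_value * 12

print(f"Recovered sessions/month: {recovered:,.0f}")                # 1,000
print(f"Added orders/month:       {added_orders:,.0f}")             # 500
print(f"Projected revenue/year:   ${added_revenue_per_year:,.0f}")  # $300,000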

Re-evaluate

Once changes are implemented, measure the data for the business-process funnels again and compare the new data to your previous measurements, making sure to examine matching sets of days both before and after making the changes: for example, seven days prior to and seven days after making the changes, with both sets of data running from Sunday through Saturday. Also, make sure there weren’t other changes to the business-process flow you’re measuring or, if there were, provide some form of rationale in your methodology to account for them.
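A minimal sketch of such a comparison, assuming you can pull daily entry and conversion counts for two matched Sunday-through-Saturday windows; all counts are hypothetical:

# Hypothetical daily (entries, conversions) for the funnel's first and
# last steps, over matched Sunday-Saturday weeks before and after launch.
before = [(1400, 580), (1520, 610), (1310, 540), (1480, 600),
          (1550, 640), (1700, 690), (1620, 660)]
after = [(1390, 640), (1500, 700), (1330, 620), (1470, 690),
         (1560, 730), (1680, 790), (1610, 760)]

def conversion_rate(days):
    entries = sum(e for e, _ in days)
    conversions = sum(c for _, c in days)
    return conversions / entries

delta = conversion_rate(after) - conversion_rate(before)
print(f"Before: {conversion_rate(before):.1%}  "
      f"After: {conversion_rate(after):.1%}  Change: {delta:+.1%}")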

Conclusion

Funnel analysis can be an excellent first step toward eliciting data about users’ experience and the impact of your recommended changes to that experience. As designers, we sometimes perceive the reasons for these abandonments as obvious, because we recognize that little things can be a big deal and cause customers to abandon. The challenge lies in understanding how severe issues are, measuring their impact, and projecting the potential value of fixing them, with the goal of getting those in leadership positions to take note, carry your issues forward, and instigate action.

Senior Customer Experience / User Experience Analyst at Deluxe Corporation

Denver, Colorado, USA

Kristi Olson

Kristi has been doing UX research for ecommerce Web sites since 2001 and information architecture since 2006. After specializing in qualitative research methods for Target and Evantage Consulting, she moved to her current employer, Deluxe, and transformed UX research by blending qualitative and quantitative insights in a new approach called UX analysis. Kristi is an expert in research methodology, designing and blending methods of research, and discovering creative approaches to gathering data that meet business objectives. Kristi has synthesized her long-time, right-brain approach to user research with the left-brain approach of Web analytics to more powerfully communicate with Ebusiness. She has been Deluxe’s Tealeaf owner, administrator, and primary business user since 2009 and is on the road to becoming an Omniture Discover power user. Kristi is currently completing her Master of Liberal Studies in Innovation Studies.
