International UPA 2010 Conference: Research Themes and Trends

By Michael Hawley

Published: June 21, 2010

“I … came away from the conference inspired by the variety of speakers, topics, and presentations.”

For the first time in its history, the International Usability Professionals’ Association (UPA) conference took place outside of North America. While this certainly shifted the percentage of attendees from different geographic regions, all reports are that the conference was well attended, with crowded presentations filled with attendees from Europe, North America, and Asia.

I was fortunate enough to attend UPA 2010 and came away from the conference inspired by the variety of speakers, topics, and presentations. As the author of a column here on UXmatters titled Research That Works, I was especially interested in discussions about the evolution and refinement of various user research methods. Compared to discussions at CHI or other conferences, UPA has a reputation for being geared toward usability and user research practitioners. I am happy to report that this tradition continues. It was very encouraging to hear from both usability professionals and academics who are continuing to refine our methodologies and challenge old thinking in the light of new technologies. Unfortunately, it was impossible to attend all of the sessions, but in this conference recap, I will outline several trends I recognized.

Optimizing and Extending Existing Research Methods

“Practitioners continue to build on proven methodologies and adapt them to real-world scenarios and evolving technologies.”

Practitioners continue to build on proven methodologies and adapt them to real-world scenarios and evolving technologies. There seems to be a tension between those who seek more rigorous approaches to user research and usability evaluation and those who have limited time and budgets and do what they can to convince business stakeholders of the value of research. Hopefully, continuing discussions on this topic can help us do both more effectively.

Express Usability: How to Conduct and Present a Usability Analysis in One Week or Less

“The goal was to lower the threshold for trying usability testing by offering business stakeholders abbreviated usability activities they could complete in one week.”

In their presentation titled “Express Usability: How to Conduct and Present a Usability Analysis in One Week or Less,” Sarah Weise and Linna Manomaitis discussed an approach to doing a full cycle of user research and analysis activities in one week. The goal was to lower the threshold for trying usability testing by offering business stakeholders abbreviated usability activities they could complete in one week.

In tailoring standard usability techniques to fit into a single week, they essentially created a list of the different activities they could perform and assigned those activities to three phases: data gathering, analysis, and deliverables and presentation. For a given project, they would work with stakeholders to focus the goals of their research and analysis, then select individual activities from each phase that they could complete in a week. For example, if their goal were to improve the navigation on a Web site, they might choose task analysis as the data-gathering technique, do an analysis, then prepare a set of wireframes as the deliverable. By grouping activities in their list in this way, focusing on one goal, selecting the activities that best supported that goal, and getting the job done in one week—that is, in 40 hours—they were able to demonstrate the value of usability testing to the business and build relationships with stakeholders that smoothed the way to their doing usability testing on other projects.

I was especially intrigued by the structure this approach offered, which helped them deal with the inevitable tight turnaround times most projects require, and was impressed by the way they used the one-week timeframe to focus their research efforts on achievable goals rather than trying to improve everything at once.

Combining Methods: Web Analytics and User Testing

“Many were less familiar with Web analytics, but interested in its potential for providing a quantitative balance to the qualitative findings from usability testing.”

Martijn Klompenhouwer and Adam Cox’s presentation, “Combining Methods: Web Analytics and User Testing,” described how it’s possible to combine these two different, proven methods to achieve greater insights. The room was overflowing with people in advance of the session, an indication to me of great interest in this topic. While many people in the UPA audience have experience with qualitative usability testing, I got the sense that many were less familiar with Web analytics, but interested in its potential for providing a quantitative balance to the qualitative findings from usability testing.

Martijn and Adam talked about several ways in which the two disciplines could work together. For example, when planning a usability test, Martijn used Adam’s analysis of Web analytics to get insights about personas that could potentially represent the people who are visiting their current site. Data such as the other sites from which visitors are coming to their site and search terms people are using to find their site can provide clues about their different audiences and inform scenarios for a usability study. Analytics can also help narrow the focus of a usability study, focusing it on parts of a site that have the biggest drop-off rates, bounce rates, or tendencies to branch to the site search. For example, an analysis of the drop-off rate for a checkout process funnel could help them determine on what part of the process they should focus their research.
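As a rough illustration of the funnel analysis Martijn and Adam described, the sketch below computes the drop-off rate between consecutive steps of a checkout process. The step names and visitor counts are hypothetical, not data from their talk; in practice, these counts would come from a Web analytics tool.

```python
# Hypothetical page-view counts per checkout step, in funnel order.
# Names and numbers are illustrative only.
funnel = [
    ("cart", 1000),
    ("shipping", 620),
    ("payment", 430),
    ("confirmation", 390),
]

def drop_off_rates(steps):
    """Return the percentage of visitors lost between consecutive steps."""
    rates = []
    for (name_a, count_a), (name_b, count_b) in zip(steps, steps[1:]):
        lost = (count_a - count_b) / count_a * 100
        rates.append((f"{name_a} -> {name_b}", round(lost, 1)))
    return rates

for transition, pct in drop_off_rates(funnel):
    print(f"{transition}: {pct}% drop-off")
```

With these invented numbers, the cart-to-shipping transition shows the largest drop-off, which would suggest focusing usability-test scenarios on that step.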

In presenting the results they achieved with this method, Martijn and Adam told us that combining the two sets of information had the most impact: The Web analytics validated the findings of the usability study, while the usability study provided color and added explanations of why users did what they did, enriching the metrics. The presentation provided a well-prepared summation of the benefits of using Web analytics and usability studies together. There were many questions from the audience—most relating to details of implementation. Using quantitative metrics to complement usability studies is definitely a compelling approach. I’d like either the presenters or others to extend this discussion and present a tutorial that would help those who are interested in this approach to make sense of where to start—given a mountain of Web analytics that may be daunting to climb.

The Importance of Storytelling

“The topic of storytelling is still relevant and important to the successful practice of user-centered design.”

Storytelling was the theme of the UPA conference a couple of years ago, but the topic of storytelling is still relevant and important to the successful practice of user-centered design. Stories can be a very effective way of communicating ideas—both from research participants to researchers and from researchers to project teams or stakeholders. Fortunately, there were talks that covered the full range of this topic.

InfoPal: A System for Conducting and Analyzing Multimodal Diary Studies

“In diary studies, users record their experiences and interactions with a product or service independently, over an extended period of time.”

Jhilmil Jain presented a talk titled “InfoPal: A System for Conducting and Analyzing Multimodal Diary Studies.” In diary studies, users record their experiences and interactions with a product or service independently, over an extended period of time. Such studies are particularly useful when it is difficult to observe participants directly or over a long period of time. A drawback of traditional diary studies is that text-based diaries are not very dynamic and, often, users cannot easily complete them within the context in which they use an application—for example, while driving. Additionally, participants may forget to record something or might be aware of only a subset of items that may be important. Finally, diary entries are often tied to specific devices. In response to these drawbacks, Jhilmil and her team at HP Labs set out to develop a new system with the following goals in mind:

  • flexibility of expression, using one or more devices or modalities
  • seamless, collaborative diary creation by multiple participants

The result of their efforts is a system called InfoPal, which allows test participants to create diary entries using either a Web application or a mobile application, capturing text, voice input, pictures, videos, and audio recordings. Notably, this multimodal approach to data capture not only makes it easier for participants to create diary entries, it also enhances the researchers’ insights. While the tool is not available to the general public, the team’s experience with InfoPal can provide helpful guidelines for those looking to develop and conduct their own diary studies.

Using Stories Effectively in User Experience Design

“The most powerful experiences are those in which research participants tell their stories to stakeholders directly, through video clips of research sessions.”

Taking a more holistic look at stories, Whitney Quesenbery and Kevin Brooks discussed “Using Stories Effectively in User Experience Design.” To help the audience understand how stories are effective, they started with listening exercises that reinforced the power of storytelling in conveying messages. They showed an example that demonstrated the ineffectiveness of personas based on demographics and personality characteristics alone, then compared those personas to much more effective personas based on stories. Since I am a consultant who does research work for external clients, people sometimes ask me how I convince clients and business owners to act on the merits of my research findings. Whitney and Kevin’s talk reminded me of how powerful stories are in this regard, when communicating with stakeholders. In fact, the most powerful experiences are those in which research participants tell their stories to stakeholders directly, through video clips of research sessions.

Whitney and Kevin also provided some insight into using stories to elicit feedback from participants. As they put it, to complete traditional task analysis and other documentation, you have to ask research participants about things like their frequency of use and how a design would fit into their work practice or a buying process. But, if you can complement such questions with a simple request to “Tell me a story about the last time you did [scenario or task]…,” the resulting data is likely to be infinitely richer.

The Continuing Evolution of Eyetracking

I was very impressed by the quality and quantity of discussion relating to eyetracking. By my count, there were at least three eyetracking vendors in the vendor area, and there were several talks on the topic. I was able to attend only one, but I think it attracted many of the people with experience in, and a continued interest in, the refinement of eyetracking methods and analysis.

Discussion: Eyetracking Meets User Experience

“The discussions … were about the nuances of different report formats, protocols, and the interpretation of different metrics such as fixation time.”

Jennifer Knodler led a panel discussion titled “Eyetracking Meets User Experience,” which included Elisabeth Andre, Detlef Ruschin, Cathy Zapata, Dante Murphy, and Kate Caldwell. Their collective experience represented a good cross-section of vendors, consultants, and corporations who are using eyetracking in slightly different ways.

In previous presentations on eyetracking that I’ve attended, the discussion was about the merits of the method overall. In this presentation, however, there was not much of that at all. Rather, the discussions—which became rather heated at times—were about the nuances of different report formats, protocols, and the interpretation of different metrics such as fixation time.

One topic in particular about which panelists had differing opinions was the relative effectiveness of heat maps versus gaze paths. Some defended heat maps as easy-to-interpret summaries of attention patterns that they could readily communicate to business stakeholders and other team members. Others felt heat maps were a bit too simplistic and that the sequence information in gaze path graphs was significantly more valuable. While the panel’s general consensus leaned toward the more information-rich gaze path plots, I could see merits in and situations for using both approaches. The whole debate was indicative of the maturity of eyetracking methodologies and served as a reminder to those who don’t yet have much experience with eyetracking to get up to speed and consider adding it to their research toolkit.
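To make the heat-map-versus-gaze-path distinction concrete, here is a minimal sketch that summarizes the same fixation log both ways: total fixation time per area of interest (AOI), which is the kind of aggregate a heat map visualizes, and the ordered sequence of AOIs visited, which is the information a gaze path preserves. All coordinates, AOI boxes, and durations are invented for illustration; real eyetracker exports vary by vendor.

```python
# Hypothetical fixation log from an eyetracker export: (x, y, duration_ms),
# in recording order. All values are made up for illustration.
fixations = [
    (120, 80, 350),   # near the logo
    (400, 90, 180),   # navigation bar
    (410, 95, 220),   # navigation bar again
    (300, 400, 600),  # main content
]

# Hypothetical AOIs as (name, x_min, y_min, x_max, y_max) bounding boxes.
aois = [
    ("logo", 0, 0, 200, 150),
    ("nav", 350, 50, 600, 120),
    ("content", 100, 200, 500, 500),
]

def fixation_time_by_aoi(fixations, aois):
    """Sum fixation duration (ms) per AOI: the aggregate a heat map shows."""
    totals = {name: 0 for name, *_ in aois}
    for x, y, dur in fixations:
        for name, x0, y0, x1, y1 in aois:
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += dur
                break
    return totals

def gaze_path(fixations, aois):
    """The ordered sequence of AOIs visited: what a gaze path preserves."""
    path = []
    for x, y, _ in fixations:
        for name, x0, y0, x1, y1 in aois:
            if x0 <= x <= x1 and y0 <= y <= y1:
                path.append(name)
                break
    return path
```

The aggregate view shows where attention accumulated, while the path view shows the order in which it moved; the panel’s debate was essentially about which of these two summaries to lead with.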

Research and Design Beyond Usability

“What clients really want is … transformative solutions that will delight users and exceed their expectations.”

I was very happy to see several discussions of doing research and design with a focus on delight or happiness on the program. In my work, I find that clients now expect usability and ease of use as a given in the solutions I design. What clients really want is for us to develop transformative solutions that will delight users and exceed their expectations. While achieving this represents an opportunity for UX professionals to evolve our role, we also need to develop methodologies and approaches that align with these needs.

Designing for Delight

“All of the experiences that delighted users exhibited the same overall pattern: the user had some anxiety, the anxiety was resolved almost effortlessly—or at least, better than others had been in the past—and the result was delight.”

In his presentation “Designing for Delight,” Giles Colborne showed several examples of design solutions that were delightful, but also talked about implications for research and the design process. Giles started by showing that, for experts and designers, delight is about novel approaches, attention to detail, associating yourself with delightful others, and humor. For example, he pointed to the skull and crossbones that appears in Google Chrome when a site certificate is questionable—a clever new approach that adds a little interest to the design. However, for people who are not designers—that is, most of the target population—these are interesting details, but not necessarily delightful.

Giles collected stories about experiences that people found delightful, and the results were very different from the designer’s take. For example, one person relayed a story about buying two airline tickets by mistake. When they talked to customer service, not only did the airline refund one ticket, they refunded the more expensive one—delightful! All of the experiences that delighted users exhibited the same overall pattern: the user had some anxiety, the anxiety was resolved almost effortlessly—or at least, better than others had been in the past—and the result was delight. Consequently, from a research and process perspective, Giles suggested researching and probing specifically for potential moments of anxiety and highlighting them in process flows and other documentation. By focusing design solutions on effortless, personal, and clever resolutions to users’ anxieties, we can structure the research and design process for developing delightful designs.

Design for Happiness

“Pieter [Desmet]’s work focuses on determining the attributes of products and designs that make people happy, then doing user research and designing with a focus on those attributes.”

Invited speaker Pieter Desmet took this concept one step further in his talk “Design for Happiness.” Instead of working toward a delightful solution by being prepared for and reacting to moments of anxiety, Pieter’s work focuses on determining the attributes of products and designs that make people happy, then doing user research and designing with a focus on those attributes.

Pieter explained that work in psychology—such as the Positive Psychology movement Martin Seligman proposed—has found that there are several dimensions to happiness. While genetic makeup and people’s general personality determine part of their happiness, external factors can also have a significant impact. Products and designs that contribute to both a meaningful and a pleasurable life can add to someone’s overall happiness.

Of course, meaning and pleasure can be different things to different people. For example, to some people, living a meaningful life might mean raising a child or making a difference in their community. The implication is that, in our research, we should focus on these considerations explicitly in an effort to help people achieve true levels of happiness. I don’t pretend to understand the nuances of the psychology behind his theories, and Pieter admitted that this is a work in progress he continues to explore. However, the idea that we should consider doing research, analysis, and design that build toward positive outcomes—rather than just avoiding negative outcomes and optimizing to reduce errors—is certainly a compelling future direction for our profession.

Maturing the Profession and the Professionals

“Several talks at the conference focused on the evolution of our skills as user researchers and practitioners of usability.”

Speaking of the continued evolution of our profession, several talks at the conference focused on the evolution of our skills as user researchers and practitioners of usability. We can all become better at what we do, so it was encouraging to hear presentations from those who discussed how.

Rent a Car in Just 60, 120, or 240 Seconds

“While Rolf [Molich] was interested in the specific results of the evaluations, he was even more interested in how different teams would independently conduct their evaluations and report on their findings.”

In a continuation of his Comparative Usability Evaluation (CUE) studies, Rolf Molich led a discussion titled “Rent a Car in Just 60, 120, or 240 Seconds.” The title of his presentation refers to a study that he organized, during which 15 separate usability teams evaluated the Web site for Budget, a car rental company in the US. The Web site claimed visitors could “Rent a Car in 60 Seconds,” and Rolf thought the claim would be interesting to study from a usability perspective.

While Rolf was interested in the specific results of the evaluations, he was even more interested in how different teams would independently conduct their evaluations and report on their findings. He found that different teams disagreed on their findings. For example, some found that the 60-second claim was accurate, others suggested that it be changed to 120 or 240 seconds, and still others suggested that the time wasn’t really important at all, as long as users remained confident in their ability to rent a car and that they had all of the information they needed at the right time. The differences in the recommendations of the 15 teams were striking.

This led to a discussion about our profession’s state of maturity. If all of these groups could come up with such different recommendations, should our profession be more scientific? Should all of the groups have come up with the same findings? Or is user research and analysis more of an art?

Based on his years of experience in the industry, Rolf suggested that he could certainly recognize which teams were more rigorous in their approach and were better at communicating their findings. But he warned that the disparity suggested we should all be mindful of the quality of our own work, be constantly humble and aware, and strive to improve our own performance. In my mind, this is especially important in light of the sentiment surrounding express usability, which I discussed earlier. As we speed up our processes to meet tighter deadlines, we need to make sure we understand the implications of our approaches to user research and analysis.

Mentoring to Build UX Skills in Business Environments

“Deb [Sova] described a process of learning by shadowing where the junior team member would attend and assist the senior practitioner with project activities as they arose.”

To help people improve their understanding of different approaches to user research, Deb Sova and Laurie Kantner gave a presentation titled “Mentoring to Build UX Skills in Business Environments,” describing their mentoring experiences in different types of organizations.

Deb shared some success stories about a mentoring program in a large corporation for which she was able to map out learning plans, structure a review process, and enable a one-on-one learning environment in which a senior practitioner would teach a junior member of the team. Deb described a process of learning by shadowing where the junior team member would attend and assist the senior practitioner with project activities as they arose. Laurie, on the other hand, described a process in an agency where all the members of a team would learn from each other through regular topic discussions and design critiques. While the basics of their experiences were relatively straightforward, I found the emphasis on mentoring and shared learning compelling.

To achieve Rolf Molich’s goals for user research, experienced professionals need to learn from each other and mentor others. Consider it a call to action!

Conclusion

“Overall, the focus on real-world experience and practitioner-focused content at the UPA conference continues.”

Overall, the focus on real-world experience and practitioner-focused content at the UPA conference continues. Looking at the conference from the perspective of a user researcher, I was encouraged to see that there are thought leaders pushing the envelope on the application of traditional user research methods, while at the same time building upon core elements of techniques that have proven successful and maintaining rigor in their approaches. I hope our continuing attention to detail will help everyone in our profession prove their worth to those we work with. I look forward to the continued evolution of this discussion in 2011, in Atlanta!

2 Comments

Thanks for the summary. Sounds like it was a great conference, and a lot of the issues you mention—particularly around optimizing and extending research methods—are very relevant to my current, day-to-day work.

You make one statement I take issue with, which is that usability testing represents qualitative data versus Web metrics being quantitative.

Anyone who is doing usability testing and is not capturing any quantitative measures such as performance or subjective feedback data is missing out! This is not always an option for every usability test, and the type of quantitative data is different from what you get in Web metrics, but it’s still quantitative.

Jamie, thanks for your comment. And thanks for highlighting the quantitative nature of measurements that you can capture during usability testing. I agree that, as practitioners, we should be aware of the nuances of tracking quantitative measures—for example, during which phase of design a particular metric is most useful, the importance of statistical significance, and the difference between observed and reported behavior.

In comparing Web analytics and usability testing, I find that one of the biggest differences is the impact of analytics data on stakeholders. Even if we conduct an online, unmoderated usability test with 50 people to gain some level of statistical confidence, reporting analytics on a real Web site or design with thousands of data points often has more impact on stakeholders.
