How Do Users Really Feel About Your Design?

Envision the Future

The role UX professionals play

A column by Paul J. Sherman
September 24, 2007

Perhaps you’ve done contextual inquiries to discover your users’ requirements and understand their workflows. You may have carried out participatory design sessions, usability tested your design, then iterated and improved it. But do you know how users really feel about your design? Probably not.

The user experience field has been trying to move beyond mere usability and utility for years. So far, no one seems to have developed easy-to-implement, non-retrospective, valid, and reliable measures for gauging users’ emotional reactions to a system, application, or Web site.

In this column, I’ll introduce you to a promising method that just might solve this problem. While this method has not yet been subjected to rigorous peer review or experimental testing, it offers an intriguing solution and is endlessly fascinating to me. And it just might prove to be the kind of powerful technique we’ve been looking for to illuminate users’ emotional reactions to our designs.


Why Measure How Users Feel?

Many of us in the field of user experience believe that utility and usability are necessary, but somehow insufficient. Even for the most staid and straightforward business application, users form an affective reaction to the application during initial and subsequent use. It might not be as strongly valenced as, say, the reaction they have when browsing an online store, but the affective reaction is there nonetheless.

All things being equal, users evaluate a system that engenders positive emotional reactions more positively than a system that doesn’t. So we need to know whether—and the extent to which—a system’s use triggers positively valenced emotions.

And on the flip side, we’ve all seen users become frustrated when they can’t figure out how to accomplish their tasks. Just how frustrated are they? Mildly? Moderately? Are they so irritated with your design they’re ready to heave the device out the nearest window? Let’s hope not.

The point I’m trying to make is that, until now, we’ve assumed users are somewhat frustrated when we observe indirect behavioral indicators such as menu hunting, false starts, and input errors. And we’ve assumed they’re really frustrated when we hear a sigh of exasperation. However, these observations provide only coarse measures of affect, and very few of us capture them or use them systematically to make comparative judgments.

What’s more, alternatives for measuring delight and frustration—after-the-fact survey questions, verbal self-reports, and retrospective video self-evaluation—are notoriously subject to positivity bias and other vagaries of attribution bias. Wouldn’t you like to have a method that’s more granular and valid when you test your next design?

How Do We Currently Measure Delight and Frustration?

Currently, the most prevalent ways of measuring a user’s delight or frustration when using a product or Web site are retrospective self-report measures such as:

  • participant ratings of task satisfaction, task utility, system utility, and so on
  • participants’ responses to open-ended questions about desirability, satisfaction, and utility
  • repertory grid techniques that use semantic differentials (pairs of opposing words) to elicit participants’ evaluations of a user experience
  • Benedek and Miner’s Desirability Toolkit, which lets participants choose descriptive words or phrases that reflect their evaluations of a product’s user experience

Our methods of direct behavioral observation have so far been limited to capturing and rating participants’ verbalizations reflecting delight or frustration during their use of a product.

Recently, I learned about a new method for assessing users’ emotional response to a product—one that relies on real-time observation of behavior and coding of participants’ facial expressions and gestures. Its creators, Eva de Lera and Muriel Garreta-Domingo, call their method the “Ten Emotion Heuristics.”

The Ten Emotion Heuristics

According to de Lera and Garreta-Domingo’s conception, users’ emotions are intimately bound to their appraisals of a user experience. Therefore, accurately measuring users’ emotions while they are learning and using a product provides a window into the quality of the user experience.

Given the validity and reliability challenges of self-report measures, they have chosen to rely on their observation of the occurrence and frequency of certain facial expressions and body gestures as proxies for users’ affective reactions. For example, if, while using a product, a participant frowned or raised her brow, they coded her expression as confusion, exasperation, or frustration. Not surprisingly, a smile indicated pleasure or delight.
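To make this kind of coding concrete, here is a minimal Python sketch of how observed markers might be tallied by valence during a session. The marker names and their valence assignments are illustrative assumptions for this sketch, not the actual ten heuristics or their published definitions.

```python
# Illustrative sketch only: these marker names and valences are examples,
# not de Lera and Garreta-Domingo's published heuristic definitions.
MARKER_VALENCE = {
    "frown": "negative",        # coded as confusion or frustration
    "brow_raise": "negative",   # coded as surprise or exasperation
    "lip_compression": "negative",
    "smile": "positive",        # coded as pleasure or engagement
}

def tally_observations(observations):
    """Count positive and negative markers in a list of observed markers."""
    counts = {"positive": 0, "negative": 0, "unknown": 0}
    for marker in observations:
        counts[MARKER_VALENCE.get(marker, "unknown")] += 1
    return counts

session = ["frown", "brow_raise", "frown", "smile"]
print(tally_observations(session))
# {'positive': 1, 'negative': 3, 'unknown': 0}
```

Even a tally this simple would let an observer compare the balance of positive and negative cues across tasks or design iterations, which is the comparative use the raw observations alone don’t support.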

In their recent publication, “Ten Emotion Heuristics: Guidelines for Assessing the User’s Affective Dimension Easily and Cost Effectively,” de Lera and Garreta-Domingo described the advantages of their method: it is both inexpensive to implement and easy to understand. Since their method derives from the seminal research of Paul Ekman and his colleagues, they are confident it will prove valid across cultures. They have not yet rigorously validated the method, nor do they have reference or baseline measures—as is also the case with Benedek and Miner’s Desirability Toolkit. However, their concurrent observations strongly suggest the method’s validity: negative emotional markers nearly always occurred when users made observable errors or encountered usability problems.

I recently had a conversation with Eva de Lera about the “Ten Emotion Heuristics” via instant messenger, as follows:

PJS: I was at the UPA 2007 conference this year and was intrigued to hear about your research project on the “Ten Emotion Heuristics.” Would you briefly summarize what they are and how you came to work on this problem?

EdL: The heuristics are a set of guidelines to help assess a user’s affective state in an easy and cost-effective manner. The idea originated from a need to gather objective satisfaction measures and also a need to have different people gather this data in similar ways. The heuristics are a set of ten emotional cues that we have identified, and we use them as one measure. For example, identifying four or five negative emotional heuristics at the beginning of a given task provides us with a negative user experience measure.
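The scoring idea Eva describes could be sketched roughly as follows. The threshold of four negative cues at the beginning of a task comes from her description; the time window, data format, and function name are assumptions made for illustration only.

```python
# Hypothetical per-task scoring sketch: flag a task as a negative experience
# when several negative markers occur early in the task. The 60-second
# window is an assumed value, not one published with the heuristics.

def score_task(marker_events, window_seconds=60, threshold=4):
    """marker_events: list of (timestamp_seconds, valence) tuples for one task.

    Returns 'negative' if at least `threshold` negative markers occur
    within the opening window of the task, else 'neutral-or-positive'.
    """
    early_negatives = sum(
        1 for t, valence in marker_events
        if t <= window_seconds and valence == "negative"
    )
    return "negative" if early_negatives >= threshold else "neutral-or-positive"

events = [(5, "negative"), (12, "negative"), (20, "negative"),
          (31, "negative"), (90, "positive")]
print(score_task(events))
# negative
```

The point of a rule like this is repeatability: two different observers applying the same cues and the same threshold should arrive at the same per-task judgment, which is the consistency Eva mentions wanting across data gatherers.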

PJS: I am looking at the poster PDF you presented at UPA 2007, in which you describe the behavioral marker corresponding to each heuristic. Do any of the markers have varying emotional valence? That is, are any of them positive in one context, but negative in another?

EdL: In our study, there really is only one positive emotion heuristic: the smile. The smile indicates engagement. This is our ultimate objective—to design engaging, satisfying interactions. So we aim at having users’ faces in a neutral expression, with no facial muscles tensed, no distractions, just a relaxed face, and hopefully, a slight smile or grin. And when we record a smile, that’s a positive marker of engagement.

The other nine heuristics are negative, except when you are evaluating the user experience of an interaction that has as its objective surprising users or frustrating them. An example where you would want to do this is an online game. In this case, we could not use our “Ten Emotion Heuristics.”

At the Universitat Oberta de Catalunya (UOC), a fully online university, we want the students to be engaged in their education and community through the virtual campus. We want a student to be able to carry out his or her tasks successfully and happily. We want students to say, “I love studying at UOC. When I’m finished with my Masters, I will go on to a different program.”

Not only do we want the virtual campus to work, but we want our students to enjoy the experience. Their tasks are generally administrative, related to courses, participation in forums, and so on. We don’t want them frowning, feeling lost, or confused.

PJS: So the context is clearly important then. Have you tried your “Ten Emotion Heuristics” when the user interface is, in fact, supposed to surprise or frustrate, such as in a game?

EdL: Unfortunately we haven’t tried that yet. At this moment, we are focusing on the smile heuristic. We are trying to relate smiling to efficiency and, more importantly, smiling without the connotation of happiness. We are trying to demonstrate that if you smile, you will be more effective and efficient with your search, purchase, consultation, message, and so on.

There is a lot of work we need to do in this field. We want to see how it affects interaction design in two ways:

  1. to be able to see whether smiling actually helps you be more effective and efficient, which according to previous research it should
  2. to begin identifying elements of interaction design that elicit a smile on the user’s face

PJS: Interesting. This is somewhat related, I think, to the cognitive reframing technique the therapeutic community sometimes utilizes. But that’s a tangent to explore another day. So, here’s something I’ve sometimes wondered. Can you recount an instance when you measured the emotion heuristics during a usability test, and it revealed information you would not have obtained solely through test performance or data gathering using a questionnaire? And what did you learn?

EdL: Definitely. What was interesting in our study is that we also gathered cognitive and subjective data. Interestingly, but not surprisingly, I believe, participants who had carried out their tasks in an intentionally frustrating environment—for example, an existing Web site with a terrible design—all declared in their feedback questionnaire that tasks had been easy and that they had a very satisfying experience.

This was hard to believe when most of them had been lost several times, confused, and frustrated. There are many reasons that explain why participants provide positive feedback even after a negative experience, and this is a major reason why we started working on the heuristics. We could not trust people’s subjective responses, but we could trust their faces a little more.

Still, you can also gather interesting data in user feedback questionnaires, so we continue to gather this data, as well as cognitive data such as time, errors, clicks, and so on. The “Ten Emotion Heuristics” are a third layer of the usability testing and evaluation process.

PJS: Yes, users tend to be pathologically positive in their written and verbal assessments, no matter how terrible the user interface actually is.

I’m intrigued by your use of self-assessments. What happens when you ask people to watch and reflect on their facial expressions? And how much do you teach them about the markers you’re looking for?

EdL: I actually do not teach them about what I am looking for. The good thing about the heuristics is that they do not require specific equipment or skills. We are able to observe them and record them using usability testing software like Morae, which we were already using in our user evaluations.

We tell participants that we are evaluating a Web site. They just don’t know exactly what we are observing. When I have informally spoken to people about what facial expressions and gestures could tell us, they immediately appear stressed, as though I could see the truth by observing them, which I cannot. It definitely makes people uncomfortable, so I would not tell the participants that I’m watching their facial and gesture reactions.

PJS: Your and your colleagues’ work in this area is somewhat based on the work of Paul Ekman, who has established that there is a high degree of universality for many, if not most, human facial expressions. I seem to remember that the bulk of his work was in the context of human-to-human interactions. Did Ekman study facial expressions in the context of tool use or puzzle solving? And were the expressions still universal?

EdL: Ekman definitely inspired our work—as did others such as Rosalind Picard from MIT’s Affective Computing Research Group. Our interest is in understanding facial expressions and gestures in relation to system interactions, something that Dr. Picard and others have documented extensively. In the “Ten Emotion Heuristics,” our goal is to find assessment methodologies that are easy to implement and cost effective, but do not require special skills or expensive equipment.

And in regard to universality, we have gathered supportive research that indicates the heuristics could work across cultures, but we believe that new, more up-to-date research is needed. We are hoping to evaluate universality and multi-culturality this year, with the help of UPA’s worldwide members who have volunteered to use and test the “Ten Emotion Heuristics” in their user evaluations. We hope to provide some results in the near future.

PJS: You’ve just presented a full paper on the “Ten Emotion Heuristics” at HCI 2007 in Lancaster, UK. What were some of the comments and critiques you encountered there?

EdL: HCI 2007 was a great opportunity to meet other affect-in-HCI researchers who, like myself, are searching for methodologies that can help us assess users’ affective dimension—before, during, and after interacting with a system. The response to the “Ten Emotion Heuristics” was highly positive, as it provides a tool—or set of guidelines—to help us evaluate and document affect objectively, easily, and cost effectively.

PJS: Do you have plans for developing tools to support the use of the emotion heuristics by other practitioners, during user experience research? For example, an easy-to-use coding system or rating sheet—or something like that?

EdL: We have already begun conducting more research on the heuristics—specifically, the smile heuristic. And we hope to continually develop new tools and methodologies that can help us design for engagement, for joy, for a satisfactory user experience. There is a large group of professionals and researchers out there, trying to pin down the ideal methodology or set of methodologies for assessing affect in interaction design. There’s more great work to come from this community!

PJS: Thanks so much for your time, Eva. How can people learn more about the emotion heuristics?

EdL: Anyone who is interested in the “Ten Emotion Heuristics” can contact me using the comment form on this page. We are always looking for volunteers worldwide who want to test the heuristics and provide us with feedback and suggestions. Thanks, Paul, for your interest. 


References

Baber, Chris. “Repertory Grid Theory and Its Application to Product Evaluation.” In Usability Evaluation in Industry, edited by Patrick W. Jordan, Bruce Thomas, Bernard A. Weerdmeester, and Ian L. McClelland. London: Taylor & Francis, 1996.

Benedek, Joey, and Trish Miner. “Measuring Desirability: New Methods for Evaluating Desirability in a Usability Lab Setting.” Redmond, WA: Microsoft Corporation, 2002. Retrieved September 22, 2007.

de Lera, Eva, and Muriel Garreta-Domingo. “Ten Emotion Heuristics: Guidelines for Assessing the User’s Affective Dimension Easily and Cost Effectively.” Barcelona: Universitat Oberta de Catalunya, 2007. Retrieved September 22, 2007.

Hassenzahl, Marc. “Character Grid: A Simple Repertory Grid Technique for Website Analysis and Evaluation.” In Human Factors and Web Development, edited by Julie Ratner. 2nd ed. Mahwah, NJ: Lawrence Erlbaum, 2003.

Founder and Principal Consultant at ShermanUX

Assistant Professor and Coordinator for the Masters of Science in User Experience Design Program at Kent State University

Cleveland, Ohio, USA

Paul J. Sherman

ShermanUX provides a range of services, including research, design, evaluation, UX strategy, training, and rapid contextual innovation. Paul has worked in the field of usability and user-centered design for the past 13 years. He was most recently Senior Director of User-Centered Design at Sage Software in Atlanta, Georgia, where he led efforts to redesign the user interface and improve the overall customer experience of Peachtree Accounting and several other business management applications. While at Sage, Paul designed and implemented a customer-centric contextual innovation program that sought to identify new product and service opportunities by observing small businesses in the wild. Paul also led his team’s effort to modernize and bring consistency to Sage North America product user interfaces on both the desktop and the Web. In the 1990s, Paul was a Member of Technical Staff at Lucent Technologies in New Jersey, where he led the development of cross-product user interface standards for telecommunications management applications. As a consultant, Paul has conducted usability testing and user interface design for banking, accounting, and tax preparation applications, Web applications for financial planning and portfolio management, and ecommerce Web sites. In 1997, Paul received his PhD from the University of Texas at Austin. His research focused on how pilots’ use of computers and automated systems on the flight deck affects their individual and team performance. Paul is Past President of the Usability Professionals’ Association, was the founding President of the UPA Dallas/Fort Worth chapter, and currently serves on the UPA Board of Directors and Executive Committee. Paul was Editor of and contributed several chapters to the book Usability Success Stories: How Organizations Improve by Making Easier-to-Use Software and Web Sites, which Gower published in October 2006. He has presented at conferences in North America, Asia, Europe, and South America.
