International Usability Evaluation: Issues and Strategies

July 24, 2006

A CHI 2006 Special Interest Group (SIG), presented by:

  • Emilie W. Gould
  • Aaron Marcus
  • Apala Lahiri Chavan

The CHI 2006 program provided this summary:
In this SIG, practitioners will discuss challenges they faced in selecting and customizing methods for international usability design. Facilitators and then participants will contribute experiences, case studies, and helpful multicultural contacts.

The CHI conference provides SIGs to enable conference attendees who share similar interests to meet for 90 minutes of facilitated discussion.

Like many UX professionals, I’m often involved in designing products that will be sold across the globe. Half of the challenge is acknowledging there is no one-size-fits-all set of design criteria. The other half is knowing the tradeoffs when choosing between usability methods for requirements gathering and evaluation. What many may find surprising is that our tried-and-true methods themselves can have limitations, depending on the context in which we apply them.


These limitations were the well-timed subject of the International Usability Evaluation Special Interest Group (SIG). This session was packed before it even began, as attendees scrambled to find a seat at one of the five or six large round tables. As a first-timer at CHI, and one of the last to find a seat, I experienced a bout of nervousness, wondering how much group participation this would really involve. My fears were quickly assuaged once I learned how the session would progress—with the first half allotted to presentations by the panelists; the second, to small group discussions at each table.


I found the presentation by Apala Lahiri Chavan to be outstanding. She was an enthusiastic and very articulate speaker who also provided good, useful content. Chavan presented case studies on usability methods and requirements gathering that did and did not work in India. She described the reluctance of users to criticize designs for fear of offending the facilitators. Also, in certain contexts where class and/or status were factors, interviewees were reluctant to provide information. Upon encountering these and other issues, her team employed the following new strategies:

  • In addressing issues of status, they found that users were willing to provide more information to younger, less experienced interviewers.
  • To address the reluctance of users to criticize user interfaces:
    • They created an evaluation forum that was something like a marketplace. By allowing participants—who were accustomed to bartering—to barter, researchers elicited the type of feedback they needed. (Since this method placed participants in what resembled a commercial context, it reminded me of the 7-11 Milk Experiment that Jared Spool and UIE use as the basis of their eCommerce site research.)
    • Instead of usability scenarios, they employed movie scripts and popular actors, because people in India are willing to openly discuss and criticize such enactments.

During the facilitated discussions, small groups talked about various usability methods, including surveys, interviews, usability tests, and some novel techniques; what issues they had with classic methods; where the issues related to culture, country, region, or language; and what, if any, strategies they used to overcome such issues. Thanks to the varied group of international participants, a range of issues did arise.

    • One practitioner spoke of traveling from a Southeast Asian country to Australia to work on financial transactions. She found that, unexpectedly, users in Australia were unwilling to discuss banking and money.
    • To address a participant’s problem with a usability test in an Asian country, the group concluded that some type of co-participant method might have elicited more participation than a single-person think-aloud test session.

I greatly appreciated the willingness of the presenters and discussion participants to share methods that had not worked for them and how, when possible, they modified them.

Key Takeaways

I took away the following ideas from this SIG:

  • Many usability methods were originally created in Western cultures and may not be as effective in other settings.
  • Consider using other known methods as alternatives to those you ordinarily use.
  • Try to anticipate whether culture, country, region, or language might limit the effectiveness of a method you are planning to use.
  • Share lessons learned relating to international usability evaluations with colleagues. 


References

Gould, Emilie W., Aaron Marcus, and Apala Lahiri Chavan. “International Usability Evaluation SIG: Issues and Strategies.” CHI 2006, Montréal, Québec, Canada, April 2006. Retrieved July 24, 2006.

Perfetti, Christine. “The 7-11 Milk Experiment: How Does Site Design Affect Revenue?” UIE Brainsparks, September 13, 2005. Retrieved July 24, 2006.


Michele Marut was formerly a Human Factors Specialist with the medical manufacturing company Respironics. Since 1999, Michele has been applying Human Factors principles to the design and evaluation of a range of products. Her experience includes usability testing kitchen and bath products at Kohler and making Navy ships more user friendly at General Dynamics: Bath Iron Works. She currently serves as the UXnet Local Ambassador and IxDA Local Coordinator for Pittsburgh and chairs the Environmental Design Technical Group of HFES. Michele holds an M.S. in Human Environment Relations from Cornell University.
