
Debunking the Myths of Remote Usability Studies

July 5, 2010

Success in a global marketplace increasingly demands that companies engage customers from diverse backgrounds in both discussions and usability studies. However, funding for user research travel is becoming more limited, and local users often cannot provide the diversity such research requires. Therefore, UX professionals have started using remote usability testing methods to gather adequate user feedback.

The software development industry is relatively young, and the UX professions within it are younger still. People working in user experience come from different backgrounds, and their professional practice is still evolving. Remote usability activities have not yet been well studied. Consequently, a number of myths have arisen.

In this article, we’ll draw on our collective, first-hand experiences doing remote usability studies for numerous real-world projects to describe and debunk these myths. Our goal is to share knowledge and inspire action.


The Myths

We’ve identified six myths about remote usability studies.

Myth #1—Remote usability studies are more prone to user distractions and interruptions that could invalidate results.

Our Experience—Interruptions are part of a natural work environment and actually make the testing environment and results more realistic.

When facilitating remote usability studies, we are often frustrated by our inability to control the users’ environment. During remote usability studies, users might be participating over the phone, from their offices or homes. Study participants frequently get interrupted by their coworkers—inviting them to go to lunch or asking them questions—or by kids playing in the house. Many facilitators feel these distractions are a challenge for remote usability studies, because they can sidetrack users from the tasks they’re working on. However, such interruptions and our apparent lack of control over them might be just what we need. Because users are working in their native work environments or at home, we can observe and capture more real-life feedback in these natural settings.

We have an interesting story to tell about interruptions. During one remote usability test session, a user was working on a task from her home office, with a connection to a source-code server. Like many other people on her team, she often works from home. The session got interrupted by her crying baby. She went away to calm down her child, then 15 minutes later, when she returned to the task, she was faced with a time-out error message! The software assumed users would always be working with the user interface, so if there was no interaction for more than 10 minutes, it terminated the server connection.

This is the type of feedback we could easily miss in a face-to-face usability test session in a lab, because it is usually difficult to mirror a user’s exact work environment, especially one that includes a crying baby.

Remote usability studies reflect the natural environments users typically work in. They enable us to observe unexpected factors that can affect a participant’s interactions with an application, such as interruptions from phone calls, pop-up messages, a crying baby, or even a system crash. Our designs must not only consider users’ goals for performing tasks, but also reduce the impact of distractions that could occur in a user’s daily work.
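To make this design implication concrete, here is a minimal sketch, in Python, of one way an application could tolerate idle time rather than simply terminating the session with an error. This is a hypothetical illustration; the class and method names are our own and do not describe the product in the anecdote. The idea is to save the user's work and release the connection quietly, then restore both when the user returns.

```python
import time

IDLE_LIMIT_SECONDS = 10 * 60  # the 10-minute limit from the anecdote


class EditorSession:
    """Hypothetical client-side session that survives interruptions."""

    def __init__(self):
        self.last_activity = time.monotonic()
        self.draft = ""        # unsaved work in progress
        self.connected = True

    def record_activity(self, text):
        """Called on every keystroke or click."""
        self.draft = text
        self.last_activity = time.monotonic()

    def check_idle(self):
        """Called periodically, for example once a minute, by a timer."""
        idle = time.monotonic() - self.last_activity
        if idle > IDLE_LIMIT_SECONDS and self.connected:
            self.save_draft_locally()  # preserve work before releasing the connection
            self.connected = False

    def resume(self):
        """Called when the user interacts again after an interruption."""
        if not self.connected:
            self.connected = self.reconnect()  # reconnect quietly instead of showing an error
        self.last_activity = time.monotonic()

    def save_draft_locally(self):
        print("Draft saved locally; it will be restored on reconnect.")

    def reconnect(self):
        print("Re-establishing the server connection and restoring the draft...")
        return True


if __name__ == "__main__":
    session = EditorSession()
    session.record_activity("checked-out file edits")
    # Simulate the interruption: push the last activity past the idle limit.
    session.last_activity -= IDLE_LIMIT_SECONDS + 1
    session.check_idle()
    session.resume()  # the user returns to restored work, not a time-out error
```

Whether the right behavior is a quiet reconnect, a local save, or simply a longer limit depends on the product; the point is that an ordinary interruption should not cost the user her work.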

Myth #2—Results from remote usability studies are as good as those from face-to-face usability studies.

Our Experience—Quantitative data from remote usability studies can be as good as that from face-to-face usability studies. However, qualitative data from remote usability studies is often inferior.

Initially, remote usability studies were for testing Web sites. UX professionals had early success collecting Web site usability metrics, such as the number of clicks required to find a target page. Such quantitative data is as easy to collect through remote usability studies as through face-to-face usability studies, if not easier.

However, without a way to observe study participants during remote usability studies, facilitators cannot detect nonverbal signals. In such cases, lacking the direct observation of participants’ nonverbal expressions and behaviors and basing results only on metrics such as task completion rates or user satisfaction scores can lead to false optimism.

In our experience, certain types of usability studies are more difficult to conduct remotely. For example, requirements brainstorming and design walkthroughs are less suitable for remote usability studies, because they rely on synergy between a facilitator and participants through both verbal and nonverbal communications. For types of usability studies such as these, which rely on empathy, understanding, and creating a connection with participants, the loss of nonverbal communication in remote usability test sessions is significant. This kind of empathy is nearly impossible to build using Web conferencing software. Since 80% of our communication is nonverbal and these types of usability studies depend on nonverbal communication, the results of such remote studies are poor.

Myth #3—Poor audio and slow screen sharing hinder user feedback.

Our Experience—Audio, video, and screen-sharing applications have improved dramatically and enable good collection of user data.

Just a few years ago, getting these technologies to work for remote usability studies was a headache. Now, however, audio, video, and screen-sharing applications have improved dramatically. Today, we are no longer plagued by static-filled, half-duplex audio conferences and jumpy mouse pointers during screen sharing.

In our experience, new VoIP software sometimes works better than plain old telephone service (POTS) for audio conferences. POTS-based audio conferences have poor quality for trans-Atlantic and trans-Pacific calls and issue annoying beeps when participants put their calls on hold. In a few of our remote usability studies, participants have commented that the screen-sharing application performs so well, it seems as though they were sitting next to the facilitator in person, watching the same screen. We find that, at a minimum, good audio-conferencing and screen-sharing applications are essential for remote usability studies.

Even with audio and screen sharing, interpersonal interactions during remote usability studies are still a bit impersonal. We have tried using a Webcam to make testing sessions more personal and gather some nonverbal feedback. However, for privacy and security reasons, a Webcam might not be acceptable to some remote participants. Therefore, we have tried another approach to enhancing visual contact between participants and session facilitators.

Recently, we held a series of remote usability sessions for design exploration with customers. During the first session, even after a virtual roundtable of self-introductions and ice-breakers, we found customers were not fully engaged, so were not providing feedback on the design. So, before the second session began, we asked the same participants to send us their photos, and we showed their photos to everyone at the beginning of the session. We displayed the photos during the session, so participants could match a photo with the person talking. The addition of visual identity seemed to help participants to open up and talk more and become more engaged.

Myth #4—It is quicker to recruit participants for remote usability studies.

Our Experience—While there are potentially more users available for remote usability studies, rapport building and user screening are more difficult.

Remote usability studies enable us to reach a broader range of users and increase our chances of finding qualified participants. However, if we use the same approach in screening participants as in conducting the actual sessions, we might not take full advantage of the larger user pool remote usability studies offer.

A common practice in participant screening is to evaluate candidates’ technical experience. However, we’ve found that the answers we get in response to questionnaires during screening usually don’t provide us with many clues about a candidate’s likely performance during a usability study. Participants who satisfy the screener perfectly might not be productive during a study if their personalities don’t suit the types of interaction a study requires or facilitators aren’t aware of their communication preferences.

Introverts are usually reserved in communicating with others until they know and trust them. In a local usability study, which permits face-to-face interaction, a skillful facilitator can create a connection with introverted participants, making them feel more comfortable and relaxed in expressing their thoughts. Facilitators can observe introverted participants’ nonverbal signals and prompt participants with a series of questions that helps them to understand what’s on their minds. However, the traits of introverts make it more difficult for facilitators to conduct remote usability sessions. This is especially true for studies that involve design brainstorming, where thoughts participants exchange among themselves can bring out the best results.

Based on our experience, it’s not always quicker to recruit participants for remote usability studies. We can offer the following recommendations to make the recruiting process more effective. First, add a few personality questions to the screener to determine whether a potential participant is an introvert. Second, try to recruit extroverts—or at least people who enjoy expressing their opinions in group sessions. Third, use an online polling system and private chats to elicit feedback from all participants. This lets you draw out the thoughts of introverts.

Myth #5—It is easier to ask for participants’ time for remote usability studies than for local usability studies.

Our Experience—We might get the participants’ time, but not necessarily their attention.

Because remote participants can join usability studies from their homes, cottages, or wherever is convenient for them, some believe it is easier to ask remote participants to give their time for a remote study. However, in our experience, it is harder to hold remote participants’ attention during a remote session, because they can put us on hold to take another phone call or start doing other things on their computers. Thus, we may not have their undivided attention throughout a remote session. Often, participants are not only physically absent, but are not fully present even in a virtual sense.

Myth #6—It is cheaper to conduct remote usability studies than local usability studies.

Our Experience—Considering the money and resources that are involved in each type of study, face-to-face usability testing is more cost effective.

For a face-to-face usability study, it costs money to recruit users, reimburse participants’ meal and travel expenses, and visit a customer site to conduct field studies. For a remote usability study, it costs money to recruit users and to pay for teleconferencing and the licensing fees for software tools. In terms of tangible costs, remote usability studies are generally much cheaper.

However, we also need to consider intangible costs to better compare the overall expenses of remote and face-to-face usability studies. The logistics, setup, and coordination of face-to-face sessions are simpler. For remote usability studies, we need to be careful about additional matters such as pairing up participants from different countries and cultures, ensuring participants’ software platforms and tools are compatible with ours, working effectively across different time zones, and the more frequent need to reschedule sessions.

For example, we recruited participants for one remote focus group session from both India and Europe, which posed a significant time-zone problem for us. To get all of these qualified participants together at the same time, some of them would need to stay up late at night to participate. They initially accepted the proposed session time, but in the end, they didn’t show up.

Another intangible cost occurs when our technology fails us. For example, we ran a remote usability session and recorded it. The session included some priceless moments when users were struggling to interact with the product. We were eager to show our recording to the product development team and influence changes. However, to our horror, the session recording failed to save. We lost the entire recording! After this episode, the entire team that was involved in the study felt completely stressed and demoralized. What dollar value should we put on a lost audio or video recording of such priceless moments? What dollar value should we put on the devastating psychological effect on the morale of the team members? We cannot assign a dollar value to such intangible costs, but their impact goes well beyond money.

Conclusion

Drawing on our collective, first-hand experiences of remote usability studies from numerous real-world projects, this article has debunked some of the myths surrounding them. We have found that remote usability studies are not necessarily a cure for all of the problems that global usability testing presents. There are many considerations other than travel expenses.

User Experience Analyst at IBM Canada Ltd.

Toronto, Ontario, Canada

Corrie Kwan

As a user experience analyst at the IBM Toronto software laboratory, Corrie often works with users from different continents, doing all kinds of user research from requirements gathering to design validation. Her work also involves defining usage scenarios and designs for modeling tools and Web applications. She graduated from the University of Waterloo with a Bachelor’s in Mathematics, majoring in Computer Science.

Senior Design Manager at IBM Internet of Things

Toronto, Ontario, Canada

Jin Li

At the IBM Toronto software laboratory, Jin’s work as a user experience lead encompasses the full software design lifecycle, from user requirements gathering to beta testing. In particular, he gathers, analyzes, and translates user requirements and usage scenarios into software and user interaction designs for application development tools and Web applications. Jin holds a Master of Science in Computer Science, with a Human-Computer Interaction option, from the University of Toronto. He is a member of ACM and UPA.

Lead UX Architect / UX Manager at RBC Investor & Treasury Services

Toronto, Ontario, Canada

May Wong

As a user experience analyst at the IBM Toronto software laboratory, May’s work involves defining usage scenarios and designs for software quality tools and Web applications. She has a Bachelor of Science in Computer Science and is currently pursuing an MBA at the University of Toronto.
