When Role Playing Doesn’t Work: Seven Guidelines for Grounding Usability Testing in Participants’ Real Lives

September 8, 2008

“Assume you live in San Francisco and you are looking for a hotel for a friend who is visiting you. You type hotel in San Francisco into a search box and get these results. Have a look at the results and select three hotels.”

Usability testing makes use of a lot of role-playing scenarios like this one, and many findings and design recommendations result from participants’ responses to these scenarios. But an over-reliance on role playing when testing a product and making design recommendations can have major downsides and risks, including the following:

  • identifying false usability issues and user needs that lead design iterations in the wrong direction and result in poor user experiences—potentially leading to users’ abandoning a product and revenue losses for a company
  • overlooking serious usability issues
  • losing opportunities to gain important insights
  • calling the reliability and quality of study findings into question

This article presents some common limitations and downsides of role playing in usability testing and provides guidelines for avoiding them by grounding usability testing in participants’ real lives. All of these guidelines come from my experience in user research—mainly from testing Web sites—but they extend to all types of product usability testing. What makes the difference is the participants’ level of engagement with the product being tested, not the product itself.

These guidelines are especially applicable to qualitative studies, where the goal is to understand whether participants are able to achieve a goal or perform a task they have in mind, see what problems they encounter, and identify the mental processes behind their behaviors—why they do what they do. Some of these guidelines might be less applicable if you need to collect strict performance data and statistical measures of usability—for example, success rates or times to complete a task—or if you need to resolve very simple usability issues. These guidelines are also better adapted to formative studies than summative ones.

However, whatever type of usability testing you are doing or type of product you are testing, it is always important to consider whether the artificiality of the testing conditions might bias your findings and to what extent you should ground usability testing in participants’ real lives.

1. Recruit passionate users.

By recruiting passionate users, I mean recruiting participants who

  • use the type of product or content you are testing in their real lives
  • are truly interested in the product or content
  • have an emotional connection to it

Recruiting participants who typically use the type of product or content you are testing in their real lives may seem obvious, but it is easy to overlook this when you are absorbed in the recruiting process and confronted with external deadlines and constraints on recruiting. This mistake is especially easy to make when testing a product or Web site that is somewhat specialized, but pretty easy for anyone to understand—for example, a dating, movie, or travel Web site.

Example: Testing a Dating Web Site

When testing a dating Web site, one might think: Oh, pretty much everybody knows about dating Web sites or has used one at some point, so let’s just ask the participants we’ve recruited for this other Web site to pretend they’re looking for a mate. If you’re testing common usability issues like the discoverability of a data refinement menu, this approach can work. The personal engagement and interest of participants in the activity of dating online would have very little impact on the discoverability of the menu.

However, if you are testing user perceptions of the subscription process—Does it take too long to subscribe and create a profile?—involving participants who are not interested in dating could be misleading. The emotional involvement a participant has in the process of finding a mate online can have a big influence on his perception of the process. If finding a mate is crucial to a participant, he might be ready to invest a lot of time and energy in it. His perception of the length of the process might be very different from that of someone who is not emotionally involved in the process. What one person might find unendurably long, the emotionally involved participant might find okay—if the process is so important to him, he’s ready to go through it.

It is simply impossible to put yourself in the shoes of somebody who is emotionally involved in the process of looking for a mate if you are not. You can’t predict how a person who is emotionally involved would react. Participants don’t need merely to understand the content; they need to be emotionally involved in the process.

This is what Jared Spool calls “passion and its effect” [2]: “Passion on a subject changes how participants invest in usability test tasks. That change can have profound effects on the results and the recommendations produced by the team.” Passion is about a participant’s emotional connection and engagement with a topic of interest. It is a user’s emotional connection to and knowledge of a topic that changes his perception of it. You can’t predict how a person who is passionate and knowledgeable about a topic will react.

Therefore, you must recruit people who are actually involved in the process you are testing and are passionate about it. This is especially true if you want to test how users respond to specific content or functionality, how they navigate and search for information, or whether certain content is useful and valuable to users.

For simple, general usability issues—such as the discoverability of a menu item or content on a page—role playing might not be an issue. The degree of a participant’s engagement with a site or topic might not have much effect on its discoverability. Likely, you would be able to assess such issues by testing with any Internet users, regardless of whether they are into dating.

However, it is always important to ask yourself to what extent having participants role-play might affect the reliability of your test findings. Sometimes, it won’t; other times, it will.

2. Have participants perform realistic tasks.

The tasks participants perform during usability testing should match what they do with a product in their real lives. This is another guideline that sounds straightforward to usability experts, but it’s easy to overlook its importance when you are occupied with designing well-crafted role-playing scenarios that can answer your client’s or stakeholder’s questions about a Web site.

Example: Testing a News Release Search Engine

I once participated in usability testing for a corporate Web site, in which one of the goals was to test the effectiveness of a search engine in searching for the company’s news releases. We crafted well-written tasks in which we asked participants to find specific news releases. Everything went smoothly and the client was happy. The issues we identified allowed us to provide actionable recommendations for improving the presentation of the search results.

One of the tasks we gave to participants was to search for a news release the company had published about a specific product, on a specific date two years before the test. The search engine provided many pages of results. The only way to find the particular news release was to guess what page of results would include the news release from that specific date and randomly click a results page number at the bottom of the page. Finding the correct news release was a long and difficult process.

Our recommendation was obvious: We needed to let users browse the results by year and month! Therefore, we designed a well-crafted, chronological sorting menu. Everything seemed perfect. However, six months later, an external firm evaluated the Web site and pointed out that “the navigation by month and year was heavy and doesn’t necessarily match users’ needs.” Then, I asked myself: What if that firm is right and searching by month is not one of our users’ primary goals? I realized we had based our new design entirely on a single task the client had provided, and I didn’t know whether this task was representative of the way users look for news releases. It might have been—but it might not. Maybe our conclusions weren’t far from the truth, but we hadn’t checked our assumption and had no way of knowing for sure. We could have been totally wrong and based the design on a false assumption that nobody had ever challenged.

In this case, we had made the mistake of not checking whether the tasks our client gave us were actually the tasks the key users of the Web site performed in real life. While our mistake appeared obvious afterward, when you have to craft a usability test for a client on a tight schedule, it can be easy to overlook this step. While the consequences of neglecting to determine the appropriate tasks to test might be less critical in the case of an informational Web site, for a shopping Web site, this kind of mistake can be really damaging to the bottom line. Jared Spool gives a striking example: “The design recommendation seemed solid, yet sales had dropped 23% immediately after the changes were made.” [2]

Devising Realistic Tasks

There are several possible solutions to this problem, as follows:

  • investigate users’ needs and tasks
  • gather data about users’ real tasks
  • tailor task scenarios to participants’ real tasks
  • ask participants to actually purchase products during a usability test

The following sections describe them in detail. You can use these approaches alone or in combination with each other.

Investigating Users’ Needs and Tasks

Ideally, you should thoroughly investigate the needs and tasks of users beforehand, through extensive interviews or field research, and devise your tasks based on the knowledge you gain. Then, you can be sure your tasks represent users’ real behaviors and needs.

Gathering Data About Users’ Real Tasks

If you can’t conduct extensive user research, you can gather data about users’ real tasks through alternative means, then use this data to adapt your tasks to users’ real needs. To obtain this data, you can do any of the following:

  • Ask your stakeholders or clients. They sometimes know more than they think they know.
  • Get data from Customer Service.
  • Determine whether the company has conducted any previous studies, including market research or surveys. It is amazing how many reports lie dormant within organizations; we reinvent the wheel again and again because we don’t know these reports exist.
  • Look at some external literature. Others might have conducted studies on what users look for in a similar context.
  • Talk to some people you know who might fit user profiles—for example, coworkers, friends, or family members. This is not ideal, but in situations where you badly lack data, doing this might be helpful.

Tailoring Task Scenarios to Participants’ Real Tasks

You can collect information about participants’ real tasks during test sessions and tailor the task scenarios to each participant as you go. Ask participants to do something they actually need to do with your product—or to do something they’ve done recently—while you observe what they do.

Example: Testing a Yellow Pages Web Site

When testing a Yellow Pages Web site, I included a free-form task at the beginning of each test session—which I’d tailored to the needs of specific participants—before asking participants to perform the tasks I had prepared for them in advance. I asked participants to think of something for which they needed to search using the Yellow Pages and to search for it. Because participants were searching for something they really needed, they were totally immersed in the task—they did not have to pretend. The task was real, not artificial. I obtained very rich and unexpected insights from what I observed—about the quality and relevance of the search results and the discoverability and usefulness of the results refinement menu.

Depending on your situation, you can use variants of this approach to make tasks more realistic, as follows:

  • Mix free-form and predefined tasks.
  • Ask participants to think of some tasks they actually need to perform on a Web site before the session, as a homework assignment.
  • Follow Jared Spool’s Interview-Based Tasks approach [2], in which you add 30 minutes at the beginning of a test session for interviewing a participant about his passions, then create the test tasks with him, based on his passions and interests.

If project constraints don’t let you tailor your task scenarios to each participant, you can either

  • dedicate the first five minutes of a session to investigating participants’ current goals and tasks on a site—If appropriate, you can try to adapt your predefined tasks slightly. (See Guideline 6.)
  • investigate the realism of each predefined task—For example, when presenting a task to a participant, ask: “What is your experience doing this kind of thing?” “Is that something you would do?” “What do you do in general?” If you can’t do more in-depth user research, this can, at least, help you evaluate a participant’s engagement with the task and, therefore, the reliability of your findings, based on each participant’s behavior or reaction when performing a task.

Asking Participants to Purchase Products During a Usability Test

If you are testing an online shopping site, you can ask participants to actually shop for a product. Ask participants to search for and buy a product they need or want during the test session. You can give them credit card information to use when making the purchase. Each participant’s incentive is the product he or she will buy. If a task is one a participant would typically perform using the site, this technique helps add more realism to the way the participant engages with the site.

3. Test using content that’s meaningful to participants.

Sometimes—because of either development constraints or the unfamiliarity of your stakeholders with usability testing—you might end up in a situation where you are asked to perform a test using content participants wouldn’t use in real life or even comprehend. For example, they might want you to ask participants to evaluate a description of an IT position, when participants don’t know anything about IT and would never look for such a position in real life.

It’s common to encounter such situations—particularly when testing mockups or prototypes for which only a small sample of content or data is available. Sometimes, the only content you have isn’t the type of content participants would use in real life. However, it’s important to ensure the content you use when testing is as relevant to participants as you can make it.

Example: A Job Description That Doesn’t Apply to Participants

When I was testing a job search Web site, I needed to test the relevance of the search results and the job descriptions to participants. The primary stakeholder had prepared a series of mockups presenting a fictitious IT position and wanted participants to give feedback on them. He wanted answers to the following questions: How would participants choose a position they were interested in on the search results page? What kind of content would be useful to them on the description page?

Since our participants matched a very general profile and did not work in IT, it was impossible to ask them these questions: they were unfamiliar with IT positions and would never look for this type of position in real life. How could somebody who would never look for such a position put himself in the shoes of somebody who would and select an appropriate job? You can’t simulate knowledge you don’t have.

However, the stakeholder was anxious to have feedback on specific listings and descriptions. To minimize bias and answer the stakeholder’s questions, I mixed free-form searches on the live Web site—where a participant could search for a position in which he was really interested—with predefined searches using the mockups. For the predefined searches, we chose a type of position anybody could understand—like a writer—rather than a specialized position that required very specific knowledge, like IT. Conscious of the limitations of our approach, I interpreted the results carefully after conducting the study.

Tips for Choosing Meaningful Content

How can you ensure content is meaningful to participants?

  • Don’t conduct usability tests using content participants would not use at all in their real lives.
  • Adapt your test tasks to specific participants on the fly, showing them content they understand and are familiar with. Either at the beginning of a test session or before a specific task, ask participants what content they’re interested in, then tailor the predefined task by using that content.
  • If you absolutely must test specific content or pages, mix free-form tasks that are tailored to each participant with predefined tasks to minimize bias. Ensure a page you’re testing is at least understandable to participants and be aware of its limitations.

4. Ask probing questions about content participants are passionate about.

As I discussed under Guideline 2, you can gather richer and more reliable insights by asking participants to react to content they’re interested in or are passionate about. If you’ve crafted a test plan before a test session, there are ways in which you can make changes to it on the fly that can greatly enrich the insights you gather.

Example: Testing Features of a Search Engine

When testing the usability of a site’s search engine, I had to determine whether participants could understand the various features and refinement options on the search results page and would find them useful. In my test script, I had included a mix of free-form searches that let participants search for anything they wanted and predefined searches, which required all participants to search for the same thing.

I had planned to ask probing questions during the predefined task about participants’ comprehension of the features on the search results page and their usefulness. However, when conducting the test, I found I got more accurate, richer insights when I asked about their understanding and the usefulness of the features while participants were doing their own searches. When participants were searching for information that was meaningful to them and were totally immersed in the task, it was much easier for them to understand and judge the usefulness of the features and the appropriateness of their labels. So, I adapted the test tasks on the fly and asked probing questions about the search features when participants were doing their free-form searches.

Example: Interviewing Participants About Their Remodeling Projects

To assess user needs for a home remodeling Web site, I conducted twelve non-directive interviews about the process participants went through during a recent remodeling project. To ensure I covered the main types of home remodeling projects the Web site included—for example, bathroom, kitchen, or painting—I decided to interview two participants who had completed each key project.

At the beginning of each session, I asked participants to briefly describe the various projects they’d been involved in. Then, I picked the one I needed to ensure good representation of each type of project among my twelve interviews.

One of the participants was a musician and was currently building a music-recording studio. He was obviously passionate about it. But since that type of project was not among those I was investigating, I did not choose it as the theme of the interview, but instead chose a small kitchen-remodeling project he had just finished with the help of his wife. He was obviously not particularly interested in this project, so I had to continually prompt him to get him to talk about it. It was painful both for him and for me! I got neither breakthrough nor rich insights. When talking about the kitchen-remodeling project, he frequently referred to his studio project. In the end, I just decided to let him talk about that project. I couldn’t stop him anyway, and I got many rich and very original insights about the process he followed during his project. I didn’t have to probe much at all. He was so passionate about that project that he could talk about it for hours.

When doing the remaining interviews, I decided to choose the project each participant seemed most interested in and passionate about, instead of focusing only on the one I needed for my sample. I let participants describe their different projects and could easily tell by listening to them which one inspired the most energy and passion. Thus, I was able to glean rich insights from all of the interviews.

Tips for Asking Good Questions

What are good questions?

  • Whenever possible, ask probing questions about topics participants care about and have chosen themselves.
  • Do not hesitate to deviate from your initial plan and, instead, ask questions about things participants are more interested in or passionate about. You’ll get richer and more reliable insights. Of course, depending on the goals of your study, you should consider to what extent you can deviate from your plan without compromising the study.
  • Ask participants about their interests and product needs at the beginning of each session, to identify what they’re passionate about. Use the information you get from them to adapt your questions and tasks to fit their passions.

You are better off deviating from your initial plan and getting more insights than sticking with your plan and getting poor insights. Get off the beaten path!

5. Test using participants’ real data.

Sometimes, when testing prototypes in which participants need to input data, usability professionals create fictitious data that simulates the real thing and ask participants to pretend it is their data. This is often necessary when testing

  • prototypes of applications—such as banking, tax, or medical applications—or Web pages that display personal data from a user profile
  • mockups or paper prototypes of search engines, for which you might need to create canned queries

However, asking participants to react to fictitious data has some downsides we should be aware of.

Example: Testing a Feature Using a Paper Prototype

When testing user comprehension and the ease of use of a new feature, I tested a page that listed queries users had made using an internal search engine. Since it was a paper prototype study, I had to make up the queries. To make these static mockups more accurate, before the test sessions, I asked participants to tell me about some searches they intended to do on the site. I tailored a paper prototype to each participant by choosing queries related to the searches they’d described. However, I still had to make up the exact queries. Since they had not typed the queries themselves, many participants were unable to recognize them as queries and, instead, thought they were categories we’d made up to better organize their searches.

A few weeks later, we tested the same feature using a live prototype that let participants do their own searches and enter their own queries. All of the participants were able to identify their own queries on the page listing the queries. We also found out that users’ expectations of what would occur when they clicked a query did not match what we had intended the system to do. We had not noticed this during the previous study.

Example: Testing Financial Applications at Intuit

At Intuit [3], user researchers have obtained great value from using real data when testing Intuit software for banking, personal taxes, and managing healthcare finances. Among other benefits, they’ve noticed usability issues that they’d overlooked during previous usability studies in which they’d used fictitious data.

These examples show that, by testing with fictitious data, you can both miss important usability issues and identify false ones. Another major downside of using fictitious data is that you can’t be sure the usability issues you do find are valid or that you’ve identified them all. Since participants are less engaged with the data, they are also less fully immersed in the task. Therefore, you might overlook some of the usability issues they would encounter in a real situation, when using their own data.

Tips for Testing with Real Data

How realistic can you make test data?

  • Use participants’ real data whenever possible. Either integrate their data into the prototype or ask participants to bring their own data to the test session. For example, when testing a bookmark application at Yahoo!, one of my coworkers asked participants to send their bookmarks to her before the test sessions began. Thus, she was able to make the test more realistic.
  • If you can’t let participants enter their real data, be creative and find ways to provide the most realistic data possible. For example, if you’re creating mockups of a search results page, ask participants to send you the searches they are planning to do, then try to adapt the mockups to their searches.
  • Think about the trade-offs of having participants use their real data. Sometimes, using real data is very valuable, but sometimes, it is not. For my paper prototyping study, I had spent hours adapting the paper prototypes for each of the participants and printing them out, but in the end, I found doing so was not worth the time I had spent.
  • If you can’t use participants’ real data and must use fictitious data—as often happens in studies that occur during the early stages of development:
    • Try to make the data as realistic as possible.
    • Be aware of the limitations of using fictitious data and apply some caveats to the results.
    • Conduct additional studies later in the development process, in more realistic situations, and triangulate the data coming from these different studies.
    • Always question both the bias your methods can introduce and bias in your findings.
    • Take time to investigate participants’ contexts of use and encourage participants to think about their own situations when using the product, as Guideline 6 demonstrates.

6. Investigate participants’ contexts of use.

Real-life situations are rarely ideal. Project constraints may require usability professionals to test specific products, Web pages, or content—without knowing whether participants will find them interesting—or to use fictitious data to test early prototypes. What can you do in these cases to minimize bias?

One mistake usability professionals sometimes make is to plunge into the task scenarios right away at the beginning of a test session, without investigating a participant’s context of use—for example, by asking the participant questions about the product being tested and his experience with it. This can happen for several reasons:

  • A researcher might think he won’t have time to cover all the tasks. There are so many!
  • He might think asking such questions is not useful—that the data won’t help answer his questions—so he’s better off not asking them.
  • He might be afraid of the unknown and getting off the beaten path by experimenting with something new.
  • Clients or stakeholders might resist investigating participants’ contexts of use, because they don’t see tangible value in asking such questions.

However, taking the time to ask these questions is essential to correctly interpreting participants’ insights, especially when the content and tasks are predefined, and you cannot tailor them to each participant.

Asking participants about their contexts of use and their interests at the beginning of a test session helps you to assess the reliability of participants’ feedback throughout the session. It also helps put participants’ comments into perspective. Finally, it lets you tailor some of the tasks to the participants’ specific interests.

Example: Testing a Product Concept for Making Charitable Donations

To test the value of a new product concept for charities, I used a comics storyboard walkthrough. In this method, developed by Mark Wehner, a Customer Insights Researcher at Yahoo!, participants walk through a comics storyboard illustrating a product concept and provide feedback on the value of the concept and on whether and how it fits into their real lives.

Knowing beforehand about participants’ interest in charities and in making charitable donations was key to evaluating the validity of their responses to the product concept. Their experience with and interest in making donations would influence both their responses and their interest in the concept. Not being aware of participants’ experience and level of interest might have misled us when interpreting their responses.

Therefore, at the beginning of each session, I spent time asking participants about their experience and interest in making donations to charities. This helped give greater validity to their responses to the concept and helped me understand why they were or were not interested in the concept. Also, knowing about the types of charities they were interested in—for example, charities focusing on the environment or AIDS—allowed me to adapt my storytelling to each participant’s situation. As I went through the comics storyboards we’d created for a specific type of charity—animal protection—I invited the participants to mentally replace that charity with whatever type of charity interested them. This increased the participants’ emotional connection to the stimuli, giving me more reliable insights about how this concept might fit into participants’ lives.

Tips for Investigating Contexts of Use

When investigating a participant’s context of use, you can adapt these tips to your specific situation.

  • What should you investigate?
    • participants’ motivations and interest in the product or content being tested—“How interested are you in…?”
    • participants’ experience with the product or content itself or with looking up information about it online, if you’re testing a Web site—“What is your experience with charities?” “What types of businesses are you looking for online?” “What have you looked at recently?”
    • any other aspects of the participants’ experience that might help put their reactions to the tasks into perspective—for example, good or bad experiences
  • How long and when should you investigate?
    • Depending on the goals of your study and the constraints you have, your investigation can be as short as 1 minute or as long as 10 to 15 minutes or more.
    • You can investigate participants’ contexts of use before they perform related tasks or integrate your investigation into each task. For example, when showing the product concept relating to charities during test sessions, I might have asked participants to imagine this was a charity they were interested in, then asked, “What charities are you interested in, if any?”
  • Why is this indispensable? Investigating participants’ contexts of use helps you to
    • assess the reliability of participants’ responses to a product
    • put participants’ reactions and behaviors into perspective
    • tailor some tasks to participants’ interests and needs—Even if you’ve created fictitious data beforehand and it doesn’t match participants’ real-life situations, you can encourage participants to think about their own day-to-day lives.
    • decide whether testing a specific product or content with a participant is worthwhile—For example, if a participant is not interested in news at all, it’s pretty much useless to ask him how he would select news articles on a news Web site.

7. Adapt to each participant on the fly.

Grounding usability testing in participants’ real lives requires that you

  • adapt to each participant’s context and tailor your tasks, your storytelling, or your examples to that participant and his or her passions, interests, and needs—For example, if a participant tells you he’s passionate about cars, ask him to search for information about cars.
  • are willing to change a task—or skip it altogether—if you realize a participant cannot relate to it at all
  • allow unpredictability and flexibility—Don’t follow a test script by the book! Don’t be afraid of going off the beaten path! You might encounter an unexpected, enlightening, or rich insight that will help you have more impact on the design.

Conclusion

The less your test tasks involve role playing and the more they are grounded in participants’ real lives and passions, the more you will get out of your test sessions. When participants are better able to relate to a task, content, or data you are showing them, you’ll get richer and more reliable insights that will help you to better evaluate and improve your product. By grounding usability testing in participants’ real lives, you will get tremendous benefits, including

  • more valid and accurate insights
  • richer insights that artificial, role-playing situations would not reveal
  • assurance that your findings reflect what would happen in real life

You’ll also minimize some of the risks that are associated with artificial, role-playing situations, such as

  • identifying false usability issues and user needs that lead design iterations in the wrong direction and result in bad user experiences—potentially leading to users’ abandoning your product and revenue losses for your company
  • overlooking serious usability issues

Try to make your test sessions as realistic as possible. Bring the field study approach into the lab by doing naturalistic usability tests.

If you have constraints that prevent your following all of these guidelines, at least take the time to investigate participants’ contexts of use and interests at the beginning of your test sessions, and try to tailor some tasks to participants on the fly or invite participants to think about their everyday lives while performing a task. With the constraints you have, do everything you can to make your tasks as realistic as possible.

Even the best naturalistic usability testing—whether in the lab or in the field—has its downsides and biases. Just by asking a participant to perform a task and observing the participant, you create an artificial situation, in which you lose some of the realism of the participant’s using the product on his own. It is essential to be aware of the biases and limitations of your studies and triangulate their results with those you derive from other methods that are more grounded in participants’ real lives or with behavioral data you’ve gathered in real-life situations—such as through bucket testing or longitudinal studies.

Customer research is the art of balancing data from different kinds of studies—whether realistic or role-playing, in the lab or in the field, self-reported or behavioral, or one-time or longitudinal. Always question the validity and limitations of your findings—and of your methods—and try to balance them with other data to the best of your knowledge. We generally have to make trade-offs; the most important thing is to make them consciously.

Bibliography

1. Rubin, Jeffrey, and Dana Chisnell. Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests. Indianapolis, Indiana: Wiley Publishing, 2008.

2. Spool, Jared M. “Interview-Based Tasks: Learning from Leonardo DiCaprio.” User Interface Engineering, March 7, 2006. Retrieved June 8, 2008.

3. Zazelenchuk, Todd, Kari Sortland, Alex Genov, Sara Sazegari, and Mark Keavney. “Using Participants’ Real Data in Usability Testing: Lessons Learned.” CHI ’08 extended abstracts. Retrieved June 8, 2008.

Isabelle Peyrichoux

Chief Research Officer at Brilliant Seeds LLC, San Francisco Bay Area, California, USA

Isabelle has 20 years of experience applying her research and analysis skills to User Experience and career exploration. With a multifaceted education and background in information science, psychology, coaching, personality differences, and relational mastery, she has contributed to innovative approaches in the fields of UX research and career exploration. Isabelle has created a step-by-step approach to career reinvention that goes beyond the limitations of traditional career counseling and has helped hundreds of people find fulfilling careers. Currently based in the San Francisco Bay Area, she has lived and worked in France, Switzerland, the United Kingdom, and Canada and has conducted several international user-research studies. As a user researcher, she has worked for companies such as Yahoo!, Bell Canada, and the French Speaking University Agency. An engaging speaker, Isabelle has given presentations and workshops for professional associations and at conferences such as the IA Summit and General Assembly.
