
Discussion Guide Gaffes and How to Fix Them

Discovery

Insights from UX research

A column by Michael A. Morgan
December 7, 2020

In this edition of Discovery, we’ll take a closer look at the discussion guide—the main document you use to run your UX research sessions. Whether you’re conducting a usability evaluation or a one-on-one user interview to understand a specific domain, the discussion guide is more than just a driving force behind data gathering. It also lays out the game plan for your entire study.

Let’s examine some of the potential pitfalls of discussion guides and ways to avoid them. In this column, you’ll hear from some of my colleagues and other industry professionals who have come up with clever and useful ways of using the discussion guide as more than just a simple list of questions to ask your participants.


What are some of the common problems with discussion guides? In this column, we’ll consider the following:

  • Questions are too broad to be useful.
  • Questions yield more questions than answers from participants.
  • Questions yield repetitive answers that waste precious time.
  • The sections don’t quite flow together, making it difficult for both the moderator and participants to follow.
  • There are too many questions, so there isn’t enough time to cover all of them during sessions.

Questions are too broad to be useful.

During discovery research, questions should be broad enough to cover various types of circumstances that you need to understand, but not so broad that they apply to every single circumstance. Otherwise, the participants’ responses become less useful for stakeholders.

Let’s say you’re researching finance professionals who analyze a specific aspect of finance—for example, analysts and portfolio managers who look at the environmental, social, and governance (ESG) aspects of public companies. While a question such as “What kinds of challenges do you experience in your daily workflow?” might be a good place to start your conversation, it’s too broad in scope for you to really understand the challenges users could be experiencing with the ESG-related aspects of public companies. By asking this question, you might simply end up with twenty minutes’ worth of useless data that doesn’t support your research goals.

Of course, it’s okay to start broad. Let’s look at some examples of broad questions that you can use to begin a discussion with participants.

Moderator: “Tell me about your role and responsibilities.”

But then you need to go another level deeper. Listen for any specific details relating to ESG. Give participants the opportunity to talk about what is top of mind for them. Usually these are the things that are most important to participants. If they mention ESG without your prompting them to do so, it’s probably somewhat important to them.

Participant: “I am an analyst who has been working in the equities space for the past ten years, focusing mainly on telecommunications, as well as some healthcare. We also buy some investment-grade corporate bonds.”

Get a bit more specific and listen for any ESG-related points. Your next question, while still broad, should be slightly more specific. Senior UX Researcher Mark Safire from Bloomberg recommends providing optional prompts after asking broad questions.

Moderator: “Tell me about some of the types of data you look at in the context of your daily workflow.”

Ask a probing question about types of data, corporate boards, social issues, or doing the right thing.

Participant: “We look at the fundamentals. We do some technical analysis. I peruse the headlines daily to stay on top of the businesses in our portfolio. In some cases, we’ll look at the boards of these firms to see how they might be dealing with any union or employee grievances.”

Aha! Here is an ESG-related item that the participant mentioned, which gives you a great starting point for digging deeper into ESG!

Your subsequent questions can delve further into the challenges relating to the analysis of these kinds of data.

Moderator: “What exactly are you looking for when you’re examining corporate boards?”

In cases where you’re not lucky enough to receive that kind of entry point for the key topic, you may have to move the conversation forward by asking a more direct question. However, asking more directly doesn’t mean you should ask a binary Yes/No question. A participant might say Yes just to be nice, even though the answer is really No. Mark Safire suggests a more neutral, open-ended, yet still direct approach—for example, asking “What role, if any…” to get the participant to discuss the nature of their involvement, but signal that it’s fine if there isn’t any.

To continue our example, the moderator might ask the question this way:

Moderator: “What role does ESG-related data play in your analysis, if any?”

You can start by asking broad questions, with the intent of getting deep enough to answer your research questions and satisfy your study’s goals.

Questions yield more questions than answers from participants.

In exploratory research, we often need to discover the language of our target users—the words they use in describing a task or a domain, which shape the world they perceive. Moving from background questions to more specific ones that are relevant to the topic can become a slippery slope. If you introduce words into a conversation that could be central to a subsequent part of the session, which participants experience through a stimulus such as a prototype, your language might lead your participants. Avoid using specific terminology that might or might not be part of the users’ vernacular so you don’t guide or prime them in any way.

A recent study that I conducted around data transparency provides a great example. The prototype’s user interface used the word transparency, but we needed to understand what language the participants actually used. So, in discussing this very topic at the beginning of each session, asking the question “Do you need transparency into the data you’re analyzing?” would have put words into participants’ mouths, overtly hinting at what they were about to experience. On the other hand, rewording this question could have resulted in an incomprehensible question, yielding even more questions—or, even worse, a frustrated participant response such as “What the heck do you mean by that?!”

Moderator: “What is the role of the underlying information behind the data you’re looking at in your workflow, if any?”

Participant: “Huh?!?!”

Spending time to massage the wording of such a question is worth the effort. Try running the language by your product stakeholders to learn what they think. In our data-transparency study, we ended up using the language “accessing original information sources,” and our participants understood that phrase. Getting the occasional follow-up question from our participants—for example, “To what kind of data are you referring?”—suggested that they were on the same page with us.

Mark Safire suggests breaking up difficult questions into chunks. He told me, “If something is hard for people to understand, you could break it into two sentences”—for example, “Let’s talk a bit about how you analyze data. What role do your original information sources play, if any?”

Cognitive psychologist and Bloomberg User Experience colleague Bonnie John told me she knows her questions and instructions are effective when participants’ words and actions are plausible responses to the instructions she gives them or the questions she asks.

Questions yield repetitive answers that waste precious time.

Time is your most precious commodity during research sessions. Asking the same question multiple times—perhaps in a slightly different form, but yielding the same response—forces you to sacrifice other questions or activities you want to cover with participants.

A couple of industry professionals with whom I’ve corresponded mentioned looking out for questions that yield similar responses, then eliminating the duplicate question. Aaron Miller, a Senior UX Researcher at ADP, recommends piloting your discussion guide as a way to weed out such questions.

The sections don’t quite flow together, making it difficult for both the moderator and participants to follow.

The discussion guide is more than just a list of questions on a piece of paper; it’s a scripted conversation. Bonnie John writes such scripts in what she thinks of as her own voice. Any conversation needs to have flow—a logical progression and cadence that makes human sense. Non-sequiturs such as “Speaking of dogs, tell me about your kitchen sink” would not feel right in a discussion about pet care—unless you’re doing research for a veterinary equipment company!

The best way to really know whether a section of your discussion guide is out of order is to pilot it. All of the UX professionals I consulted stressed this point. Pilot your script with your colleagues and rehearse it out loud. How does it sound? Don’t rely solely on your internal voice to make that decision. Listen to how your words sound when you’re delivering them to someone else. If something doesn’t make sense, reorganize or perhaps even remove some of the content.

Never underestimate the importance of your script’s sequence. It becomes even more important for a study in which you’re asking participants to interact with a stimulus such as a prototype. Showing participants a stimulus before asking background and topical questions would give them hints and influence their responses to tasks during subsequent parts of the session. Once you’ve exposed participants to such stimuli, the stimuli become part of their experience and shape their beliefs and perceptions. Asking your questions before showing participants anything ensures that the rationale behind their responses is based on their own experiences—not on what you’ll be showing them.

Just as with conversations, a discussion guide comprises a series of sections and the flow between them. Al McFarland, another Senior UX Researcher at ADP, approaches his discovery-research sessions—and the discussion guides that drive them—using a consistent, current state/future state format. Al said, “Practically all research studies can leverage the simple process flow for participant questioning and observation. Start with the current state of your research topic or purpose. Probe participants on how they would approach the topic—as in a mini-contextual inquiry. Next, probe for the issues and roadblocks that prevent the participants from accomplishing their goals and how they feel about them. Last, spend time probing what the participants see as the desired or needed conditions and the context that would help them more easily accomplish their goals—the future state.”
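To make Al’s format concrete, a session outline following it might look something like this. The wording here is hypothetical, not an excerpt from Al’s guides:

Current state: “Walk me through how you monitor your portfolio today.”

Issues and roadblocks: “What slows you down or gets in your way when you’re doing that? How do you feel when that happens?”

Future state: “If you could change anything about this process, what would help you accomplish your goals more easily?”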

There are too many questions, so there isn’t enough time to cover all of them during your sessions.

Have you ever finished developing your discussion guide, then realized that you actually have way too much material for the amount of time you’ve scheduled for your sessions? This is actually a good problem to have. It most likely means you’ve given careful thought to the content and direction of the conversation. However, it also means you’re going to have to cut some questions that might be really valuable in uncovering insights.

Having lots of questions in your guide might compel you to ask all of them. Consider highlighting the questions that you must ask and indicating those that are optional or could serve as potential prompts, as in the example that follows. In this way, you can ensure that you cover the essentials during your interviews.
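For instance, a section of your guide might distinguish the two visually. The labels and wording here are just one hypothetical convention:

Must ask: “Tell me about the last time you analyzed a company’s board.”

Optional prompt: “What data did you rely on?”

Optional prompt: “Who else was involved in that analysis?”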

During your sessions, keep track of questions that participants have already answered without your having to ask them. This can free up time for other questions or activities that are important to your research.

When trimming your script, look for duplicate questions. You might be asking the same question in different sections, but in slightly different ways. If the additional information you might get from asking a similar but slightly different question is worth the extra time, keep it. Otherwise, consider removing it.

Too many questions could also mean that you, the moderator, would be talking too much. ADP’s Aaron Miller suggests, “A discussion guide is working if it gets participants to do most of the talking, while ensuring they stay on track.”

To ensure that you get what you need from your participants, make your questions open-ended enough to warrant a deeper discussion on the topic. You might find opportunities to fold the questions you can’t get to into a broader, open-ended question. Your questions are the vehicle that drives your discussions with participants. They are a means to an end, not the end itself.

Conclusion: More Than Just a Scripted Conversation

To get the most out of your research sessions, consider your discussion guide as more than just a list of questions you’ll ask your participants. Think of it as the way you’ll orchestrate entire sessions—both for yourself, the researcher, and for your key stakeholders.

Use Your Discussion Guide as a Checklist

A couple of my colleagues described using checklists to ensure that they don’t forget to cover or do specific things during each session. Bloomberg’s Mark Safire includes “Setup” and “Post” reminders within certain sections of his script when he’s testing a stimulus such as a prototype. He told me, “A recent project required me to reset the prototype before each task. After the pilot test, I wrote ‘Reset prototype’ in my notes before each task!”
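For instance, a task section in a guide might begin and end with brief self-reminders along these lines. This is a hypothetical sketch, not Mark’s actual script:

Setup: [Reset prototype to the home screen. Confirm that screen recording is on.]

Task: “Using this prototype, find the ESG score for the first company in your portfolio.”

Post: [Save the recording. Note whether the participant completed the task.]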

Bonnie John designates specific sections within her guide, making it easy to check them off as she completes them, without having to look away or think about them too much.

Use Your Discussion Guide to Engage Stakeholders

Jaris Oshiro, a Senior UX Researcher at Bloomberg, describes his approach to ensuring his stakeholders remain fully engaged in the research process:

“In the past, I’ve found that it can be difficult for the layperson reading the discussion guide to fully understand why you’re asking specific questions. To get around this issue, I’ve found that it’s best, after each question, to simply add another paragraph that is titled ‘What This Answers,’ then briefly describe, in layman’s terms, why you’re asking that specific task question. This makes it a lot easier for your product managers and designers to better understand why you’re asking specific task-related questions and for you to get their buy-in for your research plans. In the long run, this ultimately helps you get greater participation in your research studies.”
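For example, a question-and-annotation pair might read as follows. This is a hypothetical illustration of Jaris’s approach, not an excerpt from one of his guides:

Moderator: “How do you decide which companies to review first each morning?”

What This Answers: How the participant prioritizes their work, whether by news, alerts, or portfolio weight. This tells designers what information the home screen should surface first.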

Use Your Discussion Guide to Differentiate Directions and Dialogue

Similar to the way a screenplay uses different formatting for dialogue that is spoken by the actors and stage directions that instruct cast members and off-stage crew members, the guide can blend the dialogue between moderator and participant with directions for on- and off-stage UX research activities. I typically format any stage directions in italics and place them within brackets. These might include any alternative paths.

For example, a question in the guide might be: “Do you ever analyze loans as part of your workflow? If so, how often?”

The stage direction following this question might be: [If they respond No, skip this section and go to the next section of the guide. Otherwise, continue.]

Bonnie John applies different font formatting to separate directions and dialogue: “I put things that I am supposed to do in a different font from things I am supposed to say, so I can visually distinguish them.”

What are some useful techniques that you use to ensure your discussion guide serves you well during a research effort? Comment on this article to share your perspective. 

Michael A. Morgan

Senior UX Researcher at Bloomberg L.P.

New York, New York, USA

Michael has worked in the field of IT (Information Technology) for more than 20 years—as an engineer, business analyst, and, for the last ten years, as a UX researcher. He has written on UX topics such as research methodology, UX strategy, and innovation for industry publications that include UXmatters, UX Mastery, Boxes and Arrows, UX Planet, and UX Collective. In Discovery, his quarterly column on UXmatters, Michael writes about the insights that derive from formative UX-research studies. He has a B.A. in Creative Writing from Binghamton University, an M.B.A. in Finance and Strategy from NYU Stern, and an M.S. in Human-Computer Interaction from Iowa State University.
