
Delivering Meaning to Your Stakeholders

Discovery

Insights from UX research

A column by Michael A. Morgan
October 5, 2020

In this edition of Discovery, I’ll complete my series on Visual Data Collection (VDC), which provides an efficient way of taking notes during research sessions. This method uses a combination of open- and closed-ended questions, along with screenshots of the prototype you’re testing. What makes my VDC method different from other note-taking techniques is the consistent use of annotations to mark up the screenshots, which makes analysis easier and lets the UX researcher efficiently tally the results across participants.

In this column, I’ll consider how you can use the VDC method to deliver meaning to your stakeholders—the people you must influence and inspire for your research to have an impact—by communicating your results effectively. Thus, this column focuses on the Deliver swimlane of the Visual Data Collection journey map in Table 1, which gives you a quick overview of the VDC method. It comprises the various phases of UX research and shows how to incorporate the VDC method into each phase.

Table 1—Visual Data Collection journey map

Set Up: Create a discussion guide with screenshots and annotations. Create an annotation key using relevant annotations.

Collect: Complete the discussion guides for each participant, adding annotations.

Tally: Compile participants’ data into a single discussion guide, or tally sheet.

Analyze: Identify common themes: the obvious, not-so-obvious, and what didn’t happen.

Deliver: Compile your findings. Include observations, evidence, and recommendations.

Delivering Meaning Is More Important Than the Medium

One of the most important aspects of the VDC method is its final step: delivering meaning. Documenting your research findings and making sure that your primary stakeholders understand and act upon them is absolutely critical. The medium you use to present your findings can differ. Throughout my career in User Experience, I’ve usually presented my findings in presentations or word-processing documents. However, I’ve also explored alternative formats such as mind maps. Programs such as MindManager® let you organize your findings into hierarchies and visualize the relationships among them. Mind mapping is a particularly useful format because it more closely resembles the way we code and analyze our UX research findings.
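To make that hierarchy concrete, here is a minimal sketch, in Python, of how you might mirror a mind map in plain data: themes branch into findings, and findings branch into supporting evidence. The names and content in this sketch are purely illustrative; they aren’t part of MindManager or any other tool.

# A hypothetical hierarchy of UX-research findings, mirroring a mind map:
# themes branch into findings; findings branch into supporting evidence.
research_mind_map = {
    "Registration": {
        "Most participants had difficulty locating the Register link.": [
            'Participant #6: "Are you kidding me?! Who could see that! It\'s sooo small!"',
        ],
    },
    "Competing Web sites": {
        "Many participants described competing Web sites with rewards programs.": [
            "Participants redeem earned points for merchandise at participating retailers.",
        ],
    },
}

# Walk the hierarchy, printing each theme, finding, and piece of evidence.
for theme, findings in research_mind_map.items():
    print(theme)
    for finding, evidence in findings.items():
        print("  " + finding)
        for quote in evidence:
            print("    " + quote)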

Regardless of what format you use in delivering meaning, the unit of meaning that you’re delivering is the same: a research finding or insight. The format is simply the vehicle for conveying your findings.

Throughout the rest of this column, I’ll cover the components that make up a finding, again using the same fictional example: Middlepacker.com, a Web site for runners. This fictional site helps people learn more about running and sign up for races in their own locale.

Compiling Your Research Findings

Once you’ve completed data collection, you’ll hopefully have mounds of useful information that you need to communicate to your key stakeholders. You’ve also identified themes—observations of behaviors that recurred either across multiple sessions or within a single session—which could help stakeholders understand how users might interact with your product. For more about identifying themes, refer to this previous edition of Discovery: “Data Analysis: Making Sense of Tally Sheets.”

All findings comprise the same three components, as follows:

  1. A key, high-level insight or observation
  2. Evidence that supports this insight or observation
  3. Key takeaways—what you can learn from this insight. These takeaways could be actionable recommendations or inform the mindset of stakeholders and, thus, influence the direction of the product.
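If you keep findings in a structured, digital form, for example, to move them from a tally sheet into a report, these three components map naturally onto a simple record. The following is a minimal sketch in Python; the Finding class and its field names are my own illustration, not part of the VDC method itself.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Finding:
    # 1. The key, high-level insight or observation
    insight: str
    # 2. Evidence that supports this insight or observation
    evidence: List[str] = field(default_factory=list)
    # 3. Key takeaways: actionable recommendations or strategic implications
    takeaways: List[str] = field(default_factory=list)

# A hypothetical finding from the Middlepacker.com example:
register_link = Finding(
    insight="Most participants had difficulty locating the Register link.",
    evidence=['Participant #2: "That would not be the first place I\'d look for a Register link!"'],
    takeaways=["Consider a more salient way of showing users where to begin registration."],
)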

Now, let’s look at each of these components in greater depth.

Identifying Your Key, High-level Observation or Insight

Remember that observations and insights are not the same thing. An observation is something researchers and stakeholders witnessed during a research session. It answers this question: What did we see and hear during the session? In contrast, an insight is something you’ve learned from what you observed during your research. Thus, it’s an interpretation of what you observed rather than an actual user behavior. Insights answer the questions in the minds of your stakeholders—for example: What can I learn from this research that I didn’t already know? Which of my initial hunches can I now confirm? Let’s look at some examples of an observation and an insight.

An Example Observation

Let’s say you’ve interviewed twelve participants when concept testing Middlepacker.com and are now analyzing your data to identify some obvious themes. During your analysis of the registration process, you noticed that all participants but one took a while to locate the Register link. Many participants became extremely frustrated, indicating that they would probably just give up and use a competing Web site that provides the same services. But some said they would look for a support phone number to call to find out how to begin the registration process. So far, everything I’ve described is an observation—not an opinion or perspective on what you observed.

Which of these observations is the main one—your key observation? Among your research findings, the headline observation is that participants’ reactions were a consequence of their being unable to find the Register link and, therefore, register.

Key observation: Most participants had difficulty locating the Register link.

This observation answers the stakeholder question: What did you see during the session? But it doesn’t give you enough information for your learning to be useful and actionable. You also need to answer the logical follow-up question: Why couldn’t participants easily locate the Register link? The supporting evidence answers this question. We’ll look at the supporting evidence shortly, but first, let’s look at an example of an insight.

An Example Insight

As I stated earlier, an insight is not what you observed, but an interpretation of what you observed. It expresses the researcher’s perspective on what you’ve learned from those observations. This is where UX researchers really add value to the process.

Let’s say that, during their user interviews, a good number of participants mentioned a few competing Web sites that offer a rewards program for each race signup. Participants said that they can redeem the points they earn from these Web sites for merchandise at participating retailers. When we asked probing questions about the rewards programs, participants seemed strongly inclined to use those sites more frequently because they received something in return for registering. From this example, we learned the following through observation:

Observation: Many participants described competing Web sites with rewards programs.

From this observation, we also gleaned a particular insight. People who have registered on competing Web sites have two motivations:

  1. Feeling healthier and more fit because they use these sites and
  2. Earning points that they can apply toward retail purchases

Your insights take your observations to a higher level of abstraction. Such insights can inform your product direction, opening stakeholders’ eyes to the possibility of reassessing their approach to the product.

Your Supporting Evidence

You must have proof, or evidence, that validates your key observation or insight. Without that evidence, you just have a bunch of data that sounds interesting, but lacks any grounding in reality. The evidence is what really counts in bringing to life what happened during your research sessions for stakeholders. There are many ways to deliver this body of evidence. Let’s look at a few of them.

Delivering the Evidence Using Quotations

Once I’ve documented an observation or insight, I typically provide quotations from participants to support the finding—noting the corresponding participant number. If good quotations are not available, I’ll instead provide descriptions of what participants said or create fictional quotations, or paraphrases, that represent what they stated during the sessions. In such cases, I typically disclose that these paraphrases merely represent what participants said, but are not exactly what they said. Of course, it is certainly preferable to have actual quotations.

Quotations are particularly powerful when they convey extreme positions. If most participants were unable to locate the Register link because they could not read the link text, try to find a few quotations that support this finding and present them to your stakeholders. For example:

Participant #6: “Are you kidding me?! Who could see that! It’s sooo small!”

Participant #2: “That would not be the first place I’d look for a Register link!”

If exact quotations are difficult to obtain—for example, because you did not record or transcribe the sessions—you can instead provide the gist of what participants said through such paraphrases. Although it might seem like you could mislead stakeholders by paraphrasing, as long as your paraphrases capture the essence of what participants said and do not go beyond the boundaries of the truth, they can be an effective way of delivering useful evidence:

Participant #6: “Wow! It’s so tiny! Who could possibly read that?!”

Participant #2: “Yeah, I wouldn’t have thought to look there first.”

Delivering the Evidence Using Narratives

Another way to convey evidence of your research findings is by providing detailed descriptions of situations that support an observation or insight. Such narratives might be less powerful than quotations, but they can provide enough color to convince your stakeholders, get them to empathize with users, and build a case for a specific decision.

An Example Narrative

For example, if the stakeholders for Middlepacker.com had no idea how the average runner would react to seeing ads on their Web site, the researcher would need to probe situations in which participants felt ads might or might not be appropriate. While a few participants elaborated on their experiences by remarking on ads on competing Web sites, all of them shared a common response, displaying an unusually high tolerance for ads. Although we weren’t able to find a suitable quotation or create a paraphrase, one participant had an interesting story to tell about her experience with a competing Web site.

Here is an example of how we could present this finding to stakeholders in a narrative format—along with our accompanying insight:

Insight: Typical runners might be willing to tolerate ads on Middlepacker.com because of their prior experiences with ads on competing Web sites.

Narrative: One participant described visiting FastFeet.com and seeing an ad for foot fungus. While this ad turned her off, she understood that such sites are possible only because of their ad revenue and the support of a passionate group of volunteers.

Making Tactical or Strategic Recommendations

Once you’ve put together your research findings—including observations, insights, and evidence—what are you supposed to do with them? One obvious answer might be to act upon them by providing recommendations to the product team. Of course, it’s not always possible to act on all of your findings. Some findings might lack sufficient specificity for you to come up with a particular recommendation or action in response to them. Let’s look at two types of recommendations: tactical and strategic.

An Example of a Tactical Recommendation

Tactical recommendations offer specific things a UX designer or product team could do to address a finding from research. Depending on the UX researcher’s background and knowledge—for example, the researcher might also be a designer—a tactical recommendation might be highly prescriptive. For example, the researcher might suggest colors or a specific user-interface element to use.

As a UX research specialist, I typically shy away from making very prescriptive tactical recommendations such as that in the following example:

Tactical recommendation: Use a green button instead of the blue link for registration.

Instead, I favor more descriptive recommendations such as the following:

Descriptive recommendation: Consider a more salient way of showing users where to begin registration.

Your job as a researcher is not to tell UX designers specifically what to design, but to advise them on what might work better with users, based on your research findings. When you give UX designers a more descriptive recommendation rather than a prescriptive one, they’ll probably be able to think of a broader spectrum of possible solutions, which the team can explore to increase their chances of success.

An Example of a Strategic Recommendation

Strategic recommendations in regard to a research finding are often less about recommending a specific thing to do and more about thinking differently about users or their experiences. During early-phase research, when a product or concept is in its infancy, you have a greater opportunity to affect the product’s direction. In fact, early-phase, strategic recommendations can form the cornerstone of your product strategy.

Let’s say that, during the course of your UX research, a few participants mentioned being able to speed up their registration on competing racing Web sites by using third-party registration tools. They described a seamless process that saved them a lot of time they would otherwise have spent filling out complicated forms. Some said they would simply abandon the registration process if a racing site forced them to complete a convoluted registration form. This information would be extremely valuable for stakeholders to know as they began sprint planning for their site’s core features. Such a finding and its accompanying strategic recommendation might look something like the following:

Insight: A few participants mentioned complicated registration processes as a major barrier to their using racing sites.

Evidence:

  • They prefer using sites that have express, third-party registration processes.
  • They see long, convoluted registration forms as showstoppers.
  • P6 said: “No way! That’s too much work just to sign up for one race!”

Recommendations:

  • Ensure that the registration process is simple and fast or risk losing new customers.
  • Consider using third-party registration tools as a way of expediting the registration process.

Qualifying the Quantities: Using Counts in Your Findings

You might have noticed that, in the examples I’ve used throughout this column, I haven’t used any actual numbers. Instead, I qualify the quantities. So, instead of saying, “6 of 10 participants took a while to find the Register link,” I would say, “Some participants took a while to find the Register link.” Because UX research studies typically use small sample sizes, it is best to leave numbers out of the conversation altogether. However, to remain consistent in your use of language, I suggest you create a table that maps your descriptors of quantities—for example, many—to actual quantities. If, during a presentation, a question ever comes up regarding what some means, you can then give a consistent answer. Table 2 provides an example of quantity mapping. If you think your stakeholders would want the actual numbers, you could always append them to descriptors within parentheses—for example, “Many (7) participants….”

Table 2—Qualifying quantities for a study with 10 participants
Quantity   Percentage   Descriptor
1          10%          “One participant…”
2          20%          “A couple…”
3          30%          “A few…”
4          40%          “Some…”
5          50%          “Some…”
6          60%          “Some…”
7          70%          “Many…”
8          80%          “Most…”
9          90%          “Most…”
10         100%         “All…”
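If you ever generate summaries programmatically, say, from a tally sheet, you can encode Table 2 as a small helper. Here is a minimal sketch in Python, assuming a 10-participant study and the thresholds shown in Table 2; the function name and the optional count-in-parentheses behavior are my own illustration.

def qualify_quantity(count: int, sample_size: int = 10, include_count: bool = False) -> str:
    """Map a participant count to a descriptor, per Table 2."""
    share = count / sample_size
    if count == 1:
        label = "One participant"
    elif count == 2:
        label = "A couple"
    elif share <= 0.3:
        label = "A few"
    elif share <= 0.6:
        label = "Some"
    elif share <= 0.7:
        label = "Many"
    elif share < 1.0:
        label = "Most"
    else:
        label = "All"
    # Optionally append the raw number, as in "Many (7) participants..."
    return f"{label} ({count})" if include_count else label

print(qualify_quantity(7, include_count=True))  # prints: Many (7)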

Conclusion

You can use the approach I’ve described in this column to present your UX research findings in a way that delivers meaning with the VDC method. However, you can apply the same approach even if you’re not using the VDC method. While the medium through which you deliver your findings might change, at their core, they always include an observation or insight, the supporting evidence, and your recommendations.

This is the final column I’ve planned for my series about Visual Data Collection (VDC). But perhaps I’ll publish more columns in the future about how others are adapting this method for their own purposes. If you’ve tried this method, I’d love to hear from you in the comments. Or you can message me on Twitter or LinkedIn. It would be great to hear your stories.

For those of you who missed my earlier columns on the VDC method, I encourage you to read the past editions of Discovery in this series.

Senior UX Researcher at Bloomberg L.P.

New York, New York, USA

Michael A. Morgan has worked in the field of IT (Information Technology) for more than 20 years—as an engineer, a business analyst, and, for the last ten years, a UX researcher. He has written on UX topics such as research methodology, UX strategy, and innovation for industry publications that include UXmatters, UX Mastery, Boxes and Arrows, UX Planet, and UX Collective. In Discovery, his quarterly column on UXmatters, Michael writes about the insights that derive from formative UX-research studies. He has a B.A. in Creative Writing from Binghamton University, an M.B.A. in Finance and Strategy from NYU Stern, and an M.S. in Human-Computer Interaction from Iowa State University.
