
Sensemaking with Annotations

Discovery

Insights from UX research

A column by Michael A. Morgan
April 23, 2019

In April 2017, I published an article on UXmatters that describes the Visual Data Collection (VDC) method. This method has proven especially valuable during the early, concept-testing phases of product development, when teams conduct one-on-one user interviews while creating early design prototypes. Subsequently, I presented this method at REcon 18, an annual conference at which experts in the field of UX research present timely and interesting UX-research trends.

In this edition of Discovery, my focus is on the annotations I’ve developed to help researchers during notetaking and analysis. In my examples, I use a fictional music-exchange Web site called EarWorm.com, where customers can buy and sell music. For a refresher on how to use the VDC method, read my article, “Increasing Your Research Velocity with Visual Data Collection,” which puts all of the components of the VDC method together.


Annotations: A Brief Recap

In this context, annotations comprise a small set of symbols that a UX-research analyst should use consistently across all participants in a study. Their consistent use gives the analyst a quick, easy way of aggregating results for theme and pattern matching. Document your set of annotation symbols on a single cheat sheet, or annotation sheet, similar to that shown in Figure 1.

Figure 1—The annotation key for a particular study

The Annotation Sheet

For research sessions, print out this sheet as a quick reference for notetakers to use in case they forget which symbols to use. Ideally, there should be no more than a dozen types of annotations. As you add more and more symbols, it becomes more difficult to keep track of annotations and use them in a consistent manner. If you think you have too many types of annotations, consider phasing out the ones you’re less likely to use or that aren’t in line with your current project’s research goals and objectives. You won’t use all of these annotations for all studies. It’s important to customize this sheet, capturing only the annotations that are absolutely necessary for a study.
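
If you keep the master list of annotations in a lightweight digital form before printing the sheet, trimming it to a study’s research goals becomes a matter of filtering a small lookup table. What follows is a minimal sketch in Python, not part of the VDC method itself; the symbols and wording are placeholders of my own.

# A hypothetical master list of annotation symbols and their meanings.
# These entries are illustrative placeholders, not an official VDC set.
MASTER_KEY = {
    "1^, 2^, ...": "sequence of action, combined with interaction type",
    "UC": "use case",
    "A": "anecdote",
    "?": "point of confusion",
    "U / NU": "understood / not understood",
    "!": "idea",
}

def build_cheat_sheet(symbols):
    """Keep only the annotations that serve this study's research goals."""
    return {symbol: MASTER_KEY[symbol] for symbol in symbols}

# For a study focused on comprehension and new ideas:
study_key = build_cheat_sheet(["?", "U / NU", "!"])
for symbol, meaning in study_key.items():
    print(f"{symbol:12} {meaning}")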

Now, let’s take a look at some types of annotations I’ve found extremely useful for notetaking and analysis.

Sequences of Action and Interaction Types

Some of the first annotations I used represent the sequence of actions a participant takes within a digital experience, as well as the types of interactions the participant uses. The sequence-of-action annotations are important in determining what participants notice most. The interaction type lets you understand how participants instinctively want to engage with page designs. This is especially important for mobile experiences, where the distinction between what might be swipeable versus tappable matters a great deal to interaction designers.

These annotations—sequence of action and interaction type—represent two discrete dimensions of interactivity. Representing them as separate annotations lets you describe the order in which a participant goes through a prototype, without your necessarily having to describe how they interacted with it and vice versa.

Because the majority of the designs for which I have conducted research involved desktop prototypes, the interaction types were fairly limited—the user either clicked or tapped screen elements. That’s it! With such limited possibilities for interaction types, I folded the interaction type into the sequence-of-action annotation, as shown in Figure 2. Combining sequence of action and interaction type into one annotation let me capture two important aspects of the design simultaneously. (If I were to test mobile software that involved swiping as well as tapping, I would consider adding annotations to record swipes—possibly combining them with the sequence of action.)

In the scenario shown in Figure 2, the participant clicked the view more link first, then the Share button, then tried to click the album cover. The numbers denote the sequence of action, while the caret symbol indicates the interaction type. Combining these two annotations saves space and time! Plus, having fewer symbols to remember helps you to focus on the participant’s behavior.

Figure 2—Combining sequence of action and interaction type
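
If you later transcribe these handwritten marks into a spreadsheet or script for aggregation, it helps to keep the two dimensions as separate fields, even though they share one symbol on paper. Here is a rough sketch in Python of what such a record might look like; the field names and element labels are my own assumptions, not part of the VDC method.

from dataclasses import dataclass

@dataclass
class InteractionNote:
    """One observed interaction: its order in the sequence plus its type."""
    order: int        # sequence of action: 1, 2, 3, ...
    interaction: str  # interaction type: "click", "tap", "swipe", ...
    element: str      # the screen element the participant acted on

# The Figure 2 scenario, transcribed: view more, then Share, then the album cover.
notes = [
    InteractionNote(1, "click", "view more link"),
    InteractionNote(2, "click", "Share button"),
    InteractionNote(3, "click", "album cover"),
]

# Because the dimensions are separate fields, you can query either one alone.
first_touched = min(notes, key=lambda n: n.order).element
clicked_elements = [n.element for n in notes if n.interaction == "click"]
print(first_touched, clicked_elements)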

Use Cases

Often, during early-phase concept testing, stakeholders might want to discover what types of tasks users need to perform with the system. They might have a rough, but still uncertain, idea about how people would use a product. Borrowing the language of business analysts and product owners, we came up with the use-case annotation.

According to usability.gov, a use case describes how a user performs tasks using a Web site or application. It also describes how a system responds to a user’s requests. The use case describes the context of use at a very basic level. When I was a business analyst, the use case formed the basis for the functional requirements that I provided to the engineers who were building my organization’s systems. Use cases comprise three main components:

  1. Preconditions for successfully completing the task
  2. The task the user is performing
  3. Expected feedback from the system

In the VDC method, the use-case annotation corresponds to the second of these components—the task the user is performing.
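
To make the relationship between these components and the annotation concrete, here is a rough sketch of a use case as a record, based on the usability.gov definition above. The structure and wording are my own illustration; in the VDC method, the use-case annotation captures only what goes into the task field.

from dataclasses import dataclass
from typing import List

@dataclass
class UseCase:
    """A use case, per the three components described above."""
    preconditions: List[str]  # what must be true before the task can succeed
    task: str                 # the task the user is performing; the use-case
                              # annotation records this component
    expected_feedback: str    # how the system should respond

discover_similar = UseCase(
    preconditions=["Participant is on the artist detail page for a favorite band"],
    task="Explore new artists similar to current favorites",
    expected_feedback="The site suggests related artists the DJ can preview",
)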

Let’s say your stakeholders are interested in learning the circumstances under which their customers—buyers and sellers of records—would want to explore a brand new band. Here is a description of the user’s context and a snippet of dialogue from a UX-research session during which this insight emerged:

Scenario: The participant is on the artist detail page for a favorite band, looking at the Discover similar artists feature.

Moderator: What would cause you to want to look for new artists?

Participant: As a DJ, I’m always looking for new artists—especially those in the same vein as some of my current faves. These serve as potential candidates for getting some playtime at upcoming gigs!


In this scenario, the annotation might look similar to that shown in Figure 3.

Figure 3—A use-case annotation

Anecdotes

For a more detailed scenario, which typically includes a narrative that the participant provides, use an anecdote annotation. The biggest difference between the use-case and anecdote annotations is the level of detail they capture: anecdotes typically include much more detail, while use cases provide only basic, high-level information. A use case is sufficiently broad that you can more easily recontextualize it, while an anecdote is already highly contextualized. Let’s look at a scenario snippet that depicts the use of an anecdote annotation:

Scenario: The participant is on the artist detail page for a favorite band, looking at the Discover similar artists feature.

Moderator: What would cause you to want to look for new artists?

Participant: DJ’ing is a part-time gig and doesn’t really pay the bills, so I have very limited time to look for new jams. In fact, my fans have been tweeting that I haven’t played much new music lately—just the same old tracks. This feature would save me time and help me to improve my reputation!


Figure 4 shows the use of an anecdote annotation.

Figure 4—An anecdote annotation

Capturing anecdote annotations lets you easily identify stories within your research that could influence the decisions your team makes during the design process. Keep in mind that, while this annotation puts a face on the user, it does not attempt to quantify users in any large-scale, statistically significant way. But it helps you to provide a good answer when stakeholders question your insights: “How do you know? Prove it!” Easy. The proof is in the richly detailed stories of your participants—in the anecdotes!

Points of Confusion

When participants don’t understand a particular element in a design, it is useful to capture their reactions—especially if the same elements confuse many participants. For example, there might be a button label that participants find confusing. The confusion annotation consists of a question mark and a line connecting the question mark to the confusing element. Figure 5 shows an example of its use. In this scenario, the participant became confused about why a section with the heading You might also like… would include advertising. Such insights could help you to significantly improve the usability of the site’s content.

Figure 5—A confusion annotation

Elements Participants Understand and Don’t Understand

Another type of data that stakeholders find interesting is whether participants understand specific aspects of a design. For example, you might be trying out an unconventional design pattern that your team is uncertain about. Would participants understand that specific aspect of the design, or might you need to rethink it?

Let’s say your product team has been receiving reports from customer service about people uploading copyrighted content to the music site. The product team wants to encourage customers to be vigilant about reporting content that doesn’t belong on EarWorm.com. So they’ve decided to add a feature that lets customers report such incidents, and they want to learn whether participants understand what the Report button does. The understood / not understood annotation lets you capture such insights. In the scenario shown in Figure 6, the participant did not know exactly what he was reporting. This insight suggests that it might be necessary to add more information on the page to better convey how customers can report incidents.

Figure 6—An understood / not understood annotation

Ideas

Even if recording the user-interface ideas that participants generate is not one of your specific research goals, you still might want to capture them—especially those that come from paying customers! Particularly during early-phase concept testing, you might get an earful of ideas from participants. Fortunately, there is an annotation for these. No, it’s not a light bulb, but it looks kind of like one. Use an exclamation point to convey an idea that comes from a participant—an Aha! moment.

When “crazy” ideas come early in the product-development lifecycle, before development costs have materialized, stakeholders are more likely to consider them than the issues you identify later on, during usability evaluations, whose resolution would cause development costs to escalate dramatically. Capture the idea data that you collect in a feed, then assess this data to determine whether the ideas represent a potential goldmine. The scenario shown in Figure 7 provides an example of how to use the idea annotation. In this case, a participant described wanting to see album reviews to help make purchase decisions.

Figure 7—An idea annotation
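
If you keep that feed digitally, even a flat list of records is enough to see whether the same ideas keep coming up across participants. The sketch below assumes a simple structure of my own; it is not a prescribed part of the VDC method.

from collections import Counter

# Each entry pairs a participant ID with a short label for the idea.
# Both the IDs and the labels here are hypothetical.
idea_feed = [
    ("P01", "album reviews on the detail page"),
    ("P03", "album reviews on the detail page"),
    ("P04", "preview tracks before buying"),
]

# A quick tally shows which ideas recur and might be worth pursuing.
for idea, count in Counter(label for _, label in idea_feed).most_common():
    print(f"{count} participant(s) suggested: {idea}")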

Annotation Insights

Here are some additional insights and considerations to help you get started using annotations in your research projects:

  1. Be economical in your selection of annotations—and choose them wisely. Your first reaction to these strange symbols might be: Wow! How am I going to use all of these? Don’t let a flood of symbols overwhelm you. It’s important that you choose a limited set of annotations—neither too many nor too few. Think about your research questions and what annotations might be useful in answering them. For example, if stakeholders want to know whether something is missing from the prototype, it would probably make sense to include the idea annotation on your cheat sheet. On the other hand, if they want to know whether participants understand a particular aspect of the design, you should absolutely include the understood / not understood annotation. There is no need to boil the ocean and include every possible symbol you might conceivably use in your study.
  2. Practice. Practice. Practice. When you begin using this method, you might ask yourself: What if I apply the wrong symbol to a piece of data I’m collecting? You probably won’t get everything right the first time. The only way to become more consistent and disciplined in your notetaking is through practice. Integrate this notetaking approach into your upcoming studies—perhaps starting with a pilot study, for which the risks of getting things wrong are lower. Commit to trying this method at least a few times to see how it works for you. Give this approach a fair shot. Don’t give up on it after the first participant in your first study. Make another notetaker the official record keeper while you practice using this new method. Figure out what works and what doesn’t. Don’t be afraid to swap out certain annotations for new or different ones. Use the annotations that work best for you. Print out your cheat sheet and always bring it with you to studies—keeping it next to you, on the opposite side from the participant. It’s for your eyes only!
  3. With practice comes proficiency, and recalling the annotations becomes automatic. Don’t worry if you initially have difficulty remembering all of the annotations. Once you’ve used these conventions for a while, you’ll no longer need your cheat sheet because you’ll have committed them to memory. Ultimately, the cheat sheet becomes an artifact from your study. File it away for future reference, in case you do a similar study.
  4. You’ll likely discover many other useful types of annotations. After a few studies using the annotations, you might think there are no more to discover, but this is simply not the case. Your research questions change from study to study, so you’ll probably find the need to create new types of annotations. For example, the anecdote annotation came from hearing participants tell great stories about the circumstances of product need and use during exploratory research. The annotations we’d previously used did not really capture these rich details. So look for unique instances in your research data that might warrant creating a new annotation.

Conclusion

Are you interested in sharing these annotations with your colleagues? What unique annotations are you using in your UX research? Please describe them in the comments and share them with me and the rest of the UX research community!

In a future Discovery column, I’ll discuss how to put all of these annotations to work during the analysis of research data. Stay tuned! 

Senior UX Researcher at Bloomberg L.P.

New York, New York, USA

Michael A. Morgan has worked in the field of IT (Information Technology) for more than 20 years—as an engineer, business analyst, and, for the last ten years, as a UX researcher. He has written on UX topics such as research methodology, UX strategy, and innovation for industry publications that include UXmatters, UX Mastery, Boxes and Arrows, UX Planet, and UX Collective. In Discovery, his quarterly column on UXmatters, Michael writes about the insights that derive from formative UX-research studies. He has a B.A. in Creative Writing from Binghamton University, an M.B.A. in Finance and Strategy from NYU Stern, and an M.S. in Human-Computer Interaction from Iowa State University.
