Now, in this article, I’ll review the user-research articles on UXmatters that I’ve found most valuable because of their focus on in-depth, foundational research that is conducted during the early stages of the product-development cycle. I’ve also included some articles that emphasize key principles behind an effective user-research methodology.
I’ve based my assessment of these UXmatters articles on my twenty years of experience applying my research and analysis skills to User Experience and career exploration, as well as on my background in information science, psychology, coaching, personality differences, and relational mastery.
Triangulate data from multiple methods and sources.
Use the right tools and approaches to collect accurate data.
Optimize the accuracy of data collection during interviews.
Expand the richness of the data you collect by adapting on the fly.
For the purpose of reviewing these user-research articles on UXmatters, I’ve added two more criteria:
Acknowledge the limits and biases of your research.
Jim Ross, author of the UXmatters column Practical Usability, wrote six of the ten articles I’ve selected, because of his emphasis on foundational and field research. His articles consistently match many of the criteria I’ve listed.
I’ve tried to present these articles in a logical order, so articles build on one another, elaborate on certain facets of other articles, or add nuances to their topics. Therefore, I’ve grouped articles that cover related topics and methods together as much as possible.
Here are the ten user-research articles on UXmatters that I’ve found most valuable:
The Role of Observation in User Research

The article “The Role of Observation in User Research,” by Jim Ross, provides a clear overview of the different methods of observation in user research. The author describes how these methods differ depending on location, the amount of participant interaction, proximity to the participants, and participants’ awareness of their being observed, as well as the advantages and disadvantages of each method. This article focuses on naturalistic observation and explains why and how to integrate it more fully into user research.
I found this article especially valuable because of its focus on the following:
providing a clear overview of and introduction to the different methods of observation that are useful in user research—most of them for foundational research
triangulating methods such as contextual inquiry and naturalistic observation to maximize the advantages of both, while minimizing their disadvantages or limitations
maximizing the richness and accuracy of data collection by describing
how to plan sessions and combine research methods
how to optimize observation and notetaking
limiting confirmation bias by distinguishing between observations and interpretations of what the researcher observes
Becoming a Spy: Covert Naturalistic Observation
In his article “Becoming a Spy: Covert Naturalistic Observation,” Jim Ross describes the research method covert naturalistic observation, which has practical applications in psychology, anthropology, and other social sciences, as well as in UX research. Covert naturalistic observation is just one of the observation methods Jim covers in “The Role of Observation in User Research.” This article describes the advantages and disadvantages of this method, explains when to use it in UX research, and provides some tips on conducting this method of research.
I’ve included this article for the following reasons:
It introduces a method that UX researchers rarely use and invites them to think beyond the scope of traditional UX research methods.
Its intent is to explore ways to limit the Hawthorne Effect—the way people’s behavior changes when they know we’re observing them—in UX research by “observing behaviors in their natural contexts without any intervention or influence by the researcher and without participants knowing that they’re being observed.”
The author presents ways to make the most of this method and to gather as much data as possible.
He emphasizes the importance of combining this method with others—using triangulation—to minimize the limitations of each method. Specifically, the author suggests mixing covert naturalistic observation with overt methods such as user interviews or contextual inquiries.
Design for Fingers, Touch, and People, Part 1
Steven Hoober’s 2017 article “Design for Fingers, Touch, and People, Part 1” provides an update to his popular 2013 article “How Do Users Really Hold Mobile Devices?” In these articles, the author presents the findings of a combined body of research on the ways people hold and interact with their touchscreen devices, describes the implications for design, and provides concrete guidelines on how to design better digital products.
I find value in these two articles for the following reasons:
They include examples of covert naturalistic observation, a research method that Jim Ross describes in “Becoming a Spy: Covert Naturalistic Observation,” and describe its direct application in terms of design. The author observed people using mobile devices on the street, in airports, at bus stops, in cafés, and on trains and buses.
The author honestly shares the limitations of the data-gathering method he used—for example, the impossibility of observing correlations between tasks and different ways of holding phones.
This article provides a great example of the effectiveness of—and the need for—triangulating many different research methods and sources, such as the following:
the author’s own covert naturalistic observation research in combination with intercepts and remote unmoderated testing
usage data from other countries and for other devices
meta-research on existing studies, drawn from published reports and other publications
The author admits to some erroneous assumptions he made about previous studies, as well as to his own biases—a hallmark of a good researcher.
User Research Is Unnatural, Part II: Making User Research More Natural
“User Research Is Unnatural, Part II: Making User Research More Natural” is the second installment of Jim Ross’s two-part series “User Research Is Unnatural.” It offers recommendations for minimizing the negative effects of some unnatural aspects of user research, advice on getting more realistic results, and tips for making user research more natural.
I find this article useful for the following reasons:
It focuses on limiting biases by making user research as natural and realistic as possible and recommends relying more on pure observation to minimize the limitations and biases of methods that rely on interacting with participants, such as contextual inquiry.
The author provides interesting alternatives to our usual ways of doing research—such as conducting usability testing in the field, conducting remote research, whether moderated or unmoderated, and conducting naturalistic observation.
I appreciate his emphasis on doing studies in participants’ natural environment, more like an anthropologist or sociologist.
Succeeding with Field Usability Testing and Lean Ethnography
In his article “Succeeding with Field Usability Testing and Lean Ethnography,” Steven Hoober describes an effective application of what Jim Ross recommends in his article “User Research Is Unnatural, Part II: Making User Research More Natural”—especially in relation to conducting usability testing in the field and naturalistic observation. He addresses the importance of test design and failing early in the development process. His goal is to identify conceptual, structural, or architectural problems early, making it easier to redesign solutions. Steven describes the research methods that are useful for early design testing, including usability testing of mobile-phone apps in the field—inside stores, on street corners, in factories and warehouses, in moving cars, in parking lots, and even in cubicles and offices; testing prototypes, whether paper or digital; and testing early designs in combination with some Lean, ad hoc ethnography. He provides information on logistics, as well as good, detailed, practical recommendations regarding setup, when and where to test, procedures, notetaking, and video recording.
I find this article particularly valuable for the following reasons:
This article is grounded in extensive hands-on experience conducting research in the field.
The author treats user research as an R&D project or lab experiment, with early experiments, iterative design and testing, and opportunities to learn from failure.
He pays a lot of attention to reducing biases—for example, conducting usability testing in the field to reduce the negative effects of lab testing on participants’ behaviors and doing some Lean or incidental ethnography to limit the staging effects of usability testing.
The author relies on triangulating a combination of research methods such as usability testing—including observation and interviews—surveys, and Lean ethnography.
Steven provides a great example of one of the keys I recommended in my article “Applying In-Depth Research and Analysis to Achieve Better Outcomes”: adapting research studies on the fly, in response to what is happening with the participant in the field. He describes adding some ad hoc ethnographic observations that he had not planned.
Modifying Your Usability Testing Methods to Get Early-Stage Design Feedback
The reality is that UX projects don’t always have adequate time or budget for the foundational research that Jim Ross and many UX researchers advocate. In his 2012 article “Modifying Your Usability Testing Methods to Get Early-Stage Design Feedback,” Michael Hawley suggests making some adjustments during usability testing to gather more foundational, high-level, strategic design input when it is not possible to conduct foundational research such as a contextual inquiry or ethnographic research.
The reasons I find this article valuable are as follows:
It has been my experience as a user researcher that time and budget don’t always permit foundational research. This article suggests some simple modifications you can make to your usability studies to obtain some early-stage feedback. While this definitely does not replace comprehensive, foundational research such as contextual inquiry or ethnography, it does provide significant value.
His article distinguishes between usability and usefulness and describes the different types of questions or tasks that get at one or the other.
Michael provides some tangible, actionable suggestions for adapting your usability-testing methods—for example, tailoring your pre-task questions, task scenarios, and post-task assessments to elicit more high-level feedback on usefulness.
Avoiding Hard-to-Answer Questions in User Interviews
User interviews and self-reported data are integral to most user-research methods because it’s almost impossible to get all the data we need only by observing users performing a task. However, some things are hard for users to predict or infer accurately. In his article “Avoiding Hard-to-Answer Questions in User Interviews,” Jim Ross lists the types of questions that people—users—have a hard time answering accurately and provides better alternatives for collecting the data we need.
I think this is a very useful article for the following reasons:
Jim discusses some foundations of effective user research—for example:
“In user research, participants provide the most accurate and useful information while they’re performing their typical tasks in their usual context or when trying out and comparing design solutions rather than simply looking at designs and providing their general opinions.”
“What people say doesn’t always match what they actually do.”
The article provides key recommendations to avoid collecting false insights or making common mistakes, and it tells you what to do instead.
It shows the limitations of self-reported data and explains why we need to rely more on observation to answer certain research questions. People have a hard time answering accurately when you ask them to predict the future, to identify their own needs, or to envision an improved design.
Strengths and Weaknesses of Quantitative and Qualitative Research
I find this article useful for the following reasons:
It gives a good overview of the strengths and weaknesses of quantitative and qualitative studies and explains the risks of an over-reliance on one or the other in product design. For example, the authors describe the difficulty of interpreting the quantitative data you’ve collected, which can lead to critical errors in product design if you don’t use qualitative data to understand the why behind the numbers, as well as user behaviors, needs, desires, routines, and use cases.
The article emphasizes the effectiveness of using these approaches in combination with each other—that is, triangulation—and gives some examples of how and when to combine them.
The authors describe the specific expertise that is necessary to conduct each type of study and successfully analyze the data you collect. This is why, as I mentioned in my article “Applying In-Depth Research and Analysis to Achieve Better Outcomes,” collaboration between different specialists and teams is often key to getting optimal results—for example, between UX researchers and data-mining analysts. It is unusual for one UX researcher to be equally strong in both quantitative and qualitative research.
I selected this article for the following reasons:
The article presents some key misconceptions about user research that can lead to misguided applications of user-research methods.
This is an important article for beginning researchers or for educating UX designers and stakeholders.
Some myths that it is important to dispel include the following:
Rather than “User research provides stunning, new revelations,” “User research provides very useful information—and perhaps a few amazing insights.”
Rather than “User research tells you how to design your product,” “User research provides information that informs design.”
Rather than “User research involves asking users what they want,” “User research is about inferring what users need.”
Participatory Observation

In the article “The Role of Observation in User Research,” which appears first in my list of the best user-research articles on UXmatters, Jim Ross mentions the method of participant observation, in which researchers actively participate in an activity they are observing. In his article “Participatory Observation,” Jim describes this method, which is commonly used in anthropology and sociology, but not in UX research. The author presents this method’s advantages and disadvantages—including potential biases—describes the types of projects for which it is useful, and explains how to conduct participant observation.
I selected this article because of the following reasons:
The article introduces a method from anthropology and sociology that is not in common use in UX research and invites UX researchers to think beyond the scope of the traditional research methods we use.
The author is aware of the biases inherent in this method and shares ways of limiting them when conducting a study.
He recommends using triangulation, combining this method with other research methods such as user interviews and contextual inquiry to increase the effectiveness of your research.
Conducting good foundational research in the early stages of the development cycle makes a big difference in designing a digital product or service that achieves the highest levels of usability, usage, and user satisfaction.
The articles I’ve reviewed demonstrate the importance of the following aspects of successful user research:
conducting field and naturalistic user research
triangulating and combining research methods and data as much as possible
using the right combination of observation and self-reported data
reducing and acknowledging the biases of each method or study
adapting studies on the fly
The science of effective UX research combines many skills, minimizes biases, and optimizes the richness and accuracy of data collection.
Isabelle has 20 years of experience applying her research and analysis skills to User Experience and career exploration. With a multifaceted education and background in information science, psychology, coaching, personality differences, and relational mastery, she has contributed to innovative approaches in the fields of UX research and career exploration. Isabelle has created a step-by-step approach to career reinvention that goes beyond the limitations of traditional career counseling and has helped hundreds of people find fulfilling careers. Currently based in the San Francisco Bay Area, she has lived and worked in France, Switzerland, the United Kingdom, and Canada and has conducted several international user-research studies. As a user researcher, she has worked for companies such as Yahoo!, Bell Canada, and the French Speaking University Agency. An engaging speaker, Isabelle has given presentations and workshops for professional associations and conferences such as the IA Summit and at General Assembly. She will be speaking at the Association for Psychological Type International in 2021.