Chances are that, if you do user research, you conduct a fair number of user interviews. When conducting interviews, our training tells us to minimize bias by asking open-ended questions and choosing our words carefully. But consistently asking unbiased questions is a challenge, especially when you're following a participant down an important line of questioning for which you haven't prepared questions ahead of time. Also, if you do a lot of interviews, you might fall into a pattern of asking the same types of questions across different studies. This might not bias participants, but you can bias yourself if you always investigate the same types of issues. Finally, are you sure you are asking the right questions? Your interview questions might be relevant to you and your project team, but are they the questions that will get at important issues from a user's perspective?
In an effort to address some of these considerations, I’ve experimented with the Repertory Grid method—an interview technique that originated in clinical psychology and is useful in a variety of domains, including user experience design.
Personal Construct Theory
The Repertory Grid is a data extraction and analysis technique that has as its basis the Personal Construct Theory, which George Kelly developed in the 1950s. The central theme of the Personal Construct Theory is that people organize their experiences with the world into conceptual classifications, which we can differentiate and describe using attributes called constructs. Often, these constructs manifest themselves as polar opposites on a scale, so we can easily classify the elements of our world. For example, based on our experiences with people, we know that some are shy and others are outgoing. When we meet new people, we may consciously or subconsciously categorize them according to that construct.
An important element of the Personal Construct Theory is that each individual has his or her own unique set of constructs that are important to that person. Taking my example further, whether a new person is shy or outgoing might not be important to you in your categorization scheme, but it might be very important to someone else. George Kelly hypothesized that people are constantly challenging and growing their construct systems, but those systems remain unique to the individual, and the sum of each person’s experiences shapes them. In addition, the differences in people’s construct systems contribute to our different perceptions of the world and our behavior in it.
For example, when choosing a place to live, one person might organize her choices using a construct that rates locations according to how easy it is to get to work, because she’s experienced tough commutes in the past. Another person might organize his choices by climate or some other factor. According to the Personal Construct Theory, each person has his or her own unique system and prioritization of constructs, or way of construing the world.
It is this inherent difference in construct systems between people that introduces bias in research: The researcher has one set of constructs, and each participant has another. Especially in survey or structured-interview research, a researcher might ask questions he feels are important. Participants can answer those questions, but are they really the most relevant questions? For example, in evaluating the user experience of a Web site, we can ask participants whether they think the site is trustworthy and why. Each participant might be able to answer these questions, but is a trustworthy site important to that participant for that domain?
Even the most well-intentioned researcher, drafting questions that are as open-ended and unbiased as possible, still might lead some participants down an irrelevant path. Kelly developed the Repertory Grid as an interview technique that attempts to minimize the construct bias of the interviewer and systematically extract constructs for a particular domain that are important to participants. Why is it called the Repertory Grid? Repertory comes from the word repertoire, referring to a participant's repertoire of constructs. Grid refers to the data extraction and analysis procedure researchers use to gather and compare information from a number of participants in a study.
The Repertory Grid Process
Traditionally, researchers conduct a Repertory Grid study by choosing several examples in a particular domain with which participants interact. Ideally, there will be 6–12 different examples that represent a wide variety of approaches and potential constructs. A Repertory Grid study proceeds according to the following four general steps:
During each session, either the participant or the researcher chooses three random examples from the initial set to work with. (Ideally, there are multiple participants in the study, and each participant works independently, with a different set of examples.)
This is the core aspect of eliciting constructs without introducing bias from the researcher. The researcher asks the participant to identify how two of the three examples are different from the third. The researcher does not provide a starting point, but just asks the participant about the constructs that are important from his or her perspective. Often the constructs that are most important to the participant are surprising—and sometimes not related to the topic that the researcher intended. However, this is the key aspect of the exercise—to uncover what is important to the participant.
Once the participant identifies a construct, or a way in which two of the examples differ from the third, the participant names the construct's two contrasting poles, identifies which pole is good and which is bad, then writes the pole names at opposite ends of a row in the grid.
The participant continues the process of triading examples to identify additional constructs for the domain. Participants can change which two examples are alike and which are different for different constructs. The key is to elicit as many constructs as possible, without any suggestions from the researcher. The researcher can ask probing questions and ask the participant to think aloud, but suggesting dimensions for constructs introduces the bias that this method seeks to avoid.
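For readers who like to automate their session bookkeeping, the triading step can be sketched in a few lines of Python. The example names and pole labels below are hypothetical, invented purely for illustration:

```python
import random

# Hypothetical set of 6-12 examples in the domain under study
examples = ["Site A", "Site B", "Site C", "Site D", "Site E", "Site F"]

def draw_triad(pool):
    """Pick three random examples for one triading round."""
    return random.sample(pool, 3)

# Each elicited construct records which two examples the participant
# grouped as alike, which one differed, and the contrasting poles the
# participant named, in his or her own words.
constructs = []

triad = draw_triad(examples)
constructs.append({
    "triad": triad,
    "alike": triad[:2],       # the two the participant grouped together
    "different": triad[2],    # the odd one out
    "poles": ("organized", "cluttered"),  # participant's own labels
})
```

In a real session, the participant supplies the grouping and the pole names, of course; a script like this only draws the triads and records the results.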
After identifying and naming the contrasting poles for constructs during the triading step of this process, the participant rates all of the original examples in the study—that is, the 6–12 examples, including the three the participant used in triading—basing his or her ratings on the constructs the participant developed during triading. For each individual construct, the participant rates an example on a scale of 1 to 5, where 1 represents one end of the pole and 5 represents the other.
For example, if a participant identified a construct with the two poles organized and cluttered, the researcher would ask the participant to rate each example on a scale from 1 to 5, where 1 is organized and 5 is cluttered.
Depending on the number of examples and constructs the participant identified during the triading step, this rating process can take some time, so be sure to allow for it in your scheduling.
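To make the resulting grid concrete, here is one way of representing a single participant's completed grid: rows are constructs, columns are examples, and each cell is a 1-5 rating between the two poles. The constructs and ratings below are invented for illustration:

```python
# One participant's repertory grid. Keys are (left pole, right pole)
# pairs; values are 1-5 ratings, one per example, where 1 means the
# left pole applies and 5 means the right pole applies.
# All names and numbers here are hypothetical.
examples = ["Site A", "Site B", "Site C", "Site D", "Site E", "Site F"]

grid = {
    ("organized", "cluttered"): [1, 4, 2, 5, 3, 2],
    ("playful", "formal"):      [5, 2, 4, 1, 3, 4],
    ("fast", "slow"):           [2, 3, 1, 4, 5, 2],
}

# Sanity checks a researcher might run before analysis:
for poles, ratings in grid.items():
    assert len(ratings) == len(examples), f"missing ratings for {poles}"
    assert all(1 <= r <= 5 for r in ratings), f"rating out of range for {poles}"
```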
You can analyze the results of a Repertory Grid study both qualitatively and quantitatively. Often, a qualitative analysis is enough to develop a good understanding of the constructs that are important to the target audience. By reviewing notes from the triading sessions and conducting affinity diagramming sessions to assess the various participants’ constructs and language, researchers can identify themes that can inform their decision making for the domain. In addition, to statistically identify which constructs are most relevant and most clearly distinguish the selected examples, a researcher can apply factor analysis to the participants’ ratings of the examples. The result is a dendrogram or tree diagram like that shown in Figure 1, which is similar to what you would get during a card sort exercise and shows
which examples are most closely associated with one another
the selected examples’ most differentiating characteristics
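In practice, researchers often approximate this factor analysis with hierarchical cluster analysis of the ratings. The sketch below, using hypothetical ratings and SciPy's standard linkage and dendrogram functions, clusters the examples by their rating profiles to produce the tree structure a dendrogram plot would draw:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

# Hypothetical 1-5 ratings: one row per construct, one column per example.
examples = ["Site A", "Site B", "Site C", "Site D", "Site E", "Site F"]
ratings = np.array([
    [1, 4, 2, 5, 3, 2],   # organized (1) .. cluttered (5)
    [5, 2, 4, 1, 3, 4],   # playful (1) .. formal (5)
    [2, 3, 1, 4, 5, 2],   # fast (1) .. slow (5)
])

# Cluster the examples by their rating profiles (columns become rows
# via the transpose). Average linkage on Euclidean distance is one
# simple, common choice; it stands in here for the factor analysis
# described in the text.
tree = linkage(ratings.T, method="average")

# With no_plot=True, dendrogram() returns the plot structure without
# needing matplotlib. 'ivl' lists the leaves in tree order, showing
# which examples end up closest together.
leaves = dendrogram(tree, labels=examples, no_plot=True)["ivl"]
```

Average linkage is just one reasonable choice of method; rendering the same tree with dendrogram() and matplotlib produces the kind of tree diagram shown in Figure 1.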
Applying the Repertory Grid to User Experience Design
George Kelly was a clinical psychologist, so his application of the Repertory Grid was to help identify the constructs his patients used in interacting with those around them. In his work, the 6–12 examples for the domain were significant people in the lives of the patients. Kelly used the Repertory Grid method to help patients understand their issues with interacting with those people.
In user experience design, the subject matter is obviously different, but we can use the same process to identify the key constructs or considerations people have when interacting with systems.
In my work, I’ve applied the Repertory Grid method to user experience design in two different ways:
Getting feedback on user interface paradigms during conceptual modeling
Understanding a product’s competitors and positioning in its marketplace
Getting Feedback on User Interface Paradigms
During the conceptual modeling phase of a project, when I am developing paradigms for how a user might interact with a particular system—whether I am redesigning a particular page or an entire process—I’ve used the Repertory Grid to get feedback on which paradigm would work best.
In this case, the examples for construct elicitation are three different user interface paradigms I've either sketched, drawn as low-fidelity wireframes, or developed as visual comps. Generally, I ask participants to show me how they would complete particular tasks with each user interface, similar to the task scenarios participants follow in a usability study. Because participants' interactions with the three user interfaces help them develop some familiarity with each paradigm, they can then complete a Repertory Grid exercise comparing the three.
The triading phase of this process is key. Instead of simply asking the participants which user interface they like best and hoping they have good answers when I ask them Why?, the triading process brings out the specific attributes that differentiate the user interfaces in the minds of the participants. Additionally, the triading phase is important when comparing wireframes or other prototypes, because two things generally limit the rating and factor analysis:
The number of user interface paradigms we can create
The number of user interfaces participants can interact with and be able to retain in memory during an interview session
Analyzing a Competitive Marketplace
During early phases of projects, I’ve used a more traditional application of the Repertory Grid method—with ratings and statistical analysis of constructs—to develop a strong understanding of a product’s competitors and its current positioning in a market.
For example, during the business intelligence phase of a project, when a product team is defining the business goals and objectives for a product, business stakeholders usually have their own perspectives on how a Web site or application fits in the marketplace and what customers’ perceptions are. However, their experiences and constructs are likely different from those of their customers. Therefore, what stakeholders think is important might not be important to customers at all.
By conducting a Repertory Grid study, using competitive sites or products as examples and choosing participants who are familiar with those competitive products, a researcher can develop a strong understanding of customers’ perspectives on what is important. In this application of the Repertory Grid, I’ve either asked participants to interact with the example systems to bring them back to top of mind or simply shown them images of a Web site, brand, or application to trigger their memories. Participants complete the triading process using three examples, then rate all of the examples according to the constructs they’ve developed. The resulting factor analysis helps identify the differentiating characteristics of the product domain and positive characteristics on which we should focus. Additionally, the statistical aspect of the factor analysis is another tool that can aid in the presentation of the results to stakeholders—especially if they respond positively to quantitative analysis.
In applying these variations of the Repertory Grid to user experience design, I’ve found the method to be fun and engaging for participants and easy for researchers. The Repertory Grid method has a number of benefits for user experience research and design evaluation. Repertory Grid studies
quickly generate a large number of attributes, or constructs, that are useful in comparing different examples
elicit differentiating attributes in the participants’ vocabulary rather than the researcher’s vocabulary
identify constructs that are important to the participants rather than the researcher
provide a structured process for eliciting feedback that is easy for participants to understand
The most significant limitations of this method concern when you can use it effectively. Triading is an effective technique even when you have just a few examples for comparison. However, to use the Repertory Grid to its full potential, you need a larger set of examples, and participants must develop some familiarity with all of them.
Also, as with any other qualitative interviewing method, there is potential for bias from a researcher who proposes constructs or leads participants during follow-up questions. However, when applicable, the Repertory Grid method helps researchers minimize bias while developing an understanding of a particular domain from the customer’s perspective. I recommend you use this method as a component of your user-centered design toolkit.
Kelly, George. The Psychology of Personal Constructs. New York: Norton, 1955.
Jankowicz, Devi. The Easy Guide to Repertory Grids. New York: Wiley, 2003.
Chief Design Officer at Mad*Pow Media Solutions LLC
Adjunct Professor at Bentley University
Boston, Massachusetts, USA
As Chief Design Officer at Mad*Pow, Mike brings deep expertise in user experience research, usability, and design to Mad*Pow clients, providing tremendous customer value. Prior to joining Mad*Pow, Mike served as Usability Project Manager for Staples, Inc., in Framingham, Massachusetts. He led their design projects for customer-facing materials, including e-commerce and Web sites, marketing communications, and print materials. Previously, Mike worked at the Bentley College Design and Usability Center as a Usability Research Consultant. He was responsible for planning, executing, and analyzing user experience research for corporate clients. At Avitage, he served as the lead designer and developer for an online Webcast application. Mike received an M.S. in Human Factors in Information Design from Bentley College McCallum Graduate School of Business in Waltham, Massachusetts, and has more than 13 years of usability experience.