Card Sorting: Mistakes Made and Lessons Learned
Published: September 10, 2007
Card sorting is a simple and effective method with which most of us are familiar. There are already some excellent resources on how to run a card sort and why you should do card sorting. This article, on the other hand, is a frank discussion of the lessons I’ve learned from running numerous card sorts over the years. By sharing these lessons learned along the way, I hope to enable others to dodge similar potholes when they venture down the card sorting path.
Don’t Expect Too Much of Card Sorting
Card sorting is a deceptively simple method. The beauty of card sorting is that it just makes sense. I normally get enthusiastic nods of approval when explaining it to others. But therein lies one of the key problems with card sorting: our expectations of what it can do.
One of the earliest card sorts I ran was unnecessarily complex, involving over 100 cards with around 80 participants. Yes, what a sucker for punishment! We had started off with a simple research goal and unwittingly turned it into a monster. We wanted to find out everything. Part of the problem, in this case, was a misunderstanding on the part of the client.
Client: “So it’s like a survey? So we should have a lot of people, right? And it’ll tell us how we should structure our Web site?”
Sam: “Well, actually, no. It will help us identify patterns in how users expect to find content, but it won’t give us a complete structure.”
Client: “But it’s like a survey, right? So we should get as many people as we can and make sure we represent all our market segments.”
Sam: “Well, it depends...” (And now seriously regretting I had mentioned the word survey at all!)
This story illustrates two things:
- Card sorting can easily get out of control by trying to be all things to all people.
- Clients and stakeholders might mistakenly assume card sorting can produce a perfect answer.
The reality is that many people—and, if we're honest, sometimes even we ourselves—expect card sorting to more or less create our information architecture. You don’t have an information architecture, or the one you currently have is bad? Why, that’s an easy problem to solve—just run a card sort.
Whoa! Hold up!
According to Lou Rosenfeld and Peter Morville’s information architecture bible, Information Architecture for the World Wide Web: Designing Large-Scale Web Sites—also known as the Polar Bear book—an information architecture comprises an organization system, a labeling system, a navigation system, and a search system. Card sorting certainly can provide input into an organization system—what content goes together—and a labeling system—what to call things—but it has very little to do with a navigation system or a search system.
More importantly, I’ve realized that card sorting can be all about timing.
Card sorting is most often employed for redesigns. In a redesign, it’s often the case that there is a huge amount of information that’s getting out of hand, there’s layer upon layer of internal politics, the technology is outdated and heavily constrained, there’s no central owner of the information, and worst of all, the business does not really have a clear Web and information strategy. The focus is typically on stuff the organization needs to deal with, not on investing in an asset. There’s no way a simple little card sort—or even a massively complex one—can fix all that.
Timing when you do a card sort is as important as how you actually do the card sort. Card sorting is often most useful once you’ve done some homework to find out about your users and understand your content. This knowledge provides a base from which you can create or improve your information architecture. Mistime your card sort and you run the risk of raising expectations that card sorting cannot possibly meet.
Messy Is Okay
I have a love/hate relationship with card sorting. Card sorting feels like it should give us all the answers, but it doesn’t. We have all this quantitative data—so why don’t we have clear-cut methods of analysis, and why in the world can’t we get clear-cut answers?
I’m no statistician, and neither are most usability professionals. So, frankly, I avoid statistical analysis methods that I cannot explain to others. But this means I don’t tend to get clear-cut answers—like the dendrograms cluster analysis generates—that translate directly into a site map. I’m okay with that. In fact, I’m increasingly confident that a non-statistical approach might even be better.
In most of the card sorts I’ve facilitated, the best way of doing analysis was to eyeball the data. I’ve learned there is a lot of value in taking this approach. There are a number of tools to help you eyeball your data, and Donna Maurer’s spreadsheet template in particular is quite helpful.
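Eyeballing the data usually starts with a simple co-occurrence view: how often did participants put two cards in the same group? As a minimal sketch—all participant data and card names here are invented for illustration—you could tally pair counts like this:

```python
from itertools import combinations
from collections import defaultdict

# Hypothetical results: one dict per participant, mapping each
# participant-created group name to the cards placed in it.
sorts = [
    {"Billing": ["Invoices", "Refunds"], "Help": ["FAQ", "Contact us"]},
    {"Money": ["Invoices", "Refunds", "FAQ"], "Support": ["Contact us"]},
    {"Accounts": ["Invoices"], "Help": ["Refunds", "FAQ", "Contact us"]},
]

# Count how often each pair of cards landed in the same group.
pairs = defaultdict(int)
for sort in sorts:
    for group in sort.values():
        for a, b in combinations(sorted(group), 2):
            pairs[(a, b)] += 1

# Express co-occurrence as a rough percentage of participants,
# sorted so the strongest clusters float to the top for eyeballing.
for (a, b), count in sorted(pairs.items(), key=lambda kv: -kv[1]):
    print(f"{a} / {b}: {100 * count // len(sorts)}%")
```

A table like this doesn’t give you a structure, but it makes the strong and weak associations easy to see at a glance—which is really all eyeballing needs.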
I’ve accepted the fact that card sort analysis—much like usability test analysis—is often messy and subjective. It’s part science, but mostly art. As with many aspects of our work, there isn’t necessarily a single correct, quantitative answer, but rather a number of different qualitative answers—all of which could be correct. Our job is to use our experience and our understanding of people to make judgment calls.
Analysis is a barrier that prevents many people from running card sorts. My suggestion is to leave your obsession with getting the correct answer at the door and accept that analysis gets messy. Not only does this set the right expectations about what card sorting can deliver, it might also open your eyes to other insights you would otherwise miss.
No One-Hit Wonders Here
Iteration is a key aspect of design. As practitioners, we advocate for iterative rounds of usability testing. I believe card sorting should also be iterative.
On some of our early card sorting projects, my company held the unspoken view that a single card sort should tell us what we need to know. What invariably happened? We would end up with 100 cards at varying levels of granularity and users from four or five different market segments. Very soon, the complexity got difficult to manage, and we found that we were not really learning that much.
On more recent projects, we have tried to break card sorts down into more manageable activities. Each card sort addresses a certain research topic, then we plan subsequent sorts based on what we’ve learned. For instance, start at a high level to understand how users think about your information, then drill down into specific clusters of content where uncertainty exists and target a more specific segment of users.
Using software or online tools to reduce administrative overhead makes doing iterative card sorts easier, which is a key reason we’re doing fewer and fewer physical card sorts like the one shown in Figure 1. In many instances, the benefit of being able to easily run multiple iterations of online card sorts offsets the benefit of the richer, more qualitative feedback you can obtain by doing physical card sorts in person. Figure 2 shows an online card sort.
Figure 1—A physical card sort
Figure 2—An online card sort
Of course, given enough time and participants, doing iterative, physical card sorts in person is ideal. But no matter how you accomplish it, doing at least two or three rounds of card sorting can make a massive difference in what you learn—especially for more complex systems.
More Questions Than Answers
I would be interested in knowing how others report the findings of their card sorts. If your process is anything like mine, you’ve probably resorted to producing a site map of some sort. I remember once producing tables that showed the percentage probability each card belonged in a certain category. Nice to look at, but I do wonder whether my client knew how to interpret them correctly. I mean, what’s the difference really between something that is 73% likely to be Category A rather than 70% likely—or even 60%?
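For what it’s worth, those percentage tables are trivial to produce, which is probably part of their seductiveness. A minimal sketch—participant IDs, cards, and categories below are all made up—might look like this:

```python
from collections import Counter, defaultdict

# Hypothetical raw data: (participant, card, category) placements.
placements = [
    ("p1", "Refunds", "Billing"), ("p2", "Refunds", "Billing"),
    ("p3", "Refunds", "Help"),    ("p1", "FAQ", "Help"),
    ("p2", "FAQ", "Help"),        ("p3", "FAQ", "Help"),
]

# For each card, count how many participants placed it in each category.
by_card = defaultdict(Counter)
for _, card, category in placements:
    by_card[card][category] += 1

# Convert the counts to percentages, most popular category first.
for card, counts in by_card.items():
    total = sum(counts.values())
    row = ", ".join(f"{cat}: {100 * n // total}%"
                    for cat, n in counts.most_common())
    print(f"{card}: {row}")
```

The arithmetic is the easy part; deciding what a 73% versus a 70% placement actually means for your structure is where the judgment calls begin.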
Meanwhile, there might be critical, qualitative findings you’ve missed. For example, findings like these:
- Users tend to organize all the cards according to their degree of relevance to them.
- Users tend to label groups with action words, not topical words.
- Your card labels are less than optimal, and they are biasing results.
Such findings are a lot harder to identify, a lot more subjective, and seemingly a lot less useful, at least immediately.
The site map has become the goal. With the Web, we have become accustomed to getting quick answers. So we want a quick answer, and the site map gives us that quick answer. It’s too terrifying to think that perhaps the outputs from a card sort are new questions rather than definitive answers to old questions.
In the past, I’ve been involved with card sorting projects where the results did little to address a system’s findability problems. And this should be no surprise. Simply providing a site map from a single card sort is comparable to sticking a BAND-AID® over a serious wound.
Card sorting often raises more questions, but hopefully, the right types of questions. It is our job to address these questions through other research methods and, possibly, by doing subsequent card sorts.
Who Is the Real Culprit?
John: No one can ever find anything on our Web site.
Kate: Yeah, I know. It’s ridiculous. It must be our information architecture. (Having recently read something about information architecture.)
John: You think? I’ve wondered about that as well. Before we started working here, there was this guy who built the Web site himself. I doubt he even thought about the site’s information architecture.
Kate: You know what? We should do a card sort. I read about card sorts recently. ...
Through this seemingly innocent conversation, John and Kate have immediately assigned the blame to the Web site’s information architecture (IA). (Incidentally, the other popular culprit to which teams assign blame is the content management system.) As I mentioned earlier, card sorting seems like such a logical method, we assume it can fix all IA problems. It can help, but it won’t be the silver bullet.
When doing a recent card sort, we encountered this very scenario. I had a hunch the navigation system was the culprit. It was pretty messed up and went against users’ expectations. The navigation controls were all over the place, and their behavior was non-standard. While the labels needed some improvement and the classification scheme required more consistency, improved navigation would have made a much bigger impact. In addition, it was a Web site with highly specialized content, so reliance on a good search system was also critical. Tuning search to perform better could also have made a big difference. However, in the end, a card sort was what the client wanted. Not surprisingly, the results were not entirely helpful.
Demonstrating a Point of View
On another project, I was a little shocked to learn that the main reason we were running a card sort was to show who was right and who was wrong. Marketing had a particular view on how the Web site should be structured; IT, the UX design team, and the project manager each had their own views on this issue. Instead of debating this point endlessly, someone had the brilliant idea to consult their users and run a card sort.
Through the card sort, we learned that each of them was right about some part of the structure, and everyone came away with a better appreciation for the importance of a user-centered perspective.
While we ordinarily see card sorting solely as a design activity, I’ve learned that card sorting can have an equally important role in achieving buy-in and socializing a user-centered approach. For example, a general manager might start off being adamant that a label should be the name of her department. However, being involved in a group card sort can easily change her viewpoint when she learns others have equally valid points of view. Seeing the results of the card sort can do wonders.
In another example, a client used our online card sorting tool, OptimalSort, to quickly create online card sorts for their corporate Web site. Because it was so easy to take part in the card sort—all participants had to do to begin was click a link in an email invitation—people from all around the organization did the card sort. Soon, our client had all sorts of people telling her how insightful the experience of going through this exercise was for them. Many participants had no idea other departments saw things so differently, and they couldn’t wait to see how their users organize information!
So, arguably, it might be worthwhile to run a card sort just to achieve organizational buy-in and increased awareness of what you are doing.
I believe in card sorting. It’s a great method for understanding users, informing information design, validating classification systems, and communicating design. However, we need to be aware that card sorting has limitations, so we can avoid having unrealistic expectations of its results.
In this article, I’ve described some of the lessons I’ve learned through my experiences with card sorting. This article does not provide a comprehensive view of card sorting. I invite you to expand on it by sharing the lessons you’ve learned from card sorting.
Maurer, Donna. “Card Sort Analysis Spreadsheet.” Rosenfeld Media. Retrieved August 25, 2007.
Rosenfeld, Louis, and Morville, Peter. Information Architecture for the World Wide Web: Designing Large-Scale Web Sites, 3rd ed. Sebastopol, CA: O’Reilly, 2007.