Card Sorting: Mistakes Made and Lessons Learned

By Sam Ng

Published: September 10, 2007

“Card sorting is a deceptively simple method.”

Card sorting is a simple and effective method with which most of us are familiar. There are already some excellent resources on how to run a card sort and why you should do card sorting. This article, on the other hand, is a frank discussion of the lessons I’ve learned from running numerous card sorts over the years. By sharing these lessons learned along the way, I hope to enable others to dodge similar potholes when they venture down the card sorting path.

Don’t Expect Too Much of Card Sorting

Card sorting is a deceptively simple method. The beauty of card sorting is that it just makes sense. I normally get enthusiastic nods of approval when explaining it to others. But therein lies one of the key problems with card sorting: our expectations of what it can do.

One of the earliest card sorts I ran was unnecessarily complex, involving over 100 cards with around 80 participants. Yes, what a sucker for punishment! We had started off with a simple research goal and unwittingly turned it into a monster. We wanted to find out everything. Part of the problem, in this case, was a misunderstanding on the part of the client.

Client: “So it’s like a survey? So we should have a lot of people, right? And it’ll tell us how we should structure our Web site?”

Sam: “Well, actually, no. It will help us identify patterns in how users expect to find content, but it won’t give us a complete structure.”

Client: “But it’s like a survey, right? So we should get as many people as we can and make sure we represent all our market segments.”

Sam: “Well, it depends...” (And now seriously regretting I had mentioned the word survey at all!)

This story illustrates two things:

  • Card sorting can easily get out of control by trying to be all things to all people.
  • Clients and stakeholders might mistakenly assume card sorting can produce a perfect answer.

The reality is that many people—and sometimes, if we're honest, even ourselves—expect card sorting to more or less create our information architecture. You don't have an information architecture, or the one you currently have is bad? Why, that's an easy problem to solve—just run a card sort.

Whoa! Hold up!

“Card sorting certainly can provide input into an organization system—what content goes together—and a labeling system—what to call things—but it’s got very little to do with a navigation system or a search system.”

According to Lou Rosenfeld and Peter Morville’s information architecture bible, Information Architecture for the World Wide Web: Designing Large-Scale Web Sites—also known as the Polar Bear book—an information architecture comprises an organization system, a labeling system, a navigation system, and a search system. Card sorting certainly can provide input into an organization system—what content goes together—and a labeling system—what to call things—but it’s got very little to do with a navigation system or a search system.

More importantly, I’ve realized that card sorting can be all about timing.

Card sorting is most often employed for redesigns. In a redesign, it’s often the case that there is a huge amount of information that’s getting out of hand, there’s layer upon layer of internal politics, the technology is outdated and heavily constrained, there’s no central owner of the information, and worst of all, the business does not really have a clear Web and information strategy. The focus is typically on stuff the organization needs to deal with, not on investing in an asset. There’s no way a simple little card sort—or even a massively complex one—can fix all that.

Timing when you do a card sort is as important as how you actually do the card sort. Card sorting is often most useful once you’ve done some homework to find out about your users and understand your content. This knowledge provides a base from which you can create or improve your information architecture. Mistime your card sort and you run the risk of raising expectations that card sorting cannot possibly meet.

Messy Is Okay

“Card sort analysis—much like usability test analysis—is often messy and subjective.”

I have a love/hate relationship with card sorting. Card sorting feels like it should give us all the answers, but it doesn’t. We have all this quantitative data—so why don’t we have clear-cut methods of analysis, and why in the world can’t we get clear-cut answers?

I’m no statistician, and neither are most usability professionals. So, frankly, I avoid statistical analysis methods that I cannot explain to others. But this means I don’t tend to get the kind of clear-cut answers that result in a site map—such as those a dendrogram generates. I’m okay with that. In fact, I’m increasingly confident that a non-statistical approach might even be better.

In most of the card sorts I’ve facilitated, the best way of doing analysis was to eyeball the data. I’ve learned there is a lot of value in taking this approach. There are a number of tools to help you eyeball your data, and Donna Maurer’s spreadsheet template in particular is quite helpful.
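To illustrate the kind of tally that eyeballing works from, here is a minimal sketch—not Donna Maurer's template, just a hypothetical Python equivalent under an assumed data format—that counts how often participants placed each pair of cards in the same group. High counts suggest content users expect to find together; the participant data and card names below are invented for the example:

```python
from itertools import combinations
from collections import Counter

# Hypothetical raw data: one dict per participant, mapping that
# participant's own group labels to the cards placed in each group.
sorts = [
    {"Billing": ["Invoices", "Payments"], "Help": ["FAQ", "Contact"]},
    {"Money": ["Invoices", "Payments", "FAQ"], "Support": ["Contact"]},
    {"Accounts": ["Invoices", "Payments"], "Support": ["FAQ", "Contact"]},
]

def cooccurrence(sorts):
    """Count, for each pair of cards, how many participants
    placed both cards in the same group."""
    pairs = Counter()
    for participant in sorts:
        for group in participant.values():
            for a, b in combinations(sorted(group), 2):
                pairs[(a, b)] += 1
    return pairs

for (a, b), n in cooccurrence(sorts).most_common():
    print(f"{a} + {b}: {n}/{len(sorts)} participants")
```

Scanning a matrix or list like this for strong and weak pairings is essentially what the spreadsheet approach formalizes; the judgment calls about what the clusters mean remain yours.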

I’ve accepted the fact that card sort analysis—much like usability test analysis—is often messy and subjective. It’s part science, but mostly art. As with many aspects of our work, there isn’t necessarily a single correct, quantitative answer, but rather a number of different qualitative answers—all of which could be correct. Our job is to use our experience and our understanding of people to make judgment calls.

Analysis is a barrier that prevents many people from running card sorts. My suggestion is to leave your obsession with getting the correct answer at the door and accept that analysis gets messy. Not only does this set the right expectations about what card sorting can deliver, it might also open your eyes to other insights you would otherwise miss.

No One-Hit Wonders Here

“Each card sort addresses a certain research topic.”

Iteration is a key aspect of design. As practitioners, we advocate for iterative rounds of usability testing. I believe card sorting should also be iterative.

On some of our early card sorting projects, my company held the unspoken view that a single card sort should tell us what we need to know. What invariably happened? We would end up with 100 cards at varying levels of granularity and users from four or five different market segments. Very soon, the complexity got difficult to manage, and we found that we were not really learning that much.

On more recent projects, we have tried to break card sorts down into more manageable activities. Each card sort addresses a certain research topic, then we plan subsequent sorts based on what we’ve learned. For instance, start at a high level to understand how users think about your information, then drill down into specific clusters of content where uncertainty exists and target a more specific segment of users.

Using software or online tools to reduce administrative overhead makes iterative card sorts easier to run, which is a key reason we’re doing fewer and fewer physical card sorts like the one shown in Figure 1. In many instances, the benefit of being able to easily run multiple iterations of card sorts online offsets the benefit of the richer, more qualitative feedback you can obtain by doing physical card sorts in person. Figure 2 shows an online card sort.

Figure 1—A physical card sort


Figure 2—An online card sort


Of course, given enough time and participants, doing iterative, physical card sorts in person is ideal. But no matter how you accomplish it, doing at least two or three rounds of card sorting can make a massive difference in what you learn—especially for more complex systems.

More Questions Than Answers

“Doing at least two or three rounds of card sorting can make a massive difference in what you learn.”

I would be interested in knowing how others report the findings of their card sorts. If your process is anything like mine, you’ve probably resorted to producing a site map of some sort. I remember once producing tables that showed the percentage probability each card belonged in a certain category. Nice to look at, but I do wonder whether my client knew how to interpret them correctly. I mean, what’s the difference, really, between something that is 73% likely to belong in Category A rather than 70% likely—or even 60%?
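For illustration, such percentage tables might be computed with a sketch like the following. It assumes a hypothetical data format in which each participant's groups have already been standardized to common category names; the participant IDs, cards, and categories are invented for the example:

```python
from collections import Counter, defaultdict

# Hypothetical standardized records: (participant, card, category),
# where each participant's own group labels have been mapped to
# shared category names during analysis.
placements = [
    ("p1", "Invoices", "Billing"), ("p1", "FAQ", "Support"),
    ("p2", "Invoices", "Billing"), ("p2", "FAQ", "Billing"),
    ("p3", "Invoices", "Billing"), ("p3", "FAQ", "Support"),
    ("p4", "Invoices", "Support"), ("p4", "FAQ", "Support"),
]

def placement_percentages(placements):
    """For each card, the percentage of participants who placed it
    in each category."""
    by_card = defaultdict(Counter)
    for _, card, category in placements:
        by_card[card][category] += 1
    return {
        card: {cat: round(100 * n / sum(counts.values()))
               for cat, n in counts.items()}
        for card, counts in by_card.items()
    }

print(placement_percentages(placements))
# {'Invoices': {'Billing': 75, 'Support': 25}, 'FAQ': {'Support': 75, 'Billing': 25}}
```

The arithmetic is trivial; the hard part, as the article argues, is deciding whether a 75% versus a 70% placement actually means anything for your structure.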

Meanwhile, there might be critical, qualitative findings you’ve missed. For example, findings like these:

  • Users tend to organize all the cards according to their degree of relevance to them.
  • Users tend to label groups with action words, not topical words.
  • Your card labels are less than optimal, and they are biasing results.

Such findings are a lot harder to identify, a lot more subjective, and seemingly a lot less useful, at least immediately.

“Card sorting often raises more questions, but hopefully, the right types of questions.”

The site map has become the goal. With the Web, we have become accustomed to getting quick answers. So we want a quick answer, and the site map gives us that quick answer. It’s too terrifying to think that perhaps the outputs from a card sort are new questions rather than definitive answers to old questions.

In the past, I’ve been involved with card sorting projects where the results did little to address a system’s findability problems. And this should be no surprise. Simply providing a site map from a single card sort is comparable to sticking a BAND-AID® over a serious wound.

Card sorting often raises more questions, but hopefully, the right types of questions. It is our job to address these questions through other research methods and, possibly, by doing subsequent card sorts.

Who Is the Real Culprit?

John: No one can ever find anything on our Web site.

Kate: Yeah, I know. It’s ridiculous. It must be our information architecture. (Having recently read something about information architecture.)

John: You think? I’ve wondered about that as well. Before we started working here, there was this guy who built the Web site himself. I doubt he even thought about the site’s information architecture.

Kate: You know what? We should do a card sort. I read about card sorts recently. ...

Through this seemingly innocent conversation, John and Kate have immediately assigned the blame to the Web site’s information architecture (IA). (Incidentally, the other popular culprit to which teams assign blame is the content management system.) As I mentioned earlier, card sorting seems like such a logical method, we assume it can fix all IA problems. It can help, but it won’t be the silver bullet.

When doing a recent card sort, we encountered this very scenario. I had a hunch the navigation system was the culprit. It was pretty messed up and went against users’ expectations. The navigation controls were all over the place, and their behavior was non-standard. While the labels needed some improvement and the classification scheme required more consistency, improved navigation would have made a much bigger impact. In addition, it was a Web site with highly specialized content, so reliance on a good search system was also critical. Tuning search to perform better could also have made a big difference. However, in the end, a card sort was what the client wanted. Not surprisingly, the results were not entirely helpful.

Demonstrating a Point of View

“While we ordinarily see card sorting solely as a design activity, I’ve learned that card sorting can have an equally important role in achieving buy-in and socializing a user-centered approach.”

On another project, I was a little shocked to learn that the main reason we were running a card sort was to show who was right and who was wrong. Marketing had a particular view on how the Web site should be structured; IT, the UX design team, and the project manager each had their own views on this issue. Instead of debating this point endlessly, someone had the brilliant idea to consult their users and run a card sort.

Through the card sort, we learned that each of them was right about some part of the structure, and everyone came away with a better appreciation for the importance of a user-centered perspective.

While we ordinarily see card sorting solely as a design activity, I’ve learned that card sorting can have an equally important role in achieving buy-in and socializing a user-centered approach. For example, a general manager might start off being adamant that a label should be the name of her department. However, being involved in a group card sort can easily change her viewpoint when she learns others have equally valid points of view. Seeing the results of the card sort can do wonders.

In another example, a client used our online card sorting tool, OptimalSort, to quickly create online card sorts for their corporate Web site. Because it was so easy to take part in the card sort—all participants had to do to begin was click a link in an email invitation—people from all around the organization did the card sort. Soon, our client had all sorts of people telling her how insightful the experience of going through this exercise was for them. Many participants had no idea other departments saw things so differently, and they couldn’t wait to see how their users organize information!

So, arguably, it might be worthwhile to run a card sort just to achieve organizational buy-in and increased awareness of what you are doing.


Conclusion

I believe in card sorting. It’s a great method for understanding users, informing information design, validating classification systems, and communicating design. However, we need to be aware that card sorting has limitations, so we can avoid having unrealistic expectations of its results.

In this article, I’ve described some of the lessons I’ve learned through my experiences with card sorting. This article does not provide a comprehensive view of card sorting. I invite you to expand on it by sharing the lessons you’ve learned from card sorting.


References

Maurer, Donna. “Card Sort Analysis Spreadsheet.” Rosenfeld Media. Retrieved August 25, 2007.

Rosenfeld, Louis, and Peter Morville. Information Architecture for the World Wide Web: Designing Large-Scale Web Sites, 3rd ed. Sebastopol, CA: O’Reilly, 2007.


Comments

Great article, Sam. I really like the stories of the strange situations you’ve encountered!

Great article! Having read some articles on how the goal, or task, of the sort and context affect how people organize the information, I wonder if some of the inconsistencies found with card sorting are not a result of an unclear task for the sort. In case you are interested, here is an article: Medin, D. L., Lynch, E. B., Coley, J. D., & Atran, S. (1997). “Categorization and reasoning among tree experts: Do all roads lead to Rome?” Cognitive Psychology, 32, 49-96. The article shows that tree experts will organize the same information differently, depending on the task/goal at hand.

I’m interested in empirically validating the results of card sorting—as well as analysis results from quantitative and qualitative approaches. In case you are wondering, I’m an HF grad student, and I like blending qualitative and quantitative techniques—and sometimes defending qualitative techniques! I’ve read many of Donna’s comments and find both of your articles very informative and helpful. Thanks for sharing your knowledge with us!

Good comment, Veronica. Completely agree. Much like Rolf Molich’s work on comparative studies, I can easily believe that variances can occur, and context is king.

I also think group dynamics can have a big influence on card sorts, for better or for worse.

Ultimately, though, I think the process of doing the card sort is potentially as important as the actual end result. Speaking as a practitioner, we favor good enough over perfect. We balance commercial realities against scientific accuracy, and I would contend that this is what ultimately leads to our work having more impact.

Thanks for your link to the article. I’ll have to check it out.

A great resource for card sorting that we use is William Hudson’s Syncaps package. It provides templates to print the cards with barcodes, which makes data entry more straightforward—especially for your 100 cards and 80 participants scenario! I think that just eyeballing the data is a little unprofessional, since you can be influenced by just one or two participants who you perceive to be the ones who got it, and Hudson provides software to do the analysis.

Hi Sam

Great article and great transparency on the limits of what card sorting can provide while still highlighting its value.

I’ll also commend you on OptimalSort, a really easy-to-use tool, which is proving very useful for discovering basic patterns as well as closed validation.

Nice article. Shows that even the simple tools can get complicated at a large scale.

Syncaps seems nice, but somehow having a barcode scanner doesn’t seem too appealing. :) Doing remote testing seems the way to go. I just hope online tools get better at analyzing data.

Great article! Thought you might be interested in taking a look at the card sorting case study that Donna Maurer just published on our behalf. It discusses the return on investment we helped Eurostar deliver last year via use of this method.

While it’s difficult to separate the impact of card sorting from the impact of the other activities that made up this project, in the year since its redesigned site launched, Eurostar’s online revenues grew from £110 million to £136 million—an increase of 24%, or £26 million!

Hi Sam, thanks for a great article. I agree with your comment that the process of doing the card sort is as important as the end result. In fact, we have found that the end results—either a dendrogram or simply eyeballing spreadsheets—don’t give us the best understanding of the organization schemes that participants envision. Dendrograms are especially difficult to interpret, because the collective aggregate of all the participants’ responses sometimes does not reflect any of the individual sorting schemes. We find card sorting to be valuable, but in a one-on-one setting where we can leverage thinking aloud.

Thanks again.

I agree with most of what you’ve written and have had similar experiences. I wanted to respond to a couple of your points—specifically, the relationship between organization, labeling, and navigation, and the use of dendrograms and other statistics. Rather than leaving a long-winded comment on this site, I’ve posted my response here.

Veronica’s description of “Categorization and reasoning among tree experts: Do all roads lead to Rome?” isn’t quite accurate. The researchers didn’t vary the task/goal at hand. They varied the type of tree experts—for example, taxonomists, landscape workers, parks maintenance personnel.

Here’s the abstract of the study:

“To what degree do conceptual systems reflect universal patterns of featural covariation in the world (similarity) or universal organizing principles of mind, and to what degree do they reflect specific goals, theories, and beliefs of the categorizer? This question was addressed in experiments concerned with categorization and reasoning among different types of tree experts (e.g., taxonomists, landscape workers, parks maintenance personnel). The results show an intriguing pattern of similarities and differences. Differences in sorting between taxonomists and maintenance workers reflect differences in weighting of morphological features. Landscape workers, in contrast, sort trees into goal-derived categories based on utilitarian concerns. These sorting patterns carry over into category-based reasoning for the taxonomists and maintenance personnel, but not the landscape workers. These generalizations interact with taxonomic rank and suggest that the genus (or folk generic) level is relatively and, in some cases, absolutely privileged. Implications of these findings for theories of categorization are discussed.”

Thanks for the clarification. I realized I wasn’t very clear in my comment. When I cited that article, I meant that the goals the tree experts had in mind when sorting varied with the concerns typical of their professions—landscape workers versus taxonomists, for example. Their mental models were heavily guided by what they typically do. I guess this is a contextual type of influence on card sorting, and I do wonder how people in industry typically deal with this issue.

Only just encountered this site. Shows how grossly inefficient communication is between card / pile / free sorters!

I’m at a bit of a standstill in analyzing my card sort—it’s too large! I have 100 sorters, and Maurer’s spreadsheet handles only 40. What is your recommendation on how best to analyze a card sort besides eyeballing?

Any help or advice is greatly appreciated.



Thanks for a great article. I’m getting ready to create my first card sort, and this article has reinforced that I am heading in the right direction.

Good article. I think now I am more relaxed, since I see that experts say what I was wondering. :-D

Extremely important to me is what Sam mentions: “Card sorting can have an equally important role in achieving buy-in and socializing a user-centered approach.”

And I would always combine the card sort with the thinking aloud method, because you see very interesting comments from users. Some might directly say: “Well, I have done it this way, but I thought about organizing it according to another completely different logic, too. Although I have done it like this.” These comments must be noted for the analysis.

The thinking aloud method is useful for gathering some navigational information, too. Some users might say, “This section goes here, but it should be linked to this other section, too. Thus, it’s perfect for a link in the related topics.” Such comments provide guidance when you’re writing the content.
