In the old days, card sorting was simple. We used index cards, Post-it notes, spreadsheets, and buggy software—USort and EZCalc—to analyze the results, and we liked it! But this isn’t another article about how to do card sorting. Nowadays, there are multiple techniques and tools, both online and offline, for generative and evaluative user research for information architecture (IA), which provide greater insight into organizing and labeling information.
In this column, I’ll summarize and compare the latest generative and evaluative methods for IA user research. The methods I’ll examine include open card sorting, modified-Delphi card sorting, closed card sorting, reverse card sorting, card-based classification evaluation, tree testing, and testing information architecture with low-fidelity prototypes. I’ll cover each method’s advantages and disadvantages, explain when it makes sense to use each method, and describe an ideal combination of these methods.
Generative IA Research Methods
Information architecture user research seeks to understand how people think about information to determine the best ways of organizing and labeling content. This research can be either
generative—gathering user input on the organization and labeling of content—or
evaluative—determining whether people can correctly find things in an organizational structure we’ve created.
Using both methods in an iterative approach to user research is often the best way to ensure an intuitive information architecture.
Initial User Research
User research is an important first step in any design project. For an information architecture project, it is important to first understand a site’s content and the users’ vocabulary before conducting generative research such as card sorting. This understanding helps you to choose what items to include and the terminology to use on the cards.
Open Card Sorting
The original method of getting user input on information architecture was open card sorting. In this method, researchers give participants a set of index cards, each containing a piece of content and, optionally, a description. Participants sort these pieces of content into categories that make sense to them, then name those categories. It’s called an open card sort because participants create their own categories, as shown in Figure 1—unlike a closed card sort, in which participants sort cards into predefined categories. There are various ways of analyzing the data from an open card sort—from informally eyeballing the data to using software that produces a cluster-analysis diagram showing the common groupings across participants, similar to that shown in Figure 2.
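The co-occurrence analysis such software performs can be sketched in a few lines. In this hypothetical example (the cards, groups, and participants are invented), we count how often each pair of cards lands in the same group across participants, which is the raw input for a cluster-analysis diagram:

```python
from itertools import combinations

# Hypothetical open-card-sort results: each participant's named groups.
sorts = [
    {"Kitchen": ["pans", "knives"], "Books": ["cookbooks", "novels"]},
    {"Cooking": ["pans", "knives", "cookbooks"], "Reading": ["novels"]},
    {"Tools": ["pans", "knives"], "Media": ["cookbooks", "novels"]},
]

def cooccurrence(sorts):
    """Count how often each pair of cards lands in the same group."""
    counts = {}
    for sort in sorts:
        for group in sort.values():
            for a, b in combinations(sorted(group), 2):
                counts[(a, b)] = counts.get((a, b), 0) + 1
    return counts

pairs = cooccurrence(sorts)
# "pans" and "knives" were grouped together by all three participants.
print(pairs[("knives", "pans")])  # 3
```

Feeding a matrix of these counts into a hierarchical-clustering routine produces the kind of dendrogram card-sorting tools display.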
Individual Versus Group Card Sorting
Traditionally, we have done card sorting with individual participants to learn how they think about a site’s content, but you can also do card sorting as a group exercise. Having several people sort the cards together allows you to hear their discussions and the reasoning behind how they group the cards. The disadvantage is that the results represent a consensus rather than reflecting each person’s thoughts about the content. A better approach might be to have each person do a card sort individually, then bring all participants together as a group to discuss everyone’s thought process in grouping and labeling the cards.
Benefits of Open Card Sorting
Open card sorting provides insight into how people think about content, including the following:
their mental models for the content
intuitive content groupings
alternative methods of grouping or accessing content
category and subcategory labels
relationships between items, which could be useful in determining related links and recommendations
Use card-sorting results only as a guide to organization and labeling, never as your actual site structure, and make sure others on a project team don’t take the results too literally.
Limitations of Open Card Sorting
Because traditional card sorting involves in-person sessions with physical cards, it has a few limitations:
It’s time consuming to run the sessions, record the groupings, and combine the data across participants.
Researchers and participants need to be together for the sessions, which either limits participants’ geographical representativeness or requires a lot of travel.
Open card sorting results in a lot of data with no easy tools or methods for analyzing the data.
Sometimes participants sort items based on superficial aspects rather than thinking about the meaning of the items.
Online Open Card Sorting
The limitations of in-person card sorting led to the creation of card-sorting software such as IBM’s USort and EZCalc, then to the creation of online card-sorting tools such as WebSort, shown in Figure 3, and OptimalSort.
Online card-sorting tools provide the following advantages:
These online tools remove location limitations. Participants from anywhere in the world can participate without incurring the costs or taking the time that travel involves.
An almost unlimited number of people can participate. This is important because studies have shown that it is necessary to include many participants—at least 30 to 50—to get results that you can generalize to an entire user population. 
Unmoderated sessions let participants complete a study in their own time, free up researchers’ time, and let researchers complete a study in much less time.
These tools automatically capture, analyze, and present data through various useful visualizations. This eliminates a lot of the manual data recording and number crunching traditional card sorting requires.
However, online card sorting has some disadvantages:
Because the physical space on a computer screen is much smaller than a large table, participants can see fewer cards at a time, which makes it more difficult for them to sort and keep track of the categories.
Since no researcher is present, participants have no one to answer their questions or ensure they are doing the card sort correctly.
Researchers can’t observe and discuss grouping and labeling decisions with participants.
Some online card-sorting tools do not allow participants the flexibility to rename cards, leave cards unsorted, add new cards, put a card in more than one location, or create subgroups.
Combining Traditional and Online Card Sorting
It’s possible to get the best of both worlds by combining the advantages of online and in-person card sorting. For example, you can use online card sorting to get data from a large number of participants and hold a smaller number of in-person sessions to get qualitative feedback about participants’ grouping decisions.
Modified-Delphi Card Sorting
As an alternative to open card sorting, information architect Celeste Lynn Paul created a method called modified-Delphi card sorting. Instead of having each participant do an independent card sort, then analyzing the combined data, the modified-Delphi method has participants work one after another, refining a single model.
How the Modified-Delphi Method Works
When using the modified-Delphi method of card sorting, the first participant does a traditional open card sort. Then, throughout the remaining sessions, each participant starts with the organization the previous participant created. A participant can choose either to modify a previous participant’s card sort, as shown in Figure 4, or to scrap it and start over, creating a completely different organization. A study continues until the organizational structure stabilizes and participants are no longer making any significant changes. The result is an organizational structure that participants have reached by consensus.
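One practical question is how to decide when the structure has stabilized. A simple, hypothetical stopping rule—not part of Paul’s published method—is to compare each participant’s result with the previous one, for example by the Jaccard similarity of co-grouped card pairs:

```python
from itertools import combinations

def grouped_pairs(sort):
    """The set of card pairs a participant placed in the same category."""
    pairs = set()
    for group in sort.values():
        pairs.update(combinations(sorted(group), 2))
    return pairs

def agreement(prev, curr):
    """Jaccard similarity between two successive sorts' co-grouped pairs."""
    a, b = grouped_pairs(prev), grouped_pairs(curr)
    return len(a & b) / len(a | b) if a | b else 1.0

# Hypothetical successive versions of the single, shared model.
v1 = {"Cooking": ["pans", "cookbooks"], "Reading": ["novels", "magazines"]}
v2 = {"Kitchen": ["pans"], "Reading": ["cookbooks", "novels", "magazines"]}
v3 = {"Kitchen": ["pans"], "Reading": ["cookbooks", "novels", "magazines"]}

print(agreement(v1, v2))  # 0.25 — the structure is still shifting
print(agreement(v2, v3))  # 1.0 — a sign the model may have stabilized
```

When several successive participants make no significant changes, agreement stays near 1.0, and the study can end.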
Benefits of the Modified-Delphi Method
In creating the modified-Delphi method of card sorting, Paul’s intent was to create a time-saving and more accurate method to replace open card sorting. It offers the following benefits:
It requires fewer participants.
Participants find it easier to think aloud because of the reduced cognitive load of refining an existing organization rather than creating an organization from scratch.
There is less data to analyze because participants work on a single organizational model. The final organizational structure is ready for review immediately after the final participant finishes sorting the cards.
According to Paul, a modified-Delphi card sort results in a better organizational structure than open card sorting does. Having participants refine a single model produces more consistent results with fewer outliers.
Limitations of the Modified-Delphi Method
The modified-Delphi method seems to have the following risks:
Since each participant is free to start the organization over from scratch and the number of participants is small, an outlier participant could compromise the study.
Each participant is influenced by the structure that previous participants have created. This may cause them to overlook other ways in which they might normally have thought about organizing the content and can lead to a groupthink effect.
Although it’s possible to modify some online card-sorting tools to do a modified-Delphi card sort, there are no online tools specifically for this purpose.
Evaluative IA Research Methods
Once you’ve created a taxonomy, you can evaluate it with users to determine whether they can find things where you’ve put them. Three methods of IA evaluation are closed card sorting, tree testing, and usability testing with low-fidelity prototypes.
Closed Card Sorting
Closed card sorting is similar to open card sorting except, instead of asking participants to create their own categories, you ask them to organize the cards into the categories you’ve created. This shows you whether participants group the items in the same way you’ve grouped them. It’s an easy way to see where your structure doesn’t match user expectations.
Traditionally, participants do closed card sorts using physical index cards, but you can also conduct closed card sorts with online card-sorting tools like WebSort, shown in Figure 5. Since this is an evaluative activity that is primarily concerned with whether people choose a particular category for a card, it’s well suited to online research with many participants.
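The analysis behind a closed card sort is straightforward. This minimal sketch, with invented cards and responses, computes the fraction of participants who placed each card in its intended category:

```python
# Hypothetical closed-card-sort results: the category each participant
# chose for each card, plus the categories the designer intended.
intended = {"cookbooks": "Books", "pans": "Kitchen", "knives": "Kitchen"}
responses = [
    {"cookbooks": "Kitchen", "pans": "Kitchen", "knives": "Kitchen"},
    {"cookbooks": "Books", "pans": "Kitchen", "knives": "Kitchen"},
    {"cookbooks": "Kitchen", "pans": "Kitchen", "knives": "Kitchen"},
]

def agreement_rates(intended, responses):
    """Fraction of participants who placed each card where you intended."""
    return {
        card: sum(r[card] == cat for r in responses) / len(responses)
        for card, cat in intended.items()
    }

rates = agreement_rates(intended, responses)
print(rates)  # low agreement on "cookbooks" flags a mismatch
```

Items with low agreement rates are the ones whose placement or labeling does not match user expectations.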
Benefits of Closed Card Sorting
Closed card sorts provide the following benefits:
They provide an easy and quick way of evaluating whether your organization and labeling make sense to users.
The results clearly show which items do not make sense in your organization and where they should go instead.
The results are easier to interpret than the results of an open card sort.
Limitations of Closed Card Sorting
Closed card sorts have the following limitations:
Online tools allow participants to sort items only into top-level categories. You can’t see in what subcategories they would place items.
Closed card sorting rests on the premise that people will look for items on a Web site in the same categories where they placed cards during a card sort. However, some have questioned whether a categorizing activity like card sorting equates to the finding activity that characterizes people’s actual use of Web sites.
Testing for Findability
The concern that the categorization of information doesn’t necessarily match how people find information has led to the development of other methods of evaluating the organization of information through findability tasks. Three methods have evolved over time: reverse card sorting, card-based classification evaluation, and tree testing.
Reverse Card Sorting
In reverse card sorting, researchers show participants the first level of a taxonomy and give them a series of findability tasks, asking where they would look first to find each item. For example, a researcher might ask: Where would you expect to find cookbooks? The limitation of reverse card sorting is that it usually tests only the top level of an organizational structure.
Card-Based Classification Evaluation
In 2003, Donna Spencer wrote about a technique she had created for evaluating a taxonomy outside its user interface: card-based classification evaluation. Similar to reverse card sorting, in this method, researchers present participants with an information hierarchy and ask them where they would expect to find particular items. The big difference with card-based classification evaluation is that it asks participants to navigate through an entire hierarchy, drilling down through multiple subcategories until they select the final category where they would expect to find an item. And instead of giving participants simple tasks that answer questions like Where would you expect to find item A?, it uses realistic tasks that are similar to those in usability testing—such as, Which Australian city has the highest crime rate?
Card-based classification evaluation presents an information hierarchy on index cards, with each card representing a different level of a hierarchy. Each time a participant taps a category, the facilitator switches to the index card that shows the subcategory the participant selected. This requires some skill on the part of the facilitator to keep track of the cards and be able to find the right one to present based on the participant’s tap.
With the difficulty of coordinating many physical cards and the increasing popularity of online card-sorting tools, some have seen value in creating online card-based classification evaluation tools. Optimal Workshop, the creators of the online card-sorting tool OptimalSort, were the first to come out with such a tool, TreeJack. They changed the name of the method from card-based classification evaluation to the more manageable term tree testing. Subsequently, other companies have developed similar tree-testing tools, including PlaneFrame and C-Inspector.
Since TreeJack has emerged as the leader in tree-testing tools, we’ll discuss tree testing in the context of that tool. TreeJack presents participants with realistic tasks, asking them to try to find items by clicking through multiple levels of an information hierarchy, as shown in Figures 6–9. When participants find the final category where they would expect to find an item, they click an I’d find it here button. TreeJack then presents the next task.
One of the main benefits of using an online tree-testing tool is that it automatically collects and presents the data to you in a variety of useful ways. Figures 10–12 show the results of a TreeJack study. For each task, it shows participants’ overall success rate and the locations in the hierarchy participants chose for a particular item. It also shows where participants first clicked at the beginning of each task—giving you insights into where they expected to find each item—and the exact path through the hierarchy each participant took during each task.
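To make those metrics concrete, here is a minimal sketch with invented paths and node names—not TreeJack’s actual data format—that computes a task’s success rate, directness, and first clicks:

```python
# Hypothetical tree-test logs: each participant's click path through the
# hierarchy for one task. Node names and the log format are made up.
paths = [
    ["Home", "Shop", "Books", "Cookbooks"],
    ["Home", "Recipes", "Home", "Shop", "Books", "Cookbooks"],  # backtracked
    ["Home", "Recipes", "Dinner"],                              # failed
]
direct_path = ["Home", "Shop", "Books", "Cookbooks"]  # the intended route
target = "Cookbooks"                                  # the correct answer

def score(paths, target, direct_path):
    """Success rate, directness, and first-click counts for one task."""
    successes = direct = 0
    first_clicks = {}
    for path in paths:
        first = path[1]  # the first click after the root
        first_clicks[first] = first_clicks.get(first, 0) + 1
        if path[-1] == target:
            successes += 1
            if path == direct_path:  # succeeded without backtracking
                direct += 1
    n = len(paths)
    return successes / n, direct / n, first_clicks

success, directness, first_clicks = score(paths, target, direct_path)
print(success, directness, first_clicks)
```

In this invented data, two of three participants succeed, but only one does so directly, and the first-click counts show most participants starting down the wrong branch.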
Benefits of Tree Testing
Tree testing has many advantages over other evaluation techniques, including the following:
It allows you to test outside an interface design, focusing purely on content organization and labeling.
Since it’s a task-based, information-finding activity, not an information-organizing activity, it more closely approximates what people do when navigating through a Web site.
Unlike closed card sorting, it lets you test multiple levels of an information hierarchy.
TreeJack lets you easily test many different tasks, presenting a random subset of tasks to each participant.
Online tree-testing tools automate session facilitation, data collection, and the initial analysis of the results.
Online tree-testing tools allow you to reach a large number of participants, anywhere in the world.
Limitations of Tree Testing
Tree testing also has a few limitations:
Since tree testing occurs outside any interface design, it doesn’t take into account other factors that might affect findability, such as navigation design or other interface elements that may compete for attention.
Since tree testing is unmoderated, researchers cannot observe participants or discuss their decisions and the reasoning that went into the choices they made.
Usability Testing with Low-Fidelity Prototypes
Once you have thoroughly evaluated and refined information organization and labeling through card sorting and tree testing, usability testing with low-fidelity prototypes—similar to that shown in Figure 13—lets you evaluate how well you have implemented the information architecture in an interface design. Since you’ve already resolved the major organization and labeling issues at this point, testing can focus on navigation design and other interface elements that impact findability.
Although there are many online, unmoderated tools you could use for usability testing, I think it’s better, at this stage, to get the rich, qualitative feedback you can get only from moderated usability tests. If you are going to do some unmoderated usability testing, save that for much later in the design process when you could do a summative usability test of a high-fidelity prototype. Because low-fidelity prototypes don’t contain all content and functionality, they require a moderator to guide participants through the test and interpret their feedback.
Benefits of Testing with Low-Fidelity Prototypes
Testing with low-fidelity prototypes offers the following benefits:
You can create low-fidelity prototypes quickly and easily.
Testing with prototypes lets you see how navigation design and interface elements affect findability.
Testing is more realistic in the context of an actual screen design.
You can easily make changes throughout an iterative testing and design process.
Limitations of Testing with Low-Fidelity Prototypes
Testing with low-fidelity prototypes has the following limitations:
Low-fidelity prototypes rarely include all pages that would be necessary to allow participants to navigate through an entire path to each item.
Usually, navigation elements are not fully functional in low-fidelity prototypes.
Combining IA Research Methods
These IA research methods are not meant to be mutually exclusive. As Table 1 shows, it’s best to combine research methods, using different methods at the various stages of an iterative design process. Start with user research to understand users, their tasks, their understanding of a site’s content, and their language.
Use open card sorting or modified-Delphi card sorting to generate ideas for organization and labeling. You may want to conduct both in-person card-sorting sessions—to get qualitative information about why people organize and label items the way they do—and online card sorting—to gain greater confidence in your results by including a large number of participants. It may be useful to combine modified-Delphi card sorting with online open card sorting and compare the results.
Once you’ve created your taxonomy, skip closed and reverse card sorting to go directly to tree testing for feedback on its organization. This lets you test all levels of your information hierarchy through realistic finding tasks. Make adjustments to your taxonomy based on the results and conduct additional rounds of tree testing as necessary.
Finally, once you’ve created your initial interface designs, mocking them up in a low-fidelity prototype, conduct usability testing with them. Make design changes based on the results, then test again, later in the design process, using a higher-fidelity prototype to evaluate the design with fully interactive navigation.
Table 1—Comparison of methods of IA research
Open card sorting: relationships between items; early user research
Modified-Delphi card sorting: relationships between items; early user research
Closed card sorting: information architecture design
Reverse card sorting: information architecture design
Usability testing with low-fidelity prototypes: interface design
Compared to the tools we had just five years ago, we’ve come a long way with the number of methods and tools that are available to help us gather user feedback on information architecture. Online tools provide the ability to reach a large number of participants, anywhere in the world, and require less time for conducting the research, gathering the information, and analyzing the results. With such quick, easy, and inexpensive methods, there is no longer any reason not to include user research and evaluation throughout a user-centered design process—even during the earliest stages of the process.
Jim has spent most of the 21st Century researching and designing intuitive and satisfying user experiences. As a UX consultant, he has worked on Web sites, mobile apps, intranets, Web applications, software, and business applications for financial, pharmaceutical, medical, entertainment, retail, technology, and government clients. He has a Master of Science degree in Human-Computer Interaction from DePaul University.