
Card Sorting in the Age of AI: Adapting Classic Methods for Modern Challenges

Structuring Success

Organizing content to empower users

A column by Henry Adepegba
February 23, 2026

Card sorting has been a cornerstone of information architecture for decades. This method works well because it lets real people show you what they think. You can hand participants a set of labeled cards representing pieces of content, ask them to group the cards in whatever way makes sense to them, then study the patterns that emerge. The result: a window into users’ mental models that no amount of internal brainstorming could replicate.

However, the conversation around card sorting is changing. Artificial intelligence (AI) tools can now generate category structures in seconds. Large language models (LLMs) such as ChatGPT and Claude can sort a list of 40 content items into plausible groupings without your needing to recruit a single participant. Some teams have started asking whether traditional card sorting is still worth the effort. Others have gone even further, replacing participant research and information-architecture work entirely with AI outputs.

Both reactions miss the mark. AI does not make card sorting obsolete, but it does change how smart teams should use it. In this first installment of my new column Structuring Success, I’ll share what AI can and cannot do for card sorting, illustrate the differences that AI can make with real product examples, and offer a practical framework for combining human insights with AI’s speed.


What AI Can and Cannot Do

AI is now entering the card-sorting space from two new directions. The first is simulation—using an LLM to perform the sort itself. Provide the LLM with a list of content items, ask it to organize them into groups, and receive a category structure in seconds. The second is analysis—using AI to process the results of a human card sort faster, merging similar category labels, generating clustering visualizations, and summarizing qualitative data from think-aloud sessions.

On the simulation side, the results are genuinely interesting. Research from MeasuringU compared ChatGPT’s card-sorting output against data from 200 human participants who sorted 40 items on the Best Buy Web site. ChatGPT produced five categories, matching the most common number of groups among human sorters. The overall structures were broadly similar, and the overlap in item placement was substantial. At first glance, when comparing the outputs of humans and the AI, shown in Figures 3 and 4, it looks like AI might be able to do the job.

Figure 3—How human participants organize content items
Figure 4—How AI organizes the same content items

But look more closely and cracks appear. When humans sorted those Best Buy items, they created categories that were based on how they use the products in their lives. In contrast, an AI defaults to the conventional product taxonomies that its training data reflects. This distinction matters enormously. A navigation structure that is built on an industry’s taxonomy feels logical to product managers but foreign to shoppers and other visitors. A structure that is built on users’ mental models feels natural from the first click.

A 2025 study examining AI-augmented card sorting across 28 diverse datasets confirmed this pattern. The researchers found that AI-simulated sorts produced logically coherent structures, but could not capture the context-dependent, sometimes surprising ways in which specific user groups think about specific content. Their conclusion was unambiguous: AI card sorting is a useful supplement, but not a replacement for card sorting by humans, because the human factor in UX research remains critical.

There is also the matter of qualitative depth. Jakob Nielsen has argued that the most valuable part of a card-sorting study is often the think-aloud data—the moments when participants explain why they placed a particular card in a specific group. Their reasoning reveals the motivations and assumptions behind users’ behaviors. When a participant sorting content for a healthcare portal put prescription refills under “things I do every month” rather than “pharmacy services,” that decision revealed something about how patients organize their health management that no AI could surface on its own.

On the analysis side, AI is far less controversial and far more useful. Tools such as Maze, Great Question, and OptimalSort now offer AI-powered features that automatically merge similar category labels from open sorts, generate agreement matrices, and summarize patterns across large participant pools. These capabilities address one of card sorting’s biggest practical challenges: the hours of manual work that are necessary to make sense of open-sort data when every participant creates unique labels. AI can compress that analysis time from days into minutes without sacrificing accuracy.
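
To make that analysis step concrete, here is a minimal Python sketch of one computation these tools automate: a pairwise agreement matrix showing how often participants placed two cards in the same group. The sample data and function names are illustrative assumptions, not any vendor’s actual API.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical open-sort data: one dict per participant,
# mapping each category label to the cards placed in it.
sorts = [
    {"Streaming": ["Smart TVs", "Soundbars"], "Computing": ["Laptops", "Monitors"]},
    {"Home Theater": ["Smart TVs", "Soundbars", "Monitors"], "Work": ["Laptops"]},
]

def agreement_matrix(sorts):
    """For every pair of cards, compute the fraction of participants
    who placed both cards in the same group."""
    pair_counts = defaultdict(int)
    for sort in sorts:
        for group in sort.values():
            for pair in combinations(sorted(group), 2):
                pair_counts[pair] += 1
    return {pair: count / len(sorts) for pair, count in pair_counts.items()}

for (a, b), score in sorted(agreement_matrix(sorts).items(), key=lambda kv: -kv[1]):
    print(f"{a} + {b}: grouped together by {score:.0%} of participants")
```

High-agreement pairs are strong candidates for the same category; low-agreement pairs are the cards worth probing in moderated sessions.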

A Practical Framework That Combines Card Sorting and AI

Rather than choosing between AI and traditional card sorting, the most effective approach uses each method where it performs best. Through my own practice, I have refined a four-step framework that balances the speed of AI with the depth of human insights.

Figure 5—A four-step framework for AI-augmented card sorting
  • Step 1: Generate hypotheses with AI. Before you recruit any participants, feed your card labels into an LLM and ask it to sort them from the perspective of your target users. Run the prompt three or four times with slight variations to see where the AI’s output is consistent and where it shifts. Consistent groupings suggest items with strong natural associations, while inconsistencies highlight ambiguous items that would benefit most from human testing. This step takes minutes and costs nothing, but it gives you a meaningful baseline to work from. (For one way to script this consistency check, see the first sketch after this list.)
  • Step 2: Run a targeted human card sort. With your AI-generated hypotheses as a reference point, conduct an open card sort with 15 to 20 participants who are representative of your actual user base. The Nielsen Norman Group recommends 15 to 30 participants for qualitative insights. In my experience, 20 well-recruited participants are enough to reveal the patterns that matter most. Pay close attention to the places where human results diverge from the AI baseline. When participants in the University of Iceland’s card-sort study consolidated the site’s 13 categories into a handful of clear groups, the value of the study was not in confirming what the team already suspected, but in discovering groupings that no one on the team had considered.
  • Step 3: Accelerate analysis with AI tools. Once you have your human data, use AI-powered analysis features of tools such as Maze or OptimalSort to merge similar labels, generate clustering diagrams, and identify cards that participants struggled to place. Then apply your own expertise to interpret those patterns, weigh them against business requirements and content strategy, and craft a proposed navigation structure. The AI handles the tedious parts. You handle the judgment calls.
  • Step 4: Validate with tree testing. Once you have a proposed structure, test it. Tree testing asks users to find specific items within a simplified navigation hierarchy, which tells you whether your card-sorting insights translate into findable content. In a Fortune 500 financial-services case study, John Nicholson conducted tree testing to evaluate three candidate information-architecture (IA) models that he had derived from card sorts. The winning model outperformed the others on findability metrics by 75%. Skipping this validation step would mean you were building on assumptions rather than evidence. (See the second sketch after this list for a minimal scoring example.)
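
For Step 1, the sketch below shows one way to script the consistency check: run the same sorting prompt several times and see which card pairs the model groups together on every run. It assumes the OpenAI Python client (v1 or later); the model name, sample cards, and prompt wording are illustrative assumptions that you would adapt to your own stack.

```python
import json
from itertools import combinations

from openai import OpenAI  # assumes the openai Python package, v1+

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical card labels for a healthcare portal.
CARDS = ["Prescription refills", "Lab results", "Find a doctor", "Billing"]

PROMPT = (
    "You are a patient using a healthcare portal. Group these content items "
    "in whatever way makes sense to you and name each group. Respond with "
    "JSON mapping each group name to a list of items.\n" + json.dumps(CARDS)
)

def run_simulated_sort():
    """One simulated open sort; the model name is an assumption."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": PROMPT}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

# Run the same prompt a few times. Pairs grouped together on every run
# suggest strong associations; unstable pairs flag the ambiguous cards
# that most need human testing.
runs = [run_simulated_sort() for _ in range(4)]
pair_counts = {}
for sort in runs:
    for group in sort.values():
        for pair in combinations(sorted(group), 2):
            pair_counts[pair] = pair_counts.get(pair, 0) + 1

for pair, count in sorted(pair_counts.items(), key=lambda kv: -kv[1]):
    print(f"{pair[0]} + {pair[1]}: grouped together in {count} of {len(runs)} runs")
```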
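For Step 4, here is an equally hedged sketch of the basic findability metric in tree testing: the fraction of tasks in which a participant’s final selection matched the target item. The data shapes are assumptions for illustration; real tree-testing tools also report measures such as directness and time on task.

```python
# Hypothetical tree-test results per candidate IA model:
# each record is (target item, node the participant chose).
results_by_model = {
    "Model A": [("Prescription refills", "Prescription refills"),
                ("Lab results", "Billing")],
    "Model B": [("Prescription refills", "Prescription refills"),
                ("Lab results", "Lab results")],
}

def success_rate(results):
    """Fraction of tasks in which the participant's final node
    matched the target item."""
    hits = sum(1 for target, chosen in results if target == chosen)
    return hits / len(results)

for model, results in results_by_model.items():
    print(f"{model}: {success_rate(results):.0%} task success")
```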

Designing Better Card Sorts for Today

Beyond the AI question, the practice of card sorting itself deserves ongoing refinement. Three lessons from recent projects and research stand out.

  • Write better card labels. Research from the Nielsen Norman Group has shown that participants tend to group cards by matching keywords rather than by thinking about what each card means. For example, if a set of cards includes Employee Directory, Employee Benefits, and Employee Training, participants will lump those cards together because of the shared word, not because those items actually belong in the same category. Before you run a study, review your labels for shared keywords that might create false groupings. This is another area where AI can help during the planning phase: ask an LLM to review your card labels and flag potential keyword-matching risks. (A simple heuristic version of this check appears after this list.)
  • Scope your card sorts carefully. When Bit Zesty worked on Barnardo’s Web-site redesign, they faced an information architecture with 185 page labels. Sorting all 185 at once would have overwhelmed participants, so they chose to limit the card sort to 40 key labels, then validate the full structure through tree testing afterward. For large or rapidly evolving content libraries, consider running card sorts that focus on specific content areas rather than attempting to sort everything in a single session. Keeping the card count between 30 and 50 manages cognitive load and produces cleaner data.
  • Do not overlook the qualitative layer. Remote, unmoderated card sorts are convenient and scalable, but they sacrifice the think-aloud insights that make card sorting truly powerful. Whenever budget allows, conduct at least a few moderated sessions during which you can ask participants why they made certain grouping decisions. A case study from the Museum of Arts and Design in New York City illustrated the value of moderation well. During moderated sessions, participants explained that they did not understand labels such as MAD Ball and Burke Prize, which were internal names that meant nothing to visitors. That qualitative feedback was more actionable than any clustering diagram.
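
To illustrate the keyword-matching risk from the first item above, here is a small, self-contained Python sketch that flags card labels sharing a keyword. It is a rough heuristic and a stand-in for the LLM review that the item suggests; the sample labels are hypothetical.

```python
from collections import defaultdict

# Hypothetical card labels for illustration.
labels = [
    "Employee Directory",
    "Employee Benefits",
    "Employee Training",
    "Office Locations",
    "Benefits Enrollment",
]

STOPWORDS = {"the", "a", "an", "of", "and", "for"}

def keyword_risks(labels):
    """Group labels by shared keyword; any keyword appearing on two or
    more cards may pull those cards together regardless of meaning."""
    by_keyword = defaultdict(list)
    for label in labels:
        for word in label.lower().split():
            if word not in STOPWORDS:
                by_keyword[word].append(label)
    return {word: cards for word, cards in by_keyword.items() if len(cards) > 1}

for word, cards in keyword_risks(labels).items():
    print(f"Shared keyword '{word}': {cards}")
```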

Looking Ahead at the Future of Card Sorting

Card sorting is not a relic of a simpler era of Web design. It is a research method that you can adapt to your current design challenges—just as you can adapt a good information architecture. AI makes generating starting points faster, analyzing results easier, and exploring candidate structures possible before investing in recruitment. But AI does not remove the need to understand how your actual users think about your specific content within a specific context.

In the future, the information architects who will get the most value from card sorting will be those who treat AI as a collaborator rather than a shortcut. Use AI for speed, pattern recognition, and analysis at scale—where it excels. Reserve human participation for what only humans can provide—the messy, context-dependent mental models that turn a logical taxonomy into a navigation structure that people want to use.

In the next installment of Structuring Success, I’ll explore cross-platform information architecture and the challenges of maintaining structural consistency when users move between the Web, mobile, and voice user interfaces. Until then, try my AI-augmented framework on your next project. You might be surprised by what the combination of machine speed and human insights can reveal. 

Freelance Writer

Abeokuta, Ogun State, Nigeria

Henry Adepegba

Henry is an SEO content writer and researcher with five years of experience. He focuses on writing content that enlightens UX designers, content designers, and product managers. He has worked as a Senior Content and UX Writer at Brave Achievers, a company that is dedicated to mentoring emerging product designers and equipping them with solid training. He has also freelanced for pangea.a, creating articles on UX design for their platform. While he writes about other things from time to time, he dedicates a large portion of his time to writing about everything UX.
