
The AI Value Rubric: A Structured Approach to Prioritizing AI Solutions

December 15, 2025

Some have hailed the introduction of artificial intelligence (AI) as a seismic shift, an epochal change, and the most powerful force to affect business since the Internet. However, this article might disappoint you. In truth, AI is not meant to do everything, all the time.

In the face of technological hype, product teams and UX professionals risk falling into the “AI-for-everything” trap. They often pursue agentic AI solutions because the technology is novel, not because it represents the most efficient or high-value answer to a user painpoint. The outcome is predictable: wasted resources, unnecessary complexity, and user frustration with a product that fails to solve the right problem. Therefore, the most effective AI strategy involves mastering the power of no. UX designers must move the conversation from “Can we build this with AI?” to this more responsible and strategic question: “Should we build this with AI?”

Answering this question requires a rigorous, data-informed tool for governance. Therefore, I have developed the AI Value Rubric to force a structured conversation that balances user needs, business objectives, and technical aptitude, helping product teams separate high-impact AI opportunities from technical vanity projects.


Deconstructing the Rubric: Five Dimensions of Value

The AI Value Rubric evaluates potential AI use cases across five distinct dimensions. Score each of these on a simple 1–5 Likert scale, with 1 representing the lowest value / impact and 5 the highest. A high total score indicates a prime candidate for an agentic solution.

Dimension 1: Frequency: How Often

This dimension measures how often the painpoint or task occurs within a defined timeframe—for example, daily, weekly, or per user session, as follows:

  • Why it matters—AI workflows—which are complex to build, train, and maintain—offer the highest return on investment (ROI) when they’re handling repetitive, high-volume tasks. Automating a task that occurs once a year for a single user wastes capacity that could instead serve tasks that occur far more frequently.
  • Scoring guide—1, Rare / Annual, to 5, Daily / Hourly
  • Example—Score 1 for automating a reset-password workflow for an individual user versus Score 5 for creating an agent to perform daily data reconciliation across thousands of records.

Dimension 2: Impact on Time / Effort: How Hard

This dimension measures the friction and manual effort that completing the task currently requires, as follows:

  • Why it matters—This quantifies the slog factor. High-friction, multi-step, or mentally taxing tasks are ideal candidates for delegation to an agent. Agents excel at asynchronous, complex data gathering, giving cognitive time back to the human. [1]
  • Scoring guide—1, Negligible Effort, to 5, Hours / Days Lost per instance
  • Example—Score 1 for creating a one-off summary of a single meeting transcript versus Score 5 for creating an agent to perform a weekly trend analysis across dozens of documents, then synthesize a report.

Dimension 3: Business Impact: So What?

This dimension measures the direct effects the painpoint has on the organization’s strategic objectives, as follows:

  • Why it matters—UX professionals secure resources by aligning user painpoints with measurable business outcomes. This score forces teams to articulate value in terms of revenue, retention, compliance, or operational efficiency.
  • Scoring guide—1, Minimal impact, no bottom-line effect, to 5, Critical impact, significant revenue or compliance effects
  • Example—Score 1 for updating a knowledge-base article about a minor feature of a business-to-business (B2B), software-as-a-service (SaaS) platform versus Score 5 for designing and implementing a new machine-learning (ML) model for a B2B, AI-powered marketing tool that would potentially increase lead conversions across all customer segments.

Dimension 4: User Frustration: Emotion

This dimension measures the psychological cost of the task—that is, the user’s emotional state when experiencing the painpoint, as follows:

  • Why it matters—High frequency does not always equal high pain. Some tasks are frequent but tolerable; others are rare but rage-inducing. This psychological tie-in is essential because emotional pain drives user abandonment.
  • Scoring guide—1, Mild annoyance, to 5, Severe frustration / rage quits
  • Example—Score 1 for fixing a tooltip that appears slightly too quickly versus Score 5 for creating an agent to fix a perpetually spinning Loading… icon that blocks the final Save button after 45 minutes of meticulous work. The user is now weeping softly.

Dimension 5: AI Opportunity: Aptitude

This dimension provides a technical assessment of whether AI or a simpler software solution would best solve the problem, so scoring it requires an understanding of the models you’re currently using or building, as well as collaboration with disciplines across and outside of User Experience. It measures the opportunity that using AI would deliver, as follows:

  • Why it matters—This score prevents the over-engineering of solutions. If a problem involves only strict, deterministic rules, an agentic AI would be overkill. A high score here requires that solving the problem would necessitate probabilistic reasoning, synthesis of unstructured data, or complex multi-step orchestration.
  • Scoring guide—1, Low, indicating a rule-based solution would be preferred, to 5, High, indicating a solution would require synthesis and reasoning.
  • Example—Score 1 for building an AI tool that decides whether you need an umbrella (it’s just checking a weather API, not predicting the downfall of an empire) versus Score 5 for creating an agent to synthesize market trends, sentiment analysis, and regulatory changes to generate a personalized, legally sound, and profitable quarterly investment strategy.

Case Study: Applying the AI Value Rubric

To demonstrate the utility of the AI Value Rubric, consider a common painpoint of an internal enterprise tool: the Procurement Team’s Manual Quote Approval process. Table 1 represents a breakdown of this painpoint across the five dimensions of the AI Value Rubric.

Table 1—Painpoint: A manual quote-approval process takes too long

  • Frequency, Score 4—Occurs multiple times a week, resulting in a continuous drain on resources.
  • Time / Effort, Score 5—Requires cross-referencing three separate vendor PDFs, checking against a dynamic internal-compliance spreadsheet, and obtaining two layers of human email approval. Takes 45 minutes per quote.
  • Business Impact, Score 4—Delays directly lead to missed vendor deadlines, causing project setbacks and increased contract costs.
  • User Frustration, Score 4—High tedium, repetitive data entry, and anxiety over compliance errors.
  • AI Opportunity, Score 5—Requires reading unstructured PDF (Portable Document Format) data, extracting specific pricing variables, cross-referencing dynamic rules that require large-language model (LLM) reasoning, and initiating and orchestrating multi-step workflows. This is a perfect use case for an agent.
  • Total Score—22 (4 + 5 + 4 + 4 + 5)
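For teams tracking many painpoints, tallying rubric scores is easy to automate. The following is a minimal, hypothetical Python sketch, not part of the rubric itself; the dimension keys and the `total_score` helper are my own illustrative choices, with the data mirroring the Manual Quote Approval case study:

```python
# Minimal sketch: scoring a painpoint with the AI Value Rubric.
# The dimension names follow the article; the data structure is illustrative.

DIMENSIONS = (
    "frequency",
    "time_effort",
    "business_impact",
    "user_frustration",
    "ai_opportunity",
)

def total_score(scores: dict) -> int:
    """Sum the five 1-5 dimension scores, validating each input."""
    for dim in DIMENSIONS:
        value = scores[dim]
        if not 1 <= value <= 5:
            raise ValueError(f"{dim} must be scored 1-5, got {value}")
    return sum(scores[dim] for dim in DIMENSIONS)

# The Manual Quote Approval case study scores:
quote_approval = {
    "frequency": 4,
    "time_effort": 5,
    "business_impact": 4,
    "user_frustration": 4,
    "ai_opportunity": 5,
}

print(total_score(quote_approval))  # 22
```

Keeping scores in a plain dictionary like this makes it trivial to sort a backlog of candidate painpoints by total score.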

The Verdict

A total score of 22 indicates a green light for an agentic AI solution. The confluence of high frequency, high effort, and a problem that specifically requires AI’s ability to handle unstructured data makes this a prime candidate for investment. The UX team would be justified in proposing a budget to build an agent to automate the data extraction and pre-approval logic.

Interpreting the Data: The Four Quadrants of Prioritization

Once you score the top painpoints, you can visualize them on a prioritization map to guide strategic decision-making, as follows:

  • High-Value Zone—High Pain / High AI Opportunity—These Agentic Stars warrant immediate investment. They solve high-impact user problems for which an agent is the superior, most efficient tool.
  • Traditional Dev Zone—High Pain / Low AI Opportunity—This is a common trap. If a problem causes high pain, but requires strict, deterministic logic—for example, validating a date field—build a standard, reliable software feature, not an AI agent. Avoid complexity where simplicity suffices.
  • Shiny-Object Zone—Low Pain / High AI Opportunity—This is a technological vanity trap. Just because an agent can solve a problem easily doesn’t mean it offers significant value to the user or the business. Deprioritize such AI implementations to avoid wasting resources.
  • Ignore Zone—Low Pain / Low AI Opportunity—Move to the backlog or delete. Do not allocate any significant development time.
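The four zones above can also be assigned programmatically. The sketch below is a hypothetical Python example: the midpoint threshold of 3 and the choice to average the four non-technical dimensions into a single pain score are my assumptions, not part of the rubric:

```python
# Illustrative quadrant assignment. The threshold of 3 and the averaging
# of the four pain dimensions are assumptions, not the published rubric.

def quadrant(scores: dict, threshold: float = 3.0) -> str:
    """Place a scored painpoint into one of the four prioritization zones."""
    pain_dims = ("frequency", "time_effort", "business_impact", "user_frustration")
    pain = sum(scores[d] for d in pain_dims) / len(pain_dims)
    high_pain = pain > threshold
    high_ai = scores["ai_opportunity"] > threshold
    if high_pain and high_ai:
        return "High-Value Zone (Agentic Stars)"
    if high_pain:
        return "Traditional Dev Zone"
    if high_ai:
        return "Shiny-Object Zone"
    return "Ignore Zone"

print(quadrant({
    "frequency": 4, "time_effort": 5, "business_impact": 4,
    "user_frustration": 4, "ai_opportunity": 5,
}))  # High-Value Zone (Agentic Stars)
```

A team could tune the threshold, or weight the pain dimensions differently, to match its own appetite for AI investment.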

Visualizing the Strategy

The final step is to translate the dimensions’ scores into a compelling visual that aids effective stakeholder presentations. You can map the data as follows:

  • X-axis—Business Impact
  • Y-axis—Frequency
  • Bubble Size—Impact on Time / Effort
  • Color—AI Opportunity

This visualization allows a VP of Product or a C-level executive to instantly see where the big bubbles, representing high effort, align with high-impact business outcomes. It makes the case for investment undeniable.
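As a sketch of how that mapping might be encoded for a charting library, the following hypothetical Python function translates rubric scores into bubble-chart attributes. The size multiplier and the color ramp are illustrative assumptions; any tool that supports bubble charts would work:

```python
# Hypothetical translation of rubric scores into bubble-chart attributes.
# The size multiplier and 0-1 color ramp are illustrative choices.

def plot_attributes(scores: dict) -> dict:
    """Map a painpoint's rubric scores onto the prioritization map."""
    return {
        "x": scores["business_impact"],       # X-axis: Business Impact
        "y": scores["frequency"],             # Y-axis: Frequency
        "size": scores["time_effort"] * 100,  # Bubble area scales with effort
        # Darker color for higher AI Opportunity (1 -> light, 5 -> dark):
        "color_intensity": scores["ai_opportunity"] / 5,
    }

attrs = plot_attributes({
    "frequency": 4, "time_effort": 5, "business_impact": 4,
    "user_frustration": 4, "ai_opportunity": 5,
})
print(attrs)  # {'x': 4, 'y': 4, 'size': 500, 'color_intensity': 1.0}
```

Feeding a list of such dictionaries to a scatter or bubble chart produces the prioritization map in one pass.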

Figure 1 shows an example of such a visualization, using the previous example, the Manual Quote Approval task.

Figure 1—Visualization of a proposed AI solution on a prioritization map

Table 2 breaks down the elements of this prioritization map using the AI Value Rubric’s guidelines.

Table 2—Breaking down the elements of a prioritization map

  • X-axis Position, mapping Business Impact, Score 4, High Impact—The position of the bubble at the far right shows that the solution would solve a problem that directly influences revenue and costs.
  • Y-axis Position, mapping Frequency, Score 4, High Frequency—The position of the bubble high on the chart confirms that this is a recurring, systemic problem, justifying the automation effort.
  • Bubble Size, mapping Impact on Time / Effort, Score 5, Very Large—A very large bubble—representing the 45 minutes of manual, cross-referencing work for each quote—quantifies the magnitude of the painpoint.
  • Bubble Color, mapping AI Opportunity, Score 5, Highest Intensity—The dark color representing the highest score places this point firmly in the Agentic Stars quadrant, visually confirming that the complexity of the task requires the synthesis and orchestration capabilities of an AI agent. Note: You can choose the meanings of your colors, then provide a color key and use the colors consistently.

The Power of the Strategic No

The AI Value Rubric is an essential governance tool. It moves the product conversation away from subjective enthusiasm for a technology toward an objective, quantifiable assessment of its value. It provides UX professionals with a structured, data-driven method of leading product strategy.

The most effective AI strategy involves mastering the power of deciding what not to build. By quantifying the value of a solution before writing a line of code or a single prompt, UX professionals can ensure that agentic AI remains a powerful tool for human empowerment, not a driver of technical vanity projects. 

Victor Yocco

UX Researcher at ServiceNow

Philadelphia, Pennsylvania, USA

Victor is a UX researcher, author, and speaker with over 15 years of experience helping the world’s largest organizations build human-centered products. His work focuses on the intersection of psychology, communication, and design. He has authored numerous publications on UX topics, including his book Design for the Mind: Seven Psychological Principles of Persuasive Design and the forthcoming book Designing Agentic AI Experiences. Victor holds a PhD in Environmental Education, Communication, and Interpretation from The Ohio State University.
