
Statistically Right, Humanly Wrong: The Limits of Data in AI-Driven Experiences

September 8, 2025

In the age of artificial intelligence (AI), data is often cited as the new oil—a phrase that data scientist Clive Humby coined in 2006, capturing the idea that data is a resource to be mined, refined, and leveraged for competitive advantage. However, unlike oil, data is abundant, generated continually, and fundamentally tied to human activity and behavior. This abundance has led to the belief that more data would naturally yield better, more intelligent solutions.

Yet as machine learning (ML) and predictive analytics increasingly power digital products, a paradox emerges: despite the abundance of data, a staggering 85% of AI implementations ultimately fail. This high failure rate reveals a critical disconnect: even with the proliferation of data and the sophistication of AI algorithms, many AI-powered products still deliver experiences that feel impersonal, irrelevant, or even alienating.


Since we live in a world that is generating over 400 million terabytes of data every day and already have the technology to build intelligent systems at our fingertips, why do so many AI applications continue to fall short? Maybe it’s because more data doesn’t automatically lead to better experiences. Without factoring in context, user intent, and behavioral insights, data and fancy algorithms alone cannot deliver on the promise of AI.

The core issue lies in a fundamental design tension: when AI systems rely too heavily on forecasting—using past data to predict what comes next—and lack foresight—the ability to imagine different future scenarios and adjust accordingly—they risk missing the mark entirely. Systems that merely extrapolate from yesterday’s patterns are ill equipped to support the nuanced, context-rich realities of human life. Why? Human goals and behaviors are not entirely predictable; they evolve, shift, and adapt over time.

In this article, I’ll examine why gathering more data does not automatically translate into better experiences for users. I’ll also explore how UX professionals can move beyond predictive modeling to designing AI systems that are context aware, ethically aligned, and capable of supporting genuine human growth.

When Life Stops Following the Data Script

Even the most celebrated AI-powered products can falter when reality diverges from the data script. For example, Nest, the intelligent thermostat that launched in 2011, quickly became the poster child for anticipatory design, learning from users’ habits and environmental data to proactively adjust the temperature and save energy. This approach was so compelling that Google acquired the company just three years after its launch, transforming Nest from a novelty into the mainstream success shown in Figure 1.

Figure 1—Google’s Nest Learning Thermostat

Image source: Google blog

But, like so many predictive systems, Nest struggles with context awareness. A simple example: when a baby joins the household, priorities change. Comfort and safety now take precedence over energy efficiency, but Nest has no way of knowing that. The system, trained on months or even years of the household’s prior routines, continues to optimize for yesterday’s needs, unaware that the stakes and the user’s goals have fundamentally changed. No amount of historical data can help Nest adapt to this new context unless its designers build the system to recognize and respond to such shifts.

Nest performs well until life stops following the existing data script. This is the critical limitation of AI systems that rely solely on forecasting: while they are statistically correct, they are often humanly wrong. They excel at repeating the patterns of the past, but stumble when users’ lives evolve in ways that the data never saw coming.


The Problem with an Over-reliance on Forecasting

Forecasting is the backbone of most of today’s AI-powered products. By analyzing historical data and behavior patterns, forecasting systems attempt to predict what will happen next. This approach excels in business-to-business (B2B) contexts where processes are structured and repetitive. Think of supply-chain-management systems predicting inventory needs, customer-relationship-management (CRM) platforms suggesting the next step in a sales sequence, or project-management tools estimating task-completion times based on team velocity.
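
To make that concrete, here is a minimal Python sketch of the kind of velocity-based projection a project-management tool might use. The function name and the numbers are illustrative assumptions, not any specific product’s logic.

def estimate_remaining_sprints(remaining_points: float,
                               recent_velocities: list[float]) -> float:
    # Classic forecasting: average recent sprint velocity and project
    # how many sprints the remaining backlog will take.
    average_velocity = sum(recent_velocities) / len(recent_velocities)
    return remaining_points / average_velocity

# With a stable team, projecting the past forward works well.
print(estimate_remaining_sprints(120, [28, 32, 30]))  # 4.0 sprints

This works precisely because sprint velocity, like inventory turnover or a sales cadence, tends to be stable and repetitive.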

However, predictive systems fall short when statistical patterns collide with human nuances and priorities. Unlike the predictable patterns of business workflows, human behavior in entertainment, learning, wellness, or creative work is far more nuanced and context dependent. A prediction that makes perfect statistical sense can miss the mark entirely when it fails to account for individual priorities, changing moods, life circumstances, or the simple human desire for novelty and surprise.

The challenges AI systems face today aren’t just technical—they’re profoundly human. The old Digit app, shown in Figure 2, is a prime example. It was designed to help people save money automatically by analyzing users’ spending habits and moving small amounts into a savings account. On paper, this sounded perfect—automated, effortless savings. In practice, it overlooked a critical aspect: contextual awareness. The service didn’t account for people living paycheck to paycheck, for whom even a small withdrawal at the wrong moment could cause real harm. Nor did it consider the unpredictability of real life, irregular incomes, or emergencies. Without real-time awareness, the system triggered overdrafts and surprise deductions, creating stress rather than security for its users.

Figure 2—The Digit app

The problem wasn’t the idea or the technology; it was the design. Digit relied too much on past data and lacked the ability to adapt to the present reality. This is a classic case of over-reliance on forecasting without enough foresight to account for uncertainty, change, and context. If techniques of foresight such as scenario planning or horizon scanning had been integrated into the design, Digit could have anticipated edge cases, and the service experience would have improved accordingly.

The lesson is that AI systems must be flexible to be resilient. They need to understand real-life context and, above all, be transparent. If users lose trust, the system fails, no matter how smart it is.
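
To illustrate what that flexibility could look like, here is a minimal, hypothetical Python sketch of the kind of guard Digit lacked. The names, thresholds, and numbers are assumptions for illustration, not Digit’s actual logic: a forecast still suggests a transfer amount, but a check against the user’s present balance and known obligations decides whether that transfer is safe today.

from dataclasses import dataclass

@dataclass
class AccountContext:
    balance: float           # current checking-account balance
    upcoming_bills: float    # known bills due before the next paycheck

def forecasted_transfer(avg_daily_surplus: float, days: int = 3) -> float:
    # Forecast-only logic: suggest a transfer based purely on the surplus
    # the model expects from historical spending patterns.
    return max(0.0, avg_daily_surplus * days * 0.5)

def context_aware_transfer(amount: float, ctx: AccountContext,
                           buffer: float = 150.0) -> float:
    # Foresight-informed guard: never let an automated withdrawal push the
    # account below a safety buffer once known obligations are covered.
    available = ctx.balance - ctx.upcoming_bills - buffer
    if available <= 0:
        return 0.0                    # skip the transfer entirely
    return min(amount, available)     # cap the transfer at what is safe today

# The model suggests a transfer of about $25, but context says otherwise.
ctx = AccountContext(balance=180.0, upcoming_bills=60.0)
suggested = forecasted_transfer(avg_daily_surplus=16.7)
print(context_aware_transfer(suggested, ctx))  # 0.0, so no overdraft risk is taken

The forecast still informs the suggestion; the present-day context simply gets the final say, which is exactly the flexibility and transparency the lesson above calls for.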

Mint, shown in Figure 3, offers another instructive example. This app promised to help people manage their finances through automated budgeting and personalized financial advice, aiming to anticipate users’ needs and simplify financial decision-making. But its overly generic guidance frequently fell short, and users lost interest. Intuit ultimately acquired Mint in 2009, and the product’s original vision faded as it evolved.

Figure 3—The old Mint app

Mint’s core issue was a lack of personalization. The system relied on static models instead of adapting to real-time financial data or user behaviors. Many users, especially those with low financial literacy, tended to over-trust the service, assuming its advice was personalized or expert approved, even though it wasn’t. With better methods of foresight, such as scenario planning and user visioning, Mint could have identified different user types and designed advice that matched their unique goals and perspectives. If a system can’t adapt, explain itself, or support user growth, it loses relevance and risks eroding users’ trust. AI systems must go beyond broad forecasts and deliver actionable, ethical, user-centered support or risk becoming just another app people quickly delete.

Why Do So Many AI Systems Fail?

Many AI systems fail not because of technical limitations, but because they lean too heavily on forecasting and, thus, neglect the power of foresight. Forecasting is about projecting from the past into the present and the future, using quantitative data to estimate what might happen next. It is focused, probabilistic, and effective for stable, repetitive behaviors. However, forecasting alone has its limitations: it often overlooks edge cases, misinterprets context, and struggles to adapt as users’ needs evolve. Most importantly, when these systems stumble, they risk breaking users’ trust.

In contrast, foresight is qualitative, exploratory, and driven by possibility. Where forecasting asks, “What’s likely to happen based on what we know?” foresight asks, “What could happen, and how might we prepare for it?” Foresight tools such as scenario planning and horizon scanning can help us navigate uncertainty, anticipate change, and design for flexibility and resilience.

The Mint Example: What Foresight Could Have Changed

Let’s return to Mint. Mint’s approach was rooted in forecasting: it analyzed past spending to offer budgeting advice. But this advice was overly generic and static, especially as users’ financial lives became more complex and unpredictable. The system failed to adapt to new realities such as the growing number of people with irregular income streams, including freelancers, independent professionals, and creators who manage multiple sources of revenue. Mint also missed the rise of digital assets such as cryptocurrencies, which were changing how people save and invest.

If Mint had embraced foresight—not just forecasting—it could have anticipated these shifts. Scenario planning might have revealed an increasing diversity in income patterns and financial behaviors. Horizon scanning could have picked up on the emergence of new digital assets and alternative forms of wealth management. By integrating these insights, Mint could have designed budgeting tools that flex with variable incomes and support new asset classes, keeping pace with evolving user needs.

Thus, to build transparent, resilient, trustworthy AI applications, we need to integrate two complementary capabilities:

  • forecasting—By leveraging historical and contextual data to predict what a user might do or need next, we can understand the present through the lens of the past.
  • backcasting—By starting with a vision of a desirable future and working backward to identify the steps we need to take today to reach that future, we can shift the focus from what is probable to what is possible and meaningful.

While forecasting predicts what users are likely to do, backcasting asks: What future do we want to help users reach? This approach enables systems to support meaningful, long-term behavioral change—for example, helping users achieve future-oriented goals such as saving for retirement or developing a new skill by mapping out their necessary actions and support.
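
A compact, hypothetical Python example makes the difference in posture clear. The simple compound-interest model and the figures are assumptions for illustration; what matters is the direction of the question each function asks.

def forecast_balance(current: float, monthly_contribution: float,
                     months: int, annual_return: float = 0.05) -> float:
    # Forecasting: project forward from past behavior. Where will the user
    # end up if they keep doing what they have been doing?
    r = annual_return / 12
    balance = current
    for _ in range(months):
        balance = balance * (1 + r) + monthly_contribution
    return balance

def backcast_contribution(current: float, goal: float,
                          months: int, annual_return: float = 0.05) -> float:
    # Backcasting: start from the desired future and work backward. What
    # does the user need to do each month to get there?
    r = annual_return / 12
    growth = (1 + r) ** months
    # Future value of an annuity, solved for the monthly payment.
    return (goal - current * growth) * r / (growth - 1)

# Forecast: sticking with $200 a month for 20 years.
print(round(forecast_balance(10_000, 200, 240)))           # roughly $109,000
# Backcast: what monthly amount reaches a $150,000 goal in 20 years?
print(round(backcast_contribution(10_000, 150_000, 240)))  # roughly $299

Forecasting tells users where their current habits lead; backcasting tells the system what support users need to reach the future they actually want.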

This is where the future-cone exercise that Figure 4 depicts takes on an important role. A foresight tool that helps teams visualize a range of possible, plausible, probable, and preferable futures, the future cone maps different trajectories, enabling designers and strategists to understand not just what is likely to happen, but also what could happen and what should happen. Integrating the future cone into the design process encourages teams to explore a wider array of scenarios, challenge their assumptions, and identify opportunities for innovation.

Figure 4—Future-cone diagram, showing types of alternative futures

In practice, this means designing systems that don’t just react to what users have done in the past, but can help them move toward what they’re aspiring to become. By blending forecasting’s quantitative strengths with foresight’s qualitative vision, we can create AI experiences that are not only adaptive and resilient but also genuinely supportive of long-term human growth. However, achieving this level of design maturity is possible only when organizations invest in robust UX research capabilities. Deep, ongoing research is essential to uncover the evolving motivations, hidden needs, and shifting contexts that data alone cannot reveal. As AI systems become more complex and future oriented, the ability to understand users at psychological and behavioral levels is not a luxury; it’s a competitive necessity. Unfortunately, the industry has been trending in the opposite direction.

UX Research Remains More Crucial Than Ever

Recent industry shifts have highlighted a dangerous misconception: that AI could make traditional UX research obsolete. Major companies such as Shopify have dropped UX titles or reduced research roles, betting that algorithms can replace the nuanced work necessary to understand users. But the data tells a different story: 88% of users won’t return after a poor experience, and design-led companies consistently outperform the marketplace.

While AI excels at surfacing correlations and automating patterns, it cannot grasp the whys behind human choices—the emotional drivers or the unspoken needs that shape real experiences, as the previous examples show. As David Ogilvy famously put it, “People don’t think what they feel, don’t say what they think, and don’t do what they say.” The role of the UX researcher is to uncover these hidden truths and anticipate actual needs before users can even articulate them. Those who cut UX research and analysis risk becoming blind to the very signals that drive user engagement, loyalty, and growth.

To build AI systems that are not only statistically accurate but also socially meaningful, businesses must combine the predictive strength of AI algorithms with the strategic insights and empathy of UX research. Only then can we design user experiences that are resilient, adaptive, and genuinely supportive of human needs.

Conclusion

As we have seen, realizing the full power of AI-driven experiences does not depend solely on gathering more data or creating more powerful algorithms. AI’s greatest promise derives not just from analyzing what has happened before, but from shaping what comes next. Most AI solutions—especially those outside the generative AI space—are, by design, future oriented. Their true value lies in their ability to anticipate users’ needs, adapt to evolving contexts, and help both users and organizations navigate an increasingly complex and ever-changing landscape. But realizing this forward-looking power is also their greatest challenge.

Relying solely on prediction keeps AI systems anchored in the past—repeating patterns and missing the subtleties of real human lives. When AI systems fail to adapt to new contexts or unexpected events, they risk losing users’ trust and, thus, their relevance.

To succeed, businesses and UX designers must move beyond simple forecasting. Fulfilling the promise of AI demands pairing human-centered design with the strategic depth of foresight. This means not just asking, “What will users do next?” but also, “What futures are possible—and which ones should we help create?” By integrating scenario planning, backcasting, and other methods of foresight, we can design user experiences that are flexible, resilient, and aligned with human intent.

Ultimately, defining the next wave of AI-driven experiences will not be about how well they can predict, but how wisely they can help us prepare for—and shape—the future. To build truly adaptive, trustworthy, and human-centered systems, we must design with both the next moment and the long-term horizon in mind. Only then can AI fulfill its promise: not just anticipating what is likely, but empowering what is possible. 

Senior Experience Designer at Hexagon

Porto, Portugal

Joana Cerejo is a passionate UX designer with over a decade of experience. She’s on a mission to enhance the human experience through UX design by blending design, management, and engineering, resulting in a data-driven approach that informs product strategy and elevates the human experience. Joana specializes in UX research, UX metrics, and UX strategy, helping companies embrace design’s value at the highest level. Since 2012, she has been teaching across various platforms, both in person and remotely, from universities to technical schools and beyond. She is intrigued by the intersection of artificial intelligence (AI), machine learning (ML), and big data with the human experience and by crafting autonomous products that not only mirror humans but also understand their impacts. Joana is currently pursuing a PhD in AI and Design, focusing on anticipatory experiences and pushing the boundaries of human-centered design to create sustainable human-machine relationships. In addition to holding certifications in the fields of design and engineering, she was a nominee for VentureBeat’s 2021 Women in AI Awards as a Rising Star in the AI Innovation Awards.

