AI Applications: Four UX Questions to Ponder

Envisioning New Horizons

A critical look at UX design practice

A column by Silvia Podesta
June 3, 2024

The latest Accenture consumer report provides an emblematic summary of our huge expectations around generative artificial intelligence (AI), as follows:

“Generative AI is upgrading the Internet from informative to intelligent, and the experience of using it from transactional to personal.”

There is no denying that a shift toward AI applications is really happening. The seductive power of language is leading online users to interact with digital media in new ways. AI-powered conversational interfaces are spearheading a behavioral change that is poised to transform user experiences on the Web.

Still, what brings people online will likely remain the same as it is today: retrieving information, executing transactions, expressing creativity, looking for pastimes, and connecting with others. These are the fundamental human needs that continue to prompt UX design interventions. So the good news is that we won’t have to rethink the fundamentals of good UX design, but AI systems present unique design challenges of their own.

As we navigate the complexities of incorporating AI into novel user journeys, there are four questions we need to answer to create sound user experiences that can stand the test of evolving digital interactions. If we answer these questions and address these issues appropriately, we can ensure the viability of AI-driven experiences from an enterprise standpoint and, thus, their adoption as part of robust business models.

1. Will AI replace human work or augment it?

This is a loaded question given that it concerns what is, possibly, the biggest public fear that the advent of generative AI has ignited. However, getting the answer to this question right at the inception of any AI project is fundamental because it often relates to stringent business and strategic goals.

Deciding whether to enhance current human operations with AI or to replace a manual process altogether depends on the cost of making errors—that is, prediction mistakes.

When the cost of making errors is low—for example, for a recommendation system for movies and TV series—automation is generally a viable choice and can actually deliver significant gains, both in terms of savings for the company and a better experience for users. Recommending the wrong movie to a customer is unlikely to be a big deal, and the many satisfied customers who enjoy discovering new titles through more personalized suggestions would likely offset the occasional error.

Conversely, when the cost of making errors is high—for instance, within a medical context—using AI only to better support human decision making would be the preferable option. In a critical scenario, where a person’s well-being or a company’s revenues are at stake, the ultimate agency should rest with the human expert.

However, the examples I’ve just given are oversimplifications. By and large, full automation would be limited only to specific tasks or jobs in a typical enterprise workflow. Therefore, we can employ automation and human augmentation simultaneously in an application.
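This trade-off can be framed as simple expected-value arithmetic. The following sketch is purely illustrative: the function name, error rates, and costs are my own assumptions, not figures from any real deployment.

```python
# A hypothetical back-of-the-envelope rule for choosing between
# automation and augmentation. All names and numbers are illustrative
# assumptions, not figures from a real system.

def preferred_mode(error_rate: float, cost_per_error: float,
                   cost_per_human_review: float) -> str:
    """Compare the expected cost of an unreviewed AI error against
    the cost of keeping a human in the loop for every decision."""
    expected_error_cost = error_rate * cost_per_error
    if expected_error_cost < cost_per_human_review:
        return "automate"
    return "augment"

# Low-stakes movie recommendations: errors are cheap, so automate.
print(preferred_mode(error_rate=0.10, cost_per_error=0.50,
                     cost_per_human_review=5.00))    # automate

# High-stakes medical decisions: errors are costly, so keep a human expert.
print(preferred_mode(error_rate=0.01, cost_per_error=100_000.0,
                     cost_per_human_review=50.0))    # augment
```

In practice, of course, the cost of an error is rarely a single number, but even a rough estimate of this ratio can anchor the design conversation.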

Consider the following scenario, from a real customer case: A cosmetics retailer wants a tool for optimizing decisions regarding its inventory, with a view to delivering more personalized products to its customer base. The new system should automate the laborious process of selecting the most appropriate product combinations from an inventory that includes thousands of products. Criteria for product selection—including product expiration date, availability, and average stock level over a given period—drive the initial inventory optimization.

Further iterations of this model would allow even more granularity—for example, the solution could exclude specific items from a bundle to avoid sending the same product to a customer group twice or sending people products that don’t suit their skin tone or hair type. This model could offer great opportunities. But should the system make the final decision on which bundle of products to send to subscribing customers?

In this real-life situation, employees at the company were adamant that they should make the final decision and personalize the selections that the system recommended. There were, in fact, creative factors that the AI could not capture—for instance, how the look and feel of the selected products would resonate with their intended audience. So they requested editing options to enable them to change and replace recommended items in a specific bundle.

A closer analysis of this workflow revealed two distinct processes:

  1. Automation—An AI in the backend crunches data to provide insights to the team regarding what products to prioritize for inventory optimization and stock turnover.
  2. Augmentation—An AI gives human experts a set of options for combining high-priority products to best satisfy customers, but gives them the chance to act upon the results and make their own decisions. This second step highlights the collaborative aspect of interactions between human agents and an AI.
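The two-step split above can be sketched in code. Everything in this sketch is a hypothetical illustration: the `Product` fields, the scoring heuristic, and the sample data are my own assumptions, not the retailer’s actual system.

```python
# Illustrative sketch of the two-step flow: an automated scoring pass
# prioritizes inventory, then human experts edit the suggested bundle.
# Field names, the heuristic, and the data are assumptions.

from dataclasses import dataclass

@dataclass
class Product:
    name: str
    days_to_expiry: int
    stock_level: int  # units currently in stock

def priority_score(p: Product) -> float:
    # Products that expire soon or are overstocked rank higher.
    return p.stock_level / max(p.days_to_expiry, 1)

def automated_bundle(inventory: list[Product], size: int) -> list[Product]:
    """Step 1 (automation): the system proposes the top-priority items."""
    return sorted(inventory, key=priority_score, reverse=True)[:size]

def apply_human_edits(bundle: list[Product],
                      replacements: dict[str, Product]) -> list[Product]:
    """Step 2 (augmentation): experts swap out items the model cannot
    judge—for example, on look and feel."""
    return [replacements.get(p.name, p) for p in bundle]

inventory = [Product("serum", 10, 500),
             Product("balm", 90, 40),
             Product("mask", 5, 200)]
bundle = automated_bundle(inventory, size=2)  # serum and mask rank highest
final = apply_human_edits(bundle, {"mask": Product("cream", 30, 60)})
```

The key design point is that `apply_human_edits` exists at all: the system proposes, but the human disposes.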

If you are designing a new experience in which an AI is part of a process flow, consider splitting the flow into microtouchpoints and assessing the ratio of human agency to machine automation at each step. This approach enables a more granular understanding of the costs, benefits, and savings that the application of AI delivers.

2. What is the mental model for AI?

All human experiences happen in context. This is no less true for those involving an AI. When trying to figure out users’ interactions with an AI—be it a chatbot user interface or a more invisible agent working in the backend—it is useful to think about the broader context in which this relationship plays out. The type of context, in fact, guides people’s behaviors, expectations, and actions toward the AI. It influences the mental model, which in UX design describes the set of beliefs that a user has about an application.

At IBM, we have developed a framework comprising four contexts to help UX designers define interactions with an AI agent and foresee their psychological and emotional implications. These four contexts are as follows:

  1. Assistant—The user consciously requests the activation of AI capabilities by toggling them on.
  2. Coach—The AI is a more noticeable presence that guides the user toward better outcomes by suggesting appropriate actions.
  3. Teacher—In this context, the AI shows the user how to do something properly.
  4. Partner—The AI shares the experience with the user as it progresses, learning by doing with the user and coexperiencing the solution with the user. (This context has its roots in the software development approach known as pair programming, where two developers sit side by side and work on the same computer.)

By choosing one of these contexts, a UX designer can do the following:

  • Define an AI persona, with personality attributes and a communication style.
  • Set the degree of agency for the AI, making it less or more of a protagonist in the user’s workflow.
  • Design the most appropriate user-interface components to bring a particular context to life. This could be a simple toggle within the assistant context or a spacious side panel for cocreative tasks within the partner context.

3. What would the AI’s output look like?

We have all become quite familiar with the possible outputs of generative AI systems: images, text, code, sounds, and mixed media are all things that this technology can create from user inputs. Nevertheless, what an AI’s output should look like in a specific application is never obvious. The answer heavily impacts the final user experience, so it’s best to get it right as early in the design process as possible.

An example from my own experience is a case in which users needed AI-generated summaries of lengthy contracts to better perform due diligence. How long should a summary be to be valuable to a user? How much would it cost to generate a one-page summary of a contract versus a short paragraph when the application scales up? How could we balance these two potentially mutually exclusive needs?

The first step would be assessing users’ expectations and requirements, but the next step would be to immediately quantify the potential impact of any choice with business stakeholders. This alignment could lead to a review of the initial UX assumption: if you need to reduce the length of the summary, in what other ways could users get the information they need? So you might want to add references to source documents or extract some data in the form of keywords or visualizations. These tweaks would not only change your outlook about the AI outcome from what you’d originally planned, but require AI engineers to build the model so it could produce such outputs.
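Quantifying the impact of a choice such as summary length might look like the following back-of-the-envelope model. The per-token price and volumes here are invented assumptions; real model pricing varies widely by provider and model.

```python
# Hypothetical cost model for the summary-length trade-off.
# The token counts, volume, and price are made-up assumptions.

def annual_summary_cost(contracts_per_year: int,
                        output_tokens_per_summary: int,
                        price_per_1k_tokens: float) -> float:
    """Estimate yearly generation cost from output-token volume."""
    total_tokens = contracts_per_year * output_tokens_per_summary
    return total_tokens / 1000 * price_per_1k_tokens

# Assumed: 100,000 contracts/year at $0.03 per 1,000 output tokens.
one_page = annual_summary_cost(100_000, 700, 0.03)   # $2,100 per year
paragraph = annual_summary_cost(100_000, 120, 0.03)  # $360 per year
```

Even a crude estimate like this lets designers and business stakeholders weigh a richer output against its cost at scale before committing the model team to building it.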

Therefore, determining what form an AI output should take is the pivot around which two different efforts revolve: designing the user experience and building the model. It is of the utmost importance to get the team’s and stakeholders’ alignment behind what the AI should produce.

4. What could be the unintended consequences of AI?

As I mentioned earlier, AI poses unique challenges in terms of its broader user experience. We know that algorithms and models perpetuate and amplify whatever biases are present in their training data, which can lead to discriminatory outcomes. AI systems can also expose users to privacy and security risks such as cyberattacks, produce harmful outputs, raise ethical issues, and foster a lack of accountability. They can even make users overreliant on the technology.

For all these reasons, it is crucial to identify and address all potential unintended implications of AI technology as you begin designing a novel solution. At IBM, we have developed dedicated design-thinking frameworks to help companies reflect on these broader consequences during cocreation workshops. In my experience, it is not always easy to engage business stakeholders on this topic. However, as people’s general awareness of AI’s potential dangers increases, regulations and governance initiatives are evolving at a dizzying speed. Businesses will ultimately need to get ahead of the game to avoid costly outcomes such as fines, legal cases, or simply the loss of their investments in AI.

The short-lived Savey Meal-bot, an AI-powered application that a New Zealand supermarket launched in 2023, provides a cautionary tale. The app was initially conceived as a tool to help consumers create recipes from leftovers, but it went feral when users started adding toxic ingredients in response to recipe prompts. In a statement, the supermarket promised to “keep fine tuning their controls” over the meal planner to ensure that it was safe and useful. Preempting people’s misuse of the app would have been a far better strategy.


AI is here to stay, and users can expect to see simpler, more engaging digital experiences that are powered by this technology. Out-of-the-box solutions such as ChatGPT—however powerful they might be—might not necessarily in themselves ensure a great user experience in all circumstances.

You should incorporate the UX design fundamentals that I discussed earlier in this column into the backbone of any AI-driven experience, regardless of the specific technology it uses. For AI tools to be a force for good and positive empowerment, we need to carefully consider and plan for factors such as human agency, overreliance on AI, ethical implications, and expected user behaviors.


Accenture Song. “Accenture Life Trends 2024.” (PDF) Accenture, October 17, 2023. Retrieved November 27, 2023.

Tess McClure. “Supermarket AI meal planner app suggests recipe that would create chlorine gas.” The Guardian, August 10, 2023. Retrieved May 5, 2024.

Kyle Wiggers. “OpenAI debuts GPT-4 ‘omni’ model now powering ChatGPT.” TechCrunch, May 13, 2024. Retrieved May 14, 2024.

Innovation Designer at IBM

Copenhagen, Denmark

As a strategic designer and UX specialist at IBM, Silvia helps enterprises pursue human-centered innovation by leveraging new technologies and creating compelling user experiences. She facilitates research, synthesizes product insights, and designs minimum-viable products (MVPs) that capture the potential of these technologies in addressing both user and business needs. Silvia is a passionate, independent UX researcher who focuses on the topics of digital humanism, change management, and service design.
