UX designers and researchers have spent decades perfecting the art of the click. We’ve mapped every pixel of the user journey to ensure that, when someone wants to reach a goal, the path is clear, the button is visible, and the feedback is instantaneous. But the landscape is shifting. We are moving from tools that wait for instructions to artificial-intelligence (AI) agents that act on our behalf.
When an AI agent makes a choice—for example, a financial bot moving $5,000 into a high-yield savings account or an automated hiring tool filtering out a candidate—the user experience becomes the delegation of authority.
Agentic AI represents a shift toward systems that operate with greater autonomy, making decisions on users’ behalf with only minimal input from them. The promise of these systems is that they will gradually move beyond basic tasks and take on increasingly complex responsibilities, as both the technology and users’ trust in it mature.
But, as these systems gain autonomy, the UX researcher must step into a new role: the ethical arbitrator. In this role, researchers will still measure usability, but also examine the safety, transparency, and moral weight of the decisions a machine makes on the user’s behalf.
The Conflict of Delegation
In a traditional application, the user is the driver. In an agentic system, the user becomes a passenger. That shift creates psychological tension between efficiency and autonomy.
A business might want a financial agent to automate as much as possible to boost engagement or grow assets under management. But, if the agent moves money in a way that the user doesn’t understand, it violates the psychological contract of trust. When the agent makes a wrong choice—even one that’s mathematically sound, but emotionally jarring—the user feels a loss of control.
We need to understand exactly where the boundary falls between a user’s feeling empowered and a user’s feeling overridden. If we don’t research the boundaries between what people are willing to delegate and what they’re not, we risk building products that feel more like digital kidnappers than digital assistants. These products will go unused and, worse, they can erode a user’s relationship with the brand behind them.
Consider a scenario that illustrates how technical compliance can mask ethical failure: Because Sarah’s primary financial goal is to save for a house, she delegates portfolio management to her bank’s autonomous agent with this goal in mind. The agent, running advanced optimization algorithms, might identify a volatile but high-growth stock as the fastest path to her down-payment target. Acting on its optimization mandate, the agent might move a significant portion of Sarah’s earmarked house fund into this high-risk position, without consulting her or getting explicit consent for a trade of this magnitude.
The Failure of Contextual Empathy
On paper, the agent is doing its job: maximizing the probability of reaching the savings target.
In practice, it has ignored something no algorithm can easily quantify: Sarah’s risk tolerance, her emotional attachment to her house fund, her anxiety about market swings, and her need for liquidity. If the agent understands the goal but not the human context surrounding it, the gap is a failure of contextual empathy.
The result is a broken relationship. Sarah no longer sees the agent as a helpful tool. She views it with suspicion. Her own technology has acted upon her rather than partnering with her. For AI-driven services, the most critical feature is often the maintenance of user trust—and once that trust is broken, recovery is expensive and uncertain.
Questions Researchers Should Be Asking
The potential for this kind of failure now compels UX researchers to move beyond usability to ethical arbitration. We need to become systemic auditors, probing the design philosophy and interaction model that could allow failure to occur. Three lines of inquiry matter most:
Transparency and delegation of risk—Did Sarah fully understand the degree of autonomy and risk she was granting the agent? Was her initial consent broad enough to cover a drastic, unilateral move, or did the system exploit ambiguity in the onboarding flow?
Priority of explanation over action—Before executing the trade, should the system have provided a clear, plain-language rationale? Should Sarah have been given a projection of risk and a call to action requiring explicit confirmation? A design philosophy that defaults to ask first, act second is almost always ethically stronger than act now, inform later.
Need for checkpoint mechanisms—When an autonomous action crosses a predefined threshold—for example, moving a large percentage of a fund, shifting into a new risk profile, or engaging with an asset class that the user has never approved—the system should trigger a mandatory pause. This checkpoint should require human review and reaffirmation of delegated authority. Without it, autonomous systems are prone to a kind of decision drift to which users never consented.
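To make this last point concrete for designers and engineers, a checkpoint can be expressed as a small, explicit rule rather than something left to the model’s judgment. The following Python sketch is purely illustrative: the PendingAction structure, the thresholds, and the approved asset classes are hypothetical placeholders for whatever a real product would define with the user during onboarding.

```python
from dataclasses import dataclass

# Hypothetical thresholds that a team might define with the user during onboarding.
MAX_UNCONFIRMED_FRACTION = 0.10          # Never move more than 10% of a fund silently.
APPROVED_ASSET_CLASSES = {"savings", "index_fund"}

@dataclass
class PendingAction:
    fund_balance: float   # Total value of the earmarked fund
    amount: float         # Amount the agent wants to move
    asset_class: str      # Destination asset class

def requires_checkpoint(action: PendingAction) -> bool:
    """Return True when an autonomous action must pause for human review."""
    crosses_size_threshold = action.amount > action.fund_balance * MAX_UNCONFIRMED_FRACTION
    enters_unapproved_asset = action.asset_class not in APPROVED_ASSET_CLASSES
    return crosses_size_threshold or enters_unapproved_asset

# The agent proposes moving 40% of Sarah's house fund into a volatile stock.
proposal = PendingAction(fund_balance=50_000, amount=20_000, asset_class="volatile_stock")

if requires_checkpoint(proposal):
    print("Pause: explain the rationale and ask the user to reaffirm delegated authority.")
else:
    print("Proceed, then log the action for later review.")
```

Because the rule sits outside the agent’s optimization logic, it also gives researchers something concrete to test: whether users understand the thresholds they agreed to and whether the pause arrives at the moments they expect.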
The Psychology of the Black Box
Communication theory tells us that trust requires a shared mental model. When you delegate a task like “Grab me a coffee” to a colleague, there’s either an implicit understanding of preferences or an explicit acknowledgment that preferences must be stated. AI agents often skip both steps. They take an input, produce an output, and show nothing of the reasoning in between.
This lack of shared reasoning triggers what I think of as Black Box Anxiety. Without visibility into the agent’s reasoning, users tend to fall into one of two traps: automation bias, trusting the machine blindly until something goes badly wrong, or automation rejection, refusing to use the tool altogether.
The researcher’s job is to open that black box. We need to identify what I call critical transparency moments, the specific points in a workflow where an agent must show its work to maintain the user’s sense of control. By revealing the AI’s reasoning at these moments, we allow users to validate the AI’s logic against their own knowledge, correct misalignments before finalizing a flawed output, and build a stable mental model of the system’s behavior over time. Conveying this logic clearly transforms an opaque, anxiety-inducing system into a predictable, trustworthy partner.
A Framework for Ethical Arbitration
UX leaders increasingly serve as the ethical interface between AI capabilities and human values. To move beyond subjective judgment, we need structured evaluation criteria. The following four heuristics provide concrete markers for assessing agentic systems across intent, control, transparency, and boundaries.
Explicit intent alignment—Asking the agent to confirm its understanding of how to reach a goal, not just what the goal is. Picture a user telling an AI scheduling assistant, “Book me a routine teeth cleaning with Dr. Smith next Thursday.” A well-designed agent might respond: “I understand you want a routine cleaning with Dr. Smith for next Thursday. There’s an opening at 10am. Does that work, or would you prefer later in the afternoon?” The agent confirms the constraints and offers a decision point before acting.
Intervention threshold—Asking whether the user has a clear, persistent way of undoing an action or instantly reclaiming control. In a warehouse-management system, for example, if an AI detects a potential stockout of a high-demand item and regenerates the picking order automatically, the emergency brake might be a visible, on-screen button that lets the warehouse manager override that order to prioritize a higher-value customer the system isn’t aware of.
Proportional transparency—Asking whether the agent’s explanation matches the stakes for the decision. This is straightforward. A bot choosing a music playlist needs far less transparency than one recommending an insurance plan or flagging an employee for a performance review.
Value guardrails—Asking whether there are hard boundaries at which the agent cannot act without live human confirmation. If a scheduling agent’s proposed time conflicts with a high-priority event on the user’s calendar such as a child’s school play or a board meeting, the agent must stop and ask for confirmation. This constraint should be hard-coded, not left to the agent’s discretion.
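The last of these heuristics lends itself to a simple illustration. The Python sketch below suggests what a hard-coded value guardrail might look like in a scheduling agent; the PROTECTED_EVENTS list and violates_guardrail function are hypothetical names, and a real product would source protected events from the user’s own calendar and stated preferences.

```python
from datetime import datetime

# Hypothetical guardrail: calendar events the user has marked as protected.
PROTECTED_EVENTS = [
    {"title": "Child's school play", "start": datetime(2025, 6, 12, 18, 0), "end": datetime(2025, 6, 12, 20, 0)},
    {"title": "Board meeting",       "start": datetime(2025, 6, 13, 9, 0),  "end": datetime(2025, 6, 13, 11, 0)},
]

def violates_guardrail(start: datetime, end: datetime) -> bool:
    """Return True if a proposed booking overlaps any protected event."""
    return any(start < event["end"] and end > event["start"] for event in PROTECTED_EVENTS)

# The scheduling agent proposes Thursday at 7:00 pm, during the school play.
proposed_start = datetime(2025, 6, 12, 19, 0)
proposed_end = datetime(2025, 6, 12, 19, 30)

if violates_guardrail(proposed_start, proposed_end):
    # The stop is hard-coded; the agent cannot reason its way past it.
    print("Conflict with a protected event. Stop and ask the user before booking.")
else:
    print("Book the appointment and confirm the details with the user.")
```

The point of the sketch is the design choice: the confirmation requirement lives outside the agent’s decision-making, so no amount of optimization can talk the system out of pausing.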
Making the Case to Stakeholders
When you sit down with a product manager who wants seamless automation, your role as an ethical arbitrator is to demonstrate that friction can be a helpful feature. An agent that pauses and says, “I’m about to move this money, and here’s why. Do you agree?” is technically slower than one that executes silently. However, that pause is what builds the long-term trust a product needs to survive.
This principle scales to enterprise contexts. In a supply-chain system, an agent’s stopping to say, “I’m about to approve a major inventory transfer across three regions, based on the latest demand forecast. Here’s a simulation of the downstream impact on lead times and fulfillment rates. Do you authorize this?” might add a few minutes of delay. But that deliberate pause is not a bottleneck. It is a governance mechanism—one that builds the operational confidence that is necessary for the system to maintain compliance within a regulated environment. By requiring a manager to acknowledge a high-stakes, irreversible decision, the system signals that it is working with that manager to manage risk, not just for the manager to speed up transactions.
Present your findings not as a list of complaints, but as a map of what I call trust risks. Translate ethical and usability compromises into business metrics that your stakeholders already care about. Show churn that results from users feeling blindsided by autonomous actions. Quantify the support tickets that opaque decision-making generates. Articulate the long-term brand damage that accumulates from negative reviews and social-media backlash when users feel they are being acted upon rather than served. Connect specific design decisions to these cascading consequences. When you ensure that the ethical imperative is inseparable from the financial one, the conversation changes.
Where This Takes Us
We are now designing partnerships in which the AI systems we build are autonomous participants in people’s lives. Core to this new mandate is protecting users’ right to remain the final authority over their own decisions.
The agents and algorithms that we’re designing have a growing capacity to influence, persuade, and subtly shape human behavior. Without a strong ethical framework, that capacity could become a liability. The future of UX research is not just about making things easy to use, but about making them safe to trust. We must build that safety upon transparency, accountability, and an unwavering commitment to users’ control over their decisions, their data, and their lives.
The UX researcher must step forward into the role of the ethical arbitrator and ensure that every interaction we design reinforces rather than erodes the user’s fundamental autonomy.
Victor is a UX researcher, author, and speaker with over 15 years of experience helping the world’s largest organizations build human-centered products. His work focuses on the intersection of psychology, communication, and design. He has authored numerous publications on UX topics, including his book Design for the Mind: Seven Psychological Principles of Persuasive Design and the forthcoming book, Designing Agentic AI Experiences. Victor holds a PhD in Environmental Education, Communication, and Interpretation from The Ohio State University.