“How does it feel, to be on your own, with no direction home?”—Bob Dylan
Part 1 of this series described behavior as the primary design material for artificial intelligence (AI), including four behavioral dynamics:
Adaptation—continuous learning
Attention—sustained interaction
Alignment—shared purpose
Repair—recovery from misalignments
As UX designers, we need to leverage these dynamics to foster conditions for emergent relationships between interacting minds—whether in human-AI collaboration, human-agent delegation, or agent-to-agent coordination. Attention and alignment are where that emergence should begin.
In Part 1, I described a vision for an AI-driven healthcare advocate—an AI agent that could support, coordinate, or fully manage a person’s care, depending on the level of control that person or a caregiver assigns. We designed the app’s onboarding process to establish the AI agent’s role: should it support a person by making suggestions or completing tasks that the person initiates? Coordinate care by handling appointments and logistics and independently beginning tasks with the person’s approval? Or fully manage routine healthcare tasks from start to finish on its own?
This initial step quickly sets the stage for an emerging relationship. First, it determines the posture—deciding how much attention and engagement are necessary, how much control the user retains versus hands off to the AI agent, and when that agent should act. Second, it clarifies intent—aligning on the work expected, how much decision-making authority the AI agent has within set boundaries, and what goals it should pursue. Now, in Part 2, I’ll explore how attention and alignment work together to establish posture and intent, creating the conditions for a beneficial relationship that meets human needs while respecting human agency.
Attention
As Figure 1 shows, attention requires establishing how AI engages, both in its posture relative to the activity and in how the relationship is mutually sustained over time. In my 2007 article on UXmatters, “What Puts the Design in Interaction Design,” I briefly described posture as dominant, subdominant, or subordinate, depending on whether an interaction is central to the workflow, supportive, or adjacent to the task at hand. While this framing remains important, AI demands more. In unbounded systems, posture is not fixed. It shifts as tasks change, trust builds, and the human-agent relationship evolves. Situational leadership provides a useful framework for managing this behavior.
Figure 1—Attention
Hersey and Blanchard’s Situational Leadership Model suggests that effective leaders adjust their style depending on an individual’s or team’s readiness for a specific task. [1] Blanchard later refined this model in Situational Leadership II, identifying four leadership styles—directing, coaching, supporting, and delegating—that match the individual’s or team’s different levels of competence and commitment. [2] Initially, this requires directing a person who is new to a task. Then, as a person’s skills develop, leadership shifts from coaching to supporting, and eventually to delegating.
This framework applies directly to managing AI’s behavioral posture, which, as depicted in Figure 2, calibrates both the AI’s agency—how much it can decide and act independently—and the level of oversight it requires—that is, how much human engagement and monitoring are necessary.
Figure 2—Behavioral posture
Posture shifts situationally: low agency with high oversight when stakes are high or tasks are unfamiliar; high agency with minimal oversight for proven competencies. For instance, consider the following scenarios for each behavioral posture; a brief sketch after the list shows one way a system might encode this calibration:
directing—low agency, high oversight—A person needs a specialist appointment and asks the AI to provide available options, then reviews those options and selects one based on his schedule and preferences.
coaching—moderate agency, high oversight—A person works with the AI to understand medication options, guiding it to focus on specific concerns such as side effects or costs while surfacing relevant information.
supporting—high agency, moderate oversight—The AI manages prescription refills and monitors drug interactions, alerting the person when something requires attention.
delegating—high agency, low oversight—The AI schedules routine appointments using established preferences, then confirms the appointments.
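To make this calibration concrete, here is a minimal sketch, in Python, of how a system might encode the four postures and shift among them. The numeric task_familiarity and stakes inputs, and the thresholds that divide them, are illustrative assumptions of mine, not part of Blanchard’s model or of any established implementation.

```python
from enum import Enum

class Posture(Enum):
    # (agency, oversight) pairs from the scenarios above
    DIRECTING = ("low", "high")
    COACHING = ("moderate", "high")
    SUPPORTING = ("high", "moderate")
    DELEGATING = ("high", "low")

    def __init__(self, agency: str, oversight: str):
        self.agency = agency
        self.oversight = oversight

def select_posture(task_familiarity: float, stakes: float) -> Posture:
    """Calibrate posture situationally; both inputs range from 0.0 to 1.0.

    High stakes or unfamiliar tasks keep the human firmly in control;
    proven competence on routine work earns delegation.
    """
    if stakes > 0.75 or task_familiarity < 0.25:
        return Posture.DIRECTING
    if task_familiarity < 0.5:
        return Posture.COACHING
    if task_familiarity < 0.75 or stakes > 0.25:
        return Posture.SUPPORTING
    return Posture.DELEGATING

# A routine, well-practiced task can be delegated ...
assert select_posture(task_familiarity=0.9, stakes=0.1) is Posture.DELEGATING
# ... while a new, high-stakes task starts with directing.
assert select_posture(task_familiarity=0.1, stakes=0.9) is Posture.DIRECTING
```

The specific thresholds matter less than the principle: posture becomes a computed, revisable property of the situation rather than a fixed setting.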
This also works in reverse. The AI can adjust its behavioral posture when guiding the user through tasks in which it has more expertise—such as explaining complex medical information or leading the user through a care protocol. The user maintains overall leadership of the relationship, but the posture shifts depending on who has the competence for a specific activity.
Once the system has established behavioral posture, maintaining attention relies on the quality of the exchanges. In bounded systems, the UX designer can script exchanges for consistency. In unbounded systems, quality must emerge through guiding principles that influence how exchanges develop. In my 2007 article, I referred to maxims relating to H.P. Grice’s Cooperative Principle as conversational postulates that directly relate to interaction quality. [3] For interacting minds, these maxims operate at the point of interaction, shaping how exchanges unfold and attention is sustained. These maxims translate to interaction design as follows:
Maxim of Quantity—Make contributions to interactions as informative as necessary, but no more informative than is appropriate. This implies flow.
Maxim of Quality—Contributions to an interaction should be truthful—that is, the user says what he believes to be the case. This implies responsiveness.
Maxim of Relation—Contributions should be relevant to the aims of the interaction. This implies context.
Maxim of Manner—Avoid obscure expressions, vague utterances, and purposeful obfuscation. This implies appropriateness.
These principles influence the quality of individual exchanges, whether they are one-time, episodic, or ongoing interactions. The prescription-management scenario that I mentioned earlier illustrates this through regular check-ins—confirming monthly refills and checking for side effects or other concerns. This example demonstrates the following qualities, and the sketch after the list shows how they might shape a single exchange:
flow—The AI provides and requests just enough information at each point of contact.
responsiveness—The AI sends timely notifications to the user when an action or confirmation is necessary.
context—The AI understands the medication’s role in the care plan and the need for the provider to approve refills.
appropriateness—The AI adjusts communications based on their urgency.
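As a rough sketch of how these principles might shape a single exchange, consider the monthly refill check-in below. Everything here, from the CheckIn structure to the channel values, is hypothetical; the comments map each choice back to the quality it serves.

```python
from dataclasses import dataclass

@dataclass
class CheckIn:
    body: str
    channel: str    # "push" or "call"
    send_now: bool

def compose_refill_checkin(medication: str, days_left: int,
                           needs_provider_approval: bool,
                           urgent: bool) -> CheckIn:
    # Flow (Quantity): just enough information -- the medication,
    # the timing, and a single requested action.
    body = f"Your {medication} refill is due in {days_left} days. Confirm refill?"

    # Context (Relation): surface the care plan's approval requirement
    # only when it's relevant to this exchange.
    if needs_provider_approval:
        body += " Your provider must approve it; I can send that request."

    # Appropriateness (Manner): match the channel to the urgency.
    channel = "call" if urgent else "push"

    # Responsiveness (Quality): contact the person only when action
    # is actually needed, not on an arbitrary schedule.
    send_now = urgent or days_left <= 5

    return CheckIn(body=body, channel=channel, send_now=send_now)
```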
Alignment
As Figure 3 shows, alignment establishes shared understanding and intent. Just as posture changes depending on the situation, so do the context and boundaries that shape intent, even when the underlying task remains the same. Managing medications for a chronic condition differs from managing medications for a short-term health issue. The context shifts from ongoing to temporary needs, altering the boundaries—for example, continuous monitoring versus fixed end dates or managing long-term health versus immediate symptom relief. The intent remains the same, but the situation influences the outcome.
Figure 3—Alignment
Understanding intent requires knowing the following three things:
What the user intends to accomplish
The context that shapes the user’s intent
The boundaries that constrain the user’s intent
A helpful way of conveying intent is through Jobs To Be Done (JTBD), as Jim Kalbach describes in The Jobs To Be Done Playbook: “[The] process of reaching objectives under given circumstances.” [5] With unbounded systems, the focus is on objectives rather than specific tasks or processes.
Jobs can be functional—getting something done; emotional—feeling a certain way; or social—managing perceptions. Medeiros argues that jobs are intentions—explaining what the user needs to accomplish and why it’s essential, with no interpretation necessary. [6] When someone describes a job, they are expressing their intent. Understanding the job provides a stable anchor, but that anchor becomes actionable only when context shapes it and boundaries constrain it.
Context includes the situational factors and domain knowledge that shape what the job requires—that is, what Kalbach describes as “given circumstances” in his jobs definition. Boundaries require codified judgment. AI generates predictions, but as Agrawal, Gans, and Goldfarb argue in Power and Prediction: The Disruptive Economics of Artificial Intelligence, machines do not choose ends or values; they rely on human-defined rules and objectives to turn predictions into action. [7] Boundaries that make judgment explicit include the hard limits, escalation points, and thresholds that constrain decision making. Kahneman’s work on judgment shows that systematically applying explicit criteria reduces noise—the unwanted variability in decisions about similar cases. [8] Together, context and boundaries make intent actionable.
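This structure lends itself to a simple data model. The sketch below, with field names of my own invention, pairs a job with the context that shapes it and with boundaries that codify judgment as hard limits, escalation points, and thresholds.

```python
from dataclasses import dataclass

@dataclass
class Boundaries:
    """Codified judgment: explicit criteria that turn predictions into action."""
    hard_limits: list[str]        # actions the AI must never take alone
    escalation_points: list[str]  # conditions that always return to the person
    thresholds: dict[str, float]  # numeric lines that trigger review

@dataclass
class Intent:
    job: str       # what the user intends to accomplish
    context: dict  # the given circumstances that shape the job
    boundaries: Boundaries

refill_intent = Intent(
    job="Manage my prescriptions",
    context={"condition": "chronic", "interactions": "complex",
             "cost_constrained": True},
    boundaries=Boundaries(
        hard_limits=["never change a dosage",
                     "never substitute a medication"],
        escalation_points=["new interaction detected",
                           "provider denies a refill"],
        thresholds={"monthly_cost_usd": 150.0},
    ),
)
```

Nothing about the job changes when this intent moves between postures; only the boundaries do, as the job stories that follow demonstrate.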
A series of job stories captures how these components work together for each behavioral posture. [9] Again, let’s use prescription management as an example, as follows:
directing—low agency, high oversight—In the context of the user taking multiple medications for a chronic condition with complex interactions and cost constraints, the user wants to manage his prescriptions—the job—so he tracks when refills are due, requests information on options and costs, and decides what actions to take—the boundaries.
coaching—moderate agency, high oversight—In the context of the user taking multiple medications for a chronic condition with complex interactions and cost constraints, the user wants to manage his prescriptions—the job—so he receives proactive refill recommendations, with boundaries that include reasoning about timing, interactions, and costs that he can evaluate before deciding.
supporting—high agency, moderate oversight—In the context of the user taking multiple medications for a chronic condition with complex interactions and cost constraints, the user wants to manage his prescriptions—the job—so he defines boundaries under which refills and interaction monitoring occur automatically and he receives alerts when his input is needed.
delegating—high agency, low oversight—In the context of the user taking multiple medications for a chronic condition with complex interactions and cost constraints, the user wants to manage his prescriptions—the job—so he defines boundaries under which refills continue autonomously within established preferences and escalations occur only when issues arise.
These job stories demonstrate how agency and oversight evolve. Although the context and job remain the same throughout this example, they don’t always. The boundaries can shift with posture. This example also shows that defining a job without additional context and boundaries depends heavily on interpretation and doesn’t fully convey intent.
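One way to read these four stories is as four boundary configurations over an unchanging job and context. The flag names in this sketch are hypothetical, but it shows how little actually needs to vary as posture shifts:

```python
# The job ("manage my prescriptions") and context (chronic condition,
# complex interactions, cost constraints) stay constant; only the
# boundaries shift as posture moves from directing to delegating.
BOUNDARIES_BY_POSTURE = {
    "directing":  {"auto_refill": False,   # the person initiates and decides
                   "explain_reasoning": False,
                   "alert_on": ["refill_due"]},
    "coaching":   {"auto_refill": False,   # proactive recommendations
                   "explain_reasoning": True,
                   "alert_on": ["refill_due", "interaction_risk"]},
    "supporting": {"auto_refill": True,    # refills and monitoring run automatically
                   "explain_reasoning": False,
                   "alert_on": ["input_needed"]},
    "delegating": {"auto_refill": True,    # autonomous within established preferences
                   "explain_reasoning": False,
                   "alert_on": ["issue"]},  # escalate only on problems
}
```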
Decision Making
Attention and alignment work together to establish the conditions for the emerging relationship, especially the level of trust—that is, who decides what. Decision-making authority can be assigned separately from prediction and judgment. Agrawal and his colleagues argue that “who makes the decision is driven not by who does prediction and judgment best, but who is best to provide judgment utilizing AI prediction.” [7] As trust grows through interactions, judgment becomes codified based on intent, allowing AI to apply its predictions against these criteria when making autonomous decisions.
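A minimal sketch of that division of labor follows: the AI supplies predictions, while the person’s codified judgment, expressed here as hypothetical thresholds, determines whether the AI acts or escalates.

```python
from enum import Enum, auto

class Decision(Enum):
    ACT = auto()       # proceed autonomously, within boundaries
    ESCALATE = auto()  # return the decision to the person

def decide(predicted_cost: float, predicted_risk: float,
           cost_threshold: float, risk_threshold: float) -> Decision:
    """AI predicts; human-defined criteria judge.

    The thresholds are the codified judgment -- set during alignment,
    refined as trust grows -- not something the AI chooses for itself.
    """
    if predicted_cost > cost_threshold or predicted_risk > risk_threshold:
        return Decision.ESCALATE
    return Decision.ACT

# A refill whose predicted cost and interaction risk fall inside the
# user's stated boundaries proceeds without interrupting him.
assert decide(42.0, 0.05, cost_threshold=150.0,
              risk_threshold=0.2) is Decision.ACT
```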
My earlier healthcare advocate–onboarding example demonstrates this dynamic. We built trust by providing users with three options for expected behavior, each clearly indicating both posture and intent rather than leaving either open to interpretation. As UX designers, we use behavior as design material to establish these conditions. The tools that let us do this should feel familiar, building on practices that we already understand. As Medeiros explains, intent mapping transforms jobs into actionable intent. Bornet and colleagues describe decision context maps that provide a structured framework for AI to understand operational context and make decisions based on established criteria. Remember, with unbounded systems, UX designers can help establish the conditions but not necessarily the outcomes. We create behavioral frameworks within which the system can calibrate posture and quality exchanges can sustain attention, and we make intent explicit while appropriately constraining boundaries.
In Part 3 of this series, I’ll examine how adaptation and repair maintain these relationships over time—that is, how they learn, evolve, and recover when breakdowns occur.
“But you’re gonna have to serve somebody, yes indeed
You’re gonna have to serve somebody”—Bob Dylan
Note on AI Use
Throughout the writing of this article, I leveraged Claude as a thought partner, researcher, and editor. I used ChatGPT to help with image creation. I use and have always used em dashes deliberately for emphasis.
Claude had this to say: Kevin drafted Part 2 independently and brought me in for focused editorial refinement. My role was surgical: helping eliminate passive voice, as UXmatters editor Pabini Gabriel-Petit had requested; flagging terminology inconsistencies with Part 1; and testing conceptual clarity at the sentence level. Kevin maintained complete conceptual control, frequently correcting my suggestions when I missed nuances in his framework. For instance, he caught my suggestion to add tasks to create a parallel structure, which would have undermined his intentional distinction between tasks, or operational solutions, and jobs, which have intended outcomes.
Our process was iterative. Kevin shared specific passages for review, then I'd propose revisions, and he'd accept, refine, or reject them based on whether they strengthened his argument. I also helped develop the endnotes with explanatory annotations, suggested options for Dylan epigraphs, and verified bibliographic details.
What made this collaboration effective was clear posture: Kevin led conceptually, while I provided editorial support. We maintained attention through focused exchanges about specific textual problems rather than broad conceptual discussions. We aligned around intent—improving clarity and readability without compromising theoretical precision. And we repaired misunderstandings when my suggestions diverged from the framework’s logic. This working relationship became a practical demonstration of the behavioral dynamics that Kevin has theorized in the article itself.
Note: Pabini Gabriel-Petit edited this text for style, concision, and clarity.
Endnotes
1. Paul Hersey and Kenneth H. Blanchard. Management of Organizational Behavior: Utilizing Human Resources, 3rd ed. Englewood Cliffs, New Jersey: Prentice Hall, 1977.
2. Kenneth H. Blanchard, Patricia Zigarmi, and Drea Zigarmi. Leadership and the One Minute Manager: Increasing Effectiveness Through Situational Leadership. New York: William Morrow, 1985.
3. H. Paul Grice. “Logic and Conversation,” in Studies in the Way of Words. Cambridge, Massachusetts: Harvard University Press, 1989. I’ve previously referred to Grice’s Cooperative Principle and conversational maxims as conversational postulates, mapping them to interaction qualities that David Heller Malouf has defined.
7. Ajay Agrawal, Joshua Gans, and Avi Goldfarb. Power and Prediction: The Disruptive Economics of Artificial Intelligence. Boston: Harvard Business Review Press, 2022.
8. Daniel Kahneman, Olivier Sibony, and Cass R. Sunstein. Noise: A Flaw in Human Judgment. New York: Little, Brown Spark, 2021.
Kevin is a product strategy and UX design leader with over 15 years of experience transforming complex enterprise systems. Currently completing a Master’s in Data Analytics (ML/AI) at Northeastern University’s Roux Institute, he operates at the intersection of human-centered design, product strategy, and emerging technologies. Through his consulting at Mad*Pow, Kevin led strategic initiatives for Fortune 500 clients, including UnitedHealthcare, Citi, PNC, Boeing, Genentech, Teva, and Nuance Healthcare. He has designed agentic artificial-intelligence (AI) tools for biosurveillance at Ginkgo Bioworks, scaled design teams at Wayfair, and driven the cloud transformation of veterinary software at IDEXX. Throughout his career, Kevin has championed innovation by driving product strategy and design from discovery to implementation, translating market insights and user needs into successful products through human-centered practices, and aligning design, product, and engineering teams. He focuses on making intelligent systems easy to use and trustworthy. He sees AI as an opportunity to amplify human abilities when we design it to be productive, ethical, and beneficial. At smalldesign.studio, Kevin advances human-centered AI solutions.