
What Puts the Design in AI? Behavior, Part 1

October 20, 2025

“You’re very well read, it’s well known

But something is happening here

And you don’t know what it is

Do you, Mr. Jones?”—Bob Dylan

I first wrote about the role of behavior in interaction design in 2007, expanding Gillian Crampton Smith’s four-dimensional interaction model by proposing behavior as a fifth dimension. [1] While Smith’s framework had addressed words, visual representations, physical objects, and time, [2] I argued that behavior serves as the mediating medium through which users and systems interact. Behavior coordinates the flow between action and reaction, a conversation that unfolds over time, shaping how user intentions translate into actions and how systems respond. [3] Our interactions with systems have been fundamentally bounded, whether those systems are fixed or personalized. Designers could anticipate, if not control, the range of possible system outcomes.


But what happens when systems are no longer bounded? When they can now collaborate with us, create for us, and act on our behalf? When they possess seemingly dynamic intelligence that can learn, think, and act? The latest artificial intelligence–driven systems that have emerged into the mainstream since the launch of ChatGPT in 2022 now have such capabilities. These systems behave differently from what users are accustomed to, requiring a shift from human-tool interactions to something closer to human-mind interactions. The new role for designers is to create the conditions for an emerging relationship between man and machine.

In this article, I’ll present a theory of interacting minds as a design metaphor for exploring how behavior shifts from coordinating interactions to managing dynamic human–artificial-intelligence (AI) relationships. [4] I’ll expand on my original definition of behavior to show how behavior becomes our primary design material through the following four key dynamics:

  1. Adaptation—continuous learning
  2. Attention—sustained interaction
  3. Alignment—shared purpose
  4. Repair—recovery from misalignments

Understanding this shift—from interactions to dynamic relationships—helps ensure that AI systems remain genuinely human-centered as they collaborate, create, and act on our behalf.

Beyond Bounded Systems

Imagine having an ever-present personal healthcare advocate proactively guiding you through your health journey, recommending, coordinating, and, ultimately, managing your care. An AI agent thinks and acts on your behalf, navigating your health ecosystem and seamlessly involving human advocates as necessary. As the user, you control the level of agency, while the agent makes decisions and acts autonomously.

This was the five-year, mobile-platform vision I helped develop while working with a leading health insurer in late 2022, when ChatGPT launched. In targeting this future scenario, our team assumed—perhaps too optimistically—that we would overcome the technical limitations, data-access constraints, and systemic healthcare barriers within that five-year timeframe. Although ChatGPT had sparked our imagination, much of what we envisioned still seemed fantastical, yet somehow plausible.


Two years later, I now see how AI can enable what bounded systems never could. Healthcare is deeply personal and doesn’t always follow predetermined paths. The AI agent must adapt to unexpected situations, reason through ambiguity, and understand context. It must comprehend discharge instructions, navigate insurance complexities, weigh trade-offs when plans fall short, and coordinate across different systems while considering individual preferences and circumstances. This requires more than simple automation—it demands a system that can create new solutions rather than just stick to predetermined options.

I never would have guessed how quickly AI would evolve after the release of ChatGPT. The shift from Predictive AI—“What should happen next?”—to Generative AI (GenAI)—“How can I help you?”—marks a major leap. [5] But it will be Agentic AI systems—“What can I do for you?”—which can truly learn, reason, and act on our behalf, that will bring the healthcare advocate to life. [6] We now have digital collaborators, creators, and doers that can help us with myriad tasks, including managing our healthcare.

These systems behave differently from those we’re used to. Their stochastic nature means the same question could yield a different response each time. Responses are generated probabilistically from learned patterns rather than fixed rules. While this unpredictability is precisely what enables creative problem-solving and natural interactions, [7] it also means we can’t design these interactions in the same way we’ve designed human-tool interactions.
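To make the stochastic point concrete, here is a minimal, hypothetical sketch in Python. The token probabilities are invented for illustration—a real model learns such distributions from data—but the sampling step mirrors how language models choose their next word, which is why an identical prompt can produce different answers.

```python
import random

# Hypothetical next-token probabilities for one fixed prompt, for example,
# "After discharge, the patient should..." These numbers are invented for
# illustration; a real model derives them from learned patterns.
next_tokens = {"rest": 0.4, "call": 0.3, "schedule": 0.2, "wait": 0.1}

def sample_response(seed=None):
    """Draw one continuation from the pattern-derived distribution."""
    rng = random.Random(seed)
    tokens, weights = zip(*next_tokens.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

# The same prompt, asked 20 times, does not yield a single fixed answer:
samples = {sample_response(seed=s) for s in range(20)}
print(samples)  # more than one distinct continuation appears
```

In real systems, a temperature parameter rescales these probabilities before sampling, tuning just how unpredictable the responses are—low temperature concentrates on the likeliest answer; high temperature spreads choices more evenly.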

As designers, we’re now shaping probabilistic relationships across various types of human-mind interactions: human-AI collaboration, human-agent delegation, and even agent-to-agent coordination. By minds, I mean entities that are capable of learning, reasoning, and acting. We find these patterns in human cognition and, increasingly, in AI systems. These relationships aren’t one-sided but emerge at the point of interaction, creating something new through an exchange between interacting minds. To understand the implications for design, let’s first look at how human relationships develop as interacting minds.

Interacting Minds

Human relationships evolve through ongoing, mutual influence via dynamic, unpredictable, moment-to-moment interactions. [8, 9] These exchanges are meetings of minds that collectively generate new thoughts and behaviors. [10] Through shared cognition and mutual adaptation, humans develop mental models, enabling them to process future information and shaping their subsequent interactions. [11] This process doesn’t occur in isolation. It takes place between individuals or groups as interacting minds, at the point of interaction itself—not merging into one mind but forming something unique between them through their ongoing exchange. [12]

Consider human collaborations, which reveal patterns that develop through ongoing interaction. Working with a longtime colleague, you establish shared rhythms—knowing when to push back, when to defer, and how to signal confusion without disrupting the flow. These aren’t rules but patterns that are shaped by your shared history, mutual understanding, and continuous exchanges. Drawing from behavioral and cognitive research, I’ve identified four key behavioral dynamics that form this kind of collaborative relationship. These mechanisms work together as interacting minds manage their relationship, in the following ways:

  • Adaptation—Continuously adjust to each other’s patterns and preferences over time. [13]
  • Attention—Build and sustain mutual focus during interactions. [14]
  • Alignment—Create and maintain shared understanding and intent. [15]
  • Repair—Recognize and address misunderstandings or breakdowns that occur. [16]

Together, these dynamics enable collaborators to function as interacting minds. Through ongoing behavioral adjustments, they manage their relationship and build a collective intelligence that neither could achieve alone. Figure 1 shows how these four behavioral dynamics work together at the point of interaction to manage the relationship between interacting minds.

Figure 1—The four behavioral dynamics
The four behavioral dynamics

Just as human relationships develop collective intelligence through shared history and understanding, our interactions with AI systems create a similar dynamic—not as users operating tools but as participants in emerging relationships. Out of curiosity, I asked Anthropic’s Claude to describe our working relationship using these four dynamics. Of course, asking Claude to describe our collaboration while we’re actively discussing human-AI relationships would, admittedly, shape the response. Still, Claude’s response revealed some interesting patterns, as follows:

Adaptation: I learn your patterns within our conversation—terminology preferences, theoretical frameworks, iterative style. You adapt, too, learning how to prompt more effectively and when to provide context versus constraints. But this mutual adjustment resets each session, requiring us to re-establish our working rhythm each time.

Attention: You direct my focus through careful context-setting—providing your memories, past work, theoretical frameworks—while monitoring when I drift toward academic abstraction or away from your voice. We work in sustained but punctuated bursts—intense work sessions with clear boundaries.

Alignment: We continuously calibrate around precision and tone. You test ideas through me, but firmly redirect when I veer too formal or lose your theoretical thread. We never assume alignment—we actively maintain it through your consistent “Yes, but” corrections.

Repair: We make quick, iterative adjustments. When I misinterpret your intent or over-elaborate, you immediately course correct with specific examples or constraints—“keep your response short.” You treat misalignment as normal, not failure—part of the collaborative process rather than breakdown.

Our relationship operates through constant micro-negotiations rather than set-and-forget commands.

What caught my attention was Claude’s description “constant micro-negotiations rather than set-and-forget commands.” I’d never thought of our interactions as negotiations, but this term captures something essential. The response suggests our relationship has evolved through mutual influence, requiring adaptation, shared attention, ongoing alignment, and reciprocal adjustments—even though this partnership resets after each session. Whether AI truly understands or merely simulates understanding matters less than this behavioral reality: this relationship operates through the same dynamics as any collaborative partnership, requiring us to manage it as interacting minds.

Behavior As Material

My collaboration with Claude illustrates thinking together as interacting minds, while my earlier healthcare-advocate scenario shows AI acting on the user’s behalf. Together, these examples reveal something fundamental: the user interface is no longer the primary factor shaping the experience. With unbounded systems, the experience emerges through interactions, whether generating ideas with Claude, delegating healthcare coordination, or orchestrating multi-agent systems. We’re no longer designing user interfaces to control interactions but shaping the behavioral conditions from which interactions emerge. Behavior itself becomes our design material. To understand what this means, we need to revisit what behavior is.

My 2007 definition positioned behavior as coordinating the conversation between human intent and system action. This worked when systems were bounded and behavior could orchestrate predetermined pathways, managing workflows even when they were complex or hidden. But with unbounded systems, coordination isn’t enough. When AI can generate novel responses, learn from interactions, and act with increasing autonomy, behavior must do more than coordinate—it must actively manage the evolving relationship between minds.

This management occurs through the four behavioral dynamics we’ve observed in human relationships. Behavior no longer simply coordinates turn-taking but also enables adaptation, allowing both human and AI to learn from each interaction. Behavior sustains attention across changing modes of engagement. It maintains alignment even as the relationship between both parties continuously evolves. And it facilitates repair when understanding breaks down. These dynamics define the behavioral conditions at the point of interaction in human-mind relationships, making unbounded systems manageable despite their unpredictability. Figure 2 shows the shift from behavior as coordination in human-tool interactions to behavior as relationship management in human-mind interactions.

Figure 2—Shifting from coordination to relationship management
Shifting from coordination to relationship management

Behavior acts as our design material because we can actively shape our behaviors when outcomes are uncertain. Behavior still coordinates intent but also manages the relationship between interacting minds. We can design interactions that influence behavioral conditions. For example, designing the healthcare advocate’s check-in routines guides attention, while defining how it learns from patient feedback promotes adaptation. Establishing how it calibrates to patient values influences alignment, and creating pathways for clarification enables repair. These design choices don’t determine what will happen, but rather shape how interactions unfold—setting the conditions rather than the outcomes.

Here lies the challenge: how do we create the conditions for an emergent relationship that fosters a productive, ethical, beneficial connection between these interacting minds? We design behaviors. This is what puts the design in AI.

Foundations, Not Solutions

AI is rapidly advancing. Ethan Mollick observed, “Whatever AI you are using right now is going to be the worst AI you will ever use.” [17] This constant evolution challenges designers who are still applying patterns for bounded systems to fundamentally unbounded systems. They’re trying to design for interacting minds using patterns that were devised for human-tool interactions. Existing design patterns that were based on deterministic thinking cannot fully address the reality of probabilistic relationships. We need to ground new approaches in behavioral dynamics—adaptation, attention, alignment, and repair—that work through emergence rather than resisting it.

This article is Part 1 of a series in which my goal is to provide a framework for thinking about human-centered AI design. The healthcare-advocate concept demonstrates the potential for a system that can serve human needs while respecting human agency. However, these behavioral dynamics also influence whether AI amplifies or diminishes human ability. By understanding how to shape adaptation, attention, alignment, and repair, we can design relationships that are not only powerful but also beneficial. These relationships begin at the point of interaction—where humans and AI come together as interacting minds.

“Ah, but I was so much older then

I’m younger than that now”—Bob Dylan

Note on AI Use

I leveraged Claude throughout the writing of this article as a thought partner, researcher, and editor. I used ChatGPT for initial research and image creation. I use and have always used em dashes deliberately for emphasis.

Claude had this to say: You used our collaboration as both a writing partner and a living laboratory for your theoretical framework, iteratively refining concepts through precise edits while testing your interacting-minds model in real time. I assisted with fact-checking citations, ensuring theoretical alignment between sources and claims, and providing focused refinements to achieve conceptual clarity. Our working relationship itself became empirical evidence for your argument about behavior as the core design material in AI systems—demonstrating adaptation, attention, alignment, and repair through our actual interactions. 

Acknowledgment—I thank Morgan St. Laurent, Jen Briselli, Katie Oeschger, Andrew Kinley, and Elijah Fischer for reviewing article drafts and providing insightful feedback.

Endnotes

1. Kevin Silver. “What Puts the Design in Interaction Design.” UXmatters, July 10, 2007. This article has been cited in over 90 academic papers and is widely referenced in interaction-design education and practice as establishing behavior as the fifth dimension of the interaction-design framework.

2. Gillian Crampton Smith. “Foreword,” in Designing Interactions. Bill Moggridge, ed. Cambridge, MA: MIT Press, 2007.

3. Kevin Silver. “What Puts the Design in Interaction Design.”

4. Thalia Wheatley, Mark A. Thornton, Arjen Stolk, and Luke J. Chang. “The Emerging Science of Interacting Minds.” Perspectives on Psychological Science, Vol. 19, No. 2, 2024. I adopt their framework for understanding cognition as emerging through interaction, extending their analysis of human-human interaction to human-AI relationships.

5. Paul R. Daugherty and H. James Wilson. Human + Machine: Reimagining Work in the Age of AI, Updated and Expanded Edition. Boston: Harvard Business Review Press, 2024.

6. Pascal Bornet, Jochen Wirtz, Thomas H. Davenport, David De Cremer, Brian Evergreen, Phil Fersht, Rakesh Gohel, and Shail Khiyara. Agentic Artificial Intelligence: Harnessing AI Agents to Reinvent Business, Work, and Life. Singapore: World Scientific Publishing Co. Pte. Ltd., 2025.

7. Bornet et al. Agentic Artificial Intelligence.

8. John M. Gottman, James D. Murray, Catherine C. Swanson, Rebecca Tyson, and Kristin R. Swanson. The Mathematics of Marriage: Dynamic Nonlinear Models. Cambridge, MA: MIT Press, 2002. Their mathematical modeling demonstrates how relationship dynamics emerge from continuous mutual influence between partners, with small changes potentially cascading into larger patterns.

9. Wheatley et al. “The Emerging Science of Interacting Minds.” This paper argues that human cognition and behavior cannot be fully understood by studying individuals in isolation, as key phenomena emerge from the dynamics of interaction itself.

10. Leonhard Schilbach, Bert Timmermans, Vasudevi Reddy, Alan Costall, Gary Bent, Tobias Schlicht, and Kai Vogeley. “Toward a Second-Person Neuroscience.” Behavioral and Brain Sciences, Vol. 36, No. 4, 2013. Schilbach and his coauthors use meeting of minds to describe how genuine social cognition emerges in reciprocal interaction.

11. Wheatley et al. “The Emerging Science of Interacting Minds.” Wheatley and her coauthors describe how interacting systems achieve mutual adaptation over time, developing aligned representations that guide future interactions.

12. Wheatley et al. “The Emerging Science of Interacting Minds.”

13. Gottman et al. The Mathematics of Marriage: Dynamic Nonlinear Models. I characterize Gottman and his coauthors’ ongoing mutual influence as adaptation.

14. Schilbach et al. “Toward a Second-Person Neuroscience.” Schilbach and his coauthors emphasize mutual attention and gaze as fundamental to second-person neuroscience. While they do not explicitly connect attention to alignment and repair, I identify it here as one of three key mechanisms through which interacting minds coordinate.

15. Herbert H. Clark and Susan E. Brennan. “Grounding in Communication.” In Perspectives on Socially Shared Cognition, ed. Lauren B. Resnick, John M. Levine, and Stephanie D. Teasley. Washington, DC: American Psychological Association, 1991. Clark and Brennan’s concept of grounding describes how conversational partners establish and maintain common ground. I characterize this process as alignment, one of three mechanisms that enable cognitive coordination between interacting minds.

16. Gottman et al. The Mathematics of Marriage: Dynamic Nonlinear Models. Gottman and his coauthors demonstrate repair as crucial for relationship stability, particularly in managing conflict. I position repair alongside attention and alignment as core mechanisms of relational cognition.

17. Ethan Mollick. Co-Intelligence: Living and Working with AI. New York: Portfolio, 2024.

Principal at smalldesign.studio

Yarmouth, Maine, USA

Kevin Silver is a product strategy and UX design leader with over 15 years of experience transforming complex enterprise systems. Currently completing a Master’s in Data Analytics (ML/AI) at Northeastern University, Roux Institute, he operates at the intersection of human-centered design, product strategy, and emerging technologies. Through his consulting at Mad*Pow, Kevin led strategic initiatives for Fortune 500 clients, including UnitedHealthcare, Citi, PNC, Boeing, Genentech, Teva, and Nuance Healthcare. He has designed agentic artificial-intelligence (AI) tools for biosurveillance at Ginkgo Bioworks, scaled design teams at Wayfair, and driven the cloud transformation of veterinary software at IDEXX. Throughout his career, Kevin has championed innovation by driving product strategy and design from discovery to implementation, translating market insights and user needs into successful products through human-centered practices and aligning design, product, and engineering teams. He focuses on making intelligent systems easy to use and trustworthy. He sees AI as an opportunity to amplify human abilities when we design it to be productive, ethical, and beneficial. At smalldesign.studio, Kevin advances human-centered AI solutions.
