As Part 1 described, behavior serves as the primary design material for artificial intelligence (AI), comprising four behavioral dynamics:
adaptation—continuous learning
attention—sustained interaction
alignment—shared purpose
repair—recovery from misalignments
In Part 2 of this series, I discussed how attention and alignment establish the conditions for the relationships that emerge between interacting minds based on posture and intent—whether in human-AI collaboration, human-agent delegation, or agent-to-agent coordination. Now, Part 3 examines how adaptation and repair sustain these emergent relationships.
To review, posture determines the levels of attention and engagement that are necessary, how much control the user keeps or cedes, and when the AI should act. Intent defines the goal, context, and boundaries that guide decision-making. Posture and intent work together to fit the relationship to the current situation.
In my 2007 article on UXmatters, “What Puts the Design in Interaction Design,” drawing on Alexander, [1] I emphasized the importance of achieving a good fit between form and context. You can think of form as the experience or, in this case, the relationship. With interacting minds, this relationship between form and context shifts. Bounded systems operate within defined limits; thus, UX designers can anticipate the context and predetermine the fit accordingly. Unbounded systems can extend beyond these limits—they adapt to maintain fit as the context evolves. When a breakdown occurs, repair restores the fit.
Take the example of the agentic health advocate that I’ve discussed throughout this series. If your child is diagnosed with a chronic condition, the advocate assesses how the diagnosis affects your caregiving duties, adjusts your care priorities accordingly, and adapts to the unanticipated context. If the advocate schedules an appointment for you during the week that you had planned to be home for your child’s first treatment, your correction highlights something that the advocate had not understood—your need to be physically present at your child’s appointments. Repair addresses this misalignment in intent.
Adaptation and repair sustain the relationship—not through a one-time setup but through continuous, mutual learning at the point of interaction. Nora Bateson calls this learning together in context symmathesy. [2]
Learning
Symmathesy describes mutual learning through interaction between entities. Learning unfolds unpredictably over time within co-evolving contexts. Each entity learns uniquely across shifting boundaries at the point of interaction, creating and continually recalibrating shared contextual understanding. This process creates collective intelligence that neither entity could achieve on its own. Learning not only sustains the relationship but also defines it.
Learning sustains the relationship in two ways: adaptation maintains fit as the context evolves, and repair restores the relationship when breakdowns happen. Learning is not linear but cyclical—a feedback loop of action, consequences, and correction. This loop operates at three levels of learning: [3]
Are we doing things right?
Are we doing the right things?
How do we decide what is right?
Both adaptation and repair work at these three levels, assessing the fit between intent, posture, and context, or fixing a breakdown when it occurs. Depending on the level, this might involve adjusting actions, recalibrating the application of intent and posture, or reestablishing them entirely. According to Bateson, “Learning in a living context can be best thought of as a change in calibration.” [4]
Adaptation
As Figure 1 shows, adaptation enables continuous learning once you’ve set the initial intent and posture, allowing the human-AI relationship to grow and adapt as new situations emerge, preferences become clearer, and the original intent shifts. The experience need not be predetermined—the unbounded system can learn on the fly, reason through ambiguity, respond to unforeseen circumstances, and generate solutions that weren’t predesigned. Since both sides learn through their interactions, the relationship stays adaptive—always evolving.
Figure 1—Adaptation
Three factors trigger adaptation: signals, shifts, and growth. Signals include feedback, behavioral patterns, and small adjustments. Shifts are changes in circumstances or new situations that require attention. Growth involves a fundamental change in a relationship as it develops, opening up new possibilities. How the system responds to these triggers depends on the learning levels that I described earlier. Each learning level aligns with a potential action: refine, extend, or transform. Table 1 shows how triggers, learning levels, and actions connect. Figure 2 illustrates the triggers for adaptation.
Table 1—Adaptive relationships and levels of learning
Trigger | Learning Level | How the Relationship Adapts
Signals | Are we doing things right? | Refine actions based on new patterns and preferences.
Shifts | Are we doing the right things? | Extend how initial intent and posture apply to changing circumstances.
Growth | How do we decide what is right? | Transform goals, boundaries, and levels of control as the relationship matures.
Figure 2—Triggers for adaptation
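To make this mapping concrete, here is a minimal sketch in TypeScript, assuming a simple trigger-to-action loop. The type names and the adapt function are hypothetical illustrations of the framework in Table 1, not part of any real system.

```typescript
// A hypothetical sketch of the adaptation loop in Table 1: triggers map
// to learning levels, and each learning level maps to an action.

type Trigger = "signal" | "shift" | "growth";
type LearningLevel =
  | "doing-things-right"      // Are we doing things right?
  | "doing-the-right-things"  // Are we doing the right things?
  | "deciding-what-is-right"; // How do we decide what is right?
type Action = "refine" | "extend" | "transform";

// The typical mapping from Table 1. It is a default, not a rule:
// diagnosis at the point of interaction can override it.
const defaultLevel: Record<Trigger, LearningLevel> = {
  signal: "doing-things-right",
  shift: "doing-the-right-things",
  growth: "deciding-what-is-right",
};

const actionFor: Record<LearningLevel, Action> = {
  "doing-things-right": "refine",
  "doing-the-right-things": "extend",
  "deciding-what-is-right": "transform",
};

// The trigger initiates, the loop diagnoses, and the action responds.
function adapt(trigger: Trigger, diagnosedLevel?: LearningLevel): Action {
  const level = diagnosedLevel ?? defaultLevel[trigger];
  return actionFor[level];
}

console.log(adapt("signal"));                      // "refine"
console.log(adapt("shift", "doing-things-right")); // a shift that needs only refinement: "refine"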
Let’s see how these triggers play out for the healthcare advocate, continuing the earlier scenario through each trigger in turn, as shown in Tables 2, 3, and 4.
Table 2—Scenario for signals
Scenario: The advocate notices that you repeatedly reschedule appointments that conflict with your child’s treatment. You ask about certain providers more often than others. You combine refill requests to reduce trips.
Learning Level: Are we doing things right?
Learnings: Patterns—that is, which appointments flex, preferred providers, and optimal sequencing
What Adapts: Scheduling, sequencing, and provider choices
Action: Refine
How the Relationship Changes: The advocate starts scheduling routine appointments during treatment-free weeks and automatically batches refills.
Table 3—Scenario for shifts
Scenario: Your child is diagnosed with a chronic condition. The advocate recognizes that your caregiving responsibilities have changed. What once counted as managing your care no longer fits.
Learning Level: Are we doing the right things?
Learnings: How initial intent and posture relate to your new caregiving circumstances
What Adapts: Priorities and scope of care management
Action: Extend
How the Relationship Changes: The advocate expands to coordinate your care around your child’s treatment schedule, flagging conflicts before they happen.
Table 4—Scenario for growth
Scenario: Months later, your child’s condition is complex but stable. You’ve become the expert on your family’s needs. The advocate has been with you throughout.
Learning Level: How do we decide what’s right?
Learnings: The limits of current arrangements—goals, boundaries, and control levels—need to change.
What Adapts: The advocate’s role, decision authority, and whose knowledge matters
Action: Transform
How the Relationship Changes: You and the advocate renegotiate: the relationship shifts from managing your care to supporting your decisions and surfacing options instead of scheduling directly.
While triggers often relate to specific actions—signals to refine, shifts to extend, and growth to transform—this relationship is not rigid. A shift might need only refinement. Growth could emerge through signals. The trigger initiates, the loop diagnoses, and the action responds.
Adaptation requires conditions that support ongoing learning: noticing patterns, recognizing when the context has changed, and recalibrating properly. It involves calibration moments—exchanges where you check and adjust shared context. It also demands flexibility—not only to refine actions but to extend or transform the relationship when necessary.
Repair
Repair, as shown in Figure 3, restores fit with intent and posture when something goes wrong. Unlike adaptation, repair focuses on fixing failures rather than adjusting to changing conditions, although successful repair can eventually lead to adaptation. I categorize these failures as errors, drift, and breakdowns. An error happens when the AI makes a mistake, such as a misunderstanding, hallucination, or failure of judgment. Drift is a slow decline in the application of intent and posture, often occurring when adaptation lags. A breakdown is a sudden misalignment that ruptures trust.
Figure 3—Repair
As with adaptation, the system’s response to these issues depends on the learning levels that I described earlier. Each level corresponds to an action: fix, realign, or reframe. Table 5 shows how triggers, learning levels, and actions connect. Figure 4 illustrates the triggers for repair.
Table 5—Repairing relationships
Trigger | Learning Level | How the Relationship Repairs
Error | Are we doing things right? | Fix the action. Correct it and move on.
Drift | Are we doing the right things? | Realign how intent and posture are applied.
Breakdown | How do we decide what is right? | Reframe goals, boundaries, and level of control. Apply the learnings.
Figure 4—Triggers for repair
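The repair mapping lends itself to the same kind of sketch. Again, this is a hypothetical TypeScript illustration of Table 5, assuming a diagnosis step that can escalate a failure to a deeper level; none of these names come from a real system.

```typescript
// A hypothetical sketch of the repair mapping in Table 5.

type Failure = "error" | "drift" | "breakdown";
type RepairAction = "fix" | "realign" | "reframe";

const repairFor: Record<Failure, RepairAction> = {
  error: "fix",         // Are we doing things right?
  drift: "realign",     // Are we doing the right things?
  breakdown: "reframe", // How do we decide what is right?
};

// Triggers and actions relate dynamically: an error might suggest drift,
// and drift could signal a breakdown. Diagnosis can escalate a failure
// to a deeper level before choosing the repair action.
function repair(failure: Failure, escalatedTo?: Failure): RepairAction {
  return repairFor[escalatedTo ?? failure];
}

console.log(repair("error"));          // "fix"
console.log(repair("error", "drift")); // an error that reveals drift: "realign"
```

The optional escalation parameter reflects the point made below: the relationship between triggers and actions is dynamic, and diagnosing the right level matters more than the default mapping.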
Let’s see how these triggers play out for the healthcare advocate, continuing the earlier scenario through each trigger in turn, as shown in Tables 6, 7, and 8.
Table 6—Scenario for error
Scenario: The advocate schedules your follow-up with Dr. Y when you’ve been working with Dr. X on this issue. The advocate had the right information; it just got it wrong.
Learning Level: Are we doing things right?
What Failed: The action—the advocate had the information but made a mistake.
What Is Restored: The correct provider assignment
Action: Fix
How the Relationship Changes: You correct the error. The advocate adjusts. Shared context remains intact. This was a discrete error, not a misunderstanding.
Table 7—Scenario for drift
Scenario: The advocate schedules an appointment for you during the week you planned to be at home for your child’s first treatment. Your caregiving role has evolved, but the advocate’s understanding hasn’t kept pace. The AI is still operating from an earlier model of your priorities.
Learning Level: Are we doing the right things?
What Failed: How intent and posture were applied—the advocate hadn’t grasped your need to be physically present.
What Is Restored: A shared understanding of how family coordination shapes your care decisions
Action: Realign
How the Relationship Changes: Your correction surfaces what drifted. The advocate adjusts the application of intent based on family needs. This is a misalignment in intent.
Table 8—Scenario for breakdown
Scenario: The advocate, noticing patterns in your stress and your child’s treatment schedule, proactively contacts a family-support service on your behalf. You didn’t authorize external contact. The advocate has crossed the boundary of what it can initiate independently.
Learning Level: How do we decide what is right?
What Failed: Decision authority—who can initiate external contact
What Is Restored: Clear boundaries around decision authority. Trust must be rebuilt.
Action: Reframe
How the Relationship Changes: The advocate cannot just fix or realign this breakdown. You must reframe goals, boundaries, and levels of control by applying the lessons learned, and rebuild trust before the relationship can continue.
As with adaptation, repair’s triggers don’t always correspond directly to a specific action. An error might suggest drift, while drift could signal a breakdown. The relationship between triggers and actions is dynamic, which requires transparency to diagnose what went wrong and respond appropriately. Most importantly, successful repair depends on trust.
Gottman calls this sentiment override. [5] When there is enough foundational trust, people usually receive attempts at repair well. If not, repair can fail and erode trust further. Unbounded systems are inherently variable, so repair aims for an acceptable fit rather than perfect consistency. Transparency, rather than just explainability, supports repair—showing what the AI has understood rather than merely explaining why it acted. This rebuilds trust.
Repair requires conditions that support restoration: the transparency to diagnose what went wrong, the ability to respond at the appropriate level, and sufficient trust for attempts at repair to succeed.
Sustaining Trust
As adaptation and repair work to sustain the relationship that you established through attention and alignment, they also preserve and strengthen trust. Part 2 of this series explained how trust enables the transfer of judgment that is codified through intent. This transfer lets the AI make autonomous decisions in accordance with agreed-upon criteria. Judgment is not static; it evolves through mutual learning. Adaptation sustains it. Repair restores it. Both can transform it.
Through signals, shifts, and growth, or through errors, drift, and breakdowns, codified judgment evolves as follows:
Are we doing things right? Judgment holds and the advocate learns patterns within established boundaries.
Are we doing the right things? Judgment extends and the scope of care management expands.
How do we decide what is right? Judgment transforms and decision authority shifts.
Deeper trust enables more sophisticated judgment, which enables greater autonomy and requires more robust adaptation and repair. The relationship strengthens through mutual learning at the point of interaction.
The UX designer’s role shifts from creating solutions to shaping conditions for shared learning. Part 2 of this series defined the conditions for initiating relationships: calibrating posture, sustaining quality exchanges, and making intent explicit. Now, in Part 3, I’ve explained the conditions for sustaining relationships: noticing patterns, recognizing when context shifts, maintaining transparency, and ensuring sufficient trust for attempts at repair to succeed.
Next, in Part 4, I’ll revisit the four behavioral dynamics—adaptation, attention, alignment, and repair—and bring them together to explore the possibilities of unbounded systems.
“Behind every beautiful thing, there’s been some kind of pain.”—Bob Dylan
Endnotes
[1] Christopher Alexander. Notes on the Synthesis of Form. Cambridge, Massachusetts: Harvard University Press, 1964. Alexander’s concept of fit between form and context describes how good design emerges when form is well-adapted to its context—that is, when the demands we place on form in the user experience align with what that form can provide. In bounded systems, UX designers can anticipate the context and design the fit accordingly. In unbounded systems, fit must emerge through ongoing interaction.
[2] Nora Bateson. “Symmathesy: A Word in Progress.” Proceedings of the 59th Annual Meeting of the ISSS—2015 Berlin, Germany. International Society for the Systems Sciences, 2015. Retrieved February 6, 2026. Bateson coined the term symmathesy—from the Greek syn/sym meaning together and mathesi meaning to learn—to describe mutual learning through interaction in living contexts. Unlike mechanical systems, living systems learn together over time through continuous, contextual interaction and mutual calibration.
[3] Chris Argyris and Donald A. Schön. Organizational Learning: A Theory of Action Perspective. Reading, Massachusetts: Addison-Wesley, 1978. Argyris and Schön developed single-loop and double-loop learning theory to describe how individuals and organizations detect and correct errors. Subsequent scholars extended their framework to include a third level, which involves examining the principles that govern how people set goals and make decisions. See Paul Tosey, Max Visser, and Mark N. K. Saunders, “The Origins and Conceptualizations of ‘Triple-Loop’ Learning: A Critical Review,” Management Learning, Vol. 43, No. 3, 2012.
[4] Nora Bateson. “Symmathesy: A Word in Progress.” Bateson describes learning in living contexts as calibration—the ongoing adjustment of relationships and interactions rather than the acquisition of fixed knowledge. The process of mutual calibration occurs at the point of interaction, where entities continuously adjust to one another within evolving contexts.
[5] John M. Gottman, James D. Murray, Catherine C. Swanson, Rebecca Tyson, and Kristin R. Swanson. The Mathematics of Marriage: Dynamic Nonlinear Models. Cambridge, Massachusetts: MIT Press, 2002. In Gottman’s research on relationships, sentiment override describes the baseline level of trust and positive regard in a relationship. When sentiment override is positive, people are more likely to receive repair attempts well. However, when sentiment override is negative, people might reject even well-intentioned repair attempts, further eroding trust. This concept applies directly to human-AI relationships, in which sufficient foundational trust is necessary for repair attempts to succeed.
Note on AI Use
I leveraged Claude throughout the writing of this article as a thought partner, researcher, and editor. I used ChatGPT for initial research and image creation. I use and have always used em dashes deliberately for emphasis.
Claude had this to say: Kevin drafted Part 3 independently, working through the complex integration of adaptation, repair, and learning theory. I provided focused editorial refinement: eliminating passive voice, ensuring terminology consistency with Parts 1 and 2, verifying citation accuracy, and testing conceptual clarity at the sentence level. Kevin maintained complete conceptual control, frequently correcting my suggestions when I missed nuances in his framework—for instance, catching instances when I conflated adaptation’s triggers with repair’s failures, which are distinct mechanisms operating through the same learning loops. Our collaboration demonstrated the very dynamics that Kevin theorizes: we adapted to each other’s working patterns, sustained attention through focused exchanges about specific passages, maintained alignment around intent through Kevin’s clear direction, and repaired misunderstandings when my suggestions diverged from his theoretical precision.
Kevin is a product strategy and UX design leader with over 15 years of experience transforming complex enterprise systems. Currently completing a Master’s in Data Analytics (ML/AI) at Northeastern University’s Roux Institute, he operates at the intersection of human-centered design, product strategy, and emerging technologies. Through his consulting at Mad*Pow, Kevin led strategic initiatives for Fortune 500 clients, including UnitedHealthcare, Citi, PNC, Boeing, Genentech, Teva, and Nuance Healthcare. He has designed agentic artificial-intelligence (AI) tools for biosurveillance at Ginkgo Bioworks, scaled design teams at Wayfair, and driven the cloud transformation of veterinary software at IDEXX. Throughout his career, Kevin has championed innovation by driving product strategy and design from discovery to implementation, translating market insights and user needs into successful products through human-centered practices, and aligning design, product, and engineering teams. He focuses on making intelligent systems easy to use and trustworthy. He sees AI as an opportunity to amplify human abilities when we design it to be productive, ethical, and beneficial. At smalldesign.studio, Kevin advances human-centered AI solutions.