Incorporating artificial intelligence (AI) into internal operations is about more than technology. It’s about the people using that technology. Getting the right mix of automation and human oversight is essential. UX research helps us find this balance, ensuring that AI tools boost productivity without reducing user awareness, trust, or comfort. In this article, I’ll look at what UX research tells us about the impacts of AI and share some practical ways of balancing AI automation with meaningful human involvement.
How Automation Changes User Behavior
Highly automated systems can quietly change the way people work. One common problem is automation-induced complacency: users become less alert because they assume the AI has things covered, so they stop paying close attention to what the system is doing. This has happened in high-stakes activities such as aviation and driving, when pilots or drivers trusted autopilot systems too much and didn’t notice problems until it was too late. The key point: no automation is perfect. When people start believing the machine will always catch mistakes, they stop catching those mistakes themselves. This loss of awareness can lead to serious errors.
The same thing can happen in IT operations. Imagine a monitoring tool that automatically checks for issues or handles deployments. Engineers might stop doing manual reviews and miss unusual patterns, thinking the system will fix everything on its own. This can create real risks such as the following:
automation bias—an over-reliance on what the automated system decides
skill erosion—a loss of abilities through lack of practice
blind spots—missing problems that fall outside the AI’s programming
UX research helps spot these unintended effects early, so teams can build in protections against unwarranted trust. The goal is not to use less automation; it’s to calibrate the use of automation properly. Users should rely on an AI’s help while staying alert and in control. Both humans and AI have important roles to play. UX researchers study how to keep humans from becoming passive observers who just watch an AI work.
Keeping Humans Engaged: Design Strategies That Work
To prevent automation-induced complacency, UX designers can create interaction patterns that keep people appropriately involved and in the loop. Rather than letting users fade into the background, well-designed AI systems regularly invite the user’s input or oversight. Let’s consider some design strategies that work well.
Confirmation Check-Ins
The system pauses at important moments and asks for human confirmation before moving forward. For example, an AI-assisted deployment tool might prepare a release, then ask the engineer: “Ready to deploy version 2.1 to production?” This requires a human to actively click to confirm. Such check-ins ensure that the user stays mentally present and takes ownership of decisions rather than assuming the AI can handle everything perfectly.
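To make this pattern concrete, here is a minimal sketch, in Python, of what such a confirmation gate might look like. The deploy_release function and the version string are hypothetical stand-ins for real deployment logic.

```python
def confirm(prompt: str) -> bool:
    """Ask the operator for an explicit yes/no before proceeding."""
    answer = input(f"{prompt} [y/N] ").strip().lower()
    return answer in ("y", "yes")

def deploy_release(version: str) -> None:
    # Hypothetical placeholder for the real deployment logic.
    print(f"Deploying version {version} to production...")

if __name__ == "__main__":
    version = "2.1"
    # The AI has prepared the release; a human must actively opt in.
    if confirm(f"Ready to deploy version {version} to production?"):
        deploy_release(version)
    else:
        print("Deployment cancelled; no changes were made.")
```

Note that the default answer is no, so doing nothing never triggers the action; the human has to make a deliberate choice.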
Easy Undo and Override Options
Giving users a simple way to undo or override AI actions keeps them attentive. Think of features like Gmail’s Undo Send, which gives the user a few seconds to stop an email message from going out. Such safety nets remind users that they can step in at any time, which reinforces their sense of control. Researchers call this calibrated trust: the system balances automation with human agency so users feel empowered, not sidelined.
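Here is a rough sketch of how such an undo window might work in an operations tool, using a simple timer. The five-second grace period and the scheduled restart are illustrative assumptions, not a prescription.

```python
import threading

class UndoableAction:
    """Run an action after a short grace period, unless cancelled first."""

    def __init__(self, action, delay_seconds: float = 5.0):
        self._timer = threading.Timer(delay_seconds, action)

    def start(self) -> None:
        self._timer.start()

    def undo(self) -> None:
        # Cancelling before the timer fires prevents the action entirely.
        self._timer.cancel()
        print("Action undone before it took effect.")

# The AI schedules a fix; the operator has five seconds to stop it.
pending = UndoableAction(lambda: print("Restarting service X..."),
                         delay_seconds=5.0)
pending.start()
# pending.undo()  # An attentive operator can step in at any time.
```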
Clear Status Dashboards
Making an AI’s activities visible at a glance helps people maintain awareness of what is happening. For instance, an AI operations dashboard might show the following:
a live feed of checks or fixes the AI is running
an alert queue showing current automated actions
real-time status updates on system health
When users can see the AI’s thought process or progress, they’re less likely to be caught off guard. Plus, they can jump in if something looks wrong. Research shows that people trust AI more when they can understand why it made a particular decision or took a specific action. A simple explanation can significantly improve the user’s understanding and trust. For example, “Flagged this server because central processing unit (CPU) usage stayed above 90% for 5 minutes.”
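As an illustration, here is a small Python sketch of how a dashboard might generate that kind of explanation. The CpuSample data model, the 90% threshold, and the five-minute window are assumptions chosen to match the example above.

```python
from dataclasses import dataclass

@dataclass
class CpuSample:
    """One CPU-usage reading, as a percentage."""
    usage_percent: float

def explain_flag(samples: list[CpuSample], threshold: float = 90.0,
                 window_minutes: int = 5) -> str | None:
    """Return a plain-language explanation if the server should be flagged."""
    if samples and all(s.usage_percent > threshold for s in samples):
        return (f"Flagged this server because CPU usage stayed above "
                f"{threshold:.0f}% for {window_minutes} minutes.")
    return None  # No flag, so no explanation is needed.

# Five one-minute samples, all above the threshold.
recent = [CpuSample(u) for u in (92.0, 95.5, 97.1, 93.4, 91.8)]
print(explain_flag(recent))
```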
These design patterns stop users from slipping into a false sense of security. Building structured human oversight into the user experience reduces the risk of automation-induced complacency. The user stays engaged as the AI’s partner. The AI handles the heavy lifting such as processing data and running routine tasks, but regularly prompts the human user to observe, confirm, or decide. This ensures users keep their situational awareness and final authority.
Involving Users from the Start: User-Centered Design
Another essential practice from UX research is bringing in users early and often. Within the context of IT operations, the users are the developers and operations engineers who actually use these AI-powered tools. Getting users involved through participatory design and regularly conducting usability testing is invaluable.
When teams actively include users in the design process, they can spot problems that cause confusion or mistrust long before launching a tool widely. This approach adheres to the following core principles of participatory design:
collaboration—working together with users as partners
continuous feedback—regularly checking in and adjusting
user empowerment—giving users real influence over outcomes
Studies consistently show that products that are designed with user involvement have higher satisfaction and engagement rates because users’ actual needs and concerns shape what gets built.
What This Looks Like in Practice
Real IT operations projects might include the following:
workshops in which developers and designers sketch out user-interface ideas together
regular usability-testing sessions in which actual operations engineers test new AI-driven features
think-aloud protocols in which users verbalize their thoughts while using the tool—“Hmm, why did it suggest that fix? I’m not sure I trust it.” Or “What does this alert mean exactly?”
These moments of uncertainty or hesitation are incredibly valuable to UX designers because they reveal exactly where the user experience breaks down.
A Real Example: Alert Fatigue
In one internal test of an AI-powered monitoring dashboard, operators kept ignoring certain automated alerts. Through interviews and observation, researchers discovered why: the team had been burned by too many false alarms in the past. They had learned not to take the alerts seriously anymore.
This problem is called alert fatigue, and it happens when a system triggers too many warnings without clear priorities. The system warned users too often, so they stopped paying attention. Once the design team understood this, they knew they needed to change the alert system to rebuild user trust. Solutions might include the following, sketched in code after this list:
reducing false positives through better filtering
clearly labeling alert-severity levels
grouping notifications by priority
adding context to explain why each alert matters
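Here is a brief sketch of how a few of these ideas (duplicate suppression, severity labels, priority grouping, and per-alert context) might fit together in code. The Alert model and severity levels are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    CRITICAL = 1
    WARNING = 2
    INFO = 3

@dataclass
class Alert:
    message: str
    severity: Severity
    context: str  # Why this alert matters, in plain language.

def triage(alerts: list[Alert]) -> dict[Severity, list[Alert]]:
    """Deduplicate alerts, then group them so critical items surface first."""
    seen: set[str] = set()
    grouped: dict[Severity, list[Alert]] = {s: [] for s in Severity}
    for alert in alerts:
        if alert.message in seen:
            continue  # Crude duplicate suppression to cut alert noise.
        seen.add(alert.message)
        grouped[alert.severity].append(alert)
    return grouped

queue = triage([
    Alert("Disk 95% full on db-01", Severity.CRITICAL,
          "The primary database will stop accepting writes if the disk fills."),
    Alert("Nightly backup finished", Severity.INFO,
          "Routine confirmation; no action is needed."),
])
for severity in Severity:
    for alert in queue[severity]:
        print(f"[{severity.name}] {alert.message}: {alert.context}")
```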
UX research methods such as contextual inquiry—watching users in their actual work environment—and think-aloud usability testing—listening to users relate what they’re doing during test sessions—can surface such critical trust issues early. By working with users, UX designers can refine an AI tool by adjusting thresholds, adding clearer descriptions, or reorganizing how information appears.
The result is a system that earns users’ trust because their input shaped the system. When users feel heard and see their feedback reflected in the final product, they’re much more likely to trust and, thus, use it fully. Involving users isn’t just a nice-to-have; it’s essential for building AI features that people actually embrace rather than work around.
Building and Maintaining User Trust
A recurring theme in human-AI interaction research is the importance of user trust: Do users trust the AI’s recommendations? Under what conditions? UX researchers examine this issue by studying users’ perceptions—how users interpret the AI’s outputs and the reasoning behind them.
The Trust Factor: Transparency and Explainability
One major finding is that transparency and explainability strongly influence trust. If users don’t understand why an AI made a particular suggestion, they’re more likely to distrust or ignore it. But when users feel that the AI’s reasoning aligns with their own thinking, their confidence grows.
Consider an AI system that recommends actions for resolving an IT incident. If it simply said, “Restart service X now,” some operators might feel skeptical and want to know the following:
Why service X specifically?
What data led to that suggestion?
How confident is the system?
What would happen if they didn’t follow the recommendation?
Without an explanation, an operator might second-guess the AI or ignore the recommendation entirely, even if it’s correct, because the system feels like a black box. Research shows that when users cannot see and, thus, question an AI’s decision-making process, they disengage more quickly and feel less in control. Nobody wants to blindly follow guidance coming from a black box, especially when systems and services are on the line.
Making AI Reasoning Visible
To build trust, successful AI systems make their reasoning visible in simple terms. For example:
Instead of: “Restart service X now.”
Try this: “Restart service X now. Response times have doubled in the last 3 minutes, and memory usage is at 98%. This pattern matches 15 previous incidents that were resolved by restarting this service.”
This explanation helps the operator understand the following:
what the AI observed
why it matters
what past experience informed this recommendation
With this context, operators can make informed decisions. They might agree and act quickly, or they might notice something the AI didn't—for example, that scheduled maintenance was happening—and choose a different path.
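One way to encourage this kind of transparency is to make the reasoning a first-class part of the recommendation’s data structure rather than an afterthought. The sketch below shows one possible shape; the Recommendation class and its fields are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """An AI suggestion packaged with the reasoning behind it."""
    action: str          # What the AI suggests doing.
    evidence: list[str]  # What the AI observed and why it matters.
    precedent: str       # What past experience informs the suggestion.

    def render(self) -> str:
        """Present the action together with its supporting reasoning."""
        return " ".join([self.action, *self.evidence, self.precedent])

rec = Recommendation(
    action="Restart service X now.",
    evidence=["Response times have doubled in the last 3 minutes, "
              "and memory usage is at 98%."],
    precedent="This pattern matches 15 previous incidents that were "
              "resolved by restarting this service.",
)
print(rec.render())
```

Because the explanation lives in the data structure itself, the user interface cannot display a bare action without its reasoning.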
Building Trust Over Time
Trust in AI systems is not built overnight; it develops through users’ having consistent, positive experiences:
Accuracy matters. The AI needs to be right most of the time. A few bad recommendations can quickly erode users’ trust.
Acknowledge uncertainty. When the AI is not confident, it should say so. Saying “I’m 60% confident that this is the issue” is more trustworthy than acting certain when it is not. (A sketch after this list shows one way to phrase such output.)
Learn from mistakes. When the AI gets something wrong, the system should acknowledge it and show how it can improve.
Respect expertise. The AI should work as a helper that enhances human expertise, not as a replacement that dismisses professional judgment.
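As a small illustration of the second point, here is a sketch of how a tool might phrase its output differently depending on the model’s confidence. The phrase_with_confidence function and its thresholds are illustrative assumptions.

```python
def phrase_with_confidence(diagnosis: str, confidence: float) -> str:
    """Surface the model's confidence instead of feigning certainty."""
    percent = round(confidence * 100)
    if confidence >= 0.9:
        return f"{diagnosis} (high confidence: {percent}%)"
    if confidence >= 0.5:
        return f"I'm {percent}% confident that {diagnosis.lower()}"
    return (f"Low confidence ({percent}%): {diagnosis.lower()} is one "
            f"possibility; a human review is recommended.")

print(phrase_with_confidence("This is the issue", 0.6))
# Prints: I'm 60% confident that this is the issue
```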
Key Takeaways for IT Operations Teams
When implementing AI in IT operations, keep these UX research insights in mind:
Design for active engagement, not passive monitoring. Build in regular touchpoints where humans confirm, decide, or override.
Make AI reasoning transparent. Users need to understand why the AI is suggesting what it does. Even brief explanations can make a big difference.
Involve actual users early and often. The people who will use the tool should help shape it from the beginning.
Watch for automation-induced complacency. Build in safeguards that keep users alert and aware, not checked out.
Balance automation with human control. Users should feel empowered and in charge, not overruled or sidelined.
Test in real contexts. Observe how people actually use the tool in their daily work, not just in demo scenarios.
Measure and maintain trust. Pay attention to whether users trust the AI, then make design adjustments when users’ trust drops.
Conclusion
Integrating AI into IT operations is as much about understanding people as it is about the technology. UX research provides the insights that are necessary to create AI tools that truly help: systems that boost productivity while keeping humans engaged, aware, and in control.
The goal is not to maximize automation at all costs. It is to find the right balance in which AI handles what it does best—processing large amounts of data, spotting patterns, and handling routine tasks—while users contribute what they do best—human judgment, understanding context, creative problem-solving, and final decision-making.
By applying these UX research insights (designing for engagement, involving users throughout the design process, making AI reasoning transparent, and actively building users’ trust), IT teams can create AI-powered operations tools that people actually want to use. The result is technology that enhances human capabilities rather than trying to replace them.
Eltigani is a seasoned Information Technology (IT) professional with over 14 years of experience in cloud computing and digital transformation across the banking, education, and public sectors. He plays a pivotal role in enhancing service reliability, leading release management, and driving innovation initiatives that align with enterprise digital transformation goals. His expertise spans cloud infrastructure, artificial intelligence (AI), and IT operations strategy. Eltigani’s background combines strong technical skills with business insight, underpinned by an M.Sc. in Big Data Technologies from the University of East London and multiple professional certifications. Beyond his technical achievements, Eltigani is passionate about the intersection of technology, human experience, and innovation, bridging engineering excellence with user-centric outcomes.