
Balancing AI with Human Authority and Empowerment

Enterprise UX

Designing experiences for people at work

A column by Jonathan Walter
May 22, 2023

If it were physically possible to throw a ball at the Internet, your throw wouldn’t have to be very accurate to strike some article, blog post, or social-media blurb about artificial intelligence (AI). It seems that AI is on every leader’s mind at every technology company—taking the business world by storm—and the general public has been all too happy to follow suit. There’s a new gold rush at hand, with the spoils going to the companies, startups, and enterprising individuals who find ways to best leverage the capabilities of this new technology.

But is AI all that advantageous—or ready to become so? As a UX professional working in the industrial-automation domain, I want to share some perspectives in this column that might be surprising to you. Most importantly, we must endeavor now, more than ever, to balance human authority and empowerment with the automated and artificial solutions that we’ll create and likely champion going forward. In this article, I’ll delve into the following:

  • embracing narrower applications of AI
  • exercising patience and prudence regarding bigger AI applications
  • understanding what UX professionals can do to contribute to the future of AI  

Embracing Narrower Applications of AI

While flashy generative AI applications such as the text-based ChatGPT and the image-based MidJourney are enticing and exciting to use, they have led many users to assume that all AI apps operate as some sort of black box into which they can input some data and, voilà, have it magically spit out incredibly useful information that ties together disparate data, sources, and media. In fact, Canva uses words such as magic or magical because that’s what AI feels like.

However, AI’s embodiments and use cases are still beyond what many of us can fathom. Plus, it’s often unclear how the technology has arrived at its conclusions. While AI is useful in many cases, these tantalizing new tools—and our perceptions of them—can cause us to miss out on some very practical and pragmatic ways to apply AI within our industry domains—solutions that are often right under our noses. Let’s first touch on the following:

  • understanding narrow AI
  • lessons from automation

Understanding Narrow AI

There is still much to discover in regard to narrowly scoped AI use cases. Some refer to this category of the technology as Artificial Narrow Intelligence (ANI), whose scope is limited to assisting people in performing specific tasks. In our daily lives, tools such as Siri by Apple or Alexa by Amazon are familiar, highly successful embodiments of ANI. The spectrum of ANI is fairly vast, even though it’s not on the same scale as other types of AI such as Artificial General Intelligence (AGI) or even Artificial Super Intelligence (ASI), as Figure 1 depicts.

Figure 1—The three types of artificial intelligence

Image source: Great Learning

I contend that ANI, or Stage 1 of AI, is where forward-thinking companies and UX designers should continue to focus their energy because there are still many unrealized benefits. At Rockwell Automation, we’ve created ways to enhance maintenance workers’ productivity by using applied AI that assists or augments people in performing their tasks rather than attempting to replace their value as human workers. Maintenance workers still create and track their work orders just as they normally would, and an AI augments their productivity by observing the data inputs they’re logging and rendering complementary reports that predict breakdowns and delays. The automated tool and the human work hand in hand. In such technologies, AI is not a full replacement for human workers but a controlled capability that assists them, while ensuring that humans remain empowered to assert authority over the AI through interpretation and decision-making.

The key is to find the right balance between what a human should handle and what an AI should handle. After all, as human beings, we cannot predict the future. Our brains are not powerful or fast enough to accept and process millions of data points, then identify patterns that have strong odds of becoming repeatable and predictable over time. But an artificial brain, which can be as powerful and as fast as our technology allows, can help predict future outcomes if we give it the right information and context. In an effective ANI use case, the AI does something that a human worker cannot do and works in parallel with humans to foster more productive, predictable outcomes. Humans then interpret the data and make decisions that drive positive business outcomes, which ideally are both ethical and sustainable and can ensure the safety and productivity of other humans. Over time, our trust in AI will grow, assuming these outcomes are truly positive.
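To make this division of labor concrete, here is a minimal, hypothetical Python sketch of the assistive pattern I’ve described: a simple trend check over logged work-order data that surfaces suggestions for a maintenance planner to review. The data shape, the flag_at_risk_assets function, and the thresholds are illustrative assumptions rather than Rockwell Automation’s actual product logic; the point is that the tool only suggests and a human decides.

from collections import defaultdict
from statistics import mean

def flag_at_risk_assets(work_orders, recent_n=5, threshold_ratio=1.5):
    # work_orders: list of dicts such as {"asset": "Pump-7", "downtime_hours": 2.5},
    # ordered oldest to newest. Returns suggestions for a human to review;
    # this function never acts on them itself.
    history = defaultdict(list)
    for order in work_orders:
        history[order["asset"]].append(order["downtime_hours"])

    suggestions = []
    for asset, hours in history.items():
        if len(hours) <= recent_n:
            continue  # not enough history to establish a baseline
        baseline = mean(hours[:-recent_n])
        recent = mean(hours[-recent_n:])
        if baseline > 0 and recent / baseline >= threshold_ratio:
            suggestions.append({
                "asset": asset,
                "baseline_hours": round(baseline, 1),
                "recent_hours": round(recent, 1),
                "note": "Downtime is trending up; consider scheduling an inspection.",
            })
    return suggestions

# The maintenance planner, not the tool, decides what to do with each suggestion.
sample_orders = [{"asset": "Pump-7", "downtime_hours": h}
                 for h in [1.0, 1.1, 0.9, 1.0, 1.2, 2.4, 2.8, 3.1, 2.9, 3.5]]
for suggestion in flag_at_risk_assets(sample_orders):
    print(suggestion)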

Lessons from Automation

Staying within the domain of automation for now, there are several concepts worth considering that don’t necessarily involve significant AI. However, because of their similarity to AI-driven solutions, they could provide valuable lessons if we take the time to recognize them. Let’s look at the industrial domain, in which many smaller manufacturers are finding success by augmenting their workforce with cobots, or collaborative robots.

As Figure 2 shows, cobots perform alongside workers in a shared space and do the “dirty, dangerous, and dull” work that could lead to repetitive-strain injuries in humans. Cobots are generally more affordable and easier to integrate with existing systems than fully automated robots, thus making them attractive to lower-tier manufacturers. Plus, they are great examples of how we can blend the human with the artificial. After all, they don’t need to be gated off or zoned restrictively, which often requires expanded security measures, complex safety configuration, and more money. The cobot is a narrower application of robotics, and there are still innumerable use cases for this level of robotic assistance that remain largely undiscovered.

Figure 2—Cobots working alongside humans

Image source: Design World

However, AI notwithstanding, the blend of human interactions with automated solutions still carries some risks, especially when it finds its way into the consumer domain. Take the automotive industry, which has grappled with this dicey balance for years now, as I described in my UXmatters article “Satisfying Fundamental Human Needs” back in 2018. Vehicles still have not reached Level 5 autonomous driving, at which a vehicle is fully autonomous. At least, they haven’t in a way that would let drivers plop into the passenger seat with a steaming cup of coffee and peruse their smartphone while the car does all the work of driving them to their destination.

As Figure 3 shows, all the levels beneath Level 5 require a human driver to know when and how to become situationally aware versus when and how to cede control to the automated system. There are simply too many gaps in the technology and too much ambiguity to give humans the Blade Runner dream they’ve long coveted. The years-long march toward fully autonomous driving, within a massive and highly lucrative industry no less, should tell us something, especially when considering the AI boom that is at hand. There needs to be ample time to iron out the kinks, especially if a technology must at some point be self-aware.

Figure 3—The five levels of autonomous driving

Image source: Acko Drive

Exercising Patience and Prudence Regarding Bigger Applications of AI

There are reasons why technology leaders such as Elon Musk, Steve Wozniak, and thousands of others have signed an open letter that urges a six-month pause on the further development of powerful AI systems: the many kinks in AI systems, AI’s breakneck growth, and the fact that we don’t yet fully understand the ramifications of AI. The letter states as follows:

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects.”

Plus, there are ethical and societal impacts that we must consider. We have an ethical duty not to advocate for short-term efficiencies when we’ve not yet managed their human costs. Humans could successfully adapt to the new world of AI if we properly prepared and empowered them, perhaps bringing qualities to the table that weren’t previously obvious. Concerns also persist that the data and training of these systems are often flawed. Insufficient or flawed inputs lead to insufficient or flawed outputs, causing AI systems to hallucinate and provide information that might seem coherent, given the conversational tone of the system, but is utterly false or nonsensical. For some time to come, AI is still going to be like a young, exuberant golden retriever that doesn’t know its own strength. Best to let AI spend some time in obedience training, improving upon its narrower applications and growing gradually, before it bites its owner and potentially does irreparable damage.

Understanding What UX Professionals Can Do to Contribute to the Future of AI

I know that, as UX professionals, it can be tempting to worry about the ramifications of a technology that, if we gave it the appropriate inputs and requirements, could essentially design a user interface. Or that, if we supplied it with the correct research findings, could produce an analysis of our UX research. But before we allow worry to grip us, let’s step back and consider how immature and imperfect this technology still is. If we narrowly scope the applications of AI to specific use cases, its utility becomes more controlled and arguably more assistive to us as UX designers. Let’s consider the following:

  • ideas for how UX designers can leverage AI
  • AI design principles to remember

How UX Designers Can Leverage AI

While some of the following ideas might not yet be fully feasible, we’ve already taken some steps that indicate they are now on the horizon. AI, if we scope it purposefully and specifically, could help the UX design profession by doing the following:

  • assisting designers with iterative UX or UI design—For example, an AI could use a template that a human designer has cultivated through the responsible use of UX research and design methods to refine a design.
  • accepting raw UX research results and outputting a detailed analysis—The AI could provide requirements for what to do next, and experienced UX researchers could review them before presenting a formal report or analysis to stakeholders.
  • conducting a cognitive walkthrough of a workflow—If we gave an AI sufficient evidence regarding a workflow’s steps, it could provide additional analysis that UX professionals could leverage as they conduct their own walkthroughs, perhaps uncovering unsuspected issues that human researchers might have missed.
  • constructing a style guide or perhaps even a full design system, along with recommended code—For capacity-strapped UX and software-development professionals, having the ability to give an AI high-fidelity mockups of a Web site or application, as well as providing an understanding of its intended target users and their contexts, could guide the development of a system or a reusable guide and save a team countless hours of tactical work.
  • enabling users to discover or interact with existing content in different, if not unexpected, ways—For example, Luke Wroblewski implemented a large language model (LLM) on his Web site so users can “Ask LukeW” questions relating to User Experience and technology and receive answers that draw on the myriad articles, presentations, and videos that have accumulated on his site over the past 27 years. (I’ve sketched the general shape of this pattern in the code following this list.)
  • offloading repetitive design specification work—We could give an AI a mockup, providing information that could serve as annotations for developers to help them correctly code and implement a design in production. Or, to take things a bit further, we could allow an AI to code a working proof of concept (PoC) so UX designers could have more fruitful conversations with human developers when it was time to author production-quality code.
  • supplementing design-thinking workshops or brainstorming sessions—If we gave an AI sufficient data, it could generate additional ideas that would fuel divergent thinking and perhaps fill in some conceptual gaps that humans have missed.
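As one illustration of the existing-content idea above, here is a rough Python sketch of a retrieval-then-prompt pattern. It assumes a placeholder call_llm function that you would wire up to whatever language model you have access to, and it uses a naive keyword-overlap step to pick relevant articles; it is emphatically not how the actual “Ask LukeW” feature is built. The general shape is simply this: gather the most relevant existing content, ask the model to answer only from it, and report which sources it drew on so a human can verify the answer.

def answer_from_site_content(question, documents, call_llm, top_k=3):
    # documents: list of dicts such as {"title": "...", "text": "..."}.
    # call_llm: a function you supply that sends a prompt to whatever language
    # model you have access to and returns its text response (a placeholder here).
    question_words = set(question.lower().split())

    def overlap(doc):
        # Naive relevance score: how many of the question's words appear in the article.
        return len(question_words & set(doc["text"].lower().split()))

    # Pick the few articles that look most relevant to the question.
    relevant = sorted(documents, key=overlap, reverse=True)[:top_k]
    context = "\n\n".join(f"{doc['title']}\n{doc['text']}" for doc in relevant)

    prompt = ("Answer the question using only the excerpts below. "
              "If they don't contain the answer, say so.\n\n"
              f"Excerpts:\n{context}\n\nQuestion: {question}")

    # Return both the answer and the sources, so a human can verify them.
    return call_llm(prompt), [doc["title"] for doc in relevant]

In practice, the retrieval step would more likely rely on semantic search than keyword overlap, but the balance of authority stays the same: the model drafts, and the human verifies.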

Do you have other ideas or even examples of AI use cases for UX designers? Please share them in the comments.

AI Design Principles to Remember

When designing user interfaces that incorporate AI technology to enable users to achieve their goals, we should help the users to do just that: achieve their goals. Therefore, we should hold fast to the principles of UX design, which do not change with the advent of new technology. As author Bree Chapin explains in her article “UX Design Principles for AI Products,” on UX Collective:

“Your design delivers an experience. AI is the vehicle but probably not the destination.” Plus, transparency is key. “One thing that we in the tech world can sometimes lose sight of is that most of folks out there using technology may not be as informed about how it all works and what it all means.”

When users input data into the black box of AI, it’s important to give them feedback about what’s happening. Providing feedback has long been a core principle of user-interface design. For example, we commonly offer helpful status messages and determinate or indeterminate progress bars or spinners when a system is loading or crunching data. Giving users commensurate levels of assurance, including insight into how the technology has come to its conclusions, is becoming ever more important as the race toward AI ascendancy picks up pace. In his UX Collective article, “Testing AI Concepts in User Research,” author Chris Butler states:

“When a decision is made by an algorithm, there may be no chance to understand why [that decision was made].” Plus, multimodal AI interactions that include voice or auditory technology must provide at least the same level of feedback, even when the user cannot see it, which also delivers accessibility and inclusivity benefits.

Moreover, as Chapin shares, simplicity will always reign supreme. As UX designers, we’re in the business of eliminating distractions—and what’s more distracting than a technology that’s capable of rendering a stylized image of a clown riding a dalmatian into battle against an army of angry miniature poodles in mere seconds? Therefore, we must shield users from the complexity and inner workings of the technology while still providing appropriate, timely contextual feedback and simple means of interacting.

Finally, as UX designers, we must always advocate for human beings and give them the means by which they can manually intervene and assert authority over an AI. After all, many of today’s automated systems are still woefully deficient in intelligence. I recently learned this the hard way when a credit-card company’s system unilaterally canceled my corporate business card, while I was traveling on business, because I’d requested a new card after failing to receive a replacement a month earlier. Customer representatives were unable to manually reinstate my current card. Neither they nor their managers were empowered to override a system to which their company had granted omnipotent power. Now, imagine if such a system could think for itself and decide what it feels is best—about everything. As UX designers, particularly those of us designing enterprise software for workers, we must ensure that we design the appropriate hooks, or on/off switches, into these user interfaces as well, so that human intervention is always possible.
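To show what such a hook might look like in an enterprise workflow, here is a minimal, hypothetical Python sketch. The class names, fields, and messages are illustrative assumptions, and a real system would need authentication, audit trails, and far more rigor; the point is simply that the AI can only suggest, every action requires approval from an empowered human, and the assistance can be switched off entirely.

from dataclasses import dataclass

@dataclass
class AiSuggestion:
    action: str       # for example, "cancel_card"
    rationale: str    # why the system is suggesting it
    confidence: float

class AssistedWorkflow:
    # An AI proposes; an authorized human disposes. The ai_enabled flag is the
    # off switch: when it's False, suggestions are simply routed to people.
    def __init__(self, ai_enabled=True):
        self.ai_enabled = ai_enabled
        self.audit_log = []

    def handle(self, suggestion, approved_by=None):
        if not self.ai_enabled:
            self.audit_log.append(("ignored", suggestion.action, "AI assistance is off"))
            return "Routed to a human work queue; AI assistance is switched off."
        if approved_by is None:
            self.audit_log.append(("pending", suggestion.action, suggestion.rationale))
            return f"Suggestion '{suggestion.action}' is awaiting human approval."
        self.audit_log.append(("executed", suggestion.action, f"approved by {approved_by}"))
        return f"'{suggestion.action}' was carried out after approval by {approved_by}."

workflow = AssistedWorkflow(ai_enabled=True)
suggestion = AiSuggestion("cancel_card", "possible duplicate card request", 0.72)
print(workflow.handle(suggestion))                                  # awaits approval
print(workflow.handle(suggestion, approved_by="account manager"))   # human decides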

Conclusion

It’s probably foolhardy to think that I could possibly come to any conclusions about the present state of AI acceleration, but I’ll still give it a go. First, there is great untapped potential in the shallower depths of AI, which is an ocean unto itself. Therefore, it would behoove companies, startups, and entrepreneurs to take exhaustive approaches to finding new ways for AI to augment their existing solutions, which would drive their bottom line and create customer value. For now, we must not become overly enamored with the capabilities that could be lurking deeper in those vast depths. It’s always best to avoid creating solutions that chase fictional problems—or worse, do more harm than good.

Second, AI still has much growing up to do, especially for the bigger, more intelligent applications in the AGI and ASI ranges. Best to let them mature and, hopefully, benefit from some degree of government regulation. In doing so, we can free ourselves from the burden of distractions and focus on how we can practically and pragmatically leverage this technology. Again, AI should be a tool that we wield, not a tool that controls us.

Lastly, let’s consider some pragmatic, ethical ideas for leveraging AI as UX designers. We must discover ways of balancing AI technologies with our own human authority and empowerment—perhaps allowing an AI to take on more monotonous or repetitive tasks while we use our unique skills to solve equally unique problems. UX designers still have an important seat at the table. Our original ideas and our ability to challenge assumptions, understand the whys, speak the truth to decision-makers, and demonstrate empathy for users are still superpowers that AI cannot easily replicate. AI could be a valuable new tool in our UX design tool belt if we can learn how to wield it effectively. We are living in an exciting time!  

Director of User Experience at Rockwell Automation

Cleveland, Ohio, USA

Jon has a degree in Visual Communication Design from the University of Dayton, as well as experience in Web development, interaction design, user interface design, user research, and copywriting. He spent eight years at Progressive Insurance, where his design and development skills helped shape the #1 insurance Web site in the country, progressive.com. Jon’s passion for user experience fueled his desire to make it his full-time profession. Jon joined Rockwell Automation in 2013, where he designs software products for some of the most challenging environments in the world. Jon became User Experience Team Lead at Rockwell in 2020, balancing design work with managing a cross-functional team of UX professionals, then became a full-time User Experience Manager in 2021. In 2022, Jon was promoted to Director of User Experience at Rockwell.
