Conversations around artificial intelligence (AI) inevitably lead either to dreams of a world in which computers predict every need one might have or fears of the impending doom of humanity through a SkyNet / Ultron / War Games scenario.
As entertaining as these discussions might be, our focus should instead be on what AI needs to do to provide better functionality and achieve greater acceptance by society—that is, by users. Sometimes, technology—including some advances in AI—seems to be advancing simply because developers want to see whether they can build it. But, as UX professionals, we want to see meaningful advancements in AI that deliver useful functionality to users. This is key to the success of AI.
While AI does deliver ever-greater opportunities for efficiency, people who hear about this immediately fear for their jobs. However, successful manufacturing companies know that the key is striking the right balance between robots and people. The first step is understanding what user needs robots address. Here are some examples:
safety—There were 991 construction deaths and thousands of injuries in the US in 2016. How many of these deaths and injuries could we avoid through automation?
accuracy—More than 600,000 bridges in the US require inspection. How much of this work could drones do?
convenience—When 100 million people are more than 80 years old, what jobs can robots do to improve their quality of life?
UX Considerations in Improving the Acceptance of AI
There are three essential considerations in making AI successful: context, interaction, and trust.
The foundation of AI is pattern recognition. Once AI learns a pattern, it can use that pattern to make predictions about the outcomes of similar patterns. However, while we usually give AI the raw data it needs to recognize patterns, we don’t always provide the context necessary to make good decisions. By not providing the proper context, we do a disservice to AI.
An example is IBM Watson for Oncology’s suggested cancer treatments. Once data from the Sloan Kettering Cancer Center were fed into Watson, its AI suggested treatments for various types of cancer, in different parts of the world. Watson was able to suggest the correct treatment for lung cancer over 96% of the time in India. Everyone patted themselves on the back because the AI could predict what oncologists would suggest! However, in South Korea, only 49% of Watson’s suggestions for treating gastric cancer were correct. Why? The AI failed because South Korea’s treatments for gastric cancer are not in line with Sloan Kettering’s recommended treatments.
Another approach would be to look at the differential between Watson’s and oncologists’ suggested treatments rather than simply asking AI to predict the recommended treatment. Then AI would be one of many voices. Perhaps oncologists could work together to assess the AI’s suggestions and understand how to improve their treatment. Because Watson lacks the oncologists’ context, instead of focusing on how well its AI can mimic their suggested treatments, we should discover why differences in treatment exist and endeavor to improve health outcomes.
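The differential approach described above could be sketched as a simple concordance analysis. The data and field names below are hypothetical, assuming each case records both the AI’s suggestion and the care team’s actual recommendation:

```python
from collections import Counter

def concordance_report(cases):
    """Compare AI-suggested treatments against oncologists' choices.

    Each case is a dict with hypothetical keys: 'ai' (the AI's
    suggested treatment) and 'oncologist' (the treatment the care
    team actually recommended).
    """
    total = len(cases)
    matches = sum(1 for c in cases if c["ai"] == c["oncologist"])
    # Tally the specific disagreements so clinicians can review
    # *why* the recommendations diverge, not just how often.
    disagreements = Counter(
        (c["ai"], c["oncologist"])
        for c in cases
        if c["ai"] != c["oncologist"]
    )
    return matches / total, disagreements

# Toy illustration with made-up cases:
cases = [
    {"ai": "chemo A", "oncologist": "chemo A"},
    {"ai": "chemo A", "oncologist": "surgery"},
    {"ai": "chemo B", "oncologist": "surgery"},
    {"ai": "chemo A", "oncologist": "chemo A"},
]
rate, diffs = concordance_report(cases)  # rate is 0.5 here
```

Rather than reporting only a single accuracy number, the tally of specific disagreements gives oncologists a starting point for discussing where and why their practice differs from the training data.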
When we build AI technology, there are three stages during which we must consider context:
Before it’s built—Beyond uncovering a user need that technology would address, we must make sure that we understand the AI’s context of use so that it factors into the development process and we collect the right data.
During implementation—When we input the data, we must include its context. For example, if you were collecting data on user behavior in a car versus a bedroom or kitchen, context would clearly be important.
Using the collected data—Currently, AI is a black box. We throw data in and see what comes out. But the user must use the AI to accomplish something. Only when we take a user-centered design approach to understanding the application of our insights will we really see how powerful AI can be.
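As a toy illustration of the during-implementation stage, consider logging the same utterance in different rooms. The data here is invented, but it shows how including a context field lets even a trivial frequency-based model respond differently to identical raw input:

```python
from collections import defaultdict

# Hypothetical training log: the same raw utterance means different
# things depending on where it was recorded (the context).
observations = [
    ("turn it up", "car", "raise stereo volume"),
    ("turn it up", "kitchen", "raise stove heat"),
    ("turn it up", "car", "raise stereo volume"),
    ("turn it up", "kitchen", "raise stove heat"),
]

def train(rows, use_context):
    """Learn the most common action for each input pattern."""
    counts = defaultdict(lambda: defaultdict(int))
    for utterance, context, action in rows:
        key = (utterance, context) if use_context else utterance
        counts[key][action] += 1
    return {k: max(v, key=v.get) for k, v in counts.items()}

# Without context, one interpretation wins for every room.
no_ctx = train(observations, use_context=False)
# With context, the prediction matches the room the user is in.
with_ctx = train(observations, use_context=True)
```

Dropping the context column makes the two situations indistinguishable to the model; including it is what allows the same words to map to the right action in each room.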
The potential of AI is astounding. It will likely be one of the defining technologies of the 21st century. However, AI is only as good as the data and information we feed to it. Only by providing AI with the proper context can we help this technology to advance and ensure that AI delivers on its promise of simplifying life for users.
Our understanding of user interactions with AI systems is still developing. How should someone use AI? Is use even the right term when it comes to AI? Once AI becomes fully realized, we might see complex AI systems that intertwine the systems of a home, car, office, appliances, and personal technology gadgets, with all of them talking to each other and exchanging information without the user having to do anything.
Think ahead to the future when you’ll have your own personal AI. Our interactions with AI systems might consist of nothing more than offhand comments. We would essentially be interacting with the AI without even knowing that we’re doing so.
For example, if I were making breakfast and muttered to myself, “Almost out of milk,” a strong AI would remind me at an appropriate time and place to buy milk—or simply take the initiative to order a gallon of milk from the automated grocery service in my area and time its delivery for when I’ll be home from work. Or maybe I wouldn’t even need to state that I’m out of milk for the AI to act. Finishing a gallon of milk might be a passive interaction that prompts the AI to take the next logical step and order milk automatically.
In the future, the user would not need to proactively use the AI. Instead, the system would simply pass and parse data behind the scenes.
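A passive interaction like the milk example might be sketched as an event handler reacting to household data. Every name here—the inventory event, the grocery-service call, the threshold—is a hypothetical stand-in, not a real API:

```python
from dataclasses import dataclass

@dataclass
class InventoryEvent:
    item: str
    remaining: float  # e.g., liters left, reported by a smart fridge

class PassiveAssistant:
    """Reacts to household events without explicit user commands."""

    def __init__(self, order_fn, threshold=0.1):
        self.order_fn = order_fn   # hypothetical grocery-service API
        self.threshold = threshold
        self.pending = set()       # items already on order

    def on_event(self, event):
        # The user never 'uses' the AI: finishing the milk *is* the input.
        if event.remaining <= self.threshold and event.item not in self.pending:
            self.pending.add(event.item)
            self.order_fn(event.item)

# Usage sketch: record orders in a list instead of a real service.
orders = []
assistant = PassiveAssistant(order_fn=orders.append)
assistant.on_event(InventoryEvent("milk", remaining=0.05))  # triggers an order
assistant.on_event(InventoryEvent("milk", remaining=0.05))  # duplicate, ignored
```

The point of the sketch is the inversion it embodies: the user issues no command at all, and the system treats ambient state changes as its inputs.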
Trust in AI has been a recent topic of discussion in technology. For people to want to use AI on a regular basis, they need to trust it.
Everyone has probably tried Siri. The early, buggy interactions people had with Siri negatively impacted trust. The question is: how many people deliberately use Siri today—or, of the last few times Siri appeared, how many of those appearances were accidental? Our lack of trust in Siri has eroded our perception of the value of voice assistants. This has scared so many users away from voice assistants that most have not even tried Microsoft’s Cortana. Have you tried Cortana, or did you just think, “Eh, it’s like Siri”?
It took a device with an entirely new form factor to get people to try voice assistants again. Alexa sits on a table and has finally encouraged people to give voice assistants—that is, AI-enabled voice devices—a second chance. Luckily, by the time Alexa appeared on the market, the technology had evolved, so it has gained wider acceptance and adoption.
Siri and Cortana have also advanced and evolved. But how many people have re-engaged and tried them again? And why so few? Because of trust. Trust in AI is created when we ask a question and receive the right answer, when we give the AI a task and it performs that task correctly, when we purchase a product and receive the correct product—and, possibly most importantly, when an AI keeps our personal information safe.
Once an AI system has achieved these three components of success—context, interaction, and trust—it will be much more likely to hit the mainstream and become the runaway success that futurists predict. Even if we never fully realize these components or truly deliver on the promise of AI to users, the developers of AI systems should always keep their users in mind. After all, we’re ultimately creating these AI systems for their benefit.
The Singularity and the Future It Will Bring
Futurist Ray Kurzweil believes that we are rapidly approaching the Singularity—the point at which the computing power of technology exceeds the computing power of people. A variety of emerging technologies will fuel this Singularity, including AI, robotics, and nanotechnology.
Once this Singularity arrives, Kurzweil and other similar-minded theorists believe that life as we know it today will no longer prevail. He says that describing this post-Singularity society today would be as difficult as describing to a caveman how different life would be with bronze tools and agriculture. This future is likely to be radically different. What should we do to help shape our future rather than simply sit back and watch it happen?
If you buy into the whole notion of Kurzweil’s Singularity, how should you design for a future that predictions say will be wildly different from anything we’ve ever known or could now fathom? How would a UX designer implement traditional usability principles such as effectiveness, efficiency, and satisfaction—or will these principles become relics that we leave by the wayside as radically different interaction models emerge?
Now, let’s think about how AI and robotics have the potential to completely flip the paradigm of usability and user experience. The user should not have to learn how to use an AI system. AI is supposed to do the learning—about our habits and routines and what actions to take in response to whatever happens. There will be a role reversal in which—using UX research terminology—the user becomes the stimulus and the stimulus becomes the user. The human being would become the stimulus to which the technology learns to react and respond.
A robot is essentially an AI that has a corporeal form. The addition of a physical form creates further challenges—regardless of whether that form is vaguely humanoid. How would users properly interact with a fully autonomous mechanical being that can act on its own? The flip-side to this question is just as important: how does a robot interact with the user?
Before we dive into answering these questions, let’s get on the same page about what a robot is. A robot must be able to perform tasks automatically based on stimuli from either the surrounding environment or another agent—for example, a person, a pet, or another robot. When people think of robots, it’s often of something similar to Honda’s ASIMO or their more recent line of 3E robots. Our definition also includes less conventional robots such as autonomous vehicles and machines that can perform surgery.
A research team at the University of Salzburg has done extensive research on human-robot interactions, testing a human-sized robot in public in various situations. One finding that is particularly interesting is that people prefer robots that approach from the left or right, but not head on.
In San Francisco, a public-facing robot that works at a café knows to double-check how much coffee is left in the coffee machines and gives each cup of coffee a little swirl before handing it to a customer.
UX Design Principles for Robotics and AI
While a robot in Austria that approaches from the left and a robot in San Francisco that swirls a cup of coffee might not seem related, both point to UX design principles that we should keep in mind as public-facing robots become more ubiquitous:
A robot should be aware that it is a robot and endeavor to gain the trust of an untrusting public. People’s preferences for robots not to approach them head on and always to remain visible to the user are evidence of a lack of trust.
Design a robot knowing that people like to anthropomorphize objects. For example, people prefer the coffee-serving robot to do the same things a barista might do, even if they’re things the robot doesn’t need to do.
As with all design principles, these are likely to evolve. Once robots become more ubiquitous in our lives and people are accustomed to seeing them everywhere, different preferences for the ways in which humans and robots interact may become the norm.
This may already be the case in Japan, where robots have been working in public-facing roles for several years. While anthropomorphic robots are still the dominant type of bot in Japan, there is now a hotel in Tokyo that is staffed entirely by dinosaur robots. The future is now, and it is a weird and wild place.
Gavin has 25 years of experience, working in both corporate and academic environments. He founded the UX consultancy User Centric, growing it to become the largest private UX consultancy in the US. After selling the company, he continued to lead its North American UX team, which became one of the most profitable business units of its parent organization, until 2018. Gavin then founded Bold Insight, where he is Managing Director. He is a frequent presenter at national and international conferences and the inventor on several patents. He is an adjunct professor at DePaul and Northwestern Universities and a guest lecturer at University of California, San Diego’s Design Lab. Gavin has an MA in Experimental Psychology from Loyola University.
Research and storytelling intersect in Ryan’s background, allowing him to discover qualitative and quantitative trends and craft an effective narrative around those trends. During eight years working in User Experience, he has executed projects spanning multiple industries and devices across all development stages. Ryan has an MA in Digital Storytelling and an MS in Information and Communication Sciences from Ball State University.