Designing Conversational Chatbot User Interfaces

February 20, 2017

A chatbot is an application that can simulate having a conversation with a human being. There are two types of chatbots:

  1. Rule-based chatbots—These chatbots are only as smart as their developers have programmed them to be. Their capabilities are limited, so using them can be frustrating.
  2. Chatbots that use artificial intelligence (AI)—These chatbots actually understand human language—not just commands—and they continually become smarter as they learn from the people who interact with them.

At the Fast Co. Festival in November 2016, chatbots reigned supreme. Chatbots and virtual assistants are becoming standard features of mobile user interfaces. Google released its smart, instant-messaging app Allo in September 2016. This mobile app both supports dictation and functions as a virtual assistant. Apple introduced the iOS feature Siri back in October 2011 and has since improved it. Siri dictation has been an integral part of iOS since May 2012. According to recent rumors, Samsung’s Bixby—their AI virtual assistant—may debut with the imminent release of the Galaxy S8.


Now that chatbots are becoming integral parts of our lives, it’s important to consider how we can design better chatbot user experiences.

Chatbots Are Not Human

Although chatbots can mimic human conversation and, thus, interact with human beings, they are not human. A chatbot pretending to be human can create confusion and engender unpredictable emotional responses from users, causing the chatbot to lose users’ trust. Therefore, chatbots should never attempt to convince users that they are human.

For a chatbot project called CARL, shown in Figure 1, usability-test participants mentioned that they enjoyed seeing an animated bot. They appreciated the distinctive appearance of the chatbot, which clearly was not human. They also said it would be strange if CARL looked human, but was actually a chatbot.

Figure 1—CARL, from The Noun Project
CARL, from The Noun Project

A solutions engineer at Amazon confirmed this finding, saying, “No one likes a bot that looks human.” Therefore, UX designers should use visuals or dialogue that clearly communicate that a chatbot is a robot.

Why don’t people like humanized chatbots? Test participants for the CARL project mentioned that they felt it was “creepy” when the chatbot CARL was represented by a real person’s image. They did not like “feeling tricked.” One said, “The human Carl isn’t my friend. He’s creepy. But I like the animated version of Carl. I think it’s funny. Animated Carl reminds me of my friend who’s a movie snob. I like that.”

Chatbots Are Manifestations of Personas

Users associate chatbots with people they know. They project an identity onto the chatbot that they associate with the artificial intelligence. These associations can occur because of either the chatbot’s user interface or its response patterns.

An IDEO study analyzed women’s responses to a Coach chatbot, shown in Figure 2. In this study, the female users projected a pushy, male, Coach identity onto the chatbot—even though the Coach was actually a woman who controlled the behavior of the simulated chatbot. Even without intending to do so, people humanize chatbots, connecting them with what is familiar. The results of this IDEO study were similar to those for CARL.

Figure 2—Chatbot simulation
Chatbot simulation

The animated CARL chatbot was a relatable character with which users could identify. Many people mentioned that CARL reminded them of their friends. Weeks after the study concluded, participants continued to send email messages, asking, “How’s Carl doing?”

Conversation Is Key

No one enjoys talking to a poor conversationalist. Similarly, no one wants to respond to a chatbot that can’t carry on a proper conversation. Designing apt conversational responses requires intention. A designer must figure out the right linguistic patterns, tone, and interactions. One approach to conversational design is language mapping, which involves creating a hierarchy that includes every sentence that constitutes a possible response. While language mapping can be an effective method of conversation design, it is also time-consuming.

What exactly is a language map? Language maps break down conversations into sentences and words. There are four types of language maps, each of which relies on a different approach to language mapping:

  1. Morphological processing—This approach involves breaking language strings into discrete words, subwords, and punctuation, as in the example shown in Figure 3.
  2. Syntax analysis—The most common type of language map is a syntax analysis, which breaks up strings of words to determine their syntactical relationships, as shown in Figure 4.
  3. Semantic analysis—This form of analysis represents words with symbols that help a chatbot to understand the meaning of a sentence, as shown in Figure 5.
  4. Pragmatic analysis—This analysis breaks down strings of words to understand the intention of the speaker, as shown in Figure 6.
Figure 3—Morphological processing
Morphological processing
Figure 4—Syntax analysis
Syntax analysis
Figure 5—Semantic analysis
Semantic analysis
Figure 6—Pragmatic analysis
Pragmatic analysis
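The first and last of these steps can be sketched in a few lines of code. The following is a minimal, illustrative example—not how any production chatbot actually works—where a regex tokenizer stands in for morphological processing and a hand-written keyword table stands in for pragmatic analysis; the intent names and keyword lists are hypothetical.

```python
import re

def tokenize(utterance: str) -> list[str]:
    """Morphological processing: break an utterance into
    discrete word and punctuation tokens."""
    return re.findall(r"\w+|[^\w\s]", utterance.lower())

def detect_intent(tokens: list[str]) -> str:
    """A toy pragmatic-analysis step: map keywords to a speaker
    intention. Real chatbots use trained language models rather
    than hand-written keyword lists."""
    intents = {
        "weather_query": {"weather", "rain", "forecast"},
        "greeting": {"hello", "hi", "hey"},
        "goodbye": {"bye", "goodbye", "quit"},
    }
    for intent, keywords in intents.items():
        if keywords & set(tokens):
            return intent
    return "unknown"

tokens = tokenize("What's the weather today?")
print(tokens)                 # ['what', "'", 's', 'the', 'weather', 'today', '?']
print(detect_intent(tokens))  # weather_query
```

Even this toy version shows why language mapping is time-consuming: every intent the chatbot should understand needs its own branch of the hierarchy, populated with all the phrasings users might actually type.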

Knowing When to End a Conversation

Knowing when to end a conversation is as important as knowing how to begin one. While a chatbot may be part of a messaging system, Web page, or mobile app, the user should always have the option of ending a conversation. Chatbots should not continue to pester users after they’ve indicated they want to end a conversation. Overly persistent chatbots provide a bad user experience.

Figure 7 shows an example of a chatbot, Sephora, that communicates too much. Sephora offers unsolicited tips and carries on conversations without any user response. The chatbot offers a quiz—to which the user finally responds. Then Sephora continues to flood the user with a stream of information, an unsolicited video, and repeated recommended tips. Note that the user responded to this overly persistent bot only once.

Figure 7—Sephora

There is no way for the user to leave the chat or mute the Sephora chatbot. Nor is there any way to limit or customize the number of chats the user receives or stop the loop of repetitive tips. It would be easy to solve all of these problems with a better understanding of the user’s wants and needs. Users should be able to end a conversation at any time and customize the chatbot settings to best suit their needs.

Personalization Through Anticipatory Design

When people hear canned messages from telemarketers, they hang up. When people receive canned responses from chatbots, they tend to ignore them and avoid using the chatbot. This results in a poor user experience.

For the best user experience, chatbots should address the user by name—“Hey, Amelia!” They should also provide information the user wants. For example, if the user were on a weather site, the current weather for the user’s location would be an appropriate response. However, if the user were on a design site, information on the latest design news or trends would be more likely to capture the user’s attention.
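As a minimal sketch of this kind of personalization, the function below composes an opener from the user’s name and the context of the site they are visiting. The context-to-content map and the function name are illustrative assumptions; a real chatbot would pull live data rather than canned strings.

```python
def greet(user_name: str, site_context: str) -> str:
    """Compose a personalized opener: address the user by name
    and lead with content relevant to the site they are on.
    The context map below is illustrative only."""
    openers = {
        "weather": "here's the current forecast for your location",
        "design": "here are today's top design trends",
    }
    topic = openers.get(site_context, "here's what's new today")
    return f"Hey, {user_name}! {topic.capitalize()}."

print(greet("Amelia", "weather"))
# Hey, Amelia! Here's the current forecast for your location.
```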

MIT’s chatbot Eliza first demonstrated the user-attention problem back in the mid-1960s. Eliza played a psychotherapist, asking standard questions and rephrasing answers as questions. If the user decided to be unpredictable and go off script, Eliza’s ability to converse would break down. The user experience was destroyed by Eliza’s inability to hold the user’s attention. The best solution for personalizing conversations is anticipatory design. What is anticipatory design?

Anticipatory design is an algorithm-powered, user-centric design discipline, and we’re already seeing products and services successfully leverage machine learning to infer users’ preferences.

“In the next stage of anticipatory design, products and services will aim to preempt every want and need. In the morning, as you’re getting ready for work, a voice-activated personal assistant will assess your commute, alert you to train delays, and confirm that road traffic is light before calling you an Uber to get you to the office before your early-morning meeting—automatically, without consulting you, knowing it’s doing the right thing. As you approach the office, your go-to café tracks your location, so it can have your coffee waiting as you walk in.”—Sophie Kleber of Huge Inc., from “How to Get Anticipatory Design Right”

Anticipatory design predicts future user behavior based on the user’s past behavior and routines. By simplifying the choices, the chatbot designer aims to make the user’s life easier. However, this approach does not account for deviations from the norm, which require the ability to adjust responses or lead to failures. To prevent such failures, a safety net is necessary—for both the user and the platform.

When anticipatory design gets the choices wrong, the chatbot risks multiple levels of failure, as shown in Figure 8. The initial feelings the user may face are confusion, lack of trust, and anger. For example, the user may have responses such as the following:

  • confusion—“Why am I getting this? How is this relevant to me?”
  • lack of trust—“The bot gave me a great recommendation for my vacation, but gave me a horrible recommendation this time.”
  • anger—“Why is the bot recommending maternity wear for me? I’m not pregnant! This is embarrassing!”
Figure 8—Balancing probability against cost
Balancing probability against cost

Image from Huge, Inc.

Risks that designers must consider include angering the user and other costs of being wrong. Depending on the audience and industry, users may be more willing or able to forgive mistakes in anticipatory design. For chatbots, the risk is relatively low across industries. The chatbot is part of an online platform or digital product. Embarrassing the user is the most likely problem. But depending on the brand or platform, the chatbot may be able to mitigate the harm by providing a token of remorse.

A greater problem with chatbot anticipatory design is apathy. The user may not trust the chatbot because its recommendations or conversation do not move forward. If the chatbot does not induce users to take action, they may ignore the chatbot completely. This creates a huge user-experience problem. Therefore, a chatbot must be engaging, call the user to action, and accurately predict the user’s behavior through conversation. Success in doing this engenders trust. Figure 9 illustrates a human’s trust in a bot.

Figure 9—Trusting a bot
Trusting a bot

Image licensed from Shutterstock

The Future of Chatbots’ Voice Interfaces

Well-designed chatbots provide a pleasant user experience. Artificial intelligence enables chatbots to continually improve and create better conversations. This, in turn, makes people more open to letting a chatbot automate away everyday pain points. One example is planning your week using a chatbot, but this is just the beginning.

In the future, chatbots will have voice integration. As conversational design for chatbots improves, Alexa and other voice user interfaces will also improve. Combining voice and chatbot features lets designers create higher levels of personalization. A user could choose only to hear a conversation or also to view an exchange of information, then choose to respond to the chatbot using speech or text messaging. This flexibility changes the chat user experience completely. The chatbot offers a combination of visual and auditory sensory experiences. Figure 10 shows Blair, a chatbot with integrated voice interactions.

Figure 10—Blair, a prototype of a chatbot with integrated voice interactions
Blair, a prototype of a chatbot with integrated voice interactions

Anticipatory design will enable chatbots to serve human beings on a personalized level, even though they lack human understanding. Through anticipatory design, an artificial-intelligence chatbot can perceive human needs on more levels and assist people in more diverse ways.

Amelia Wong

UX Designer at Goldman Sachs

New York, New York, USA

Originally trained as an intellectual-property attorney, Amelia now brings technology to reality through design. She bridges form and function by creating beautiful things that work. She also has experience in strategy and product innovation. Recently, she has been designing voice interactive experiences for Alexa and chatbots. Amelia provides detailed, creative solutions to challenging problems.
