In the ideal interaction between humans and computers, technology handles the routine, mundane tasks at which it excels, allowing people to focus on the higher-level, more important aspects of achieving their goals. Yet, until recently, technology’s role in providing user assistance was limited to traditional online Help and on-screen instructions. However, as technology becomes ever more powerful, it increasingly has the ability to offer more proactive user assistance and even perform certain tasks automatically, easing the cognitive load on the user.
At its best, proactive user assistance can be very helpful. At its worst, it can be distracting, even annoying to users who receive either unwanted assistance or incorrect information. Remember Clippy, shown in Figure 1, the animated-paperclip assistant in Microsoft Office that irritated legions of computer users in the late 1990s? There’s nothing more annoying than a system that automatically takes unwanted actions or constantly offers undesired suggestions.
Source: Clippy, created by J. Albert Bowden II and licensed under CC BY 2.0
The goal of anticipatory design is to provide a better user experience by creating technology that anticipates the user’s needs, then automatically makes decisions and takes actions on the user’s behalf. It’s great when a technology is smart enough to anticipate users’ needs correctly and make the right decisions. Unfortunately, most technology is still too dumb to do that. Unlike a human assistant, who can sense when assistance is unwelcome and learn continuously, most technology blindly persists in making the same mistakes. If technology isn’t smart enough to provide the right assistance at the right times, it’s better to provide no assistance at all.
In this column, I’ll consider five degrees of user assistance and discuss how they can be either helpful or a hindrance.
Five Degrees of User Assistance
Technology can provide at least the following five levels of assistance to users:
Passively providing online Help content. Here’s help if you need it.
Asking if the user needs help. Can I help you?
Proactively offering suggestions that users can accept or ignore. Is this what you want, or do you want to correct this?
Alerting the user that it’s going to take an action automatically, unless the user says not to. I’m going to do this, unless you tell me not to.
Automatically taking an action for the user, without asking for permission. I’ve got this for you. Don’t worry about it.
Passively Providing Online Help Content
Here’s help if you need it.
Providing a traditional Help section or instructions is the least intrusive method of providing user assistance. Information is available for users who need it, but others can ignore it. This is the real-world equivalent of a Help desk in an airport or a salesperson in a store. They’re there to help you, if you ask for assistance.
A Help section like that for Spotify, shown in Figure 2, on-screen instructions, and contextual Help are all traditional examples of passive Help systems. Calling or chatting online with a customer-support representative are other examples of passive assistance. Even voice-activated digital assistants such as Siri, Cortana, and Amazon’s Alexa are passive assistants who wait for users to ask for help.
In providing passive assistance:
Make passive assistance available and visible, but ensure it’s unobtrusive enough that those who don’t need it can ignore it.
Prevent the accidental triggering of assistance—such as ToolTips that pop up as the user merely moves the mouse around the screen or a voice-activated digital assistant that mishears its wake phrase. For example, before I turned off the “Hey Siri” function on my iPhone, Siri would sometimes interrupt audio books I was listening to in my car, mistaking the audio-book narration for a question.
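A common remedy for accidentally triggered ToolTips is a short hover delay. Here is a minimal Python sketch of that idea; the class, method names, and threshold are hypothetical, not any toolkit’s actual API.

```python
HOVER_DELAY_SECONDS = 0.8  # hypothetical threshold; tune per product

class TooltipController:
    """Show a ToolTip only after the pointer rests on a control,
    so casual mouse movement doesn't trigger unwanted Help."""

    def __init__(self, delay=HOVER_DELAY_SECONDS):
        self.delay = delay
        self._hover_started = None

    def pointer_entered(self, now):
        # Record when the pointer arrived over the control.
        self._hover_started = now

    def pointer_left(self):
        # Leaving the control cancels any pending ToolTip.
        self._hover_started = None

    def should_show_tooltip(self, now):
        # Ignore brief pass-overs; only a deliberate pause counts.
        return (self._hover_started is not None
                and now - self._hover_started >= self.delay)
```

The same pattern, requiring a deliberate pause before responding, applies to wake-word detection: demand a confident match before activating, rather than reacting to every sound that resembles the phrase.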
Asking If the User Needs Help
Can I help you?
Rather than passively waiting for users to seek assistance on their own, a user-assistance system can surmise that the user may need help, asking, “Can I help you?” You should use this method of triggering user assistance with great caution. While it can be very helpful when the user actually does need assistance, it can be extremely annoying when the user does not want any assistance.
The real-world equivalent of this form of digital assistance is an employee coming up to you in a clothing store, asking whether you need help. While it’s great when you really do need assistance, it can seem pushy and off-putting when you don’t want help.
If you used Microsoft Windows in the mid-1990s, you probably remember Clippy, the Office digital assistant I mentioned earlier, or one of his other digital-assistant incarnations. Clippy, shown in Figure 1, was an animated paperclip that would pop up on the screen, mostly at inopportune times, interrupting your train of thought with unhelpful offers of assistance. Until you dismissed Clippy, he would hover at the side of the screen, making distracting movements and even knocking on the screen to get your attention.
A more recent example of this sort of user assistance is a Web site that senses when a user has been on a complicated screen for a long time, then displays a Help dialog box or an invitation to chat. If you use this approach sparingly and make sure it’s not too distracting, it can be an effective user-assistance technique. However, if it interrupts the user too often, it’s the digital equivalent of the pushy salesperson who keeps bothering you with offers of help when you’d really rather just browse on your own.
In offering to help users:
Offer assistance only when the system can very reliably detect that the user actually does need help.
If the system isn’t smart enough to correctly determine when the user needs help, it’s better to provide passive assistance by making Help options visible on the screen and allowing users to request assistance.
Provide warning messages such as Are you sure? when the user takes an action that could have serious consequences or may have been accidental—such as leaving a document without saving or deleting something important when it can’t be undone.
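One way to apply the first two guidelines is to gate the offer of help behind signals of genuine need, such as prolonged inactivity on a complex screen, and to cap how often the prompt can appear. The following sketch uses hypothetical names and thresholds, not any real product’s logic.

```python
IDLE_THRESHOLD_SECONDS = 90   # hypothetical: long enough to suggest the user is stuck
MAX_OFFERS_PER_SESSION = 1    # offer sparingly so the prompt never nags

class HelpOfferPolicy:
    """Decide when to display a 'Can I help you?' prompt."""

    def __init__(self):
        self.offers_made = 0
        self.last_activity = 0.0

    def record_activity(self, now):
        # Any click or keystroke resets the idle clock.
        self.last_activity = now

    def should_offer_help(self, now, page_is_complex):
        idle = now - self.last_activity
        return (page_is_complex
                and idle >= IDLE_THRESHOLD_SECONDS
                and self.offers_made < MAX_OFFERS_PER_SESSION)

    def offer_made(self):
        self.offers_made += 1
```

If the signals available aren’t reliable enough to make `should_offer_help` trustworthy, the guideline above applies: fall back to visible, passive Help instead.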
Proactively Offering Suggestions
Is this what you want, or do you want to correct this?
When a user-assistance system is smart enough to predict accurately what the user might want, it can offer suggestions that the user can either accept or ignore. This method of assistance can be successful when the suggestions are usually correct and it’s easy to ignore them when they’re not what the user wants.
A real-world equivalent of this type of assistance occurs when a grocery store puts chocolate bars and graham crackers near marshmallows. If I’m buying marshmallows, this reminds me that I might want to make s’mores and makes it easy to get all the ingredients in one place. However, if I want only marshmallows, I can still choose to buy them and ignore the other items.
Chrome’s Autofill feature, shown in Figure 3, provides suggested options from data it’s previously saved, allowing the user to either select an existing option or type in a different value. As the user types a value into the field, the matching autofill options appear on a list beneath it. The user can either click an option to fill in the field or just keep typing. This feature is visible and easy to use, but it’s also easy to ignore.
Another example of the use of suggestions is the autocomplete feature in search engines such as Google and Bing, shown in Figure 4. Displaying similar search queries can save the user time, allowing the user to select the desired search query rather than typing the entire query. This can also help people formulate their search queries. It’s just as easy to ignore the suggested search queries as it is to select one of them.
Microsoft Word’s spelling checker proactively warns the user about possible spelling errors by underlining the misspelled words, allowing the user to correct them right away, as shown in Figure 5, or ignore what are actually correct spellings. Right-clicking a misspelled word shows a list of potentially correct spellings that the user can easily select. While these suggestions are usually correct, it’s annoying when a person’s name, a company name, a newly coined term, or any other word that is missing from the spelling checker’s dictionary persistently appears as a spelling error. Fortunately, Word provides an Add to Dictionary command that lets users prevent a word from being identified as a misspelling in the future.
When proactively providing suggestions:
Make suggestions only if the system can provide accurate, useful suggestions.
Ensure that the suggested options are visible, but not too intrusive, so the user can easily select or ignore them.
Provide the ability to reject incorrect suggestions—such as the Ignore All and Add to Dictionary commands in Word.
Provide the ability to turn off suggestions or notifications.
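The logic behind an easy-to-ignore suggestion list, like Chrome’s Autofill or a search engine’s autocomplete, amounts to filtering previously saved values by what the user has typed so far. A minimal sketch, with a hypothetical function name and limit:

```python
def suggest(saved_values, typed, limit=5):
    """Return saved values matching what the user has typed so far.

    The caller shows these beneath the field; the user can click one
    or simply keep typing, so the suggestions are easy to ignore.
    """
    typed = typed.strip().lower()
    if not typed:
        return []  # don't clutter an empty field with guesses
    matches = [v for v in saved_values if v.lower().startswith(typed)]
    return matches[:limit]
```

Returning nothing for an empty field and capping the list length are small design choices that keep the suggestions visible but unobtrusive, per the guidelines above.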
Alerting the User That It’s Going to Take an Action Automatically
I’m going to do this, unless you tell me not to.
At this level of user assistance, the system prepares to perform actions automatically, based on what it anticipates the user most likely wants to do, but it notifies the user before taking the action and provides a way to override the action. The user can see that the system is about to do something, then either complete or stop the action.
A real-life example of this type of help is your spouse texting you, “I’m going to order pizza tonight.” You can either just let that happen, or you can text back, “No, let’s get Chinese instead.”
An example of this user-assistance technique is the autocorrect feature on mobile phones. Before making a correction, autocorrect displays the correction it’s going to make and provides the ability to override the change. As shown in Figure 6, autocorrect indicates that it’s going to correct the user’s text entry, unless the user tells it not to by tapping the X. Although we’ve all seen and experienced humorous or annoying autocorrect errors, most of the time autocorrect does accurately correct users’ sloppy, mobile-phone typing. Without autocorrect, users would either have to accept many more errors or type extremely slowly and carefully. Autocorrect works well because its suggested changes are usually correct, and users don’t need to take any extra actions.
Although autocorrect is very helpful when it works well, this kind of assistance can be extremely annoying when the actions a system proposes are usually undesired. For example, while Todoist is a great to-do list app for the iPhone, it has an annoying natural-language feature for dates. As the user types in a task—such as, “Schedule a dentist appointment for next Tuesday”—the app highlights date-related words such as next Tuesday, indicating that it’s going to put that task on Tuesday’s to-do list, unless the user taps next Tuesday to remove the highlight. As shown in Figure 7, I want to do this task tomorrow, but Todoist wants to schedule it on Tuesday. Rather than being helpful, this feature is incredibly annoying. If I want to schedule a dentist appointment for Tuesday, I’d need to make the call days before then, not on Tuesday itself. Even worse, it does the same thing for holidays. For example, it tries to automatically schedule tasks such as Buy a Halloween costume and Plan Halloween party on Halloween, instead of weeks ahead of time, when the user would normally perform those tasks.
Consider the following guidelines when enabling a system to perform actions automatically:
Enable the system to perform actions automatically only when there’s a high likelihood that the action will be correct, helpful, and the one the user desires.
Clearly indicate that the system is about to perform an action automatically.
Provide an easy way for the user to cancel the action.
Indicate that the system performed the action and provide an easy way to undo it.
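A simple way to satisfy all four guidelines is to model each automatic action as a pending object that the user can cancel before it runs and undo afterward. This is a sketch with hypothetical names, not a real framework’s API.

```python
class PendingAction:
    """An action the system announces, then performs unless canceled."""

    def __init__(self, description, perform, undo):
        self.description = description  # shown to the user as the alert text
        self._perform = perform
        self._undo = undo
        self.state = "pending"

    def cancel(self):
        # The user objected before the action ran.
        if self.state == "pending":
            self.state = "canceled"

    def commit(self):
        # Called when the notice times out without the user objecting.
        if self.state == "pending":
            self._perform()
            self.state = "done"

    def undo(self):
        # Even after the action runs, the user gets a way back.
        if self.state == "done":
            self._undo()
            self.state = "undone"
```

The explicit `state` field is what lets the interface show the user where things stand: about to happen, canceled, done, or undone.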
Automatically Taking an Action
I’ve got this for you. Don’t worry about it.
With this level of user assistance, the system anticipates what the user wants to do and automatically performs the action, without first asking the user for permission. Recently, designers have called this anticipatory design. Anticipatory design uses data on user behavior and the user’s prior actions to automate certain decisions and actions. The goal is to shift the burden to the system by limiting the number of minor steps and decisions that the user must take. This frees up the user’s mental resources for higher-level tasks.
A real-world equivalent would be a personal assistant who performs errands such as taking your clothes to the dry cleaners, then picking them up, without waiting for a specific request to do that. Your assistant knows this is something you regularly need and doesn’t bother you about such mundane tasks.
In addition to flagging possibly misspelled words and providing suggestions for correcting them, Microsoft Word’s spelling checker automatically changes certain obvious misspellings, without asking the user for confirmation. It does this only when it’s virtually certain what the user actually intended to type. For example, it’s unlikely that someone would want to type teh, so Word automatically changes it to the word the, without asking or even notifying the user.
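The underlying rule, correct silently only when the correction is unambiguous, can be sketched as a lookup in a small table of safe corrections. The table and names here are hypothetical and not Word’s actual implementation.

```python
# Hypothetical table of corrections that are virtually always right.
SAFE_CORRECTIONS = {"teh": "the", "adn": "and", "taht": "that"}

def autocorrect_word(word):
    """Silently fix a word only when the correction is unambiguous.

    Returns (word, changed). Anything not in the safe table comes back
    unchanged, so the spelling checker can merely flag it for the user
    instead of guessing.
    """
    fixed = SAFE_CORRECTIONS.get(word.lower())
    return (fixed, True) if fixed else (word, False)
```

The design choice is the asymmetry: a short whitelist of corrections the system is sure about, and everything else falls back to the less intrusive suggestion level described earlier.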
Quicken recognizes downloaded transactions and categorizes them based on the user’s previous categorizations. For example, it knows that a Walgreens transaction probably belongs to the category Medical: Medicine, as shown in Figure 8. However, since this isn’t always perfectly accurate, Quicken highlights all newly downloaded transactions for the user to review. If the category is correct, the user can click the highlighted dot to indicate that. If it’s not correct, the user can change the category.
If the user misspells a Google search query, instead of stupidly searching for the misspelled query, the system searches for what it assumes the user really intended to type. However, just in case this assumption is wrong, it displays the search query for which it actually searched and provides a link to trigger a search for the user’s original query, as shown in Figure 9.
These days, most applications download and install updates automatically, in the background, rather than burdening users with annoying update notifications. Java and Adobe updaters once seemed to interrupt the user’s work constantly with requests to download and install updates. Companies have since learned that users don’t want their tasks interrupted in this way. It’s much better to perform updates automatically, in the background.
However, automatic actions can be terribly annoying when they’re not what the user wants. For example, whenever anyone plugs a mobile phone into the USB port on my Toyota Prius, it automatically begins playing the first song from that person’s iTunes library, in alphabetical order. That has never been the action anyone riding in my car wanted.
Consider the following best practices when allowing systems to perform actions automatically:
Allow the system to perform actions automatically only when you’re certain that’s what the user wants and when the consequences of the action are minor. For example, correcting obvious spelling errors like teh is almost always what the user wants and is of little consequence, while automatically performing a bank transfer would not be something that most people would be comfortable with, unless they had set up the automatic transfer themselves.
Indicate that the system automatically performed an action, unless it’s absolutely certain the action was what the user wanted. For example, showing the corrections for obvious spelling errors is enough feedback. People don’t need to be distracted by the details of updates and other behind-the-scenes activities they don’t really care about.
Provide the ability to undo undesired actions.
Provide an easy way for users to prevent the system from performing undesired, automatic actions in the future.
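Taken together, the last two guidelines suggest wrapping every automatic behavior with an undo history and an off switch. A minimal sketch with hypothetical names:

```python
class AutomaticFeature:
    """Wrap an automatic behavior so the user can undo it or turn it off."""

    def __init__(self, name):
        self.name = name
        self.enabled = True
        self._history = []

    def run(self, action, undo):
        # A disabled feature never acts; 'never do this again' sticks.
        if not self.enabled:
            return False
        action()
        self._history.append(undo)  # keep an undo step for every action
        return True

    def undo_last(self):
        # Reverse the most recent automatic action, if there was one.
        if self._history:
            self._history.pop()()
            return True
        return False

    def disable(self):
        self.enabled = False
```

Applied to the Prius example, `disable` is the setting that should exist but doesn’t: a way to tell the car never to auto-play music again.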
Learning to Provide Better Assistance
While human assistants can easily learn whether their assistance is helpful or annoying, then adjust their behavior appropriately, machines blunder on, making the same annoying mistakes over and over again. However, with artificial intelligence, increased use of user data, and careful design, technology can now learn from its mistakes and provide better user assistance. As technology gets smarter, it will increasingly be able to assist users by taking on the burden of their more mundane tasks and decisions, allowing users to focus their minds on valuable, more satisfying activities.
Principal User Experience Architect at Infragistics
Cranbury, New Jersey, USA
Jim has spent most of the 21st Century researching and designing intuitive and satisfying user experiences. As a UX consultant, he has worked on Web sites, mobile apps, intranets, Web applications, software, and business applications for financial, pharmaceutical, medical, entertainment, retail, technology, and government clients. He has a Master of Science degree in Human-Computer Interaction from DePaul University.