Envisioning the Future of User Experience
Published: April 9, 2007
Welcome to my UXmatters column—Envision the Future. In this column, I will share my perspectives on the role UX professionals will play in the future and answer a few forward-looking questions about the field of user experience such as:
What is the future of user experience as a practice, as a philosophy of design, and as a research topic?
What are the challenges and opportunities facing UX practitioners as we strive to better integrate our methods, processes, and philosophies into traditional ideation, design, and development processes?
These are big questions. User experience happens whether someone has designed the elements influencing a user’s experience thoughtfully or accidentally. Anywhere there’s a user interface, there’s an interaction waiting to happen and a user experience about to occur.
I intend to explore these areas of inquiry in several ways. I’ll write about particular topics relating to the questions I’ve posed and carry out interviews and discussions with a variety of people in our field—from the visionaries to the UX managers and individual contributors who daily create and validate the user experiences of products and services.
One other thing: I’m going to try my absolute best to avoid the platitudinous, self-important pontification that can afflict commentators in our field. Feel free to slap me down if you think I’m straying close to this line—or cross it. Okay, enough with the meta-stuff.
Here are just a few questions I think the practitioners in our field should be asking themselves and that I intend to explore through this column with your help.
The User Experience: Is It All in People’s Heads?
Isn’t the user experience something that really occurs only between the ears? Certain popular definitions imply this, but never explicitly state it. Here’s one straw-man definition from Wikipedia: “the overall experience and satisfaction a user has when using a product or system.”
If the user experience is all in the mind, can we really design it? Aren’t we just influencing the user experience rather than designing it? At first blush, this question seems overly academic, but maybe it’s worth exploring. Can we really work on something if it’s not entirely clear to us what’s under our control and what is not?
Why Do Users Blame Themselves?
Some of the most robust experimental findings in social psychology relate to attribution theory, the fundamental attribution error, and the actor-observer bias.
Attribution theory describes our strong—and some say hardwired—propensity to attribute other people’s actions to stable factors intrinsic to the individual. We fall victim to the fundamental attribution error when we overattribute other people’s behavior to their personalities. In this situation, we usually tell ourselves that other people are behaving a certain way because, well, that’s just the way they are. The flip side of the fundamental attribution error is that we tend to attribute our own actions to the particulars of the situation we happen to be in.
Many researchers refer to these dual tendencies toward overattributing others’ actions to personality and our own actions to situational factors as the actor-observer bias.
But here’s the thing: Over and over, I’ve seen users—regular people, not technical types—overattribute their difficulty in using computers or technology to something about themselves: “I must be dumb, because I can’t figure out how to do this,” “Wow, I really must be clueless; I can’t find the button I’m supposed to click,” and so on. I know many of you have observed the same thing. Quite often, this self-blame is associated with increased negative affect. In other words, time and time again, people experience difficulty, blame themselves, and pretty soon start feeling bad.
Why do people blame themselves when they run into usability problems? The actor-observer bias suggests that if people are struggling with a user interface, they’ll attribute their difficulty to external, not internal factors. This reversal of a robust finding in experimental psychology is striking. If we knew what accounted for this phenomenon, might we be better able to design user interfaces so users don’t blame themselves and feel bad about themselves and technology?
Designing for the Distracted
We know that there is one constant across almost all work environments: interruptions and distractions characterize most work. Yet when we usability test products, we often test them under laboratory conditions, with interruptions intentionally eliminated or kept to a minimum.
Does this make sense? If the prevailing context of use for many business applications is rife with interruptions and distractions, should we consider this factor when designing interactions? Would studying how users fare with work-like interruptions and distractions yield valuable data for product design teams? Should there be guidelines for the design of applications people are likely to use under conditions where there are frequent interruptions? Just the fact that I’m asking these questions means that I think there should be. But this raises a larger question: Just how does one go about designing an application for the distracted?
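One concrete technique for the distracted user is to make interruption cheap: persist in-progress work continuously, so that walking away costs the user nothing. Here is a minimal sketch in Python; the DraftStore class, the file path, and the email fields are all invented for illustration, not drawn from any particular product.

```python
import json
import tempfile
from pathlib import Path

class DraftStore:
    """Persist in-progress work so an interrupted user can resume later."""

    def __init__(self, path):
        self.path = Path(path)

    def save(self, state):
        # Write to a temp file, then rename atomically, so an interruption
        # mid-save never corrupts the previously saved draft.
        tmp = self.path.with_suffix(".tmp")
        tmp.write_text(json.dumps(state))
        tmp.replace(self.path)

    def restore(self):
        # Return the last saved state, or an empty draft if none exists.
        if self.path.exists():
            return json.loads(self.path.read_text())
        return {}

# The user types half an email, then gets pulled into a meeting.
store = DraftStore(Path(tempfile.gettempdir()) / "draft.json")
store.save({"to": "pat@example.com", "body": "Re: the quarterly numbers"})

# Hours later, the application restores exactly where they left off.
resumed = store.restore()
```

The design choice worth noticing is that saving happens on every change, not on explicit user command—an application built this way never asks a distracted user to remember to save.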
The Nested Folder Metaphor: No Longer Sufficient
I don’t know about you, but I often struggle to remember where I put my documents and other digital objects. And I’m constantly making on-the-fly taxonomic decisions—and, inevitably, consistency errors—when I create new nested folder structures. Here’s just one example: Prior to 2003, I was organizing my family pictures by month and year, using this scheme:
And so on. That’s all well and good, but evidently I forgot about that taxonomic decision, and in 2004, I started naming my folders like this:
Now, there probably won’t be huge practical implications for this inconsistency—or will there be? Who knows what’ll happen ten years from now when I try to revisit my, by then, ancient digital photos. Will I be able to find them?
Bottom line: I’m devoting too much cognitive effort to maintaining a mental map of my main computers’ folder structure. Sure, I bet I could find a great new scheme for organizing my data on lifehacker.com or 43 Folders, but the point is that I’d still be adjusting myself to work within the confines of this decades-old organization scheme.
There’s got to be a better way. Google is trying to get the world to adopt its desktop tools for content retrieval. But keyword search is so after the fact. Is there a better, more natural way of organizing information on a multi-purpose computing device such as a PC? Or maybe we need multiple schemes—like folksonomy construction through tagging in combination with various other ways of organizing content.
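As a sketch of what one such additional scheme might look like, here is a minimal tag index in Python. The filenames and labels are invented; the point is that a file can carry many labels at once, so retrieval no longer depends on remembering a single folder path.

```python
from collections import defaultdict

# A minimal tag index: each file can live under many labels at once,
# so one photo is findable via "2004", "birthday", or both, without
# committing to a single folder hierarchy up front.
tags = defaultdict(set)

def tag(path, *labels):
    # Labels are case-insensitive to avoid accidental "Birthday"/"birthday" splits.
    for label in labels:
        tags[label.lower()].add(path)

def find(*labels):
    # Intersect the sets: return only files carrying every requested label.
    sets = [tags[label.lower()] for label in labels]
    return set.intersection(*sets) if sets else set()

tag("IMG_0042.jpg", "2004", "Birthday", "grandma")
tag("IMG_0099.jpg", "2004", "vacation")

birthday_2004 = find("2004", "birthday")
all_2004 = find("2004")
```

A hierarchy forces one taxonomic decision per file; a tag index defers that decision to retrieval time, which is exactly when you know what you are looking for.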
Raising Your Voice
Speech-enabled user interfaces are becoming more prevalent as organizations try to leverage speech recognition as a means of controlling labor costs. Every last one of us has had at least one bad experience with an interactive voice response (IVR) system or speech-recognition-capable application. It seems that the voice user interface (VUI) world needs as much help now as the graphical user interface (GUI) world did fifteen years ago. How are good VUI experiences created, and who’s doing the work of improving the VUI experience?
Pardon Me While I Reboot My Pants
In a similar vein, there are many as-yet-unsolved issues in the area of pervasive/ubiquitous computing. As the computer—and our means of interacting with the computer—move off the desktop and into the world at large, the UX community is challenged on several fronts. What are the special challenges of designing user experiences for, say, an augmented reality, location-aware pair of sunglasses?
I look forward to exploring these and other questions about the future of user experience in this column, but I’ll need your help. Do you have questions you think UX practitioners should be addressing? If so, feel free to suggest them by joining the discussion here. And please feel free to share your perspectives on the future of user experience as well.
In the meantime, I’d like to explore one issue in detail in this month’s column. I call it the problem of the perpetual super-novice.
The Perpetual Super-Novice
After becoming familiar with computers, desktop applications, and the Web, many people continue using the same inefficient, time-consuming, mouse-driven interaction styles. Others fail to discover shortcuts and accelerators in the applications they use. Why? What can applications, operating systems, and Web sites do to better facilitate a person’s progression from novice to expert usage?
Let’s take a moment and define our terms before we go any further. Here are three classifications for levels of user expertise that I typically employ when thinking about this issue:
- The beginner—The beginning user has never or rarely used an application, device, or product before. For beginners, almost every interaction with a system is exploratory. Their physical movements—and/or the on-screen representations of their movements—are mostly explicit and thoughtful. In this phase, users are trying to figure out what a system does and how they can use it. They are actively creating and modifying their mental models.
- The novice—The novice user has ascended the learning curve somewhat. Novice users have committed certain basic operations of a system to memory—cognitive or muscle memory. They are comfortable within a circumscribed area of a system’s total functionality. Their mental model of how and why a system behaves as it does is by no means complete—and in fact, might be quite inaccurate. But their limited knowledge has no adverse effects, so long as novice users stay within their comfort zone. If novice users need to learn a new area of functionality, their behavior reverts to that of a beginner while learning.
- The expert—The expert user not only has mastery over many aspects of a system; the user’s mental model of the system is also complete and accurate enough that learning a new area of functionality occurs rapidly and easily. Expert users not only know a system; they know how to learn more about it.
Certainly, people become experts when there is a strong extrinsic motivation to do so. For example, you might learn how to do a mail merge in your word processing application, because, well, the boss just asked you to do a mail merge. But in the absence of extrinsic motivation, it seems that many people stay novices or, at most, become a form of knowledgeable novice that I call the “super-novice.” Super-novices know a lot about the little part of a system they’re used to and almost nothing about the other parts.
The thing is, in other domains, people’s mastery of a system tends to expand naturally as they become more and more experienced with it. For example, a novice bicycle rider at first sticks to the basics—mount, pedal, brake, turn, dismount. As new riders gain more experience, they tend naturally to explore the capabilities and performance characteristics of the bicycle. Can I hop a curb? What happens if I lock the brakes? How easily can I pop a wheelie?
Most desktop and Web applications do a so-so job of encouraging exploration and increased mastery. Can we do a better job of helping people ascend the learning curve? How? These questions become especially interesting given that many systems can easily track how and in what order people perform certain interactions. Are there teachable moments that user interface designers can exploit to encourage exploration and help turn novices into experts? Taking this approach would seem to blur the traditional hard line of demarcation between the application itself and its associated user assistance. Maybe it’s time that line got blurred.
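As a sketch of what exploiting such a teachable moment might look like, here is a hypothetical shortcut nudge in Python: the application counts mouse-driven menu invocations and, after a few repetitions of the same command, suggests the keyboard accelerator exactly once. The command names, shortcuts, and threshold are all invented for illustration.

```python
from collections import Counter

# Hypothetical command-to-accelerator table and nudge threshold.
SHORTCUTS = {"copy": "Ctrl+C", "paste": "Ctrl+V", "save": "Ctrl+S"}
NUDGE_THRESHOLD = 3

menu_uses = Counter()
already_nudged = set()

def on_menu_command(command):
    """Record a mouse-driven menu invocation; maybe return a one-time tip."""
    menu_uses[command] += 1
    if (command in SHORTCUTS
            and command not in already_nudged
            and menu_uses[command] >= NUDGE_THRESHOLD):
        # The teachable moment: the user has shown a habit, so surface
        # the faster path once, then stay quiet to avoid nagging.
        already_nudged.add(command)
        return f"Tip: next time, try {SHORTCUTS[command]} for {command}."
    return None

# The user saves via the menu four times in a row.
tips = [on_menu_command("save") for _ in range(4)]
```

The first two invocations stay quiet, the third triggers the tip, and the fourth is silent again—the nudge fires at the moment the habit is demonstrated, and only once, which is the part that keeps a teachable moment from becoming an annoyance.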