Anything outside those happy-path design boundaries we term edge cases. The result is often a design that works great when it works, but fails horribly when it doesn’t.
When I was working in a very methodical design environment, employing use cases to define clear requirements, I didn’t worry much about this. We had a rigorous design phase before developers wrote any code. During this phase, we would first work through the happy path—the direct and preferred route from trigger to successful outcome—and make sure we could provide that solution. Then, we would review each step in the use case for failure modes, and we would design appropriate mitigations.
Looking back, however, I should not have been so complacent. For the most part, we investigated technical failures—such as a server’s not being available—or non-compliance with business rules—such as a user’s not having sufficient funds in an account from which he wanted to transfer money. In other words, I accepted a complex environment, but still assumed a more-or-less spherical user.
Now that I work in an agile environment, I worry about this more, because the accelerated development pace and emphasis on minimizing preproduction design documentation seem to make any cracks in the system large enough for things to fall through.
I recently posed a question on my blog, The Humane Experience, about how to systematically avoid this kind of pitfall in agile, and Richard Mateosian posted this comment:
“Assume an amorphous cow and see what goes wrong. In other words, make your implicit assumptions explicit, then see what happens as you violate one or more of them.”—Richard Mateosian
Richard certainly makes a lot of sense, and for most of us, this is nothing new. I can recall many examples where I have done that, but not always systematically and explicitly. In this column, I examine specific steps UX professionals can take in considering the amorphous user.
There are a couple of models that can guide us in dealing with negative scenarios—meaning scenarios that deviate from the happy path and result in user failure. One model, negative scenario testing, comes from quality assurance; the other, negative case analysis, from qualitative research.
Negative scenario testing deliberately breaks rules in a user interface to see how the system responds. Its underlying assumption is that the design is right, and some user will inevitably be wrong. It mainly ensures that the system programmatically rejects invalid entries without breaking. For example, if a functional specification calls for new passwords to have both alphabetic and numeric characters, a QA engineer creates negative test scenarios in which he tries to create a password comprising all alphabetic or all numeric characters. If the system prevents these attempts, the product passes those tests. Of course, good design in these cases ensures that the system provides adequate feedback to the user, informing him that he has made an error, and ensures the user suffers no catastrophic loss of work as a result.
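The password rule above can illustrate what such negative tests look like in practice. The sketch below is purely hypothetical: `is_valid_password` stands in for whatever validation logic a real system would implement, and the two negative cases correspond to the all-alphabetic and all-numeric attempts the QA engineer would try.

```python
def is_valid_password(password: str) -> bool:
    """Hypothetical validator for the rule in the example:
    a new password must contain both alphabetic and numeric characters."""
    has_alpha = any(c.isalpha() for c in password)
    has_digit = any(c.isdigit() for c in password)
    return has_alpha and has_digit

# Negative test scenarios: each entry deliberately violates the rule,
# so the system is expected to reject it.
negative_cases = ["abcdefgh", "12345678"]  # all alphabetic, all numeric

for candidate in negative_cases:
    # The product passes the test only if the invalid entry is refused.
    assert not is_valid_password(candidate), f"{candidate!r} should be rejected"

# A happy-path entry, for contrast: mixed letters and digits is accepted.
assert is_valid_password("abc12345")
```

Note that these tests verify only that the system rejects invalid input without breaking; whether the rejection comes with helpful feedback, and without loss of the user's work, is the design question the tests do not answer.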
Negative case analysis is a qualitative user research technique that tries to account for observed outliers that do not fit the theoretical results. Its purpose is to refine the theory so those outliers do fit. Its underlying world view is: Hmmm, something’s going on that the theory doesn’t account for. Let’s make the theory richer. This basic model is a good one for UX and UI designers to emulate. For them, we might restate it as follows: Hmmm, something’s going on in the user’s world that our user interface doesn’t account for. Let’s make our user interface richer. This certainly describes the spirit of usability testing, in which we view user failures as legitimate outliers that our design assumptions have not accommodated, and our response is to adjust the user interface.
But I’m looking for an approach that’s a bit lighter—one that fits at the beginning of a design cycle and reduces the likelihood of our taking wrong directions in the first place. Agile creates a fast-paced environment, and getting developers to revisit an issue from the last sprint is not easy.
I’ve gotten some good results by treating negative scenarios with the same rigor as happy paths, exploring them in the same detail and depth where that seemed warranted.
The process I’ve used is similar to negative case analysis in that its inherent assumption is that user errors are often the result of rational user behavior; the user has not done anything wrong, or at least nothing illogical. But whereas negative case analysis attempts to rationalize outliers after the fact, I use scenarios to anticipate the reasonable outliers that a current design has overlooked, in the hope of making the design richer, or at least more forgiving. I also think rich scenario descriptions help developers and stakeholders develop empathy for users, making them more willing to accommodate users’ well-meaning errors. In my doctoral research, I studied the effects that participating in usability studies had on development teams. One of the things I learned was that a team’s willingness to fix a problem increased significantly when the developers empathized with users.
The emphasis of this approach is on describing scenarios where users could take reasonable alternative paths that would lead to failure. How reasonable is reasonable? That’s a legitimate question and depends on how much design time you have. Eventually, all what-ifs decompose to a level where you have to accept that users may just have to ask a friend for help or call the Help Desk. But even then, you can prepare the Help Desk to handle the inevitable calls better, as we will see in a later example.