
Book Excerpt: Why We Fail

March 9, 2020

This is an excerpt from Victor Lombardi’s book Why We Fail: Real Stories and Practical Lessons from Experience Design Failures (Rosenfeld Media, 2013).

Chapter 10: Avoid Failure

Although there is no secret formula for creating successful customer experiences, what I can offer you is a method to help you avoid failure while you search for successful designs. These recommendations counter the [following] deficiencies: …

  • We’re all vulnerable to psychological biases that make it difficult to accept errors and share information about problems.
  • Contemporary digital products and services engage us in more complex ways, and because our reasons for using them are multifaceted, our experiences of them are emotional and subjective. They are experiential products, so testing product performance alone is insufficient to avoid failure.

A Method That Avoids Failure

Our method helps us avoid failure in several ways:

  • It helps us build an accurate understanding of reality by generating verifiable data.
  • It can be adopted by an entire organization because it reflects the needs of design, technology, and management.
  • It allows us to research and test the customers’ experience, not just the product or service.
  • It covers everything from customers’ perceptions of brand and design to the ability to use and interact with a product.
  • It uses a realistic amount of resources to get the job done.

An Accurate Understanding of Reality

In experience design and other fields, there is an abundance of methodologies for innovation and quality control. Several of the methodologies that thrive today share a common thread that can help us measure customer experiences—the same essential method that has contributed to hundreds of years of scientific progress: the scientific method.

This is the same scientific method that we learned about in school, and it can be summarized in four steps:

  1. Make an observation.
  2. Form a hypothesis.
  3. Run an experiment.
  4. Interpret the results.

Let’s say a chemist makes an educated guess that mixing two chemicals will produce a new chemical that has the properties she’s looking for. She doesn’t merely think about what should work, write down the formula, and send it to the manufacturing team. She mixes the chemicals and then measures what happens. The results may not be quite right, but they may be close, and they help her make another guess that leads to another experiment. Gradually she creates what she’s looking for. This deliberate, iterative method underlies our civilization’s massive scientific progress since the 17th century. Only through first-hand observation do we have an accurate understanding of reality.

Depending on your perspective, the need for this kind of rigor may sound obvious, especially if you’re already doing something like this on a departmental level, such as in online marketing. Or it may sound too scientific if, for example, your practice follows an auteur model in which designs emanate from someone’s creative vision alone. Ultimately, design is about creating something that works for people, and we can use a methodical process for discovering whether that something did indeed work.

The initial decades of digital design were about understanding new technology and acclimating to new product and service paradigms. There was a long learning curve and a lot of failure, mine included. Now we’re acclimated to the technology, and we’re ready to be more methodical about how we conceptualize, design, and test our ideas.

In design, software development, and business management—as in science—the scientific method holds the key to discarding years of amateurish guessing about what will succeed and replacing it with a way of discovering what we need to know by testing the viability of anything that is new and uncertain.

The Experience Development Method

Standing on the shoulders of the scientific method–based approaches that came before, I developed a method called Experience Development (Figure 10.4). It includes the key characteristics of preceding methods, notably:

Figure 10.4—Essential stages of the Experience Development method

  • Formulate testable hypotheses.
  • Test the hypotheses with customers using prototypes.
  • Plan with milestones instead of dates.
  • Iterate on the above steps using the knowledge gained to establish experience, financial, and other metrics.

The Experience Development method includes two key additions:

  1. Hypotheses, prototypes, tests, and measurements are created by an integrated team of management, design, and development specialists.
  2. Hypotheses and testing methods specifically address the customer experience.

Here’s an overview of the basic steps of the method. For more detailed instruction on related methods, see the resources listed at the end of this chapter.

Step 0: Organize Around the Customer Experience

Before using the method, an organization should assemble a small, integrated team that works in time spans corresponding to the problems at hand. To begin, determine whether the relevant hypotheses are strategic (long term) or tactical (short term).

Strategic hypotheses concern major new features, major changes, a new product, or even a new business in which the customer experience is critical to success in the long term. … Work on strategic problems requires a dedicated team for an extended period to iteratively test and update hypotheses to refine the solution.

Tactical hypotheses concern small new features and small changes that affect short-term success. For example, “If we simplify this key screen by moving all the optional information to another screen, then 90 percent of customers will complete the form faster and report higher satisfaction.” We may have hypotheses about whether the hardware is fast enough or whether the customer service chat is effective. The process for creating and testing more tactical hypotheses is the same. Unlike strategic hypotheses, tactical hypotheses may not require a dedicated team or continuous activity. They can be tested at set intervals, such as at the beginning of each six-week sprint in an agile process, or following quarterly competitive benchmark reviews.
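
To make this concrete, here is a minimal sketch, not from the book, of how a tactical hypothesis like the one above might be written down as a structured record with a metric and a clear threshold of failure, so that the eventual test produces an unambiguous result. The class and field names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """A testable statement: a proposed change, a metric, and a pass threshold."""
    change: str       # what we will alter in the prototype
    metric: str       # what we will measure with customers
    threshold: float  # the observed value that counts as success

    def passes(self, observed: float) -> bool:
        """Return True only if the observed result clears the threshold."""
        return observed >= self.threshold

# The simplified-screen hypothesis from the text, stated so the test is pass/fail.
form_hypothesis = Hypothesis(
    change="Move all optional information from the key screen to another screen",
    metric="share of customers who complete the form faster and report higher satisfaction",
    threshold=0.90,
)

print(form_hypothesis.passes(0.93))  # True: the hypothesis survives this test
print(form_hypothesis.passes(0.72))  # False: time for another guess and another test
```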

A characteristic of this method is keeping tests small so that many iterations can be performed and more can be learned. This is accomplished more quickly by a team that is as small as possible. It’s not surprising that Customer Development emerged from and thrives at startup companies, because they are usually small by nature. If your organization is small, you’re set. If your organization is large, you may need to create a subgroup of specialists that can work autonomously to move quickly through the entire iterative cycle of prototyping, testing, and learning. The Skunk Works configuration pioneered by Lockheed and copied widely provides a model where innovative development can blossom even within large, complex, bureaucratic organizations. Here are some of the rules created by Kelly Johnson, who ran Lockheed’s Skunk Works in its early years:

  • The Skunk Works’ program manager must be delegated practically complete control of his program in all aspects. He should report to a division president or higher.
  • The number of people having any connection with the project must be restricted in an almost vicious manner. Use a small number of good people.
  • Very simple drawing and drawing release system with great flexibility for making changes must be provided in order to make schedule recovery in the face of failures.
  • There must be a minimum number of reports required, but important work must be recorded thoroughly.
  • The contractor [Lockheed] must be delegated the authority to test his final product in flight. He can and must test it in the initial stages. If he doesn’t, he rapidly loses his competency to design other vehicles.
  • Access by outsiders to the project and its personnel must be strictly controlled.
  • Because only a few people will be used in engineering and most other areas, ways must be provided to reward good performance by pay, not simply related to the number of personnel supervised.

We’ve seen customer experience failures that emanate from various disciplines, not just design. So this method requires an integrated team of management, design, and development specialists working in concert, conceptualizing the strategic advantage and executing on a prototype and test that accurately proves or disproves the hypothesis. For example, a minimally small software team would include one person knowledgeable in strategy and marketing, another in design and research, and another in programming. When creating prototypes for testing, passing requirements “over the wall” to a design or development department or a vendor will yield suboptimal results.

In other methods, the responsibility for customer research can be delegated to anyone, but if product success relies on a positive customer experience, it’s vital that an experienced professional performs the research. We shouldn’t expect stellar research results from someone who has little research experience any more than we should expect a graphic designer to write quality software code.

Step 1: Understand the Customer Experience

Modern technology companies conduct research all the time. They scan competitors’ patent applications, monitor technology pipelines, and benchmark performance. Companies whose success depends on experiential products also need to observe their customers’ experiences. For example, they should be aware of what their new customers expect and how well those expectations are being met by the products. What emotions do the customers feel? How do they actually use the products? What are the expectations, emotions, and actions of the competitors’ customers? And because these all change over time, the observations need to happen as regularly as internal performance reviews. Out of these observations come testable hypotheses.

Team members usually have ideas about what may go wrong, but are sometimes hesitant to share them. A pre-mortem is one way to elicit these ideas. A pre-mortem is the opposite of a post-mortem: at the beginning of a project, I pose the hypothetical that the project has failed. I then ask the team why they think it failed, and what changes would have helped avoid the failure. We can then turn those hypothetical solutions into testable hypotheses and test them.

I must emphasize that to avoid the failures outlined in this book, you need to observe the experience, not the product. Here’s an example from the automotive world: Subaru collaborated with Toyota to build an affordable sports car that would be wonderful to drive. The project team culled design ideas from what made past models enjoyable to drive and developed a prototype. During testing, the engineers iteratively formed hypotheses about what would work better, made adjustments, ran tests, and measured the results. But although they were modifying the prototype car, it was not the car they measured:

We didn’t set up any numerical targets like lap times or acceleration. We had one test driver, and after each set of tests, the only thing we’d ask was, “Did you enjoy it?”

Of course he said yes every time. The most important thing for me was—well, we say it like this: you must have a smile behind the wheel.

A smile is one of the better outward signs of a good experience. Although many of us may like our cars, how many of us have cars that make us smile when we drive them?

The result of all that smiling is the 2012 Subaru BRZ, also known as the Scion FR-S and the Toyota GT86, a $28,000 four-seat coupe. Though it’s not as fast, flashy, or technologically interesting as other sports cars, it has been praised by customers and automotive journalists.

You can see this shift to measuring experience among automotive media as well. For example, the Winding Road automotive Web site has a ranking system called the Involvement Index that ranks cars according to how well they engage the driver rather than how fast they accelerate or how many gadgets they include.

Motor Trend magazine does something similar now, ranking the Best Driver’s Cars each year. In the 2012 test, the Subaru BRZ placed fourth, beating an $84,000 Jaguar, a $295,000 McLaren, and even a $393,000 Lamborghini.

Step 2: Form a Hypothesis

As we observe, we come up with questions. What if we left out features x, y, and z to make the product faster? What if we wrote more humorous copy? What if we let people customize it more? Before we can test these questions, we turn them into hypotheses that make it clear what is being tested and what results we expect.

With my perfect hindsight I’ll use as examples the products profiled in this book. For each product, I created a sample hypothesis and a corresponding test. …

If we could travel into the future and see how products fail and then come back to the present, we would always know what changes to make. Or if we had infinite resources we could test everything. Because we can’t do either of these things, we make educated guesses about which hypotheses are the most important.

Uncovering implicit assumptions and determining which ones are critical is one of the most difficult and ill-defined parts of scientific method–based methodologies. Working backward from the failures profiled in this book, we can generate relevant hypotheses by asking these questions:

  1. What does our design do differently from existing successful designs?

For example, the key difference might be the product concept (iDrive, Wave), the interaction (iDrive, Wave, OpenID, Final Cut Pro X, Plaxo, Symbian), the audience (OpenID, Pownce), or the features (Final Cut Pro X).

  2. We believe we have a particular competitive advantage, and that our customers’ experiences are somehow better or different with our product. Do customers actually experience this difference?

This question, in my experience, isn’t asked nearly enough in contemporary experience design practice, especially given how many failures are due to losing to competitors (Zune, Wesabe, Pownce, Symbian).

  3. Do our customers’ experiences rely on technology performing in a new or better way?

For example, Symbian’s touchscreen interface suffered from delays between the touch and the on-screen feedback. Another classic example is Apple’s Newton MessagePad, which, when first released, featured handwriting recognition that often produced unintelligible results.

  4. Where could our current customer experience improve?

From your observation, testing, and customer feedback, you should have an idea of which part of the experience is not pleasant, where there are too many steps, what abilities are missing, and so on. These are the places that competitors will target, as we saw with Wesabe.

Ranking these hypotheses to determine which are most important and deserve testing can be done using criteria relevant to your business, such as the following; a simple scoring sketch appears after the list:

  • Which hypotheses arose from the most customer feedback?
  • Which are most important to our strategy?
  • Which are most important to our brand?
  • Which rely on the effects of external market forces, such as a social network reaching critical mass?
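
Below is a minimal sketch of how a team might combine such criteria into a single priority score for each hypothesis. The criteria weights and scores are invented for illustration, not values prescribed by the method.

```python
# Hypothetical weights for the criteria above; adjust them to fit your business.
criteria_weights = {
    "customer_feedback": 0.4,   # how much customer feedback raised the issue
    "strategy": 0.3,            # importance to our strategy
    "brand": 0.2,               # importance to our brand
    "external_forces": 0.1,     # reliance on external market forces
}

# Hypothetical 1-5 scores for two candidate hypotheses.
hypotheses = {
    "Simplify the key screen": {"customer_feedback": 5, "strategy": 3, "brand": 2, "external_forces": 1},
    "Add social sharing":      {"customer_feedback": 2, "strategy": 4, "brand": 3, "external_forces": 5},
}

def priority(scores: dict) -> float:
    """Weighted sum of criteria scores; higher means test this hypothesis sooner."""
    return sum(criteria_weights[name] * value for name, value in scores.items())

for name, scores in sorted(hypotheses.items(), key=lambda item: priority(item[1]), reverse=True):
    print(f"{priority(scores):.1f}  {name}")
```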

Step 3: Run an Experiment

Along the spectra of attitudinal to behavioral and quantitative to qualitative, there are myriad customer research methods we might use to test a hypothesis. What’s important is that we test the experience, not the product. For example, design researchers have ways to gauge desirability by measuring the emotions people feel while using a product. One way is to observe them, as in the Subaru example. Another is to ask people to indicate how they felt by looking at pictures of facial expressions and selecting one.
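
As a hypothetical illustration of that second technique, the list below stands in for the facial expressions ten participants might select after using a prototype; tallying them turns subjective, emotional responses into data the team can compare across iterations. The labels and counts are invented for this sketch.

```python
from collections import Counter

# Hypothetical picks: each participant chose the expression closest to how they felt.
selections = ["delighted", "neutral", "delighted", "frustrated", "delighted",
              "neutral", "delighted", "delighted", "frustrated", "delighted"]

tally = Counter(selections)
positive_share = tally["delighted"] / len(selections)

print(tally)  # Counter({'delighted': 6, 'neutral': 2, 'frustrated': 2})
print(f"{positive_share:.0%} of participants chose a positive expression")
```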

Adherents of Customer Development and Lean Startup sometimes use a proxy for experience called currency. In this technique, the potential customer is shown a prototype and asked to pay for the eventual real product. The currency can be an actual cash payment, but it can also be a noncash transaction in another form, such as a letter of intent. Currency demonstrates a commitment on the part of the customer, and requesting currency after testing a prototype can be easier than gauging experience directly. For example, Microsoft could have let target customers try a pre-production version of the Zune and asked them for a cash deposit, perhaps with the promise of getting the product before anyone else upon its release.

Step 4: Interpret the Results

One beautiful aspect of the scientific method is that the test results are usually true or false. Assuming the hypothesis is well stated with a clear threshold of failure, and that the test is appropriate and executed properly, there’s little to interpret.

Contrast this with, for example, a product usability test performed without using a hypothesis, which was and is common. A customer would be asked to perform a number of typical tasks using a product while a researcher measured variables such as the time needed for task completion, the number of errors made, and how the customer rated his or her satisfaction with the product. After the tests, the accumulated data might not paint a clear picture of success or failure, leaving the researcher to ask, “Did the customers complete the tasks fast enough? Did they make too many errors? Is their level of satisfaction high enough?”
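
A small sketch with invented numbers shows the difference: the raw usability metrics alone invite debate, whereas checking them against a hypothesis stated up front, such as the earlier claim that 90 percent of customers will complete the form faster, yields a plain true or false.

```python
# Hypothetical completion times (in seconds) for ten participants using the redesign,
# and an assumed baseline for the current design.
completion_seconds = [41, 38, 52, 35, 44, 39, 36, 48, 40, 37]
baseline_seconds = 45

# Without a hypothesis, this number invites debate: is a 41-second average fast enough?
average = sum(completion_seconds) / len(completion_seconds)

# With the hypothesis stated up front, the verdict is simply true or false.
faster = sum(1 for t in completion_seconds if t < baseline_seconds)
share_faster = faster / len(completion_seconds)
hypothesis_holds = share_faster >= 0.90

print(f"average: {average:.0f}s; faster than baseline: {share_faster:.0%}")
print("hypothesis holds" if hypothesis_holds else "hypothesis fails; decide what to change in the next iteration")
```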

The hard work of interpretation happens when the test result is false. Then the team must decide what to do in the next iteration. Make a small change to the design? Change the code or hardware to perform differently? Alter the business model and make corresponding changes to the prototype? Being able to quickly move from a false test result to another iteration is why starting with a small, integrated team is so important.

Experience Development Summary

  1. Is the problem strategic? If so, establish a dedicated, integrated team of specialists to work on a continuous basis. If the problem is tactical, the team can work on an as-needed basis. If inside a large organization, follow a skunk works model to create a small subgroup capable of working quickly.
  2. Build an understanding of the current customer experience through customer research and competitive research, and by tapping internal wisdom through tools such as pre-mortems.
  3. Form and prioritize hypotheses, especially concerning how the product is different from existing successful designs, the strategic differentiation as the customer experiences it, the performance of new technology, and known existing deficiencies in the experience.
  4. Test the critical hypotheses.
  5. Use the test results, particularly the false results, to re-prioritize and re-form the hypotheses, and begin the process again.

Conclusion

The case studies in this book demonstrate that contemporary digital products and services engage us in qualitatively more complex ways than in the past. Our reasons for using them are multifaceted. Our experiences of them are emotional and subjective. They are experiential products, and to avoid failure we have to go beyond understanding product performance to understanding customers’ experiences.

In some cases, the product shortcomings were no secret but were ignored by those in charge. We’re all vulnerable to psychological biases that make it difficult to accept errors and share information about problems. Methodologies based on the scientific method can make it safe to test ideas and have some fail, and to base decisions on verifiable data. The Experience Development method employs teams that integrate management, design, and technology so that test data can be shared naturally, a hallmark of quality control processes.

We have not yet proven beyond a doubt that these methods significantly and consistently improve the customer experience. But the empirical evidence we have so far is compelling. In addition to the success stories mentioned above, a number of venture capitalists support these methods, which is telling because they have the most at stake in terms of direct financial impact. Fred Wilson, a venture capitalist and principal at Union Square Ventures, reported that “There is a very high correlation between the lean startup approach and the top performing companies in our two funds.” And Scott Anthony at the Harvard Business Review cited Steve Blank and Rita Gunther McGrath as two of the twelve people in business history “who have brought the greatest clarity to the field of innovation,” alongside the likes of Thomas Edison and Joseph Schumpeter.

What some people find alluring about methods, others find anathema. As with any approach, an overreliance on methods can become reductionist. Methods are tools, not instruction books. The method alone does not ensure success and doesn’t obviate the need for a team with acute thinking. A good scientist invents effective, relevant hypotheses that all contribute to some larger proof. The experiments must be valid tests. And the results must be interpreted intelligently and objectively. As venture capitalist Fred Wilson remarked, “Lean startup is a machine; garbage in will give you garbage out.”

Discount for UXmatters Readers—Buy Why We Fail: Real Stories and Practical Lessons from Experience Design Failures online from Rosenfeld Media, using the discount code UXMATTERSWWF, and save 15% off the retail price.

Victor Lombardi

Head of Design, Commercial Lending, at Capital One

New York, New York, USA

Victor helps organizations design products and build businesses that offer the best possible consumer experiences. Previously, for over a decade, he worked as a user-interface designer, contributing to more than 40 desktop and Web applications for organizations such as General Electric, Cisco, J.P. Morgan, and the Southern Poverty Law Center. He earned his bachelor’s degree in journalism from Rutgers University and a master’s degree in music technology from New York University. Victor combines his liberal-arts education and technology-industry work into a perspective that is both practical and human. He is passionate about learning and sharing what he learns, co-founded the Information Architecture Institute and the Overlap conference, and has taught at the Parsons School of Design and the Pratt Institute. He is the author of the Rosenfeld Media book Why We Fail: Real Stories and Practical Lessons from Experience Design Failures.
