User Interface Design Flaw Cause of Ford Recall | User Research: How Much Is Enough?

Ask UXmatters

Get expert answers

A column by Janet M. Six
June 22, 2015

In this edition of Ask UXmatters, our panel of UX experts discusses two topics:

Imagine that you’re driving your luxury car down the road when your front-seat passenger decides to change the radio station, and all of a sudden your car unexpectedly shuts down and comes to a screeching halt. This is exactly what happened to at least one owner of a 2015 Lincoln MKC. How did this happen? The location of the Engine Start/Stop button was where the driver or a passenger could inadvertently hit it. In this day and age of better UX design, how could this design have made it to market? Could usability testing have prevented this?

For our second topic, our panel discusses how to determine whether you’ve completed sufficient user research. How do you know when you’ve completed enough user research to inform product design? Is it a certain number of participants? A certain amount of time? Are we ever truly finished with user research?


In this month’s edition of Ask UXmatters, our panel of UX experts answers our readers’ questions about a broad range of user experience matters. To get answers to your own questions about UX strategy, design, user research, or any other topic of interest to UX professionals in an upcoming edition of Ask UXmatters, please send your questions to: [email protected].

The following experts have contributed answers to this edition of Ask UXmatters:

  • Dana Chisnell—Principal Consultant at UsabilityWorks; Co-author of Handbook of Usability Testing
  • Leo Frishberg—Product Design Manager at Intel Corporation
  • Jordan Julien—Independent Experience Strategy Consultant
  • David Kozatch—Principal at DIG
  • Cory Lebson—Principal Consultant at Lebsontech; Author of UX Careers Handbook (forthcoming); Past President, User Experience Professionals’ Association (UXPA)
  • Gavin Lew—Executive Vice President of User Experience at GfK
  • Baruch Sachs—Senior Director, User Experience at Pegasystems; UXmatters columnist

User Interface Design Flaw Cause of Ford Recall

Q: Ford recently recalled some of their SUVs because drivers were accidentally turning them off as a result of the user-interface design. How do these types of design errors still happen in this day and age?—from a UXmatters reader

Recently, Ford recalled 13,000 Lincoln MKC SUVs because of the possibility that drivers might accidentally shut down their vehicles by inadvertently hitting the Engine Start/Stop button, which is directly below the gearshift selector buttons and beside the all-in-one navigation/radio/climate-control touchscreen. According to a CNN story, “Ford Recalls SUVs Because Drivers Are Accidentally Turning Them Off,” one driver experienced a sudden, hard stop after his passenger accidentally hit the button while trying to operate the radio. You can see an image of the design in question in that article.

To make matters worse, if a driver or passenger accidentally shuts down the engine by hitting the Engine Start/Stop button, the airbags will not deploy as expected, according to The Register’s article “Ford Recalls SUVs … to Fix the UI,” and Edmunds’ article “2015 Lincoln MKC Recalled to Relocate Push-Button Start.” The engine shuts down unexpectedly and the airbags stop working! This is bad! You can see a short discussion and demo of this unexpected engine shutdown occurring at low speed, in the Consumer Reports video “Lincoln MKC Ignition Button Recall Highlights Risk.” The video also shows how easy it could be to inadvertently hit the button while bracing the hand to operate the touchscreen system when the vehicle is in motion. Ford’s solution for this problem is to place the Engine Start/Stop button at the top of the gearshift. You can see its new position in this video, too. In this age of better UX design, how could the original design ever have made it to market?

“I find this story fascinating,” answers David. “Based on the photograph, it’s clear that its designers were breaking some rules: an ignition button or key entry should never be in the same line as the transmission. And the button is small—smaller than what drivers, including those familiar with button-based ignitions, have come to expect when powering up a 5,000–6,000-pound machine. The engineers should have figured this out before they started building the car. We can’t know for sure what went wrong at Ford, but some of the ways in which they could have avoided this problem include the following:

  1. Doing initial paper and/or model prototyping to answer questions about how to use the control and to determine comfort-level ratings of the user interface, in comparison to what drivers are currently using, would likely have brought up these issues: ‘Why would you put the ignition in the same place as the transmission?’ and ‘Why is the button so small?’
  2. Simulating the driving experience using the user interface would have revealed the problem—especially if the tests had involved completing several maneuvers over a period of time that made use of the Sport feature.

“It is very likely that form was following function here—that is, the engineers probably determined the optimal location for the power switch beneath the dash based on engineering requirements. From an engineering standpoint, this probably meant a location that did not require a lot of rejiggering of other functions. Unfortunately, this placement was less than optimal for the driver.

“By the way, a pet peeve of mine regarding a lot of user interfaces for electronic devices is hiding the On/Off button. Many manufacturers combine On/Off with other functions, but simplicity is always best, which means On/Off should always be a clearly marked, stand-alone function. Toyota figured this out for the push button in their Prius and Lexus autos, but many car audio manufacturers have not.”

This question reminded David of an earlier discussion in the Ask UXmatters column “Asking Probing Questions During a Usability Study,” and he said, “A lack of probing questions likely contributed to Ford’s failure to discover this issue before shipping the SUV. The engineers at Ford were looking at that power button, but not seeing it in the way their customers would see it.”

Designing in a Vacuum

“Unfortunately, product design sometimes occurs in a vacuum,” replies Gavin. “It’s based on great engineering, but created in a vacuum. The old adage of wishing architects had to live in the glass houses they’ve designed applies. Good design involves elegant interactions that occur among different users, environments, and contexts. By understanding how a product design fits in the real world, you can identify potential problems.

“The reality is that, sometimes, we identify these problems too late in the development process, when physically moving a button that could potentially cause problems is really expensive and would need to be cost justified. Sometimes timelines and budgets don’t allow the testing, redesigning, or reimplementing of products. So the best thing we can do is to try to simulate user experiences and test prototypes earlier in the development process, when changes are cheaper. This is the best time to effect positive change.”

The Realities of Complex Design

“Without being intimately involved in the work of this particular design team, it is hard to quarterback what should have happened,” answers Baruch. “However, I have worked on both hardware and software for large, complex military systems, as well as complex, enterprise software products with many different components, so I can say that these types of things happen for a number of different reasons:

  1. Lack of cohesion across a design team. Cars are made up of countless systems and subsystems, so their creation involves multiple design teams. It’s all supposed to come together with the proper oversight, but things get missed.
  2. Not enough testing. Or not the right kind of testing that would pick up this sort of problem.
  3. Having other priorities. People deal with poor design all the time, and we are pretty good at coping with it. Sometimes this reality makes it hard to push through the right design at the expense of other considerations.

“These three things happen all the time. When you think about the pressures American car companies face to be profitable, while creating exciting new designs for pretty complicated pieces of equipment, it is easy to see how any of these things might have happened in this case. Also, despite many modern businesses’ adoption of design thinking, the automotive industry is still rather old school and hierarchical. Somebody, somewhere approved this design, and that was that.”

The Evolution of Automotive Design

“The automotive industry has been catching up to consumer technology trends over the past decade,” responds Jordan. “Many experts believe that experience designs for autos are in the awkward-teenager stage that Apple went through during the late ’90s. Automotive manufacturers have access to today’s technology, but often are not structured to take advantage of the unique expertise of user interface designers and interaction designers that has developed to optimize today’s technology. But we’ll never rid ourselves of poor design choices altogether.

“Luckily, in this case, Ford’s poor design decision didn’t cost lives. However, it does illustrate explicitly how iterative design works. Ford will never, ever put a Start button back in that location or treat its design in this way again. Perhaps, with additional testing or research, they could have excluded this poor design decision from consideration earlier on—but perhaps not. Drivers of the new MKC might eventually get used to the position of the button. If it weren’t a safety issue, I’d find it very interesting that Ford would spend all that money on a recall for a UI-design issue.

“I’m sure there are similar issues with other vehicles—but perhaps not with a system that’s as critical as the Engine Start button. For example, I don’t like the way the power windows work on older Volvos. Nor do I like the door handles on certain Jeeps. But neither of these vehicles was recalled.”

User Research: How Much Is Enough?

Q: How do you determine whether you’ve completed sufficient user research to inform a product design?—from a UXmatters reader

“We are never done,” asserts Leo. “In our practice, we look at sufficient in terms of:

  1. Time—How much time have we devoted to research?
  2. Resources—How much money have we spent and how many people have we dedicated to research?
  3. Saturation—How much new information did we get in the most recent round of research?

“Each of these could be the gating factor, depending on the project. Of the three—except in the case of large, complex projects—saturation happens first. We usually get to diminishing returns fairly quickly—within three to six sessions for simple engagements—but we use the very rich, discount technique Presumptive Design (PrD). Even without PrD, using more typical contextual inquiries or user interviews, we usually converge after six sessions—again, depending on the complexity of the project.

“If a project is complex, you need to increase time and resources proportionately to its complexity, but one of these may hit a budgetary constraint before we’d anticipate hitting saturation. In such instances, we get creative about logistics and operations to maximize the results we can achieve within the cost envelope.”

“Never assume that you have completed sufficient user research to fully inform a product design before you develop the product,” advises Cory. “Ultimately, however, you’ll have resource constraints that are going to limit how much research you can get done. Work within those constraints, whatever they may be, but try to break your research efforts into smaller chunks. That’s the beauty of an iterative approach. You don’t need to achieve perfection, but your product keeps getting better and better with each small cycle.”

The Law of Diminishing Returns

“There’s a notion in ethnography called saturation,” replies Dana. “You’ve learned everything you can learn, and you’re not hearing anything new or different. We call this the point of least astonishment—a phrase I stole from Jared Spool because I like it so much. You’ll know it when you get there. In my experience, though, one round of user research often spawns new questions once you start to implement what you think you know. So, the research is never actually done. To have a rich design that meets users’ needs and delivers the experience you want them to have, you pretty much have to keep doing research iteratively throughout the life of the design.”

“This is a great question,” answers Gavin. “It’s always difficult a priori to know how much research is enough. We train people on estimating sample sizes for formative and summative studies. The simple answer is that this is not easy. The right answer depends on the complexity of the device, the user group, the number of tasks, and even the effect size that we anticipate. By effect size, I mean: Are we looking for big differences, or showstoppers? Or are we looking for smaller things that could lead to impacts that could derail the product or, in the case of healthcare systems, result in a life-threatening situation?

“For most forms of UX research, there is the notion of diminishing returns—whereby, after five, seven, or ten participants, you see the same issues coming up again and again, and you discover fewer and fewer new issues. However, these small sample sizes are often insufficient to get executives to buy into your findings. Often, we do what we call a smell check. When does it smell like we’ve had enough participants to make an issue feel real? However, in the case of medical devices, there are strong guidelines that dictate using something like 15 participants per user group or 25 participants for a high-risk device such as an infusion pump.”
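Gavin’s observation about diminishing returns after five, seven, or ten participants is often formalized in the usability literature as a problem-discovery curve. The sketch below is an illustration added here, not something the panelists cite: it uses the commonly quoted model in which the probability of observing a given usability problem at least once with n participants is 1 − (1 − p)^n, where p is the chance that any single participant encounters that problem. The value p = 0.31 is an average detection rate frequently cited in the literature, not a figure from this study.

```python
def discovery_probability(p: float, n: int) -> float:
    """Probability of observing a problem at least once across n sessions,
    assuming each participant independently encounters it with probability p."""
    return 1 - (1 - p) ** n

# With p = 0.31, each additional participant adds less and less new information:
for n in (3, 5, 7, 10, 15):
    print(f"{n:>2} participants: {discovery_probability(0.31, n):.2f}")
```

The curve climbs steeply at first and then flattens, which is exactly the saturation point Leo and Dana describe: the first handful of sessions surfaces most of the common issues, and later sessions mostly repeat what you already know.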

Dr. Janet M. Six is Product Manager at Tom Sawyer Software, in Dallas/Fort Worth, Texas, USA. She helps companies design easier-to-use products within their financial, time, and technical constraints. For her research in information visualization, Janet was awarded the University of Texas at Dallas Jonsson School of Engineering Computer Science Dissertation of the Year Award. She was also awarded the prestigious IEEE Dallas Section 2003 Outstanding Young Engineer Award. Her work has appeared in the Journal of Graph Algorithms and Applications and the Kluwer International Series in Engineering and Computer Science. The proceedings of conferences on Graph Drawing, Information Visualization, and Algorithm Engineering and Experiments have also included the results of her research.
