Usability for Mobile Devices

Insights from Research

Walking in your customers’ shoes

September 5, 2010

The mobile space is the new Wild West of technology. Much like the Web during the 1990s, mobile is the new domain at the forefront of innovation. Users are discovering new capabilities, integrating them into their daily lives, and experiencing new interaction models. The tech equivalent of indie bands, independent developers—working solo or in small teams—can create innovative new software in the form of mobile applications. These apps have the potential to launch a few software engineers from dorm rooms and garages into tech giants, in the tradition of Google or Facebook. Of course, accompanying this new era of innovation is a new set of usability concerns for software that runs on mobile devices small enough to fit in your pocket, which you can use while walking around and interacting with the world around you.


Dealing with Physical Constraints

There are some well-known constraints we must take into consideration when designing and developing mobile apps—mostly surrounding a device’s form factor and physical user interface. Thus, the type of device on which a mobile app will run is a major design consideration. One nice aspect of designing apps for the iPhone is that the device’s form factor and physical user interface are standardized and well known. Plus, you can market your app and people can buy it using the familiar user interface of the iPhone App Store.

However, when designing apps for other brands of mobile devices, you’ll need to deal with significant variability in their screen sizes, form factors, and physical user interfaces. For example, a BlackBerry phone may have a small screen with a physical QWERTY keyboard, as on the BlackBerry Curve, or it may have a larger touchscreen and a virtual keyboard, like the BlackBerry Storm. Consequently, the interaction design for each of these devices must be quite different. The Storm requires large buttons to facilitate touchscreen interaction, while the Curve requires smaller navigation elements, so they’ll fit on the smaller screen. For this reason, it’s important to specify the mobile devices on which you intend your app to run when defining requirements.
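To illustrate how this kind of device variability can feed directly into design requirements, here is a minimal sketch that maps device profiles to minimum sizes for tappable or clickable elements. The pixel values and size thresholds are illustrative assumptions for the sake of the example, not published guidelines.

```python
# Sketch: deriving a minimum target size from a device profile.
# The threshold values (40 px for touchscreens, 20 px otherwise)
# are illustrative assumptions, not published specifications.

from dataclasses import dataclass

@dataclass
class DeviceProfile:
    name: str
    screen_width_px: int
    screen_height_px: int
    touchscreen: bool

def min_target_size_px(device: DeviceProfile) -> int:
    """Return a minimum tap/click target size for this device."""
    if device.touchscreen:
        # Fingertips need larger targets than a trackball or
        # keypad-driven cursor does.
        return 40
    return 20

curve = DeviceProfile("BlackBerry Curve", 320, 240, touchscreen=False)
storm = DeviceProfile("BlackBerry Storm", 360, 480, touchscreen=True)

assert min_target_size_px(storm) > min_target_size_px(curve)
```

In practice, a requirements document would enumerate profiles like these for every target device, so designers know up front which layouts they must accommodate.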

The Mobile Space Is Not the Web

Many of the assumptions about user interactions that drive Web design do not hold true for the mobile space. It’s essential to recognize that your users will not be sitting at a desk, looking at a big screen for substantial amounts of time, in a relatively peaceful environment. Instead, your users will be mobile—perhaps walking down the street, sitting on a train, or waiting for a latte at a coffee shop—using your app in environments where they are surrounded by stimuli. So rather than running in a quiet office, library, or home, your app must compete with countless compelling external stimuli, such as the constant movement of people and vehicles and interactions with other people. Also, because your app runs on such a small screen and carries less auditory impact, it is less immersive and less able to hold users’ attention than a desktop or Web application. Therefore, it is essential that users be able to open your app quickly, accomplish what they set out to do, then exit quickly and return their attention to the outside world. Achieving this kind of lightning-fast engagement is essential to an application’s success in the mobile space.

One of the easiest ways of achieving the kind of quick engagement the mobile space requires is to streamline your app’s functionality. This means restraining any form of feature proliferation. Single-function or limited-function apps have a definite advantage when it comes to quick engagement. Google provides a great example of this philosophy with their Google Maps mobile app, which is separate from the Google search app. Thus, they can provide various capabilities, while limiting the functionality they incorporate into a single mobile app. This approach offers several advantages. First, it allows users to understand the utility of each app easily—enabling them to more quickly choose the app they need among the collection of apps on their phone, as well as to quickly make a purchase decision when buying a new app. It also makes it easier for users to organize their apps by function, especially now that more smartphones are incorporating folder structures as a way of dealing with app proliferation on devices. Finally, single-function apps have simpler user interfaces, which reduces screen clutter and lets users access key functionality quickly and easily, while also minimizing the impact of distractions by reducing the amount of attention using an app requires.

In a real-world environment, a mobile app must overcome competition for a user’s attention, which goes far beyond just overcoming a competing app’s claims on a user’s attention, as on the Web. For example, if you are developing a news delivery app, it’s important to take into consideration why a user would use your app rather than just grab the newspaper sitting next to him at the local coffee shop. Navigation apps have done a great job of competing for users’ attention. Their competition includes traditional maps, printed directions, and dedicated navigation devices. Traditional maps don’t provide position tracking or turn-by-turn directions. Printed directions don’t provide positional information either, so are not helpful if you miss a turn and can’t generate updated directions when you are mobile. Dedicated navigation devices can’t incorporate other functionality like making phone calls, checking your email, or updating your Facebook status, so they add device clutter. A mobile navigation app offers significant advantages over each of the competing methods of completing the same objectives. Your mobile app should offer similar advantages to help ensure its success.

It is also important to take into account the reality that users may be engaging in simultaneous activities that not only require their attention, but may also take up the use of one hand. For instance, a person may be trying to use your mobile app while also trying to carry groceries, walk a dog, or carry a cup of coffee. If users discover that it is difficult to use your mobile app in such situations, they may avoid using it unless they can devote their full attention to it. Thus, single-handed operation is a major consideration for mobile apps.

Mobile Usability Testing

This is a user research column, so we definitely want to address how to do user research for mobile apps. Next, let’s discuss some factors you should take into account when doing usability testing.

Considering the Context of Use

As we stated previously, it is extremely important to take into account the different contexts in which people use mobile apps. For this reason, it’s sometimes useful to incorporate environmental distractions into the context in which usability testing takes place. There are a few ways of doing this. First, you can perform a field study in which a person attempts to use a device in a real-world environment like a coffee shop or other public place. This kind of approach maximizes external validity, which refers to the extent to which you can generalize the findings of a research study to the real world.

Another testing strategy is to simulate a real-world environment within a controlled area like a usability testing lab. This might include playing music or videos in the background or even inserting additional people into the testing environment to periodically initiate communication with a participant. All of this occurs as the participant simultaneously attempts to perform tasks on a mobile device. Finally, you can introduce distractions in the form of distracter tasks. Introducing a distracter task involves instructing a participant to stop what they are doing, perform a prescribed action, then return to what they were originally doing. An example might be: “Whenever you hear the bell ring, stop what you are doing and write down what time it is in this notebook.” While distracter tasks do not consist of meaningful actions and need not mimic real-world situations, they are relatively easy to incorporate in a test scenario and give you an idea of how distractions can impact a user’s interaction with your mobile app. If you’re able to do iterative usability testing, you can begin with more traditional usability testing, then introduce distraction elements further along in the testing process.
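One way to keep distracter tasks consistent across sessions is to generate the prompt schedule programmatically rather than ad hoc. The sketch below is a hypothetical helper—not part of any standard testing toolkit—that produces randomized bell times within gap bounds you choose, so every participant experiences a comparable but unpredictable pattern of interruptions.

```python
# Sketch: generating a randomized schedule of distracter prompts
# ("when the bell rings, note the time") for a test session.
# The interval bounds below are illustrative assumptions.

import random

def distracter_schedule(session_seconds, min_gap, max_gap, seed=None):
    """Return times (seconds from session start) at which to ring
    the bell, spaced by random gaps within [min_gap, max_gap]."""
    rng = random.Random(seed)  # seedable, for reproducible sessions
    times, t = [], 0.0
    while True:
        t += rng.uniform(min_gap, max_gap)
        if t >= session_seconds:
            break
        times.append(t)
    return times

# A 20-minute session with a bell every two to four minutes:
schedule = distracter_schedule(1200, 120, 240, seed=1)
print([round(t) for t in schedule])
```

Logging the same schedule alongside your session recording also makes it easy to check, after the fact, how each interruption affected the participant’s task performance.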

Capturing Data

Data collection can be a challenge when testing mobile devices. Because of mobile devices’ small size, it can be difficult to get a clear video recording of participants’ interactions with a device. This is particularly true for touchscreen devices, because when a participant interacts with a touchscreen, his or her hand also obscures the user interface. This can make it difficult to see what buttons a participant presses. Also, participants with large hands may obscure the user interface merely by holding a device in their hands. Finally, because mobile devices are usually handheld devices, people tend to move around quite a bit while interacting with them, rendering normal stationary cameras ineffective.

The easiest way to work around these issues is to do initial testing using functional mockups on a desktop or notebook computer. This can provide great initial guidance on your app’s interaction design, but it’s important not to rely entirely on this kind of testing. At some point during development, you should do usability testing on an actual mobile device.

When testing on mobile devices, there are several approaches we employ. To capture user interactions despite participants’ hands sometimes obscuring the screens of mobile devices, we like to use a screen capture utility that outputs a video stream. This lets us see what’s happening on a mobile device’s screen on a connected computer, on which we can save the video stream. To do this, you’ll need to find a screen recording utility for the platform for which you’re designing your app.

Although recording interactions lets us see what user interface elements participants use, it doesn’t tell us whether a participant might have hit a button by accident or show us the participant’s reaction. To capture that information, we like to focus a second camera on the device a participant is using. This helps us capture the way the device fits in the participant’s hands, accidental activations, mistakes in interactions, and any other issues involving the use of the device’s physical user interface.
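Because the screen recording alone cannot distinguish an intentional tap from an accidental one, it helps to be able to line up on-screen events with the second camera’s footage after the session. One way to do that in a test prototype—sketched hypothetically below, with made-up event names—is to log input events with wall-clock timestamps, then match those timestamps against the video afterward.

```python
# Sketch: a minimal timestamped event log for a test prototype,
# so on-screen events can be aligned with external camera footage
# after a session. Event and control names are illustrative.

import time

class EventLog:
    def __init__(self):
        self.events = []

    def record(self, name, detail=""):
        # Wall-clock timestamps let us line the log up with a clock
        # shown to the camera at the start of the session.
        self.events.append((time.time(), name, detail))

    def dump(self):
        """Render the log as readable, sortable lines."""
        return [f"{ts:.3f}  {name}  {detail}".rstrip()
                for ts, name, detail in self.events]

log = EventLog()
log.record("tap", "submit_button")
log.record("screen", "confirmation_view")
for line in log.dump():
    print(line)
```

A log like this also gives you a searchable record of the session, which is handy when you only need to review the moments around a suspected accidental activation rather than the whole recording.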

Because mobile devices tend to move around as participants hold them, we sometimes assign someone to be a dedicated camera operator. In this way, we avoid asking participants to hold the device still or in an awkward position where it is visible to a camera, and we avoid mounting the device in a way that would not represent how participants would actually use it. However, if having a dedicated camera operator is not a possibility, we ask participants to hold a device in a way that is comfortable for them and set up a stationary camera, zooming in on the device only to a degree that ensures it remains in frame, so long as participants’ movement stays within a reasonable range. Usually, we also set up a second stationary camera to capture participants’ facial expressions as they use an app. By combining these three data sources—the screen recording and the two camera views—we can capture just about any relevant information during a usability test.


The mobile space is an exciting new technology frontier. Eventually, it will probably come to be dominated by massive software companies and require substantial budgets to create apps that have a chance at success. But for the time being, independent developers and a new generation of startups have the opportunity to blaze a new path by creating innovative mobile apps that redefine the way people interact with mobile technology. To meet these ambitious objectives, designers and developers of mobile apps must understand a new set of usability constraints that are unique to the mobile domain, including the variability of the physical user interfaces for devices on a single platform, usage contexts that are rich in distractions and competition for users’ attention, and users who are often engaged in simultaneous activities, which requires that an app remain usable during single-handed operation.

Usability testing of mobile devices also presents some new challenges. Traditional, lab-based usability testing doesn’t adequately simulate the actual operational context of a mobile app. Collecting relevant data during a usability test session in which a participant is using a handheld device also requires some new approaches. Fortunately, tools exist to solve this data-capture challenge, and new tools are continually emerging as the smartphone market continues to grow.

This innovative mobile space has the potential to launch the next tech titan. Among the countless app developers who are now working in this space, the companies that rise to the top will necessarily have done much to overcome the new usability challenges mobile devices present. 

Principal Researcher and Co-Founder at Metric Lab

Redwood City, California, USA

Demetrius Madrigal

Demetrius truly believes in the power of user research—when it is done well. With a background in experimental psychology, Demetrius performed research within a university setting, as well as at NASA Ames Research Center, before co-founding Metric Lab with long-time collaborator Bryan McClain. At Metric Lab, Demetrius enjoys innovating powerful user research methods and working on exciting projects—ranging from consumer electronics with companies like Microsoft and Kodak to modernization efforts with the U.S. Army. Demetrius is constantly thinking of new methods and tools to make user research faster, less costly, and more accurate. His training in advanced communication helps him to understand and connect with users, tapping into the experience that lies beneath the surface.

Principal Researcher and Co-Founder at Metric Lab

Redwood City, California, USA

Bryan McClain

Bryan is passionate about connecting with people and understanding their experiences and perspectives. Bryan co-founded Metric Lab with Demetrius Madrigal after doing research at NASA Ames Research Center for five years. While at NASA, Bryan worked on a variety of research studies, encompassing communication and human factors and interacting with hundreds of participants. As a part of his background in communication research, he received extensive training in communication methods, including certification-level training in police hostage negotiation. Bryan uses his extensive training in advanced communication methods in UX research to help ensure maximum accuracy and detail in user feedback. Bryan enjoys innovating user research methods that integrate communication skills, working with such companies as eBay, Kodak, Microsoft, and BAE Systems.
