As UX researchers working on cloud products at Google, we aim to make our products easier to use for enterprise users—for example, the technical developers and administrators of large applications such as shopping and human-resources (HR) Web sites. A significant component of learning about users’ behaviors is understanding how they navigate across surfaces—the different interfaces through which they use our applications to complete their goals. For example, enterprise users who are building or managing an application or service such as a shopping Web site might work across multiple surfaces when building and testing the site.
We learned early on that switching between multiple surfaces is a natural behavior for enterprise users, but the key question we wanted to answer was: when and why do users choose to move between surfaces? So we performed UX research to learn about these user behaviors. We also created a framework to help teams across the company prioritize the surfaces they should build for. In this article, we’ll outline our research process and share the lessons we learned along the way.
What Do We Mean by Surfaces?
Our users, who develop and administer enterprise applications, work primarily across four different types of surfaces.
1. Graphical User Interfaces (GUIs)
Users interact with a system or application through visualizations, icons, and graphical user-interface elements rather than code. A GUI is ideal for learning a new environment because the user can easily follow complex workflows. Plus, a GUI provides visual cues regarding successful task completion. GUIs support the visualization of graphs, metrics, and reports, making them great user interfaces for monitoring tasks.
2. Command-Line Interfaces (CLIs)
To use a CLI system or application, the user types brief commands or runs a script in a terminal. CLIs are incredibly useful when the user needs to run the same command multiple times. With a CLI, the user can quickly write a script that automates a repetitive process. As a result, development environments typically include a CLI.
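To make this concrete, here is a minimal, hypothetical shell sketch of the kind of repetitive task a CLI script can automate. The `deploy_service` function stands in for a real CLI command—for example, a `gcloud` or `kubectl` invocation—and the service and environment names are made up:

```shell
#!/bin/sh
# Hypothetical sketch: automating a repetitive task with a short CLI script.
# deploy_service stands in for a real CLI command; all names are illustrative.
deploy_service() {
  echo "deploying my-app to $1"
}

# Run the same command once per environment instead of typing it three times.
for env in dev staging prod; do
  deploy_service "$env"
done
```

Swapping the `echo` for an actual deployment command turns the same loop into real automation—exactly the kind of shortcut that makes users reach for a CLI.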
3. Application-Programming Interfaces (APIs)
An API is software that allows two applications to communicate with each other through code. For example, API calls power the maps features of ride-sharing services such as Lyft or Uber by communicating with map-rendering services such as Google Maps or OpenStreetMap.
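As a hedged sketch of that idea, the snippet below shows one application constructing a request to another application’s API. The endpoint `maps.example.com`, its query parameters, and the key are made-up placeholders, not a real service contract:

```shell
#!/bin/sh
# Hypothetical sketch: one application calling another's API over HTTP.
# The endpoint, query parameters, and key below are illustrative placeholders.
build_route_request() {
  echo "https://maps.example.com/v1/route?origin=$1&dest=$2&key=$MAPS_API_KEY"
}

MAPS_API_KEY="demo-key"
build_route_request "47.6062,-122.3321" "47.4502,-122.3088"
# A ride-sharing backend would then fetch this URL (for example, with curl)
# and render the returned route for the rider.
```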
4. Infrastructure as Code (IaC)
IaC consists of tools that let users write configuration files, or config files, that automate infrastructure operations. For example, suppose that a growing startup is running its services on Google Cloud. Then, after a year, because of their growing user base, they decide to expand to Amazon Web Services (AWS). Without IaC, they would have to rewrite all their infrastructure operations to work on AWS, which would require a huge manual effort. This is where IaC makes life simpler. IaC lets users write templates that automate many of the functions of APIs and CLIs, enabling their code to work across different infrastructure platforms and handling operations across multiple platforms. Terraform and Ansible are examples of IaC tools, each with its own domain-specific configuration language.
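As a rough illustration, a config file for Terraform—one such IaC tool—might look like the fragment below. The project, region, and bucket names are placeholders, and a real configuration would also pin provider versions and manage state:

```hcl
# Hypothetical Terraform configuration (IaC); all names are illustrative.
provider "google" {
  project = "my-project"
  region  = "us-central1"
}

# Declaring a storage bucket as code; Terraform creates or updates
# the real resource to match this declaration.
resource "google_storage_bucket" "assets" {
  name     = "my-project-assets"
  location = "US"
}
```

Running `terraform apply` against such a file reconciles the declared resources with the cloud provider’s actual state, replacing a sequence of manual CLI or API calls.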
The Research Question
When we build user experiences for our users, we want to provide a happy path that reduces the number of steps they must take to complete a goal. As a result, we might consider the need for context switches between user interfaces to be a suboptimal experience. However, our research with enterprise users of cloud products has shown that it is natural for users to switch between multiple surfaces to complete a single goal. Figure 1 shows how a UX research participant navigated between different surfaces to complete a goal in one of our cloud products. Each dot represents a click on a different surface.
As we mentioned earlier, the most important question we needed to answer through our research was: When and why do users choose to use different surfaces? We also wanted to understand how we could provide a seamless experience for our users across the different surfaces.
Our UX Research Process
To understand how users employ different surfaces to complete their goals, we analyzed dozens of research sessions to develop our insights and, ultimately, create our framework.
Step 1: A Literature Review
Our work built on 30 internal research studies across eight different cloud products such as GKE, GCP, Anthos, Rancher, and Terraform. Our methods included user interviews, usability testing, cognitive walkthroughs, surveys, conjoint analysis, and quantitative data analysis. We also analyzed external articles and academic papers on human-computer interaction (HCI) theory to generate our initial insights. Figure 2 shows some of the data we gathered.
Step 2: Task Mapping
Next, we needed to better understand how users traverse surfaces in the real world. We each examined and mapped a few common user journeys, which helped us understand how users employ surfaces in completing different goals. Figure 3 shows a sketch of a task map for a user journey.
Step 3: Developing the Framework
Using the insights from this process, we created a decision tree, shown in Figure 4, that highlights the factors influencing users’ decisions about which surfaces to use. This method let us operationalize and test our insights. The decision tree provided the foundation for our framework.
The Output: A Scalable Framework
We started this work to demystify how our users employ different surfaces to reach a goal. We wanted to understand how users currently use the different surfaces and help teams to figure out which ones to use when building particular products—and at what points in the development process. As a result, we created a framework to provide guidance that can help any team make decisions regarding the surfaces on which to focus for their product.
The framework, which is shown in Figure 5, comprises three parts:
1. User Goals
We identified eleven user goals for our product. These included generic goals such as learning, discovering, onboarding, and automating. We matched specific goals to surfaces that users commonly use in achieving those goals.
2. User Proficiency
We determined the users’ level of domain, or technical, knowledge and product expertise. We created three categories to represent a user’s level of expertise, as shown in Figure 7.
3. Product Maturity
Depending on a particular product’s maturity—for example, Beta, Private Preview, or General Availability—it might be necessary to prioritize different surfaces. For a new product, there is a greater need to build GUI surfaces that help users to learn. But, for more mature products, users might want the automation capabilities that an API or IaC offers.
We learned quite a bit about enterprise user behaviors from our research, including the following insights:
Surfaces complement each other. It is natural for enterprise users to rely on suites of surfaces in accomplishing their goals. They frequently require multiple touchpoints—and, thus, multiple surfaces—within a short time frame to complete a goal. For example, a user might type a command into a CLI, then validate the results in a GUI. If users have difficulty moving from one surface to another—or if the surfaces are too different from one another or use different terminology—the experience can be frustrating.
The users’ choice to rely on a suite of surfaces depends on their task and proficiency and the product’s maturity. These three variables can help you determine which surfaces to prioritize building. For example, if you’re creating a new product for expert users, prioritizing a GUI would be worthwhile. But be sure to provide an avenue for the user to learn and grow into more advanced, automated surfaces. On the other hand, if you already have an existing product for expert users, you’ll need to prioritize automation technologies such as CLIs and APIs.
When using a new tool or application, expert users rely on having either extra guidance or a GUI. When we began this project, our team assumed that expert users would use only automation technologies. Through our research, we debunked this myth. Expert users use GUIs heavily to learn about a new product and validate the results of their actions, even if they’re using automated tools to complete those actions. We also found that, while expert users decrease their usage of GUIs over time, many still use GUIs for their daily tasks. Therefore, it’s important to make sure you have a GUI to pair with your CLI or API. Try to discover what users like about the GUI and factor those preferences into the design of the other surfaces.
Best Practices and Lessons Learned from This Project
Now, let’s look at some lessons we’ve learned from our experience as UX researchers. If you work on a product that provides multiple surfaces for your users, we hope you’ll find the best practices we’ve shared in Table 1 helpful.
Table 1—Best practices for UX research on cross-surface user interfaces

Best practice: Talk to different stakeholders early on and consolidate relevant research findings to learn about different use cases.
Reason: This helps ensure that you don’t start from square one and enables you to triangulate your data and draw more powerful conclusions.

Best practice: Identify the assumptions and myths on which different teams base their product decisions.
Reason: Doing this early helps you better understand team members’ expectations and align your insights with those of cross-functional team members.

Best practice: Triangulate qualitative and quantitative data to strengthen your insights.
Reason: Don’t just look at one finding or another. Bringing your insights together at scale and in depth can help make them bulletproof.

Best practice: Create a framework that product owners can leverage in making their decisions. Evangelize the framework and educate your teams on how to use it.
Reason: Your work becomes more valuable the more successfully you communicate its usefulness. In addition to consolidating insights, spending the time to create an easy-to-use artifact helps many more teams to use your great work!
By conducting this research, we gained an understanding of how enterprise users work across different surfaces to complete their goals. We were also able to influence internal teams to apply this knowledge to their products. It is natural for users to switch between surfaces. Ultimately, users try to decrease the effort and risk of completing any task, so they choose whatever suite of surfaces helps them to achieve their goal faster.
Therefore, to enable seamless transitions between surfaces and improve the overall user experience, UX researchers, UX designers, and product owners need to understand user behaviors across surfaces. We hope the insights and approaches we’ve shared in our article have helped you to understand the factors motivating users to choose multiple surfaces and piqued your interest in understanding the cross-surface usage of your product.
Hayley is a hybrid UX researcher and designer at Google. She uses her research background to ground the experiences and systems that she designs to meet the real needs of users. She cares deeply about making people’s lives easier and more enjoyable through the design of better product experiences. Hayley is currently pursuing her Master’s in Human-Centered Design and Engineering at the University of Washington, where she applies her expanding UX skill set toward environmental and societal challenges.
Aakanksha is a mixed-methods UX researcher at Google. She is passionate about enabling seamless user experiences across Google’s systems and products. She uses her quantitative and qualitative research skills to understand user behaviors, and endeavors to bring the user’s voice to the forefront.
Sohit leads a team of researchers in Google Travel. He is a Board Certified Human Factors Practitioner with expertise in designing input devices and interactions. In the past, Sohit worked on multimodal input devices for Windows 10 and personalization for audio and visual experiences at Spotify. He loves using a mixed-methods approach to problem solving and draws from disciplines such as Human Factors, Behavioral Science, and Social Sciences to help design product experiences that create user value as well as business value.
Harini is a mixed-methods UX researcher at Google. She has a background in electronics engineering, computer science, and cognitive science. She was previously a software engineer. Harini currently manages a team of quantitative user researchers. Her areas of interest include the developer experience, designing and researching user experiences for non-GUI interfaces such as command-line interfaces (CLIs) and application-programming interfaces (APIs), and accessibility of developer tools.