Adopting systems thinking is critical for designing effective UX research, particularly at the enterprise level. Enterprise software is a complex ecosystem that propagates data from one set of applications to another, typically with no explicit articulation to the user of the rules that govern this flow of data. Making key software-architecture decisions based on an understanding of user needs regarding the transmission of data throughout this ecosystem is essential.
Domain-driven design (DDD) is a set of modeling techniques that can facilitate systems thinking. DDD is an approach to modeling software that accounts for business processes by explicitly articulating the relationships among teams and technical systems with the intent of accelerating the discovery work that is necessary to iterate on existing software.
The DDD concept of context mapping is a way of illustrating particular types of relationships between both teams and technical systems. It’s important to understand these relationships from a UX perspective because they illuminate how interactions between enterprise systems can significantly impact the user experience. In this article, I won’t elaborate on all of the different types of context mapping that the DDD approach encompasses; instead, I’ll focus on one specific type of context mapping: the customer-supplier relationship.
The customer-supplier relationship describes how two internal teams—and by extension, their associated systems—might relate to each other. Figure 1 illustrates this relationship. The supplier is upstream from the customer and, therefore, provides data to the customer. The customer and supplier teams must work together to ensure that customers receive what they need. The type of information the supplier propagates downstream determines whether the customer’s needs are met.
Image source—Figure adapted from Domain-Driven Design Distilled, by Vaughn Vernon
To make this relationship more concrete, let’s consider a real-world example. Imagine a call tracking-configuration application that lets users associate specific tracking numbers with certain marketing events. For example, users might want to set up separate tracking numbers for phone calls coming from an email-marketing campaign versus phone calls coming from visits to a specific landing page.
This call tracking-configuration application has enormous power to influence the data it sends to downstream applications. Now, let’s also imagine that this configuration application affects the downstream team that is responsible for building and maintaining analytics reporting. Their reports provide information on the efficacy of various marketing strategies. For example, they can answer questions about whether the email-marketing campaign has generated more phone calls than the marketing campaign driving users to a specific landing page.
To obtain clean, interpretable data in the downstream application, it’s necessary to be very careful about allowing users to add, define, and manipulate information in the upstream configuration application. Figure 2 provides an example of the relationship between the configuration application, or supplier, and the reporting application, or customer. The teams that support these separate, but interacting applications must work together to ensure that they meet the customer’s needs.
From a UX perspective, it’s important to examine user needs in the downstream application first so you can understand how to structure the upstream application. The downstream application is where the impact of the configuration application manifests most directly. Although the downstream application is reliant on the upstream application from a technical perspective, from a UX perspective, the upstream application should function in the service of the downstream application’s needs.
Let’s take a closer look at how the design of the upstream application can affect the various types of data that populate the downstream application. Figure 3 illustrates how the structure of the upstream supplier user interface—that is, the configuration application—could permit users to input data into open fields.
In Figure 3, the Lead Source field is an open field in which users can type whatever name they want to use to identify the lead source. While it might seem generous to allow users the flexibility to type whatever data they want, this has a number of negative consequences. First, there is no ability to standardize lead-source names. In this example, unwanted variability has been introduced into the system by spelling home page in two different ways: home page and homepage. Because these source names are different in the upstream application, they get propagated to the downstream application as two distinct data points—even if they refer to the same thing. The result is that data does not roll up appropriately in the downstream application, so the analytics reporting won’t be accurate.
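A small sketch can make this failure mode concrete. The records, field names, and lead-source values below are hypothetical stand-ins for the data the upstream configuration application might propagate; the point is simply that two free-text spellings of the same source fragment the downstream roll-up.

```python
from collections import Counter

# Hypothetical call records as they might arrive in the downstream
# reporting application. The lead-source names were typed freely into
# an open field in the upstream configuration application.
calls = [
    {"lead_source": "home page", "number": "customer service"},
    {"lead_source": "homepage", "number": "retail sales"},
    {"lead_source": "contact page", "number": "customer service"},
]

# Rolling up call counts by lead source: the two spellings of the
# home-page source land in separate buckets, so the report cannot
# show all home-page calls in aggregate.
calls_by_source = Counter(record["lead_source"] for record in calls)
print(calls_by_source)
# Counter({'home page': 1, 'homepage': 1, 'contact page': 1})
```

Because the grouping key is the raw string the user typed upstream, no amount of work in the reporting application can safely merge these buckets after the fact.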
Users should have the option of viewing all home-page data in aggregate—irrespective of whether the data is for the customer-service number or the retail-sales number—without having to export the data from the downstream reporting application and manipulate it manually. Thus, the configuration application’s design has a profound impact on the accuracy and value of the reporting application. As you can see in Figure 4, the impact on the analytics reporting is such that some data points are broken out into separate rows rather than being rolled up into the same category to show all home-page metrics in aggregate.
The reporting application provides data on how many phone calls originated from each of the listed lead sources. Users who want to know the total number of phone calls originating from the home page—regardless of whether they came in on the customer-service line or the retail-sales line—must add up the home-page data manually to get these numbers.
Therefore, allowing users to introduce unwanted variability in upstream applications has the potential to introduce inaccuracies in downstream reports.
Understanding Downstream Application Needs
Understanding user needs at the level of the downstream application first helps inform the design of the upstream application. It’s important to keep in mind that users of the configuration application might not also use the downstream reporting application. In this case, prioritize interviewing users who rely on the reporting application because that is where the downstream impacts of the configuration application’s design manifest most directly.
The best way to assess these impacts is by interviewing users to find out how they utilize the analytics reporting, what views of the data are critical, and whether they are able to interpret the data in the reports with no additional data manipulation. Questions you should consider asking include the following:
Does the analytics data in the reporting application give you the information you want with no need for further manipulation?
Do you ever need to export the data into Excel to aggregate any of the data points?
Do you ever need to export the data into Excel to segment any of the data points?
What is the value of doing this?
What are the different ways in which you need to be able to view the data?
Do you ever need to go into a different application to obtain additional data to make sense of the information in a report?
How do you combine the information from this application and other applications to get the data you need?
Users might tell you, for example, that they need to segment by lead source so they can differentiate between leads that come in through the home page versus the contact page. Alternatively, they might tell you they want to combine all leads that come in through the home page, regardless of whether the leads came from the customer-service number or the retail-sales number. The way in which users can interact with the upstream configuration application determines how users can view their data in the downstream reporting application.
Analytics reporting exists, in part, to prevent users from having to do unnecessary work. If users must export and manipulate data from reports, they are not realizing the application’s full value. Never penalize users for trying to consume data. Unfortunately, that is what many applications do to users.
Let’s consider how we might structure the upstream configuration application to accommodate the needs of users of the downstream analytics-reporting application. This requires re-examining the design of the configuration application. Allowing users to input whatever information they want into open fields in the configuration application increases the risk of introducing unwanted variance into downstream reports. Standardization of field inputs would mitigate this risk. Research into user needs for the downstream reporting application would enable identification of the ways in which they need data to be presented in reports. This research could also inform the types of standardized, prepopulated drop-down list items to use in the configuration application.
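One lightweight way to picture this standardization is as input validation in the upstream configuration application. The list of allowed lead sources below is hypothetical—in practice, it would come out of the research into how users of the downstream reporting application need to segment their data—and would back a prepopulated drop-down list rather than an open text field.

```python
# Hypothetical standardized lead-source names, derived from research
# into the views of the data that downstream reporting users need.
# These values would populate a drop-down list in the upstream
# configuration application instead of an open text field.
STANDARD_LEAD_SOURCES = {"home page", "contact page", "email campaign"}

def validate_lead_source(value: str) -> str:
    """Accept only a standardized lead-source name, so every record
    propagated downstream rolls up into a known reporting category."""
    if value not in STANDARD_LEAD_SOURCES:
        raise ValueError(
            f"Unknown lead source {value!r}; choose one of "
            f"{sorted(STANDARD_LEAD_SOURCES)}"
        )
    return value

validate_lead_source("home page")   # accepted
# validate_lead_source("homepage")  # would raise ValueError
```

With the vocabulary constrained upstream, the spelling variants that fragmented the downstream report simply cannot enter the system.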
As this article has shown, domain-driven design concepts such as the customer-supplier relationship can give UX professionals, technical teams, and product management a framework through which to conceptualize how independent, but connected applications interact with and influence each other. This approach to software modeling can help facilitate the type of systems thinking that is so crucial to user-centered software development at the enterprise level. It also shows the need for teams to communicate generously with each other to ensure they anticipate and tackle the challenges that emerge when users must navigate complex product ecosystems.
Amy has over 20 years’ experience as a professional researcher, both in the field of neuroscience and, more recently, in the UX space. She earned her PhD in Psychological & Brain Science from Dartmouth College. Amy’s training has informed her approach to understanding users. Her primary professional interests align with platform UX, a term she has coined to describe her approach to understanding the user experience from the perspective of platform-level concerns. These concerns include data sharing, data availability, permissions, and information architecture. Amy has dedicated herself to promoting the value of UX research among stakeholders and pioneering collaborative-research approaches for UX professionals, technical teams, and product managers.