Leveraging User Data by Embedding UX Design Knowledge in Products
Published: September 20, 2010
“Science is what we understand well enough to explain to a computer. Art is everything else we do.”—Donald Knuth
The role of data in a UX design process usually goes something like this: User researchers or UX designers gather data about users and their needs, using a variety of qualitative and quantitative approaches. They then analyze the data—often developing documentation that synthesizes the data, such as a task analysis or a set of personas. Finally, they use their analysis as a basis for making design decisions or influencing the strategy of the broader organization. Throughout this process, UX professionals mediate the relationships between the data that describes users and their requirements, design goals, and business objectives, seeking to align them as closely as possible. This article looks at how we can make this process of data analysis and design—or redesign—more effective by embedding UX design knowledge in computer systems.
Leveraging User Data in Content Delivery
Companies are increasingly making more imaginative use of actual user data to improve their user experiences. Famously data driven, Google recently determined which shade of blue was most effective in encouraging users to click links by testing different shades with huge numbers of users. Netflix ran a competition to improve the accuracy of its algorithm for predicting the movies users would like. And, of course, Amazon has its review and recommendation features, which let customers rate product reviews and reduce the extent to which people can game the system. All of these data-driven approaches require large numbers of users to provide quantifiable user data. Leveraging user data through dynamic, data-driven algorithms enables sites to deliver customized content that is tailored more flexibly to a particular user’s needs. Drawing on the collective behaviors of users lets us create better user experiences—and, in most cases, increases sales as well.
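To make the Google example concrete, the core of such a test is a comparison of click-through rates between two variants, checked for statistical significance. The following is a minimal sketch of a two-proportion z-test; the click and impression counts are invented for illustration.

```python
import math

def ab_test(clicks_a, views_a, clicks_b, views_b):
    """Compare two variants' click-through rates with a two-proportion z-test."""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled proportion under the null hypothesis that both variants are identical.
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    return p_a, p_b, z

# Hypothetical data: shade A versus shade B of a link color.
p_a, p_b, z = ab_test(1100, 50000, 1210, 50000)
print(f"CTR A: {p_a:.2%}, CTR B: {p_b:.2%}, z = {z:.2f}")
```

With |z| above roughly 1.96, the difference is significant at the conventional 95% level, so a site could adopt variant B with some confidence. The scale matters: only large numbers of users make small differences in click-through rate detectable at all.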
That we would begin to embed knowledge about UX design in products was inevitable. Most processes become increasingly automated as our understanding of them grows. As UX professionals, we can either accept this, adapt, and try to influence the outcome or be steamrolled by it. As Knuth pointed out, science is what we understand. If we can automate some of what we understand, then why not automate it and focus on what are arguably the more interesting and challenging aspects of UX design?
User researchers elicit user data that informs both UX design and the development of the algorithms that deliver the types of customized, data-driven content I’ve described, focusing the entire design and development process on actionable user data. To design effective algorithms, it is essential that algorithm developers have a deep understanding of both a site’s subject matter and how people might use—or misuse—a site. Thus, integrating some aspects of user research into algorithms can have a huge impact on their effectiveness. UX designers must be able to design products that themselves elicit relevant data from users—the data that feeds the content-delivery algorithms.
We can view UX designs that use dynamically generated content in terms of Big-Static versus Little-Dynamic design decisions. For example, in a Web application Big-Static design decisions encompass the overall structure of a page, the broad type of content a site presents, and a site’s information architecture. We make these types of decisions infrequently, and they set the context for Little-Dynamic design decisions. The Little-Dynamic design decisions are those that we can delegate to algorithms. They dictate where and how we deliver dynamic, data-driven, and user-generated content. This combination of Big-Static and Little-Dynamic design decisions helps to ensure that users experience the predictability of a consistent site design together with the delivery of dynamic content that is relevant to their needs.
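The Big-Static versus Little-Dynamic split can be sketched in code: a fixed page template embodies the Big-Static decisions, while a swappable ranking function fills a dynamic slot per user. All names, tags, and catalog data below are hypothetical.

```python
# Big-Static decision: the page structure is fixed and changes rarely.
PAGE_TEMPLATE = """\
<header>Acme Store</header>
<nav>Home | Products | Account</nav>
<section id="recommendations">{recommendations}</section>
<footer>About | Contact</footer>"""

def recommend(user_history, catalog, n=3):
    """Little-Dynamic decision: rank catalog items by overlap with
    the categories this user has browsed."""
    seen = set(user_history)
    ranked = sorted(catalog, key=lambda item: -len(seen & set(item["tags"])))
    return [item["name"] for item in ranked[:n]]

catalog = [
    {"name": "Trail shoes", "tags": ["outdoor", "running"]},
    {"name": "Desk lamp", "tags": ["office"]},
    {"name": "Rain jacket", "tags": ["outdoor"]},
]
items = recommend(["outdoor", "running"], catalog)
page = PAGE_TEMPLATE.format(recommendations=", ".join(items))
print(page)
```

The template gives every user the same predictable structure, while the `recommend` function, which an algorithm team could replace or tune independently, determines only what fills the slot.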
Customizing content in this way builds toward a Web-3.0, or semantic Web, view of the world: a world view that lets us provide much more relevant user experiences based on a richer understanding of our users. One of the major obstacles the semantic Web faces is the construction of detailed ontologies—that is, knowledge structures that define the relationships between different terms. This is a huge challenge. Ontologies generally exist for only specific problem domains—for a very good reason: much human knowledge is overlaid with meaning that derives from culture and human experience. Thus, capturing knowledge in a way that is useful outside of well-defined boundaries is extremely difficult.
In real-world problem domains, the use of heuristic approaches is far more effective. These techniques are not especially rigorous and do not provide detailed information on particular topics. However, they do yield data that, in aggregate—over thousands or millions of users—provides real value. In many ways, a heuristic approach is a smarter approach, because system smarts then derive from the user base in a model that is akin to that of parallel processing: lots and lots of small decisions and fragments of knowledge, in aggregate, can produce a similar result to a more complex, centrally defined model.
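A small simulation illustrates why the aggregate is smarter than any individual signal. Each simulated user's rating below is just a "true" quality value plus personal noise, so any single rating is unreliable, yet the mean over many users converges on the truth. The quality value and noise model are invented for illustration.

```python
import random

random.seed(42)

# Hypothetical "true" quality of an item on a 1-5 scale.
TRUE_QUALITY = 4.2

def noisy_rating(true_value):
    """One user's rating: the true quality plus that user's individual bias,
    modeled as Gaussian noise (left unbounded for simplicity)."""
    return true_value + random.gauss(0, 1.0)

# No single rating is trustworthy, but the aggregate converges on the truth.
ratings = [noisy_rating(TRUE_QUALITY) for _ in range(100_000)]
estimate = sum(ratings) / len(ratings)
print(f"Aggregate estimate from {len(ratings)} users: {estimate:.2f}")
```

With 100,000 ratings and unit noise, the standard error of the mean is about 0.003, so the estimate lands very close to 4.2. This is the "parallel processing" effect described above: many weak, independent signals combine into a strong one.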
The challenge for the UX designer is to make getting this information from users simple and engaging. We must design sites so users want to invest their time in actively providing this information. We are already seeing the efficacy of this approach when users tag images, videos, music, and bookmarks; provide ratings; and write reviews.
Hunch takes a slightly more sophisticated approach: the site asks a series of quick-fire, multiple-choice questions to build a user profile that guides future advice, as shown in Figure 1.
Figure 1—A question on Hunch
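A toy sketch of this kind of profile building might look like the following: each multiple-choice answer increments weights for a few topics, and later advice is ranked by those accumulated weights. The questions, topics, and weights here are entirely invented; Hunch's actual model is proprietary and certainly more elaborate.

```python
# Each question maps each possible answer to topic-weight increments.
QUESTIONS = [
    ("Do you prefer the mountains or the beach?",
     {"mountains": {"hiking": 2}, "beach": {"swimming": 2}}),
    ("Morning person or night owl?",
     {"morning": {"hiking": 1}, "night": {"concerts": 2}}),
]

def build_profile(answers):
    """Accumulate topic weights from a user's multiple-choice answers."""
    profile = {}
    for (_, options), answer in zip(QUESTIONS, answers):
        for topic, weight in options[answer].items():
            profile[topic] = profile.get(topic, 0) + weight
    return profile

def advise(profile, activities):
    """Rank candidate activities by their accumulated profile weight."""
    return sorted(activities, key=lambda a: -profile.get(a, 0))

profile = build_profile(["mountains", "morning"])
print(advise(profile, ["swimming", "concerts", "hiking"]))
```

The appeal of the quick-fire format is that each question costs the user seconds, yet every answer adds a durable signal that improves all subsequent recommendations.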
As always, in the Web environment, evolution plays its part—approaches that work thrive; those that don’t fall by the wayside. In a sense, the evolution of user-generated content mirrors the development of computing and the Internet itself—something that is quick, easy, and works beats complex and slow every time—and is also more likely to achieve first-mover advantage when going to market.
In the future, a key skill for UX professionals will be the ability to influence user behavior, thereby improving sites’ data-gathering capabilities and enabling us to obtain data we can use to improve the user experience for all users. In developing this ability, we can draw upon a number of different fields of research, including behavioral economics, cognitive and social psychology, and perhaps even techniques we can learn from the light and dark sides of social engineering.
Increasingly, major companies are recognizing the need to learn about users—not only through what users do on the company’s own sites, but also through their public personas on social networking sites such as LinkedIn, Facebook, and Twitter. (The knowledge we can obtain from social networking sites underlies much of their perceived value.) Knowing what people like, how they interact, and what their friends like can provide a rich source of data that we can use in customizing our Web sites’ content for individual users. One of the major challenges facing UX professionals at the moment is the need to effectively and systematically map user data onto features we can implement on the Web, in a way that lets us capture and implement them automatically, with little or no human intervention.
All design involves compromise: we manage design compromises by creating primary and secondary personas and prioritizing design decisions accordingly. We can specify an effective approach for the implementation of data collection, analysis, and use only if we thoroughly understand users’ needs within particular contexts. Plus, in the process of doing so, we can expand our own understanding of our users and the user experiences that would meet their needs. The future lies in our developing quick, simple techniques that leverage our understanding of large numbers of users: as Stalin put it, “Quantity has a quality all its own.”