A number of my previous Research That Works columns on UXmatters have focused on semi-structured user research techniques. My interest in these techniques stems from my desire to get the most out of my time with research participants and to leverage foundational work from other disciplines to gain unique insights for user experience design. With this in mind, a colleague recommended that I try the laddering method of interviewing. After a brief review of the literature on this topic, I learned that laddering is particularly helpful in eliciting goals and underlying values and, therefore, potentially useful during the early stages of user experience research. This column introduces the laddering technique and describes my first experience trying it for myself.
In the design process we follow at my company, Mad*Pow Media Solutions, we start applying visual design once we have defined the conceptual direction and content strategy for a given design and refined our design approach through user research and iterative usability testing. Generally, we take a key screen whose structure and functionality we have finalized—for example, a layout for a home page or a dashboard page—and explore three alternatives for visual style. These three alternative visual designs, or comps, include the same content, but reflect different choices of color palette and imagery.
The idea is to present business owners and stakeholders with different visual design options from which they can choose. Sometimes there is a clear favorite among stakeholders or an option that makes the most sense from a brand perspective. Often, however, members of a project team disagree about which design direction to choose. If we’ve done our job right, there is a rationale behind the design decisions in each comp, but even so, there may be disagreement about which rationale is most appropriate for the situation.
A common activity at the outset of many design projects is a competitive review. As a designer, when you encounter a design problem, it’s a natural instinct to try to understand how others are solving the same or similar problems. However, as with other design-related activities, if you start a competitive review without a clear purpose and strategy, it may not be productive. One risk is wasting your time reviewing and auditing other sites, because you end up with findings that don’t help you design your own solution. Another risk is that the design and interactions of competitor offerings might influence your solution too heavily, whether you intend them to or not. Once you’ve seen how others have solved a particular problem, their solutions may subconsciously affect your own thinking.
But while competitive reviews pose some risks, I contend that doing them is still valuable. Designing without first understanding what others are doing in the same competitive space means you’ll miss an opportunity to leverage their experience, and you might not be cognizant of possible threats to your strategy. To differentiate your Web sites and applications in the marketplace, you must be aware of what others are doing. The key to a successful competitive review is having a clear objective and minimizing the risk of bias in your own designs. In this column, I’ll discuss a structured approach to competitive reviews that I’ve used successfully to help my team understand the competition. This approach focuses on identifying opportunities for differentiation.