
Introducing the Build-Measure-Learn Approach to an Analytics Tool’s Redesign

October 20, 2014

As UX professionals, we pride ourselves on making software that is human-friendly and easy to use. But striking the right balance between adding the features that customers and users need and maintaining a clean, simple user-interface design is often harder than it seems. This is a challenge that most product teams share. In this case study, I’ll describe how our team at Bloomfire integrated Lean UX into our product-development process to address it.

How can you distinguish between what the people who purchase and use your products say they want and what they actually need? Luckily, there are some effective ways to reduce the risk that you might design products your customers don’t want or your users can’t use and, instead, to design for their actual needs.


Small Failures: The Path to Big Successes

One principle of Lean UX is to either get user validation or fail as early as possible. Failures teach you what’s not working in your product concept or what could work better. These failures may be small things that you can easily fix—such as a button label that users don’t understand or a link color that doesn’t suggest that it’s clickable. Or they may be an entire array of features that creates more confusion and frustration than the initial problems they were intended to solve. Learning and applying such lessons early on helps you to build a better product.

While using failures to create a better product may sound great, how can you do this and keep your job in the process? In my experience, it’s best to approach the product-design process as a series of small, low-risk experiments.

Introducing Experimentation into the Design Process

Companies have only finite amounts of time and resources to design and build solutions—especially companies with small teams like ours at Bloomfire. Our team already knew that thoughtfully building in opportunities to collect user feedback earlier in the product-design process would help us to know more quickly

  • whether we were building tools that our customers and users actually need
  • when it’s time to stop investing effort in an approach that’s not working and go back to the drawing board

Our Product and Engineering teams are moving away from releases that deliver large features after lengthy development cycles and toward building new features iteratively, over successive lighter-weight releases. So we wanted to experiment with how to introduce the validated learning and build-measure-learn approach of Lean UX into our new process. Updating an in-app Analytics page offered the perfect opportunity for us to do this.

When Defining the Problem Is the Problem

Lean UX bounds the problem that a team will solve in a particular release as a minimum viable product (MVP). Here’s how we did that for Bloomfire, a social, knowledge-sharing platform that lets companies tap into the collective knowledge of their employees. Knowledge sharing can take the form of sharing files, asking questions and getting answers, solving problems, locating information, and identifying team members who are subject-matter experts.

Our Analytics page already shared a lot of high-level, general information about what was happening within a user community. Most of this information was available to all members of a community—with just a few visualizations that were available only to administrators. Because of this feature’s broad audience, it didn’t offer much in-depth data or the ability to drill down to information about what specific members were doing or to see the level of engagement around individual pieces of content.

However, within the last year, our market focus has shifted from small businesses to enterprise customers. Based on our conversations with the Customer Success and Sales teams, we knew that there were major pain points for existing users who administered their company’s user community. While we knew a lot about the needs of our small-business customers, we wanted to collect more feedback about how our new enterprise customers were using Bloomfire and what essential information about their communities was either missing or difficult to find.

We kicked off the project with the understanding that any new Analytics feature would need to evolve and adapt quickly once we had launched it.

We decided that our MVP would not include any data visualizations, and we would instead focus on raw data reports. This would allow us to modify the data that each report included—based on customer feedback—without much development time. Since the ways different customers use the product can vary so greatly, our MVP also focused on allowing users to export data to CSV, so they could download, then customize their reports in whatever way they wanted outside of Bloomfire. Though we had decided to focus on creating and exporting raw-data reports, we weren’t sure which reports to create first or what data they should include.
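To make the raw-data approach concrete, here is a minimal sketch, in Python, of what generating one of these downloadable reports could look like. The column names and the export_report_to_csv helper are hypothetical, chosen only for illustration; they are not our actual schema or code.

```python
import csv

# Hypothetical report rows: the column names are illustrative, not an actual Bloomfire schema.
report_rows = [
    {"contribution": "Onboarding FAQ", "type": "post", "views": 142, "comments": 9},
    {"contribution": "Expense policy?", "type": "question", "views": 87, "comments": 4},
]

def export_report_to_csv(rows, path):
    """Write a raw-data report to a CSV file that users can download and customize offline."""
    fieldnames = list(rows[0].keys())
    with open(path, "w", newline="") as csv_file:
        writer = csv.DictWriter(csv_file, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)

export_report_to_csv(report_rows, "contribution_report.csv")
```

Keeping the export this thin is what made it cheap to change which columns a report included between iterations.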

Being Careful About What We Asked

To determine what reports to prioritize and what data they should comprise, we decided to create an in-app survey that would have a two-fold mission:

  1. Gather more feedback directly from customers and users.
  2. Develop a list of customers that we could reach out to during the design phase.

When a user visited the Analytics page and some trigger criteria had been met, this survey appeared within the application. Our survey asked a single, open-ended question: “If you had to name one thing to change on the Analytics dashboards, what would that be?”
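We haven’t described the exact trigger criteria here, but gating logic for an in-app survey of this kind generally reduces to a small check like the sketch below. The visit-count threshold and the field names are assumptions made purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class AnalyticsPageVisit:
    user_email: str
    visit_count: int       # how many times this user has opened the Analytics page
    has_seen_survey: bool  # whether the survey has already been shown to this user

# Hypothetical threshold; the real criteria could combine several signals.
MIN_VISITS_BEFORE_SURVEY = 3

def should_show_survey(visit: AnalyticsPageVisit) -> bool:
    """Show the in-app survey only to engaged users who haven't already seen it."""
    return not visit.has_seen_survey and visit.visit_count >= MIN_VISITS_BEFORE_SURVEY

# Example: a third-time visitor who hasn't been surveyed yet gets the prompt.
print(should_show_survey(AnalyticsPageVisit("ada@example.com", 3, False)))  # True
```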

The usefulness of the answers that we received to our survey question was somewhat mixed. The more helpful answers validated some of our initial assumptions—such as the need for more flexibility on date ranges and the ability to view a community’s entire history. A few data values that we hadn’t expected to be useful also surfaced. But the less helpful answers were either too vague or seemingly contradictory—for example, “More clarity,” “Cram more data into a single screen view,” or simply requests for “More data!”

A huge advantage of our survey was that it collected participants’ email addresses—which correspond to user names—helping us to build a list of customers to whom we could reach out. On subsequent projects, we’ve found that our in-app surveys give us access to a broader sample of customers who use a feature—not simply a list of the most vocal customers or people our team has contacted recently.

Scheduling Time for Experiments

There are many ways in which we could interpret “more”—especially given the different ways in which our customers use the product—so there was a risk that our guess would be wrong. How were we going to discover the right balance between providing more detail and sticking to our mission of simplicity? After our initial analysis of the survey results, we decided to run two rounds of qualitative experiments:

  1. Validate that we had included the right data in the reports.
  2. Test a user interface that would let users find and select their reports.

So, over a two-week period, we set aside two days a week for remote usability tests. The first week, we ran the first experiment; the next week, the second. We invited customers who had responded to the Analytics survey to participate in a half-hour usability study, using GoToMeeting. Customers chose the time slots that worked best for them over the four days available. More than a dozen customers volunteered their time to test-drive our design ideas and shared invaluable feedback.

Validating Our Assumptions First

For the first week of testing, we designed an experiment to validate that we were creating reports that included the data points that our customers needed. We decided that conducting usability testing on the spreadsheets first would help us to know whether we’d chosen the right information—before we started designing the screen that would let users select their reports. So we designed spreadsheets that represented the reports that users would download from the Analytics page. These spreadsheets, which mocked up three months of activity for a hypothetical community, addressed the use cases that we’d identified. We imported our mock data into spreadsheets in Google Sheets and linked them together in a Google Docs document.
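For readers who want to run a similar experiment, the mock activity data behind such spreadsheets can be generated with a short script before importing it into Google Sheets. The sketch below, with invented column names and value ranges, shows one way to produce three months of plausible rows; it is not the data we actually used.

```python
import csv
import random
from datetime import date, timedelta

# Generate three months of invented community activity for a mock spreadsheet.
# The columns and value ranges are illustrative only.
random.seed(7)  # keep the mock data reproducible between test sessions
start = date(2014, 7, 1)
rows = []
for day_offset in range(90):
    day = start + timedelta(days=day_offset)
    rows.append({
        "date": day.isoformat(),
        "new_posts": random.randint(0, 5),
        "new_questions": random.randint(0, 3),
        "views": random.randint(20, 200),
    })

with open("mock_community_activity.csv", "w", newline="") as csv_file:
    writer = csv.DictWriter(csv_file, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
```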

At the beginning of each virtual test session, we asked users to think back to the last time they needed to answer questions about their community—whether they referred to this data regularly or occasionally needed answers to questions that they weren’t quite sure how to answer. We then asked them to tell us about their three most common usage scenarios. If we had prepared a spreadsheet that addressed one or more of the scenarios that they described, we directed them to the appropriate spreadsheet and used their scenarios as their test tasks. (We chose to limit each test session to three tasks so we could stay within the planned session duration.) But we had lists of up to five possible tasks that we could choose to test—in case their scenarios were edge cases or we hadn’t prepared a spreadsheet for a scenario.

We instructed participants to pretend that they’d downloaded each spreadsheet from their Bloomfire site and were viewing them offline. We asked them to read through each of the column headers in a spreadsheet, then describe what each label meant and what data it represented. We learned that some of the column labels that we had chosen were confusing—or meaningless in the context of users’ mental model of Bloomfire. For example, we learned that using the label Consumption to identify how often community members had read a contribution wasn’t as meaningful as the label Views.

The open-ended questions that we asked at the end of each test session, together with customers’ sharing their screens with us during the virtual sessions, gave us the opportunity to learn about some of the painful, circuitous routes that customers were taking to find data that we hadn’t surfaced in the Analytics tools. For example, several customers showed us the workaround they used to learn whether a new community member had viewed all of the content in a Series playlist.

Users can add the following types of contributions to their Bloomfire community:

  • posts
  • questions
  • series, or virtual playlists of posts and questions

To determine whether a community member had viewed all of the contributions in a series, users were visiting each contribution in the series, choosing the Viewed details for a post or question, then searching for that member’s name in the list. They repeated this process until they’d visited each contribution in the playlist. Depending on the number of items in a series, this could be a very time-consuming workaround.
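A raw-data report can collapse that workaround into a single lookup. As a rough sketch, assuming a hypothetical log of (member, contribution) view records rather than Bloomfire’s actual data model, the aggregation looks something like this:

```python
# Hypothetical view log: a (member, contribution) pair is recorded whenever a member views an item.
views = {
    ("dana", "post-1"), ("dana", "post-2"), ("dana", "question-1"),
    ("lee", "post-1"),
}

# The contributions that make up one series (a virtual playlist of posts and questions).
series_contributions = ["post-1", "post-2", "question-1"]

def members_who_completed(series, view_log, members):
    """Return the members who have viewed every contribution in the series."""
    return [
        member for member in members
        if all((member, contribution) in view_log for contribution in series)
    ]

print(members_who_completed(series_contributions, views, ["dana", "lee"]))
# ['dana']  -- one report row instead of checking every item's Viewed details by hand
```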

Iterating, Iterating, Iterating

Throughout the two days of testing, we used both users’ feedback and the stumbling blocks that we had observed during testing to inform design changes to the spreadsheets. Between test sessions, we iterated on the design; then we tested again.

Once we were comfortable that we were showing the right information, our next experiment focused on designing a user interface that would let users select their desired report and easily move through different levels of detail.

Using what we had learned by testing the raw-data exports, we built a lightweight Axure prototype of a user interface that let users navigate between the reports. The purpose of the second experiment was to validate how to group the reports logically, how to navigate between different levels of detail, and what to label the groups. Much as in the first experiment, we were able to make small changes to the prototype between testing days to see which changes got the best results.

Well Begun Is Half Done

Weeks later, a few of the users who had volunteered for the earlier experiments are now participating in a limited beta program for the new Analytics tools. Our beta testers are already sending us feedback. We’re pairing that user feedback with regular calls to customers who are very engaged with Analytics. We’re also preparing traffic reports to monitor usage trends for the Analytics page to see what metrics change.
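The traffic reports themselves need not be elaborate; a simple roll-up of Analytics page views by week is often enough to show whether usage is trending up after a release. The event format in the sketch below is hypothetical, chosen only to illustrate the idea.

```python
from collections import Counter
from datetime import date

# Hypothetical page-view events for the Analytics page: (user, date of visit).
page_views = [
    ("ada", date(2014, 10, 6)), ("ada", date(2014, 10, 7)),
    ("lee", date(2014, 10, 8)), ("ada", date(2014, 10, 14)),
]

def weekly_visits(events):
    """Count Analytics page visits per ISO week to watch how usage trends change."""
    return Counter(visit_date.isocalendar()[1] for _, visit_date in events)

print(weekly_visits(page_views))  # Counter({41: 3, 42: 1})
```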

We look forward to sharing the first parts of the newly redesigned reporting features with our customers very soon. After doing several rounds of experiments, we’re confident that the design direction we’ve taken is many times better than what we would have designed without our customers’ help. 

Manager of User Experience at Lifesize

Owner at Winnermint

Austin, Texas, USA

Stephanie Schuhmacher

Stephanie has more than 18 years of experience in user-interface (UI) and UX design for Web and mobile applications. At Bloomfire, she leads UX and UI efforts relating to the development of the company’s enterprise knowledge and collaboration software and is responsible for UX prototyping, user-interface design, and usability testing. Stephanie is co-founder and organizer of Ladies That UX, Austin—the local chapter of Ladies That UX, an international organization for women in UX design. She is a former adjunct faculty member of Austin Community College, where she taught and developed the curriculum in Web design and user-interface design for the Visual Communications department. She spoke about Money Outside the Mainstream: Designing the Experience of Alternative Financial Products at UXPA Austin’s World Usability Day 2012 event. Stephanie has a B.A. in Art from The University of Texas at Austin with a focus in Transmedia Studio Art.

