The Interview Survey: Getting the Accuracy of Interviewing Thousands of Users

By Dmitri Khanine

Published: March 7, 2016

“How do you find out what your market really wants?”

People get excited about design—especially the latest innovations from amazing leaders in user experience such as Apple and Google. But these organizations built their cool designs on a solid foundation of user research that let them achieve an intimate understanding of their customers.

Every amazing product starts with a great product manager who works hard to achieve a deep understanding of their market. This level of understanding doesn’t come easy. Achieving it takes years of observation and lots of communication. That’s why companies try to hire people who have backgrounds that are similar to those of their customers. Former accountants make great ERP Business Analysts. Former salespeople lead the development of sales automation tools. Nevertheless, most of the time these guys are wrong! There are lots of products that nobody wants to buy.

For example, it’s not enough to be a software developer yourself and believe that another developer would love to use a coding automation tool that would meet your needs. The market might not want it. Or perhaps organizations wouldn’t pay for that kind of tool. Savvy investors will point that out.

So how do you find out what your market really wants? If you’re working in a large enterprise and can’t talk to most of your users, how can you find out what users want? You might answer that you’ll conduct surveys. But do they really work?

The Problem with Surveys

Let’s face it, most surveys don’t really give us definitive results. Even when it feels like the results are giving us a good indication, we can’t really know for sure. Why? Because most surveys are built on assumptions—lots of assumptions.

Smart researchers have figured out that many folks just love to make us happy, so they’ll tell us exactly what they think we want to hear. Questions such as Are you satisfied with your current accounting system? would most likely get you a Yes, even if the system were a nightmare to use. Users might compare it to their supplier’s system that uses paper, or they could give you a Yes just so you’ll stop bothering them with questionnaires.

So smart researchers use Likert scales to make survey answers more accurate, asking questions such as: Your current accounting system is: Very useful, somewhat useful, not really useful, or not useful at all? Guess where most of the answers will be. They’ll be split between very useful and somewhat useful. So, to get more accurate data, smart researchers try using quantitative scales and ask questions such as How many times a day do you use your accounting system? and How long does it take to load on average?

But does that work? What if they were wrong to focus on the accounting system? What if, no matter how poor that system might be, the problem really lies in missing data—perhaps from major suppliers or subcontractors? What if that’s the real reason behind the screwed-up reports and all the additional time bookkeepers must spend working on the system, driving their time spent way above the industry average?

How would you figure that out with a survey? You can’t—unless you ask open-ended questions such as How would you make your Accounting Department more efficient? But that won’t work either. Why?

Why Can’t You Just Ask Users What They Want?

I’ll bet you know the answer: they won’t be able to tell you. People didn’t ask Henry Ford to build a car. They wanted faster carriage horses. They had no idea what a gasoline engine was and ridiculed him when they saw him driving his horseless carriage.

Observation doesn’t always work either! Watching people driving their carriages would have been of little value to Henry Ford—no matter how many people he observed or how much time he spent watching them.

Nobody asked Steve Jobs for an iPod either. They had no idea that it was even possible to store a thousand songs on a mini hard drive in an MP3 player that they could take with them anywhere. Sure, once Apple created the iPod, people were amazed and wanted it, but Apple had to conduct many interviews to find out that the market existed.

Interviews do work, but only if you ask the right kinds of questions. Interviews let you collect accurate data if you ask users about their problems first. So questions such as What do you hate most about your current MP3 player? would have worked. Apple would have learned about short battery life and having to load new songs every couple of days. But asking What would you like to see in the perfect MP3 player? would not have worked.

However, you need to conduct a lot of interviews to go after a new market—or even if you’re just trying to improve a business process in a large corporation. You’ll need to talk to a lot of people before you start seeing different people describe the same types of problems over and over again—revealing the repeated patterns. And even when you do, how would you pick out the problems that people feel most strongly about, the ones you’d really need to resolve?

Even if you could determine what problems they really need to solve, you might find that people wouldn’t be willing to pay for a product that solves them! Instead, they’d pay for a solution to the problem they want to solve—and that might be a very different problem.

Solution: The Interview Survey

What if we could combine the benefits of well-conducted interviews with the low cost and speed of an electronic survey? What if there were a simple, reliable way of interviewing a thousand people in a single week—hearing them mention every problem they or their organization is struggling with and being able to sort their responses according to their passion and commitment to solving them?

What if we were also able to map the responses to groups of users, so we could separate the problems of data-entry clerks from those of accountants and network admins? What kinds of possibilities would that open up for you? Isn’t that the kind of knowledge you’d ultimately like to have about your market or your organization? Well, it certainly worked well for us!

We used the interview survey shown in Figures 1 and 2 to collect insights about the Oracle Content Management market. As shown in Figure 1, we first asked people about the single biggest issue they have with Oracle Content Management, making it easy for them to focus on one problem.

Figure 1—Question about the single biggest issue


Then, as shown in Figure 2, we asked respondents our demographic questions, so we could map the most common types of problems to different types of users in the overall audience.

Figure 2—Demographic question


Finally, we asked an optional question, inviting respondents to provide their contact information so that we could get on the phone with some of them and find out more about the types of problems they were experiencing.
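
To make the shape of the resulting data concrete, here is a minimal sketch, in Python, of how a single exported response might be represented. The field names are hypothetical and simply mirror the three questions described above; the score and category columns discussed below get added later, during analysis.

```python
# A minimal, illustrative sketch of one exported BIB Survey response.
# Field names are assumptions that mirror the three survey questions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class BibResponse:
    sbi_text: str                 # free-text answer describing the single biggest issue
    role: str                     # demographic answer, such as "accountant" or "network admin"
    email: Optional[str] = None   # optional contact information for follow-up calls
```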

We now call this Interview Survey approach a Biggest Issue Bucket Survey, or BIB Survey, because it helps us identify the biggest issues that users in our market are facing, determine which groups of users feel most strongly about solving those issues, and define buckets for the types of problems people are most frequently looking to resolve.

Figure 3 shows an export of the survey data, to which we’ve added a few more columns to aid analysis.

Figure 3—Export of survey data for analysis


More specifically, we added the following columns:

  • Score—This column describes the level of engagement of a particular survey respondent. This is critical information for us because we want to emphasize responses from the people who are most committed to changing their situation. We focused on the top 20% of responses with the highest scores, assigning higher scores to responses whose authors provided more thorough descriptions of their problems or were willing to spend some time on the phone with us to discuss their situation further.
  • Category 1, 2, and 3—In these columns, we assigned up to three broad categories of problems under which we thought each user’s Single Biggest Issue (SBI) fell. In the example shown in Figure 3, these were Indexing, Database Problems, and Customizations. Even though we asked users about their #1 Single Biggest Issue, some actually described more than one problem in their response. By assigning different categories to those responses, we were able to handle each issue separately.

Once we sorted the responses by score, we could analyze the answers respondents gave for their SBI, identify some common problem categories, and take note of the language they used in describing them.
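
As a rough illustration of this analysis step, here is a hedged sketch in Python using pandas. The column names (score, role, sbi_text, and category_1 through category_3) are assumptions that mirror the columns described above; adjust them to match your actual export.

```python
# A rough sketch of the analysis described above, using pandas.
# Column names are assumptions for illustration; match them to your own export.
import pandas as pd

responses = pd.read_csv("bib_survey_export.csv")

# Sort by engagement score and keep the top 20% of responses.
responses = responses.sort_values("score", ascending=False)
cutoff = max(1, int(len(responses) * 0.2))
top_responses = responses.head(cutoff)

# Gather the up-to-three category columns into one long table,
# so that each assigned category becomes its own row.
categories = top_responses.melt(
    id_vars=["role", "sbi_text"],
    value_vars=["category_1", "category_2", "category_3"],
    value_name="category",
).dropna(subset=["category"])

# Which problem buckets come up most often among the most engaged respondents?
print(categories["category"].value_counts())

# Which buckets matter most to each group of users?
print(categories.groupby(["role", "category"]).size().sort_values(ascending=False))
```

Grouping the bucket counts by role is what lets you separate, say, the problems of data-entry clerks from those of accountants and network admins.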

Conclusion

By conducting a BIB Survey, you can get as close to a good interview as possible, while reaching thousands of users. You no longer have to limit yourself to a handful of interviews because of a lack of time or resources, then hope you’ve uncovered all of the major issues. Stop taking that risk! Now you can potentially ask every single user. You can also reach out to other departments and get an outside view.

My team developed this BIB Survey tool based on the ideas of Ryan Levesque [1], the ultra-successful marketer and product developer. We have had great success with it. I hope you find this approach valuable and that you’ll implement your own BIB Surveys in your organization.

Do you think you’ll try doing a BIB Survey? Do you have any ideas to add? I’d love to hear your thoughts in the comments.

Endnote

[1] Ryan Levesque. Ask: The Counterintuitive Online Formula to Discover Exactly What Your Customers Want to Buy, Create a Mass of Raving Fans, and Take Any Business to the Next Level. Plano, TX: Dunham Books, 2015.

2 Comments

I definitely agree this type of Interview Survey is useful for getting into better detail about issues people might have with a Web site. However, I have doubts about giving examples in the first question (Figure 1). By giving respondents suggestions on what to write, we might be affecting their thought process: they will start to think more about their stuck documents and might ignore issues that are not in the same category, such as the design. When asking questions, we should be really careful not to imply anything.

Thanks, Anja. Definitely a good point. We haven’t seen a lot of people going in that direction—the indexer and so on—but you’re right that every little bit helps!
