It’s not uncommon for projects to lack the time, money, or resources to conduct ideal user research activities. There are many reasons why this occurs:
Sometimes we’re brought onto a project late.
Perhaps we’re new to an organization that doesn’t really get UX.
Maybe a company is rushing to bring a product to market for some reason—and there are plenty of good and bad reasons this might be so—and there simply isn’t time to “go big”.
Perhaps your client or organization is following an Agile development methodology.
At such times, it can be tempting to just throw up our hands in dismay and do nothing or lament the fact that everything isn’t perfect. But the simple fact is that, as UX professionals, we can always add value, at any stage in a project—even if a project team can’t act on our advice straight away.
In recent months, there has been an interesting dialogue on the IxD Discussion mailing list, in which some participants have questioned the need for and benefits of doing user research rather than relying on the experience and intuition of designers. These comments led others to voice concerns about the actual quality of the user research companies are undertaking and the validity of any conclusions they have drawn from the resulting data.
Three articles or posts have been particularly influential in sparking interest in and debate on this topic. The first was Christopher Fahey’s excellent five-part series “User Research Smoke & Mirrors,” which laid out some of the problems that Chris sees with user research and discussed where UX professionals go awry during research and analysis. Of special interest to me was the following statement in Part 1 of the series:
“Many Web designers and consultancies, however, feel it’s not enough to use research to inform their design process. They go further: They try to make “scientific” user research the very foundation of their design process.”—Christopher Fahey
Recently, I was reading through a sample chapter of a soon-to-be-published book. The book, its author, and its topic shall remain nameless. However, I was disappointed to read, in what otherwise appeared at first glance to be an interesting publication, a sweeping claim to the effect that qualitative research doesn’t prove anything and that, if you want proof, you should perform quantitative research. The author’s basic assumption was that qualitative research can’t prove anything because it relies on small sample sizes, while quantitative research, with its large sample sizes, can.
This may come as a shock to everyone, but quantitative research does not provide proof of anything either.
Here, I’m using the word proof in the mathematical sense, because that is the context within which the author made those statements. In mathematics, a proof is a demonstration that, given certain axioms, some statement of interest is necessarily true. The important distinction here is the use of the word necessarily. In user research, as with all avenues of statistical inquiry, we’re able to demonstrate only that a hypothesis is probably true—or untrue—with some specific degree of certainty.
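To make this concrete, here is a minimal sketch, in Python and using entirely hypothetical numbers, of what quantitative data actually yields: a confidence interval—a statement that the true value probably lies within some range, at a stated level of certainty—never a proof that it does.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a proportion.

    With z = 1.96, this is an approximate 95% interval: the true
    success rate probably lies in this range, but there is no proof
    that it does--only a quantified degree of certainty.
    """
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

# Hypothetical usability test: 42 of 50 participants completed the task.
low, high = wilson_interval(42, 50)
print(f"Observed rate: 0.84; 95% CI: {low:.2f} to {high:.2f}")
```

Even with a large sample, the interval only narrows; it never collapses to certainty, which is exactly the distinction between statistical inference and mathematical proof.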
Granted, I’m being pedantic, and you might think this just an interesting exercise in semantics. But let me take you through a brief survey of this topic; then perhaps you’ll appreciate the importance of this distinction.