Surviving Enterprise UX: the importance of targeted user research
Making sense of the Madness of User Research, not just the Methods
In the previous installment of this series, we discussed strategies for communicating the value of UX in organizations that don’t already prioritize the effort. Today, we’ll talk about the importance of two more “soft” skills, this time in the area of user research: critical analysis and targeted focus.
As the resident UX elder on most of the design teams I work with, I’m frequently asked about the best way to conduct research. I have trouble answering this question every time. I know what they’re asking: they want my opinion on the best way to conduct a particular method. What I want to say in response is “the way that gets you the insight you need”. That answer wouldn’t be helpful on its own, of course, so I default to describing the steps involved in the “best practice” way of conducting whatever method we’re discussing.
I’ve long been a critic of dogmatic methodology, regardless of whether it’s taken from industry leaders or created in-house. I have always viewed UX methods as a toolkit from which one must choose the appropriate tool for the challenge at hand. And therein lies the problem. Quite often, practitioners don’t want to spend time critically analyzing the real problem to determine what’s best, given the myriad factors that will impact how they need to work.
They want to follow a recipe.
Here’s the thing… there isn’t one. And if I gave you one, you should be highly skeptical.
Prescribing specific methods misses the point.
There are dozens of methods to get to the same information, and within each there are alternative ways of conducting it that would still be considered best practice (I know, I know, Nielsen Norman Group purists will strongly disagree with me here). How-tos on conducting research methods abound; books, blogs, and videos can all show you how to conduct a method. So I’m not going to talk about best practices for research methods. For that, I defer to my preferred method bibles: the NN/g website, “Universal Methods of Design” by Bella Martin and Bruce Hanington, or, more recently, “This Is Service Design Doing: Applying Service Design Thinking in the Real World” by Marc Stickdorn, Markus Edgar Hormess, Adam Lawrence, and Jakob Schneider. My point is, there are myriad reliable sources you can go to for guidance on user research methodology. The methods themselves, however, aren’t actually where the real challenge lies.
The real challenge of UX research isn’t in conducting it…
As I said, the methods are not hard. Sure, they take a bit of practice to perform smoothly (we’ve all had the occasional cringe-worthy user interview or workshop), but after a few times, the tasks and activities involved should become second nature. Besides, the current trend toward democratizing research means insight gathering is evolving into a shared responsibility, not just the researcher’s job.
Everyone in the organization should do research.
On a recent project, I took the entire cross-functional product team into the field, including our stakeholder. While I didn’t expect each of them to be looking for the same insights that I was, hearing user perspectives first-hand was invaluable to each of them for different reasons. There’s nothing more motivating than hearing a user’s frustrations and challenges directly and in person. That experience gives life to the decks and reports that follow in a way that charts and graphs alone simply cannot.
So yes, data collecting and insight gathering should be everyone’s job. With some coaching and direction from an experienced researcher, anyone can be empowered to gather feedback from users.
So if everyone should be doing research, why do you need researchers?
Seems like a reasonable question on the face of it. While I agree that the act of gathering user insights should be shared, the truth is, it is only a fraction of the work UX researchers do.
The hardest parts of UX research come at the beginning and at the end.
As I mentioned, the methods of collecting insight and data are not difficult. They just take a bit of practice. The true expertise of a researcher comes in at the bookends of the research sprint.
The beginning of a research sprint should start with a critical assessment.
Knowing how to look at the landscape in front of you and assess what kind of insight is needed: from whom, how much, and how deep. This is one of the areas of research that requires the most expertise. It is also the most overlooked.
I’ve seen many UX researchers and designers default to user testing as the first (and often only) user research they do. Others will conduct a workshop or two as their only discovery activity. Still others will just do interviews and call it a day. These are all critical and valuable ways to gain useful information about users. But are they always right for what one needs to know? Don’t get me wrong, talking to users always has value no matter how you do it, but it’s not always the best method to uncover the truth that will lead to something actionable. Depending on what you’re building, sometimes observing without engaging can illuminate key insights that not even users themselves are conscious of.
Time, budget, and access to users often dictate that insight gathering be limited to one method or another. This makes it even more critical to choose the one that gives you the most bang for your buck.
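One lightweight way to make that assessment concrete is the landscape of methods that NN/g’s Christian Rohrer popularized, which sorts methods along two axes: what people say versus what they do, and why versus how many. The sketch below is a hypothetical helper in that spirit; the buckets and the method names in them are illustrative, not a canonical mapping.

```python
# Hypothetical helper to force the "critical assessment" step before
# picking a method. The axes (attitudinal vs. behavioral, qualitative
# vs. quantitative) follow the NN/g landscape of research methods;
# the methods listed in each bucket are examples, not an exhaustive set.

METHODS = {
    ("attitudinal", "qualitative"): ["user interviews", "diary studies"],
    ("attitudinal", "quantitative"): ["surveys", "scored card sorting"],
    ("behavioral", "qualitative"): ["field observation", "moderated usability tests"],
    ("behavioral", "quantitative"): ["analytics review", "A/B testing"],
}

def suggest_methods(source: str, evidence: str) -> list[str]:
    """source: 'attitudinal' (what people say) or 'behavioral' (what they do).
    evidence: 'qualitative' (why/how) or 'quantitative' (how many/how much)."""
    return METHODS[(source, evidence)]

# Example: we need to know *why* users abandon a workflow, based on what
# they actually do -> observe behavior, qualitatively.
print(suggest_methods("behavioral", "qualitative"))
# ['field observation', 'moderated usability tests']
```

The point isn’t the lookup table; it’s that you can’t fill in the two arguments without first doing the critical assessment.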
So if we were to break down the UX research process (note I said process, not methods) into steps, it would look something like this (see the sketch that follows the list):
- Figuring out what to learn (uncovering the real problem)
- Deciding how to learn it (creating a research plan)
- Uncovering or observing evidence (conducting the chosen method)
- Making sense of what was learned (analysis, synthesis, and reporting)
- Deciding how to act on it (leveraging the insights gained)
- Ensuring and planning for consistent action (informed product optimization)
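To make those bookends concrete, here is a minimal, hypothetical sketch of a research plan as a simple data structure; none of the field names or example values are canonical. The point is that if the fields for steps 1, 2, 5, and 6 are hard to fill in, the sprint is only covering steps 3 and 4.

```python
from dataclasses import dataclass, field

# A hypothetical research-plan skeleton mirroring the six steps above.
@dataclass
class ResearchPlan:
    problem_statement: str      # step 1: what we actually need to learn
    method_and_rationale: str   # step 2: how we'll learn it, and why this method
    participants: str           # step 3: whom we'll recruit or observe
    synthesis_approach: str     # step 4: how raw notes become findings
    decisions_informed: list[str] = field(default_factory=list)  # step 5
    follow_up_cadence: str = ""  # step 6: how findings keep shaping the product

# Illustrative values only:
plan = ResearchPlan(
    problem_statement="Why do new admins abandon initial setup?",
    method_and_rationale="Contextual inquiry; we need behavior, not opinions",
    participants="5 admins onboarded in the last 30 days",
    synthesis_approach="Affinity mapping into themes tied to setup steps",
    decisions_informed=["setup-flow redesign scope", "next sprint priorities"],
    follow_up_cadence="Re-check drop-off analytics two sprints after release",
)
```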
Of that list, only steps 3 and 4 are considered research in many organizations. So if everyone can do step 3, we probably don’t need researchers to do step 4, or so the thinking goes. What you end up with at the end of this type of research sprint is a nice report with varying levels of detail that seems to give insights into user behavior, but contains nothing actionable, because steps 1 and 5 were skipped and step 2 wasn’t given enough thought.
I’ve seen very elaborate spreadsheets with the most granular detail imaginable delivered as “research”, even though the endless rows and columns contained not a single piece of useful information. I’ve also seen (and sadly, before I knew better, created) superficial slide decks synthesizing non-statistically significant data, like a 5-person usability test. These decks contained nothing deep enough to answer any of the real questions.
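As an aside, the math behind the famous 5-person test is worth knowing, because it explains both why small tests are still worth running and why they prove nothing statistically. Nielsen and Landauer’s problem-discovery model says that if a usability problem affects a proportion p of users (roughly 0.31 in their data), the chance of observing it at least once in n sessions is 1 - (1 - p)^n. A quick sketch:

```python
# Nielsen/Landauer problem-discovery model: probability of observing a
# usability problem at least once across n sessions, assuming each user
# independently encounters it with probability p (~0.31 in their data).

def discovery_rate(n: int, p: float = 0.31) -> float:
    return 1 - (1 - p) ** n

for n in (1, 3, 5, 15):
    print(f"{n:2d} users -> {discovery_rate(n):.0%} of such problems seen")
# 1 users -> 31%, 3 users -> 67%, 5 users -> 84%, 15 users -> 100% (rounded)
```

Five users will surface most qualitative problems; they will not make a metric statistically significant, which is exactly why dressing that data up as quantitative findings misleads.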
No matter how deep or detailed your synthesis is, the focus should always be on actionable insights. My response to research documentation is always, “This is great, but do you know what to do next based on it? Does it tell you how to help optimize a user’s experience to achieve their goals? Can you build or modify software using the insights from it?” If not, then you’ve wasted precious time and typically a lot of money.
Look, I get it. I’ve been there. User research can be rewarding and a lot of fun for those of us who do it. I never get tired of learning more about the people who will ultimately use a product my team is building. Before I gained more experience, I would justify going as broad and deep as possible because “you don’t know what you don’t know”, and key insights are uncovered in random ways. That’s what I firmly believed.
Except… they aren’t.
I could have come to all of those same revelations with more careful planning and targeting prior to starting a study.
Being targeted and focused with user research in a lean environment is critical.
If you don’t conduct your initial problem analysis thoroughly and plan well, you risk not getting enough insight from users in time and either shortchanging the end product or becoming the bottleneck for development. If that happens, your product team may well decide that user research simply isn’t worth the impact it’s having on velocity. And that is a very hard position to come back from.