The Clearleft Podcast


Season Three

Design Research


This is the Clearleft podcast.

Earlier this year Clearleft ran an event called UX Fest. It was the online version of our regular UX London conference. One of the speakers was Teresa Torres. She began by differentiating between discovery and delivery.


So all product teams have to discover what to build and they also have to deliver that product.


If you’re familiar with the double diamond model of the design process, this should resonate with you.


So we look at discovery as all the activities that we’re doing to decide what to build. So think about all the decisions that we’re making. How are we making those decisions? What data are we collecting? Who are we interacting with?


That’s the first diamond in the double diamond diagram.


Whereas delivery is the work that we’re doing to build and ship scalable production quality products.


That’s the second diamond.

Teresa’s focus is on that first diamond. She’s written a book called Continuous Discovery Habits.


I define continuous discovery as, at a minimum, weekly touchpoints with your customers by the team building the product, where that team is conducting small research activities in pursuit of a desired product outcome.


Sounds great. So what does the team look like for continuous discovery?


It starts with this idea of a product trio. So this is becoming a more common model. Some people call it a triad. Some people call it a three-legged stool. It’s this idea of the product manager, the design lead and the tech lead being jointly responsible for those discovery decisions.

With a trio based model, we’re basically saying let’s have these three roles work together from the beginning. Let’s have them drive our discovery decisions and jointly decide what to build.


Wait a minute. A product manager. A design lead. And a tech lead. That sounds great. But what Teresa has described sounds a lot like research. After her talk I asked her, where are the researchers?


So a lot of people interpret the trio as us leaving user researchers out. So I want to be really clear: we’re not. The vast majority of companies don’t have user researchers, and those that do don’t have enough user researchers. Maybe in 30 years we’ll get there. Hopefully it doesn’t take that long.

If we worked in an environment where most teams had a user researcher on their team, I would be talking about a quad. But that’s not our reality, right? So the idea of a trio is trying to reflect the resources that are available to most teams.


Ouch. So in an ideal world researchers would be involved in the continuous discovery process. But when it comes to research, we are not living in an ideal world. Gregg Bernstein, a researcher, also spoke at UX Fest.


On LinkedIn right now, there are about 127,000 open front-end designer positions worldwide. There are roughly 53,000 open product designer positions. There are roughly 49,000 open front-end engineer positions. I think you know where I’m going with this. There are roughly 2,400 open user research roles on LinkedIn right now.

And here’s the thing: almost every organization knows it should be doing research. We know that we should be gathering the information we need to do our best work. And we have these models that tell us to, for example, empathize and prototype and test with our users, or analyze and then test and then prototype, or enter a discovery process where we run focus groups and then interview and then survey.

My point is, we know we need to put users at the center of our work. And despite knowing all this, and despite the models we’ve made that demonstrate it, the deck is stacked against research.


So both Teresa and Gregg gave the same message. Researchers are thin on the ground. But even if research isn’t a role at your organization, perhaps it can be a skill.


So if you aren’t a researcher, and I’m guessing you’re not, because again, there are very few of us but a lot of everybody else, what can you do to bring the practice of research into your work and get those missing perspectives? You can start within your own organization. You can start with support. That’s usually the best place to start. And it starts with one Slack message or one single email.

In the early days of MailChimp, when I was a researcher there, I would send instant messages to the people on the support team, whether I knew them or not. I would just ping them out of the blue.

Let’s say I’m working on a project. I’d say, hey, what can you tell me about our billing page? Or, I’m just curious, what is the single biggest complaint, and what might we do to fix it? You know, what’s the thing that would reduce support tickets by 10%? I wanted an additional perspective in my work.

I wanted that context.


Another researcher who used to work at MailChimp is Stephanie Troeth. She’s also worked as a researcher at Clearleft.


Hi, I’m Steph. I am a strategist and researcher as well as a writer.


I asked Steph if research could be a skill rather than a role. Shouldn’t a well-rounded designer be able to do research?


It is definitely a different mindset. I’ve worn all these hats and I know that if I have a designer hat on, it’s not the same as a research hat.

The thing is, you can’t wear both at the same time. I cannot go into a room and be a researcher and a designer simultaneously. Because when you are a researcher, when you have the research hat on, the research mindset, your goal is to put aside your own ideas of what is right or wrong, true or false, what is good or what is bad. When you’re a researcher you’re going into a space thinking, “I’m going to learn everything I can. I’ve got some assumptions. Yeah, I can throw them out. They’re just assumptions. I’ve got some hypotheses. Yeah, I can throw them out too.” So when you’re a researcher, you are not solutionizing, and that is really difficult for most designers to do, from my personal experience. It takes practice.

But on the other hand, when you’re coming into a situation as a designer, you’re thinking, oh right, how do I come up with potential solutions? And so, you know, the mindset’s completely different.

And I know I cannot do both at the same time.


Here’s another former Clearleft researcher, Maite Otondo.


So you can be a generalist, but I guess that is only going to take you so far.

The thing is, you probably need different sets of skills to conduct research, then to create design concepts, and then to actually create the final design.

So I’m of the opinion that it’s better to have teams of people with different skills who can work together across that process.


I asked Maite to define some terms for me. Like what’s the difference between qualitative research and quantitative research?


In very simple terms, qualitative research is about understanding people’s perceptions of something. So understanding their stories in their own words.

And quantitative information is about understanding volume. So how many times something happened or how many people say something.

So I guess the difference is one is about stories told by people and the other is about volume.


But this categorization of qualitative and quantitative research isn’t the only dichotomy. There’s also generative research and evaluative research.

Maite describes the difference using the familiar double diamond model.


I guess you could say, in the first diamond you would explore people’s context. And then potentially what people’s needs are and what people’s behaviors are to start eventually thinking of a solution that could actually help them.

So some people call it generative research, the one you would conduct on the first diamond. So the more exploratory one.

And then the one you would conduct on the second diamond would be called evaluative research. That’s a term that’s used in the design industry. I don’t think it comes from researchers; it comes mostly from designers, I think. On the second diamond it’s more about people evaluating something that you have created.

So the first one probably is less product focused, it’s more context focused. Whereas evaluative research tends to be more focused on the product or the design that you’re working on.


Here’s Clearleft strategy director, Andy Thornton.


I think generative research is about exploring the problem space. It’s much more open-ended.

And evaluative research is much more about testing a thing: testing a prototype, testing the hypothesis. That’s how I describe the difference between those two. If you’re talking about the scientific method (which I know makes some researchers nervous, especially design researchers, because it’s not an entirely rigorous scientific process), evaluative research is when you’ve already formed a hypothesis and you want to validate or invalidate that hypothesis, to see if it holds some truth in a variety of scenarios.

Whereas I think generative research is less about getting as concrete an answer as possible and much more about finding the right questions to ask, which then need to be solved through a creative design process.


So the first one is about exploring the problem. And the second one is about exploring the solution.

So I guess the first step is a bit more general, a somewhat broader view. And then once you have that understanding of the context, the things that people are trying to achieve, and people’s needs, then you can go and evaluate a specific solution.


Generative research is the type of research that’s going to bring you more questions. It’s going to broaden your perspective. Evaluative research tends to be more about: is this good? Is this working like it should? So it’s quite narrow in terms of outcomes and what you’re trying to establish.

I see it as a spectrum. And the exciting place tends to be somewhere in the middle.

I love generative research, but often you can get back a lot of data that you may not know what to do with. So sometimes an interesting space to be is in the middle where you bounce between the two based on the data that you get back.

If you have something like a prototype, something you’ve already built, and you’re trying to decide whether it’s working as well as it should, be open-minded: you might get answers you were not expecting at all. Let that guide you to say, actually, we need to understand the landscape far more. Then you swing back to one side and say, okay, let’s try and understand the broader questions.

You can also go the other way. You can start out with broad questions, trying to understand the context. Then you see there are specific problems that people are struggling with, and you go, hey, here’s where we can try out something and see if it helps us understand the bigger picture better.

So for me, it’s not two separate camps, it’s a spectrum. And I feel like when you’re doing effective, pragmatic research, you’re bouncing between the two all the time.


Research is about the present and design is about the future.

So I guess it depends on how far you want to go in the future. It depends on what you’re trying to achieve.

So if, I don’t know, you want to explore the future of something in 50 years, maybe it’s enough to know general trends about things. Whereas if you want to launch something next year, maybe the level of detail you’re going to need to inform those decisions is going to be greater. So I think it depends.

Obviously research can reduce the risk, because at least you understand the context and the people you are designing for. That helps you frame decisions, because you understand what’s going on. So you have a better idea of how something that you create is going to change the future, based on what you understand about the present.


So research can help reduce risk, which is great. But there’s a danger in pushing that too far.


I’ve seen organizations where everything needs to be tested before being launched, for example.

So you have a bunch of researchers just conducting research about very small things, like very small interface decisions. That is the least impactful kind of research you can conduct. You’re missing the value that a bunch of researchers could add to an organization if the only aim of the research you ask them to conduct is to reduce risk. So I think it’s a bit problematic to think that is the only role of research. But at the same time, obviously, it can help you reduce risk.


Research can be a form of validation, right?


That’s Fonz Morris, another speaker at UX Fest. He works at Netflix where they do a huge amount of evaluative research.


But I think research is really necessary, because if we’re coming up with these decisions in our team here in California, how are we really going to know what people outside of our team think about that without doing the research? So I think research is really more education first than just validation. It’s us being able to learn from these ideas that we have: does this really work for our users? The validation comes second to me, as opposed to the education, where I need to know what my users think about this, whether it’s good or bad. I really value when I get feedback that may not be “this is the best design”, or that’s completely opposite of what I thought, because that allows me and my team to think about this from another perspective and have even more empathy for our users. So I’m a super, super supporter of research.


So a lot of us have adopted what I call a validation mindset, where we do most of the work upfront. When we’re all done, when we’ve got the prototype ready to go and we’ve designed everything, we put it in front of customers and we validate: did we get it right?

We do need to do that validation research, but the challenge with the validation mindset is that we’re often getting feedback too late, right?

At the point that we’re getting feedback, our engineers are waiting for it to go into the next sprint. We have very little time to make changes, so we tend to make just the small cosmetic changes, and we don’t have time to make the big changes that would dramatically improve the product based on our customers’ feedback.


I do have a problem with using the word validation. I have a serious problem. And I know at Clearleft we’ve used that. I’ve had conversations about this with people in the studio. ’Cause I think it’s not about validating. I don’t think the role of research is validating design.

I think the role of research is explaining a context. So describing a context from the point of view of the people that experience it in a way that inspires action, in a way that inspires design, in a way that helps us frame, imagine and facilitate a better future for the people in this context.

When it comes to evaluating solutions I think it’s about understanding what we think works well and what we think doesn’t work well, but that doesn’t mean validating. I think if you go to, for example, a usability testing session with the mindset of validation, that is going to influence the ways in which you ask questions, the tasks you create.

We know we’re biased already. So I think if we go with that validation mindset, we might end up maybe increasing our chances of confirmation bias, for example. So we are just going to be looking for the things that are useful for us to validate. Whereas we should be, I guess, challenging the solution and capturing the data that maybe supports some design decisions and capturing the data that goes against some design decisions.

And we use that to assess the solution, but it’s not that we validate. I think sometimes designers use these terms to increase the confidence of stakeholders. We’re going to validate something, and that sounds like: people are going to trust me, and we’re going to validate this, and then we’re going to go with it.

But researchers in other contexts would say, for example, that instead of validating a hypothesis, they are going to try to falsify a hypothesis. And that is a different approach. So I don’t think it’s about validation. I think it’s about gathering information to assess design decisions. Maybe as a result of that process you say, yes, we think that was the right decision to make, and in a way you validate it, but that is a second step. I don’t think you should go into any engagement with people to assess a design solution thinking that your role is to validate it.


The trick I think in generative research is to understand the assumptions that you’re coming in with. And then be very, very open-minded and think "actually I got my assumptions completely wrong", throw them out the window and let’s reform our assumptions. ’Cause when you’re doing generative research, it’s the wrong stage to have hypotheses. Hypotheses are towards the more evaluative end of that spectrum.

If I only had my assumptions reinforced, I might have done something wrong, to be honest with you. For me, a measure of how good a researcher I am is: did I find something I did not expect to find? If I found everything I expected to find, I might not have done my job quite so well.


From everything I’ve heard, it sounds to me like there’s a narrative arc to how research is treated in an organization. To start with, an organization might not have any researchers, even if they say they get the value of research. And then, when they do hire researchers, those researchers only get to do evaluative research.

What’s the next chapter in that story? Is it even possible to shift from evaluative research to the bigger picture, more open-ended generative research?


It is possible to shift a team’s outlook from evaluative to generative. It takes about 18 months to two years to make that shift.

The idea of evaluative research tends to be either a checkbox or “it’s not working, let’s fix whatever’s not working.” It’s the bandaid, and you always start with that bandaid. But the more research you do, the more you start making people think about their broader questions, little by little, and that’s how you get to a bigger strategic or generative piece.

The other option sometimes is, for me, I try to work with the highest level of decision maker that I can. Because their questions tend to be the broader ones and the more fundamental ones.

For example, in your episode on service design you talk about all the pieces that make a good experience. Research has the same mindset. For me it’s: where are the decisions being made that are causing a bad customer experience or a bad user experience? And often it’s at the business level.

And so if I want to change the user outcome, I almost have to figure out where that decision point is. Very similar to if you’re coming in as a strategist for user experience. But I’m also a strategist so maybe that’s why I think that way.

For me, it’s about where decisions get made. Because you can do all the research you want, and if nobody listens to it or acts on it, why spend the money? There is such a thing as research waste, or research debt if you like, where you have a lot of research but nothing gets done.

And often the conversations do have to happen at the stakeholder level in order for the research to make an impact.


For research to make an impact, there might be some uncomfortable truths to be told along the way.


I think we want to tell the truth.

And sometimes the truth is uncomfortable, because it might say that the market you’ve been chasing for the last few years is never going to work, because people are not open to buying it. That has happened to me several times in the research work that I’ve done. But then obviously you spin it around and say, I’ve just saved you millions.

The challenge is that people don’t like ideas being killed. I know that designers struggle with this too. The whole concept of someone having an idea, being stuck on it, and going with it. And when you’ve got something like research coming in and saying, actually, we don’t think it’s going to work, nobody wants to hear that, you know?

Actually one of the interesting issues we have in the growing and the careers of researchers is we are not always taught how to frame the truth.

Which is again, why for me, speaking to stakeholders in the early days is important because it helps you frame it the way they need to hear it.

I think by nature researchers would want to tell the truth. I think often we are constrained.

One of the things that got me into research in the first place was some really hard knocks at a startup where I had somebody else help me with the usability. It didn’t matter what we did; the CEO was never going to listen. And this happens quite a lot. You get people in positions of power who think they already know everything there is to know and don’t need to hear anything new.

And if the internal narrative in your organization is like that, no research will really have an impact.

What we’re up against is the fact that people think they already know everything there is to know, that they’re geniuses, and that they are going to succeed.

What we’re up against is the lack of humility in the industry.

Generally you get two camps. You get the people who believe in research, who, you know, come from a design background and understand the value of it.

And then you get the other camp where it’s: yeah, I know this industry, I’ve been doing this for 50 years, and sod any research. So you get both. It’s just a lot easier to find the latter than the former, sadly.


Thinking back to what Teresa and Gregg said, I had one last question for Steph. Why aren’t companies hiring more researchers?


I think it’s because the truth hurts.


My thanks to Steph Troeth and Maite Otondo, both brilliant researchers I’ve had the pleasure to work with at Clearleft. Now that I know how research is underserved in most organizations, I feel very lucky that at Clearleft research and researchers are an integral part of our process.

In fact, we’re looking for another design researcher right now. Could that be you? Head along to

And thanks for listening.