Four characteristics of bad research
This is an excerpt from an upcoming workshop series I’m running in Melbourne in September 2019.
A couple of weeks ago we talked about aligning whys and whats and thinking about hows in all sorts of abstract ways.
I admit I was simply keen (and just a bit nervous) to get my first newsletter up and out there, so I didn’t spend much time thinking about audience or any of the stuff I coach and consult about - I traded purely on the goodwill of my close community.
So thanks for coming through!
Post #3 expands on the first newsletter with:
the inclusion of an example to take things from abstract to grounded
a dive into what commonly happens when consulting researchers (be they agencies or independents) do research and how it’s often terribly, terribly bad.
This topic drives me absolutely bonkers because of the amount of money my clients (and the world at large - and, yes, I wish they were the same thing) spend on incompetent, cowardly, callow and shallow research which creates nothing of substance: no new knowledge, no action, no intent to act.
Basically, to bring it back to our theory so far, all this bad research reveals is an alignment of organisational whats with customer whats. No clear hows (let alone whys) are shared for the reasons we will list and explore below.
The Scenario
N.B. This is made up because I don’t want to publicly shame anyone, no matter how ashamed they should be.
A large city council 1 engages a consultancy to help them understand their residents’ feelings about changes in parking zones around a commercial hub.
At the outset everything seems okay. They (the consultants) look at residents, commercial operators, shoppers, tourists, service vehicles etc. - pretty much a solid cross-section of everyone who might be impacted by the zoning changes.
Once they’ve agreed on who they’re researching, they do some qualitative work to understand what people think. Great starting point 2. But cracks are starting to form rapidly in the background (we’ll talk about them shortly). After a few interviews and workshops of varying shapes and sizes they have a whole bunch of data.
And this is where it turns to absolute shit. Right here. Why?
Because they just don't do anything with that data. Anything. At all.
They collate the data into clusters of similar concepts and turn it into something that looks like this:
Trading in “the WHAT”
Sorry? So. The Hell 3. What?
It’s true. This is sadly the level of detail frequently provided to paying clients: a pie-chart of “duh”, stinky and useless and worth so little as to make me grumpy just writing this.
I mean, of course the participants would have these issues, would feel this way. The distribution is interesting but useless in the absence of, well, anything to formulate an intelligent inquiry from.
To go back to the “cracks appearing” comment above, what’s even more annoying is that the consultants had people in interviews and workshops where they could have uncovered something interesting - and so much more.
What these consultants are doing is “trading in the WHAT”: it’s superficial, it’s abstract and flatulent while masquerading as simple, concrete and useful. I’ll explain why shortly and give some depth to it, both in terms of why it’s bad and how to, for the love of all that’s good, not do it yourself.
So, as an attempt to get something juicy, the consultants “wisely and eruditely” ask the participants in this reality-TV-transcript farce to describe what they would prefer to happen or, rather, what they would want in order to offset the negatives. It’s a “stiff biscuits, we’re doing it anyway” situation. Oh, so juicy. Like the juice at the bottom of your rubbish bin.
And it looks something like this:
Or rather, this:
Now that bin juice really stinks. “Might; consider; no; free stuff” - all of these are useless too. They’re INDICATORS of information, of insight opportunities, simply not useful yet (except in the broadest of ways, and then only to design more research). Why not useful yet? Because no clear action can be taken that’s either sensible or feasible. That’s why it’s bad research. It leads to nothing because:
I’m still asking, “So what?”
For this outcome they could have just done an online survey with a whole bunch of “on a scale of 1 to 5” questions and a free-text box for “anything we’ve missed”.
Research MUST BE ACTIONABLE in order to be VALUABLE to people. 4
So what does it take to make this research actionable?
Let’s return for a moment (while I regain my composure) to what we mentioned up front, about this research being:
Shallow
Callow
Cowardly
Incompetent
Let’s take them one at a time. This is fun.
1. Shallow
The immediate opportunity created by having people there to ask questions of is that, if you know your business, you can discover so much richness and depth in their thoughts, circumstances, aspirations, frustrations etc. It’s in that richness that the value of research lies.
In my trifling example above, depth is never achieved, even though there is a wealth of opportunities to uncover it, to dive in and find out why people want, think and feel what they do. That the consultants didn’t (or couldn’t) take an opportunity to dive deeper is an example of shallow research. Consider this:
“63% of businesses wanted dedicated parking as a response to the re-zoning.”
How does this offset the negative of the re-zoning in the first place? Why is dedicated parking important in this instance, especially when these businesses are (presumably) only trading in short-term customer experiences in-store? What is it about these businesses’ customers that requires parking directly outside the place of business? Are bulky or heavy items being sold directly?
Sadly, even this superficial and off-the-cuff list of questions cannot be answered now without the client paying for more research. In those hows and whys is an infinite array of insights waiting to be discovered, insights which might significantly shift the re-zoning towards innovating positively rather than negatively. But as the questions were never asked, the opportunities cannot be known.
2. Callow
This points to the immaturity, be it youth or inexperience, of the people running the research. Typically, large agencies call upon only a few key experienced people relative to the size of their dedicated team. The experienced people are usually in stakeholder management roles and provide oversight (at maybe one day a week or less) of the team’s actions and deliberations.
The economies that determine team design in consultancies favour profit and partnership maintenance over clear, deep research. They have a process, they scale the process and they add text and images to templates. Unfair? Have you ever seen another client’s data or branding accidentally included in draft slide decks sent to your teams? It happens too often for calling it out to be unfair.
The irony is that often the experience to run good research is there; it’s just in the wrong place to do any good for the client. And if the research expertise isn’t there, and the client has no ability to identify bad research and inexperience, then the monster grows unchallenged.
3. Cowardly
One thing that is crucial to a successful outcome for any client is the courage to share what you think and why you think it. Maybe even how to get the outcomes you’re chasing.
The catch is: it’s really risky for a consultancy to make recommendations. It takes time, courage and leaps of inference to create meaningful and profound outcomes. It takes NOTHING to aggregate what people think and turn that into your message.
See what’s happening here? Not only is a lot of traditional consultant research shallow, lacking the depth to really see what’s going on and what you could do to make it better, but it lacks the courage to say what the research might actually mean. People hide behind “what the participants said” as a way to mitigate the risk of being wrong that comes with offering any substantial insight.
They fail to draw opportunity from rigorous and intelligent inference and speculation.
In this way, real design research is courageous. Good research attempts to own insight and connect it to actions which might positively influence outcomes for mutual benefit. Bad research is superficial cowardice hiding behind wish lists and verbatims, trading in the simple whats of participants who could have been doing something far more valuable with their time.
Real design, driven by good research, is fundamentally courageous because it calls upon individuals and teams to decipher what they’re learning and communicate it in a way which leads clearly to action and outcomes.
Real research requires leaps. Leaps of logic, leaps of inference, leaps of empathy, of faith and intuition. Leaps of courage to share what you truly think and feel based on the richness you’ve had the good intent and deliberate action to be immersed in, and the subtle, empathetic and generative approach you've taken to identify the profound in the commonplace.
Cowardice hides in the commonplace. Courage thrives in expressing the possible.
4. Incompetent
In truth, incompetence is a description of the set of three things above, as this complicated image shows:
Bad research is incompetent. Don’t confuse that with inconclusive or unsuccessful research, though; sometimes you don’t get what you’re after (although it’s pretty darn rare to come up completely empty if you have an inkling of how to get from what to how, then why, and back again, adding new and actionable value all the way).
Bad research is everywhere.
N.B. There are a whole lot of reasons why bad research happens. It’s not always the fault of the researcher or the consultancy. Often, immature clients (possessing some of the traits we’ve covered above) will constrain the scope to prove out their own biases, to confirm their own hypotheses rather than test them. They might also crush the courageous leaps of even the most experienced teams with a demand for “proof”: a paper trail of evidence which cannot exist in the abductive and playful collaborations of even the most pragmatic and rigorous design research 5.
Good research needs the space, competence and courage of both researchers and clients to succeed. A courageous researcher and a cowardly client do not good bedfellows make, and yet they are a common pairing.
Finding clients who “get it” is rare and quite an emotional discovery. Clients who understand that real insight leading to positive change (call it innovation, improvement, whatever) is an uncertain and nuanced art, science and practice are deeply valuable, and one should do EVERYTHING in one’s power to keep their flame alive.
TL;DR: So, how do you spot bad research?
That’s actually easier than one might think.
If you find yourself asking “so what” frequently and at the end of a research document, then you’re in the land of bad research. Asking “so what” SHOULD happen, but while you’re conducting the research and making sense of your material, your findings, to construct insights for action. If you’re still asking it after all of that has happened, then it’s likely bad research.
If you see wish lists and actions derived purely from verbatims, clustering activities and pie charts of responses, then you’re in bad research territory 6.
If there are no interesting actions (or any actions at all that can be derived from the research), then it’s bad. Research is an enabler of action, not an end in its own right (except in academia, but I’m working in a world where return on investment is measured more in outcomes than in progressing theory 7).
Postscript
As I mentioned at the beginning, this week’s newsletter is an excerpt from a workshop series I’m running in Melbourne in September 2019. I’ll be advertising on my LinkedIn and my Twitter, but I’ve included a link here:
Post-postscript
Like and share please! These newsletters take considerable time to craft (this one was about seven hours) and the more people read them, the more I’ll love you all and ensure that what I write is actually worth writing. This, like all things really, is a “life in beta” activity - if it’s valuable to everyone (imagine the VX maps from last newsletter) then I’ll keep going, but wailing into a void is something I’ll happily leave to my goth past :)
All my love and see you next newsletter with an exploration of a very common scenario in the life of a consultant: bathrooms.
Matt
Could be private too; the impacts are the same for our purposes here ↩︎
But not end-point. ↩︎
Fuck. ↩︎
Except maybe in academic circles. This isn’t a bad thing - academic research can be powerful, but we’re working with clients who want to do things sooner rather than later, remember. ↩︎
Henry Ford’s “faster horse” quote is a good example of how customer research has limited value if taken verbatim, just like we’ve spoken about so far. ↩︎
Check to see if surveys are involved; it’s likely they are. Surveys are great for testing the scale of findings from good qualitative research, but not for uncovering anything deep and meaningful. ↩︎
I love academic research, but it doesn’t have a good place in design practice for me. Yet. Maybe my impending PhD will prove me wrong on that (I hope). ↩︎