Methodology matters. Perhaps this much is obvious, but as the demand for market researchers to deliver better insights yesterday increases, squeezing planning time ever tighter, it's worth re-emphasizing the impact of research approach on outcomes. Over the past year, I've come across numerous reminders of this while following this election cycle and the excellent coverage over at Nate Silver's FiveThirtyEight.com. I'm not particularly politically engaged, and as the long, painful campaign has worn on I've become even less so; but I keep coming back to FiveThirtyEight, not because of the politics, but because so much of the commentary is relevant to market research. I rarely visit the site (particularly the 'chats') without coming across an idea that inspires me or makes me more thoughtful about the research I'm doing day-to-day, and that idea generally centers on methodology. Here are a few examples:
In my day-to-day work, I would guesstimate that 90-95% of the studies I see are intended to capture a more specific population of interest than the general population, making the screening criteria used to identify members of that population absolutely vital. In general, these criteria consist of a series of questions (e.g., are you a decision maker, do you meet certain age and income qualifications, have you purchased something in X category before, would you consider using brand Y), and only respondents with the right pattern of answers get through.
But what if there were a better way to do this? Reading FiveThirtyEight's discussion of probabilistic polling screens got me thinking about the kinds of studies in which using a probabilistic screener (and weighting the data accordingly) might actually be better than what we do now. These would be studies where the following is true:
- Our population of interest might or might not engage in the behavior of interest
- We have some kind of prior data on the behavior of interest tied to individual characteristics
“Yeah right,” you might say, “like we ever have robust enough data available on the exact behavior we’re interested in.” Well, this might be a perfect opportunity for incorporating the (to all appearances) ever-increasing amounts of passive customer data that are available into our surveys. It’s inspiring, at any rate, to think about how a more nuanced screener might make our research more predictive.
Social Desirability Bias & More Creative Questioning
Social desirability is very much a Market Research 101 topic, but that doesn't mean it has been definitively solved, or that the same solution works in every case. The issue comes up a lot, not only in the context of respondent attitudes, but even more commonly when asking about demographics like income or age. There are lots of available solutions, some of which involve manipulating the data to 'normalize' it in some way, and some of which involve creative questioning. I think the right takeaways here are:
- Coming up with creative variations on your typical questions might help avoid respondent bias, and even have the potential to make questions more engaging for respondents
- It's important to think critically about whether creative questioning will resonate appropriately with your respondents
For example, instead of asking someone directly whom they'll vote for, you might ask:
- Is someone you respect voting for Donald Trump?
- Do the blogs you prefer to read tend to favor Trump or Clinton?
- What media outlets do you visit to get your political news?
The Vital Importance of Context
At the heart of FiveThirtyEight’s commentary here is a reminder of the vital importance of context. It’s all very well to push respondents through a series of scales and return means or top box frequencies; but depending on the situation, that may tell only a small part of the story. What does an average rating of ‘6.5’ really tell you? In the end, without proper context, this kind of result has very little inherent meaning.
So how do we establish context? Some options (all of which rely on prior planning) include:
- Indexing (against past performance or competitors)
- Trade-off techniques (e.g., MaxDiff, discrete choice modeling)
- Predictive modeling against an outcome variable
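The indexing option is the simplest to sketch. The numbers below are invented, but the point stands: a bare mean of 6.5 says little, while "108 versus the category average" carries context.

```python
def index_against(score, benchmark):
    """Express a raw score as an index where 100 = parity with the
    benchmark. (Illustrative numbers only, not real study data.)"""
    return round(100 * score / benchmark)

brand_mean = 6.5
category_avg = 6.0   # e.g., mean across tracked competitors
last_wave = 6.8      # e.g., same brand, prior tracking wave

print(index_against(brand_mean, category_avg))  # vs. competitors → 108
print(index_against(brand_mean, last_wave))     # vs. past wave   → 96
```

The same 6.5 reads as a modest win against the category and a slight decline against the brand's own prior wave, which is exactly the kind of story a context-free mean hides.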
Wrapping this up, there are two takeaways that I’d like to leave you with:
- First, methodology matters. It’s worthwhile to spend the time to be thoughtful and creative in your market research approach.
- Second, if you aren’t already, head over to FiveThirtyEight and read their entire backlog of 2016 election coverage. The site is an incredible reservoir of market research insight, and I can say with 95% confidence that you’ll be happy you checked it out.
Liz White is a member of CMB’s Advanced Analytics team, and checks FiveThirtyEight.com five times a day (plus or minus two times).