WELCOME TO OUR BLOG!

The posts here represent the opinions of CMB employees and guests—not necessarily the company as a whole. 


If you can’t trust your sample sources, you can’t trust your data

Posted by Jared Huizenga

Wed, Apr 19, 2017

During a recent data collection orientation for new CMB employees, someone asked me how we select the online sample providers we work with on a regular basis. Each week, my Field Services team receives multiple requests from sample providers—some we know from conferences, others from what we’ve read in industry publications, and some that are entirely new to us.

When vetting new sample providers, a good place to start is the ESOMAR 28 Questions to Help Buyers of Online Samples. Per the site, these questions “help research buyers think about issues related to online samples.”

An online sample provider should be able to answer the ESOMAR 28 questions; consider it a red flag if one won’t. If their answers are too brief and don’t shed much light on their procedures, it’s okay to ask for more information—or just move along to the next provider.

While all 28 questions are valuable, here are a few that I pay close attention to:

Please describe and explain the type(s) of online sample sources from which you get respondents. Are these databases?  Actively managed research panels?  Direct marketing lists?  Social networks?  Web intercept (also known as river) samples?  

Many online sample providers use multiple methods, so these options aren’t always exclusive. I’m a firm believer in knowing where the sample is coming from, but there isn’t necessarily one “right” answer to this question. Depending on the project and the population you are looking for, different methods may need to be used to get the desired results.

Are your sample source(s) used solely for market research? If not, what other purposes are they used for? 

Beware of providers that use sample sources for non-research purposes. If a provider states that they are using their sample for something other than research, at the very least probe them for more details so that you feel comfortable with what those other purposes are. Otherwise, pass on the provider.

Do you employ a survey router? 

A survey router is software that directs potential respondents to a questionnaire for which they may qualify. There are pros and cons to survey routers, and they have become such a touchy subject that several of the ESOMAR 28 questions are devoted to the topic of routers. I’m not a big fan of survey routers, since they can be easily abused by dishonest respondents. If a company uses a survey router as part of their standard practice, be sure you have a very clear understanding of how the router is used as well as any restrictions they place on router usage.
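
To make the mechanics concrete, here is a toy sketch of the core routing logic: send a respondent to the first open study they appear to qualify for. The study definitions and respondent fields are invented for illustration. Real routers layer on prioritization, de-duplication, and fraud checks—and it’s precisely this redirection step that a dishonest respondent can game by answering screeners untruthfully until something sticks.

```python
# A toy sketch of what a survey router does: send a respondent to the first
# open study whose screening criteria they satisfy. Studies, quotas, and
# respondent fields are hypothetical.

studies = [
    {"id": "AUTO-01", "quota_left": 0,  "qualifies": lambda r: r["owns_car"]},
    {"id": "BANK-02", "quota_left": 40, "qualifies": lambda r: r["age"] >= 18 and r["has_checking"]},
]

def route(respondent):
    for study in studies:
        if study["quota_left"] > 0 and study["qualifies"](respondent):
            return study["id"]
    return None  # no open study; the respondent is turned away

print(route({"owns_car": True, "age": 34, "has_checking": True}))  # BANK-02 (AUTO-01 quota is full)
```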

You should also be wary of any sample provider who tells you that your quality control (QC) measures are too strict. This happened to me a few years ago and, needless to say, it ended our relationship with the company. That’s not to say QC measures can never be too restrictive—when they are, you may actually be throwing out good data.

At CMB, we did a lot of research prior to implementing our QC standards. We consulted peers and sample providers to get a good understanding of what was fair and reasonable in the market. We investigated speeding criteria, red herring options, and how to evaluate open-ended responses. We revisit these standards on a regular basis to make sure they are still relevant.
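
To give a flavor of what such standards can look like in practice, here is a minimal sketch of respondent-level QC checks. The thresholds, field names, and trap answer are all invented for illustration—they are not CMB’s actual criteria.

```python
# Illustrative only: hypothetical respondent-level QC checks of the kind
# described above (speeding, red herrings, open-end review). Thresholds
# and field names are invented, not CMB's actual standards.

def flag_respondent(resp, median_duration_sec):
    flags = []

    # Speeding check: e.g., finishing in under 40% of the median time.
    if resp["duration_sec"] < 0.4 * median_duration_sec:
        flags.append("speeder")

    # Red herring check: a trap question with one known correct answer.
    if resp["red_herring_answer"] != "strongly agree":
        flags.append("failed_red_herring")

    # Open-end check: very short or gibberish verbatims merit human review.
    verbatim = resp["open_end"].strip()
    if len(verbatim) < 5 or verbatim.lower() in {"asdf", "n/a", "good"}:
        flags.append("review_open_end")

    return flags

sample = [
    {"duration_sec": 210, "red_herring_answer": "strongly agree", "open_end": "I liked the checkout flow."},
    {"duration_sec": 55,  "red_herring_answer": "disagree",       "open_end": "asdf"},
]
median_duration = 200
for r in sample:
    print(flag_respondent(r, median_duration))  # [] then ['speeder', 'failed_red_herring', 'review_open_end']
```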

Since each of our tried-and-true providers supports our QC standards, you can understand why it raises a red flag when a new (to us) sample provider tells us we’re rejecting too many of their panelists due to poor quality. Legitimate sample providers appreciate the feedback on “bad” respondents because it helps them improve the quality of their sample.

There are tons of online sample providers in the marketplace, but not every partner is a good fit for everyone. While I won’t make specific recommendations, I urge you to consider the three questions I referenced above when selecting your partner.

At Chadwick Martin Bailey, we’ve worked hard to establish trusted relationships with a handful of online sample providers. They’re dedicated to delivering high quality sample and have a true “partnership” mentality. 

In my world of data collection, recommending the best sample providers to my internal clients is extremely important. This is key to providing our clients with sound insights and recommendations that support confident, strategic decision-making. 

Jared Huizenga is CMB’s Field Services Director, and has been in the market research industry for nineteen years. When he isn’t enjoying the exciting world of data collection, he can be found competing at barbecue contests as the pitmaster of the team Insane Swine BBQ.


Topics: methodology, data collection

Panels: The Unsung Research Hero

Posted by Will Buxton

Wed, Jan 25, 2017


Market research has its rock star methodologies—segmentations, conjoint analyses, Bayes Nets—attention-grabbing methods that can garner incredible insights and drive acquisition and growth. You can find a lot of blogs (and white papers and conference presentations) on these methods, but this blog isn’t one of them. No, this blog is dedicated to the unsung research methodology: proprietary panels.

Admittedly, a panel doesn’t sound sexy—it’s a group of respondents who are regularly tapped to answer business questions relating to anything from product testing to ad testing. Whether consumer or business-to-business (B2B), a panel collects ongoing feedback from a select group of people who meet certain criteria.

So why consider a panel for your next research project?

Quality participants: Panels offer on-demand access to a pool of aware, engaged, and knowledgeable participants who are typically well-versed in the client/product offerings.

Speed of production: Panels make “quick hit” projects possible by minimizing the upfront education, setup, and programming time such projects typically require.

Efficiency: Panels use a standard process for timing, deployment, and reporting—all of which saves time for both the provider and the client.

Cost: Depending on survey length and complexity, a panel can be a more cost-effective way to contact customers/providers because of the preexisting relationship between client and panelist. This can avoid the need for large incentives.

Responsiveness: Panelists are more responsive than Gen Pop sample because of the aforementioned relationship. This allows for a quicker collection of more respondents and a faster project turnaround.

Dedicated resources: Each panel (at least here at CMB) has a dedicated, well-trained team that is privy to how the panel operates, including client restrictions and best practices.

So while a traditional MaxDiff or Discrete Choice Model might have more buzzword appeal around the office, don’t underestimate the value a customer/B2B panel can bring to your research project.

Will is a Project Manager who is clearly trying to turn CMB into a panel house.

PS – Join Dr. Erica Carranza on 2/1 and learn about our newest methodology, AffinID℠, that’s grounded in the importance of consumer identity.

Register Now!


Topics: methodology, consumer insights, panels

Dear Dr. Jay: HOW can we trust predictive models after the 2016 election?

Posted by Dr. Jay Weiner

Thu, Jan 12, 2017

Dear Dr. Jay,

After the 2016 election, how will I ever be able to trust predictive models again?

Alyssa


Dear Alyssa,

Data Happens!

Whether we’re talking about political polling or market research, to build good models, we need good inputs. Or as the old saying goes: “garbage in, garbage out.” Let’s look at all the sources of error in the data itself:

  • First, we make it too easy for respondents to say “yes” and “no,” and they try to help us by guessing what answer we want to hear. For example, we ask for purchase intent for a new product idea, and the respondent often overstates the true likelihood of buying the product.
  • Second, we give respondents perfect information. We create 100% awareness when we show the respondent a new product concept. In reality, we know we will never achieve 100% awareness in the market. There are some folks who live under a rock, and of course, the client will never really spend enough money on advertising to even get close.
  • Third, the sample frame may not be truly representative of the population we hope to project to. This is one of the key issues in political polling, because the population comprises those who actually voted (not just registered voters). For models to be correct, we need to predict which voters will actually show up to the polls and how they will vote. The good news in market research is that the population is usually not a moving target.

Now, let’s consider the sources of error in building predictive models. The first step in building a predictive model is to specify the model. If you’re a purist, you begin with a hypothesis, collect the data, test the hypothesis, and draw conclusions. If we fail to reject the null hypothesis, we should formulate a new hypothesis and collect new data. What do we actually do? We mine the data until we get significant results. Why? Because data collection is expensive. One possible outcome of continuing to mine the data for a better model is a model that is only good at predicting the data you have and not very accurate at predicting results from new inputs.
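
A quick way to see this effect is to hold out data the model never saw during fitting. The simulation below (synthetic data, numpy only) compares a sensible specification with an over-mined one: the flexible model “wins” on the data in hand and typically loses on fresh data.

```python
# A simulation of the overfitting trap described above: keep "mining" for a
# more flexible model and it will fit the data in hand better while doing
# worse on data it has never seen. Synthetic data; numpy only.
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(42)
x_train, x_test = rng.uniform(0, 10, 30), rng.uniform(0, 10, 30)
y_train = 3 * x_train + 5 + rng.normal(0, 2.0, 30)  # the true process is linear
y_test = 3 * x_test + 5 + rng.normal(0, 2.0, 30)

for degree in (1, 9):  # a sensible specification vs. an over-mined one
    model = Polynomial.fit(x_train, y_train, degree)
    rmse = lambda x, y: float(np.sqrt(np.mean((model(x) - y) ** 2)))
    print(f"degree {degree}: train RMSE {rmse(x_train, y_train):.2f}, "
          f"holdout RMSE {rmse(x_test, y_test):.2f}")
# Expect the degree-9 fit to beat degree 1 on the training data and
# (typically) to do worse on the holdout.
```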

It is up to the analyst to decide what is statistically meaningful versus what is managerially meaningful.  There are a number of websites where you can find “interesting” relationships in data.  Some examples of spurious correlations include:

  • Divorce rate in Maine and the per capita consumption of margarine
  • Number of people who die by becoming entangled in their bedsheets and the total revenue of US ski resorts
  • Per capita consumption of mozzarella cheese (US) and the number of civil engineering doctorates awarded (US)
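
Pairings like these are easy to manufacture, because two series that each merely trend over time will often correlate strongly with one another. A quick simulation (invented data, numpy only) makes the point:

```python
# Spurious correlation in action: two independent random walks frequently
# show a large correlation even though neither has anything to do with
# the other. Illustrative simulation only.
import numpy as np

rng = np.random.default_rng(7)
trials, years = 1000, 20
big_r = 0
for _ in range(trials):
    a = np.cumsum(rng.normal(size=years))  # e.g., "margarine consumption"
    b = np.cumsum(rng.normal(size=years))  # e.g., "divorce rate in Maine"
    if abs(np.corrcoef(a, b)[0, 1]) > 0.8:
        big_r += 1
print(f"{big_r} of {trials} unrelated series pairs correlate above |0.8|")
```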

In short, you can build a model that’s accurate but still wouldn’t be of any use (or make any sense) to your client. And the fact is, there’s always a certain amount of error in any model we build—we could be wrong, just by chance.  Ultimately, it’s up to the analyst to understand not only the tools and inputs they’re using but the business (or political) context.

Dr. Jay loves designing really big, complex choice models.  With over 20 years of DCM experience, he’s never met a design challenge he couldn’t solve. 

PS – Have you registered for our webinar yet!? Join Dr. Erica Carranza as she explains why, to change what consumers think of your brand, you must change their image of the people who use it.

What: The Key to Consumer-Centricity: Your Brand User Image

When: February 1, 2017 @ 1PM EST

Register Now!


Topics: methodology, data collection, Dear Dr. Jay, predictive analytics

A Year in Review: Our Favorite Blogs from 2016

Posted by Savannah House

Thu, Dec 29, 2016


What a year 2016 was.

In a year characterized by disruption, one constant is how we approach our blog: each CMBer contributes at least one post per year. And while asking each employee to write may seem cumbersome, it’s our way of ensuring that we provide you with a variety of perspectives, experiences, and insights into the ever-evolving world of market research, analytics, and consulting.

Before the clock strikes midnight and we bid adieu to this year, let’s take a moment to reflect on some favorite blogs we published over the last twelve months:

    1. When you think of a Porsche driver, who comes to mind? How old is he? What’s she like? Whoever it is, along with that image comes a perceived favored 2016 presidential candidate. Harnessing AffinID℠ and the results of our 2016 Consumer Identity Research, we found a skew toward one of the candidates for nearly every one of the 90 brands we tested. Read Erica Carranza’s post and check out the brands yourself with our interactive dashboard. Interested in learning more? Join Erica for our upcoming webinar: The Key to Consumer-Centricity: Your Brand User Image
    2. During introspection, it’s easy to focus on our weaknesses. But what if we put all that energy towards our strengths? Blair Bailey discusses the benefits of Strength-Based Leadership—realizing growth potential in developing our strengths rather than focusing on our weaknesses. In 2017, let’s all take a page from Blair’s book and concentrate on what we’re good at instead of what we aren’t.
    3. Did you attend a conference in 2016? Going to any in 2017? CMB’s Business Development Lead, Julie Kurd, maps out a game plan to get the most ROI from attending a conference. Though this post is specific to TMRE, these recommendations could be applied to any industry conference where you’re aiming to garner leads and build relationships. 
    4. In 2016 we released the results of our Social Currency research—a five-industry, 90-brand study to identify which consumer behaviors drive equity and Social Currency. Of the industry reports, one of our favorites is the beer edition. So pull up a stool, grab a pint, and learn from Ed Loessi, Director of Product Development and Innovation, how Social Currency helps insights pros and marketers create content and messaging that supports consumer identity.
    5. It’s a mobile world and we’re just living in it. Today we (yes, we) expect to use our smartphones with ease and have little patience for poor design. And as market researchers who depend on a quality pool of human respondents, the trend towards mobile is a reality we can’t ignore. CMB’s Director of Field Services, Jared Huizenga, weighs in on how we can adapt to keep our smart(phone) respondents happy – at least long enough for them to “complete” the study. 
    6. When you think of “innovation,” what comes to mind? The next generation iPhone? A self-driving car? While there are obvious tangible examples of innovation, professional service agencies like CMB are innovating, too. In fact, earlier this year we hired Ed Loessi to spearhead our Product Development and Innovation team. Sr. Research Associate, Lauren Sears, sat down with Ed to learn more about what it means for an agency like CMB to be “innovative.” 
    7. There’s something to be said for “too much of a good thing”—information being one of those things. To help manage the data overload we (and our clients) are often exposed to, Project Manager Jen Golden discusses the merits of focusing on one thing at a time (or research objective), keeping a clear space (or questionnaire), and avoiding trending topics (or looking at every single data point in a report).
    8. According to our 2016 study on millennials and money, women ages 21-30 are driven, idealistic, and feel they budget and plan well enough. However, there’s a disparity when it comes to confidence in investing: nearly twice as many young women don’t feel confident in their investing decisions compared to their male counterparts. Lori Vellucci discusses how financial service providers have a lot of work to do to educate, motivate and inspire millennial women investors. 
    9. Admit it, you can’t get enough of Prince William and Princess Kate. The British Royals are more than a family—they’re a brand that’s embedded itself into the bedrock of American pop culture. So if the Royals can do it, why can’t other British brands infiltrate the coveted American marketplace, too? Before a brand enters a new international market, British native and CMB Project Manager Josh Fortey contends, the decision should be based on a solid foundation of research.
    10. We round out our list with a favorite from our “Dear Dr. Jay” series. When considering a product, we often focus on its functional benefits. But as Dr. Jay, our VP of Advanced Analytics and Chief Methodologist, explains, the emotional attributes (how the brand/product makes us feel) are about as predictive of future behaviors as the functional benefits of the product. So brands, let's spread the love!

We thank you for being a loyal reader throughout 2016. Stay tuned because we’ve got some pretty cool content for 2017 that you won’t want to miss.

From everyone at CMB, we wish you much health and success in 2017 and beyond.

PS - There’s still time to make your New Year’s Resolution! Become a better marketer in 2017 and sign up for our upcoming webinar on consumer identity:

Register Now!


Savannah House is a Senior Marketing Coordinator at CMB. A lifelong aspiration of hers is to own a pet sloth, but since the Boston rental market isn’t so keen on exotic animals, she’d settle for a visit to the Sloth Sanctuary in Costa Rica.


Topics: strategy consulting, advanced analytics, methodology, consumer insights

But first... how do you feel?

Posted by Lori Vellucci

Wed, Dec 14, 2016


How does your brand make consumers feel?  It’s a tough but important question and the answer will often vary between customers and prospects or between segments within your customer base.  Understanding and influencing consumers’ emotions is crucial for building a loyal customer base; and scientific research, market research, and conventional wisdom all suggest that to attract and engage consumers, emotions are a key piece of the puzzle. 

CMB designed EMPACT℠, a proprietary quantitative approach to understanding how a brand, product, touchpoint, or experience should make a consumer feel in order to drive their behaviors. Measuring valence (how bad or good) and activation (low to high energy) across basic emotions (e.g., happy, sad), social and self-conscious emotions (e.g., pride, embarrassment, nostalgia), and other relevant feelings and mental states (e.g., social connection, cognitive ease), EMPACT has proved to be a practical, comprehensive, and robust tool. Key insights about emotions emerge, which can then shape communications to elicit the desired emotions and drive consumer behavior. But while EMPACT has been used extensively as a quantitative tool, it is also an important component when conducting qualitative research.
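
EMPACT itself is proprietary, so the following is only a toy sketch to make the valence/activation framing concrete: assign each emotion a (valence, activation) coordinate and summarize a brand’s emotional profile as a rating-weighted average. The coordinates and ratings are invented—this is not EMPACT’s actual measurement model.

```python
# A toy illustration of the valence/activation idea above, NOT EMPACT's
# actual model. Each emotion gets a (valence, activation) coordinate; a
# brand's profile is the rating-weighted average. All numbers are invented.

EMOTIONS = {            # (valence: -1 bad .. +1 good, activation: 0 low .. 1 high)
    "happy":     ( 0.8, 0.6),
    "pride":     ( 0.7, 0.5),
    "nostalgia": ( 0.4, 0.3),
    "anxiety":   (-0.6, 0.8),
}

def profile(ratings):   # ratings: emotion -> how strongly the brand evokes it (0..1)
    total = sum(ratings.values())
    valence = sum(r * EMOTIONS[e][0] for e, r in ratings.items()) / total
    activation = sum(r * EMOTIONS[e][1] for e, r in ratings.items()) / total
    return valence, activation

v, a = profile({"happy": 0.7, "pride": 0.2, "nostalgia": 0.4, "anxiety": 0.5})
print(f"valence {v:+.2f}, activation {a:.2f}")  # valence +0.31, activation 0.58
```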

In order to get the most bang for the buck with qualitative research, every researcher knows that having the right people in the room (or in front of the video-enabled IDI) is a critical first step. You screen for demographics and behaviors and sometimes attitudes, but have you considered emotions? Ensuring that you recruit respondents who feel a specific way when considering your brand or product is critical to gleaning the most insight from qualitative work. Applying an emotional qualifier allows us to ensure that we’re talking to respondents who are in the best position to provide the specific types of insights we’re looking for.

For example, CMB has a client who learned from a segmentation study incorporating EMPACT℠ that their brand over-indexed for eliciting certain emotions that tend to drive consumers away from brands within their industry. The firm wanted to craft targeted communications to mitigate these negative emotions among a specific strategic consumer segment. As a first step in testing their marketing message and imagery, we conducted focus groups.

In addition to using the segmentation algorithm to ensure we had the correct consumer segment in the room, we also included EMPACT℠ screening to be sure the respondents selected felt the emotions we wanted to address with new messaging. In this way, we were able to elicit insights directly related to how well the new messaging worked in mitigating the negative emotions. Of course, we tested the messaging among broader groups as well, but being able to identify and isolate respondents whose emotions we most wished to improve ensured the development of great advertising that will move the emotion needle and motivate consumers to try—and to love—the brand.

Want to learn more about EMPACT? View our webinar by clicking the link below:

Learn More About EMPACT℠

Lori Vellucci is an Account Director at CMB.  She spends her free time purchasing ill-fated penny stocks and learning about mobile payment solutions from her Gen Z daughters.

Topics: methodology, qualitative research, EMPACT, quantitative research

Why Researchers Should Consider Hybrid Methods

Posted by Becky Schaefer

Fri, Dec 09, 2016

As market researchers we’re always challenging ourselves to provide deeper, more accurate insights for our clients. Throughout my career I’ve witnessed an increased dedication to uncovering better results by integrating traditional quantitative and qualitative methodologies to maximize insights within shorter time frames.

Market research has traditionally been divided into quantitative and qualitative methodologies. But more and more researchers are combining elements of each—creating a hybrid methodology, if you will—to paint a clearer picture of the data for clients.

Quantitative research is focused on uncovering objective measurements via statistical analysis. In practice, quant market research studies generally entail questionnaire development, programming, data collection, analysis, and results, and can usually be completed within a few weeks (depending on the scope of the research). Quant studies usually have larger sample sizes and are structured and set up to quantify respondents’ attitudes, opinions, and behaviors.

Qualitative research is exploratory and aims to uncover respondents’ underlying reasons, beliefs and motivations. Qualitative is descriptive, and studies may rely on projective techniques and principles of behavioral psychology to probe deeper than initial responses might allow. 

While both quantitative and qualitative research have their respective merits, market research is evolving and blurring the lines between the two.  At CMB we understand each client has different goals and sometimes it’s beneficial to apply these hybrid techniques.

For example, two approaches I like to recommend are:

  • Video open-ends: Traditional quantitative open-ends ask respondents to answer open-ended questions by entering a text response. Open-ends give respondents the freedom to answer in their own words versus selecting from a list of pre-determined responses. While open-ends are still a viable technique, market researchers are now throwing video into the mix: instead of writing down their responses, respondents record themselves on video. The obvious advantage to video is that it facilitates a more genuine, candid response, while researchers are able to see respondents’ emotions “face to face.” This twist on traditional quantitative research has the potential to garner deeper, more meaningful respondent insight.
  • In-depth/moderated chats: These let researchers dig deeper and connect with respondents within the paradigm of a traditional quantitative study. In these short discussions, respondents can explain to researchers why they made a specific selection on a survey. In-depth/moderated chats can help contextualize a traditional quantitative survey—providing researchers (and clients) with a combination of both quantitative and qualitative insights.

As insights professionals we strive to offer critical insights that help our clients and partners answer their biggest business questions. More and more often the best way to achieve the best results is to put tradition aside and combine both qualitative and quantitative methodologies.

Rebecca is part of the field services team at CMB, and she is excited to celebrate her favorite time of year with her family and friends.  

Topics: methodology, qualitative research, quantitative research

The Elephant, the Donkey, and the Qualitative Researcher: The Moderator in Market Research and Politics

Posted by Kelsey Segaloff

Wed, Nov 23, 2016

Americans have a lot to reckon with in the wake of the recent vote. You’re forgiven if analyzing the role of the presidential debate moderator isn’t high on your list. Still, for those of us in the qualitative market research business, there were professional lessons to be learned from the reactions to moderators Lester Holt (NBC), Martha Raddatz (ABC), Anderson Cooper (CNN), and Chris Wallace (Fox). Each moderator took their own approach, and each was met with criticism and praise.

As CMB’s qualitative research associate and a moderator-in-training, I noticed parallels to the role of the moderator in the political and market research space. My thoughts:

 The moderator as unbiased

"Lester [Holt] is a Democrat. It’s a phony system. They are all Democrats.” – Donald Trump, President-Elect

Concerns about whether the debate moderators were unbiased arose throughout the primaries and presidential debates. Moderators were criticized for techniques like asking questions deemed “too difficult,” going after a single candidate, and not adequately pressing other candidates. For example, critics called NBC’s Matt Lauer biased during the Commander-in-Chief forum. Some felt Lauer hindered Hillary Clinton’s performance by asking tougher questions than those asked of Donald Trump, interrupting Clinton, and not letting her speak on other issues the way he allowed Donald Trump to.

In qualitative market research, every moderator will experience some bias from time to time, but it’s important to mitigate that bias in order to maintain the integrity of the study. In my own qualitative experience, the moderator establishes that they are unbiased by opening each focus group with an explanation that they are independent from the topic of discussion and/or client, and therefore are not looking for the participants to answer a certain way.

Qualitative research moderators can also avoid bias by not asking leading questions, monitoring their own facial expressions and body language, and giving each participant an equal opportunity to speak. Like during a political debate, preventing bias is imperative in qualitative work because biases can skew the results of a study the same way the voting populace fears bias could skew the perceived performance of a candidate.

 The moderator as fact-checker

“It has not traditionally been the role of the moderator to engage in a lot of fact-checking.” – Alan Schroeder, professor of Journalism at Northeastern University

Throughout the 2016 election moderators were criticized for either fact-checking too much or not fact-checking the candidates enough. Talk about a Catch-22.

In qualitative moderating, fact-checking is dependent on the insights we are looking to achieve for a particular study. For example, I just finished traveling across the country with CMB’s Director of Qualitative, Anne Hooper, for focus groups. In each group, Anne asked participants what they knew about the product we were researching. Anne noted every response (accurate or inaccurate), as it was critical we understood the participants’ perceptions of the product. After the participants shared their thoughts, Anne gave them an accurate product description to clarify any false impressions because for the remainder of the conversation it was critical the respondents had the correct understanding of the product.

In the case of qualitative research, Anne demonstrated how fact-checking (or not fact-checking) can be used to generate insights. There’s no “one right way” to do it; it depends on your research goals.

 The moderator as timekeeper

“Basically, you're there as a timekeeper, but you're not a participant.” – Chris Wallace, Television Anchor and Political Commentator for Fox News

Presidential debate moderators frequently interjected (or at least tried to) when candidates ran over their allotted time in order to stay on track and ensure each candidate had equal speaking time. Focus group moderators have the same responsibility. As a qualitative moderator-in-training, I’m learning the importance of playing timekeeper – to be respectful of the participants’ time and allow for equal participation.  I must also remember to cover all topics in the discussion guide. Whether you’re acting as a timekeeper in market research or political debates, it’s as much about the audience of voters or clients as it is about the participants (candidates or study respondents).  

The study’s desired insights will dictate the role of the moderator. Depending on your (or your client’s) goals, bias, fact-checking, and time-keeping could play an important part in how you moderate. But ultimately whether your client is a business or the American voting populace, the fundamental role of the moderator remains largely the same: to provide the client with the insights needed to make an informed decision.

Kelsey is a Qualitative Research Associate. She co-chairs the New England chapter of the QRCA, and recently received a QRCA Young Professionals Grant!

Topics: methodology, qualitative research, Election

Dear Dr. Jay: Weighting Data?

Posted by Dr. Jay Weiner

Wed, Nov 16, 2016

Dear Dr. Jay:

How do I know if my weighting matrix is good? 

Dan


Dear Dan,DRJAY-9.png

I’m excited you asked me this because it’s one of my favorite questions of all time.

First, we need to talk about why we weight data in the first place. We weight data because our ending sample is not truly representative of the general population. This misrepresentation can occur because of non-response bias, poor sample source, and even bad sample design. In my opinion, if you go into a research study knowing that you’ll end up weighting the data, there may be a better way to plan your sample frame.

Case in point: many researchers intentionally over-quota certain segments and plan to weight these groups down in the final sample. We do this because the incidence of some of these groups in the general population is small enough that, if we relied on natural fallout, we would not get a readable base without a very large sample. Why wouldn’t you just pull a rep sample and then augment these subgroups? The weight needed to fold these augments into the rep sample is 0—the augments are read separately, and the total sample stays representative.

Arguments for including these augments with a very small weight include the treatment of outliers. For example, if we were conducting a study of investors and wanted to include folks with more than $1,000,000 in assets, we might want to obtain insights from at least 100 of these folks. In a rep sample of 500, we might only have 25 of them, which means I need to augment this group by 75 respondents. If somehow I manage to get Warren Buffett in my rep sample of 25, he might skew the results of the sample. Weighting the full sample of 100 wealthier investors down to 25 will reduce the impact of any outlier.

A recent post by Nate Cohn in the New York Times suggested that weighting significantly impacted analysts’ ability to predict the outcome of the 2016 presidential election. In the article, Mr. Cohn points out that “there is a 19-year-old black man in Illinois who has no idea of the role he is playing in this election.” This man carried a sample weight of 30; in a sample of 3,000 respondents, he alone accounts for about 1% of the weighted popular vote. In a close race, that might just be enough to tip the scale one way or the other. Clearly, he showed up on November 8th and cast the deciding ballot.

This real-life example suggests that we might want to consider “capping” extreme weights so that we mitigate the potential for very small groups to influence overall results. But bear in mind that when we do this, our final sample profiles won’t be nationally representative, because capping the weight understates the size of the segment being capped. It’s a trade-off between a truly balanced sample and making sure that the survey results aren’t biased.
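
Here is a toy illustration of that trade-off, loosely inspired by the example above (all numbers invented): a single respondent with an extreme weight can swing a near-tied weighted estimate, and capping the weight stabilizes the estimate at the cost of understating his segment.

```python
# Illustrative only: how one extreme weight can move a weighted estimate,
# and what capping does about it. All numbers are invented.
import numpy as np

# 3,000 respondents; index 0 is the one with an extreme weight of 30.
weights = np.ones(3000)
weights[0] = 30

# A nearly tied race: 1,495 of 3,000 back candidate B, including respondent 0.
votes = np.zeros(3000)
votes[:1495] = 1

def share_b(w):
    return np.average(votes, weights=w)  # weighted share for candidate B

capped = np.minimum(weights, 5)  # cap extreme weights at, say, 5

print(f"respondent 0's share of total weight: {weights[0] / weights.sum():.1%}")  # ~1%
print(f"uncapped share for B: {share_b(weights):.1%}")   # just over 50%
print(f"capped share for B:   {share_b(capped):.1%}")    # just under 50%
```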

Dr. Jay loves designing really big, complex choice models.  With over 20 years of DCM experience, he’s never met a design challenge he couldn’t solve. 

Keep the market research questions comin'! Ask Dr. Jay directly at DearDrJay@cmbinfo.com or submit yours anonymously by clicking below:

 Ask Dr. Jay!

Topics: methodology, Dear Dr. Jay

MR Insights from the 2016 Election: A Love Letter to FiveThirtyEight

Posted by Liz White

Thu, Nov 03, 2016


Methodology matters. Perhaps this much is obvious, but as the demand for market researchers to deliver better insights yesterday increases—dialing up the pressure to limit planning time—it’s worth re-emphasizing the impact of research approach on outcomes. Over the past year, I’ve come across numerous reminders of this while following this election cycle and the excellent coverage over at Nate Silver’s FiveThirtyEight.com. I’m not particularly politically engaged, and as the long, painful campaign has worn on I’ve become even less so; but I keep coming back to FiveThirtyEight—not because of the politics, but because so much of the commentary is relevant to market research. I rarely visit the site (particularly the ‘chats’) without coming across an idea that inspires me or makes me more thoughtful about the research I’m doing day-to-day, and generally speaking, that idea centers on methodology. Here are a few examples:

Probabilistic Screening

[Image: probabilistic screening example from FiveThirtyEight]

In my day-to-day work, I would guesstimate that 90-95% of the studies I see are intended to capture a more specific population of interest than the general population, making the screening criteria used to identify members of that population absolutely vital. In general, these criteria consist of a series of questions (e.g., are you a decision maker, do you meet certain age and income qualifications, have you purchased something in X category before, would you consider using brand Y), with only those with the right pattern or patterns of responses getting through.

But what if there were a better way to do this? Reading the above on FiveThirtyEight got me thinking about the kinds of studies in which using a probabilistic screener (and weighting the data accordingly) might actually be better than what we do now. These would be studies where the following is true:

  1. Our population of interest might or might not engage in the behavior of interest
  2. We have some kind of prior data on the behavior of interest tied to individual characteristics

“Yeah right,” you might say, “like we ever have robust enough data available on the exact behavior we’re interested in.” Well, this might be a perfect opportunity for incorporating the (to all appearances) ever-increasing amounts of passive customer data that are available into our surveys. It’s inspiring, at any rate, to think about how a more nuanced screener might make our research more predictive.
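
As a sketch of how that might work (invented numbers, with a hypothetical “p_buy” score standing in for prior or passive data): rather than a hard qualify/disqualify cut, every respondent contributes, weighted by their estimated probability of the behavior of interest.

```python
# A sketch of the probabilistic-screener idea: instead of a binary screen,
# weight every respondent by their estimated probability of the behavior
# of interest. The p_buy scores and ratings are invented.

respondents = [
    {"id": 1, "p_buy": 0.90, "rating": 8},
    {"id": 2, "p_buy": 0.10, "rating": 3},
    {"id": 3, "p_buy": 0.55, "rating": 6},
]

# Hard screener: only "likely buyers" (p >= 0.5) count, and they count equally.
hard = [r["rating"] for r in respondents if r["p_buy"] >= 0.5]
hard_mean = sum(hard) / len(hard)

# Probabilistic screener: everyone counts, weighted by p_buy.
num = sum(r["p_buy"] * r["rating"] for r in respondents)
den = sum(r["p_buy"] for r in respondents)

print(f"hard-screen mean rating:  {hard_mean:.2f}")  # 7.00
print(f"probability-weighted:     {num / den:.2f}")  # (0.9*8 + 0.1*3 + 0.55*6) / 1.55 = 6.97
```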

Social Desirability Bias & More Creative Questioning

[Images: FiveThirtyEight examples of creative questioning]

Social desirability is very much a market research-101 topic, but that doesn’t mean it has been definitively solved or that the same solution works in every case. The issue comes up a lot, not only in the context of respondent attitudes, but even more commonly when asking about demographics like income or age. There are lots of available solutions, some of which involve manipulating the data to ‘normalize’ it in some way, and some of which involve creative questioning like the example shown above. I think the right takeaways from the above are:

  • Coming up with creative variations on your typical questions might help avoid respondent bias, and even has the potential to make questions more engaging for respondents
  • It’s important to think critically about whether or not creative questioning will resonate appropriately with your respondents

Plus, brainstorming alternatives is fun! For example:
  • Is someone you respect voting for Donald Trump?
  • Do the blogs you prefer to read tend to favor Trump or Clinton?
  • What media outlets do you visit to get your political news?

The Vital Importance of Context

[Image: FiveThirtyEight excerpt on the importance of context]

At the heart of FiveThirtyEight’s commentary here is a reminder of the vital importance of context. It’s all very well to push respondents through a series of scales and return means or top box frequencies; but depending on the situation, that may tell only a small part of the story. What does an average rating of ‘6.5’ really tell you? In the end, without proper context, this kind of result has very little inherent meaning.

So how do we establish context? Some options (all of which rely on prior planning) include:

  • Indexing (against past performance or competitors)
  • Trade-off techniques (MaxDiff, DCM)
  • Predictive modeling against an outcome variable
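
As a tiny worked example of the first option, indexing: the same mean of 6.5 reads very differently depending on the benchmark (benchmarks invented).

```python
# Indexing a score against benchmarks: an illustration of how context
# gives a raw mean its meaning. The benchmark values are invented.

score = 6.5
benchmarks = {"last year": 7.2, "competitor A": 6.0}

for name, base in benchmarks.items():
    index = 100 * score / base
    print(f"vs {name}: index {index:.0f}")  # 90 vs last year, 108 vs competitor A
```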

Wrapping this up, there are two takeaways that I’d like to leave you with:

  • First, methodology matters. It’s worthwhile to spend the time to be thoughtful and creative in your market research approach.
  • Second, if you aren’t already, head over to FiveThirtyEight and read their entire backlog of 2016 election coverage. The site is an incredible reservoir of market research insight, and I can say with 95% confidence that you’ll be happy you checked it out.

 Liz White is a member of CMB’s Advanced Analytics team, and checks FiveThirtyEight.com five times a day (plus or minus two times).

Topics: methodology, Market research

Can Facial Recognition Revolutionize Qualitative?

Posted by Will Buxton

Wed, Aug 03, 2016

Full disclosure: I’m an Android and Google loyalist, but please don’t hold that against me or the rest of my fellow Android users, who, by the way, comprise 58% of the smartphone market share in the United States. As a result of my loyalty, I’m always intrigued by Google’s new hardware and software advancements, which are always positioned in a way that leads me to believe they will make my life easier. Some of the innovations over the years have in fact lived up to the hype, such as Google Now, Google Drive, and even Google Fusion, while others such as Google Buzz and Google Wave have not.

As a researcher, last year’s launch of Google Photos caught my eye. Essentially, Google Photos now utilizes facial recognition software to group your photos based on the people in them, scenery (e.g., beaches and mountains), and even events (e.g., weddings and holidays). To activate the facial recognition feature, all you have to do is tag one photo with an individual’s name, and all other photos with that person will be compiled into a searchable collection. Google uses visual cues within the photos and geotagging to create other searchable collections. While these features might not seem extraordinary—I can see who was the most frequent star of my photos (my enormous cat) or where I most commonly take photos (honeymoon, sans enormous cat)—I began to imagine the possible impact these features could have on the market research industry.

Visual ethnographies are one of many qualitative research options we offer at CMB. This is a rich form of observation, and, for some companies, it can be cost-prohibitive, especially those focused on a “cost-per-complete.” But what if there were a way to remove some of the heavy lifting of a customer journey ethnography by quantifying parts of the shopping experience—using technology that could track date/time, location, store layout, products viewed, the order in which products are viewed, and so on, all through recognition software? Would the reduction in hours, travel, and analysis offset the technological costs of these improvements?

Market research—and qualitative research in particular—has always been a combination of art and science, and to expect any technological advancement to adequately perform cogent analyses is a bit premature, and perhaps too reminiscent of Minority Report (I don’t think that worked out well). But the promise of these powerful tools makes it an exciting time to be a qualitative researcher!

Will Buxton is a Project Manager on the Financial Services team. He enjoys finding humor in everyday tasks, being taken seriously, and his enormous cat.

Learn more about how our dedicated Qualitative practice helps brands Explore, Listen, & Engage.


Topics: methodology, qualitative research, mobile, storytelling, customer journey