5 Key Takeaways from The Quirk's Event

Posted by Jen Golden and Ashley Harrington

Thu, Mar 05, 2015

Last week, we spent a few days networking with and learning from some of the industry’s best and brightest at The Quirk's Event. At the end of the day, a few key ideas stuck out to us, and we wanted to share them with you.

1. Insights need to be actionable: This may seem obvious, but multiple presenters at the conference hammered it home. Corporate researchers are shifting from a largely separate entity to a more consultative role within the organization, so they need to deliver insights that directly inform business decisions (vs. passing along a 200-slide data dump). This mindset should flow through the entire lifespan of a project—from crafting a questionnaire that truly speaks to the business decisions that need to be made (and cuts out the “nice to have” fluff that isn’t actionable) all the way through thoughtful analysis and reporting. Taking this approach helps ensure final deliverables aren’t left collecting dust and are instead used to drive engagement across the organization.

2. Allocate time and resources to socializing insights throughout the organization: All too often, insightful findings are left sitting on a shelf when they have the potential to be useful across an organization. Several presenters shared creative approaches to socializing the data so that it lives long after the project ends. From transforming a conference room with life-size cut-outs of key customer segments to creating an app employees can use to access data points quickly and on the go, researchers and their partners are getting creative with how they share findings. The most effective researchers think about research results as a product to be marketed to their stakeholders.
 
3. Leverage customer data to help validate primary research: Most organizations have a plethora of data to work with, ranging from internal customer databases to secondary sources to primary research. These various sources can be leveraged to paint a full picture of the consumer (and help to validate findings). Etsy (a peer-to-peer e-commerce site) talked about comparing data collected from its customer database to its own primary research to see if what buyers and sellers said they did on the site aligned with what they actually did. For Etsy, past self-reported behaviors (e.g., number of purchases, number of times someone “favorites” a shop, etc.) aligned strongly with its internal database, but future behavior (e.g., likelihood to buy from Etsy in the future) did not. Future behaviors might not be something we can easily predict by asking directly in a survey, but that data could be helpful as another way to identify customer loyalty or advocacy. A note of caution: if you plan on doing this data comparison, make sure the wording in your questionnaire aligns with what you plan on matching in your existing database. This ensures you’re getting an apples-to-apples comparison.
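To make this concrete, here is a minimal sketch of that kind of validation check, assuming survey responses can be joined to an internal customer database on a shared customer ID. The field names and figures are hypothetical, not Etsy’s actual data; the point is to match the survey wording to the database definition (same metric, same time window) and then see how closely the two line up.

```python
import pandas as pd

# Hypothetical example: compare self-reported purchase counts from a survey
# against counts pulled from an internal customer database. Field names and
# values are illustrative only.
survey = pd.DataFrame({
    "customer_id": [101, 102, 103, 104],
    "self_reported_purchases": [3, 0, 5, 2],  # "How many purchases in the past 6 months?"
})
database = pd.DataFrame({
    "customer_id": [101, 102, 103, 104],
    "recorded_purchases": [3, 1, 5, 2],       # transactions logged over the same 6-month window
})

merged = survey.merge(database, on="customer_id", how="inner")

# How closely do stated and actual behaviors align?
merged["difference"] = merged["self_reported_purchases"] - merged["recorded_purchases"]
correlation = merged["self_reported_purchases"].corr(merged["recorded_purchases"])

print(merged)
print(f"Correlation between stated and recorded behavior: {correlation:.2f}")
```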
 
4. Be cautious when comparing cross-country data: A multi-country study typically calls for a “global overview” or cross-country comparison, but taking the numbers at face value can lead to inaccurate recommendations. Most researchers are aware of cultural biases such as extreme response (e.g., Brazilian respondents often rate higher on rating scales while Japanese respondents tend to rate lower) or acquiescence (e.g., respondents in China often want to please the interviewer), and these biases should be kept in mind when delving into the final data. Comparing scaled data directly between countries with very different rating tendencies could lead to falsely concluding that one country is underperforming. A better indication of performance is an in-country comparison to competitors or a look at in-country trends.
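One common way to take rating-scale tendencies out of a cross-country comparison is to standardize each country’s scores against its own mean and spread before comparing brands. The sketch below uses invented numbers purely to illustrate the mechanics.

```python
import pandas as pd

# Invented ratings: rather than comparing raw scale scores across countries with
# very different rating tendencies, z-score each brand against the mean and
# spread of its own country.
data = pd.DataFrame({
    "country": ["Brazil", "Brazil", "Brazil", "Japan", "Japan", "Japan"],
    "brand":   ["Ours", "Competitor A", "Competitor B"] * 2,
    "rating":  [8.9, 8.6, 8.2, 6.1, 6.4, 5.8],  # 0-10 satisfaction, illustrative only
})

# Standardize within each country so every brand is judged against local norms.
data["z_within_country"] = (
    data.groupby("country")["rating"]
        .transform(lambda x: (x - x.mean()) / x.std(ddof=0))
)

print(data.sort_values(["country", "z_within_country"], ascending=[True, False]))
# A raw comparison suggests Japan "underperforms" Brazil across the board; the
# within-country z-scores show how each brand stacks up against local competitors.
```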
 
5. Remember your results are only as useful as your design is solid: A large number of stakeholders invested in a study’s outcome can lead to a project designed by committee since each stakeholder will inevitably have different needs, perspectives, and even vocabularies. A presenter shared an example from a study that asked recent mothers, “How long was your baby in the hospital?” Some respondents thought the question referred to the baby’s length, so they answered in inches. Others thought the question referred to the length of the baby’s hospital stay, so they answered in days. Therein lies the problem. Throughout the process, it’s our job to ensure that all of the feedback and input from multiple stakeholders adheres to the fundamentals of good questionnaire design: clarity, answerability, ease, and lack of bias.

Have you been to any great conferences lately and have insights to share? Tell us in the comments!

Jen is a Project Manager on the Tech practice who always has the intention to make a purchase on Etsy but never actually pulls the trigger.  

Ashley is a Project Manager on the FIH/RTE practice who has pulled the trigger on several Etsy items (as evidenced in multiple “vintage” tchotchkes and half-complete craft projects around her home).

Topics: Big Data, Research Design, Conference Insights

Deflategate and the Dangers of Convenience Sampling

Posted by Athena Rodriguez

Wed, Jan 28, 2015

The Patriots have landed in Phoenix for yet another Super Bowl, but there are still those who can’t stop talking about “Deflategate.” Yes, that’s what some are calling the controversy surrounding those perfectly legal 12.5 PSI inflated footballs that lost air pressure due to changing atmospheric conditions and repeated Gronking* after touchdowns during the first half of the Pats-Colts showdown.

Here in Boston, we were shocked to turn on the TV and hear the terrible accusations. Were we watching and reading the same things as the accusers? Did those doubters not watch the press conferences (all three of them) where our completely ethical coach proclaimed his team’s innocence? Did they not understand that Belichick even conducted a SCIENCE EXPERIMENT? 

Or could it be simply that the doubters live outside of New England?

[Chart]

The chart above makes it pretty obvious—from Bangor to Boston, we just might have been hearing the voices of a lot more Pats fans. This is, in fact, a really simple illustration of the dangers of convenience sampling—a very common type of non-probability sampling.

Sure, it’s a silly example, but as companies try to conduct research faster and cheaper, convenience sampling poses serious threats. Can you get 500 completes in a day? Yes, but there’s a very good chance they won’t be representative of the population you’re looking for. Posting a link to your survey on Facebook or Twitter is fast and free, but whose voice will you hear and whose will you miss?

I’ve heard it said that some information is better than none, but I’m not sure I agree. If you sample people who aren’t in your target, they can lead you in the completely wrong direction. If you oversample a certain population (ahem, New Englanders), you can also end up with a biased, non-representative sample.

Representative sampling is one of the basic tenets of survey research, but just because it’s a simple concept doesn’t mean we can afford to ignore it. Want your results to win big? Carefully review your game plan before kicking off data collection.

  • Sample Frame: Is the proposed sample frame representative of the target population?
    • Unless you are targeting a niche population...
      • online panel “click-throughs” should be census balanced
      • customer lists must be reflective of the target customers (if the population is all customers, do not use email addresses unless addresses exist for all customers or the exceptions are randomly distributed)
      • compare the final sample to the target population just to be sure
  • Selection: Does the selection process ensure that all potential respondents on the frame have an equal chance of being recruited throughout the data collection period?
    • To be sure, you should...
      • randomize all lists before recruiting
      • not fill quotas first
      • not focus on hard-to-reach respondents first
  • Data Collection: Will the proposed data collection plan adversely affect sample quality?
    • Ask yourself:
      • Are fielding dates unusual (e.g., holiday, tax returns, Super Bowl, etc.)?
      • Is the schedule long enough to cover weekdays and weekends? Will it give procrastinators sufficient time to respond?
  • Structure: Will important subgroups have sufficient sample sizes if left to fall out naturally?
    • If not, set quotas...
      • Quota groups must be weighted back to their natural distribution before analysis or treated as an oversample and excluded from any analysis at the total level (see the weighting sketch after this list).
  • Size: Is the proposed sample size sufficient?
    • We must always balance costs against sample size, but, at the same time, we must recognize that we need minimum sample sizes for certain objectives.
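To make the quota-weighting bullet concrete, here is a minimal sketch of weighting an oversampled group back to its natural share of the population before total-level analysis. All numbers are invented for illustration.

```python
import pandas as pd

# Invented example: 50% of completes came from New England (via quotas), but New
# Englanders are only 5% of the target population. Weight each group back to its
# natural share before reporting total-level results.
sample = pd.DataFrame({
    "region": ["New England"] * 250 + ["Rest of US"] * 250,
    "thinks_balls_were_legal": [1] * 225 + [0] * 25 + [1] * 75 + [0] * 175,
})

natural_share = {"New England": 0.05, "Rest of US": 0.95}  # e.g., from census data
achieved_share = sample["region"].value_counts(normalize=True)

# Weight = natural share / achieved share, applied per respondent.
sample["weight"] = sample["region"].map(lambda r: natural_share[r] / achieved_share[r])

unweighted = sample["thinks_balls_were_legal"].mean()
weighted = (sample["thinks_balls_were_legal"] * sample["weight"]).sum() / sample["weight"].sum()

print(f"Unweighted estimate: {unweighted:.0%}")  # inflated by the New England oversample
print(f"Weighted estimate:   {weighted:.0%}")    # closer to the population view
```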

Are there times you might need some quick and dirty (un-Patriot like) results? Absolutely. But, when you’re playing for big insights, you need the right team.

*spiking the football after a touchdown.

Athena Rodriguez is a Project Consultant at CMB. She’s a native Floridian, who’s looking forward to the end of the Blizzard of 2015 and the start of Sunday’s game!

Topics: Boston, Television, Research Design, Media & Entertainment Research

Conjoint Analysis: 3 Common Pitfalls and How to Avoid Them

Posted by Liz White

Thu, Jan 08, 2015

If you work in marketing or market research, chances are you’re becoming more and more familiar with conjoint analysis: a powerful research technique used to predict customer decision-making relative to a product or service. We love conjoint analysis at CMB, and it’s easy to see why. When conducted well, a conjoint study provides results that make researchers, marketers, and executives happy. These results:

  • Are statistically robust
  • Are flexible and realistic
  • Describe complex decision-making
  • Are easy to explain and understand

For these reasons, conjoint analysis is one of the premier tools in our analytical toolkit. (If you need a quick introduction or a refresher on conjoint analysis, I recommend Sawtooth Software’s excellent video, which can be found here.) However, as with any analytical approach, conjoint analysis should be applied thoughtfully to realize maximum benefits. Below, I describe three of the most common pitfalls related to conjoint analysis and tips on how to avoid them.

Pitfall #1: Rushing the Design

This is the most common pitfall, but it’s also the easiest one to avoid. As anyone who has conducted a conjoint study knows, coming up with the right design takes time. When planning the schedule for a conjoint analysis study, make sure to leave time for the following steps:

  • Identify your business objective, and work to identify the research questions (and conjoint design) that will best address that objective.
  • Brainstorm a full list of product features that you’d like to test. Collaborate with coworkers from various areas of your organization—including marketing, sales, pricing, and engineering as well as the final decision-makers—to make sure your list is comprehensive and up-to-date.
    • You may also want to plan for qualitative research (e.g., focus groups) at this stage, particularly if you’re looking to test new products or product features. Qualitative research can prioritize what features to test and help to translate “product-speak” into language that customers find clear and meaningful.
  • If you’re looking to model customer choices among a set of competitive products, collect information about your competitors’ products and pricing.
  • Once all the information above is collected, budget time to translate your list of product features into a conjoint design. While conjoint analysis can handle complex product configurations, there’s often work to be done to ensure the final design (a) captures the features you want to measure, (b) will return statistically meaningful results, and (c) won’t be overly long or confusing for respondents.
  • Finally, budget time to review the final design. Have you captured everything you needed to capture? Will this make sense to your customers and/or prospective customers? If not, you may need to go back and update the design. Make sure you’ve budgeted for this as well.

Pitfall #2: Overusing Prohibitions

Most conjoint studies involve a conversation about prohibitions—rules about what features can be shown under certain circumstances. For example:

Say Brand X’s products currently come in red, blue, and black colors while Brand Y’s products are only available in blue and black. When creating a conjoint design around these products, you might create a rule that if the brand is X, the product could be any of the three colors, but if the brand is Y, the product cannot be red.
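In design terms, a prohibition is simply a rule that removes certain attribute combinations from the set of profiles that can be shown. Here is a tiny illustration of the Brand X / Brand Y color rule described above; real designs are typically generated in specialized conjoint software, so treat this as a sketch of the concept only.

```python
from itertools import product

brands = ["Brand X", "Brand Y"]
colors = ["red", "blue", "black"]

def violates_prohibition(brand, color):
    # Rule from the example: Brand Y products cannot be shown in red.
    return brand == "Brand Y" and color == "red"

allowed_profiles = [
    (brand, color)
    for brand, color in product(brands, colors)
    if not violates_prohibition(brand, color)
]

print(allowed_profiles)
# [('Brand X', 'red'), ('Brand X', 'blue'), ('Brand X', 'black'),
#  ('Brand Y', 'blue'), ('Brand Y', 'black')]
# Note what never appears: ('Brand Y', 'red') can't be tested now or simulated later.
```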

While it’s tempting to add prohibitions to your design to make the options shown to respondents more closely resemble the options available in the market, overusing prohibitions can have two big negative effects:

  1. Loss of precision when estimating the value of different features for respondents.
  2. Loss of flexibility for market simulations.

The first of these effects can typically be identified in the design phase and fixed by reducing the number of prohibitions included in a model. The second is potentially more damaging as it usually becomes an issue after the research has already been conducted. For example:

We’ve conducted the research above for Brand Y, including the prohibition that if the brand is Y, the product cannot be red. Looking at the results, it becomes clear that Brand X’s red product is much preferred over their blue and black products. The VP of Brand Y would like to know what the impact of offering a Brand Y product in red would be.  Unfortunately, because we did not test a red Brand Y product, we are unable to use our conjoint data to answer the VP’s question.

In general, it is best to be extremely conservative about using prohibitions—use them sparingly and avoid them where possible. 

Pitfall #3: Not Taking Advantage of the Simulator

While the first two pitfalls are focused on conjoint design, the final pitfall is about the application of conjoint results. Once the data from the conjoint analysis has been analyzed, it can be used to simulate virtually any combination of the features tested and predict the impact that different combinations will have on customer decision-making...which is just one of the reasons conjoint analysis is such a valuable tool. All of that predictive power can be distilled into a conjoint simulator that anyone—from researchers to marketers to C-suite executives—can use and interpret.
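To show what a simulator does under the hood, here is a minimal share-of-preference sketch that applies the logit rule to made-up part-worth utilities for two attributes. Real simulators typically work from respondent-level utilities across many more attributes, so this is only a toy illustration of the mechanics, not the delivered tool.

```python
import numpy as np

# Made-up part-worth utilities for two attributes: brand and price.
partworths = {
    "brand": {"Brand X": 0.6, "Brand Y": 0.2},
    "price": {"$10": 0.5, "$12": 0.0, "$15": -0.4},
}

def total_utility(profile):
    """Sum the part-worths for a product defined as {attribute: level}."""
    return sum(partworths[attr][level] for attr, level in profile.items())

# A "what-if" market scenario: which products compete, and at what price?
scenario = {
    "Brand X at $12": {"brand": "Brand X", "price": "$12"},
    "Brand Y at $10": {"brand": "Brand Y", "price": "$10"},
}

utilities = np.array([total_utility(p) for p in scenario.values()])
shares = np.exp(utilities) / np.exp(utilities).sum()  # logit share-of-preference rule

for name, share in zip(scenario, shares):
    print(f"{name}: {share:.1%} share of preference")
```

Changing a level in the scenario (say, pricing Brand X at $10) and rerunning is exactly the kind of “what-if” question a simulator answers in seconds.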

At CMB, the clients I’ve seen benefit most from conjoint analysis are the clients that take full advantage of the simulators we deliver, rather than simply relying on the scenarios created for reporting. Once you receive a conjoint simulator, I recommend the following:

  1. Distribute copies of the simulator to all key stakeholders.
  2. Have the simulator available when presenting the results of your study, and budget time in the meeting to run “what-if” scenarios then and there. This can allow you to leverage the knowledge in the room in real time, potentially leading to practical and informed conclusions.
  3. Continue to use your simulator to support decision-making even after the study is complete, using new information to inform the simulations you run. A well-designed conjoint study will continue to have value long after your project closes.

Liz is a member of the Analytics Team at CMB, and she can’t wait to hear your research questions!

Topics: Advanced Analytics, Research Design

Be Aware When Conducting Research Among Mobile Respondents

Posted by Julie Kurd

Tue, Oct 28, 2014


Are you conducting research among mobile respondents yet? Autumn is conference season, and 1,000 of us just returned from IIR’s The Market Research Event (TMRE) conference where we learned, among other things, about research among mobile survey takers. Currently, only about 5% of the market research industry spend is for research conducted on a smartphone, 80% is online, and 15% is everything else (telephone and paper-based). Because mobile research is projected to be 20% of the industry spend in the coming years, we all need to understand the risks and opportunities of using mobile surveys.  

Below, you’ll find three recent conference presentations that discussed new and fresh approaches to mobile research as well as some things to watch out for if you decide to go the mobile route. 

1. At IIR TMRE, Anisha Hundiwal, the Director of U.S. Consumer and Business Insights for McDonald’s, and Jim Lane from Directions Research Inc. (DRI) did not disappoint. They co-presented the research they had done to understand the strengths of half a dozen national and regional coffee brands, including Newman’s Coffee (the coffee that McDonald’s serves), across roughly 48 brand attributes. While they did share some compelling results, Anisha and Jim’s presentation primarily focused on the methodology they used. Here is my paraphrase of the approach they took:

  • They used a traditional 25-minute, full-length online study among computer/laptop respondents who met the screening criteria (U.S. and Europe, age, gender, etc.), measuring a half dozen brands and approximately 48 brand attributes. They then analyzed the results of the full-length study and conducted a key driver analysis.
  • Next, they administered the study using a mobile app for mobile survey takers among similar respondents who met the same screening criteria. They also dropped the survey length to 10 minutes, tested a narrower set of brands (3 instead of 6), and winnowed the attributes from ~48 to ~14. They made informed choices about which attributes to include based on their key driver analysis (key drivers of overall equity, and I believe I heard them say they added in some attributes that were highly polarizing); see the sketch after this list for what a simple key driver analysis might look like.
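A key driver analysis, in its simplest form, relates each attribute rating to an overall measure and keeps the attributes with the strongest relationships. The sketch below is a generic, correlation-based illustration with invented data, not the actual DRI/McDonald’s model.

```python
import pandas as pd

# Invented attribute ratings: rank attributes by how strongly they relate to an
# overall measure (here, overall brand equity), then keep the top drivers for the
# shorter mobile questionnaire.
ratings = pd.DataFrame({
    "overall_equity": [8, 6, 9, 4, 7, 5, 8, 3],
    "tastes_fresh":   [9, 6, 9, 5, 7, 4, 8, 3],
    "good_value":     [7, 5, 8, 6, 6, 5, 7, 4],
    "convenient":     [6, 7, 7, 6, 8, 6, 6, 5],
})

attributes = [c for c in ratings.columns if c != "overall_equity"]

# Simple correlation-based ranking (regression-based key driver models are also common).
drivers = ratings[attributes].corrwith(ratings["overall_equity"]).sort_values(ascending=False)
print(drivers)

top_attributes_for_mobile = drivers.head(2).index.tolist()
print("Keep for the short mobile survey:", top_attributes_for_mobile)
```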

Then, they compared the mobile respondent results to the traditional online survey results. Anisha and Jim discussed key challenges we all face as we begin to adapt to smartphone respondent research. For example, they tinkered with rating scales and slider bars, starting the slider at the far left (0 on a 0-100 rating scale) for some respondents and at the midpoint for others to see if the results would differ. While the overall brand results were about the same, respondents used different sections of the rating scale, which they reported made it hard to compare detailed results between online and mobile. Finally, they reported that the winnowed attribute and brand lists made the insights less rich than the online survey results.

2. At the MRA Corporate Researcher’s conference in September, Ryan Backer, Global Insights for Emerging Tech at General Mills, also very clearly articulated several early learnings in the emerging category of mobile surveys. He said that 80% of General Mills’ research team has conducted at least one smartphone respondent study. (Think about that and wonder out loud, “should I at least dip my toe into this smartphone research?”) He provided a laundry list of the challenges they faced and, like all true innovators, he was willing to share those challenges because doing so helps him continue to innovate. You can read a full synopsis here.

3. Chadwick Martin Bailey was a finalist for the NGMR Disruptive Innovation Award at the IIR TMRE conference.  We partnered with Research Now for a presentation on modularizing surveys for mobile respondents at an earlier IIR conference and then turned the presentation into a webinar. CMB used a modularized technique in which a 20 minute survey was deconstructed into 3 partial surveys with key overlaps. After fielding the research among mobile survey takers, CMB used some designer analytics (warning, probably don’t do this without a resident PhD) to ‘stitch’ and ‘impute’ the results. In this conference presentation turned webinar, CMB talks about the pros and cons of this approach.
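As a rough illustration of the general idea behind modularizing (and emphatically not CMB’s actual “stitching” methodology), each respondent answers a core module plus a subset of the remaining blocks, and the unseen blocks are then imputed so the dataset can be analyzed as a whole. The toy sketch below uses a generic model-based imputer on invented data.

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(7)

# Toy data: everyone answers module A (core questions); modules B and C are each
# skipped by a random subset of respondents, leaving gaps to fill.
n = 200
df = pd.DataFrame({
    "mod_a_q1": rng.integers(1, 6, n).astype(float),
    "mod_b_q1": rng.integers(1, 6, n).astype(float),
    "mod_c_q1": rng.integers(1, 6, n).astype(float),
})
df.loc[rng.random(n) < 0.5, "mod_b_q1"] = np.nan
df.loc[rng.random(n) < 0.5, "mod_c_q1"] = np.nan

# "Stitch" the modules back together by imputing unseen answers from the questions
# respondents did see. Real applications use far richer models and validation checks.
imputer = IterativeImputer(random_state=0)
stitched = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)

print(df.isna().sum())        # gaps before stitching
print(stitched.isna().sum())  # no gaps after imputation
```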

Conferences are a great way to connect with early adopters of new research methods. So, when you’re considering adopting new research methods such as mobile surveys, allocate time to see what those who have gone before you have learned!

Julie blogs for GreenBook, ResearchAccess, and CMB.  She’s an inspired participant, amplifier, socializer, and spotter in the twitter #mrx community so talk research with her @julie1research.

Topics: Data Collection, Mobile, Research Design, Data Integration, Conference Insights

Qualitative, Quantitative, or Both? Tips for Choosing the Right Tool

Posted by Ashley Harrington

Wed, Aug 06, 2014

In market research, the rivalry between qualitative and quantitative research can occasionally feel like the Red Sox vs. the Yankees. You can’t root for both, and you can’t just “like” one. You’re very passionate about your preference. But in many cases, this can be problematic. For example, using a quantitative mindset or tactics in a qualitative study (or vice versa) can lead to inaccurate conclusions. Below are some examples of this challenge—one that can happen throughout all phases of the research process:

Planning

Clients will occasionally request that market researchers use a particular methodology for an engagement. We always explore these requests further with our clients to ensure there isn’t a disconnect between the requested methodology and the problem the client is trying to solve.

For example, a bank* might say, “The latest results from our brand tracking study indicate that customers are extremely frustrated by our call center and we have no idea why. Let’s do a survey to find out.”

Because the bank has no hypotheses about the cause of the issue, moving forward with their survey request could lead to designing a tool with (a) too many open-ended questions and (b) questions/answer options that are no more than wild guesses at the root of the problem, which may or may not jibe with how consumers actually think and feel.

Instead, qualitative research could be used to provide a foundation of preliminary knowledge about a particular problem, population, and so forth. Ultimately, that knowledge can be used to help inform the design of a tool that would be useful.

Questionnaire Design

For a product development study, a software company* asks to add open-ended questions to a survey, such as: “What would make you more likely to use this software?” or “What do you wish the software could do that it can’t do now?”

Since most respondents are not engineers or product designers, these questions can be difficult to answer. Open-ended questions like these are likely to yield a lot of not-so-helpful “I don’t know”-type responses rather than specific enhancement suggestions.

Instead of squandering valuable real estate on a question that isn’t likely to yield helpful data, a qualitative approach could allow respondents to react to ideas at a more conceptual level, bounce ideas off of each other or a moderator, or take some time to reflect on their responses. Even if the customer is not an R&D expert, they may have a great idea that just needs a bit of coaxing via input and engagement with others.

Analysis and Reporting

In reviewing the findings from an online discussion board, a client at a restaurant chain* reviews the transcripts and states, “85% of participants responded negatively to our new item, so we need to remove it from our menu.”

Since findings from qualitative studies are not statistically projectable, applying quantitative techniques (e.g., descriptive statistics and frequencies) to them is not ideal, as it implies a level of precision the findings can’t support. Further, it would not be cost-effective to recruit and conduct qualitative research with a group large enough to be projectable onto the general population.
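A quick back-of-the-envelope calculation shows why. Suppose (hypothetically) the discussion board had about 20 participants and we treated the 85% figure as if it came from a random sample, which it is not; even then, the margin of error around that number would be enormous.

```python
import math

# Hypothetical back-of-the-envelope: treat 85% negative from ~20 participants as if
# it were a random-sample estimate (it isn't) and compute the 95% margin of error.
p, n = 0.85, 20
margin_of_error = 1.96 * math.sqrt(p * (1 - p) / n)

print(f"85% +/- {margin_of_error:.0%}")  # roughly 85% +/- 16 percentage points
```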

Rather than attempting to quantify the findings in strictly numerical terms, qualitative data should be thought of as more directional in terms of overall themes and observable patterns.

At CMB, we root for both teams. We believe both produce impactful insights, often through a hybrid approach, and that the most meaningful insights come from choosing the approach or approaches best suited to the problem our client is trying to solve. However, being a Boston-based company, we can’t say that we’re nearly this unbiased when it comes to the Red Sox versus the Yankees.

*Example (not actual)

Ashley is a Project Manager at CMB. She loves both qualitative and quantitative equally and is not knowledgeable enough about sports to make any sports-related analogies more sophisticated than the Red Sox vs. the Yankees.


Topics: Methodology, Qualitative Research, Research Design, Quantitative Research