WELCOME TO OUR BLOG!

The posts here represent the opinions of CMB employees and guests—not necessarily the company as a whole. 


To Infinity and Beyond: Why Going the Extra Mile Pays Off

Posted by Hannah Jeton

Fri, Jul 10, 2015

CMB's portfolio is pretty expansive: we do research in a variety of industries and work with many different types of consumers—from insurance consumers to techies to medical experts to small business owners. We know how to get answers, and, more importantly, we are experts in getting to know our audiences. We learn more every day from our work, and we're always willing to go above and beyond for our clients.

I recently worked on a project with a kids' media company that struck me as new, exciting, and something I thought I would never come across unless I worked at a boutique kids-only research firm—I couldn't wait to dive in! Our client wanted to understand kids today and segment the market beyond age and gender. The design, methodology, and project planning came naturally. However, when we reviewed the wording of the questionnaire, we knew we would have to step into our "kid" shoes and adjust accordingly.

Our approach to capturing the appropriate wording and diction was twofold. First, we did a series of in-depth interviews (IDIs), which uncovered ways to speak to various types of kids—leaving no child behind. (For example, we learned that there's a group of kids out there who wish they had the superpower of being able to shoot flames out of their backsides—please note this was selected over being invisible, being able to fly, and other "more appropriate" powers.) Upon completing the IDIs, we felt confident that we could make our lists exhaustive and kid-friendly, so we moved forward and programmed the survey.

Our next step was a round of pre-testing: we observed while kids and parents went through the programmed survey together. This allowed us to see which questions worked, which ones didn't, which ones we had to further "gamify," which ones had responses that were too similar, and which ones were confusing to kids. A maximum difference (MaxDiff) exercise is just one example of how the pre-testing helped us. We ran a forced-choice task asking kids to select between two descriptors, such as smart vs. pretty, popular vs. famous, and pretty vs. popular. As we watched our pre-testers go through this process, we overheard a common sentiment: "why can't I be both?"—an indicator that forced-choice exercises are not a great method to use when doing research with kids. Thus, the pre-testing further refined our design into a clear and engaging program that both kids and parents had fun doing together.

You may be asking yourself: why go the extra mile? Well, because we knew it would pay off, and it did! These extra steps allowed us to create an extensive segmentation that moved far beyond age and gender (the average age across the 7 segments ranged from 8.2 to 8.6 years old) and had an 85% classification rate on the typing tool. This project was unique and adorable, but it was also a wonderful learning opportunity. It emphasized the importance of getting to know your audience and proved that upfront research with experts can go a long way before you head into the field.

Hannah is a Project Manager on the TEM team and considers herself closer to being a kid than to having one. Her superpower of choice is omnilingualism.

Watch our recent webinar with Research Now to hear the results of our recent self-funded Consumer Pulse study that leveraged passive mobile behavioral data and survey data simultaneously to reveal insights into the current Mobile Wallet industry in the US.

Watch Now!

Topics: research design, research with kids

Better Demographics = Better Insights

Posted by Eliza Novick

Thu, Jun 25, 2015

There is a strong belief that gender identity can be used to predict behavior in the marketplace, and we see evidence of this belief in advertising every day (and we also regularly poke fun at this idea - see the video below). Despite this, the standard approach to collecting information about gender and behavior often lacks the depth and complexity necessary to reach meaningful insights around gender identity. How can we fix this? One way forward is to incorporate social science into our questionnaire design.

 

There’s a large body of evidence from social science research that indicates social identities, like gender, can have concrete economic implications for people belonging to certain groups. Gender is not only an expression of individual identity, but is also negotiated on a group level as we practice and enforce patterns of hierarchical social, political, and economic relationships (including work and family life). So, while one woman’s social, political, and/or economic profiles may deviate from the profiles of women as a group, she’s still subject to the systematic opportunities and barriers that these group profiles represent.

At CMB, we often leverage social science in questionnaire design to elicit responses that most closely reflect the market. As an industry, we could (and should) go further in the way we collect demographic information. For example, respondents are typically allowed to select only one employment status from a list of several options: employed part time, employed full time, full-time homemaker, full-time student, retired, or unemployed. From the social science perspective, this question is problematic because it ignores the fact that respondents may fall into more than one category and that women are more likely than men to experience overlap in these categories in their lifetime. A question like this might produce compromised data, particularly for respondents who are young, female, and/or low-income. Another example is marital status: is the marketplace behavior of a same-sex unmarried couple categorically different from that of a couple in a traditional marriage? Depending on the industry, the answer may vary, but with a few easy questionnaire tweaks, we can capture that information.

From segmentation to optimization, demographic information is often a critical part of the analyses that solve our clients’ business challenges. But our answers to their problems are only as good as the questions our surveys are asking. Revisiting demographic collection is an easy update that goes a long way towards generating higher quality data, making better evidence-based recommendations, and pushing businesses forward.  

Eliza Novick is an Associate Researcher at CMB. Her favorite Boston attraction is the New England Aquarium, particularly the Edge of the Sea exhibit where you can pick up clams and starfish. 

Want to know the latest on barriers and opportunities for the next generation of mobile payment providers?

DOWNLOAD THE FREE REPORT

Topics: data collection, research design

99 Problems, but Project Execution Ain't One

Posted by Cara Lousararian

Wed, Mar 25, 2015

After nearly a decade working on highly complex and strategic research projects, I've learned that the one thing you can count on when dealing with massive amounts of data is Murphy's Law—anything that can go wrong, will go wrong. No matter how much planning we do (and we take planning very seriously), the nature of market research means there's bound to be a hiccup or two along the way.

One of the best ways to deal with Murphy's Law is to accept that issues will arise but to make sure they don't get in the way of the end goal—actionable insights. At CMB, our ability to seamlessly execute projects hinges on our capacity to adjust and course correct (when needed) to keep things on track. We put a lot of time and preparation into building solid project plans, focusing on business decisions, and conducting stakeholder interviews, but we also place a lot of emphasis on hiring and training strong problem solvers. We do this because we know that even the best-laid plans can go awry, which is why it's important to manage problems proactively. For example, CMB firmly believes in conducting stakeholder interviews at the beginning of nearly all research engagements. This allows us to proactively re-shape and re-think the questionnaire design based on what we're hearing from stakeholders, which helps prevent getting to the final presentation and delivering insights that aren't relevant or usable for the key stakeholders.

Even the Patriots, as successful as they were this season and in the Super Bowl, run into problems in each game they play, regardless of whether they're playing the worst or the best team in the league. If you read Peter King's Monday Morning Quarterback column the day after the Super Bowl, you'll remember that he highlighted Bill Belichick's pre-Super Bowl meeting with his staff. Josh McDaniels, the Patriots' offensive coordinator, summarized the meeting and said that Bill's main message was this: "This game is no different than any other one. It's a 60-minute football game, and whatever issues we have, let's make sure we correct them, coach them, and fix them. That's our job." During that meeting, McDaniels wrote two notes on his game play clipboard: "adjust" and "correct problems and get them fixed." Going into the game with those mantras was a reminder that the game is dynamic, and even the best-laid plans need to be adjusted throughout the course of play.

While we can’t rely on Tom Brady, our approach to research engagements is no different. We encounter complex challenges day in and day out, and as our clients' needs change, we continue to think creatively and provide new and better solutions. When working with CMB, you can feel confident that we're putting together a solid project approach while simultaneously planning for the problems that may lie ahead. We might not be the Patriots, but we’re champions at execution just the same.

Cara is a Senior Research Manager. She enjoys spending time with her husband and dog, and she is STILL reveling in the "high" from the Patriots Super Bowl win.

Are YOU a strong problem solver? Come join our team!

Open Positions

Topics: Chadwick Martin Bailey, Boston, research design

Follow the Humans: Insights from CASRO’s Digital Research Conference

Posted by Jared Huizenga

Mon, Mar 09, 2015

I once again had the pleasure of attending the CASRO Digital Research Conference this year. It's one of the best conferences available to data collection geeks like me, and this year's presentations did not disappoint. Here are a few key takeaways.

1. The South shuts down when it snows. After a great weekend in Nashville following the conference, my flight was cancelled on Monday due to about an inch of snow and a little ice. Needless to say, I was happy to return to Boston and its nine feet of snow.

2. “Big data” is an antiquated term. Over the past few years, big data has been the big buzz in the industry. Much like we said goodbye to traditional “market research,” we can now say adios to “big data.” Good riddance. The term was vague at best. However, that doesn’t mean that the concept is going away. It’s simply being replaced by new, more meaningful terminology like “integrated data” and “multi-sourced data.” But one thing isn’t changing. . .

3. Researchers still don’t know what to do with all that data. What can I say about multi-sourced data that I haven’t already said many times over the past couple years? Clients still want it, and researchers still want to oblige. But this fact remains: adequate tools still do not exist to deliver meaningful integrated data in most cases. We have a long way to go before most researchers will be able to leverage all of this data to its full potential in a meaningful way for our clients.

4. There’s a lot more to mobile research than how a questionnaire looks on a screen. For the past three or four years, it seems like every year is going to be “the year of mobile” at these types of conferences. Because of this, I always attend the mobile-related sessions skeptically. When we talk about mobile, more often than not, the main concern is how the questionnaire will look on a mobile device. But mobile research is much more than that. One of the best things I heard at the conference this year was that researchers should “follow the humans.” This is true on so many levels. Of course, a person can respond to a questionnaire invitation on his/her mobile device, but so much of a person’s daily life, including behaviors and attitudes, is shaped by mobile. Welcome to the world of the ultra-informed consumer. I can confidently say that 2015 is most definitely the year of mobile! (I do, however, reserve the right to say the same thing again next year.)

5. Researchers need to think like humans. It's easy to get caught up in percentages in our world, and researchers sometimes lose sight of the human aspect of our industry. We like to think that millionaire CEOs are constantly checking their emails on their desktop computers, waiting for their next "opportunity" to take a 45-minute online questionnaire for a twenty-five cent reward. I attended sessions at the conference about gamification, how to make questionnaires more user-friendly, and how to make questionnaires more kid-friendly by adding voice-to-text and text-to-voice options. All of these things have the potential to ease the burden on research participants, and as an industry, we need to make that happen. We have a long way to go, but. . .

6. Now is the time to play catch-up with the rest of the world. Last year, I ended my recap by saying that change is happening faster than ever. I still think that's true about the world we live in. With all of the technological advances and new opportunities provided to us, it's an exciting time to be alive. However, I'm not sure I can honestly say that change is happening faster than ever when it comes to the world of research. I've been a part of this industry for a very fulfilling seventeen years, and sometimes my pride in the industry clouds my thinking. Let's face the facts. The truth is that, as an industry, we are lagging far behind as the world speeds by. Research techniques and tools are evolving at a very slow pace, and I don't see this changing in the near future. (In our defense, this is true for many industries and not only market research.) I still believe that those of us who are working to leverage the changing world we live in will be much better equipped for success than those who sit idly and watch the world fly by.

I’m still confident that my industry is primed and ready for significant and meaningful change—even if we sometimes take the path of a tortoise. As a weekend pitmaster, I know that low and slow is sometimes the best approach. The end result is what really counts.

Jared is CMB's Field Services Director and has been in the market research industry for seventeen years. When he isn't enjoying the exciting world of data collection, he can be found competing at barbecue contests as the pitmaster of the cooking team Insane Swine BBQ.

 

Topics: big data, mobile, research design, conference recap

5 Key Takeaways from The Quirk's Event

Posted by Jen Golden and Ashley Harrington

Thu, Mar 05, 2015

Last week, we spent a few days networking with and learning from some of the industry's best and brightest at The Quirk's Event. At the end of the day, a few key ideas stuck out to us, and we wanted to share them with you.

1. Insights need to be actionable: This point may seem obvious, but multiple presenters at the conference hammered it home. Corporate researchers are shifting from a primarily separate entity to a more consultative role within the organization, so they need to deliver insights that speak directly to business decisions (vs. passing along a 200-slide data dump). This mindset should flow through the entire lifespan of a project—starting with crafting a questionnaire that truly speaks to the business decisions that need to be made (and cuts out all the fluff that may be "nice to have" but is not actionable) all the way through thoughtful analysis and reporting. Taking this approach will help ensure final deliverables aren't left collecting dust and are instead used to drive engagement across the organization.

2. Allocate time and resources to socializing these insights throughout the organization: All too often, insightful findings are left sitting on a shelf when they have the potential to be useful across an organization. Several presenters shared creative approaches to socializing the data so that it lives long after the project has ended. From transforming a conference room with life-size cut-outs of key customer segments to creating an app employees can use to access data points quickly and on the go, researchers and their partners are getting creative in how they share the findings. The most effective researchers think about research results as a product to be marketed to their stakeholders.
 
3. Leverage customer data to help validate primary research: Most organizations have a plethora of data to work with, ranging from internal customer databases to secondary sources to primary research. These various sources can be leveraged to paint a full picture of the consumer (and to help validate findings). Etsy (a peer-to-peer e-commerce site) talked about comparing data collected from its customer database to its own primary research to see if what buyers and sellers said they did on the site aligned with what they actually did. For Etsy, past self-reported behaviors (e.g., number of purchases, number of times someone "favorites" a shop, etc.) aligned strongly with its internal database, but future behavior (e.g., likelihood to buy from Etsy in the future) did not. Future behaviors might not be something we can easily predict by asking directly in a survey, but that data could still be helpful as another way to identify customer loyalty or advocacy. A note of caution: if you plan on doing this data comparison, make sure the wording in your questionnaire aligns with what you plan on matching in your existing database. This ensures you're getting an apples-to-apples comparison.
 
4. Be cautious when comparing cross-country data: A multi-country study typically comes with a request for a "global overview" or cross-country comparison, but taking scores at face value can lead to inaccurate recommendations. Most researchers are aware of cultural biases such as extreme response (e.g., Brazilian respondents often rate higher on rating scales while Japanese respondents tend to rate lower) or acquiescence (e.g., Chinese respondents often have a propensity to want to please the interviewer), and these biases should be kept in the back of your mind when delving into the final data. Comparing scaled data directly between countries with very different rating tendencies could lead to falsely concluding that one country is underperforming. A better indication of performance is an in-country comparison to competitors or in-country trending data (see the sketch after these takeaways for a simple illustration).
 
5. Remember your results are only as useful as your design is solid: A large number of stakeholders invested in a study's outcome can lead to a project designed by committee, since each stakeholder will inevitably have different needs, perspectives, and even vocabularies. A presenter shared an example from a study that asked recent mothers, "How long was your baby in the hospital?" Some respondents thought the question referred to the baby's length, so they answered in inches. Others thought the question referred to the duration of the baby's hospital stay, so they answered in days. Therein lies the problem. Throughout the process, it's our job to ensure that all of the feedback and input from multiple stakeholders adheres to the fundamentals of good questionnaire design: clarity, answerability, ease, and lack of bias.
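To make the in-country comparison from takeaway #4 concrete, here is a minimal sketch in Python. The brand scores and competitive set are invented for illustration, and indexing against the in-country competitive average is just one simple way to normalize, not a prescribed method.

raw_scores = {
    "Brazil": {"Our Brand": 8.6, "Competitor A": 8.9, "Competitor B": 8.7},
    "Japan":  {"Our Brand": 6.4, "Competitor A": 5.9, "Competitor B": 6.0},
}

for country, brands in raw_scores.items():
    competitive_avg = sum(brands.values()) / len(brands)    # in-country benchmark
    index = brands["Our Brand"] / competitive_avg * 100     # 100 = parity with the in-country set
    print(f"{country}: raw {brands['Our Brand']:.1f}, in-country index {index:.0f}")

# Japan's raw score is lower than Brazil's, but relative to its own market
# "Our Brand" actually over-indexes there (roughly 105 vs. 98).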

Have you been to any great conferences lately and have insights to share? Tell us in the comments!

Jen is a Project Manager on the Tech practice who always has the intention to make a purchase on Etsy but never actually pulls the trigger.  

Ashley is a Project Manager on the FIH/RTE practice who has pulled the trigger on several Etsy items (as evidenced in multiple “vintage” tchotchkes and half-complete craft projects around her home).

Topics: big data, research design, conference recap

Deflategate and the Dangers of Convenience Sampling

Posted by Athena Rodriguez

Wed, Jan 28, 2015

The Patriots have landed in Phoenix for yet another Super Bowl, but there are still those who can't stop talking about "Deflategate." Yes, that's what some are calling the controversy surrounding those footballs, inflated to a perfectly legal 12.5 PSI, that lost air pressure due to changing atmospheric conditions and repeated Gronking* after touchdowns during the first half of the Pats-Colts showdown.

Here in Boston, we were shocked to turn on the TV and hear the terrible accusations. Were we watching and reading the same things as the accusers? Did those doubters not watch the press conferences (all three of them) where our completely ethical coach proclaimed his team’s innocence? Did they not understand that Belichick even conducted a SCIENCE EXPERIMENT? 

Or could it be simply that the doubters live outside of New England?


The chart above makes it pretty obvious—from Bangor to Boston, we just might have been hearing the voices of a lot more Pats fans. This is, in fact, a really simple illustration of the dangers of convenience sampling—a very common type of non-probability sampling.

Sure, it's a silly example, but as companies try to conduct research faster and cheaper, convenience sampling poses serious threats. Can you get 500 completes in a day? Yes, but there's a very good chance they won't be representative of the population you're looking for. Posting a link to your survey on Facebook or Twitter is fast and free, but whose voice will you hear, and whose will you miss?

I've heard it said that some information is better than none, but I'm not sure I agree. If you sample people who aren't in your target, they can lead you in the completely wrong direction. If you oversample a certain population (ahem, New Englanders), you can also suffer from a biased, non-representative sample.

Representative sampling is one of the basic tenets of survey research, but just because it's a simple concept doesn't mean we can afford to ignore it. Want your results to win big? Carefully review your game plan before kicking off data collection.

  • Sample Frame: Is the proposed sample frame representative of the target population?
    • Unless you are targeting a niche population. . .
      • online panel "click-throughs" should be census balanced
      • customer lists must be reflective of the target customers (if the population is all customers, do not use email addresses unless addresses exist for all customers or the exceptions are randomly distributed)
      • compare the final sample to the target population just to be sure
  • Selection: Does the selection process ensure that all potential respondents on the frame have an equal chance of being recruited throughout the data collection period?
    • To be sure, you should. . .
      • randomize all lists before recruiting
      • not fill quotas first
      • not focus on hard-to-reach respondents first
  • Data collection: Will the proposed data collection plan adversely affect sample quality?
    • Ask yourself:
      • Are fielding dates unusual (e.g., holiday, tax returns, Super Bowl, etc.)?
      • Is the schedule long enough to cover weekdays and weekends? Will it give procrastinators sufficient time to respond?
  • Structure: Will important subgroups have sufficient sample sizes if left to fall out naturally?
    • If not, set quotas. . .
      • Quota groups must be weighted back to their natural distribution before analysis or treated as an oversample and excluded from any analysis at the total level (see the sketch after this list).
  • Size: Is the proposed sample size sufficient?
    • We must always balance costs against sample size, but, at the same time, we must recognize that we need minimum sample sizes for certain objectives.
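As a concrete illustration of that weighting note, here is a minimal Python sketch. It assumes pandas is available, and the 50/50 quota, the 20/80 natural split, and the purchase-intent scores are all made up for illustration.

import pandas as pd

# Hypothetical completes: a quota forced a 50/50 split on a subgroup that is
# naturally 20/80 in the population.
df = pd.DataFrame({
    "segment": ["A"] * 500 + ["B"] * 500,
    "purchase_intent": [7] * 500 + [5] * 500,   # placeholder metric
})

natural_share = {"A": 0.20, "B": 0.80}                      # target (population) distribution
sample_share = df["segment"].value_counts(normalize=True)   # achieved (quota) distribution

# Weight each respondent by population share / sample share for their segment
df["weight"] = df["segment"].map(natural_share) / df["segment"].map(sample_share)

# The weighted total-level mean now reflects the natural mix, not the quota mix
weighted_mean = (df["purchase_intent"] * df["weight"]).sum() / df["weight"].sum()
print(round(weighted_mean, 2))   # 5.4 with these illustrative numbers, vs. 6.0 unweighted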

Are there times you might need some quick and dirty (un-Patriot like) results? Absolutely. But, when you’re playing for big insights, you need the right team.

*spiking the football after a touchdown.

Athena Rodriguez is a Project Consultant at CMB. She’s a native Floridian, who’s looking forward to the end of the Blizzard of 2015 and the start of Sunday’s game!

Topics: Boston, television, research design, digital media and entertainment research

Conjoint Analysis: 3 Common Pitfalls and How to Avoid Them

Posted by Liz White

Thu, Jan 08, 2015

If you work in marketing or market research, chances are you're becoming more and more familiar with conjoint analysis: a powerful research technique used to predict customer decision-making relative to a product or service. We love conjoint analysis at CMB, and it's easy to see why. When conducted well, a conjoint study provides results that make researchers, marketers, and executives happy. These results:

  • Are statistically robust
  • Are flexible and realistic
  • Describe complex decision-making
  • Are easy to explain and understand

For these reasons, conjoint analysis is one of the premier tools in our analytical toolkit. (If you need a quick introduction or a refresher on conjoint analysis, I recommend Sawtooth Software's excellent video, which can be found here.) However, as with any analytical approach, conjoint analysis should be applied thoughtfully to realize maximum benefits. Below, I describe three of the most common pitfalls related to conjoint analysis and tips on how to avoid them.

Pitfall #1: Rushing the Design

This is the most common pitfall, but it's also the easiest one to avoid. As anyone who has conducted a conjoint study knows, coming up with the right design takes time. When planning the schedule for a conjoint analysis study, make sure to leave time for the following steps:

  • Identify your business objective, and work to identify the research questions (and conjoint design) that will best address that objective.
  • Brainstorm a full list of product features that you'd like to test. Collaborate with coworkers from various areas of your organization—including marketing, sales, pricing, and engineering as well as the final decision-makers—to make sure your list is comprehensive and up-to-date.
    • You may also want to plan for qualitative research (e.g., focus groups) at this stage, particularly if you're looking to test new products or product features. Qualitative research can prioritize what features to test and help to translate "product-speak" into language that customers find clear and meaningful.
  • If you're looking to model customer choices among a set of competitive products, collect information about your competitors' products and pricing.
  • Once all the information above is collected, budget time to translate your list of product features into a conjoint design. While conjoint analysis can handle complex product configurations, there's often work to be done to ensure the final design (a) captures the features you want to measure, (b) will return statistically meaningful results, and (c) won't be overly long or confusing for respondents.
  • Finally, budget time to review the final design. Have you captured everything you needed to capture? Will this make sense to your customers and/or prospective customers? If not, you may need to go back and update the design. Make sure you've budgeted for this as well.

Pitfall #2: Overusing Prohibitions

Most conjoint studies involve a conversation about prohibitions—rules about which features can be shown under certain circumstances. For example:

Say Brand X’s products currently come in red, blue, and black colors while Brand Y’s products are only available in blue and black. When creating a conjoint design around these products, you might create a rule that if the brand is X, the product could be any of the three colors, but if the brand is Y, the product cannot be red.

While it’s tempting to add prohibitions to your design to make the options shown to respondents more closely resemble the options available in the market, overusing prohibitions can have two big negative effects:

  1. Loss of precision when estimating the value of different features for respondents.
  2. Loss of flexibility for market simulations.

The first of these effects can typically be identified in the design phase and fixed by reducing the number of prohibitions included in a model. The second is potentially more damaging as it usually becomes an issue after the research has already been conducted. For example:

We've conducted the research above for Brand Y, including the prohibition that a Brand Y product cannot be red. Looking at the results, it becomes clear that Brand X's red product is much preferred over its blue and black products. The VP of Brand Y would like to know what the impact of offering a Brand Y product in red would be. Unfortunately, because we did not test a red Brand Y product, we are unable to use our conjoint data to answer the VP's question.

In general, it is best to be extremely conservative about using prohibitions—use them sparingly and avoid them where possible. 

Pitfall #3: Not Taking Advantage of the Simulator

While the first two pitfalls are focused on conjoint design, the final pitfall is about the application of conjoint results. Once the data from the conjoint analysis have been analyzed, they can be used to simulate virtually any combination of the features tested and predict the impact that different combinations will have on customer decision-making. . .which is just one of the reasons conjoint analysis is such a valuable tool. All of that predictive power can be distilled into a conjoint simulator that anyone—from researchers to marketers to C-suite executives—can use and interpret.
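To show what "running a scenario" in a simulator amounts to, here is a minimal Python sketch using the logit share-of-preference rule. The part-worth utilities, attributes, and products are invented for illustration; this is not how any delivered CMB simulator is necessarily built.

import numpy as np

# Hypothetical part-worth utilities (e.g., from a conjoint estimation)
partworths = {
    "brand": {"Brand X": 0.6, "Brand Y": 0.2},
    "color": {"red": 0.5, "blue": 0.1, "black": 0.0},
    "price": {"$20": 0.8, "$25": 0.3, "$30": -0.4},
}

def total_utility(product):
    """Sum the part-worths for a product defined as {attribute: level}."""
    return sum(partworths[attr][level] for attr, level in product.items())

# A "what-if" scenario: three competing configurations
scenario = {
    "Brand X, red, $25":   {"brand": "Brand X", "color": "red",   "price": "$25"},
    "Brand Y, blue, $20":  {"brand": "Brand Y", "color": "blue",  "price": "$20"},
    "Brand Y, black, $25": {"brand": "Brand Y", "color": "black", "price": "$25"},
}

utilities = np.array([total_utility(p) for p in scenario.values()])
shares = np.exp(utilities) / np.exp(utilities).sum()   # logit share-of-preference rule

for name, share in zip(scenario, shares):
    print(f"{name}: {share:.1%}")

Changing one level (say, dropping Brand Y's price) and re-running the scenario is exactly the kind of "what-if" exercise described below.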

At CMB, the clients I’ve seen benefit most from conjoint analysis are the clients that take full advantage of the simulators we deliver, rather than simply relying on the scenarios created for reporting. Once you receive a conjoint simulator, I recommend the following:

  1. Distribute copies of the simulator to all key stakeholders.
  2. Have the simulator available when presenting the results of your study, and budget time in the meeting to run “what-if” scenarios then and there. This can allow you to leverage the knowledge in the room in real time, potentially leading to practical and informed conclusions.
  3. Continue to use your simulator to support decision-making even after the study is complete, using new information to inform the simulations you run. A well-designed conjoint study will continue to have value long after your project closes.

Liz is a member of the Analytics Team at CMB, and she can’t wait to hear your research questions!

Topics: advanced analytics, research design

Be Aware When Conducting Research Among Mobile Respondents

Posted by Julie Kurd

Tue, Oct 28, 2014

mobile, cmb

Are you conducting research among mobile respondents yet? Autumn is conference season, and 1,000 of us just returned from IIR’s The Market Research Event (TMRE) conference where we learned, among other things, about research among mobile survey takers. Currently, only about 5% of the market research industry spend is for research conducted on a smartphone, 80% is online, and 15% is everything else (telephone and paper-based). Because mobile research is projected to be 20% of the industry spend in the coming years, we all need to understand the risks and opportunities of using mobile surveys.  

Below, you’ll find three recent conference presentations that discussed new and fresh approaches to mobile research as well as some things to watch out for if you decide to go the mobile route. 

1. At IIR TMRE, Anisha Hundiwal, the Director of U.S. Consumer and Business Insights for McDonald's, and Jim Lane from Directions Research Inc. (DRI) did not disappoint. They co-presented the research they had done to understand the strengths of half a dozen national and regional coffee brands, including Newman's Coffee (the coffee that McDonald's serves), on approximately 48 brand attributes. While they did share some compelling results, Anisha and Jim's presentation primarily focused on the methodology they used. Here is my paraphrase of the approach they took:

  • They used a traditional 25-minute, full-length online study among traditional computer/laptop respondents who met the screening criteria (U.S. and Europe, age, gender, etc.), measuring a half dozen brands and approximately 48 brand attributes. They then analyzed the results of the full-length study and conducted a key driver analysis.
  • Next, they administered the study using a mobile app for mobile survey takers among similar respondents who met the same screening criteria. They also dropped the survey length to 10 minutes, tested a narrower set of brands (3 instead of 6), and winnowed the attributes from ~48 to ~14. They made informed choices about which attributes to include based on their key driver analysis (key drivers of overall equity, and I believe I heard them say they added in some attributes that were highly polarizing).

Then, they compared the mobile respondent results to the traditional online survey results. Anisha and Jim discussed key challenges we all face as we begin to adapt to smartphone respondent research. For example, they tinkered with rating scales and slider bars, setting the slider's starting position at 0 on a 0-100 rating scale for some respondents and at the mid-point for others to see if the results would differ. While the overall brand results were about the same, the sections of the rating scales respondents used differed. Further, they reported that it was hard to compare detailed results between online and mobile because different parts of the rating scales were used in general. Finally, they reported that the winnowed attribute and brand lists made the insights less rich than the online survey results.

2. At the MRA Corporate Researcher's conference in September, Ryan Backer, Global Insights for Emerging Tech at General Mills, also very clearly articulated several early learnings in the emerging category of mobile surveys. He said that 80% of General Mills' research team has conducted at least one smartphone respondent study. (Think about that and wonder out loud, "should I at least dip my toe into smartphone research?") He provided a laundry list of the challenges they faced and, like all true innovators, he was willing to share those challenges because it helps him continue to innovate. You can read a full synopsis here.

3. Chadwick Martin Bailey was a finalist for the NGMR Disruptive Innovation Award at the IIR TMRE conference. We partnered with Research Now for a presentation on modularizing surveys for mobile respondents at an earlier IIR conference and then turned the presentation into a webinar. CMB used a modularized technique in which a 20-minute survey was deconstructed into 3 partial surveys with key overlaps (a simple sketch of the idea follows below). After fielding the research among mobile survey takers, CMB used some designer analytics (warning: probably don't do this without a resident PhD) to 'stitch' and 'impute' the results. In this conference presentation turned webinar, CMB talks about the pros and cons of this approach.
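For readers curious what "deconstructed into partial surveys with key overlaps" might look like mechanically, here is a minimal, purely hypothetical Python sketch of the module assignment step. The question blocks and the overlap pattern are invented, and the hard part, stitching and imputing the combined dataset, is deliberately left out.

import random

core_questions = ["Q1_demographics", "Q2_brand_usage"]      # asked of every respondent
blocks = {
    "A": ["Q3_attitudes", "Q4_features"],
    "B": ["Q4_features", "Q5_pricing"],                      # overlaps with block A
    "C": ["Q5_pricing", "Q3_attitudes"],                      # overlaps with blocks A and B
}

def assign_module(respondent_id):
    """Randomly assign a respondent one partial survey (core questions plus one block)."""
    rng = random.Random(respondent_id)                        # reproducible assignment per respondent
    block = rng.choice(sorted(blocks))
    return core_questions + blocks[block]

print(assign_module(101))
# e.g. ['Q1_demographics', 'Q2_brand_usage', 'Q4_features', 'Q5_pricing']

The shared questions across blocks are what later make it possible to model relationships between items no single respondent answered together.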

Conferences are a great way to connect with early adopters of new research methods. So, when you’re considering adopting new research methods such as mobile surveys, allocate time to see what those who have gone before you have learned!

Julie blogs for GreenBook, ResearchAccess, and CMB. She's an inspired participant, amplifier, socializer, and spotter in the Twitter #mrx community, so talk research with her @julie1research.

Topics: data collection, mobile, research design, data integration, conference recap

Qualitative, Quantitative, or Both? Tips for Choosing the Right Tool

Posted by Ashley Harrington

Wed, Aug 06, 2014

In market research, it can occasionally feel like the rivalry between qualitative and quantitative research is like the Red Sox vs. the Yankees. You can't root for both, and you can't just "like" one. You're very passionate about your preference. But in many cases, this can be problematic. For example, using a quantitative mindset or tactics in a qualitative study (or vice versa) can lead to inaccurate conclusions. Below are some examples of this challenge—one that can happen throughout all phases of the research process:

Planning

Clients will occasionally request that market researchers use a particular methodology for an engagement. We always explore these requests further with our clients to ensure there isn’t a disconnect between the requested methodology and the problem the client is trying to solve.

For example, a bank* might say, “The latest results from our brand tracking study indicate that customers are extremely frustrated by our call center and we have no idea why. Let’s do a survey to find out.”

Because the bank has no hypotheses about the cause of the issue, moving forward with their survey request could lead to designing a tool with (a) too many open-ended questions and (b) questions/answer options that are no more than wild guesses at the root of the problem, which may or may not jibe with how consumers actually think and feel.

Instead, qualitative research could be used to provide a foundation of preliminary knowledge about a particular problem, population, and so forth. Ultimately, that knowledge can be used to help inform the design of a tool that would be useful.

Questionnaire Design

For a product development study, a software company* asks to add an open-ended question to a survey: “What would make you more likely to use this software?” or “What do you wish the software could do that it can’t do now?”

Since most respondents are not engineers or product designers, these questions can be difficult to answer. Open-ended questions like these are likely to yield a lot of not-so-helpful "I don't know"-type responses rather than specific enhancement suggestions.

Instead of squandering valuable real estate on a question that isn't likely to yield helpful data, a qualitative approach could allow respondents to react to ideas at a more conceptual level, bounce ideas off of each other or a moderator, or take some time to reflect on their responses. Even if the customer is not an R&D expert, they may have a great idea that just needs a bit of coaxing via input and engagement with others.

Analysis and Reporting

A client at a restaurant chain* reviews the transcripts from an online discussion board and states, "85% of participants responded negatively to our new item, so we need to remove it from our menu."

Since findings from qualitative studies are not statistically projectable, applying quantitative techniques (e.g., descriptive statistics and frequencies) to them is not ideal, as it implies a level of precision that isn't really there. Further, it would not be cost-effective to recruit and conduct qualitative research with a group large enough to be projectable onto the general population.

Rather than attempting to quantify the findings in strictly numerical terms, qualitative data should be thought of as more directional in terms of overall themes and observable patterns.

At CMB, we root for both teams. We believe both produce impactful insights, and that often means using a hybrid approach. We believe the most meaningful insights come from choosing the approach or approaches best suited to the problem our client is trying to solve. However, being a Boston-based company, we can't say that we're nearly this unbiased when it comes to the Red Sox versus the Stankees (er, Yankees).

*Example (not actual)

Ashley is a Project Manager at CMB. She loves both qualitative and quantitative equally and is not knowledgeable enough about sports to make any sports-related analogies more sophisticated than the Red Sox vs. the Yankees.

Click the button below to subscribe to our monthly eZine and get the scoop on the latest webinars, conferences, and insights. 

Subscribe Here

Topics: methodology, qualitative research, research design, quantitative research

Discrete Choice and the Path to a Car Purchase

Posted by Heidi Hitchen

Wed, Jun 11, 2014


One chilly night in February, I was heading home from a friend's birthday festivities when my car just stopped working. I had just enough oomph and momentum from the hill I was on to pull off to the side of the road. I found myself stranded in the middle of the city, waiting for a tow truck until 4 AM and vowing to myself the whole time that I wouldn't deal with this clunker anymore. It was time for a new car.

During the next two weeks without wheels, I did my research on the Internet and made my way over to a local Toyota dealership. I walked in knowing exactly what I wanted: a 2014 green Corolla. I even knew the various payment and financing options I was prepared for. And wouldn't you know it—I ended up getting exactly what I said I wanted.

As easy as that sounds, my path wasn't straight to the doors of the Toyota dealership. I had gone through a variety of different makes, models, financing options, and colors. At the end of researching each car, I asked myself not only if I would really buy this car, but also if I would truly be happy with it. It wasn't until I asked myself this question for the first time that I realized I was essentially creating my own Discrete Choice Measurement (DCM), specifically a Dual-Choice DCM (DCDC).

DCM is a technique that presents several configurations of product features to respondents and asks them to pick which configuration they would most prefer. In a Dual-Choice DCM, a follow-up question is asked to determine whether the respondent would actually buy the preferred package. This second question is crucial—I might choose a Lamborghini, but there's little chance (OK, no chance) that I will actually purchase one.
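As a purely illustrative sketch of how that second question changes what gets counted, here is one hypothetical way dual-choice responses could be tallied in Python. The tasks, options, and data structure are invented, not an actual CMB design.

from dataclasses import dataclass

@dataclass
class DualChoiceResponse:
    task_id: int
    chosen_option: str      # the configuration the respondent preferred
    would_buy: bool         # follow-up: would you actually purchase it?

responses = [
    DualChoiceResponse(1, "2014 Corolla, green, financed", True),
    DualChoiceResponse(2, "Lamborghini, orange, paid in cash", False),
    DualChoiceResponse(3, "2014 Corolla, blue, leased", True),
]

# A standard DCM would credit every chosen option; the dual-choice follow-up
# only credits choices the respondent confirmed they would actually buy.
preferred = len(responses)
would_actually_buy = sum(r.would_buy for r in responses)
print(f"Preferred options: {preferred}, confirmed purchases: {would_actually_buy}")   # 3 vs. 2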

Dual-Choice DCM scenarios are the gold standard for product development work and can lend more accurate insights into a buying scenario by:

  • more closely representing a consumer’s purchase decision
  • helping us better understand consumer preferences
  • more accurately reflecting market potential
  • dissecting the product into pieces, which allows us to measure price sensitivity and willingness to pay for the product as a whole as well as individual components
  • simulating the market interest in thousands of potential product packages for product optimization as the analysis examines how a product can be changed to perform better by identifying (and tweaking) individual product features that affect purchase decisions

Being able to produce more realistic results is obviously an important part of any research, and it just goes to show that DCMs can truly help with any decision-making process. Running a DCM in my head prior to purchasing my car was truly helpful, so it's no surprise that our clients often rave about the DCMs and Dual-Choice DCMs in our analytics program.

Heidi is an Associate Researcher who graduated from Quinnipiac University with a dual-degree in Marketing and Over-Involvement. After realizing she lacks hobbies now that student organizations don’t rule her free time, Heidi is taking sailing classes and looks forward to smooth sailing on the Charles River by the end of the summer.

Want to know more about our advanced analytic techniques, including our innovative Tri-Choice Approach? Let us know and we’ll be happy to talk through how we choose the right techniques to uncover critical consumer insights. Contact us.

Topics: advanced analytics, research design