WELCOME TO OUR BLOG!

The posts here represent the opinions of CMB employees and guests—not necessarily the company as a whole. 

Subscribe to Email Updates

A Data Dominator’s Guide to Research Design…and Dating

Posted by Talia Fein

Wed, Jan 20, 2016

I recently went on a first date with a musician. We spent the first hour or so talking about our careers: the types of music he plays, the bands he’s been in, how music led him to the job he has now, and, of course, my unwavering passion for data. Later, when there was a pause in the conversation, he said: “so, do you like music?”

Um. . .how was I supposed to answer that? There was clearly only one right answer (“yes”) unless I really didn’t want this to go anywhere. I told him that, and we had a nice laugh. . .and then I used it as a teaching opportunity to explain one of my favorite market research concepts: Leading Questions.

According to Tull and Hawkins’ Marketing Research: Measurement and Method, a Leading Question is “a question that suggests what the answer should be, or that reflects the researcher’s point of view.” Their example: “Do you agree, as most people do, that TV advertising serves no useful purpose?”

In writing good survey questions, we need to give enough information for the respondent to fully answer the question, but not so much that we give away our own opinions or the responses we expect to hear. This is especially important in opinion research and political polling, where slight changes in word choice can create bias and impact the results. For example, in their 1937 poll, Gallup asked, “Would you vote for a woman for President if she were qualified in every other respect?” This implies that simply being a woman is a disqualification for President. (Just so you know: 33% answered “Yes.”) Gallup has since changed the wording—“If your party nominated a generally well-qualified person for President who happened to be a woman, would you vote for that person?”—and the question is included in a series of questions in which “woman” is replaced with other descriptors, such as Catholic, Black, Muslim, gay, etc. Of course, times have changed, and we can’t know exactly how much of the bias was due to the leading nature of the question, but 92% answered “Yes” as recently as June 2015.

The ordering of questions is just as important as the words we choose for specific questions. John Martin (Cofounder and Chairman of CMB, 1984-2014) taught us the importance—and danger—of sequential bias. In writing a good questionnaire, we’re not just spitting out a bunch of questions and collecting responses—we’re taking the respondent through a 15 (or 20 or 30) minute journey, trying to get his/her most unbiased, real opinions and preferences. For example, if we start a questionnaire by showing a list of brands and asking which ones are fun and exciting, and then ask unaided which brands respondents know of, we’re not going to get very good data—just as we might get skewed results if we ask a person whether he/she likes music after talking for an hour about the importance of music in our own lives.

One common rule when it comes to questionnaire ordering is to ask unaided questions before aided questions. Otherwise, the aided questions would remind respondents of possible options—and inflate their unaided answers. A couple more rules I like to keep in mind:

  1. Start broad, then go narrow: talk about the category before the specific brand or product.

Remember that the respondent is in the middle of a busy day at work or has just put the kids to bed and has other things on his/her mind. The introductory sections of a questionnaire are as much about screening respondents and gathering data as they are about getting the respondent thinking about the category (rather than what to make for the kids’ lunch tomorrow).

  2. Think about what you have already told the respondent: like a good date, the questionnaire should build.

In one of my recent projects, after determining awareness of a product, we measured “concept awareness” by showing a short description of the product to those who had said they were NOT aware of it and then asking them if they had heard of the concept. Later on in the questionnaire, we asked respondents what product features they were familiar with. For respondents who had seen the concept awareness question (i.e., those who hadn’t been fully aware), we removed the product features that had been mentioned in the description (of course, the respondent would know those). A small sketch of this routing logic appears after these rules.

  3. When asking unaided awareness questions, think about how you’re defining the category.

“What Boston-based market research companies founded in 1984 come to mind?” might be a little too specific. A better way of wording this would simply be: “What market research companies come to mind?” Usually thinking about the client’s competitive set will help you figure out how to explain the category.
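To make the second rule concrete, here’s a tiny sketch of that routing logic. The feature names are invented, and in practice this logic lives in the survey platform rather than in Python:

    # Hypothetical feature list; the first two were named in the concept description.
    ALL_FEATURES = ["feature_a", "feature_b", "feature_c", "feature_d"]
    IN_DESCRIPTION = {"feature_a", "feature_b"}

    def features_to_ask(saw_concept_description: bool) -> list:
        # Respondents who read the description already "know" those features,
        # so asking about them would only measure what we just told them.
        if saw_concept_description:
            return [f for f in ALL_FEATURES if f not in IN_DESCRIPTION]
        return ALL_FEATURES

    print(features_to_ask(True))   # ['feature_c', 'feature_d']
    print(features_to_ask(False))  # all four features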

So, remember: in research, just as in dating, what we put out (good survey questions and positive vibes) influences what we get back.

Talia is a Project Manager on CMB’s Technology and eCommerce team. She was recently named one of Survey Magazine’s 2015 Data Dominators and enjoys long walks on the beach.

We recently did a webinar on research we conducted in partnership with venture capital firm Foundation Capital. This webinar will help you think about Millennials and their investing, including specific financial habits and the attitudinal drivers of their investing preferences.

Watch Here!

Topics: methodology, research design, quantitative research

My Data Quality Obsession

Posted by Laurie McCarthy

Tue, Jan 12, 2016

Yesterday I got at least 50 emails, and that doesn’t include what went to my spam folder—at least half of those went straight in the trash. So, I know what a challenge it is to get a potential respondent to even open an email that contains a questionnaire link. We’re always striving to discover and implement new ways to reach respondents and to keep them engaged: mobile optimization is key, but we also consider incentive levels and types, subject lines, and, of course, better ways to ask questions like highlighter exercises, sliding scales, interactive web simulations, and heat maps. This project customization also provides us with the flexibility needed to communicate with respondents in hard-to-reach groups.

Once we’ve got those precious respondents, the question remains: are we reaching the RIGHT respondents and keeping them engaged? How can we evaluate the data efficiently prior to any analysis?

Even with improved methods in place to protect against “bad”/professional respondents, the data quality control process remains an important aspect of each project. We have set standards in place, starting in the programming phase—as well as during the final review of the data—to identify and eliminate “bad” respondents prior to conducting any analysis.

We start from a conservative standpoint during programming, flagging respondents who fail any of the criteria in the list below (a minimal sketch of these checks follows the list). These respondents are not permanently removed from the data at this point, but they are categorized as incomplete and are reviewable if we feel that they provide value to the study:

  • “Speedsters”: Respondents who completed the questionnaire in 1/5 of the overall median time or less. This check is applied after approximately the first 20% of completes or the first 100 completes, whichever comes first.
  • “Grid Speedsters”: When applicable, respondents who, for two or more grids of ten or more items, have a grid completion time more than two standard deviations below the mean for that grid. Again, this is applied after approximately the first 20% or 100 completes, whichever comes first.
  • “Red Herrings”: We incorporate a standard scale question (0-10), programmed at or around the estimated 10-minute mark of the questionnaire, that asks the respondent to select a specific number on the scale. Respondents who do not select the appropriate number are flagged.
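For the programmatically inclined, here’s a minimal pandas sketch of how flags like these might be computed. The column names (duration_min, grid timing columns, attention_check) and the expected red-herring answer are hypothetical; this is illustrative, not our production tooling:

    import pandas as pd

    def flag_speedsters(df: pd.DataFrame, time_col: str = "duration_min") -> pd.Series:
        # Completes at 1/5 of the overall median completion time or less.
        # Only meaningful once ~20% of quota or 100 completes are in.
        return df[time_col] <= df[time_col].median() / 5

    def flag_grid_speedsters(df: pd.DataFrame, grid_time_cols: list) -> pd.Series:
        # Time on two or more large grids falls more than two standard
        # deviations below that grid's mean completion time.
        too_fast = pd.concat(
            {c: df[c] < (df[c].mean() - 2 * df[c].std()) for c in grid_time_cols},
            axis=1)
        return too_fast.sum(axis=1) >= 2

    def flag_red_herrings(df: pd.DataFrame, check_col: str = "attention_check",
                          expected: int = 7) -> pd.Series:
        # Respondents who missed the instructed-response scale question.
        return df[check_col] != expected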

This process allows us to begin the data quality review during fielding, so that the blatantly “bad” respondents are removed prior to close of data collection.

However, our process extends to the final data as well.  After the fielding is complete, we review the data for the following:

  • Duplicate respondents: Even with unique links and passwords (for online), we review the data based on the email/phone number provided and the IP Address to remove respondents who do not appear to be unique.
  • Additional speedsters: Respondents who completed the questionnaire in a short amount of time. We take into consideration any brand/product rotation as well (evaluating one brand/product would take less time than evaluating several brands/products). 
  • Straight-liners: Similar to the grid speedsters above, we review respondents who have selected only one value for each attribute in a grid. We flag respondents who have straight-lined each grid to create a sum of “straight-liners,” and we review this metric on its own as well as in conjunction with overall completion time. The rationale: respondents who select only one value throughout the questionnaire have likely also sped through it. (A minimal sketch of this check appears after the list.)
  • Inconsistent response patterns: Grids can sometimes include attributes on a reversed scale, and we review those to determine whether there are contradictory responses. Another example might be a respondent who indicates he/she uses a specific brand and, later in the study, indicates that he/she is not aware of that brand.
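And a similarly hedged sketch of the straight-lining and duplicate checks, assuming hypothetical column names (email, ip_address, and per-grid item columns):

    import pandas as pd

    def straightline_count(df: pd.DataFrame, grids: dict) -> pd.Series:
        # `grids` maps a grid name to the list of its item columns. A grid
        # is straight-lined when every item received the identical answer.
        flags = {name: df[cols].nunique(axis=1).eq(1) for name, cols in grids.items()}
        return pd.DataFrame(flags).sum(axis=1)  # grids straight-lined per respondent

    def flag_duplicates(df: pd.DataFrame) -> pd.Series:
        # Keep the first record; flag later records sharing an email or IP.
        return (df.duplicated(subset=["email"], keep="first")
                | df.duplicated(subset=["ip_address"], keep="first"))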

While they may not lead us to eliminate respondents, we also examine other factors for “common sense”:

  • Gibberish verbatims: Random letters/symbols or references that do not pertain to the study across each open-ended response
  • Demographic review: A review of the demographic information to ensure that responses are reasonable and in line with the specifications of the study

As part of our continuing partnership with panel sample providers, we provide them with the panel IDs and information of respondents who have failed our quality control process. In some instances, when the client or the analysis requires certain sample sizes, this may also mean replacing bad respondents. This collaboration allows us to stand behind the quality of the respondents we provide for analysis and reporting, while also meeting the needs of our clients in a challenging environment.

Our clients rely on us to manage all aspects of data collection when we partner with them to develop a questionnaire, and our stringent data quality control process ensures that we can do that plus provide data that will support their business decisions. 

Laurie McCarthy is a Senior Data Manager at CMB. Though an avid fan of Excel formulas and solving data problems, she has never seen Star Wars. Live long and prosper.

We recently did a webinar on research we conducted in partnership with venture capital firm Foundation Capital. This webinar will help you think about Millennials and their investing, including specific financial habits and the attitudinal drivers of their investing preferences.

Watch Here!


Topics: Chadwick Martin Bailey, methodology, data collection, quantitative research

Embracing Mobile Market Research

Posted by Brian Jones

Thu, Jul 23, 2015

Who are the mobile consumers?

Let’s get this straight: I am not addicted to my smartphone. Unlike so many of my fellow train commuters who stare zombie-eyed into their small screens, I am not immersed in a personal relationship with pixels. I have an e-Reader for that. But my smartphone IS my lifeline. I’ve come to depend exclusively on my phone to keep me on-time and on-schedule, to entertain me (when not using my e-Reader), to stay in touch with family and friends, and to keep up-to-date with my work email. It’s my primary source for directions, weather, news, photography, messaging, and banking, and a regular source for payment, shopping, and ticketing/reservations. I haven’t purchased a PC in nearly a decade, and I don’t have a landline. I also use my smartphone to take market research questionnaires, and I am far from alone.

Data around smartphone usage aligns with my personal experience. In a recent CMB online study of U.S. consumers, optimized for mobile devices, 1 in 6 Millennials completed the questionnaire on a smartphone, and other studies report similar results. This raises a real representativeness issue: major panel vendors are seeing over half of Millennials join their panels via a mobile device, so excluding small screens means excluding a growing share of the audience.


How do we adapt?

Much has been hypothesized about the future of market research under the new paradigm of mobile commerce, big data, and cloud services. New technologies and industry convergence (not just mobile) have brought sweeping changes in consumer behaviors, and market researchers must adapt.

A key component of successful adaptation will be greater integration of primary market research with other data streams. The promise of passive or observational data is captivating, but it is largely still in the formative stages. (For more on passive data, check out our recent webinar.) We still need and will likely always need active “please tell me” research. The shift from phone to online data collection has quickly been replaced with the urgency of a shift to mobile data collection (or at least device agnostic interviewing). Our industry has lagged behind because the consumer experience has become so personalized and the trust/value equation for tapping into their experiences is challenging. Tackling mobile market research with tactical solutions is a necessary step in this transition.

What should we do about it?  

  1. Understand your current audience. Researchers need to determine how important mobile data collection is to the business decision and decide how to treat mobile respondents: you can have all respondents use a mobile device, have some use a mobile device, or exclude mobile device respondents. There are criteria and considerations for each of these, as well as for the expected mix of feature phones, smartphones, tablets, and PCs. The audience will determine the source of sample and the representation that must be factored into the study design. Ultimately, this has a huge impact on the validity and reliability of the data. Respondent invitations need to note any limitations for devices not suitable for a particular survey. (A toy device-detection sketch follows this list.)
  2. Design for mobile. If mobile participation is important, researchers should use a mobile-first questionnaire design. Mobile-optimized or mobile-friendly surveys typically need to be shorter, use concise language, avoid complex grids and answering mechanisms, and have fewer answer options, so they can be supported on a small screen and keep respondents focused on the activity. In some cases, questionnaire modularization or data stitching can be used to help adhere to mobile design standards.
  3. Test for mobile. All questions, images, etc. need to display on a variety of screen sizes and within the bandwidth capacity of the devices being used. Accommodating Android and iOS devices covers most users. If app-based surveys are being used, researchers need to ensure that the latest versions can be downloaded and are bug-free.
  4. Apply data protection and privacy standards. Mobile market research comes with a unique set of conditions and challenges that impact how information is collected, protected, and secured. Research quality and ethical guidelines specific to mobile market research have been published by CASRO, ESOMAR, the MMRA (Mobile Marketing Research Association), and others.
  5. Implement Mobile Qualitative. The barriers are lower, and researchers can leverage the unique capabilities of mobile devices quite effectively with qualitative research. Most importantly, willing participants are mobile, which makes in-the-moment research possible. Mobile qualitative is also a great gateway to explore what’s possible for mobile quantitative studies. See my colleague Anne Hooper’s blog for more on the future of qualitative methodologies.
  6. Promote Research-on-Research. Experts need to conduct and publish additional research-on-research studies that advance understanding of how to treat mobile respondents and utilize passive data, location tracking, and other capabilities that mobile devices provide. We also need stronger evidence of what works and what doesn’t work in execution of multi-mode and mobile-only studies across different demographics, in B2B studies, and within different countries.

But perhaps the most important thing to remember is that this is just a start. Market researchers and other insight professionals must evolve from data providers into integrated strategic partners—harnessing technology (not just mobile) and industry expertise to focus on decision-making, risk reduction, and growth.

Brian is a Senior Project Manager for Chadwick Martin Bailey, the photographer of the image in this post, and an 82 percenter—he is one of the 82% of mobile phone owners whose phone is with them always or most of the time. 

Watch our recent webinar that discusses the results of our self-funded Consumer Pulse study on the future of the mobile wallet. 

Watch Here!

Topics: methodology, qualitative research, mobile, research design

Mobile Passive Behavioral Data: Opportunities and Pitfalls

Posted by Chris Neal

Tue, Jul 21, 2015

By Chris Neal and Dr. Jay Weiner

As I wrote in last week’s post, we recently conducted an analysis of mobile wallet use in the U.S. To make it interesting, we used unlinked passive mobile behavioral data alongside survey-based data. In this post, I’ve teamed up with Jay Weiner—our VP of Analytics, who helped me analyze the mobile passive behavioral data for this Mobile Wallet study—to share some of the typical challenges you may face when working with passive mobile behavioral data (or any type of passive behavioral data, for that matter), along with some best practices for dealing with these challenges:

  1. Not being able to link mobile usage to individuals. There’s a lot of online passive data out there (mobile app usage ratings, web usage ratings by device type, social media monitoring, etc.) that is at the aggregate level and cannot be reliably attributed to individuals. These data have value, to be sure, but aggregate traffic data can sometimes be very misleading. This is why—for the Mobile Wallet project CMB did—we sourced mobile app and mobile web usage from the Research Now mobile panel, where it is possible to attribute mobile usage data to individuals (and have additional profiling information on these individuals).

    When you’re faced with aggregate-level data that isn’t linked to individuals, we recommend getting some sample from a mobile usage panel in order to understand and calibrate your results better and/or running a parallel survey sample so you can make more informed assumptions (this holds true for aggregate search trend data, website clickstream data, and social media listening tools).
  2. Unstacking the passive mobile behavioral data. Mobile behavioral data that is linked to individuals typically comes in “stacked” form, i.e., every consumer tracked has many different records: one for each active mobile app or mobile website session. Analyzing this data in its raw form is very useful for understanding overall mobile usage trends. What these stacked behavioral data files do not tell you, however, is the reach or incidence (e.g., how many people or the percentage of an addressable market) of any given mobile app/website. Nor do they tell you the mobile session frequency or duration characteristics of different consumer types, or let you profile the types of people with different mobile behaviors. (A pandas sketch of unstacking appears after this list.)

    Unstacking a mobile behavioral data file can sometimes end up being a pretty big programming task, so we recommend deciding upfront exactly which apps/websites you want to “unstack.” A typical behavioral data file that tracks all smartphone usage during a given period of time can involve thousands of different apps and websites. . .and the resulting unstacked data file covering all of these could quickly become unwieldy.
  3. Beware the outlier! Unstacking a mobile behavioral data file will reveal some pretty extreme outliers. We all know about outliers, right? In survey research, we scrub (or impute) open-ended quant responses that are three standard deviations higher than the mean response, we take out some records altogether if they claim to be planning to spend $6 billion on their next smartphone purchase, and so on. But outliers in passive data can be quite extreme. In reviewing the passive data for this particular project, I couldn’t help but recall that delightful Adobe Marketing ad in which a baby playing with his parents’ tablet repeatedly clicks the “buy” button for an encyclopedia company’s e-commerce site, setting off a global stock bubble.

    Here is a real-world example from our mobile wallet study that illustrates just how wide the range of mobile behaviors is, even across a limited group of consumers: the overall “average” time spent using a mobile wallet app was 162 minutes, but the median time was only 23 minutes. A very small (<1% of total) portion of high-usage individuals inflated the average far above the true usage snapshot of the majority of users. One individual spent over 3,000 minutes using a mobile wallet app.
  4. Understand what is (and what is not) captured by a tracking platform. Different tracking tools do different things and produce different data to analyze. In general, it’s very difficult to capture detailed on-device usage for iOS devices. . .most platforms set up a proxy that instead captures and categorizes the IP addresses that the device transmits data to/from. In our Mobile Wallet study, as one example, our mobile behavioral data did not pick up any Apple Pay usage because Apple Pay leverages NFC to conduct the transaction between the smartphone and the NFC terminal at the cash register (without any signal ever being transmitted out to the mobile web or to any external mobile app, which is how the platform captured mobile usage). There are a variety of tricks of the trade to account for these phenomena and to adjust your analysis so you can get close to a real comparison, but you need to understand what isn’t picked up by passive metering in order to apply them correctly.
  5. Categorize apps and websites. Needless to say, there are many different mobile apps and websites that people use, and many of these do a variety of different things and are used for a variety of different purposes. Additionally, the distribution of usage across many niche apps and websites is often not useful for any meaningful insights work unless these are bundled up into broader categories.

    Some panel sources—including Research Now’s mobile panel—have existing mobile website and app categories, which are quite useful. For many custom projects, however, you’ll need to do the background research ahead of time in order to have meaningful categories to work with. Fishing expeditions are typically not a great analysis plan in any scenario, but they are out of the question if you’re going to dive into a big mobile usage data file.

    As you work to create meaningful categories for analysis, be open to adjusting and iterating. A certain group of specific apps might not yield the insight you were looking for. . .learn from the data you see during this process then try new groupings of apps and websites accordingly.
  6. Consider complementary survey sampling in parallel with behavioral analysis. During our iterative process of attempting to categorize mobile apps from reviewing passive mobile behavioral data, we were relieved to have a complementary survey sampling data set that helped us make some very educated guesses about how or why people were using different apps. For example, PayPal has a very successful mobile app that is widely used for a variety of reasons—peer-to-peer payments, ecommerce payments, and, increasingly, for “mobile wallet” payments at a physical point of sale. The passive behavioral data we had could not tell us what proportion of different users’ PayPal mobile app usage was for which purpose. That’s a problem because if we were relying on passive data alone to tell our clients what percent of smartphone users have used a mobile wallet to pay at a physical point of sale, we could come up with grossly inflated numbers. As an increasing number of mobile platforms add competing functionality (e.g., Facebook now has mobile payments functionality), this will remain a challenge.

    Passive tracking platforms will no doubt crack some of these challenges accurately, but some well-designed complementary survey sampling can go a long way towards helping you read the behavioral tea leaves with greater confidence. It can also reveal differences between actual vs. self-reported behavior that are valuable for businesses (e.g., a lot of people may say they really want a particular mobile functionality when asked directly, but if virtually no one is actually using existing apps that provide this functionality then perhaps your product roadmap can live without it for the next launch).
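To make the “unstacking” and outlier points concrete, here’s a minimal pandas sketch. The column names (panelist_id, app, session_min) and all the numbers are invented for illustration:

    import pandas as pd

    # Stacked form: one row per app session, many rows per panelist.
    stacked = pd.DataFrame({
        "panelist_id": [1, 1, 2, 2, 2, 3, 4],
        "app": ["wallet", "social", "wallet", "wallet", "maps", "wallet", "social"],
        "session_min": [5, 30, 10, 8, 12, 3000, 20],  # note the extreme outlier
    })

    # Unstacked form: one row per panelist, one column per app (total minutes).
    # Reach/incidence and per-person usage now fall out directly.
    unstacked = stacked.pivot_table(index="panelist_id", columns="app",
                                    values="session_min", aggfunc="sum", fill_value=0)

    wallet = unstacked["wallet"]
    print("reach:", (wallet > 0).mean())   # share of panelists who used the app
    print("mean:", wallet.mean())          # grossly inflated by the outlier
    print("median:", wallet.median())      # a truer snapshot of typical usage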

Want to learn more about the future of Mobile Wallet? Join us for a webinar on August 19, and we’ll share our insights with you!

Chris Neal leads CMB’s Tech Practice. He judges every survey he takes and every website he visits by how it looks on his 4” smartphone screen, and has sworn off buying a larger “phablet” screen size because it wouldn’t fit well in his Hipster-compliant skinny jeans.

Dr. Jay heads up the analytics group at CMB. He opted for the 6 inch “phablet” and baggy jeans.  He does look stupid talking to a brick. He’s busy trying to compute which event has the higher probability: his kids texting him back or his kids completing an online questionnaire. Every month, he answers your burning market research questions in his column: Dear Dr. Jay. Got a question? Ask it here!

Want to learn more about combining survey data with passive mobile behavioral data? Watch our recent webinar with Research Now that discusses these findings in depth.

Watch Now!

Topics: advanced analytics, methodology, data collection, mobile, Dear Dr. Jay, webinar, passive data

Upcoming Webinar: Passive Mobile Behavioral Data + Survey Data

Posted by Chris Neal

Mon, Jul 13, 2015

The explosion of mobile web and mobile app usage presents enormous opportunities for consumer insights professionals to deepen their understanding of consumer behavior, particularly for “in the moment” findings and tracking consumers over time (when they aren’t actively participating in research. . .which is 99%+ of the time for most people). Insight nerds like us can’t ignore this burgeoning wealth of data—it is a potential goldmine. But, working with passive mobile behavioral data brings with it plenty of challenges, too. It looks, smells, and feels very different from self-reported survey data:

  • It’s big. (I’m not gonna drop the “Big Data” buzzword in this blog post, but—yep—the typical consumer does indeed use their smartphone quite a bit.)
  • It’s messy.
  • We don’t have the luxury of carefully curating it in the same way we do with survey sampling. 

As we all find ourselves increasingly tasked with synthesizing insights and a cohesive “story” from multiple data sources, we’re finding that mobile usage and other data sources don’t always play nicely in the sandbox with survey data. Each has its own strengths and weaknesses that we need to understand in order to use it most effectively.

So, in our latest in a series of sadomasochistic self-funded thought leadership experiments, we decided to take on a challenge similar in nature to what more and more companies will ask insights departments to do: use passive mobile behavioral data alongside survey-based data for a single purpose. In this case, the topic was an analysis of the U.S. mobile wallet market opportunity. To make things extra fun, we ensured that the passive mobile behavioral data was completely unlinked to the survey data (i.e., we could not link the two data sources at the respondent level for deeper understanding or to do attitudinal + behavioral based modeling). There are situations where you’ll be given data that is linked, but currently—more often than not—you’ll be working with separate silos and asked to make hay.

During this experiment, a number of things became very clear to us, including:

  • the actual value that mobile behavioral data can bring to business engagements
  • how it could easily produce misleading results if you don’t properly analyze the data
  • how survey data and passive mobile behavioral data can complement one another greatly

Interested? I’ll be diving deep into these findings (and more) along with Roddy Knowles of Research Now in a webinar this Thursday, July 16th at 1pm ET (11am PT). Please join us by registering here

Chris leads CMB’s Tech Practice. He enjoys spending time with his two kids and rock climbing.

Watch our recent webinar with Research Now to hear the results of our recent self-funded Consumer Pulse study that leveraged passive mobile behavioral data and survey data simultaneously to reveal insights into the current Mobile Wallet industry in the US.

Watch Now!

Topics: advanced analytics, methodology, data collection, mobile, webinar, passive data, integrated data

Qualitative Research Isn't Dying—It's Evolving

Posted by Anne Hooper

Wed, May 06, 2015

Back in 2005, Malcolm Gladwell told us that focus groups are dead. Just last November, Jim Bryson, CEO of 20/20 Research, questioned whether qualitative research was thriving or dying: “If we take a narrow, more traditional view that qualitative is defined by the methods of face-to-face focus groups or interviews, particularly those held in a qualitative facility, then the case can be easily made that qualitative is dying.”

To all of this, I say: wait, what?! Qualitative is dying? I refused to believe it, so I embarked on a journey to explore where qualitative has been, and more importantly, where it’s going. During my research, I found plenty of evidence that qualitative is not, in fact, dying. Great news, right? (Especially for me, because if it were true, I just might be out of a job I love.) I took a look at the fall 2014 Greenbook Research Industry Trends (GRIT) Report and focused on the data from Q1-Q2 of 2013 and Q1-Q2 2014. In this data, I learned:

  • The use of traditional in-person focus groups increased from 60% (Q1-Q2 2013) to 70% (Q1-Q2 2014).
  • Within the same time period, the use of in-person, in-depth interviews increased from 45% to 53%.
  • Interviews and groups using online communities increased from 21% to 24%.
  • The use of mobile qual (e.g., diaries, image uploads) increased from 18% to 24%.

Yes, it’s important to note that not all qualitative methodologies saw an increase in usage within this timeframe. In fact, there was a decrease in the usage of telephone IDIs, in-store shopping/observations, bulletin board studies, both chat-based and webcam-based online focus groups, and telephone focus groups.  All this notwithstanding, I think it’s fair to say that qualitative is still very much alive and well.

So why do people keep talking about qualitative dying? We can’t deny that there are a number of factors that affect how and when we use qualitative methodologies today (technology, access to big data, and text analytics are a few). But, this doesn’t mean qualitative is disappearing as a discipline. Qualitative is evolving at a rapid pace and feels more relevant than ever. Sure, we need to keep up with client demands for faster and cheaper research, but there will always be a need for the human mind (i.e., a qualitative expert) to analyze and synthesize the data to provide meaning and context behind the way people think and behave—and that is where actionable insights are born.   

Now that we know qualitative really isn’t dying, what does 2015 (and beyond) hold for us? The future is about truly integrated research—in which qualitative and quantitative are consistently, thoughtfully, and purposefully used together to provide well-rounded, actionable insights. We’re poised to do exactly that with our dedicated analytics team and network of expert industry qualitative partners. By using two equally important disciplines that are both alive and well, we can provide our clients critical insights they can really use. Far from killing off qualitative insights, technology and an evolving marketplace are helping make qualitative insights even stronger.

Anne Hooper is the Qualitative Research Director at CMB. After recently finding out that her 13 year old daughter did a quantitative assessment of her Jazz Band’s upcoming Disney trip itinerary, she’s determined that an intervention may be in order.

Topics: methodology, qualitative research

Qualitative, Quantitative, or Both? Tips for Choosing the Right Tool

Posted by Ashley Harrington

Wed, Aug 06, 2014

In market research, the rivalry between qualitative and quantitative research can occasionally feel like the Red Sox vs. the Yankees.  You can’t root for both, and you can’t just “like” one.  You’re very passionate about your preference.  But in many cases, this can be problematic. For example, using a quantitative mindset or tactics in a qualitative study (or vice versa) can lead to inaccurate conclusions. Below are some examples of this challenge—one that can happen throughout all phases of the research process:

Planning

Clients will occasionally request that market researchers use a particular methodology for an engagement. We always explore these requests further with our clients to ensure there isn’t a disconnect between the requested methodology and the problem the client is trying to solve.

For example, a bank* might say, “The latest results from our brand tracking study indicate that customers are extremely frustrated by our call center and we have no idea why. Let’s do a survey to find out.”

Because the bank has no hypotheses about the cause of the issue, moving forward with their survey request could lead to designing a tool with (a) too many open-ended questions and (b) questions/answer options that are no more than wild guesses at the root of the problem, which may or may not jibe with how consumers actually think and feel.

Instead, qualitative research could be used to provide a foundation of preliminary knowledge about a particular problem, population, and so forth. Ultimately, that knowledge can be used to help inform the design of a tool that would be useful.

Questionnaire Design

For a product development study, a software company* asks to add an open-ended question to a survey: “What would make you more likely to use this software?” or “What do you wish the software could do that it can’t do now?”

Since most of us are not engineers or product designers, questions like these might be difficult for respondents to answer. They are likely to yield a lot of not-so-helpful “I don’t know”-type responses rather than specific enhancement suggestions.

Instead of squandering valuable real estate on a question not likely to yield helpful data, a qualitative approach could allow respondents to react to ideas at a more conceptual level, bounce ideas off of each other or a moderator, or take some time to reflect on their responses. Even if the customer is not an R&D expert, they may have a great idea that just needs a bit of coaxing via input and engagement with others.

Analysis and Reporting

In reviewing the findings from an online discussion board, a client at a restaurant chain* reviews the transcripts and states, “85% of participants responded negatively to our new item, so we need to remove it from our menu.”

Since findings from qualitative studies are not necessarily statistically significant, applying quantitative techniques (e.g., descriptive statistics and frequencies) to them is not ideal, as it implies a level of precision in the findings that is not necessarily there. Further, it would not be cost-effective to recruit and conduct qualitative research with a group large enough to be projectable onto the general population.

Rather than attempting to quantify the findings in strictly numerical terms, qualitative data should be thought of as more directional in terms of overall themes and observable patterns.

At CMB, we root for both teams. We believe both produce impactful insights, and that often means using a hybrid approach. We believe the most meaningful insights come from choosing the approach or approaches best suited to the problem our client is trying to solve. However, being a Boston-based company, we can’t say that we’re nearly this unbiased when it comes to the Red Sox versus the Yankees.

*Example (not actual)

Ashley is a Project Manager at CMB. She loves both qualitative and quantitative equally and is not knowledgeable enough about sports to make any sports-related analogies more sophisticated than the Red Sox vs. the Yankees.

Click the button below to subscribe to our monthly eZine and get the scoop on the latest webinars, conferences, and insights. 

Subscribe Here

Topics: methodology, qualitative research, research design, quantitative research

Parents at the Tumble Gym: A Segmentation Analysis

Posted by Jessica Chavez

Wed, Jun 25, 2014

On Saturdays, when the weather is not fit for the playground, I take my toddler to a tumble gym where he can run, climb, and kick balls around with other kids his age.  Parents must accompany kids in the play area as this is a free-form play center without an employed staff (other than the front desk attendant).  As a market researcher and a perpetual observer of the human condition, I’ve noticed that these parents fall into three distinct groups: the super-involved group, the middle-of-the-road group, and the barely-involved group.

The super-involved parents take full control of their child’s playtime.  They grab the ball and throw it to their kid. They build forts. They chase the kids around.  They completely guide their child’s playtime by initiating all the activities.  “Over here, Jimmy!  Let’s build a ramp and climb up!  Now let’s build a fort!  Ooh, let’s grab that ball and kick it!”

The middle-of-the-road group lets the kids play on their own, but they also keep an eye out and intervene when needed. For example, a parent in this group would intervene if the child is looking dangerously unstable while climbing the fort, or if the child steals another kid’s ball and sparks a meltdown.

The barely-involved parents tend to lean against the wall and stay on their phones—probably checking Facebook. They don’t know where their kid is or what their kid is doing.  For all they know, their child could be scaling a four foot wall and jumping onto another kid’s head.

All of this demonstrates a simple fact: people are more alike than they are different.  This is why I love segmentation studies—it’s fascinating that almost everyone can be grouped with others based on similar behaviors.

At CMB, we strive to make our segmentation studies relevant, meaningful, and actionable.  To this end, we have found the following five-point plan valuable for guiding our segmentation studies:

  • Start with the End in Mind: Determine how the definition and understanding of segments will be used before you begin.
  • Allow for Multiple Bases: Take a comprehensive, model-based approach that incorporates all potential bases.
  • Have an Open Mind: Let the segments define themselves. (The toy clustering sketch after this list shows the spirit of this.)
  • Leverage Existing Resources: Harness the power of your internal databases.
  • Create a Plan of Action: Focus on internal deployment from the start.
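To show what “letting the segments define themselves” looks like in miniature, here’s a toy clustering sketch in Python. The three behavioral features and every number are invented for the tumble-gym example; a real segmentation would use carefully chosen survey inputs, standardize them, and pick the number of segments based on fit statistics and, above all, actionability:

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical features per parent: interventions per hour,
    # phone checks per hour, and average distance (feet) from child.
    X = np.array([
        [12.0, 0.0, 2.0], [10.0, 1.0, 3.0],    # guide every activity
        [4.0, 3.0, 8.0], [5.0, 2.0, 7.0],      # watch, intervene when needed
        [0.0, 9.0, 20.0], [1.0, 8.0, 25.0],    # wall-leaners checking Facebook
    ])

    # Let the data define the groups rather than imposing them up front.
    model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
    print(model.labels_)  # each parent's recovered segment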

Because each segmentation study is different, using appropriate selection criteria ensures that segments can be acted upon.  In the case of the tumble gym patrons, we might recommend that marketing efforts be based on a psychographic segmentation.  What are the parenting philosophies?  In what ways does this motivate the parents, and how can marketing efforts be targeted to the low-hanging fruit?

Incidentally, I find that I fall into the middle segment.

Jessica is a Data Manager at CMB and can’t help but mentally segment the population at large.

Want to learn more about segmentation? In “The 5 C’s of Great Segmentation Socializers,” Brant Cruz shares 5 tips for making sure your segmentation is embraced and used in your organization.


Webinar: Modularized Research Design for a Mobile World

Join us and Research Now to learn about the modularized version of a traditional purchasing survey we created, which allows researchers to reach mobile shoppers en masse. We’ll review sampling and weighting best practices and study design considerations, as well as our “data-stitching” process.

Watch Now!

Topics: methodology, market strategy and segmentation

Global Mobile Market Research Has Arrived: Are You Prepared?

Posted by Brian Jones

Wed, May 14, 2014

The ubiquity of mobile devices has opened up new opportunities for market researchers on a global scale. Think: biometrics, geo-location, presence sensing, etc. The emerging possibilities enabled by mobile market research are exciting and worth exploring, but we can’t ignore the impact that small screens are already having on market research. For example, unintended mobile respondents make up about 10% of online interviews today. They also impact research in other ways—through dropped surveys, disenfranchised panel members, and other unknown influences. Online access panels have become multi-mode sources of data collection and we need to manage projects with that in mind.

Researchers have at least three options: (1) we can ignore the issue; (2) we can limit online surveys to PC only; or (3) we can embrace and adapt online surveys to a multi-mode methodology. 

We don’t need to make special accommodations for small-screen surveys if mobile participants are a very small percentage of panel participants, but the number of mobile participants is growing.  Frank Kelly, SVP of global marketing and strategy for Lightspeed Research/GMI—one of the world’s largest online panels—puts it this way: “We don’t have the time to debate the mobile transition, like we did in moving from CATI to online interviewing, since things are advancing so quickly.”

If you look at the percentage of surveys completed on small screens in recent GMI panel interviews, it exceeds 10% in several countries and tops 15% among millennials.


There are no truly device-agnostic platforms, since the advanced features in many surveys simply cannot be supported on small screens and on less sophisticated devices.  It is possible to create device-agnostic surveys, but it means giving up many survey features that we’ve long considered standard. This creates a challenge. Some question types, such as discrete choice exercises or multi-dimensional grids, aren’t effectively supported by small screens, and a touchscreen interface is different from what you get with a mouse. Testing on mobile devices may also reveal questions that render differently depending on the platform, which can influence how a respondent answers a question. In instances like these, it may be prudent to require respondents to complete online interviews on a PC-like device. The reverse is also true: some research requires mobile-only respondents, particularly when the specific features of smartphones or tablets are used, and in some emerging countries researchers may skip the PC as a data collection tool altogether in favor of small-screen mobile devices. PC-only or mobile-only interviewing makes sense in certain instances, but the majority of today’s online research involves a mix of platform types. It is clear we need to adopt best practices that reflect this reality.

Online questionnaires must work on all or at least the vast majority of devices.  This becomes particularly challenging for multi-country studies which have a greater variety of devices, different broadband penetrations, and different coverage/quality concerns for network access and availability.  A research design that covers as many devices as possible—both PC and mobile—maximizes the breadth of respondents likely to participate.  

There are several ways to mitigate concerns and maximize the benefits of online research involving different platform types. 

1. Design different versions of the same study optimized for larger vs. smaller screens.  One version might even be app-based instead of online-based, which would mitigate concerns over network accessibility.

2. Break questionnaires into smaller chunks to avoid respondent fatigue on longer surveys, which is a greater concern for mobile respondents (a toy sketch of this modular approach follows these options).

Both options 1 and 2 have their own challenges.  They require matching/merging data, need separate programming, and require separate testing, all of which can lead to more costly studies.

3. Design more efficient surveys and shorter questionnaires. This is essential for accommodating multi-device user experiences. Technology needs to be part of the solution, specifically with better auto-detect features that optimize how questionnaires are presented on different screen sizes.  For multi-country studies, technology needs to adapt how questionnaires are presented for different languages.
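To illustrate option 2, here’s a hedged sketch of modularizing a questionnaire: every respondent gets the core screener plus a random subset of modules, and the partial records are later stitched into one file. The module and column names are invented, and real data stitching typically adds imputation or model-based fusion on top of this:

    import random
    import pandas as pd

    OPTIONAL_MODULES = ["brand_funnel", "usage", "attitudes", "pricing"]

    def assign_modules(respondent_id: int, k: int = 2) -> list:
        # Everyone answers the screener, plus k randomly chosen modules.
        rng = random.Random(respondent_id)  # deterministic per respondent
        return ["screener"] + rng.sample(OPTIONAL_MODULES, k)

    # "Stitching": concatenate the partial records; modules a respondent
    # never saw stay missing (NaN) and would be imputed in a real workflow.
    partials = [
        pd.DataFrame([{"id": 1, "screener_q1": 3, "usage_q1": 2, "pricing_q1": 4}]),
        pd.DataFrame([{"id": 2, "screener_q1": 5, "brand_funnel_q1": 1, "attitudes_q1": 2}]),
    ]
    stitched = pd.concat(partials, ignore_index=True)
    print(stitched)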

Researchers can also use mobile-first questionnaire design practices.  For our clients, we always consider the following:

  • Shortening survey lengths since drop-off rates are greater for mobile participants, and it is difficult to hold their focus for more than 15 minutes.

  • Structuring questionnaires for smaller screen sizes, avoiding horizontal scrolling and minimizing vertical scrolling.

  • Minimizing the use of images and open-ended questions that require longer responses. SMS-based interviewing is still useful in specific circumstances, but the number of keystrokes required for online research should be minimized.

  • Keeping the wording of the questions as concise as possible.

  • Carefully choosing which questions to ask which subsets of respondents. We invest a tremendous amount of effort in the design phase to make surveys more appealing to small-screen participants. This approach pays dividends in every other phase of research and in the quality of what is learned.

Consumers and businesses are rapidly embracing the global mobile ecosystem. As market researchers and insights professionals, we need to keep pace without compromising the integrity of the value we provide. Here at CMB, we believe that smart planning, a thoughtful approach, and an innovative mindset will lead to better standards and practices for online market research and our clients.

Special thanks to Frank Kelly and the rest of the Lightspeed/GMI team for their insights.

Brian is a Project Manager and mobile expert on CMB’s Tech and Telecom team. He recently presented the results of our Consumer Pulse: The Future of the Mobile Wallet at The Total Customer Experience Leaders conference.

In Universal City next week for the Future of Consumer Intelligence? Chris Neal, SVP of our Tech and Telecom team, and Roddy Knowles of Research Now will share “A How-To Session on Modularizing a Live Survey for Mobile Optimization.”


Topics: methodology, data collection, mobile, data integration

A Perfect Match? Tinder and Mobile Ethnographies

Posted by Anne Hooper

Wed, Apr 23, 2014

I know what you are thinking...“What the heck is she TALKING about? How can Tinder possibly relate to mobile ethnography?”  You can call me crazy, but hear me out first. For those of you who may be unfamiliar, Tinder is a well-known “hook up” app that’s taken the smartphone-wielding, hyper-social Millennial world by storm. With a simple swipe of the index finger, one can either approve or reject someone from a massive list of prospects. At the end of the day, it comes down to anonymously passing judgment on looks alone—yet if both users “like” each other, they are connected. Shallow? You bet. Effective? Clearly it must be, because thousands of people are downloading the app daily.

So what’s the connection with mobile ethnography? While Tinder appears to be an effective tool for anonymously communicating attraction (anonymous in that the only thing you really know about the other person is what they look like), mobile ethnography is an effective tool for anonymously communicating daily experiences that we as researchers generally aren’t privy to. Mobile ethnography gives us better insight into consumer behavior by bringing us places we’ve never gone before but that are worth knowing nonetheless (Cialis, anyone?). Tapping into these experiences—from the benign to the very private—is the nuts and bolts behind any good product or brand.

So how might one tap into these experiences using mobile ethnography? It’s actually quite easy—we create and assign “activities” that are not only engaging for participants, but are also designed to dig deep and (hopefully) capture the "Aha!" moments we aim for as researchers. Imagine being able to see how consumers interact with your brand on a day-to-day basis—how they use your product, where their needs are being fulfilled, and where they experience frustrations. Imagine “being there” when your customer experiences your brand—offering insight into what delights and disappoints them right then and there (i.e., not several weeks later in a focus group facility). The possibilities for mobile ethnography are endless...let’s just hope the possibilities for Tinder come to a screeching halt sooner rather than later.

Anne Hooper is the Director of Qualitative Services at CMB. She has a 12 year old daughter who has no idea what Tinder is, and she hopes it stays that way for a very long time.

Topics: methodology, qualitative research, social media