WELCOME TO OUR BLOG!

The posts here represent the opinions of CMB employees and guests—not necessarily the company as a whole. 


Spring into Data Cleaning

Posted by Nicole Battaglia

Tue, Apr 04, 2017

When someone hears “spring cleaning” they probably think of organizing their garage, purging clothes from their closet, and decluttering their workspace. For many, spring is a chance to refresh and rejuvenate after a long winter (fortunately ours in Boston was pretty mild).

This may be my inner market researcher talking, but when I think of spring cleaning, the first thing that comes to mind is data cleaning. Like cleaning and organizing your home, data cleaning is a detailed and lengthy process, and one that matters to researchers and their clients alike.

Data cleaning is an arduous task. Each completed questionnaire must be checked to ensure that it's been answered correctly, clearly, truthfully, and consistently. Here’s what we typically clean:

  • We’ll look at each open-ended response in a survey to make sure respondents’ answers are coherent and appropriate. Sometimes respondents will curse; other times they’ll write outrageously irrelevant answers, like what they’re having for dinner, so we monitor these closely. We do the same for open-ended numeric responses: there’s always that one respondent who enters ‘50’ when asked how many siblings they have.
  • We also check for outliers in open-ended numeric responses. Whether it’s false data or an exceptional respondent (e.g., Bill Gates), outliers can skew our data, leading us to draw the wrong conclusions and make misguided recommendations to clients. For example, I worked on a survey that asked respondents how many cars they own. Anyone who provided a number more than three standard deviations above the mean was flagged as an outlier, because their answer would’ve significantly skewed our interpretation of average car ownership: the reality is the average household owns two cars, not six.
  • Straightliners are respondents who answer a battery of questions on the same scale with the same response. Because of this, sometimes we’ll see someone who strongly agrees or disagrees with two completely opposing statements—making it difficult to trust these answers reflect the respondent’s real opinion.
  • We often insert a Red Herring Fail into our questionnaires to help identify and weed out distracted respondents. A Red Herring Fail is a 10-point scale question usually placed around the halfway mark of a questionnaire that simply asks respondents to select the number “3” on the scale. If they select a number other than “3”, we flag them for removal.
  • If there’s an incentive to participate in a questionnaire, someone may feel inclined to participate more than once. So, to ensure our completed surveys come from unique individuals, we check for duplicate IP addresses and respondent IDs. (A sketch of several of these checks follows this list.)
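To make these checks concrete, here’s a minimal sketch of how several of them might be automated, assuming the completed surveys sit in a pandas DataFrame. The column names (num_cars, red_herring, ip_address, respondent_id, grid_q*) and the expected red-herring answer are hypothetical illustrations, not CMB’s actual implementation.

```python
import pandas as pd

def flag_suspect_respondents(df: pd.DataFrame) -> pd.DataFrame:
    """Add boolean flag columns for common data-cleaning checks."""
    out = df.copy()

    # Outliers: numeric answers more than three standard deviations
    # above the mean (e.g., number of cars owned).
    mean, sd = out["num_cars"].mean(), out["num_cars"].std()
    out["flag_outlier"] = out["num_cars"] > mean + 3 * sd

    # Red Herring Fail: a 10-point scale item that asks respondents
    # to select "3"; any other answer is flagged.
    out["flag_red_herring"] = out["red_herring"] != 3

    # Duplicates: repeated IP addresses or respondent IDs.
    out["flag_duplicate"] = (
        out.duplicated("ip_address", keep="first")
        | out.duplicated("respondent_id", keep="first")
    )

    # Straightlining: identical answers across an entire battery of
    # same-scale items.
    grid_cols = [c for c in out.columns if c.startswith("grid_q")]
    out["flag_straightline"] = out[grid_cols].nunique(axis=1) == 1

    return out
```

Respondents carrying any of these flags can then be reviewed by hand before removal, which keeps edge cases (the genuine Bill Gates in your sample) from being discarded automatically.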

There are a lot of variables that can skew our data, so our cleaning process is thorough and thoughtful. And while the process may be cumbersome, here’s why we clean data: 

  • Impression on the client: Following a detailed data cleaning process helps show that your team is cautious, thoughtful, and able to accurately dissect and digest large amounts of data. This demonstration of thoroughness and competency goes a long way toward building trust in the researcher/client relationship, because the client will see their researchers are working to present the best data possible.
  • Helps tell a better story: We pride ourselves on storytelling, using insights from data and turning them into strong deliverables that help our clients make strategic business decisions. If we didn’t have accurate and clean data, we wouldn’t be able to tell a good story!
  • Ensures high-quality, precise data overall: At CMB, typically two or more researchers work on the same data file to mitigate the chance of error. The data undergoes such scrutiny so that any issues or mistakes can be noted and rectified, ensuring the integrity of the report.

The benefits of taking the time to clean our data far outweigh the risks of skipping it. Data cleaning keeps false or unrepresentative information from influencing our analyses or recommendations to a client and ensures our sample accurately reflects the population of interest.

So this spring, while you’re finally putting away those holiday decorations, remember that data cleaning is an essential step in maintaining the integrity of your work.

Nicole Battaglia is an Associate Researcher at CMB who prefers cleaning data over cleaning her bedroom.

Topics: data collection, quantitative research

But first... how do you feel?

Posted by Lori Vellucci

Wed, Dec 14, 2016


How does your brand make consumers feel? It’s a tough but important question, and the answer will often vary between customers and prospects or between segments within your customer base. Understanding and influencing consumers’ emotions is crucial for building a loyal customer base, and scientific research, market research, and conventional wisdom all suggest that emotions are a key piece of the puzzle in attracting and engaging consumers.

CMB designed EMPACT℠, a proprietary quantitative approach to understanding how a brand, product, touchpoint, or experience should make a consumer feel in order to drive their behaviors. Measuring valence (how bad or good) and activation (low to high energy) across basic emotions (e.g., happy, sad), social and self-conscious emotions (e.g., pride, embarrassment, nostalgia), and other relevant feelings and mental states (e.g., social connection, cognitive ease), EMPACT℠ has proved to be a practical, comprehensive, and robust tool. Key insights around emotions emerge, which can then drive communications that elicit the desired emotions and drive consumer behavior. But while EMPACT℠ has been used extensively as a quantitative tool, it is also an important component of qualitative research.

To get the most bang for the buck from qualitative research, every researcher knows that having the right people in the room (or in front of the video-enabled IDI) is a critical first step. You screen for demographics and behaviors, and sometimes attitudes, but have you considered emotions? Ensuring that you recruit respondents who feel a specific way when considering your brand or product is critical to gleaning the most insight from qualitative work. (Tweet this!) Applying an emotional qualifier allows us to ensure we’re talking to respondents who are in the best position to provide the specific types of insights we’re looking for.

For example, a CMB client learned from a segmentation study incorporating EMPACT℠ that their brand over-indexed on eliciting certain emotions that tend to drive consumers away from brands in their industry. The firm wanted to craft targeted communications to mitigate these negative emotions among a specific strategic consumer segment. As a first step in testing their marketing message and imagery, focus groups were conducted.

In addition to using the segmentation algorithm to ensure we had the correct consumer segment in the room, we also included EMPACT℠ screening to be sure the selected respondents felt the emotions we wanted to address with new messaging. In this way, we were able to elicit insights directly related to how well the new messaging worked in mitigating the negative emotions. Of course we tested the messaging among broader groups as well, but being able to identify and isolate the respondents whose emotions we most wished to improve ensured the development of advertising that can move the emotion needle and motivate consumers to try, and to love, the brand.

Want to learn more about EMPACT? View our webinar by clicking the link below:

Learn More About EMPACT℠

Lori Vellucci is an Account Director at CMB.  She spends her free time purchasing ill-fated penny stocks and learning about mobile payment solutions from her Gen Z daughters.

Topics: methodology, qualitative research, EMPACT, quantitative research

Why Researchers Should Consider Hybrid Methods

Posted by Becky Schaefer

Fri, Dec 09, 2016

As market researchers we’re always challenging ourselves to provide deeper, more accurate insights for our clients. Throughout my career I’ve witnessed an increased dedication to uncovering better results by integrating traditional quantitative and qualitative methodologies to maximize insights within shorter time frames.

Market research has traditionally been divided into quantitative and qualitative methodologies. But more and more researchers are combining elements of each – creating a hybrid methodology, if you will – to paint a clearer picture of the data for clients. [Tweet this!]

Quantitative research is focused on uncovering objective measurements via statistical analysis. In practice, quant market research studies generally entail questionnaire development, programming, data collection, analysis, and reporting, and can usually be completed within a few weeks (depending on the scope of the research). Quant studies usually have larger sample sizes and are structured and set up to quantify respondents’ attitudes, opinions, and behaviors.

Qualitative research is exploratory and aims to uncover respondents’ underlying reasons, beliefs and motivations. Qualitative is descriptive, and studies may rely on projective techniques and principles of behavioral psychology to probe deeper than initial responses might allow. 

While both quantitative and qualitative research have their respective merits, market research is evolving and blurring the lines between the two.  At CMB we understand each client has different goals and sometimes it’s beneficial to apply these hybrid techniques.

For example, two approaches I like to recommend are:

  • Video open-ends: Traditional quantitative open-ends ask respondents to answer open-ended questions by entering a text response. Open-ends give respondents the freedom to answer in their own words versus selecting from a list of pre-determined responses. While text open-ends are still a viable technique, market researchers are now throwing video into the mix: instead of writing down their responses, respondents can record themselves on video. The obvious advantage of video is that it facilitates a more genuine, candid response while letting researchers see respondents’ emotions “face to face.” This twist on traditional quantitative research has the potential to garner deeper, more meaningful respondent insight.
  • In-depth/moderated chats let researchers dig deeper and connect with respondents within the paradigm of a traditional quantitative study. In these short discussions respondents can explain to researchers why they made a specific selection on a survey. In-depth/moderated chats can help contextualize a traditional quantitative survey – providing researchers (and clients) with a combination of both quantitative and qualitative insights.

As insights professionals we strive to offer critical insights that help our clients and partners answer their biggest business questions. More and more often, the best way to do that is to put tradition aside and combine qualitative and quantitative methodologies.

Rebecca is part of the field services team at CMB, and she is excited to celebrate her favorite time of year with her family and friends.  

Topics: methodology, qualitative research, quantitative research

A Data Dominator’s Guide to Research Design…and Dating

Posted by Talia Fein

Wed, Jan 20, 2016

I recently went on a first date with a musician. We spent the first hour or so talking about our careers: the types of music he plays, the bands he’s been in, how music led him to the job he has now, and, of course, my unwavering passion for data. Later, when there was a pause in the conversation, he said: “so, do you like music?”

Um. . .how was I supposed to answer that? There was clearly only one right answer (“yes”) unless I really didn’t want this to go anywhere. I told him that, and we had a nice laugh. . .and then I used it as a teaching opportunity to explain one of my favorite market research concepts: Leading Questions.

According to Tull and Hawkins’ Marketing Research: Measurement and Method, a Leading Question is “a question that suggests what the answer should be, or that reflects the researcher’s point of view.” Example: “Do you agree, as most people do, that TV advertising serves no useful purpose?”

In writing good survey questions, we need to give enough information for the respondent to fully answer the question, but not too much information that we give away either our own opinions or the responses we expect to hear. This is especially important in opinion research and political polling when slight changes in word choice can create bias and impact the results. For example, in their 1937 poll, Gallup asked, “Would you vote for a woman for President if she were qualified in every other aspect?” This implies that simply being a woman is a disqualification for President. (Just so you know: 33% answered “Yes.”) Gallup has since changed the wording—“If your party nominated a generally well-qualified person for President who happened to be a woman, would you vote for that person?”—and the question is included in a series of questions in which “woman” is replaced with other descriptors, such as Catholic, Black, Muslim, gay, etc. Of course, times have changed, and we can’t know exactly how much of the bias was due to the leading nature of the question, but 92% answered “Yes” as recently as June 2015.

The ordering of questions is just as important as the words we choose in specific questions. John Martin (Cofounder and Chairman of CMB, 1984-2014) taught us the importance—and danger—of sequential bias. In writing a good questionnaire, we’re not just spitting out a bunch of questions and collecting responses—we’re taking the respondent through a 15 (or 20 or 30) minute journey, trying to get his/her most unbiased, real opinions and preferences. For example, if we start a questionnaire by showing a list of brands and asking which ones are fun and exciting, and then ask unaided which brands respondents know of, we’re not going to get very good data. Just as if we ask a person whether he/she likes music after talking for an hour about the importance of music in our own lives, we might get skewed results.

One common rule when it comes to questionnaire ordering is to ask unaided questions before aided questions. Otherwise, the aided questions would remind respondents of possible options—and inflate their unaided answers. A couple more rules I like to keep in mind:

  1. Start broad, then go narrow: talk about the category before the specific brand or product.

Remember that the respondent is in the middle of a busy day at work or has just put the kids to bed and has other things on his/her mind. The introductory sections of a questionnaire are as much about screening respondents and gathering data as they are about getting the respondent thinking about the category (rather than what to make for the kids’ lunch tomorrow).

  2. Think about what you have already told the respondent: like a good date, the questionnaire should build.

In one of my recent projects, after determining awareness of a product, we measured “concept awareness” by showing a short description of the product to those who had said they were NOT aware of it and then asking them if they had heard of the concept. Later on in the questionnaire, we asked respondents what product features they were familiar with. For respondents who had seen the concept awareness question (i.e., those who hadn’t been fully aware), we removed the product features that had been mentioned in the description (of course, the respondent would know those).
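In survey-programming terms, this is simple piping logic. Here’s a minimal sketch under assumed names (Respondent, ALL_FEATURES, and CONCEPT_FEATURES are hypothetical, not the actual project’s code):

```python
from dataclasses import dataclass

@dataclass
class Respondent:
    # True for those who were shown the concept description
    # (i.e., those who said they were NOT aware of the product).
    saw_concept_description: bool

ALL_FEATURES = ["feature_a", "feature_b", "feature_c"]
CONCEPT_FEATURES = ["feature_a"]  # features mentioned in the description

def familiarity_options(r: Respondent) -> list[str]:
    # Anyone who read the description would trivially "know" the features
    # it mentions, so drop them from the familiarity question.
    if r.saw_concept_description:
        return [f for f in ALL_FEATURES if f not in CONCEPT_FEATURES]
    return ALL_FEATURES
```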

  3. When asking unaided awareness questions, think about how you’re defining the category.

“What Boston-based market research companies founded in 1984 come to mind?” might be a little too specific. A better way of wording this would simply be: “What market research companies come to mind?” Usually thinking about the client’s competitive set will help you figure out how to explain the category.

So, remember: in research, just as in dating, what we put out (good survey questions and positive vibes) influences what we get back.

Talia is a Project Manager on CMB’s Technology and eCommerce team. She was recently named one of Survey Magazine’s 2015 Data Dominators and enjoys long walks on the beach.

We recently did a webinar on research we conducted in partnership with venture capital firm Foundation Capital. This webinar will help you think about Millennials and their investing, including specific financial habits and the attitudinal drivers of their investing preferences.

Watch Here!

Topics: methodology, research design, quantitative research

My Data Quality Obsession

Posted by Laurie McCarthy

Tue, Jan 12, 2016

Yesterday I got at least 50 emails, and that doesn’t include what went to my spam folder—at least half of those went straight in the trash. So, I know what a challenge it is to get a potential respondent to even open an email that contains a questionnaire link. We’re always striving to discover and implement new ways to reach respondents and to keep them engaged: mobile optimization is key, but we also consider incentive levels and types, subject lines, and, of course, better ways to ask questions like highlighter exercises, sliding scales, interactive web simulations, and heat maps. This project customization also provides us with the flexibility needed to communicate with respondents in hard-to-reach groups.

Once we’ve got those precious respondents, the question remains: are we reaching the RIGHT respondents and keeping them engaged? How can we evaluate the data efficiently prior to any analysis?

Even with more methods in place to protect against “bad”/professional respondents, the data quality control process remains an important aspect of each project. We have set standards in place, starting in the programming phase—as well as during the final review of the data—to identify and eliminate “bad” respondents from the data prior to conducting any analysis.

We start from a conservative standpoint during programming, flagging respondents who fail any of the criteria in the list below. These respondents are not permanently removed from the data at this point, but they are categorized as incompletes and remain reviewable if we feel that they provide value to the study:

  • “Speedsters”: Respondents who completed the questionnaire in one-fifth of the overall median time or less. This check is applied after approximately the first 20% of completes or the first 100 completes, whichever comes first.
  • “Grid Speedsters”: When applicable, respondents whose time on two or more grids of ten or more items each is more than 2 standard deviations below the mean time for that grid. Again, this is applied after approximately the first 20% of completes or the first 100, whichever comes first.
  • “Red Herring”: We incorporate a standard scale question (0-10), programmed at or around the estimated 10-minute mark of the questionnaire, asking the respondent to select a specific number on the scale. Respondents who do not select the appropriate number are flagged. (A sketch of these in-field checks follows this list.)
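Here’s a minimal sketch of how these in-field flags might look in code, assuming per-respondent timing data in a pandas DataFrame; the column names and the expected red-herring answer are hypothetical:

```python
import pandas as pd

def flag_in_field(df: pd.DataFrame, grid_time_cols: list[str],
                  expected_answer: int = 3) -> pd.DataFrame:
    """Flag speedsters, grid speedsters, and red-herring failures.

    Applied once roughly the first 20% (or first 100) of completes
    are in, so the median and means are reasonably stable.
    """
    out = df.copy()

    # Speedsters: total time at or below 1/5 of the overall median.
    median_time = out["total_seconds"].median()
    out["flag_speedster"] = out["total_seconds"] <= median_time / 5

    # Grid speedsters: grid time more than 2 SD below that grid's mean,
    # on two or more grids.
    fast = pd.DataFrame(index=out.index)
    for col in grid_time_cols:
        mean, sd = out[col].mean(), out[col].std()
        fast[col] = out[col] < mean - 2 * sd
    out["flag_grid_speedster"] = fast.sum(axis=1) >= 2

    # Red herring: the mid-survey 0-10 scale item with one correct answer.
    out["flag_red_herring"] = out["red_herring"] != expected_answer

    return out
```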

This process allows us to begin the data quality review during fielding, so that the blatantly “bad” respondents are removed prior to close of data collection.

However, our process extends to the final data as well.  After the fielding is complete, we review the data for the following:

  • Duplicate respondents: Even with unique links and passwords (for online), we review the data based on the email/phone number provided and the IP Address to remove respondents who do not appear to be unique.
  • Additional speedsters: Respondents who completed the questionnaire in a short amount of time. We take into consideration any brand/product rotation as well (evaluating one brand/product would take less time than evaluating several brands/products). 
  • Straight-liners: Similar to the grid speedsters above, we review respondents who have selected only one value for each attribute in a grid. We flag respondents who straight-line each grid to create a per-respondent tally of straight-lined grids (see the sketch after this list). We review this metric on its own as well as in conjunction with overall completion time, the rationale being that respondents who select only one value throughout the questionnaire have usually also sped through it.
  • Inconsistent response patterns: Some grids include attributes on a reversed scale, and we review those to determine whether there are contradictory responses. Another example might be a respondent who indicates he/she uses a specific brand and, later in the study, indicates that he/she is not aware of that brand.
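Here’s a sketch of that straight-liner tally, again with hypothetical grid and column names: a grid counts as straight-lined when the respondent gives a single value for every item, and the per-respondent sum is reviewed alongside total completion time.

```python
import pandas as pd

# Hypothetical grids: each entry lists that grid's item columns.
GRIDS = {
    "brand_grid": ["brand_q1", "brand_q2", "brand_q3"],
    "usage_grid": ["usage_q1", "usage_q2", "usage_q3"],
}

def tally_straightliners(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    for name, items in GRIDS.items():
        # A grid is straight-lined when only one distinct value appears.
        out[f"straightlined_{name}"] = out[items].nunique(axis=1) == 1
    flag_cols = [f"straightlined_{name}" for name in GRIDS]
    out["n_straightlined_grids"] = out[flag_cols].sum(axis=1)
    # Review the tally alongside completion time: respondents who
    # straight-line every grid have usually also sped through.
    return out.sort_values(
        ["n_straightlined_grids", "total_seconds"],
        ascending=[False, True],
    )
```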

While we may not eliminate respondents, we do examine other factors for “common sense”:

  • Gibberish verbatims: Random letters/symbols, or references that do not pertain to the study, across each open-ended response
  • Demographic review: A review of the demographic information to ensure responses are reasonable and in line with the specifications of the study

As part of our continuing partnership with panel sample providers, we provide them with the panel IDs and information of respondents who have failed our quality control process. In some instances, when the client or the analysis requires that certain sample sizes be collected, this may also necessitate replacing bad respondents. Our collaboration allows us to stand behind the quality of the respondents we provide for analysis and reporting, while also meeting the needs of our clients in a challenging environment.

Our clients rely on us to manage all aspects of data collection when we partner with them to develop a questionnaire, and our stringent data quality control process ensures that we can do that plus provide data that will support their business decisions. 

Laurie McCarthy is a Senior Data Manager at CMB. Though an avid fan of Excel formulas and solving data problems, she has never seen Star Wars. Live long and prosper.

We recently did a webinar on research we conducted in partnership with venture capital firm Foundation Capital. This webinar will help you think about Millennials and their investing, including specific financial habits and the attitudinal drivers of their investing preferences.

Watch Here!

 

Topics: Chadwick Martin Bailey, methodology, data collection, quantitative research

Qualitative, Quantitative, or Both? Tips for Choosing the Right Tool

Posted by Ashley Harrington

Wed, Aug 06, 2014

In market research, the rivalry between qualitative and quantitative research can occasionally feel like the Red Sox vs. the Yankees. You can’t root for both, and you can’t just “like” one. You’re very passionate about your preference. But in many cases, this can be problematic. For example, using a quantitative mindset or tactics in a qualitative study (or vice versa) can lead to inaccurate conclusions. Below are some examples of this challenge, one that can happen throughout all phases of the research process:

Planning

Clients will occasionally request that market researchers use a particular methodology for an engagement. We always explore these requests further with our clients to ensure there isn’t a disconnect between the requested methodology and the problem the client is trying to solve.

For example, a bank* might say, “The latest results from our brand tracking study indicate that customers are extremely frustrated by our call center and we have no idea why. Let’s do a survey to find out.”

Because the bank has no hypotheses about the cause of the issue, moving forward with their survey request could lead to designing a tool with (a) too many open-ended questions and (b) questions/answer options that are no more than wild guesses at the root of the problem, which may or may not jibe with how consumers actually think and feel.

Instead, qualitative research could be used to provide a foundation of preliminary knowledge about a particular problem, population, and so forth. Ultimately, that knowledge can be used to help inform the design of a tool that would be useful.

Questionnaire Design

For a product development study, a software company* asks to add open-ended questions to a survey: “What would make you more likely to use this software?” or “What do you wish the software could do that it can’t do now?”

Since most respondents are not engineers or product designers, these questions can be difficult to answer. Open-ended questions like these are likely to yield a lot of not-so-helpful “I don’t know”-type responses, rather than specific enhancement suggestions.

Instead of squandering valuable real estate on a question unlikely to yield helpful data, a qualitative approach could allow respondents to react to ideas at a more conceptual level, bounce ideas off of each other or a moderator, or take some time to reflect on their responses. Even if the customer is not an R&D expert, they may have a great idea that just needs a bit of coaxing via input and engagement with others.

Analysis and Reporting

In reviewing the findings from an online discussion board, a client at a restaurant chain* reviews the transcripts and states, “85% of participants responded negatively to our new item, so we need to remove it from our menu.”

Since findings from qualitative studies are not necessarily statistically significant, applying quantitative reporting techniques (e.g., descriptive statistics and frequencies) to them is not ideal, as it implies a level of precision the findings don’t support. Further, it would rarely be cost-effective to recruit and conduct qualitative research with a group large enough to be projectable onto the general population.

Rather than attempting to quantify the findings in strictly numerical terms, qualitative data should be thought of as more directional in terms of overall themes and observable patterns.

At CMB, we root for both teams. We believe both produce impactful insights, and that often means using a hybrid approach. We believe the most meaningful insights come from choosing the approach or approaches best suited to the problem our client is trying to solve. However, being a Boston-based company, we can’t say that we’re nearly this unbiased when it comes to the Red Sox versus the Yankees.

*Example (not actual)

Ashley is a Project Manager at CMB. She loves both qualitative and quantitative equally and is not knowledgeable enough about sports to make any sports-related analogies more sophisticated than the Red Sox vs. the Yankees.

Click the button below to subscribe to our monthly eZine and get the scoop on the latest webinars, conferences, and insights. 

Subscribe Here

Topics: methodology, qualitative research, research design, quantitative research

What's the Story? 5 Insights from CASRO's Digital Research Conference

Posted by Jared Huizenga

Wed, Mar 19, 2014

Who says market research isn’t exciting? I’ve been a market researcher for the past sixteen years, and I’ve seen the industry change dramatically since the days when telephone questionnaires were the norm. I still remember my excitement when disk-by-mail became popular! But I don’t think I’ve ever felt as excited about market research as I do right now. The CASRO Digital Research Conference was last week, and the presentations confirmed what I already knew—big changes are happening in the market research world. Here are five key takeaways from the conference:

  1. “Market research” is an antiquated term. It was even suggested that we change the name of our industry from market research to “insights.” In fact, the word “insights” came up multiple times throughout the conference by different presenters. This makes a lot of sense to me. Many people view market research as a process whereas insights are the end result we deliver to our clients. Speaking for CMB, partnering with our clients to provide critical insights is a much more accurate description of our mission and focus. We and our clients know percentages by themselves fail to tell the whole story, and can in fact lead to more confusion about which direction to take.

  2. “Big data” means different things to different people. If you ask ten people to define big data you’ll probably get ten different answers. Some define it as omnipresent data that follows us wherever we go. Others define it as vast amounts of unstructured data, some of which might be useful and some not. Still others call it an outdated buzzword.  No matter what your own definition of big data is, the market research industry seems to be in somewhat of a quandary about what to do with it. Clients want it and researchers want to oblige, but do adequate tools currently exist to deliver meaningful big data? Where does the big data come from, who owns it, and how do you integrate it with traditional forms of data? These are all questions that have not been fully answered by the market research (or insights) industry. Regardless, tons of investment dollars are currently being pumped into big data infrastructure and tools. Big data is going to be, well, BIG.  However, there’s a long way to go before most will be able to use it to its potential.

  3. Empathy is the hottest new research “tool.” Understanding others’ feelings, thoughts, and experiences allows us to understand the “why behind the what.”  Before you dismiss this as just a qualitative research thing, don’t be so sure.  While qualitative research is an effective tool for understanding the “why,” the lines are blurring between qualitative and quantitative research. Picking one over the other simply doesn’t seem wise in today’s world. Unlike with big data, tools do currently exist that allow us to empathize with people and tell a more complete story. When you look at a respondent, you shouldn’t only see a number, spreadsheet, or fancy graphic that shows cost is the most important factor when purchasing fabric softener. You should see the man who recently lost his wife to cancer and who is buying fabric softener solely based on cost because he has five years of medical bills. There is value in knowing the whole story. When you look at a person, you should see a person.

  4. Synthesizers are increasingly important. I’m not talking about the synthesizers from Soft Cell’s version of “Tainted Love” or Van Halen’s “Jump.” The goal here is to once again tell a complete story and, in order to do this, multiple skillsets are required. Analytics have traditionally been the backbone of market research and will continue to play a major role in the future. However, with more and more information coming from multiple sources, synthesizers are also needed to pull all of it together in a meaningful way. In many cases, those who are good at analytics are not as good at synthesizing information, and vice versa. This may require a shift in the way market research companies staff for success in the future. 

  5. Mobile devices are changing the way questionnaires are designed. A time will come when very few respondents are willing to take a questionnaire over twenty minutes long, and some are saying that day is coming within two years. The fact is, no matter how much mobile “optimization” you apply to your questionnaire, the time to take it on a smartphone is still going to be longer than on PCs and tablets. Forcing respondents to complete on a PC isn’t a good solution, especially since the already elusive under-25 population spends more time on mobile devices than PCs. So what’s a researcher to do? The option of “chunking” long questionnaires into several modules shows potential, but it requires careful questionnaire design and a trusted sampling plan. This method isn’t a good fit for studies in which the analysis dictates that each respondent complete the entire questionnaire, and the overall number of respondents needed is likely to increase with this methodology. It also requires client buy-in. But it’s something that we at CMB believe is worth pursuing as we leverage mobile technologies.

Change is happening faster than ever. If you thought the transition from telephone to online research was fast—if you were even around back in the good old days when that happened—you’d better hold onto your seat! Information surrounds every consumer. The challenge for insights companies is not only to capture that information but to empathize, analyze, and synthesize it in order to tell a complete story. This requires multiple skillsets as well as the appropriate tools, and honestly the industry as a whole simply isn’t there yet. However, I strongly believe that those of us who are working feverishly to not just “deal” with change but to leverage it, and who are making progress with these rapidly changing technological advances, will be well equipped for success.

Jared is CMB’s Director of Field Services and has been in the market research industry for sixteen years. When he isn’t enjoying the exciting world of data collection, he can be found competing at barbecue contests as the pitmaster of the team Insane Swine BBQ.

 

CMB Insight eZine

Sign up here for our monthly eZine for the latest Consumer Pulse reports, case studies, conference updates, webinars, and more.

Topics: qualitative research, big data, mobile, research design, quantitative research, conference recap

Deconstructing the Customer Experience: What's in Your Toolkit?

Posted by Jennifer von Briesen

Wed, Sep 25, 2013

More and more companies are focusing on trying to better understand and improve their customers’ experiences. Some want to become more customer-centric. Some see this as an effective path to competitive differentiation. Others, challenging traditional assumptions (e.g., Experience Co-creation, originated by my former boss, Francis Gouillart, and his colleagues Prof. Venkat Ramaswamy and the late C.K. Prahalad), are applying new strategic thinking about value creation. Decision-makers in these firms are starting to recognize that every single interaction and experience a customer has with the company (and its ecosystem partners) may either build or destroy customer value and loyalty over time.

While companies traditionally measure customer value based on revenues, share of wallet, cost to serve, retention, NPS, profitability, lifetime value, etc., we now have more and better tools for deconstructing the customer experience and understanding the components driving customer and company interaction value at the activity/experience level. To really understand the value drivers in the customer experience, firms need to simultaneously look holistically, go deep in a few key focus areas, and use a multi-method approach.

Here’s an arsenal of tools and methods that are great to have in your toolkit for building customer experience insight:

Qualitative tools

  • Journey mapping methods and tools

  • In-the-moment, customer activity-based tools

    • Voice capture exercises (either using mobile phones or landlines) where customers can call in and answer a set of questions related to whatever they are doing in the moment.

    • Use mobile devices and online platforms to upload visuals, audio, and/or video to answer questions (e.g., as you are filling out your enrollment paperwork, take a quick, less-than-10-second video to share your thoughts on what you are experiencing).

  • Customer diaries

    • E.g., use mobile devices as a visual diary or to complete a number of activities

  • Observation tools

    • Live or virtual tools (e.g., watch/videotape in-person or online experiences, either live or after the fact)

    • On-site customer visits: companies I’ve worked with often like to join customers doing activities in their own environments and situational contexts. Beyond basic observation, company employees can dialogue with customers during the activities/experiences to gain immediate feedback and richer understanding.

  • Interviews and qualitative surveys

  • Online discussion boards

  • Online or in-person focus groups

Quantitative tools

  • Quantitative surveys/research tools (too many to list in a blog post)

  • Internal tracking tools

    • Online tools for tracking behavior metrics (e.g., landing pages/clicks/page views/time on pages, etc.) for key interactions/experience stages. This enables ongoing data-mining, research and analysis.

    • Service/support data analysis (e.g., analyze call center data on inbound calls and online support queries by interaction type, stage, and period to look for FAQs, problems, etc.). A minimal sketch follows this list.
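As one illustration of the service/support idea, here’s a minimal sketch that tallies support contacts by journey stage and interaction type; the file name and column names are hypothetical:

```python
import pandas as pd

# Hypothetical export of call-center and online support-query logs.
logs = pd.read_csv("support_log.csv", parse_dates=["timestamp"])

# Frequency of interaction types by experience stage: a quick way to
# surface FAQs and problem hot spots across the journey.
summary = (
    logs.groupby(["journey_stage", "interaction_type"])
        .size()
        .rename("contacts")
        .reset_index()
        .sort_values("contacts", ascending=False)
)
print(summary.head(10))

# Monthly contact volume per interaction type, for trend-spotting.
monthly = (
    logs.set_index("timestamp")
        .groupby("interaction_type")
        .resample("MS")
        .size()
)
```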

What tools are you using to better understand and improve the customer experience? What tools are in your toolkit?  Are you taking advantage of all the new tools available?

Jennifer is a Director at South Street Strategy Group. She recently received the 2013 “Member of the Year” award from the Association for Strategic Planning (ASP), the preeminent professional association for those engaged in strategic thinking, planning and action.

Topics: South Street Strategy Group, strategy consulting, methodology, qualitative research, quantitative research, customer experience and loyalty

Can Quantitative Methods Uncover Emotion?

Posted by Megan McManaman

Wed, Sep 14, 2011

Picture yourself pushing your cart down the grocery store aisle: you’ve planned your meals and are making choices that suit your family’s tastes and budget. The decisions you make are rational and logical. But as anyone who’s ever felt a sense of nostalgia over a chocolate chip cookie, or empowered by their choice of the natural peanut butter, can tell you, those decisions are also emotional.

As market researchers we’re interested in knowing what decisions consumers make, how they make them, and why. Traditionally, we’ve used quantitative (survey) approaches to discover the “what” and the “how,” and turned to qualitative methods (IDIs, focus groups) to understand the “why,” including the emotions underlying these decisions. But merely asking people to name their emotions is not enough: language biases, the tedium of having subjects choose from lists of 50 or more emotions, and the dangers of self-report for something so nebulous are all difficulties researchers face. At the same time, scientists and quantitative researchers have come to recognize the extent to which decision-making takes place in the subconscious mind. The question is: how can we apply rigorous measurement to what seem like the most irrational, unpredictable human characteristics?

Medical science has offered new possibilities, using relatively established technologies to gain insight into the relationships between human emotion and brain activity. EEGs, eye tracking, and even MRIs have helped us understand the nuances and complexity of the brain’s response in very concrete and visual ways. An fMRI, like the one pictured below, and other technologies are valuable in their ability to measure brain responses that the subject might not even know they’re having. But there are limitations: beyond being prohibitive from a cost perspective, the results lack the nuance and detail necessary for effective application by market researchers.

[Image: fMRI scan measuring brain response]
Researchers from AdSAM, a research company focused on Emotional Response Modeling, have developed a methodology using non-verbal techniques to identify and measure emotional response to understand consumer attitudes, preferences, and behavior. This approach uses pictorial scales to capture emotional reactions and predict behavior while minimizing the language biases common in verbal approaches and contextualizing the results of more costly brain imaging approaches.

Want to know more? Please join us on September 21st as CMB’s Jeff McKenna and AdSAM’s Cathy Gwynn discuss the development and application of this new approach to emotional response measurement.

 

 

 

Posted by Megan McManaman. Megan is part of CMB’s marketing team, and she isn't proud to say buying ketchup makes her happy.

Topics: emotional measurement, webinar, quantitative research

Compilation Scores: Look Under the Hood

Posted by Cathy Harrison

Wed, Aug 03, 2011

My kid is passionate about math, and based on every quantitative indication, he excels at it. So you can imagine our surprise when he didn’t qualify for next year’s advanced math program. Apparently he barely missed the cut-off score: a compilation of two quantitative sources of data and one qualitative source. Given this injustice, I dug into the school’s evaluation method (hold off your sympathy for the school administration just yet).

Undoubtedly, the best way to get a comprehensive view of a situation is to consider both quantitative and qualitative information from a variety of sources.  By using this multi-method approach, you are more likely to get an accurate view of the problem at hand and are better able to make an informed decision.  Sometimes it makes sense to combine data from different sources into a “score” or “index.”  This provides the decision-maker with a shorthand way of comparing something – a brand, a person, or how something changes over time.

These compilation scores or indices are widely used and can be quite useful, but their validity depends on the sources used and how they are combined. In the case of the math evaluation, there were two quantitative sources and one qualitative source. The quantitative sources were the results of a math test conducted by the school (CTP4) and a statewide standardized test (MCAS). The qualitative source was based on the teacher’s observations of the child across ten variables, rated on a 3-point scale. For the most part, I don’t have a problem with these data sources. The problem was in the weighting of these scores.

I’m not suggesting that the quantitative data is totally bias-free, but at least the kids are evaluated on a level playing field: they either get the right answer or they don’t. In the case of the teacher evaluation, many more biases can impact the score (such as the teacher’s preference for certain personality types or the kids of colleagues or teacher’s aides). The qualitative component was given a 39% weight – equal to the CTP4 (“for balance”) and greater than the MCAS (weighted at 22%). This puts a great deal of influence in the hands of one person. In this case, it was enough to override the superior quantitative scores and disqualify my kid.
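Given the weights described above, the composite is a simple weighted sum, which makes the leverage of the qualitative component easy to see. A minimal sketch (the common 0-100 scale is an assumption for illustration):

```python
def compilation_score(ctp4: float, mcas: float, teacher: float) -> float:
    """Weighted composite as described above: the CTP4 and the teacher's
    qualitative rating each carry 39%, the MCAS 22%. All three inputs are
    assumed normalized to the same 0-100 scale (hypothetical)."""
    return 0.39 * ctp4 + 0.39 * teacher + 0.22 * mcas
```

With these weights, a one-point swing in the teacher’s rating moves the composite nearly twice as much as a one-point swing in the MCAS score, which is exactly the influence concentrated in one person’s hands.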

Before you think this is just the rant of a miffed parent with love blinders on, think of this evaluation process as if it were a corporate decision that had millions of dollars at stake.  Would you be comfortable with this evaluation system?

In my opinion, a fairer evaluation process would have been to qualify students based on the quantitative data (especially since two sources were available) and then, for those on the “borderline,” use the qualitative data to make the final decision about qualification. Qualitative data is rarely combined with quantitative data in an index. Its purpose is to explore a topic before quantification or to bring “color” to the quantitative results. As you can imagine, I have voiced this opinion to the school administration but am unlikely to be able to reverse the decision.

What’s the takeaway for you?  Be careful of how you create or evaluate indices or “scores.” They are only as good as what goes into them.

Posted by Cathy Harrison.  Cathy is a client services executive at CMB and has a passion for strategic market research, social media, and music.  You can follow Cathy on Twitter at @virtualMR     

 

Topics: advanced analytics, methodology, qualitative research, quantitative research