WELCOME TO OUR BLOG!

The posts here represent the opinions of CMB employees and guests—not necessarily the company as a whole. 


Qualitative Research Isn't Dying—It's Evolving

Posted by Anne Hooper

Wed, May 06, 2015

Back in 2005, Malcolm Gladwell told us that focus groups are dead. Just last November, Jim Bryson, CEO of 20/20 Research, questioned whether qualitative research was thriving or dying: “If we take a narrow, more traditional view that qualitative is defined by the methods of face-to-face focus groups or interviews, particularly those held in a qualitative facility, then the case can be easily made that qualitative is dying.”

To all of this, I say: wait, what?! Qualitative is dying? I refused to believe it, so I embarked on a journey to explore where qualitative has been, and more importantly, where it’s going. During my research, I found plenty of evidence that qualitative is not, in fact, dying. Great news, right? (Especially for me, because if it were true, I just might be out of a job I love.) I took a look at the fall 2014 Greenbook Research Industry Trends (GRIT) Report and focused on the data from Q1-Q2 2013 and Q1-Q2 2014. In that data, I learned:

  • The use of traditional in-person focus groups increased from 60% (Q1-Q2 2013) to 70% (Q1-Q2 2014).
  • Within the same time period, the use of in-person, in-depth interviews increased from 45% to 53%.
  • Interviews and groups using online communities increased from 21% to 24%.
  • The use of mobile qual (e.g., diaries, image uploads) increased from 18% to 24%.

Yes, it’s important to note that not all qualitative methodologies saw an increase in usage within this timeframe. In fact, there was a decrease in the usage of telephone IDIs, in-store shopping/observations, bulletin board studies, both chat-based and webcam-based online focus groups, and telephone focus groups.  All this notwithstanding, I think it’s fair to say that qualitative is still very much alive and well.

So why do people keep talking about qualitative dying? We can’t deny that there are a number of factors that affect how and when we use qualitative methodologies today (technology, access to big data, and text analytics are a few). But, this doesn’t mean qualitative is disappearing as a discipline. Qualitative is evolving at a rapid pace and feels more relevant than ever. Sure, we need to keep up with client demands for faster and cheaper research, but there will always be a need for the human mind (i.e., a qualitative expert) to analyze and synthesize the data to provide meaning and context behind the way people think and behave—and that is where actionable insights are born.   

Now that we know qualitative really isn’t dying, what does 2015 (and beyond) hold for us? The future is about truly integrated research—in which qualitative and quantitative are consistently, thoughtfully, and purposefully used together to provide well-rounded, actionable insights. We’re poised to do exactly that with our dedicated analytics team and network of expert industry qualitative partners. By using two equally important disciplines that are both alive and well, we can provide our clients critical insights they can really use. Far from killing off qualitative insights, technology and an evolving marketplace are helping make qualitative insights even stronger.

Anne Hooper is the Qualitative Research Director at CMB. After recently finding out that her 13-year-old daughter did a quantitative assessment of her Jazz Band’s upcoming Disney trip itinerary, she’s determined that an intervention may be in order.

Topics: methodology, qualitative research

Qualitative, Quantitative, or Both? Tips for Choosing the Right Tool

Posted by Ashley Harrington

Wed, Aug 06, 2014

In market research, the rivalry between qualitative and quantitative research can occasionally feel like the Red Sox vs. the Yankees. You can’t root for both, and you can’t just “like” one. You’re very passionate about your preference. But in many cases, this can be problematic. For example, using a quantitative mindset or tactics in a qualitative study (or vice versa) can lead to inaccurate conclusions. Below are some examples of this challenge—one that can happen throughout all phases of the research process:

Planning

Clients will occasionally request that market researchers use a particular methodology for an engagement. We always explore these requests further with our clients to ensure there isn’t a disconnect between the requested methodology and the problem the client is trying to solve.

For example, a bank* might say, “The latest results from our brand tracking study indicate that customers are extremely frustrated by our call center and we have no idea why. Let’s do a survey to find out.”

Because the bank has no hypotheses about the cause of the issue, moving forward with their survey request could lead to designing a tool with (a) too many open-ended questions and (b) questions/answer options that are no more than wild guesses at the root of the problem, which may or may not jibe with how consumers actually think and feel.

Instead, qualitative research could be used to provide a foundation of preliminary knowledge about a particular problem, population, and so forth. Ultimately, that knowledge can be used to help inform the design of a tool that would be useful.

Questionnaire Design

For a product development study, a software company* asks to add an open-ended question to a survey: “What would make you more likely to use this software?” or “What do you wish the software could do that it can’t do now?”

Since most of us are not engineers or product designers, this question might be difficult for most respondents to answer. Open-ended questions like these are likely to yield a lot of not-so-helpful “I don’t know”-type responses, rather than specific enhancement suggestions.

Instead of squandering valuable real estate on a question not likely to yield helpful data, a qualitative approach could allow respondents to react to ideas at a more conceptual level, bounce ideas off of each other or a moderator, or take some time to reflect on their responses. Even if customers are not R&D experts, they may have a great idea that just needs a bit of coaxing via input and engagement with others.

Analysis and Reporting

In reviewing the findings from an online discussion board, a client at a restaurant chain* reviews the transcripts and states, “85% of participants responded negatively to our new item, so we need to remove it from our menu.”

Since findings from qualitative studies are not statistically projectable, applying quantitative techniques (e.g., descriptive statistics and frequencies) to them is not ideal: it implies a level of precision the findings don’t have. Further, it would not be cost-effective to recruit and conduct qualitative research with a group large enough to be projectable onto the general population.

Rather than attempting to quantify the findings in strictly numerical terms, qualitative data should be thought of as more directional in terms of overall themes and observable patterns.
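As a back-of-the-envelope illustration of why such percentages mislead, here is a quick Python sketch. The group size of 20 is invented for illustration (the example above doesn’t state one), and this only counts sampling noise—it ignores the bigger problem that qualitative participants aren’t randomly recruited at all:

```python
# Hypothetical sketch: the 95% margin of error around "85%" observed in a
# small group of 20 participants (n is invented for illustration).
import math

n, p = 20, 0.85                  # hypothetical group size and observed share
se = math.sqrt(p * (1 - p) / n)  # standard error of a proportion
margin = 1.96 * se               # half-width of a 95% confidence interval
print(round(margin, 2))          # 0.16, i.e. roughly 69% to 100%
```

An "85%" that could plausibly be anywhere from roughly 69% to 100% is a theme, not a statistic.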

At CMB, we root for both teams. We believe both disciplines produce impactful insights, and that often means using a hybrid approach: the most meaningful insights come from choosing the approach or approaches best suited to the problem our client is trying to solve. However, being a Boston-based company, we can’t say that we’re nearly this unbiased when it comes to the Red Sox versus the Yankees.

*Example (not actual)

Ashley is a Project Manager at CMB. She loves both qualitative and quantitative equally and is not knowledgeable enough about sports to make any sports-related analogies more sophisticated than the Red Sox vs. the Yankees.

Click the button below to subscribe to our monthly eZine and get the scoop on the latest webinars, conferences, and insights. 

Subscribe Here

Topics: methodology, qualitative research, research design, quantitative research

A Perfect Match? Tinder and Mobile Ethnographies

Posted by Anne Hooper

Wed, Apr 23, 2014

I know what you are thinking...“What the heck is she TALKING about? How can Tinder possibly relate to mobile ethnography?” You can call me crazy, but hear me out first. For those of you who may be unfamiliar, Tinder is a well-known “hook up” app that’s taken the smartphone-wielding, hyper-social Millennial world by storm. With a simple swipe of the index finger, one can either approve or reject someone from a massive list of prospects. At the end of the day, it comes down to anonymously passing judgment on looks alone—yet if both users “like” each other, they are connected. Shallow? You bet. Effective? Clearly it must be, because thousands of people are downloading the app daily.

So what’s the connection with mobile ethnography? While Tinder appears to be an effective tool for anonymously communicating attraction (anonymous in that the only thing you really know about the other person is what they look like), mobile ethnography is an effective tool for anonymously communicating daily experiences that we as researchers generally aren’t privy to. Mobile ethnography gives us better insight into consumer behavior by bringing us places we’ve never gone before but that are worth knowing nonetheless (Cialis, anyone?). Tapping into these experiences—from the benign to the very private—is the nuts and bolts behind any good product or brand.

So how might one tap into these experiences using mobile ethnography? It’s actually quite easy—we create and assign “activities” that are not only engaging for participants, but are also designed to dig deep and (hopefully) capture the "Aha!" moments we aim for as researchers. Imagine being able to see how consumers interact with your brand on a day-to-day basis—how they use your product, where their needs are being fulfilled, and where they experience frustrations. Imagine “being there” when your customer experiences your brand—offering insight into what delights and disappoints them right then and there (i.e., not several weeks later in a focus group facility). The possibilities for mobile ethnography are endless...let’s just hope the possibilities for Tinder come to a screeching halt sooner rather than later.

Anne Hooper is the Director of Qualitative Services at CMB. She has a 12-year-old daughter who has no idea what Tinder is, and she hopes it stays that way for a very long time.

Topics: methodology, qualitative research, social media

What's the Story? 5 Insights from CASRO's Digital Research Conference

Posted by Jared Huizenga

Wed, Mar 19, 2014

Who says market research isn’t exciting? I’ve been a market researcher for the past sixteen years, and I’ve seen the industry change dramatically since the days when telephone questionnaires were the norm. I still remember my excitement when disk-by-mail became popular! But I don’t think I’ve ever felt as excited about market research as I do right now. The CASRO Digital Research Conference was last week, and the presentations confirmed what I already knew—big changes are happening in the market research world. Here are five key takeaways from the conference:

  1. “Market research” is an antiquated term. It was even suggested that we change the name of our industry from market research to “insights.” In fact, the word “insights” came up multiple times from different presenters throughout the conference. This makes a lot of sense to me: many people view market research as a process, whereas insights are the end result we deliver to our clients. Speaking for CMB, partnering with our clients to provide critical insights is a much more accurate description of our mission and focus. We and our clients know percentages by themselves fail to tell the whole story, and can in fact lead to more confusion about which direction to take.

  2. “Big data” means different things to different people. If you ask ten people to define big data you’ll probably get ten different answers. Some define it as omnipresent data that follows us wherever we go. Others define it as vast amounts of unstructured data, some of which might be useful and some not. Still others call it an outdated buzzword.  No matter what your own definition of big data is, the market research industry seems to be in somewhat of a quandary about what to do with it. Clients want it and researchers want to oblige, but do adequate tools currently exist to deliver meaningful big data? Where does the big data come from, who owns it, and how do you integrate it with traditional forms of data? These are all questions that have not been fully answered by the market research (or insights) industry. Regardless, tons of investment dollars are currently being pumped into big data infrastructure and tools. Big data is going to be, well, BIG.  However, there’s a long way to go before most will be able to use it to its potential.

  3. Empathy is the hottest new research “tool.” Understanding others’ feelings, thoughts, and experiences allows us to understand the “why behind the what.”  Before you dismiss this as just a qualitative research thing, don’t be so sure.  While qualitative research is an effective tool for understanding the “why,” the lines are blurring between qualitative and quantitative research. Picking one over the other simply doesn’t seem wise in today’s world. Unlike with big data, tools do currently exist that allow us to empathize with people and tell a more complete story. When you look at a respondent, you shouldn’t only see a number, spreadsheet, or fancy graphic that shows cost is the most important factor when purchasing fabric softener. You should see the man who recently lost his wife to cancer and who is buying fabric softener solely based on cost because he has five years of medical bills. There is value in knowing the whole story. When you look at a person, you should see a person.

  4. Synthesizers are increasingly important. I’m not talking about the synthesizers from Soft Cell’s version of “Tainted Love” or Van Halen’s “Jump.” The goal here is to once again tell a complete story and, in order to do this, multiple skillsets are required. Analytics have traditionally been the backbone of market research and will continue to play a major role in the future. However, with more and more information coming from multiple sources, synthesizers are also needed to pull all of it together in a meaningful way. In many cases, those who are good at analytics are not as good at synthesizing information, and vice versa. This may require a shift in the way market research companies staff for success in the future. 

  5. Mobile devices are changing the way questionnaires are designed. A time will come when very few respondents are willing to take a questionnaire over twenty minutes long, and some are saying that day is coming within two years. The fact is, no matter how much mobile “optimization” you apply to your questionnaire, the time to take it on a smartphone is still going to be longer than on PCs and tablets. Forcing respondents to complete surveys on a PC isn’t a good solution, especially since the already elusive under-25 population spends more time on mobile devices than PCs. So what’s a researcher to do? The option of “chunking” long questionnaires into several modules is showing potential, but it requires careful questionnaire design and a trusted sampling plan. This method isn’t a good fit for studies where the analysis dictates that each respondent complete the entire questionnaire, and the number of overall respondents needed is likely to increase with this methodology. It also requires client buy-in. But it’s something that we at CMB believe is worth pursuing as we leverage mobile technologies.
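The chunking idea can be sketched in a few lines. The module names below are invented for illustration (this shows the general assignment logic, not CMB’s actual design):

```python
# A minimal sketch of "chunking" a long questionnaire: every respondent
# answers a short core module plus one rotated module, so nobody takes
# the full survey in one sitting. Module names are invented.
import random

CORE = ["demographics", "brand_usage"]            # asked of everyone
ROTATED = ["pricing", "features", "advertising"]  # split across respondents

def assign_modules(resp_id: str, seed: int = 42) -> list[str]:
    # Seed per respondent so assignments are reproducible across runs.
    rng = random.Random(f"{seed}-{resp_id}")
    return CORE + [rng.choice(ROTATED)]
```

Each rotated module now reaches only about a third of respondents, which is exactly why a chunked design tends to need a larger overall sample.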

Change is happening faster than ever. If you thought the transition from telephone to online research was fast—if you were even around back in the good old days when that happened—you’d better hold onto your seat! Information surrounds every consumer. The challenge for insights companies is not only to capture that information but to empathize, analyze, and synthesize it in order to tell a complete story. This requires multiple skillsets as well as the appropriate tools, and honestly the industry as a whole simply isn’t there yet. However, I strongly believe that those of us who are working feverishly to not just “deal” with change but to leverage it, and who are making progress with these rapidly changing technological advances, will be well equipped for success.

Jared is CMB’s Director of Field Services, and has been in the market research industry for sixteen years. When he isn’t enjoying the exciting world of data collection, he can be found competing at barbecue contests as the pitmaster of the team Insane Swine BBQ.

 


Topics: qualitative research, big data, mobile, research design, quantitative research, conference recap

Jeffrey Henning: 10 Tips for Mobile Diary Studies

Posted by Jeffrey Henning

Mon, Nov 25, 2013

Originally posted on Research Access

Earlier this month, Chris Neal of Chadwick Martin Bailey shared tips for running mobile diary studies with members of the New England chapter of the Marketing Research Association, based on lessons learned from a recent project. For the Council for Research Excellence (CRE), CMB studied mobile video usage to understand:

  • How much time is spent on mobile devices watching TV (professionally produced TV shows)?

  • Does this cannibalize TV set viewing?

  • What motivates consumers to watch on mobile?

  • How can mobile TV viewing be accurately tracked?

The research included a quantitative phase with two online surveys and mobile journaling, followed by a series of home ethnographies. The quant work included a screening survey, the mobile diary, and a final online survey.

  • The screening survey was Census-balanced to estimate market size, with three groups recruited for comparison: those without mobile devices (smartphones or tablets), those with mobile devices who don’t watch TV on them, and those with mobile devices that they watch TV on. The total number of respondents was 5,886.

  • The mobile diary activity asked respondents to complete their journal 4 times a day for 7 days.

  • A final attitudinal survey was used to better understand motivations and behaviors associated with decisions about TV watching.

Along the way, CMB learned some valuable best practices for mobile diary studies, including tips for recruiting, incentives, design and analysis. The 10 key lessons learned:

  1. Mobile panels don’t work for low incidence – Take care when using mobile panels: given the small size of many mobile panels, you may have better luck recruiting through traditional online panels, as CMB did for this study because of the comparatively low incidence of actual mobile TV watching.

  2. Overrecruit – You will lose many recruits to the journaling exercise when it comes time to download the mobile diary application. As a general rule, over-recruit by 100% – get twice the promises of participation that you need. Most dropout occurs after the screening and before the participant has recorded a single mobile diary entry. For many members of online survey panels, journaling is a new experience. The second biggest point of dropout was after recording 1 or 2 diary entries.

  3. Keep it short – To minimize this dropout, you have to keep the diary experience as short as possible: no more than 3 to 5 minutes long. The more times you ask participants to complete a diary each day, the greater the dropout rate.

  4. Think small screen – Make sure the survey is designed to provide a good experience on small screens – avoid grids and sum-allocation questions and limit open-ended prompts and use of images. Use vertical scales instead of horizontal scales. “Be wary of shiny new survey objects for smartphone survey-takers,” said Chris. Smartphone users had 5 times the dropout rate of tablet or laptop users in this study. Enable people to log on to their journal from whatever device they were using at the time, including their computer.

  5. Beware battery hogs – When evaluating smartphone apps, be wary of those that drain battery life by constantly logging GPS location. Check the app store reviews of the application.

  6. Keep consistent – Keep the diary questionnaire the same for every time block, to get respondents into the habit of answering it.

  7. Experiment with incentives to maximize participation – Tier incentives to motivate people to stick with the study and complete all time blocks. To earn the incentive for the CMB study, Chris said that respondents had to participate at least once a day for all 7 days, with additional incentives for every journal log entered (participants were reminded this didn’t have to involve actual TV watching, just filling out the log). In the end, 90% of journaling occasions were filled out.

  8. Remind via SMS and email – In-app notifications are not enough to prompt participation. Use email and text messages for each time block as well. Most respondents logged on within 2 hours of receiving a reminder.

  9. Use online surveys for detailed questions – Use the post-journaling survey to capture greater detail and to work around the limits of mobile surveys. You can then use these results to “slice and dice” the journal responses.

  10. Weight by occasions – Remember to weight the data file to total occasions, not total respondents. For missing data, leave it missing. Develop a plan detailing which occasion-based data you’re going to analyze and what respondent-level analysis you are going to do. You may need to create a separate occasion-level data file and a separate respondent-level data file.
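The occasion-level versus respondent-level distinction in tip 10 can be sketched like this. The respondent IDs and diary entries are invented for illustration:

```python
# Hypothetical diary data: each dict is one logged occasion. Counting
# occasions and counting respondents answer different questions, which
# is why the two analyses need separate data files.
occasions = [
    {"resp": "A", "watched_mobile": True},
    {"resp": "A", "watched_mobile": False},
    {"resp": "A", "watched_mobile": True},
    {"resp": "B", "watched_mobile": True},
]

# Occasion-level: share of all logged occasions involving mobile viewing.
occasion_share = sum(o["watched_mobile"] for o in occasions) / len(occasions)

# Respondent-level: share of people who watched on mobile at least once.
by_resp = {}
for o in occasions:
    by_resp.setdefault(o["resp"], []).append(o["watched_mobile"])
respondent_share = sum(any(v) for v in by_resp.values()) / len(by_resp)

print(occasion_share, respondent_share)  # 0.75 1.0 — two different stories
```

Here 75% of occasions involved mobile viewing, but 100% of respondents did at least once—report the one your analysis plan calls for.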

Properly done, mobile diary studies provide an amazing depth of data. For this project, CMB captured almost 400,000 viewing occasions (mobile and non-mobile TV watching), for over 5 million occasion-based records!

Interested in the actual survey results? CRE has published the results presentation, “TV Untethered: Following the Mobile Path of TV Content” [PDF].

Jeffrey Henning, PRC is president of Researchscape International, a market research firm providing custom surveys to small businesses. He is a Director at Large on the MRA Board of Directors; in 2012, he was the inaugural winner of the MRA’s Impact award. You can follow him on Twitter @jhenning.

Topics: methodology, qualitative research, mobile, research design

Deconstructing the Customer Experience: What's in Your Toolkit?

Posted by Jennifer von Briesen

Wed, Sep 25, 2013

More and more companies are focusing on trying to better understand and improve their customers’ experiences. Some want to become more customer-centric. Some see this as an effective path to competitive differentiation. Others, challenging traditional assumptions (e.g., Experience Co-creation, originated by my former boss, Francis Gouillart, and his colleagues Prof. Venkat Ramaswamy and the late C.K. Prahalad), are applying new strategic thinking about value creation. Decision-makers in these firms are starting to recognize that every single interaction and experience a customer has with the company (and its ecosystem partners) may either build or destroy customer value and loyalty over time.

While companies traditionally measure customer value based on revenues, share of wallet, cost to serve, retention, NPS, profitability, lifetime value, etc., we now have more and better tools for deconstructing the customer experience and understanding the components driving customer and company interaction value at the activity/experience level. To really understand the value drivers in the customer experience, firms need to simultaneously look holistically, go deep in a few key focus areas, and use a multi-method approach.

Here’s an arsenal of tools and methods that are great to have in your toolkit for building customer experience insight:

Qualitative tools

  • Journey mapping methods and tools

  • In-the-moment, customer activity-based tools

    • Voice capture exercises (either using mobile phones or landlines) where customers can call in and answer a set of questions related to whatever they are doing in the moment.

    • Use mobile devices and online platforms to upload visuals, audio, and/or video to answer questions (e.g., as you are filling out your enrollment paperwork, take a quick—less than 10-second—video to share your thoughts on what you are experiencing).

  • Customer diaries

    • E.g., use mobile devices as a visual diary or to complete a number of activities

  • Observation tools

    • Live or virtual tools (e.g., watch/videotape in-person or online experiences, either live or after the fact)

    • On-site customer visits: companies I’ve worked with often like to join customers doing activities in their own environments and situational contexts. Beyond basic observation, company employees can dialogue with customers during the activities/experiences to gain immediate feedback and richer understanding.

  • Interviews and qualitative surveys

  • Online discussion boards

  • Online or in-person focus groups

Quantitative tools

  • Quantitative surveys/research tools (too many to list in a blog post)

  • Internal tracking tools

    • Online tools for tracking behavior metrics (e.g., landing pages/clicks/page views/time on pages, etc.) for key interactions/experience stages. This enables ongoing data-mining, research and analysis.

    • Service/support data analysis (e.g., analyze call center data on inbound calls and online support queries for interaction types, stages, periods, etc. to look for FAQs, problems, etc.).

What tools are you using to better understand and improve the customer experience? What tools are in your toolkit?  Are you taking advantage of all the new tools available?

Jennifer is a Director at South Street Strategy Group. She recently received the 2013 “Member of the Year” award from the Association for Strategic Planning (ASP), the preeminent professional association for those engaged in strategic thinking, planning and action.

Topics: South Street Strategy Group, strategy consulting, methodology, qualitative research, quantitative research, customer experience and loyalty

The Main Ingredient: The Market Research in your Pantry

Posted by Dana Vaille

Wed, Apr 17, 2013

The New York Times article “The Extraordinary Science of Addictive Junk Food” caught my attention by linking the hot topic of “junk food” and the obesity epidemic to the market research that supports it. This is where my inner geek gets really excited—it’s not often that two things I’m passionate about (nutrition and market research) are so perfectly linked.

Ever wonder why it’s virtually impossible to eat just one Dorito? Or how they got the recipe for Dr. Pepper just right?  How do you think they engineered Cheetos into the perfect cheesy, crunchy, melt-in-your-mouth treat?  As any market researcher knows, it goes far beyond basic trial and error—this isn’t like asking a few people if they like your new brownie mix. But even for someone who lives and breathes market research, the article was incredibly illuminating. Companies put a lot of time and effort into developing foods that will both taste good and be profitable; they consider the basic principles of supply and demand, and couple that with food science and a lot of market research to fill our needs and desires.

Because I know very little about food science, I won’t talk about the “bliss point” (the levels of sugar, fat and salt in processed food that keep us craving more) though I find it fascinating.  Instead, here are some fascinating examples of how market research plays a role in determining what foods end up on the shelves of your local grocery store and in millions of pantries around the world.

Qualitative research identifies a need
In the article, we learn how Oscar Mayer conducted focus groups made up of working moms to learn not what they were feeding their kids for lunch, but how they felt about the challenges and expectations they had in providing meals for their children. Oscar Mayer learned that these moms were strapped for time and felt pressured to provide a full lunch while also getting themselves out the door and to the office. The qualitative research revealed some of the tremendous sociological, psychological, and economic pressures faced by moms. The company’s solution was Lunchables—a hugely successful product, with sales of $218 million in the first year.

Conjoint analysis configures a new product
Campbell’s Soup used a statistical method called conjoint analysis to determine the optimal product configuration(s) for their soups. We use conjoint analysis quite often ourselves because it lets us measure and evaluate the relative importance of individual characteristics and determine the right combinations of those characteristics. Campbell’s used conjoint the same way—to optimize the perfect combinations of ingredients, texture, taste, mouthfeel, and so on, to (literally) engineer the ideal food.
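The core mechanics of a simple main-effects conjoint can be sketched in a few lines of Python. The attributes, levels, and ratings below are invented for illustration and are not Campbell’s data:

```python
# Toy conjoint-style analysis: estimate part-worths (average rating at
# each attribute level) from ratings of every profile in a balanced
# design, then read attribute importance off the part-worth spread.
ratings = {
    # (texture, salt level): hypothetical 1-10 rating of that soup profile
    ("smooth", "low"):  4,
    ("smooth", "high"): 6,
    ("chunky", "low"):  7,
    ("chunky", "high"): 9,
}

def part_worths(attr_index, levels):
    """Average rating at each level of one attribute (a simple
    main-effects estimate when the design is balanced)."""
    worths = {}
    for level in levels:
        scores = [r for profile, r in ratings.items()
                  if profile[attr_index] == level]
        worths[level] = sum(scores) / len(scores)
    return worths

texture = part_worths(0, ["smooth", "chunky"])  # {'smooth': 5.0, 'chunky': 8.0}
salt = part_worths(1, ["low", "high"])          # {'low': 5.5, 'high': 7.5}

# Importance = spread of part-worths: texture spans 3 points, salt only
# 2, so texture matters more to this hypothetical respondent.
importance = {
    "texture": max(texture.values()) - min(texture.values()),
    "salt": max(salt.values()) - min(salt.values()),
}
```

Real conjoint studies use fractional designs and regression- or choice-based estimation, but the output is the same in spirit: which levels, in which combinations, people value most.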

Segmentation pinpoints a new target audience
Prego conducted segmentation research and found that there are three primary segments of spaghetti sauce consumers: those who like their sauce plain, those who prefer it spicy, and those who like it extra-chunky. The key here is that when the research was conducted, there was no extra-chunky tomato sauce on the market! Prego was able to identify a huge segment of the market whose needs (for extra-chunky tomato sauce) were not being met; the result was a new Prego “extra chunky” sauce that dominated the market.
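In the same spirit, here is a toy sketch of how preference ratings can be clustered into segments like Prego’s three. The respondents, ratings, and initial centers are invented, and real segmentations use far richer data and methods:

```python
# Toy segmentation: cluster consumers by two invented preference ratings
# (liking of "spicy" and "extra-chunky" sauce, 1-10) with a bare-bones
# k-means. Three natural groups fall out: plain, spicy, and chunky fans.
def assign(points, centers):
    """Put each point in the group of its nearest center."""
    groups = [[] for _ in centers]
    for p in points:
        dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
        groups[dists.index(min(dists))].append(p)
    return groups

def kmeans(points, centers, iters=10):
    for _ in range(iters):
        groups = assign(points, centers)
        # Move each center to the mean of its group (keep it if empty).
        centers = [
            tuple(sum(vals) / len(vals) for vals in zip(*grp)) if grp else ctr
            for grp, ctr in zip(groups, centers)
        ]
    return centers, assign(points, centers)

ratings = [(2, 2), (1, 3), (2, 1),   # like it plain
           (9, 2), (8, 3), (9, 1),   # like it spicy
           (2, 9), (3, 8), (1, 9)]   # like it extra-chunky
centers, segments = kmeans(ratings, [(2, 2), (9, 2), (2, 9)])
print([len(seg) for seg in segments])  # [3, 3, 3]
```

The interesting finding isn’t the clustering itself but what Prego did with it: noticing that one recovered segment’s preferred product didn’t yet exist.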

Food is more than just fuel, especially for those of us lucky enough to have plenty to eat… it’s about things like family, comfort, convenience and love.  And whether you won’t touch a GMO or want Mayor Bloomberg to leave your giant sodas alone, it’s important to know when you grab that bag of chips—the first ingredient is most likely a ton of market research.

Dana is Research Director at CMB. Her husband’s recent conversion to a vegan diet has her thinking about food science even more than usual, though she continues to enjoy cheese.

Check out our latest webinar, The 6 Secrets of Successful Segmentation—it's much healthier than Doritos, we promise.

Topics: advanced analytics, qualitative research, market strategy and segmentation

How to Catch a Catfish: Secrets of a Qualitative Researcher

Posted by Anne Hooper

Tue, Mar 12, 2013


Those who know me understand that I am not afraid to admit I love reality TV.  Combine that love with an interest in pop culture (generally), and a passion for understanding what people do and WHY they do it, and you have a match made in heaven. So obviously Catfish—the MTV series —is right up my alley.

Talk of "Catfishing" seems to be everywhere these days, but for the uninitiated, I’ll give you the quick (Wikipedia) definition: “A Catfish is a person who creates fake profiles online and pretends to be someone they are not by using someone else’s pictures and information.”  Put simply:  Catfishing is a relationship built on deception.

So what does Catfishing have to do with online qual?

As a qualitative researcher, I have to build “relationships” with strangers all the time, both online and in-person. I can guarantee you that these relationships are genuine, authentic and honest—at least from my end. My ultimate goal is to better understand research participants as human beings—how they live, what they value, what makes them ‘tick’, etc. Most of the time, I truly feel that those I’m spending time with (both online and offline) are also being authentic and honest with me. Notice I said most of the time…

Though it doesn’t happen often, it IS possible to come across a phony (AKA “Catfish”) in an in-person setting.  There are some pretty savvy people out there who seem to know how to make their way into a focus group for some extra cash.  Thankfully it’s rare—and most of the time these folks get weeded out before they even enter the room.  Online qualitative research, on the other hand, is ripe for Catfish.  Unless we are conducting video web-based research, there aren’t any visual clues to help us validate identities.  Therefore, we can’t be 100% sure that the person we THINK we are talking to is really that person.

The good news is that as researchers, we can take measures to protect ourselves from these Catfish participants online—it just takes a little effort and creativity.  Here are a few methods I’ve used successfully in the past:  

  • Demographics:  If a participant reports an annual income of $50K but claims to spend an average of $10K a year on vacation, you’ve got yourself a red flag.  Taking the time to cross-reference demographics with online responses can be extremely helpful in getting to the truth.

  • Common sense:  Individual responses don’t stand alone, but pulled together they create a story.  At the end of the day you either have a story that makes sense or you don’t, and a story that doesn’t make sense is another red flag.  Just as one would do when moderating an in-person group, there are times when you must revisit what someone said earlier, and if necessary, request clarification.  (In the immortal words of Judge Judy: “If it doesn’t make sense, it’s not true.”) 

  • Consistency:  A lack of consistency can be another red flag.  If a participant says one thing but contradicts themselves sometime later, there might be a problem.  Here’s an example: in a recent “vacation” study we had a participant who changed her travel dates a few times (not unusual).  She later confirmed purchasing a package (air, hotel, car) for a family of 5 one week prior to departure (somewhat fishy … especially for someone who was very price sensitive).  Her “confirmed” travel dates were from the 25th-30th of the month—and when she hadn’t checked in during that time, as requested, we reached out to her and found out that she was “already home” on the 29th.  Suspicious?  Very.  This lack of consistency—along with several other red flags—confirmed our suspicions that she was not being truthful, and she was pulled from the study.  Again, to quote Judge Judy: “If you tell the truth, you don’t have to have a good memory.”

  • Engagement:  There are always going to be participants who choose to do the bare minimum in order to get their incentive.  However, a lack of engagement and openness—coupled with any additional red flags—requires some investigation.  Is the participant just taking the easy way out by answering questions in as few words as possible, or are they skipping key questions altogether?  Skipping key questions (e.g., “Tell us what you like best about product X”) could be a sign that they really don’t use product X after all.  Again, it’s important for the moderator to probe accordingly and if the probes go ignored … you guessed it … another red flag.
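The red-flag checks above lend themselves to a simple first-pass screen before any human review. Here’s a minimal sketch in Python; the field names, the 15%-of-income threshold, and the record layout are all hypothetical illustrations, not a real screening standard—and a flag is only ever a prompt for the moderator to probe, never an automatic disqualification.

```python
# Hypothetical sketch: a first-pass red-flag screen for online qual
# participants.  Field names and thresholds are illustrative only.

def red_flags(p):
    """Return the red flags raised by one participant record (a dict)."""
    flags = []
    # Demographics: claimed vacation spend far out of line with income.
    if p["vacation_spend"] > 0.15 * p["income"]:
        flags.append("spend/income mismatch")
    # Consistency: "already home" before the confirmed return date.
    if p["claimed_home_day"] < p["reported_return_day"]:
        flags.append("home before confirmed return date")
    # Engagement: key questions skipped outright.
    if p["key_questions_skipped"] > 0:
        flags.append("skipped key questions")
    return flags

suspect = {"income": 50_000, "vacation_spend": 10_000,
           "reported_return_day": 30, "claimed_home_day": 29,
           "key_questions_skipped": 0}
print(red_flags(suspect))  # the demographics and consistency checks fire
```

A screen like this only surfaces candidates for a closer look; the judgment call still belongs to the researcher.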

With online research (and plenty of Catfish) here to stay, we need to stay vigilant about dotting our i’s and crossing our t’s.  I, for one, am ready to catch them … hook, line and sinker.

Anne is CMB’s Qualitative Research Director.  She enjoys travel and thanks to DVR, never misses an episode of Judge Judy. Anne especially loves being able to truly “connect” with her research participants—it’s in her Midwestern blood.   

Learn more about Anne and her Qualitative Research team here.

Topics: qualitative research, television, digital media and entertainment research

Compilation Scores: Look Under the Hood

Posted by Cathy Harrison

Wed, Aug 03, 2011

My kid is passionate about math, and based on every quantitative indication, he excels at it.  So you can imagine our surprise when he didn’t qualify for next year’s advanced math program. Apparently he barely missed the cut-off score: a compilation of two quantitative sources of data and one qualitative source.  Given this injustice, I dug into the school’s evaluation method (hold off your sympathy for the school administration just yet).

Undoubtedly, the best way to get a comprehensive view of a situation is to consider both quantitative and qualitative information from a variety of sources.  By using this multi-method approach, you are more likely to get an accurate view of the problem at hand and are better able to make an informed decision.  Sometimes it makes sense to combine data from different sources into a “score” or “index.”  This provides the decision-maker with a shorthand way of comparing something – a brand, a person, or how something changes over time.

These compilation scores or indices are widely used and can be quite useful, but their validity depends on the sources used and how they are combined.  In the case of the math evaluation, there were two quantitative sources and one qualitative source.  The quantitative sources were the results of a math test conducted by the school (CTP4) and a statewide standardized test (MCAS).  The qualitative source was the teacher’s observations of the child across ten variables, each rated on a 3-point scale.  For the most part, I don’t have a problem with these data sources.  The problem was in the weighting of the scores.

I’m not suggesting that the quantitative data is totally bias-free, but at least the kids are evaluated on a level playing field: they either get the right answer or they don’t.  In the case of the teacher evaluation, many more biases can affect the score (such as the teacher’s preference for certain personality types, or for the kids of colleagues or teacher’s aides).  The qualitative component was given a 39% weight, equal to the CTP4 (“for balance”) and greater than the MCAS (weighted at 22%).  This puts a great deal of influence in the hands of one person.  In this case, it was enough to override the superior quantitative scores and disqualify my kid.
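To see how much influence that weighting hands to one evaluator, here’s a quick sketch of the composite. The weights come from the post; the individual scores are hypothetical, and I’m assuming all three inputs are normalized to a common 0-100 scale (the real teacher rating was a 3-point scale across ten variables).

```python
# Weights from the school's formula: CTP4 39%, teacher rating 39%, MCAS 22%.
WEIGHTS = {"ctp4": 0.39, "teacher": 0.39, "mcas": 0.22}

def composite(scores):
    """Weighted compilation score from three normalized (0-100) inputs."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# A strong test-taker with a low teacher rating...
strong_tests = composite({"ctp4": 95, "mcas": 95, "teacher": 60})  # ~81.35
# ...loses to a modest test-taker with a high teacher rating.
modest_tests = composite({"ctp4": 85, "mcas": 85, "teacher": 95})  # ~88.90
```

With a 39% weight on a single rater, a 35-point swing in the teacher score (worth about 13.65 composite points) outweighs a 10-point advantage on both tests combined (worth about 6.1).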

Before you think this is just the rant of a miffed parent with love blinders on, think of this evaluation process as if it were a corporate decision that had millions of dollars at stake.  Would you be comfortable with this evaluation system?

In my opinion, a fairer evaluation process would have been to qualify students based on the quantitative data (especially since there were two sources available) and then, for those on the “borderline,” use the qualitative data to make the call.  Qualitative data is rarely combined with quantitative data in an index; its purpose is to explore a topic before quantification or to bring “color” to the quantitative results.  As you can imagine, I have voiced this opinion to the school administration, but I am unlikely to be able to reverse the decision.

What’s the takeaway for you?  Be careful of how you create or evaluate indices or “scores.” They are only as good as what goes into them.

Posted by Cathy Harrison.  Cathy is a client services executive at CMB and has a passion for strategic market research, social media, and music.  You can follow Cathy on Twitter at @virtualMR     

 

Topics: advanced analytics, methodology, qualitative research, quantitative research

Three Ways to Level the Playing Field in any Focus Group

Posted by Anne Hooper

Mon, Feb 28, 2011

Level the playing fieldWhile I find it hard to believe myself, I have actually been in market research for just over 15 years now. Having spent more than half of that time in front of the mirror (gosh, am I really that old?) I’ve learned a lot about people and how they communicate and interact. As a moderator I have seen personalities and group dynamics that run the gamut, but I have found a few steadfast truths that level the playing field in any focus group and make sure insights are gleaned from all perspectives.

1.  Comfort is Key:  Being comfortable, both physically and mentally, means participants can be focused and engaged in a meaningful way.  If a focus group participant is focused on how “hot” the room is, how hungry they are, or how uncomfortable their chair is, they definitely aren’t going to be fully engaged in the conversation.  Similarly, if I—as a moderator—haven’t created a warm and open atmosphere, participants aren’t going to want to share their “true” thoughts and feelings with me.  Creating a safe environment—and showing some of my own vulnerabilities—gives participants the go-ahead to be vulnerable as well, resulting in dialogue that is insightful and findings that are useful.

2.  Everyone Wants to Share:  While it’s true that some choose to participate in research solely for the almighty dollar, they are definitely in the minority.  The fact is, those who have gone out of their way to take time off from work, battle traffic and parking, and perhaps even hire a babysitter are doing so for a reason—they want to be heard.  Even the quietest, most introverted person in the room has something to say, and it’s my responsibility—as a moderator—to give them that opportunity.

3.  Keeping it Real:  Obviously, market research is not the place for people to be entertained—participants are there to share their feelings and help us better understand the issues at hand.  However, that doesn’t mean we ought to create a sterile (AKA “boring”) environment that doesn’t support “color” and “creativity.”  Keeping it real as a moderator—showing some personality and truly enjoying the time you have with those participants—creates a win/win for everyone involved.

 

Posted by Anne Hooper, CMB’s director of qualitative research. When Anne’s not looking in the mirror she enjoys traveling, reading, skiing and spending time with her family (especially when it’s poolside).

Topics: qualitative research