WELCOME TO OUR BLOG!

The posts here represent the opinions of CMB employees and guests—not necessarily the company as a whole. 


To Label Me is to Negate Me (Sometimes): The case for occasion-based segmentation

Posted by Peter Cronin

Wed, Aug 29, 2018


One of my favorite lunchtime routines is to walk from my office over to the Trillium Brewing Company in nearby Seaport to grab a 4-pack of their current small-batch, limited-time, freshly brewed double IPA.

As far as Trillium knows, I’m an “Epicure”—a beer drinker characterized by my ardor and appreciation for craft beer.

During the summer months, I occasionally stop at BJ’s Wholesale Club to get a 30-pack of Corona (along with a couple of limes) because I like to have something to offer guests when hosting a cookout. In these instances, I’m looking for value, but not necessarily the cheapest option because quality and image are still important to me. BJ’s might consider me your average “Cost-aware Enthusiast.”

Every year on my birthday, which typically coincides with the start of March Madness, I stop at my local beer store to buy a six-pack of Samuel Smith’s Oatmeal Stout. They probably consider me a “Sports Oriented” beer drinker.

So, who am I? A beer snob, a deal-seeking but conscientious host, or a sports fan?

The answers are “all of the above” and “it depends.”  

In some categories (like beer), the same person may experience a variety of needs at any given time and make different choices based on those needs. Segmenting people by a single dominant motivation or need risks grossly oversimplifying reality.

To understand opportunities for growth in categories like this, a better alternative is occasion-based segmentation. Rather than sorting people into groups, occasion-based segmentation segments the occasions themselves, recognizing that the same consumer shows up in many of them. As my example shows, I’m more apt to purchase one type of beer over another based on the occasion (e.g., time/day, who I’m with, what I’m doing).

Occasion-based segmentation is particularly successful when anchored in the psychology of habits. When a behavior is rewarding, we tend to repeat it; repeat it often enough and it becomes a habit. For many people, drinking beer is habitual. Take my backyard BBQ, for example. Throughout the summer, I repeat the cycle of having friends and family over, eating good food, drinking Corona with lime, and feeling relaxed, rested, and connected. This occasion has all the key components of a habit: my craving (motivation) to host triggers a routine of good food and drink that results in feeling connected (reward). Feeling connected makes me want to do it again.

When we ask people about their occasions at CMB, we also ask what motivates these choices and have them describe the rewards, including the emotional and functional outcomes. These inquiries become the basis of the segmentation.

Segmenting your market by usage occasion can be a powerful source of insight about your consumers. By linking brands to occasions and understanding the psychological needs and emotions that drive choices, marketers can position their brands to be the preferred choice. They can tailor messaging to each occasion to build engagement, preference and loyalty.  

Brand managers at The Boston Beer Company, AB InBev, MillerCoors, etc., should be less concerned about whether I’m a “High Impacter,” a “Macho Male,” a “Trend Follower,” or a “Chameleon.” Classifying me attitudinally will dramatically underestimate the complexity of my buying habits. 

Instead, understanding the core types of beer drinking occasions (and the driving psychological needs and emotions of each), how much volume each occasion represents, and which groups of people over-index on them, can enable marketers to make informed decisions on where and how to focus their messaging, promotions, and product development efforts.

Peter is a brand guy who is fascinated with understanding how others see the world, and an equal opportunity beer drinker who refuses to be labeled.


Topics: research design, quantitative research, brand health and positioning, market strategy and segmentation

I, for one, welcome our new robot...partners

Posted by Laura Dulude

Tue, Oct 17, 2017

 


Ask a market researcher why they chose their career, and you won't hear them talk about prepping sample files, cleaning data, creating tables, and transferring those tables into a report. These tasks are all important parts of creating accurate and compelling deliverables, but the real value and fun is deriving insights, finding the story, and connecting that story to meaningful decisions.

So, what’s a researcher with a ton of data and not a lot of time to do? Hello, automation!

Automation is awesome.

There are a ton of examples of automation in market research, but for these purposes I'll keep it simple. As a data manager at CMB, part of my job is to proofread banner tables and reports, ensuring that the custom deliverables we provide to clients are 100% correct and consistent. I love digging through data, but let’s be honest, proofing isn’t the most exciting part of my role. Worse than a little monotony is that proofing done by a human is prone to human error.

To save time and avoid error, I use Excel formulas to compare two data lists and automatically flag any inaccuracies. This is much more accurate and quicker than checking lists against one another manually—it also means less eye strain.
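The idea behind those formulas translates to any language. Here's a minimal Python sketch of that kind of automated comparison (the actual checks live in Excel; the function name and data values are invented for illustration):

```python
# Minimal sketch of automated proofing: compare a reference list of values
# against the values that appear in a deliverable and flag any rows where
# they disagree, instead of eyeballing the two lists.

def flag_mismatches(reference, deliverable):
    """Return (row, expected, found) for every row where the lists disagree."""
    flags = []
    for row, (expected, found) in enumerate(zip(reference, deliverable), start=1):
        if expected != found:
            flags.append((row, expected, found))
    return flags

source_data = [0.42, 0.17, 0.31, 0.10]
report_data = [0.42, 0.17, 0.13, 0.10]  # 0.31 was transposed to 0.13

print(flag_mismatches(source_data, report_data))  # [(3, 0.31, 0.13)]
```

Whether it's an `IF` formula in a spare column or a few lines like these, the payoff is the same: the machine finds the transposed digit so your eyes don't have to.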

As I said, this is a really simple example of automation, but even this use case is an incredible way to increase efficiency so I have more time to focus on finding meaning in the data.

Other examples include:

  • Reformatting tables for easier report population using Excel formulas
  • Creating Excel macros using VBA
  • SPSS loops and macros

I’m a huge proponent of automation, whether in the examples above or in myriad more complex scenarios. Automation helps us cut out inefficiencies and gives us time to focus on the cool stuff.

Automation without human oversight? Not awesome.

Okay, so my proofreading example is quite basic because it doesn’t account for:

  • Correctness of labels
  • Ensuring all response options in a question are being reported on
  • Noting any reporting thresholds (e.g. only show items above 5%, only show items where this segment is significantly higher than 3+ other segments, etc.)
  • Visual consistency of the tables or report
  • Other details that come together to create a truly beautiful, accurate, and informative deliverable.

Some of the bullet points above can also be automated (e.g. thresholds for reporting and correctness of labels), but others can’t. On top of that, automation is also prone to human error—we can automate incorrectly by misaligning the data points or filtering and/or weighting the data incorrectly. Therefore, it’s imperative that, even after I automate, I review to catch any errors—flawless proofing requires a human touch.

When harnessed correctly, automation maximizes efficiency, alleviates tediousness, and reduces error to free up more time for insights. Before you start arming yourself against a robot takeover, remember: insights are an art and a science, and machines haven’t taken over the world just yet.

Topics: quantitative research, artificial intelligence, market research automation

Are We There Yet? How TURF Can Save Your Family Trip

Posted by Victoria Young

Tue, Sep 05, 2017


As the summer comes to a close, I’m reminiscing about the annual end-of-summer trip to New Hampshire my family used to take. There’s a lot to do and see in New Hampshire, and with only a week, we had to choose how to spend our time wisely. Ultimately that decision was up to my mom, but that didn’t stop my brother and me from sharing our opinions.

We all loved Story Land (that was a given) and it was always included on our NH itinerary… but that’s where unanimous agreement ended. My brother pined for Six Gun City–a Wild West themed park–but I preferred Santa’s Village. I thought Santa’s Village was cute while my brother thought it was tacky. Meanwhile, both my brother and I moaned and groaned when our mom insisted we hang out on the side of the road for an hour to look at The Old Man in the Mountain (RIP).

During the week, we managed to hit all desired attractions (and more), but tensions ran high some days. My brother complained at Santa’s Village while I couldn’t be bothered at Six Gun City—looking back, I can’t imagine the stress we caused our mom with our eye-rolling and sighs.

The researcher in me wonders: could there have been a way to satisfy everyone’s desires without upsetting anyone? Then I realized this scenario isn’t totally unlike the problems we run TURF analyses for. If there had been a TURF analysis for our family vacations, perhaps it would’ve saved a lot of headaches.

But what is “TURF”?

TURF is an acronym for “Total Unduplicated Reach and Frequency.” TURF Analysis is a statistical analysis that was traditionally used to help media buyers determine where to place ads to reach the widest possible audience. But the use of TURF has since expanded to help answer product development questions like “What is the smallest number of features, services or products that could be offered to appeal to the largest number of potential consumers?”  

TURF determines the maximum number of people reached by looking for unduplicated reach. For example, if Person A likes Channel X and Channel Y, and both channels are included in the analysis, the model will get no additional reach from Person A than it would’ve had only Channel X or Channel Y been included.

This type of analysis could’ve helped us determine which attractions would appeal to the largest audience on our family trips. TURF is ideal when the number of possible options is high but the number you can actually offer is restricted; in my family’s case, we were restricted by time, money, and patience.

TURF tests each combination of options (e.g., Story Land, Clark’s Trading Post, Santa’s Village, etc.), and reports both reach and frequency for each combination. As you add items (in this case, attractions), the reach increases for a while and then tapers off. This is called the law of diminishing returns. The key is finding that sweet spot where you get the highest reach with the fewest items, and where anything above that is only incremental.

To make this more digestible, consider the example below. We’re planning a family vacation with our extended family, all of whom have varying preferences:

[Table: which attractions each of the 8 family members likes]

Of our 8 family members, 4 like Story Land (50% reach). Two other attractions–Attitash Bear Peak and Santa’s Village–appeal to 3 family members, but because all 3 who like Santa’s Village also like Story Land, only Attitash Bear Peak adds to the model’s reach. 

If we add Attitash Bear Peak, we come up with a total of 6 family members (75%) who get something they want.  Both Six Gun City and Clark’s Trading Post reach 2 family members, but only Six Gun City reaches Cousin Blair, one of two family members not reached by the first two attractions, bringing us to 87.5% reach.  We’re unable to please everyone, especially Long Lost Uncle Mark who appears to not enjoy anything. 

As the chart below suggests, we could please almost everyone in three stops: Story Land, Attitash Bear Peak, and Six Gun City.  Instead of going everywhere, we can maximize everyone’s happiness (reach) and stay within our restrictions (budget, time, patience) by going to those three stops.
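The greedy logic behind that walkthrough can be sketched in a few lines of Python. The preference data below is hypothetical, filled in only to be consistent with the numbers above; family members not named in the post (Mom, Dad, and so on) are invented:

```python
# Greedy TURF sketch: at each step, pick the option that adds the most
# *unduplicated* reach, i.e. people not already covered by earlier picks.

preferences = {
    "Story Land":           {"Mom", "Victoria", "Brother", "Grandma"},
    "Santa's Village":      {"Mom", "Victoria", "Grandma"},   # all overlap Story Land
    "Attitash Bear Peak":   {"Dad", "Grandpa", "Brother"},
    "Six Gun City":         {"Brother", "Blair"},
    "Clark's Trading Post": {"Dad", "Grandpa"},
}
FAMILY_SIZE = 8  # includes Long Lost Uncle Mark, who likes nothing

def greedy_turf(options, n_picks):
    """Return (picks, reach_fraction) after greedily choosing n_picks options."""
    reached, picks = set(), []
    for _ in range(n_picks):
        best = max(options, key=lambda o: len(options[o] - reached))
        if not options[best] - reached:
            break  # diminishing returns: nothing left adds new people
        picks.append(best)
        reached |= options[best]
    return picks, len(reached) / FAMILY_SIZE

picks, reach = greedy_turf(preferences, 3)
print(picks)            # ['Story Land', 'Attitash Bear Peak', 'Six Gun City']
print(f"{reach:.1%}")   # 87.5%
```

Note how Santa’s Village never gets picked despite appealing to three people: everyone who likes it is already covered by Story Land, so it adds zero unduplicated reach, which is exactly the duplication TURF is designed to expose.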

[Chart: cumulative reach as attractions are added]

 

Ok, so TURF might not be the most logical answer to family vacation logistics, but it can help companies make important business decisions, especially when they are faced with multiple options and a limited budget.

So for now, my mom, brother, and I will continue to ask ourselves, who’s up for Story Land?

Victoria is a Senior Associate Researcher at CMB who still loves Story Land and traveling with her family.

Topics: business decisions, quantitative research

Spring into Data Cleaning

Posted by Nicole Battaglia

Tue, Apr 04, 2017

When someone hears “spring cleaning,” they probably think of organizing their garage, purging clothes from their closet, and decluttering their workspace. For many, spring is a chance to refresh and rejuvenate after a long winter (fortunately, ours in Boston was pretty mild).

This may be my inner market researcher talking, but when I think of spring cleaning, the first thing that comes to mind is data cleaning. Like cleaning and organizing your home, data cleaning is a detailed and lengthy process that matters to researchers and their clients alike.

Data cleaning is an arduous task. Each completed questionnaire must be checked to ensure that it's been answered correctly, clearly, truthfully, and consistently. Here’s what we typically clean:

  • We’ll look at each open-ended response in a survey to make sure respondents’ answers are coherent and appropriate. Sometimes respondents will curse; other times they’ll write outrageously irrelevant answers, like what they’re having for dinner, so we monitor these closely. We do the same for open-ended numeric responses: there’s always that one respondent who enters ‘50’ when asked how many siblings they have.
  • We also check for outliers in open-ended numeric responses. Whether it’s false data or an exceptional respondent (e.g., Bill Gates), outliers can skew our data and lead us to draw the wrong conclusions and make the wrong recommendations to clients. For example, I worked on a survey that asked respondents how many cars they own. Anyone who provided a number more than three standard deviations above the mean was flagged as an outlier, because their answer would’ve significantly skewed our read on average car ownership: the reality is that the average household owns two cars, not six.
  • Straightliners are respondents who answer an entire battery of scale questions with the same response. As a result, we’ll sometimes see someone who strongly agrees (or disagrees) with two completely opposing statements, which makes it difficult to trust that these answers reflect the respondent’s real opinion.
  • We often insert a Red Herring Fail into our questionnaires to help identify and weed out distracted respondents. A Red Herring Fail is a 10-point scale question usually placed around the halfway mark of a questionnaire that simply asks respondents to select the number “3” on the scale. If they select a number other than “3”, we flag them for removal.
  • If there’s incentive to participate in a questionnaire, someone may feel inclined to participate more than once. So to ensure our completed surveys are from unique individuals, we check for duplicate IP addresses and respondent IDs.
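Several of these checks are mechanical enough to sketch in code. The Python below is illustrative only (not CMB's actual cleaning tooling), and the function names, sample values, and thresholds are assumptions:

```python
# Illustrative sketches of four of the cleaning checks described above.
import statistics

def flag_outliers(values, n_sd=3):
    """Indices of values more than n_sd standard deviations from the mean."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [i for i, v in enumerate(values) if abs(v - mean) > n_sd * sd]

def is_straightliner(grid_answers):
    """True if a respondent gave the identical answer to every item in a battery."""
    return len(set(grid_answers)) == 1

def fails_red_herring(answer, expected=3):
    """Red Herring check: the question asked the respondent to select '3'."""
    return answer != expected

def duplicate_ids(respondent_ids):
    """Respondent IDs that appear more than once (possible repeat participation)."""
    seen, dupes = set(), set()
    for rid in respondent_ids:
        (dupes if rid in seen else seen).add(rid)
    return dupes

# Nineteen plausible households and one who claims to own 50 cars:
cars_owned = [2] * 19 + [50]
print(flag_outliers(cars_owned))  # [19] -> the respondent claiming 50 cars
```

In practice these flags feed a review step rather than automatic deletion; as with any automation, a human still decides whether the flagged respondent is bad data or just Bill Gates.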

There are a lot of variables that can skew our data, so our cleaning process is thorough and thoughtful. And while the process may be cumbersome, here’s why we clean data: 

  • Impression on the client: Following a detailed data cleaning process shows that your team is careful, thoughtful, and able to accurately dissect and digest large amounts of data. This demonstration of thoroughness and competency goes a long way toward building trust in the researcher/client relationship, because the client sees their researchers working to present the best data possible.
  • Helps tell a better story: We pride ourselves on storytelling, using insights from data and turning them into strong deliverables that help our clients make strategic business decisions. Without accurate, clean data, we couldn’t tell a good story!
  • Ensures high-quality, precise data: At CMB, typically two or more researchers work on the same data file to mitigate the chance of error. The data undergoes this scrutiny so that any issues or mistakes can be noted and rectified, ensuring the integrity of the report.

The benefits of taking the time to clean our data far outweigh the risks of skipping it. Data cleaning keeps false or unrepresentative information from influencing our analyses or recommendations to a client and ensures our sample accurately reflects the population of interest.

So this spring, while you’re finally putting away those holiday decorations, remember that data cleaning is an essential step in maintaining the integrity of your work.

Nicole Battaglia is an Associate Researcher at CMB who prefers cleaning data over cleaning her bedroom.

Topics: data collection, quantitative research

But first... how do you feel?

Posted by Lori Vellucci

Wed, Dec 14, 2016

EMPACT 12.14-2.jpg

How does your brand make consumers feel? It’s a tough but important question, and the answer will often vary between customers and prospects, or between segments within your customer base. Understanding and influencing consumers’ emotions is crucial for building a loyal customer base, and scientific research, market research, and conventional wisdom all suggest that emotions are a key piece of attracting and engaging consumers.

CMB designed EMPACT℠, a proprietary quantitative approach to understanding how a brand, product, touchpoint, or experience should make a consumer feel in order to drive their behaviors. Measuring valence (how bad or good) and activation (low to high energy) across basic emotions (e.g., happy, sad), social and self-conscious emotions (e.g., pride, embarrassment, nostalgia), and other relevant feelings and mental states (e.g., social connection, cognitive ease), EMPACT℠ has proved to be a practical, comprehensive, and robust tool. Key insights about emotions emerge, which can then shape communications that elicit the desired emotions and drive consumer behavior. And while EMPACT℠ has been used extensively as a quantitative tool, it is also an important component of qualitative research.

In order to achieve the most bang for the buck with qualitative research, every researcher knows that having the right people in the room (or in front of the video-enabled IDI) is a critical first step. You screen for demographics and behaviors and sometimes attitudes, but have you considered emotions? Ensuring that you recruit respondents who feel a specific way when considering your brand or product is critical to gleaning the most insight from qualitative work. Applying an emotional qualifier allows us to ensure that we are talking to respondents who are in the best position to provide the specific types of insights we’re looking for.

For example, CMB has a client whose segmentation study, which incorporated EMPACT℠, revealed that their brand over-indexed on emotions that tend to drive consumers away from brands in their industry. The firm wanted to craft targeted communications to mitigate these negative emotions among a specific strategic consumer segment. As a first step in testing their marketing message and imagery, focus groups were conducted.

In addition to using the segmentation algorithm to ensure we had the correct consumer segment in the room, we also included EMPACT℠ screening to be sure the selected respondents felt the emotions we wanted to address with new messaging. In this way, we were able to elicit insights directly related to how well the new messaging mitigated those negative emotions. Of course, we tested the messaging among broader groups as well, but being able to identify and isolate respondents whose emotions we most wanted to improve ensured the development of advertising that moves the emotional needle and motivates consumers to try, and to love, the brand.


Lori Vellucci is an Account Director at CMB.  She spends her free time purchasing ill-fated penny stocks and learning about mobile payment solutions from her Gen Z daughters.

Topics: methodology, qualitative research, EMPACT, quantitative research