
A Year in Review: Our Favorite Blogs from 2016

Posted by Savannah House

Thu, Dec 29, 2016


What a year 2016 was.

In a year characterized by disruption, one constant is how we approach our blog: each CMBer contributes at least one post per year. And while asking each employee to write may seem cumbersome, it’s our way of ensuring that we provide you with a variety of perspectives, experiences, and insights into the ever-evolving world of market research, analytics, and consulting.

Before the clock strikes midnight and we bid adieu to this year, let’s take a moment to reflect on some favorite blogs we published over the last twelve months:

    1. When you think of a Porsche driver, who comes to mind? How old is he? What’s she like? Whoever it is, along with that image comes a perceived favored 2016 presidential candidate. Harnessing AffinID℠ and the results of our 2016 Consumer Identity Research, we found a skew towards one of the candidates for nearly every one of the 90 brands we tested. Read Erica Carranza’s post and check out brands yourself with our interactive dashboard. Interested in learning more? Join Erica for our upcoming webinar: The Key to Consumer-Centricity: Your Brand User Image
    2. During introspection, it’s easy to focus on our weaknesses. But what if we put all that energy towards our strengths? Blair Bailey discusses the benefits of Strength-Based Leadership—realizing growth potential in developing our strengths rather than focusing on our weaknesses. In 2017, let’s all take a page from Blair’s book and concentrate on what we’re good at instead of what we aren’t.
    3. Did you attend a conference in 2016? Going to any in 2017? CMB’s Business Development Lead, Julie Kurd, maps out a game plan to get the most ROI from attending a conference. Though this post is specific to TMRE, these recommendations could be applied to any industry conference where you’re aiming to garner leads and build relationships. 
    4. In 2016 we released the results of our Social Currency research – a five-industry, 90-brand study to identify which consumer behaviors drive equity and Social Currency. Of the industry reports, one of our favorites is the beer edition. So pull up a stool, grab a pint, and learn from Ed Loessi, Director of Product Development and Innovation, how Social Currency helps insights pros and marketers create content and messaging that supports consumer identity.
    5. It’s a mobile world and we’re just living in it. Today we (yes, we) expect to use our smartphones with ease and have little patience for poor design. And as market researchers who depend on a quality pool of human respondents, the trend towards mobile is a reality we can’t ignore. CMB’s Director of Field Services, Jared Huizenga, weighs in on how we can adapt to keep our smart(phone) respondents happy – at least long enough for them to “complete” the study. 
    6. When you think of “innovation,” what comes to mind? The next generation iPhone? A self-driving car? While there are obvious tangible examples of innovation, professional service agencies like CMB are innovating, too. In fact, earlier this year we hired Ed Loessi to spearhead our Product Development and Innovation team. Sr. Research Associate, Lauren Sears, sat down with Ed to learn more about what it means for an agency like CMB to be “innovative.” 
    7. There’s something to be said for “too much of a good thing” – information being one of those things. To help manage the data overload we (and our clients) are often exposed to, Project Manager, Jen Golden, discusses the merits of focusing on one thing at a time (or research objective), keeping a clear space (or questionnaire), and avoiding trending topics (or looking at every single data point in a report).
    8. According to our 2016 study on millennials and money, women ages 21-30 are driven, idealistic, and feel they budget and plan well enough. However, there’s a disparity when it comes to confidence in investing: nearly twice as many young women don’t feel confident in their investing decisions compared to their male counterparts. Lori Vellucci discusses how financial service providers have a lot of work to do to educate, motivate and inspire millennial women investors. 
    9. Admit it, you can’t get enough of Prince William and Princess Kate. The British Royals are more than a family – they’re a brand that’s embedded itself into the bedrock of American pop culture. So if the Royals can do it, why can’t other British brands infiltrate the coveted American marketplace, too? British native and CMB Project Manager, Josh Fortey, contends that before a brand enters a new international market, the decision should be based on a solid foundation of research.
    10. We round out our list with a favorite from our “Dear Dr. Jay” series. When considering a product, we often focus on its functional benefits. But as Dr. Jay, our VP of Advanced Analytics and Chief Methodologist, explains, the emotional attributes (how the brand/product makes us feel) are about as predictive of future behaviors as the functional benefits of the product. So brands, let's spread the love!

We thank you for being a loyal reader throughout 2016. Stay tuned because we’ve got some pretty cool content for 2017 that you won’t want to miss.

From everyone at CMB, we wish you much health and success in 2017 and beyond.

PS - There’s still time to make your New Year’s Resolution! Become a better marketer in 2017 and sign up for our upcoming webinar on consumer identity:

Register Now!

 

Savannah House is a Senior Marketing Coordinator at CMB. A lifelong aspiration of hers is to own a pet sloth, but since the Boston rental market isn’t so keen on exotic animals, she’d settle for a visit to the Sloth Sanctuary in Costa Rica.

 

Topics: strategy consulting, advanced analytics, methodology, consumer insights

But first... how do you feel?

Posted by Lori Vellucci

Wed, Dec 14, 2016

EMPACT 12.14-2.jpg

How does your brand make consumers feel?  It’s a tough but important question, and the answer will often vary between customers and prospects or between segments within your customer base.  Understanding and influencing consumers’ emotions is crucial for building a loyal customer base; scientific research, market research, and conventional wisdom all suggest that to attract and engage consumers, emotions are a key piece of the puzzle.

CMB designed EMPACT℠, a proprietary quantitative approach to understanding how a brand, product, touchpoint, or experience should make a consumer feel in order to drive their behaviors.  Measuring valence (how bad or good) and activation (low to high energy) across basic emotions (e.g., happy, sad, etc.), social and self-conscious emotions (e.g., pride, embarrassment, nostalgia, etc.), and other relevant feelings and mental states (e.g., social connection, cognitive ease, etc.), EMPACT has proved to be a practical, comprehensive, and robust tool.  Key insights around emotions emerge, which can then shape communications that elicit the desired emotions and drive consumer behavior.  But while EMPACT has been used extensively as a quantitative tool, it is also an important component when conducting qualitative research.
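
To make the valence-and-activation framing concrete, here is a minimal, hypothetical sketch of how emotion ratings might be rolled up into brand-level valence and activation scores. The emotion coordinates, the 1-5 rating scale, and the function itself are illustrative assumptions only; this is not the proprietary EMPACT℠ methodology.

    # Hypothetical sketch only -- not the proprietary EMPACT(SM) methodology.
    # Assumes respondents rated how strongly a brand makes them feel each
    # emotion on a 1-5 scale; each emotion gets illustrative valence
    # (bad -> good) and activation (low -> high energy) coordinates.
    from statistics import mean

    EMOTION_SPACE = {  # assumed coordinates on a -1..1 scale
        "happy":         {"valence":  0.9, "activation":  0.5},
        "pride":         {"valence":  0.8, "activation":  0.3},
        "nostalgia":     {"valence":  0.4, "activation": -0.3},
        "sad":           {"valence": -0.8, "activation": -0.5},
        "embarrassment": {"valence": -0.6, "activation":  0.4},
    }

    def brand_emotion_profile(ratings):
        """ratings: one dict per respondent mapping emotion -> 1-5 intensity.
        Returns the rating-weighted average valence and activation for a brand."""
        valences, activations = [], []
        for respondent in ratings:
            total = sum(respondent.values())
            if total == 0:
                continue
            valences.append(sum(EMOTION_SPACE[e]["valence"] * s for e, s in respondent.items()) / total)
            activations.append(sum(EMOTION_SPACE[e]["activation"] * s for e, s in respondent.items()) / total)
        return {"valence": mean(valences), "activation": mean(activations)}

    # Example: two respondents rating the same brand
    print(brand_emotion_profile([
        {"happy": 4, "pride": 3, "sad": 1, "nostalgia": 2, "embarrassment": 1},
        {"happy": 2, "pride": 2, "sad": 3, "nostalgia": 4, "embarrassment": 2},
    ]))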

In order to achieve the most bang for the buck with qualitative research, every researcher knows that having the right people in the room (or in front of the video-enabled IDI) is a critical first step.  You screen for demographics and behaviors and sometimes attitudes, but have you considered emotions?  Ensuring that you recruit respondents who feel a specific way when considering your brand or product is critical to being able to glean the most insight from qualitative work.  Applying an emotional qualifier to respondents allows us to ensure that we are talking to respondents who are in the best position to provide the specific types of insights we’re looking for.

For example, one CMB client learned from a segmentation study that incorporated EMPACT℠ that their brand over-indexed on certain emotions that tend to drive consumers away from brands in their industry.  The firm wanted to craft targeted communications to mitigate these negative emotions among a specific strategic consumer segment.  As a first step in testing their marketing message and imagery, focus groups were conducted.

In addition to using the segmentation algorithm to ensure we had the correct consumer segment in the room, we also included EMPACT℠ screening to be sure the respondents selected felt the emotions that we wanted to address with new messaging.  In this way, we were able to elicit insights directly related to how well the new messaging worked in mitigating the negative emotions.  Of course we tested the messaging among broader groups as well, but being able to identify and isolate respondents whose emotions we most wished to improve ensured the development of great advertising that will move the emotion needle and motivate consumers to try and to love the brand.

Want to learn more about EMPACT? View our webinar by clicking the link below:

Learn More About EMPACT℠

Lori Vellucci is an Account Director at CMB.  She spends her free time purchasing ill-fated penny stocks and learning about mobile payment solutions from her Gen Z daughters.

Topics: methodology, qualitative research, EMPACT, quantitative research

Why Researchers Should Consider Hybrid Methods

Posted by Becky Schaefer

Fri, Dec 09, 2016

As market researchers we’re always challenging ourselves to provide deeper, more accurate insights for our clients. Throughout my career I’ve witnessed an increased dedication to uncovering better results by integrating traditional quantitative and qualitative methodologies to maximize insights within shorter time frames.

Market research has traditionally been divided into quantitative and qualitative methodologies. But more and more researchers are combining elements of each – creating a hybrid methodology, if you will – to paint a clearer picture of the data for clients.

Quantitative research is focused on uncovering objective measurements via statistical analysis. In practice, quant market research studies generally entail questionnaire development, programming, data collection, analysis, and results, and can usually be completed within a few weeks (depending on the scope of the research).  Quant studies usually have larger sample sizes and are structured and set up to quantify data on respondents’ attitudes, opinions, and behaviors.

Qualitative research is exploratory and aims to uncover respondents’ underlying reasons, beliefs and motivations. Qualitative is descriptive, and studies may rely on projective techniques and principles of behavioral psychology to probe deeper than initial responses might allow. 

While both quantitative and qualitative research have their respective merits, market research is evolving and blurring the lines between the two.  At CMB we understand each client has different goals and sometimes it’s beneficial to apply these hybrid techniques.

For example, two approaches I like to recommend are:

  • Video open-ends: Traditional quantitative open-ends ask respondents to complete open-ended questions by entering a text response. Open-ends give respondents the freedom to answer questions in their own words versus selecting from a list of pre-determined responses. While open-ends are still considered a viable technique, market researchers are now throwing video into the mix. Instead of writing down their responses, respondents can record themselves on video. The obvious advantage to video is that it facilitates a more genuine, candid response while researchers are able to see respondents’ emotions “face to face.” This is a twist on traditional quantitative research that has the potential to garner deeper, more meaningful respondent insight.
  • In-depth/moderated chats: These let researchers dig deeper and connect with respondents within the paradigm of a traditional quantitative study. In these short discussions respondents can explain to researchers why they made a specific selection on a survey. In-depth/moderated chats can help contextualize a traditional quantitative survey – providing researchers (and clients) with a combination of both quantitative and qualitative insights.

As insights professionals we strive to offer critical insights that help our clients and partners answer their biggest business questions. More and more often, the best way to achieve that is to put tradition aside and combine qualitative and quantitative methodologies.

Rebecca is part of the field services team at CMB, and she is excited to celebrate her favorite time of year with her family and friends.  

Topics: methodology, qualitative research, quantitative research

The Elephant, the Donkey, and the Qualitative Researcher: The Moderator in Market Research and Politics

Posted by Kelsey Segaloff

Wed, Nov 23, 2016

Americans have a lot to reckon with in the wake of the recent vote. You’re forgiven if analyzing the role of the presidential debate moderator isn’t high on your list. Still, for those of us in the qualitative market research business, there were professional lessons to be learned from the reactions to moderators Lester Holt (NBC), Martha Raddatz (ABC), Anderson Cooper (CNN), and Chris Wallace (Fox). Each moderator took their own approach and each was met with criticism and praise.

As CMB’s qualitative research associate and a moderator-in-training, I noticed parallels to the role of the moderator in the political and market research space. My thoughts:

 The moderator as unbiased

"Lester [Holt] is a Democrat. It’s a phony system. They are all Democrats.” – Donald Trump, President-Elect

Concerns regarding whether the debate moderators were unbiased arose throughout the primaries and presidential debates. Moderators were criticized for techniques like asking questions that were deemed “too difficult,” going after a single candidate, and not adequately pressing other candidates.  For example, critics called NBC’s Matt Lauer biased during the Commander-in-Chief forum. Some felt Lauer hindered Hillary Clinton’s performance by asking tougher questions than those asked of Donald Trump, interrupting Clinton, and not letting her speak on other issues the same way he allowed Donald Trump to.

In qualitative market research, every moderator will experience some bias from time to time, but it’s important to mitigate bias in order to maintain the integrity of the study. In my own qualitative experience, the moderator establishes that they are unbiased by opening each focus group with an explanation that they are independent from the topic of discussion and/or client, and therefore are not looking for the participants to answer a certain way.

Qualitative research moderators can also avoid bias by not asking leading questions, monitoring their own facial expressions and body language, and giving each participant an equal opportunity to speak. Like during a political debate, preventing bias is imperative in qualitative work because biases can skew the results of a study the same way the voting populace fears bias could skew the perceived performance of a candidate.

 The moderator as fact-checker

“It has not traditionally been the role of the moderator to engage in a lot of fact-checking.” – Alan Schroeder, professor of Journalism at Northeastern University

Throughout the 2016 election moderators were criticized for either fact-checking too much or not fact-checking the candidates enough. Talk about a Catch-22.

In qualitative moderating, fact-checking is dependent on the insights we are looking to achieve for a particular study. For example, I just finished traveling across the country with CMB’s Director of Qualitative, Anne Hooper, for focus groups. In each group, Anne asked participants what they knew about the product we were researching. Anne noted every response (accurate or inaccurate), as it was critical we understood the participants’ perceptions of the product. After the participants shared their thoughts, Anne gave them an accurate product description to clarify any false impressions because for the remainder of the conversation it was critical the respondents had the correct understanding of the product.

In the case of qualitative research, Anne demonstrated how fact-checking (or not fact-checking) can be used to generate insights. There’s no “one right way” to do it; it depends on your research goals.

 The moderator as timekeeper

“Basically, you're there as a timekeeper, but you're not a participant.” – Chris Wallace, Television Anchor and Political Commentator for Fox News

Presidential debate moderators frequently interjected (or at least tried to) when candidates ran over their allotted time in order to stay on track and ensure each candidate had equal speaking time. Focus group moderators have the same responsibility. As a qualitative moderator-in-training, I’m learning the importance of playing timekeeper – to be respectful of the participants’ time and allow for equal participation.  I must also remember to cover all topics in the discussion guide. Whether you’re acting as a timekeeper in market research or political debates, it’s as much about the audience of voters or clients as it is about the participants (candidates or study respondents).  

The study’s desired insights will dictate the role of the moderator. Depending on your (or your client’s) goals, bias, fact-checking, and time-keeping could play an important part in how you moderate. But ultimately whether your client is a business or the American voting populace, the fundamental role of the moderator remains largely the same: to provide the client with the insights needed to make an informed decision.

Kelsey is a Qualitative Research Associate. She co-chairs the New England chapter of the QRCA, and recently received a QRCA Young Professionals Grant!

Topics: methodology, qualitative research, Election

Dear Dr. Jay: Weighting Data?

Posted by Dr. Jay Weiner

Wed, Nov 16, 2016

Dear Dr. Jay:

How do I know if my weighting matrix is good? 

Dan


Dear Dan,

I’m excited you asked me this because it’s one of my favorite questions of all time.

First we need to talk about why we weight data in the first place.  We weight data because our ending sample is not truly representative of the general population.  This misrepresentation can occur because of non-response bias, poor sample sourcing, or even bad sample design.  In my opinion, if you go into a research study knowing that you’ll end up weighting the data, there may be a better way to plan your sample frame.

Case in point, many researchers intentionally over-quota certain segments and plan to weight these groups down in the final sample.  We do this because the incidence of some of these groups in the general population is small enough that if we rely on natural fallout we would not get a readable base without a very large sample.  Why wouldn’t you just pull a rep sample and then augment these subgroups?  The weight needed to add these augments into the rep sample is 0. 

Arguments for including these augments with a very small weight include the treatment of outliers.  For example, if we were conducting a study of investors and we wanted to include folks with more than $1,000,000 in assets, we might want to obtain insights from at least 100 of these folks.  In a rep sample of 500, we might only have 25 of them.  This means I need to augment this group by 75 respondents.  If somehow I manage to get Warren Buffet in my rep sample of 25, he might skew the results of the sample.  Weighting the full sample of 100 wealthier investors down to 25 will reduce the impact of any outlier.
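
To make the arithmetic of that investor example concrete, here is a minimal sketch of the cell weights it implies. The helper function is a hypothetical illustration of simple cell (post-stratification) weighting, not CMB’s production weighting code.

    # Illustrative cell weighting for the investor example above:
    # a 500-person rep sample contains 25 wealthy investors (5%), and we
    # augment that cell to 100 completes, so the cell must be weighted down.
    def cell_weights(sample_counts, population_shares):
        """Return one weight per cell so each cell's weighted share matches the population."""
        n_total = sum(sample_counts.values())
        return {cell: population_shares[cell] * n_total / n
                for cell, n in sample_counts.items()}

    sample_counts = {"wealthy": 100, "non_wealthy": 475}        # 575 completes in total
    population_shares = {"wealthy": 25 / 500, "non_wealthy": 475 / 500}

    print(cell_weights(sample_counts, population_shares))
    # {'wealthy': 0.2875, 'non_wealthy': 1.15}: the 100 wealthy respondents still
    # contribute only 5% of the weighted total, and any single outlier (a Warren
    # Buffett) carries far less influence than he would in a raw cell of 25.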

A recent post by Nate Cohn in the New York Times suggested that weighting was significantly impacting analysts’ ability to predict the outcome of the 2016 presidential election.  In the article, Mr. Cohn points out, “there is a 19-year-old black man in Illinois who has no idea of the role he is playing in this election.”  This man carried a sample weight of 30.  In a sample of 3000 respondents, he now accounts for 1% of the popular vote.  In a close race, that might just be enough to tip the scale one way or the other.  Clearly, he showed up on November 8th and cast the deciding ballot.

This real-life example suggests that we might want to consider “capping” extreme weights so that we mitigate the potential for very small groups to influence overall results. But bear in mind that when we do this, our final sample profiles won’t be nationally representative because capping the weight understates the size of the segment being capped.  It’s a trade-off between a truly balanced sample and making sure that the survey results aren’t biased.
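
A minimal sketch of what capping extreme weights could look like in practice follows; the cap of 5 and the choice to rescale back to the original weighted total are assumptions for illustration, not a prescribed rule.

    # Illustrative weight capping ("trimming"): cap each weight, then rescale
    # so the overall weighted base size is preserved. The cap value is assumed.
    def cap_weights(weights, cap=5.0):
        capped = [min(w, cap) for w in weights]
        scale = sum(weights) / sum(capped)    # keep the weighted total unchanged
        return [w * scale for w in capped]

    raw = [1.0] * 2999 + [30.0]               # one respondent carrying a weight of 30
    trimmed = cap_weights(raw)
    print(round(max(raw), 2), round(max(trimmed), 2))
    # 30.0 vs. roughly 5.04: the outlier's share of the weighted sample drops from
    # about 1% to under 0.2%, at the cost of understating his segment's true size.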

Dr. Jay loves designing really big, complex choice models.  With over 20 years of DCM experience, he’s never met a design challenge he couldn’t solve. 

Keep the market research questions comin'! Ask Dr. Jay directly at DearDrJay@cmbinfo.com or submit yours anonymously by clicking below:

 Ask Dr. Jay!

Topics: methodology, Dear Dr. Jay