WELCOME TO OUR BLOG!

The posts here represent the opinions of CMB employees and guests—not necessarily the company as a whole. 


Marketer Beware: Brand User Stereotypes Bias How Consumers See Your Ads

Posted by Dr. Erica Carranza

Thu, Jan 19, 2017

Imagine you see the picture below in an ad for Jack Daniel’s. Who is this guy? Where is he? What’s he like?

[Image: man in a boat]

I see a middle-aged man somewhere in the south. He’s out fishing. He’s a stoic, rugged, “salt of the earth” kind of guy. He drives a truck—and if it breaks down, he can fix it himself, thank you very much.

But what if, instead, you saw this image in an ad for the clothing brand Patagonia? What would you think about the man in the picture?

I’d imagine him on an adventure vacation someplace exotic. He’s from California. He cares about looking good, feeling good, and doing good. Later, he’ll be scaling a mountain and drinking a juice cleanse.

In other words, if he’s in an ad for Patagonia (vs. Jack Daniel’s), I’d make a whole different set of assumptions.

This effect is driven by our tendency to develop stereotypes. After all, consumers are people, and people are social animals. We tend to categorize other people into types, and use our beliefs about those types to guide our perceptions, expectations, and behaviors. Stereotypes can be nefarious, no doubt. But they’re a fact of life. They’re a mental shortcut we’ve evolved in order to navigate a complex world—and they’re hard to avoid because they often operate at an unconscious level.

A brand can easily become the basis for a stereotype—an image of the kind of person who uses that brand (e.g., the kind of guy who drinks JD, or wears Patagonia). And that image can bias how consumers see the brand’s advertising.

Case in point: Research we conducted for a financial services brand with a reputation for being popular among older, affluent consumers.

The goal was to test advertising that would broaden the brand’s appeal—particularly among Millennials. But when we showed Millennial prospects an ad with a picture like the one above, they assumed that the man was much older. They said things like: “He was a Wall Street businessman. Now he’s retired and canoeing alone on a lake… This is probably his last vacation.” (Ouch!) To succeed in shaking up their image of who uses the brand, the ads had to unambiguously portray customers in young adult life stages (e.g., a couple having their first baby).

The ads also had to show activities that were appealing without being too out-of-reach. Pictures of twenty-somethings yachting, or at the ballet, just reinforced prospects’ preexisting image of uber-wealthy customers with whom they couldn’t relate. (“I don’t identify with any of these pictures! I don’t own a boat… I never go to the ballet.”) And, for some prospects, these pictures just seemed unrealistic. Yachting Millennials didn’t fit with any type of person they knew.

Another pitfall was pictures of young people that struck prospects as realistic, but inadvertently triggered other negative stereotypes. For example, a picture of a man wearing a hat like this…

[Image: hipster hat]

…triggered a “Hipster” image, and that was a turn-off. Prospects didn’t think they had much in common with him, didn’t aspire to be like him—and definitely wouldn’t want to hang out with him.

These perceptions matter a lot. Consumers’ image of a brand’s typical user needs to feel real and be compelling—because, as I wrote in an earlier blog, consumers’ image of the kind of person who uses a brand can really help (or really hinder!) brand growth. To attract consumers, the image should feel like a kind of person they know and like, or would like to know.

Here’s the good news: Marketing can play a powerful role in shaping that image. That’s not to say it’s easy. Great marketing is art + science. So we developed AffinIDSM to support brands and agencies with science that can help them get the art of marketing right. More specifically, AffinIDSM is a research solution designed to tackle three key questions:

  • What is consumers’ current image of the brand’s typical user?
    Note: They may not have a clear image, which is a challenge and opportunity for the brand—but that’s a topic for a different day!
  • How compelling is that image?
  • How should you optimize that image?
    In other words: What should marketing and brand initiatives seek to communicate about the kind of person who uses the brand in order to drive consumer engagement?

Then we can test ads to make sure that they convey the intended image, and that they avoid hard-to-predict missteps. (See above re: the “Hipster” hat… Who knew?)

I’ll be talking about AffinIDSM in an upcoming webinar. Curious? Sign up below!

In the meantime, “The More You Know” lesson for today is that consumers’ image of a brand’s typical user—and their stereotypes of people in general—will bias their perceptions of marketing, whether we like it or not. The best course of action is to understand what those images are, the effect they have on consumers, and how to strategically influence them so that they work in the brand’s favor.

Erica Carranza is CMB’s VP of Consumer Psychology. She has supplier- and client-side market research experience, and earned her Ph.D. in social psychology from Princeton University.

PS – Have you registered for our webinar yet? Join Erica as she explains why, to change what consumers think of your brand, you must change their image of the people who use it.

What: The Key to Consumer-Centricity: Your Brand User Image

When: February 1, 2017 @ 1PM EST

Register Now!

Topics: consumer insights, webinar, brand health and positioning, AffinID

Dear Dr. Jay: HOW can we trust predictive models after the 2016 election?

Posted by Dr. Jay Weiner

Thu, Jan 12, 2017

Dear Dr. Jay,

After the 2016 election, how will I ever be able to trust predictive models again?

Alyssa


Dear Alyssa,

Data Happens!

Whether we’re talking about political polling or market research, to build good models, we need good inputs. Or, as the old saying goes: “garbage in, garbage out.” Let’s look at the sources of error in the data itself:

  • First, we make it too easy for respondents to say “yes” and “no,” and they try to help us by guessing what answer we want to hear. For example, we ask for purchase intent for a new product idea, and the respondent often overstates the true likelihood of buying the product.
  • Second, we give respondents perfect information. We create 100% awareness when we show the respondent a new product concept. In reality, we know we will never achieve 100% awareness in the market. There are some folks who live under a rock, and, of course, the client will never really spend enough money on advertising to even get close.
  • Third, the sample frame may not be truly representative of the population we hope to project to. This is one of the key issues in political polling, because the population comprises those who actually voted (not registered voters). For models to be correct, we need to predict which voters will actually show up to the polls and how they will vote. The good news in market research is that the population is usually not a moving target.
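One standard remedy for the third issue, an unrepresentative sample frame, is post-stratification weighting: reweight each respondent so the sample’s group proportions match the population’s. Here’s a minimal sketch in Python, using entirely hypothetical group shares and purchase-intent figures:

```python
# Hypothetical example: a sample over-represents one age group relative
# to the population, biasing a raw "purchase intent" estimate.
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# (group, said_yes) pairs from a skewed sample: 55+ is over-represented.
sample = ([("18-34", 1)] * 10 + [("18-34", 0)] * 10 +
          [("35-54", 1)] * 12 + [("35-54", 0)] * 18 +
          [("55+", 1)] * 15 + [("55+", 0)] * 35)

n = len(sample)  # 100 respondents
sample_share = {g: sum(1 for grp, _ in sample if grp == g) / n
                for g in population_share}

# Post-stratification weight: population share / sample share.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

raw = sum(yes for _, yes in sample) / n
weighted = sum(weights[g] * yes for g, yes in sample) / n

print(f"raw intent: {raw:.3f}, weighted intent: {weighted:.3f}")
```

In this made-up data, the over-represented 55+ group has the lowest intent, so the raw estimate understates the population figure and weighting corrects the skew. Of course, weighting can only rebalance groups the frame actually reached; it can’t conjure up the voters (or consumers) it missed entirely.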

Now, let’s consider the sources of error in building predictive models. The first step in building a predictive model is to specify the model. If you’re a purist, you begin with a hypothesis, collect the data, test the hypothesis, and draw conclusions. If we fail to reject the null hypothesis, we should formulate a new hypothesis and collect new data. What do we actually do? We mine the data until we get significant results. Why? Because data collection is expensive. One possible outcome of continuing to mine the data for a better model is a model that is only good at predicting the data you have, and not very accurate at predicting results from new inputs.

It is up to the analyst to decide what is statistically meaningful versus what is managerially meaningful.  There are a number of websites where you can find “interesting” relationships in data.  Some examples of spurious correlations include:

  • Divorce rate in Maine and the per capita consumption of margarine
  • Number of people who die by becoming entangled in their bedsheets and the total revenue of US ski resorts
  • Per capita consumption of mozzarella cheese (US) and the number of civil engineering doctorates awarded (US)
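Relationships like these can surface purely by chance when you test enough variables. A small simulation (hypothetical survey data, pure Python) shows how mining the data until something comes up significant manufactures them:

```python
import math
import random

random.seed(42)

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

n_respondents = 30
outcome = [random.gauss(0, 1) for _ in range(n_respondents)]

# "Mine" 100 completely unrelated candidate predictors against the outcome.
hits = 0
for _ in range(100):
    predictor = [random.gauss(0, 1) for _ in range(n_respondents)]
    # |r| > 0.361 is roughly the two-tailed p < .05 cutoff for n = 30.
    if abs(pearson_r(outcome, predictor)) > 0.361:
        hits += 1

print(f"'Significant' predictors found by chance: {hits} of 100")
```

With 100 unrelated predictors and a p < .05 threshold, roughly five will look “significant” on any given sample; the apparent relationships are noise, and a model built on them won’t hold up on new data.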

In short, you can build a model that’s accurate but still wouldn’t be of any use (or make any sense) to your client. And the fact is, there’s always a certain amount of error in any model we build—we could be wrong, just by chance.  Ultimately, it’s up to the analyst to understand not only the tools and inputs they’re using but the business (or political) context.

Dr. Jay loves designing really big, complex choice models.  With over 20 years of DCM experience, he’s never met a design challenge he couldn’t solve. 

PS – Have you registered for our webinar yet? Join Dr. Erica Carranza as she explains why, to change what consumers think of your brand, you must change their image of the people who use it.

What: The Key to Consumer-Centricity: Your Brand User Image

When: February 1, 2017 @ 1PM EST

Register Now!


Topics: methodology, data collection, Dear Dr. Jay, predictive analytics

A New Year’s Resolution: Closing the Gap Between Intent and Action

Posted by Indra Chapman

Wed, Jan 04, 2017


Are you one of the more than 100 million adults in the U.S. who made a New Year’s resolution? Do you resolve to lose weight, exercise more, spend less and save more, or just be a better person?

Top 10 New Year's Resolutions for 2016:

  • Lose Weight
  • Getting Organized
  • Spend Less, Save More
  • Enjoy Life to the Fullest
  • Staying Fit and Healthy
  • Learn Something Exciting
  • Quit Smoking
  • Help Others in Their Dreams
  • Fall in Love
  • Spend More Time with Family
[Source: StatisticBrain.com]

The actual number varies from year to year, but generally more than four out of 10 of us make some type of resolution for the New Year. And now that we’re a few days into 2017, we’re seeing the impact of those New Year resolutions. Gyms and fitness classes are crowded (Pilates anyone?), and self-improvement and diet book sales are up.

But… (there’s that inevitable but!), despite the best of intentions, within a week, at least a quarter of us have abandoned that resolution, and by the end of the month, more than a third of us have dropped out of the race. In fact, several studies suggest that only 8% of us actually go on to achieve our resolutions. Alas, we see that behavior no longer follows intention.

It’s not so different in market research: we see the same gap between consumer intention and behavior. Sometimes the gap is fairly small, and other times it’s substantial. Consumers (with the best of intentions) tell us what they plan to do, but their follow-through is not always consistent. This, as you might imagine, can lead to bad data.

So what does this mean?

To help close the gap and gather more accurate data, ask yourself the following questions when designing your next study:

  • What are the barriers to adoption or the path to behavior? Are there other factors or elements within the customer journey to consider?
  • Are you assessing the non-rational components? Are there social, psychological or economic implications to them following through with that rational selection? After all, consider that many of us know that exercising daily is good for us – but so few of us follow through.
  • Are there other real life factors that you should consider in analysis of the survey? Does the respondent’s financial situation make that preference more aspirational than intentional?

So what are your best practices for closing the gap between consumer intent and action? If you don’t already have a New Year’s resolution (or if you do, add this one!), why not resolve to make every effort to connect consumer intent to behavior in your studies during 2017?

Another great resolution is to become a better marketer!  How?

Register for our upcoming webinar with Dr. Erica Carranza on consumer identity and the power of measuring brand user image to help create meaningful and relevant messaging for your customers and prospects:

Register Now!

Indra Chapman is a Senior Project Manager at CMB, who has resolved to set goals in lieu of new year’s resolutions this year. In the words of Brad Paisley, the first day of the new year “is the first blank page of a 365-page book. Write a good one.”

Topics: data collection, research design

A Year in Review: Our Favorite Blogs from 2016

Posted by Savannah House

Thu, Dec 29, 2016


What a year 2016 was.

In a year characterized by disruption, one constant is how we approach our blog: each CMBer contributes at least one post per year. And while asking each employee to write may seem cumbersome, it’s our way of ensuring that we provide you with a variety of perspectives, experiences, and insights into the ever-evolving world of market research, analytics, and consulting.

Before the clock strikes midnight and we bid adieu to this year, let’s take a moment to reflect on some favorite blogs we published over the last twelve months:

    1. When you think of a Porsche driver, who comes to mind? How old is he? What’s she like? Whoever it is, along with that image comes a perceived favored 2016 presidential candidate. Harnessing AffinIDSM and the results of our 2016 Consumer Identity Research, we found a skew towards one of the candidates for nearly every one of the 90 brands we tested.  Read Erica Carranza’s post and check out brands yourself with our interactive dashboard. Interested in learning more? Join Erica for our upcoming webinar: The Key to Consumer-Centricity: Your Brand User Image  
    2. During introspection, it’s easy to focus on our weaknesses. But what if we put all that energy towards our strengths? Blair Bailey discusses the benefits of Strength-Based Leadership—realizing growth potential in developing our strengths rather than focusing on our weaknesses. In 2017, let’s all take a page from Blair’s book and concentrate on what we’re good at instead of what we aren’t.
    3. Did you attend a conference in 2016? Going to any in 2017? CMB’s Business Development Lead, Julie Kurd, maps out a game plan to get the most ROI from attending a conference. Though this post is specific to TMRE, these recommendations could be applied to any industry conference where you’re aiming to garner leads and build relationships. 
    4. In 2016 we released the results of our Social Currency research – a five-industry, 90-brand study to identify which consumer behaviors drive equity and Social Currency. Of the industry reports, one of our favorites is the beer edition. So pull up a stool, grab a pint, and learn from Ed Loessi, Director of Product Development and Innovation, how Social Currency helps insights pros and marketers create content and messaging that supports consumer identity.
    5. It’s a mobile world and we’re just living in it. Today we (yes, we) expect to use our smartphones with ease and have little patience for poor design. And as market researchers who depend on a quality pool of human respondents, the trend towards mobile is a reality we can’t ignore. CMB’s Director of Field Services, Jared Huizenga, weighs in on how we can adapt to keep our smart(phone) respondents happy – at least long enough for them to “complete” the study. 
    6. When you think of “innovation,” what comes to mind? The next generation iPhone? A self-driving car? While there are obvious tangible examples of innovation, professional service agencies like CMB are innovating, too. In fact, earlier this year we hired Ed Loessi to spearhead our Product Development and Innovation team. Sr. Research Associate, Lauren Sears, sat down with Ed to learn more about what it means for an agency like CMB to be “innovative.” 
    7. There’s something to be said for “too much of a good thing” – information being one of those things. To help manage the data overload we (and our clients) are often exposed to, Project Manager Jen Golden discusses the merits of focusing on one thing at a time (or research objective), keeping a clear space (or questionnaire), and avoiding trending topics (or looking at every single data point in a report).
    8. According to our 2016 study on millennials and money, women ages 21-30 are driven, idealistic, and feel they budget and plan well enough. However, there’s a disparity when it comes to confidence in investing: nearly twice as many young women don’t feel confident in their investing decisions compared to their male counterparts. Lori Vellucci discusses how financial service providers have a lot of work to do to educate, motivate and inspire millennial women investors. 
    9. Admit it, you can’t get enough of Prince William and Princess Kate. The British Royals are more than a family – they’re a brand that’s embedded itself into the bedrock of American pop culture. So if the Royals can do it, why can’t other British brands infiltrate the coveted American marketplace, too? British native and CMB Project Manager Josh Fortey contends that before a brand enters a new international market, the decision should be based on a solid foundation of research.
    10. We round out our list with a favorite from our “Dear Dr. Jay” series. When considering a product, we often focus on its functional benefits. But as Dr. Jay, our VP of Advanced Analytics and Chief Methodologist, explains, the emotional attributes (how the brand/product makes us feel) are about as predictive of future behaviors as the functional benefits of the product. So brands, let's spread the love!

We thank you for being a loyal reader throughout 2016. Stay tuned because we’ve got some pretty cool content for 2017 that you won’t want to miss.

From everyone at CMB, we wish you much health and success in 2017 and beyond.

PS - There’s still time to make your New Year’s Resolution! Become a better marketer in 2017 and sign up for our upcoming webinar on consumer identity:

Register Now!


Savannah House is a Senior Marketing Coordinator at CMB. A lifelong aspiration of hers is to own a pet sloth, but since the Boston rental market isn’t so keen on exotic animals, she’d settle for a visit to the Sloth Sanctuary in Costa Rica.


Topics: strategy consulting, advanced analytics, methodology, consumer insights

Happiness is...

Posted by Talia Fein

Wed, Dec 21, 2016

During my senior year of college, I interviewed at several market research firms. While there was a lot to like about many of them, CMB had a unique vibe that convinced me this was where I should start my career. As it turned out, my instincts were right. CMB was fantastic at teaching a novice associate like me the fundamentals of market research; I quickly developed a love for the clients, the work, and “All Things Data.”

When I left CMB after three years for a chance to live overseas and then a stint in D.C., I had experience working with incredible brands and super-smart colleagues, and I’d developed a competitive skill set. Almost two years ago, I was offered the opportunity to return, and rather than rely on my gut, I had to answer a question my 22-year-old self hadn’t considered:

What made CMB so special?

In the New York Times op-ed “The One Question You Should Ask About Every New Job,” Adam Grant, professor of management and psychology at the Wharton School of the University of Pennsylvania, discusses the relationship between company culture and happiness in the workplace. “Although finding the right title, position and salary is important,” he writes, “there’s another consideration that matters just as much: culture. The culture of a workplace — an organization’s values, norms, and practices — has a huge impact on our happiness and success.”

What does it mean to have good company culture, and how do you find it?

In writing this blog post, I asked a few people what company culture means to them, and specifically, what they considered characteristics of a good company culture. Responses were what you’d probably expect: Ping-pong tables, Friday happy hours, free lunch.  In short, answers were unanimous: good company culture means fun and free food.

Really? The holy grail of work happiness is free food?

OK, it’s a little more complicated than a couple slices of pizza. In his article, Grant cites a classic study that analyzed employee stories from across industries about their workplaces. In the study, researchers identified three fundamental themes: Justice (Is it a fair place?), Security (Is it safe to work there?) and Control (Can a person shape their destiny and have influence in the organization?). Ironically, these stories underscore an organizational uniqueness bias – people think their company culture is more unique than it really is.

But organizational uniqueness bias aside, this study also suggests that company culture isn’t defined by free food. Rather, it’s defined by an organization’s values.

That’s not to discredit the tangible stuff. Those things certainly are important to a company’s culture. In fact, MIT professor Edgar H. Schein calls that stuff “the most visible parts of an organization’s culture… [its] artifacts and practices — how people talk, look and act.” But he, like the study Grant cited, contends that more important than overt office perks are the company’s operating principles.

So how do we identify those proverbial “company values”? Despite organizational uniqueness bias, I’ve noticed a few CMB characteristics that have made it special to me:

  1. The organization feels “flat” (i.e., non-hierarchical)

Of course we have job titles and levels (see #3 below), but at CMB each person knows they are valued and their opinions are valid and respected. Our founder and CEO, Anne Bailey Berman, encourages us all to “be a squeaky wheel” – CMBers aren’t afraid to speak up because we know we’ll be heard.

  2. “We are a group of lively and engaging individuals”

Even though that’s a direct quote from the old CMB website (at least two or three website iterations ago), it still rings true today. And while a lot of companies make similar claims, I’d venture to say some are exaggerating. But not CMB. In fact, every CMB job description includes a line that says we’re looking for people who are “collaborative, enthusiastic, and who can put their ego aside, roll up their sleeves and get the job done.” To me, this line perfectly describes the CMB vibe.

  3. The company wants us (as individuals) to succeed

At every level and in every corner of the organization, CMB leadership is invested in individual development and growth (both personal and professional). Beyond our job responsibilities, we’re encouraged to learn and grow in experience whether through our internal mentorship program, a workshop, conference, or something else. A great example of CMB’s commitment to individual success is our ability to choose our career path. Research associates are given the opportunity to choose their trajectory based on their skills and interests. In carving our own paths, we’re able to excel in our jobs and deliver better experiences and results for our clients.

Organizational uniqueness bias may suggest that people think their organization’s culture is more distinctive than it really is, but I believe that CMB’s culture truly is special and unique. It certainly has gotten this CMBer to stick around.

Talia is a Project Manager on CMB’s Technology and eCommerce practice. She was named one of Survey Magazine’s 2015 Data Dominators and as a native Bostonian, couldn’t be happier to be back in the city.

 

Topics: millennials, emotion

But first... how do you feel?

Posted by Lori Vellucci

Wed, Dec 14, 2016


How does your brand make consumers feel?  It’s a tough but important question and the answer will often vary between customers and prospects or between segments within your customer base.  Understanding and influencing consumers’ emotions is crucial for building a loyal customer base; and scientific research, market research, and conventional wisdom all suggest that to attract and engage consumers, emotions are a key piece of the puzzle. 

CMB designed EMPACTSM, a proprietary quantitative approach to understanding how a brand, product, touchpoint, or experience should make a consumer feel in order to drive their behaviors. Measuring valence (how bad or good) and activation (low to high energy) across basic emotions (e.g., happy, sad), social and self-conscious emotions (e.g., pride, embarrassment, nostalgia), and other relevant feelings and mental states (e.g., social connection, cognitive ease), EMPACT has proved to be a practical, comprehensive, and robust tool. Key insights around emotions emerge, which can then drive communications that elicit the desired emotions and drive consumer behavior. But while EMPACT has been used extensively as a quantitative tool, it is also an important component of qualitative research.

In order to achieve the most bang for the buck with qualitative research, every researcher knows that having the right people in the room (or in front of the video-enabled IDI) is a critical first step. You screen for demographics and behaviors and sometimes attitudes, but have you considered emotions? Ensuring that you recruit respondents who feel a specific way when considering your brand or product is critical to being able to glean the most insight from qualitative work. Applying an emotional qualifier to respondents allows us to ensure that we are talking to respondents who are in the best position to provide the specific types of insights we’re looking for.

For example, CMB has a client who learned from a segmentation study incorporating EMPACT that their brand over-indexed for eliciting certain emotions that tend to drive consumers away from brands in their industry. The firm wanted to craft targeted communications to mitigate these negative emotions among a specific strategic consumer segment. As a first step in testing their marketing message and imagery, we conducted focus groups.

In addition to using the segmentation algorithm to ensure we had the correct consumer segment in the room, we also included EMPACT screening to be sure the respondents selected felt the emotions that we wanted to address with new messaging. In this way, we were able to elicit insights directly related to how well the new messaging worked in mitigating the negative emotions. Of course we tested the messaging among broader groups as well, but being able to identify and isolate respondents whose emotions we most wish to improve ensured development of great advertising that will move the emotion needle and motivate consumers to try and to love the brand.

Want to learn more about EMPACT? View our webinar by clicking the link below:

Learn More About EMPACT℠

Lori Vellucci is an Account Director at CMB.  She spends her free time purchasing ill-fated penny stocks and learning about mobile payment solutions from her Gen Z daughters.

Topics: methodology, qualitative research, EMPACT, quantitative research

Why Researchers Should Consider Hybrid Methods

Posted by Becky Schaefer

Fri, Dec 09, 2016

As market researchers we’re always challenging ourselves to provide deeper, more accurate insights for our clients. Throughout my career I’ve witnessed an increased dedication to uncovering better results by integrating traditional quantitative and qualitative methodologies to maximize insights within shorter time frames.

Market research has traditionally been divided into quantitative and qualitative methodologies. But more and more researchers are combining elements of each – creating a hybrid methodology, if you will – to paint a clearer picture of the data for clients.

Quantitative research is focused on uncovering objective measurements via statistical analysis. In practice, quant market research studies generally entail questionnaire development, programming, data collection, analysis, and results, and can usually be completed within a few weeks (depending on the scope of the research). Quant studies usually have larger sample sizes and are structured and set up to quantify data on respondents’ attitudes, opinions, and behaviors.

Qualitative research is exploratory and aims to uncover respondents’ underlying reasons, beliefs and motivations. Qualitative is descriptive, and studies may rely on projective techniques and principles of behavioral psychology to probe deeper than initial responses might allow. 

While both quantitative and qualitative research have their respective merits, market research is evolving and blurring the lines between the two.  At CMB we understand each client has different goals and sometimes it’s beneficial to apply these hybrid techniques.

For example, two approaches I like to recommend are:

  • Video open-ends: Traditional quantitative open-ends ask respondents to answer open-ended questions by entering a text response. Open-ends give respondents the freedom to answer questions in their own words versus selecting from a list of pre-determined responses. While text open-ends are still a viable technique, market researchers are now throwing video into the mix. Instead of writing down their responses, respondents can record themselves on video. The obvious advantage to video is that it facilitates a more genuine, candid response, while researchers are able to see respondents’ emotions “face to face.” This twist on traditional quantitative research has the potential to garner deeper, more meaningful respondent insight.
  • In-depth/moderated chats: These let researchers dig deeper and connect with respondents within the paradigm of a traditional quantitative study. In these short discussions respondents can explain to researchers why they made a specific selection on a survey. In-depth/moderated chats can help contextualize a traditional quantitative survey – providing researchers (and clients) with a combination of both quantitative and qualitative insights.

As insights professionals we strive to offer critical insights that help our clients and partners answer their biggest business questions. More and more often, the best way to achieve that is to put tradition aside and combine qualitative and quantitative methodologies.

Rebecca is part of the field services team at CMB, and she is excited to celebrate her favorite time of year with her family and friends.  

Topics: methodology, qualitative research, quantitative research

Innovation Requires Truly Understanding the Customer's Needs

Posted by Julia Walker

Thu, Dec 01, 2016

“Innovation” has enjoyed a long reign as king of the business buzzwords—you’d be hard-pressed to attend an insights or marketing conference without hearing it. But beyond the buzz, organizations pursue innovation for a number of reasons: to differentiate themselves from other brands, establish themselves as an industry leader, or to avoid producing stale products, services, ad campaigns or content. Smart brands know that complacency is not an option and recognize they must adapt to accommodate the ever-changing consumer landscape.

Innovation is a significant investment—the stakes are high for these new ideas to deliver meaningful results, whether by boosting the brand, successfully introducing a new product, growing the customer base, or adding to bottom line profitability. No matter how disruptive a product, service, or idea is, at the core there must be a deep understanding of customer needs. Let’s take a look at two very different attempts at innovation, and where they stumbled:

 The Case of Google Glass

For any new product (innovative or otherwise), organizations need to answer “yes” to two questions: (1) Is there a market? (2) Does it solve a legitimate problem?

No matter how revolutionary the product may be, it won’t succeed unless there’s a market for it. A product can be too forward-thinking, leaving customers confused or unwilling to try it. Take the case of Google Glass: though the product itself was revolutionary and consumers were intrigued, it was never clear why consumers needed Google Glass or what problem it was designed to solve. Google Glass ended up generating little demand because there wasn’t an easily identifiable need for it.

The key here would’ve been to first identify what customers need and then develop a product designed to satisfy that need. Here’s where market research can help with innovation: as market researchers, we can help brands get into the minds of consumers and identify the gaps between what they currently receive and what they want to receive. By identifying these gaps, we can shed light on where there’s a need to be met.

 The Febreze Scentstories Flop

Other innovation flops in recent years have proven that, beyond identifying customer and prospect needs, it’s also important to test how messages play to real consumers prior to launch.

That lesson is illustrated by the failure of P&G’s Febreze Scentstories. In 2005, the company caused confusion because it failed to properly educate customers about what the product actually was. Febreze Scentstories resembled a disc player that emitted a different scent every 30 minutes, from discs that looked an awful lot like CDs. The ads told consumers that with Febreze Scentstories they could "play scents like you play music." And while P&G partnered with superstar Shania Twain to drum up excitement, the advertising campaign confused consumers by making them think the product actually involved music. Clearer messaging would’ve helped prevent this misunderstanding.

Advanced analytical techniques along with strategic qualitative methodologies are a boon to brands. There has never been so much information available, nor computing power capable of parsing and modeling it. But as these two very different product innovations demonstrate, sheer volume of data is not enough. Successful innovation requires insights grounded in a truly consumer-centric approach. After all, only the consumer knows what the consumer wants (and needs).

Julia Walker is a Senior Associate Researcher at CMB who enjoys being innovative in her everyday life.  For instance, she loves to find creative ways to eat healthy without sacrificing taste. 

Topics: consumer insights, customer experience and loyalty, growth and innovation

The Elephant, the Donkey, and the Qualitative Researcher: The Moderator in Market Research and Politics

Posted by Kelsey Segaloff

Wed, Nov 23, 2016

Americans have a lot to reckon with in the wake of the recent vote. You’re forgiven if analyzing the role of the presidential debate moderator isn’t high on your list. Still, for those of us in the qualitative market research business, there were professional lessons to be learned from the reactions to moderators Lester Holt (NBC), Martha Raddatz (ABC), Anderson Cooper (CNN), and Chris Wallace (Fox). Each moderator took their own approach, and each was met with both criticism and praise.

As CMB’s qualitative research associate and a moderator-in-training, I noticed parallels to the role of the moderator in the political and market research space. My thoughts:

 The moderator as unbiased

"Lester [Holt] is a Democrat. It’s a phony system. They are all Democrats.” – Donald Trump, President-Elect

Concerns about whether the debate moderators were unbiased arose throughout the primaries and presidential debates. Moderators were criticized for asking questions deemed “too difficult,” going after a single candidate, and not adequately pressing other candidates. For example, critics called NBC’s Matt Lauer biased during the Commander-in-Chief forum. Some felt Lauer hindered Hillary Clinton’s performance by asking her tougher questions than those asked of Donald Trump, interrupting her, and not letting her speak on other issues the way he allowed Trump to.

In qualitative market research, every moderator will experience some bias from time to time, but it’s important to mitigate that bias in order to maintain the integrity of the study. In my own qualitative experience, the moderator establishes impartiality by opening each focus group with an explanation that they are independent from the topic of discussion and/or the client, and therefore aren’t looking for participants to answer a certain way.

Qualitative research moderators can also avoid bias by not asking leading questions, monitoring their own facial expressions and body language, and giving each participant an equal opportunity to speak. As in a political debate, preventing bias is imperative in qualitative work because biases can skew the results of a study the same way voters fear bias could skew the perceived performance of a candidate.

 The moderator as fact-checker

“It has not traditionally been the role of the moderator to engage in a lot of fact-checking.” – Alan Schroeder, professor of Journalism at Northeastern University

Throughout the 2016 election, moderators were criticized for either fact-checking too much or not fact-checking the candidates enough. Talk about a Catch-22.

In qualitative moderating, fact-checking depends on the insights we’re looking to achieve for a particular study. For example, I just finished traveling across the country with CMB’s Director of Qualitative, Anne Hooper, for focus groups. In each group, Anne asked participants what they knew about the product we were researching. Anne noted every response (accurate or inaccurate), as it was critical we understood the participants’ perceptions of the product. After the participants shared their thoughts, Anne gave them an accurate product description to clear up any false impressions, because it was critical that, for the remainder of the conversation, respondents had a correct understanding of the product.

In the case of qualitative research, Anne demonstrated how fact-checking (or not fact-checking) can be used to generate insights. There’s no one right way to do it; it depends on your research goals.

 The moderator as timekeeper

“Basically, you're there as a timekeeper, but you're not a participant.” – Chris Wallace, Television Anchor and Political Commentator for Fox News

Presidential debate moderators frequently interjected (or at least tried to) when candidates ran over their allotted time, in order to stay on track and ensure each candidate had equal speaking time. Focus group moderators have the same responsibility. As a qualitative moderator-in-training, I’m learning the importance of playing timekeeper: being respectful of participants’ time, allowing for equal participation, and remembering to cover all the topics in the discussion guide. Whether you’re keeping time in market research or a political debate, it’s as much about the audience of voters or clients as it is about the participants (candidates or study respondents).

The study’s desired insights will dictate the role of the moderator. Depending on your (or your client’s) goals, bias, fact-checking, and timekeeping could each play an important part in how you moderate. But ultimately, whether your client is a business or the American voting populace, the fundamental role of the moderator remains the same: to provide the client with the insights needed to make an informed decision.

Kelsey is a Qualitative Research Associate. She co-chairs the New England chapter of the QRCA, and recently received a QRCA Young Professionals Grant!

Topics: methodology, qualitative research, Election

Dear Dr. Jay: Weighting Data?

Posted by Dr. Jay Weiner

Wed, Nov 16, 2016

Dear Dr. Jay:

How do I know if my weighting matrix is good? 

Dan


Dear Dan,DRJAY-9.png

I’m excited you asked me this because it’s one of my favorite questions of all time.

First, we need to talk about why we weight data in the first place. We weight data because our final sample is not truly representative of the general population. This misrepresentation can occur because of non-response bias, a poor sample source, or even bad sample design. In my opinion, if you go into a research study knowing you’ll end up weighting the data, there may be a better way to plan your sample frame.

Case in point: many researchers intentionally over-quota certain segments and plan to weight these groups down in the final sample. We do this because the incidence of some of these groups in the general population is small enough that, relying on natural fallout, we would not get a readable base without a very large sample. Why wouldn’t you just pull a rep sample and then augment these subgroups? The weight needed to add these augments into the rep sample is 0: they’d inform subgroup analyses without affecting the overall results.

There is an argument, though, for including these augments with a very small weight instead: the treatment of outliers. For example, if we were conducting a study of investors and wanted to include folks with more than $1,000,000 in assets, we might want to obtain insights from at least 100 of them. In a rep sample of 500, we might only have 25, which means I need to augment this group by 75 respondents. If somehow I managed to get Warren Buffett in my rep sample of 25, he might skew the results. Weighting the full group of 100 wealthier investors down to 25 reduces the impact of any outlier.
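The arithmetic above can be sketched in a few lines of Python (a hypothetical illustration; `segment_weight` is my own helper, not part of any survey package). With 475 non-wealthy respondents at weight 1 and a 5% population share for the wealthy segment, the 100 wealthy respondents each get a weight of 0.25, weighting them down to the 25 a rep sample would have produced:

```python
# Hypothetical sketch of the weighting arithmetic described above.
# Non-augmented respondents keep a weight of 1; the oversampled
# segment is weighted so its weighted share matches its population share.

def segment_weight(pop_share, n_segment, n_other):
    """Weight for an oversampled segment.

    pop_share: the segment's true share of the population (e.g., 0.05)
    n_segment: respondents interviewed in the segment (rep + augment)
    n_other:   all other respondents, who keep a weight of 1
    """
    weighted_total = n_other / (1 - pop_share)   # 475 / 0.95 = 500
    return pop_share * weighted_total / n_segment

# The example above: 25 wealthy investors fall out of a rep sample of
# 500 (5%), and we augment by 75 to reach a readable base of 100.
w = segment_weight(pop_share=0.05, n_segment=100, n_other=475)
print(round(w, 4))  # 0.25 -- each wealthy investor counts as a quarter of a respondent
```

The outlier protection falls out naturally: even a Warren Buffett in the sample contributes only a quarter of a respondent to the weighted totals.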

A recent post by Nate Cohn in The New York Times suggested that weighting was significantly impacting analysts’ ability to predict the outcome of the 2016 presidential election. In the article, Mr. Cohn points out that “there is a 19-year-old black man in Illinois who has no idea of the role he is playing in this election.” This man carried a sample weight of 30, so in a sample of 3,000 respondents, he alone accounts for 1% of the estimated popular vote. In a close race, that might be enough to tip the scale one way or the other. Clearly, he showed up on November 8th and cast the deciding ballot.

This real-life example suggests we might want to consider “capping” extreme weights so that we mitigate the potential for very small groups to influence overall results. But bear in mind that when we do this, our final sample profile won’t be nationally representative, because capping the weight understates the size of the capped segment. It’s a trade-off between a truly balanced sample and making sure the survey results aren’t biased by a few extreme respondents.
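Here's a minimal sketch of what capping might look like in practice (the `cap_weights` helper and the cap of 5 are my own assumptions, not a standard). Clipping a weight-30 respondent down to 5 mutes his influence, but the weighted total shrinks, which is exactly the understatement trade-off described above:

```python
# Hypothetical sketch: capping extreme weights. The cap value is a
# judgment call; clipping understates the capped segment's true size.

def cap_weights(weights, cap=5.0):
    """Clip each weight at `cap`. Respondents above the cap lose
    influence, and their segment is understated in weighted totals."""
    return [min(w, cap) for w in weights]

# Mr. Cohn's example in miniature: 99 typical respondents plus one
# carrying a weight of 30.
weights = [1.0] * 99 + [30.0]
capped = cap_weights(weights, cap=5.0)
print(sum(weights), sum(capped))  # 129.0 104.0 -- the capped total understates the rare segment
```

Before capping, the one extreme respondent accounts for about 23% of the weighted total; after, under 5%. Where to set the cap is the researcher's call, and that judgment is the whole debate.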

Dr. Jay loves designing really big, complex choice models.  With over 20 years of DCM experience, he’s never met a design challenge he couldn’t solve. 

Keep the market research questions comin'! Ask Dr. Jay directly at DearDrJay@cmbinfo.com or submit yours anonymously by clicking below:

  Ask Dr. Jay!

Topics: methodology, Dear Dr. Jay