WELCOME TO OUR BLOG!

The posts here represent the opinions of CMB employees and guests—not necessarily the company as a whole. 


My New Driving Tribe

Posted by Jay Weiner, PhD

Thu, Aug 16, 2018


The kind of car you drive says a lot about who you are—at least according to other people's perceptions. And the more people identify with that perception, the more likely they are to use that brand, or in this case, drive that car.

This is important for marketers responsible for effectively communicating their brand to the target market. How is their typical customer perceived? Does that customer image align with their brand?

I’m not much of a joiner so I never thought much about what my car says about me. That is, until I joined a new tribe of car owners.

When I was recently debating buying a new car, my colleague said she wouldn't speak to me again if I bought a certain brand. I bought it anyway. Even though she might be unhappy that I now drive this particular car, she still speaks to me (she must need some analysis done).

In a self-funded study, we found the typical owner of this brand is viewed as: wealthy, confident, fun, stuck up, young, snobby, arrogant, and cool, among others. In reading this description, you can probably tell it’s not a Buick. 

But this brand's marketing team might have a strong interest in figuring out how best to play up the positive characteristics (cool) and play down or negate the negative ones (arrogant/stuck up). My kids still don't think I'm cool.

[Word cloud: traits ascribed to this brand's typical driver]

Why does this brand generate such a view? It over-indexes on dimensions that might not seem approachable (e.g., worldly, trendsetting) and under-indexes on dimensions that might resonate with a wider audience (e.g., responsible, genuine, relaxed). This perception reflects a very specific view of the typical driver and risks alienating a potentially huge customer base.

Maybe that’s okay if you’re a niche product with a narrow target market, but then again, maybe not.

[Chart: dimensions the brand over- and under-indexes on]
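
For readers who want the mechanics, here's a minimal sketch of how over- and under-indexing is commonly computed: the share of respondents ascribing a trait to the brand's typical driver, divided by the average share across all brands measured, times 100. All numbers below are hypothetical, not data from the study described here.

```python
# Hypothetical perception data: % of respondents ascribing each trait
# to this brand's typical driver vs. the average across all brands.
brand_pct = {"worldly": 38, "trendsetting": 35,
             "responsible": 12, "genuine": 10, "relaxed": 8}
category_avg_pct = {"worldly": 20, "trendsetting": 18,
                    "responsible": 30, "genuine": 28, "relaxed": 25}

for trait, pct in brand_pct.items():
    index = 100 * pct / category_avg_pct[trait]   # 100 = on par with category
    label = ("over-indexes" if index > 110
             else "under-indexes" if index < 90 else "on par")
    print(f"{trait:>13}: index {index:4.0f} ({label})")
```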

What does it all mean? Interestingly enough, I’ve seen a change in the way my fellow motorists treat me on the road. Take that with a grain of salt as we typically refer to drivers around here as MASSH@^%$ (editor made me type that despite my STET comment on the first draft).

In my previous domestic-brand car, when I put my turn signal on, folks tended to let me change lanes. Now, they race up to box me out. At red lights, other drivers want to race. While my car does have some serious horsepower, I don't drive it that way. I guess if folks don't see me as kind and caring based on the car I drive, why should I expect them to be kind and caring to me on the road?

Marketers beyond the car industry should be thinking about this, too. How are you communicating who your brand’s typical customer is? Is that image relatable? Is it desirable? Your messaging should clearly and effectively communicate who your customer is in the best light—whatever that means for your brand.

Dr. Jay is CMB’s Chief Methodologist and VP of Advanced Analytics and is always up for tackling your most pressing questions. Submit yours and he could answer it in his next blog!

Ask a Question!

 

Topics: AffinID, Dear Dr. Jay

Relatability, Desirability and Finding the Perfect Match

Posted by Dr. Jay Weiner

Tue, Feb 13, 2018


Dear Dr. Jay:

It's Valentine's Day. How do I find the one(s) that are right for me?

-Allison N.


Dear Allison,

In our pursuit of love, we’re often reminded to keep an open mind and that looks aren’t everything.

This axiom also applies to lovestruck marketers looking for the perfect customer. Often, we focus on consumer demographics, but let’s see what happens when we dig below the surface.

For example, let’s consider two men who:

  • Were born in 1948
  • Grew up in England
  • Are on their second marriage
  • Have two children
  • Are successful in business
  • Are wealthy
  • Live in a castle
  • Winter in the Alps
  • Like dogs

On paper, these men sound like they'd have very similar tastes in products and services: they're the same age and nationality, and they share common interests. But when you learn who these men are, you might think differently.

The men I profiled are the Prince of Darkness, Ozzy Osbourne, and Charles, Prince of Wales. While both men sport regal titles and an affinity for canines, they are very different individuals.

Now let's consider two restaurants. Based on proprietary self-funded research, we discovered that both restaurants' typical customers are considered sporty, athletic, confident, self-assured, social, outgoing, funny, entertaining, relaxed, easy-going, fun-loving, and joyful. Their top interests include entertainment (e.g., movies, TV) and dining out. Demographically, their customers are predominantly single, middle-aged men.

One is Buffalo Wild Wings, the other, Hooters. Both seem to appeal to the same group of consumers and would potentially be good candidates for cross-promotions—maybe even an acquisition.

What could we have done to help distinguish between them? Perhaps a more robust attitudinal battery of items or interests would have helped. 

Or, we could look through a social identity lens.

We found that, in addition to assessing customer clarity, measuring relatability and desirability can help differentiate brands (a quick scoring sketch follows these definitions):

  • Relatability: How much do you have in common with the kind of person who typically uses Brand X?
  • Social Desirability: How interested would you be in making friends with the kind of person who typically uses Brand X?
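
To illustrate (with made-up ratings, not our actual survey data), scoring brands on these two dimensions can be as simple as averaging responses on a 1–7 agreement scale:

```python
from statistics import mean

# Hypothetical 1-7 ratings on the two questions above for each brand.
responses = {
    "Buffalo Wild Wings": {"relatability": [5, 6, 4, 5, 6],
                           "desirability": [5, 5, 6, 4, 6]},
    "Hooters":            {"relatability": [3, 2, 4, 3, 2],
                           "desirability": [2, 3, 3, 2, 4]},
}

for brand, dims in responses.items():
    scores = {dim: round(mean(vals), 1) for dim, vals in dims.items()}
    print(brand, scores)
```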

When we looked at the scores on these two dimensions, we saw that Buffalo Wild Wings scores higher than Hooters on both.

Meaning: while the typical Buffalo Wild Wings customer is demographically like a typical Hooters customer, the typical Hooters customer is less relatable and less socially desirable. This isn't necessarily bad news for Hooters; it simply means Hooters has a more targeted, niche appeal than Buffalo Wild Wings.

The main point is that it helps to look beyond demographics and understand identity—who finds you relatable and desirable. As the Buffalo Wild Wings and Hooters example shows, digging deeper into the dimensions of social identity can reveal more nuanced niches within a target audience—potentially uncovering your "perfect match."

Topics: Identity, Dear Dr. Jay, consumer psychology

Dear Dr. Jay: How to Predict Customer Turnover When Transactions Are Anonymous

Posted by Dr. Jay Weiner

Wed, Apr 26, 2017

Dear Dr. Jay:

What's the best way to estimate customer turnover for a service business whose customer transactions are usually anonymous?

-Ian S.


Dear Ian,

You have posed an interesting question. My first response was, "you can't." But as I think about it some more, you might already have some data in-house that could help address the issue.

It appears you are in the mass transit industry. Most transit companies offer single-ride fares and monthly passes, while companies in college towns often offer semester-long passes. Since the passes (monthly, semester, etc.) are typically sold at a discounted rate, we might conclude that all single-fare revenues are turnover transactions.

This assumption is a small leap of faith, as I'm sure some folks just pay the single-fare price and ride regularly. Consider my boss: he travels a fair amount, and even with the discounted monthly pass, it's often cheaper for him to pay the single-ride fare. Me, I like the convenience of not having to make sure I have the correct fare in my pocket, so I just pay the monthly rate, even if I don't use it every day. We both might be candidates for weekly passes if we planned for the weeks we knew we'd be commuting every day versus working from home or traveling. I suspect the only way to get at that dimension would be to conduct some primary research to determine the frequency of ridership and how folks pay.

For your student passes, you probably have enough historical data in-house to compare your average semester pass sales to the population of students using them and figure out whether you see turnover in those sales. That leaves you needing to estimate the turnover on your monthly pass sales.
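
As a rough sketch of that comparison (hypothetical figures, and assuming student IDs let you match pass holders across semesters), semester-over-semester retention gives you turnover directly:

```python
# Hypothetical semester pass data.
fall_pass_holders = 4000     # passes sold in the fall semester
repeat_purchasers = 3100     # fall holders who bought again in the spring
                             # (matched via student IDs, where available)

retention = repeat_purchasers / fall_pass_holders
turnover = 1 - retention
print(f"Retention: {retention:.1%}, estimated turnover: {turnover:.1%}")
```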

You also may have corporate sales that you could look at. For example, here at CMB, employees can purchase their monthly transit passes through our human resources department. Each month, our cards are automatically updated so that we don't have to worry about renewing them every few weeks. I suspect if we analyzed the monthly sales from our transit system (the MBTA) to CMB, we could determine the turnover rate.

As you can see, you could already have valuable data in-house that can help shed light on customer turnover. I’m happy to look at any information you have and let you know what options you might have in trying to answer your question.

Dr. Jay is CMB's Chief Methodologist and VP of Advanced Analytics and holds a Zone 3 monthly pass to the MBTA. If it weren't for the engineer, he wouldn't make it to South Station every morning.

Keep those questions coming! Ask Dr. Jay directly at DearDrJay@cmbinfo.com or submit your question anonymously by clicking below:

Ask Dr. Jay!

Topics: Dear Dr. Jay, data collection, advanced analytics

Dear Dr. Jay: How can we trust predictive models after the 2016 election?

Posted by Dr. Jay Weiner

Thu, Jan 12, 2017

Dear Dr. Jay,

After the 2016 election, how will I ever be able to trust predictive models again?

Alyssa


Dear Alyssa,

Data Happens!

Whether we're talking about political polling or market research, to build good models we need good inputs. Or, as the old saying goes: "garbage in, garbage out." Let's look at all the sources of error in the data itself:

  • First, we make it too easy for respondents to say "yes" and "no," and they try to help us by guessing what answer we want to hear. For example, when we ask purchase intent for a new product idea, the respondent often overstates the true likelihood of buying the product (a rough adjustment sketch follows this list).
  • Second, we give respondents perfect information. We create 100% awareness when we show the respondent a new product concept. In reality, we will never achieve 100% awareness in the market. Some folks live under a rock, and of course the client will never really spend enough money on advertising to even get close.
  • Third, the sample frame may not be truly representative of the population we hope to project to. This is one of the key issues in political polling, because the population consists of those who actually voted (not registered voters). For models to be correct, we need to predict which voters will actually show up to the polls and how they will vote. The good news in market research is that the population is usually not a moving target.
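
Here's a small sketch of how the first two adjustments might look in practice. The deflation factors and awareness level below are purely illustrative assumptions, not CMB calibration values; real factors vary by category and firm.

```python
# Share of respondents per stated-intent category (hypothetical).
intent_shares = {"definitely would buy": 0.15,
                 "probably would buy":   0.30,
                 "might or might not":   0.25}

# Assumed odds that each stated intent converts to a real purchase.
deflation = {"definitely would buy": 0.75,
             "probably would buy":   0.25,
             "might or might not":   0.05}

expected_awareness = 0.40   # realistic in-market awareness, not the
                            # 100% a concept test artificially creates

stated = sum(intent_shares.values())
adjusted = expected_awareness * sum(share * deflation[cat]
                                    for cat, share in intent_shares.items())
print(f"Stated top-3-box intent: {stated:.0%}")    # 70%
print(f"Adjusted trial estimate: {adjusted:.1%}")  # 8.0%
```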

Now, let's consider the sources of error in building predictive models. The first step in building a predictive model is to specify the model. If you're a purist, you begin with a hypothesis, collect the data, test the hypothesis, and draw conclusions. If we fail to reject the null hypothesis, we should formulate a new hypothesis and collect new data. What do we actually do? We mine the data until we get significant results. Why? Because data collection is expensive. One possible outcome of continuing to mine the data for a better model is a model that is only good at predicting the data you have and not very accurate at predicting results from new inputs.
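
You can see this for yourself with a quick sketch: hold out part of the data, and an over-mined model that looks great in-sample falls apart out-of-sample. (Synthetic data; the degree-12 polynomial stands in for a model tuned until something "significant" appeared.)

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(60, 1))
y = 2.0 * X.ravel() + rng.normal(0, 3, size=60)   # true relationship is linear

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

for degree in (1, 12):   # honest model vs. over-mined model
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_tr, y_tr)
    print(f"degree {degree:2d}: train R^2 = {model.score(X_tr, y_tr):.2f}, "
          f"test R^2 = {model.score(X_te, y_te):.2f}")
```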

It is up to the analyst to decide what is statistically meaningful versus what is managerially meaningful. There are a number of websites where you can find "interesting" relationships in data. Some examples of spurious correlations (demonstrated in the sketch after this list) include:

  • Divorce rate in Maine and the per capita consumption of margarine
  • Number of people who die by becoming entangled in their bedsheets and the total revenue of US ski resorts
  • Per capita consumption of mozzarella cheese (US) and the number of civil engineering doctorates awarded (US)
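
A few lines of code show how easily this happens: any two series that merely drift in the same direction over time will correlate strongly, causation or not. The series below are synthetic stand-ins, not the actual margarine or divorce data.

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(15)   # 15 years of annual observations

# Two causally unrelated series that both happen to decline over time.
margarine = 8.0 - 0.25 * t + rng.normal(0, 0.15, t.size)
divorce_rate = 5.0 - 0.10 * t + rng.normal(0, 0.08, t.size)

r = np.corrcoef(margarine, divorce_rate)[0, 1]
print(f"correlation: {r:.2f}")   # near 1.0, purely from the shared trend
```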

In short, you can build a model that’s accurate but still wouldn’t be of any use (or make any sense) to your client. And the fact is, there’s always a certain amount of error in any model we build—we could be wrong, just by chance.  Ultimately, it’s up to the analyst to understand not only the tools and inputs they’re using but the business (or political) context.

Dr. Jay loves designing really big, complex choice models.  With over 20 years of DCM experience, he’s never met a design challenge he couldn’t solve. 

PS – Have you registered for our webinar yet!? Join Dr. Erica Carranza as she explains why, to change what consumers think of your brand, you must change their image of the people who use it.

What: The Key to Consumer-Centricity: Your Brand User Image

When: February 1, 2017 @ 1PM EST

Register Now!

 

 

Topics: Dear Dr. Jay, predictive analytics, methodology, data collection

Dear Dr. Jay: Weighting Data?

Posted by Dr. Jay Weiner

Wed, Nov 16, 2016

Dear Dr. Jay:

How do I know if my weighting matrix is good? 

Dan


Dear Dan,

I’m excited you asked me this because it’s one of my favorite questions of all time.

First, we need to talk about why we weight data in the first place. We weight data because our ending sample is not truly representative of the general population. This misrepresentation can occur because of non-response bias, poor sample source, and even bad sample design. In my opinion, if you go into a research study knowing that you'll end up weighting the data, there may be a better way to plan your sample frame.

Case in point: many researchers intentionally over-quota certain segments and plan to weight these groups down in the final sample. We do this because the incidence of some of these groups in the general population is small enough that, relying on natural fallout, we would not get a readable base without a very large sample. Why wouldn't you just pull a rep sample and then augment these subgroups? In that case, the weight needed to add the augments into the rep sample is 0.

Arguments for including these augments with a very small weight (rather than 0) include the treatment of outliers. For example, if we were conducting a study of investors and wanted to include folks with more than $1,000,000 in assets, we might want to obtain insights from at least 100 of them. In a rep sample of 500, we might only have 25, which means I need to augment this group by 75 respondents. If somehow I managed to get Warren Buffett in my rep sample of 25, he might skew the results. Weighting the full sample of 100 wealthier investors down to 25 reduces the impact of any outlier.
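
In code, the weight for each group is just its target (representative) count divided by its achieved count. Using the investor example above:

```python
# From the example: a rep sample of 500 implies ~25 high-asset investors,
# but we augmented to 100, so each carries a weight of 25/100 = 0.25.
groups = {
    # group: (target count in a rep sample of 500, achieved count)
    "assets >= $1M": (25, 100),
    "assets <  $1M": (475, 475),
}

for name, (target, achieved) in groups.items():
    weight = target / achieved
    print(f"{name}: achieved n = {achieved}, weight = {weight:.2f}, "
          f"weighted n = {achieved * weight:.0f}")
```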

A recent post by Nate Cohn in the New York Times suggested that weighting was significantly impacting analysts' ability to predict the outcome of the 2016 presidential election. In the article, Mr. Cohn points out, "there is a 19-year-old black man in Illinois who has no idea of the role he is playing in this election." This man carried a sample weight of 30. In a sample of 3,000 respondents, he alone accounted for 1% of the poll. In a close race, that might just be enough to tip the scale one way or the other. Clearly, he showed up on November 8th and cast the deciding ballot.

This real-life example suggests that we might want to consider "capping" extreme weights to mitigate the potential for very small groups to influence overall results. But bear in mind that when we do this, our final sample profiles won't be nationally representative, because capping the weight understates the size of the segment being capped. It's a trade-off between a truly balanced sample and making sure the survey results aren't biased.
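
Here's a minimal sketch of capping with simulated weights; the cap of 5 is a judgment call for illustration, not a standard.

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.lognormal(mean=0.0, sigma=0.8, size=3000)
weights *= weights.size / weights.sum()     # normalize to a mean weight of 1

CAP = 5.0                                   # chosen threshold (illustrative)
capped = np.minimum(weights, CAP)
capped *= capped.size / capped.sum()        # re-normalize after capping

print(f"max weight before/after capping: {weights.max():.1f} / {capped.max():.1f}")
print(f"largest respondent's share of the weighted total: "
      f"{weights.max() / weights.sum():.2%} -> {capped.max() / capped.sum():.2%}")
```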

Dr. Jay loves designing really big, complex choice models.  With over 20 years of DCM experience, he’s never met a design challenge he couldn’t solve. 

Keep the market research questions comin'! Ask Dr. Jay directly at DearDrJay@cmbinfo.com or submit yours anonymously by clicking below:

 Ask Dr. Jay!

Topics: Dear Dr. Jay, methodology