What's Love (and NPS Scores) Got to Do with it?

Posted by James Kelley

Wed, Feb 11, 2015

NPS, or Net Promoter Score, is all the rage in market research. Most major corporations have a tracking program built around the statistic, and many companies also gauge their customer service and client relationships against this number. But what is NPS? 

At its root, NPS is a measure of advocacy. In terms of data collection, NPS is a single question usually included in a customer satisfaction or brand tracking survey. The question’s scale ranges from 0-10 but is grouped according to the graphic below. In the aggregate, an NPS score is the percentage of Promoters minus the percentage of Detractors.  

[NPS scale graphic: the 0–10 scale grouped into Detractors (0–6), Passives (7–8), and Promoters (9–10)]
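As a rough illustration of that arithmetic (a minimal sketch with made-up ratings, not CMB's tooling or real study data), here is how the grouping shown in the graphic turns a set of 0–10 responses into a single net score:

```python
# Minimal sketch of the NPS arithmetic using hypothetical 0-10 ratings.
ratings = [10, 9, 9, 8, 7, 10, 6, 4, 9, 2, 8, 10]

promoters = sum(1 for r in ratings if r >= 9)   # 9-10 = Promoter
detractors = sum(1 for r in ratings if r <= 6)  # 0-6  = Detractor

# NPS = % Promoters minus % Detractors
nps = 100 * (promoters - detractors) / len(ratings)
print(f"NPS = {nps:.0f}")
```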

We recently conducted a study taking a deeper look at NPS and at how Promoters differ from Detractors. We surveyed customers from a wide array of industries (travel, eCommerce, telecom, etc.), and we uncovered quite a few statistics that might surprise you. 

What if I told you that only a slim majority (53%) of Promoters love the brands they do business with? Remember: that's not 53% of all consumers, just 53% of Promoters. In fact, only 15% of all consumers use the “L word” at all. This isn’t to say that advocacy isn’t important—word of mouth is an essential part of advertising—but wouldn’t you rather your loudest advocates be your biggest loyalists? If these Promoters found a competitive brand more attractive, are they likely to advocate on that brand’s behalf?  

Here are some more fun facts: 4% of Promoters are only loyal customers during sales or when they have a coupon, and another 5% of Promoters would be happy to purchase from another brand if it were available. Consumers are fickle beasts. 

So, what does all this mean? Are we ready to throw NPS out the window? Certainly not. NPS is great in that it provides a clear measure of how many advocates exist in comparison to Detractors. Think of it as a net tally of all the communications in the world. Scores above 0 mean you have more Promoters than Detractors, and negative scores mean the opposite. But for those companies out there that have the traffic on their side, it’s time to ask: is advocacy enough? Advocacy is great—it provides momentum, gets startups off the ground, and fuels growing brands. But love is better—love builds dynasties. 

James Kelley splits his time at CMB as a Project Manager for the Technology/eCommerce team and as a member of the analytics team. He is a self-described data nerd, political junkie, and board game geek. Outside of work, James works on his dissertation in political science which he hopes to complete in 2016.

Check out our new case study and learn how CMB refreshed Reebok’s global brand tracker, which gives the global fitness giant insight into how the brand is performing, its position in the global marketplace, and whether current brand strategies reach their targets.

Download Case Study Here

Topics: Advanced Analytics, NPS, Customer Experience & Loyalty

Conjoint Analysis: 3 Common Pitfalls and How to Avoid Them

Posted by Liz White

Thu, Jan 08, 2015

If you work in marketing or market research, chances are you’re becoming more and more familiar with conjoint analysis: a powerful research technique used to predict customer decision-making relative to a product or service. We love conjoint analysis at CMB, and it’s easy to see why. When conducted well, a conjoint study provides results that make researchers, marketers, and executives happy. These results:

  • Are statistically robust
  • Are flexible and realistic
  • Describe complex decision-making
  • Are easy to explain and understand

For these reasons, conjoint analysis is one of the premier tools in our analytical toolkit. (If you need a quick introduction or a refresher on conjoint analysis, I recommend Sawtooth Software’s excellent video, which can be found here.) However, as with any analytical approach, conjoint analysis should be applied thoughtfully to realize maximum benefits. Below, I describe three of the most common pitfalls related to conjoint analysis and tips on how to avoid them.

Pitfall #1: Rushing the Design

This is the most common pitfall, but it’s also the easiest one to avoid. As anyone who has conducted a conjoint study knows, coming up with the right design takes time. When planning the schedule for a conjoint analysis study, make sure to leave time for the following steps:

  • Identify your business objective, and work to identify the research questions (and conjoint design) that will best address that objective.
  • Brainstorm a full list of product features that you’d like to test. Collaborate with coworkers from various areas of your organization—including marketing, sales, pricing, and engineering as well as the final decision-makers—to make sure your list is comprehensive and up-to-date.
    • You may also want to plan for qualitative research (e.g., focus groups) at this stage, particularly if you’re looking to test new products or product features. Qualitative research can prioritize what features to test and help to translate “product-speak” into language that customers find clear and meaningful.
  • If you’re looking to model customer choices among a set of competitive products, collect information about your competitors’ products and pricing.
  • Once all the information above is collected, budget time to translate your list of product features into a conjoint design. While conjoint analysis can handle complex product configurations, there’s often work to be done to ensure the final design (a) captures the features you want to measure, (b) will return statistically meaningful results, and (c) won’t be overly long or confusing for respondents (see the sizing sketch after this list).
  • Finally, budget time to review the final design. Have you captured everything you needed to capture? Will this make sense to your customers and/or prospective customers? If not, you may need to go back and update the design. Make sure you’ve budgeted for this as well.
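To make the sizing concern concrete, here is a minimal sketch (with entirely hypothetical attributes and levels, not from any CMB study) that counts how many distinct product profiles a feature list implies. The larger that number, the more carefully the design has to sample from it to stay statistically sound without exhausting respondents:

```python
from itertools import product

# Hypothetical attribute list, for illustration only
attributes = {
    "brand":    ["Brand X", "Brand Y"],
    "color":    ["red", "blue", "black"],
    "warranty": ["1 year", "2 years"],
    "price":    ["$99", "$129", "$159"],
}

profiles = list(product(*attributes.values()))
print(f"{len(profiles)} possible product profiles")  # 2 * 3 * 2 * 3 = 36

# A conjoint design shows each respondent only a subset of these profiles,
# so every added attribute or level multiplies the space the design must cover.
```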

Pitfall #2: Overusing Prohibitions

Most conjoint studies involve a conversation about prohibitions—rules about what features can be shown under certain circumstances. For example:

Say Brand X’s products currently come in red, blue, and black colors while Brand Y’s products are only available in blue and black. When creating a conjoint design around these products, you might create a rule that if the brand is X, the product could be any of the three colors, but if the brand is Y, the product cannot be red.
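As a rough sketch of what that rule does to the design space (reusing the hypothetical brands and colors above, not a real client design), a prohibition simply removes cells from the set of profiles you can test, and anything removed here is also unavailable to the simulator later:

```python
from itertools import product

brands = ["Brand X", "Brand Y"]
colors = ["red", "blue", "black"]

all_profiles = list(product(brands, colors))

# Prohibition: Brand Y is never shown in red
allowed = [(b, c) for b, c in all_profiles if not (b == "Brand Y" and c == "red")]

print(len(all_profiles), "profiles without the prohibition")  # 6
print(len(allowed), "profiles with the prohibition")          # 5
# The missing (Brand Y, red) cell is exactly the scenario the VP asks about below,
# and no amount of post-hoc analysis can recover it from the data.
```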

While it’s tempting to add prohibitions to your design to make the options shown to respondents more closely resemble the options available in the market, overusing prohibitions can have two big negative effects:

  1. Loss of precision when estimating the value of different features for respondents.
  2. Loss of flexibility for market simulations.

The first of these effects can typically be identified in the design phase and fixed by reducing the number of prohibitions included in a model. The second is potentially more damaging as it usually becomes an issue after the research has already been conducted. For example:

We’ve conducted the research above for Brand Y, including the prohibition that if the brand is Y, the product cannot be red. Looking at the results, it becomes clear that Brand X’s red product is much preferred over their blue and black products. The VP of Brand Y would like to know what the impact of offering a Brand Y product in red would be.  Unfortunately, because we did not test a red Brand Y product, we are unable to use our conjoint data to answer the VP’s question.

In general, it is best to be extremely conservative about using prohibitions—use them sparingly and avoid them where possible. 

Pitfall #3: Not Taking Advantage of the Simulator

While the first two pitfalls are focused on conjoint design, the final pitfall is about the application of conjoint results. Once the data from the conjoint analysis has been analyzed, it can be used to simulate virtually any combination of the features tested and predict the impact that different combinations will have on customer decision-making...which is just one of the reasons conjoint analysis is such a valuable tool. All of that predictive power can be distilled into a conjoint simulator that anyone—from researchers to marketers to C-suite executives—can use and interpret.
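The simulators CMB delivers are far richer than this, but as a hedged illustration of the underlying idea, a share-of-preference simulator applies each respondent's estimated part-worth utilities to whatever product configurations you define and aggregates the predicted choices. The utilities and products below are invented for illustration:

```python
import math

# Invented part-worth utilities for two hypothetical respondents (in practice these
# are estimated from the conjoint data, e.g. via hierarchical Bayes).
utilities = [
    {"Brand X": 0.8, "Brand Y": 0.2, "red": 0.5, "blue": 0.1, "$99": 0.9, "$129": 0.3},
    {"Brand X": 0.1, "Brand Y": 0.7, "red": 0.2, "blue": 0.6, "$99": 0.8, "$129": 0.4},
]

# A "what-if" scenario: two product configurations to compare
products = {
    "X red $129": ["Brand X", "red", "$129"],
    "Y blue $99": ["Brand Y", "blue", "$99"],
}

shares = {name: 0.0 for name in products}
for person in utilities:
    # Total utility of each product for this respondent
    totals = {name: sum(person[f] for f in feats) for name, feats in products.items()}
    # Logit (share-of-preference) rule: choice probability proportional to exp(utility)
    denom = sum(math.exp(u) for u in totals.values())
    for name, u in totals.items():
        shares[name] += math.exp(u) / denom / len(utilities)

for name, share in shares.items():
    print(f"{name}: {share:.0%} predicted share")
```

Swapping in different configurations and re-running is the "what-if" exercise described below.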

At CMB, the clients I’ve seen benefit most from conjoint analysis are the clients that take full advantage of the simulators we deliver, rather than simply relying on the scenarios created for reporting. Once you receive a conjoint simulator, I recommend the following:

  1. Distribute copies of the simulator to all key stakeholders.
  2. Have the simulator available when presenting the results of your study, and budget time in the meeting to run “what-if” scenarios then and there. This can allow you to leverage the knowledge in the room in real time, potentially leading to practical and informed conclusions.
  3. Continue to use your simulator to support decision-making even after the study is complete, using new information to inform the simulations you run. A well-designed conjoint study will continue to have value long after your project closes.

Liz is a member of the Analytics Team at CMB, and she can’t wait to hear your research questions!

Topics: Advanced Analytics, Research Design

Discrete Choice and the Path to a Car Purchase

Posted by Heidi Hitchen

Wed, Jun 11, 2014


One chilly night in February, I was heading home from a friend’s birthday festivities when my car just stopped working. I had just enough oomph and momentum from the hill I was on to pull off to the side of the road. I found myself stranded in the middle of the city, waiting for a tow truck until 4AM and vowing to myself the whole time that I wouldn’t deal with this clunker anymore. It was time for a new car. During the next two weeks, without wheels, I did my research on the Internet and made my way over to a local Toyota dealership. I walked in knowing exactly what I wanted: a 2014 green Corolla. I even knew the various payment and financing options I was prepared for. And wouldn’t you know it—I ended up getting exactly what I said I wanted.

As easy as that sounds, my path wasn’t straight to the doors of the Toyota dealership. I had gone through a variety of different makes, models, financing options, and colors. At the end of researching each car, I asked myself not only if I would really buy this car, but also if I would truly be happy with it. It wasn’t until I asked myself this question for the first time that I realized I was essentially creating my own Discrete Choice Measurement (DCM), specifically a Dual-Choice DCM (DCDC).

DCM is a technique that presents several configurations of product features to respondents and asks them to pick which configuration they would most prefer. In a Dual-Choice DCM, a follow-up question is asked to determine whether the respondent would actually buy the preferred package. This second question is crucial—I might choose a Lamborghini but there’s little chance (OK, no chance) that I will actually purchase one.
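To make the mechanics concrete, here is a minimal sketch of what a single dual-choice task looks like in data terms. The configurations are invented for illustration; real DCM designs and the estimation behind them (e.g., hierarchical Bayes) are far more involved:

```python
# One hypothetical dual-choice task, for illustration only.
configurations = [
    {"make": "Toyota Corolla", "color": "green",  "financing": "60-month loan",  "price": 19000},
    {"make": "Honda Civic",    "color": "blue",   "financing": "36-month lease", "price": 21000},
    {"make": "Lamborghini",    "color": "yellow", "financing": "cash",           "price": 240000},
]

# Question 1: which configuration would you most prefer?
preferred = configurations[2]   # a respondent might well pick the Lamborghini...

# Question 2 (the dual-choice follow-up): would you actually buy it?
would_buy = False               # ...while admitting they would never purchase it

print("Preferred:", preferred["make"], "| Would actually buy:", would_buy)
# Modeling both answers is what lets the analysis separate raw preference
# from realistic purchase intent.
```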

Dual-Choice DCM scenarios are the gold standard for product development work and can lend more accurate insights into a buying scenario by:

  • more closely representing a consumer’s purchase decision
  • helping us better understand consumer preferences
  • more accurately reflecting market potential
  • dissecting the product into pieces, which allows us to measure price sensitivity and willingness to pay for the product as a whole as well as individual components
  • simulating market interest in thousands of potential product packages for product optimization, since the analysis examines how a product can be changed to perform better by identifying (and tweaking) the individual product features that affect purchase decisions

Being able to produce more realistic results is obviously an important part of any research, and it just goes to show that DCMs can truly help with any decision-making process. Running a DCM in my head prior to purchasing my car was truly helpful, so it’s no surprise that our clients often rave about the DCMs and Dual-Choice DCMs in our analytics program.

Heidi is an Associate Researcher who graduated from Quinnipiac University with a dual-degree in Marketing and Over-Involvement. After realizing she lacks hobbies now that student organizations don’t rule her free time, Heidi is taking sailing classes and looks forward to smooth sailing on the Charles River by the end of the summer.

Want to know more about our advanced analytic techniques, including our innovative Tri-Choice Approach? Let us know and we’ll be happy to talk through how we choose the right techniques to uncover critical consumer insights. Contact us.

Topics: Advanced Analytics, Research Design

Living in a World of Significance

Posted by Nick Pangallo

Wed, Apr 02, 2014


Guess what? It’s 2014! The year of Super Bowl XLVIII©, the 100th anniversary of the start of World War I, the 70th anniversary of D-Day, and a whole host of other, generally not-that-impactful events, anniversaries, and changes. One event that will happen in 2014, though, is something which happens every two years: U.S. national elections.

This seems like an odd way to start a blog, but bear with me for a moment. Show of hands out there (ed. note: you’re welcome to actually raise your hand if you want, but I wouldn’t): how many of you readers have, at some point, become tired of the relentless political horse-race, always talking about who’s ahead and who’s behind for months and years on end? I know I have, and chances are it’s happened to you too, but I’m going to ask that we all take a deep breath and dive once more into the fray.

The question of “who’s ahead” and “who’s behind” brings us to our discussion of statistical significance.  I’m going to talk today about how it works, how it can be used, and why it might not be quite as beneficial as you might think.

First, a quick refresher: when we take survey responses, test results, etc. from a sample of people that we think represents some broader population, there is always the risk that whatever results we see might be due to random chance instead of some other factor (like actual differences of opinion between two groups). To control for this, we can conduct significance testing, which tells us the likelihood that the result we have obtained is due to random chance, instead of some other real, underlying factor. I won’t bore you with the details of terms like p, α, one- vs. two-tailed tests and the like, but know that the methodology is sound and can be looked up in any AP-level statistics textbook.

Most organizations assume an “error range” of 5%, meaning that a data finding is statistically significant if the odds are 5% (or less) that the results are due to random chance. So, if we run significance testing on Millennials vs. Gen X’ers in a survey, and we find that the two are significantly different, we are saying there is a 5% (or less) chance that those differences are just random, and not due to actual underlying opinions, or price-sensitivity, or political beliefs, or receptiveness to that new hair-growth prescription, or whatever else you might be testing.

Now, if you have a huge data set and a fairly advanced statistical program, calculating significance is easy. But since most people don’t have access to these tools, there is another, much simpler way to think about significance: the margin of error. The margin of error is a simple way of determining how much higher or lower a result can be before it is considered significantly different. For instance, if your margin of error was ± 5%, and your data points were 60% and 49%, your data is (likely) significantly different; if your data points are 55% and 51%, they are not.

This brings us back to the political analogy; calculating the margin of error is how we determine whether Politician X is ahead of Politician Y, or vice-versa.

Let’s say, for example, a poll of 1,000 registered voters was conducted, with a sound methodology, and asks which of two candidates respondents support (assume no other options are presented in this circumstance, a small but notable difference for a future blog). We find that 48% support Politician X and 52% Politician Y. Because the sample size is 1,000, the margin of error is ± 3.1%. Since the difference between the two politicians is less than twice the margin of error (Politician X’s true share might be as high as 51.1%, and Politician Y’s as low as 48.9%, so their ranges overlap), you would hear this reported as a “statistical tie” in the news. That’s because news organizations won’t report one candidate as ahead of the other as long as the two are within that acceptable margin of error.

So that’s the political world, and there are many reasons networks and polling organizations choose to behave this way (aversion to being wrong, fear of being seen as taking sides, and fear of phone calls from angry academics, among others).  But in the research world, we don’t usually have nice, round sample sizes and two-person comparisons – and that’s why relying on statistical significance and margin of error when making decisions can be dangerous.

Let’s go back to that political poll.  The original sample size was N=1,000 and produced a margin of error of ± 3.1%.  Let’s see what happens when we start changing the sample size:

  • N=100: ± 9.8%
  • N=200: ± 6.9%
  • N=500: ± 4.4%
  • N=750: ± 3.6%
  • N=1,000: ± 3.1%
  • N=1,500: ± 2.5%
  • N=2,000: ± 2.2%
  • N=4,000: ± 1.6%

Notice the clear downward trend: as sample sizes grow, margins of error shrink, but with diminishing returns.
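These figures follow the standard margin-of-error formula for a proportion at 95% confidence, using the conservative worst case of a 50/50 split: 1.96 × √(0.25/N). A quick sketch to reproduce the table above:

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """95% margin of error for a proportion, using the conservative p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

for n in [100, 200, 500, 750, 1000, 1500, 2000, 4000]:
    print(f"N={n:>5,}: ± {margin_of_error(n):.1%}")
```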

Now, we at CMB would advocate for larger sample sizes, since they allow more freedom within the data (looking at multiple audiences, generally smaller error ranges, etc.).  It’s no secret that larger sample sizes are better.  But I’ve had a few experiences recently that led me to want to reinforce a broader point: just because a difference is significant doesn’t make it meaningful, and vice versa.

With a sample size of N=5,000, a difference of 3% between Millennials and Gen X’ers would be significant, but is a 3% difference ever really meaningful in survey research?  From my perspective, the answer is a resounding no.  But if your sample size is N=150, a difference of 8% wouldn’t be significant…but eight percentage points is a fairly substantial difference.  Sure, it’s possible that your sample is slightly skewed, and with more data that difference would shrink.  But it’s more likely that this difference is meaningful, and by looking at only statistical significance, we would miss it. And that’s the mistake every researcher needs to avoid.
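A quick way to sanity-check those two claims is a standard two-proportion z-test at the 95% level. The sketch below assumes the quoted sample sizes are per group and uses made-up base rates around 50%, since the post doesn't specify either:

```python
import math

def z_for_difference(p1, p2, n_per_group):
    """Two-proportion z statistic for equal group sizes (normal approximation)."""
    se = math.sqrt(p1 * (1 - p1) / n_per_group + p2 * (1 - p2) / n_per_group)
    return abs(p1 - p2) / se

# 3-point gap with N=5,000 per group: statistically significant, but tiny
print(f"z = {z_for_difference(0.50, 0.53, 5000):.2f} (significant if > 1.96)")

# 8-point gap with N=150 per group: not significant, yet substantively large
print(f"z = {z_for_difference(0.46, 0.54, 150):.2f} (significant if > 1.96)")
```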

If I can leave you with one abiding maxim from today, it’s this: assuming some minimum sample size (75, 100, whatever makes you comfortable), big differences usually are meaningful, small differences usually are not.  Significance is a nice way to be certain in your results, but we as researchers need to support business decisions with meaningful findings, not (just) significant ones.

Nick Pangallo is a Project Manager in CMB’s Financial Services, Healthcare, and Insurance practice.  He has a meaningful-but-not-significant man-crush on Nate Silver.

Topics: Advanced Analytics, Research Design

What they Didn’t Teach you in Marketing Research Class: Sig Testing

Posted by Amy Maret

Mon, Feb 03, 2014


As a recent graduate, and entrant into the world of professional market research, I have some words of wisdom for college seniors looking for a career in the industry. You may think your professors prepared you for the “real world” of market research, but there are some things you didn’t learn in your Marketing Research class. So what’s the major difference between research at the undergrad level and the work of a market researcher? In the real world, context matters, and there are real consequences to our research. One example of this is how we approach testing for statistical significance.

Starting in my freshman year of college, I was taught to abide by a concept that I came to think of as the “Golden Rule of Research.” According to this rule, if you can’t be 95% or 90% confident that a difference is statistically significant, you should consider it essentially meaningless.

Entering the world of Market Research, I quickly found that this rule doesn’t always hold when the research is meant to help users make real business decisions. Although significance testing can be a helpful tool in interpreting results, ignoring a substantial difference simply because it does not cross the thin line into statistical significance can be a real mistake.

Our Chief Methodologist, Richard Schreuer, gives this example of why this “Golden Rule” doesn’t always make sense in the real world:

Imagine a manager gets the results of a concept test in which a new ad outperforms the old by a score of 54% to 47%; sig testing shows our manager can be 84% confident the new ad will do better than the old ad. The problem in the market research industry is that we typically assess significance at the 95% or 90% level; if the difference between scores doesn’t pass this strict threshold, then it is often assumed no difference exists.

However, in this case, we can be very sure that the new ad is not worse than the old (there’s only a 1% chance that the new ad’s score is below the old). So, the manager has an 84% chance of improving her advertising and a 1% chance of hurting it if she changes to the new creative—pretty good odds. The worst scenario is that the new creative will perform the same as the old. So, in this case, there is real upside in going with the new creative and little downside (save the production expense). But if the manager relied on industry-standard significance testing, she would likely have dismissed the creative immediately.
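For readers who want to see where a figure like "84% confident" comes from, here is a hedged sketch using the normal approximation for a difference of proportions. The per-cell sample size below is an assumption for illustration only, since the example doesn't state one:

```python
from math import sqrt
from statistics import NormalDist

def confidence_new_beats_old(p_new, p_old, n_per_cell):
    """Approximate confidence that the new ad truly outperforms the old one."""
    se = sqrt(p_new * (1 - p_new) / n_per_cell + p_old * (1 - p_old) / n_per_cell)
    z = (p_new - p_old) / se
    return NormalDist().cdf(z)

# 54% vs. 47%, assuming roughly 100 respondents per cell (hypothetical)
conf = confidence_new_beats_old(0.54, 0.47, 100)
print(f"Confidence the new ad beats the old: {conf:.0%}")  # ~84% with this assumed n
```

The point is not the exact number but that an 84% chance of improvement carries real information even though it falls short of the 90% or 95% cutoff.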

At CMB, it doesn’t take long to get the sense that there is something much bigger going on here than just number crunching. Creating useable, meaningful research and telling a cohesive story require more than just an understanding of the numbers themselves; it takes creativity and a solid grasp on our clients’ businesses and their needs. As much as I love working with the data, the most satisfying part of my job is seeing how our research and recommendations support real decisions that our clients make every day, and that’s not something I ever could have learned in school.

Amy is a recent graduate from Boston College, where she realized that she had a much greater interest in statistics than the average student. She is 95% confident that this is a meaningful difference.

 

Join CMB's Amy Modini on February 20th, at 12:30 pm ET, to learn how we use discrete choice to better position your brand in a complex changing market. Register here.

 

Topics: Chadwick Martin Bailey, Advanced Analytics, Methodology, Business Decisions