Nick Pangallo

Recent Posts

Ladder Up: What My New Prius Reminded Me About Brand Positioning

Posted by Nick Pangallo

Thu, Apr 02, 2015

Did I snag you with the title? I hope so—it took me quite a while to come up with it. As our regular readers and esteemed clients know, each of CMB’s employees contributes to our blog by writing at least once annually. In the past, I’ve used my posts to tackle the real-world applications of complex mathematical topics, including statistical significance, Maximum-Difference scaling, and stated vs. derived importance.

Today, though, I’d like to introduce you to my true research passion: brand positioning. My first job in the research field took me all over the world as my team and I worked to determine and deliver the most effective positioning for a multinational insurance company. I’ve been hooked ever since.

Most of you reading this have probably heard the term “positioning” before, but for those who haven’t, here’s a definition from the guys who (quite literally) invented the field: “An organized system for finding a window in the mind. It is based on the concept that communication can only take place at the right time and under the right circumstances.” - Ries, A. and Trout, J. (1981), Positioning: The Battle for Your Mind.

A simpler definition, also from Jack Trout, would be this: “the place a product, brand, or group of products occupies in consumers' minds, relative to competing offerings.” Pretty simple, right? You define your brand as the collection of thoughts, feelings, and behaviors you want your consumers (whomever they may be) to have about you, relative to your key competitors (perhaps the most famous “opposition branding” of this sort is 7 Up’s classic “The Uncola”).

So, we need to identify the thoughts, feelings, and behaviors we want consumers to have and then make a big, direct marketing push to communicate those aspects to them. Right? (Obviously, there’s a lot more to it than that.) In a future blog post, I’ll tackle aspects like value statements, foundational benefits, key goals, and the like, but for now, I want to focus on one major sticking point I keep seeing come up: emotion.

These days, marketers talk endlessly about “big data” and “connecting on an emotional level.” How can we convince so-and-so to love our brand? What emotions do we want associated with our brand? Are we happy? Exciting? Stoic?

Research firms, including ours, often tackle these questions and try to help clients be seen for the right emotions. But here’s the rub: unless your product or company is brand-spankin’-new, the basic emotional reactions to your brand are already defined. Try as we might, changing an idea in someone’s mind is by far the most difficult task in all of marketing, and if people in a focus group are saying your brand reminds them of a Volvo, the odds that you can convince them to think of your brand as a Ferrari are virtually nil. 

So how can brands connect with consumers on an emotional level, convey the right emotions, and do so effectively in an already over-communicated world? Well, that answer would be too long for this blog post, but let me start with a simple analogy: brand positionings can be thought of as a ladder—you have to climb one rung before you can move on to the next. The very bottom is your foundation (what industry you’re in, when you were founded, etc. – just the facts, Jack), and the very top is your emotional connection to your consumers, inasmuch as one exists. In between is an array of needs, including functional benefits, the value statement, goals, and a few others I’ll cover in a future blog.

Brands have to build up to that emotional connection, which is usually the most difficult component of branding (and why it’s at the top of the ladder). Brands or products can do so by delivering across the entire spectrum in a consistent, thorough way that speaks to the emotion you want to own. If you have major delivery issues, you won’t be thought of as reliable. If you’ve only existed for 2 months, you probably can’t own trustworthy. Oil companies can’t be fun. If you want to own reliability, you need top-level customer delivery, including responsive employees, a reputation for customer service, and a culture that rewards proactivity. You get the idea.

By now, you’re probably wondering what this has to do with my new Prius (good timing!). Outdoorsy, environmentally-friendly folk like myself have long been devoted fans of Toyota’s original hybrid vehicle for its emissions-slashing, fuel-saving engine, among other things. But those aren’t emotions, and no one could think the Prius’ historic sales records were achieved without more than a dash of emotional connection thrown in.

So how does the Prius make me feel? Like I’m making a difference. The “hybrid” stamp on the back reminds me not to be wasteful. The constantly-cycling energy meter not only encourages me to drive less aggressively, but also turns reducing emissions into a fun little game I play driving around Boston. (54 mpg? Psssh. I can do better.) A solar-powered climate roof reminds me not to waste energy and makes me smile when it unexpectedly turns on. A cynic might say that what the Prius really does is allow people to feel better about themselves, and I don’t deny there’s at least a kernel of truth there, too. 

You can see how the positioning of the Prius fits the ladder example: the foundation is the hybrid engine, 14 years of existence, and Toyota brand. Functional benefits include cutting gas costs and reducing emissions (the proof points are well-known) while supporting the goal of living a low-emission life. All of these things add up to that simple, good feeling I have whenever I slide behind the wheel, which connects me with the product in a way that the individual features cannot. The cycling energy monitor is cool, but I wouldn’t have assigned point values for efficiently driving away from stoplights around my neighborhood if it was just a toy. The solar roof not only helps keep the car cool in the summer, it reminds me to be energy-conscious at home, too. Seamless alignment between functional and emotional.

Let this be the first lesson then: brands can own emotions, but not without much effort. If you want someone to love your brand, you have to give them reasons why they should, and all of those reasons need to work in tandem with one another to create a whole greater than the sum of its parts. In a future post, I’ll show you how.

Nick Pangallo is the Senior Project Manager on CMB’s Financial Services, Insurance, Travel, and Hospitality team. He’s an avid poker player and an occasional lecturer at Boston College’s Carroll School of Management. You can follow him on Twitter @NAPangallo, though be warned: he often tweets about the Buffalo Bills. 

Topics: Emotional Measurement, Brand Health & Positioning

Living in a World of Significance

Posted by Nick Pangallo

Wed, Apr 02, 2014


Guess what? It’s 2014! The year of Super Bowl XLVIII, the 100th anniversary of the start of World War I, the 70th anniversary of D-Day, and a whole host of other, generally not-that-impactful events, anniversaries, and changes. One event that will happen in 2014, though, is something which happens every two years: U.S. national elections.

This seems like an odd way to start a blog, but bear with me for a moment. Show of hands out there (ed. note: you’re welcome to actually raise your hand if you want, but I wouldn’t): how many of you readers have, at some point, become tired of the relentless political horse-race, always talking about who’s ahead and who’s behind for months and years on end? I know I have, and chances are it’s happened to you too, but I’m going to ask that we all take a deep breath and dive once more into the fray.

The question of “who’s ahead” and “who’s behind” brings us to our discussion of statistical significance.  I’m going to talk today about how it works, how it can be used, and why it might not be quite as beneficial as you might think.

First, a quick refresher: when we take survey responses, test results, etc. from a sample of people that we think represents some broader population, there is always the risk that whatever results we see might be due to random chance instead of some other factor (like actual differences of opinion between two groups). To control for this, we can conduct significance testing, which tells us the likelihood that the result we have obtained is due to random chance, instead of some other real, underlying factor. I won’t bore you with the details of terms like p, α, one- vs. two-tailed tests and the like, but know that the methodology is sound and can be looked up in any AP-level statistics textbook.

Most organizations use a 5% significance threshold, meaning that a data finding is statistically significant if the odds are 5% (or less) that the results are due to random chance. So, if we run significance testing on Millennials vs. Gen X’ers in a survey, and we find that the two are significantly different, we are saying there is a 5% (or less) chance that those differences are just random, and not due to actual underlying opinions, or price-sensitivity, or political beliefs, or receptiveness to that new hair-growth prescription, or whatever else you might be testing.
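For the curious, here’s a minimal sketch of what that testing looks like in practice: a two-tailed, two-proportion z-test at the 5% level. The group names and agreement rates below are invented for illustration, not from any real study:

```python
import math

def two_prop_z_test(p1, n1, p2, n2):
    """Two-tailed two-proportion z-test at the 5% level: are the two
    groups' rates different beyond what random chance would explain?"""
    # Pooled proportion under the null hypothesis of no real difference
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # 1.96 is the two-tailed critical value for a 5% threshold
    return abs(z) > 1.96

# Hypothetical: 45% of 400 Millennials vs. 38% of 400 Gen X'ers agree
print(two_prop_z_test(0.45, 400, 0.38, 400))  # → True (significant)
# A smaller 2-point gap on the same samples would not clear the bar
print(two_prop_z_test(0.42, 400, 0.40, 400))  # → False
```

This is the textbook version you’d find in that AP-level statistics book; real survey work layers on weighting and multiple-comparison corrections.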

Now, if you have a huge data set and a fairly advanced statistical program, calculating significance is easy. But since most people don’t have access to these tools, there is another, much simpler way to think about significance: the margin of error. The margin of error is a simple way of determining how much higher or lower a result can be before it is considered significantly different. For instance, if your margin of error was ± 5%, and your data points were 60% and 49%, your data is (likely) significantly different; if your data points are 55% and 51%, they are not.

This brings us back to the political analogy; calculating the margin of error is how we determine whether Politician X is ahead of Politician Y, or vice-versa.

Let’s say, for example, a poll of 1,000 registered voters was conducted, with a sound methodology, and asks which of two candidates respondents support (assume no other options are presented in this circumstance, a small but notable difference for a future blog). We find that 48% support Politician X and 52% Politician Y. Because the sample size is 1,000, the margin of error is ± 3.1%. Since the difference between the two politicians is less than twice the margin of error (Politician X’s share might be as high as 51.1% and Politician Y’s as low as 48.9%, so their ranges overlap), you would hear this reported as a “statistical tie” in the news. That’s because news organizations won’t report one candidate as ahead of the other as long as the two are within that margin of error.

So that’s the political world, and there are many reasons networks and polling organizations choose to behave this way (aversion to being wrong, fear of being seen as taking sides, and fear of phone calls from angry academics, among others).  But in the research world, we don’t usually have nice, round sample sizes and two-person comparisons – and that’s why relying on statistical significance and margin of error when making decisions can be dangerous.

Let’s go back to that political poll.  The original sample size was N=1,000 and produced a margin of error of ± 3.1%.  Let’s see what happens when we start changing the sample size:

- N=100: ± 9.8%
- N=200: ± 6.9%
- N=500: ± 4.4%
- N=750: ± 3.6%
- N=1,000: ± 3.1%
- N=1,500: ± 2.5%
- N=2,000: ± 2.2%
- N=4,000: ± 1.6%

Notice the clear downward trend: as sample sizes grow, margins of error shrink, but with diminishing returns.
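Those figures fall straight out of the standard formula for a proportion at 95% confidence, using the conservative worst-case assumption of a 50/50 split. A quick sketch to reproduce the table (results match the list above, give or take a tenth of a point from rounding):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion, in percentage points.
    p = 0.5 is the conservative worst case typically used for polls."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

for n in [100, 200, 500, 750, 1000, 1500, 2000, 4000]:
    print(f"N={n}: ± {margin_of_error(n):.1f}%")
```

Note the square root in the denominator: that’s exactly why doubling the sample size doesn’t halve the margin of error, and why the returns diminish so quickly.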

Now, we at CMB would advocate for larger sample sizes, since they allow more freedom within the data (looking at multiple audiences, generally smaller error ranges, etc.).  It’s no secret that larger sample sizes are better.  But I’ve had a few experiences recently that led me to want to reinforce a broader point: just because a difference is significant doesn’t make it meaningful, and vice versa.

With a sample size of N=5,000, a difference of 3% between Millennials and Gen X’ers would be significant, but is a 3% difference ever really meaningful in survey research?  From my perspective, the answer is a resounding no.  But if your sample size is N=150, a difference of 8% wouldn’t be significant…but eight percentage points is a fairly substantial difference.  Sure, it’s possible that your sample is slightly skewed, and with more data that difference would shrink.  But it’s more likely that this difference is meaningful, and by looking at only statistical significance, we would miss it. And that’s the mistake every researcher needs to avoid.
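The two scenarios above can be checked with the same z-test logic. A sketch, assuming equal group sizes and illustrative proportions (the 50%-vs-53% and 46%-vs-54% splits are my own stand-ins for the 3- and 8-point gaps):

```python
import math

def is_significant(p1, p2, n_per_group):
    """Two-tailed two-proportion z-test at the 5% level,
    assuming two groups of equal size."""
    pooled = (p1 + p2) / 2
    se = math.sqrt(pooled * (1 - pooled) * 2 / n_per_group)
    return abs(p1 - p2) / se > 1.96

# 3-point difference on huge samples: statistically significant...
print(is_significant(0.50, 0.53, 5000))  # → True
# ...but an 8-point difference on small samples is not
print(is_significant(0.46, 0.54, 150))   # → False
```

Which is precisely the trap: the tiny, probably-not-meaningful gap passes the test, while the substantial one fails it.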

If I can leave you with one abiding maxim from today, it’s this: assuming some minimum sample size (75, 100, whatever makes you comfortable), big differences usually are meaningful, small differences usually are not.  Significance is a nice way to be certain in your results, but we as researchers need to support business decisions with meaningful findings, not (just) significant ones.

Nick Pangallo is a Project Manager in CMB’s Financial Services, Healthcare, and Insurance practice.  He has a meaningful-but-not-significant man-crush on Nate Silver.

Topics: Advanced Analytics, Research Design

Want to Lose Weight? Try a Tradeoff Exercise!

Posted by Nick Pangallo

Tue, Nov 05, 2013

A few weeks ago, I found myself seated at a trendy Mexican restaurant in Minneapolis, an eager participant at one of the more enjoyable business lunches I’ve encountered lately. As you might expect, the topic quickly turned from the vagaries of the marketing life to the weather, summer vacation stories, the gym, far-too-early holiday planning (I’m looking at you, Target), and then, unexpectedly, to dieting. It was there where I learned that Jim Garrity, SVP and head of CMB’s Financial Services, Insurance & Healthcare Practice, was quite a fan of Weight Watchers, one of the few truly successful long-term dieting options out there – it earned Consumer Reports’ highest mark for nutrition analysis.

Jim had become a devotee of the Weight Watchers PointsPlus® plan, which, coming from someone I’d fancied a meat-and-potatoes man like myself, came as a bit of a surprise. But as Jim continued to discuss the program and what he enjoyed about it, I realized that PointsPlus® was nothing more than another example of CMB’s brand and product development bread-and-butter: the tradeoff exercise.

For those not familiar with how it works, PointsPlus® assigns a numeric point-value to each meal/snack/dessert/shake/whatever you can buy through the program, based on protein, carbohydrates, fat and fiber content. The system will then give you a daily points target, taking into account your height, weight, age and gender; read more about it here. Simply stay at or under your target, and…well, that’s it. 

This, as you might expect, presents our would-be dieter with a daily flurry of choices.  Do you have the bagel with cream cheese or the fresh fruit for breakfast? Go with a savory salad or a slim sandwich for lunch? And how do these choices affect what you have “left” at dinner?  Sorry to burst your bubble, eager reader, but if you want that red velvet cake, you’re going to have to pass on the cream cheese.
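That daily flurry of choices is just a constrained budget, and a tiny sketch makes the opportunity cost concrete. To be clear, every point value and the daily target below are made up for illustration; they are not real PointsPlus® figures:

```python
# Hypothetical point values -- NOT actual PointsPlus(R) figures
menu = {
    "bagel with cream cheese": 9,
    "fresh fruit": 0,
    "savory salad": 7,
    "slim sandwich": 8,
    "red velvet cake": 9,
    "grilled chicken dinner": 10,
}
daily_target = 26  # also hypothetical

def fits_budget(choices, target=daily_target):
    """Does this combination of meals stay at or under the daily target?"""
    return sum(menu[c] for c in choices) <= target

# The bagel crowds out the cake...
print(fits_budget(["bagel with cream cheese", "savory salad",
                   "grilled chicken dinner", "red velvet cake"]))  # 35 pts → False
# ...while the fruit leaves room for it
print(fits_budget(["fresh fruit", "savory salad",
                   "grilled chicken dinner", "red velvet cake"]))  # 26 pts → True
```

Choosing the bagel doesn’t just cost nine points; it costs you the cake you could have had instead. That’s opportunity cost in one line of arithmetic.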

Built on the economic principle of opportunity cost (the idea that to buy a product or undertake an activity, that product/activity must necessarily replace something else you might have bought or engaged in), these sorts of tradeoffs are exactly what we seek to model when researchers help develop brands, products, messaging campaigns, or basically anything else with a give-and-take.  You can’t be the industry leader and try harder. If you want to offer premium customer service, you’re going to have to charge a bit more. Anyone who’s ever made a budget, whether on a ledger or www.Mint.com, understands that if you go to the concert, you might have to skip the movies this week.

Our job, then, is to master the usage of methodological techniques which replicate these real-world tradeoffs within a research setting. Almost all of my clients take advantage of Maximum-Difference Scaling, an exercise where participants select which items/features/messages/etc. they like most and least, four options at a time. By forcing this tradeoff, we can accurately prioritize huge lists of information in short order, not only rank-ordering but also sizing the distance between items. Many also use allocations, which allow participants to assign different values to a set of given options, but knowing there are only so many points to go around (often 100). My brand and product development engagements often utilize Discrete Choice (or Conjoint) Modeling, an extremely powerful form of tradeoff exercise where participants must choose between holistic brand positionings or fully-configured products. We can then deconstruct their decisions, analyze the tradeoffs, and find those pesky drivers of decision-making that are the foundation of marketing as we know it.
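To make the MaxDiff idea concrete, here’s a rough sketch of the simplest possible scoring: best-minus-worst counts across screens. Production analyses typically use hierarchical Bayes or logit models rather than raw counts, and the items and picks below are invented:

```python
from collections import Counter

# Each tuple: (item picked as "most important", item picked as "least
# important") from one four-item MaxDiff screen -- invented responses
responses = [
    ("price", "brand"), ("security", "price"), ("price", "service"),
    ("security", "brand"), ("service", "brand"), ("price", "brand"),
]

best = Counter(most for most, _ in responses)
worst = Counter(least for _, least in responses)

# Best-minus-worst count score per item: higher = more important,
# and the scores also size the distance between items
items = set(best) | set(worst)
scores = {item: best[item] - worst[item] for item in items}
for item, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(item, score)
```

Because every “most” pick is necessarily a “least” for something else on the screen, the forced tradeoff spreads the items out in a way a simple importance rating never would.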

Think about it this way: if you’re still with me, you’ve traded off the time you could’ve spent watching E!, playing Candy Crush, or checking Facebook in favor of a little lesson in economics and research. So what’ll it be, then–the pasta or the seafood?

Nick is a Project Manager within CMB’s Financial Services, Insurance & Healthcare Practice, as well as a poker-playing behavioral economics and game theory nerd. You can follow him on Twitter @NAPangallo. He would always choose the pasta.

Topics: Advanced Analytics, Research Design

Let's Talk about Importance, Baby

Posted by Nick Pangallo

Wed, Dec 05, 2012

If you’ll indulge me, I’d like to begin this post with a cheap trick: how many of you marketers, advertisers, researchers, corporate strategists and consultants out there have been asked to “find out what’s important to [some audience]?”  While I don’t actually expect any of you are sitting there with a hand raised in the air (kudos if you are, though), I’m betting you’re probably at least nodding to yourself.  Whatever you’re selling, the basic steps to market a product are simple: figure out who wants it, what’s important to them, and how to communicate that your product delivers on whatever they find to be important to encourage some behavior.  No one ever said marketing was rocket science.

But no one ever said it was easy, either.  And determining what’s actually important to your customer isn’t merely another task to check off, it’s a critical component on which a misstep could derail years of effort and potentially billions in R&D spending.  I always tell my clients that you can design an absolutely perfect product, a masterpiece of form and function, but if you can’t communicate why it’s important to someone, there’s no reason for anyone to buy it.  As my esteemed colleague Andrew Wilson will tell you, not even sliced bread sold itself.

So that brings us back to that original, fundamental question: how do we “find out what’s important?”  The simplest method, of course, is simply to ask.  If you’ve ever looked at a research questionnaire, chances are you’ve seen something like this:

When considering purchasing [X/Y/Z Product] from [A/B/C Company], how important to you is each of the following?

Stated Importance

This concept, generally known as Stated Importance, is one of the oldest and most used techniques in all of marketing research.  It’s easy to understand and evaluate, allows for a massive number of features to be evaluated (I’ve seen as many as 150), and the reporting is quick.  It produces a ranked list of all features, from 1 to X, giving seemingly clear guidelines on where to focus marketing efforts.  Right?

Well, now hold on.  Imagine you have a list of 40 features.  What incentive is there to say something isn’t important?  Perhaps “Information Security” is a 10, whereas “Price” is a 9.  But if everyone evaluated the list that way, you’d find that almost all of the features were “important.”  In fact, I’ve found this to be common across industries, products, audiences – you name it.  While you can still rank them 1 – 40, there’s little differentiation between the features, and you’ve just spent a big chunk of research money with little to show for it.

By the way, these two features (“Information Security” and “Price”) are, in my experience, two aspects that almost every research study includes, and which virtually always come up as being highly important.  So, using a stated measure only, one might conclude that the best features to communicate to your customers are security and costs.

Now, let’s consider the other general way of measuring importance: Derived Importance.  There are many methods to measure derived importance, but they all involve one general rule: they look for a statistical relationship between a metric, like stated importance, and a behavior – common ones include likelihood to purchase or brand advocacy.  You might use the same question as above, but instead of using a 1 – 40 ranking based on what consumers say, you could instead look for a relationship between what they say is important and their likelihood to purchase your product.
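One of the simplest of those many methods is to correlate each feature’s stated-importance ratings with the behavioral measure across respondents. A hedged sketch with invented ratings (real studies often use regression or Shapley-style driver analysis instead, and the feature names here just echo this post’s examples):

```python
import math
import statistics

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length rating lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented 1-10 ratings from six respondents
likelihood_to_purchase = [3, 5, 6, 7, 8, 9]
stated = {
    "information security": [9, 10, 9, 10, 9, 10],  # everyone says "important"
    "liability calculator": [2, 4, 5, 6, 8, 9],     # tracks purchase intent
}

# Derived importance = strength of relationship to the behavior we want
for feature, ratings in stated.items():
    print(feature, round(pearson_r(ratings, likelihood_to_purchase), 2))
```

In this toy data, information security gets sky-high stated ratings but correlates weakly with likelihood to purchase, while the liability calculator’s ratings move almost in lockstep with it; that gap is exactly what the stated-vs-derived chart below is plotting.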

That brings us back to the question of “information security” and “price.”  We know from our discussion of stated importance that most consumers will score these very highly.  But check out what tends to happen when we look at derived importance (using an example from an auto insurance company):

[Chart: stated vs. derived importance]

The chart above is something every marketer and advertiser on the planet has probably seen 1,000 times, so bear with me.  On the vertical, or y-axis, we have our derived importance score, the statistical relationship between importance and likelihood to purchase, advocate, or whatever other behavior might be appropriate depending on where you are in your marketing funnel.  On the horizontal, or x-axis, I’m showing stated importance, or how important consumers said these features were when purchasing from Auto Insurance Company X (all of these numbers are made up, but you get the idea).

You’ll see that, as expected, information security and price perform very well on the stated measure, but low on the derived measure.  What we can infer, then, is that while most of the consumers interviewed in this made-up study say information security and price are very important, these features don’t have a strong relationship to the behavior we want to encourage.  These are commonly known as table stakes, or features that everyone says are important but don’t really connect to purchase, advocacy, and the like.

But since the third feature, offering a tool for calculating liability, has a much stronger relationship to our behavioral measure, what we can infer is that while fewer consumers said this was important, those that did view it as important are the most likely to purchase from or advocate for Auto Insurance Company X.  So if you had to pick one of these three features on which to hang your marketer’s hat, we’d recommend the tool for calculating liability – since it’s our job as marketers to figure out what’s going to encourage the behaviors we want, and then communicate that to our customers.

I hope this discussion has lent you some knowledge you can pass along to your clients, internal partners, fellow consultants, friends and whomever else.  There are many ways to calculate derived importance, and many clever techniques that improve on traditional stated importance (like Maximum-Difference Scaling or Point Allocations).  But if you take one thing from this post, let it be this – in this crazy, tech-driven world we live in, simply asking what’s important just isn’t enough anymore.

Nick is a Project Manager with CMB’s Financial Services, Insurance & Healthcare Practice.  He enjoys candlelit dinners, long walks on the beach, and averaging-over-orderings regression.


Speaking of romance, have you seen our latest case study on how we help Match.com use brand health tracking to understand current and potential member needs and connect them with the emotional core of the leading brand?

Topics: Methodology, Product Development, Research Design