Get on Your Game: Avoiding the Pitfalls of a Tired Tracker

Posted by T.J. Andre

Wed, Feb 27, 2013

Late last month, the Boston Celtics were struggling along in the middle of the NBA standings. They weren't great and they weren't awful, but they were predictable—predictably middle of the pack. But star point guard Rajon Rondo was having another terrific year, racking up triple-doubles (double digits in scoring, rebounds, and assists) at twice the rate of his closest rivals. He was the Celtics' top performer and the key to what limited success they were having. So when Rondo went down with a season-ending knee injury on January 25th, die-hard Celtics fans went from disappointment to outright depression. Then something surprising happened.

Prior to Rondo's injury, the Celtics had looked tired; they suffered from low energy and little enthusiasm for moving beyond a very ordinary performance. After the injury, they looked like a bunch of kids bursting through the door on the last day of school. What happened? They were playing the same game, but (out of necessity) they had changed the way they were playing it. Skills that had been lying dormant suddenly came to the fore. They were energized, they were playing much better defense, and they were moving the ball much more effectively on the offensive end of the floor. The Celtics went on a tear, winning 8 of 9 games over the next three weeks and potentially changing the entire trajectory of their season.

So what does this have to do with research?  Maybe a lot.  Ongoing tracking programs like customer experience and brand health tracking are especially susceptible to becoming “tired” – continuously delivering the same things over and over, with diminishing returns.  One of the biggest challenges facing Celtics Coach Doc Rivers was figuring out how to pull his team out of the middling rut they’d gotten comfortable in.  In his case, the team was forced to change due to their star player’s injury.  Fortunately, customer insights folks don’t need such a dramatic trigger.   

Here are three things you can do to breathe new life into tired tracking programs, and “up your game”:

Introduce "deep dives": Incorporating "deep dives" into your program is a great way to get more (and more useful) insights into critical issues, without the time and cost of a separate project. A few examples of potential deep dive topics:

  • Product enhancements
  • Customer decision drivers
  • Competitive comparisons
  • Internal performance comparisons

Integrate your tracking data with data from other sources: Tracking measurement alone (no matter how well designed) can't inform all of the insights your internal clients need. "Connecting the dots" between the measurement and other business data can help you deliver new, more useful insights. I'm not talking about a "millions of dollars and thousands of lives" IT initiative here. You'll be surprised how much useful insight you can get by focusing on a specific business issue with data sets you can readily get your hands on.

Put insights directly into users' hands in a way that helps them act: If your internal customers are using dashboards or portals to view tracking results, do those tools really help them take action, or are they just data dashboards? A dashboard needs to tell the end user four things, customized to their roles and responsibilities: where they stand; what will have the highest impact on key business goals; how to prioritize and plan actions; and whether the actions taken are really working.

So, has your customer experience or brand health tracking program grown “tired?”  If so, what will you do to up your “game?”

 

T.J. is CMB's Chief Strategy Officer and General Manager of our Tech Solutions Team. His twin boys are enjoying a re-energized Celtics above.

Topics: Research Design, Brand Health & Positioning, Customer Experience & Loyalty

When Customer Experience Surveys Attack (or Just Go out of Scope)

Posted by Jeff McKenna

Wed, Jan 30, 2013

Last weekend, my family and I took a trip to Charlotte, North Carolina. We rented a car and stayed at a hotel. Within 12 hours of arriving home, I received an online survey from each company. In both cases, the experiences were excellent and I was happy to share the details. One survey took me about 1½ minutes to complete; the other took me about 10 minutes. When I reached the end of the 1½-minute survey, I thought, "Well, they asked about the key aspects of the experience and got what they needed." In contrast, by the time I reached the midpoint of the 10-minute survey, I was exhausted and just wanted to end the damn thing – and then, when I reached the end, they asked if I wanted to answer more (!?!) questions.

In the 1½-minute survey, I could clearly see the questions were focused solely on the experience and managing the key aspects of the service. They probably have more than enough data to get deep insights, since they know who I am and my travel details, and they have similar data for the thousands of other travelers who are also rating the experience.

In the 10-minute survey, I could see that the company was asking for details beyond the experience: they were seeking to understand competitive positioning and future intended travel behaviors—all things that are clearly outside the scope of the service experience. They also asked about very detailed aspects of the experience, e.g., the mechanical condition of the car and the softness of the towels. It led me to ask: "Really? You want me to rate this aspect of the service? Aren't you guys smart enough to tell whether these things are up to standard?"

Here's an example from another industry: homebuilding. I've seen surveys that ask buyers to rate the window quality in the home. Why?!? Shouldn't the builder know whether the windows they're putting into the home are high-grade or low-grade? Remember, we're assessing the home purchase experience, NOT homebuyer preferences. If you're trying to achieve both in the same research study, you're going to be (as Mr. Miyagi says) "like the grasshopper in the middle of the road."

As researchers and companies asking our valued customers for feedback, we need to be very aware of the unstated agreement about what's in scope and out of scope for these customer experience surveys. I'm not opposed to having surveys do "double duty," but we should be clear with our customers that we're doing so, AND not kill them with gruelingly long surveys.

Jeff is VP, Market Science Solutions at CMB. He always takes time for a customer experience survey, but keep it short: he's very busy, and he needs time to blog and occasionally tweet @McKennaJeff.

See how CMB is helping Royal Caribbean measure guest experience and improve customer satisfaction and retention. Click here.

Topics: Travel & Hospitality Research, Research Design

CMB Book Review: Delivering Happiness: A Path to Profits, Passion, and Purpose

Posted by Jeannine Rua

Wed, Dec 19, 2012

Delivering Happiness

Happiness means something a bit different to each of us. To Tony Hsieh, the CEO of Zappos.com, happiness means working for a company that you're excited about, surrounded by people who feel the same way. As a researcher, I read his book Delivering Happiness as a subliminal message to anyone working with customer satisfaction or loyalty data. Reading his book, it's clear Tony set out to highlight his journey, not to write a comprehensive corporate history or autobiography. This is the golden rule in report writing as well: pull out the highlights and tell a story around the most important pieces.

Tony adopted another critical rule of report-writing: write for your audience.  Tony admits his book is not a work of grammatical perfection, but it’s written in a way that makes it easy for everyone to read and enjoy. This separates Delivering Happiness from many of its neighbors in the world of “business books” – its informal tone is easy to digest. Throughout the book, Tony seamlessly juxtaposes comical stories of growing up with Asian-American parents with stories of his ambition, failed attempts, and successes.

Beyond the autobiographical elements of the book, I found the managerial guidance in Delivering Happiness also relates to research. The following principles are important context when writing recommendations, but also when thinking about survey design and analysis interpretation. As an added bonus, these "rules" might also serve you well in your personal interactions:

1. Remember that you are an n of 1; other people have opinions, too. When Tony was first approached with the idea of starting an internet shoe website, he was skeptical because he himself had never considered purchasing shoes through a catalog and couldn’t imagine people buying shoes without trying them on. My toes are thankful that Tony realized “it didn’t matter whether I would be willing to buy shoes without trying them on first.”

2. Embrace change with an open mind. Zappos.com was originally based on drop-ship sales and had shied away from opening a warehouse with inventory because it was not part of their business model. When they realized they were limited in what they could offer their customers, they thought, "If changing our business model is what's going to save us, then we need to embrace and drive change."

3. Listen to employees' feedback. Customer feedback is great, but it's important to also hear from your employees. Happy employees are critical to delivering a positive customer experience, and employees working in the thick of daily processes often have valuable insights and ideas about what would enable them to deliver better.

4. Pay attention to word of mouth and the lifetime value of customers. It's important to think about how your company is interacting with customers at every level – one happy customer with a large network of friends may be more valuable than he first appears. Zappos.com trusts their employees and empowers them to help customers in any way they can, even if that means recommending another site for their purchase.

From a research perspective, mobile technology strikes me as the most obvious application for these principles. As mobile technology changes the way consumers shop and interact, we are presented with new opportunities for listening and observing. As you think about your personal and professional goals for the new year, keep an open mind, and hopefully happiness will find you.

Jeannine is a Project Manager working with our Tech, eCommerce, and MedTech practice. She finds happiness learning about new places through reading, travelling, and talking with just about anyone she can find.

What's your plan for delivering happiness in 2013?

Topics: Mobile, Research Design, Customer Experience & Loyalty

Let's Talk about Importance, Baby

Posted by Nick Pangallo

Wed, Dec 05, 2012

If you'll indulge me, I'd like to begin this post with a cheap trick: how many of you marketers, advertisers, researchers, corporate strategists, and consultants out there have been asked to "find out what's important to [some audience]?" While I don't actually expect any of you are sitting there with a hand raised in the air (kudos if you are, though), I'm betting you're probably at least nodding to yourself. Whatever you're selling, the basic steps to market a product are simple: figure out who wants it, figure out what's important to them, and communicate that your product delivers on what they find important, all to encourage some behavior. No one ever said marketing was rocket science.

But no one ever said it was easy, either.  And determining what’s actually important to your customer isn’t merely another task to check off, it’s a critical component on which a misstep could derail years of effort and potentially billions in R&D spending.  I always tell my clients that you can design an absolutely perfect product, a masterpiece of form and function, but if you can’t communicate why it’s important to someone, there’s no reason for anyone to buy it.  As my esteemed colleague Andrew Wilson will tell you, not even sliced bread sold itself.

So that brings us back to that original, fundamental question: how do we “find out what’s important?”  The simplest method, of course, is simply to ask.  If you’ve ever looked at a research questionnaire, chances are you’ve seen something like this:

When considering purchasing [X/Y/Z Product] from [A/B/C Company], how important to you is each of the following?

Stated Importance

This concept, generally known as Stated Importance, is one of the oldest and most used techniques in all of marketing research.  It’s easy to understand and evaluate, allows for a massive number of features to be evaluated (I’ve seen as many as 150), and the reporting is quick.  It produces a ranked list of all features, from 1 to X, giving seemingly clear guidelines on where to focus marketing efforts.  Right?

Well, now hold on.  Imagine you have a list of 40 features.  What incentive is there to say something isn’t important?  Perhaps “Information Security” is a 10, whereas “Price” is a 9.  But if everyone evaluated the list that way, you’d find that almost all of the features were “important.”  In fact, I’ve found this to be common across industries, products, audiences – you name it.  While you can still rank them 1 – 40, there’s little differentiation between the features, and you’ve just spent a big chunk of research money with little to show for it.

By the way, these two features (“Information Security” and “Price”) are, in my experience, two aspects that almost every research study includes, and which virtually always come up as being highly important.  So, using a stated measure only, one might conclude that the best features to communicate to your customers are security and costs.

Now, let’s consider the other general way of measuring importance: Derived Importance.  There are many methods to measure derived importance, but they all involve one general rule: they look for a statistical relationship between a metric, like stated importance, and a behavior – common ones include likelihood to purchase or brand advocacy.  You might use the same question as above, but instead of using a 1 – 40 ranking based on what consumers say, you could instead look for a relationship between what they say is important and their likelihood to purchase your product.

That brings us back to the question of "information security" and "price."  We know from our discussion of stated importance that most consumers will score these very highly.  But check out what tends to happen when we look at derived importance (using an example from an auto insurance company):

stated and derived importance

The chart above is something every marketer and advertiser on the planet has probably seen 1,000 times, so bear with me.  On the vertical, or y-axis, we have our derived importance score, the statistical relationship between importance and likelihood to purchase, advocate, or whatever other behavior might be appropriate depending on where you are in your marketing funnel.  On the horizontal, or x-axis, I’m showing stated importance, or how important consumers said these features were when purchasing from Auto Insurance Company X (all of these numbers are made up, but you get the idea).

You’ll see that, as expected, information security and price perform very well on the stated measure, but low on the derived measure.  What we can infer, then, is that while most of the consumers interviewed in this made-up study say information security and price are very important, these features don’t have a strong relationship to the behavior we want to encourage.  These are commonly known as table stakes, or features that everyone says are important but don’t really connect to purchase, advocacy, and the like.

But since the third feature, offering a tool for calculating liability, has a much stronger relationship to our behavioral measure, what we can infer is that while fewer consumers said this was important, those that did view it as important are the most likely to purchase from or advocate for Auto Insurance Company X.  So if you had to pick one of these three features on which to hang your marketer’s hat, we’d recommend the tool for calculating liability – since it’s our job as marketers to figure out what’s going to encourage the behaviors we want, and then communicate that to our customers.

I hope this discussion has lent you some knowledge you can pass along to your clients, internal partners, fellow consultants, friends and whomever else.  There are many ways to calculate derived importance, and many clever techniques that improve on traditional stated importance (like Maximum-Difference Scaling or Point Allocations).  But if you take one thing from this post, let it be this – in this crazy, tech-driven world we live in, simply asking what’s important just isn’t enough anymore.

Nick is a Project Manager with CMB’s Financial Services, Insurance & Healthcare Practice.  He enjoys candlelit dinners, long walks on the beach, and averaging-over-orderings regression.


Speaking of romance, have you seen our latest case study on how we help Match.com use brand health tracking to understand current and potential member needs and connect them with the emotional core of the leading brand?

Topics: Methodology, Product Development, Research Design

The Danger of Painting by Numbers

Posted by Marty Murk

Wed, Nov 14, 2012

I recently learned the story of Vitaly Komar and Alex Melamid, two Russian-born conceptual artists who, as part of their People's Choice series, captured EXACTLY what America wanted in a painting. To create America's "Most Wanted" painting (1994), Komar and Melamid gathered data from professional polling companies and actually gave the people what they asked for.

Naturally, by basing decisions unquestioningly on what consumers asked for, Komar and Melamid came up with a beauty. It's a perfect combination of a pleasant blue sky, scenic mountains, frolicking deer, a picnicking family, and George Washington pondering life smack dab in the middle. It is a scene that has everything, and it's brilliant social commentary—but J.M.W. Turner it's not.

Pointing out that complete and unquestioning faith in numbers is a foolish exercise is nothing new. That's especially true when you're in the business of market research, consumer insights, or whatever you want to call us. I'm sure you've all heard the Henry Ford quote: "If I'd asked people how they'd like to see travel improved, they'd have told me they wanted a faster horse." But I've never come across anything that illustrates this better than "The Most Wanted" paintings.

Besides giving me a chance to channel my inner art critic, the painting, and how it came to be, makes me think about how I design studies, analyze data, and think about its implications for my clients:

  • Sometimes by listening to everyone, you’re hearing no one:  It’s tempting to want to hear from as many people as possible, but more opinions don’t necessarily translate into more insights. Just as Komar and Melamid's data translated into something a little ridiculous, trying to get everyone to answer every question won’t give you a clear picture of what you need to improve or the decisions you have to make. That’s why it’s critical to identify who you want to listen to and determine what you can learn from a specific segment.

  • People can’t tell you EXACTLY what they want:  Consumer research that focuses solely on what customers say they want won’t tell you everything you should know. If you want to understand customer needs and develop products or services that meet them, you have to ask the questions that uncover what those needs are. Are people asking for mountains when they’re really seeking relaxation? Techniques like key driver analysis can help us understand customer needs and goals, and not just what they say they want.

  • If you want insights, you'll need context: Just like slapping a few artistic elements on a canvas won't make a great painting, pasting all of your data points into a PowerPoint won't add up to insights you can use. I'm reminded to ask what else we know: is there other information or behavioral data available that can help give us a fuller picture?

But above all, the biggest takeaway for me from "The Most Wanted" painting is that thoughtful, actionable research starts with the end in mind. We researchers can't measure needs, wants, and preferences for specific design elements without any forethought about the final results or potential outcomes.

And if you're curious, here's America's Least Wanted painting:

 

Marty is a Senior Project Manager on CMB's Retail Practice. You may be surprised to learn he earned his Master's in Marketing Research and not Art History.

See how CMB and South Street Strategy Group helped Tauck create a successful new travel product through a multi-phase multi-method approach. Click here to read the Case Study.

Topics: Consumer Insights, Research Design