When Data Doesn't Deliver: Busting the Conventional Wisdom of Modern Pregnancy

Posted by Jeannine Rua

Tue, Oct 01, 2013

One of the wonders of modern medicine is the mountain of information available on every conceivable platform. Regardless of the condition, there are sure to be numerous sources dedicated to providing advice: chat rooms, blogs, magazines, and so on; it can be overwhelming. But there’s one “condition” the human race has been collecting data and offering recommendations (and opinions) on since the beginning of time—I’m speaking, of course, about pregnancy.

Even if you or your partner have never been, and never plan to be, pregnant, you can probably rattle off a few items that moms-to-be should avoid: alcohol, coffee, cheese, fish … the list goes on. In a recent Wall Street Journal article, “Take Back Your Pregnancy,” economist, mother, and author of Expecting Better: Why the Conventional Pregnancy Wisdom Is Wrong—and What You Really Need to Know, Emily Oster takes a deeper look at the stats. Dr. Oster takes issue with the standard recommendations about pregnancy, arguing: “The key to good decision making is evaluating the available information—the data—and combining it with your own estimates of pluses and minuses.”

As a market researcher, I was drawn to Emily’s argument—a reminder to be thoughtful as we interpret information, both in our personal lives and our professional ones. This is especially important for those of us who spend our days understanding and decoding data. As a 27-year-old woman who hopes to become a mother in the next ten years or so, I was glad to hear there is good evidence I won’t need to go nine months without coffee.

A few myth-busting data points:

Sample Bias: avoiding wine while pregnant (and other alcohol, too): When reviewing data comparing mothers who consume alcohol during pregnancy vs. those who abstain, it’s important to understand social norms and physician guidelines. In the U.S., drinking is strongly discouraged during pregnancy, so U.S. mothers who drink during pregnancy tend to differ from those who don’t in other behaviors and attitudes. Those who drink are more likely to be “rule breakers” and, in the study Emily cited, were significantly more likely to have used cocaine. Emily cites another study, of Australian women, in which drinkers and non-drinkers were more similar; its results show that light drinking (2-6 drinks a week) is fine.

Correlation ≠ causation: avoiding coffee while pregnant: Fueled by her love of coffee, Emily was determined to understand the data behind the claim that coffee drinking is related to a higher rate of miscarriage. Digging into the data, Emily concluded, “we may well be mistaking a correlation for an underlying cause. The women who drink less coffee have fewer problems not because they limit their caffeine intake but because they tend to suffer from nausea, which inhibits coffee drinking.”
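Both myths come down to the same statistical trap: the comparison groups differ in ways beyond the behavior being studied. Here’s a toy simulation in Python (entirely invented numbers, not Oster’s data or any real study) showing how a confounder can make a harmless behavior look risky in a naive comparison:

import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# A latent "rule breaker" trait drives both drinking and a genuinely harmful behavior.
rule_breaker = rng.random(n) < 0.20
drinks = rng.random(n) < np.where(rule_breaker, 0.60, 0.10)
harmful_behavior = rule_breaker & (rng.random(n) < 0.30)

# The adverse outcome depends ONLY on the harmful behavior, never on drinking.
outcome = rng.random(n) < np.where(harmful_behavior, 0.15, 0.03)

print(f"Outcome rate among drinkers:   {outcome[drinks].mean():.3f}")
print(f"Outcome rate among abstainers: {outcome[~drinks].mean():.3f}")

# Comparing within levels of the confounding trait makes the gap vanish.
for rb in (False, True):
    m = rule_breaker == rb
    print(f"rule_breaker={rb}: drinkers {outcome[m & drinks].mean():.3f} "
          f"vs abstainers {outcome[m & ~drinks].mean():.3f}")

The naive comparison shows drinkers with a visibly higher outcome rate even though drinking has zero effect in this simulation, which is exactly why the makeup of the comparison groups matters.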

Emily also tackled soft cheese, deli meats, and weight gain during pregnancy; with the amount of research done on pregnancy, there’s no doubt Dr. Oster will have plenty for a second edition. Regardless of what you set out to research, be it brand health or your own, it’s important to make sure you understand the recommendations within the context of the findings and how the research is conducted.

Jeannine is a Project Manager at CMB. She loves to read, travel, and takes her coffee black, no guilt.

Topics: Methodology

Deconstructing the Customer Experience: What's in Your Toolkit?

Posted by Jennifer von Briesen

Wed, Sep 25, 2013

More and more companies are focusing on trying to better understand and improve their customers’ experiences. Some want to become more customer-centric. Some see this as an effective path to competitive differentiation. Others, challenging traditional assumptions (e.g., Experience Co-creation, originated by my former boss, Francis Gouillart, and his colleagues Prof. Venkat Ramaswamy and the late C.K. Prahalad), are applying new strategic thinking about value creation. Decision-makers in these firms are starting to recognize that every single interaction and experience a customer has with the company (and its ecosystem partners) may either build or destroy customer value and loyalty over time.

While companies traditionally measure customer value based on revenues, share of wallet, cost to serve, retention, NPS, profitability, lifetime value, etc., we now have more and better tools for deconstructing the customer experience and understanding the components driving customer and company interaction value at the activity/experience level. To really understand the value drivers in the customer experience, firms need to simultaneously look holistically, go deep in a few key focus areas, and use a multi-method approach.

Here’s an arsenal of tools and methods that are great to have in your toolkit for building customer experience insight:

Qualitative tools

  • Journey mapping methods and tools

  • In-the-moment, customer activity-based tools

    • Voice capture exercises (either using mobile phones or landlines) where customers can call in and answer a set of questions related to whatever they are doing in the moment.

    • Mobile- and online-upload exercises where customers answer questions with visuals, audio, and/or video (e.g., “as you fill out your enrollment paperwork, take a quick video, less than 10 seconds, sharing your thoughts on what you’re experiencing”).

  • Customer diaries

    • E.g., use mobile devices as a visual diary or to complete a number of activities

  • Observation tools

    • Live or virtual tools (e.g., watch/videotape in-person or online experiences, either live or after the fact)

    • On-site customer visits: companies I’ve worked with often like to join customers doing activities in their own environments and situational contexts. Beyond basic observation, company employees can dialogue with customers during the activities/experiences to gain immediate feedback and richer understanding.

  • Interviews and qualitative surveys

  • Online discussion boards

  • Online or in-person focus groups

Quantitative tools

  • Quantitative surveys/research tools (too many to list in a blog post)

  • Internal tracking tools

    • Online tools for tracking behavior metrics (e.g., landing pages/clicks/page views/time on pages, etc.) for key interactions/experience stages. This enables ongoing data-mining, research and analysis.

    • Service/support data analysis (e.g., analyze call center data on inbound calls and online support queries by interaction type, stage, and period to surface FAQs, problems, etc.; see the sketch after this list).
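To make that last bullet concrete, here’s a minimal pandas sketch (all column names and records are hypothetical) of the kind of tally that surfaces FAQs and problem areas from support-contact data:

import pandas as pd

# Hypothetical support-contact log: channel, journey stage, and topic per contact.
calls = pd.DataFrame({
    "channel": ["phone", "phone", "web", "web", "phone", "web"],
    "stage":   ["enrollment", "billing", "enrollment", "billing", "enrollment", "enrollment"],
    "topic":   ["password reset", "late fee", "password reset", "late fee", "form error", "password reset"],
})

# Count contacts by stage and topic; the biggest counts are candidate FAQs / pain points.
faq_table = (calls.groupby(["stage", "topic"])
                  .size()
                  .rename("contacts")
                  .sort_values(ascending=False))
print(faq_table)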

What tools are you using to better understand and improve the customer experience? What tools are in your toolkit?  Are you taking advantage of all the new tools available?

Jennifer is a Director at South Street Strategy Group. She recently received the 2013 “Member of the Year” award from the Association for Strategic Planning (ASP), the preeminent professional association for those engaged in strategic thinking, planning and action.

Topics: South Street Strategy Group, Strategic Consulting, Methodology, Qualitative Research, Quantitative Research, Customer Experience & Loyalty

Let's Talk about Importance, Baby

Posted by Nick Pangallo

Wed, Dec 05, 2012

If you’ll indulge me, I’d like to begin this post with a cheap trick: how many of you marketers, advertisers, researchers, corporate strategists and consultants out there have been asked to “find out what’s important to [some audience]?”  While I don’t actually expect any of you are sitting there with a hand raised in the air (kudos if you are, though), I’m betting you’re probably at least nodding to yourself.  Whatever you’re selling, the basic steps to marketing a product are simple: figure out who wants it, what’s important to them, and how to communicate that your product delivers on what they find important, all to encourage some behavior.  No one ever said marketing was rocket science.

But no one ever said it was easy, either.  And determining what’s actually important to your customer isn’t merely another task to check off; it’s a critical component, where a misstep could derail years of effort and potentially billions in R&D spending.  I always tell my clients that you can design an absolutely perfect product, a masterpiece of form and function, but if you can’t communicate why it’s important to someone, there’s no reason for anyone to buy it.  As my esteemed colleague Andrew Wilson will tell you, not even sliced bread sold itself.

So that brings us back to that original, fundamental question: how do we “find out what’s important?”  The simplest method, of course, is simply to ask.  If you’ve ever looked at a research questionnaire, chances are you’ve seen something like this:

When considering purchasing [X/Y/Z Product] from [A/B/C Company], how important to you is each of the following?

Stated Importance

This concept, generally known as Stated Importance, is one of the oldest and most widely used techniques in all of marketing research.  It’s easy to understand and evaluate, allows for a massive number of features to be tested (I’ve seen as many as 150), and the reporting is quick.  It produces a ranked list of all features, from 1 to X, giving seemingly clear guidelines on where to focus marketing efforts.  Right?

Well, now hold on.  Imagine you have a list of 40 features.  What incentive is there to say something isn’t important?  Perhaps “Information Security” is a 10, whereas “Price” is a 9.  But if everyone evaluated the list that way, you’d find that almost all of the features were “important.”  In fact, I’ve found this to be common across industries, products, audiences – you name it.  While you can still rank them 1 – 40, there’s little differentiation between the features, and you’ve just spent a big chunk of research money with little to show for it.

By the way, these two features (“Information Security” and “Price”) are, in my experience, two aspects that almost every research study includes, and which virtually always come up as being highly important.  So, using a stated measure only, one might conclude that the best features to communicate to your customers are security and costs.

Now, let’s consider the other general way of measuring importance: Derived Importance.  There are many methods to measure derived importance, but they all involve one general rule: they look for a statistical relationship between a metric, like stated importance, and a behavior – common ones include likelihood to purchase or brand advocacy.  You might use the same question as above, but instead of using a 1 – 40 ranking based on what consumers say, you could instead look for a relationship between what they say is important and their likelihood to purchase your product.
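Before we get to the example, here’s a minimal sketch of one common way (among many) to compute a derived importance score: correlate each feature’s stated-importance rating with a behavioral measure. Everything below, from the feature names to the data, is invented for illustration:

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 500

# Invented behavioral measure: likelihood to purchase on a 1-10 scale.
likelihood_to_purchase = pd.Series(rng.integers(1, 11, n).astype(float))

ratings = pd.DataFrame({
    # Everyone rates these high, so they barely covary with behavior.
    "information_security": np.clip(rng.normal(9.3, 0.7, n), 1, 10),
    "price":                np.clip(rng.normal(9.0, 0.9, n), 1, 10),
    # A feature whose rating genuinely tracks purchase intent.
    "liability_calculator": np.clip(likelihood_to_purchase + rng.normal(0, 2, n), 1, 10),
})

# Higher correlation with the behavior = higher derived importance.
derived = ratings.corrwith(likelihood_to_purchase).sort_values(ascending=False)
print(derived)

In this made-up dataset the liability calculator shows the strongest relationship to purchase intent even though its average stated rating is the lowest, which is exactly the pattern in the chart below.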

That brings us back to the question of “information security” and “price.”  We know from our discussion of stated importance that most consumers will score these very highly.  But check out what tends to happen when we look at derived importance (using an example from an auto insurance company):

[Chart: stated vs. derived importance, Auto Insurance Company X]

The chart above is something every marketer and advertiser on the planet has probably seen 1,000 times, so bear with me.  On the vertical, or y-axis, we have our derived importance score, the statistical relationship between importance and likelihood to purchase, advocate, or whatever other behavior might be appropriate depending on where you are in your marketing funnel.  On the horizontal, or x-axis, I’m showing stated importance, or how important consumers said these features were when purchasing from Auto Insurance Company X (all of these numbers are made up, but you get the idea).

You’ll see that, as expected, information security and price perform very well on the stated measure, but low on the derived measure.  What we can infer, then, is that while most of the consumers interviewed in this made-up study say information security and price are very important, these features don’t have a strong relationship to the behavior we want to encourage.  These are commonly known as table stakes, or features that everyone says are important but don’t really connect to purchase, advocacy, and the like.

But since the third feature, offering a tool for calculating liability, has a much stronger relationship to our behavioral measure, what we can infer is that while fewer consumers said this was important, those that did view it as important are the most likely to purchase from or advocate for Auto Insurance Company X.  So if you had to pick one of these three features on which to hang your marketer’s hat, we’d recommend the tool for calculating liability – since it’s our job as marketers to figure out what’s going to encourage the behaviors we want, and then communicate that to our customers.
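If you want to operationalize that read of the chart, one hedged way (invented scores and illustrative cut-offs, not a CMB method) is to bucket each feature by where it falls on the two axes:

import pandas as pd

# Invented example scores: stated importance (1-10) and derived importance
# (here, a correlation with likelihood to purchase).
features = pd.DataFrame({
    "stated":  [9.4, 9.1, 6.2, 5.0],
    "derived": [0.10, 0.08, 0.55, 0.05],
}, index=["information_security", "price", "liability_calculator", "branded_mugs"])

def quadrant(stated, derived, stated_cut=7.0, derived_cut=0.30):
    # Cut-offs are illustrative; medians or business judgment work too.
    if stated >= stated_cut and derived >= derived_cut:
        return "key driver (communicate loudly)"
    if stated >= stated_cut:
        return "table stakes (expected, but won't drive behavior)"
    if derived >= derived_cut:
        return "hidden driver (under-stated, but moves behavior)"
    return "low priority"

features["quadrant"] = [quadrant(s, d) for s, d in
                        zip(features["stated"], features["derived"])]
print(features)

Here information security and price land in the table-stakes bucket while the liability calculator surfaces as the hidden driver, matching the recommendation above. (The “branded_mugs” row is purely a hypothetical filler feature.)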

I hope this discussion has lent you some knowledge you can pass along to your clients, internal partners, fellow consultants, friends and whomever else.  There are many ways to calculate derived importance, and many clever techniques that improve on traditional stated importance (like Maximum-Difference Scaling or Point Allocations).  But if you take one thing from this post, let it be this – in this crazy, tech-driven world we live in, simply asking what’s important just isn’t enough anymore.

Nick is a Project Manager with CMB’s Financial Services, Insurance & Healthcare Practice.  He enjoys candlelit dinners, long walks on the beach, and averaging-over-orderings regression.


Speaking of romance, have you seen our latest case study on how we help Match.com use brand health tracking to understand current and potential member needs and connect them with the emotional core of the leading brand?

Topics: Methodology, Product Development, Research Design

When Observation isn't Enough: The Case of the Green Jolly Ranchers

Posted by Lynne Castronuovo

Wed, Apr 11, 2012

As I prepare for my 14th Boston Marathon, I find myself thinking about food a lot, and when you’re on training runs there is no shortage of candy to keep you fueled. I have come to find our candy stations reveal a little-known fact about us runners—we DO NOT like green apple Jolly Ranchers.  How did I come to this revelation? I didn’t interview my teammates, convene a focus group, or field a questionnaire—it was obvious from seeing dish upon dish of lonely green candies.

This type of observation, also known as an unobtrusive measure, can be pretty handy.  Museums can look at wear patterns in the carpet in front of exhibits to see which are the most popular, and social media researchers can get a good understanding of what people think about a brand using social media listening.  I was comfortable concluding my group of runners does not like green apple Jolly Ranchers. But when I took a look at CMB’s 5th floor candy bowl, almost empty except for five or six green Jolly Ranchers, I wondered: does NO ONE like these things?

I needed to investigate a little further. On Friday, I asked my fellow team members why the apple Jolly Ranchers were always the last to go, and I got some feedback that helps explain why.  One person said apple was actually her favorite “because they are the most tart,” but that she didn’t know about the candy dish. I realized that she joined CMB after the advertising blitz that took place when I launched the dish.  Another team member said she found apple “a little bit too tangy” but that she liked them better than the cherry variety.  She explained that she loves fresh cherries but hates the cherry flavor because it reminds her of the cough medicine she had to take as a kid.

While my unobtrusive observations accurately recognized that apple was the last flavor standing in the candy dish, the feedback I garnered from my colleagues not only helped me identify an awareness issue but also highlighted a weakness of cherry Jolly Ranchers.  Even if my census of my 5th floor colleagues didn’t provide much insight into the whole Jolly Rancher market, it does remind me what unobtrusive measures can and can’t do, and why asking questions can uncover things simple observation can’t.

Lynne is a Senior Project Manager at CMB, guardian of the 5th floor candy dish, and will run her 14th Boston Marathon on Monday, April 16th.

CMB Webinars

Interested in learning how quantitative data and online conversation can lead to richer insights? Watch our Tools and Tricks Webinar with CMB's Jeff McKenna and iModerate's Christine Tchoumba. Watch here.



Topics: Methodology

The Big Idea: Product Sampling in the 21st Century Marketplace

Posted by Meg Gerbasi and Scott Motyka

Thu, Nov 10, 2011

This blog is the first in a series from CMB’s Meg Gerbasi and Scott Motyka exploring the latest in market research methods.

Imagine that you are given a glass of Merlot at a wine tasting. You swirl the glass and breathe in the aroma, immediately picking up oak and then hints of cherry and dark fruits. Now tasting it, you detect cherry, blueberries, and blackberries mixed in with a black pepper tone. Later the sommelier tells you that the wine was produced in India. Would you be surprised if we told you that you’d be more likely to buy a bottle of this Indian wine than if it were from Italy? Today we discuss a forthcoming paper in the Journal of Consumer Research by researchers Keith Wilcox, Anne Roggeveen, and Dhruv Grewal of Babson College.

The Big Idea

Have you figured out why learning the Merlot was from India (instead of Italy) after tasting it would make you like it more? Wilcox and his team say they have the answer, but it’s a little complicated. When product information is given prior to tasting, favorable information leads to more enjoyment of the product; when it is provided after sampling, favorable information leads to less enjoyment.

This is counterintuitive, so let’s unpack it carefully. When given information about a product before you sample it, the information colors your experience of the product. When you taste something you expect to be enjoyable, for example, an expensive Italian wine, you actually enjoy the product more than if it were something you expected to be less enjoyable, say, an inexpensive Indian wine. These findings aren’t new; several studies have shown similar results: Coke tastes better from a cup with a Coca-Cola logo, people enjoy movies more if they know they have good reviews, and drinks taste better when they’re purchased at full price.

What got us excited here at CMB were the results of what happens if you give a consumer product information after they’ve sampled your product: the results reverse!  That rich, fruity red with the silky smooth finish somehow doesn’t seem so great when you find out it’s $200 a bottle, does it? But it tastes even better when you find out that it is a Trader Joe’s “Two Buck Chuck.”  Wilcox explains that when we sample an experiential product that excites our senses, causing an emotional reaction (think food, movies, music), it forms an immediate impression. The product information provided after sampling is used as a measuring stick for our initial impressions, causing favorable information to actually diminish our enjoyment of the product (“I expected a $200 bottle of wine to taste better”).

A 21st Century Marketing Strategy?

What does this mean for market researchers out in the real world? As opportunities to demo video games, listen to music samples, and watch movie previews before purchase increase through internet, mobile, and OTT TV (e.g., Hulu) channels, this research has exciting potential to inform our digital marketing methodology. Through careful examination of both the impact and timing of the product information we communicate, we can maximize the digital experiences of our customers.  Should you provide your brand image, product name, or price at the beginning or end of a video game ad, before or after a person listens to a 30-second music clip? It depends. If you’re marketing a well-known brand or band, such as Call of Duty or the Beatles, you’d do better to let your customers know right off instead of leaving them hanging. If you’re trying to break into a new market, establish your brand, or present a relatively low-cost product, let the ad convert your potential customers first. In summary, this study reinforces how critical it is for companies to “hear” the voice of their customers through market research.

Posted by CMB researchers Meg Gerbasi and Scott Motyka. Meg holds a Ph.D. in social psychology from Princeton and specializes in the study of self-interest, psychometric validation, intergroup conflict, and decision making.  She has a passion for the color pink and the musical stylings of Lady Gaga. Scott is a doctoral candidate in psychology at Brandeis University; his affinity for Lady Gaga is unknown.

Topics: Methodology, Product Development