“Innovation” has enjoyed a long reign as king of the business buzzwords—you’d be hard-pressed to attend an insights or marketing conference without hearing it. But beyond the buzz, organizations pursue innovation for a number of reasons: to differentiate themselves from other brands, establish themselves as industry leaders, or avoid producing stale products, services, ad campaigns, or content. Smart brands know that complacency is not an option and recognize they must adapt to accommodate the ever-changing consumer landscape.
Innovation is a significant investment—the stakes are high for these new ideas to deliver meaningful results, whether by boosting the brand, successfully introducing a new product, growing the customer base, or adding to bottom line profitability. No matter how disruptive a product, service, or idea is, at the core there must be a deep understanding of customer needs. (Tweet this!) Let’s take a look at two very different attempts at innovation, and where they stumbled:
The Case of Google Glass
For any new product (innovative or otherwise), organizations need to answer “yes” to two questions: (1) Is there a market? (2) Does it solve a legitimate problem?
No matter how revolutionary the product may be, it won’t succeed unless there’s a market for it. It’s possible for a product to be too forward-thinking, leaving customers confused or unwilling to try it. Take the case of Google Glass: though the product itself was revolutionary and consumers were intrigued, it was unclear why consumers needed Google Glass and what problem it was designed to solve. Google Glass ended up generating low demand because there wasn’t an easily identifiable need for it.
The key here would’ve been to first identify what customers need and then develop a product aimed to satisfy that need. Here’s where market research can help with innovation. As market researchers we can help brands get into the mind of consumers and identify the gaps between what they are currently receiving and what they want to receive. By identifying these gaps, we can shed light on where there’s a need to be met.
The Febreze Scentstories Flop
Other innovation flops in recent years have proven that beyond identifying customer/prospect needs, it’s also important to test how messages play to real consumers prior to launch.
It’s a lesson illustrated by the failure of P&G’s Febreze Scentstories. In 2005, the company caused confusion because it failed to properly educate customers about what the product actually was. Febreze Scentstories resembled a disc player that emitted a different scent every 30 minutes from discs that looked an awful lot like CDs. The ads told consumers that with Febreze Scentstories they could "play scents like you play music." And while P&G partnered with superstar Shania Twain to drum up excitement, the advertising campaign confused consumers by making them think the product actually involved music. Clearer messaging would’ve helped prevent this misunderstanding.
Advanced analytical techniques along with strategic qualitative methodologies are a boon to brands. There has never been so much information available nor computing power capable of parsing and modeling it. But as two very different product innovations demonstrate, that sheer volume of data is not enough. What is needed for successful innovation are insights grounded in a truly consumer-centric approach. After all, only the consumer knows what the consumer wants (and needs).
Julia Walker is a Senior Associate Researcher at CMB who enjoys being innovative in her everyday life. For instance, she loves to find creative ways to eat healthy without sacrificing taste.
Americans have a lot to reckon with in the wake of the recent vote. You’re forgiven if analyzing the role of the presidential debate moderator isn’t high on your list. Still, for those of us in the qualitative market research business, there were professional lessons to be learned from the reactions to moderators Lester Holt (NBC), Martha Raddatz (ABC), Anderson Cooper (CNN), and Chris Wallace (Fox). Each moderator took their own approach and each was met with criticism and praise.
As CMB’s qualitative research associate and a moderator-in-training, I noticed parallels to the role of the moderator in the political and market research space. My thoughts:
The moderator as unbiased
"Lester [Holt] is a Democrat. It’s a phony system. They are all Democrats.” – Donald Trump, President-Elect
Concerns about whether the debate moderators were unbiased arose throughout the primaries and presidential debates. Moderators were criticized for asking questions deemed “too difficult,” going after a single candidate, and not adequately pressing other candidates. For example, critics called NBC’s Matt Lauer biased during the Commander-in-Chief forum. Some felt Lauer hindered Hillary Clinton’s performance by asking her tougher questions than those asked of Donald Trump, interrupting her, and not letting her speak on other issues the way he allowed Trump to.
In qualitative market research, every moderator will experience some bias from time to time, but it’s important to mitigate that bias in order to maintain the integrity of the study. In my own qualitative experience, the moderator establishes that they are unbiased by opening each focus group with an explanation that they are independent from the topic of discussion and/or client, and therefore are not looking for participants to answer a certain way.
Qualitative research moderators can also avoid bias by not asking leading questions, monitoring their own facial expressions and body language, and giving each participant an equal opportunity to speak. Like during a political debate, preventing bias is imperative in qualitative work because biases can skew the results of a study the same way the voting populace fears bias could skew the perceived performance of a candidate.
The moderator as fact-checker
“It has not traditionally been the role of the moderator to engage in a lot of fact-checking.” – Alan Schroeder, professor of Journalism at Northeastern University
In qualitative moderating, fact-checking is dependent on the insights we are looking to achieve for a particular study. For example, I just finished traveling across the country with CMB’s Director of Qualitative, Anne Hooper, for focus groups. In each group, Anne asked participants what they knew about the product we were researching. Anne noted every response (accurate or inaccurate), as it was critical we understood the participants’ perceptions of the product. After the participants shared their thoughts, Anne gave them an accurate product description to clarify any false impressions because for the remainder of the conversation it was critical the respondents had the correct understanding of the product.
For the case of qualitative research, Anne demonstrated how fact-checking (or not fact-checking) can be used for insights. There’s no “one right way” to do it; it depends on your research goals.
The moderator as timekeeper
“Basically, you're there as a timekeeper, but you're not a participant.” – Chris Wallace, Television Anchor and Political Commentator for Fox News
Presidential debate moderators frequently interjected (or at least tried to) when candidates ran over their allotted time in order to stay on track and ensure each candidate had equal speaking time. Focus group moderators have the same responsibility. As a qualitative moderator-in-training, I’m learning the importance of playing timekeeper – to be respectful of the participants’ time and allow for equal participation. I must also remember to cover all topics in the discussion guide. Whether you’re acting as a timekeeper in market research or political debates, it’s as much about the audience of voters or clients as it is about the participants (candidates or study respondents).
The study’s desired insights will dictate the role of the moderator. Depending on your (or your client’s) goals, bias, fact-checking, and time-keeping could play an important part in how you moderate. But ultimately whether your client is a business or the American voting populace, the fundamental role of the moderator remains largely the same: to provide the client with the insights needed to make an informed decision.
Kelsey is a Qualitative Research Associate. She co-chairs the New England chapter of the QRCA, and recently received a QRCA Young Professionals Grant!
I’m excited you asked me this because it’s one of my favorite questions of all time.
First we need to talk about why we weight data in the first place. We weight data because our ending sample is not truly representative of the general population. This misrepresentation can occur because of non-response bias, poor sample source and even bad sample design. In my opinion, if you go into a research study knowing that you’ll end up weighting the data, there may be a better way to plan your sample frame.
Case in point, many researchers intentionally over-quota certain segments and plan to weight these groups down in the final sample. We do this because the incidence of some of these groups in the general population is small enough that if we rely on natural fallout we would not get a readable base without a very large sample. Why wouldn’t you just pull a rep sample and then augment these subgroups? The weight needed to add these augments into the rep sample is 0.
Arguments for including these augments with a very small weight include the treatment of outliers. For example, if we were conducting a study of investors and wanted to include folks with more than $1,000,000 in assets, we might want to obtain insights from at least 100 of them. In a rep sample of 500, we might only have 25, which means I need to augment this group by 75 respondents. If I somehow manage to get Warren Buffett among those 25 rep-sample respondents, he might skew the results. Weighting the full sample of 100 wealthier investors down to 25 will reduce the impact of any outlier.
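To make the arithmetic concrete, here’s a minimal sketch in Python, using the hypothetical counts from the example above (`segment_weight` is an illustrative helper, not an actual CMB tool):

```python
# Down-weighting an over-quota'd segment back to its natural incidence.
def segment_weight(natural_count, sampled_count):
    """Weight that restores a segment to its natural share of the sample."""
    return natural_count / sampled_count

# 25 wealthy investors fell out naturally in a rep sample of 500, but we
# augmented to 100 completes for a readable base, so each one is weighted down:
w = segment_weight(natural_count=25, sampled_count=100)
print(w)  # 0.25 -> each wealthy respondent counts as a quarter of a person
```

With that weight applied, the 100 completes contribute 25 “effective” respondents, and any single outlier’s influence is cut to a quarter of what it would be unweighted.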
A recent post by Nate Cohn in the New York Times suggested that weighting was significantly impacting analysts’ ability to predict the outcome of the 2016 presidential election. In the article, Mr. Cohn points out, “there is a 19-year-old black man in Illinois who has no idea of the role he is playing in this election.” This man carried a sample weight of 30. In a sample of 3000 respondents, he now accounts for 1% of the popular vote. In a close race, that might just be enough to tip the scale one way or the other. Clearly, he showed up on November 8th and cast the deciding ballot.
This real life example suggests that we might want to consider “capping” extreme weights so that we mitigate the potential for very small groups to influence overall results. But bear in mind that when we do this, our final sample profiles won’t be nationally representative because capping the weight understates the size of the segment being capped. It’s a trade-off between a truly balanced sample and making sure that the survey results aren’t biased. [Tweet this!]
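One simple way to implement capping is to truncate any weight above a threshold and then rescale so the weights still sum to the original total. A sketch under assumed numbers (the cap of 5 and the single weight of 30 are illustrative only):

```python
# A minimal sketch of weight capping: truncate extreme weights at a cap,
# then rescale so the weights still sum to the original (projected) total.
def cap_weights(weights, cap):
    capped = [min(w, cap) for w in weights]
    scale = sum(weights) / sum(capped)  # keep the weighted total unchanged
    return [w * scale for w in capped]

weights = [1.0] * 99 + [30.0]        # one respondent carries a weight of 30
capped = cap_weights(weights, cap=5.0)
# The outlier's effective weight falls from 30 to roughly 6, at the cost of
# slightly inflating everyone else (the capped segment is now understated).
```

Note the trade-off baked into the rescaling step: the totals still project, but the capped segment’s true share of the population is understated, which is exactly the compromise described above.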
Dr. Jay loves designing really big, complex choice models. With over 20 years of DCM experience, he’s never met a design challenge he couldn’t solve.
Keep the market research questions comin'! Ask Dr. Jay directly at DearDrJay@cmbinfo.com or submit yours anonymously by clicking below:
I’d describe the fashion sensibility in our Boston office as…eclectic. The khaki and button-down/dresses and heels faction (hello Financial Services team!) mingles easily with the flannel and sneakers crowd (hello pretty much everyone else!). Of course, when it’s time to head to a conference or awards dinner, even the most casual CMBer will toss on something that’s appropriate to the occasion and crowd.
For most of us, especially those of us in professional services, our approach to work fashion is deeply influenced by a tension between expressing ourselves and fitting in. This tension finds an analog in two concepts from consumer psychology:
Personal Identity: How much a consumer’s relationship to a brand plays into their self-image and self-esteem
Social Identity: The sense of belonging or kinship consumers feel with others who use the brand
In recent blog posts we’ve discussed our work with the consulting firm VIVALDI to take a fresh look at their 2010 “Social Currency” concept. We evaluated how 90 brands across five industries fit into the lives of consumers. Our results revealed seven critical components of consumers’ experience that brands must strengthen to influence the experiences and behaviors that drive engagement, purchase, and loyalty. Chief among these consumer experiences are Personal and Social Identity – which in the apparel industry are exemplified by the rise of customization and wearables.
To keep up with the “generation of customization” and Millennials’ preference for personalization, brands now offer customizable products to their customers. Take footwear giant Converse. Converse is a subsidiary of Nike, Inc., which was the best-performing brand in our 2016 Social Currency Report (across all industries) with an indexed Social Currency composite score of 120.
While Converse still maintains its classic white Chuck Taylors, the brand has moved into the customization space to satisfy those consumers seeking personalization. Customers can personalize their Converse, selecting everything from shoe type and height to collection, color, and size. Even though consumers are still “fitting in” by sporting the notable Converse brand, the personalized shoes also satisfy their need to express themselves.
Although not limited to apparel, the ability to offer customization on a broad and relatively affordable scale offers a tremendous opportunity to support and reflect fashion consumers’ personal identities in particular. [Tweet this!]
Brands that do well are those that continue to find ways to meet the needs of their customers. Enter the rise of wearable technology. Why? Because wearables can enhance both a consumer’s personal and social identity. Let’s again look at Nike. Nike scored 119 in Social Identity in our 90-brand study – highlighting its success in fostering a sense of belonging and kinship among its customers. One chapter of that story is the Nike+ FuelBand, the fitness-tracking wristband Nike launched in 2012 and discontinued in 2014.
So why is the short-lived FuelBand’s narrative important? Because it underscores Nike’s commitment to finding innovative ways to enhance customers’ personal and social identities. Even though the physical bracelet didn’t work out, Nike remained committed to the wearable tech space by introducing Nike+, an Apple and Android compatible app that connects Nike users to its online fitness community.
And Nike isn’t the only successful brand in wearables. Many other companies that our report looked at are invested in wearable technology, notably ones that have scored high in Social Currency:
Notice the top scoring brands we measured are each engaged in wearable tech. Coincidence? I think not.
It’s a consumer’s world and brands are just living in it
A key finding of our research (you can download our free report on apparel here) is that consumers are loyal to brands that fit seamlessly into their lives and help them express who they are, what they like, and who they feel connected to. For example, does a brand reinforce a consumer’s self-image? Is a brand fostering a sense of belonging or kinship among its customers—a hallmark of true consumer-centricity? If brands can answer “yes” to the above, they’re doing something right.
Ed is CMB's Director of Product Development and Innovation. He thinks there is a game-changing product or idea within everyone, and it’s his job to dig it out. You can share ideas with him @edloessi.
Get our FREE apparel report and learn how Social Currency can help brand transformation:
And check out our interactive dashboard for a sneak peek of Social Currency by industry:
Take a moment to think about the kind of person who drives a Porsche. What is that person like? Paint as clear a mental image as you can. Is it a man or a woman? Young, old, or middle-aged? How would you describe that person’s personality, passions, and values?
Now think about the kind of person who drives a Volvo. What is that person like? Or the kind of person who drives a Subaru? Or drives a Chevy? Or a Cadillac? Or a Mini?
If you’re like most people, for each of these cars, you picture a very different driver behind the wheel.
In fact, this summer we asked over 18,000 consumers to describe the typical user for 90 different brands, across 5 different industries, using their own words and batteries of perceptions. Our results uncovered images of typical users that differed vastly by brand and industry on a range of dimensions. For example:
The typical Porsche driver is often seen as a rich white man who is single or divorced. He is sporty, stylish and ambitious—but also arrogant, materialistic and self-centered. He’s into fashion and luxury. He likes to party.
The typical Volvo driver is also seen as a wealthy white man, but he’s more of a Northeastern intellectual. He’s into books and the arts. He’s responsible, self-assured, and a parent. His politics are progressive. He is not into sports or partying.
The typical Subaru driver is seen as a more middle-class, family-oriented parent who is smart, practical, responsible and caring—a nature-lover with a soft spot for pets and a desire to support good causes.
The typical Chevy driver is seen as a white, middle- to lower-class family man from the rural South or Midwest. He is reliable, humble, relaxed and genuine. He likes hunting, sports, and the great outdoors.
Consumers’ perceptions even differed on who each of these drivers was supporting in the presidential primaries. Who did they think the Porsche driver supported? Trump. By a very large margin. And while the Volvo driver was seen as supporting Bernie or Hillary, the Subaru driver was seen as feeling the Bern. Most assumed the Chevy driver would vote for Trump, but consumers were also twice as likely to say he’d vote for Cruz as they were for the drivers of most other brands we tested.
We found a skew towards one of the candidates for nearly every one of the ninety brands we tested across the auto, airline, beer, fashion and food industries. Curious to see more? Select any brand from the drop-down and take a look!
Consumers’ generally held beliefs about the kind of person who uses each brand are driven in part by experience (e.g., all the Subaru drivers you know), and in part by marketing (e.g., ads like this one).
But does it really matter what consumers think of the kind of person who uses a brand?
YES! It does. A lot.
The more consumers identify with their image of the kind of person who uses a brand, the more they will try, buy, pay for and recommend it. That’s because consumers are people, and people are driven by their identities. They embrace brands that help them reinforce, enhance, or express who they are—and the brands that do this best are ones that help them feel connected to people like them, people they know and like, or people they’d like to know. Consider: Would you rather be like the kind of person who drives a Porsche, a Volvo, a Subaru, or a Chevy?
In fact, consumers’ perceptions of the typical brand user matter more than their perceptions of the brand itself. We see clear mathematical evidence of this with AffinID℠, our approach to uncovering consumers’ image of who uses a brand, and ways to strengthen how much they identify with that person.
As part of this approach, we calculate an AffinID℠ Score to quantify how much consumers identify with their image of the brand’s typical user.
The score is based on the clarity, relatability, and desirability of that image.
Across industries, brands with high AffinID℠ Scores win on consideration, loyalty, price elasticity, and advocacy.
In our research with 18,000 consumers, AffinID℠ was the #1 predictor of brand performance, beating out every brand perception we tested, including: high quality, trustworthy, useful, easy/convenient, a good deal, worth paying more for, safe, secure, exciting, fun, reputable, innovative, socially responsible, understands its customers, cares about its customers, and rewards customers for their loyalty.
The power of AffinID℠ lies in the fact that human beings are social beings with identities shaped by our social groups and relationships—they provide self-knowledge, self-esteem, and the social norms that guide our behaviors. So we are particularly attentive to other people. And brands aren’t people. Brand users are.
Furthermore, while perceptions of brands and the people who use them are interrelated, they usually aren’t the same. Case in point: consumers who love Amazon. When we ask them to describe Amazon, they say it has “great” customer service, prices, variety, and convenience. When we ask them to describe Amazon customers, what do they say? “Smart.”
To close, I’ll give one last example—a personal one. Porsche.
Let me start by saying to any Porsche owner who might be reading this that I’m sure you’re a lovely person who doesn’t fall into any stereotype. I think now is a good time to go get some coffee and consider how well you’ve done for yourself—I mean, after all, you have a Porsche! And, go ahead, donate more to Trump. It’s not too late. You can skip the next few paragraphs.
(Is he gone yet? Great—let’s continue…)
If you asked me what I think of Porsche the brand, I’d say: cool, reputable, fast, high quality, expensive. But if you asked me what I think of the typical Porsche driver, my response would be similar to the mass-market view described above: white male divorcee, wealthy, materialistic, in a midlife crisis, likely overcompensating for something.
So, as nice as I think Porsches are, I’m not spending my next bonus on one. I’m not like the person I envision as the Porsche driver, nor do I want to be. I’m a happily married mother of two. (Incidentally, the last mother I saw driving a Porsche was Carmela Soprano.) To get me to ever consider a Porsche, you’d have to really shake up my image of the kind of person who drives one. But I’m sure there’s a marketer out there who could do it. Gauntlet thrown.
If you take away anything from this longer-than-usual blog (thanks for reading!), make it this: To change what consumers think of your brand, change their image of the people who use it. In today’s competitive marketplace and identity-driven culture, it is more important than ever that brands communicate a clear, compelling image of their typical customer.
Are you communicating the right image of the kind of person who uses your brand?
Erica Carranza is CMB's VP of Consumer Psychology with supplier- and client-side (American Express) experience. She earned her Ph.D. in psychology from Princeton University.
Contact us to learn more about identity's role in building brands, and CMB's AffinID℠ approach!
Methodology matters. Perhaps this much is obvious, but as the demand for market researchers to deliver better insights yesterday increases, dialing up the pressure to limit planning time, it’s worth re-emphasizing the impact of research approach on outcomes. Over the past year, I’ve come across numerous reminders of this while following this election cycle and the excellent coverage over at Nate Silver’s FiveThirtyEight.com. I’m not particularly politically engaged, and as the long, painful campaign has worn on I’ve become even less so; but, I keep coming back to FiveThirtyEight, not because of the politics, but because so much of the commentary is relevant to market research. I rarely ever visit the site (particularly the ‘chats’) without coming across an idea that inspires me or makes me more thoughtful about the research I’m doing day-to-day, and generally speaking that idea centers on methodology. Here are a few examples:
In my day-to-day work, I would guesstimate that 90-95% of the studies I see are intended to capture a more specific population of interest than the general population, making the screening criteria used to identify members of that population absolutely vital. In general, these criteria consist of a series of questions (e.g., are you a decision maker, do you meet certain age and income qualifications, have you purchased something in X category before, would you consider using brand Y), with only those with the right pattern or patterns of responses getting through.
But what if there were a better way to do this? What I read on FiveThirtyEight got me thinking about the kinds of studies in which using a probabilistic screener (and weighting the data accordingly) might actually be better than what we do now. These would be studies where the following is true:
Our population of interest might or might not engage in the behavior of interest
We have some kind of prior data on the behavior of interest tied to individual characteristics
“Yeah right,” you might say, “like we ever have robust enough data available on the exact behavior we’re interested in.” Well, this might be a perfect opportunity for incorporating the (to all appearances) ever-increasing amounts of passive customer data that are available into our surveys. It’s inspiring, at any rate, to think about how a more nuanced screener might make our research more predictive.
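Purely as a thought experiment, a probabilistic screener might look something like the sketch below. This is a hypothetical design, not an established practice: `p_behavior` would come from prior or passive data, and admitted respondents are weighted by the inverse of their admission probability so the sample still projects correctly.

```python
import random

def probabilistic_screen(p_behavior, rng=random.random):
    """Admit a respondent with probability p_behavior instead of a hard
    in/out screening rule; weight admits by 1/p so estimates project."""
    admitted = rng() < p_behavior
    weight = 1.0 / p_behavior if admitted else 0.0
    return admitted, weight

# A respondent whose prior data suggests a 25% chance of the behavior:
admitted, weight = probabilistic_screen(0.25)
```

The inverse-probability weight is what keeps the screened sample unbiased: a respondent admitted with a 25% chance stands in for four people like them.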
Social Desirability Bias & More Creative Questioning
Social desirability is very much a Market Research 101 topic, but that doesn’t mean it’s been definitively solved, or that the same solution would work in every case. The issue comes up a lot, not only in the context of respondent attitudes, but even more commonly when asking about demographics like income or age. There are lots of available solutions, some of which involve manipulating the data to ‘normalize’ it in some way, and some of which involve creative questioning of the kind FiveThirtyEight has highlighted. I think the right takeaways are:
Coming up with creative variations on your typical questions might help avoid respondent bias, and even has the potential to make questions more engaging for respondents
It’s important to think critically about whether or not creative questioning will resonate appropriately with your respondents
Plus, brainstorming alternatives is fun! For example:
Is someone you respect voting for Donald Trump?
Do the blogs you prefer to read tend to favor Trump or Clinton?
What media outlets do you visit to get your political news?
The Vital Importance of Context
At the heart of FiveThirtyEight’s commentary is a reminder of the vital importance of context. It’s all very well to push respondents through a series of scales and report means or top-box frequencies, but depending on the situation, that may tell only a small part of the story. What does an average rating of ‘6.5’ really tell you? Without proper context, this kind of result has very little inherent meaning.
So how do we establish context? Some options (all of which rely on prior planning) include:
Indexing (against past performance or competitors)
Trade-off techniques (MaxDiff, DCM)
Predictive modeling against an outcome variable
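As a small illustration of the first option, indexing turns a raw mean into something readable. The 6.5 from above and the benchmark values here are hypothetical:

```python
# Indexing a rating against a benchmark: 100 = parity, above 100 = ahead.
def index_score(score, benchmark):
    return round(score / benchmark * 100)

print(index_score(6.5, 7.2))  # 90  -> trailing the competitive benchmark
print(index_score(6.5, 5.9))  # 110 -> ahead of last wave's score
```

The same 6.5 reads as a warning sign against one benchmark and a win against another, which is exactly why the context has to be planned for up front.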
Wrapping this up, there are two takeaways that I’d like to leave you with:
First, methodology matters. It’s worthwhile to spend the time to be thoughtful and creative in your market research approach.
Second, if you aren’t already, head over to FiveThirtyEight and read their entire backlog of 2016 election coverage. The site is an incredible reservoir of market research insight, and I can say with 95% confidence that you’ll be happy you checked it out.
Liz White is a member of CMB’s Advanced Analytics team, and checks FiveThirtyEight.com five times a day (plus or minus two times).
Whether you support Trump, Clinton, or neither, there’s no denying the 2016 race to the White House has been an emotional one. Voters of all stripes are feeling a range of emotions, from fear and anxiety to anger.
But why should a market researcher care about the emotional aspects of the election? Because in elections, just like market research, emotions play a key role in determining future behaviors. For example, research suggests that voters’ feelings towards a candidate strongly influence not only who they’ll vote for, but if they’ll vote at all (Valentino et al., 2011; Finn and Glaser, 2010).
We know emotions impact voting behavior, but what’s the best way to gauge voter sentiment? Should we look to social media? Should we turn to neuroscience (biometrics such as fMRIs and EKGs)? In our client work, we take a quantitative approach to emotional impact analysis (CMB’s EMPACT℠) that measures brands’ emotional impact on consumers. Since Trump and Clinton have each built their own distinctive “brand” throughout the 2016 election, campaign managers might consider a quantitative “explicit” approach to measuring this aspect of consumer (and voter) decision-making. A quantitative methodology can:
Provide a quick and systematic approach to gathering big data: Quantitative analyses, like EMPACT℠, are both fast and systematic, allowing for target market/segment group comparisons that can be tracked over time. This method is ideal for a campaign manager looking to measure the sentiments of his or her candidate’s supporters. The more information that we have about the American public, specifically those connected to voting behavior, the better insight we have into the emotional battleground that is a contentious campaign. It’s also helpful to track voter sentiment over time to pick up on changes (e.g. October surprises) at specific junctures.
Compare the emotions a brand (or candidate) activates to those of their relevant competitors: Respondents might be asked to rate how a recent and relevant experience with a brand/product made them feel. This approach helps to determine a variety of emotions from basic (e.g., happiness and sadness) to social and self-conscious (e.g., pride and embarrassment). Applied to the presidential election, a quantitative approach could help determine who voters considered the “winner” of the three debates. We can look beyond the facts and policies and compare the emotions elicited by each candidate. Because presidential debates are key voter decision points, it’s imperative to track how citizens perceived each candidate’s performance beyond anger or fear.
Identify which emotions drive key outcomes (e.g., consideration, loyalty): After determining which emotions are activated by a specific brand/product, it’s possible to identify which are the most important for driving decisions and outcomes. Instead of focusing on polling numbers and predicting forecast stats, campaign managers could try to understand why voters have chosen a specific candidate. Which specific emotions are motivating voter turnout? Another use of this information is to see if emotional drivers differ by segment. How do Republicans feel about a specific candidate vs. Democrats and Independents? A strategic candidate would look at the specific emotions that drive voter support for or against them.
In the US, voter turnout hovers around 60%. Because researchers have found that emotional sentiment is linked to voter turnout, it’s an important part of the puzzle. If campaigns could measure how their constituents really feel during the election process, they could more effectively tailor their campaigns to elicit the kinds of emotions that translate into votes.
Like all brands, candidates are selling themselves to the public. A smart candidate should take advantage of techniques that will help inform how they should present themselves to voters. But no matter how you feel towards either candidate or the election in general, go out and make a difference by rocking the vote on November 8th!
Victoria is an Associate Researcher at CMB. She loves to eat any kind of pizza, travel to (somewhat) exotic places, and couldn’t have written this post without Spotify.
The debate over how and when to test for statistical significance comes up nearly every engagement. Why wouldn’t we just test everything?
-M.O. in Chicago
You’re not alone. Many clients want all sorts of things stat tested, and while some things can be tested, others can’t. For what can be tested, as market researchers we need to be mindful of two potential errors in hypothesis testing. A Type I error occurs when we reject a true null hypothesis: for example, concluding that Coke tastes better than Pepsi when, in fact, there is no real difference.
A Type II error occurs when we accept the null hypothesis when in fact it is false: for example, concluding a part is safe to install, and then the plane crashes. We choose the probability of committing a Type I error when we choose alpha (say, .05). The probability of a Type II error is a function of statistical power. We seldom take this side of the equation into account, and for good reason: most decisions we make in market research don’t come with a huge price tag if we’re wrong, and hardly anyone dies if the results of a study are wrong. Still, the goal in any research is to minimize both types of errors, and the best way to do that is to use a larger sample.
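The link between sample size and power can be made concrete with a quick calculation. The sketch below uses a standard normal approximation for a two-sided, two-sample proportion test; the 50% vs. 55% effect size and the sample sizes are illustrative assumptions, not numbers from the column:

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def power_two_proportions(p1, p2, n, z_alpha=1.96):
    """Approximate power of a two-sided two-sample proportion test
    with n respondents per group, testing at alpha = .05."""
    se = sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
    return normal_cdf(abs(p1 - p2) / se - z_alpha)

# Hypothetical example: detecting 50% vs. 55% concept acceptance.
# Power (the chance of catching a real difference) rises with n.
for n in (200, 500, 1000, 2000):
    print(n, round(power_two_proportions(0.50, 0.55, n), 2))
```

Holding alpha fixed at .05, the only way to drive down the Type II error rate is to field more sample, which is exactly the cost tradeoff described next.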
This conundrum perfectly illustrates my “Life is a conjoint” mantra: in testing, we’re always trading off the accuracy of the results against the cost of executing a study with a larger sample. We also tend to violate the true nature of hypothesis testing: more often than not, we don’t formally state a hypothesis. Rather, we statistically test everything and then report the statistical differences.
Consider this: when we compare two scores (say, concept acceptance among males vs. females), we accept that we’ll flag a statistical difference 5% of the time simply by chance (α = .05).
In reality, that’s not what we do: we perform hundreds of tests in almost every study. Say we have five segments and we want to test them for differences in concept acceptance. That’s 10 pairwise t-tests, which gives us roughly a 40% chance of flagging at least one difference simply due to chance. And that’s in every row of our tables. The better approach would be to run an analysis of variance on the table to determine whether any cell might differ, then build hypotheses and test them one at a time. But we don’t do this because it takes too much time. I realize I’m not going to change the way our industry does things (I’ve been trying for years), but maybe, just maybe, you’ll pause for a moment when looking at your tables to decide whether a “statistical” significance is really worth reporting: are the results valid, and are they useful?
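The arithmetic behind that multiple-comparisons warning is easy to verify: with k independent tests each run at α = .05, the chance of at least one false positive is 1 − (1 − α)^k, and a Bonferroni correction simply divides α by the number of tests. This is a generic illustration of that math, not code from the column:

```python
from math import comb

alpha = 0.05
segments = 5
k = comb(segments, 2)  # pairwise t-tests among 5 segments: 10

# Probability of at least one false positive across k independent tests.
familywise = 1 - (1 - alpha) ** k
print(f"{k} tests -> {familywise:.0%} chance of a spurious 'finding'")

# Bonferroni correction: test each comparison at alpha / k instead.
bonferroni_alpha = alpha / k
print(f"Bonferroni-adjusted alpha per test: {bonferroni_alpha:.3f}")
```

With 10 tests the familywise error rate is about 40%, which is why running an omnibus ANOVA first (or adjusting alpha) beats flagging every pairwise difference in the tables.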
Dr. Jay loves designing really big, complex choice models. With over 20 years of DCM experience, he’s never met a design challenge he couldn’t solve.
Got a burning research question? You can send your questions to DearDrJay@cmbinfo.com or submit anonymously here:
Once again CMB is participating in Light the Night, a fundraising campaign for the Leukemia & Lymphoma Society, culminating in a walk on Boston Common on October 20th. Our participation began back in 2008, when our coworker, Catherine, was diagnosed with Hodgkin’s Lymphoma. After two rounds of chemo, a stem cell transplant, and proton radiation therapy, I’m happy to report that she recently celebrated six years in remission!
The money raised is used to fund research for new therapies and treatments (including those that saved Catherine) and ensure patient access to treatments. Last year alone, LLS invested $67.2 million in blood cancer research.
Over the past eight years, we’ve raised over $80K—not bad for a 65-person company! LTN is truly a company-wide endeavor: we host bake sales, BBQs, silent auctions, and a very competitive cornhole tournament. This year we've raised over $6K, and we're still going strong. We'd like to give a big thank you to all of our clients, partners, and friends who've donated!
If you’d like to join us in the fight against cancer, please donate here or meet us on Thursday October 20th at 5PM on the Boston Common.
That's not the only way to join the CMB team, whether you are an innovation guru, a tech whiz, or a strategic selling machine, we’re looking for collaborative, engaged professionals:
Knect365’s (formerly IIR) TMRE conference is the diva of the insights conference world—from October 17th to the 20th—you can expect thousands of attendees, six tracks running simultaneously, and terrific keynote speakers like Freakonomics’ Stephen Dubner. All of this adds up to a significantly higher price tag, so let’s talk about how you’re going to communicate conference ROI to your CMO.
Plan prior to the conference:
Write your elevator pitch: Whether you’re reserved or chatty, you’re going to meet a lot of new people at TMRE, so take a minute to prepare your elevator speech:
“My name is ___ and I work for ___, the makers of ___.” If you work for Amazon, people understand that, but if you work for SC Johnson or Coca Cola, specify the product line.
“In the coming year we’re focused on improving our ___, and for that we’re interested in ___.” Here’s an example: “We just finished up a big journey study, which will help us drive the right messages to the right people at the right moments.” You can follow that up with something like: “In the coming year we’re going to do a lot of messaging optimization and concept testing to bring those moments into focus by segment.” That’s your hook, and your reason for the conversation you’re having.
Next comes your question. You’ve offered a bit about what you do, but who are you talking with? If they are a peer or competitor, ask, “How about you?” That’s it. You need to bring this information back to your company. If they are a supplier of research, ask, “How would you approach this if you were pitching to me?”
Highlight the agenda: Figure out which sessions you want to attend. Tip: I circle my agenda based on who will be speaking vs. the topic itself. I want a mix of dot com, financial services, technology, healthcare, hospitality, and consumer goods, so I circle every brand that interests me and then I go back and take a look at the titles. If I’m interested in mobile/geotagging more than dashboards (or vice versa), then I can narrow it down from there.
Block your calendar for the October 17-20 dates: Activate your out of office message and be sure to mention that you’re WORKING offsite all day. At the price of any conference, it’s really a crime to be dialing in to staff meetings or writing emails in your hotel room. Plan ahead…if you have a big deadline, consider moving it. The Conference ROI of you missing the conference…it’s not pretty.
During the Conference:
Recap three of the sessions in writing so you can talk specifically about the cases during a future lunch and/or a staff meeting: It’s not enough to just listen to each session and then, back at the office, proclaim, “the conference was great.” You need to listen fiercely, pen or tablet in hand, and write down who spoke, what they said, and how it can be useful to your business. This is key: you need to find a way to weave at least two of those three sessions into your future behaviors. TMRE should CHANGE the way you think, and the only way change happens is if you bring it on yourself.
Make a few new acquaintances (and connect on LinkedIn): Because you need to keep actively learning in and across industries, use TMRE to expand your network. One of our clients recently told me, “I’m painfully introverted so I just go to the sessions.” But how are you going to remember that incredible speaker from ___ or that kind person from ___ unless you connect on LinkedIn? It may seem awkward, but when it comes time to look for new methodologies, share best practices or recruit new hires, you’ll be happy you connected with a wider net of people. Companies can get insular, so TMRE offers you the opportunity to interact with people you wouldn’t typically meet.
Bonus tip—take a photo of yourself with one of the famous authors and share it with your CMO: OK, you don’t NEED to do this, but you need to come up with one visual representation of you at work and broadening your horizons at the IIR TMRE. Best-selling authors including Stephen Dubner (Freakonomics, SuperFreakonomics), Zoe Chance (Better Influence) or Francis Glebas (The Animator’s Eye) will be there, so you can check out at least one of those books prior to the conference. Or you can take a picture of the stage for one of your favorite sessions and share that. A picture tells a great story!
Julie blogs for GreenBook, ResearchAccess, and CMB. She’s an inspired participant, amplifier, socializer, and spotter in the Twitter #mrx community. Talk research with her @julie1research.
Headed to TMRE? Stop by Booth 516 and say hello to Julie and the rest of the CMB team. And don't forget to catch CMB's Brant Cruz and Electronic Arts' (EA's) Jodie Antypas as they share how EA leveraged insights to make a dramatic company turnaround: October 18th @11:15am.