Discrete Choice and the Path to a Car Purchase

Posted by Heidi Hitchen

Wed, Jun 11, 2014


One chilly night in February, I was heading home from a friend’s birthday festivities when my car just stopped working. I had just enough oomph and momentum from the hill I was on to pull off to the side of the road. I found myself stranded in the middle of the city, waiting for a tow truck until 4 AM and vowing the whole time that I wouldn’t deal with this clunker anymore. It was time for a new car. During the next two weeks, without wheels, I did my research on the Internet and made my way over to a local Toyota dealership. I walked in knowing exactly what I wanted: a 2014 green Corolla. I even knew the various payment and financing options I was prepared for. And wouldn’t you know it—I ended up getting exactly what I said I wanted.

As easy as that sounds, my path wasn’t straight to the doors of the Toyota dealership. I had gone through a variety of different makes, models, financing options, and colors. At the end of researching each car, I asked myself not only if I would really buy this car, but also if I would truly be happy with it. It wasn’t until I asked myself this question for the first time that I realized I was essentially creating my own Discrete Choice Measurement (DCM), specifically a Dual-Choice DCM (DCDC).

DCM is a technique that presents several configurations of product features to respondents and asks them to pick which configuration they would most prefer. In a Dual-Choice DCM, a follow-up question is asked to determine whether the respondent would actually buy the preferred package. This second question is crucial—I might choose a Lamborghini but there’s little chance (OK, no chance) that I will actually purchase one.

Dual-Choice DCM scenarios are the gold standard for product development work and can lend more accurate insights into a buying scenario by:

  • more closely representing a consumer’s purchase decision
  • helping us better understand consumer preferences
  • more accurately reflecting market potential
  • dissecting the product into pieces, which allows us to measure price sensitivity and willingness to pay for the product as a whole as well as individual components
  • simulating the market interest in thousands of potential product packages for product optimization as the analysis examines how a product can be changed to perform better by identifying (and tweaking) individual product features that affect purchase decisions
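The simulation idea above can be sketched in code. The workhorse behind most DCM simulators is a multinomial logit model: each product configuration gets a total utility (the sum of part-worths for its features), and its predicted market share is proportional to the exponential of that utility. Everything below (the attribute names, levels, and part-worth values) is hypothetical and for illustration only; a real study estimates part-worths from respondents' observed choices.

```python
import math

# Hypothetical part-worth utilities, for illustration only.
# A real DCM estimates these from respondents' choice tasks.
PART_WORTHS = {
    "make":  {"Toyota": 0.8, "Honda": 0.6, "Lamborghini": 1.5},
    "price": {"$20k": 0.9, "$30k": 0.3, "$400k": -2.5},
    "color": {"green": 0.2, "black": 0.1},
}

def utility(profile):
    """Total utility of a configuration = sum of its part-worths."""
    return sum(PART_WORTHS[attr][level] for attr, level in profile.items())

def choice_shares(profiles):
    """Multinomial logit: share of each profile is exp(U) / sum of exp(U)."""
    exp_u = [math.exp(utility(p)) for p in profiles]
    total = sum(exp_u)
    return [e / total for e in exp_u]

corolla = {"make": "Toyota", "price": "$20k", "color": "green"}
lambo   = {"make": "Lamborghini", "price": "$400k", "color": "black"}
shares = choice_shares([corolla, lambo])
```

Note how the price part-worth sinks the Lamborghini's share despite its appealing make: that is exactly the gap the dual-choice follow-up question probes, which a simulator would model with a separate buy/no-buy threshold on top of these shares.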

Being able to produce more realistic results is an important part of any research, and it shows how DCMs can support a wide range of decision-making processes. Running an informal DCM in my head prior to purchasing my car was genuinely helpful, so it’s no surprise that our clients often rave about the DCMs and Dual-Choice DCMs in our analytics program.

Heidi is an Associate Researcher who graduated from Quinnipiac University with a dual-degree in Marketing and Over-Involvement. After realizing she lacks hobbies now that student organizations don’t rule her free time, Heidi is taking sailing classes and looks forward to smooth sailing on the Charles River by the end of the summer.

Want to know more about our advanced analytic techniques, including our innovative Tri-Choice Approach? Let us know and we’ll be happy to talk through how we choose the right techniques to uncover critical consumer insights. Contact us.

Topics: Advanced Analytics, Research Design

Living in a World of Significance

Posted by Nick Pangallo

Wed, Apr 02, 2014


Guess what? It’s 2014! The year of Super Bowl XLVIII, the 100th anniversary of the start of World War I, the 70th anniversary of D-Day, and a whole host of other, generally not-that-impactful events, anniversaries, and changes. One event that will happen in 2014, though, is something which happens every two years: U.S. national elections.

This seems like an odd way to start a blog, but bear with me for a moment. Show of hands out there (ed. note: you’re welcome to actually raise your hand if you want, but I wouldn’t): how many of you readers have, at some point, become tired of the relentless political horse-race, always talking about who’s ahead and who’s behind for months and years on end? I know I have, and chances are it’s happened to you too, but I’m going to ask that we all take a deep breath and dive once more into the fray.

The question of “who’s ahead” and “who’s behind” brings us to our discussion of statistical significance.  I’m going to talk today about how it works, how it can be used, and why it might not be quite as beneficial as you might think.

First, a quick refresher: when we take survey responses, test results, etc. from a sample of people that we think represents some broader population, there is always the risk that whatever results we see might be due to random chance instead of some other factor (like actual differences of opinion between two groups). To control for this, we can conduct significance testing, which tells us the likelihood that the result we have obtained is due to random chance, instead of some other real, underlying factor. I won’t bore you with the details of terms like p, α, one- vs. two-tailed tests and the like, but know that the methodology is sound and can be looked up in any AP-level statistics textbook.

Most organizations assume an “error range” of 5%, meaning that a data finding is statistically significant if the odds are 5% (or less) that the results are due to random chance. So, if we run significance testing on Millennials vs. Gen X’ers in a survey, and we find that the two are significantly different, we are saying there is a 5% (or less) chance that those differences are just random, and not due to actual underlying opinions, or price-sensitivity, or political beliefs, or receptiveness to that new hair-growth prescription, or whatever else you might be testing.

Now, if you have a huge data set and a fairly advanced statistical program, calculating significance is easy. But since most people don’t have access to these tools, there is another, much simpler way to think about significance: the margin of error. The margin of error is a simple way of determining how much higher or lower a result can be before it is considered significantly different. For instance, if your margin of error is ± 5% and your data points are 60% and 49%, your data are (likely) significantly different; if your data points are 55% and 51%, they are not.

This brings us back to the political analogy; calculating the margin of error is how we determine whether Politician X is ahead of Politician Y, or vice-versa.

Let’s say, for example, a poll of 1,000 registered voters was conducted with a sound methodology, asking which of two candidates respondents support (assume no other options are presented in this circumstance, a small but notable difference for a future blog). We find that 48% support Politician X and 52% Politician Y. Because the sample size is 1,000, the margin of error is ± 3.1%. Since the difference between the two politicians is less than twice the margin of error (Politician X’s share could be as high as 51.1% and Politician Y’s as low as 48.9%, so their ranges overlap), you would hear this reported as a “statistical tie” in the news. News organizations won’t report one candidate as ahead of the other as long as the two are within that margin of error.
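The arithmetic behind that "statistical tie" call is simple enough to check directly. A minimal sketch, assuming the conventional 95% confidence level (z = 1.96) and the worst-case proportion p = 0.5 that pollsters typically use:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a sample proportion; p=0.5 is the worst case."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1000
moe = margin_of_error(n)           # ~0.031, i.e. +/- 3.1 points

x, y = 0.48, 0.52                  # Politician X and Politician Y
# Each candidate's true share could plausibly sit one margin of error away.
x_high = x + moe                   # ~0.511
y_low  = y - moe                   # ~0.489
statistical_tie = x_high >= y_low  # intervals overlap -> "statistical tie"
```

Because X's plausible high (51.1%) exceeds Y's plausible low (48.9%), the two ranges overlap and the race is reported as a tie.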

So that’s the political world, and there are many reasons networks and polling organizations choose to behave this way (aversion to being wrong, fear of being seen as taking sides, and fear of phone calls from angry academics, among others).  But in the research world, we don’t usually have nice, round sample sizes and two-person comparisons – and that’s why relying on statistical significance and margin of error when making decisions can be dangerous.

Let’s go back to that political poll.  The original sample size was N=1,000 and produced a margin of error of ± 3.1%.  Let’s see what happens when we start changing the sample size:

  • N=100: ± 9.8%
  • N=200: ± 6.9%
  • N=500: ± 4.4%
  • N=750: ± 3.6%
  • N=1,000: ± 3.1%
  • N=1,500: ± 2.5%
  • N=2,000: ± 2.2%
  • N=4,000: ± 1.6%

Notice the clear downward trend: as sample sizes grow, margins of error shrink, but with diminishing returns.
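All of the margins above come from the same standard formula, z * sqrt(p(1-p)/n). A short loop reproduces the list (assuming z = 1.96 and worst-case p = 0.5; the printed values match the list above to within rounding):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case 95% margin of error for a sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

for n in [100, 200, 500, 750, 1000, 1500, 2000, 4000]:
    print(f"N={n:>5,}: +/- {margin_of_error(n) * 100:.1f}%")
```

The square root in the formula is what produces the diminishing returns: quadrupling the sample size only halves the margin of error.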

Now, we at CMB would advocate for larger sample sizes, since they allow more freedom within the data (looking at multiple audiences, generally smaller error ranges, etc.).  It’s no secret that larger sample sizes are better.  But I’ve had a few experiences recently that led me to want to reinforce a broader point: just because a difference is significant doesn’t make it meaningful, and vice versa.

With a sample size of N=5,000, a difference of 3% between Millennials and Gen X’ers would be significant, but is a 3% difference ever really meaningful in survey research?  From my perspective, the answer is a resounding no.  But if your sample size is N=150, a difference of 8% wouldn’t be significant…but eight percentage points is a fairly substantial difference.  Sure, it’s possible that your sample is slightly skewed, and with more data that difference would shrink.  But it’s more likely that this difference is meaningful, and by looking at only statistical significance, we would miss it. And that’s the mistake every researcher needs to avoid.

If I can leave you with one abiding maxim from today, it’s this: assuming some minimum sample size (75, 100, whatever makes you comfortable), big differences usually are meaningful, small differences usually are not.  Significance is a nice way to be certain in your results, but we as researchers need to support business decisions with meaningful findings, not (just) significant ones.

Nick Pangallo is a Project Manager in CMB’s Financial Services, Healthcare, and Insurance practice.  He has a meaningful-but-not-significant man-crush on Nate Silver.

Topics: Advanced Analytics, Research Design

What's the Story? 5 Insights from CASRO's Digital Research Conference

Posted by Jared Huizenga

Wed, Mar 19, 2014

Who says market research isn’t exciting? I’ve been a market researcher for the past sixteen years, and I’ve seen the industry change dramatically since the days when telephone questionnaires were the norm. I still remember my excitement when disk-by-mail became popular! But I don’t think I’ve ever felt as excited about market research as I do right now. The CASRO Digital Research Conference was last week, and the presentations confirmed what I already knew—big changes are happening in the market research world. Here are five key takeaways from the conference:

  1. “Market research” is an antiquated term. It was even suggested that we change the name of our industry from market research to “insights.” In fact, the word “insights” came up multiple times throughout the conference by different presenters. This makes a lot of sense to me. Many people view market research as a process whereas insights are the end result we deliver to our clients. Speaking for CMB, partnering with our clients to provide critical insights is a much more accurate description of our mission and focus. We and our clients know percentages by themselves fail to tell the whole story, and can in fact lead to more confusion about which direction to take.

  2. “Big data” means different things to different people. If you ask ten people to define big data, you’ll probably get ten different answers. Some define it as omnipresent data that follows us wherever we go. Others define it as vast amounts of unstructured data, some of which might be useful and some not. Still others call it an outdated buzzword. No matter what your own definition of big data is, the market research industry seems to be in somewhat of a quandary about what to do with it. Clients want it and researchers want to oblige, but do adequate tools currently exist to deliver meaningful big data? Where does the big data come from, who owns it, and how do you integrate it with traditional forms of data? These are all questions that have not been fully answered by the market research (or insights) industry. Regardless, tons of investment dollars are currently being pumped into big data infrastructure and tools. Big data is going to be, well, BIG. However, there’s a long way to go before most companies will be able to use it to its full potential.

  3. Empathy is the hottest new research “tool.” Understanding others’ feelings, thoughts, and experiences allows us to understand the “why behind the what.”  Before you dismiss this as just a qualitative research thing, don’t be so sure.  While qualitative research is an effective tool for understanding the “why,” the lines are blurring between qualitative and quantitative research. Picking one over the other simply doesn’t seem wise in today’s world. Unlike with big data, tools do currently exist that allow us to empathize with people and tell a more complete story. When you look at a respondent, you shouldn’t only see a number, spreadsheet, or fancy graphic that shows cost is the most important factor when purchasing fabric softener. You should see the man who recently lost his wife to cancer and who is buying fabric softener solely based on cost because he has five years of medical bills. There is value in knowing the whole story. When you look at a person, you should see a person.

  4. Synthesizers are increasingly important. I’m not talking about the synthesizers from Soft Cell’s version of “Tainted Love” or Van Halen’s “Jump.” The goal here is to once again tell a complete story and, in order to do this, multiple skillsets are required. Analytics have traditionally been the backbone of market research and will continue to play a major role in the future. However, with more and more information coming from multiple sources, synthesizers are also needed to pull all of it together in a meaningful way. In many cases, those who are good at analytics are not as good at synthesizing information, and vice versa. This may require a shift in the way market research companies staff for success in the future. 

  5. Mobile devices are changing the way questionnaires are designed. A time will come when very few respondents are willing to take a questionnaire over twenty minutes long, and some are saying that day is coming within two years. The fact is, no matter how much mobile “optimization” you apply to your questionnaire, the time to take it on a smartphone is still going to be longer than on PCs and tablets. Forcing respondents to complete on a PC isn’t a good solution, especially since the already elusive sub-25-year-old population spends more time on mobile devices than PCs. So what’s a researcher to do? The option of “chunking” long questionnaires into several modules is showing potential, but it requires careful questionnaire design and a trusted sampling plan. This method isn’t a good fit for studies where the analysis requires each respondent to complete the entire questionnaire, and the number of overall respondents needed is likely to increase with this methodology. It also requires client buy-in. But it’s something that we at CMB believe is worth pursuing as we leverage mobile technologies.

Change is happening faster than ever. If you thought the transition from telephone to online research was fast—if you were even around back in the good old days when that happened—you’d better hold onto your seat! Information surrounds every consumer. The challenge for insights companies is not only to capture that information but to empathize, analyze, and synthesize it in order to tell a complete story. This requires multiple skillsets as well as the appropriate tools, and honestly the industry as a whole simply isn’t there yet. However, I strongly believe that those of us who are working feverishly to not just “deal” with change but to leverage it, and who are making progress with these rapidly changing technological advances, will be well equipped for success.

Jared is CMB’s Director of Field Services, and has been in the market research industry for sixteen years. When he isn’t enjoying the exciting world of data collection, he can be found competing at barbecue contests as the pitmaster of the team Insane Swine BBQ.

 


Topics: Qualitative Research, Big Data, Mobile, Research Design, Quantitative Research, Conference Insights

All Aboard: Why Planning a Cruise is like Planning for Market Research

Posted by Cara Lousararian

Tue, Feb 25, 2014

In a few weeks I’ll be taking a cruise to the Caribbean—a cruise that I have spent 9 months planning. Needless to say, I’ve been a little preoccupied making sure everything is in place to ensure a flawless vacation. And as I sorted through all of these details, I couldn’t help but notice the similarities between vacation planning and how we at CMB prepare for a smooth, successful research project. You might be thinking “this is a woman who really needs a vacation.” But hear me out.

The first step of vacation planning is to put together a list of possible locations for a trip and select an appropriate timeframe. Planning a successful research study works on the same principles: every project starts with taking the time to define and understand the main decisions that need to be made from the research. We use tools like our Business Decision Worksheet, which directly ties the questionnaire, analysis, and reporting to the business decisions, letting us identify and gain consensus on the most pressing decisions and ensuring the results are actionable.

We also know how critical it is to develop (and stick to) a schedule that aligns with our clients’ needs. One of the first things that we at CMB do at the beginning of each project is put together a schedule outlining each key milestone of the process, all the way up to delivery of the final results. Putting together a detailed schedule helps us align resources and ensure we stay on track to meet our client deadlines. Knowing how much our clients rely on our research makes the scheduling a crucial part of the process and an important key to our success in executing projects.

Once the schedule is set, the project kicks off and the exploratory phase begins. I personally did lots of exploratory research before selecting my specific cruise line, ship, and date. Through this exploratory research, I was able to drill down and identify what aspects were most important in making my decision. Exploratory phases are also crucial for determining what will be most important to measure in the questionnaire and which areas are “nice to haves,” but not necessary to be included for the project.

Exploratory research also helps generate new ideas that may not have been previously considered. Similar to the many resources available for cruise planning (cruise line website, message boards, etc.), exploratory research for a project can span several platforms, including a review of secondary research, conducting in-depth interviews or focus groups, or hosting online discussion boards.

Sometimes the exploratory phase of a project gets less attention than it deserves because it doesn’t seem as “glamorous” as the analysis and insights that come from the quantitative research. However, all market researchers know that the level of planning can make or break a project. CMB’s focus on planning allows us to anticipate potential issues down the road so that we can troubleshoot effectively and properly set expectations with our clients. Of course, just as you can’t predict a rogue wave, there are times when the unexpected happens. When it does, we know we need to remain flexible enough to make course corrections and steer back to the business decisions our clients are trying to make.

I know we can only take the analogy so far; when all is said and done, often the only tangible evidence of having been on a vacation is the pictures. While the deliverables we produce for our clients are polished and shiny, they’re hardly the end “goal” of the research. Successful research is useful and used, and that starts well before a questionnaire is designed.

Cara is a Research Manager at CMB. She enjoys spending time with her husband Brett, her dog Nala, and planning her next vacation.

Topics: Business Decisions, Travel & Hospitality Research, Research Design

The 4 Step Cure for Choice Overload

Posted by Kyle Steinhouse

Tue, Dec 03, 2013

It was a recent Saturday afternoon, and I had a laundry list of errands to complete. My last stop was the liquor store where I immediately found myself stalled in the vodka aisle. My list simply read “vodka,” but the vodka market is saturated with diverse options, so which one should I choose? Just a few of the attributes where options vary widely include: reputation (“Hello, Grey Goose”), quality (“Hello again, Grey Goose”), name (“Good evening, Little Black Dress”), packaging (“Hey, Crystal Head”), flavor (“Hi, Van Gogh PB&J”), and price (“Sup, Aristocrat?”). Pinnacle Vodka alone boasts 30 different flavors in their Cocktail Catalog.



Having all these choices is great, right? I thought so too at first, but then I spent five minutes pacing that same 20-foot stretch, and then ten minutes (my palms sweaty), and, oh please don’t let me have just spent 15 minutes in the vodka aisle. The diagnosis was clear: I was exhibiting all the symptoms of the choice overload blues.

Choice overload occurs when the addition of more choices becomes overwhelming and actually starts to have adverse effects (authors Scheibehenne, Greifeneder, and Todd provide a robust description in their 2010 meta-analysis “Can There Ever Be Too Many Options? A Meta-Analytic Review of Choice Overload”). In my case, I was having trouble committing to a choice, which resulted in a longer-than-expected errand. Scheibehenne et al. also describe other effects, like a decrease in satisfaction with the final choice and an increased likelihood of not making any choice at all.

Is there a cure?

Recent research by Townsend and Khan suggests that a verbal depiction of information—text—can decrease choice overload when there are a large number of choices, because verbal information requires more deliberate processing. Perhaps, with an inventory list of vodka SKUs, I would have more quickly eliminated Naked Jay Vodka’s Big Dill Pickle flavor.

Of course, the impact of choice overload goes well beyond the vodka aisle; think about choosing investments, a tablet for your child, or a loyalty program. It’s especially relevant for those of us who design questionnaires to be rigorous and yield insights without drowning our respondents. One of the best-known researchers of choice overload, Dr. Sheena Iyengar, offers these 4Cs to consider when you’re charged with designing and presenting options:

  1. Cut: very simply, if possible, consider reducing the number of options

  2. Concretize: help people understand, in a vivid way, the consequences of the choices they make—make the benefits real to your prospective customer

  3. Categorize: categories help people tell choices apart, and the categories need to make sense to the customer, not just to you, the provider

  4. Condition for complexity: ask the questions with the fewest choices first and the questions with the most choices last

So what happened next on my liquor store errand? Lucky for the vodka market, I don’t like to leave anything unchecked on my errand list. I ended up with a bottle of Van Gogh Vodka Dutch Caramel.

Kyle is a recent transplant to Boston and to CMB. He enjoys long runs along the Charles, the freedom of choice, and vodka cocktails.

Do you know a Segmentation guru, a tech whiz, or a strategic selling machine? We’re looking for collaborative, engaged professionals to join our growing team. Check out our newest Career Opportunities!

 

Topics: Consumer Insights, Research Design, Customer Experience & Loyalty