
Don't Over-Generalize My Generation

Posted by Reed Guerino

Wed, Apr 12, 2017

I’m sure you’ve heard that Millennials are entitled narcissists (or mold-breaking visionaries) and that Gen Z expects instant gratification (or has the most integrity of any generation yet). Of the companies pouring millions of research dollars into generational research, who’s getting it right? Well, maybe nobody.

In fact, we can’t even agree on where one generation begins and the next ends. Millennials are generally considered those born between 1980 and 2000, but there’s disagreement over the exact years—some stretch the range from the mid-1970s to the mid-2000s, while others hold strictly to 1980 through 2000. Comparing the mid-1970s to 1980, or 2000 to the mid-2000s, isn’t a huge discrepancy. The point, however, is that there is a discrepancy. And with growing interest in the emerging generation (Gen Z, “Post-Millennials,” “iGeneration,” “Plurals”), once again we face an arbitrary age designation and a battle over who best understands these future consumers.

As a market researcher myself, I’m the first to admit that researchers will be tempted to define and assign attributes to Gen Z early on, thanks to our natural tendency to categorize and bucket people into mutually exclusive groups. But in our need for clean, labeled groups, we forget that some groups aren’t mutually exclusive, and that different groups (or in this case, generations) may share overlapping qualities.

What’s more, generations aren’t as homogenous as we’d like to think. While there are overarching behaviors and attributes assigned to each age group, there’s plenty of room for variation within a cohort. For example, we recently released a report segmenting Millennials into five distinct personas with different preferences, attitudes, and behaviors. Our self-funded study focused specifically on financial behaviors, but it serves as a microcosm for the rest of the generation. You can learn more about it here. This research underscores the inaccuracies that can result from defining a generation too narrowly.

There will always be a place for analysis by generation, but we have a lot more data to consider today than ever before. In his 2013 book "Buyographics", Matt Carmichael reaffirms the importance of demographics, but emphasizes analysis shouldn’t stop there. He explains, "Demographics drive consumer behavior, and that's as true today as ever. We just have better means, thanks to more data sources, of measuring those behavioral impacts and targeting around them. All data needs to be considered through a broader lens and put into context."

Cuts by generation alone ignore the impact of geography and make assumptions about how age influences behavior and psychographics. For example, we often find that our psychographics (e.g., our attitudes and aspirations), regardless of age, are good indicators of who we are and who we want to be. In fact, these aspirations (e.g., Who do I want to be?) are strong motivators of brand consideration and loyalty. This means that if two people from separate generations identify with the same type of person, they’ll likely share an affinity for the same brand because of that identification, not their age.

We’ll hear a great deal about who Gen Z is in the next few years, until they’re eclipsed by the next group. But researchers, advertisers, and marketers should guard against categorizing Gen Z—and the generations that follow—solely by date of birth. Without a multi-faceted approach to understanding consumers (considering demographics, psychographics, etc.), we’ll continue to yield narrow insights that leave marketers producing ads that alienate their target audiences.

Want to learn more about Millennials’ financial needs and expectations and what that means for your industry? Check out our webinar!

Watch here!

Reed Guerino is an Associate Researcher at CMB who is an entitled Millennial on the side and is bitter he missed being the “mature and in control” generation by 1-5 years.

Topics: millennials, Consumer Pulse, research design

A New Year’s Resolution: Closing the Gap Between Intent and Action

Posted by Indra Chapman

Wed, Jan 04, 2017


Are you one of the more than 100 million adults in the U.S. who made a New Year’s resolution? Do you resolve to lose weight, exercise more, spend less and save more, or just be a better person?

Top 10 New Year's Resolutions for 2016:

  • Lose Weight
  • Get Organized
  • Spend Less, Save More
  • Enjoy Life to the Fullest
  • Stay Fit and Healthy
  • Learn Something Exciting
  • Quit Smoking
  • Help Others in Their Dreams
  • Fall in Love
  • Spend More Time with Family

[Source: StatisticBrain.com]

The actual number varies from year to year, but generally more than four out of ten of us make some type of resolution for the New Year. And now that we’re a few days into 2017, we’re seeing the impact of those resolutions: gyms and fitness classes are crowded (Pilates, anyone?), and self-improvement and diet book sales are up.

But… (there’s that inevitable but!), despite the best of intentions, within a week, at least a quarter of us have abandoned that resolution, and by the end of the month, more than a third of us have dropped out of the race. In fact, several studies suggest that only 8% of us actually go on to achieve our resolutions. Alas, we see that behavior no longer follows intention.

It’s not so different in market research, where we see the same gap between consumer intention and behavior. Sometimes the gap is fairly small; other times it’s substantial. Consumers (with the best of intentions) tell us what they plan to do, but their follow-through isn’t always consistent. This, as you might imagine, can lead to bad data. [Tweet this!]

So what does this mean?

To help close the gap and gather more accurate data, ask yourself the following questions when designing your next study:

  • What are the barriers to adoption or the path to behavior? Are there other factors or elements within the customer journey to consider?
  • Are you assessing the non-rational components? Are there social, psychological, or economic implications of following through with that rational selection? After all, many of us know that exercising daily is good for us, but few of us follow through.
  • Are there other real-life factors to consider in your analysis of the survey? Does the respondent’s financial situation make that preference more aspirational than intentional?

So what are your best practices for closing the gap between consumer intent and action? If you don’t already have a New Year’s resolution (or if you do, add this one!), why not resolve to make every effort to connect consumer intent to behavior in your studies during 2017?

Another great resolution is to become a better marketer!  How?

Register for our upcoming webinar with Dr. Erica Carranza on consumer identity and the power of measuring brand user image to help create meaningful and relevant messaging for your customers and prospects:

Register Now!

Indra Chapman is a Senior Project Manager at CMB who has resolved to set goals in lieu of New Year’s resolutions this year. In the words of Brad Paisley, the first day of the new year “is the first blank page of a 365-page book. Write a good one.”

Topics: data collection, research design

OMG! You Won’t Believe the 3 Things Segmentation and BuzzFeed Quizzes have in Common!

Posted by Amy Maret

Wed, Aug 31, 2016

“Which Starbucks Drink Are You?” “What Role Would You Play in a Disney Movie?” “Which ‘Friends’ Character Are You Least Like?” These are the deep existential questions posed on websites like BuzzFeed and PlayBuzz. My Facebook and Twitter feeds are continuously flooded with friends posting their quiz results, and the market researcher in me can’t help but compare them to the segmentation work we do at CMB every day.

So let’s take a closer look at a few of the basic concepts segmentations share with BuzzFeed quizzes, and why I’m not too worried about losing my job to BuzzFeed writers just yet:

  1. You answer a predetermined set of questions. In the Starbucks drink quiz, you might be asked to identify your favorite color or your ideal vacation spot, even though these questions have nothing to do with Starbucks. At CMB, we focus on the product or service category at hand and make sure we include questions that measure real customer needs. That way, we know our final solution will have real implications for driving customer behavior. It’s much easier to see the relevance of a solution when the questions we ask have face validity.
  2. You are assigned to a group based on your answers. While I don’t know exactly what happens on the back end of a BuzzFeed quiz, there must be some basic algorithm that determines whether you are a Double Chocolaty Chip Frappuccino or a Very Berry Hibiscus Refresher. As far as I know, though, the rules behind that algorithm are entirely made up by the author of the quiz, probably based on hours hanging out at their local Starbucks. When we conduct a market segmentation study, we typically use a nationally representative sample, which allows our clients to see how large the segments are and what true opportunities exist in the market. We also ensure that we end up with a set of clearly distinct segments that are both statistically solid and useful, so our clients can feel confident implementing the results. (For a feel of the kind of algorithm that does this assignment, see the sketch after this list.)
  3. Each group is associated with certain traits. When your quiz results pop up, they usually come with a brief explanation of what the results mean. If you are an Iced Caramel Macchiato, for example, you’re successful, honest, and confident. But if you are a Passion Iced Tea, you are charismatic and hilarious. As a standard part of our segmentation studies, CMB delivers an in-depth look at key measures for each segment, such as demographics, brand preference, and usage, to demonstrate what makes each segment unique and how it can be reached. We tailor these profiles to meet the needs of the client, so they can be used to solve real business problems. For example, the sales team could use segmentation results to personalize each pitch to a particular type of prospect, the creative team could target advertisements to key customer groups, or finance managers could ensure that budgets are directed towards those with whom they will be most effective.
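
To make the contrast concrete, here’s a minimal sketch of the kind of clustering algorithm that can sit behind segment assignment. It’s purely illustrative (k-means on made-up data, not CMB’s actual methodology), and every name and number in it is a stand-in:

    import numpy as np
    from sklearn.cluster import KMeans

    # Toy stand-in for real survey data: 1,000 respondents rating
    # 8 need/attitude questions (standardized scores).
    rng = np.random.default_rng(42)
    ratings = rng.normal(size=(1000, 8))

    # Fit a five-segment solution and assign every respondent to a segment.
    kmeans = KMeans(n_clusters=5, n_init=10, random_state=42).fit(ratings)
    print("segment sizes:", np.bincount(kmeans.labels_))

    # Unlike a quiz author's hand-made rules, new respondents can be
    # scored against the same statistical solution later on.
    newcomer = rng.normal(size=(1, 8))
    print("assigned segment:", int(kmeans.predict(newcomer)[0]))

Real segmentation work layers a great deal on top of this (variable selection, validation, segment profiling), but the point stands: the assignment step is statistical rather than editorial.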

I’ll be the first person to admit that personality quizzes are a great way to waste some free time and maybe even learn something new about yourself. But what’s really fun is taking the same basic principles and using them to help real businesses make better decisions. After all, a segmentation is only useful when it is used, and that is why we make our segmentation solutions dynamic, living things to be reapplied and refreshed as often as needed to keep them actionable.

Amy Maret is a Project Manager at CMB with a slight addiction to personality quizzes. In case you were curious, she is an Espresso Macchiato, would play a Princess in a Disney movie, and is least like Ross from Friends.

Download our latest report: The Power of Social Currency, and let us show you how Social Currency can enable brand transformation:

Get the Full Report

And check out our interactive dashboard for a sneak peek of Social Currency by industry:

Interactive Dashboard

Topics: Chadwick Martin Bailey, research design, market strategy and segmentation, Market research

Swipe Right for Insights

Posted by Jared Huizenga

Wed, Aug 17, 2016

Data collection geeks like me can learn a ton at the CASRO Digital Research Conference. While the name of the event has changed many times over the years, the quality of the presentations and the opportunity to learn from experts in the industry are consistently good.

One topic that came up many years ago was conducting surveys via SMS text on cellphones. This was at a time when most people had cellphones, but it was still a couple of years before the smartphone explosion. I remember listening to one presentation and looking down at my Samsung flip-phone thinking, “There’s no way respondents will take a CMB questionnaire this way.” For a few simple yes/no questions, it seemed like a fine methodology, but it certainly wouldn’t fly for any of CMB’s studies.

For the next two or three years, fewer than half of the U.S. population (including yours truly) owned smartphones. Even so, SMS texting was getting increasing coverage at the CASRO conference, and I had a hard time understanding why. Every year was billed as “the year of mobile!” I could see the potential of taking a survey while mobile, but the technology and user experience weren’t there yet. Then something happened that changed not only the market research industry but the way we live as human beings—smartphone adoption skyrocketed.

Today in the U.S., smartphone ownership among adults is 72%, according to the Pew Research Center. People are spending more time on their phones and less time sitting in front of a computer. Depending on the study and the population, anywhere from 20% to 40% of survey takers are using their smartphones, and for studies of people under 25, that number is likely even higher. We can approach mobile respondents in three ways:

  • Do nothing. This makes surveys extremely cumbersome to take on smartphones, to the point where many respondents will abandon them mid-way. This really isn’t an option at all. By doing nothing, you’re turning your back on the respondent experience and basically giving mobile users the middle finger.
  • Optimize questionnaires for mobile. All of CMB’s questionnaires are optimized for mobile: our programming platforms identify the device type a respondent is using and render the questionnaire for the appropriate screen size (a simple sketch of that detection follows this list). Even with this capability, long vertical grids and wide horizontal scales will still be painful for smartphone users, since they require some degree of scrolling. This option is better than nothing, but long questions are still going to be long questions.
  • Design questionnaires for mobile. This is the best option, and one that isn’t used often enough. It requires questions and answer options to be written with the idea that they will be viewed on smartphones. In other words, no lengthy grids, no sprawling scales, no drag and drop, minimal scrolling, and nothing else that would cause the mobile user angst. While this option sounds great, one criticism has been that it’s difficult to do advanced exercises like max-diff or discrete choice on smartphones.
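
At its simplest, the device detection behind that optimization looks something like the sketch below. This is a hypothetical, stripped-down Python illustration, not CMB’s platform code; real survey platforms detect devices far more robustly (screen width, feature detection, and so on):

    # Hypothetical server-side hook: inspect the User-Agent header and
    # choose which questionnaire rendering to serve.
    MOBILE_TOKENS = ("iphone", "ipad", "android", "mobile")

    def pick_layout(user_agent: str) -> str:
        """Return "mobile" or "desktop" based on a crude User-Agent check."""
        ua = user_agent.lower()
        return "mobile" if any(token in ua for token in MOBILE_TOKENS) else "desktop"

    print(pick_layout("Mozilla/5.0 (iPhone; CPU iPhone OS 9_3 like Mac OS X)"))
    # -> mobile: serve short vertical lists, no wide grids, no drag-and-drop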

A cautionary note if you’re thinking a good option would be simply to disallow respondents from taking a survey on their smartphones. Did your parents ever tell you not to do something when you were a child? Did you listen, or did you try it anyway? The same thing happens when you tell someone not to take a survey on their mobile device: either by mistake or out of sheer defiance, some people will attempt it on their smartphone anyway. This happened on a recent study for one of our clients. These people tried to “stick it to the man,” but alas, they were denied entry into the survey. And if you want a representative sample, there’s a second argument against blocking mobile users: you’re excluding specific populations, which can skew the results.

The respondent pool is getting shallow, and market research companies face increasing challenges getting enough “completes” for their studies. It’s important for all of us to remember that behind every “complete” is a human being—one who’s trying to drag and drop a little image into the right bucket, or scrolling and squinting to choose the right option on an 11-point scale in a twenty-row grid. Unless everyone is comfortable basing their quantitative findings on N=50 in the future, we all need to take steps to embrace the mobile respondent.

Jared is CMB’s Field Services Director and has been in the market research industry for eighteen years. When he isn’t enjoying the exciting world of data collection, he can be found competing at barbecue contests as the pitmaster of the team Insane Swine BBQ.

Sign up to receive our monthly eZine and never miss a webinar, conference recap, or insights from our self-funded research on hot topics from data integration to Social Currency.

Subscribe Here!

Topics: mobile, research design, Market research

Do Consumers Really Know You? Why True Awareness Matters

Posted by Jonah Lundberg

Wed, Jul 13, 2016

From hotels to healthcare, brands are facing an unprecedented era of disruption. To compete, your brand needs consumers to know and love it for what it really stands for. The critical questions: have folks even heard of you (Awareness), how well do they think they know you (Familiarity), and how well do they really know you (True Awareness)?

Folks probably won’t buy from you if they’ve never heard of you or don’t know much about you. To pinpoint areas to improve and track success, you need to include both Familiarity and True Awareness in your competitive brand tracking.

Familiarity

Familiarity can be a vague metric for stakeholders to interpret, especially alongside Awareness. A common question we hear is “What’s the difference between Awareness and Familiarity? Yes, I’m aware. Yes, I’m familiar. Isn’t it the same thing?”

Not quite.

Awareness is “yes” or “no”—have you heard of the brand name or not? Familiarity gauges how well you think you know the brand. Sure, you’ve heard of the brand, but how much would you say you know about it?

It’s summertime, so let’s use a baseball example: Comerica Park is the home of the Detroit Tigers, and Target Field is the home of the Minnesota Twins:

  • I watch baseball a lot, so if you asked me if I was aware of Comerica and Target, I’d say yes to both.
  • If you asked me how familiar I was with Comerica, I would tell you that I have absolutely no idea what its products are. I just know its name because of where the Twins go when they visit Detroit to play the Tigers.
  • Target, on the other hand, I know very well: it’s headquartered in my home state of Minnesota, and I’ve been inside its stores hundreds of times.

In research-talk: I am not at all familiar with Comerica. I am very familiar with Target.

If you’re deciding whether or not to include Familiarity in your competitive brand tracking, you first need to determine whether you want your brand to be widely known and known well or just widely known. Do you want to be the popular guy at school who most people know by name but don’t know very well? Or do you want to be the prom king—the guy everyone knows the name of and knows well enough to vote for? 

Take a look at the real example below, showing the Top 10 Brands on Awareness vs. the Top 10 Brands on Familiarity from a recent competitive brand tracking study (brand names changed for confidentiality):

[Chart: Top 10 Brands on Awareness vs. Top 10 Brands on Familiarity (among Aware)]

You’ll notice a pattern: a brand whose name many people have heard (high Awareness) can be trumped by a brand whose name fewer people have heard (low Awareness) when it comes to how well the brand is known (Familiarity) among those who have heard of it. It’s possible to be more successful in the market with a lower level of Awareness if the folks who know you know you well.

This isn’t surprising, since Familiarity is only asked about brands a respondent is aware of.

However, Big Papi’s Burgers proves that you can be both widely known and known well. Again, though the brand name is a pseudonym, the data is real. So, if you think it’s worth measuring your brand relative to the Big Papi’s Burgers of your industry, you need Familiarity to gauge your brand’s standing vs. the competition.

True Awareness

Just because folks say they know you doesn’t mean they actually do. Also, if you find yourself with a lower level of Familiarity, how do you fix that?

While Familiarity gauges how well you think you know a brand, True Awareness asks you to prove it. Familiarity serves as the comparison point vs. other brands, but True Awareness serves as the comparison point of your brand vs. itself: how well do people know you for selling X, and how well do people know you for selling Y and Z?

True Awareness is a question that asks people aware of your brand which specific products or services they think your brand offers. You show them a list of offerings that includes all the things your brand does offer and a few things your brand does not offer.

If people correctly choose any of your brand’s offerings (e.g., they select one of the four correct offerings listed) and don’t erroneously select anything your brand does not offer, then they are truly aware—they do, in fact, know you well. This also helps you identify errors in perception: folks failing to credit you for things you do, or falsely crediting you for things you don’t, point to areas for improvement in your marketing communications. (A small scoring sketch follows.)
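
Here’s a minimal sketch of that scoring rule in Python. The offerings, decoys, and responses below are hypothetical stand-ins, not data from any study:

    # A respondent is "truly aware" if they credit the brand with at least
    # one thing it really offers and nothing it doesn't.
    TRUE_OFFERINGS = {"checking", "savings", "mortgages", "auto loans"}
    DECOY_OFFERINGS = {"crypto trading", "travel insurance"}

    def is_truly_aware(selected: set) -> bool:
        return bool(selected & TRUE_OFFERINGS) and not (selected & DECOY_OFFERINGS)

    respondents = [
        {"checking", "mortgages"},       # truly aware
        {"checking", "crypto trading"},  # falsely credits a decoy
        set(),                           # aware of the name only
    ]
    rate = sum(is_truly_aware(r) for r in respondents) / len(respondents)
    print(f"True Awareness: {rate:.0%}")  # -> 33%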

So what’s the point of asking True Awareness? It provides you with more good information to use when making decisions about your brand:

  • When you combine True Awareness with usage data (e.g., how much people use and/or would like to use X, Y and Z products/services in general) you are able to inject vibrant colors into what was previously a black and white outline—your brand understanding transforms from a rough sketch into a portrait.
  • As a result, not only do you understand what people want, you also understand what people know your brand for.
  • Therefore, you know whether or not people think your brand can give them what they want. If people like using Y and Z but aren’t aware that your brand offers Y and Z, then your brand is suffering.

So, True Awareness allows you to discern exactly what needs to be done (e.g., what needs to be amplified in messaging) to improve your brand’s metrics and conversion funnel.

Use both Familiarity and True Awareness in your competitive brand tracking to push your brand to be the prom king of your industry and to make sure people know and love your brand for what it really stands for.

Jonah is a Project Manager at CMB. He enjoys traveling with his family and friends, and he hopes the Red Sox can hang in there to reach the postseason this year.

Topics: methodology, research design, brand health and positioning

How I Used Conjoint Analysis to Plan My Wedding

Posted by Alyse Dunn

Tue, Jun 14, 2016

I’m getting married in August, and the past year and a half of planning has been a whirlwind of fabrics, colors, and decisions. The number of options you have for any given item is immense, and, as a market researcher, I began to consider the choices I had and how I would make them.

Let’s talk about cake. We tried 15 flavors, and we knew the cake would have four tiers, each of which could be any of those flavors: all the same, all different, or any mix in between. Choosing 4 from 15 with repetition allowed, that’s 3,060 possible combinations. That could be very overwhelming, but, to me, it was just a giant Conjoint Analysis exercise.

Conjoint Analysis is a trade-off technique that market researchers use to estimate consumer preferences for products with multiple features. The beauty of Conjoint Analysis is that it allows a researcher to predict preferences for huge numbers of possible product combinations without testing each combination explicitly. The secret is attaching a value to each level (chocolate, vanilla, strawberry, etc.) of each attribute (flavor) and assuming that the value of the whole is equal to the sum of its parts. For our wedding cake, we had two attributes: Flavor and Number of Flavor Repeats.

For this Self-Explicated Conjoint exercise, I listed the 15 possible flavors and the possible numbers of repeated flavors, then rated each on a 1-10 scale based on how attractive it was to me. I also rated each attribute on how important it was to the final decision; in the case below, the number of repeated flavors was more important than flavor (60% of my decision). Finally, I multiplied the level and attribute values together to get a utility score.

[Worksheet: flavor and repeat-count ratings, attribute importance weights, and the resulting utility scores]

From there, it’s math! With these scores, I can simulate all 3,060 cake combinations and their values (that’s a lot of frosting). To determine the “best cake,” you add the utilities together and look for the highest total. In our case, it was two White Chocolate tiers, one Lavender, and one Italian Crème, with a total utility of 2,060. This very narrowly beat out four independent flavors (White Chocolate, Lavender, Italian Crème, and Chocolate) because of the high value for White Chocolate.
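
For the analytically inclined, here’s a minimal sketch of that simulation in Python. The ratings and weights below are hypothetical stand-ins on a smaller scale than my worksheet (so the totals won’t match), but the mechanics are the same: score every combination, then take the max:

    from itertools import combinations_with_replacement

    # Hypothetical 1-10 attractiveness ratings for a few of the 15 flavors.
    flavor_rating = {"white chocolate": 10, "lavender": 9,
                     "italian creme": 9, "chocolate": 8, "vanilla": 6}
    # Hypothetical 1-10 ratings for each repeat pattern (the max number of
    # times any one flavor appears across the four tiers).
    repeat_rating = {1: 4, 2: 9, 3: 6, 4: 2}

    # Attribute importances: repeats drove 60% of the decision.
    FLAVOR_WEIGHT, REPEAT_WEIGHT = 0.4, 0.6

    def utility(cake):
        """Weighted sum of level scores for a four-tier cake (tuple of flavors)."""
        max_repeats = max(cake.count(f) for f in set(cake))
        return (sum(flavor_rating[f] for f in cake) * FLAVOR_WEIGHT
                + repeat_rating[max_repeats] * REPEAT_WEIGHT)

    # Enumerate every possible four-tier cake; with all 15 flavors this
    # is C(15+4-1, 4) = 3,060 combinations.
    best = max(combinations_with_replacement(flavor_rating, 4), key=utility)
    print(best, round(utility(best), 1))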

Conjoint Analysis is helpful for numerous research needs (wedding planning included). Presenting individuals with various combinations of attributes helps determine how each attribute is valued, which can be projected to the larger population. By making tradeoffs when comparing different combinations, I was able to choose a cake that worked for our event. For organizations, Conjoint Analysis can help determine which new product features will perform the best, which hotel packages offer the biggest bang for the buck, or which insurance items will be most desirable to individuals. Conjoint is applicable across any organization and is a valuable analytical tool to help determine which combinations of attributes perform best. 

Learn more about avoiding common pitfalls in Conjoint Analysis. 

Alyse Dunn is a Data Manager at CMB, and she looks forward to how her Conjoint Analysis exercises in wedding planning will pay off (and thanks our Senior Analyst Liz White for socializing this example).

Topics: advanced analytics, research design

What We’ve Got Here Is a Respondent Experience Problem

Posted by Jared Huizenga

Thu, Apr 14, 2016

A couple of weeks ago, I was traveling to Austin for CASRO’s Digital Research Conference, and I had an interesting conversation while boarding the plane. [Insert Road Trip joke here.]

Stranger: First time traveling to Austin?

Me: Yeah, I’m going to a market research conference.

Stranger: [blank stare]

Me: It’s a really good conference. I go every year.

Stranger: So, what does your company do?

Me: We gather information from people—usually by having them take an online survey, and—

Stranger: I took one of those. Never again.

Me: Yeah? It was that bad?

Stranger: It was [expletive] horrible. They said it would take ten minutes, and I quit after spending twice that long on it. I got nothing for my time. They basically lied to me.

Me: I’m sorry you had that experience. Not all surveys are like that, but I totally understand why you wouldn’t want to take another one.

Thank goodness the plane started boarding before he could say anything else. Double thank goodness that I wasn’t sitting next to him during the flight.

I’ve been a proud member of the market research industry since 1998. I feel like it’s often the Rodney Dangerfield of professional services, but I’ve always preached about how important the industry is. Unfortunately, I’m finding it harder and harder to convince the general population. The experience my fellow traveler had with his survey points to a major theme of this year’s CASRO Digital Research Conference. Either directly or indirectly, many of the presentations this year were about the respondent experience. It’s become increasingly clear to me that the market research industry has no choice other than to address the respondent experience “problem.”

There were also two related sub-themes—generational differences and living in a digital world—that go hand-in-hand with the respondent experience theme. Fewer people are taking questionnaires on their desktop computers. Recent data suggests that, depending on the specific study, 20-30% of respondents are taking questionnaires on their smartphones. Not surprisingly, this skews toward younger respondents. Also not surprisingly, the percentage of smartphone survey takers is increasing at a rapid pace; within the next two years, I predict it will reach 35-40%. As researchers, we have to consider the mobile respondent when designing questionnaires.

From a practical standpoint, what does all this mean for researchers like me who are focused on data collection?

  1. I made a bold—and somewhat unpopular—prediction a few years ago that the method of using a single “panel” for market research sample was dying a slow death and that these panels would eventually become obsolete. We may not be quite at that point yet, but we’re getting closer. In my experience, being able to use a single sample source today is very rare except for the simplest of populations.

Action: Understand your sample source options. Have candid conversations with your data collection partners and only work with ones that are 100% transparent. Learn how to smell BS from a mile away, and stay away from those people.

  2. As researchers, part of our job should be to understand how the world around us is changing. So why do we turn a blind eye to the poor experiences our respondents are having? According to CASRO’s Code of Standards and Ethics, “research participants are the lifeblood of the research industry.” The people taking our questionnaires aren’t just “completes.” They’re people. They have jobs, spouses, children, and a million other things going on in their lives at any given time, so they often don’t have time for your 30-minute questionnaire with ten scrolling grid questions.

Action: Take the questionnaires yourself so you can fully understand what you’re asking your respondents to do. Then take that same questionnaire on a smartphone. It might be an eye opener.

  3. It’s important to educate colleagues, peers, and clients about the pitfalls of poor data collection methods. Not only does a poorly designed 30-minute survey frustrate respondents, it also leads to speeding, straight-lining, and just not caring. Most importantly, it leads to bad data. It’s not the respondent’s fault—it’s ours. One company stood up at the conference and stated that it won’t take a client project if the survey is too long. But for every company that does this, there are many others that will take that project.

Action: Educate your clients about the potential consequences of poorly designed, lengthy questionnaires. Market research industry leaders need to do this as a whole for it to have a large impact.

Change is a good thing, and there’s no need to panic. Most of you are probably aware of the issues I’ve outlined above. There are no big shocks here. But, being cognizant of a problem and acting to fix the problem are two entirely different things. I challenge everyone in the market research industry to take some action. In fact, you don’t have much of a choice.

Jared is CMB’s Field Services Director and has been in the market research industry for eighteen years. When he isn’t enjoying the exciting world of data collection, he can be found competing at barbecue contests as the pitmaster of the team Insane Swine BBQ.

Topics: data collection, mobile, research design, conference recap

3 “Magical” Steps to Curbing Information Overload

Posted by Jen Golden

Wed, Feb 24, 2016

Recently, the WNYC podcast “Note to Self” (@NoteToSelf) issued a week-long challenge to its listeners aimed at curbing information overload in our daily lives. In today’s internet-driven society, we’re hit from all angles with information, and it can be difficult to decide what content to consume in a day without being totally overwhelmed. I decided to participate, and as the week progressed, I realized that many of the lessons from this exercise could be applied to our clients—who often struggle with information overload in their own businesses.

The “InfoMagical” challenge worked like this: 

Challenge 1: “A Magical Day” – No multi-tasking, only single-tasking.

  • This challenge was about focusing on one task at a time throughout the day. I knew it was going to be a struggle right from the start, since my morning commute on the train typically involves listening to a podcast, scanning the news, checking social media, and catching up on emails all at once. For this challenge, I stuck to one podcast (on the Architecture of Dumplings). By the end of the day, I felt more knowledgeable about the topics I focused on (ask me anything about jiaozi), as opposed to having taken in little bits of information from various sources.
  • Research Implications: Our clients often come to us with a laundry list of research objectives they want to capture in a single study. To maintain the quality of the data, we need to make trade-offs regarding what we can (or can’t) include in our design. We focus on designing projects around business decisions, asking our clients to prioritize the information they need in order to make the decisions they are facing. Some pieces may be “nice to have,” but they ultimately may not help answer a business decision. By following this focused approach, we can provide actionable insights on the topics that matter most.

Challenge 2: “A Magical Phone” – Tidy up your smartphone apps.

  • This challenge asked me to clean up and organize my smartphone apps to keep only the ones that were truly useful to me. While I wasn’t quite ready to make a full commitment and delete Instagram or Facebook (how could I live without them?), I did bury them in a folder so I would be less likely to absentmindedly click through them every time I picked up my phone. Organizing and keeping only the apps you really need makes the device more task-oriented and less likely to be a distraction. 
  • Research Implications: When we design a questionnaire, answer option lists can become long and unwieldy. With more and more respondents taking surveys on smartphones, it’s important to keep answer option lists manageable. Often, a list can be cleaned up to include only the options that will produce useful results. Here are two ways to do this: (1) look at results from past studies with similar answer option lists to determine what was useful vs. not (i.e., which options had very high responses vs. very low), or (2) if the project is a tracker, run a factor analysis on the list to see if it can be pared down into a smaller subset of options for the next wave (see the sketch after this list). This results in more meaningful (and higher quality) data going forward.
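
As a rough illustration of option (2), here’s a minimal factor-analysis sketch in Python with scikit-learn. The response matrix is a random stand-in for real tracker data, and in practice you’d examine the full loadings, not just each option’s dominant factor:

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    # Toy stand-in: 500 respondents x 20 answer options (0/1 selections).
    rng = np.random.default_rng(0)
    responses = rng.integers(0, 2, size=(500, 20)).astype(float)

    # Extract a handful of latent factors underlying the option list.
    fa = FactorAnalysis(n_components=4, random_state=0).fit(responses)

    # Options that load heavily on the same factor are candidates to merge
    # or drop in the next wave.
    loadings = fa.components_.T  # shape: (n_options, n_factors)
    for option, row in enumerate(loadings):
        print(f"option {option:2d} -> factor {np.abs(row).argmax()}")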

Challenge 3: "A Magical Brain" – Avoid a meme, trending topic, or “must-read” today.

  • I did this challenge the day of the Iowa Caucuses, and it was hard to avoid all the associated coverage. But when I caught up the next day, I realized I was happy enough just knowing the final results. I didn’t need to follow the minute-by-minute details of the night, including every Donald Trump remark and every Twitter comment. In this case, endless information did not make me feel better informed.
  • Research Implications: Our clients often say they want to see the results of a study shown every which way, reporting out on every question by every possible sub-segment. There is likely some “FOMO” (fear of missing out) going on here, as clients might worry we are missing a key storyline by not showing everything. We often take the approach of not showing every single data point; instead, we only highlight differences in the data where it adds to the story in a significant and meaningful way. There comes a point when too much data overwhelms decisions. 

The other two pieces of this challenge focused on verbally communicating the information I learned on a single topic and setting a personal information mantra to say every time I consumed information (mine was “take time to digest after you consume it”). By the end of the challenge, even though I didn’t consume as much information as I typically do in a week, I didn’t feel like I was missing out on anything (except maybe some essential Bachelor episode recaps), and I felt more knowledgeable about the information I did consume. 

Jen Golden is a Project Manager on the Tech/E-commerce practice at CMB. She wishes there were more hours in the day to listen to podcasts without having to multi-task.

For the latest Consumer Pulse reports, case studies, and conference news, subscribe to our monthly eZine.

Subscribe Here!

Topics: mobile, business decisions, research design

Move Over Cupid: A Qualitative Researcher’s Guide to Valentine’s Day

Posted by Eliza Novick

Tue, Feb 09, 2016

As Valentine’s Day ticks closer, I’m reminded of my best and worst dates over the years. At best, I’ve enjoyed rosé, cheese, and interesting conversation; at worst, I’ve had a beer spilled on me and endured lots of awkward pauses. Through all the ups and downs, I’ve picked up a few tricks that can make a date a great success and help you avoid the typical first-date pitfalls. Best of all, they’re tricks I can apply to my work as a qualitative researcher!

Moderating a focus group is kind of like going on a blind date with eight people at once while your boss watches. Yes, it can be awkward, but it’s critical that respondents really connect with the moderator to ensure that our clients get reliable findings. With that in mind, here are my top three tips for making it through a first date and for wow-ing clients by getting the most out of your qualitative research:

  1. Ask open-ended questions: Nobody likes stilted conversation, but sometimes it can feel hard to avoid. Rather than asking close-ended questions that end in one-word answers, try asking people to describe an experience. “What kind of things have you been cooking recently?” tends to get a lot more traction than, “Do you like to cook?” Likewise, “Tell me about a time you paid for an unanticipated medical expense” can take you (and your clients) much further than “Have you ever had an unanticipated medical expense?” Putting the emphasis on sharing a story encourages people to give detailed responses and speak genuinely about their interests and experiences.
  2. Don’t try to cover too much ground: Meeting new people can be overwhelming—there’s a lot to digest. So I’ve found it’s best to keep the conversation simple. Unlike the unfortunate fellow who asked me rapid-fire questions for two hours over drinks, try asking follow-up questions on one topic. This lets you get to know someone better and discover interesting details that you wouldn’t uncover if you were speeding through topics. The same holds in qualitative research, since your respondents come into the conversation with virtually no context. They weren’t privy to the hours of client calls, discussion guide revisions, and marketing materials the research team was. While it’s tempting to cram as much content as possible into the discussion guide, nine times out of ten clients find more value in clear, detailed findings than in high-level, scattered anecdotes. Besides, speeding through topics makes it difficult to identify patterns. So do everyone a favor: slow down, and see where the conversation takes you.
  3. Trust your gut: If something doesn’t seem right, trust yourself. If you’re on a date and things aren’t going well, it’s ok to leave early. Likewise, if your carefully laid research plans are not panning out as you had planned, it’s ok to take a different route. Try phrasing a question a different way. Or, if you have a sense that someone in the group disagrees with a point but is too shy to say so, ask them if they’ve got anything they’d like to share. Not only will this show your respondents that you’re listening and care about what they have to say, it will also elicit more honest responses that lead to better findings (and happy clients).

Qualitative research, like dating, is really about connecting with people—we get the best results when respondents feel they can relate to us researchers on a personal level. So, don’t be afraid to put yourself out there! Take your time, listen to the data you’re getting, and trust yourself. Easy!

Eliza is a qualitative researcher at CMB. In addition to applying her dating life to her work, she likes to be outside, read books, and cook. 

For the latest Consumer Pulse reports, case studies, and conference news, subscribe to our monthly eZine.

Subscribe Here!

Topics: qualitative research, research design

A Data Dominator’s Guide to Research Design…and Dating

Posted by Talia Fein

Wed, Jan 20, 2016

I recently went on a first date with a musician. We spent the first hour or so talking about our careers: the types of music he plays, the bands he’s been in, how music led him to the job he has now, and, of course, my unwavering passion for data. Later, when there was a pause in the conversation, he said: “So, do you like music?”

Um...how was I supposed to answer that? There was clearly only one right answer (“yes”) unless I really didn’t want this to go anywhere. I told him that, and we had a nice laugh...and then I used it as a teaching opportunity to explain one of my favorite market research concepts: Leading Questions.

According to Tull and Hawkins’ Marketing Research: Measurement and Method, a Leading Question is “a question that suggests what the answer should be, or that reflects the researcher’s point of view.” Example: “Do you agree, as most people do, that TV advertising serves no useful purpose?”

In writing good survey questions, we need to give enough information for the respondent to fully answer the question, but not so much that we give away our own opinions or the responses we expect to hear. This is especially important in opinion research and political polling, where slight changes in word choice can create bias and impact the results. For example, in their 1937 poll, Gallup asked, “Would you vote for a woman for President if she were qualified in every other aspect?” This implies that simply being a woman is a disqualification for President. (Just so you know: 33% answered “Yes.”) Gallup has since changed the wording—“If your party nominated a generally well-qualified person for President who happened to be a woman, would you vote for that person?”—and the question is now included in a series in which “woman” is replaced with other descriptors, such as Catholic, Black, Muslim, or gay. Of course, times have changed, and we can’t know exactly how much of the original bias was due to the leading nature of the question, but 92% answered “Yes” as recently as June 2015.

The ordering of questions is just as important as the words we choose within them. John Martin (Cofounder and Chairman of CMB, 1984-2014) taught us the importance—and danger—of sequential bias. In writing a good questionnaire, we’re not just spitting out a bunch of questions and collecting responses—we’re taking the respondent through a 15 (or 20, or 30) minute journey, trying to get his/her most unbiased, real opinions and preferences. For example, if we start a questionnaire by showing a list of brands and asking which ones are fun and exciting, and then ask unaided which brands respondents know of, we’re not going to get very good data. Just like if we ask a person whether he/she likes music after talking for an hour about the importance of music in our own lives, we might get skewed results.

One common rule when it comes to questionnaire ordering is to ask unaided questions before aided questions. Otherwise, the aided questions would remind respondents of possible options—and inflate their unaided answers. A couple more rules I like to keep in mind:

  1. Start broad, then go narrow: talk about the category before the specific brand or product.

Remember that the respondent is in the middle of a busy day at work or has just put the kids to bed and has other things on his/her mind. The introductory sections of a questionnaire are as much about screening respondents and gathering data as they are about getting the respondent thinking about the category (rather than what to make for the kids’ lunch tomorrow).

  2. Think about what you have already told the respondent: like a good date, the questionnaire should build.

In one of my recent projects, after determining awareness of a product, we measured “concept awareness” by showing a short description of the product to those who had said they were NOT aware of it and then asking whether they had heard of the concept. Later in the questionnaire, we asked respondents which product features they were familiar with. For respondents who had seen the concept awareness question (i.e., those who hadn’t been fully aware), we removed the product features that had been mentioned in the description (of course, those respondents would know those features by then). A small sketch of this logic follows.
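
Here’s a minimal sketch of that conditional logic in Python. The feature names are hypothetical placeholders, not the actual study’s:

    ALL_FEATURES = ["feature A", "feature B", "feature C", "feature D"]
    FEATURES_IN_CONCEPT = {"feature A", "feature B"}  # named in the description

    def familiarity_options(saw_concept_description: bool) -> list:
        """Build the feature list a given respondent should be asked about."""
        if saw_concept_description:
            # They just read about A and B; asking would only echo the stimulus.
            return [f for f in ALL_FEATURES if f not in FEATURES_IN_CONCEPT]
        return ALL_FEATURES

    print(familiarity_options(True))   # ['feature C', 'feature D']
    print(familiarity_options(False))  # all four features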

  3. When asking unaided awareness questions, think about how you’re defining the category.

“What Boston-based market research companies founded in 1984 come to mind?” might be a little too specific. A better way of wording this would simply be: “What market research companies come to mind?” Usually thinking about the client’s competitive set will help you figure out how to explain the category.

So, remember: in research, just as in dating, what we put out (good survey questions and positive vibes) influences what we get back.

Talia is a Project Manager on CMB’s Technology and eCommerce team. She was recently named one of Survey Magazine’s 2015 Data Dominators and enjoys long walks on the beach.

We recently did a webinar on research we conducted in partnership with venture capital firm Foundation Capital. This webinar will help you think about Millennials and their investing, including specific financial habits and the attitudinal drivers of their investing preferences.

Watch Here!

Topics: methodology, research design, quantitative research