WELCOME TO OUR BLOG!

The posts here represent the opinions of CMB employees and guests—not necessarily the company as a whole. 


Swipe Right for Insights

Posted by Jared Huizenga

Wed, Aug 17, 2016

Data collection geeks like me can learn a ton at the CASRO Digital Research Conference. While the name of the event has changed many times over the years, the quality of the presentations and the opportunity to learn from experts in the industry are consistently good.

One topic that came up many years ago was conducting surveys via cellphones with SMS texts. This was at a time when most people had cellphones, but it was still a couple of years before the smartphone explosion. I remember listening to one presentation and looking down at my Samsung flip-phone thinking, “There’s no way respondents will take a CMB questionnaire this way.” For a few simple yes/no questions, this seemed like a fine methodology, but it certainly wouldn’t fly for any of CMB’s studies.

For the next two or three years, less than half of the U.S. population owned smartphones (including yours truly). Even so, SMS texting was getting increasing coverage at the CASRO conference, and I was having a really hard time understanding why. Every year was billed as “the year of mobile!” I could see the potential of taking a survey while mobile, but the technology and user experience weren’t there yet. Then something happened that changed not only the market research industry but the way in which we live as human beings—smartphone adoption skyrocketed.

Today in the U.S., smartphone ownership among adults is 72%, according to the Pew Research Center. People are spending more time on their phones and less time sitting in front of a computer. Depending on the study and the population, anywhere from 20-40% of survey takers are using their smartphones, and for a study of people under 25 years old, that number is likely even higher. We can approach mobile respondents in three ways:

  • Do nothing. This means surveys will be extremely cumbersome to take on smartphones, to the point where many respondents will abandon them during the painful process. This really isn’t an option at all. By doing nothing, you’re turning your back on the respondent experience and basically giving mobile users the middle finger.
  • Optimize questionnaires for mobile. All of CMB’s questionnaires are optimized for mobile. That is, our programming platforms identify the device type a respondent is using and render the questionnaire for the appropriate screen size. Even with this capability, long vertical grids and wide horizontal scales will still be painful for smartphone users, since they require some degree of scrolling. This option is better than nothing, but long questions are still going to be long questions.
  • Design questionnaires for mobile. This is the best option, and one that isn’t used often enough. It requires questions and answer options to be written with the idea that they will be viewed on smartphones. In other words: no lengthy grids, no sprawling scales, no drag and drop, minimal scrolling, and nothing else that would cause the mobile user angst. While this option sounds great, one criticism has been that it’s difficult to run advanced exercises like max-diff or discrete choice on smartphones.
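
To make the device-detection step in the second option concrete, here’s a minimal sketch of how a survey platform might pick a layout from the browser’s user-agent string. The keyword list and function names are illustrative assumptions, not CMB’s actual platform logic; real platforms use far more thorough user-agent parsing.

```python
# Minimal sketch of server-side device detection for survey rendering.
# The keyword list is illustrative; production platforms parse user
# agents much more thoroughly.

MOBILE_KEYWORDS = ("android", "iphone", "ipod", "windows phone", "mobile")

def is_mobile(user_agent: str) -> bool:
    """Return True if the user agent looks like a smartphone browser."""
    ua = user_agent.lower()
    return any(keyword in ua for keyword in MOBILE_KEYWORDS)

def choose_layout(user_agent: str) -> str:
    """Pick a questionnaire layout based on the detected device type."""
    return "mobile" if is_mobile(user_agent) else "desktop"

print(choose_layout("Mozilla/5.0 (iPhone; CPU iPhone OS 9_1 like Mac OS X)"))  # mobile
```

Note that detection alone only gets you to option two: the layout adapts, but a long grid is still a long grid on a small screen.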

One cautionary note if you’re thinking a good option would be to simply block respondents from taking a survey on their smartphones. Did your parents ever tell you not to do something when you were a child? Did you listen, or did you try it anyway? What happens when you tell someone not to take a survey on their mobile device? Either by mistake or out of sheer defiance, some people will attempt to take it on their smartphone anyway. This happened on a recent study for one of our clients: a few respondents tried to “stick it to the man,” but alas, they were denied entry into the survey. The other argument against blocking mobile users: if you want a “representative” sample, you can’t exclude specific populations, which could skew the results.

The respondent pool is getting shallow, and market research companies are facing increased challenges when it comes to getting enough “completes” for their studies. It’s important for all of us to remember that behind every “complete” is a human being—one who’s trying to drag and drop a little image into the right bucket, or one who’s scrolling and squinting to make sure they’re choosing the right option on an 11-point scale in a twenty-row grid. Unless everyone is comfortable basing their quantitative findings on an N of 50 in the future, we all need to take steps to embrace the mobile respondent.

Jared is CMB’s Field Services Director and has been in the market research industry for eighteen years. When he isn’t enjoying the exciting world of data collection, he can be found competing at barbecue contests as the pitmaster of the team Insane Swine BBQ.

Sign up to receive our monthly eZine and never miss a webinar, conference recap, or insights from our self-funded research on hot topics from data integration to Social Currency.

Subscribe Here!

Topics: mobile, research design, Market research

Passive Mobile Behavioral Data – Part Deux

Posted by Chris Neal

Wed, Aug 10, 2016

Over the past two years, we've embarked on a quest to help the insights industry get better at harnessing passive mobile behavioral data. In 2015, we partnered with Research Now for an analysis of mobile wallet usage, using unlinked passive and survey-based data. This year, we teamed up with Research Now once again for research-on-research directly linking actual mobile traffic and app data to consumers’ self-reported online shopper journey behavior.

We asked over 1,000 shoppers, across a variety of Black Friday/Cyber Monday categories, a standard set of purchase journey survey questions immediately after the event, then again after 30 days, 60 days, and 90 days. We then compared their self-reported online and mobile behavior to the actual mobile app and website usage data from their smartphones. 

The results deepened our understanding of how best to use (and not use) each respective data source, and how combining both can help our clients get closer to the truth than they could using any single source of information.

Here are a few things to consider if you find yourself tasked with a purchase journey project that uses one or both of these data sources as fuel for insights and recommendations:

  1. Most people use multiple devices for a major purchase journey, and here’s why you should care:
    • Any device tracking platform (even one claiming a 360° view) is likely missing some online behavior relevant to a given shopper journey. In our study, we were capturing behavior from respondents’ primary smartphones, but many of these consumers reported visiting websites we had no record of in our tracking data. Although they reported visiting these websites on their smartphones, it is likely that some of these visits happened on a personal computer, a tablet, a computer at work, etc.
  2. Not all mobile usage is related to the purchase journey you care about:
    • We saw cases of consumers whose behavioral data showed they’d visited big retail websites and mobile apps during the purchase journey but who did not report using these sites/apps as part of the journey we asked them about. This is a bigger problem with larger, more generalist mobile websites and apps (like Amazon, for this particular project, or like PayPal when we did the earlier Mobile Wallet study with a similar methodological exercise).
  3. Human recall ain’t perfect. We all know this, but it’s important to understand when and where it’s less perfect, and where it’s actually sufficient for our purposes. Using survey sampling to analyze behaviors can be enormously valuable in a lot of situations, but understand the limitations, and don’t expect more detail than somebody can accurately recall. Here are a few situations to consider:
    • Asking whether a given retailer, brand, or major web property figured into the purchase journey at all will give you pretty good survey data to work with. Smaller retailers, websites, and apps will see more misses and recall failures, but recall itself is a proxy for influence: if you’re ultimately trying to figure out how best to influence a consumer’s purchase journey, self-reported recall of visits is a good guide, whereas relying on behavioral data alone may inflate the apparent impact of smaller properties on the final purchase journey.
    • Asking people to remember whether they used the mobile app vs. the mobile website introduces more error into your data. Most websites are now mobile optimized and look and feel like mobile apps, or will automatically switch users to the native mobile app on their phone when possible.
      • In this particular project, we saw evidence of a 35-50% improvement in survey-behavior match rates if we did not require respondents to differentiate the mobile website from the mobile app for the same retailer.
  4. Does time-lapse matter? It depends.
    • For certain activities (e.g., making minor purchases in a grocery store, or a TV viewing occasion), capturing in-the-moment feedback from consumers is critical for accuracy.
    • In other situations, where the process is bigger, involves more research, or is more memorable in general (e.g., buying a car, planning a wedding, or making a planned-for purchase based on a Black Friday or Cyber Monday deal), you can get away with asking people about it further out from the actual event.
      • In this particular project, we actually found no systematic evidence of recall deterioration when we ran the survey immediately after Black Friday/Cyber Monday vs. running it 30 days, 60 days, and 90 days after.
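
The match-rate comparison described in point 3 can be illustrated with a toy calculation of how collapsing the app-vs.-site distinction raises survey-to-behavior match rates. All names and data below are invented for illustration; this is not the study’s actual pipeline.

```python
# Toy illustration of computing a survey-to-behavior match rate.
# Each respondent has self-reported touchpoints and passively logged
# ones; a "match" means a self-reported touchpoint also appears in the
# behavioral log.

def match_rate(reported: set, logged: set) -> float:
    """Share of self-reported touchpoints confirmed by the behavioral log."""
    if not reported:
        return 0.0
    return len(reported & logged) / len(reported)

# Distinguishing app vs. mobile site penalizes honest-but-fuzzy recall:
reported_strict = {"retailer_app", "marketplace_site"}
logged_strict = {"retailer_site", "marketplace_site"}  # used the site, not the app

# Collapsing app and site into one property recovers the match:
reported_loose = {"retailer", "marketplace"}
logged_loose = {"retailer", "marketplace"}

print(match_rate(reported_strict, logged_strict))  # 0.5
print(match_rate(reported_loose, logged_loose))    # 1.0
```

The same mechanism explains the project’s finding: not requiring respondents to differentiate an app from a mobile site removes a distinction most people can’t reliably recall.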

Working with passive mobile behavioral data (or any passive digital data) is challenging, no doubt. Trying to make hay by combining these data with primary research survey sampling, customer databases, transactional data, etc., can be even more challenging. But, like it or not, that’s where Insights is headed. We’ll continue to push the envelope in terms of best practices for navigating these types of engagements as Analytics teams, Insights departments, and Financial Planning and Strategy groups work together more seamlessly to provide senior executives with a “single version of the truth”—one that is more accurate than any previously siloed version.

Chris Neal leads CMB’s Tech Practice. He knows full well that data scientists and programmatic ad buying bots are analyzing his every click on every computing device and is perfectly OK with that as long as they serve up relevant ads. Nothing to hide!

Don't miss out on the latest research, insights and conference recaps. Subscribe to our monthly eZine.


Topics: advanced analytics, mobile, passive data, integrated data

Can Facial Recognition Revolutionize Qualitative?

Posted by Will Buxton

Wed, Aug 03, 2016

Full disclosure: I’m an Android and Google loyalist, but please don’t hold that against me or the rest of my fellow Android users, who, by the way, comprise 58% of the smartphone market share in the United States. As a result of my loyalty, I’m always intrigued by Google’s new hardware and software advancements, which are always positioned in a way that leads me to believe they will make my life easier. Some of the innovations over the years have in fact lived up to the hype, such as Google Now, Google Drive, and even Google Fusion, while others such as Google Buzz and Google Wave have not.

As a researcher, last year’s launch of Google Photos caught my eye. Essentially, Google
Photos now utilizes facial recognition software to group your photos based on the people in them, scenery (e.g., beaches and mountains), and even events (e.g., weddings and holidays). To activate the facial recognition feature, all you have to do is tag one photo with an individual’s name, and all other photos with that person will be compiled into a searchable collection. Google uses visual cues within the photos and geotagging to create other searchable collections. While these features might not seem extraordinary—I can see who was the most frequent star of my photos (my enormous cat) or where I most commonly take photos (honeymoon sans enormous cat)—I began to imagine the possible impact these features could have on the market research industry.

Visual ethnographies are one of many qualitative research options we offer at CMB. They’re a rich form of observation, but for some companies they can be cost-prohibitive, especially for ones focused on a “cost-per-complete.” But what if there were a way to remove some of the heavy lifting of a customer journey ethnography by quantifying parts of the shopping experience—using recognition software to track date/time, location, shopping layout, products viewed, the order in which products are viewed, and so on? Would the reduction in hours, travel, and analysis offset the technological costs of these improvements?

Market research, and qualitative research in particular, has always been a combination of art and science, and expecting any technological advancement to adequately perform cogent analyses is a bit premature, and perhaps too reminiscent of The Minority Report. (I don’t think that worked out well.) But the promise of these powerful tools makes it an exciting time to be a qualitative researcher!

Will Buxton is a Project Manager on the Financial Services team. He enjoys finding humor in everyday tasks, being taken seriously, and his enormous cat.

Learn more about how our dedicated Qualitative practice helps brands Explore, Listen, & Engage.
Topics: methodology, qualitative research, mobile, storytelling, customer journey

What We’ve Got Here Is a Respondent Experience Problem

Posted by Jared Huizenga

Thu, Apr 14, 2016

A couple of weeks ago, I was traveling to Austin for CASRO’s Digital Research Conference, and I had an interesting conversation while boarding the plane. [Insert Road Trip joke here.]

Stranger: First time traveling to Austin?

Me: Yeah, I’m going to a market research conference.

Stranger: [blank stare]

Me: It’s a really good conference. I go every year.

Stranger: So, what does your company do?

Me: We gather information from people—usually by having them take an online survey, and—

Stranger: I took one of those. Never again.

Me: Yeah? It was that bad?

Stranger: It was [expletive] horrible. They said it would take ten minutes, and I quit after spending twice that long on it. I got nothing for my time. They basically lied to me.

Me: I’m sorry you had that experience. Not all surveys are like that, but I totally understand why you wouldn’t want to take another one.

Thank goodness the plane started boarding before he could say anything else. Double thank goodness that I wasn’t sitting next to him during the flight.

I’ve been a proud member of the market research industry since 1998. I feel like it’s often the Rodney Dangerfield of professional services, but I’ve always preached about how important the industry is. Unfortunately, I’m finding it harder and harder to convince the general population. The experience my fellow traveler had with his survey points to a major theme of this year’s CASRO Digital Research Conference. Either directly or indirectly, many of the presentations this year were about the respondent experience. It’s become increasingly clear to me that the market research industry has no choice other than to address the respondent experience “problem.”

There were also two related sub-themes—generational differences and living in a digital world—that go hand-in-hand with the respondent experience theme. Fewer people are taking questionnaires on their desktop computers. Recent data suggests that, depending on the specific study, 20-30% of respondents are taking questionnaires on their smartphones. Not surprisingly, this skews towards younger respondents. Also not surprisingly, the percentage of smartphone survey takers is increasing at a rapid pace. Within the next two years, I predict the percentage of smartphone respondents will reach 35-40%. As researchers, we have to consider the mobile respondent when designing questionnaires.

From a practical standpoint, what does all this mean for researchers like me who are focused on data collection?

  1. I made a bold—and somewhat unpopular—prediction a few years ago that the method of using a single “panel” for market research sample is dying a slow death and that these panels would eventually become obsolete. We may not be quite at that point yet, but we’re getting closer. In my experience, being able to use a single sample source today is very rare except for the simplest of populations.

Action: Understand your sample source options. Have candid conversations with your data collection partners and only work with ones that are 100% transparent. Learn how to smell BS from a mile away, and stay away from those people.

  2. As researchers, part of our job should be to understand how the world around us is changing. So why do we turn a blind eye to the poor experiences our respondents are having? According to CASRO’s Code of Standards and Ethics, “research participants are the lifeblood of the research industry.” The people taking our questionnaires aren’t just “completes.” They’re people. They have jobs, spouses, children, and a million other things going on in their lives at any given time, so they often don’t have time for your 30-minute questionnaire with ten scrolling grid questions.

Action: Take the questionnaires yourself so you can fully understand what you’re asking your respondents to do. Then take that same questionnaire on a smartphone. It might be an eye opener.

  3. It’s important to educate colleagues, peers, and clients regarding the pitfalls of poor data collection methods. Not only does a poorly designed 30-minute survey frustrate respondents, it also leads to speeding, straight-lining, and just not caring. Most importantly, it leads to bad data. It’s not the respondent’s fault—it’s ours. One company stood up at the conference and stated that it won’t take a client project if the survey is too long. But for every company that does this, there are many others that will take that project.

Action: Educate your clients about the potential consequences of poorly designed, lengthy questionnaires. Market research industry leaders as a whole need to do this for it to have a large impact.

Change is a good thing, and there’s no need to panic. Most of you are probably aware of the issues I’ve outlined above. There are no big shocks here. But, being cognizant of a problem and acting to fix the problem are two entirely different things. I challenge everyone in the market research industry to take some action. In fact, you don’t have much of a choice.

Jared is CMB’s Field Services Director and has been in the market research industry for eighteen years. When he isn’t enjoying the exciting world of data collection, he can be found competing at barbecue contests as the pitmaster of the team Insane Swine BBQ.

Topics: data collection, mobile, research design, conference recap

3 “Magical” Steps to Curbing Information Overload

Posted by Jen Golden

Wed, Feb 24, 2016

Recently, the WNYC podcast “Note to Self” (@NoteToSelf) released a week-long challenge to its listeners aimed at curbing information overload in our daily lives. In today’s internet-driven society, we’re hit from all angles with information, and it can be difficult to decide what content to consume in a day without being totally overwhelmed. I decided to participate in this challenge, and as the week progressed, I realized that many of its lessons could be applied to our clients—who often struggle with information overload in their businesses.

The “InfoMagical” challenge worked like this: 

Challenge 1: “A Magical Day” – No multi-tasking, only single-tasking.

  • This challenge centered on focusing on one task at a time throughout the day. I knew this was going to be a struggle right from the start since my morning commute on the train typically involves listening to a podcast, scanning the news, checking social media, and catching up on emails at the same time. For this challenge, I stuck to one podcast (on the Architecture of Dumplings). By the end of the day, I felt more knowledgeable about the topics I focused on (ask me anything about jiaozi), as opposed to taking in little bits of information from various sources. 
  • Research Implications: Our clients often come to us with a laundry list of research objectives they want to capture in a single study. To maintain the quality of the data, we need to make trade-offs regarding what we can (or can’t) include in our design. We focus on designing projects around business decisions, asking our clients to prioritize the information they need in order to make the decisions they are facing. Some pieces may be “nice to have,” but they ultimately may not help answer a business decision. By following this focused approach, we can provide actionable insights on the topics that matter most.

 Challenge 2: “A Magical Phone” – Tidy up your smartphone apps.

  • This challenge asked me to clean up and organize my smartphone apps to keep only the ones that were truly useful to me. While I wasn’t quite ready to make a full commitment and delete Instagram or Facebook (how could I live without them?), I did bury them in a folder so I would be less likely to absentmindedly click through them every time I picked up my phone. Organizing and keeping only the apps you really need makes the device more task-oriented and less likely to be a distraction. 
  • Research Implications: When we design a questionnaire, answer option lists can often become long and unwieldy. With more and more respondents taking surveys on smartphones, it is important to keep answer option lists manageable. Often, a list can be cleaned up to include only the answer options that will produce useful results. Here are two ways to do this: (1) look at results from past studies with similar answer option lists to determine what was useful vs. not (i.e., which options had very high responses vs. very low), or (2) if the project is a tracker, run a factor analysis on the list to see if it can be pared down into a smaller sub-set of options for the next wave. This results in more meaningful (and higher quality) results going forward.
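
As a rough illustration of the list-trimming idea, here’s a simplified sketch that groups answer options whose selection patterns move together across respondents. A real tracker analysis would use a proper factor analysis; pairwise correlation is a crude stand-in here, and the data and threshold below are invented.

```python
# Simplified stand-in for factor-analysis-based list trimming: merge
# answer options whose selection patterns correlate highly across
# respondents. Data and the 0.8 threshold are invented for illustration.
from math import sqrt

def correlation(xs, ys):
    """Pearson correlation between two equal-length response vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def group_options(responses, threshold=0.8):
    """Greedily merge options whose selection patterns correlate highly."""
    groups = []
    for opt in responses:
        for group in groups:
            if correlation(responses[opt], responses[group[0]]) >= threshold:
                group.append(opt)
                break
        else:
            groups.append([opt])
    return groups

# 1 = respondent selected the option, 0 = did not (five respondents).
responses = {
    "fast checkout": [1, 1, 0, 1, 0],
    "quick to pay":  [1, 1, 0, 1, 0],  # redundant with "fast checkout"
    "low prices":    [0, 1, 1, 0, 1],
}
print(group_options(responses))  # [['fast checkout', 'quick to pay'], ['low prices']]
```

Each resulting group can be collapsed to a single option in the next wave, shortening the list without losing the signal it carried.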

Challenge 3: "A Magical Brain" – Avoid a meme, trending topic, or “must-read” today.

  • I did this challenge the day of the Iowa Caucuses, and it was hard to avoid all the associated coverage. But, when I looked at the results the next day, I realized I was happy enough just knowing the final results. I didn’t need to follow the minute-by-minute details of the night, including every Donald Trump remark and every Twitter comment. In this case, endless information did not make me feel better informed. 
  • Research Implications: Our clients often say they want to see the results of a study shown every which way, reporting out on every question by every possible sub-segment. There is likely some “FOMO” (fear of missing out) going on here, as clients might worry we are missing a key storyline by not showing everything. We often take the approach of not showing every single data point; instead, we only highlight differences in the data where it adds to the story in a significant and meaningful way. There comes a point when too much data overwhelms decisions. 

The other two pieces of this challenge focused on verbally communicating the information I learned on a single topic and setting a personal information mantra to say every time I consumed information (mine was “take time to digest after you consume it”). By the end of the challenge, even though I didn’t consume as much information as I typically do in a week, I didn’t feel like I was missing out on anything (except maybe some essential Bachelor episode recaps), and I felt more knowledgeable about the information I did consume. 

Jen Golden is a Project Manager on the Tech/E-commerce practice at CMB. She wishes there were more hours in the day to listen to podcasts without having to multi-task.

For the latest Consumer Pulse reports, case studies, and conference news, subscribe to our monthly eZine.


Topics: mobile, business decisions, research design