WELCOME TO OUR BLOG!

The posts here represent the opinions of CMB employees and guests—not necessarily the company as a whole. 

Subscribe to Email Updates

Big Data Killed the Radio Star

Posted by Mark Doherty

Wed, Jun 29, 2016

It’s an amazing time to be a music fan (especially if you have all those Ticketmaster vouchers and a love of '90s music). While music production and distribution were once controlled by record label and radio station conglomerates, technology has “freed” music in almost every way. It’s now easy to hear nearly any song ever recorded thanks to YouTube, iTunes, and a range of streaming sources. While these new options appear to be manna from heaven for music lovers, they can actually create more problems than you’d expect. The never-ending flow of music options can make it harder to decide what might be good or what to play next. In the old days (way back in 2010 :)), your music choices were limited by record companies and by radio station programmers. While these “corporate suits” may have prevented you from hearing that great underground indie band, they also “saved” you from thousands of options that you would probably hate.

That same challenge is happening right now with marketers’ use of data. Back in the day (also around 2010), there was a limited number of data sets and sources to leverage in decisions relating to building/strengthening a brand. Now, that same marketer has access to a seemingly endless flow of data: from web analytics, third-party providers, primary research, and their own CRM systems. While most market information was previously collected and “curated” through the insights department, marketing managers are often now left to their own devices to sift through and determine how useful each set of data is to their business. And it’s not easy for a non-expert to do due diligence on each data source to establish its legitimacy and usefulness. As a result, many marketers are paralyzed by a firehose of data and/or end up trying to use lots of not-so-great data to make business decisions.

So, how do managers make use of all this data? Partly the same way streaming services help music listeners decide what song to play next: predictive analytics. Predictive analytics is changing how companies use data to get, keep, and grow their most profitable customers. It helps managers “cut through the clutter” and analyze a wide range of data to make better decisions about the future of their business. The music industry uses it the same way, helping listeners sift through myriad song choices to find their next favorite. Pandora’s Music Genome Project does just that, powering a recommendation algorithm that serves up choices based on the attributes of the music you’ve listened to in the past. Similarly, Spotify’s Discover Weekly playlist is a huge hit with music lovers, who appreciate Spotify’s help in surfacing new songs they may love.
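Pandora hasn’t published its algorithm, but the attribute-based idea is easy to sketch: score each song on a handful of musical attributes, average the attribute vectors of the songs a listener has played, and recommend the unheard songs closest to that average. A minimal sketch in Python, with invented songs and attributes:

```python
import numpy as np

# Hypothetical attribute scores (energy, acousticness, tempo, vocal-centricity), scaled 0-1.
catalog = {
    "song_a": np.array([0.9, 0.1, 0.8, 0.7]),
    "song_b": np.array([0.2, 0.9, 0.3, 0.4]),
    "song_c": np.array([0.8, 0.2, 0.7, 0.6]),
}

def recommend(history, catalog):
    """Rank unheard songs by cosine similarity to the listener's average taste vector."""
    taste = np.mean([catalog[s] for s in history], axis=0)

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    candidates = {s: v for s, v in catalog.items() if s not in history}
    return sorted(candidates, key=lambda s: cosine(taste, candidates[s]), reverse=True)

print(recommend(["song_a"], catalog))  # "song_c" ranks first: it's the closest match
```

Real systems layer collaborative filtering and business rules on top, but the core “more like what you already played” logic is this simple.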

So, the next time you need to figure out how to best leverage the range of data you have—or find a new summer jam—consider predictive analytics.

Mark is a Vice President at CMB, and he’s fully embracing his reputation around the office as the DJ of the Digital Age.

Did you miss our recent webinar on the power of Social Currency measurement to help brands activate the 7 levers that encourage consumers to advocate, engage, and gain real value? You're not out of luck:

 Watch Here

 

Topics: advanced analytics, big data, data integration, predictive analytics

CMB Conference Recap: ARF Re!Think16

Posted by Julie Kurd

Thu, Mar 17, 2016

Ron Amram of Heineken uttered the three words that sum up my ARF #ReThink16 experience: science, storytelling, and seconds. Let’s recap some of the most energizing insights:

  • Science: Using Data to Generate Insights
    • AT&T Mobility’s Greg Pharo talked about how AT&T measures the impact of mass and digital advertising. They start with a regression and integrate marketing variables (media weight, impressions, GRPs, brand and message recall, WoM, etc.) as well as major product launches, distribution, and competitive data, topped off with macroeconomic data and internal operational data such as network quality. (A toy sketch of this kind of regression follows this list.)
    • GfK’s voice analytics research actually records respondents’ voices and captures voice inflection, which predicts new idea or new product success by asking a simple question: “What do you think about this product and why?” They explore sentiment by analyzing respondents’ speech for passion, activation, and whether they’d purchase. I had to ask a question: since I have a sunny and positive personality, wouldn’t my voice always sound to a machine as though I like every product? Evidently, no. They establish each individual respondent’s baseline and measure the change.  
    • Nielsen talked about its new normative benchmark of 40 ads (soon increasing to 75) and how it uses a multi-method approach—a mix of medical-grade EEG, eye tracking, facial coding, biometrics, and self-reporting—to get a full view of reactions to advertising.
  • Storytelling: Using Creative That’s Personal
    • Doug Ziewacz (Head of North America Digital Media and Advertising for Under Armour Connected Fitness) spoke about the ecosystem of connected health and fitness. It’s not enough to just receive a notification that you’ve hit your 10,000 steps—many people are looking for community and rewards.
    • Tell your story. I saw several presentations that covered how companies ensure that potential purchasers view a product’s advertising and how companies are driving interest from target audiences.
      • Heineken, for example, knows that 50% of its 21-34 year-old male target don’t even drink beer, so they focus on telling stories to the other 50%. The company’s research shows that most male beer drinkers are sort of loyal to a dozen beer brands, with different preferences for different occasions. Ron Amram (VP of Media at Heineken) talked about the need to activate people with their beer for the right occasion. 
      • Manvir Kalsi, Senior Manager of Innovation Process and Research at Samsung, said that Samsung spends ~$3B in advertising globally. With such a large footprint, they often end up adding impressions for people who will never be interested in the product. Now, the company focuses on reaching entrenched Apple consumers with messages (such as long battery life) that might not resonate with Samsung loyalists but will hit Apple users hard and give those Apple users reasons to believe in Samsung. 
  • Seconds: Be Responsive Enough to Influence the Purchase Decision Funnel
    • Nathalie Bordes from ESPN talked about sub-second ad exposure effectiveness. She spoke frankly about how exposure time is no longer the most meaningful driver of ad recall in mobile scrolling or static environments. In fact, 36% of the audience recalled an ad after only half a second of exposure, 59% after 1 second, and 78% after 2 seconds. Point being, every time we have to wait 4 or 5 seconds before clicking “skip ad” on YouTube, our brains really are taking in those ads.
    • Laura Bernstein from Symphony Advanced Media discussed the evolution of Millennials’ video viewing habits. Symphony is using measurement technology among its panel of 15,000 viewers who simply install an app and then keep their phones charged and near them, allowing the app to passively collect cross-platform data. A great example of leveraging the right tech for the right audience.
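AT&T’s actual model isn’t public, but the regression approach Greg Pharo described can be illustrated with synthetic data: regress a sales outcome on media and macroeconomic variables and read the fitted coefficients as directional driver estimates. A toy sketch (every number invented):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 104  # two years of weekly observations

# Synthetic drivers: media weight, digital impressions (millions), a macro index.
grps = rng.uniform(50, 300, n)
impressions = rng.uniform(1, 20, n)
confidence = rng.normal(100, 10, n)
sales = 500 + 1.2 * grps + 8 * impressions + 2 * confidence + rng.normal(0, 50, n)

# Ordinary least squares with an intercept; coefficients estimate each driver's effect.
X = sm.add_constant(np.column_stack([grps, impressions, confidence]))
model = sm.OLS(sales, X).fit()
print(model.params)
```

A production marketing-mix model would add adstock/carryover transformations, seasonality, and the launch, distribution, and competitive variables the presenters described.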

How does your company use science and storytelling to drive business growth?

Want to know more about Millennials' attitudes and behaviors toward banking and finance? Download our new Consumer Pulse report here!

Topics: storytelling, marketing science, advertising, data integration, conference recap

Say Goodbye to Your Mother’s Market Research

Posted by Matt Skobe

Wed, Dec 02, 2015

Is it time for the “traditional” market researcher to join the ranks of the milkman and the switchboard operator? The pressure to provide more actionable insights, more quickly, has never been higher. Add new competitors into the mix, and you have an industry feeling the pinch. At the same time, primary data collection has become substantially more difficult:

  • Response rates are decreasing as people become more and more inundated with email requests
  • Many among the younger crowd don’t check their email frequently, favoring social media and texting
  • Spam filters have become more effective, so potential respondents may not receive email invitations
  • The cell-phone-only population is becoming the norm—calls are easily avoided using voicemail, caller ID, call-blocking, and privacy managers
  • Traditional questionnaire methodologies don’t translate well to the mobile platform—it’s time to ditch large batteries of questions

It’s just harder to contact people and collect their opinions. The good news? There’s no shortage of researchable data. Quite the contrary: there’s more than ever. It’s just that market researchers are no longer the exclusive collectors—there’s a wealth of data collected internally by companies as well as a surge of new secondary passive data generated by mobile use and social media. We’ll also soon be awash in the Internet of Things, in which everything with an on/off switch is increasingly connected to everything else (e.g., a wearable device can unlock your door and turn on the lights as you enter). The possibilities are endless, and all this activity will generate enormous amounts of behavioral data.

Yet, as tantalizing as these new forms of data are, they’re not without their own challenges. One such challenge? Barriers to access. Businesses may share data they collect with researchers, and social media is generally public domain, but what about data generated by mobile use and the Internet of Things? How can researchers get their hands on this aggregated information? And once acquired, how do you align dissimilar data for analysis? You can read about some of our cutting-edge research on mobile passive behavioral data here.

We also face challenges in striking the proper balance between sharing information and protecting personal privacy. However, people routinely trade personal information online when seeking product discounts and for the benefit of personalizing applications. So, how and what’s shared, in part, depends on what consumers gain. It’s reasonable to give up some privacy for meaningful rewards, right? There are now health insurance discounts based on shopping habits and information collected by health monitoring wearables. Auto insurance companies are already doing something similar in offering discounts based on devices that monitor driving behavior.

We are entering an era of real-time analysis capabilities. The kicker is that with real-time analysis comes the potential for real-time actionable insights to better serve our clients’ needs.

So, what’s today’s market researcher to do? Evolve. To avoid marginalization, market researchers need to continue to understand client issues and cultivate insights into consumer behavior. To do so effectively in this new world, they need to embrace new and emerging analytical tools and effectively mine data from multiple disparate sources, bringing together the best of data science and knowledge curation to consult and partner with clients.

So, we can say goodbye to “traditional” market research? Yes, indeed. The market research landscape is constantly evolving, and the insights industry needs to evolve with it.

Matt Skobe is a Data Manager at CMB with keen interests in marketing research and mobile technology. When Matt reaches his screen time quota for the day, he heads to Lynn Woods for gnarcore mountain biking.

Topics: data collection, mobile, consumer insights, marketing science, internet of things, data integration, passive data

Dear Dr. Jay: The Internet of Things and The Connected Cow

Posted by Dr. Jay Weiner

Thu, Nov 19, 2015

Hello Dr. Jay, 

What is the internet of things, and how will it change market research?

-Hugo 



Hi Hugo,

The internet of things is all of the connected devices that exist. Traditionally, that meant PCs, tablets, and smartphones. Now we’re seeing wearables, connected buildings and homes. . .and even connected cows. (Just when I thought I’d seen it all.) Connected cows, surfing the internet looking for the next greenest pasture. Actually, a number of companies offer connected cow solutions for farmers. Some are geared toward beef cattle, others toward dairy cows. Some devices are worn on the leg or around the neck; others are swallowed (I don’t want to know how you change the battery). Farmers can track the location of the herd, monitor milk production, and model which field’s grass best increases milk output. The solutions alert the farmer when a cow is sick or in heat, which means the farmer can get by with fewer hands and doesn’t need to be with each cow 24/7. Not only can the device predict when a cow is in heat, it can also bias the gender of the calf based on the window of opportunity: early artificial insemination increases the probability of a female calf. So the farmer can not only increase the number of successful inseminations but also decide whether the herd needs more bulls or more milk cows.

How did this happen? A bunch of farmers put the devices on their herds and began collecting data. Then additional data are appended to the data set (e.g., the time the cow was inseminated, whether it resulted in pregnancy, and the gender of the calf). If enough farmers do this, we can begin to build a robust data set for analysis.
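The “append” step is ordinary data integration: join the device readings to farmer-recorded outcomes on a shared animal identifier. A minimal sketch in Python with invented column names:

```python
import pandas as pd

# Hypothetical device log: one row per monitored insemination event.
device = pd.DataFrame({
    "cow_id": [101, 102, 103],
    "hours_from_heat_onset": [4, 14, 22],
})

# Outcomes recorded later by the farmer.
outcomes = pd.DataFrame({
    "cow_id": [101, 102, 103],
    "pregnant": [1, 1, 0],
    "calf_sex": ["F", "M", None],
})

# Append the outcomes to the sensor data on the shared identifier.
merged = device.merge(outcomes, on="cow_id", how="left")
print(merged)
```

Pool enough of these merged records across farms and you have the robust data set for analysis.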

So, what does this mean for humans? Well, many of you already own some sort of fitness band or watch, right? What if a company began to collect all of the data generated by these devices? Think of all the things it could do with those data! It could predict where the more active people live. If it appended some key health measures (BMI, diabetes, stroke, death, etc.) to the dataset, it could try to build a model that predicts a person’s probability of getting diabetes, having a stroke, or even dying. Granted, that’s probably not a message you want from your smart watch: “Good afternoon, Jay. You will be dead in 3 hours 27 minutes and 41 seconds.” Here’s another possible (and less grim) message: “Good afternoon, Jay. You can increase your time on this planet if you walk just another 1,500 steps per day.” Healthcare providers would also be interested in this information. With enough fitness tracking data, they might be able to compute new life expectancies and offer discounts to customers who maintain a healthy lifestyle (as tracked on the fitness band/watch).
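A model like the one imagined here could start as a simple classifier over wearable features. A toy sketch (features and labels entirely invented; a real health model would need vastly more data and rigor):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features: daily steps (thousands), resting heart rate, BMI.
X = np.array([
    [12, 58, 22],
    [3, 80, 31],
    [8, 65, 26],
    [2, 85, 33],
    [10, 60, 24],
])
y = np.array([0, 1, 0, 1, 0])  # 1 = developed the condition (toy labels)

model = LogisticRegression().fit(X, y)

# Predicted risk for a new person averaging 5,000 steps a day.
print(model.predict_proba([[5, 75, 29]])[0, 1])
```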

Based on connected cows, the possibility of this seems all too real. The question is: will we be willing to share the personal information needed to make this happen? Remember: nobody asked the cow if it wanted to share its rumination information with the boss.

Dr. Jay Weiner is CMB’s senior methodologist and VP of Advanced Analytics. He is completely fascinated and paranoid about the internet of things. Big brother may be watching, and that may not be a good thing.

Topics: technology research, healthcare research, data collection, Dear Dr. Jay, internet of things, data integration

You Cheated—Can Love Restore Trust?

Posted by James Kelley

Mon, Nov 02, 2015

This year has been rife with corporate scandals. FIFA’s corruption case and Volkswagen’s emissions-cheating admission, for example, may have irreparably damaged public trust in those organizations. These are just two of the major corporations caught this year, and if history tells us anything, we’re likely to see at least one more giant fall in 2015.

What can managers learn about their brands from watching the aftermath of corporate scandal? Let’s start with the importance of trust—something we can all revisit. We take it for granted when our companies or brands are in good standing, but when trust falters, it recovers slowly and impacts all parts of the organization. To prove the latter point, we used data from our recent self-funded Consumer Pulse research to understand the relationship between Likelihood to Recommend (LTR), a key performance indicator, and Trustworthiness, among a host of other brand attributes.

Before we dive into the models, let’s talk a little bit about the data. We leveraged data we collected some months ago—not at the height of any corporate scandal. In a perfect world, we would have pre-scandal and post-scandal observations of trust to understand any erosion due to awareness of the deception. This data also doesn’t measure the auto industry or professional sports. It focuses on brands in the hotel, e-commerce, wireless, airline, and credit card industries. Given the breadth of the industries, the data should provide a good look at how trust impacts LTR across different types of organizations. Finally, we used Bayes Net (which we’ve blogged about quite a bit recently) to factor and map the relationships between LTR and brand attributes. After factoring, we used TreeNet to get a more direct measure of explanatory power for each of the factors.

First, let’s take a look at the TreeNet results. Overall, our 31 brand attributes explain about 71% of the variance in LTR—not too shabby. Below is each factor’s individual contribution to the model (summing to 71%). Each factor is labeled by its top-loading attribute, although each comprises 3-5 such variables. For a complete list of which attributes go with which factor, see the Bayes Net map below. That said, this list (labeled by the top attributes) should give you an idea of what’s directly driving LTR:

[Figure: TreeNet factor contributions to LTR]

Looking at these factor scores in isolation, they make inherent sense—love for a brand (which factors with “I am proud to use” and “I recommend, like, or share with friends”) is the top driver of LTR. In fact, this factor is responsible for a third of the variance we can explain. Other factors, including those with trust and “I am proud to wear/display the logo of Brand X,” have more modest (and not all that dissimilar) explanatory power.

You might be wondering: if Trustworthiness doesn’t register at the top of the list for TreeNet, why is it so important? This is where Bayes Nets come into play. TreeNet, like regression, measures the direct relationships between independent and dependent variables, holding everything else constant. Bayes Nets, in contrast, look for relationships among all the attributes and help map indirect as well as direct relationships.
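TreeNet is Salford Systems’ proprietary gradient boosting tool, so as a rough open-source stand-in, here is what the “direct drivers” step looks like with scikit-learn’s gradient boosting on synthetic factor scores (data and effect sizes invented):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic scores standing in for nine brand factors.
X = rng.normal(size=(500, 9))
# Simulate LTR driven mostly by factor 7 ("love"), with smaller direct effects elsewhere.
ltr = 2.0 * X[:, 6] + 0.8 * X[:, 2] + 0.8 * X[:, 4] + rng.normal(size=500)

gbm = GradientBoostingRegressor(random_state=0).fit(X, ltr)
for i, imp in enumerate(gbm.feature_importances_, start=1):
    print(f"Factor {i}: {imp:.1%}")
```

Like TreeNet, these importance scores capture direct relationships only; the indirect pathways are what the Bayes Net map below adds.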

Below is the Bayes Net map for this same data. You need three important pieces of information to interpret it:

  1. The size of the nodes (circles/orbs) represents how important a factor is to the model. The bigger the circle, the more important the factor.
  2. Similarly, the thicker the lines, the stronger a relationship is between two factors/variables. The boldest lines have the strongest relationships.
  3. Finally, we can’t talk about causality here, only correlation. We can’t say Trustworthiness causes LTR to move in a certain direction, just that the two are related. And, as anyone who has sat through an introduction to statistics course knows, correlation does not equal causation.

[Figure: Bayes Net map of factor relationships]

Here, Factor 7 (“I love Brand X”) is no longer a hands-down winner in terms of explanatory power. Instead, you’ll see that Factors 3, 5, 7 and 9 each wield a great deal of influence in this map in pretty similar quantities. Factor 7, which was responsible for over a third of the explanatory power before, is well-connected in this map. Not surprising—you don’t just love a brand out of nowhere. You love a brand because they value you (Factor 5), they’re innovative (Factor 9), they’re trustworthy (Factor 3), etc. Factor 7’s explanatory power in the TreeNet model was inflated because many attributes interact to produce the feeling of love or pride around a brand.

Similarly, Factor 3 (Trustworthiness) was deflated. The TreeNet model picked up the direct relationship between Trustworthiness and LTR, but it didn’t measure its cumulative impact (a combination of direct and indirect relationships). Note how well-connected Factor 3 is. It’s strongly related (one of the strongest relationships in the map) to Factor 5, which includes “Brand X makes me feel valued,” “Brand X appreciates my business,” and “Brand X provides excellent customer service.” This means these two variables are fairly inseparable. You can’t be trustworthy/have a good reputation without the essentials like excellent customer service and making customers feel valued. Although to a lesser degree, Trustworthiness is also related to love. Business is like dating—you can’t love someone if you don’t trust them first.

The data shows that sometimes relationships aren’t as cut and dried as they appear in classic multivariate techniques. Some things that look important are inflated, while other relationships are masked by indirect pathways. The data also shows that trust can influence a host of other brand attributes and may even be a prerequisite for some.

So what does this mean for Volkswagen? Clearly, trust is damaged and will need to be repaired.  True to crisis management 101, VW has jettisoned a CEO and will likely make amends to those owners who have been hurt by their indiscretions. But how long will VW feel the damage done by this scandal? For existing customers, the road might be easier. One of us, James, is a current VW owner, and he is smitten with the brand. His particular model (GTI) wasn’t impacted, and while the cheating may damage the value of his car, he’s not selling it anytime soon. For prospects, love has yet to develop and a lack of trust may eliminate the brand from their consideration set.

The takeaway for brands? Don’t take trust for granted. It’s great while you’re in good favor, but trust’s reach is long, varied, and has the potential to impact all of your KPIs. Take a look at your company through the lens of trust. How can you improve? Take steps to better your customer service and to make customers feel valued. It may pay dividends in improving trust, other KPIs, and, ultimately, love.

Dr. Jay Weiner is CMB’s senior methodologist and VP of Advanced Analytics. He keeps buying new cars to try to make the noise on the right side go away.

James Kelley splits his time at CMB as a Project Manager for the Technology/eCommerce team and as a member of the analytics team. He is a self-described data nerd, political junkie, and board game geek. Outside of work, James is working on his dissertation in political science, which he hopes to complete in 2016.

Topics: advanced analytics, data collection, Dear Dr. Jay, data integration, customer experience and loyalty

It's Not the Technology. . .It's Us

Posted by Mark Doherty

Wed, Oct 28, 2015

We’ve come a long way, baby. . .

In the past three decades, the exponential growth in technology’s capabilities has given us the power to integrate multiple sources, predict behaviors, and deliver insights at a speed we only dreamt of when I was starting out. CMB Chairman and co-founder Dr. John Martin was an early cheerleader for the value of using multiple methods and multiple sources, so the promise of bringing disparate data sources into a unified view of customers and the marketplace is this researcher’s dream come true.

While integrating data to help make smarter decisions has always been a best practice, advances in technology have made that integration far easier and more powerful. Below are some recent examples we’ve implemented at CMB:

  • In segmentation studies, we include needs/attitude-based survey data, internal CRM behaviors, and third-party appended data into the modeling to create more useful segments. Our clients have found that our perceptual data is a necessary complement to their internal data because it helps explain the “why’s” to the “what’s” that the internal behavioral/demographic data tell them.
  • For our brand tracking clients, we often combine web analytics (e.g., Google search data, social media sentiment analysis, client’s web traffic statistics) and internal data (e.g., inquiries, loyalty applications) with our tracking results to help tell a much more nuanced story of the brand’s progress. Additionally, we use dashboards to tie that data together in one place, providing a real-time view of the brand.
  • Our customer experience clients now provide us with internal data from call center reports (detailing the types of complaints received) and internal performance metrics to complement our satisfaction tracking. 

. . .but we’ve got a ways to go.

While many organizations are leveraging technology to integrate data for specific decision areas, I see a number of stumbling blocks. Many companies are still failing to develop an enterprise-wide, unified view of the marketplace—and the barriers often have little to do with the data or tech themselves: 

  • Organizational siloes make it very challenging for different functional areas to come together and create a common platform for this type of unified view. 
  • Moreover, the politics of who owns what—and more importantly, who pays for what—oftentimes means efforts like this never get off the ground.  

So, while it seems like technology is helping make all sorts of different data “play together,” we as humans haven’t mastered the same challenge! 

How do organizations overcome these challenges? Like most challenges, the solution starts with senior leadership. If the C-suite makes it a priority for the organization to become customer-centric and stresses that data is a big part of getting there, that goes far to pave the way for the different personalities and siloes to come together. Starting small is another way to tackle the problem. Look for opportunities for teams to collaborate, even something as simple as looking at purchase behaviors from customers six months after they complete a satisfaction questionnaire in order to develop and refine the predictive power of your customer experience tracking (a toy sketch of that join follows below). Starting small can create a more positive beginning to the partnership, building the trust and communication necessary to attack the bigger challenges down the road.
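That “start small” join is straightforward in practice: match satisfaction scores to purchase records six months later on a customer ID and compare repurchase rates. A toy sketch with invented tables:

```python
import pandas as pd

# Hypothetical extracts: satisfaction surveys, and purchases made six months later.
surveys = pd.DataFrame({
    "customer_id": [1, 2, 3, 4],
    "satisfaction": [9, 4, 8, 3],  # 0-10 scale
})
purchases = pd.DataFrame({
    "customer_id": [1, 3],
    "repurchased_within_6mo": [1, 1],
})

joined = surveys.merge(purchases, on="customer_id", how="left")
joined["repurchased_within_6mo"] = joined["repurchased_within_6mo"].fillna(0)

# Do more satisfied customers repurchase more often?
print(joined.groupby(joined["satisfaction"] >= 7)["repurchased_within_6mo"].mean())
```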

Mark is a Vice President at CMB, and while he recognizes that technology has absolutely transformed all aspects of his professional and personal life, he sees meaning in the fact that he prefers his music playlists generated by humans, not algorithms. Long live the DJ!

Are you following us on Twitter? If not, join the party! 

Follow Us @cmbinfo!

Topics: consumer insights, B2B research, data integration

Survey Magazine Names CMB’s Talia Fein a 2015 “Data Dominator”

Posted by Talia Fein

Wed, Sep 23, 2015

Every year, Survey Magazine names 10 “Data Dominators” who are conquering data in different ways at their companies. This year, our very own Talia Fein was chosen. She discusses her passion for data in Survey Magazine’s August issue, and we’ve reposted the article below.

When I first came to CMB, a research and strategy company in Boston, I was fresh out of undergrad and an SPSS virgin. In fact, I remember there being an SPSS test that all new hires were supposed to take, but I couldn’t take it because I didn’t even know how to open a data file. Fast forward a few months, and I had quickly been converted to an SPSS specialist, a numbers nerd, or, perhaps more appropriately, a data dominator. I was a stickler for process and precision in all data matters, and I took great pride in ensuring that all data and analyses were perfect and pristine. To put it bluntly, I was a total nerd.

I recently returned to CMB after a four-year hiatus. When I left CMB, I quickly became the survey and data expert among my new colleagues and the point person for all SPSS and data questions. But it wasn’t just my data skills that were being put to use. To me, data management is also about the process and the organization of data. In my subsequent roles, I found myself looking to improve the data processes and streamline the systems used for survey data. I brought new software programs to my companies and taught my teams how to manage data effectively and efficiently.

When I think about the future of the research industry, I imagine survey research as the foundation of a house. Survey data and data management are the building blocks of what we do. When we do them excellently, we are a well-oiled machine. But a well-oiled machine doesn’t sell products or help our clients drive growth. We need to have the foundation in place in order to extend beyond it and to prepare ourselves for the next big thing that comes along. And that next big thing, in my mind, is big data technology. There is a lot of data out there, and a lot of ways of managing and analyzing it, and we need to be ready for that. We need to expand our ideas about where our data is coming from and what we can do with it. It is our job to connect these data sources and to find greater meaning than we were previously able to. It is this non-traditional use of data and analytics that is the future of our industry, and we have to be nimble and creative in order to best serve our clients’ ever-evolving needs.

One recent example of this is CMB’s 2015 Mobile Wallet study, which leveraged multiple data sources and—in the process—revealed which were good for what types of questions. In this research, we analyzed mobile behavioral data, including mobile app and mobile web usage, along with survey-based data to get a full picture of consumers’ behaviors, experiences, and attitudes toward mobile wallets. We also came away with new best practices for managing passive mobile behavioral data, which presents challenges distinct from those of managing survey data. Our clients are making big bets on new technology, and they need the comprehensive insights that come from integrating multiple sources. We deliberately sampled different sources because we know that—in practice—many of our clients are handed multiple data sets from multiple sources. To serve these clients well, we need to leverage all the data sources at our and their disposal so we can glean the best insights and make the best recommendations.

Talia Fein is a Project & Data Manager at Chadwick Martin Bailey (CMB), a market research consulting firm in Boston. She’s responsible for the design and execution of market research studies for Fortune 500 companies as well as the data processing and analysis through all phases of the research. Her portfolio includes clients such as Dell, Intel, and Comcast, and her work includes customer segmentation, loyalty, brand tracking, new product development, and win-loss research.

Topics: our people, big data, data integration

Dear Dr. Jay: Data Integration

Posted by Jay Weiner, PhD

Wed, Aug 26, 2015

Dear Dr. Jay,

How can I explain the value of data integration to my CMO and other non-research folks?

- Jeff B. 


 


Hi Jeff,

Years ago, at a former employer that will remain unnamed, we used to entertain ourselves by playing Buzzword Bingo in meetings. We’d create Bingo cards with 30 or so words that management liked to use (“actionable,” for instance). You’d be surprised how fast you could fill a card. If you’ve attended a conference in the past few years, you know we as market researchers have plenty of new words to play with. Think: big data, integrated data, passive data collection, etc. What do all these new buzzwords really mean to the research community? It boils down to this: we potentially have more data to analyze, and the data might come from multiple sources.

If you only collect primary survey data, you typically only worry about sample reliability, measurement error, construct validity, and non-response bias. With multiple sources of data, we need to worry about all of that plus the level of aggregation, the impact of missing data, and the accuracy of the data. When we get a database of information to append to survey data, we often don’t question the contents of that file. . .but maybe we should.

A client recently sent me a file with more than 100,000 records (ding ding, “big data”). Included in the file were survey data from a number of ad hoc studies conducted over the past two years as well as customer behavioral data (ding ding, “passive data”). And, it was all in one file (ding ding, “integrated data”). BINGO!

I was excited to get this file for a couple of reasons. One, I love to play with really big data sets, and two, I was able to start playing right away. Most of the time, clients send me a bunch of files, and I have to do the integration/merging myself. Because this file was already integrated, I didn’t need to worry about having unique and matching record identifiers in each file.

Why would a client have already integrated these data? Well, if you can add variables to your database and append attitudinal measures, you can improve the value of the modeling you can do. For example, let’s say that I have a Dunkin’ Donuts (DD) rewards card, and every weekday, I stop by a DD close to my office and pick up a large coffee and an apple fritter. I’ve been doing this for quite some time, so the database modelers feel fairly confident that they can compute my lifetime value from this pattern of transactions. However, if the coffee was cold, the fritter was stale, and the server was rude during my most recent transaction, I might decide that McDonald’s coffee is a suitable substitute and stop visiting my local DD store in favor of McDonald’s. How many days without a transaction will it take the DD algorithm to decide that my lifetime value is now $0.00? If we could append customer experience survey data to the transaction database, maybe the model could be improved to adapt more quickly. Maybe after just 5 days without a purchase, it might send a coupon in an attempt to lure me back—but I digress.
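The coupon idea amounts to a simple trigger rule once the survey and transaction data live in one place. A toy sketch (thresholds invented):

```python
from datetime import date

def reengagement_action(last_purchase: date, today: date, last_csat,
                        quiet_days: int = 5) -> str:
    """Toy rule: a lapsed regular whose last experience was bad gets a win-back coupon."""
    lapsed = (today - last_purchase).days >= quiet_days
    unhappy = last_csat is not None and last_csat <= 2  # 1-5 satisfaction scale
    return "send win-back coupon" if lapsed and unhappy else "no action"

# Six quiet days after a rude-server, cold-coffee visit rated 1 out of 5:
print(reengagement_action(date(2015, 8, 1), date(2015, 8, 7), last_csat=1))
```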

Earlier, I suggested that maybe we should question the contents of the database. When the client sent me the file of 100,000 records, I’m pretty sure that was most (if not all) of the records that had both survey and behavioral measures. Considering the client has millions of account holders, that’s actually a sparse amount of data. Here’s another thing to consider: how well do the two data sources line up in time? Even if 100% of my customer records included overall satisfaction with my company, the data may not be as useful as you might think. Overall satisfaction in 2010 and behavior in 2015, for example, may not produce a good model. What if some of the behavioral measures were missing values? If a customer recently signed up for an account, his or her 90-day behavioral data elements won’t be populated for some time. That means I would need to either remove these respondents from my file or build separate models for new customers.
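Both checks—time alignment and missing behavioral fields—are easy to make explicit before modeling. A sketch with invented columns:

```python
import pandas as pd

df = pd.DataFrame({
    "account_id": [1, 2, 3],
    "survey_date": pd.to_datetime(["2010-06-01", "2015-03-01", "2015-05-01"]),
    "behavior_date": pd.to_datetime(["2015-06-01", "2015-06-01", "2015-06-01"]),
    "spend_90d": [1200.0, None, 800.0],
})

# Drop records where the survey is too stale relative to the behavioral window.
aligned = df[(df["behavior_date"] - df["survey_date"]).dt.days <= 365].copy()

# Model accounts with no 90-day history yet separately from established ones.
new_accounts = aligned[aligned["spend_90d"].isna()]
established = aligned.dropna(subset=["spend_90d"])
print(len(aligned), len(new_accounts), len(established))  # 2, 1, 1
```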

The good news is that there is almost always some value to be gained in doing these sorts of analyses. As long as we’re cognizant of the quality of our data, we should be safe in applying the insights.

Got a burning market research question?

Email us! OR Submit anonymously!

Dr. Jay Weiner is CMB’s senior methodologist and VP of Advanced Analytics. Jay earned his Ph.D. in Marketing/Research from the University of Texas at Arlington and regularly publishes and presents on topics, including conjoint, choice, and pricing.

Topics: advanced analytics, big data, Dear Dr. Jay, data integration, passive data

Be Aware When Conducting Research Among Mobile Respondents

Posted by Julie Kurd

Tue, Oct 28, 2014

mobile, cmb

Are you conducting research among mobile respondents yet? Autumn is conference season, and 1,000 of us just returned from IIR’s The Market Research Event (TMRE) conference, where we learned, among other things, about research among mobile survey takers. Currently, only about 5% of market research industry spend goes to research conducted on a smartphone; 80% is online, and 15% is everything else (telephone and paper-based). Because mobile research is projected to reach 20% of industry spend in the coming years, we all need to understand the risks and opportunities of using mobile surveys.

Below, you’ll find three recent conference presentations that discussed new and fresh approaches to mobile research as well as some things to watch out for if you decide to go the mobile route. 

1. At IIR TMRE, Anisha Hundiwal, Director of U.S. Consumer and Business Insights for McDonald’s, and Jim Lane from Directions Research Inc. (DRI) did not disappoint. They co-presented research done to understand the strengths of half a dozen national and regional coffee brands, including Newman’s Own (the coffee McDonald’s serves), across 48 brand attributes. While they did share some compelling results, Anisha and Jim’s presentation focused primarily on their methodology. Here is my paraphrase of the approach they took:

  • They used a traditional 25-minute, full-length online study among traditional computer/laptop respondents who met the screening criteria (U.S. and Europe, age, gender, etc.), measuring half a dozen brands and approximately 48 brand attributes. They then analyzed the results of the full-length study and conducted a key driver analysis.
  • Next, they administered the study via a mobile app among similar respondents who met the same screening criteria. They dropped the survey length to 10 minutes, tested a narrower set of brands (3 instead of 6), and winnowed the attributes from ~48 to ~14, making informed choices about which attributes to include based on their key driver analysis (key drivers of overall equity, plus—I believe I heard them say—some highly polarizing attributes).

Then they compared the mobile results to the traditional online results. Anisha and Jim discussed key challenges we all face as we adapt to smartphone respondent research. For example, they tinkered with rating scales and slider bars, setting the bar at the far left (0 on a 0-100 scale) for some respondents and at the midpoint for others to see if results would differ. While the overall brand results were about the same, respondents used different sections of the rating scales, which made detailed online and mobile results hard to compare. Finally, they reported that the winnowed attribute and brand lists made the insights less rich than the online survey results.

2. At the MRA Corporate Researchers conference in September, Ryan Backer, Global Insights for Emerging Tech at General Mills, also very clearly articulated several early learnings in the emerging category of mobile surveys. He said that 80% of General Mills’ research team has conducted at least one smartphone respondent study. (Think about that and wonder out loud, “Should I at least dip my toe into this smartphone research?”) He provided a laundry list of the challenges they faced, and, like all true innovators, he was willing to share those challenges because doing so helps him continue to innovate. You can read a full synopsis here.

3. Chadwick Martin Bailey was a finalist for the NGMR Disruptive Innovation Award at the IIR TMRE conference. We partnered with Research Now for a presentation on modularizing surveys for mobile respondents at an earlier IIR conference and then turned the presentation into a webinar. CMB used a modularized technique in which a 20-minute survey was deconstructed into 3 partial surveys with key overlaps. After fielding the research among mobile survey takers, CMB used some designer analytics (warning: probably don’t do this without a resident PhD) to ‘stitch’ and ‘impute’ the results. In this conference-presentation-turned-webinar, CMB talks about the pros and cons of this approach (the modular idea is sketched below).
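The webinar doesn’t spell out the full design, but the core modularization idea is simple: split the questionnaire into overlapping partial surveys so every pair of modules shares questions, then use the overlap to impute what each respondent didn’t see. A toy sketch of the module design (question names invented; the imputation step is the part that genuinely needs that resident PhD):

```python
# Fifteen hypothetical questions split into three overlapping modules.
questions = [f"Q{i}" for i in range(1, 16)]

modules = [
    questions[0:10],                    # Q1-Q10
    questions[5:15],                    # Q6-Q15 (shares Q6-Q10 with module 1)
    questions[0:5] + questions[10:15],  # Q1-Q5 + Q11-Q15 (shares questions with both)
]

def assign_module(respondent_id: int) -> list:
    """Rotate respondents across modules: each question is seen by ~2/3 of the sample."""
    return modules[respondent_id % 3]

print(assign_module(7))  # this respondent answers Q6-Q15
```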

Conferences are a great way to connect with early adopters of new research methods. So, when you’re considering adopting new research methods such as mobile surveys, allocate time to see what those who have gone before you have learned!

Julie blogs for GreenBook, ResearchAccess, and CMB. She’s an inspired participant, amplifier, socializer, and spotter in the Twitter #mrx community, so talk research with her @julie1research.

Topics: data collection, mobile, research design, data integration, conference recap

Global Mobile Market Research Has Arrived: Are You Prepared?

Posted by Brian Jones

Wed, May 14, 2014

The ubiquity of mobile devices has opened up new opportunities for market researchers on a global scale. Think: biometrics, geo-location, presence sensing, etc. The emerging possibilities enabled by mobile market research are exciting and worth exploring, but we can’t ignore the impact that small screens are already having on market research. For example, unintended mobile respondents make up about 10% of online interviews today. They also impact research in other ways—through dropped surveys, disenfranchised panel members, and other unknown influences. Online access panels have become multi-mode sources of data collection, and we need to manage projects with that in mind.

Researchers have at least three options: (1) we can ignore the issue; (2) we can limit online surveys to PC only; or (3) we can embrace and adapt online surveys to a multi-mode methodology. 

We don’t need to make special accommodations for small-screen surveys if mobile participants are a very small percentage of panel participants, but their number is growing. Frank Kelly, SVP of global marketing and strategy for Lightspeed Research/GMI—one of the world’s largest online panels—puts it this way: “We don’t have the time to debate the mobile transition, like we did in moving from CATI to online interviewing, since things are advancing so quickly.”

If you look at the percentage of surveys completed on small screens in recent GMI panel interviews, it exceeds 10% in several countries and even 15% among millennials.

[Figure: percentage of GMI panel surveys completed on small screens, by country]

There are no truly device-agnostic platforms, since the advanced features in many surveys simply cannot be supported on small screens and less sophisticated devices. It is possible to create device-agnostic surveys, but it means giving up many survey features we’ve long considered standard. This creates a challenge. Some question types, such as discrete choice exercises or multi-dimensional grids, aren’t effectively supported by small screens, and a touchscreen interface is different from a mouse. Testing on mobile devices may also reveal questions that render differently depending on the platform, which can influence how a respondent answers. In instances like these, it may be prudent to require respondents to complete online interviews on a PC-like device. The reverse is also true: some research requires mobile-only respondents, particularly when the specific features of smartphones or tablets are used. In some emerging countries, researchers may skip the PC as a data collection tool altogether in favor of small-screen mobile devices. In certain instances, PC-only or mobile-only interviewing makes sense, but the majority of today’s online research involves a mix of platform types. It is clear we need to adopt best practices that reflect this reality.

Online questionnaires must work on all, or at least the vast majority, of devices. This becomes particularly challenging for multi-country studies, which involve a greater variety of devices, different broadband penetrations, and different coverage/quality concerns for network access and availability. A research design that covers as many devices as possible—both PC and mobile—maximizes the breadth of respondents likely to participate.

There are several ways to mitigate concerns and maximize the benefits of online research involving different platform types. 

1. Design different versions of the same study optimized for larger vs. smaller screens. One version might even be app-based instead of online-based, which would mitigate concerns over network accessibility.

2. Break questionnaires into smaller chunks to avoid respondent fatigue on longer surveys, which is a greater concern for mobile respondents.

Both options 1 and 2 have their own challenges. They require matching/merging data, separate programming, and separate testing, all of which can lead to more costly studies.

3. Design more efficient surveys and shorter questionnaires. This is essential for accommodating multi-device user experiences. Technology needs to be part of the solution, specifically better auto-detect features that optimize how questionnaires are presented on different screen sizes. For multi-country studies, technology also needs to adapt how questionnaires are presented in different languages.

Researchers can also use mobile-first questionnaire design practices.  For our clients, we always consider the following:

  • Shortening survey lengths since drop-off rates are greater for mobile participants, and it is difficult to hold their focus for more than 15 minutes.

  • Structuring questionnaires to enable smaller screen sizes to avoid horizontal scrolling and minimize vertical scrolling.

  • Minimizing the use of images and open-ended questions that require longer responses. SMS-based interviewing is still useful in specific circumstances, but the number of keystrokes required for online research should be minimized.

  • Keeping the wording of the questions as concise as possible.

  • Carefully choosing which questions to ask which subsets of respondents. We spend a tremendous amount of energy in the design phase making surveys more appealing to small-screen participants. This approach pays dividends in every other phase of research and in the quality of what is learned.

Consumers and businesses are rapidly embracing the global mobile ecosystem. As market researchers and insights professionals, we need to keep pace without compromising the integrity of the value we provide. Here at CMB, we believe that smart planning, a thoughtful approach, and an innovative mindset will lead to better standards and practices for online market research and our clients.

Special thanks to Frank Kelly and the rest of the Lightspeed/GMI team for their insights.

Brian is a Project Manager and mobile expert on CMB’s Tech and Telecom team. He recently presented the results of our Consumer Pulse: The Future of the Mobile Wallet at The Total Customer Experience Leaders conference.

In Universal City next week for the Future of Consumer Intelligence? Chris Neal, SVP of our Tech and Telecom team, and Roddy Knowles of Research Now, will share A “How-To” Session on Modularizing a Live Survey for Mobile Optimization.

 

Topics: methodology, data collection, mobile, data integration