Dear Dr. Jay: Data Integration

Posted by Jay Weiner, PhD

Wed, Aug 26, 2015

Dear Dr. Jay,

How can I explain the value of data integration to my CMO and other non-research folks?

- Jeff B. 


 


Hi Jeff,

Years ago, at a former employer that will remain unnamed, we used to entertain ourselves by playing Buzzword Bingo in meetings. We’d create Bingo cards with 30 or so words that management liked to use (“actionable,” for instance). You’d be surprised how fast you could fill a card. If you have attended a conference in the past few years, you know we as market researchers have plenty of new words to play with. Think: big data, integrated data, passive data collection, etc. What do all these new buzzwords really mean to the research community? It boils down to this: we potentially have more data to analyze, and the data might come from multiple sources.

If you only collect primary survey data, you typically worry about sample reliability, measurement error, construct validity, and non-response bias. With multiple sources of data, however, we need to worry about all of that plus the level of aggregation, the impact of missing data, and the accuracy of the data itself. When we get a database of information to append to survey data, we often don’t question the contents of that file... but maybe we should.

A client recently sent me a file with more than 100,000 records (ding ding, “big data”). Included in the file were survey data from a number of ad hoc studies conducted over the past two years as well as customer behavioral data (ding ding, “passive data”). And, it was all in one file (ding ding, “integrated data”). BINGO!

I was excited to get this file for a couple of reasons. One, I love to play with really big data sets, and two, I was able to start playing right away. Most of the time, clients send me a bunch of files, and I have to do the integration/merging myself. Because this file was already integrated, I didn’t need to worry about having unique and matching record identifiers in each file.
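When the files do arrive separately, the merge itself is straightforward as long as each file carries a shared record identifier. Here is a minimal sketch in Python/pandas, with hypothetical file and column names, of both the join and a quick check of how much overlap the sources actually have:

```python
import pandas as pd

# Hypothetical inputs: survey responses and behavioral records,
# each keyed by a shared customer identifier.
surveys = pd.read_csv("survey_responses.csv")    # cust_id, study_id, overall_sat, ...
behavior = pd.read_csv("behavioral_data.csv")    # cust_id, txns_90d, spend_90d, ...

# An inner join keeps only customers present in both sources --
# the same subset a pre-integrated file like this one represents.
integrated = surveys.merge(behavior, on="cust_id", how="inner")

# An outer join with indicator=True shows how many records exist
# in only one source, i.e., how sparse the overlap really is.
overlap = surveys.merge(behavior, on="cust_id", how="outer", indicator=True)
print(overlap["_merge"].value_counts())
```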

Why would a client have already integrated these data? Well, if you can add variables to your database and append attitudinal measures, you can improve the value of the modeling you can do. For example, let’s say that I have a Dunkin’ Donuts (DD) rewards card, and every weekday, I stop by a DD close to my office and pick up a large coffee and an apple fritter. I’ve been doing this for quite some time, so the database modelers feel fairly confident that they can compute my lifetime value from this pattern of transactions. However, if the coffee was cold, the fritter was stale, and the server was rude during my most recent transaction, I might decide that McDonald’s coffee is a suitable substitute and stop visiting my local DD store in favor of McDonald’s. How many days without a transaction will it take the DD algorithm to decide that my lifetime value is now $0.00? If we had the ability to append customer experience survey data to the transaction database, maybe the model could adapt more quickly. After even 5 days without a purchase, it might send a coupon in an attempt to lure me back. But I digress.
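To make that concrete, here is a toy sketch (hypothetical column names, invented thresholds) of how an appended satisfaction score could let a win-back rule fire after 5 quiet days instead of waiting for a long silence:

```python
import pandas as pd

# Hypothetical integrated frame: transaction recency plus the most
# recent appended survey response, one row per customer.
df = pd.DataFrame({
    "cust_id":        [1, 2, 3],
    "days_since_txn": [2, 6, 30],
    "last_sat":       [9, 3, None],  # 10-point satisfaction; None = never surveyed
})

# A recency-only rule has to wait a long time before reacting.
df["flag_recency_only"] = df["days_since_txn"] > 21

# Blending in the survey measure reacts sooner: a recently
# dissatisfied customer who goes quiet for even 5 days gets a coupon.
df["flag_with_survey"] = (df["days_since_txn"] > 21) | (
    (df["last_sat"] <= 4) & (df["days_since_txn"] > 5)
)
print(df)
```

Customer 2 (satisfaction 3, quiet for 6 days) is flagged by the blended rule well before a pure recency rule would notice anything.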

Earlier, I suggested that maybe we should question the contents of the database. When the client sent me the file of 100,000 records, I’m pretty sure that was most (if not all) of the records that had both survey and behavioral measures. Considering the client has millions of account holders, that’s actually a sparse amount of data. Here’s another thing to consider: how well do the two data sources line up in time? Even if 100% of my customer records included overall satisfaction with my company, these data may not be as useful as you might think. For example, overall satisfaction measured in 2010 and behavior measured in 2015 may not produce a good model. And what if some of the behavioral measures are missing? If a customer recently signed up for an account, his/her 90-day behavioral data elements won’t be populated for some time. This means that I would need to either remove these respondents from my file or build separate models for new customers.
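Both checks are easy to operationalize once the file is in hand. A minimal sketch, assuming hypothetical date and metric columns, of filtering on time alignment and splitting out new accounts:

```python
import pandas as pd

# Hypothetical integrated file with a survey date, a behavioral
# snapshot date, and a 90-day metric that is blank for new accounts.
df = pd.read_csv("integrated_file.csv",
                 parse_dates=["survey_date", "behavior_date"])

# Keep records where survey and behavior fall within, say, 6 months
# of each other -- a 2010 attitude won't explain 2015 behavior.
gap = (df["behavior_date"] - df["survey_date"]).abs()
aligned = df[gap <= pd.Timedelta(days=180)]

# Separate new accounts whose 90-day measures aren't populated yet;
# model them on their own rather than imputing zeros.
new_accounts = aligned[aligned["txns_90d"].isna()]
mature = aligned.dropna(subset=["txns_90d"])
```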

The good news is that there is almost always some value to be gained from these sorts of analyses. As long as we’re cognizant of the quality of our data, we should be safe in applying the insights.

Got a burning market research question?

Email us! OR Submit anonymously!

Dr. Jay Weiner is CMB’s senior methodologist and VP of Advanced Analytics. Jay earned his Ph.D. in Marketing/Research from the University of Texas at Arlington and regularly publishes and presents on topics including conjoint, choice, and pricing.

Topics: Advanced Analytics, Big Data, Dear Dr. Jay, Data Integration, Passive Data

Be Aware When Conducting Research Among Mobile Respondents

Posted by Julie Kurd

Tue, Oct 28, 2014


Are you conducting research among mobile respondents yet? Autumn is conference season, and 1,000 of us just returned from IIR’s The Market Research Event (TMRE) conference where we learned, among other things, about research among mobile survey takers. Currently, only about 5% of the market research industry spend is for research conducted on a smartphone, 80% is online, and 15% is everything else (telephone and paper-based). Because mobile research is projected to be 20% of the industry spend in the coming years, we all need to understand the risks and opportunities of using mobile surveys.  

Below, you’ll find three recent conference presentations that discussed new and fresh approaches to mobile research as well as some things to watch out for if you decide to go the mobile route. 

1. At IIR TMRE, Anisha Hundiwal, the Director of U.S. Consumer and Business Insights for McDonald’s, and Jim Lane from Directions Research Inc. (DRI) did not disappoint. They co-presented the research they had done to understand the strengths of half a dozen national and regional coffee brands, including Newman’s Coffee (the coffee that McDonald’s serves), across roughly 48 brand attributes. While they did share some compelling results, Anisha and Jim’s presentation primarily focused on the methodology they used. Here is my paraphrase of the approach they took:

  • They used a traditional 25-minute, full-length online study among traditional computer/laptop respondents who met the screening criteria (U.S. and Europe, age, gender, etc.), measuring a half dozen brands and approximately 48 brand attributes. They then analyzed results of the full-length study and conducted a key driver analysis.
  • Next, they administered the study using a mobile app for mobile survey takers among similar respondents who met the same screening criteria. They also dropped the survey length to 10 minutes, tested a narrower set of brands (3 instead of 6), and winnowed the attributes from ~48 to ~14. They made informed choices about which attributes to include based on their key driver analysis (key drivers of overall equity, and I believe I heard them say they added in some attributes that were highly polarizing); see the illustrative sketch below for what that kind of winnowing can look like.

Then, they compared mobile respondent results to the traditional online survey results. Anisha and Jim discussed key challenges we all face as we begin to adapt to smartphone respondent research. For example, they tinkered with rating scales and slider bars, setting the bar on the far left at 0 on a 0-100 rating scale for some respondents and at the mid-point for others to see if results would differ. While the overall brand results were about the same, the sections of the rating scales respondents used differed. Further, they reported that it was hard to compare detailed results for online and mobile because, in general, respondents used different parts of the rating scales. Finally, they reported that the winnowed attribute and brand lists made insights less rich than the online survey results.
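The presenters didn’t share code, but for readers curious about the mechanics, here is one common way to run the kind of key driver analysis described above: regress overall equity on standardized attribute ratings and rank attributes by driver weight. All file and column names are invented for illustration:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

# Hypothetical full-length study: ~48 attribute ratings plus an
# overall brand equity rating for each respondent.
df = pd.read_csv("full_length_study.csv")
attrs = [c for c in df.columns if c.startswith("attr_")]

# Standardize ratings so coefficients are comparable across attributes.
X = StandardScaler().fit_transform(df[attrs])
y = df["overall_equity"]

model = LinearRegression().fit(X, y)

# Rank attributes by absolute driver weight and keep the top ~14
# for the shortened mobile instrument. Highly polarizing attributes
# could then be added back by judgment, as the presenters described.
drivers = pd.Series(model.coef_, index=attrs).abs().sort_values(ascending=False)
shortlist = drivers.head(14).index.tolist()
print(shortlist)
```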

2. At the MRA Corporate Researcher’s conference in September, Ryan Backer, Global Insights for Emerging Tech at General Mills, also very clearly articulated several early learnings in the emerging category of mobile surveys. He said that 80% of General Mills’ research team has conducted at least one smartphone respondent study. (Think about that and wonder out loud, “should I at least dip my toe into this smartphone research?”) He provided a laundry list of the challenges they faced and, like all true innovators, he was willing to share his challenges because it helps him continue to innovate. You can read a full synopsis here.

3. Chadwick Martin Bailey was a finalist for the NGMR Disruptive Innovation Award at the IIR TMRE conference. We partnered with Research Now for a presentation on modularizing surveys for mobile respondents at an earlier IIR conference and then turned the presentation into a webinar. CMB used a modularized technique in which a 20-minute survey was deconstructed into 3 partial surveys with key overlaps. After fielding the research among mobile survey takers, CMB used some designer analytics (warning: probably don’t do this without a resident PhD) to ‘stitch’ and ‘impute’ the results. In this conference-presentation-turned-webinar, CMB talks about the pros and cons of this approach.
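CMB’s actual stitching analytics are more sophisticated than anything that fits in a blog post (hence the resident-PhD warning), but a bare-bones sketch of the modular idea, with invented file and column names and a simple off-the-shelf imputer standing in for the real method, looks like this:

```python
import pandas as pd
from sklearn.impute import KNNImputer

# Hypothetical modular design: one 20-minute survey split into three
# partial surveys (sections A+B, B+C, A+C) with overlapping sections.
m1 = pd.read_csv("module_ab.csv")
m2 = pd.read_csv("module_bc.csv")
m3 = pd.read_csv("module_ac.csv")

# Stack the partials: every respondent now has planned missingness
# in exactly one section.
stacked = pd.concat([m1, m2, m3], ignore_index=True)

# "Stitch" by imputing each respondent's unseen section from
# respondents who answered similarly on the overlapping items.
items = [c for c in stacked.columns if c.startswith("q")]
stacked[items] = KNNImputer(n_neighbors=5).fit_transform(stacked[items])
```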

Conferences are a great way to connect with early adopters of new research methods. So, when you’re considering adopting new research methods such as mobile surveys, allocate time to see what those who have gone before you have learned!

Julie blogs for GreenBook, ResearchAccess, and CMB.  She’s an inspired participant, amplifier, socializer, and spotter in the twitter #mrx community so talk research with her @julie1research.

Topics: Data Collection, Mobile, Research Design, Data Integration, Conference Insights

Global Mobile Market Research Has Arrived: Are You Prepared?

Posted by Brian Jones

Wed, May 14, 2014

The ubiquity of mobile devices has opened up new opportunities for market researchers on a global scale. Think: biometrics, geo-location, presence sensing, etc. The emerging possibilities enabled by mobile market research are exciting and worth exploring, but we can’t ignore the impact that small screens are already having on market research. For example, unintended mobile respondents make up about 10% of online interviews today. They also impact research in other ways—through dropped surveys, disenfranchised panel members, and other unknown influences. Online access panels have become multi-mode sources of data collection and we need to manage projects with that in mind.

Researchers have at least three options: (1) we can ignore the issue; (2) we can limit online surveys to PC only; or (3) we can embrace and adapt online surveys to a multi-mode methodology. 

We don’t need to make special accommodations for small-screen surveys if mobile participants are a very small percentage of panel participants, but the number of mobile participants is growing. Frank Kelly, SVP of global marketing and strategy for Lightspeed Research/GMI—one of the world’s largest online panels—puts it this way: “We don’t have the time to debate the mobile transition, like we did in moving from CATI to online interviewing, since things are advancing so quickly.”

If you look at the percentage of surveys completed on small screens in recent GMI panel interviews, it exceeds 10% in several countries and even 15% among millennials.


There are no truly device-agnostic platforms, since the advanced features in many surveys simply cannot be supported on small screens and on less sophisticated devices. It is possible to create device-agnostic surveys, but it means giving up on many survey features that we’ve long considered standard. This creates a challenge. Some question types aren’t effectively supported by small screens, such as discrete choice exercises or multi-dimensional grids, and a touchscreen interface is different from what you get with a mouse. Testing on mobile devices may also reveal questions that render differently depending on the platform, which can influence how a respondent answers a question. In instances like these, it may be prudent to require respondents to complete online interviews on a PC-like device. The reverse is also true: some research requires mobile-only respondents, particularly when the specific features of smartphones or tablets are used. In some emerging countries, researchers may skip the PC as a data collection tool altogether in favor of small-screen mobile devices. In certain instances, PC-only or mobile-only interviewing makes sense, but the majority of today’s online research involves a mix of platform types. It is clear we need to adopt best practices that reflect this reality.

Online questionnaires must work on all, or at least the vast majority, of devices. This becomes particularly challenging for multi-country studies, which involve a greater variety of devices, different broadband penetration rates, and different coverage/quality concerns for network access and availability. A research design that covers as many devices as possible—both PC and mobile—maximizes the breadth of respondents likely to participate.

There are several ways to mitigate concerns and maximize the benefits of online research involving different platform types. 

1. Design different versions of the same study, optimized for larger vs. smaller screens. One version might even be app-based instead of online-based, which would mitigate concerns over network accessibility.

2. Break questionnaires into smaller chunks to avoid respondent fatigue on longer surveys, which is a greater concern for mobile respondents.

Both options 1 and 2 have their own challenges.  They require matching/merging data, need separate programming, and require separate testing, all of which can lead to more costly studies.

3. Design more efficient surveys and shorter questionnaires. This is essential for accommodating multi-device user experiences. Technology needs to be part of the solution, specifically with better auto-detect features that optimize how questionnaires are presented on different screen sizes (a toy example of this kind of routing is sketched below). For multi-country studies, technology needs to adapt how questionnaires are presented for different languages.
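As an illustration of the auto-detect idea (not any particular survey platform’s API), a server could inspect the respondent’s user-agent string and route them to a layout sized for their device:

```python
import re

# Simplistic user-agent check for routing respondents to a survey
# layout sized for their screen. Real platforms use far more robust
# detection; this only illustrates the routing logic.
MOBILE_PATTERN = re.compile(r"Mobile|Android|iPhone|iPad", re.IGNORECASE)

def survey_version(user_agent: str) -> str:
    """Return which questionnaire layout to serve."""
    if MOBILE_PATTERN.search(user_agent):
        return "small_screen"  # shorter, no grids, vertical layout only
    return "large_screen"      # full-length, grids and sliders allowed

print(survey_version("Mozilla/5.0 (iPhone; CPU iPhone OS 7_0 like Mac OS X)"))
```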

Researchers can also use mobile-first questionnaire design practices.  For our clients, we always consider the following:

  • Shortening survey lengths since drop-off rates are greater for mobile participants, and it is difficult to hold their focus for more than 15 minutes.

  • Structuring questionnaires to fit smaller screen sizes, avoiding horizontal scrolling and minimizing vertical scrolling.

  • Minimizing the use of images and open-ended questions that require longer responses. SMS-based interviewing is still useful in specific circumstances, but the number of keystrokes required for online research should be minimized.

  • Keeping the wording of the questions as concise as possible.

  • Carefully choosing which questions to ask which subsets of respondents. We invest a tremendous amount of effort in the design phase to make surveys more appealing to small-screen participants. This approach pays dividends in every other phase of research and in the quality of what is learned.

Consumers and businesses are rapidly embracing the global mobile ecosystem. As market researchers and insights professionals, we need to keep pace without compromising the integrity of the value we provide. Here at CMB, we believe that smart planning, a thoughtful approach, and an innovative mindset will lead to better standards and practices for online market research and our clients.

Special thanks to Frank Kelly and the rest of the Lightspeed/GMI team for their insights.

Brian is a Project Manager and mobile expert on CMB’s Tech and Telecom team. He recently presented the results of our Consumer Pulse: The Future of the Mobile Wallet at The Total Customer Experience Leaders conference.

In Universal City next week for the Future of Consumer Intelligence? Chris Neal, SVP of our Tech and Telecom team, and Roddy Knowles of Research Now will share A “How-To” Session on Modularizing a Live Survey for Mobile Optimization.

 

Topics: Methodology, Data Collection, Mobile, Data Integration

Data Oceans: You're Gonna Need a Bigger Boat

Posted by Jeff McKenna

Tue, Jul 17, 2012

We hear a lot about Big Data—from Target using predictive analytics to tell which of its customers are pregnant, to MIT and Intel putting millions behind their bigdata@CSAIL initiative. Yet, I’m struck by the fact that most of what I read, and hear at conferences, is about the wealth of data technology can provide researchers, managers, and analysts. There is very little about how these folks can avoid drowning in it and, most importantly, make the decisions that address business challenges.

For the uninitiated, the Big Data revolution is characterized by three traits:
  • Volume – Technology has led to an exponential increase in the data we have available.

  • Diversity – We can aggregate data from a wide range of disparate sources, like customer relationship management (CRM) systems, social media, voice of the customer, and even neuro-scientific measurement.

  • Speed – We are able to field and compile quantitative studies within days; before online, IVR, and mobile data collection methods were available, this took weeks.

While there may be other definitions of Big Data, it is clear that technology is making data larger, wider, and faster. What we need to think about is how technology can make our response to and analysis of that data larger, wider, and faster as well, so we avoid drowning in it.

The water metaphor is often used to describe Big Data, and the folks at the GreenBook Consulting Group use the term “oceans of data.”  They describe three business models driven by data: The Traditional (based on Data Ponds), Transitional (based on Data Rivers), and Future (based on Data Oceans).

Traditional market research based on small, discrete amounts of data has its place – but as the folks at GreenBook point out, market researchers must face the fact that the progression from these Data Ponds to Data Oceans is inevitable. The Traditional mindset faces inertia and will decline in relevance in the next five to ten years.

Those in the Transitional phase are moving forward, but they are in a “reactive” position. They see these changes around them and are applying some big data solutions in their work. They might have tried one or two tools or are even using them now on a regular basis. But when faced with large amounts of data, they think about technology only in terms of how to collect more data – not in how to manage and apply it quickly and in big ways. In contrast, the Future mindset takes a proactive approach; these are the people who think about how technology will be the fundamental basis for applying the ideas and solutions that lead companies.

In the coming weeks I’ll be discussing specific examples of technologies that are helping push market researchers towards this future. I’d love to hear from you about things you’re doing to respond to Big Data and the challenges and opportunities you are facing as we confront these Data Oceans.

Watch our webinar, Using Technology to Help Your Entire Company Understand and Act on Customer Needs, here.

Jeff is a senior consultant at CMB and team leader for Pinpoint Suite, our innovative Customer Experience Management software. Want to learn more about how Pinpoint Suite can help you make sense of your “Big Data”? Schedule a demo here.

Topics: Data Collection, Big Data, Data Integration