Voices of CMB: The Chadwick Martin Bailey Research Blog

Harry Potter and the Missing Segment

Gryffindor, Hufflepuff, Ravenclaw, or Slytherin? Brave, loyal, wise, or ambitious. . .which one are you?

For those of you unfamiliar with the Harry Potter series, these are the 4 houses that make up Hogwarts School of Witchcraft and Wizardry. When each young witch and wizard enters the school, a magical hat sorts them into one of four houses. Each house values certain attributes. Gryffindors value bravery and daring. Hufflepuffs value kindness and loyalty. Ravenclaws value knowledge and intelligence. Slytherins value ambition and cunning. The three main characters are Gryffindors (Harry, Ron, and Hermione), and most of the series’ villains come from one house in particular: Slytherin. Based on the rigorous questionnaire I completed on Pottermore, I discovered I, too, am a Slytherin.

This past summer, I went to The Wizarding World of Harry Potter in Orlando, FL to immerse myself in the whimsy and magic of J.K. Rowling's world. Let me start by saying that if you’re a Harry Potter fan, the theme park is definitely worth a visit. The attention to detail is incredible. However, I have a bone to pick. I went to this theme park eager and willing to spend money on paraphernalia that would let me proudly represent my house. . .but I couldn’t find a single shirt that I liked. I went into every shop multiple times and was astounded (and disappointed) at the lack of Slytherin-branded items. Gryffindors, on the other hand, had an expansive array of shirts, blankets, and cardigans to choose from.

Let my disappointment serve as a perfect example of why segmentation is so important. Without a useful segmentation, you can miss out on extremely valuable customers, and you lose the chance to learn how to market to different groups of target customers with different needs.

As is the case with many brands, it’s possible Hogwarts’ houses aren’t just separated by character values, but also by consumer values and shopping habits. Maybe Slytherins are more price sensitive (though the Malfoys would demonstrate otherwise) or perhaps they don’t like to advertise that they’re cunning individuals (because that would make it a bit harder to be cunning). It’s also possible that Slytherins only make up a very small percentage of Harry Potter fans (we are special, after all), which would justify the lack of money and space Universal spent on Slytherin merchandise. Of course, it’s also possible that the opposite of all of this is true. . .but it’s more than the Sorting Hat will be able to tell you.

I did end up buying a patch with my house crest, and I let J.K. Rowling know that it’s time for Slytherins to get the respect we deserve. She has yet to respond.

Kirsten Clark is a Marketing Associate at CMB. Even though she’s a Slytherin, she closely identifies with Hermione Granger. In fact, in true Hermione fashion, she was once limited to asking only one question per day in elementary school.

The Sorting Hat might not be able to help you with segmentation, but we can. 

Learn About Our Approach to Segmentation

Tags: Kirsten Clark, Segmentation, branding, Harry Potter

Dear Dr. Jay: Data Integration

Dear Dr. Jay,

How can I explain the value of data integration to my CMO and other non-research folks?

- Jeff B. 


 


Hi Jeff,

Years ago, at a former employer that will remain unnamed, we used to entertain ourselves by playing Buzzword Bingo in meetings. We’d create Bingo cards with 30 or so words that management liked to use (“actionable,” for instance). You’d be surprised how fast you could fill a card. If you have attended a conference in the past few years, you know we as market researchers have plenty of new words to play with. Think: big data, integrated data, passive data collection, etc. What do all these new buzzwords really mean to the research community? It boils down to this: we potentially have more data to analyze, and the data might come from multiple sources.

If you only collect primary survey data, then you typically only worry about sample reliability, measurement error, construct validity, and non-response bias. However, with multiple sources of data, we need to worry about all of that plus level of aggregation, impact of missing data, and the accuracy of the data. When we get a database of information to append to survey data, we typically don’t question the contents of that file. . .but maybe we should.

A client recently sent me a file with more than 100,000 records (ding ding, “big data”). Included in the file were survey data from a number of ad hoc studies conducted over the past two years as well as customer behavioral data (ding ding, “passive data”). And, it was all in one file (ding ding, “integrated data”). BINGO!

I was excited to get this file for a couple of reasons. One, I love to play with really big data sets, and two, I was able to start playing right away. Most of the time, clients send me a bunch of files, and I have to do the integration/merging myself. Because this file was already integrated, I didn’t need to worry about having unique and matching record identifiers in each file.
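
When the files do arrive separately, the merge itself is manageable as long as each source carries a unique, matching record identifier. Here is a minimal sketch in Python/pandas; the file names and the customer_id column are hypothetical:

```python
import pandas as pd

# Hypothetical file and column names -- your identifiers will differ.
survey = pd.read_csv("survey_responses.csv")      # one row per respondent
behavior = pd.read_csv("customer_behavior.csv")   # one row per account

# An inner join keeps only records present in both sources;
# switch to how="left" to keep every survey respondent and accept
# missing behavioral fields instead.
integrated = survey.merge(behavior, on="customer_id", how="inner")

print(f"{len(integrated)} of {len(survey)} survey records matched an account")
```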

Why would a client have already integrated these data? Well, if you can add variables to your database and append attitudinal measures, you can improve the value of the modeling you can do. For example, let’s say that I have a Dunkin’ Donuts (DD) rewards card, and every weekday, I stop by a DD close to my office and pick up a large coffee and an apple fritter. I’ve been doing this for quite some time, so the database modelers feel fairly confident that they can compute my lifetime value from this pattern of transactions. However, if the coffee was cold, the fritter was stale, and the server was rude during my most recent transaction, I might decide that McDonald’s coffee is a suitable substitute and stop visiting my local DD store in favor of McDonald’s. How many days without a transaction will it take the DD algorithm to decide that my lifetime value is now $0.00? If we had the ability to append customer experience survey data to the transaction database, maybe the model could be improved to more quickly adapt. Maybe even after 5 days without a purchase, it might send a coupon in an attempt to lure me back, but I digress.
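
To make the digression concrete, here is a toy rule of the kind that appending a satisfaction score makes possible. Nothing here reflects any actual Dunkin' Donuts system; the thresholds and field names are invented for illustration:

```python
from datetime import date

def churn_risk(last_purchase: date, last_satisfaction: int, today: date) -> str:
    """Toy churn flag: transaction recency alone vs. recency plus appended CX data.

    last_satisfaction is a hypothetical 1-10 rating appended from a
    customer-experience survey; the thresholds are illustrative only.
    """
    days_quiet = (today - last_purchase).days
    if days_quiet >= 30:
        return "assume lapsed"            # a recency-only model reacts this late
    if days_quiet >= 5 and last_satisfaction <= 4:
        return "send win-back coupon"     # appended survey data lets us react sooner
    return "no action"

print(churn_risk(date(2015, 7, 1), last_satisfaction=3, today=date(2015, 7, 8)))
# -> "send win-back coupon"
```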

Earlier, I suggested that maybe we should question the contents of the database. When the client sent me the file of 100,000 records, I’m pretty sure that was most (if not all) of the records that had both survey and behavioral measures. Considering the client has millions of account holders, that’s actually a sparse amount of data. Here’s another thing to consider: how well do the two data sources line up in time? Even if 100% of my customer records included overall satisfaction with my company, these data may not be as useful as you might think. For example, overall satisfaction in 2010 and behavior in 2015 may not produce a good model. What if some of the behavioral measures were missing values? If a customer recently signed up for an account, then his/her 90-day behavioral data elements won’t get populated for some time. This means that I would need to either remove these respondents from my file or build unique models for new customers.
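
One practical way to handle those not-yet-populated 90-day fields is to split the file by tenure before modeling, rather than letting partially populated records distort a single model. A hedged pandas sketch, with hypothetical column names:

```python
import pandas as pd

df = pd.read_csv("integrated_file.csv", parse_dates=["account_open_date"])

# Customers with fewer than 90 days of tenure cannot yet have 90-day
# behavioral fields populated; model them separately instead of treating
# those blanks as genuinely missing data.
tenure_days = (pd.Timestamp("2015-08-01") - df["account_open_date"]).dt.days
established = df[tenure_days >= 90]
new_customers = df[tenure_days < 90]

print(len(established), "records for the main model,",
      len(new_customers), "routed to a new-customer model")
```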

The good news is that there is almost always some value to be gained in doing these sorts of analyses. As long as we’re cognizant of the quality of our data, we should be safe in applying the insights.

Got a burning market research question?

Email Us! OR  Submit Anonymously!

Dr. Jay Weiner is CMB’s senior methodologist and VP of Advanced Analytics. Jay earned his Ph.D. in Marketing/Research from the University of Texas at Arlington and regularly publishes and presents on topics, including conjoint, choice, and pricing.

Tags: CMB, chadwick martin bailey, analytics, Big Data, Jay Weiner, Dear Dr. Jay, data integration, passive data

Brands Get in a Frenzy Over Shark Week

By Athena Rodriguez

Summer brings many joys—BBQs, the beach, and one of my favorite holidays. . .I’m referring, of course, to Shark Week. For over 25 years, the Discovery Channel has loaded as much shark-related content as possible into a 7-day period, including TV programming, online content, and social media frenzies by both the network and other “official” (and non-official) partners.

While some of these partnerships are no-brainers (e.g., Oceana, National Aquarium, and Sea Save Foundation), other less obvious partners, such as Dunkin’ Donuts, Cold Stone Creamery, and Southwest Airlines, must get creative with their marketing to connect their brands to “the most wonderful week of the year.” Southwest, for example, offered flyers the chance to watch new content via a special Shark Week channel and to enter a sweepstakes for a chance to swim with sharks. Both Cold Stone Creamery and Dunkin’ Donuts debuted special treats (“Shark Week Frenzy”—blue ice cream with gummy sharks—and a lifesaver donut, respectively).


But it didn’t stop there—brands on social media found ways to tie in products to Shark Week in every way possible. Just take a look at these posts from Claire’s, Salesforce, and Red Bull.


So, what’s in it for these brands? Why go out of their way to connect themselves to something like Shark Week, which is seemingly unrelated to their services and products? It’s as simple as the concept of brand associations. Since brand associations work to form deeper bonds with customers, brands are often on the lookout for opportunities that will boost their standing with customers. Shark Week attracts millions of viewers each night, and since it’s one of the few true television events that remains, it presents the perfect opportunity for brands to engage with customers in a way they don’t often get to do. Furthermore, it demonstrates that these brands are in tune with what their customers like and what’s happening in the pop culture world. And, judging by the number of interactions brands received from consumers, I’d say it worked.

If you missed the fun of Shark Week last month (the horror!) or just want more, don’t worry—Shweekend is just around the corner (August 29th), and I’ll be eager to see what brands come up with this time. . .

Athena Rodriguez is a Project Consultant at CMB, and she is a certified fin fanatic. 

Speaking of social media, are you following us on Twitter? If not, get in on the fun! 

Follow Us @cmbinfo!

Tags: customer engagement, branding, social media, television, Athena Rodriguez, brand engagement, Shark Week

When Only a #Selfie Stands Between You and Those New Shoes

By Stephanie Kimball

The next time you opt to skip the lines at the mall and do some online shopping from your couch, you may still have to show your face. . .sort of. MasterCard is experimenting with a new program that will require you to hold up your phone and snap a selfie to confirm a purchase.

MasterCard will be piloting the new app with 500 customers who will pay for items simply by looking at their phones and blinking once to take a selfie. The blink is another feature that ensures security by preventing someone from simply showing the app a picture of your face in an attempt to make a purchase.

As we all know, passwords are easily forgotten or even stolen. So, MasterCard is capitalizing on biometric technology like facial recognition and fingerprint scanning to help its customers be more secure and efficient. While security remains a top barrier to mobile wallet usage, concern about security is diminishing among non-users. In addition to snapping a selfie, the MasterCard app also gives users the option to use a fingerprint scan. Worried that your fingerprints and glamour shots will be spread across the web? MasterCard doesn't actually get a picture of your face or finger. All fingerprint scans create a code that stays on your phone, and the facial scan maps out your face, converts it to 0s and 1s, and securely transmits it to MasterCard.

According to our recent Consumer Pulse Report, The Mobile Wallet – Today and Tomorrow, 2015 marks the year when mobile payments will take off. Familiarity and usage have doubled since 2013—15% have used a mobile wallet in the past 6 months and an additional 22% are likely to adopt in the coming 6 months. Familiarity and comfort with online payments have translated into high awareness and satisfaction for a number of providers, and MasterCard wants a slice of that pie. Among mobile wallet users, over a quarter would switch merchants based on mobile payment capabilities.


Clearly the mobile wallet revolution is well underway, but the winning providers are far from decided, and MasterCard is taking huge leaps to see how far it can push the available technology. If MasterCard can successfully test and roll out these new features and deliver a product that its customers are comfortable using, it can capture some of the mobile wallet share from other brands like Apple Pay and PayPal.

So what’s next? Ajay Bhalla, President of Enterprise Safety and Security at MasterCard, is also experimenting with voice recognition, so you would only need to speak to approve a purchase. And don’t forget about wearables! While still in the early stages of adoption, wearables have the potential to drive mobile wallet use—particularly at the point of sale—which is why MasterCard is working with a Canadian firm, Nymi, to develop technology that will approve transactions by recognizing your heartbeat.

Since technology is constantly adapting and evolving, the options for mobile payments are limitless. We've heard the drumbeat of the mobile wallet revolution for years, but will 2015 be the turning point? All signs point to yes.

Want to learn more about our recent Consumer Pulse Report, The Mobile Wallet – Today and Tomorrow? Watch our webinar!

Watch Here!

Stephanie is CMB’s Senior Marketing Manager. She owns a selfie stick and isn’t afraid to use it. Follow her on Twitter: @SKBalls

Tags: wearables, Shopping, mobile wallet, Stephanie Kimball, mobile, security, selfie

A Lesson in Loyalty: Will J. Crew Get a Clue?

by Hilary O'Haire

If you follow news in the fashion world, you may have read about recent setbacks at preppy retailer J.Crew. Following another disappointing quarter of earnings, the company announced corporate lay-offs and changes at the helm of their women’s clothing design strategy. Although J.Crew has been quick to take action, its poor performance goes beyond declining sales and disappointed customers. Even customers most loyal to the brand are shouting their frustrations in the social media streets (see: “Dear J.Crew, What Happened to Us? We Used to Be So Close”).

How could the direction of a company—known for its devout customer base—take such a dramatic turn? Although off-the-mark designing is partially to blame, many are frustrated with the poor construction and quality of the clothing. As a loyal customer, I have relied on J.Crew for items that are basic closet staples and distinctly on trend. Like others, however, I have been disenchanted by their new lines—my $40 t-shirt is stretched out after one wear and a hole has appeared near the seams. This is not the outcome one would expect when paying that much for a basic t-shirt. Sarah Halzack summed up the issue well in her Washington Post article on the topic—“J.Crew is learning the hard way that in an era when e-commerce has presented women with ever-greater shopping choices, customer loyalty is hard to win and incredibly easy to lose.”

That’s a point J.Crew and other retailers need to take seriously. It’s certainly true for me. Receiving poorly crafted items from a higher-priced brand such as J.Crew creates a sharp disconnect. After experiencing this, I’m more likely to purchase from one of many cheaper brands (e.g., H&M or ASOS). Most shoppers I know feel the same way. In facing this challenge, J.Crew needs to re-examine its core strengths. What positive attributes drove customers to advocate for the brand in the first place? Is it quality (as in my experience) or is it design? Is it something else? Although the world of fashion is very forward-thinking (fashion-forward!), the case of J.Crew is a good reminder for brands to consistently monitor and deliver on the core aspects that first led to success.

Hilary O’Haire is a Project Manager on the FIH/RT team. Having worked for J.Crew back in college, she is particularly hopeful the brand will make a comeback!  

Tags: Retail, Brand Loyalty, branding, loyalty, Hilary O'Haire, brand strategy

Dear Dr. Jay: Bayesian Networks

Hello Dr. Jay,

I enjoyed your recent post on predictive analytics that mentioned Bayesian Networks.

Could you explain Bayesian Networks in the context of survey research? I believe a Bayes Net says something about probability distribution for a given data set, but I am curious about how we can use Bayesian Networks to prioritize drivers, e.g. drivers of NPS or drivers of a customer satisfaction metric.

-Al

Dear Al,

Driver modeling is an interesting challenge. There are 2 possible reasons why folks do driver modeling. The first is to prioritize a set of attributes that a company might address to improve a key metric (like NPS). In this case, a simple importance ranking is all you need. The second reason is to determine the incremental change in your dependent variable (DV) as you improve any given independent variable by X. In this case, we’re looking for a set of coefficients that can be used to predict the dependent variable.

Why do I distinguish between these two things? Much of our customer experience and brand ratings work is confounded by multi-collinearity. What often happens in driver modeling is that 2 attributes that are highly correlated with each other might end up with 2 very different scores—one highly positive and the other 0, or worse yet, negative. In the case of getting a model to accurately predict the DV, I really don’t care about the magnitude of the coefficient or even the sign. I just need a robust equation to predict the value. In practice, though, pure prediction is seldom the goal: most clients would want these highly correlated attributes to yield the same importance score.

So, if we’re not interested in an equation to predict our DV but do want importances, Bayes Nets can be a useful tool. They produce a variety of useful outputs; mutual information and Node Force are two of them. Mutual information is essentially the reduction in uncertainty about one variable given what we know about the value of another. We can think of Node Force as a correlation between any 2 items in the network: the more certain the relationship (higher correlation), the greater the Node Force.
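
For a sense of what that "reduction in uncertainty" looks like in practice, here is a minimal sketch using scikit-learn's mutual_info_score on two made-up 5-point survey items; a dedicated Bayes Net package would also give you network structure and Node Force, which this sketch does not attempt:

```python
import numpy as np
from sklearn.metrics import mutual_info_score

# Hypothetical 5-point ratings from the same ten respondents.
overall_sat = np.array([5, 4, 4, 3, 5, 2, 1, 4, 5, 3])
ease_of_use = np.array([5, 4, 5, 3, 4, 2, 1, 4, 5, 2])
price_value = np.array([3, 1, 5, 2, 4, 5, 3, 1, 2, 4])

# Higher mutual information means knowing one item tells you more
# about the other; the ratings are treated as categorical labels.
print("sat vs. ease :", mutual_info_score(overall_sat, ease_of_use))
print("sat vs. price:", mutual_info_score(overall_sat, price_value))
```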

One thing that is relatively unique to Bayes Nets is the ability to see whether attributes are directly connected to your key measure or whether their influence flows through another attribute. This information is often useful in understanding possible changes to other measures in the network. So, if the main goal is to help your client understand the structure in your data and which items are most important, Bayes Nets are quite useful.

Got a burning research question? You can send your questions to DearDrJay@cmbinfo.com or submit anonymously here.

Dr. Jay Weiner is CMB’s senior methodologist and VP of Advanced Analytics. Jay earned his Ph.D. in Marketing/Research from the University of Texas at Arlington and regularly publishes and presents on topics, including conjoint, choice, and pricing.

Tags: CMB, chadwick martin bailey, Advanced Analytics, Jay Weiner, Dear Dr. Jay, Bayes Net

The Power of Kaleidoscope Thinking

By Anne Bailey Berman

I can’t count the number of presentations and lectures I’ve attended throughout my professional career. While many have contained grains of useful insight, few have remained as relevant as one I attended by Harvard professor Rosabeth Moss Kanter. In that presentation, she argued that we should practice “kaleidoscope thinking.” I’ve always loved that idea—"look at all of your assets, move them around, and see if they create new opportunities." While Kanter was talking about marketing, I’d argue that today those of us in the information and insights business must practice this type of thinking more than ever.

To me, kaleidoscope thinking describes how we should approach information to reveal insights that are useful for our clients. Regardless of the volume and sources of information (e.g., characteristics, behaviors, beliefs, satisfaction, intention, and experiences), much of what we are trying to do is understand the patterns that will influence behaviors. In our information world, we call this analysis.

The sheer vastness of available data can be paralyzing or—worse—lead to catastrophic decision-making. We need to put the right information in our “kaleidoscopes” and view the data and decisions in different ways. By thoughtfully turning the barrel, we can see all the different decision paths until we uncover those that are best for increasing opportunity and decreasing risk. It is critical that we develop the skills to see and understand the most useful patterns and insights—not necessarily the solutions that first appear. This is what provides the most beautiful (read: useful) image in the kaleidoscope. 

Anne is the President of Chadwick Martin Bailey and a collector of kaleidoscopes. This summer, she can be found lecturing on storytelling in the insights industry.  

Watch our recent webinar to hear the results of our self-funded Consumer Pulse study on the future of the mobile wallet. 

Watch Here!

Tags: CMB, chadwick martin bailey, Anne Bailey Berman, insights, Kaleidoscope Thinking, decision making

Embracing Mobile Market Research

By Brian Jones

Who are the mobile consumers?

Let’s get this straight: I am not addicted to my smartphone. Unlike so many of my fellow train commuters who stare zombie-eyed into their small screens, I am not immersed in a personal relationship with pixels. I have an e-Reader for that. But, my smartphone IS my lifeline.

I’ve come to depend exclusively on my phone to keep me on-time and on-schedule, to entertain me (when not using my e-Reader), to stay in touch with family and friends, and to keep up-to-date with my work email. It’s my primary source for directions, weather, news, photography, messaging, banking, and a regular source for payment, shopping, and ticketing/reservations. I haven’t purchased a PC in nearly a decade, and I don’t have a landline. I also use my smartphone to take market research questionnaires, and I am far from alone. 

Data around smartphone usage aligns with my personal experience. In a recent CMB online study of U.S. consumers, optimized for mobile devices, 1 in 6 Millennials completed the questionnaire on a smartphone. Other studies report similar results. This illustrates the representativeness issue: exclude smartphone respondents and you risk under-representing younger consumers. Major panel vendors are seeing over half of Millennials join their panels via a mobile device.


How do we adapt?

Much has been hypothesized about the future of market research under the new paradigm of mobile commerce, big data, and cloud services. New technologies and industry convergence (not just mobile) have brought sweeping changes in consumer behaviors, and market researchers must adapt.

A key component of successful adaptation will be greater integration of primary market research with other data streams. The promise of passive or observational data is captivating, but it is largely still in the formative stages. (For more on passive data, check out our recent webinar.) We still need and will likely always need active “please tell me” research. The shift from phone to online data collection has quickly given way to an equally urgent shift to mobile data collection (or at least device-agnostic interviewing). Our industry has lagged behind because the consumer experience has become so personalized and because the trust/value equation for tapping into those experiences is challenging. Tackling mobile market research with tactical solutions is a necessary step in this transition.

What should we do about it?  

  1. Understand your current audience. Researchers need to determine how important mobile data collection is to the business decision and decide how to treat mobile respondents. You can have all respondents use a mobile device, have some use a mobile device, or have mobile device respondents excluded. There are criteria and considerations for each of these, and there are also considerations for the expected mix of feature phones, smartphones, tablets, and PCs. The audience will determine the source of sample and representation that must be factored into the study design. Ultimately, this has a huge impact on the validity and reliability of the data. Respondent invitations need to include any limitations for devices not suitable for a particular survey. (A rough device-classification sketch follows this list.)
  2. Design for mobile. If mobile participation is important, researchers should use a mobile-first questionnaire design. Mobile-optimized or mobile-friendly surveys typically need to be shorter in length, use concise language, avoid complex grids and answering mechanisms, and have fewer answer options, so they can be supported on a small screen and keep respondents focused on the activity. In some cases, questionnaire modularization or data stitching can be used to help adhere to mobile design standards.
  3. Test for mobile. All questions, images, etc. need to display properly on a variety of screen sizes and within the bandwidth capacity of the devices that are being used. Android and iOS device accommodation covers most users. If app-based surveys are being used, researchers need to ensure that the latest versions can be downloaded and are bug-free.
  4. Apply data protection and privacy standards. Mobile market research comes with a unique set of conditions and challenges that impact how information is collected, protected, and secured. Research quality and ethical guidelines specific to mobile market research have been published by CASRO, ESOMAR, the MMRA (Mobile Marketing Research Association), and others.
  5. Implement Mobile Qualitative. The barriers are lower, and researchers can leverage the unique capabilities of mobile devices quite effectively with qualitative research. Most importantly, willing participants are mobile, which makes in-the-moment research possible. Mobile qualitative is also a great gateway to explore what’s possible for mobile quantitative studies. See my colleague Anne Hooper’s blog for more on the future of qualitative methodologies.
  6. Promote Research-on-Research. Experts need to conduct and publish additional research-on-research studies that advance understanding of how to treat mobile respondents and utilize passive data, location tracking, and other capabilities that mobile devices provide. We also need stronger evidence of what works and what doesn’t work in execution of multi-mode and mobile-only studies across different demographics, in B2B studies, and within different countries.
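
As noted in the first point above, one practical way to decide how to treat mobile respondents is to classify the device from the User-Agent string your survey platform records. The sketch below is a rough, hypothetical illustration in Python; a real project would lean on a maintained device-detection library rather than a hand-rolled regex:

```python
import re

def classify_device(user_agent: str) -> str:
    """Very rough device classification from a User-Agent string."""
    ua = user_agent.lower()
    if "ipad" in ua or "tablet" in ua:
        return "tablet"
    if re.search(r"iphone|android.*mobile|windows phone", ua):
        return "smartphone"
    return "desktop"

# Example: route a smartphone respondent to the mobile-optimized questionnaire.
ua = "Mozilla/5.0 (iPhone; CPU iPhone OS 8_4 like Mac OS X) AppleWebKit/600.1.4"
if classify_device(ua) == "smartphone":
    print("Route to the mobile-optimized questionnaire")
else:
    print("Route to the standard questionnaire")
```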

But perhaps the most important thing to remember is that this is just a start. Market researchers and other insight professionals must evolve from data providers to become integrated strategic partners—harnessing technology (not just mobile) and industry expertise to focus on decision-making, risk reduction, and growth.

Brian is a Senior Project Manager for Chadwick Martin Bailey, the photographer of the image in this post, and an 82 percenter—he is one of the 82% of mobile phone owners whose phone is with them always or most of the time. 

Watch our recent webinar that discusses the results of our self-funded Consumer Pulse study on the future of the mobile wallet. 

Watch Here!

Tags: mobile research, mobile data collection, Brian Jones, mobile market research, mobile qualitative

Mobile Passive Behavioral Data: Opportunities and Pitfalls

By Chris Neal and Dr. Jay Weiner

As I wrote in last week’s post, we recently conducted an analysis of mobile wallet use in the U.S. To make it interesting, we used unlinked passive mobile behavioral data alongside survey-based data.

In this post, I’ve teamed up with Jay Weiner—our VP of Analytics who helped me torture (er, analyze) the mobile passive behavioral data for this Mobile Wallet study—to share some of the typical challenges you may face when working with passive mobile behavioral data (or any type of passive behavioral data for that matter) along with some best practices for dealing with these challenges:

  1. Not being able to link mobile usage to individuals. There’s a lot of online passive data out there (mobile app usage ratings, web usage ratings by device type, social media monitoring, etc.) that is at the aggregate level and cannot be reliably attributed to individuals. These data have value, to be sure, but aggregate traffic data can sometimes be very misleading. This is why—for the Mobile Wallet project CMB did—we sourced mobile app and mobile web usage from the Research Now mobile panel where it is possible to attribute mobile usage data to individuals (and have additional profiling information on these individuals).

    When you’re faced with aggregate-level data that isn’t linked to individuals, we recommend getting some sample from a mobile usage panel to better understand and calibrate your results, and/or doing parallel survey sampling so you can make more informed assumptions (this holds true for aggregate search trend data, website clickstream data, and social media listening tools).
  2. Unstacking the passive mobile behavioral data. Mobile behavioral data that is linked to individuals typically comes in “stacked” form, i.e., every consumer tracked has many different records: one for each active mobile app or mobile website session. Analyzing this data in its raw form is very useful for understanding overall mobile usage trends. What these stacked behavioral data files do not tell you, however, is the reach or incidence (e.g., how many people or the percentage of an addressable market) of any given mobile app/website. It also doesn’t tell you the mobile session frequency or duration characteristics of different consumer types nor does it allow you to profile types of people with different mobile behaviors.

    Unstacking a mobile behavioral data file can sometimes end up being a pretty big programming task, so we recommend deciding upfront exactly which apps/websites you want to “unstack.” A typical behavioral data file that tracks all smartphone usage during a given period of time can involve thousands of different apps and websites. . .and the resulting unstacked data file covering all of these could quickly become unwieldy. (A minimal unstacking sketch appears after this list.)
  3. Beware the outlier! Unstacking a mobile behavioral data file will reveal some pretty extreme outliers. We all know about outliers, right? In survey research, we scrub (or impute) open-ended quant responses that are three standard deviations higher than the mean response, we take out some records altogether if they claim to be planning to spend $6 billion on their next smartphone purchase, and so on. But outliers in passive data can be quite extreme. In reviewing the passive data for this particular project, I couldn’t help but recall that delightful Adobe Marketing ad in which a baby playing with his parents’ tablet repeatedly clicks the “buy” button for an encyclopedia company’s e-commerce site, setting off a global stock bubble.

    Here is a real-world example from our mobile wallet study that illustrates just how wide the range of mobile behaviors is, even across a limited group of consumers: the overall “average” time spent using a mobile wallet app was 162 minutes, but the median time was only 23 minutes. A very small (<1% of total) portion of high-usage individuals created an average that grossly inflated the true usage snapshot of the majority of users. One individual spent over 3,000 minutes using a mobile wallet app.
  4. Understand what is (and what is not) captured by a tracking platform. Different tracking tools do different things and produce different data to analyze. In general, it’s very difficult to capture detailed on-device usage for iOS devices. . .most platforms set up a proxy that instead captures and categorizes the IP addresses that the device transmits data to/from. In our Mobile Wallet study, as one example, our mobile behavioral data did not pick up any Apple Pay usage because it leverages NFC to conduct the transaction between the smartphone and the NFC terminal at the cash register (without any signal ever being transmitted out to the mobile web or to any external mobile app, which is how the platform captured mobile usage). There are a variety of tricks of the trade to account for these phenomena and to adjust your analysis so you can get close to a real comparison, but you need to understand what things aren’t picked up by passive metering in order to apply them correctly.
  5. Categorize apps and websites. Needless to say, there are many different mobile apps and websites that people use, and many of these do a variety of different things and are used for a variety of different purposes. Additionally, the distribution of usage across many niche apps and websites is often not useful for any meaningful insights work unless these are bundled up into broader categories.

    Some panel sources—including Research Now’s mobile panel—have existing mobile website and app categories, which are quite useful. For many custom projects, however, you’ll need to do the background research ahead of time in order to have meaningful categories to work with. Fishing expeditions are typically not a great analysis plan in any scenario, but they are out of the question if you’re going to dive into a big mobile usage data file.

    As you work to create meaningful categories for analysis, be open to adjusting and iterating. A certain group of specific apps might not yield the insight you were looking for. . .learn from the data you see during this process then try new groupings of apps and websites accordingly.
  6. Consider complementary survey sampling in parallel with behavioral analysis. During our iterative process of attempting to categorize mobile apps from reviewing passive mobile behavioral data, we were relieved to have a complementary survey sampling data set that helped us make some very educated guesses about how or why people were using different apps. For example, PayPal has a very successful mobile app that is widely used for a variety of reasons—peer-to-peer payments, e-commerce payments, and, increasingly, for “mobile wallet” payments at a physical point of sale. The passive behavioral data we had could not tell us what proportion of different users’ PayPal mobile app usage was for which purpose. That’s a problem because if we were relying on passive data alone to tell our clients what percent of smartphone users have used a mobile wallet to pay at a physical point of sale, we could come up with grossly inflated numbers. As an increasing number of mobile platforms add competing functionality (e.g., Facebook now has mobile payments functionality), this will remain a challenge.

    Passive tracking platforms will no doubt crack some of these challenges accurately, but some well-designed complementary survey sampling can go a long way towards helping you read the behavioral tea leaves with greater confidence. It can also reveal differences between actual vs. self-reported behavior that are valuable for businesses (e.g., a lot of people may say they really want a particular mobile functionality when asked directly, but if virtually no one is actually using existing apps that provide this functionality then perhaps your product roadmap can live without it for the next launch).
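
To tie points 2 and 3 together, here is a minimal pandas sketch, with made-up column and app names, of unstacking a session-level ("stacked") file into one row per person, and of why the median rather than the mean gives the truer usage snapshot:

```python
import pandas as pd

# Stacked form: one row per app/web session, many rows per person.
sessions = pd.DataFrame({
    "person_id": [1, 1, 2, 2, 2, 3, 4],
    "app":       ["WalletApp", "WalletApp", "WalletApp", "News", "WalletApp",
                  "WalletApp", "WalletApp"],
    "minutes":   [10, 13, 5, 40, 8, 3000, 12],   # person 3 is the extreme outlier
})

wallet = sessions[sessions["app"] == "WalletApp"]

# Unstacked form: one row per person, with session frequency and total duration.
per_person = wallet.groupby("person_id")["minutes"].agg(
    sessions="count", total_minutes="sum")

reach = per_person.shape[0]  # how many people used the app at all
print("Reach:", reach, "people")
print("Mean minutes:  ", per_person["total_minutes"].mean())    # inflated by the outlier
print("Median minutes:", per_person["total_minutes"].median())  # closer to typical usage
```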

Want to learn more about the future of Mobile Wallet? Join us for a webinar on August 19, and we’ll share our insights with you!

Chris Neal leads CMB’s Tech Practice. He judges every survey he takes and every website he visits by how it looks on his 4” smartphone screen, and has sworn off buying a larger “phablet” screen size because it wouldn’t fit well in his Hipster-compliant skinny jeans.

Dr. Jay heads up the analytics group at CMB. He opted for the 6 inch “phablet” and baggy jeans.  He does look stupid talking to a brick. He’s busy trying to compute which event has the higher probability: his kids texting him back or his kids completing an online questionnaire. Every month, he answers your burning market research questions in his column: Dear Dr. Jay. Got a question? Ask it here!

Want to learn more about combining survey data with passive mobile behavioral data? Watch our recent webinar with Research Now that discusses these findings in depth.

Watch Now!

Tags: mobile research, mobile wallet, chris neal, Jay Weiner, mobile data collection, Webinar, Research Now, mobile behavioral data, mobile

Upcoming Webinar: Passive Mobile Behavioral Data + Survey Data

By Chris Neal

The explosion of mobile web and mobile app usage presents enormous opportunities for consumer insights professionals to deepen their understanding of consumer behavior, particularly for “in the moment” findings and tracking consumers over time (when they aren’t actively participating in research. . .which is 99%+ of the time for most people). Insight nerds like us can’t ignore this burgeoning wealth of data—it is a potential goldmine.

But, working with passive mobile behavioral data brings with it plenty of challenges, too. It looks, smells, and feels very different from self-reported survey data:

  • It’s big. (I’m not gonna drop the “Big Data” buzzword in this blog post, but—yep—the typical consumer does indeed use their smartphone quite a bit.)
  • It’s messy.
  • We don’t have the luxury of carefully curating it in the same way we do with survey sampling. 

As we all find ourselves increasingly tasked with synthesizing insights and a cohesive “story” using multiple data sources, we’re finding that mobile usage and other data sources don’t always play nicely in the sandbox with survey data. Each has its strengths and weaknesses, and we need to understand them in order to use each source most effectively.

So, in our latest in a series of sadomasochistic self-funded thought leadership experiments, we decided to take on a challenge similar in nature to what more and more companies will ask insights departments to do: use passive mobile behavioral data alongside survey-based data for a single purpose. In this case, the topic was an analysis of the U.S. mobile wallet market opportunity. To make things extra fun, we ensured that the passive mobile behavioral data was completely unlinked to the survey data (i.e., we could not link the two data sources at the respondent level for deeper understanding or to do attitudinal + behavioral based modeling). There are situations where you’ll be given data that is linked, but currently—more often than not—you’ll be working with separate silos and asked to make hay.

During this experiment, a number of things became very clear to us, including:

  • the actual value that mobile behavioral data can bring to business engagements
  • how it could easily produce misleading results if you don’t properly analyze the data
  • how survey data and passive mobile behavioral data can complement one another greatly

Interested? I’ll be diving deep into these findings (and more) along with Roddy Knowles of Research Now in a webinar this Thursday, July 16th at 1pm ET (11am PT). Please join us by registering here.

Chris leads CMB’s Tech Practice. He enjoys spending time with his two kids and rock climbing.

Watch our recent webinar with Research Now to hear the results of our recent self-funded Consumer Pulse study that leveraged passive mobile behavioral data and survey data simultaneously to reveal insights into the current Mobile Wallet industry in the US.

Watch Now!

Tags: mobile research, mobile, chris neal, Research Now, mobile data collection, Webinar, mobile behavioral data