WELCOME TO OUR BLOG!

The posts here represent the opinions of CMB employees and guests—not necessarily the company as a whole. 


Passive Mobile Behavioral Data – Part Deux

Posted by Chris Neal

Wed, Aug 10, 2016

Over the past two years, we've embarked on a quest to help the insights industry get better at harnessing passive mobile behavioral data. In 2015, we partnered with Research Now for an analysis of mobile wallet usage, using unlinked passive and survey-based data. This year, we teamed up with Research Now once again for research-on-research directly linking actual mobile traffic and app data to consumers’ self-reported online shopper journey behavior.

We asked over 1,000 shoppers, across a variety of Black Friday/Cyber Monday categories, a standard set of purchase journey survey questions immediately after the event, then again after 30 days, 60 days, and 90 days. We then compared their self-reported online and mobile behavior to the actual mobile app and website usage data from their smartphones. 

The results deepened our understanding of how best to use (and not use) each respective data source, and how combining both can help our clients get closer to the truth than they could using any single source of information.

Here are a few things to consider if you find yourself tasked with a purchase journey project that uses one or both of these data sources as fuel for insights and recommendations:

  1. Most people use multiple devices for a major purchase journey, and here’s why you should care:
    • Any device tracking platform (even one claiming a 360° view) is likely missing some online behavior relevant to a given shopper journey. In our study, we captured behavior from each respondent’s primary smartphone, but many of these consumers reported visiting websites we had no record of in our tracking data. Although they reported visiting these websites on their smartphones, it is likely that some of these visits happened on a personal computer, a tablet, a computer at work, etc.
  2. Not all mobile usage is related to the purchase journey you care about:
    • We saw cases of consumers whose behavioral data showed they’d visited big retail websites and mobile apps during the purchase journey but who did not report using these sites/apps as part of the journey we asked them about. This is a bigger problem with larger, more generalist mobile websites and apps (like Amazon, for this particular project, or like PayPal when we did the earlier Mobile Wallet study with a similar methodological exercise).
  3. Human recall ain’t perfect. We all know this, but it’s important to understand when and where it’s less perfect, and where it’s actually sufficient for our purposes. Using survey sampling to analyze behaviors can be enormously valuable in a lot of different situations, but understand the limitations, and recognize when you’re asking for more detail than somebody can accurately recall. Here are a few situations to consider:
    • Asking whether a given retailer, brand, or major web property figured into the purchase journey at all will give you pretty good survey data to work with. Smaller retailers, websites, and apps will get more misses and lapses of recall. But accurate recall is a proxy for influence: if you’re ultimately trying to figure out how best to influence a consumer’s purchase journey, self-reported recall of visits is a good proxy, whereas relying on behavioral data alone may inflate the apparent impact of smaller properties on the final purchase.
    • Asking people to remember whether they used the mobile app vs. the mobile website introduces more error into your data. Most websites are now mobile optimized and look and feel like mobile apps, or will automatically switch users to the native mobile app on their phone if possible.
      • In this particular project, we saw evidence of a 35-50% improvement in survey-behavior match rates if we did not require respondents to differentiate the mobile website from the mobile app for the same retailer.
  4. Does time-lapse matter? It depends.
    • For certain activities (e.g., making minor purchases in a grocery store, a TV viewing occasion), capturing in-the-moment feedback from consumers is critical for accuracy.
    • In other situations where the process is bigger, involves more research, or is more memorable in general (e.g., buying a car, having a wedding, or making a planned-for purchase based on a Black Friday or Cyber Monday deal): you can get away with asking people about it further out from the actual event.
      • In this particular project, we actually found no systematic evidence of recall deterioration when we ran the survey immediately after Black Friday/Cyber Monday vs. running it 30 days, 60 days, and 90 days after.

Working with passive mobile behavioral data (or any digital passive data) is challenging, no doubt. Trying to make hay by combining these data with primary research survey sampling, customer databases, transactional data, etc., can be even more challenging. But, like it or not, that’s where Insights is headed. We’ll continue to push the envelope in terms of best practices for navigating these types of engagements as Analytics teams, Insights departments, and Financial Planning and Strategy groups work together more seamlessly to provide senior executives with a “single version of the truth”—one which is more accurate than any previously siloed version.

Chris Neal leads CMB’s Tech Practice. He knows full well that data scientists and programmatic ad buying bots are analyzing his every click on every computing device and is perfectly OK with that as long as they serve up relevant ads. Nothing to hide!

Don't miss out on the latest research, insights and conference recaps. Subscribe to our monthly eZine.

Subscribe Here!

Topics: advanced analytics, mobile, passive data, integrated data

Say Goodbye to Your Mother’s Market Research

Posted by Matt Skobe

Wed, Dec 02, 2015

Is it time for the “traditional” market researcher to join the ranks of the milkman and the switchboard operator? The pressure to provide more actionable insights, more quickly, has never been so high. Add new competitors into the mix, and you have an industry feeling the pinch. At the same time, primary data collection has become substantially more difficult:

  • Response rates are decreasing as people become more and more inundated with email requests
  • Many among the younger crowd don’t check their email frequently, favoring social media and texting
  • Spam filters have become more effective, so potential respondents may not receive email invitations
  • The cell-phone-only population is becoming the norm—calls are easily avoided using voicemail, caller ID, call-blocking, and privacy managers
  • Traditional questionnaire methodologies don’t translate well to the mobile platform—it’s time to ditch large batteries of questions

It’s just harder to contact people and collect their opinions. The good news? There’s no shortage of researchable data. Quite the contrary: there’s more than ever. It’s just that market researchers are no longer the exclusive collectors—there’s a wealth of data collected internally by companies, as well as an increase in new secondary passive data generated by mobile use and social media. We’ll also soon be awash in the Internet of Things, in which nearly everything with an on/off switch is connected to everything else (e.g., a wearable device can unlock your door and turn on the lights as you enter). The possibilities are endless, and all this activity will generate enormous amounts of behavioral data.

Yet, as tantalizing as these new forms of data are, they’re not without their own challenges. One such challenge? Barriers to access. Businesses may share data they collect with researchers, and social media is generally public domain, but what about data generated by mobile use and the Internet of Things? How can researchers get their hands on this aggregated information? And once acquired, how do you align dissimilar data for analysis? You can read about some of our cutting-edge research on mobile passive behavioral data here.

We also face challenges in striking the proper balance between sharing information and protecting personal privacy. However, people routinely trade personal information online when seeking product discounts and for the benefit of personalizing applications. So, how and what’s shared, in part, depends on what consumers gain. It’s reasonable to give up some privacy for meaningful rewards, right? There are now health insurance discounts based on shopping habits and information collected by health monitoring wearables. Auto insurance companies are already doing something similar in offering discounts based on devices that monitor driving behavior.

We are entering an era of real-time analysis capabilities. The kicker is that with real-time analysis comes the potential for real-time actionable insights to better serve our clients’ needs.

So, what’s today’s market researcher to do? Evolve. To avoid marginalization, market researchers need to continue to understand client issues and cultivate insights in regard to consumer behavior. To do so effectively in this new world, they need to embrace new and emerging analytical tools and effectively mine data from multiple disparate sources, bringing together the best of data science and knowledge curation to consult and partner with clients.

So, can we say goodbye to “traditional” market research? Yes, indeed. The market research landscape is constantly evolving, and the insights industry needs to evolve with it.

Matt Skobe is a Data Manager at CMB with keen interests in marketing research and mobile technology. When Matt reaches his screen time quota for the day he heads to Lynn Woods for gnarcore mountain biking.    

Topics: data collection, mobile, consumer insights, marketing science, internet of things, data integration, passive data

Dear Dr. Jay: Data Integration

Posted by Jay Weiner, PhD

Wed, Aug 26, 2015

Dear Dr. Jay,

How can I explain the value of data integration to my CMO and other non-research folks?

- Jeff B. 


 


Hi Jeff,

Years ago, at a former employer that will remain unnamed, we used to entertain ourselves by playing Buzzword Bingo in meetings. We’d create Bingo cards with 30 or so words that management liked to use (“actionable,” for instance). You’d be surprised how fast you could fill a card. If you’ve attended a conference in the past few years, you know we as market researchers have plenty of new words to play with. Think: big data, integrated data, passive data collection, etc. What do all these new buzzwords really mean to the research community? It boils down to this: we potentially have more data to analyze, and the data might come from multiple sources.

If you only collect primary survey data, then you typically only worry about sample reliability, measurement error, construct validity, and non-response bias. However, with multiple sources of data, we need to worry about all of that plus level of aggregation, impact of missing data, and the accuracy of the data. When we typically get a database of information to append to survey data, we often don’t question the contents of that file. . . but maybe we should.

A client recently sent me a file with more than 100,000 records (ding ding, “big data”). Included in the file were survey data from a number of ad hoc studies conducted over the past two years as well as customer behavioral data (ding ding, “passive data”). And, it was all in one file (ding ding, “integrated data”). BINGO!

I was excited to get this file for a couple of reasons. One, I love to play with really big data sets, and two, I was able to start playing right away. Most of the time, clients send me a bunch of files, and I have to do the integration/merging myself. Because this file was already integrated, I didn’t need to worry about having unique and matching record identifiers in each file.

Why would a client have already integrated these data? Well, if you can add variables to your database and append attitudinal measures, you can improve the value of the modeling you can do. For example, let’s say that I have a Dunkin’ Donuts (DD) rewards card, and every weekday, I stop by a DD close to my office and pick up a large coffee and an apple fritter. I’ve been doing this for quite some time, so the database modelers feel fairly confident that they can compute my lifetime value from this pattern of transactions. However, if the coffee was cold, the fritter was stale, and the server was rude during my most recent transaction, I might decide that McDonald’s coffee is a suitable substitute and stop visiting my local DD store in favor of McDonald’s. How many days without a transaction will it take the DD algorithm to decide that my lifetime value is now $0.00? If we had the ability to append customer experience survey data to the transaction database, maybe the model could be improved to adapt more quickly. Maybe after just 5 days without a purchase, it might send a coupon in an attempt to lure me back. But I digress.

Earlier, I suggested that maybe we should question the contents of the database. When the client sent me the file of 100,000 records, I’m pretty sure that was most (if not all) of the records that had both survey and behavioral measures. Considering the client has millions of account holders, that’s actually a sparse amount of data. Here’s another thing to consider: how well do the two data sources line up in time? Even if 100% of my customer records included overall satisfaction with my company, these data may not be as useful as you might think. For example, overall satisfaction in 2010 and behavior in 2015 may not produce a good model. What if some of the behavioral measures were missing values? If a customer recently signed up for an account, then his/her 90-day behavioral data elements won’t get populated for some time. This means that I would need to either remove these respondents from my file or build unique models for new customers.
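To make the data-hygiene point concrete, here is a minimal Python sketch of screening an integrated file before modeling. The records, field names, and one-year staleness cutoff are all invented for illustration; a real project would tune these rules to the business question:

```python
from datetime import date

# Hypothetical integrated records (survey + behavioral) for three customers.
records = [
    {"id": 1, "csat": 9, "csat_date": date(2010, 6, 1),
     "behavior_date": date(2015, 6, 1), "spend_90d": 120.0},
    {"id": 2, "csat": 7, "csat_date": date(2015, 3, 1),
     "behavior_date": date(2015, 6, 1), "spend_90d": None},
    {"id": 3, "csat": 8, "csat_date": date(2015, 5, 1),
     "behavior_date": date(2015, 6, 1), "spend_90d": 80.0},
]

MAX_GAP_DAYS = 365  # assumption: a survey measure older than a year is too stale

usable, too_stale, incomplete = [], [], []
for r in records:
    if (r["behavior_date"] - r["csat_date"]).days > MAX_GAP_DAYS:
        too_stale.append(r)    # e.g., 2010 satisfaction paired with 2015 behavior
    elif r["spend_90d"] is None:
        incomplete.append(r)   # new customer: 90-day field not yet populated
    else:
        usable.append(r)

print(len(usable), len(too_stale), len(incomplete))  # 1 1 1
```

The incomplete records would either be removed or routed to a separate model for new customers, as described above.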

The good news is that there is almost always some value to be gained in doing these sorts of analyses. As long as we’re cognizant of the quality of our data, we should be safe in applying the insights.

Got a burning market research question?

Email us! OR  Submit anonymously!

Dr. Jay Weiner is CMB’s senior methodologist and VP of Advanced Analytics. Jay earned his Ph.D. in Marketing/Research from the University of Texas at Arlington and regularly publishes and presents on topics including conjoint, choice, and pricing.

Topics: advanced analytics, big data, Dear Dr. Jay, data integration, passive data

Mobile Passive Behavioral Data: Opportunities and Pitfalls

Posted by Chris Neal

Tue, Jul 21, 2015

By Chris Neal and Dr. Jay Weiner

As I wrote in last week’s post, we recently conducted an analysis of mobile wallet use in the U.S. To make it interesting, we used unlinked passive mobile behavioral data alongside survey-based data. In this post, I’ve teamed up with Jay Weiner—our VP of Analytics, who helped me analyze (some would say torture) the mobile passive behavioral data for this Mobile Wallet study—to share some of the typical challenges you may face when working with passive mobile behavioral data (or any type of passive behavioral data, for that matter), along with some best practices for dealing with these challenges:

  1. Not being able to link mobile usage to individuals. There’s a lot of online passive data out there (mobile app usage ratings, web usage ratings by device type, social media monitoring, etc.) that exists only at the aggregate level and cannot be reliably attributed to individuals. These data have value, to be sure, but aggregate traffic data can sometimes be very misleading. This is why—for the Mobile Wallet project CMB did—we sourced mobile app and mobile web usage from the Research Now mobile panel, where it is possible to attribute mobile usage data to individuals (and to have additional profiling information on those individuals). 

    When you’re faced with aggregate-level data that isn’t linked to individuals, we recommend getting some sample from a mobile usage panel in order to better understand and calibrate your results, and/or running parallel survey sampling so you can make more informed assumptions (this holds true for aggregate search trend data, website clickstream data, and social media listening tools).
  2. Unstacking the passive mobile behavioral data. Mobile behavioral data that is linked to individuals typically comes in “stacked” form, i.e., every consumer tracked has many different records: one for each active mobile app or mobile website session. Analyzing this data in its raw form is very useful for understanding overall mobile usage trends. What these stacked behavioral data files do not tell you, however, is the reach or incidence (e.g., how many people or the percentage of an addressable market) of any given mobile app/website. It also doesn’t tell you the mobile session frequency or duration characteristics of different consumer types, nor does it allow you to profile types of people with different mobile behaviors. 

    Unstacking a mobile behavioral data file can sometimes end up being a pretty big programming task, so we recommend deciding upfront exactly which apps/websites you want to “unstack.” A typical behavioral data file that tracks all smartphone usage during a given period of time can involve thousands of different apps and websites. . .and the resulting unstacked data file covering all of these could quickly become unwieldy.
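    As a rough illustration of what “unstacking” means in practice, here is a small Python sketch using made-up session records and app names. A real file would involve thousands of apps and a proper data pipeline, which is exactly why choosing the apps to unstack upfront matters:

```python
# Stacked behavioral data: one row per app session (hypothetical sample).
sessions = [
    {"panelist": "p1", "app": "retailer_a", "minutes": 5},
    {"panelist": "p1", "app": "retailer_a", "minutes": 12},
    {"panelist": "p1", "app": "wallet_x",   "minutes": 3},
    {"panelist": "p2", "app": "retailer_a", "minutes": 8},
    {"panelist": "p3", "app": "wallet_x",   "minutes": 20},
]

apps_to_unstack = {"retailer_a", "wallet_x"}  # decided upfront, not all apps
panelists = {s["panelist"] for s in sessions}

# Unstacked form: one row per panelist, one column per chosen app.
unstacked = {p: {app: 0 for app in apps_to_unstack} for p in panelists}
for s in sessions:
    if s["app"] in apps_to_unstack:
        unstacked[s["panelist"]][s["app"]] += s["minutes"]

# Reach/incidence: share of panelists with any usage of each app --
# a figure the stacked file cannot give you directly.
reach = {app: sum(1 for p in panelists if unstacked[p][app] > 0) / len(panelists)
         for app in apps_to_unstack}
print(reach)  # both apps reach 2 of 3 panelists in this toy sample
```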
  3. Beware the outlier! Unstacking a mobile behavioral data file will reveal some pretty extreme outliers. We all know about outliers, right? In survey research, we scrub (or impute) open-ended quant responses that are three standard deviations higher than the mean response, we take out some records altogether if they claim to be planning to spend $6 billion on their next smartphone purchase, and so on. But outliers in passive data can be quite extreme. In reviewing the passive data for this particular project, I couldn’t help but recall that delightful Adobe Marketing ad in which a baby playing with his parents’ tablet repeatedly clicks the “buy” button for an encyclopedia company’s e-commerce site, setting off a global stock bubble. 

    Here is a real-world example from our mobile wallet study that illustrates just how wide the range of mobile behaviors is, even across a limited group of consumers: the overall “average” time spent using a mobile wallet app was 162 minutes, but the median time was only 23 minutes. A very small (<1% of total) portion of high-usage individuals created an average that grossly inflated the true usage snapshot of the majority of users. One individual spent over 3,000 minutes using a mobile wallet app.
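    The mean-versus-median effect is easy to demonstrate. The numbers below are invented, but the pattern mirrors the 162-minute mean versus 23-minute median above:

```python
from statistics import mean, median

# Hypothetical minutes spent in a mobile wallet app across a small panel;
# one heavy user dwarfs everyone else.
minutes = [10, 15, 23, 25, 30, 40, 3000]

print(round(mean(minutes)))  # 449 -- grossly inflated by the single outlier
print(median(minutes))       # 25  -- much closer to the typical user
```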
  4. Understand what is (and what is not) captured by a tracking platform. Different tracking tools do different things and produce different data to analyze. In general, it’s very difficult to capture detailed on-device usage for iOS devices. . .most platforms set up a proxy that instead captures and categorizes the IP addresses that the device transmits data to/from. In our Mobile Wallet study, as one example, our mobile behavioral data did not pick up any Apple Pay usage, because Apple Pay leverages NFC to conduct the transaction between the smartphone and the NFC terminal at the cash register (without any signal ever being transmitted out to the mobile web or to any external mobile app, which is how the platform captured mobile usage). There are a variety of tricks of the trade to account for these phenomena and to adjust your analysis so you can get close to a real comparison, but you need to understand what isn’t picked up by passive metering in order to apply them correctly.
  5. Categorize apps and websites. Needless to say, there are many different mobile apps and websites that people use, and many of these do a variety of different things and are used for a variety of different purposes. Additionally, the distribution of usage across many niche apps and websites is often not useful for any meaningful insights work unless these are bundled up into broader categories. 

    Some panel sources—including Research Now’s mobile panel—have existing mobile website and app categories, which are quite useful. For many custom projects, however, you’ll need to do the background research ahead of time in order to have meaningful categories to work with. Fishing expeditions are typically not a great analysis plan in any scenario, but they are out of the question if you’re going to dive into a big mobile usage data file.

    As you work to create meaningful categories for analysis, be open to adjusting and iterating. A certain group of specific apps might not yield the insight you were looking for. . .learn from the data you see during this process then try new groupings of apps and websites accordingly.
  6. Consider complementary survey sampling in parallel with behavioral analysis. During our iterative process of attempting to categorize mobile apps from reviewing passive mobile behavioral data, we were relieved to have a complementary survey sampling data set that helped us make some very educated guesses about how or why people were using different apps. For example, PayPal has a very successful mobile app that is widely used for a variety of reasons—peer-to-peer payments, ecommerce payments, and, increasingly, for “mobile wallet” payments at a physical point of sale. The passive behavioral data we had could not tell us what proportion of different users’ PayPal mobile app usage was for which purpose. That’s a problem because if we were relying on passive data alone to tell our clients what percent of smartphone users have used a mobile wallet to pay at a physical point of sale, we could come up with grossly inflated numbers. As an increasing number of mobile platforms add competing functionality (e.g., Facebook now has mobile payments functionality), this will remain a challenge.

    Passive tracking platforms will no doubt crack some of these challenges accurately, but some well-designed complementary survey sampling can go a long way towards helping you read the behavioral tea leaves with greater confidence. It can also reveal differences between actual vs. self-reported behavior that are valuable for businesses (e.g., a lot of people may say they really want a particular mobile functionality when asked directly, but if virtually no one is actually using existing apps that provide this functionality then perhaps your product roadmap can live without it for the next launch).

Want to learn more about the future of Mobile Wallet? Join us for a webinar on August 19, and we’ll share our insights with you!

Chris Neal leads CMB’s Tech Practice. He judges every survey he takes and every website he visits by how it looks on his 4” smartphone screen, and has sworn off buying a larger “phablet” screen size because it wouldn’t fit well in his Hipster-compliant skinny jeans.

Dr. Jay heads up the analytics group at CMB. He opted for the 6 inch “phablet” and baggy jeans.  He does look stupid talking to a brick. He’s busy trying to compute which event has the higher probability: his kids texting him back or his kids completing an online questionnaire. Every month, he answers your burning market research questions in his column: Dear Dr. Jay. Got a question? Ask it here!

Want to learn more about combining survey data with passive mobile behavioral data? Watch our recent webinar with Research Now that discusses these findings in depth.

Watch Now!

Topics: advanced analytics, methodology, data collection, mobile, Dear Dr. Jay, webinar, passive data

Upcoming Webinar: Passive Mobile Behavioral Data + Survey Data

Posted by Chris Neal

Mon, Jul 13, 2015

The explosion of mobile web and mobile app usage presents enormous opportunities for consumer insights professionals to deepen their understanding of consumer behavior, particularly for “in the moment” findings and tracking consumers over time (when they aren’t actively participating in research. . .which is 99%+ of the time for most people). Insight nerds like us can’t ignore this burgeoning wealth of data—it is a potential goldmine. But working with passive mobile behavioral data brings plenty of challenges, too. It looks, smells, and feels very different from self-reported survey data:

  • It’s big. (I’m not gonna drop the “Big Data” buzzword in this blog post, but—yep—the typical consumer does indeed use their smartphone quite a bit.)
  • It’s messy.
  • We don’t have the luxury of carefully curating it in the same way we do with survey sampling. 

As we all find ourselves increasingly tasked with synthesizing insights and a cohesive “story” from multiple data sources, we’re finding that mobile usage and other data sources don’t always play nicely in the sandbox with survey data. Each has its strengths and weaknesses that we need to understand in order to use it most effectively. 

So, in our latest in a series of sadomasochistic self-funded thought leadership experiments, we decided to take on a challenge similar in nature to what more and more companies will ask insights departments to do: use passive mobile behavioral data alongside survey-based data for a single purpose. In this case, the topic was an analysis of the U.S. mobile wallet market opportunity. To make things extra fun, we ensured that the passive mobile behavioral data was completely unlinked to the survey data (i.e., we could not link the two data sources at the respondent level for deeper understanding or to do attitudinal + behavioral based modeling). There are situations where you’ll be given data that is linked, but currently—more often than not—you’ll be working with separate silos and asked to make hay.

During this experiment, a number of things became very clear to us, including:

  • the actual value that mobile behavioral data can bring to business engagements
  • how it could easily produce misleading results if you don’t properly analyze the data
  • how survey data and passive mobile behavioral data can complement one another greatly

Interested? I’ll be diving deep into these findings (and more) along with Roddy Knowles of Research Now in a webinar this Thursday, July 16th, at 1pm ET (11am PT). Please join us by registering here.

Chris leads CMB’s Tech Practice. He enjoys spending time with his two kids and rock climbing.

Watch our recent webinar with Research Now to hear the results of our recent self-funded Consumer Pulse study that leveraged passive mobile behavioral data and survey data simultaneously to reveal insights into the current Mobile Wallet industry in the US.

Watch Now!

Topics: advanced analytics, methodology, data collection, mobile, webinar, passive data, integrated data

Dear Dr. Jay: Predictive Analytics

Posted by Dr. Jay Weiner

Mon, Apr 27, 2015


Dear Dr. Jay, 

What’s hot in market research?

-Steve W., Chicago

 

Dear Steve, 

We’re two months into my column, and you’ve already asked one of my least favorite questions. But I will give you some credit—you’re not the only one asking such questions. In a recent discussion on LinkedIn, Ray Poynter asked folks to anticipate the key MR buzzwords for 2015. Top picks included “wearables” and “passive data.” While these are certainly topics worthy of conversation, I was surprised Predictive Analytics (and Big Data) didn’t get more hits from the MR community. My theory: even though the MR community has been modeling data for years, we often don’t have the luxury of getting all the data that might prove useful to the analysis. It’s often clients who are drowning in a sea of information—not researchers.

On another trending LinkedIn post, Edward Appleton asked whether “80% Insights Understanding” is increasingly "good enough.” Here’s another place where Predictive Analytics may provide answers. Simply put, Predictive Analytics lets us predict the future based on a set of known conditions. For example, if we were able to improve our order processing time from 48 hours to 24 hours, Predictive Analytics could tell us the impact that would have on our customer satisfaction ratings and repeat purchases. Another example using non-survey data is predicting concept success using GRP buying data.
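As a toy illustration of the order-processing example above (not CMB’s actual methodology), here is a simple least-squares trend fit in Python. All processing-time and satisfaction numbers are invented:

```python
from statistics import mean

# Invented data: order-processing time (hours) vs. satisfaction rating.
hours = [24, 36, 48, 60, 72]
rating = [9.1, 8.4, 7.8, 7.1, 6.5]

# Ordinary least-squares fit of rating on hours.
mx, my = mean(hours), mean(rating)
slope = (sum((x - mx) * (y - my) for x, y in zip(hours, rating))
         / sum((x - mx) ** 2 for x in hours))
intercept = my - slope * mx

# Predicted rating if processing time improves from 48 to 24 hours.
predicted_at_24h = intercept + slope * 24
print(round(predicted_at_24h, 1))  # 9.1
```

Real predictive-analytics work would of course use many independent variables and more robust tools, but the mechanics are the same: fit a model on known conditions, then predict the outcome under a changed condition.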


What do you need to perform this task?

  • We need a dependent variable we would like to predict. This could be loyalty, likelihood to recommend, likelihood to redeem an offer, etc.
  • We need a set of variables that we believe influences this measure (independent variables). These might be factors that are controlled by the company, market factors, and other environmental conditions.
  • Next, we need a data set that has all of this information. This could be data you already have in house, secondary data, data we help you collect, or some combination of these sources of data.
  • Once we have an idea of the data we have and the data we need, the challenge becomes aggregating the information into a single database for analysis. One key challenge in integrating information across disparate sources is figuring out how to create unique rows of data for use in model building. We may need a database wizard to help merge the multiple data sources we deem useful to modeling. This is probably the step in the process that requires the most time and effort. For example, we might have 20 years’ worth of concept measures and the GRP buys for each product launched. We can’t assign the GRPs for each concept to each respondent in the concept test; if we did, there wouldn’t be much variation in the data for a model. Instead, the observation level becomes the concept: we aggregate the individual-level responses for each concept and then append the GRP data. The challenge then becomes the number of observations in the data set we’re analyzing.
  • Lastly, we need a smart analyst armed with the right statistical tools. Two tools we find useful for predictive analytics are Bayesian networks and TreeNet. Both tools are useful for different types of attributes. More often than not, we find the data sets are composed of scale data, ordinal data, and categorical data. It’s important to choose a tool that is capable of working with this type of information.
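The aggregation step described above, collapsing respondent-level concept scores into one observation per concept and appending the GRP buys, can be sketched in a few lines of Python (all figures invented for illustration):

```python
from statistics import mean

# Hypothetical respondent-level concept-test scores, stacked across concepts.
responses = [
    {"concept": "A", "intent": 4}, {"concept": "A", "intent": 5},
    {"concept": "B", "intent": 2}, {"concept": "B", "intent": 3},
    {"concept": "B", "intent": 3},
]
grp_buys = {"A": 1200, "B": 450}  # one GRP figure per launched concept

# Aggregate to one observation per concept, then append the GRP data.
model_rows = []
for concept in sorted({r["concept"] for r in responses}):
    scores = [r["intent"] for r in responses if r["concept"] == concept]
    model_rows.append({"concept": concept,
                       "mean_intent": mean(scores),
                       "grps": grp_buys[concept]})

# The modeling data set now has only as many rows as concepts tested.
print(len(model_rows))  # 2
```

This makes the observation-count problem concrete: thousands of respondents collapse into a handful of concept-level rows for the model.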

The truth is, we’re always looking for the best (fastest, most accurate, useful, etc.) way to solve client challenges—whether they’re “new” or not. 

Got a burning research question? You can send your questions to DearDrJay@cmbinfo.com or submit anonymously here.

Dr. Jay Weiner is CMB’s senior methodologist and VP of Advanced Analytics. Jay earned his Ph.D. in Marketing/Research from the University of Texas at Arlington and regularly publishes and presents on topics including conjoint, choice, and pricing.

Topics: advanced analytics, big data, Dear Dr. Jay, passive data

Tablet Purchase Journey Relies Heavily on Mobile Web

Posted by Chris Neal

Thu, Oct 16, 2014

We all know the consumer purchase journey has changed dramatically since the “mobile web” explosion and continues to evolve rapidly. In order to understand the current state of this evolving journey, CMB surveyed 2,000 recent buyers of tablets in the U.S. We confirmed several things that we expected to see, but we also busted a few myths along the way: 

1. TRUE: “Online media and advertising are now essential to influence consumers.”

  • Reading about tablets online and seeing online advertisements are the top ways in which consumers learn about new brands or products. [Tweet this.]
  • Nearly everyone we surveyed does some type of research and evaluation online before buying—most commonly using online-only shopping sites (e.g., Amazon, eBay, etc.), general web searches, consumer electronics store websites, review websites (e.g., CNET, Engadget, etc.), or tablet manufacturer websites.

2. TRUE: “The mobile web is becoming more important in the consumer purchase journey.”

  • Over half of buyers use the mobile web during the research and evaluation phase, and nearly 40% of buyers do so as a part of the final purchase decision (although very few people actually purchase a tablet using a mobile device). [Tweet this.]

3. FALSE: “Mobile applications are becoming very important in the consumer purchase journey.”

  • Although the mobile web is now highly influential, very little purchase journey activity actually happens from within a mobile application per se. This could be because tablet purchasing isn’t something that happens frequently for most individual consumers (high-frequency activities lend themselves better to a dedicated app to expedite and track them). [Tweet this.]

4. FALSE: “Social Media is becoming very important in the consumer purchase journey.”

  • The purchase journey for tablets is indeed very “social” (i.e., word-of-mouth and consumer reviews are hugely influential), but precious little of this socialization actually happens on social media platforms in the case of U.S. tablet buyers. [Tweet this.]

5. FALSE: “The Brick and Mortar Retail Store is Dead.”

  • The rise of all things online does not spell the death of brick and mortar retail in the consumer electronics category. In-store experiences (including speaking with retail sales associates and doing hands-on demos of tablets) were among the top sources of influence during the research and evaluation phase, regardless of whether buyers ultimately purchased their tablet in a physical store. 
  • Next to ads, in-store experiences were the top source of awareness for new tablet brands and models. 41% of those who learned about new makes/models during the process did so inside of a physical retail store. [Tweet this.]
  • Half of all buyers surveyed actually bought their tablet in a physical retail store. [Tweet this.]

6. TRUE: The line between “online” and “offline” purchase journeys is becoming blurred.

  • Most people use both online and offline sources during their purchase journey, and they typically influence one another. People doing research online may discover that a tablet model they are interested in is on sale at a particular retailer. At the same time, something a retail sales associate recommends to a shopper in a store may spur an online search in order to read other consumer reviews and see where they can get the recommended model the cheapest and fastest. Smartphone-based activities from within a retail store are just as common as interacting with an actual salesperson face-to-face at this point. 

The mobile web is undoubtedly here to stay, and how consumers go about making different buying decisions will continue to evolve along with it. Here at CMB, we will continue to help companies and brands adapt to these shifts.

Download the full report. 

For more on our mobile stitching methodology, please see CMB's Chris Neal's webinar with Research Now: Watch the Webinar

Chris leads CMB’s Tech Practice. He enjoys spending time with his two kids and rock climbing.

Topics: technology research, mobile, path to purchase, advertising, Consumer Pulse, passive data, retail research, customer journey