10 Tips for Mobile Diary Studies

Posted by Jeffrey Henning

Mon, Nov 25, 2013

Originally posted on Research Access

Earlier this month, Chris Neal of Chadwick Martin Bailey shared tips for running mobile diary studies with members of the New England chapter of the Marketing Research Association, based on lessons learned from a recent project. For the Council for Research Excellence (CRE), CMB studied mobile video usage to understand:

  • How much time is spent on mobile devices watching TV (professionally produced TV shows)?

  • Does this cannibalize TV set viewing?

  • What motivates consumers to watch on mobile?

  • How can mobile TV viewing be accurately tracked?

The research included a quantitative phase with two online surveys and mobile journaling, followed by a series of home ethnographies. The quant work included a screening survey, the mobile diary, and a final online survey.

  • The screening survey was census-balanced to estimate market size, with three groups recruited for comparison: those without mobile devices (smartphones or tablets), those with mobile devices who don’t watch TV on them, and those with mobile devices who do watch TV on them. The total number of respondents was 5,886.

  • The mobile diary activity asked respondents to complete their journal 4 times a day for 7 days.

  • A final attitudinal survey was used to better understand motivations and behaviors associated with decisions about TV watching.

Along the way, CMB learned some valuable best practices for mobile diary studies, including tips for recruiting, incentives, design and analysis. The 10 key lessons learned:

  1. Mobile panels don’t work for low incidence – Take care when using mobile panels: given the small size of many mobile panels, you may have better luck recruiting through traditional online panels, as CMB did. For this study, the comparatively low incidence of actual mobile TV watching made a traditional online panel the practical choice.

  2. Overrecruit – You will lose many recruits to the journaling exercise when it comes time to download the mobile diary application. As a general rule, over-recruit by 100% – get twice the promises of participation that you need. Most dropout occurs after the screening and before the participant has recorded a single mobile diary entry; for many members of online survey panels, journaling is a new experience. The second biggest point of dropout was after recording 1 or 2 diary entries.

  3. Keep it short – To minimize this dropout, you have to keep the diary experience as short as possible: no more than 3 to 5 minutes long. The more times you ask participants to complete a diary each day, the greater the dropout rate.

  4. Think small screen – Make sure the survey is designed to provide a good experience on small screens: avoid grids and sum-allocation questions, and limit open-ended prompts and the use of images. Use vertical scales instead of horizontal scales. “Be wary of shiny new survey objects for smartphone survey-takers,” said Chris. Smartphone users had 5 times the dropout rate of tablet or laptop users in this study. Enable people to log on to their journal from whatever device they are using at the time, including their computer.

  5. Beware battery hogs – When evaluating smartphone apps, be wary of those that drain battery life by constantly logging GPS location. Check the app store reviews of the application.

  6. Keep consistent – Keep the diary questionnaire the same for every time block, to get respondents into the habit of answering it.

  7. Experiment with incentives to maximize participation – Tier incentives to motivate people to stick with the study and complete all time blocks. To earn the incentive for the CMB study, Chris said that respondents had to participate at least once a day for all 7 days, with additional incentives for every journal log entered (participants were reminded this didn’t have to involve actual TV watching, just filling out the log). In the end, 90% of journaling occasions were filled out.

  8. Remind via SMS and email – In-app notifications are not enough to prompt participation. Use email and text messages for each time block as well. Most respondents logged on within 2 hours of receiving a reminder.

  9. Use online surveys for detailed questions – Use the post-journaling survey to capture greater detail and to work around the limits of mobile surveys. You can then use these results to “slice and dice” the journal responses.

  10. Weight by occasions – Remember to weight the data file to total occasions, not total respondents. For missing data, leave it missing. Develop a plan detailing which occasion-based data you’re going to analyze and what respondent-level analysis you are going to do. You may need to create a separate occasion-level data file and a separate respondent-level data file.
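The occasion-level vs. respondent-level split described in tip 10 can be sketched roughly as follows. This is a hypothetical illustration with invented field names and values, not CMB's actual data pipeline:

```python
# Hypothetical diary export: one row per journal entry (viewing occasion).
# Field names and values are invented for illustration.
diary = [
    {"respondent": "r1", "occasion": 1, "device": "tablet", "minutes": 30},
    {"respondent": "r1", "occasion": 2, "device": "tv", "minutes": 60},
    {"respondent": "r2", "occasion": 1, "device": "phone", "minutes": 15},
]

# Occasion-level file: one row per viewing occasion.
# Analyses on this file are weighted to total occasions, not people.
occasion_file = diary

# Respondent-level file: one row per person, aggregating their occasions.
respondent_file = {}
for row in diary:
    r = respondent_file.setdefault(row["respondent"],
                                   {"occasions": 0, "minutes": 0})
    r["occasions"] += 1
    r["minutes"] += row["minutes"]

print(respondent_file["r1"])  # {'occasions': 2, 'minutes': 90}
```

Keeping the two files separate makes it harder to accidentally run an occasion-weighted tabulation when a person-level one is intended, or vice versa.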

Properly done, mobile diary studies provide an amazing depth of data. For this project, CMB captured almost 400,000 viewing occasions (mobile and non-mobile TV watching), for over 5 million occasion-based records!

Interested in the actual survey results? CRE has published the results presentation, “TV Untethered: Following the Mobile Path of TV Content” [PDF].

Jeffrey Henning, PRC is president of Researchscape International, a market research firm providing custom surveys to small businesses. He is a Director at Large on the MRA Board of Directors; in 2012, he was the inaugural winner of the MRA’s Impact award. You can follow him on Twitter @jhenning.

Topics: Methodology, Qualitative Research, Mobile, Research Design

Taking Product Development to Infinity and Beyond

Posted by Athena Rodriguez

Tue, Nov 19, 2013

I recently came across an article focused on defunct exhibits at Disney parks. I’m a native Floridian, so I flipped through the accompanying slide show with fond memories. And there it was...my all-time childhood favorite—Horizons at Epcot Center. From the robot butler to the holographic telephone, Horizons revealed a future full of promise, excitement, and funky monotone jumpsuits.

It’s been 30 years, the future is now the present, and I don’t have a robot butler. Disappointing, yes, but on the other hand, we do have the Roomba, and I’d argue Apple’s FaceTime is likely better than a hologram. So I think we can agree many companies have made serious innovations in the last few decades—they’ve understood that incremental change means incremental growth, and they’ve pushed the limits. Although product development is critical for companies to compete and grow, it also carries high risks, because it represents a big investment in new and unfamiliar territory—it’s crucial to get it right.

While we aren’t all Imagineers, there are strategies for new product and service development that have proven successful in a rapidly changing market—these strategies form the basis of our Best Practices in New Product Development. Two of these Best Practices are below:

  1. Use advanced techniques that emulate real world trade-offs: In real life, people don’t evaluate the importance of individual features or attributes. They make choices between/among products. The more closely research emulates this process, the more accurate the findings will be. What people say they prefer, and what they actually choose, are often not the same thing. That’s why we use trade-off techniques (e.g., discrete choice) that let us derive the most important and relevant preferences as well as sophisticated data mining techniques that help us to create more accurate predictive models.

  2. Build flexibility into the research: If you’re using trade-off techniques, channel Walt Disney himself (“if we can dream it, we can do it”) by including features that fall outside of current capabilities. This lets you mimic the current market and simulate a future market where these features become available. So while you might not be ready to “do it,” if you’ve dreamed it, you can test it! That’s why, when appropriate, we build a user-friendly simulator. These simulators allow design decision-makers to run “what if” scenarios, providing additional insight when changes occur (e.g., a competitor responds with a new product, prices change, or the technology to realize your stretch features catches up with your dreams).

We can’t promise your product development research will live as long as Horizons (16 magic-filled years), but we can help ensure it’s useful for both the short and the longer term (at least until we all get our robot butlers). Check out the video below to learn how we help our clients make sure their new product development efforts are a success:

CMB New Product and Service Development from CMBinfo on Vimeo.

Athena is a Project Director at CMB; she looks awesome in a jumpsuit and is patiently waiting for her favorite Disney character, Donald Duck, to make a comeback.

 

Topics: Advanced Analytics, Product Development, Research Design, Growth & Innovation

Want to Lose Weight? Try a Tradeoff Exercise!

Posted by Nick Pangallo

Tue, Nov 05, 2013

A few weeks ago, I found myself seated at a trendy Mexican restaurant in Minneapolis, an eager participant at one of the more enjoyable business lunches I’ve encountered lately. As you might expect, the topic quickly turned from the vagaries of the marketing life to the weather, summer vacation stories, the gym, far-too-early holiday planning (I’m looking at you, Target), and then, unexpectedly, to dieting. It was there that I learned Jim Garrity, SVP and head of CMB’s Financial Services, Insurance & Healthcare Practice, was quite a fan of Weight Watchers, one of the few truly successful long-term dieting options out there – it earned Consumer Reports’ highest mark for nutrition analysis.

Jim had become a devotee of the Weight Watchers PointsPlus® plan, which, coming from someone I’d fancied a meat-and-potatoes man like myself, came as a bit of a surprise. But as Jim continued to discuss the program and what he enjoyed about it, I realized that PointsPlus® was nothing more than another example of CMB’s brand and product development bread-and-butter: the tradeoff exercise.

For those not familiar with how it works, PointsPlus® assigns a numeric point-value to each meal/snack/dessert/shake/whatever you can buy through the program, based on protein, carbohydrates, fat and fiber content. The system will then give you a daily points target, taking into account your height, weight, age and gender; read more about it here. Simply stay at or under your target, and…well, that’s it. 
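The budget mechanic described above boils down to a simple constraint check. Here is a minimal sketch; the foods, point values, and daily target are invented for illustration, not actual Weight Watchers PointsPlus® data:

```python
# Hypothetical points-budget check. All values below are made up for
# illustration; they are not actual Weight Watchers PointsPlus data.
DAILY_TARGET = 26

menu = {
    "bagel with cream cheese": 9,
    "fresh fruit": 0,
    "savory salad": 7,
    "slim sandwich": 8,
    "red velvet cake": 11,
}

def points_left(choices, target=DAILY_TARGET):
    """Return the points remaining after the chosen meals."""
    return target - sum(menu[c] for c in choices)

# The bagel at breakfast uses up points the cake would need at dinner:
print(points_left(["bagel with cream cheese", "savory salad"]))  # 10
```

Every choice spends against the same fixed budget, which is exactly the opportunity-cost structure that tradeoff research tries to recreate.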

This, as you might expect, presents our would-be dieter with a daily flurry of choices.  Do you have the bagel with cream cheese or the fresh fruit for breakfast? Go with a savory salad or a slim sandwich for lunch? And how do these choices affect what you have “left” at dinner?  Sorry to burst your bubble, eager reader, but if you want that red velvet cake, you’re going to have to pass on the cream cheese.

Built on the economic principle of opportunity cost (the idea that to buy a product or undertake an activity, that product/activity must necessarily replace something else you might have bought or engaged in), these sorts of tradeoffs are exactly what we seek to model when researchers help develop brands, products, messaging campaigns, or basically anything else with a give-and-take.  You can’t be the industry leader and try harder. If you want to offer premium customer service, you’re going to have to charge a bit more. Anyone who’s ever made a budget, whether on a ledger or www.Mint.com, understands that if you go to the concert, you might have to skip the movies this week.

Our job, then, is to master methodological techniques that replicate these real-world tradeoffs within a research setting. Almost all of my clients take advantage of Maximum-Difference Scaling, an exercise in which participants select which items/features/messages/etc. they like most and least, four options at a time. By forcing this tradeoff, we can accurately prioritize huge lists of information in short order, not only rank-ordering the items but also sizing the distance between them. Many also use allocations, which allow participants to assign different values to a set of given options, knowing there are only so many points to go around (often 100). My brand and product development engagements often utilize Discrete Choice (or Conjoint) Modeling, an extremely powerful form of tradeoff exercise in which participants must choose between holistic brand positionings or fully configured products. We can then deconstruct their decisions, analyze the tradeoffs, and find those pesky drivers of decision-making that are the foundation of marketing as we know it.
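To make the MaxDiff idea concrete, here is a sketch of the simple counting approach to scoring best/worst choices. The items and responses are invented for illustration, and production MaxDiff studies typically use hierarchical Bayes estimation rather than raw counts, but the counting version shows the intuition:

```python
from collections import defaultdict

# Each task shows four items; the participant picks one "best" and one
# "worst". These tasks and choices are invented for illustration.
tasks = [
    {"shown": ["price", "service", "speed", "brand"],
     "best": "service", "worst": "brand"},
    {"shown": ["price", "warranty", "service", "speed"],
     "best": "price", "worst": "speed"},
    {"shown": ["brand", "warranty", "price", "service"],
     "best": "service", "worst": "brand"},
]

# Count-based score: (times chosen best - times chosen worst) / times shown.
best = defaultdict(int)
worst = defaultdict(int)
shown = defaultdict(int)
for t in tasks:
    for item in t["shown"]:
        shown[item] += 1
    best[t["best"]] += 1
    worst[t["worst"]] += 1

scores = {item: (best[item] - worst[item]) / shown[item] for item in shown}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # ['service', 'price', 'warranty', 'speed', 'brand']
```

Because every "best" pick is simultaneously a rejection of three alternatives, even a handful of tasks yields a rank order with meaningful distances between items.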

Think about it this way: if you’re still with me, you’ve traded off the time you could’ve spent watching E!, playing Candy Crush, or checking Facebook in favor of a little lesson in economics and research. So what’ll it be, then–the pasta or the seafood?

Nick is a Project Manager within CMB’s Financial Services, Insurance & Healthcare Practice, as well as a poker-playing behavioral economics and game theory nerd. You can follow him on Twitter @NAPangallo. He would always choose the pasta.

Topics: Advanced Analytics, Research Design

Strangers with Influence: The Mysterious People Behind Online Reviews

Posted by Tara Lasker

Tue, Oct 22, 2013

Like a lot of people, I rely on user reviews for virtually all of my purchase decisions. For example, in the last week I’ve read reviews on:

  • Yelp for restaurants (and even which dishes to order from said restaurants)

  • Overstock.com to give me a better idea on the quality/color of a mirror I was about to purchase

  • Airbnb to decide whether the location and appearance of a vacation rental was all it was cracked up to be

While I’ve come to depend on these reviews—I’d be hesitant to buy something that didn’t have some kind of rating—this mountain of data can be paralyzing. My husband and I are notoriously slow decision makers, and the cartoon below (from the always spot-on xkcd.com) pretty much sums up how relying on user reviews has lengthened our purchase process. At one point we found ourselves wondering: who are these people, anyway?

What kind of person has the time to deconstruct and rate every detail of a lamp? I mean, you can find user reviews on anything—it’s remarkable.  Can these people even be trusted? And whose businesses are they hurting, or helping in the process?

[xkcd cartoon: online reviews]

As a market researcher, I think a lot about these people and the information they’re providing.  Sampling is such a critical part of research design but it’s often overlooked by data users. Here are some questions we should be asking about the people we entrust our hard-earned money to:

  • Representativeness: This is a pretty simple concept; we need to ask: does this data represent the population it’s intended to? Are Yelpers different than the average person? Do they care about the same things as me?

  • Authenticity: Are the responses real or are people gaming the system? If authenticity weren’t a real concern before, the recent government crackdown on consumer review fraud should make us wonder who is actually writing some of these reviews. Even if nothing illegal is going on, it makes sense to ask whether there are incentives or disincentives for a sincere evaluation.

  • Disposition: Are we only hearing from those who need a platform to vent, or conversely, from those who are thrilled? Will reviews skew negative because consumers are much more likely to share a negative experience than a positive one? It’s an important question, and for its part, Yelp shares the breakdown of reviews by number of stars. In the chart below, we find more positive reviews on Yelp than negative ones.

[Chart: Yelp ratings distribution by number of stars]

User reviews have changed the path to purchase for many industries. Some (e.g., health care) have been slower to adopt, but even the stragglers will have no choice but to accept that these strangers are influencing their brand perceptions and purchase likelihood. It’s worth our time to ask just who these influencers are.

Tara is Research Director at CMB; she’s also an avid user review reader who doesn’t have the time to write her own reviews.

Topics: Consumer Insights, Research Design

The Segmentation Research Crisis

Posted by Rich Schreuer

Mon, Mar 25, 2013

A lot of time and money is wasted on segmentation studies. Here’s why, and what to do about it.

Last November I partnered with a banking client for a conference presentation on a segmentation study we conducted to help guide his organization toward greater customer-centricity. The study provided market insight to help the bank transition from a product-based to a customer-centric organization by identifying need-, attitude-, and behavior-based segments. The results helped them develop value propositions customized for each segment, addressing products, messaging, and customer experiences.

The study was a great success. It’s used by our client in many ways, and was “actionable” in every sense of the word. But beyond that success, the project got me thinking about why segmentation studies are often not acted upon. In my 25 years of market research experience, I have found that segmentation studies are often deemed “interesting” but not “actionable.” And it’s often not a function of the quality of the research. Poorly executed studies are never actionable, but even well-executed studies may not be. (And, by the way, when a client finds a study “interesting,” for me, that’s code for “I don’t want to hurt your feelings, but you failed.”)

Back to the conference presentation…at the start of our talk I asked the audience how many had worked on well-executed segmentation studies (either as a supplier or a client) that were ultimately deemed “not useful.” I knew the situation was bad, but I was shocked when about four-fifths of the audience raised their hands. So, here are a number of things we at CMB have learned over the years about how to make segmentation actionable. Note that they don’t have anything to do with the mechanics of execution.

  1. It’s the process, stupid (apologies to James Carville)
    While any good market research firm can write a decent questionnaire, structure a sound sample, and use state-of-the-art analysis techniques, it’s the process that usually determines the project’s fate. Simply soliciting client input, executing the study, and presenting results is not enough. The study will be a success if the process makes information users partners: capturing their definition of success, upcoming decisions, and hypotheses, and then including these partners in selecting the final segmentation solution.

  2. Articulate and agree on business decisions
    Our experience shows that while many research consumers are good at listing information needs, few actually identify the decisions they intend to make with this information. Most seem to believe that if they have enough information, they will find insights to help make as-yet-undetermined decisions. This problem is especially acute in segmentation studies, because different types of decisions (product development vs. messaging vs. targeting) require different types of questions and measurement techniques.

  3. Many options, but no silver bullet
    Over many years and many studies I have never had an engagement where one segmentation solution worked equally well for all decisions.  For example, solutions that are stronger for targeting will typically be weaker for messaging.   At CMB, our process involves examining and rejecting up to 50 solutions, and then presenting four or five really good ones to our client. This is where management art blends with science.  By understanding competing decisions at the start, we make rational tradeoffs to select the best solution.

  4. Real work begins when the study ends
    A segmentation study is typically treated as a discrete project with a beginning and an end date. If the final presentation is well received, the supplier and client may have celebratory drinks or dinner; if not, the supplier quietly slinks off to the airport. But the reality is that no matter how positive the initial reaction, segmentation studies can die on the vine if planning for implementation doesn’t occur before the final presentation. In successful segmentation engagements, the final presentation is not “the end,” but rather “the end of the beginning.” Segmentation often requires managers to think differently about the market, and this can’t occur without a process to support and reinforce this way of thinking. We typically use a set of cross-functional workshops in which participants work with the information and participate in exercises to develop plans with input and support from the group.

If you can internalize and act on these principles, you’ll never have to slink back to the airport after a final presentation.

Rich is Senior VP and Chief Methodologist at CMB; he also knows the secrets of raising chickens and the lost art of ski ballet.

You didn’t think we’d give away all our secrets did you? Join us this Wednesday the 27th at noon to learn more secrets to successful segmentation.

Topics: Business Decisions, Research Design, Webinar, Market Strategy & Segmentation