Jeffrey Henning: 10 Tips for Mobile Diary Studies

Posted by Jeffrey Henning

Mon, Nov 25, 2013

Originally posted on Research Access

Earlier this month, Chris Neal of Chadwick Martin Bailey shared tips for running mobile diary studies with members of the New England chapter of the Marketing Research Association, based on lessons learned from a recent project. For the Council for Research Excellence (CRE), CMB studied mobile video usage to understand:

  • How much time is spent on mobile devices watching TV (professionally produced TV shows)?

  • Does this cannibalize TV set viewing?

  • What motivates consumers to watch on mobile?

  • How can mobile TV viewing be accurately tracked?

The research included a quantitative phase with two online surveys and mobile journaling, followed by a series of home ethnographies. The quant work included a screening survey, the mobile diary, and a final online survey.

  • The screening survey was Census balanced to estimate market size, with three groups recruited for comparison: those without mobile devices (smartphones or tablets), those with mobile devices who don’t watch TV on them, and those with mobile devices that they watch TV on. The total number of respondents was 5,886.

  • The mobile diary activity asked respondents to complete their journal 4 times a day for 7 days.

  • A final attitudinal survey was used to better understand motivations and behaviors associated with decisions about TV watching.

Along the way, CMB learned some valuable best practices for mobile diary studies, including tips for recruiting, incentives, design and analysis. The 10 key lessons learned:

  1. Mobile panels don’t work for low incidence – Take care when using mobile panels: given the small size of many mobile panels, you may have better luck recruiting through traditional online panels, as CMB did. For this study, the deciding factor was the comparatively low incidence of actual mobile TV watching.

  2. Over-recruit – You will lose many recruits to the journaling exercise when it comes time to download the mobile diary application. As a general rule, over-recruit by 100%: get twice the promises of participation that you need. Most dropout occurs after the screening and before the participant has recorded a single mobile diary entry. For many members of online survey panels, journaling is a new experience. The second biggest point of dropout was after recording 1 or 2 diary entries.

  3. Keep it short – To minimize this dropout, you have to keep the diary experience as short as possible: no more than 3 to 5 minutes long. The more times you ask participants to complete a diary each day, the greater the dropout rate.

  4. Think small screen – Make sure the survey is designed to provide a good experience on small screens: avoid grids and sum-allocation questions, and limit open-ended prompts and the use of images. Use vertical scales instead of horizontal scales. “Be wary of shiny new survey objects for smartphone survey-takers,” said Chris. Smartphone users had 5 times the dropout rate of tablet or laptop users in this study. Enable people to log on to their journal from whatever device they are using at the time, including their computer.

  5. Beware battery hogs – When evaluating smartphone apps, be wary of those that drain battery life by constantly logging GPS location. Check the app store reviews of the application.

  6. Keep consistent – Keep the diary questionnaire the same for every time block, to get respondents into the habit of answering it.

  7. Experiment with incentives to maximize participation – Tier incentives to motivate people to stick with the study and complete all time blocks. To earn the incentive for the CMB study, Chris said that respondents had to participate at least once a day for all 7 days, with additional incentives for every journal log entered (participants were reminded this didn’t have to involve actual TV watching, just filling out the log). In the end, 90% of journaling occasions were filled out.

  8. Remind via SMS and email – In-app notifications are not enough to prompt participation. Use email and text messages for each time block as well. Most respondents logged on within 2 hours of receiving a reminder.

  9. Use online surveys for detailed questions – Use the post-journaling survey to capture greater detail and to work around the limits of mobile surveys. You can then use these results to “slice and dice” the journal responses.

  10. Weight by occasions – Remember to weight the data file to total occasions, not total respondents. For missing data, leave it missing. Develop a plan detailing which occasion-based data you’re going to analyze and what respondent-level analysis you are going to do. You may need to create a separate occasion-level data file and a separate respondent-level data file, as sketched below.
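
To make tip 10 concrete, here is a minimal sketch in Python/pandas of keeping an occasion-level file and a respondent-level file separate and weighting by occasions. The file names, column names, and weighting scheme are hypothetical illustrations, not CMB’s actual setup.

```python
import pandas as pd

# Hypothetical diary export: one row per journal entry (occasion).
# Assumed columns: respondent_id, occasion_id, device, minutes_watched.
occasions = pd.read_csv("diary_occasions.csv")

# Hypothetical respondent file from the screening survey, including a
# census-balancing weight computed at the respondent level.
respondents = pd.read_csv("screener_respondents.csv")  # respondent_id, resp_weight

# Occasion-level file: attach the respondent weight to every occasion so
# totals reflect occasions, not people.
occ = occasions.merge(respondents[["respondent_id", "resp_weight"]],
                      on="respondent_id", how="left")

# Leave missing occasion data missing -- do not impute.
share_by_device = (occ.groupby("device")["resp_weight"].sum()
                   / occ["resp_weight"].sum())

# Respondent-level file: collapse occasions per person for cross-tabs
# against the final attitudinal survey.
per_person = (occ.groupby("respondent_id")
                 .agg(total_minutes=("minutes_watched", "sum"),
                      n_occasions=("occasion_id", "count")))

print(share_by_device)
print(per_person.head())
```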

Properly done, mobile diary studies provide an amazing depth of data. For this project, CMB captured almost 400,000 viewing occasions (mobile and non-mobile TV watching), for over 5 million occasion-based records!

Interested in the actual survey results? CRE has published the results presentation, “TV Untethered: Following the Mobile Path of TV Content” [PDF].

Jeffrey Henning, PRC is president of Researchscape International, a market research firm providing custom surveys to small businesses. He is a Director at Large on the MRA Board of Directors; in 2012, he was the inaugural winner of the MRA’s Impact award. You can follow him on Twitter @jhenning.

Topics: Methodology, Qualitative Research, Mobile, Research Design

Deconstructing the Customer Experience: What's in Your Toolkit?

Posted by Jennifer von Briesen

Wed, Sep 25, 2013

More and more companies are focusing on trying to better understand and improve their customers’ experiences. Some want to become more customer-centric. Some see this as an effective path to competitive differentiation. Others, challenging traditional assumptions (e.g., Experience Co-creation, originated by my former boss, Francis Gouillart, and his colleagues Prof. Venkat Ramaswamy and the late C.K. Prahalad), are applying new strategic thinking about value creation. Decision-makers in these firms are starting to recognize that every single interaction and experience a customer has with the company (and its ecosystem partners) may either build or destroy customer value and loyalty over time.

While companies traditionally measure customer value based on revenues, share of wallet, cost to serve, retention, NPS, profitability, lifetime value etc., we now have more and better tools for deconstructing the customer experience and understanding the components driving customer and company interaction value at the activity/experience level. To really understand the value drivers in the customer experience, firms need to simultaneously look holistically, go deep in a few key focus areas, and use a multi-method approach.

Here’s an arsenal of tools and methods that are great to have in your toolkit for building customer experience insight:

Qualitative tools

  • Journey mapping methods and tools

  • In-the-moment, customer activity-based tools

    • Voice capture exercises (either using mobile phones or landlines) where customers can call in and answer a set of questions related to whatever they are doing in the moment.

    • Use mobile devices and online platforms to upload visuals, audio, and/or video in response to questions (e.g., “As you are filling out your enrollment paperwork, take a moment to record a quick, less-than-10-second video sharing your thoughts on what you are experiencing”).

  • Customer diaries

    • E.g., use mobile devices as a visual diary or to complete a number of activities

  • Observation tools

    • Live or virtual tools (e.g., watch/videotape in-person or online experiences, either live or after the fact)

    • On-site customer visits: companies I’ve worked with often like to join customers doing activities in their own environments and situational contexts. Beyond basic observation, company employees can dialogue with customers during the activities/experiences to gain immediate feedback and richer understanding.

  • Interviews and qualitative surveys

  • Online discussion boards

  • Online or in-person focus groups

Quantitative tools

  • Quantitative surveys/research tools (too many to list in a blog post)

  • Internal tracking tools

    • Online tools for tracking behavior metrics (e.g., landing pages/clicks/page views/time on pages, etc.) for key interactions/experience stages. This enables ongoing data-mining, research and analysis.

    • Service/support data analysis (e.g., analyze call center data on inbound calls and online support queries by interaction type, stage, and period to surface FAQs and recurring problems; see the sketch after this list).
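
As a minimal illustration of this kind of internal tracking analysis, here is a hedged sketch in Python/pandas of mining a support-contact log for the FAQs and pain points concentrated in particular experience stages. The file and column names are made up for the example.

```python
import pandas as pd

# Hypothetical support-contact export: one row per inbound call or online
# query. Assumed columns: contact_id, contact_date, channel,
# experience_stage, reason, handle_minutes.
contacts = pd.read_csv("support_contacts.csv", parse_dates=["contact_date"])

# Contact volume by experience stage and reason -- a quick way to spot the
# FAQs and pain points concentrated in a given stage of the journey.
volume = (contacts.groupby(["experience_stage", "reason"])
                  .size()
                  .sort_values(ascending=False)
                  .head(15))

# Weekly trend for one stage, e.g. to check whether a site fix actually
# reduced billing-related contacts.
billing_trend = (contacts[contacts["experience_stage"] == "billing"]
                 .set_index("contact_date")
                 .resample("W")["contact_id"]
                 .count())

print(volume)
print(billing_trend.tail())
```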

What tools are you using to better understand and improve the customer experience? What tools are in your toolkit?  Are you taking advantage of all the new tools available?

Jennifer is a Director at South Street Strategy Group. She recently received the 2013 “Member of the Year” award from the Association for Strategic Planning (ASP), the preeminent professional association for those engaged in strategic thinking, planning and action.

Topics: South Street Strategy Group, Strategic Consulting, Methodology, Qualitative Research, Quantitative Research, Customer Experience & Loyalty

The Main Ingredient: The Market Research in your Pantry

Posted by Dana Vaille

Wed, Apr 17, 2013

The New York Times article, The Extraordinary Science of Addictive Junk Food, caught my attention by linking the hot topic of “junk food” and the obesity epidemic to the market research that supports it. This is where my inner geek gets really excited—it’s not often that two things I’m passionate about (nutrition and market research) are so perfectly linked.

Ever wonder why it’s virtually impossible to eat just one Dorito? Or how they got the recipe for Dr. Pepper just right?  How do you think they engineered Cheetos into the perfect cheesy, crunchy, melt-in-your-mouth treat?  As any market researcher knows, it goes far beyond basic trial and error—this isn’t like asking a few people if they like your new brownie mix. But even for someone who lives and breathes market research, the article was incredibly illuminating. Companies put a lot of time and effort into developing foods that will both taste good and be profitable; they consider the basic principles of supply and demand, and couple that with food science and a lot of market research to fill our needs and desires.

Because I know very little about food science, I won’t talk about the “bliss point” (the levels of sugar, fat and salt in processed food that keep us craving more), though I find it fascinating. Instead, here are some examples of how market research plays a role in determining what foods end up on the shelves of your local grocery store and in millions of pantries around the world.

Qualitative research identifies a need
In the article, we learn how Oscar Mayer conducted focus groups composed of working moms to learn not what they were feeding their kids for lunch, but how they felt about the challenges and expectations they had in providing meals for their children. Oscar Mayer learned that these moms were strapped for time and felt pressured to provide a full lunch while also getting themselves out the door and off to the office. The qualitative research revealed some of the tremendous sociological, psychological, and economic pressures faced by moms. The company’s solution was Lunchables—a hugely successful product, with sales of $218 million in the first year.

Conjoint analysis configures a new product
Campbell’s Soup used a statistical method called conjoint analysis to determine the optimal product configuration(s) for their soups. We use conjoint analysis quite often ourselves because it lets us measure and evaluate the relative importance of individual product characteristics and determine the right combinations of those characteristics. Campbell’s used conjoint the same way—to optimize the perfect combinations of ingredients, texture, taste, mouth feel, and so on, to (literally) engineer the ideal food.
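
For readers who haven’t run one, here is a minimal, generic sketch of ratings-based conjoint: dummy-code the attribute levels, fit a linear model, and read the coefficients as part-worth utilities. The attributes, levels, and ratings below are made up for illustration; this is not Campbell’s actual model.

```python
import numpy as np
import pandas as pd

# Made-up ratings-based conjoint data: each row is one soup concept a
# respondent rated on a 1-10 purchase-intent scale.
profiles = pd.DataFrame({
    "texture":   ["smooth", "chunky", "chunky", "smooth", "chunky", "smooth"],
    "saltiness": ["low", "low", "high", "high", "low", "high"],
    "rating":    [5, 8, 7, 4, 9, 3],
})

# Dummy-code the attribute levels (one reference level dropped per
# attribute) and fit a linear model; each coefficient is the part-worth
# of that level relative to its reference level.
X = pd.get_dummies(profiles[["texture", "saltiness"]], drop_first=True).astype(float)
X.insert(0, "intercept", 1.0)
y = profiles["rating"].to_numpy(dtype=float)

coefs, *_ = np.linalg.lstsq(X.to_numpy(), y, rcond=None)
part_worths = pd.Series(coefs, index=X.columns)
print(part_worths)  # e.g. how much "smooth" lifts or hurts ratings vs. "chunky"
```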

Segmentation pinpoints a new target audience
Prego conducted segmentation research and found that there are three primary segments of spaghetti-sauce consumers: those who like their sauce plain, those who prefer it spicy, and those who like it extra-chunky; the key here is that when the research was conducted, there was no extra-chunky tomato sauce on the market! Prego was able to identify a huge segment of the market whose needs (for extra-chunky tomato sauce) were not being met; the result was a new Prego “extra chunky” sauce that dominated the market.
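
As a rough illustration of how preference segments can surface from rating data (not the methodology behind the original Prego work), here is a sketch using made-up ratings and k-means clustering:

```python
import numpy as np
from sklearn.cluster import KMeans

# Made-up preference ratings (1-10) for three sauce styles, one row per
# respondent: [plain, spicy, extra_chunky].
rng = np.random.default_rng(0)
ratings = np.vstack([
    rng.normal([8, 3, 3], 1.0, size=(50, 3)),   # plain-sauce fans
    rng.normal([3, 8, 4], 1.0, size=(50, 3)),   # spicy fans
    rng.normal([3, 4, 8], 1.0, size=(50, 3)),   # extra-chunky fans
]).clip(1, 10)

# Cluster respondents and inspect each segment's average ratings to see
# which style it favors -- and how large the segment is.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(ratings)
for k in range(3):
    members = ratings[km.labels_ == k]
    print(f"segment {k}: n={len(members)}, mean ratings={members.mean(axis=0).round(1)}")
```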

Food is more than just fuel, especially for those of us lucky enough to have plenty to eat… it’s about things like family, comfort, convenience and love.  And whether you won’t touch a GMO or want Mayor Bloomberg to leave your giant sodas alone, it’s important to know when you grab that bag of chips—the first ingredient is most likely a ton of market research.

Dana is Research Director at CMB. Her husband’s recent conversion to a vegan diet has her thinking about food science even more than usual, though she continues to enjoy cheese.

Check out our latest webinar, The 6 Secrets of Successful Segmentation; it's much healthier than Doritos, we promise.

Topics: Advanced Analytics, Qualitative Research, Market Strategy & Segmentation

How to Catch a Catfish: Secrets of a Qualitative Researcher

Posted by Anne Hooper

Tue, Mar 12, 2013


Those who know me understand that I am not afraid to admit I love reality TV.  Combine that love with an interest in pop culture (generally), and a passion for understanding what people do and WHY they do it, and you have a match made in heaven. So obviously Catfish—the MTV series —is right up my alley.

Talk of "Catfishing" seems to be everywhere these days, but for the uninitiated, I’ll give you the quick (Wikipedia) definition: “A Catfish is a person who creates fake profiles online and pretends to be someone they are not by using someone else’s pictures and information.”  Put simply:  Catfishing is a relationship built on deception.

So what does Catfishing have to do with online qual?

As a qualitative researcher, I have to build “relationships” with strangers all the time, both online and in-person. I can guarantee you that these relationships are genuine, authentic and honest—at least from my end. My ultimate goal is to better understand research participants as human beings—how they live, what they value, what makes them ‘tick’, etc. Most of the time, I truly feel that those I’m spending time with (both online and offline) are also being authentic and honest with me. Notice I said most of the time.

Though it doesn’t happen often, it IS possible to come across a phony (AKA “Catfish”) in an in-person setting.  There are some pretty savvy people out there who seem to know how to make their way into a focus group for some extra cash.  Thankfully it’s rare—and most of the time these folks get weeded out before they even enter the room.  Online qualitative research, on the other hand, is ripe for Catfish.  Unless we are conducting video web-based research, there aren’t any visual clues to help us validate identities.  Therefore, we can’t be 100% sure that the person we THINK we are talking to is really that person.

The good news is that as researchers, we can take measures to protect ourselves from these Catfish participants online—it just takes a little effort and creativity.  Here are a few methods I’ve used successfully in the past:  

  • Demographics:  If you have a participant who reports an annual income of $50K and claims to spend an average of $10K a year on vacation, you’ve got yourself a red flag.  Taking the time to cross-reference demographics with online responses can be extremely helpful in getting to the truth (see the sketch after this list).

  • Common sense:  Individual responses don’t stand alone, but pulled together they create a story.  At the end of the day you either have a story that makes sense or you don’t, and a story that doesn’t make sense is another red flag.  Just as one would do when moderating an in-person group, there are times when you must revisit what someone said earlier, and if necessary, request clarification.  (In the immortal words of Judge Judy: “If it doesn’t make sense, it’s not true.”) 

  • Consistency:  A lack of consistency can be another red flag.  If a participant says one thing but contradicts themselves sometime later, there might be a problem.  Here’s an example: in a recent “vacation” study, we had a participant who changed her travel dates a few times (not unusual).  She later confirmed purchasing a package (air, hotel, car) for a family of 5 one week prior to departure (somewhat fishy, especially for someone who was very price sensitive).  Her “confirmed” travel dates were the 25th through the 30th of the month—and when she hadn’t checked in during that window as requested, we reached out to her, only to find that she was “already home” on the 29th.  Suspicious?  Very.  This lack of consistency—along with several other red flags—confirmed our suspicions that she was not being truthful, and she was pulled from the study.  Again, to quote Judge Judy, “If you tell the truth, you don’t have to have a good memory.”

  • Engagement:  There are always going to be participants who choose to do the bare minimum in order to get their incentive.  However, a lack of engagement and openness—coupled with any additional red flags—requires some investigation.  Is the participant just taking the easy way out by answering questions in as few words as possible, or are they skipping key questions altogether?  Skipping key questions (e.g., “Tell us what you like best about product X”) could be a sign that they really don’t use product X after all.  Again, it’s important for the moderator to probe accordingly and if the probes go ignored … you guessed it … another red flag.
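
Several of these checks can be semi-automated before the moderator ever probes. Below is a minimal sketch in Python/pandas, with made-up column names and thresholds (illustrations, not rules), of flagging participants whose answers clash with their own demographics or timeline:

```python
import pandas as pd

# Hypothetical participant file joining screener demographics with study
# responses; column names and thresholds are made up for illustration.
p = pd.read_csv("participants.csv",
                parse_dates=["stated_return_date", "last_checkin_date"])

# Demographics: vacation spend above ~20% of reported income warrants a
# closer look (the threshold is a judgment call, not a rule).
p["flag_spend"] = p["reported_vacation_spend"] > 0.20 * p["annual_income"]

# Consistency: reporting being "already home" before the stated return
# date contradicts an earlier answer.
p["flag_timeline"] = p["last_checkin_date"] < p["stated_return_date"]

# Engagement: very short open-ended answers across the board can signal
# box-checking rather than genuine participation.
p["flag_low_effort"] = p["avg_open_end_words"] < 5

# Two or more red flags -> probe further before trusting (or keeping) the
# participant.
flag_cols = ["flag_spend", "flag_timeline", "flag_low_effort"]
suspects = p[p[flag_cols].sum(axis=1) >= 2]
print(suspects[["participant_id"] + flag_cols])
```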

With online research (and plenty of Catfish) here to stay, we need to continue to be vigilant in crossing our t’s and dotting our i’s.  I, for one, am ready to catch them … hook, line and sinker.

Anne is CMB’s Qualitative Research Director.  She enjoys travel and thanks to DVR, never misses an episode of Judge Judy. Anne especially loves being able to truly “connect” with her research participants—it’s in her Midwestern blood.   

Learn more about Anne and her Qualitative Research team here.

Topics: Qualitative Research, Television, Media & Entertainment Research

Compilation Scores: Look Under the Hood

Posted by Cathy Harrison

Wed, Aug 03, 2011

My kid is passionate about math, and based on every quantitative indication, he excels at it.  So you can imagine our surprise when he didn’t qualify for next year’s advanced math program. Apparently he barely missed the cut-off score: a compilation of two quantitative sources of data and one qualitative source.  Given this injustice, I dug into the school’s evaluation method (hold off your sympathy for the school administration just yet).

Undoubtedly, the best way to get a comprehensive view of a situation is to consider both quantitative and qualitative information from a variety of sources.  By using this multi-method approach, you are more likely to get an accurate view of the problem at hand and are better able to make an informed decision.  Sometimes it makes sense to combine data from different sources into a “score” or “index.”  This provides the decision-maker with a shorthand way of comparing something – a brand, a person, or how something changes over time.

These compilation scores or indices are widely used and can be quite useful, but their validity depends on the sources used and how they are combined.  In the case of the math evaluation, there were two quantitative sources and one qualitative source.  The quantitative sources were the results of a math test conducted by the school (CTP4) and a statewide standardized test (MCAS).  The qualitative source was the teacher’s observations of the child across ten variables, rated on a 3-point scale.  For the most part, I don’t have a problem with these data sources.  The problem was in the weighting of these scores.

I’m not suggesting that the quantitative data is totally bias-free, but at least the kids are evaluated on a level playing field: they either get the right answer or they don’t.  In the case of the teacher evaluation, many more biases can affect the score (such as the teacher’s preference for certain personality types, or for the kids of colleagues or teachers’ aides).  The qualitative component was given a 39% weight – equal to the CTP4 (“for balance”) and greater than the MCAS (weighted at 22%).  That puts a great deal of influence in the hands of one person.  In this case, it was enough to override the superior quantitative scores and disqualify my kid.
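
The arithmetic of such a composite is simple. Here is a minimal sketch using the weights described above (39% CTP4, 39% teacher rating, 22% MCAS); it assumes each source has first been rescaled to a common 0-100 range, which is my assumption, since the school’s actual normalization isn’t described. The made-up numbers show how a middling teacher rating can pull strong test scores below a cut-off.

```python
# Weights from the post; rescaling every source to a common 0-100 range
# first is an assumption -- the school's actual normalization isn't described.
WEIGHTS = {"ctp4": 0.39, "teacher": 0.39, "mcas": 0.22}

def composite(scores: dict) -> float:
    """Weighted compilation score from already-normalized 0-100 inputs."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# Made-up numbers: strong test scores can be pulled below a cut-off by a
# middling teacher rating, because 39% of the total rides on that single
# qualitative source.
print(composite({"ctp4": 95, "teacher": 60, "mcas": 92}))  # ~80.7
print(composite({"ctp4": 80, "teacher": 90, "mcas": 75}))  # ~82.8
```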

Before you think this is just the rant of a miffed parent with love blinders on, think of this evaluation process as if it were a corporate decision that had millions of dollars at stake.  Would you be comfortable with this evaluation system?

In my opinion, a fairer evaluation process would have been to qualify students based on the quantitative data (especially since there were two sources available) and then, for those on the “borderline,” to use the qualitative data to make a decision about qualification.  Qualitative data is rarely combined with quantitative data in an index.  Its purpose is to explore a topic before quantification or to bring “color” to the quantitative results.  As you can imagine, I have voiced this opinion to the school administration but am unlikely to be able to reverse the decision.

What’s the takeaway for you?  Be careful of how you create or evaluate indices or “scores.” They are only as good as what goes into them.

Posted by Cathy Harrison.  Cathy is a client services executive at CMB and has a passion for strategic market research, social media, and music.  You can follow Cathy on Twitter at @virtualMR     

 

Topics: Advanced Analytics, Methodology, Qualitative Research, Quantitative Research