Parents at the Tumble Gym: A Segmentation Analysis

Posted by Jessica Chavez

Wed, Jun 25, 2014

On Saturdays, when the weather is not fit for the playground, I take my toddler to a tumble gym where he can run, climb, and kick balls around with other kids his age. Parents must accompany kids in the play area, as this is a free-form play center without employed staff (other than the front desk attendant). As a market researcher and a perpetual observer of the human condition, I’ve noticed that these parents fall into three distinct groups: the super-involved group, the middle-of-the-road group, and the barely-involved group.

The super-involved parents take full control of their child’s playtime. They grab the ball and throw it to their kid. They build forts. They chase the kids around. They completely guide their child’s playtime by initiating all the activities. “Over here, Jimmy! Let’s build a ramp and climb up! Now let’s build a fort! Ooh, let’s grab that ball and kick it!”

The middle-of-the-road group lets the kids play on their own, but they also keep an eye out and intervene when needed. For example, a parent in this group would intervene if the child is looking dangerously unstable while climbing the fort, or if the child steals another kid’s ball and sparks a meltdown.

The barely-involved parents tend to lean against the wall and stay on their phones—probably checking Facebook. They don’t know where their kid is or what their kid is doing. For all they know, their child could be scaling a four-foot wall and jumping onto another kid’s head.

This demonstrates a simple fact: people are more the same than they are different. This is why I love segmentation studies—it’s fascinating that almost everyone can be grouped together based on similar behaviors.

At CMB, we strive to make our segmentation studies relevant, meaningful, and actionable.  To this end, we have found the following five-point plan valuable for guiding our segmentation studies:

  • Start with the End in Mind: Determine how the definition and understanding of segments will be used before you begin.
  • Allow for Multiple Bases: Take a comprehensive, model-based approach that incorporates all potential bases.
  • Have an Open Mind: Let the segments define themselves.
  • Leverage Existing Resources: Harness the power of your internal databases.
  • Create a Plan of Action: Focus on internal deployment from the start.

Because each segmentation study is different, using appropriate selection criteria ensures that segments can be acted upon. In the case of the tumble gym patrons, we might recommend that marketing efforts be based on a psychographic segmentation. What are the parenting philosophies? How do those philosophies motivate the parents, and how can marketing efforts be targeted to the low-hanging fruit?
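To make the “let the segments define themselves” idea concrete, here is a toy cluster analysis—a minimal sketch, not CMB’s actual model-based approach. It runs a tiny one-dimensional k-means over hypothetical parent “involvement” scores (the scores and the 0–10 scale are invented for illustration) and recovers three groups much like the ones observed at the gym:

```python
def kmeans_1d(values, k=3, iters=20):
    """Tiny 1-D Lloyd's k-means; centers seeded at evenly spaced quantiles."""
    srt = sorted(values)
    n = len(srt)
    # Seed centers at spread-out quantiles so they start well separated.
    centers = [srt[(2 * j + 1) * n // (2 * k)] for j in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            # Assign each value to its nearest current center.
            j = min(range(k), key=lambda j: abs(v - centers[j]))
            groups[j].append(v)
        # Move each center to the mean of its assigned values.
        centers = [sum(g) / len(g) if g else centers[j]
                   for j, g in enumerate(groups)]
    return sorted(centers)

# Hypothetical "involvement" scores (0-10) for 12 parents at the gym.
scores = [9.1, 8.7, 9.4, 5.2, 4.8, 5.5, 5.0, 1.2, 0.8, 1.5, 8.9, 4.9]
centers = kmeans_1d(scores, k=3)
print([round(c, 1) for c in centers])  # → [1.2, 5.1, 9.0]
```

The three centers fall out as barely-, middle-, and super-involved without anyone pre-defining the groups—the data defines the segments.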

Incidentally, I find that I fall into the middle segment.

Jessica is a Data Manager at CMB and can’t help but mentally segment the population at large.

Want to learn more about segmentation? In “The 5 C’s of Great Segmentation Socializers,” Brant Cruz shares 5 tips for making sure your segmentation is embraced and used in your organization.


Webinar: Modularized Research Design for a Mobile World

Join us and Research Now to learn about the modularized traditional purchasing survey we created, which allows researchers to reach mobile shoppers en masse. We'll review sampling and weighting best practices and study design considerations, as well as our “data-stitching” process.

Watch Now!

Topics: Methodology, Market Strategy & Segmentation

Global Mobile Market Research Has Arrived: Are You Prepared?

Posted by Brian Jones

Wed, May 14, 2014

The ubiquity of mobile devices has opened up new opportunities for market researchers on a global scale. Think: biometrics, geo-location, presence sensing, etc. The emerging possibilities enabled by mobile market research are exciting and worth exploring, but we can’t ignore the impact that small screens are already having on market research. For example, unintended mobile respondents make up about 10% of online interviews today. They also impact research in other ways—through dropped surveys, disenfranchised panel members, and other unknown influences. Online access panels have become multi-mode sources of data collection, and we need to manage projects with that in mind.

Researchers have at least three options: (1) we can ignore the issue; (2) we can limit online surveys to PC only; or (3) we can embrace and adapt online surveys to a multi-mode methodology. 

We don’t need to make special accommodations for small-screen surveys if mobile participants are a very small percentage of panel participants, but the number of mobile participants is growing. Frank Kelly, SVP of global marketing and strategy for Lightspeed Research/GMI—one of the world’s largest online panels—puts it this way: “We don’t have the time to debate the mobile transition, like we did in moving from CATI to online interviewing, since things are advancing so quickly.”

If you look at the percentage of surveys completed on small screens in recent GMI panel interviews, they exceed 10% in several countries and even 15% among millennials.


There are no truly device-agnostic platforms, since the advanced features in many surveys simply cannot be supported on small screens and less sophisticated devices. It is possible to create device-agnostic surveys, but it means giving up many survey features we’ve long considered standard. This creates a challenge. Some question types, such as discrete choice exercises or multi-dimensional grids, aren’t effectively supported by small screens, and a touchscreen interface is different from what you get with a mouse. Testing on mobile devices may also reveal questions that render differently depending on the platform, which can influence how a respondent answers. In instances like these, it may be prudent to require respondents to complete online interviews on a PC-like device.

The reverse is also true. Some research requires mobile-only respondents, particularly when the specific features of smartphones or tablets are used. In some emerging countries, researchers may skip the PC as a data collection tool altogether in favor of small-screen mobile devices. In certain instances, PC-only or mobile-only interviewing makes sense, but the majority of today’s online research involves a mix of platform types. It is clear we need to adopt best practices that reflect this reality.

Online questionnaires must work on all or at least the vast majority of devices.  This becomes particularly challenging for multi-country studies which have a greater variety of devices, different broadband penetrations, and different coverage/quality concerns for network access and availability.  A research design that covers as many devices as possible—both PC and mobile—maximizes the breadth of respondents likely to participate.  

There are several ways to mitigate concerns and maximize the benefits of online research involving different platform types. 

1. Design different versions of the same study optimized for larger vs. smaller screens. One version might even be app-based instead of online-based, which would mitigate concerns over network accessibility.

2. Break questionnaires into smaller chunks to avoid respondent fatigue on longer surveys, which is a greater concern for mobile respondents.

Both options 1 and 2 have their own challenges.  They require matching/merging data, need separate programming, and require separate testing, all of which can lead to more costly studies.

3. Design more efficient surveys and shorter questionnaires. This is essential for accommodating multi-device user experiences. Technology needs to be part of the solution, specifically with better auto-detect features that optimize how questionnaires are presented on different screen sizes. For multi-country studies, technology needs to adapt how questionnaires are presented for different languages.

Researchers can also use mobile-first questionnaire design practices.  For our clients, we always consider the following:

  • Shortening survey lengths since drop-off rates are greater for mobile participants, and it is difficult to hold their focus for more than 15 minutes.

  • Structuring questionnaires for smaller screens to avoid horizontal scrolling and minimize vertical scrolling.

  • Minimizing the use of images and open-ended questions that require longer responses. SMS-based interviewing is still useful in specific circumstances, but the number of keystrokes required for online research should be minimized.

  •  Keeping the wording of the questions as concise as possible.

  • Carefully choosing which questions to ask which subsets of respondents. We invest a tremendous amount of effort in the design phase to make surveys more appealing to small-screen participants. This approach pays dividends in every other phase of research and in the quality of what is learned.

Consumers and businesses are rapidly embracing the global mobile ecosystem. As market researchers and insights professionals, we need to keep pace without compromising the integrity of the value we provide. Here at CMB, we believe that smart planning, a thoughtful approach, and an innovative mindset will lead to better standards and practices for online market research and our clients.

Special thanks to Frank Kelly and the rest of the Lightspeed/GMI team for their insights.

Brian is a Project Manager and mobile expert on CMB’s Tech and Telecom team. He recently presented the results of our Consumer Pulse: The Future of the Mobile Wallet at The Total Customer Experience Leaders conference.

In Universal City next week for the Future of Consumer Intelligence? Chris Neal, SVP of our Tech and Telecom team, and Roddy Knowles of Research Now, will share A “How-To” Session on Modularizing a Live Survey for Mobile Optimization.

 

Topics: Methodology, Data Collection, Mobile, Data Integration

A Perfect Match? Tinder and Mobile Ethnographies

Posted by Anne Hooper

Wed, Apr 23, 2014

I know what you are thinking...“What the heck is she TALKING about? How can Tinder possibly relate to mobile ethnography?” You can call me crazy, but hear me out first.

For those of you who may be unfamiliar, Tinder is a well-known “hook up” app that’s taken the smartphone-wielding, hyper-social Millennial world by storm. With a simple swipe of the index finger, one can either approve or reject someone from a massive list of prospects. At the end of the day, it comes down to anonymously passing judgment on looks alone—yet if both users “like” each other, they are connected. Shallow? You bet. Effective? Clearly it must be, because thousands of people are downloading the app daily.

So what’s the connection with mobile ethnography? While Tinder appears to be an effective tool for anonymously communicating attraction (anonymous in that the only thing you really know about the other person is what they look like), mobile ethnography is an effective tool for anonymously communicating daily experiences that we generally aren’t as privy to as researchers. Mobile ethnography gives us better insight into consumer behavior by bringing us places we’ve never gone before but are worthy of knowing nonetheless (Cialis, anyone?). Tapping into these experiences—from the benign to the very private—is the nuts and bolts behind any good product or brand.

So how might one tap into these experiences using mobile ethnography? It’s actually quite easy—we create and assign “activities” that are not only engaging for participants, but are also designed to dig deep and (hopefully) capture the "Aha!" moments we aim for as researchers. Imagine being able to see how consumers interact with your brand on a day-to-day basis—how they use your product, where their needs are being fulfilled, and where they experience frustrations. Imagine “being there” when your customer experiences your brand—offering insight into what delights and disappoints them right then and there (i.e., not several weeks later in a focus group facility). The possibilities for mobile ethnography are endless...let’s just hope the possibilities for Tinder come to a screeching halt sooner rather than later.

Anne Hooper is the Director of Qualitative Services at CMB. She has a 12-year-old daughter who has no idea what Tinder is, and she hopes it stays that way for a very long time.

Topics: Methodology, Qualitative Research, Social Media

What They Didn’t Teach You in Marketing Research Class: Sig Testing

Posted by Amy Maret

Mon, Feb 03, 2014


As a recent graduate and entrant into the world of professional market research, I have some words of wisdom for college seniors looking for a career in the industry. You may think your professors prepared you for the “real world” of market research, but there are some things you didn’t learn in your Marketing Research class. So what’s the major difference between research at the undergrad level and the work of a market researcher? In the real world, context matters, and there are real consequences to our research. One example of this is how we approach testing for statistical significance.

Starting in my freshman year of college, I was taught to abide by a concept that I came to think of as the “Golden Rule of Research.” According to this rule, if a difference isn’t statistically significant at the 95% or 90% confidence level, you should consider it essentially meaningless.

Entering the world of Market Research, I quickly found that this rule doesn’t always hold when the research is meant to help users make real business decisions. Although significance testing can be a helpful tool in interpreting results, ignoring a substantial difference simply because it does not cross the thin line into statistical significance can be a real mistake.

Our Chief Methodologist, Richard Schreuer, gives this example of why this “Golden Rule” doesn’t always make sense in the real world:

Imagine a manager gets the results of a concept test in which a new ad outperforms the old by a score of 54% to 47%; sig testing shows our manager can be 84% confident the new ad will do better than the old ad. The problem in the market research industry is that we typically assess significance at the 95% or 90% level; if the difference between scores doesn’t pass this strict threshold, it is often assumed no difference exists.

However, in this case, we can be very sure that the new ad is not worse than the old (there’s only a 1% chance that the new ad’s score is below the old). So, the manager has an 84% chance of improving her advertising and a 1% chance of hurting it if she changes to the new creative—pretty good odds. The worst scenario is that the new creative will perform the same as the old. So, in this case, there is real upside in going with the new creative and little downside (save the production expense). But if the manager relied on industry-standard significance testing, she would likely have dismissed the creative immediately.
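The 84% figure can be reproduced with a standard normal approximation to the difference of two proportions. The sketch below assumes a hypothetical cell size of roughly 100 respondents per ad (the post doesn’t state sample sizes), which is what makes 54% vs. 47% work out to about 84% one-sided confidence:

```python
from math import erf, sqrt

def prob_new_beats_old(p_new, p_old, n_new, n_old):
    """One-sided confidence that the new ad's true score exceeds the old's,
    via a normal approximation to the difference of two proportions."""
    # Standard error of the difference between the two sample proportions.
    se = sqrt(p_new * (1 - p_new) / n_new + p_old * (1 - p_old) / n_old)
    z = (p_new - p_old) / se
    # Standard normal CDF evaluated at z.
    return 0.5 * (1 + erf(z / sqrt(2)))

# 54% vs. 47%; n = 100 per cell is an assumption, not stated in the post.
conf = prob_new_beats_old(0.54, 0.47, 100, 100)
print(round(conf, 2))  # → 0.84
```

Read the other way, this same calculation shows why a result that misses the 95% threshold can still be an excellent bet: 84% confidence of improvement is far from “no difference.”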

At CMB, it doesn’t take long to get the sense that there is something much bigger going on here than just number crunching. Creating useable, meaningful research and telling a cohesive story require more than just an understanding of the numbers themselves; it takes creativity and a solid grasp on our clients’ businesses and their needs. As much as I love working with the data, the most satisfying part of my job is seeing how our research and recommendations support real decisions that our clients make every day, and that’s not something I ever could have learned in school.

Amy is a recent graduate from Boston College, where she realized that she had a much greater interest in statistics than the average student. She is 95% confident that this is a meaningful difference.

 

Join CMB’s Amy Modini on February 20th at 12:30 pm ET to learn how we use discrete choice to better position your brand in a complex, changing market. Register here.

 

Topics: Chadwick Martin Bailey, Advanced Analytics, Methodology, Business Decisions

Jeffrey Henning: 10 Tips for Mobile Diary Studies

Posted by Jeffrey Henning

Mon, Nov 25, 2013

Originally posted on Research Access

Earlier this month, Chris Neal of Chadwick Martin Bailey shared with members of the New England chapter of the Marketing Research Association tips for running mobile diary studies, based on lessons learned from a recent project.

For the Council for Research Excellence (CRE), CMB studied mobile video usage to understand:

  • How much time is spent on mobile devices watching TV (professionally produced TV shows)?

  • Does this cannibalize TV set viewing?

  • What motivates consumers to watch on mobile?

  • How can mobile TV viewing be accurately tracked?

The research included a quantitative phase with two online surveys and mobile journaling, followed by a series of home ethnographies. The quant work included a screening survey, the mobile diary, and a final online survey.

  • The screening survey was census-balanced to estimate market size, with three groups recruited for comparison: those without mobile devices (smartphones or tablets), those with mobile devices who don’t watch TV on them, and those with mobile devices who do. The total number of respondents was 5,886.

  • The mobile diary activity asked respondents to complete their journal 4 times a day for 7 days.

  • A final attitudinal survey was used to better understand motivations and behaviors associated with decisions about TV watching.

Along the way, CMB learned some valuable best practices for mobile diary studies, including tips for recruiting, incentives, design and analysis. The 10 key lessons learned:

  1. Mobile panels don’t work for low incidence – Take care when using mobile panels: given their small size, you may have better luck recruiting through traditional online panels, as CMB did. For this study, the deciding factor was the comparatively low incidence of actual mobile TV watching.

  2. Overrecruit – You will lose many recruits to the journaling exercise when it comes time to downloading the mobile diary application. As a general rule, over-recruit by 100% – get twice the promises of participation that you need. Most dropout occurs after the screening and before the participant has recorded a single mobile diary entry. For many members of online survey panels, journaling is a new experience. The second biggest point of dropout was after recording 1 or 2 diary entries.

  3. Keep it short – To minimize this dropout, you have to keep the diary experience as short as possible: no more than 3 to 5 minutes long. The more times you ask participants to complete a diary each day, the greater the dropout rate.

  4. Think small screen – Make sure the survey is designed to provide a good experience on small screens – avoid grids and sum-allocation questions and limit open-ended prompts and use of images. Use vertical scales instead of horizontal scales. “Be wary of shiny new survey objects for smartphone survey-takers,” said Chris. Smartphone users had 5 times the dropout rate of tablet or laptop users in this study. Enable people to log on to their journal from whatever device they were using at the time, including their computer.

  5. Beware battery hogs – When evaluating smartphone apps, be wary of those that drain battery life by constantly logging GPS location. Check the app store reviews of the application.

  6. Keep consistent – Keep the diary questionnaire the same for every time block, to get respondents into the habit of answering it.

  7. Experiment with incentives to maximize participation – Tier incentives to motivate people to stick with the study and complete all time blocks. To earn the incentive for the CMB study, Chris said that respondents had to participate at least once a day for all 7 days, with additional incentives for every journal log entered (participants were reminded this didn’t have to involve actual TV watching, just filling out the log). In the end, 90% of journaling occasions were filled out.

  8. Remind via SMS and email – In-app notifications are not enough to prompt participation. Use email and text messages for each time block as well. Most respondents logged on within 2 hours of receiving a reminder.

  9. Use online surveys for detailed questions – Use the post-journaling survey to capture greater detail and to work around the limits of mobile surveys. You can then use these results to “slice and dice” the journal responses.

  10. Weight by occasions – Remember to weight the data file to total occasions not total respondents. For missing data, leave it missing. Develop a plan detailing which occasion-based data you’re going to analyze and what respondent-level analysis you are going to do. You may need to create a separate occasion-level data file and a separate respondent-level data file.
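The occasions-vs.-respondents distinction in tip 10 matters because heavy viewers contribute many more diary records than light viewers, so the two bases can tell very different stories. A minimal sketch with invented data (the respondent IDs and counts are hypothetical, not from the CRE study) shows the gap:

```python
from collections import defaultdict

# Hypothetical occasion-level records: (respondent_id, was_mobile_viewing).
# r1 is a heavy mobile viewer with many diary entries; r2 and r3 are not.
occasions = [("r1", True)] * 10 + [("r2", False), ("r2", False), ("r3", False)]

# Occasion-based share: every diary record counts once.
occ_share = sum(mobile for _, mobile in occasions) / len(occasions)

# Respondent-based share: average each person's rate, then average people.
per_resp = defaultdict(list)
for rid, mobile in occasions:
    per_resp[rid].append(mobile)
resp_share = sum(sum(v) / len(v) for v in per_resp.values()) / len(per_resp)

print(round(occ_share, 2), round(resp_share, 2))  # → 0.77 0.33
```

Mobile viewing accounts for 77% of occasions but only a third of respondents lean mobile—which is why the analysis plan should decide up front which base each metric uses, and why separate occasion-level and respondent-level files can be worth building.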

Properly done, mobile diary studies provide an amazing depth of data. For this project, CMB captured almost 400,000 viewing occasions (mobile and non-mobile TV watching), for over 5 million occasion-based records!

Interested in the actual survey results? CRE has published the results presentation, “TV Untethered: Following the Mobile Path of TV Content” [PDF].

Jeffrey Henning, PRC is president of Researchscape International, a market research firm providing custom surveys to small businesses. He is a Director at Large on the MRA Board of Directors; in 2012, he was the inaugural winner of the MRA’s Impact award. You can follow him on Twitter @jhenning.

Topics: Methodology, Qualitative Research, Mobile, Research Design