WELCOME TO OUR BLOG!

The posts here represent the opinions of CMB employees and guests—not necessarily the company as a whole. 

Dear Dr. Jay: How To Predict Customer Turnover When Transactions are Anonymous

Posted by Dr. Jay Weiner

Wed, Apr 26, 2017

Dear Dr. Jay:

What's the best way to estimate customer turnover for a service business whose customer transactions are usually anonymous?

-Ian S.


Dear Ian,

You have posed an interesting question. My first response was, “you can’t”. But as I think about it some more, you might already have some data in-house that could be helpful in addressing the issue.

It appears you are in the mass transit industry. Most transit companies offer single-ride fares and monthly passes, while companies in college towns often offer semester-long passes. Since the passes (monthly, semester, etc.) are often sold at a discounted rate, we might conclude that all the single-fare revenues are turnover transactions.

This assumption is a small leap of faith as I’m sure some folks just pay the single fare price and ride regularly. Let’s consider my boss. He travels a fair amount and even with the discounted monthly pass, it’s often cheaper for him to pay the single ride fare. Me, I like the convenience of not having to make sure I have the correct fare in my pocket so I just pay the monthly rate, even if I don’t use it every day. We both might be candidates for weekly pass sales if we planned for those weeks when we know we’d be commuting every day versus working from home or traveling. I suspect the only way to get at that dimension would be to conduct some primary research to determine the frequency of ridership and how folks pay.

For your student passes, you probably have enough historic data in-house to compare your average semester pass sales to the population of students using them and can figure out if you see turnover in those sales. That leaves you needing to estimate the turnover on your monthly pass sales.

You also may have corporate sales that you could look at. For example, here at CMB, employees can purchase their monthly transit passes through our human resources department. Each month our cards are automatically updated so that we don’t have to worry about renewing them every few weeks. I suspect if we analyzed the monthly sales from our transit system (MBTA) to CMB, we could determine the turnover rate.
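
For illustration, here is a minimal sketch (not CMB’s actual method) of how month-over-month turnover could be computed from that kind of corporate pass export, assuming each row carries an anonymized card ID and the month it was active; the column names are hypothetical:

```python
import pandas as pd

# Hypothetical export of monthly pass sales: one row per (anonymized) card per month.
sales = pd.DataFrame({
    "card_id": ["A", "B", "C", "A", "C", "D"],
    "month":   ["2017-02", "2017-02", "2017-02", "2017-03", "2017-03", "2017-03"],
})

def monthly_turnover(df, prev_month, curr_month):
    """Share of the previous month's pass holders who did not renew."""
    prev = set(df.loc[df["month"] == prev_month, "card_id"])
    curr = set(df.loc[df["month"] == curr_month, "card_id"])
    return len(prev - curr) / len(prev) if prev else float("nan")

print(monthly_turnover(sales, "2017-02", "2017-03"))  # 1 of 3 cards lapsed, about 0.33
```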

As you can see, you could already have valuable data in-house that can help shed light on customer turnover. I’m happy to look at any information you have and let you know what options you might have in trying to answer your question.

Dr. Jay is CMB’s Chief Methodologist and VP of Advanced Analytics and holds a Zone 3 monthly pass to the MBTA.  If it wasn’t for the engineer, he wouldn’t make it to South Station every morning.

Keep those questions coming! Ask Dr. Jay directly at DearDrJay@cmbinfo.com or submit your question anonymously by clicking below:

Ask Dr. Jay!

Topics: advanced analytics, data collection, Dear Dr. Jay

If you can’t trust your sample sources, you can’t trust your data

Posted by Jared Huizenga

Wed, Apr 19, 2017

During a recent data collection orientation for new CMB employees, someone asked me how we select the online sample providers we work with on a regular basis. Each week, my Field Services team receives multiple requests from sample providers—some we know from conferences, others from what we’ve read in industry publications, and some that are entirely new to us.

When vetting new sample providers, a good place to start is the ESOMAR 28 Questions to Help Buyers of Online Samples. Per the site, these questions “help research buyers think about issues related to online samples.”

An online sample provider should be able to answer the ESOMAR 28 questions; consider red flagging any that won’t. If their answers are too brief and don’t provide much insight into their procedures, it’s okay to ask them for more information, or just move along to the next. 

While all 28 questions are valuable, here are a few that I pay close attention to:

Please describe and explain the type(s) of online sample sources from which you get respondents. Are these databases?  Actively managed research panels?  Direct marketing lists?  Social networks?  Web intercept (also known as river) samples?  

Many online sample providers use multiple methods, so these options aren’t always exclusive. I’m a firm believer in knowing where the sample is coming from, but there isn’t necessarily one “right” answer to this question. Depending on the project and the population you are looking for, different methods may need to be used to get the desired results.

Are your sample source(s) used solely for market research? If not, what other purposes are they used for? 

Beware of providers that use sample sources for non-research purposes. If a provider states that they are using their sample for something other than research, at the very least you should probe them for more details so that you feel comfortable in what those other purposes are. Otherwise, pass on the provider.

Do you employ a survey router? 

A survey router is software that directs potential respondents to a questionnaire for which they may qualify. There are pros and cons to survey routers, and they have become such a touchy subject that several of the ESOMAR 28 questions are devoted to the topic of routers. I’m not a big fan of survey routers, since they can be easily abused by dishonest respondents. If a company uses a survey router as part of their standard practice, be sure you have a very clear understanding of how the router is used as well as any restrictions they place on router usage.
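
To make the mechanics concrete, here is a minimal sketch of the general idea (not any particular vendor’s router); the screener fields, study rules, and quotas are all hypothetical:

```python
# Hypothetical open studies, each with simple targeting rules and a remaining quota.
studies = [
    {"id": "S1", "min_age": 18, "countries": {"US"}, "quota_left": 0},
    {"id": "S2", "min_age": 25, "countries": {"US", "CA"}, "quota_left": 40},
]

def route(respondent):
    """Return the first open study this respondent qualifies for, else None."""
    for study in studies:
        if (study["quota_left"] > 0
                and respondent["age"] >= study["min_age"]
                and respondent["country"] in study["countries"]):
            return study["id"]
    return None

print(route({"age": 30, "country": "US"}))  # "S2" (S1's quota is already full)
```

The abuse risk mentioned above is easy to picture here: a respondent can keep adjusting screener answers until something matches, which is why it is worth understanding exactly how a vendor’s router and its restrictions work.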

You should also be wary of any sample provider who tells you that your quality control (QC) measures are too strict. This happened to me a few years ago and, needless to say, it ended our relationship with the company. That’s not to say QC measures can never be too restrictive; when they are, you can actually be throwing out good data.

At CMB, we did a lot of research prior to implementing our QC standards.  We consulted peers and sample providers to get a good understanding of what was fair and reasonable in the market. We investigated speeding criteria, red herring options, and how to look at open-ended responses. We revisit these standards on a regular basis to make sure they are still relevant. 

Since each of our tried-and-true providers supports our QC standards, when a new (to us) sample provider tells us we’re rejecting too many of their panelists due to poor quality, you can understand how that raises a red flag. Legitimate sample providers will appreciate the feedback on “bad” respondents because it helps them improve the quality of their sample.

There are tons of online sample providers in the marketplace, but not every partner is a good fit for everyone. While I won’t make specific recommendations, I urge you to consider the three questions I referenced above when selecting your partner.

At Chadwick Martin Bailey, we’ve worked hard to establish trusted relationships with a handful of online sample providers. They’re dedicated to delivering high quality sample and have a true “partnership” mentality. 

In my world of data collection, recommending the best sample providers to my internal clients is extremely important. This is key to providing our clients with sound insights and recommendations that support confident, strategic decision-making. 

Jared Huizenga is CMB’s Field Services Director, and has been in the market research industry for nineteen years. When he isn’t enjoying the exciting world of data collection, he can be found competing at barbecue contests as the pitmaster of the team Insane Swine BBQ.

 

 

Topics: methodology, data collection

Spring into Data Cleaning

Posted by Nicole Battaglia

Tue, Apr 04, 2017

When someone hears “spring cleaning” they probably think of organizing their garage, purging clothes from their closet, and decluttering their workspace. For many, spring is a chance to refresh and rejuvenate after a long winter (fortunately ours in Boston was pretty mild).

This may be my inner market researcher talking, but when I think of spring cleaning, the first thing that comes to mind is data cleaning. Like cleaning and organizing your home, data cleaning is a detailed and lengthy process that is relevant to researchers and their clients.

Data cleaning is an arduous task. Each completed questionnaire must be checked to ensure that it's been answered correctly, clearly, truthfully, and consistently. Here’s what we typically clean:

  • We’ll look at each open-ended response in a survey to make sure respondents’ answers are coherent and appropriate. Sometimes respondents will curse, other times they'll write outrageously irrelevant answers like what they’re having for dinner, so we monitor these closely. We do the same for open-ended numeric responses: there’s always that one respondent who enters ‘50’ when asked how many siblings they have.
  • We also check for outliers in open-ended numeric responses. Whether it’s false data or an exceptional respondent (e.g., Bill Gates), outliers can skew our data and lead us to draw the wrong conclusions and make poor recommendations to clients. For example, I worked on a survey that asked respondents how many cars they own. Anyone who provided a number that was three standard deviations above the mean was set as an outlier because their answers would’ve significantly impacted our interpretation of average car ownership: the reality is the average household owns two cars, not six. (A rough sketch of a few of these checks follows this list.)
  • Straightliners are respondents who answer a battery of questions on the same scale with the same response. Because of this, sometimes we’ll see someone who strongly agrees or disagrees with two completely opposing statements—making it difficult to trust these answers reflect the respondent’s real opinion.
  • We often insert a Red Herring Fail into our questionnaires to help identify and weed out distracted respondents. A Red Herring Fail is a 10-point scale question usually placed around the halfway mark of a questionnaire that simply asks respondents to select the number “3” on the scale. If they select a number other than “3”, we flag them for removal.
  • If there’s an incentive to participate in a questionnaire, someone may feel inclined to participate more than once. So to ensure our completed surveys are from unique individuals, we check for duplicate IP addresses and respondent IDs.
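
As a rough illustration of how a few of these checks might be scripted (a minimal sketch, assuming a data file of completes; column names like num_cars, red_herring, ip_address, and resp_id are hypothetical):

```python
import pandas as pd

# Assume `df` is a file of completes exported from the survey platform;
# all column names below are hypothetical.
df = pd.read_csv("completes.csv")

# Outliers in an open-ended numeric: more than three standard deviations above the mean.
cutoff = df["num_cars"].mean() + 3 * df["num_cars"].std()
df["flag_outlier"] = df["num_cars"] > cutoff

# Red Herring Fail: anyone who did not select "3" on the trap question.
df["flag_red_herring"] = df["red_herring"] != 3

# Duplicates: repeated IP addresses or respondent IDs (first occurrence is kept).
df["flag_duplicate"] = df["ip_address"].duplicated() | df["resp_id"].duplicated()

# Flagged rows are set aside for manual review before any analysis.
flagged = df[df[["flag_outlier", "flag_red_herring", "flag_duplicate"]].any(axis=1)]
```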

There are a lot of variables that can skew our data, so our cleaning process is thorough and thoughtful. And while the process may be cumbersome, here’s why we clean data: 

  • Impression on the client: Following a detailed data cleaning process helps show that your team is cautious, thoughtful, and able to accurately dissect and digest large amounts of data. This demonstration of thoroughness and competency goes a long way to building trust in the researcher/client relationship because the client will see their researchers are working to present the best data possible.
  • Helps tell a better story: We pride ourselves on storytelling, using insights from data and turning them into strong deliverables to help our clients make strategic business decisions. If we didn’t have accurate and clean data, we wouldn’t be able to tell a good story!
  • Overall, ensures high quality and precise data: At CMB, typically two or more researchers are working on the same data file to mitigate the chance of error. The data undergoes such scrutiny so that any issues or mistakes can be noted and rectified, ensuring the integrity of the report.

The benefits of taking the time to clean our data far outweigh the risks of skipping it. Data cleaning keeps false or unrepresentative information from influencing our analyses or recommendations to a client and ensures our sample accurately reflects the population of interest.

So this spring, while you’re finally putting away those holiday decorations, remember that data cleaning is an essential step in maintaining the integrity of your work.

Nicole Battaglia is an Associate Researcher at CMB who prefers cleaning data over cleaning her bedroom.

Topics: data collection, quantitative research

A Lesson in Storytelling from the NFL MVP Race

Posted by Jen Golden

Thu, Feb 02, 2017

There’s always a lot of debate in the weeks leading up to the NFL’s announcement of its regular season MVP. While the recipient is often from a team with a strong regular season record, it’s not always that simple. Of course the MVP's season stats are an important factor in who comes out on top, but a good story also influences the outcome. 

Take this year: we have a few excellent contenders for the crown, including…

  • Ezekiel Elliott, the rookie running back on the Dallas Cowboys
  • Tom Brady, the NE Patriots QB coming back from a four-game “Deflategate” suspension
  • Matt Ryan, the Atlanta Falcons veteran “nice-guy” QB having a career year

Ultimately, deciding the winner is a mix of art and science. And while you’re probably wondering what this has to do with market research, the NFL regular season MVP selection process has a few important things in common with the creation of a good report.

First, make a framework: Having a framework for your research project can help keep you from feeling overwhelmed by the amount of data in front of you. In the MVP race, for example, voters should start by listing attributes they think make an MVP: team record, individual record, strength of schedule, etc. These attributes are a good way to narrow down potential candidates. In research, the framework might include laying out the business objectives and the data available for each. This outline helps focus the narrative and guide the story’s structure.

Then, look at the whole picture: Once the data is compiled, take a step back and think about how the pieces relate to one another and the context of each. Let’s look at Tom Brady’s regular season stats as an example. He lags behind league leaders on total passing yards and TDs, but remember that he missed four games with a suspension. When the regular season is only 16 games, missing a quarter of them was a missed opportunity to garner points, so you can’t help but wonder if it’s a fair comparison to make. Here’s where it’s important to look at the whole picture (whether we’re talking about research or MVP picks). If you don’t have the entire context, you could dismiss Brady altogether. In research, a meaningful story builds on all the primary data within larger social, political, and/or business contexts.

Finally, back it up with facts:  Once the pieces have come together, you need to back up your key storyline (or MVP pick) with facts to prove your credibility. For example, someone could vote for Giants wide receiver Odell Beckham Jr. because of an impressive once-in-a-lifetime catch he made during the regular season. But beyond the catch there wouldn’t be much data to support that he was more deserving than the other candidates. In a research report, you must support your story with solid data and evidence.  The predictions will continue until the 2016 regular season MVP is named, but whoever that ends up being, he will have a strong story and the stats to back it up.

 Jen is a Sr. PM on the Technology/E-commerce team. She hopes Tom Brady will take the MVP crown to silence his “Deflategate” critics – what a story that would be.

Topics: data collection, storytelling, marketing science

Dear Dr. Jay: How can we trust predictive models after the 2016 election?

Posted by Dr. Jay Weiner

Thu, Jan 12, 2017

Dear Dr. Jay,

After the 2016 election, how will I ever be able to trust predictive models again?

Alyssa


Dear Alyssa,

Data Happens!

Whether we’re talking about political polling or market research, to build good models, we need good inputs. Or as the old saying goes: “garbage in, garbage out”. Let’s look at all the sources of error in the data itself:

  • First, we make it too easy for respondents to say “yes” and “no,” and they try to help us by guessing what answer we want to hear. For example, we ask for purchase intent for a new product idea. The respondent often overstates the true likelihood of buying the product.
  • Second, we give respondents perfect information. We create 100% awareness when we show the respondent a new product concept.  In reality, we know we will never achieve 100% awareness in the market.  There are some folks who live under a rock and of course, the client will never really spend enough money on advertising to even get close.
  • Third, the sample frame may not be truly representative of the population we hope to project to. This is one of the key issues in political polling because the population comprises those who actually voted (not registered voters).  For models to be correct, we need to predict which voters will actually show up to the polls and how they will vote.  The good news in market research is that the population is usually not a moving target.

Now, let’s consider the sources of error in building predictive models.  The first step in building a predictive model is to specify the model.  If you’re a purist, you begin with a hypothesis, collect the data, test the hypothesis, and draw conclusions.  If we fail to reject the null hypothesis, we should formulate a new hypothesis and collect new data.  What do we actually do?  We mine the data until we get significant results.  Why?  Because data collection is expensive.  One possible outcome from continuing to mine the data looking for a better model is a model that is only good at predicting the data you have and not very accurate in predicting results from new inputs.
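
One common guard against that trap is a holdout check. Here is a minimal sketch (a generic illustration, not CMB’s workflow), assuming predictors X and an outcome y have already been assembled:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Assume X (predictors) and y (the behavior to predict) have already been built.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# A model mined to fit the data at hand will look great here...
print("train accuracy:", accuracy_score(y_train, model.predict(X_train)))
# ...but this is the number that matters: performance on records it has never seen.
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

A large gap between the two numbers is the classic symptom of a model that only predicts the data you already have.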

It is up to the analyst to decide what is statistically meaningful versus what is managerially meaningful.  There are a number of websites where you can find “interesting” relationships in data.  Some examples of spurious correlations include:

  • Divorce rate in Maine and the per capita consumption of margarine
  • Number of people who die by becoming entangled in their bedsheets and the total revenue of US ski resorts
  • Per capita consumption of mozzarella cheese (US) and the number of civil engineering doctorates awarded (US)
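
It takes only a few lines to see how easily an impressive-looking correlation can appear between two series that have nothing to do with each other (a minimal sketch with made-up numbers, not the actual divorce or margarine figures):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two made-up series that have nothing to do with each other but both drift
# downward over ten years, like the divorce-rate/margarine pair above.
years = np.arange(2000, 2010)
series_a = 5.0 - 0.10 * (years - 2000) + rng.normal(0, 0.05, len(years))
series_b = 8.0 - 0.20 * (years - 2000) + rng.normal(0, 0.10, len(years))

# A shared trend is enough to produce a striking correlation between them.
print(np.corrcoef(series_a, series_b)[0, 1])  # typically well above 0.9
```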

In short, you can build a model that’s accurate but still wouldn’t be of any use (or make any sense) to your client. And the fact is, there’s always a certain amount of error in any model we build—we could be wrong, just by chance.  Ultimately, it’s up to the analyst to understand not only the tools and inputs they’re using but the business (or political) context.

Dr. Jay loves designing really big, complex choice models.  With over 20 years of DCM experience, he’s never met a design challenge he couldn’t solve. 

PS – Have you registered for our webinar yet? Join Dr. Erica Carranza as she explains why, to change what consumers think of your brand, you must change their image of the people who use it.

What: The Key to Consumer-Centricity: Your Brand User Image

When: February 1, 2017 @ 1PM EST

Register Now!

 

 

Topics: methodology, data collection, Dear Dr. Jay, predictive analytics

A New Year’s Resolution: Closing the Gap Between Intent and Action

Posted by Indra Chapman

Wed, Jan 04, 2017

Are you one of the more than 100 million adults in the U.S. who made a New Year’s resolution? Do you resolve to lose weight, exercise more, spend less and save more, or just be a better person?

Top 10 New Year's Resolutions for 2016:

  • Lose Weight
  • Getting Organized
  • Spend less, save more
  • Enjoy Life to the Fullest
  • Staying Fit and Healthy
  • Learn Something Exciting
  • Quit Smoking
  • Help Others in Their Dreams
  • Fall in Love
  • Spend More Time with Family
[Source: StatisticBrain.com]

The actual number varies from year to year, but generally more than four out of 10 of us make some type of resolution for the New Year. And now that we’re a few days into 2017, we’re seeing the impact of those New Year resolutions. Gyms and fitness classes are crowded (Pilates anyone?), and self-improvement and diet book sales are up.

But… (there’s that inevitable but!), despite the best of intentions, within a week, at least a quarter of us have abandoned that resolution, and by the end of the month, more than a third of us have dropped out of the race. In fact, several studies suggest that only 8% of us actually go on to achieve our resolutions. Alas, we see that behavior no longer follows intention.

It’s not so different in market research because we see the same gap between consumer intention and behavior. Sometimes the gap is fairly small, and other times it’s substantial. Consumers (with the best of intentions) tell us what they plan to do, but their follow through is not always consistent. This, as you might imagine, can lead to bad data.

So what does this mean?

To help close the gap and gather more accurate data, ask yourself the following questions when designing your next study:

  • What are the barriers to adoption or the path to behavior? Are there other factors or elements within the customer journey to consider?
  • Are you assessing the non-rational components? Are there social, psychological or economic implications to them following through with that rational selection? After all, consider that many of us know that exercising daily is good for us – but so few of us follow through.
  • Are there other real life factors that you should consider in analysis of the survey? Does the respondent’s financial situation make that preference more aspirational than intentional?

So what are your best practices for closing the gap between consumer intent and action? If you don’t already have a New Year’s resolution (or if you do, add this one!), why not resolve to make every effort to connect consumer intent to behavior in your studies during 2017?

Another great resolution is to become a better marketer!  How?

Register for our upcoming webinar with Dr. Erica Carranza on consumer identity and the power of measuring brand user image to help create meaningful and relevant messaging for your customers and prospects:

Register Now!

Indra Chapman is a Senior Project Manager at CMB, who has resolved to set goals in lieu of new year’s resolutions this year. In the words of Brad Paisley, the first day of the new year “is the first blank page of a 365-page book. Write a good one.”

Topics: data collection, research design

What We’ve Got Here Is a Respondent Experience Problem

Posted by Jared Huizenga

Thu, Apr 14, 2016

A couple weeks ago, I was traveling to Austin for CASRO’s Digital Research Conference, and I had an interesting conversation while boarding the plane. [Insert Road Trip joke here.]

Stranger: First time traveling to Austin?

Me: Yeah, I’m going to a market research conference.

Stranger: [blank stare]

Me: It’s a really good conference. I go every year.

Stranger: So, what does your company do?

Me: We gather information from people—usually by having them take an online survey, and—

Stranger: I took one of those. Never again.

Me: Yeah? It was that bad?

Stranger: It was [expletive] horrible. They said it would take ten minutes, and I quit after spending twice that long on it. I got nothing for my time. They basically lied to me.

Me: I’m sorry you had that experience. Not all surveys are like that, but I totally understand why you wouldn’t want to take another one.

Thank goodness the plane started boarding before he could say anything else. Double thank goodness that I wasn’t sitting next to him during the flight.

I’ve been a proud member of the market research industry since 1998. I feel like it’s often the Rodney Dangerfield of professional services, but I’ve always preached about how important the industry is. Unfortunately, I’m finding it harder and harder to convince the general population. The experience my fellow traveler had with his survey points to a major theme of this year’s CASRO Digital Research Conference. Either directly or indirectly, many of the presentations this year were about the respondent experience. It’s become increasingly clear to me that the market research industry has no choice other than to address the respondent experience “problem.”

There were also two related sub-themes—generational differences and living in a digital world—that go hand-in-hand with the respondent experience theme. Fewer people are taking questionnaires on their desktop computers. Recent data suggests that, depending on the specific study, 20-30% of respondents are taking questionnaires on their smartphones. Not surprisingly, this skews towards younger respondents. Also not surprisingly, the percentage of smartphone survey takers is increasing at a rapid pace. Within the next two years, I predict the percent of smartphone respondents will be 35-40%. As researchers, we have to consider the mobile respondent when designing questionnaires.

From a practical standpoint, what does all this mean for researchers like me who are focused on data collection?

  1. I made a bold—and somewhat unpopular—prediction a few years ago that the method of using a single “panel” for market research sample is dying a slow death and that these panels would eventually become obsolete. We may not be quite at that point yet, but we’re getting closer. In my experience, being able to use a single sample source today is very rare except for the simplest of populations.

Action: Understand your sample source options. Have candid conversations with your data collection partners and only work with ones that are 100% transparent. Learn how to smell BS from a mile away, and stay away from those people.

  2. As researchers, part of our job should be to understand how the world around us is changing. So, why do we turn a blind eye to the poor experiences our respondents are having? According to CASRO’s Code of Standards and Ethics, “research participants are the lifeblood of the research industry.” The people taking our questionnaires aren’t just “completes.” They’re people. They have jobs, spouses, children, and a million other things going on in their lives at any given time, so they often don’t have time for your 30-minute questionnaire with ten scrolling grid questions.

Action: Take the questionnaires yourself so you can fully understand what you’re asking your respondents to do. Then take that same questionnaire on a smartphone. It might be an eye opener.

  3. It’s important to educate colleagues, peers, and clients regarding the pitfalls of poor data collection methods. Not only does a poorly designed 30-minute survey frustrate respondents, it also leads to speeding, straight-lining, and just not caring. Most importantly, it leads to bad data. It’s not the respondent’s fault—it’s ours. One company stood up at the conference and stated that it won’t take a client project if the survey is too long. But for every company that does this, there are many others that will take that project.

Action: Educate your clients about the potential consequences of poorly designed, lengthy questionnaires. Market research industry leaders as a whole need to do this for it to have a large impact.

Change is a good thing, and there’s no need to panic. Most of you are probably aware of the issues I’ve outlined above. There are no big shocks here. But, being cognizant of a problem and acting to fix the problem are two entirely different things. I challenge everyone in the market research industry to take some action. In fact, you don’t have much of a choice.

Jared is CMB’s Field Services Director, and has been in the market research industry for eighteen years. When he isn’t enjoying the exciting world of data collection, he can be found competing at barbecue contests as the pitmaster of the team Insane Swine BBQ.

Topics: data collection, mobile, research design, conference recap

My Data Quality Obsession

Posted by Laurie McCarthy

Tue, Jan 12, 2016

Yesterday I got at least 50 emails, and that doesn’t include what went to my spam folder—at least half of those went straight in the trash. So, I know what a challenge it is to get a potential respondent to even open an email that contains a questionnaire link. We’re always striving to discover and implement new ways to reach respondents and to keep them engaged: mobile optimization is key, but we also consider incentive levels and types, subject lines, and, of course, better ways to ask questions like highlighter exercises, sliding scales, interactive web simulations, and heat maps. This project customization also provides us with the flexibility needed to communicate with respondents in hard-to-reach groups.

Once we’ve got those precious respondents, the question remains: are we reaching the RIGHT respondents and keeping them engaged? How can we evaluate the data efficiently prior to any analysis?

Even with more safeguards in place to protect against “bad”/professional respondents, the data quality control process remains an important aspect of each project. We have set standards in place, starting in the programming phase—as well as during the final review of the data—to identify and eliminate “bad” respondents from the data prior to conducting any analysis.

We start from a conservative standpoint during programming, flagging respondents who fail any of the criteria in the list below. These respondents are not permanently removed from the data at this point, but they are categorized as an incomplete and are reviewable if we feel that they provide value to the study:

  • “Speedsters”: Respondents who completed the questionnaire in 1/5 of the overall median time or less. This is applied to evaluate the data collected after approximately the first 20% or 100 completes, whichever is first.
  • “Grid Speedsters”: When applicable, respondents who, for two or more grids of ten or more items, have a grid completion time more than two standard deviations below the mean time for that grid. Again, this is applied after approximately the first 20% or 100 completes, whichever is first. (A rough sketch of these timing flags follows this list.)
  • “Red-Herring”: We incorporate a standard scale question (0-10), which is programmed at or around the estimated 10-minute mark in the questionnaire, asking the respondent to select a number on the scale. Respondents who do not select the appropriate number are flagged.
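
A rough sketch of how the timing flags might be computed once enough completes are in (a minimal illustration, not our production scripts; the duration column names are hypothetical and the thresholds mirror the rules above):

```python
import pandas as pd

# Assume `df` holds the interim completes; duration columns are in seconds and
# every column name here is hypothetical.
df = pd.read_csv("interim_completes.csv")

# "Speedsters": total time at or below 1/5 of the overall median.
df["flag_speedster"] = df["duration_total"] <= df["duration_total"].median() / 5

# "Grid Speedsters": grid time more than two standard deviations below that
# grid's mean, on two or more large grids.
grid_cols = ["duration_grid1", "duration_grid2", "duration_grid3"]
too_fast = pd.DataFrame({c: df[c] < df[c].mean() - 2 * df[c].std() for c in grid_cols})
df["flag_grid_speedster"] = too_fast.sum(axis=1) >= 2
```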

This process allows us to begin the data quality review during fielding, so that the blatantly “bad” respondents are removed prior to close of data collection.

However, our process extends to the final data as well.  After the fielding is complete, we review the data for the following:

  • Duplicate respondents: Even with unique links and passwords (for online), we review the data based on the email/phone number provided and the IP Address to remove respondents who do not appear to be unique.
  • Additional speedsters: Respondents who completed the questionnaire in a short amount of time. We take into consideration any brand/product rotation as well (evaluating one brand/product would take less time than evaluating several brands/products). 
  • Straight-liners: Similar to the grid speeders above, we review respondents who have selected only one value for each attribute in a grid. We flag respondents who have straight-lined each grid to create a sum of “straight-liners.” We review this metric on its own as well as in conjunction with overall completion time, the rationale being that respondents who select only one value throughout the questionnaire will usually also have sped through it. (A rough sketch of these checks follows this list.)
  • Inconsistent response patterns: In grids, we can sometimes have attributes that would use the reverse scale, and we review those to determine if there are contradictory responses. Another example might be a respondent who indicates he/she uses a specific brand, and, later in the study, the respondent indicates that he/she is not aware of that brand.
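
For example, the straight-lining and consistency checks lend themselves to a simple script (a minimal sketch; the grid item and brand column names are hypothetical):

```python
import pandas as pd

# Assume `df` is the final data file; all column names below are hypothetical.
df = pd.read_csv("final_data.csv")

# Straight-liners: only one distinct value used across all items of a grid.
grids = {
    "satisfaction": ["q5_1", "q5_2", "q5_3", "q5_4", "q5_5"],
    "importance":   ["q9_1", "q9_2", "q9_3", "q9_4", "q9_5"],
}
for name, items in grids.items():
    df[f"straight_{name}"] = df[items].nunique(axis=1) == 1

# Sum of straight-lined grids, reviewed alongside overall completion time.
df["straightline_count"] = df[[f"straight_{g}" for g in grids]].sum(axis=1)

# One inconsistency check: claims to use a brand but later reports being
# unaware of it (both columns assumed to be 0/1 indicators).
df["flag_inconsistent"] = (df["uses_brand_x"] == 1) & (df["aware_brand_x"] == 0)
```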

While we may not eliminate respondents, we do examine other factors for “common sense”:

  • Gibberish verbatims: Random letters/symbols or references that do not pertain to the study across each open ended response
  • Demographic review: Review of the demographic information to ensure that they are reasonable and in line with the specifications of the study

As part of our continuing partnership with panel sample providers, we provide them with the panel ID and information of those respondents who have failed our quality control process. In instances where the client or the analysis requires certain sample sizes, this may also mean replacing bad respondents. Our collaboration allows us to stand behind the quality of the respondents we provide for analysis and reporting, while also meeting the needs of our clients in a challenging environment.

Our clients rely on us to manage all aspects of data collection when we partner with them to develop a questionnaire, and our stringent data quality control process ensures that we can do that plus provide data that will support their business decisions. 

Laurie McCarthy is a Senior Data Manager at CMB. Though an avid fan of Excel formulas and solving data problems, she has never seen Star Wars. Live long and prosper.

We recently did a webinar on research we conducted in partnership with venture capital firm Foundation Capital. This webinar will help you think about Millennials and their investing, including specific financial habits and the attitudinal drivers of their investing preferences.

Watch Here!

 

Topics: Chadwick Martin Bailey, methodology, data collection, quantitative research

Say Goodbye to Your Mother’s Market Research

Posted by Matt Skobe

Wed, Dec 02, 2015

Is it time for the “traditional” market researcher to join the ranks of the milkman and switchboard operator? The pressure to provide more actionable insights, more quickly, has never been so high. Add new competitors into the mix, and you have an industry feeling the pinch. At the same time, primary data collection has become substantially more difficult:

  • Response rates are decreasing as people become more and more inundated with email requests
  • Many among the younger crowd don’t check their email frequently, favoring social media and texting
  • Spam filters have become more effective, so potential respondents may not receive email invitations
  • The cell-phone-only population is becoming the norm—calls are easily avoided using voicemail, caller ID, call-blocking, and privacy managers
  • Traditional questionnaire methodologies don’t translate well to the mobile platform—it’s time to ditch large batteries of questions

It’s just harder to contact people and collect their opinions. The good news? There’s no shortage of researchable data. Quite the contrary, there’s more than ever. It’s just that market researchers are no longer the exclusive collectors—there’s a wealth of data collected internally by companies as well as an increase in new secondary passive data generated by mobile use and social media. We’ll also soon be awash in the Internet of Things, which means that everything with an on/off switch will increasingly be connected to one another (e.g., a wearable device can unlock your door and turn on the lights as you enter). The possibilities are endless, and all this activity will generate enormous amounts of behavioral data.

Yet, as tantalizing as these new forms of data are, they’re not without their own challenges. One such challenge? Barriers to access. Businesses may share data they collect with researchers, and social media is generally public domain, but what about data generated by mobile use and the Internet of Things? How can researchers get their hands on this aggregated information? And once acquired, how do you align dissimilar data for analysis? You can read about some of our cutting-edge research on mobile passive behavioral data here.

We also face challenges in striking the proper balance between sharing information and protecting personal privacy. However, people routinely trade personal information online when seeking product discounts and for the benefit of personalizing applications. So, how and what’s shared, in part, depends on what consumers gain. It’s reasonable to give up some privacy for meaningful rewards, right? There are now health insurance discounts based on shopping habits and information collected by health monitoring wearables. Auto insurance companies are already doing something similar in offering discounts based on devices that monitor driving behavior.

We are entering an era of real-time analysis capabilities. The kicker is that with real-time analysis comes the potential for real-time actionable insights to better serve our clients’ needs.

So, what’s today’s market researcher to do? Evolve. To avoid marginalization, market researchers need to continue to understand client issues and cultivate insights in regard to consumer behavior. To do so effectively in this new world, they need to embrace new and emerging analytical tools and effectively mine data from multiple disparate sources, bringing together the best of data science and knowledge curation to consult and partner with clients.

So, we can say goodbye to “traditional” market research? Yes, indeed. The market research landscape is constantly evolving, and the insights industry needs to evolve with it.

Matt Skobe is a Data Manager at CMB with keen interests in marketing research and mobile technology. When Matt reaches his screen time quota for the day he heads to Lynn Woods for gnarcore mountain biking.    

Topics: data collection, mobile, consumer insights, marketing science, internet of things, data integration, passive data

Dear Dr. Jay: The Internet of Things and The Connected Cow

Posted by Dr. Jay Weiner

Thu, Nov 19, 2015

Hello Dr. Jay, 

What is the internet of things, and how will it change market research?

-Hugo 


Hi Hugo,

The internet of things is all of the connected devices that exist. Traditionally, it was limited to PCs, tablets, and smartphones. Now, we’re seeing wearables, connected buildings and homes...and even connected cows. (Just when I thought I’d seen it all.) Connected cows, surfing the internet looking for the next greenest pasture. Actually, a number of companies offer connected cow solutions for farmers. Some are geared toward beef cattle, others toward dairy cows. Some devices are worn on the leg or around the neck, others are swallowed (I don’t want to know how you change the battery). You can track the location of the herd, monitor milk production, and model the best field for grass to increase milk output. The solutions offer alerts to the farmer when the cow is sick or in heat, which means that the farmer can get by with fewer hands and doesn’t need to be with each cow 24/7. Not only can the device predict when a cow is in heat, it can also bias the gender of the calf based on the window of opportunity. Early artificial insemination increases the probability of getting a female calf. So, not only can the farmer increase his number of successful inseminations, he/she can also decide if more bulls or milk cows are needed in the herd. 

How did this happen? A bunch of farmers put the devices on the herd and began collecting data. Then, the additional data is appended to the data set (e.g., the time the cow was inseminated, whether it resulted in pregnancy, and the gender of the calf). If enough farmers do this, we can begin to build a robust data set for analysis.
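
In data terms, that “appending” is just a join between the device’s time-stamped readings and the farmer’s outcome records. A minimal sketch with made-up values and hypothetical column names:

```python
import pandas as pd

# Hypothetical device readings: one row per cow per day.
readings = pd.DataFrame({
    "cow_id":   [101, 101, 102],
    "date":     ["2015-06-01", "2015-06-02", "2015-06-01"],
    "activity": [8200, 12400, 7600],
})

# Hypothetical outcomes recorded by the farmer after insemination.
outcomes = pd.DataFrame({
    "cow_id":   [101, 102],
    "pregnant": [True, False],
    "calf_sex": ["F", None],
})

# "Appending" the farm records to the sensor data is a join on cow_id;
# with enough farms contributing, this combined table is what gets modeled.
combined = readings.merge(outcomes, on="cow_id", how="left")
print(combined)
```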

So, what does this mean for humans? Well, many of you already own some sort of fitness band or watch, right? What if a company began to collect all of the data generated by these devices? Think of all the things the company could do with those data! It could predict the locations of more active people. If it appended some key health measures (BMI, diabetes, stroke, death, etc.) to the dataset, the company could try to build a model that predicts a person’s probability of getting diabetes, having a stroke, or even dying. Granted, that’s probably not a message you want from your smart watch: “Good afternoon, Jay. You will be dead in 3 hours 27 minutes and 41 seconds.” Here’s another possible (and less grim) message: “Good afternoon, Jay. You can increase your time on this planet if you walk just another 1,500 steps per day.” Healthcare providers would also be interested in this information. If healthcare providers had enough fitness tracking data, they might be able to compute new life expectancy estimates and offer discounts to customers who maintain a healthy lifestyle (which is tracked on the fitness band/watch).

Based on connected cows, the possibility of this seems all too real. The question is: will we be willing to share the personal information needed to make this happen? Remember: nobody asked the cow if it wanted to share its rumination information with the boss.

Dr. Jay Weiner is CMB’s senior methodologist and VP of Advanced Analytics. He is completely fascinated and paranoid about the internet of things. Big brother may be watching, and that may not be a good thing.

Topics: technology research, healthcare research, data collection, Dear Dr. Jay, internet of things, data integration