The posts here represent the opinions of CMB employees and guests—not necessarily the company as a whole. 

Subscribe to Email Updates

Better Demographics = Better Insights

Posted by Eliza Novick

Thu, Jun 25, 2015

There is a strong belief that gender identity can be used to predict behavior in the marketplace, and we see evidence of this belief in advertising every day (we also regularly poke fun at the idea; see the video below). Despite this, the standard approach to collecting information about gender and behavior often lacks the depth and complexity necessary to reach meaningful insights around gender identity. How can we fix this? One way forward is to incorporate social science into our questionnaire design.


There’s a large body of evidence from social science research that indicates social identities, like gender, can have concrete economic implications for people belonging to certain groups. Gender is not only an expression of individual identity, but is also negotiated on a group level as we practice and enforce patterns of hierarchical social, political, and economic relationships (including work and family life). So, while one woman’s social, political, and/or economic profiles may deviate from the profiles of women as a group, she’s still subject to the systematic opportunities and barriers that these group profiles represent.

At CMB, we often leverage social science in questionnaire design to elicit responses that most closely reflect the market. As an industry, we could (and should) go further in the way we collect demographic information. For example, respondents are typically allowed to select only one employment status from a list of several options: employed part time, employed full time, full-time homemaker, full-time student, retired, or unemployed. From the social science perspective, this question is problematic because it ignores the fact that respondents may fall into more than one category, and that women are more likely than men to experience overlap in these categories over their lifetime. A question like this might produce compromised data, particularly for respondents who are young, female, and/or low-income. Another example is marital status: is the marketplace behavior of a same-sex unmarried couple categorically different from that of a couple in a traditional marriage? Depending on the industry, the answer may vary, but with a few easy questionnaire tweaks, we can capture that information.
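One concrete fix is to make employment status a check-all-that-apply item and tabulate the overlap explicitly. A minimal sketch of how that could be coded, with hypothetical category labels and responses:

```python
from collections import Counter

def tabulate(responses):
    """Count how often each status is selected, allowing overlap."""
    counts = Counter()
    for selected in responses:
        counts.update(selected)
    return counts

# A respondent can hold more than one status at once.
responses = [
    {"employed part time", "student"},    # a working student
    {"employed full time"},
    {"homemaker", "employed part time"},  # overlap a single-select would hide
]

counts = tabulate(responses)
overlap = sum(1 for r in responses if len(r) > 1)
print(counts["employed part time"])  # 2
print(overlap)                       # 2 respondents fall into multiple categories
```

A forced single-select version of the same question would have recorded only one status per respondent, erasing exactly the overlap the multi-select surfaces.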

From segmentation to optimization, demographic information is often a critical part of the analyses that solve our clients' business challenges. But our answers to their problems are only as good as the questions our surveys ask. Revisiting demographic collection is an easy update that goes a long way toward generating higher-quality data, making better evidence-based recommendations, and pushing businesses forward.

Eliza Novick is an Associate Researcher at CMB. Her favorite Boston attraction is the New England Aquarium, particularly the Edge of the Sea exhibit where you can pick up clams and starfish. 



Topics: data collection, research design

Be Aware When Conducting Research Among Mobile Respondents

Posted by Julie Kurd

Tue, Oct 28, 2014


Are you conducting research among mobile respondents yet? Autumn is conference season, and 1,000 of us just returned from IIR’s The Market Research Event (TMRE) conference where we learned, among other things, about research among mobile survey takers. Currently, only about 5% of the market research industry spend is for research conducted on a smartphone, 80% is online, and 15% is everything else (telephone and paper-based). Because mobile research is projected to be 20% of the industry spend in the coming years, we all need to understand the risks and opportunities of using mobile surveys.  

Below, you’ll find three recent conference presentations that discussed new and fresh approaches to mobile research as well as some things to watch out for if you decide to go the mobile route. 

1. At IIR TMRE, Anisha Hundiwal, the Director of U.S. Consumer and Business Insights for McDonald’s, and Jim Lane from Directions Research Inc. (DRI) did not disappoint. They co-presented the research they had done to understand the strengths of half a dozen national and regional coffee brands, including Newman’s Coffee (the coffee that McDonald’s serves), around 48 brand attributes. While they did share some compelling results, Anisha and Jim’s presentation primarily focused on the methodology they used. Here is my paraphrase of the approach they took:

  • They used a traditional 25-minute, full-length online study among traditional computer/laptop respondents who met the screening criteria (U.S. and Europe, age, gender, etc.), measuring a half dozen brands and approximately 48 brand attributes. They then analyzed results of the full-length study and conducted a key driver analysis.
  • Next, they administered the study using a mobile app for mobile survey takers among similar respondents who met the same screening criteria. They also dropped the survey length to 10 minutes, tested a narrower set of brands (3 instead of 6), and winnowed the attributes from ~48 to ~14. They made informed choices about which attributes to include based on their key driver analysis (key drivers to overall equity, and I believe I heard them say they added in some attributes that were highly polarizing).

Then, they compared mobile respondent results to the traditional online survey results. Anisha and Jim discussed key challenges we all face as we begin to adapt to smartphone respondent research. For example, they tinkered with rating scales and slider bars: for some respondents the slider started at the far left (0 on a 0-100 scale), and for others it started at the midpoint, to see if results would differ. While the overall brand results were about the same, respondents used different sections of the rating scale, which made detailed online-to-mobile comparisons difficult. Finally, they reported that the winnowed attribute and brand lists made insights less rich than the online survey results.
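As a rough illustration of the key driver step described above (the presenters' actual model isn't specified, so the attribute names and data below are invented), one common approach is to regress overall brand equity on standardized attribute ratings and keep the attributes with the largest coefficients for the shorter mobile instrument:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic attribute ratings (0-10) for three hypothetical attributes.
attrs = rng.uniform(0, 10, size=(n, 3))
# In this fake data, "taste" drives overall equity most, "value" a little,
# and "packaging" not at all.
overall = 0.8 * attrs[:, 0] + 0.2 * attrs[:, 1] + rng.normal(0, 1, n)

# Standardize so coefficients are comparable importance scores.
Z = (attrs - attrs.mean(0)) / attrs.std(0)
y = (overall - overall.mean()) / overall.std()
coefs, *_ = np.linalg.lstsq(Z, y, rcond=None)

names = ["taste", "value", "packaging"]
ranked = sorted(zip(names, coefs), key=lambda t: -abs(t[1]))
print([name for name, _ in ranked])  # "taste" should rank first
```

Winnowing 48 attributes to 14 for mobile would then mean keeping the top-ranked drivers (plus, as the presenters noted, any highly polarizing attributes).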

2. At the MRA Corporate Researcher’s conference in September, Ryan Backer, Global Insights for Emerging Tech at General Mills, also very clearly articulated several early learnings in the emerging category of mobile surveys. He said that 80% of General Mills’ research team has conducted at least one smartphone respondent study. (Think about that and wonder out loud, “should I at least dip my toe into this smartphone research?”) He provided a laundry list of the challenges they faced and, like all true innovators, was willing to share them because it helps him continue to innovate. You can read a full synopsis here.

3. Chadwick Martin Bailey was a finalist for the NGMR Disruptive Innovation Award at the IIR TMRE conference.  We partnered with Research Now for a presentation on modularizing surveys for mobile respondents at an earlier IIR conference and then turned the presentation into a webinar. CMB used a modularized technique in which a 20 minute survey was deconstructed into 3 partial surveys with key overlaps. After fielding the research among mobile survey takers, CMB used some designer analytics (warning, probably don’t do this without a resident PhD) to ‘stitch’ and ‘impute’ the results. In this conference presentation turned webinar, CMB talks about the pros and cons of this approach.
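CMB's actual stitching and imputation is surely more sophisticated than anything that fits in a blog post, but the basic shape of a modularized design can be sketched as follows. The module assignments, questions, and deliberately naive mean imputation below are all illustrative assumptions:

```python
from statistics import mean

# Hypothetical: a 6-question survey split into 3 partial surveys (modules),
# with each module overlapping its neighbors on one question.
ALL_QS = ["q1", "q2", "q3", "q4", "q5", "q6"]

def stitch(responses):
    """Combine partial surveys into one dataset, imputing each respondent's
    unanswered questions with the mean of observed answers for that question
    (a stand-in for the model-based imputation a real study would use)."""
    observed = {q: [r["answers"][q] for r in responses if q in r["answers"]]
                for q in ALL_QS}
    return [{q: r["answers"].get(q, mean(observed[q])) for q in ALL_QS}
            for r in responses]

responses = [
    {"module": "A", "answers": {"q1": 7, "q2": 5, "q3": 6}},
    {"module": "B", "answers": {"q3": 4, "q4": 8, "q5": 9}},  # overlaps A on q3
    {"module": "C", "answers": {"q5": 7, "q6": 3, "q1": 5}},  # overlaps B on q5
]
data = stitch(responses)
print(data[0]["q4"])  # imputed from the one observed q4 answer: 8
```

The key overlaps are what make stitching defensible: they let you check that respondents answering the shared questions in different modules look alike before filling in the gaps.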

Conferences are a great way to connect with early adopters of new research methods. So, when you’re considering adopting new research methods such as mobile surveys, allocate time to see what those who have gone before you have learned!

Julie blogs for GreenBook, ResearchAccess, and CMB. She’s an inspired participant, amplifier, socializer, and spotter in the Twitter #mrx community, so talk research with her @julie1research.

Topics: data collection, mobile, research design, data integration, conference recap

Big Data: We’ve Only Just Begun

Posted by Jonah Lundberg

Wed, Sep 24, 2014

Data has existed in the modern business world for a long time (think manila folders in file cabinets in every office on every floor). Digitized data has been around for a while now, too (think virtual folders in hard drives connected to seemingly bottomless computer networks). So why, in just the past few years, have all of us become so excited about and actually engaged in data? We even decided to give it a new name: “big” data. Where did all this excitement come from? Why is it happening?

If you asked Tom Breur, Cengage Learning’s VP of Analytics who spoke about big data at NEMRA’s Spring into Action event earlier this year, he would tell you that it’s because there has been a recent surge in data volume (mostly thanks to the emergence of machine-generated data and machine-to-machine communication). This surge led to an ever-expanding data surplus, a surplus that would not have had a home if it weren’t for subsequent innovations in the software that manages huge amounts of data and in much more efficient data warehousing.

Initially, large companies were the only ones with any sort of big data capability (credit scores and fraud protection are two early examples), and until recently these companies were the only ones to leverage those capabilities to predict their customers’ behavior. But in their July-August issue, Inc. Magazine featured an article detailing how smaller companies can now play as well, thanks to decreasing technology costs and the increasing user-friendliness of big data software.

All of this raises the question: will companies, big and small, no longer need market researchers? After all, big data solutions allow companies to learn about their customers and make more informed business decisions, and let’s not forget that the newest big data solutions are so user-friendly that companies can do all the consumer insights themselves. However, I don’t think market researchers will be replaced anytime soon. Big data may be able to tell you the “what,” but it can’t tell you the “why.”

Enter the story of the widely-covered 2013 Google Flu Trends “Epidemic.” By running algorithms based on flu-related Google searches and searchers’ locations, Google Flu Trends had been historically accurate in predicting how much of the U.S. population had the flu. However, in 2013, it inaccurately predicted the number. In fact, it predicted twice the number reported by the Centers for Disease Control and Prevention! How did this happen? The widespread media coverage of the severe flu season in the U.S. spread like a virus throughout social media, which led to an increase in flu-related Google searches. Many of these searches were from people who thought they might have the flu—“I’m sniffling! I’m sneezing!”—but didn’t. Since Google Flu Trends didn’t consider the context and wasn’t able to ask Googlers why they were Googling flu-like symptoms, it thought 11% of the U.S. population had the flu when the actual number was closer to 6%.

Mark Hansen of Columbia University summed it up best when he said, “Data is not a magic force in society; it’s an extension of us.” Can you believe it? Big data is actually quite human. It tells a story about people because it comes from people, and it’s simply a new medium through which people are telling stories about themselves. It’s like collaborative storytelling. Remember those stories that your teachers would have you start and then make other kids add to? It’s similar, but with a simple twist: big data is collaborative non-fiction. But the authors are still people, which brings it back to market researchers. As market researchers, we not only ask people questions about how they feel or what they do, but we also ask why. We’re able to apply the context that, as evidenced by the Google Flu Trends Epidemic, big data is not able to accomplish alone.

Even though we’re not being replaced, we still have to adapt. For example, there is a great opportunity in synthesizing what we do with the data our research partners have in-house. By combining our knowledge of the “why” with a research partner’s “what,” we can catch errors that would otherwise become our partner’s own version of the Google Flu Trends Epidemic. For a company attempting to adjust its product offerings, this could be the difference between abandoning its most loyal customers and keeping those customers happy while successfully gaining new ones in the process.

The number of success stories that result from combining the best of both worlds—the what and the why—seems to be ever-expanding. Here at CMB, we have had the pleasure of co-authoring a few of those success stories. For market research, big data is a good thing and worth adapting for. Company by company, the market research industry should adapt to set itself up not only for survival but for leadership in the next century of consumer insights, so we can continue to play the role of co-author in a story that has only just begun.

Jonah is a Senior Associate Researcher at CMB. He enjoys traveling with his friends and family, and he can't wait for the hockey season to start up again.

Join us at The Market Research Event in October! Use the code CMB2014 and receive 25% off your registration. 

Register Today!

Topics: data collection, technology solutions, big data

Global Mobile Market Research Has Arrived: Are You Prepared?

Posted by Brian Jones

Wed, May 14, 2014

The ubiquity of mobile devices has opened up new opportunities for market researchers on a global scale. Think: biometrics, geo-location, presence sensing, etc. The emerging possibilities enabled by mobile market research are exciting and worth exploring, but we can’t ignore the impact that small screens are already having on market research. For example, unintended mobile respondents make up about 10% of online interviews today. They also impact research in other ways—through dropped surveys, disenfranchised panel members, and other unknown influences. Online access panels have become multi-mode sources of data collection and we need to manage projects with that in mind.

Researchers have at least three options: (1) we can ignore the issue; (2) we can limit online surveys to PC only; or (3) we can embrace and adapt online surveys to a multi-mode methodology. 

We don’t need to make special accommodations for small screen surveys if mobile participants are a very small percentage of panel participants, but the number of mobile participants is growing. Frank Kelly, SVP of global marketing and strategy for Lightspeed Research/GMI—one of the world’s largest online panels—puts it this way: “We don’t have the time to debate the mobile transition, like we did in moving from CATI to online interviewing, since things are advancing so quickly.”

If you look at the percentage of surveys completed on small screens in recent GMI panel interviews, they exceed 10% in several countries and even 15% among millennials.


There are no true device-agnostic platforms, since the advanced features in many surveys simply cannot be supported on small screens and on less sophisticated devices. It is possible to create device-agnostic surveys, but it means giving up on many survey features that we’ve long considered standard. This creates a challenge. Some question types aren’t effectively supported by small screens, such as discrete choice exercises or multi-dimensional grids, and a touchscreen interface is different from what you get with a mouse. Testing on mobile devices may also reveal questions that render differently depending on the platform, which can influence how a respondent answers a question. In instances like these, it may be prudent to require respondents to complete online interviews on a PC-like device. The reverse is also true: some research requires mobile-only respondents, particularly when the specific features of smartphones or tablets are used. In some emerging countries, researchers may skip the PC as a data collection tool altogether in favor of small screen mobile devices. In certain instances, PC-only or mobile-only interviewing makes sense, but the majority of today’s online research involves a mix of platform types. It is clear we need to adopt best practices that reflect this reality.

Online questionnaires must work on all or at least the vast majority of devices.  This becomes particularly challenging for multi-country studies which have a greater variety of devices, different broadband penetrations, and different coverage/quality concerns for network access and availability.  A research design that covers as many devices as possible—both PC and mobile—maximizes the breadth of respondents likely to participate.  

There are several ways to mitigate concerns and maximize the benefits of online research involving different platform types. 

1. Design different versions of the same study optimized for larger vs. smaller screens. One version might even be app-based instead of online-based, which would mitigate concerns over network accessibility.

2. Break questionnaires into smaller chunks to avoid respondent fatigue on longer surveys, which is a greater concern for mobile respondents.

Both options 1 and 2 have their own challenges.  They require matching/merging data, need separate programming, and require separate testing, all of which can lead to more costly studies.

3. Design more efficient surveys and shorter questionnaires. This is essential for accommodating multi-device user experiences. Technology needs to be part of the solution, specifically with better auto-detect features that optimize how questionnaires are presented on different screen sizes. For multi-country studies, technology needs to adapt how questionnaires are presented for different languages.

Researchers can also use mobile-first questionnaire design practices.  For our clients, we always consider the following:

  • Shortening survey lengths since drop-off rates are greater for mobile participants, and it is difficult to hold their focus for more than 15 minutes.

  • Structuring questionnaires to fit smaller screens, avoiding horizontal scrolling and minimizing vertical scrolling.

  • Minimizing the use of images and open-ended questions that require longer responses. SMS based interviewing is still useful in specific circumstances, but the number of key strokes required for online research should be minimized.

  • Keeping the wording of the questions as concise as possible.

  • Carefully choosing which questions to ask which subsets of respondents. We invest a tremendous amount of effort in the design phase to make surveys more appealing to small-screen participants. This approach pays dividends in every other phase of research and in the quality of what is learned.

Consumers and businesses are rapidly embracing the global mobile ecosystem. As market researchers and insights professionals, we need to keep pace without compromising the integrity of the value we provide. Here at CMB, we believe that smart planning, a thoughtful approach, and an innovative mindset will lead to better standards and practices for online market research and our clients.

Special thanks to Frank Kelly and the rest of the Lightspeed/GMI team for their insights.

Brian is a Project Manager and mobile expert on CMB’s Tech and Telecom team. He recently presented the results of our Consumer Pulse: The Future of the Mobile Wallet at The Total Customer Experience Leaders conference.

In Universal City next week for the Future of Consumer Intelligence? Chris Neal, SVP of our Tech and Telecom team, and Roddy Knowles of Research Now will share A “How-To” Session on Modularizing a Live Survey for Mobile Optimization.


Topics: methodology, data collection, mobile, data integration

Data Oceans: You're Gonna Need a Bigger Boat

Posted by Jeff McKenna

Tue, Jul 17, 2012

We hear a lot about Big Data—from Target using predictive analytics to tell which of its customers are pregnant, to MIT and Intel putting millions behind their bigdata@CSAIL initiative. Yet, I’m struck by the fact that most of what I read, and hear at conferences, is about the wealth of data technology can provide researchers, managers, and analysts. There is very little about how these folks can avoid drowning in it, and most importantly make the decisions that address business challenges.

For the uninitiated, the Big Data revolution is characterized by three traits:
  • Volume - Technology has led to an exponential increase in the data we have available.

  • Diversity - We can aggregate data from a wide range of disparate sources, like customer relationship management (CRM) systems, social media, voice of the customer, and even neuro-scientific measurement.

  • Speed - We can field and compile quantitative studies within days; before online, IVR, and mobile data collection methods were available, this took weeks.

While there may be other definitions of Big Data, it is clear that technology is making data larger, wider, and faster. What we need to think about is how technology can make our response to and analysis of data larger, wider, and faster as well, so we avoid drowning in it.

The water metaphor is often used to describe Big Data, and the folks at the GreenBook Consulting Group use the term “oceans of data.”  They describe three business models driven by data: The Traditional (based on Data Ponds), Transitional (based on Data Rivers), and Future (based on Data Oceans).

Traditional market research based on small discrete amounts of data has its place – but as the folks at GreenBook point out, market researchers must face the fact that progression from these Data Ponds to Data Oceans is inevitable.  The Traditional mindset is faced with inertia and will decline in relevance in the next five to ten years.

Those who are in the Transitional phase are moving forward, but folks here are in a “reactive” position. They see these changes around them and are applying some big data solutions in their work. They might have tried one or two tools or are even using them now on a regular basis. But when faced with large amounts of data, they think about technology only in terms of how to collect more data, not in how to manage and apply it quickly and in big ways. In contrast, the Future mindset takes a proactive approach; these are the people who think about how technology will be the fundamental basis for applying the ideas and solutions that lead companies.

In the coming weeks I’ll be discussing specific examples of technologies that are helping push market researchers towards this future. I’d love to hear from you about things you’re doing to respond to Big Data and the challenges and opportunities you are facing as we confront these Data Oceans.

Watch our webinar, Using Technology to Help your Entire Company Understand and Act on Customer Needs, here.

Posted by Jeff McKenna. Jeff is a senior consultant at CMB and team leader for Pinpoint Suite, our innovative Customer Experience Management software. Want to learn more about how Pinpoint Suite can help you make sense of your “Big Data”? Schedule a demo here.

Topics: data collection, big data, data integration

Hold the Phones: Chat as an Alternative to 1-800 Helplines?

Posted by Jessica Chavez

Mon, Sep 26, 2011


I recently read Mike Albo’s piece in W Magazine about beauty hotlines where operators are standing by to answer questions and deal with “emergencies,” like accidentally using an antiperspirant cream as a hand lotion. This got me wondering: in a 24/7 online world filled with IMs and chats, are most beauty companies still relying only on 1-800 numbers to answer their customers’ questions and concerns?

Curious as to whether beauty companies offered a customer service chat option, I did an impromptu investigation of 10 product websites based on products I have in my bathroom. Most products are from well-known, deep-pocket companies (e.g., Neutrogena and L'Oreal). A few were organic-type products produced by smaller companies (like Earth Science Naturals). I was surprised to find none of the product websites I visited offered live chat with a representative. Not one. If chat was available, I couldn’t find it anywhere on the sites I looked at, and I searched. Usually, all I could find were the 1-800 hotlines from the back of the product itself.

As a marketer, I acknowledge there are some definite pluses to beauty hotlines: they are great for building customer relationships. As a market researcher, I see other benefits too: the calls are recorded, and companies get the pulse of the customer, potentially driving further research on hot topics. It's essentially free qualitative research that comes to them. But the world has changed from a decade ago. Customers expect answers now, and limiting feedback to phone calls could keep companies from getting the most accurate information. There are also a couple of problems with limiting interactions to 1-800 numbers.

  • First, these hotlines are usually available during office hours: Monday to Friday, 9-5. These are the prime hours counted against cell phone minutes (800 numbers still count as minutes used), and the hotlines are closed nights and weekends, the time when most cell plans offer free calling. With fewer and fewer people owning landlines, companies must consider that their toll-free numbers aren’t free for most. And hey, people work too!

  • Second, if you can’t, or don’t want to, call during hotline hours, there’s usually an email option. But the rise of IM can make even email feel like a pain in the neck, and sometimes an email answer generates more questions. Sometimes you need a little back-and-forth to get to the root of your question. People want reassurance: a real live person to answer questions and hash things out until you get the information you need.

There’s a huge opportunity here, folks. I’m talking to you, Bath and Beauty Products Industry. With the implementation of website chat functionality, just think how much easier data collection could be. Think how you could be getting more contact with a wider variety of people with a wider variety of questions. Think of the potential increase in customer satisfaction from offering another option for contact, and the chance to drive future strategy. Think of the “Cool Technology” factor and who might be inclined to use it.

As both a researcher and a consumer of beauty products, this seems like a no-brainer.  What do you think?

Posted by Jessica McClelland.  Jessica is a senior associate researcher at CMB who does her best thinking and magazine reading while exercising.


Topics: data collection, technology solutions, customer experience and loyalty, retail research

A Slap in the Face for Market Researchers? Or a Wake Up Call?

Posted by Jeff McKenna

Thu, Aug 11, 2011

A recent blog post, Success Comes From Better Data, Not Better Analysis, from Daryl Morey (@dmorey) on Harvard Business Review raises quite a few interesting, maybe even hair-raising, questions. And if you take any of them out of context, they have the potential to ruffle quite a few feathers in the market research industry. For example:

“As much as I don't want to admit it, however, the age of the irreplaceable analyst no longer exists, if it ever did… If better analysts won't create an edge, however, what will? The answer is better data. Yep, that's right. Raw numbers, not the people and programs that attempt to make sense of them.” - Daryl Morey

What!? “Not the people and programs that attempt to make sense of them”? Those are harsh words, and we all know you can have all the data in the world, but data does not equal insights without the right people and tools behind it. In fact, at CMB we pride ourselves on our people and tools… but I took a deep breath and read on.

And I’m glad I did. I think what he is really getting at is smart people and the right tools are not enough anymore.  I agree, we do need to be collecting more data – yes, even a “sea of data."  Data that helps us understand the currents and tides and direct change.  Analysts are still, and always will be, vital for categorizing, prioritizing, and making sense of it all. In the end, you never know where you will find the next big idea.  This happens only when you are listening, not only to your own audience, but to those of your competitors as well.

So in looking more closely at what Mr. Morey is saying, I’d have to say he makes a great point.  Companies should be doing more to find and gather useful data for their analytical efforts.  When done with an eye to competitive differentiation, data becomes more than a commodity – it becomes an investment.  And, it’s up to analysts (like us) to determine:

  1. The best data to gather,

  2. The best way to structure and prepare the data,

  3. The most appropriate analytical techniques, and

  4. The ideal method for reporting and informing internal clients of the results.

I know I’m biased, but I believe market researchers should play a central role in the strategic missions companies apply to their data.  If not leading the effort, then at least being part of the core team directing the vision and activities.  I’d love to hear from you. What’s your perspective?

Posted by Jeff McKenna. Jeff is a senior consultant at CMB and a lover of the mid-west, the Cleveland Indians, and gleaning key insights from data to drive innovation and change.

Upcoming Webinar: Appearance Counts: How to Tell a More Visually Compelling Story with Your Data

Join Jeff McKenna on Wednesday, August 24th at 12 PM ET as he talks about how to make your “sea of data” more visually compelling.

This presentation will highlight some of the tools that are already available at little or no cost and give a hands-on view of how they can be used to make sure you and others throughout your organization get the most out of the research. Register Here

Topics: strategy consulting, data collection, consumer insights

Debating the Usefulness of Self-Reported Market Research Data

Posted by Cathy Harrison

Thu, Jun 03, 2010

Jeffrey Henning (@jhenning), Kathryn Korostoff (@ResearchRocks) and I  (@VirtualMR) are participating in the complimentary AMA MRC Virtual Event: Unveiling Marketing Research's Future Online on June 23 (hope you can join us!). Our session, "Tweet Off! Three MR Tweeps Bicker, Badger & Bust out of 140 Characters", will involve debates on the prickliest topics in market research.

As a lead-in to the discussion, I am serving as the verdict judge for an early debate between Kathryn and Jeffrey on the subject of self-reported respondent data. 

Kathryn's point:  "Self-reported information is not perfect. But it is less perfect in some cases than in others."

Jeffrey's counterpoint: "Respondents, as a group, have sufficient ability to self-report to provide valuable data for market researchers."

My Take: 

No one would argue that self-reported data is perfect and any knowledgeable researcher would agree that some self-reported measures are better than others. 

In her argument, Kathryn makes several good points about the well-known biases of self-reported data, such as over-reporting, impacts on subsequent behavior, and social desirability (supported by research including “Social Desirability Bias and the Validity of Indirect Questioning” by Robert J. Fisher, Journal of Consumer Research, 1993).

Still, there are biases in every methodology. As researchers, one of our roles is to identify, control for, and account for these biases when designing and evaluating market research studies.

For example, behavioral/observational research that goes beyond descriptive variables (simply recording the behavior observed) requires researchers to make inferences and evaluations, which introduces its own biases. Similarly, sales data, customer databases, and scanner data are robust but often do not cover all distribution channels.

So, instead of discarding ‘less-than-perfect' self-reported data, researchers can address its shortcomings in several ways to yield useful conclusions:

1) Review the self-reported results with a critical eye as part of a comprehensive, multi-mode research plan

2) Calibrate purchase intentions against historical data on stated intent vs. actual behavior (for example, 75% of top-3-box purchase intenders actually purchase)

3) Assign relative values (high, medium, low) to inflated self-reported expenditures for use in a segmentation or product development/launch study
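The calibration idea in point 2 can be sketched numerically: weight each stated-intent bucket by a conversion rate observed in past studies. All bucket shares and conversion rates below are hypothetical, illustrative values, not figures from any actual CMB study.

```python
# Hypothetical calibration of stated purchase intent to a purchase forecast.
# Bucket shares and conversion rates are illustrative assumptions only.

# Share of survey respondents in each stated-intent bucket
stated_intent = {
    "definitely_will_buy": 0.20,
    "probably_will_buy": 0.30,
    "might_buy": 0.25,
    "probably_wont_buy": 0.15,
    "definitely_wont_buy": 0.10,
}

# Assumed historical share of each bucket that actually purchased
conversion_rate = {
    "definitely_will_buy": 0.75,
    "probably_will_buy": 0.35,
    "might_buy": 0.10,
    "probably_wont_buy": 0.03,
    "definitely_wont_buy": 0.01,
}

# Calibrated forecast: weight each bucket's share by its conversion rate
forecast = sum(stated_intent[k] * conversion_rate[k] for k in stated_intent)
print(f"Calibrated purchase rate: {forecast:.1%}")
```

Note how the calibrated forecast lands well below the 50% of respondents who say they "definitely" or "probably" will buy, which is exactly the inflation this approach corrects for.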

My Verdict: Respondents, as a group, have sufficient ability to self-report for market researchers to draw useful conclusions, as long as actual magnitudes and values are interpreted relatively rather than literally.


Posted by Cathy Harrison. Cathy is a client services executive at CMB, loves social media, music, and kick-butt research. You can follow Cathy on Twitter at @virtualMR

Topics: methodology, data collection

Find Multiple Uses for Your Internally Generated Data

Posted by Megan McManaman

Tue, Nov 17, 2009

In recent years, it has become easier for companies to collect information on their operations, clients, and prospects. Credit cards, online tracking, loyalty programs, utilization reports, and other metrics are integral parts of the business landscape that fill servers and databases.

But when was the last time you critically reviewed the reports and analysis you receive from your internally collected data? Which of your business decisions could be supported by extracting additional insight from the data (internal or customer) that you already have?

More and more, we are being asked to apply advanced analytics and critical thinking to data collected in the course of business operations. In doing so, we've been able to help in a number of ways:

1) Damage Control:  

A hotel company wanted to predict the reduction in value (if any) from a customer's exposure to lower-performing locations in its network. Using recent advances in Customer Lifetime Value analysis and thinking (going well beyond regression-based models), we determined whether underperforming locations were reducing the brand's value by undermining customer connection.

2) Determine Best Practices:

A services company with over 1,400 locations wanted to share best practices for driving improvements to the bottom line. Using CHAID and Latent Class segmentation, we examined their internal data (e.g., number/type/wages of employees, customer volume, rate paid, how booked, etc.) to prioritize opportunities to reduce spending (with minimal impact) or increase investment (with maximum impact). They could then determine which elements of a location's success could or should be replicated across the organization.
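CHAID's core move is to split a population on the categorical predictor most strongly associated, by a chi-squared test, with the outcome. Here is a minimal sketch of that first splitting step in plain Python; the location records, field names, and outcome are all hypothetical, and a real CHAID run would also merge predictor levels and recurse on each branch.

```python
# Minimal sketch of CHAID's first step: pick the categorical predictor whose
# split is most strongly associated (by chi-squared) with a binary outcome.
# All data and field names below are hypothetical.

def chi_squared(table):
    """Pearson chi-squared statistic for a contingency table (list of rows)."""
    row_totals = [sum(r) for r in table]
    col_totals = [sum(c) for c in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical location records: (staffing_level, booking_channel, high_margin)
locations = [
    ("high", "online", 1), ("high", "phone", 1),
    ("high", "online", 1), ("high", "phone", 1),
    ("low", "online", 0), ("low", "phone", 0),
    ("low", "online", 0), ("low", "phone", 1),
]

def contingency(records, field_index):
    """Cross-tabulate one predictor's levels against the binary outcome."""
    levels = sorted({r[field_index] for r in records})
    return [[sum(1 for r in records if r[field_index] == lv and r[2] == y)
             for y in (0, 1)] for lv in levels]

# Score each predictor and keep the strongest association for the first split
scores = {name: chi_squared(contingency(locations, i))
          for i, name in enumerate(["staffing_level", "booking_channel"])}
best = max(scores, key=scores.get)
print(f"Best first split: {best} (chi2 scores: {scores})")
```

In this toy data, staffing level separates high-margin locations far more cleanly than booking channel, so CHAID would split on it first.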

3) Localization/inventory control:

Successful and insightful location managers know what sells, to whom, and when. Some patterns, however, may not be obvious, particularly when managers are looking across multiple locations. By diving into the actual transactional data, we can put concrete numbers in front of managers that support or change their intuition and drive more fact-based decision making.
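The "diving into transactional data" step is, at its simplest, a roll-up and ranking exercise. A minimal sketch, using made-up transaction records (the locations, products, and unit counts are hypothetical):

```python
# Roll hypothetical transaction records up by location and product, then rank
# each location's products by units sold, so managers see concrete numbers
# rather than relying on intuition alone.
from collections import defaultdict

# Hypothetical records: (location, product, month, units_sold)
transactions = [
    ("Boston", "widget", "Jan", 30), ("Boston", "widget", "Feb", 45),
    ("Boston", "gadget", "Jan", 10), ("Denver", "widget", "Jan", 5),
    ("Denver", "gadget", "Jan", 60), ("Denver", "gadget", "Feb", 70),
]

# Aggregate units sold per (location, product) pair
sales = defaultdict(int)
for location, product, month, units in transactions:
    sales[(location, product)] += units

# Rank each location's products from best- to worst-selling
for loc in sorted({loc for loc, _ in sales}):
    ranked = sorted(((p, u) for (l, p), u in sales.items() if l == loc),
                    key=lambda pair: -pair[1])
    print(loc, ranked)
```

The same pattern extends naturally: add a month or customer-segment key to the aggregation and the cross-location comparisons the passage describes fall out of the same few lines.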


Topics: data collection, big data, integrated data