WELCOME TO OUR BLOG!

The posts here represent the opinions of CMB employees and guests—not necessarily the company as a whole. 


Survey Magazine Names CMB’s Talia Fein a 2015 “Data Dominator”

Posted by Talia Fein

Wed, Sep 23, 2015

Every year, Survey Magazine names 10 “Data Dominators,” who are conquering data in different ways at their companies. This year, our very own Talia Fein was chosen. She discusses her passion for data in Survey Magazine’s August issue, and we’ve reposted the article below.

When I first came to CMB, a research and strategy company in Boston, I was fresh out of undergrad and an SPSS virgin. In fact, I remember there being an SPSS test that all new hires were supposed to take, but I couldn’t take it because I didn’t even know how to open a data file. Fast forward a few months, and I had quickly been converted to an SPSS specialist, a numbers nerd, or, perhaps more appropriately, a data dominator. I was a stickler for process and precision in all data matters, and I took great pride in ensuring that all data and analyses were perfect and pristine. To put it bluntly, I was a total nerd.

I recently returned to CMB after a four-year hiatus. When I left CMB, I quickly became the survey and data expert among my new colleagues and the point person for all SPSS and data questions. But it wasn’t just my data skills that were being put to use. To me, data management is also about the process and the organization of data. In my subsequent roles, I found myself looking to improve the data processes and streamline the systems used for survey data. I brought new software programs to my companies and taught my teams how to manage data effectively and efficiently.

When I think about the future of the research industry, I imagine survey research as the foundation of a house. Survey data and data management are the building blocks of what we do. When we do them well, we are a well-oiled machine. But a well-oiled machine doesn’t sell products or help our clients drive growth. We need to have the foundation in place in order to extend beyond it and to prepare ourselves for the next big thing that comes along. And that next big thing, in my mind, is big data technology. There is a lot of data out there, and a lot of ways of managing and analyzing it, and we need to be ready for that. We need to expand our ideas about where our data is coming from and what we can do with it. It is our job to connect these data sources and to find greater meaning than we were previously able to. It is this non-traditional use of data and analytics that is the future of our industry, and we have to be nimble and creative in order to best serve our clients’ ever-evolving needs.

One recent example of this is CMB’s 2015 Mobile Wallet study, which leveraged multiple data sources and—in the process—revealed which were good for what types of questions. In the case of this research, we analyzed mobile behavioral data, including mobile app and mobile web usage, along with survey-based data to get a full picture of consumers’ behaviors, experiences, and attitudes toward mobile wallets. We also came away with new best practices for managing passive mobile behavioral data, which presents challenges distinct from those of managing survey data. Our clients are making big bets on new technology, and they need the comprehensive insights that come from integrating multiple sources. We specifically sampled different sources because we know that—in practice—many of our clients are being handed multiple data sets from multiple data sources. In order to best serve these clients, we need to be able to leverage all the data sources at our and their disposal so that we can glean the best insights and make the best recommendations.

Talia Fein is a Project & Data Manager at Chadwick Martin Bailey (CMB), a market research consulting firm in Boston. She’s responsible for the design and execution of market research studies for Fortune 500 companies as well as the data processing and analysis through all phases of the research. Her portfolio includes clients such as Dell, Intel, and Comcast, and her work includes customer segmentation, loyalty, brand tracking, new product development, and win-loss research.

Topics: our people, big data, data integration

Dear Dr. Jay: Data Integration

Posted by Jay Weiner, PhD

Wed, Aug 26, 2015

Dear Dr. Jay,

How can I explain the value of data integration to my CMO and other non-research folks?

- Jeff B. 


 


Hi Jeff,

Years ago, at a former employer that will remain unnamed, we used to entertain ourselves by playing Buzzword Bingo in meetings. We’d create Bingo cards with 30 or so words that management liked to use (“actionable,” for instance). You’d be surprised how fast you could fill a card. If you have attended a conference in the past few years, you know we as market researchers have plenty of new words to play with. Think: big data, integrated data, passive data collection, etc. What do all these new buzzwords really mean to the research community? It boils down to this: we potentially have more data to analyze, and the data might come from multiple sources.

If you only collect primary survey data, then you typically only worry about sample reliability, measurement error, construct validity, and non-response bias. However, with multiple sources of data, we need to worry about all of that plus the level of aggregation, the impact of missing data, and the accuracy of the data. When we get a database of information to append to survey data, we often don’t question the contents of that file... but maybe we should.

A client recently sent me a file with more than 100,000 records (ding ding, “big data”). Included in the file were survey data from a number of ad hoc studies conducted over the past two years as well as customer behavioral data (ding ding, “passive data”). And, it was all in one file (ding ding, “integrated data”). BINGO!

I was excited to get this file for a couple of reasons. One, I love to play with really big data sets, and two, I was able to start playing right away. Most of the time, clients send me a bunch of files, and I have to do the integration/merging myself. Because this file was already integrated, I didn’t need to worry about having unique and matching record identifiers in each file.
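When the files do arrive unmerged, the join itself is straightforward once both sources share an identifier. A minimal sketch in pandas, with all column names and values invented for illustration:

```python
import pandas as pd

# Hypothetical extracts; "account_id" stands in for whatever unique,
# matching record identifier the two files share.
survey = pd.DataFrame({
    "account_id": [101, 102, 103],
    "overall_sat": [9, 6, 8],
})
behavior = pd.DataFrame({
    "account_id": [101, 103, 104],
    "monthly_spend": [42.50, 17.25, 88.00],
})

# An inner join keeps only accounts present in BOTH sources,
# i.e., the records that carry survey AND behavioral measures.
integrated = survey.merge(behavior, on="account_id", how="inner")
```

Choosing `how="outer"` instead would keep every account and surface the missing-data problem explicitly as blanks.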

Why would a client have already integrated these data? Well, if you can add variables to your database and append attitudinal measures, you can improve the value of the modeling you can do. For example, let’s say that I have a Dunkin’ Donuts (DD) rewards card, and every weekday, I stop by a DD close to my office and pick up a large coffee and an apple fritter. I’ve been doing this for quite some time, so the database modelers feel fairly confident that they can compute my lifetime value from this pattern of transactions. However, if the coffee was cold, the fritter was stale, and the server was rude during my most recent transaction, I might decide that McDonald’s coffee is a suitable substitute and stop visiting my local DD store in favor of McDonald’s. How many days without a transaction will it take the DD algorithm to decide that my lifetime value is now $0.00? If we had the ability to append customer experience survey data to the transaction database, maybe the model could be improved to more quickly adapt. Maybe even after 5 days without a purchase, it might send a coupon in an attempt to lure me back, but I digress.

Earlier, I suggested that maybe we should question the contents of the database. When the client sent me the file of 100,000 records, I’m pretty sure that was most (if not all) of the records that had both survey and behavioral measures. Considering the client has millions of account holders, that’s actually a sparse amount of data. Here’s another thing to consider: how well do the two data sources line up in time? Even if 100% of my customer records included overall satisfaction with my company, these data may not be as useful as you might think. For example, overall satisfaction in 2010 and behavior in 2015 may not produce a good model. What if some of the behavioral measures were missing values? If a customer recently signed up for an account, then his/her 90-day behavioral data elements won’t get populated for some time. This means that I would need to either remove these respondents from my file or build unique models for new customers.
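The new-customer problem above can be handled with a simple tenure filter. A sketch (column names and values are invented) of splitting the file into a main-model sample and a separate new-customer sample:

```python
import pandas as pd

# Hypothetical integrated file: accounts younger than 90 days have
# no 90-day behavioral measures yet, so they can't feed the main model.
df = pd.DataFrame({
    "account_id": [1, 2, 3, 4],
    "tenure_days": [400, 35, 200, 10],
    "visits_90d": [12.0, None, 7.0, None],
})

mature = df[df["tenure_days"] >= 90]        # feeds the main model
new_accounts = df[df["tenure_days"] < 90]   # gets its own model (or is dropped)
```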

The good news is that there is almost always some value to be gained in doing these sorts of analyses. As long as we’re cognizant of the quality of our data, we should be safe in applying the insights.

Got a burning market research question?

Email us! OR Submit anonymously!

Dr. Jay Weiner is CMB’s senior methodologist and VP of Advanced Analytics. Jay earned his Ph.D. in Marketing/Research from the University of Texas at Arlington and regularly publishes and presents on topics, including conjoint, choice, and pricing.

Topics: advanced analytics, big data, Dear Dr. Jay, data integration, passive data

Dear Dr. Jay: Predictive Analytics

Posted by Dr. Jay Weiner

Mon, Apr 27, 2015


Dear Dr. Jay, 

What’s hot in market research?

-Steve W., Chicago

 

Dear Steve, 

We’re two months into my column, and you’ve already asked one of my least favorite questions. But, I will give you some credit—you’re not the only one asking such questions. In a recent discussion on LinkedIn, Ray Poynter asked folks to anticipate the key MR buzzwords for 2015. Top picks included “wearables” and “passive data.” While these are certainly topics worthy of conversation, I was surprised Predictive Analytics (and Big Data) didn’t get more hits from the MR community. My theory: even though the MR community has been modeling data for years, we often don’t have the luxury of getting all the data that might prove useful to the analysis. It’s often clients who are drowning in a sea of information—not researchers.

On another trending LinkedIn post, Edward Appleton asked whether “80% Insights Understanding” is increasingly “good enough.” Here’s another place where Predictive Analytics may provide answers. Simply put, Predictive Analytics lets us predict the future based on a set of known conditions. For example, if we were able to improve our order processing time from 48 hours to 24 hours, Predictive Analytics could tell us the impact that would have on our customer satisfaction ratings and repeat purchases. Another example using non-survey data is predicting concept success using GRP buying data.


What do you need to perform this task?

  • We need a dependent variable we would like to predict. This could be loyalty, likelihood to recommend, likelihood to redeem an offer, etc.
  • We need a set of variables that we believe influence this measure (independent variables). These might be factors that are controlled by the company, market factors, and other environmental conditions.
  • Next, we need a data set that has all of this information. This could be data you already have in house, secondary data, data we help you collect, or some combination of these sources of data.
  • Once we have an idea of the data we have and the data we need, the challenge becomes aggregating the information into a single database for analysis. One key challenge in integrating information across disparate sources is figuring out how to create unique rows of data for use in model building. We may need a database wizard to help merge the data sources we deem useful to modeling. This is probably the step in the process that requires the most time and effort. For example, we might have 20 years’ worth of concept measures and the GRP buys for each product launched. We can’t assign the GRPs for each concept to each respondent in the concept test; if we did, there wouldn’t be much variation in the data for a model. The observation level becomes the concept. We then aggregate the individual-level responses for each concept and append the GRP data. Now the challenge becomes the number of observations in the data set we’re analyzing.
  • Lastly, we need a smart analyst armed with the right statistical tools. Two tools we find useful for predictive analytics are Bayesian networks and TreeNet. Both tools are useful for different types of attributes. More often than not, we find data sets made up of scale, ordinal, and categorical data. It’s important to choose a tool that is capable of working with this type of information.
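To make the aggregation step concrete, here is a toy sketch of the concept-test example: roll respondent-level scores up to one row per concept, append the GRP buys, and fit a stand-in model. All names and numbers are invented, and the one-variable linear fit is only a placeholder for the real tools (TreeNet, Bayesian networks) mentioned above:

```python
import numpy as np
import pandas as pd

# Invented respondent-level concept-test data.
responses = pd.DataFrame({
    "concept": ["A", "A", "B", "B", "C", "C"],
    "purchase_intent": [5, 4, 3, 2, 4, 5],
})

# The observation level becomes the concept: aggregate individual
# responses to one row per concept...
concept_level = responses.groupby("concept", as_index=False)["purchase_intent"].mean()

# ...then append the GRP buy for each launched concept.
grps = pd.DataFrame({"concept": ["A", "B", "C"], "grp": [500, 200, 400]})
model_data = concept_level.merge(grps, on="concept")

# Stand-in model: mean purchase intent as a linear function of GRPs.
slope, intercept = np.polyfit(model_data["grp"], model_data["purchase_intent"], 1)
```

Note how the merge leaves only three observations: once the observation level is the concept, the number of concepts tested, not the number of respondents, limits how rich the model can be.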

The truth is, we’re always looking for the best (fastest, most accurate, useful, etc.) way to solve client challenges—whether they’re “new” or not. 

Got a burning research question? You can send your questions to DearDrJay@cmbinfo.com or submit anonymously here.

Dr. Jay Weiner is CMB’s senior methodologist and VP of Advanced Analytics. Jay earned his Ph.D. in Marketing/Research from the University of Texas at Arlington and regularly publishes and presents on topics, including conjoint, choice, and pricing.

Topics: advanced analytics, big data, Dear Dr. Jay, passive data

Reaping the Rewards of Big Data

Posted by Heather Magaw

Thu, Apr 09, 2015

It’s both an exciting and challenging time to be a researcher. Exciting because we can collect data at speeds our predecessors could only dream about and challenging because we must help our partners stay nimble enough to really benefit from this data deluge. So, how do we help our clients reap the rewards of Big Data without drowning in it?

Start with the end in mind: If you’re a CMB client, you know that we start every engagement with the end in mind before a single question is ever written. First, we ask what business decisions the research will help answer. Once we have those, we begin to identify what information is necessary to support those decisions. This keeps us focused and informs everything from questionnaire design to implementation.

Leverage behavioral and attitudinal data: While business intelligence (BI) teams have access to mountains of transactional, financial, and performance data, they often lack insight into what drives customer behavior, which is a critical element of understanding the full picture. BI teams are garnering more and more organizational respect due to data access and speed of analysis, yet market research departments (and their partners like CMB) are the ones bringing the voice of the customer to life and answering the “why?” questions.

Tell a compelling story: One of the biggest challenges of having “too much” data is that data from disparate sources can provide conflicting information, and time-starved decision makers don’t have time to sort through all of it in great detail. In a world in which data is everywhere, the ability to take insights beyond a bar chart and bring them to life is critical. It’s why we spend a lot of time honing our storytelling skills and not just our analytic chops. We know that multiple data sources must be analyzed from different angles and through multiple lenses to provide a picture that is both full and actionable.

Big Data is ripe with potential. Enterprise-level integration of information has the power to change the game for businesses of all sizes, but data alone isn’t enough. The keen ability to ask the right questions and tell a holistic story based on the results gives our clients the confidence to make those difficult investment decisions. 2014 was the year of giving Big Data a seat at the table, but for the rest of 2015, market researchers need to make sure their seat is also reserved so that we can continue to give decision makers the real story of the ever-changing business landscape.

Heather is the VP of Client Services, and she admits to becoming stymied by analysis paralysis when too much data is available. She confesses that she resorts to selecting restaurants and vacation destinations based solely on verbal recommendations from friends who take the time to tell a compelling story instead of slogging through an over-abundance of online reviews.

Topics: big data, storytelling, business decisions

Dear Dr. Jay: Mining Big Data

Posted by Dr. Jay Weiner

Tue, Mar 17, 2015

Dear Dr. Jay,

We’ve been testing new concepts for years. The magic score to move forward in the new product development process is a 40% top-2-box score on purchase intent on a 5-point scale. How do I know if 40% is still a good benchmark? Are there any other measures that might be useful in predicting success?

-Normatively Challenged

 

Dear Norm,

I have some good news—you may have a big data mining challenge. Situations like yours are why I always ask our clients two questions: (1) what do you already know about this problem, and (2) what information do you have in-house that might shed some light on a solution? You say you’ve been testing concepts for years.  Do you have a database of concepts already set up? If not, can you easily get access to your concept scores?

Look back on all of the concepts you have ever tested, and try to understand what makes for a successful idea. In addition to all the traditional concept test measures like purchase intent, believability, and uniqueness, you can also append marketing spend, distribution measures, and perhaps even social media trend data. You might even want to include economic indicators like the rate of inflation, the prime rate of interest, and the average level of the Dow. While many of these appended variables might be outside of your control, they may serve to help you understand what might happen if you launch a new product under various market conditions.
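As a starting point for that look-back, the benchmark itself is easy to recompute from a respondent-level concept database. A sketch with invented data and column names:

```python
import pandas as pd

# Invented respondent-level history: purchase intent on a 5-point scale.
df = pd.DataFrame({
    "concept": ["X", "X", "X", "Y", "Y", "Y"],
    "purchase_intent": [5, 4, 2, 3, 2, 4],
})

# Top-2-box: the share of respondents rating 4 or 5.
t2b = (
    df.assign(top2=df["purchase_intent"] >= 4)
      .groupby("concept")["top2"]
      .mean()
)
passes_benchmark = t2b >= 0.40  # the 40% go/no-go cutoff in question
```

Joining a flag like `passes_benchmark` against actual in-market results would show how often the 40% cutoff called launches correctly, which is the real test of whether it is still a good benchmark.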

Take heart, Norm, you are most definitely not alone. In fact, I recently attended a presentation on Big Data hosted by the Association of Management Consulting Firms. There, Steve Sashihara, CEO of Princeton Consultants, suggested there are four key stages for integrating big data into practice. The first stage is to monitor the market. At CMB, we typically rely on dashboards to show what is happening. The second stage is to analyze the data. Are you improving, getting worse, or just holding your own? However, only going this far with the data doesn’t really provide any insight into what to do. To take it to the next level, you need to enter the third stage: building predictive models that forecast what might happen if you make changes to any of the factors that impact the results. The true value to your organization is really in the fourth stage of the process—recommending action. The tools that build models have become increasingly powerful in the past few years. The computing power now permits you to model millions of combinations to determine the optimal outcomes from all possible executions.

In my experience, there are usually many attributes that can be improved to optimize your key performance measure. In modeling, you’re looking for the attributes with the largest impact and the cost associated with implementing those changes to your offer. It’s possible that the second best improvement plan might only cost a small percentage of the best option. If you’re in the business of providing cellular device coverage, why build more towers if fixing your customer service would improve your retention almost as much?

Got a burning research question? You can send your questions to DearDrJay@cmbinfo.com or submit anonymously here.

Dr. Jay Weiner is CMB’s senior methodologist and VP of Advanced Analytics. Jay earned his Ph.D. in Marketing/Research from the University of Texas at Arlington and regularly publishes and presents on topics, including conjoint, choice, and pricing.

Topics: advanced analytics, product development, big data, Dear Dr. Jay