
Flying High on Predictive Analytics

Posted by Amy Maret

Thu, Jul 27, 2017

Buying a plane ticket can be a gamble. Right now, it might be a good price, but who’s to say it won’t drop in a day—a week? Not only that, it may be cheaper to take that Sunday night flight instead of Monday morning. And oh—should you fly into Long Beach or LAX? As a frequent traveler (for leisure and work!) and deal seeker, I face dilemmas like these a lot.

The good news is that there are loads of apps and websites to help passengers make informed travel decisions. But how? How can an app—say, Hopper—know exactly when a ticket price will hit its lowest point? Is it magic? Is there a psychic in the backroom predicting airline prices with her crystal ball?

Not quite.

While it seems like magic (especially when you do land that great deal), forecasting flight prices all comes down to predictive analytics—identifying patterns and trends in a vast amount of data. And for the travel industry in particular, there’s incredible opportunity to use data in this way. So, let’s put away the crystal ball (it won’t fit in your carry-on) and look at how travel companies and data scientists are using the tremendous amount of travel data to make predictions like when airfare will hit its lowest point.

In order to predict what will happen in the future (in this case, how airfare may rise and fall), you need a lot of data on past behaviors. According to the Federal Aviation Administration (FAA), there are nearly 24,000 commercial flights carrying over two million passengers around the world every day. And for every single one of those travelers, there’s a record of when they purchased their ticket, how much they paid, what airline they’re flying, where they’re flying to/from, and when they’re traveling. That’s a ton of data to work with!

As a researcher, I get excited about the endless potential for how that amount of historical data can be used. And I’m not the only one. Companies like Kayak, Hopper, Skyscanner, and Hipmunk are finding ways to harness travel data to empower consumers to make informed travel decisions. To quote Hopper’s website: their data scientists have compiled data on trillions of flight prices over the years to help them make “insightful predictions that consistently perform with 95% accuracy”.

While the details of Hopper’s methodology are intentionally vague, we can assume that their team is using data mining and predictive analytics techniques to identify patterns in flight prices. Then, based on what they’ve learned from these patterns, they build algorithms that let customers know when the best time to purchase a ticket is—whether they should buy now or wait as prices continue to drop leading up to their travel date. They may not even realize it, but in a way those customers are making data-driven decisions, just like the ones we help our clients make every day.
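
Hopper hasn’t published its algorithm, but the buy-or-wait logic described above can be sketched with a toy model. Everything here (the "fares.csv" file, its columns, the model choice, and the 5% threshold) is an assumption for illustration, not Hopper’s actual system:

```python
# Toy buy-or-wait signal in the spirit of a fare-prediction app.
# "fares.csv" is hypothetical: one row per observed price quote.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

fares = pd.read_csv("fares.csv")  # columns: days_until_departure, day_of_week, fare

X = fares[["days_until_departure", "day_of_week"]]
y = fares["fare"]
model = GradientBoostingRegressor().fit(X, y)

def buy_or_wait(current_fare, days_left, day_of_week):
    """Predict the cheapest expected fare over the remaining days;
    recommend waiting only if a meaningfully lower price is expected."""
    future = pd.DataFrame({
        "days_until_departure": range(1, days_left + 1),
        "day_of_week": [day_of_week] * days_left,  # held constant for simplicity
    })
    expected_min = model.predict(future).min()
    return "wait" if expected_min < 0.95 * current_fare else "buy"

print(buy_or_wait(current_fare=320.0, days_left=30, day_of_week=3))
```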

As a Market Researcher, I’m all about leveraging data to make people’s lives easier. The travel industry’s use of predictive modeling is mutually beneficial—consumers find great deals while airlines enjoy steady sales. My inner globetrotter is constantly looking for ways to travel more often and more affordably, so as I continue to discover new tools that utilize the power of data analytics to find me the best deals, I’m realizing I might need some more vacation days to fit it all in!

So the next time you’re stressed out about booking your next vacation, just remember: sit back, relax, and enjoy the analytics.

Amy M. is a Project Manager at CMB who will continue to channel her inner predictive analyst to plan her next adventure.

Topics: big data, travel and hospitality research, predictive analytics

Big Data Killed the Radio Star

Posted by Mark Doherty

Wed, Jun 29, 2016

It’s an amazing time to be a music fan (especially if you have all those Ticketmaster vouchers and a love of ’90s music). While music production and distribution was once controlled by record label and radio station conglomerates, technology has “freed” it in almost every way. It’s now easy to hear nearly any song ever recorded thanks to YouTube, iTunes, and a range of streaming sources. While these new options appear to be manna from heaven for music lovers, they can actually create more problems than you’d expect. The never-ending flow of music options can make it harder to decide what might be good or what to play next. In the old days (way back in 2010 :)), your music choices were limited by record companies and by radio station programmers. While these “corporate suits” may have prevented you from hearing that great underground indie band, they also “saved” you from thousands of options that you would probably hate.

That same challenge is happening right now with marketers’ use of data. Back in the day (also around 2010), a marketer had a limited number of data sets and sources to leverage in decisions about building and strengthening a brand. Now, that same marketer has access to a seemingly endless flow of data: from web analytics, third-party providers, primary research, and their own CRM systems. While most market information was previously collected and “curated” through the insights department, marketing managers are now often left to their own devices to sift through and determine how useful each set of data is to their business. And it’s not easy for a non-expert to do due diligence on each data source to establish its legitimacy and usefulness. As a result, many marketers are paralyzed by a firehose of data and/or end up trying to use lots of not-so-great data to make business decisions.

So, how do managers make use of all this data? It’s partly the same way streaming sources help music listeners decide what song to play next: predictive analytics. Predictive analytics is changing how companies use data to get, keep, and grow their most profitable customers. It helps managers “cut through the clutter” and analyze a wide range of data to make better decisions about the future of their business. It’s similarly being used in the music industry to help music lovers cut through the clutter of their myriad song choices to find their next favorite song. Pandora’s Music Genome Project is doing just that by developing a recommendation algorithm that serves up choices based on the attributes of the music you have listened to in the past. Similarly, Spotify’s Discover Weekly playlist is a huge hit with music lovers, who appreciate Spotify’s assistance in identifying new songs they may love.
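
Pandora’s attribute data and Spotify’s algorithms are proprietary, but the underlying idea of attribute-based recommendation fits in a few lines. A minimal sketch, with invented attribute vectors:

```python
# Content-based recommender: suggest the catalog track whose attribute
# vector is closest (by cosine similarity) to the listener's average
# taste vector. Attribute scores below are invented for illustration.
import numpy as np

catalog = {
    "Song A": np.array([0.9, 0.1, 0.7]),  # e.g., [tempo, acousticness, vocal focus]
    "Song B": np.array([0.2, 0.8, 0.4]),
    "Song C": np.array([0.8, 0.2, 0.6]),
}
listening_history = ["Song A"]

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

taste = np.mean([catalog[s] for s in listening_history], axis=0)
candidates = {s: cosine(taste, v) for s, v in catalog.items()
              if s not in listening_history}
print(max(candidates, key=candidates.get))  # "Song C", most similar to Song A
```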

So, the next time you need to figure out how to best leverage the range of data you have—or find a new summer jam—consider predictive analytics.

Mark is a Vice President at CMB, and he’s fully embracing his reputation around the office as the DJ of the Digital Age.


Topics: advanced analytics, big data, data integration, predictive analytics

Dear Dr. Jay: Can One Metric Rule Them All?

Posted by Dr. Jay Weiner

Wed, Dec 16, 2015

Hi Dr. Jay –

The city of Boston is trying to develop one key measure to help officials track and report how well the city is doing. We’d like to do that in-house. How would we go about it?

-Olivia


Hi Olivia,

This is the perfect tie-in for big data and the key performance indicator (KPI). Senior management doesn’t really have time to pore over tables of numbers to see how things are going. What they want is a nice barometer that can be used to summarize overall performance. So, how might one take data from each business unit and aggregate them into a composite score?

We begin the process by understanding all the measures we have. Once we have assembled all of the potential inputs to our key measure, we need to develop a weighting system to aggregate them into one measure. This is often the challenge when working with internal data. We need some key business metric to use as the dependent variable, and these data are often missing in the database.

For example, I might have sales by product by customer and maybe even total revenue. Companies often assume that the top revenue clients are the bread and butter for the company. But what if your number one account uses way more corporate resources than any other account? If you’re one of the lucky service companies, you probably charge hours to specific accounts and can easily determine the total cost of servicing each client. If you sell a tangible product, that may be more challenging. Instead of sales by product or total revenue, your business decision metric should be the total cost of doing business with the client or the net profit for each client. It’s unlikely that you capture this data, so let’s figure out how to compute it. Gross profit is easy (net sales – cost of goods sold), but what about other costs like sales calls, customer service calls, and product returns? Look at other internal databases and pull information on how many times your sales reps visited in person or called over the phone, and get an average cost for each of these activities. Then, you can subtract those costs from the gross profit number. Okay, that was an easy one.
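
In code, the client-profitability bookkeeping described above is a few arithmetic columns once the activity counts are pulled from those other internal databases. A minimal sketch, where the table layout and per-activity unit costs are assumptions:

```python
# Net profit per client = gross profit minus estimated servicing costs,
# per the approach described above. Unit costs are assumed averages.
import pandas as pd

clients = pd.DataFrame({
    "client": ["A", "B"],
    "net_sales": [500_000, 200_000],
    "cogs": [300_000, 90_000],
    "sales_visits": [40, 5],    # counts pulled from CRM / call reports
    "service_calls": [120, 10],
    "returns": [15, 1],
})

COST_PER_VISIT, COST_PER_CALL, COST_PER_RETURN = 250, 40, 100  # assumed

clients["gross_profit"] = clients["net_sales"] - clients["cogs"]
clients["servicing_cost"] = (clients["sales_visits"] * COST_PER_VISIT
                             + clients["service_calls"] * COST_PER_CALL
                             + clients["returns"] * COST_PER_RETURN)
clients["net_profit"] = clients["gross_profit"] - clients["servicing_cost"]
print(clients[["client", "gross_profit", "net_profit"]])
# A high-revenue client can rank below a smaller one once servicing
# costs are netted out.
```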

Let’s look at the city of Boston case for a little more challenging exercise. What types of information is the city using? According to the article you referenced, the city hopes to “corral their data on issues like crime, housing for veterans and Wi-Fi availability and turn them into a single numerical score intended to reflect the city’s overall performance.” So, how do you do that? Let’s consider that some of these things have both income and expense implications. For example, as crime rates go up, the attractiveness of the city drops and it loses residents (income and property tax revenues drop). Adding to the lost revenue, the city has the added cost of providing public safety services. If you add up the net gains/losses from each measure, you would have a possible weighting matrix to aggregate all of the measures into a single score. This allows the mayor to quickly assess changes in how well the city is doing on an ongoing basis. The weights can be used by the resource planners to assess where future investments will offer the greatest payback.
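
The same weighting idea can be sketched in a few lines: normalize each measure so that higher is better, then weight by each measure’s estimated net dollar impact. All figures below are invented for illustration:

```python
# One numerical score for the city: flip "bad" measures so higher is
# better, then combine with weights proportional to estimated dollar
# impact. Every number here is invented for illustration.
measures = {"crime_rate": 0.62, "veteran_housing": 0.80, "wifi_coverage": 0.55}  # 0-1 scaled
higher_is_better = {"crime_rate": False, "veteran_housing": True, "wifi_coverage": True}
dollar_impact = {"crime_rate": 12e6, "veteran_housing": 5e6, "wifi_coverage": 3e6}

total = sum(dollar_impact.values())
weights = {k: v / total for k, v in dollar_impact.items()}

score = sum(
    weights[k] * (measures[k] if higher_is_better[k] else 1 - measures[k])
    for k in measures
)
print(f"composite city score: {score:.3f}")  # tracked over time by city hall
```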

Dr. Jay is fascinated by all things data. Your data, our data—he doesn’t care about the source. The more data, the happier he is.

Topics: advanced analytics, Boston, big data, Dear Dr. Jay

Survey Magazine Names CMB’s Talia Fein a 2015 “Data Dominator”

Posted by Talia Fein

Wed, Sep 23, 2015

Every year, Survey Magazine names 10 “Data Dominators,” who are conquering data in different ways at their companies. This year, our very own Talia Fein was chosen. She discusses her passion for data in Survey Magazine’s August issue, and we’ve reposted the article below.

When I first came to CMB, a research and strategy company in Boston, I was fresh out of undergrad and an SPSS virgin. In fact, I remember there being an SPSS test that all new hires were supposed to take, but I couldn’t take it because I didn’t even know how to open a data file. Fast forward a few months, and I had quickly been converted to an SPSS specialist, a numbers nerd, or—perhaps more appropriately—a data dominator. I was a stickler for process and precision in all data matters, and I took great pride in ensuring that all data and analyses were perfect and pristine. To put it bluntly, I was a total nerd.

I recently returned to CMB after a four-year hiatus. When I left CMB, I quickly became the survey and data expert among my new colleagues and the point person for all SPSS and data questions. But it wasn’t just my data skills that were being put to use. To me, data management is also about the process and the organization of data. In my subsequent roles, I found myself looking to improve the data processes and streamline the systems used for survey data. I brought new software programs to my companies and taught my teams how to manage data effectively and efficiently.

When I think about the future of the research industry, I imagine survey research as being the foundation of a house.  Survey data and data management are the building blocks of what we do. When we do them excellently, we are a well-oiled machine. But a well-oiled machine doesn’t sell products or help our clients drive growth. We need to have the foundation in place in order to extend beyond it and to prepare ourselves for the next big thing that comes along. And that next big thing, in my mind, is big data technology. There is a lot of data out there, and a lot of ways of managing and analyzing it, and we need to be ready for that.  We need to expand our ideas about where our data is coming from and what we can do with it. It is our job to connect these data sources and to find greater meaning than we were previously able to. It is this non-traditional use of data and analytics that is the future of our industry, and we have to be nimble and creative in order to best serve our clients’ ever-evolving needs.

One recent example of this is CMB’s 2015 Mobile Wallet study, which leveraged multiple data sources and—in the process—revealed which were good for what types of questions. In the case of this research, we analyzed mobile behavioral data, including mobile app and mobile web usage, along with survey-based data to get a full picture of consumers’ behaviors, experiences, and attitudes toward mobile wallets. We also came away with new best practices for managing passive mobile behavioral data, as it presents challenges distinct from managing survey data. Our clients are making big bets on new technology, and they need the comprehensive insights that come from integrating multiple sources. We specifically sampled different sources because we know that—in practice—many of our clients are being handed multiple data sets from multiple data sources. In order to best serve these clients, we need to be able to leverage all the data sources at our and their disposal so that we can glean the best insights and make the best recommendations.

Talia Fein is a Project & Data Manager at Chadwick Martin Bailey (CMB), a market research consulting firm in Boston. She’s responsible for the design and execution of market research studies for Fortune 500 companies as well as the data processing and analysis through all phases of the research. Her portfolio includes clients such as Dell, Intel, and Comcast, and her work includes customer segmentation, loyalty, brand tracking, new product development, and win-loss research.

Topics: our people, big data, data integration

Dear Dr. Jay: Data Integration

Posted by Jay Weiner, PhD

Wed, Aug 26, 2015

Dear Dr. Jay,

How can I explain the value of data integration to my CMO and other non-research folks?

- Jeff B. 


 


Hi Jeff,

Years ago, at a former employer that will remain unnamed, we used to entertain ourselves by playing Buzzword Bingo in meetings. We’d create Bingo cards with 30 or so words that management liked to use (“actionable,” for instance). You’d be surprised how fast you could fill a card. If you have attended a conference in the past few years, you know we as market researchers have plenty of new words to play with. Think: big data, integrated data, passive data collection, etc. What do all these new buzzwords really mean to the research community? It boils down to this: we potentially have more data to analyze, and the data might come from multiple sources.

If you only collect primary survey data, then you typically only worry about sample reliability, measurement error, construct validity, and non-response bias. However, with multiple sources of data, we need to worry about all of that plus the level of aggregation, the impact of missing data, and the accuracy of the data. When we get a database of information to append to survey data, we often don’t question the contents of that file... but maybe we should.

A client recently sent me a file with more than 100,000 records (ding ding, “big data”). Included in the file were survey data from a number of ad hoc studies conducted over the past two years as well as customer behavioral data (ding ding, “passive data”). And, it was all in one file (ding ding, “integrated data”). BINGO!

I was excited to get this file for a couple of reasons. One, I love to play with really big data sets, and two, I was able to start playing right away. Most of the time, clients send me a bunch of files, and I have to do the integration/merging myself. Because this file was already integrated, I didn’t need to worry about having unique and matching record identifiers in each file.

Why would a client have already integrated these data? Well, if you can add variables to your database and append attitudinal measures, you can improve the value of the modeling you can do. For example, let’s say that I have a Dunkin’ Donuts (DD) rewards card, and every weekday, I stop by a DD close to my office and pick up a large coffee and an apple fritter. I’ve been doing this for quite some time, so the database modelers feel fairly confident that they can compute my lifetime value from this pattern of transactions. However, if the coffee was cold, the fritter was stale, and the server was rude during my most recent transaction, I might decide that McDonald’s coffee is a suitable substitute and stop visiting my local DD store in favor of McDonald’s. How many days without a transaction will it take the DD algorithm to decide that my lifetime value is now $0.00? If we had the ability to append customer experience survey data to the transaction database, maybe the model could be improved to more quickly adapt. Maybe even after 5 days without a purchase, it might send a coupon in an attempt to lure me back, but I digress.
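
To make the point concrete, here is a minimal sketch of that coupon trigger, assuming hypothetical thresholds and a survey-based satisfaction score appended to the transaction record:

```python
# Sketch of the coupon trigger described above: a purely transactional
# rule waits a long time before reacting, but an appended satisfaction
# score lets it move sooner. Fields and thresholds are assumptions.
from datetime import date
from typing import Optional

def should_send_coupon(last_purchase: date, today: date,
                       last_satisfaction: Optional[int]) -> bool:
    days_silent = (today - last_purchase).days
    if last_satisfaction is not None and last_satisfaction <= 2:
        # A recent bottom-box experience: react after a short gap.
        return days_silent >= 5
    # No warning sign in the survey data: wait for a clearly broken pattern.
    return days_silent >= 30

print(should_send_coupon(date(2015, 8, 1), date(2015, 8, 10),
                         last_satisfaction=1))  # True
```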

Earlier, I suggested that maybe we should question the contents of the database. When the client sent me the file of 100,000 records, I’m pretty sure that was most (if not all) of the records that had both survey and behavioral measures. Considering the client has millions of account holders, that’s actually a sparse amount of data. Here’s another thing to consider: how well do the two data sources line up in time? Even if 100% of my customer records included overall satisfaction with my company, these data may not be as useful as you might think. For example, overall satisfaction in 2010 and behavior in 2015 may not produce a good model. What if some of the behavioral measures were missing values? If a customer recently signed up for an account, then his/her 90-day behavioral data elements won’t get populated for some time. This means that I would need to either remove these respondents from my file or build unique models for new customers.
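
Those two checks (time alignment and unpopulated 90-day fields) are easy to make explicit before modeling. A minimal sketch in pandas, with a hypothetical file and column names:

```python
# Keep only records whose survey and behavioral observations fall within
# a year of each other, and drop accounts too new for their 90-day
# behavioral fields to be populated. File and columns are hypothetical.
import pandas as pd

df = pd.read_csv("integrated_file.csv",
                 parse_dates=["survey_date", "behavior_snapshot_date",
                              "account_opened"])

aligned = df[(df["behavior_snapshot_date"] - df["survey_date"]).abs()
             <= pd.Timedelta(days=365)]

mature = aligned[(aligned["behavior_snapshot_date"] - aligned["account_opened"])
                 >= pd.Timedelta(days=90)]

print(f"{len(mature)} of {len(df)} records usable for a single pooled model")
```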

The good news is that there is almost always some value to be gained in doing these sorts of analyses. As long as we’re cognizant of the quality of our data, we should be safe in applying the insights.

Got a burning market research question?

Email us! OR Submit anonymously!

Dr. Jay Weiner is CMB’s senior methodologist and VP of Advanced Analytics. Jay earned his Ph.D. in Marketing/Research from the University of Texas at Arlington and regularly publishes and presents on topics, including conjoint, choice, and pricing.

Topics: advanced analytics, big data, Dear Dr. Jay, data integration, passive data

Dear Dr. Jay: Predictive Analytics

Posted by Dr. Jay Weiner

Mon, Apr 27, 2015


Dear Dr. Jay, 

What’s hot in market research?

-Steve W., Chicago

 

Dear Steve, 

We’re two months into my column, and you’ve already asked one of my least favorite questions. But I will give you some credit—you’re not the only one asking such questions. In a recent discussion on LinkedIn, Ray Poynter asked folks to anticipate the key MR buzzwords for 2015. Top picks included “wearables” and “passive data.” While these are certainly topics worthy of conversation, I was surprised Predictive Analytics (and Big Data) didn’t get more hits from the MR community. My theory: even though the MR community has been modeling data for years, we often don’t have the luxury of getting all the data that might prove useful to the analysis. It’s often clients who are drowning in a sea of information—not researchers.

On another trending LinkedIn post, Edward Appleton asked whether “80% Insights Understanding” is increasingly “good enough.” Here’s another place where Predictive Analytics may provide answers. Simply put, Predictive Analytics lets us predict the future based on a set of known conditions. For example, if we were able to improve our order processing time from 48 hours to 24 hours, Predictive Analytics could tell us the impact that would have on our customer satisfaction ratings and repeat purchases. Another example using non-survey data is predicting concept success using GRP buying data.
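
As a toy illustration of that order-processing example, a simple regression on historical (processing time, satisfaction) pairs can produce exactly that kind of what-if estimate. The data points below are invented:

```python
# Fit satisfaction as a function of order-processing hours on historical
# data, then predict the rating at a hypothetical 24-hour turnaround.
import numpy as np
from sklearn.linear_model import LinearRegression

hours = np.array([[72], [60], [48], [36], [30]])
satisfaction = np.array([6.1, 6.6, 7.0, 7.6, 7.9])  # 10-point scale, invented

model = LinearRegression().fit(hours, satisfaction)
print(f"predicted satisfaction at 48h: {model.predict([[48]])[0]:.1f}")
print(f"predicted satisfaction at 24h: {model.predict([[24]])[0]:.1f}")
```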


What do you need to perform this task?

  • We need a dependent variable we would like to predict. This could be loyalty, likelihood to recommend, likelihood to redeem an offer, etc.
  • We need a set of variables that we believe influences this measure (independent variables). These might be factors that are controlled by the company, market factors, and other environmental conditions.
  • Next, we need a data set that has all of this information. This could be data you already have in house, secondary data, data we help you collect, or some combination of these sources of data.
  • Once we have an idea of the data we have and the data we need, the challenge becomes aggregating the information into a single database for analysis. One key challenge in integrating information across disparate sources of data is figuring out how to create unique rows of data for use in model building. We may need a database wizard to help merge the multiple data sources that we deem useful to modeling. This is probably the step in the process that requires the most time and effort. For example, we might have 20 years’ worth of concept measures and the GRP buys for each product launched. We can’t assign the GRPs for each concept to each respondent in the concept test; if we did, there wouldn’t be much variation in the data for a model. The observation level becomes the concept. We then aggregate the individual-level responses for each concept and append the GRP data. Now the challenge becomes the number of observations in the data set we’re analyzing.
  • Lastly, we need a smart analyst armed with the right statistical tools. Two tools we find useful for predictive analytics are Bayesian networks and TreeNet. Both tools are useful for different types of attributes. More often than not, we find data sets made up of scale data, ordinal data, and categorical data, so it’s important to choose a tool that can work with all of this information. (A minimal sketch of the aggregation and modeling steps follows this list.)
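
Here is that sketch, using hypothetical files and scikit-learn’s gradient boosting as a stand-in for a commercial tool like TreeNet (all column names are assumptions):

```python
# Aggregate respondent-level concept scores to one row per concept, append
# the GRP buy, then fit a tree ensemble. Files and columns are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

raw = pd.read_csv("concept_tests.csv")   # respondent_id, concept_id, purchase_intent, uniqueness
grps = pd.read_csv("grp_buys.csv")       # concept_id, grps, launch_sales

concepts = (raw.groupby("concept_id")[["purchase_intent", "uniqueness"]]
               .mean()                   # the observation level becomes the concept
               .reset_index()
               .merge(grps, on="concept_id"))

X = concepts[["purchase_intent", "uniqueness", "grps"]]
y = concepts["launch_sales"]
model = GradientBoostingRegressor().fit(X, y)
print(dict(zip(X.columns, model.feature_importances_.round(2))))
# With 20 years of launches the row count is small, so validate carefully.
```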

The truth is, we’re always looking for the best (fastest, most accurate, useful, etc.) way to solve client challenges—whether they’re “new” or not. 

Got a burning research question? You can send your questions to DearDrJay@cmbinfo.com or submit anonymously here.

Dr. Jay Weiner is CMB’s senior methodologist and VP of Advanced Analytics. Jay earned his Ph.D. in Marketing/Research from the University of Texas at Arlington and regularly publishes and presents on topics, including conjoint, choice, and pricing.

Topics: advanced analytics, big data, Dear Dr. Jay, passive data

Reaping the Rewards of Big Data

Posted by Heather Magaw

Thu, Apr 09, 2015

It’s both an exciting and challenging time to be a researcher. Exciting because we can collect data at speeds our predecessors could only dream about, and challenging because we must help our partners stay nimble enough to really benefit from this data deluge. So, how do we help our clients reap the rewards of Big Data without drowning in it?

Start with the end in mind: If you’re a CMB client, you know that we start every engagement with the end in mind before a single question is ever written. First, we ask what business decisions the research will help answer. Once we have those, we begin to identify what information is necessary to support those decisions. This keeps us focused and informs everything from questionnaire design to implementation.

Leverage behavioral and attitudinal data: While business intelligence (BI) teams have access to mountains of transactional, financial, and performance data, they often lack insight into what drives customer behavior, which is a critical element of understanding the full picture. BI teams are garnering more and more organizational respect due to data access and speed of analysis, yet market research departments (and their partners like CMB) are the ones bringing the voice of the customer to life and answering the “why?” questions.

Tell a compelling story: One of the biggest challenges of having “too much” data is that data from disparate sources can provide conflicting information, and time-starved decision makers can’t sort through all of it in great detail. In a world in which data is everywhere, the ability to take insights beyond a bar chart and bring them to life is critical. It’s why we spend a lot of time honing our storytelling skills and not just our analytic chops. We know that multiple data sources must be analyzed from different angles and through multiple lenses to provide both a full picture and one that can be acted upon.

Big Data is ripe with potential. Enterprise-level integration of information has the power to change the game for businesses of all sizes, but data alone isn’t enough. The keen ability to ask the right questions and tell a holistic story based on the results gives our clients the confidence to make those difficult investment decisions. 2014 was the year of giving Big Data a seat at the table, but for the rest of 2015, market researchers need to make sure their seat is also reserved so that we can continue to give decision makers the real story of the ever-changing business landscape.

Heather is the VP of Client Services, and she admits to becoming stymied by analysis paralysis when too much data is available. She confesses that she resorts to selecting restaurants and vacation destinations based solely on verbal recommendations from friends who take the time to tell a compelling story instead of slogging through an over-abundance of online reviews.

Topics: big data, storytelling, business decisions

Dear Dr. Jay: Mining Big Data

Posted by Dr. Jay Weiner

Tue, Mar 17, 2015

Dear Dr. Jay,

We’ve been testing new concepts for years. The magic score to move forward in the new product development process is a 40% top-2-box score on purchase intent on a 5-point scale. How do I know if 40% is still a good benchmark? Are there any other measures that might be useful in predicting success?

-Normatively Challenged

 

Dear Norm,

I have some good news—you may have a big data mining challenge. Situations like yours are why I always ask our clients two questions: (1) what do you already know about this problem, and (2) what information do you have in-house that might shed some light on a solution? You say you’ve been testing concepts for years.  Do you have a database of concepts already set up? If not, can you easily get access to your concept scores?

Look back on all of the concepts you have ever tested, and try to understand what makes for a successful idea. In addition to all the traditional concept test measures like purchase intent, believability, and uniqueness, you can also append marketing spend, distribution measures, and perhaps even social media trend data. You might even want to include economic condition information like the rate of inflation, the prime rate of interest, and the average level of the Dow. While many of these appended variables might be outside of your control, they may serve to help you understand what might happen if you launch a new product under various market conditions.
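
One concrete way to pressure-test the 40% benchmark, once such a database exists, is to model in-market success against historical concept scores and see where purchase intent actually starts to separate winners from losers. A minimal sketch, with a hypothetical file and columns:

```python
# Audit the benchmark: regress in-market success on historical concept
# measures, then probe predicted success probability at several top-2-box
# levels. File and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

db = pd.read_csv("concept_history.csv")  # top2_intent, uniqueness, ad_spend, succeeded

X = db[["top2_intent", "uniqueness", "ad_spend"]]
model = LogisticRegression().fit(X, db["succeeded"])

probe = pd.DataFrame({"top2_intent": [0.35, 0.40, 0.45],
                      "uniqueness": [0.5] * 3,
                      "ad_spend": [db["ad_spend"].mean()] * 3})
for t, p in zip(probe["top2_intent"], model.predict_proba(probe)[:, 1]):
    print(f"P(success | top-2-box = {t:.0%}) = {p:.2f}")
```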

Take heart, Norm: you are most definitely not alone. In fact, I recently attended a presentation on Big Data hosted by the Association of Management Consulting Firms. There, Steve Sashihara, CEO of Princeton Consultants, suggested there are four key stages for integrating big data into practice. The first stage is to monitor the market. At CMB, we typically rely on dashboards to show what is happening. The second stage is to analyze the data: are you improving, getting worse, or just holding your own? However, only going this far with the data doesn’t really provide any insight into what to do. To take it to the next level, you need to enter the third stage: building predictive models that forecast what might happen if you make changes to any of the factors that impact the results. The true value to your organization is really in the fourth stage of the process—recommending action. The tools that build models have become increasingly powerful in the past few years. The computing power now available permits you to model millions of combinations to determine the optimal outcomes from all possible executions.

In my experience, there are usually many attributes that can be improved to optimize your key performance measure. In modeling, you’re looking for the attributes with the largest impact and the cost associated with implementing those changes to your offer. It’s possible that the second best improvement plan might only cost a small percentage of the best option. If you’re in the business of providing cellular device coverage, why build more towers if fixing your customer service would improve your retention almost as much?

Got a burning research question? You can send your questions to DearDrJay@cmbinfo.com or submit anonymously here.

Dr. Jay Weiner is CMB’s senior methodologist and VP of Advanced Analytics. Jay earned his Ph.D. in Marketing/Research from the University of Texas at Arlington and regularly publishes and presents on topics, including conjoint, choice, and pricing.

Topics: advanced analytics, product development, big data, Dear Dr. Jay

Follow the Humans: Insights from CASRO’s Digital Research Conference

Posted by Jared Huizenga

Mon, Mar 09, 2015

I once again had the pleasure of attending the CASRO Digital Research Conference this year. It’s one of the best conferences available to data collection geeks like me, and this year’s presentations did not disappoint. Here are a few key takeaways from this year’s conference.

1. The South shuts down when it snows. After a great weekend in Nashville following the conference, my flight was cancelled on Monday due to about an inch of snow and a little ice. Needless to say, I was happy to return to Boston and its nine feet of snow.

2. “Big data” is an antiquated term. Over the past few years, big data has been the big buzz in the industry. Much like we said goodbye to traditional “market research,” we can now say adios to “big data.” Good riddance. The term was vague at best. However, that doesn’t mean that the concept is going away. It’s simply being replaced by new, more meaningful terminology like “integrated data” and “multi-sourced data.” But one thing isn’t changing. . .

3. Researchers still don’t know what to do with all that data. What can I say about multi-sourced data that I haven’t already said many times over the past couple years? Clients still want it, and researchers still want to oblige. But this fact remains: adequate tools still do not exist to deliver meaningful integrated data in most cases. We have a long way to go before most researchers will be able to leverage all of this data to its full potential in a meaningful way for our clients.

4. There’s a lot more to mobile research than how a questionnaire looks on a screen. For the past three or four years, it seems like every year is going to be “the year of mobile” at these types of conferences. Because of this, I always attend the mobile-related sessions skeptically. When we talk about mobile, more often than not, the main concern is how the questionnaire will look on a mobile device. But mobile research is much more than that. One of the best things I heard at the conference this year was that researchers should “follow the humans.” This is true on so many levels. Of course, a person can respond to a questionnaire invitation on his/her mobile device, but so much of a person’s daily life, including behaviors and attitudes, is shaped by mobile. Welcome to the world of the ultra-informed consumer. I can confidently say that 2015 is most definitely the year of mobile! (I do, however, reserve the right to say the same thing again next year.)

5. Researchers need to think like humans. It’s easy to get caught up in percentages in our world, and researchers sometimes lose sight of the human aspect of our industry. We like to think that millionaire CEOs are constantly checking their emails on their desktop computers, waiting for their next “opportunity” to take a 45-minute online questionnaire for a twenty-five cent reward. I attended sessions at the conference about gamification, how to make questionnaires more user-friendly, and also how to make questionnaires more kid-friendly by adding voice-to-text and text-to-voice options. All of these things have the potential to ease the burden on research participants, and as an industry, this must happen. We have a long way to go, but. . .

6. Now is the time to play catch-up with the rest of the world. Last year, I ended my recap by saying that change is happening faster than ever. I still think that’s true about the world we live in. With all of the technological advances and new opportunities provided to us, it’s an exciting time to be alive. However, I’m not sure I can honestly say that change is happening faster than ever when it comes to the world of research. I’ve been a part of this industry for a very fulfilling seventeen years, and sometimes my pride in the industry clouds my thinking. Let’s face the facts: as an industry, we are lagging far behind as the world speeds by. Research techniques and tools are evolving at a very slow pace, and I don’t see this changing in the near future. (In our defense, this is true for many industries, not only market research.) I still believe that those of us who are working to leverage the changing world we live in will be much better equipped for success than those who sit idly and watch the world fly by.

I’m still confident that my industry is primed and ready for significant and meaningful change—even if we sometimes take the path of a tortoise. As a weekend pitmaster, I know that low and slow is sometimes the best approach. The end result is what really counts.

Jared is CMB’s Field Services Director and has been in the market research industry for seventeen years. When he isn’t enjoying the exciting world of data collection, he can be found competing at barbecue contests as the pitmaster of the cooking team Insane Swine BBQ.

 

Topics: big data, mobile, research design, conference recap

5 Key Takeaways from The Quirk's Event

Posted by Jen Golden and Ashley Harrington

Thu, Mar 05, 2015

Last week, we spent a few days networking with and learning from some of the industry’s best and brightest at The Quirk's Event. At the end of the day, a few key ideas stuck out to us, and we wanted to share them with you.

1. Insights need to be actionable: This point may seem obvious, but multiple presenters at the conference hammered it home. Corporate researchers are shifting from a primarily separate entity to a more consultative role within the organization, so they need to deliver insights that directly support business decisions (vs. passing along a 200-slide data dump). This mindset should flow through the entire lifespan of a project—starting at the beginning by crafting a questionnaire that truly speaks to the business decisions that need to be made (and cuts out all the fluff that may be “nice to have” but is not actionable) all the way to thoughtful analysis and reporting. Taking this approach will help ensure final deliverables aren’t left collecting dust and are instead used to drive engagement across the organization.

2. Allocate time and resources to socializing these insights throughout the organization: All too often, insightful findings are left sitting on a shelf when they have the potential to be useful across an organization. Several presenters shared creative approaches to socializing the data so that it lives long after the project has ended. From transforming a conference room with life-size cut-outs of key customer segments to creating an app employees can use to access data points quickly and on the go, researchers and their partners are getting creative with how they share findings. The most effective researchers think about research results as a product to be marketed to their stakeholders.
 
3. Leverage customer data to help validate primary research: Most organizations have a plethora of data to work with, ranging from internal customer databases to secondary sources to primary research. These various sources can be leveraged to paint a full picture of the consumer (and to help validate findings). Etsy (a peer-to-peer e-commerce site) talked about comparing data collected from its customer database to its own primary research to see if what buyers and sellers said they did on the site aligned with what they actually did. For Etsy, past self-reported behaviors (e.g., number of purchases, number of times someone “favorites” a shop, etc.) aligned strongly with its internal database, but future behavior (e.g., likelihood to buy from Etsy in the future) did not. Future behaviors might not be something we can easily predict by asking directly in a survey, but that data could be helpful as another way to identify customer loyalty or advocacy. A note of caution: if you plan on doing this data comparison, make sure the wording in your questionnaire aligns with what you plan on matching in your existing database. This ensures you’re getting an apples-to-apples comparison.
 
4. Be cautious when comparing cross-country data: A multi-country study typically calls for a “global overview” or cross-country comparison, but this can lead to inaccurate recommendations. Most are aware of cultural biases such as extreme response (e.g., Brazilian respondents often rate higher on rating scales while Japanese respondents tend to rate lower) or acquiescence (e.g., respondents in China often want to please the interviewer), and these biases should be kept in the back of your mind when delving into the final data. Comparing scaled data directly between countries with very different rating tendencies could lead to falsely thinking one country is underperforming. A better indication of performance would be an in-country comparison to competitors or in-country trending data.
 
5. Remember your results are only as useful as your design is solid: A large number of stakeholders invested in a study’s outcome can lead to a project designed by committee, since each stakeholder will inevitably have different needs, perspectives, and even vocabularies. A presenter shared an example from a study that asked recent mothers, “How long was your baby in the hospital?” Some respondents thought the question referred to the baby’s length, so they answered in inches. Others thought the question referred to the baby’s stay in the hospital, so they answered in days. Therein lies the problem. Throughout the process, it’s our job to ensure that all of the feedback and input from multiple stakeholders adheres to the fundamentals of good questionnaire design: clarity, answerability, ease, and lack of bias.

Have you been to any great conferences lately and have insights to share? Tell us in the comments!

Jen is a Project Manager on the Tech practice who always has the intention to make a purchase on Etsy but never actually pulls the trigger.  

Ashley is a Project Manager on the FIH/RTE practice who has pulled the trigger on several Etsy items (as evidenced in multiple “vintage” tchotchkes and half-complete craft projects around her home).

Topics: big data, research design, conference recap