How Target Knows You're Pregnant: A Predictive Analysis Perspective

Posted by Jeff McKenna

Tue, Feb 21, 2012

On Sunday, The New York Times Magazine published a piece, How Companies Learn Your Secrets, by Charles Duhigg, author of the forthcoming The Power of Habit: Why We Do What We Do in Life and Business. It's an interesting article, especially for market researchers, and I recommend everyone take the time to read it.

Consumer "habits” are a big focus of the work we (market researchers) do as we seek to understand consumer behavior. From the perspective of the article, a large part of what we do is identify behavioral habits to help marketers find ways to insert their product or service into people's habit processes. 

In this blog, I want to focus on the insights the story shares about predictive analytics. Much of Duhigg's article looks at how Target uses advanced analytics on data in its CRM system to predict whether a shopper is expecting a baby. From a business-process point of view, and in terms of how we think about using predictive analytics, a few relevant facts are worth pointing out for market researchers:

  1. It wasn't a "fishing expedition": The analysis started with a clear marketing benefit as the outcome: Target wanted to begin promoting itself to expectant mothers before the baby is born. As the article points out, by marketing to these families before the baby becomes public knowledge, Target can beat the flood of marketers who start pitching a range of products and services once the birth enters the public record. It was the marketing team that came to the analyst with a high-value opportunity. The analyst did not create the winning marketing idea ("Hey! Let's market to expectant mothers before the baby is born!"); instead, the analyst looked under every stone and in every corner of the data to find the key to unlock the opportunity (a rough sketch of what such a scoring model might look like follows this list).

  2. The research didn't stop with finding the key: Applying these insights required a lot more research to determine the best way to implement the campaign. For instance, Target ran several test campaigns to identify the best offers to send to expectant mothers, and cycled through several messages to find just the right one so shoppers wouldn't realize how closely Target was reading their purchase data. Although predictive analytics found the key, Target still relied on a comprehensive plan to make sure the findings were used in the best possible way.

  3. Don't let this story inflate your expectations: The Target approach has had a big impact on how the company markets to a highly valuable segment of shoppers. It's a great success story, but it's also something that happened ten years ago. While I'm sure the Guest Market Analytics team achieves many victories along the way, they also spend a lot of time hitting dead ends, unable to find that magic key. And most of the time, a predictive solution yields valuable but incremental gains; high-profile stories like this one are few and far between.
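Duhigg's article describes the output of this work as a "pregnancy prediction" score built from purchase histories (with baby-registry sign-ups as the known cases), but it doesn't reveal the model itself. As a rough sketch of the general idea only, here is what a purchase-based scoring model might look like, assuming a simple logistic regression over basket flags; the feature names, data, and the shopper scored at the end are all invented for illustration and are not Target's actual method.

```python
# A rough sketch, not Target's actual method: a logistic regression over
# hypothetical purchase flags. All feature names and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per shopper; 1 = bought the item recently. Hypothetical columns:
# [unscented lotion, mineral supplements, oversized cotton balls]
X = np.array([
    [1, 1, 1],
    [1, 1, 0],
    [0, 1, 1],
    [0, 0, 0],
    [1, 0, 0],
    [0, 0, 1],
])
# Training labels: 1 = shopper later signed up for a baby registry.
y = np.array([1, 1, 1, 0, 0, 0])

model = LogisticRegression().fit(X, y)

# Score a new shopper's basket; a high probability would flag the shopper
# for the expectant-mother campaign.
new_shopper = np.array([[1, 1, 0]])
print(model.predict_proba(new_shopper)[0, 1])  # estimated P(expecting)
```

As point 2 above stresses, the score is only the key; deciding how to act on it took its own program of test campaigns.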

The article shares many interesting ideas and insights; the story about the repositioning of Febreze highlights another great research success. I'm looking forward to reading Duhigg's book, and if it covers more of these thought-provoking business cases, I expect we'll be seeing Charles Duhigg's name popping up in other discussions of market research.

Did you read the article? What do you think?


Did you miss our latest Webinar? Learn how Aflac Unleashed the Power of Discrete Choice, Positioning their Brand for the Future 


Posted by Jeff McKenna. Jeff is a Senior Consultant at CMB and the creator and host of our Tools and Techniques Webinar Series.


Topics: Advanced Analytics, Consumer Insights, Marketing Science, Customer Experience & Loyalty, Retail

Compilation Scores: Look Under the Hood

Posted by Cathy Harrison

Wed, Aug 03, 2011

My kid is passionate about math, and based on every quantitative indication, he excels at it. So you can imagine our surprise when he didn't qualify for next year's advanced math program. Apparently he barely missed the cut-off score, a compilation of two quantitative data sources and one qualitative source. Given this injustice, I dug into the school's evaluation method (hold off your sympathy for the school administration just yet).

Undoubtedly, the best way to get a comprehensive view of a situation is to consider both quantitative and qualitative information from a variety of sources. By using this multi-method approach, you are more likely to get an accurate view of the problem at hand and better able to make an informed decision. Sometimes it makes sense to combine data from different sources into a "score" or "index," which gives the decision-maker a shorthand way of comparing things: a brand, a person, or how something changes over time.

These compilation scores, or indices, are widely used and can be quite useful, but their validity depends on the sources used and how they are combined. In the case of the math evaluation, there were two quantitative sources and one qualitative source. The quantitative sources were the results of a math test conducted by the school (the CTP4) and a statewide standardized test (the MCAS). The qualitative source was the teacher's observations of the child across ten variables, each rated on a 3-point scale. For the most part, I don't have a problem with these data sources. The problem was in the weighting of the scores.

I'm not suggesting the quantitative data is totally bias-free, but at least the kids are evaluated on a level playing field: they either get the right answer or they don't. The teacher evaluation is open to many more biases (such as a teacher's preference for certain personality types, or for the kids of colleagues or teacher's aides). Yet the qualitative component was given a 39% weight, equal to the CTP4's ("for balance") and greater than the MCAS's 22%. That puts a great deal of influence in the hands of one person; in this case, it was enough to override the superior quantitative scores and disqualify my kid.
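To see how much leverage that weighting gives the qualitative rating, here is a minimal sketch of the compilation score using the weights described above (39% CTP4, 39% teacher observation, 22% MCAS). The score scales, the student profiles, and the cut-off value are hypothetical.

```python
# A minimal sketch of a weighted compilation score, using the weights
# described in the post. Scores and the cut-off are hypothetical.
WEIGHTS = {"ctp4": 0.39, "teacher": 0.39, "mcas": 0.22}

def compilation_score(ctp4_pct, mcas_pct, teacher_pct):
    """Combine three normalized scores (0-100) into one weighted index."""
    return (WEIGHTS["ctp4"] * ctp4_pct
            + WEIGHTS["teacher"] * teacher_pct
            + WEIGHTS["mcas"] * mcas_pct)

# A student with top quantitative scores but a middling teacher rating:
strong_quant = compilation_score(ctp4_pct=95, mcas_pct=96, teacher_pct=60)
# A student with average quantitative scores but a glowing teacher rating:
strong_qual = compilation_score(ctp4_pct=80, mcas_pct=78, teacher_pct=95)

print(round(strong_quant, 1))  # 81.6 -- below a hypothetical cut-off of 85
print(round(strong_qual, 1))   # 85.4 -- above it
```

With these (made-up) numbers, the teacher's rating alone decides which student clears the cut-off, even though the first student outscored the second on both tests.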

Before you think this is just the rant of a miffed parent with love blinders on, think of this evaluation process as if it were a corporate decision that had millions of dollars at stake.  Would you be comfortable with this evaluation system?

In my opinion, a fairer evaluation process would have been to qualify students based on the quantitative data (especially since two sources were available) and then, for those on the "borderline," use the qualitative data to make the call. Qualitative data is rarely combined with quantitative data in an index; its purpose is to explore a topic before quantification or to bring "color" to the quantitative results. As you can imagine, I have voiced this opinion to the school administration, but am unlikely to be able to reverse the decision.
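For concreteness, here is a minimal sketch of that two-stage rule, assuming hypothetical thresholds: the quantitative scores decide the clear cases, and the teacher's rating is consulted only on the borderline.

```python
# A minimal sketch of the two-stage rule proposed above. All thresholds
# are hypothetical; the point is the structure, not the numbers.
QUALIFY_AT = 90     # average quantitative score for automatic qualification
BORDERLINE_AT = 85  # below this, no qualification regardless of the rating

def qualifies(ctp4_pct, mcas_pct, teacher_pct):
    quant = (ctp4_pct + mcas_pct) / 2
    if quant >= QUALIFY_AT:
        return True                   # clear quantitative case
    if quant >= BORDERLINE_AT:
        return teacher_pct >= 80      # borderline: qualitative tiebreaker
    return False                      # clear non-qualification

print(qualifies(95, 96, 60))  # True: strong tests qualify on their own
print(qualifies(87, 85, 90))  # True: borderline, rescued by the rating
print(qualifies(87, 85, 60))  # False: borderline, rating doesn't rescue
```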

What's the takeaway for you? Be careful how you create or evaluate indices and "scores." They are only as good as what goes into them.

Posted by Cathy Harrison.  Cathy is a client services executive at CMB and has a passion for strategic market research, social media, and music.  You can follow Cathy on Twitter at @virtualMR     


Topics: Advanced Analytics, Methodology, Qualitative Research, Quantitative Research