What They Didn’t Teach You in Marketing Research Class: Sig Testing

Posted by Amy Maret on Mon, Feb 03, 2014


As a recent graduate and new entrant into the world of professional market research, I have some words of wisdom for college seniors looking for a career in the industry. You may think your professors prepared you for the “real world” of market research, but there are some things you didn’t learn in your Marketing Research class. So what’s the major difference between research at the undergrad level and the work of a market researcher? In the real world, context matters, and there are real consequences to our research. One example of this is how we approach testing for statistical significance.

Starting in my freshman year of college, I was taught to abide by a concept that I came to think of as the “Golden Rule of Research.” According to this rule, if a difference isn’t statistically significant at the 95% or 90% confidence level, you should consider it essentially meaningless.

Entering the world of market research, I quickly found that this rule doesn’t always hold when the research is meant to inform real business decisions. Although significance testing can be a helpful tool for interpreting results, ignoring a substantial difference simply because it does not cross the thin line into statistical significance can be a real mistake.

Our Chief Methodologist, Richard Schreuer, gives this example of why this “Golden Rule” doesn’t always make sense in the real world:

Imagine a manager gets the results of a concept test in which a new ad outperforms the old by a score of 54% to 47%; sig testing shows our manager can be 84% confident the new ad will do better than the old ad. The problem in the market research industry is that we typically assess significance at the 95% or 90% level; if the difference between scores doesn’t pass this strict threshold, it is often assumed that no difference exists.

However, in this case, we can be quite sure that the new ad is not worse than the old (there’s only a 1% chance that the new ad’s score is below the old ad’s). So the manager has an 84% chance of improving her advertising and a 1% chance of hurting it if she switches to the new creative: pretty good odds. The realistic worst case is that the new creative performs about the same as the old. In other words, there is real upside in going with the new creative and little downside (save the production expense). But if the manager had relied on industry-standard significance testing, she would likely have dismissed the new creative immediately.
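For readers curious where a number like that 84% comes from, here is a minimal sketch using a two-proportion z-test. The post reports the scores (54% vs. 47%) but not the sample sizes, so the n = 100 respondents per cell below is an assumption; it happens to reproduce the quoted 84% figure.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical concept-test inputs. Sample sizes are assumed (the post
# doesn't report them); n = 100 per cell reproduces the quoted 84%.
n_new, p_new = 100, 0.54   # new ad's score
n_old, p_old = 100, 0.47   # old ad's score

# Standard error of the difference between two independent proportions.
se = sqrt(p_new * (1 - p_new) / n_new + p_old * (1 - p_old) / n_old)

# z-score for the observed 7-point gap.
z = (p_new - p_old) / se

# One-sided confidence that the new ad truly outperforms the old.
confidence = NormalDist().cdf(z)
print(f"z = {z:.2f}; confidence new ad beats old = {confidence:.0%}")
# -> z = 0.99; confidence new ad beats old = 84%
```

Under this sketch, the same 7-point gap would clear the conventional 95% bar only with a much larger sample (roughly 400 respondents per cell). At n = 100 it sits at 84%, which the “Golden Rule” would round down to “no difference.”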

At CMB, it doesn’t take long to get the sense that there is something much bigger going on here than just number crunching. Creating usable, meaningful research and telling a cohesive story require more than an understanding of the numbers themselves; they take creativity and a solid grasp of our clients’ businesses and their needs. As much as I love working with the data, the most satisfying part of my job is seeing how our research and recommendations support real decisions that our clients make every day, and that’s not something I ever could have learned in school.

Amy is a recent graduate of Boston College, where she realized that she had a much greater interest in statistics than the average student. She is 95% confident that this is a meaningful difference.

 

Join CMB’s Amy Modini on February 20th at 12:30 pm ET to learn how we use discrete choice to better position your brand in a complex, changing market. Register here.

 

Topics: Chadwick Martin Bailey, Advanced Analytics, Methodology, Business Decisions