During a recent social media webinar, someone asked, “How do we convince clients that social media is statistically significant?” After an involuntary groan on my part, the question brought two things to mind:
- There are a lot of people working in social media research who do not understand the fundamentals of market research; and
- Why would anyone want to apply significance testing to social media data?
Apparently, there’s much debate in online research forums about whether significance testing should be applied to social media data. Proponents argue that online panels are convenience samples and significance testing is routinely applied to those research results – so why not social media? Admittedly that is true, but with panels, the ability to define the sample population and work with a structured data set provides some test/retest reliability of the results. It’s not a fair comparison.
I’m all for creative analysis and see potential value in sig testing applied to any data set as a way to wade through a lot of numbers and find meaningful patterns. But the analyst should understand that with big data sets, more things appear to be significant, so it might not be a useful exercise for social media. Even if it can be applied, I would use it as a behind-the-scenes tool, not something to report on.
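That “more things appear significant with big data sets” point is easy to demonstrate. Below is a minimal sketch (not from the post; the numbers and the two-proportion z-test are purely illustrative) showing that the same 1-point shift in, say, share of positive mentions is nowhere near significant at n = 1,000 but wildly “significant” at n = 1,000,000:

```python
import math

def two_prop_z(p1, n1, p2, n2):
    """z-statistic for a two-proportion test with a pooled standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Identical 1-point change (50% -> 51%), two very different sample sizes:
z_small = two_prop_z(0.51, 1_000, 0.50, 1_000)          # ~0.45, well under 1.96
z_big   = two_prop_z(0.51, 1_000_000, 0.50, 1_000_000)  # ~14.1, off the charts
print(round(z_small, 2), round(z_big, 1))
```

With chatter volumes in the hundreds of thousands, nearly every period-over-period wiggle clears the 1.96 bar, which is exactly why “statistically significant” stops being informative.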
Anyone who has worked with social media data understands the challenging, ongoing process of disambiguation (removing irrelevant chatter). There are also numerous uncontrollable external factors, including the ever-changing set of sites the chatter is being pulled from – some are new sites where chatter is occurring, while others are existing sites newly added to the listening tool’s database. Given the nature of social media data, how can statistical comparisons over time be valid? Social media analysis is a messy business. Think of it as a valuable source of qualitative information.
There is value in tracking social media chatter over time to monitor for potential red flags. Keep in mind, though, that there is a lot of noise in social media data, and more often than not an increase in chatter doesn’t require any action.
Applying sig testing to social media data is a slippery slope. It implies a precision that isn’t there and puts the focus on “significant” changes instead of meaningful ones. Social media analysis is already challenging – why needlessly complicate things?
Cathy is CMB’s social media research maven dedicated to an “eyes wide open” approach to social media research and its practical application and integration with other data sources. Follow her on Twitter at @VirtualMR


Social media research is still behind social media marketing in terms of getting past the hype. Clearly there’s some overselling going on, and more education is needed about how and when to use social media data effectively. Some sales folks even go so far as to suggest that social media listening can replace market research as a way to save money – without having the background or unbiased perspective needed to make such a suggestion.
On a recent 5-hour drive with my son Pete, his iPhone’s GPS app suddenly started “cha-chinging” up points. He didn’t know why the points were rolling in, nor did he have any idea how – or for what – or whether he would ever redeem them, but we both agreed: “Points are good.”
But after reading about social loyalty programs, where members earn reward points for promoting brands on social networks, I’m wondering if marketers are taking things a bit too far. Does rewarding customers for forwarding a brand-related tweet water down the strength of true loyalty?

