Compilation Scores: Look Under the Hood

Posted by Cathy Harrison on Wed, Aug 03, 2011

My kid is passionate about math, and based on every quantitative indication, he excels at it. So you can imagine our surprise when he didn’t qualify for next year’s advanced math program. Apparently he barely missed the cut-off score, a compilation of two quantitative sources of data and one qualitative source. Given this injustice, I dug into the school’s evaluation method (hold off your sympathy for the school administration just yet).

Undoubtedly, the best way to get a comprehensive view of a situation is to consider both quantitative and qualitative information from a variety of sources.  By using this multi-method approach, you are more likely to get an accurate view of the problem at hand and are better able to make an informed decision.  Sometimes it makes sense to combine data from different sources into a “score” or “index.”  This provides the decision-maker with a shorthand way of comparing something – a brand, a person, or how something changes over time.

These compilation scores, or indices, are widely used and can be quite useful, but their validity depends on the sources used and how they are combined. In the case of the math evaluation, there were two quantitative sources and one qualitative source. The quantitative sources were the results of a math test conducted by the school (CTP4) and a statewide standardized test (MCAS). The qualitative source was the teacher’s observations of the child across ten variables, each rated on a 3-point scale. For the most part, I don’t have a problem with these data sources. The problem was in the weighting of these scores.

I’m not suggesting that the quantitative data is totally bias-free, but at least the kids are evaluated on a level playing field: they either get the right answer or they don’t. In the case of the teacher evaluation, many more biases can affect the score (such as the teacher’s preference for certain personality types, or for the kids of colleagues or teacher’s aides). The qualitative component was given a 39% weight, equal to the CTP4 (“for balance”) and greater than the MCAS (weighted at 22%). This puts a great deal of influence in the hands of one person. In this case, it was enough to override the superior quantitative scores and disqualify my kid.
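To see how much leverage that 39% gives one rater, here is a minimal sketch of a weighted compilation score. The weights are the ones described above; the student scores are hypothetical, assuming all three components are normalized to a 0–100 scale.

```python
# Weighted compilation score. Weights come from the evaluation described
# in this post; the component scores below are made-up illustrations.
WEIGHTS = {"ctp4": 0.39, "mcas": 0.22, "teacher": 0.39}

def compilation_score(scores: dict) -> float:
    """Weighted sum of normalized (0-100) component scores."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# A student with excellent test results but a middling teacher rating...
strong_quant = compilation_score({"ctp4": 95, "mcas": 98, "teacher": 60})

# ...ends up below a student with weaker tests but a top teacher rating.
strong_qual = compilation_score({"ctp4": 80, "mcas": 85, "teacher": 95})

print(strong_quant < strong_qual)  # the qualitative rating flips the ranking
```

With these weights, a 35-point swing in the teacher rating outweighs a 13-point advantage on both tests combined, which is exactly the kind of single-rater influence described above.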

Before you think this is just the rant of a miffed parent with love blinders on, think of this evaluation process as if it were a corporate decision that had millions of dollars at stake.  Would you be comfortable with this evaluation system?

In my opinion, a fairer evaluation process would have been to qualify students based on the quantitative data (especially since two sources were available) and then, for those on the “borderline,” use the qualitative data to make the call. Qualitative data is rarely combined with quantitative data in an index. Its purpose is to explore a topic before quantification or to bring “color” to the quantitative results. As you can imagine, I have voiced this opinion to the school administration but am unlikely to reverse the decision.
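The two-stage process I have in mind can be sketched in a few lines. The cut-off, borderline band, and teacher-rating bar below are all hypothetical; the point is only the structure: quantitative data decides first, and the qualitative rating is consulted only when it genuinely needs to break a tie.

```python
# Sketch of a two-stage qualification process (all thresholds hypothetical).
QUALIFY_AT = 90      # average quant score that qualifies outright
BORDERLINE_AT = 85   # averages in [85, 90) go to qualitative review
TEACHER_BAR = 2.0    # minimum average teacher rating on the 3-point scale

def qualifies(ctp4: float, mcas: float, teacher_avg: float) -> bool:
    quant = (ctp4 + mcas) / 2
    if quant >= QUALIFY_AT:
        return True                        # quantitative data alone decides
    if quant >= BORDERLINE_AT:
        return teacher_avg >= TEACHER_BAR  # qualitative breaks the tie
    return False                           # clearly below the borderline

print(qualifies(95, 98, 1.5))  # True: the teacher rating is never consulted
```

Under this scheme, a single rater can only influence the outcome for students the quantitative data cannot separate, rather than for everyone.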

What’s the takeaway for you? Be careful how you create or evaluate indices or “scores.” They are only as good as what goes into them, and how you weight them.

Cathy is a client services executive at CMB and has a passion for strategic market research, social media, and music. You can follow Cathy on Twitter at @virtualMR

Topics: Advanced Analytics, Methodology, Qualitative Research, Quantitative Research