Collecting Customer Feedback? Timing Matters...A Lot.

Posted by Jeff McKenna

Mon, Sep 19, 2011

Last week, my family and I enjoyed a trip to Orlando. And with two girls (ages 4 and 5), of course we visited some theme parks. While there, I had the good fortune of being asked to complete a customer feedback interview at not one, but TWO, of the parks. Good fortune? To be stopped at an amusement park on a hot day with tired kids? Definitely. As someone who focuses on customer feedback research, I look forward to every opportunity to learn more about how people experience the process and how companies apply the results to improve performance.

And just as I hoped, it was very enlightening. I could write a hundred blogs on the interview experience, but the one thing I want to focus on here is the timing of the interviews. Timing is a frequently discussed and debated topic among market researchers, and I want to add a more personal twist. As I mentioned above, the folks at each park intercepted me on premises, but at Park 1, the interviewer asked me a couple of qualifying questions and collected my email address; I received the email about a week later and completed the lengthy questionnaire online.

At Park 2, the staff member asked me to complete an online interview at a computer in an office on-site.  So, this amusement park was getting my “immediate” reactions to the questions.  Which was better?  Well, it depends.  Really, the two experiences made me think of some great research, books, and ideas occurring in the field of human emotions and behavioral economics.  Daniel Kahneman is always a great resource in this area, and a popular TED video describes the two instances very well.

“Using examples from vacations to colonoscopies, Nobel laureate and founder of behavioral economics Daniel Kahneman reveals how our ‘experiencing selves’ and our ‘remembering selves’ perceive happiness differently. This new insight has profound implications for economics, public policy -- and our own self-awareness.”

David McRaney gives a nice summary of the video’s theme on his blog:

“The psychologist Daniel Kahneman has much to say on this topic. He says the self which makes decisions in your life is usually the remembering one. It drags your current [experiencing] self around in pursuit of new memories, anticipating them based on old memories.

The current self has little control over your future. It can only control a few actions like moving your hand away from a hot stove or putting one foot in front of the other. Occasionally, it prompts you to eat cheeseburgers, or watch a horror movie, or play a video game.

The current self is happy experiencing things. It likes to be in the flow.

It is the remembering self which has made all the big decisions. It is happy when you can sit back and reflect on your life up to this point and feel content. It is happy when you tell people stories about the things you have seen and done.”

Kahneman’s delineation between the “Experiencing Self” and the “Remembering Self” really resonated in the two customer feedback studies I described.  To put it in terms of Kahneman’s theory: at Park 1 (off-site survey), when I was asked a few preliminary questions and later sent an email invitation, I evaluated the visit from my “Remembering Self.”  At Park 2 (on-site interview), when I was asked to evaluate the visit while still experiencing my park visit, I evaluated the visit from my “Experiencing Self.”

This has big implications for the data and information the parks will gain from the feedback.  The evaluation from my Remembering Self is closer to my decision frame of mind; it gives a better read on the aspects of the visit that drive my choice to select or return to the park.  For the evaluation of Park 1, I had already viewed pictures of my girls enjoying themselves and begun deciding whether I would want to return in the future.

Of course, I could not recall many specific feelings or problems from the visit, yet the questionnaire (one week later) asked about a wide range of things, from cleanliness to security (a big disconnect between the Experiencing and Remembering selves, as in-the-moment feelings of security or fear quickly dissipate).  Sure, we had a chatty restaurant server looking to up-sell us on every dish – a big annoyance that I can only recall by working hard to remember every moment of the visit – but if Park 1 (off-site survey) is looking for problems to fix, it will not find them (beyond the glaring items).

Therefore, we shouldn’t dismiss the timing of the interview at Park 2 (on-site interview).  In fact, Kahneman’s example of pain experienced during a colonoscopy is not that much different from what I experienced at that park.  For instance, a long wait for lunch at a restaurant was quite frustrating, especially with two hungry children, and I was very open about the frustration when completing the questionnaire onsite.  Park 2 would not have received such open comments if I hadn’t given them “in the moment.” 

On the other hand, I was also less glowing in my overall satisfaction ratings, saying I was less likely to return.  I was hot, tired, and worried about my kids melting down. The interesting thing about it is this: I would be more likely to return to Park 2, where the interview was on-site.  Now that my Remembering Self has reflected on the experiences – and had the “fog of battle” clear from my head – I realize that my family gained a lot more cherished memories from Park 2, and I would be far more likely to return compared to the other park.

Therefore, if the purpose of the interview is to understand the experiences, memories, and drivers of choice, it’s critical to time the interview for my “Remembering Self” to respond.  If the purpose is to find specific points of pain or joy (regardless of their role on choice), then it’s critical to time the interview for my “Experiencing Self” to respond.

Posted by Jeff McKenna, who will be chairing the Action Planning track and leading discussions around getting the most out of your voice of the customer program at the Total Customer Experience conference, October 3-5.

Are you planning on going to Total Customer Experience? CMB is an event sponsor. Feel free to use the code: TCEL11CMB when you register for a discounted price. We hope to see you there.

Topics: Methodology, Research Design, Customer Experience & Loyalty

Compilation Scores: Look Under the Hood

Posted by Cathy Harrison

Wed, Aug 03, 2011

My kid is passionate about math, and based on every quantitative indication, he excels at it.  So you can imagine our surprise when he didn’t qualify for next year’s advanced math program. Apparently, he barely missed the cut-off score - a compilation of two quantitative sources of data and one qualitative source.  Given this injustice, I dug into the school’s evaluation method (hold off your sympathy for the school administration just yet).

Undoubtedly, the best way to get a comprehensive view of a situation is to consider both quantitative and qualitative information from a variety of sources.  By using this multi-method approach, you are more likely to get an accurate view of the problem at hand and are better able to make an informed decision.  Sometimes it makes sense to combine data from different sources into a “score” or “index.”  This provides the decision-maker with a shorthand way of comparing something – a brand, a person, or how something changes over time.

These compilation scores or indices are widely used and can be quite useful, but their validity depends on the sources used and how they are combined.   In the case of the math evaluation, there were two quantitative sources and one qualitative source.  The quantitative sources were the results of a math test conducted by the school (CTP4) and a statewide standardized test (MCAS).  The qualitative source was based on the teacher’s observations of the child across ten variables, rated on a 3-point scale.  For the most part, I don’t have a problem with these data sources.  The problem was in the weighting of these scores.

I’m not suggesting that the quantitative data is totally bias-free, but at least the kids are evaluated on a level playing field.  They either get the right answer or they don’t.  In the case of the teacher evaluation, many more biases can impact the score (such as the teacher’s preference for certain personality types or the kids of colleagues or teacher’s aides).  The qualitative component was given a 39% weight – equal to the CTP4 (“for balance”) and greater than the MCAS (weighted at 22%).  This puts a great deal of influence in the hands of one person.  In this case, it was enough to override the superior quantitative scores and disqualify my kid.
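To see how much leverage that weighting gives the subjective component, here is a minimal sketch of the compilation score using the weights described above (39% CTP4, 39% teacher evaluation, 22% MCAS). The student scores are hypothetical, and I'm assuming each source is normalized to a common 0-100 scale before weighting:

```python
# Weighted compilation score, using the 39/39/22 weights described above.
# Student scores below are hypothetical, normalized to a 0-100 scale.

WEIGHTS = {"ctp4": 0.39, "teacher": 0.39, "mcas": 0.22}

def compilation_score(scores):
    """Weighted average of the three normalized source scores."""
    return sum(WEIGHTS[source] * value for source, value in scores.items())

# A student with strong test results but a middling teacher evaluation...
strong_tests = {"ctp4": 95, "teacher": 60, "mcas": 92}
# ...lands below one with weaker tests but a glowing teacher evaluation.
strong_eval = {"ctp4": 85, "teacher": 95, "mcas": 80}

print(round(compilation_score(strong_tests), 2))  # 80.69
print(round(compilation_score(strong_eval), 2))   # 87.8
```

Because the teacher's rating carries as much weight as either test, a single subjective score can flip the ranking of two students whose quantitative results point the other way.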

Before you think this is just the rant of a miffed parent with love blinders on, think of this evaluation process as if it were a corporate decision that had millions of dollars at stake.  Would you be comfortable with this evaluation system?

In my opinion, a fairer evaluation process would have been to qualify students based on the quantitative data (especially since there were two sources available) and then, for those on the “borderline,” use the qualitative data to make a decision about qualification.  Qualitative data is rarely combined with quantitative data in an index.  Its purpose is to explore a topic before quantification or to bring “color” to the quantitative results.  As you can imagine, I have voiced this opinion to the school administration but am unlikely to be able to reverse the decision.

What’s the takeaway for you?  Be careful of how you create or evaluate indices or “scores.” They are only as good as what goes into them.

Posted by Cathy Harrison.  Cathy is a client services executive at CMB and has a passion for strategic market research, social media, and music.  You can follow Cathy on Twitter at @virtualMR     


Topics: Advanced Analytics, Methodology, Qualitative Research, Quantitative Research