New Adaptive Choice-Based Conjoint Technique Shows Promise

Posted by Jon Godin on Tue, Sep 30, 2008

In case you missed it, a promising new conjoint technique called Adaptive Choice-Based Conjoint (ACBC) was introduced at the AMA’s Advanced Research Techniques (ART) Forum, held in June. The ART Forum is an annual marketing science conference that brings together academics and practitioners for three-plus days of presentations and tutorials on the latest-and-greatest research methods our industry has to offer.

At this year’s session, Bryan Orme, President of Sawtooth Software, presented the results of the early testing Sawtooth has completed on ACBC, which the company has been developing for the past few years. Traditional choice-based conjoint (also known as discrete choice modeling) designs assume that respondents complete choice tasks by making compensatory trade-offs: that is, if they really want feature A, but it is not available, they will seek out a package that contains other features to compensate for the loss of feature A. In practice, however, respondents often resort to non-compensatory heuristics, employing rules or shortcuts to simplify the task. These can include always selecting the lowest price regardless of the other features present, always choosing a favored brand, or focusing on a particular feature they “must have” or “must avoid.”
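To make that distinction concrete, here’s a small Python sketch I put together; the attributes, levels, and utility values are invented purely for illustration and aren’t from Sawtooth’s research. It contrasts a compensatory chooser, who sums part-worth utilities so strengths can offset weaknesses, with a non-compensatory chooser who screens out anything that isn’t a favored brand:

    # Illustrative only: made-up part-worth utilities for one respondent.
    utilities = {
        "brand": {"BrandA": 1.2, "BrandB": 0.3},
        "moon_roof": {"yes": 0.8, "no": 0.0},
        "price": {"$20k": 1.0, "$22k": 0.0},
    }

    concepts = [
        {"brand": "BrandB", "moon_roof": "yes", "price": "$20k"},  # strong package
        {"brand": "BrandA", "moon_roof": "no", "price": "$22k"},   # favored brand only
    ]

    def total_utility(concept):
        """Compensatory rule: sum part-worths, so strengths offset weaknesses."""
        return sum(utilities[attr][level] for attr, level in concept.items())

    # Compensatory choice: pick the highest total utility (BrandB package, 2.1 vs. 1.2).
    compensatory_pick = max(concepts, key=total_utility)

    # Non-compensatory heuristic: screen out anything that isn't BrandA,
    # no matter how attractive the rest of the package is.
    screened = [c for c in concepts if c["brand"] == "BrandA"]
    heuristic_pick = max(screened, key=total_utility) if screened else None

    print("Compensatory pick:    ", compensatory_pick)   # the BrandB package
    print("Non-compensatory pick:", heuristic_pick)      # the BrandA package

The two rules pick different winners from the same task, which is exactly the kind of behavior a purely compensatory model glosses over.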

While these rules technically violate the assumptions of the standard logit choice model, the use of Hierarchical Bayes (HB) techniques when estimating utilities still enables us to make good predictions of future market behavior from the information provided.¹ However, we’re getting less information from many respondents than we could or should. Furthermore, respondents often view conjoint tasks as repetitive, and the tasks can contain offers that are irrelevant to their actual wants and preferences. To address these issues, the industry has struggled for several years to develop an approach to discrete choice that would “adapt” to respondent answers as the task progresses.

Sawtooth Software’s latest idea focuses on getting better data from each respondent, rather than trying to develop a purely adaptive model in the traditional discrete choice format. The new ACBC approach consists of three stages:

1. First, respondents are presented with a Build-Your-Own (BYO) task (like you often see at a car manufacturer’s website, where you can “Build Your Own BMW,” for instance), in which attribute levels are evaluated individually and traded off against price – using the car example, you could have no moon roof for $0 additional cost, or a moon roof for an extra $1,000. Respondents can choose any combination of features they desire, subject to their own budget constraint.

2. The winning product combination from the BYO task is taken into a second stage, where 24-40 “near neighbor” concepts are constructed by changing 2-5 attributes at a time. For each concept, respondents are asked whether they would consider it as a possibility they might purchase, or whether they would reject it. This stage of the exercise thus produces a “consideration set.” Throughout these screening tasks, the program tracks whether respondents seem to be always including certain features (“must haves”) or always rejecting others (“unacceptables”). Once enough evidence has accumulated, respondents are asked directly about these features to confirm whether a screening rule is being used. Once confirmed, all subsequent “near neighbor” concepts that contain (or are missing) those features are thrown out and replaced by new concepts that meet the screening criteria.

3. Finally, once all of the screening tasks are completed, the concepts that qualified into the respondent’s consideration set are brought forward into a “choice tournament,” where concepts are shown three at a time and the respondent is asked to choose the one they most prefer. Winning concepts move on and losing concepts drop out until a final winner is established. Often, this winning concept is deemed “more favorable” than the original BYO choice! (For the curious, a rough code sketch of how stages 2 and 3 might fit together appears just after this list.)
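As promised, here is a rough Python sketch of how stages 2 and 3 might fit together. Everything in it (the attribute grid, the screening rule, and the helper names) is an illustrative assumption on my part, not Sawtooth’s actual implementation:

    import random

    # Hypothetical attribute grid and BYO winner, for illustration only.
    ATTRIBUTES = {
        "brand": ["BrandA", "BrandB", "BrandC"],
        "moon_roof": ["yes", "no"],
        "engine": ["base", "sport"],
        "price": ["$20k", "$22k", "$24k"],
    }
    byo_winner = {"brand": "BrandA", "moon_roof": "yes",
                  "engine": "sport", "price": "$22k"}

    def near_neighbor(byo, n_swaps=2):
        """Stage 2: build a concept by swapping a few attributes of the BYO pick."""
        concept = dict(byo)
        for attr in random.sample(list(ATTRIBUTES), n_swaps):
            alternatives = [lvl for lvl in ATTRIBUTES[attr] if lvl != byo[attr]]
            concept[attr] = random.choice(alternatives)
        return concept

    def considers(concept):
        """Stand-in for the respondent's screening judgment."""
        if concept["price"] == "$24k":   # a confirmed "unacceptable" level
            return False
        return random.random() < 0.6     # coin flip in place of a real answer

    # Stage 2: screen ~30 near-neighbor concepts into a consideration set.
    consideration_set = [c for c in (near_neighbor(byo_winner) for _ in range(30))
                         if considers(c)]

    def tournament(concepts, prefer, group_size=3):
        """Stage 3: show concepts a few at a time; winners advance until one remains."""
        while len(concepts) > 1:
            concepts = [prefer(concepts[i:i + group_size])
                        for i in range(0, len(concepts), group_size)]
        return concepts[0]

    # Random choice stands in for the respondent; fall back to the BYO
    # winner if nothing survived screening.
    overall_winner = (tournament(consideration_set, prefer=random.choice)
                      if consideration_set else byo_winner)
    print(overall_winner)

In the real interview, of course, the respondent (not random.choice) supplies the screening judgments and tournament picks; the sketch is just meant to show the flow of concepts through the stages.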

All of the information learned from the three tasks is then used to estimate the respondent’s utilities using a multinomial logit model. This can take the form of standard HB estimation, or a more advanced version of HB that takes into account the fact that different degrees of respondent error are present in each of the three tasks. The utilities from either approach can then be used in a simulator, just as you would with standard choice utilities.
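For readers who haven’t worked with choice simulators: the standard multinomial logit share for product i is exp(U_i) divided by the sum of exp(U_j) over all products in the scenario, averaged across respondents. A minimal sketch in Python, with made-up utilities:

    import math

    # rows = respondents, columns = the three products in the simulated market
    respondent_utilities = [
        [1.4, 0.2, 0.9],
        [0.1, 1.1, 0.5],
        [0.8, 0.7, 1.6],
    ]

    def logit_shares(utils):
        """Logit rule: share of product i = exp(U_i) / sum_j exp(U_j)."""
        expu = [math.exp(u) for u in utils]
        total = sum(expu)
        return [e / total for e in expu]

    # Average the individual-level shares to get predicted market shares.
    n = len(respondent_utilities)
    market_shares = [sum(shares) / n
                     for shares in zip(*map(logit_shares, respondent_utilities))]
    print([round(s, 3) for s in market_shares])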

You may be thinking that this seems like a lot to ask of respondents, and you would be correct: based on the limited testing Sawtooth has conducted, ACBC tends to take 50% to 200% longer to complete than standard discrete choice tasks. However, qualitative feedback suggests that respondents actually enjoy the ACBC exercise more, believe it is more tailored to their own preferences, and say it makes them want to slow down and make more careful choices. Sawtooth describes the interview experience as attempting to mimic the actual in-store buying experience that a very patient and interested salesperson would provide.

Ah, but does it actually work better? The answer here also appears to be “yes” – early tests indicate that ACBC utilities produce higher hit rates and lower mean absolute errors than standard choice utilities when used to predict responses to holdout tasks. This seems to be especially true in situations with small samples of respondents – so it appears that we are indeed getting more information about each respondent’s preferences using this new approach.
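For those less familiar with these validation measures, here’s a toy illustration with invented numbers. The hit rate is the share of holdout tasks in which the highest-utility concept matches what the respondent actually chose; mean absolute error compares predicted and observed shares:

    # One holdout task per row: estimated utilities for three concepts,
    # plus the index of the concept the respondent actually chose.
    holdouts = [
        ([1.2, 0.4, 0.8], 0),
        ([0.3, 0.9, 0.7], 1),
        ([0.5, 0.6, 1.1], 0),  # model predicts concept 2 here: a miss
    ]

    hits = sum(1 for utils, chosen in holdouts
               if utils.index(max(utils)) == chosen)
    print(f"hit rate: {hits / len(holdouts):.0%}")  # 2 of 3 tasks -> 67%

    # Mean absolute error between predicted and observed concept shares.
    predicted = [0.45, 0.30, 0.25]
    observed  = [0.50, 0.28, 0.22]
    mae = sum(abs(p - o) for p, o in zip(predicted, observed)) / len(predicted)
    print(f"mean absolute error: {mae:.3f}")  # 0.033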

This is very exciting stuff, but Sawtooth has conducted only three tests to date. I will be among a select group of researchers beta-testing the new software, starting in early August. The software is currently planned for market launch around Q2 2009, so the other testers and I will be evaluating and debugging the approach in the meantime.

Interested in learning about the results? So are we. Once we’ve put the approach through some testing, hopefully in different contexts, we’ll be back in this forum to let you know what we think: Is it better than the old way, even though it takes longer? Or an interesting approach that isn’t worth the hassle? We’ll see…

Watch this space for more ways that CMB is working to stay on the cutting edge in order to provide you with the best information to inform your decisions.

¹ The reasons why this is the case are outside the scope of this article.

Topics: advanced analytics, methodology, research design