The Turbo Choice Modeling Seminar is the premier event for advanced users of choice modeling methodology (especially as it relates to CBC, MaxDiff, and MBC). We have invited a panel of experts to deliver advanced training that takes you well beyond the basics. Join us for this intensive and practical 2-day Turbo event!
Respondent behavior in conjoint studies often deviates from the assumptions of random utility theory. We refer to deviations from normative choice behavior as data pathologies. A variety of models have been developed that attempt to correct for specific pathologies (e.g., screening rules, respondent quality, attribute non-attendance). While useful, these approaches tend to be both conceptually complex and computationally intensive. As such, they have not widely diffused into the practice of marketing research. In this paper we draw on innovations in machine learning to develop a practical approach that relies on (clever) randomization strategies and ensembling to simultaneously accommodate multiple data pathologies in a single model. We provide tips and tricks on how to implement this approach in practice.
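The ensembling idea can be sketched generically: fit many simple choice models on randomized resamples of the data, then average their predictions. The sketch below is a minimal illustration of bagging an aggregate multinomial logit (MNL) in Python; the resampling scheme, function names, and toy data are our own assumptions, not the authors' exact procedure.

```python
import numpy as np

def fit_mnl(X, y, lr=0.1, iters=500):
    """Fit an aggregate MNL by gradient ascent on the log-likelihood.
    X: (tasks, alts, features) design; y: chosen alternative per task."""
    beta = np.zeros(X.shape[2])
    for _ in range(iters):
        U = X @ beta                                   # (tasks, alts)
        P = np.exp(U - U.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)
        chosen = X[np.arange(len(y)), y]               # (tasks, features)
        grad = (chosen - (P[:, :, None] * X).sum(axis=1)).mean(axis=0)
        beta += lr * grad
    return beta

def ensemble_predict(X, y, scenario, n_models=10, seed=0):
    """Bag MNL models over bootstrap resamples of tasks and average
    the predicted shares for one scenario (alts x features)."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_models):
        idx = rng.choice(len(y), size=len(y), replace=True)
        b = fit_mnl(X[idx], y[idx])
        U = scenario @ b
        p = np.exp(U - U.max())
        preds.append(p / p.sum())
    return np.mean(preds, axis=0)

# Toy data with a known true beta, for illustration only
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3, 2))                       # 200 tasks, 3 alts
true_beta = np.array([1.0, -1.0])
P = np.exp(X @ true_beta)
P /= P.sum(axis=1, keepdims=True)
y = np.array([rng.choice(3, p=p) for p in P])

shares = ensemble_predict(X, y, X[0])
```

In practice each ensemble member could also randomize over model specifications (e.g., with or without a screening rule), so that the averaged prediction hedges across several pathology corrections at once.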
Consumers often make decisions about products using limited information. We develop an approach to measure and model the role of brand and price in facilitating these decisions through the formation of subjective product perceptions. This is accomplished by combining pick-any-J data with a discrete choice experiment. We apply our approach to experimental data to learn the degree to which brand and price serve as signals about other attributes. We demonstrate the conditions and study design required to identify the parameters of brand and price separately from those of latent perceptions cued by brand and price. We demonstrate the value of our approach by constructing counterfactual experiments where we show how additional (perceptual) information can be strategically introduced to increase consumer preference and choice share. Taken collectively, our research contributes to the literature by empirically refining existing theory about how brand and price are interpreted by consumers. It also provides practical guidance for marketing tasks involving limited information environments, both digital and physical. Such environments include online search, retail packaging, website development, and most forms of advertising.
We report results of a CBC study where we tested a number of possible treatments in CBC questionnaires that might affect derived WTP, including: the use of a BYO question prior to the CBC questions, summed-price vs. corner-prohibitions experimental design, “cheap talk” scripts, and a budget question where we prompted respondents to re-think CBC responses in which they chose a product that exceeded their previously declared budget threshold.
Main-effects brand x price studies can reflect differential substitution patterns among brands when we estimate point estimates of utilities at the individual level via HB and build choice simulators that leverage those individual-level utilities. But as the number of brands increases and the data become sparser, IIA plays a bigger role and differential substitution patterns may be smoothed away. One approach that might help preserve brand substitution patterns is to build simulators using lower-level draws rather than point estimates. We compare results for multiple brand x price studies when simulating on lower-level draws vs. point estimates.
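The two simulation modes can be illustrated with a toy sketch: average logit shares over each respondent's retained draws, versus computing shares from the draw means (the point estimates). Everything below (data shapes, names, values) is hypothetical, purely to show the mechanics.

```python
import numpy as np

def logit_shares(utilities):
    """Multinomial logit shares for one vector of product utilities."""
    e = np.exp(utilities - utilities.max())
    return e / e.sum()

def simulate_point(betas, X):
    """Average shares using each respondent's point estimate."""
    return np.mean([logit_shares(X @ b) for b in betas], axis=0)

def simulate_draws(draws, X):
    """Average shares over every retained lower-level draw."""
    return np.mean([logit_shares(X @ d)
                    for resp in draws for d in resp], axis=0)

# Toy setup: 2 respondents, 3 retained draws each, 4 parameters, 3 products
rng = np.random.default_rng(7)
draws = rng.normal(size=(2, 3, 4))    # respondent x draw x parameter
betas = draws.mean(axis=1)            # point estimates = means of the draws
X = rng.normal(size=(3, 4))           # design matrix for 3 products

shares_point = simulate_point(betas, X)
shares_draws = simulate_draws(draws, X)
```

Because the logit transform is nonlinear, the two quantities generally differ; simulating on the draws retains more of the within-respondent uncertainty that point estimates average away.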
The current design algorithms (complete enumeration, balanced overlap, and shortcut) used in Lighthouse Studio’s CBC program were developed in the 1990s. Regarding the more recent ACBC, the current design approach lacks flexibility and may oversample BYO levels too aggressively in some situations. We discuss plans for creating a new experimental design routine for both CBC and ACBC that could provide more speed, higher customization, and slightly better design efficiency. We look forward to audience feedback!
Most of the choice-based research done today is full profile (FP), where a level from every attribute is shown in every product profile. However, some argue that there comes a point when a FP choice task is too cumbersome and overwhelming, forcing respondents to use a simplification heuristic that could affect the model’s predictive ability. Since the work of Green and Srinivasan (1978), we have historically been taught to use around six attributes (depending on level text, category, and more). One solution to this problem is Partial Profile (PP), where a level from only a subset of attributes, usually 7 or fewer, is shown in every product profile. The subset of attributes changes across every screen so that respondents evaluate all attributes, but only 7 at a time (Chrzan & Elrod, 1995). This presentation compares the standard approach to PP with a custom approach, where only the attributes a respondent says are important are shown in every product profile. The subset of attributes therefore stays the same across every screen, so that respondents only evaluate a subset of the total attributes. We call this approach Bespoke CBC.
When examining the results of a conjoint analysis, each attribute’s utilities are on their own scale, so a utility level from one attribute cannot be directly compared to one from a different attribute. Moreover, testing extreme levels within an attribute can produce a high “importance” score for that attribute even when it has little impact on the overall purchase decision. This presentation will demonstrate how to create more appropriate scores for attributes, which we will call “impact” scores, based on simulation methods. We will also explore a new way of understanding the impact of individual features on overall product preference, an approach particularly helpful when comparing across segments or clusters in the data and trying to optimize a product line within a competitive set.
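One simple way to build a simulation-based score of this kind is to swing a single attribute across its levels for a target product inside a fixed competitive set and record how far the product's share of preference moves. The sketch below is a hypothetical numpy illustration of that idea, not the presenters' exact metric.

```python
import numpy as np

def share_of_preference(part_worths, products):
    """Logit shares of preference averaged over respondents.
    part_worths: (n_resp, n_attr, n_lev) utilities
    products:    (n_prod, n_attr) level index per attribute."""
    attrs = np.arange(products.shape[1])
    U = np.array([[pw[attrs, p].sum() for p in products]
                  for pw in part_worths])              # (n_resp, n_prod)
    e = np.exp(U - U.max(axis=1, keepdims=True))
    return (e / e.sum(axis=1, keepdims=True)).mean(axis=0)

def impact_score(part_worths, products, attr, n_lev, target=0):
    """Share swing (max minus min) for the target product as one
    attribute is swung across all of its levels."""
    shares = []
    for lvl in range(n_lev):
        test = products.copy()
        test[target, attr] = lvl
        shares.append(share_of_preference(part_worths, test)[target])
    return max(shares) - min(shares)

# Hypothetical data: 50 respondents, 3 attributes with 2 levels each
rng = np.random.default_rng(3)
pw = rng.normal(size=(50, 3, 2))
competitive_set = np.array([[0, 0, 0],
                            [1, 1, 1]])
score = impact_score(pw, competitive_set, attr=0, n_lev=2)
```

Because the swing is measured against a concrete competitive set, an attribute with extreme but irrelevant levels no longer inflates its apparent importance the way a utility-range calculation can.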
What we learn from MaxDiff is the relative value of each of the items tested. However, we don’t learn whether all of the items are important, only some of them, or none at all. Thanks to both Louviere and Lattery, we are able to implement an “anchoring” procedure that allows us to draw a line in the sand where that threshold of importance lies. But is there another anchoring approach we could consider? Perhaps adding a none alternative to every screen, similar to a choice-based conjoint? This presentation explores two new anchoring techniques and compares them to the two current Sawtooth Software offerings:
Option 1 - Indirect Dual Response Method from Jordan Louviere
Option 2 - Direct Anchoring from Kevin Lattery
Option 3 - Best with a None Alternative
Option 4 - Best with a Dual-Response None Alternative
R Shiny is a web application framework for the R language, designed for building custom web applications. In this presentation, we'll take a look at what R Shiny is, along with examples of custom, flexible, web-based deliverables that cater exactly to client specifications.
It can be tricky to catch respondents straightlining their way through their conjoint/MaxDiff exercises. However, with a randomized experimental design, we expect the position of the answers to have nothing to do with which answer is selected. We can use this fact to find sufficiently improbable position combinations (e.g., straightlining or near-straightlining) and disqualify those respondents from our studies in real time. We will dive deep into methods and applications for catching straightliners in our conjoint and MaxDiff exercises.
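The underlying probability logic can be sketched with a binomial tail test: under a randomized design, each on-screen position should be chosen with probability 1/k, so a position chosen far more often than chance warrants suspicion. A minimal stdlib sketch (the correction factor and any flagging threshold are our own assumptions):

```python
from math import comb

def straightline_pvalue(position_counts, n_alts):
    """p-value for the null that answer position is unrelated to choice.
    position_counts: times each on-screen position was chosen.
    Uses an upper-tail binomial test on the modal position, with a
    Bonferroni-style correction for checking every position."""
    n = sum(position_counts)
    k = max(position_counts)
    p = 1 / n_alts
    tail = sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))
    return min(1.0, n_alts * tail)

# 12 choice tasks with 3 alternatives each:
suspicious = straightline_pvalue([11, 1, 0], 3)   # near-straightliner
ordinary = straightline_pvalue([4, 4, 4], 3)      # perfectly balanced
```

A respondent could then be flagged in real time whenever the p-value falls below a chosen threshold (say, 0.001): choosing the same position in 11 of 12 tasks falls well below that cutoff, while an even 4/4/4 split is not flagged at all.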
Conjoint analysis can be confusing, and communicating an already sophisticated topic to a lay audience can compound the problem. In this talk, we'll explore the principles and best practices of data communication, communicating uncertainty, and making clear and accurate inferences from your conjoint data.
Environmental changes brought about by COVID-19 introduced disruptions in consumer confidence and have clients worried about doing pricing research. Combining discrete choice with SEM allows manipulation of economic perceptions together with price and portfolio configuration to understand consumer choice. By using the Consumer Sentiment Index, we can adjust the index to its expected future level, allowing for pricing adjustments. As a result, product and pricing decisions can be made in the context of a fluctuating economic climate.
In these sessions, two approaches will be shown that overcome the downsides of both HB point estimates (robustness, accuracy, arbitrary scaling) and HB draws (speed), while retaining the upsides of both (speed and performance). And, not unimportantly, both approaches are easy to implement and do not require any fancy software or tools.
Choice modelers are often interested in estimating consumer marginal "willingness to pay" (WTP) for different product attributes. This talk will highlight some of the subtleties (and perhaps unintended assumptions) in two different approaches for obtaining estimates of WTP, as well as introduce the logitr R package, which was written to support flexible estimation of multinomial logit and mixed logit models with preference-space and WTP-space utility specifications.
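In a preference-space model, marginal WTP is typically derived as the ratio of an attribute's coefficient to the negative of the price coefficient; a WTP-space model instead estimates those ratios directly. A tiny Python illustration of the preference-space ratio (the coefficient values are invented for illustration, not logitr output):

```python
def wtp_from_preference_space(coefs, price_coef):
    """Marginal WTP implied by a preference-space logit model:
    WTP_k = -beta_k / beta_price. beta_price is expected to be
    negative, so WTP comes out in currency units per attribute unit."""
    return {name: -b / price_coef for name, b in coefs.items()}

# Made-up coefficients for illustration only
coefs = {"brand_A": 0.8, "fast_charging": 1.2}
beta_price = -0.4   # utility per dollar
wtp = wtp_from_preference_space(coefs, beta_price)
```

One subtlety the ratio hides: with mixed logit in preference space, dividing two random coefficients can yield a WTP distribution with extreme or undefined moments, which is part of the motivation for estimating directly in WTP space.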
Tom Eagle asked me at the 2019 Sawtooth Software Conference if we could use decision trees to model CBC data like we can data from situational choice experiments. Neither he nor I had seen such a thing. But you will, in this presentation about Relative Advantage Trees: After describing how RATs work, Bethany will illustrate how to transform a CBC data set for analysis via RATs. Using data from two empirical studies, Bethany will avoid rodentine puns while comparing the performance of two breeds of RATs with aggregate and HB-MNL.
At the 2019 Advanced Research Techniques Forum, Jake Lee showed results from an empirical study and suggested that MaxDiff utilities aren’t appropriate for use in subsequent multivariate analyses. Other authors have successfully used MaxDiff utilities as inputs to factor analysis, multidimensional scaling and correlation analyses. Drawing on several empirical data sets we’ll see that the problems Jake encountered seem to be an idiosyncratic feature of his particular study and that his results are not generalizable.