SKU-Based Conjoint involves presenting respondents with numerous products, or Stock Keeping Units (SKUs), at different price points. In contrast to typical conjoints, the product concepts are fixed and refer to existing (or close-to-launch) products. This design is best suited for pricing research. However, complexity arises from the number of SKUs, leading to lengthy surveys that can overwhelm respondents, which in turn can reduce data quality and yield unreliable insights. The session will present an overview of foundational research in the area and recommend best practices in areas such as selection bias, nested simulations, data augmentation, and modelling the price function.
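As a rough, purely illustrative sketch of the kind of simulator such a design feeds (the SKU intercepts, prices, and log-price specification below are hypothetical, not taken from the session):

    import numpy as np

    # Hypothetical SKU intercepts (alternative-specific constants) and a common
    # log-price coefficient; in practice these would come from HB estimation.
    sku_utility = np.array([1.2, 0.4, -0.3])   # three fixed SKUs
    beta_price = -1.8                          # negative: higher price, lower utility
    prices = np.array([4.99, 3.49, 2.79])

    # MNL shares of preference for this price scenario
    v = sku_utility + beta_price * np.log(prices)
    shares = np.exp(v) / np.exp(v).sum()
    print(shares.round(3))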
Accurately estimating choice preferences is crucial in conjoint-based market research, and conjoint exercises are becoming more complex, involving numerous parameters and constraints with limited numbers of respondents. This paper compares the performance of a state-of-the-art no-U-turn sampler (NUTS) in Stan against the Gibbs sampler used in Sawtooth Software's CBC/HB for complex conjoint studies.
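Roughly speaking, both samplers target the same hierarchical posterior: CBC/HB alternates Gibbs-style draws of the respondent betas and the upper-level mean and covariance, while NUTS uses gradients of the joint log posterior. A minimal numpy sketch of the respondent-level building block (all dimensions and data invented for illustration):

    import numpy as np
    from scipy.stats import multivariate_normal

    def respondent_log_posterior(beta, X_tasks, choices, mu, Sigma):
        """Log MNL likelihood for one respondent's choice tasks plus the
        MVN(mu, Sigma) upper-level prior on that respondent's utilities."""
        ll = 0.0
        for X, c in zip(X_tasks, choices):
            v = X @ beta  # utilities of the alternatives in this task
            ll += v[c] - v.max() - np.log(np.exp(v - v.max()).sum())  # log-softmax of chosen alternative
        return ll + multivariate_normal.logpdf(beta, mean=mu, cov=Sigma)

    # Tiny hypothetical example: 2 tasks, 3 alternatives, 2 parameters
    rng = np.random.default_rng(1)
    X_tasks = [rng.normal(size=(3, 2)) for _ in range(2)]
    print(respondent_log_posterior(np.zeros(2), X_tasks, [0, 2], np.zeros(2), np.eye(2)))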
Generative AI (GenAI) offers transformative potential for marketing research by enabling the creation of flexible, custom stimuli to explore consumer preferences and behaviors. This paper introduces a validation protocol that addresses potential biases and variability in outputs in order to ensure the content and context validity of GenAI-generated stimuli. By applying this protocol, researchers can leverage GenAI to develop reliable measurement tools, opening new possibilities for studying consumer preferences in previously challenging domains.
This presentation explores how generative AI (GenAI) can expand the scope of conjoint measurement by enabling the creation of validated stimuli across diverse formats and contexts. We showcase a variety of applications, including assessing preferences for image characteristics in online property listings, optimizing the tone and structure of product review summaries, and measuring intra-individual attitudes and beliefs. This approach ensures reliable, contextually accurate stimuli, opening new possibilities for studying complex consumer preferences with enhanced flexibility and depth.
Previous comparisons of best-worst case 2 (BW-2) scaling and choice-based conjoint (CBC) have focused on parameter equivalence (where the two models produce significantly different utilities), respondent efficiency, and predictive validity (where the two models perform at parity). We extend this test to cover a second type of BW-2, one containing a dual-response none, which also allows us to include a single-concept (“Tinder-style”) CBC. We compare these three models with a standard CBC in terms of utility equivalence, respondent efficiency, and predictive validity, as has been done before, but we extend our comparison to include attribute non-attendance (ANA).
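For readers unfamiliar with ANA, one simple way to flag it from individual-level utilities (an illustrative heuristic only, not necessarily the inference approach used in this paper) is to check whether an attribute's part-worth range is negligible for a respondent:

    import numpy as np

    def flag_non_attendance(partworths, attribute_slices, threshold=0.2):
        """Flag attributes a respondent appears to ignore: the within-attribute
        part-worth range falls below an arbitrary, illustrative threshold."""
        return {name: float(partworths[sl].max() - partworths[sl].min()) < threshold
                for name, sl in attribute_slices.items()}

    # Hypothetical respondent: 3 brand levels, 3 price levels
    pw = np.array([0.9, -0.1, -0.8, 0.05, 0.0, -0.05])
    print(flag_non_attendance(pw, {"brand": slice(0, 3), "price": slice(3, 6)}))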
There are many different approaches to estimating price utilities, but which one is best when you have many (30+) SKUs? Previous papers have focused on studies with 16 or fewer SKUs. This paper reviews a conjoint exercise with 36 SKUs to understand whether price tiers (grouping similar price points together) can simplify the analysis while providing elasticity estimates as stable as those from SKU-level alternative-specific price attributes, which capture the full range of price responses. Will a tiered approach avoid extreme elasticity estimates?
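For reference, the elasticities being compared are typically computed from simulated shares at two price points, for example as an arc (log-log) elasticity; the shares and prices below are made up:

    import numpy as np

    def arc_elasticity(share_lo, share_hi, price_lo, price_hi):
        """Arc price elasticity from simulated shares at two price points."""
        return (np.log(share_hi) - np.log(share_lo)) / (np.log(price_hi) - np.log(price_lo))

    # Hypothetical: share falls from 12% to 9% as price rises from $3.49 to $3.99
    print(round(arc_elasticity(0.12, 0.09, 3.49, 3.99), 2))   # about -2.15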
Researchers often face questions about a subset of utilities. For example, which subset of MaxDiff utilities are most informative? Could we use a subset of utilities from this study in another study and predict the full set? Can we estimate utilities for a subset and predict the full set? We discuss these and other examples of “subset” questions. We also introduce and show how to use the Conditional Distribution of a Multivariate Normal to address these questions.
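The key result behind this approach: if utilities u = (u1, u2) are multivariate normal with mean (mu1, mu2) and covariance blocks S11, S12, S21, S22, then u1 given u2 = a is normal with mean mu1 + S12 S22^-1 (a - mu2) and covariance S11 - S12 S22^-1 S21. A minimal numpy sketch (the partitioning and inputs are generic placeholders):

    import numpy as np

    def conditional_mvn(mu, Sigma, observed_idx, observed_values):
        """Mean and covariance of the unobserved utilities given the observed
        ones, under a multivariate normal with mean mu and covariance Sigma."""
        observed_idx = np.asarray(observed_idx)
        missing_idx = np.setdiff1d(np.arange(len(mu)), observed_idx)
        mu1, mu2 = mu[missing_idx], mu[observed_idx]
        S11 = Sigma[np.ix_(missing_idx, missing_idx)]
        S12 = Sigma[np.ix_(missing_idx, observed_idx)]
        S22 = Sigma[np.ix_(observed_idx, observed_idx)]
        w = S12 @ np.linalg.inv(S22)
        return mu1 + w @ (observed_values - mu2), S11 - w @ S12.T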
Our standard HB model assumes respondent utilities are distributed multivariate normal. But constrained utilities are unlikely to be multivariate normal, and in some cases this becomes problematic, for example when ordinal levels differ by an amount near zero. Most practitioners use a point estimate of the HB draws, namely the mean. Rather than using the mean of the HB draws, we ignore problematic constraints during HB estimation and then enforce the constraints using the distribution of HB draws as a prior.
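One plausible reading of this idea, offered purely as an illustration and not necessarily the authors' exact procedure, is to treat a respondent's unconstrained draws as the posterior sample and form a constrained point estimate only from the draws that satisfy the ordinal constraint:

    import numpy as np

    # Hypothetical unconstrained draws for two adjacent ordinal levels (columns)
    draws = np.random.default_rng(7).normal([0.10, 0.05], 0.3, size=(1000, 2))

    satisfies = draws[:, 0] >= draws[:, 1]      # ordinal constraint: level 1 >= level 2
    if satisfies.any():
        constrained_point = draws[satisfies].mean(axis=0)   # mean of admissible draws
    else:
        constrained_point = np.tile(draws.mean(), 2)        # fallback: tie the two levels
    print(constrained_point)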
Scale and preference heterogeneity will be confounded in a simple latent class MNL model. Two approaches for handling this are:
• Scale-adjusted latent class (SALC), which estimates scale classes and preference classes independently, so that each respondent is a member of both a preference class and a scale class.
• Sawtooth Software’s scale-constrained latent class (which uses the standard deviation across the vector of utilities for each group as a proxy for scale), which attempts to equalize scale during latent class MNL estimation to unconfound scale and preference heterogeneity.
Examining two artificial data sets, both with known preference heterogeneity (one with known continuous scale heterogeneity and one with known scale classes), we’ll test which approach does a better job of identifying the known preference classes and of putting the right respondents into the right preference classes.
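A minimal sketch of the scale proxy described above (the utilities are hypothetical, and rescaling each class by its own standard deviation is my simplification of equalizing scale):

    import numpy as np

    # Hypothetical class-level utility vectors (rows = latent classes)
    class_utils = np.array([[1.6, -0.4, -1.2, 2.0],
                            [0.4, -0.1, -0.3, 0.5]])

    scale_proxy = class_utils.std(axis=1, keepdims=True)   # per-class spread as a scale proxy
    equalized = class_utils / scale_proxy                   # rescale so classes share a common scale
    print(equalized.round(2))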
The last phase of many segmentation projects is the construction of a segment typing tool, reducing the broad basis of the segmentation down to a minimum number of “golden questions” used to confidently classify future respondents into existing segments. While a variety of statistical and machine learning tools can create typing tools, when we use our handy MaxDiff experiment results to build the segments, our choices become more limited. Two commonly applied methods for building MaxDiff-based typing tools are a Naïve Bayes classifier (currently used in Sawtooth’s MaxDiff Typing Tool software) and Stepwise Discriminant Analysis. Using a series of real-world MaxDiff segmentations, we will test these two methods head-to-head to see which is more successful at sorting holdout respondents into their known segments.
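As a generic illustration of the first approach (a minimal sketch using scikit-learn's GaussianNB on made-up golden-question scores, not the specific likelihood inside Sawtooth's MaxDiff Typing Tool):

    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(300, 6))   # hypothetical golden-question scores
    y_train = rng.integers(0, 4, 300)     # known segment memberships (4 segments)

    clf = GaussianNB().fit(X_train, y_train)
    new_respondents = rng.normal(size=(5, 6))
    print(clf.predict(new_respondents))                  # assigned segments
    print(clf.predict_proba(new_respondents).round(2))   # classification confidence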
For over 20 years, the default approach in Sawtooth Software for constraining part-worth utilities (for ordered attributes such as price, speed, etc.) in HB analysis of CBC data has been Simultaneous Tying. Over the years, some researchers have raised concerns regarding speed of estimation and convergence and have turned to different approaches involving unary (thermometer) coding. And what about combining unary coding with a newer algorithm, Hamiltonian Monte Carlo, available in the Stan platform? We compare a handful of approaches for utility constraints using a variety of CBC datasets, examining speed, predictive accuracy, quality of the draws, and convergence.
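For context, unary (thermometer) coding represents an ordinal attribute with cumulative indicator columns, so non-negative coefficients imply monotone part-worths; a minimal sketch with a hypothetical four-level attribute:

    import numpy as np

    def thermometer_code(level, n_levels):
        """Unary/thermometer coding: level k is coded as k-1 leading ones."""
        row = np.zeros(n_levels - 1)
        row[:level - 1] = 1.0
        return row

    X = np.vstack([thermometer_code(k, 4) for k in range(1, 5)])
    print(X)
    # [[0. 0. 0.]
    #  [1. 0. 0.]
    #  [1. 1. 0.]
    #  [1. 1. 1.]]
    # With non-negative increments b, the part-worths X @ b are monotone non-decreasing.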