Every year many researchers and insights professionals submit papers to present at the Sawtooth Software Conference. A selection committee carefully reviews the submissions and selects the papers that best fit the program, ensuring the conference remains unique and provides value and learning for attendees.
We aim to provide additional insights into the use of RLH, Latent Class (LC), and Scale-Adjusted Latent Class (SALC) models to identify poor-quality respondents, using data from a real-world global study. We will also investigate how these methods can improve the validity of MaxDiff findings, based on corroborating data from the same study.
This paper investigates some of the methodological shortcomings of the frequently used Van Westendorp price model and explores whether extensions can be built on the original framework. Newer statistical approaches under consideration include Bayesian inference, multi-level modeling, mixture models, and survival analysis. We will attempt to produce easier, more interpretable outputs in the form of probability distributions to support better pricing decisions. In parallel, an R package will be written and shared.
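For orientation, the baseline being extended is the classic Van Westendorp Price Sensitivity Meter. The authors plan an R package; the following is only a minimal Python sketch of the standard intersection logic (assumed variable names and the commonly used curve definitions), not the proposed extensions:

```python
import numpy as np

def crossing_price(prices, falling, rising):
    """First grid price at which a falling cumulative curve meets a rising one."""
    return prices[np.argmax(falling - rising <= 0)]

def van_westendorp(too_cheap, cheap, expensive, too_expensive, n_grid=200):
    """Classic Price Sensitivity Meter from the four per-respondent price answers.

    Each argument is an array of respondent prices.  Returns the optimal price
    point (too cheap x too expensive) and the indifference price point
    (cheap x expensive) under the commonly used curve definitions.
    """
    answers = (too_cheap, cheap, expensive, too_expensive)
    prices = np.linspace(min(map(np.min, answers)), max(map(np.max, answers)), n_grid)

    # Cumulative shares at each candidate price
    pct_too_cheap     = np.array([(too_cheap     >= p).mean() for p in prices])  # falling
    pct_cheap         = np.array([(cheap         >= p).mean() for p in prices])  # falling
    pct_expensive     = np.array([(expensive     <= p).mean() for p in prices])  # rising
    pct_too_expensive = np.array([(too_expensive <= p).mean() for p in prices])  # rising

    return {
        "optimal_price_point":      crossing_price(prices, pct_too_cheap, pct_too_expensive),
        "indifference_price_point": crossing_price(prices, pct_cheap, pct_expensive),
    }
```

The extensions described above aim to replace such point estimates with probability distributions over the acceptable price range.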
Too often, good thinking is rendered less accessible by cluttered and ineffective presentation. Here are ten ideas that you can use to make your presentations more effective in the context of complex methods, research, and thinking. The presentation draws on best practices from sources such as Edward Tufte, Nancy Duarte, Garr Reynolds, and Barbara Minto. Put simply, these ideas are human-centric, focusing on how people understand and learn rather than being presenter- or results-centric.
Fifty-seven million Americans have a disability. Thirty-six million experience challenges using the internet, such as limited dexterity or vision and hearing problems. Designing research without accessibility in mind can lead to pitfalls including frustration, ostracization, and legal issues. This presentation will focus on the challenges we faced in creating an accessible survey using Sawtooth Software. Follow our journey from aimless wandering to resolution. Learn about the pitfalls, the aha moments, and where we are now.
We demonstrate the efficacy of including passive geolocation data on visits to automotive service locations as covariates in a standard CBC model to predict car purchases. We then compare those predictions not only to internal holdouts, but also to external car ownership data and a recontact survey one year later to validate those predictions.
To prioritize product requirements, it is of paramount importance that research practitioners accurately represent the pain points they gather back to users for feedback. Traditionally this is done via concept studies, in which data is collected on how potential solutions resonate with potential customers. In this talk we introduce an alternative approach called CUJ Mad Libs, through which researchers can drive customer insights into product decisions at an earlier stage and gain greater confidence in the problems they are solving ahead of MaxDiff surveys.
Most validation studies that compare conjoint preference shares with actual sales focus on a single point in time. We go one step further and compare conjoint preference shares with real market shares over time. We discuss the ability of a simple brand-price conjoint to capture monthly fluctuations and the long-term trend in brand sales. We demonstrate how conjoint preference is superior to traditional stated brand preference, offering an exciting new use case for conjoint analysis.
The paper presents a behavioral framework that improves choice models with nine simple questions. Recalling past shopping experiences has a relevant impact on the results of the subsequent choice exercise. Hit rates and out-of-sample predictions can be significantly improved using this framework. The results of nine empirical studies illustrate the improvement and provide guidelines for working with the additional questions. The paper describes the usefulness of the answers for three different purposes: recalling past shopping situations, serving as covariates in CBC/HB estimation, and consumer segmentation.
Chris discusses four novel applications of Conjoint Analysis (CA) for product design, drawn from twelve years of past Sawtooth Software Conferences. He highlights how each method is useful, describing its key points for success. This is a managerial talk omitting technical details (with links to past proceedings and open-source code). The goal is that attendees will learn something new, think about novel applications of CA in practice, and expand their conception of the problems they can tackle with conjoint analysis!
Partial-profile CBC is sometimes recommended as an alternative to ACBC when conjoint studies have many attributes. This paper presents a study comparing partial-profile CBC with ACBC in a smartphone choice study. The data show that utility estimates and sensitivity analyses from both methods are rather similar, and respondents complete partial-profile CBC questions in less time. However, ACBC outperforms partial-profile CBC on holdout prediction accuracy and in real markets.
With the introduction of Anchored MaxDiff from Jordan Louviere (Indirect Anchoring) and Kevin Lattery (Direct Anchoring), we're given a new set of capabilities and a proxy for the "none" alternative that we previously only had in a Choice-Based Conjoint. But can (and should) we take our smaller CBC designs and turn them into Anchored MaxDiffs? If we do, will we get similar results? Benefits could include automatically capturing interaction effects, easily running TURF analysis, and creating clustering solutions based on preferences for entire products rather than individual features. This paper sets out to understand the pros and cons of turning a simple CBC into an Anchored MaxDiff and to offer recommendations for when it might be most useful.
Treatment choice for localized prostate cancer is preference-sensitive given that several medically viable and effective treatment options are available, each with specific risks and benefits. We present the development and effectiveness of our PreProCare preference assessment instrument to assess the association between value markers (utility levels) and treatment choice in localized prostate cancer patients. We observed that the preference assessment intervention helped prostate cancer patients reveal their preferences, leading to better alignment with their treatment decisions.
This presentation proposes an alternative preference assessment exercise to replace choice-based conjoint in contexts where tradeoffs are uncertain, irreversible, or threaten norms. For difficult choices we propose replacing multiple choices with a single judgment to move toward a goal, and preparing for that judgment with assessments of arguments for and against that goal. We show that for difficult medical options such a task is better accepted by respondents and better supports respondents' resources and needs.
Faced with the challenges presented by established concept testing and screening techniques, the authors describe an alternative approach that utilizes an artificial market in which a static research panel responds to survey-based sequential monadic exposures of concepts offered and priced within a prediction game.
For one common kind of segmentation, several algorithms apply, and we don't know which of them works best. Across data sets constructed to reflect a number of varying data conditions (number of segments, relative segment sizes, degree of segment separation, number of dimensions, and number of variables per dimension), we test which of four robust segmentation algorithms fares best at identifying the correct number of segments and at correctly assigning respondents to segments.
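As an illustration of the kind of test harness such a comparison requires, here is a minimal Python sketch that plants known Gaussian segments with controllable separation and scores how well a candidate algorithm (k-means here, purely as a placeholder) recovers them; the four algorithms actually compared in the paper are not implied:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def simulate_segments(n_per_segment, centers, separation, noise=1.0, seed=0):
    """Draw synthetic respondents from Gaussian segments with controlled separation."""
    rng = np.random.default_rng(seed)
    X, labels = [], []
    for k, center in enumerate(centers):
        X.append(rng.normal(np.asarray(center, dtype=float) * separation, noise,
                            size=(n_per_segment, len(center))))
        labels += [k] * n_per_segment
    return np.vstack(X), np.array(labels)

# One cell of the simulation grid: three equally sized segments, moderate separation
centers = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 1]]
X, truth = simulate_segments(200, centers, separation=2.0)
found = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("segment recovery (adjusted Rand index):", round(adjusted_rand_score(truth, found), 3))
```

Varying segment sizes, separation, dimensions, and variables per dimension amounts to sweeping the arguments above across a grid and repeating the scoring for each algorithm.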
A complex product is made up of simpler, component products. A challenge in modeling demand for complex products lies in simultaneously studying the features that affect component preference and the resulting effect of that preference on marketplace demand. We propose an integrative model for studying complex products utilizing multiple conjoint exercises within a single structure. We illustrate our model using a conjoint study of demand for option packages when purchasing an automobile.
We describe new automated procedures in Sawtooth Software’s market simulator for estimating Willingness to Pay (WTP) in a more appropriate and realistic way than either the common algebraic approach or the two-product simulation approach, neither of which considers competition. We simulate the firm’s product against either fixed competitors or repeated sampling across varying competitors, finding the indifference price that drives the firm’s share back to its share prior to enhancement. We use bootstrap sampling to develop confidence intervals.
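To make the mechanism concrete, here is a minimal Python sketch with made-up part-worth utilities and a simple logit share-of-preference simulator; the actual procedures in Sawtooth Software's market simulator are not reproduced here. The search finds the price premium at which the enhanced product's share falls back to the unenhanced product's share against fixed competitors:

```python
import numpy as np

def shares(utilities):
    """Average individual logit shares across the products in the scenario."""
    e = np.exp(utilities - utilities.max(axis=1, keepdims=True))
    return (e / e.sum(axis=1, keepdims=True)).mean(axis=0)

def wtp_indifference(base_util, enhanced_util_at, competitors, lo=0.0, hi=50.0, tol=1e-3):
    """Bisection search for the price premium that restores the pre-enhancement share."""
    target = shares(np.column_stack([base_util, competitors]))[0]
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        share = shares(np.column_stack([enhanced_util_at(mid), competitors]))[0]
        lo, hi = (mid, hi) if share > target else (lo, mid)  # still too attractive: raise price
    return (lo + hi) / 2.0

# Illustrative use: 500 respondents, 3 competitors; the enhancement adds 0.8 utility
# and each dollar of price premium costs 0.4 utility (all numbers invented).
rng = np.random.default_rng(7)
competitors = rng.normal(0.0, 1.0, size=(500, 3))
base = rng.normal(0.5, 1.0, size=500)
print(wtp_indifference(base, lambda p: base + 0.8 - 0.4 * p, competitors))
```

Bootstrap confidence intervals follow by resampling respondents and repeating the search; averaging over randomly sampled competitive sets extends the same idea to varying competitors.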
We conducted an experiment in two waves: first a conjoint survey, then a follow-up in which we asked the same respondents what they had actually bought, and we compared the conjoint predictions with reality. The conjoint exercise included “FilterCBC” tasks, in which respondents can use filters and sorting to navigate a wide range of product options. We will show what the comparison taught us and, more particularly, how FilterCBC tasks contributed to the comparison under alternative estimation methods.
The impact of experimental design on the inferential results of a choice experiment is often overlooked. Designs are the most complicated and least sexy part of an experiment. This paper will illustrate what can go wrong if you use a poor design and describe some indicators that you might have a less effective or disaster-in-waiting design.
Shapley regression is a useful tool for understanding the relative importance of predictors. The outputs of Shapley regression (Shapley values) are not regression coefficients and cannot be used for prediction. It is possible, though, to derive coefficients by optimizing on the Shapley values. I compare these coefficients to those derived from ordinary regression models and find that both methods lead to similar predictions. New R code will be made available for calculating Shapley values and coefficients.
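The author plans to release R code; for orientation, here is a minimal Python sketch of the underlying Shapley-value decomposition of R² across predictors (the step before the coefficient derivation, which is the paper's own contribution and is not reproduced here):

```python
from itertools import combinations
from math import factorial
import numpy as np

def r_squared(X, y, cols):
    """R^2 of an OLS fit of y on the listed predictor columns (intercept included)."""
    if not cols:
        return 0.0
    Z = np.column_stack([np.ones(len(y)), X[:, cols]])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return 1.0 - (y - Z @ beta).var() / y.var()

def shapley_r2(X, y):
    """Shapley decomposition of R^2: each predictor's average R^2 increment over
    all orders in which it can enter the model; the values sum to the full-model
    R^2.  Exhaustive, so only practical for a modest number of predictors."""
    k = X.shape[1]
    values = np.zeros(k)
    for j in range(k):
        others = [p for p in range(k) if p != j]
        for size in range(k):
            weight = factorial(size) * factorial(k - size - 1) / factorial(k)
            for subset in combinations(others, size):
                values[j] += weight * (r_squared(X, y, list(subset) + [j])
                                       - r_squared(X, y, list(subset)))
    return values
```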
Incorporating data stacking into CBC studies allows for many practical applications, such as testing sets of new products in large CPG categories when research time or financial resources are limited. This research aims to quantify the effects of data stacking using a full CBC study and follow-up new-product waves. We will look at the optimal sample size for the follow-up waves and develop guidelines for consecutive designs and further validation and optimization.
Traditional data clustering algorithms are challenged by both conceptual and practical issues. This paper will present an approach, bi-clustering, which addresses the presence of dyadic relationships in clustering data. It will also profile the resultant “cluster cells” graphically and with a variety of machine-learning-based importance measures.
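For readers unfamiliar with what a bi-clustering returns, here is a minimal Python sketch using scikit-learn's spectral co-clustering, one common bi-clustering algorithm; the paper's specific method and its machine-learning importance measures are not implied:

```python
import numpy as np
from sklearn.cluster import SpectralCoclustering

# Made-up respondent-by-variable matrix with a planted block structure
rng = np.random.default_rng(1)
data = rng.uniform(0.0, 1.0, size=(300, 20))
data[:150, :10] += 2.0          # segment 1 scores high on the first 10 variables
data[150:, 10:] += 2.0          # segment 2 scores high on the last 10 variables

model = SpectralCoclustering(n_clusters=2, random_state=0).fit(data)
print("respondents per row cluster:", np.bincount(model.row_labels_))
print("variables per column cluster:", np.bincount(model.column_labels_))
```

Pairs of row and column clusters like these would then be profiled graphically and with importance measures, as the abstract describes.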
In pricing studies for mature technology markets, researchers face a large number of models and variants. They often use brand-price conjoints combined with individual consideration sets. With 200+ variants, however, many concepts never enter any consideration set, leading to very sparse data. I compare HB, XGBoost, and individual logistic regressions in terms of speed and validity, discussing where XGBoost can have an edge over other methods and how to apply it in practice.
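As one way such a comparison could be set up, here is a minimal Python sketch of an XGBoost choice model on stacked ("long") data, one row per alternative shown, with choice probabilities renormalized within each task; the variable names, simulated data, and hyperparameters are illustrative assumptions, not the author's setup:

```python
import numpy as np
import pandas as pd
import xgboost as xgb

# Simulated stacked brand-price choice data: 4 alternatives per task, 1 if chosen.
rng = np.random.default_rng(3)
n_rows = 20_000
df = pd.DataFrame({
    "task_id": np.repeat(np.arange(n_rows // 4), 4),
    "brand":   rng.integers(0, 8, n_rows),
    "price":   rng.uniform(10, 60, n_rows),
})
true_util = 0.3 * df["brand"] - 0.05 * df["price"] + rng.gumbel(size=n_rows)
df["chosen"] = (true_util.groupby(df["task_id"]).rank(ascending=False) == 1).astype(int)

model = xgb.XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
model.fit(df[["brand", "price"]], df["chosen"])

# Predicted choice probabilities, renormalized within each choice task
df["p"] = model.predict_proba(df[["brand", "price"]])[:, 1]
df["pred_share"] = df["p"] / df.groupby("task_id")["p"].transform("sum")
```

In a real study the feature matrix would carry brand/variant dummies, consideration-set membership, and respondent covariates, and validity would be judged against holdout tasks alongside the HB and individual-logit benchmarks.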
There is extraordinary potential for the application of primary research in alternative asset management, an industry not often associated with typical marketing research. This presentation explores the ways primary research is currently being used and demonstrates opportunities where it could further be applied by alternative asset managers such as hedge funds, venture capital, and private equity firms.