Every year many researchers and insights professionals submit papers to present at the Sawtooth Software Conference. A selection committee carefully reviews the submissions and selects the papers that best fit the conference, ensuring it remains unique and provides value and learning for attendees.
We aim to provide additional insights into the use of RLH (root likelihood), Latent Class (LC), and Scale-Adjusted Latent Class (SALC) models to identify poor-quality respondents, using data from a real-world global study. We will also investigate how these methods can improve the validity of MaxDiff findings, based on corroborating data from the same study.
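To make the screening idea concrete, here is a minimal sketch (not the authors' code) of flagging respondents by RLH, assuming a hypothetical array `chosen_probs` that holds each respondent's model-predicted probability of the alternative they actually chose in each task:

```python
import numpy as np

def rlh(chosen_probs):
    """Root likelihood: geometric mean of the chosen-alternative probabilities per respondent."""
    return np.exp(np.log(chosen_probs).mean(axis=1))

def flag_low_rlh(chosen_probs, n_alternatives, margin=1.2):
    """Flag respondents whose RLH sits below `margin` times the chance level (1 / n_alternatives).
    The multiplier is a judgment call for illustration, not a published cutoff."""
    scores = rlh(chosen_probs)
    return scores < margin / n_alternatives, scores
```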
This paper investigates some of the methodological shortcomings of the frequently used Van Westendorp price model and explores whether extensions can be built on the original framework. Newer statistical approaches under consideration include Bayesian inference, multilevel modeling, mixture models, and survival analysis. We attempt to produce easier, more interpretable outputs in the form of probability distributions to support better pricing decisions. In parallel, an R package will be written and shared.
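As background for the extensions, a minimal Python sketch of the classic Van Westendorp intersections (the function and variable names are ours, and only the optimal and indifference price points are computed):

```python
import numpy as np

def _cum_share(prices, grid, direction):
    """Share of respondents for whom each grid price falls in the stated region."""
    prices = np.asarray(prices, dtype=float)
    if direction == "up":        # rising curves: 'expensive' / 'too expensive'
        return np.array([(prices <= p).mean() for p in grid])
    return np.array([(prices >= p).mean() for p in grid])   # falling: 'cheap' / 'too cheap'

def _crossing(grid, a, b):
    """Linearly interpolated first crossing point of curves a and b."""
    d = a - b
    idx = np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0]
    if len(idx) == 0:
        return np.nan
    i = idx[0]
    t = d[i] / (d[i] - d[i + 1])
    return grid[i] + t * (grid[i + 1] - grid[i])

def van_westendorp(too_cheap, cheap, expensive, too_expensive, n_grid=200):
    all_prices = np.concatenate([too_cheap, cheap, expensive, too_expensive])
    grid = np.linspace(all_prices.min(), all_prices.max(), n_grid)
    tc = _cum_share(too_cheap, grid, "down")
    ch = _cum_share(cheap, grid, "down")
    ex = _cum_share(expensive, grid, "up")
    te = _cum_share(too_expensive, grid, "up")
    return {
        "optimal_price_point": _crossing(grid, tc, te),   # 'too cheap' x 'too expensive'
        "indifference_price": _crossing(grid, ch, ex),    # 'cheap' x 'expensive'
    }
```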
Too often, good thinking is rendered less accessible by cluttered and ineffective presenting. Here are ten ideas you can use to make your presentations more effective in the context of complex methods, research, and thinking. The presentation draws on best practices from sources such as Edward Tufte, Nancy Duarte, Garr Reynolds, and Barbara Minto. Put simply, these ideas are human-centric, focusing on how people understand and learn rather than being presenter- or results-centric.
Fifty-seven million Americans have a disability. Thirty-six million experience challenges using the internet, such as limited dexterity or vision or hearing problems. Designing research without accessibility in mind can lead to pitfalls including frustration, ostracization, and legal issues. This presentation will focus on the challenges we faced in creating an accessible survey using Sawtooth Software. Follow our journey from aimless wandering to resolution. Learn about the pitfalls, the aha moments, and where we are now.
We demonstrate the efficacy of including passive geolocation data on visits to automotive service locations as covariates in a standard CBC model to predict car purchase. We then compare those predictions not only to internal holdouts, but also to external car ownership data and a recontact survey one year later to validate those predictions.
To prioritize product requirements, it is of paramount importance that research practitioners accurately represent gathered pain points back to users for feedback. Traditionally this is done via concept studies, in which data are collected on how potential solutions resonate with potential customers. In this talk we introduce an alternative approach called CUJ Mad Libs, through which researchers can drive customer insights into product decisions at an earlier stage and have greater confidence in the problems they are solving ahead of MaxDiff surveys.
Most validation studies that compare conjoint preference shares with actual sales focus on only a single point in time. We go one step further and compare conjoint preference shares with real market shares over time. We discuss the ability of a simple brand-price conjoint to capture monthly fluctuations and the long-term trend in brand sales. We demonstrate how conjoint preference is superior to traditional stated brand preference, offering an exciting new use case for conjoint analysis.
The paper presents a behavioral framework that improves choice models with 9 simple questions. The recall of past shopping experiences has a relevant impact on the results of the subsequent choice exercise. Hit rates and out-of-sample predictions can be significantly improved using this framework. The results of 9 empirical studies illustrate the improvement and provide guidelines on how to work with the additional questions. The paper describes the usefulness of the answers for three different purposes: recall of past shopping situations, use as covariates in CBC/HB estimation, and consumer segmentation.
Chris discusses 4 novel applications of Conjoint Analysis (CA) for product design, drawn from 12 years of past Sawtooth Software Conferences. He highlights how each method is useful, describing its key points for success. This is a managerial talk that omits technical details (links to the past proceedings and open-source code will be provided). The goal is that attendees will learn something new, think about novel applications of CA in practice, and expand their conception of the problems they can tackle with conjoint analysis!
Partial profile CBC is sometimes recommended as an alternative to ACBC when conjoint studies have many attributes. This paper presents results comparing partial profile CBC with ACBC in a smartphone choice study. The data show that utility estimates and sensitivity analyses from both methods are rather similar, and that respondents complete partial profile CBC questions in less time. However, ACBC outperforms partial profile CBC on holdout prediction accuracy and in real markets.
With the introduction of Anchored MaxDiff from Jordan Louviere (Indirect Anchoring) and Kevin Lattery (Direct Anchoring), we're given a new set of capabilities and a proxy for the "none" alternative that we previously only had in a Choice-Based Conjoint. But can (and should) we take our smaller CBC designs and turn them into Anchored MaxDiffs? If we do, will we get similar results? Benefits could include automatically capturing interaction effects, easily running TURF analysis, and creating clustering solutions based on reactions to the entire product preference, not just individual feature preference. This paper sets out to understand the pros and cons of turning a simple CBC into an Anchored MaxDiff and to offer recommendations for when it might be most useful.
Treatment choice for localized prostate cancer is preference sensitive given that several medically viable and effective treatment options are available, each with specific risks and benefits. We present the development and effectiveness of our PreProCare preference assessment instrument to assess the association between value markers (utility levels) and treatment choice in localized prostate cancer patients. We observed that the preference assessment intervention helped prostate cancer patients reveal their preferences, leading to better alignment with treatment decisions.
This presentation proposes an alternative preference assessment exercise to replace choice-based conjoint in contexts where tradeoffs are uncertain, irreversible, or threaten norms. For difficult choices we propose replacing multiple choices with a single judgment to move toward a goal, and preparing for that judgment with assessments of arguments for and against that goal. We show that for difficult medical options such a task is better accepted by respondents and better supports respondent resources and needs.
Faced with the challenges presented by established concept testing and screening techniques, the authors describe an alternative approach that utilizes an artificial market in which a static research panel responds to survey-based sequential monadic exposures of concepts offered and priced within a prediction game.
For one common kind of segmentation, several algorithms apply, and we don’t know which of them works best. Across data sets constructed to reflect a number of varying data conditions (number of segments, relative segment sizes, degree of segment separation, number of dimensions, and number of variables per dimension), we test which of four robust segmentation algorithms fares best in terms of identifying the correct number of segments and correctly assigning respondents to segments.
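A sketch of the simulation logic, under the assumption that synthetic data with known segment membership is scored by adjusted Rand index (the algorithms below are placeholders, not necessarily the four tested in the paper):

```python
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score
from sklearn.mixture import GaussianMixture

# Known structure: 4 segments of unequal size, moderate separation, 12 basis variables.
X, truth = make_blobs(n_samples=[400, 300, 150, 150], n_features=12,
                      cluster_std=2.0, random_state=42)

candidates = {
    "k-means": KMeans(n_clusters=4, n_init=10, random_state=42),
    "Ward hierarchical": AgglomerativeClustering(n_clusters=4, linkage="ward"),
    "Gaussian mixture": GaussianMixture(n_components=4, random_state=42),
}
for name, model in candidates.items():
    labels = model.fit_predict(X)
    # Adjusted Rand index: 1.0 means perfect recovery of the true segment assignments.
    print(f"{name:18s} ARI: {adjusted_rand_score(truth, labels):.3f}")
```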
A complex product is made up of simpler, component products. A challenge in modeling demand for complex products lies in simultaneously studying the features that affect component preference and their resulting effect on marketplace demand. We propose an integrative model for studying complex products utilizing multiple conjoint exercises within a single structure. We illustrate our model using a conjoint study of demand for option packages when purchasing an automobile.
We describe new automated procedures in Sawtooth Software’s market simulator for estimating Willingness to Pay (WTP) in a more appropriate and realistic way than either the common algebraic approach or the two-product simulation approach, neither of which considers competition. We simulate the firm’s product against either fixed competitors or repeated sampling across varying competitors, finding the indifference price that drives the firm’s share back to its share prior to the enhancement. We use bootstrap sampling to develop confidence intervals.
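A simplified sketch of the indifference-price search described above (a toy re-implementation with a linear price term, not Sawtooth Software's simulator; utilities and names are hypothetical):

```python
import numpy as np
from scipy.optimize import brentq

def firm_share(concept_utils, price, beta_price, competitor_utils):
    """Aggregate logit share of preference for the firm's concept vs. fixed competitors."""
    firm = concept_utils + beta_price * price                 # (n_respondents,)
    utils = np.column_stack([firm, competitor_utils])         # (n_respondents, 1 + n_comp)
    p = np.exp(utils - utils.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    return p[:, 0].mean()

def wtp_for_enhancement(base_utils, enhanced_utils, beta_price,
                        competitor_utils, base_price, max_premium=1000.0):
    """Premium at which the enhanced concept's share falls back to the base concept's share."""
    target = firm_share(base_utils, base_price, beta_price, competitor_utils)
    gap = lambda premium: firm_share(enhanced_utils, base_price + premium,
                                     beta_price, competitor_utils) - target
    return brentq(gap, 0.0, max_premium)   # assumes beta_price < 0, so gap falls as premium rises
```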
We conducted an experiment in two waves: first a conjoint survey; later, we asked the same respondents what they had actually bought and compared the conjoint predictions with reality. The conjoint exercise included “FilterCBC” tasks in which respondents could use filters and sorting to navigate through a wide range of product options. We will show what the comparison revealed and, more particularly, how FilterCBC tasks contributed to the comparison under alternative estimation methods.
The impact of experimental design on the inferential results of a choice experiment is often overlooked. Designs are the most complicated and least sexy part of an experiment. This paper will illustrate what can go wrong if you use a poor design and describe some indicators that you might have a less effective or disaster-in-waiting design.
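One widely used indicator, shown here as a hedged illustration rather than the paper's own diagnostic, is the D-error of the design under null preferences; a near-singular information matrix is an early warning sign:

```python
import numpy as np

def d_error_null(design):
    """design: array (n_tasks, n_alts, n_params) of effects-coded attributes."""
    n_tasks, n_alts, n_params = design.shape
    info = np.zeros((n_params, n_params))
    p = np.full(n_alts, 1.0 / n_alts)              # null model: equal choice probabilities
    w = np.diag(p) - np.outer(p, p)
    for X in design:                               # X is (n_alts, n_params) for one task
        info += X.T @ w @ X
    sign, logdet = np.linalg.slogdet(info)
    if sign <= 0:
        return np.inf                              # singular information matrix: bad design
    return np.exp(-logdet / n_params)              # D-error = det(I)^(-1/K); lower is better
```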
Shapley regression is a useful tool for understanding the relative importance of predictors. The outputs of Shapley regression, Shapley values, are not regression coefficients and cannot be used directly for prediction. It is possible, though, to derive coefficients by optimizing on the Shapley values. I compare these coefficients to those derived from ordinary regression models and find that both methods lead to similar predictions. New R code will be made available for calculating Shapley values and coefficients.
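For readers unfamiliar with the mechanics, a small exact-enumeration sketch of the Shapley decomposition of R² (feasible for a modest number of predictors; the paper's forthcoming R code is separate from this illustration):

```python
import itertools
import math
import numpy as np

def r_squared(X, y, cols):
    """R^2 of an OLS fit on the predictor columns in `cols` (0.0 for the empty set)."""
    if not cols:
        return 0.0
    Xs = np.column_stack([np.ones(len(y)), X[:, cols]])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ beta
    return 1.0 - resid.var() / y.var()

def shapley_values(X, y):
    """Shapley value of each predictor: its weighted average marginal contribution to R^2."""
    k = X.shape[1]
    values = np.zeros(k)
    for j in range(k):
        others = [i for i in range(k) if i != j]
        for r in range(k):
            for subset in itertools.combinations(others, r):
                weight = math.factorial(r) * math.factorial(k - r - 1) / math.factorial(k)
                gain = r_squared(X, y, list(subset) + [j]) - r_squared(X, y, list(subset))
                values[j] += weight * gain
    return values   # the values sum to the full-model R^2
```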
Incorporating data stacking into CBC studies allows for many practical applications, such as testing sets of new products in large CPG categories when research time or financial resources are limited. This research aims to quantify the effects of data stacking using a full CBC study and follow-up new-product waves. We will look at the optimal sample size for the follow-up waves and develop guidelines for consecutive designs and further validation and optimization.
Traditional data clustering algorithms are challenged by both conceptual and practical issues. This paper will present an approach, bi-clustering, that addresses the presence of dyadic relationships in clustering data. It will also profile the resultant “cluster cells” graphically and with a variety of machine-learning-based importance measures.
In pricing studies for mature technological markets, researchers face a large number of models and variants. They often use brand-price conjoints combined with individual consideration sets. With 200+ variants, however, many concepts never enter any consideration set, leading to very sparse data. I compare HB, XGBoost, and individual logistic regressions in terms of speed and validity, discussing where XGBoost can have an edge over other methods and how to apply it in practice.
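A hedged sketch of such a comparison on placeholder data (it assumes the `xgboost` package alongside scikit-learn; the real study uses sparse consideration-set data rather than the simulated matrix below):

```python
import numpy as np
import xgboost as xgb
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 40))                       # placeholder predictors (brand/price codes)
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=5000) > 0).astype(int)   # placeholder choices

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "XGBoost": xgb.XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name:20s} mean AUC: {auc.mean():.3f}")
```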
There is extraordinary potential for the application of primary research in alternative asset management, an industry not often associated with typical marketing research. This presentation explores the ways primary research is currently being used and demonstrates opportunities where it could further be applied by alternative asset managers such as hedge funds, venture capital, and private equity firms.
Cheaters are overtaking online panel data collection in staggering numbers. Both B2C and B2B audiences are impacted, some quite severely. In addition to the tools and techniques panel companies are bringing to fight this common enemy, researchers are having to up their game as well. In this presentation, we quantify the cheating problem and its trend over time. We then discuss best practices to identify cheaters and achieve the cleanest data set possible.
In the not-too-distant past, surveys were administered via pen and paper, then manually analyzed and reported. Today, most survey data is collected electronically but is still cleaned, analyzed, and reported manually. This takes time and allows for errors. The continuing demand for data means more efficient methods are needed. This presentation demonstrates the integration of Sawtooth Software with multiple systems to create a (nearly) automated study from first respondent to final word.
The Kano method assesses whether a proposed product or feature is attractive or delightful, as opposed to being merely necessary or unexciting. It gives a compelling answer, but one that may not be correct: its implementation rests on questionable theory and low-quality survey items. We discuss the theory and method and present an empirical study of Kano reliability. Alternatives, such as MaxDiff paired with traditional scales, obtain similar benefits while using higher-quality, more reliable methods.
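For context on what is being critiqued, here is one commonly published version of the Kano evaluation table expressed as code (answer coding and category letters can differ slightly across sources; this is not the authors' proposed alternative):

```python
# Answers coded 1=like, 2=expect (must-be), 3=neutral, 4=can tolerate, 5=dislike.
KANO_TABLE = {   # (functional answer, dysfunctional answer) -> category
    1: {1: "Q", 2: "A", 3: "A", 4: "A", 5: "O"},
    2: {1: "R", 2: "I", 3: "I", 4: "I", 5: "M"},
    3: {1: "R", 2: "I", 3: "I", 4: "I", 5: "M"},
    4: {1: "R", 2: "I", 3: "I", 4: "I", 5: "M"},
    5: {1: "R", 2: "R", 3: "R", 4: "R", 5: "Q"},
}   # A=attractive, O=one-dimensional, M=must-be, I=indifferent, R=reverse, Q=questionable

def kano_category(functional_answer, dysfunctional_answer):
    """Classify one respondent's answers for one feature."""
    return KANO_TABLE[functional_answer][dysfunctional_answer]
```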
How do you capture complex preferences while also keeping respondents engaged? Adaptive Choice-Based Conjoint (ACBC) provides multiple alternatives for accomplishing both goals. But procuring precious survey content for choice customization can be challenging, particularly when the benefits may not be immediately clear at the design phase of the project. This presentation will use a series of case studies to evaluate different inputs to choice experiments and help the analyst quantify the trade-off between design efficiency and response quality for each.
We propose an extension to Menu Based Conjoint (MBC) in the form of an add-on activity to gauge “Long Term” behavior and risks of customer attrition in the presence of price increases. Unlike previous methods, this activity does not require a separate conjoint activity, and only takes 30 seconds of survey time. Our estimates will be compared to existing approaches, and we will discuss successful applications to current research.
Traditional brand trackers focus on a brand’s ability to generate volume through measures of consideration and preference but often overlook a brand’s ability to charge a higher price. We show how conjoint analysis not only measures brand volumes more accurately but can also measure price premium. Furthermore, we show that what drives brand premiums is distinctly different from what drives brand choice. Hence, we provide a new way of thinking about how brands should approach building brand equity.
Conjoint analysis is a method that is commonly used to optimise the configuration and pricing of products/services in a competitive environment. We typically want to understand the optimum “product” price, but often it is important to understand how much consumers are willing to pay for individual features of that product. This paper will compare three Willingness to Pay methods using empirical data sets to get a better understanding of the pros and cons of each method.
Have you finished a conjoint and ended up with results that didn’t seem realistic? We present a case study of integrating distributional and informational assumptions into a subscription-product conjoint in which the raw results predicted unrealistic product launch expectations. We incorporate a variety of techniques, including Otter’s method, adding a distributional gate for prospective clients, and increasing the none parameter to raise the value of the external good, or ‘none’ option.
Recent studies suggest that we can improve the precision of HB utility estimates by relating features of the choice task to the response time using Gaussian Process Regression. We show how this methodology performs under a variety of conditions of interest to practitioners. We also compare Gaussian Process models with linear models. We conclude with a discussion on the benefits and drawbacks of modeling response time alongside HB utility estimation and provide recommendations for practitioners.
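A minimal sketch of the general idea, relating task-level features to log response time with a Gaussian Process (hypothetical features and a scikit-learn kernel choice of our own, not the exact model from the cited studies):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
# Hypothetical task-level features, e.g. number of attribute differences, task position,
# and the utility gap between the best and second-best alternative.
task_features = rng.normal(size=(600, 3))
log_response_time = (1.0 + 0.3 * task_features[:, 0]
                     - 0.2 * task_features[:, 2] + rng.normal(0, 0.3, 600))

# Anisotropic RBF kernel (one length scale per feature) plus a white-noise term.
kernel = 1.0 * RBF(length_scale=np.ones(3)) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(task_features, log_response_time)
pred_mean, pred_sd = gp.predict(task_features, return_std=True)
```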
Predicting the volume of a new product to be launched is a task that many researchers have done at least once. There are a variety of methodologies to accomplish the task, grouped into three families: real-life tests, benchmarking, and replication of market environments. In this paper, we show the pros and cons of each and explain an approach that overcomes some of their limitations by using them in combination.
Vector Autoregression (VAR) is often used for modeling sales of P items over time. VAR forecasts sales at time t_new using previous sales at t_lag, coupled with attributes explaining those changes such as price, distribution, and trend. We also model correlated sourcing among the P items using a simulated population ~ Multivariate Normal(α_lag, Σ). We show how to use conjoint experiments to inform Σ and how that significantly improves predictions versus modeling Σ from sales data alone.
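A minimal sketch of the sourcing step, assuming individual-level conjoint utilities are available to inform Σ (hypothetical inputs and function names; the VAR itself is not shown):

```python
import numpy as np

def simulated_population(item_utilities, alpha_lag, n_draws=10_000, seed=0):
    """item_utilities: (n_respondents, P) individual-level conjoint utilities for the P items.
    alpha_lag: (P,) lagged intercepts. Returns a simulated population of shape (n_draws, P)."""
    sigma = np.cov(item_utilities, rowvar=False)        # Σ informed by the conjoint experiment
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(alpha_lag, sigma, size=n_draws)
```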
Product line design is challenged by the diversity of demand in the market and the wide variety of product features available for sale. Some consumers have broad experience in the activities associated with a product category and others engage narrowly and rely on products in more limited ways. The number of product features and their levels is often large and difficult to characterize in a low-dimensional space. Evaluating marketing opportunities when there exist many usage contexts and product features requires the integration of information on what and when features are demanded, and by whom. We propose an archetypal analysis that combines data on the context of consumption, alternative product usage and feature preferences useful for product line design and management.
This talk discusses some general principles of discrete choice experiment design and introduces the conjointTools R package, which provides tools for assessing experiment designs and sample size requirements under a variety of conditions prior to fielding an experiment. The package contains functions for generating designs, simulating choice data according to assumed models, and estimating models using simulated data to inform sample size requirements, including using designs exported from Sawtooth Software.
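The same simulate-then-estimate workflow can be sketched outside the package; the Python below illustrates the idea only and does not reflect the conjointTools API:

```python
import numpy as np
from scipy.optimize import minimize

def simulate_choices(design, beta, rng):
    """design: (n_tasks_total, n_alts, n_params). Returns the chosen alternative per task."""
    v = design @ beta
    p = np.exp(v - v.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    return np.array([rng.choice(p.shape[1], p=row) for row in p])

def neg_loglik(beta, design, choices):
    """Multinomial logit negative log-likelihood."""
    v = design @ beta
    vmax = v.max(axis=1)
    lse = np.log(np.exp(v - vmax[:, None]).sum(axis=1)) + vmax
    return -(v[np.arange(len(choices)), choices] - lse).sum()

def standard_errors(design, choices, n_params):
    res = minimize(neg_loglik, np.zeros(n_params), args=(design, choices), method="BFGS")
    return np.sqrt(np.diag(res.hess_inv))      # approximate standard errors from the MNL fit

rng = np.random.default_rng(0)
true_beta = np.array([0.8, -0.5, 0.3])
for n_respondents in (200, 400, 800):           # 8 tasks x 3 alternatives per respondent
    design = rng.normal(size=(n_respondents * 8, 3, 3))
    choices = simulate_choices(design, true_beta, rng)
    print(n_respondents, np.round(standard_errors(design, choices, 3), 3))
```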
Researchers often want to test how a set of items or attributes rank on multiple outcome metrics. One way to do this is by utilizing multiple MaxDiffs in a survey. The present study explores three approaches for this method, with special consideration given to the Tandem MaxDiff approach: presenting both outcome metrics on each screen of a single MaxDiff exercise.
Co-clustering is the simultaneous clustering of the rows and columns of a data matrix. When applied to rating questions or MaxDiff scores, for example, it provides excellent insight into the underlying heterogeneity of the data: which respondents are similar and which items are similar. Adding covariates to the process (both for respondents and for the variables!) adds another layer of insight. This paper will show different ways of visualising co-clustered data and explain heuristics for co-clustering with covariates.
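A minimal sketch of plain co-clustering on a respondents-by-items matrix of non-negative MaxDiff scores (the covariate extension discussed in the paper is not shown here):

```python
import numpy as np
from sklearn.cluster import SpectralCoclustering

rng = np.random.default_rng(0)
scores = rng.random((500, 24)) * 100        # placeholder for rescaled MaxDiff scores (0-100)

model = SpectralCoclustering(n_clusters=4, random_state=0).fit(scores)
respondent_segments = model.row_labels_     # which respondents cluster together
item_groups = model.column_labels_          # which items cluster together

# Reorder rows and columns by their cluster labels to visualise the block structure.
reordered = scores[np.argsort(respondent_segments)][:, np.argsort(item_groups)]
```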
Market segmentation usually includes a typing tool that predicts who falls in which segment: vital for a successful implementation. Using commercial real-life datasets, we compare support vector machines and neural nets against a common approach in commercial practice: linear discriminant analysis. We show that ML approaches yield superior performance in terms of segment-level prediction, interpretability and expected profitability.
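A hedged sketch of such a comparison on simulated data (the real typing tools are fit on the commercial datasets described above):

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder typing-tool inputs: 20 survey variables, 4 known segments.
X, segment = make_classification(n_samples=2000, n_features=20, n_informative=8,
                                 n_classes=4, n_clusters_per_class=1, random_state=0)
models = {
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
    "Neural net": make_pipeline(StandardScaler(),
                                MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                                              random_state=0)),
}
for name, model in models.items():
    acc = cross_val_score(model, X, segment, cv=5)
    print(f"{name:12s} mean holdout accuracy: {acc.mean():.3f}")
```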
We present results to validate the findings from the Kurz/Binner 2021 Sawtooth Software Conference presentation that won best paper. Kurz/Binner demonstrated how a simple grid of 9 binary “Behavioral Calibration Questions” that probed how respondents regarded brand, innovation, and price could significantly improve the consistency of respondents’ CBC data and also their holdout predictive validity. We extend the binary questions to include 3 questions related to the importance of features.
In 2021 we presented how 9 simple behavioral questions can enhance choice models. The recall of past shopping experiences has a relevant impact on the results of the following choice exercise. Building on these findings, we take the next step by using the behavioral framework to set up a Bayesian model for simultaneous attribute and parameter selection. This more sophisticated approach is used to understand the diversity of consumer preferences in even more detail.
Properly choosing the variables to include in cluster analysis allows analysts to address a set of serious problems that can impair segmentation studies. We'll use artificial and empirical data sets to test three methods for variable selection: an automatic variable selection algorithm and two manual processes (stepwise discriminant analysis and a stepwise analysis-of-variance procedure). We'll find out whether any of these methods performs well enough to reduce barriers to successful segmentation.
We programmed a choice experiment that mimics the user experience of an online product comparison. Respondents saw dozens of concepts per screen and were given the option to sort and filter them for easier navigation through the product space. Aside from the respondent choices, the experiment captured the usage of filters and additional self-stated information (outside the experiment) about the respondents’ decision criteria, preferences, and willingness to pay. We compared the results of various ways of incorporating these different types of “secondary signals” into the estimation.
Complicated pricing studies can end up with many part-worth price levels. Recent conference work recommends a piecewise function with 2 to 6 breakpoints (aside from the endpoints), and 12-20 breakpoints have been suggested as potentially useful. We want to investigate whether a dozen or more breakpoints overfit and whether we are better off with a more parsimonious approach. This investigation will also test whether it would be best to have multiple simpler price effects. RLH, holdout hit rate, and the percentage of effects that don't need to be constrained will be used as testing criteria.
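For reference, a minimal sketch of piecewise-linear price coding with explicit breakpoints (one slope per segment between consecutive breakpoints; the estimation itself is not shown):

```python
import numpy as np

def piecewise_price_coding(price, breakpoints):
    """Return one column per segment: how much of `price` falls inside that segment.
    `breakpoints` must include the endpoints, e.g. [1.99, 2.99, 4.49, 5.99]."""
    b = np.asarray(breakpoints, dtype=float)
    price = np.atleast_1d(np.asarray(price, dtype=float))
    widths = np.diff(b)                                      # segment widths
    cols = [np.clip(price - b[i], 0.0, widths[i]) for i in range(len(widths))]
    return np.column_stack(cols)                             # (n_prices, n_segments)

# Example: with breakpoints [2, 3, 4.5, 6], a price of 3.75 codes as [1.0, 0.75, 0.0].
```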
Often, we run into the challenge that we can only test a certain number of levels per attribute to ensure the estimation remains robust. However, there are scenarios in CBC studies where we want to test many more levels and simply want to find out which levels and combinations of levels are best. By employing Thompson Sampling, we select preferred products to oversample for each new respondent.
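A minimal sketch of the Thompson Sampling step, assuming each candidate product keeps a Beta posterior over its probability of being chosen when shown (hypothetical bookkeeping; the full adaptive design logic is not shown):

```python
import numpy as np

rng = np.random.default_rng(0)
n_products = 500
wins = np.ones(n_products)        # Beta prior: alpha = 1
losses = np.ones(n_products)      # Beta prior: beta = 1

def products_to_oversample(n_pick=20):
    """Draw one sample per product from its Beta posterior and keep the top performers."""
    draws = rng.beta(wins, losses)
    return np.argsort(draws)[-n_pick:]

def update(shown, chosen):
    """After a task: credit the chosen product, debit the shown-but-not-chosen products."""
    wins[chosen] += 1
    not_chosen = [p for p in shown if p != chosen]
    losses[not_chosen] += 1
```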