If you knew box office performance, Rotten Tomatoes rating, and online sentiment, could you predict the appeal of an intellectual property (IP) to the average theme park guest? Turns out, you can! Learn about the ways UDX is using publicly-available data to supplement survey research, how we’re dealing with cross-country differences in scale use, and how we communicate the appeal of 120+ IPs to a busy executive audience.
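A minimal sketch of the underlying idea, with toy data and illustrative variable names (not UDX's actual model): regress surveyed appeal on the public signals and check fit.

```r
# Toy illustration: predict surveyed IP appeal from public signals.
# All variables, coefficients, and data here are made up for the sketch.
set.seed(2)
ip <- data.frame(
  box_office = runif(120, 1, 9),     # e.g., log worldwide gross
  rt_score   = runif(120, 20, 95),   # Rotten Tomatoes rating
  sentiment  = rnorm(120)            # scaled online-sentiment index
)
ip$appeal <- 2 + 0.4 * ip$box_office + 0.03 * ip$rt_score +
  0.5 * ip$sentiment + rnorm(120, sd = 0.5)
fit <- lm(appeal ~ box_office + rt_score + sentiment, data = ip)
summary(fit)$r.squared
```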
In 2023 Leatherman Tool Group celebrated its 40th anniversary building multipurpose tools. Leatherman has ambitious growth goals and sells its products in over 80 countries worldwide – all while committing to keeping manufacturing in the United States at its headquarters in Portland, Oregon. To drive innovation in 2024, Leatherman Consumer Insights partnered with Escalent, a global research firm, to execute a series of international research studies supporting new product development. In this case study, Leatherman and Escalent highlight the value of advanced analytics and mixed methods to deliver impactful insights at a global scale.
Procter & Gamble (P&G) conducted 73 studies in oral health care, producing 1,554 unique claims. To compare these claims, a Sparse MaxDiff study was designed, using a subset of claims to create a database. AI models were then trained to predict claim success, with the most successful model using OpenAI embeddings and a neural network. The study demonstrated AI’s potential for predicting claim outcomes but highlighted the need for a robust claims database for accurate predictions.
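As a hedged sketch of the modeling step, here is a small neural network on stand-in embeddings; in the actual study the inputs would be OpenAI text embeddings of each claim and the outcome would come from the Sparse MaxDiff database.

```r
library(nnet)
# Stand-in for OpenAI claim embeddings: 16-d random vectors (toy data).
set.seed(4)
emb <- matrix(rnorm(400 * 16), 400, 16)
success <- rbinom(400, 1, plogis(emb %*% rnorm(16)))  # toy "claim success"
fit <- nnet(x = emb, y = success, size = 8, decay = 0.01,
            entropy = TRUE, maxit = 300, trace = FALSE)
mean((fit$fitted.values > 0.5) == success)  # in-sample accuracy
```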
The Microsoft Research + Insights team has been exploring using AI and unstructured data to improve data quality in conjoint studies, working with PSB. Initial experiments explored using AI to create or augment samples, and conversational AI to probe respondent choices and use the answers as covariates. Results suggest AI enhances respondent engagement and model strength. Building on this, we report on an experiment comparing voice and text responses, and other work using AI to scale small-sample research for more robustness.
Creating actionable and engaging segmentation models is part science and part art, ideally producing a mathematically sound and easily explainable model. To avoid the overwhelm (and myriad technical problems) of too many variables, we drew on various sources to tackle two key questions that helped us streamline our approach and create a solid, actionable – and, dare I say, fun – segmentation model faster than you can say “factor analysis.” All set to music, of course.
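One way to make the variable-reduction step concrete (a sketch with toy ratings, not the authors' data): factor-analyze the items, then cluster on the factor scores.

```r
# Toy example: 9 rating items driven by 3 latent constructs.
set.seed(1)
fac <- matrix(rnorm(300 * 3), 300, 3)
X <- fac[, rep(1:3, each = 3)] + matrix(rnorm(300 * 9, sd = 0.6), 300, 9)
colnames(X) <- paste0("item", 1:9)
fa <- factanal(X, factors = 3, scores = "regression")  # streamline the basis
seg <- kmeans(fa$scores, centers = 4, nstart = 25)     # segment on the factors
table(seg$cluster)
```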
Comparing a traditional CBC methodology to two new approaches, this paper will explore new ways to conduct conjoint exercises on mobile devices that are faster and more enjoyable than the traditional approach. Attendees will learn how well the results from two new approaches compare to the results generated by traditional choice exercises as well as how to optimize the accuracy of the new approaches.
Designing experiments with a large number of product attributes can be challenging. We propose a novel approach, Token-Based Conjoint (TBC), which reframes stated choice experiments to manage complexity more effectively. TBC is especially useful for products like subscription services with many binary features. By dynamically adjusting feature selection and incorporating a dual-response likelihood question, TBC delivers deeper insights into which feature combinations drive adoption, providing marketers with actionable results.
Synthetic AI Avatars are making their way into the survey world, offering a fresh approach to enhance engagement and improve data quality. This study explores their potential across diverse use cases, from fully interactive surveys to personalized introductions and targeted prompts, while tackling challenges like bias and monotony in both online and offline settings. Join us to find out if AI Avatars are the next big thing in market research surveys or just another passing trend!
We introduce advanced Bayesian choice modeling capabilities that extend beyond the traditional Multinomial Logit (MNL) model available in Sawtooth Software’s CBCHB programs. By leveraging the R packages RSGHB and Apollo, we will demonstrate fitting a diminishing-returns component within a utility function in an MNL model, implementing a Hierarchical Bayes nested logit model, and estimating a joint discrete/continuous volumetric model. Our session will guide you through these concepts and demonstrate how to estimate these models in R using RSGHB and Apollo. Additionally, we will showcase how the latest version of Latent Gold can kick-start your modeling process by exporting code directly compatible with RSGHB.
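To fix ideas before the RSGHB/Apollo walkthrough, here is the diminishing-returns utility estimated in base R on simulated two-alternative data; this is a sketch of the specification only, not the session's RSGHB/Apollo code, and all data and parameter values are toy.

```r
# u = b_price * price + b_feat * (1 - exp(-lambda * qty));
# lambda is kept positive by estimating it on the log scale.
set.seed(7)
n <- 1000
d <- data.frame(price1 = runif(n, 1, 3), price2 = runif(n, 1, 3),
                qty1 = sample(1:5, n, TRUE), qty2 = sample(1:5, n, TRUE))
util <- function(b, price, qty) b[1] * price + b[2] * (1 - exp(-exp(b[3]) * qty))
v1 <- util(c(-1, 2, log(0.8)), d$price1, d$qty1)
v2 <- util(c(-1, 2, log(0.8)), d$price2, d$qty2)
d$choice2 <- rbinom(n, 1, plogis(v2 - v1))  # 1 if alternative 2 chosen
negLL <- function(b) {
  dv <- util(b, d$price2, d$qty2) - util(b, d$price1, d$qty1)
  -sum(dbinom(d$choice2, 1, plogis(dv), log = TRUE))
}
fit <- optim(c(0, 0, 0), negLL, method = "BFGS")
c(b_price = fit$par[1], b_feat = fit$par[2], lambda = exp(fit$par[3]))
```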
Manual coding of open-ended survey responses is a labor-intensive, error-prone task. Discover how AI transformer models like text embeddings and large language models (LLMs) can automate and enhance the analysis of open-ends. This presentation compares these advanced methods with manual coding and traditional techniques, highlighting their strengths in understanding context and semantics. Attendees will gain practical expertise for integrating these technologies into their analysis of open-ends, and a clear grasp of their benefits and limitations.
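A minimal sketch of one embedding-based approach (placeholder vectors stand in for real text embeddings): score each open-end against each code label by cosine similarity and assign the nearest code.

```r
# Toy 4-d vectors; in practice both responses and code labels would first be
# run through a text-embedding model.
codes <- c("price", "taste", "packaging")
code_emb <- rbind(c(1, 0, 0, 0), c(0, 1, 0, 0), c(0, 0, 1, 0))
resp_emb <- rbind(c(.9, .1, 0, 0), c(.2, .8, .1, 0))
cos_sim <- function(a, B) (B %*% a) / (sqrt(sum(a^2)) * sqrt(rowSums(B^2)))
assigned <- apply(resp_emb, 1, function(e) codes[which.max(cos_sim(e, code_emb))])
assigned  # "price" "taste"
```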
This paper explores how Generative AI, using Large Language Models and multi-agent systems, is humanizing surveys through conversational interactions. By simulating natural, human-like dialogues, these AI-driven surveys enhance respondent engagement and improve data quality. A real estate app case study shows how conversational AI creates a more intuitive, personalized survey experience, allowing users to ask clarifying questions and provide contextual feedback. Our findings highlight how AI can revolutionize market research by making it more responsive, interactive, and human-centered.
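As a hedged sketch of the mechanics (an OpenAI-style endpoint; the model name and prompt are illustrative, not the paper's system), a single conversational probe on an open answer might look like:

```r
library(httr2)
probe <- function(answer, api_key = Sys.getenv("OPENAI_API_KEY")) {
  resp <- request("https://api.openai.com/v1/chat/completions") |>
    req_auth_bearer_token(api_key) |>
    req_body_json(list(
      model = "gpt-4o-mini",  # illustrative model choice
      messages = list(
        list(role = "system",
             content = "You are a survey interviewer. Ask one short, neutral follow-up question."),
        list(role = "user", content = answer)
      ))) |>
    req_perform()
  resp_body_json(resp)$choices[[1]]$message$content
}
# probe("The app felt cluttered when I searched for listings.")
```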
Join us for an engaging session as we unveil the latest advancements in Sawtooth's Discover platform. Come see how new features are transforming Discover into an even more powerful research tool. We’ll also share expert tips and a sneak peek at what’s coming next on our roadmap.
In customer satisfaction studies it’s common to ask respondents to rate the brand overall in terms of satisfaction or loyalty. It’s also common to ask respondents to rate the brand on a variety of performance attributes. Researchers often try to link the performance on the attributes to the overall evaluation of the brand, calling it “Key Drivers Analysis.” Common methods for this are flawed as the lack of independence of the drivers can cause all sorts of statistical messiness! We’ll discuss the use of Johnson’s weights and show how this has been incorporated into Sawtooth Software’s Lighthouse Studio platform.
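For the curious, Johnson's (2000) relative weights can be computed in a few lines; this is a generic sketch on toy data, not the Lighthouse Studio implementation.

```r
relative_weights <- function(X, y) {
  Rxx <- cor(X)
  rxy <- cor(X, y)
  ev  <- eigen(Rxx)
  Lambda <- ev$vectors %*% diag(sqrt(ev$values)) %*% t(ev$vectors)
  beta <- solve(Lambda, rxy)        # betas on the orthogonalized predictors
  eps  <- (Lambda^2) %*% (beta^2)   # raw weights; sum(eps) equals the model R^2
  data.frame(driver = colnames(X), pct_of_R2 = 100 * eps / sum(eps))
}
set.seed(1)
X <- matrix(rnorm(200 * 4), 200, 4, dimnames = list(NULL, paste0("attr", 1:4)))
y <- X %*% c(0.5, 0.3, 0.2, 0) + rnorm(200)
relative_weights(X, y)
```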
The validity of using conjoint analysis rests on the inclusion of brand names, prices, and an outside “no-choice” option in the choice task. We find that the lack of knowledge of competitive offerings and prices affects the brand part-worths but not the part-worths of other product features. We discuss how a well-designed conjoint study mitigates the effects of this type of learning in conjoint analysis.
This paper explores how Discrete Choice Models (DCMs) help market researchers optimize product portfolios by simulating market scenarios to predict consumer preferences. We present a two-stage approach: First, use algorithms like simulated annealing to find near-optimal portfolios; second, refine these solutions by testing all possible remaining SKU combinations. This method balances mathematical optimization with real-world constraints, providing a practical, adaptable solution for both simple and complex market situations, and delivering actionable insights for business strategy optimization.
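A sketch of the first stage (share_of() is a stand-in for a real DCM simulator; all numbers are toy):

```r
# Simulated annealing over k-SKU portfolios, accepting occasional downhill moves.
set.seed(42)
skus <- 1:20
appeal <- runif(20)                                            # toy per-SKU appeal
share_of <- function(p) sum(appeal[p]) / (sum(appeal[p]) + 5)  # toy simulator
anneal <- function(k, iters = 2000, temp = 0.1, cool = 0.999) {
  cur <- sample(skus, k); cur_v <- share_of(cur)
  best <- cur; best_v <- cur_v
  for (i in seq_len(iters)) {
    cand <- cur
    cand[sample(k, 1)] <- sample(setdiff(skus, cur), 1)        # swap one SKU
    v <- share_of(cand)
    if (v > cur_v || runif(1) < exp((v - cur_v) / temp)) { cur <- cand; cur_v <- v }
    if (cur_v > best_v) { best <- cur; best_v <- cur_v }
    temp <- temp * cool
  }
  list(portfolio = sort(best), share = best_v)
}
anneal(k = 5)
```

The second stage would then enumerate and test all remaining SKU swaps around this near-optimal portfolio.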
A typical MaxDiff study proceeds sequentially: design first, then data collection, with modeling last. Building on ideas from Sawtooth’s Adaptive and Bandit MaxDiff and the field of computerized adaptive testing, I look for ways to model each individual respondent during the MaxDiff survey and use those results to inform the design in real time for performance gains. Item response theory and machine learning will be explored as model options. Simulations will be carried out for validation.
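A toy sketch of the real-time loop (counting-based scores and a crude uncertainty proxy; the paper's IRT and machine-learning models are more sophisticated):

```r
n_items <- 20
bests <- worsts <- shown <- rep(0, n_items)
score       <- function() (bests - worsts) / pmax(shown, 1)
uncertainty <- function() 1 / sqrt(pmax(shown, 1))   # crude standard-error proxy
next_set    <- function(k = 4) order(uncertainty(), decreasing = TRUE)[1:k]
update <- function(s, b, w) {                        # called after each task
  shown[s]  <<- shown[s] + 1
  bests[b]  <<- bests[b] + 1
  worsts[w] <<- worsts[w] + 1
}
s <- next_set()                 # design the next task from current uncertainty
update(s, b = s[1], w = s[4])   # toy respondent picks
round(score()[s], 2)
```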
Step into the future of market research with our cutting-edge training session! Explore how Artificial Intelligence and automation are transforming survey programming and unlocking unprecedented opportunities. This dynamic 45-minute session covers three transformative topics:

1. Conversational AI Surveys: Discover how to turn standard surveys into engaging, dialogue-driven experiences using AI, enhancing respondent engagement and delivering deeper insights.
2. API Integration: Learn how integrating APIs into Sawtooth surveys creates visually dynamic and intuitive experiences that keep respondents engaged from start to finish.
3. Synthetic Data Innovation: Uncover how AI-generated synthetic data fills gaps in traditional survey data, offering richer, more comprehensive insights for smarter decision-making.

Join us to harness these innovative tools and techniques to create impactful research projects. Whether you're looking to enhance data quality, automate workflows, or design surveys that truly engage, this session will empower you to elevate your research to the next level.
We measure brand equity (in $s) via conjoint analysis and choice experiments. Brand associations can be integrated but may suffer from response biases. A new approach leverages an open-ended question to extract brand associations, avoiding halo effects and capturing both positive and negative perceptions. We use AI to derive a brand score that correlates highly with market share. By integrating these associations into conjoint we can put a $ value on a brand association.
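The final conversion to dollars is simple arithmetic once utilities are in hand; a hedged toy example, assuming an MNL utility linear in price:

```r
# Brand part-worths and a price slope from conjoint (all numbers are toy).
u_brand_A <- 0.60
u_brand_B <- 0.15
b_price   <- -0.02                       # utility change per $1 of price
(u_brand_A - u_brand_B) / abs(b_price)   # $22.50 premium at preference parity
```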
The traditional approach to segmentation is highly iterative with a great deal of manual evaluation of competing segmentation solutions. An analyst may estimate dozens of competing segmentation solutions and each needs evaluation on statistical and interpretative considerations. We will illustrate how we create scoring rules and leverage AI to provide guidance on a more efficient process for both development and evaluation of segmentation solutions.
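A sketch of what one such scoring rule might look like (the weights and criteria here are hypothetical, not the authors' rules):

```r
library(cluster)
# Composite score: cluster separation (silhouette) plus segment-size balance.
score_solution <- function(X, cl, w = c(sil = 0.6, bal = 0.4)) {
  sil <- mean(silhouette(cl, dist(X))[, "sil_width"])
  sizes <- prop.table(table(cl))
  bal <- min(sizes) / max(sizes)          # penalizes tiny or dominant segments
  unname(w["sil"] * sil + w["bal"] * bal)
}
set.seed(3)
X <- scale(matrix(rnorm(300 * 5), 300, 5))
sapply(2:6, function(k) score_solution(X, kmeans(X, k, nstart = 20)$cluster))
```

Scores like these can triage dozens of candidate solutions before the AI-assisted interpretative review.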
Recent marketing theories challenge traditional concepts like differentiation and niche targeting, favoring broader market penetration. Is this the end of segmentation? We explore how these theories are transforming segmentation practices and pushing analytical boundaries. New analytics are proposed to assess segment differences and similarities, effectively blending mass marketing with targeted efforts to help marketers achieve both immediate sales activation and sustained brand-building goals.
This work critically examines modern data visualization practices in market research, focusing on perceptual science, accessibility, and context-driven design. Attendees will explore how traditional methods can obscure insights and learn strategies for modernizing visualizations. Real-world examples in R, Python, and PowerPoint will highlight best practices for clarity and impact. The session offers actionable insights for improving data communication and decision-making in today’s business environment.
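In the spirit of the session's R examples, here is one commonly recommended upgrade over chart defaults: sorted bars, direct labels, and a colorblind-safe fill (data are toy, not from the session):

```r
library(ggplot2)
df <- data.frame(segment = c("Value", "Premium", "Eco", "Loyal"),
                 share = c(0.18, 0.31, 0.09, 0.42))
ggplot(df, aes(x = reorder(segment, share), y = share)) +
  geom_col(fill = "#4477AA") +                       # colorblind-safe hue
  geom_text(aes(label = scales::percent(share, accuracy = 1)), hjust = -0.15) +
  coord_flip() +                                     # horizontal bars read easily
  scale_y_continuous(labels = scales::percent, limits = c(0, 0.5)) +
  labs(x = NULL, y = "Share of respondents") +
  theme_minimal()
```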
While volumetric analysis is common in practice, it has been a long time (Eagle 2010) since a paper outlined a clear, practitioner-focused methodology for undertaking it. In this paper we undertake a series of analyses to identify an approach that maximises out-of-sample predictive validity whilst maintaining ease of use for analysts undertaking volumetric choice studies.
New product design typically begins with designers curating a limited set of stimuli based on their interpretation of market needs, consumer preferences, and feasibility constraints. This a priori approach restricts the exploration of potential designs to what the designers anticipate as viable and appealing, effectively narrowing the creative space. Inspired by crowdsourcing, we propose an AI-driven approach that expands the design space by directly incorporating respondent input throughout the new product development process.
Situational choice experiments (SCEs) differ from the choice-based conjoint experiments more commonly used in marketing research. While respondents still choose among the alternatives, the attributes and levels are invariant across those alternatives, because they describe the chooser or the situation, not the alternatives. The archetypal example is physicians' choice of medications as a function of patient characteristics (which we create using an experimental design), but we've also modeled decisions like buy/lease/neither as a function of the item purchased and the financing options available, or the choice to retire, work full-time, or work part-time as a function of economic conditions for respondents nearing retirement age. After briefly introducing this kind of experiment, we’ll show how to design and program SCEs in Lighthouse Studio, then how to analyze them in our MBC software and how to simulate them in Excel.
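A compact way to write the structure (covariate names are illustrative): the utility of alternative $j$ in situation $i$ is

$$U_{ij} = \alpha_j + \mathbf{x}_i^{\top}\boldsymbol{\beta}_j + \varepsilon_{ij},$$

where $\mathbf{x}_i$ (e.g., patient age and severity) describes the situation rather than the alternative. Because $\mathbf{x}_i$ is constant across alternatives, it is the coefficients $\boldsymbol{\beta}_j$ that vary by alternative, with one alternative's $\alpha_j$ and $\boldsymbol{\beta}_j$ fixed at zero for identification.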
MaxDiff continues to increase in popularity as an alternative to traditional survey questions when looking to rate, rank or prioritize. Turning best and worst choices into numeric scores with a goodness of fit statistic is a great tool for the researcher’s toolbox. In this clinic we’ll go beyond the basics and explore a handful of popular MaxDiff extensions designed to help with research problems like: What if my list of items is really, really big? What if I only really care about finding the best items? What if some of my items don’t apply to some respondents? How can I anchor MaxDiff scores to something more concrete like purchase intent? Through a mixture of slides and survey examples, we’ll peel back the inner workings of these extensions to give you confidence in how they work and when to use them.
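One of those inner workings, shown concretely: the probability-scaled scores Sawtooth popularized, where a is the number of items shown per task (utilities below are toy):

```r
# P = exp(u) / (exp(u) + a - 1), then rescaled to sum to 100.
u <- c(item1 = 1.2, item2 = 0.3, item3 = -0.4, item4 = -1.1)  # toy HB utilities
a <- 4                                                        # items per task
p <- exp(u) / (exp(u) + a - 1)
round(100 * p / sum(p), 1)
```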
This paper will present useful and reproducible feature engineering techniques utilizing R's tidyverse workflows. It will focus primarily on identifying and resolving redundant measures of underlying constructs. In addition, it will explore the use of deep learning autoencoders for non-linear dimensionality reduction and anomaly detection. Autoencoders also allow for complete reconstruction of the original data, unlike, for example, principal components. The objective is to achieve high-quality partitions leading to more accurate predictive models for scoring.
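A taste of the redundancy-resolution step in tidyverse style (the 0.80 cutoff is a judgment call, and the data are toy):

```r
library(tidyverse)
library(caret)   # findCorrelation()
set.seed(5)
construct <- rnorm(250)
survey_df <- tibble(q1 = construct + rnorm(250, sd = 0.3),
                    q2 = construct + rnorm(250, sd = 0.3),  # redundant with q1
                    q3 = rnorm(250))
corr <- cor(survey_df, use = "pairwise.complete.obs")
drop_idx <- findCorrelation(corr, cutoff = 0.80)
reduced <- survey_df %>% select(-all_of(drop_idx))
names(reduced)
```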
Internal investigation at Sawtooth Software suggests that latent class clustering (via Latent Gold) does very well on its own for clustering, but potentially even better as part of a CCEA ensemble (Sawtooth’s cluster ensemble package). We’ll investigate with numerous synthetic datasets whether adding latent class solutions to the CCEA ensemble improves the accuracy of our predictions of the known number of segments and of the membership of those segments.
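The validation metric for those synthetic-data tests can be as simple as the adjusted Rand index between the known and recovered partitions; a sketch on toy three-segment data (the actual study uses Latent Gold and CCEA, not kmeans):

```r
library(mclust)  # adjustedRandIndex()
set.seed(11)
truth <- rep(1:3, each = 100)
X <- matrix(rnorm(300 * 4), 300, 4) + truth        # segment-shifted means
adjustedRandIndex(truth, kmeans(X, centers = 3, nstart = 25)$cluster)
# 1 = perfect recovery of the known segments; ~0 = chance agreement
```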
Clients across multiple industries have become intrigued by the idea of potential price thresholds, or price cliffs, in the price elasticities of their products, and these thresholds can be central to their pricing strategies. In this presentation, by testing both post-hoc and modelled threshold options, we will illustrate whether pricing thresholds add value to our pricing models, which type of threshold (post-hoc or modelled) works best, and for which types of pricing studies thresholds are most appropriate.
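The modelled option often amounts to piecewise price coding around a candidate cliff; a minimal sketch (the $10 threshold and prices are illustrative):

```r
threshold <- 10
price <- c(8, 9, 9.99, 10.49, 11, 12)
below <- pmin(price, threshold)          # slope b_below applies up to the cliff
above <- pmax(price - threshold, 0)      # slope b_above applies past it
cbind(price, below, above)
# When utility is modeled as b_below * below + b_above * above, a genuine
# cliff shows up as |b_above| much larger than |b_below|.
```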
A persistent challenge in conjoint analysis is the discrepancy between preference shares and actual market shares. This gap often arises from the omission of critical market dynamics and assumptions such as 100% awareness and distribution in our models. We propose an innovative approach that integrates the 4P marketing framework (Product, Price, Place, and Promotion) into the calibration process of conjoint analysis. This method offers a more holistic and accurate representation of market behavior, thereby enhancing the predictive validity of conjoint models.
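A hedged sketch of the simplest version of such a calibration, scaling preference shares by awareness and distribution (Place) before renormalizing; all factors are toy inputs:

```r
calibrate <- function(pref, awareness, distribution) {
  adj <- pref * awareness * distribution
  adj / sum(adj)                          # renormalize to market shares
}
pref <- c(A = 0.45, B = 0.35, C = 0.20)   # simulator preference shares
calibrate(pref,
          awareness    = c(0.9, 0.6, 0.8),
          distribution = c(0.95, 0.7, 0.5))
```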
MaxDiff is a powerful and commonly used basis for needs-based segmentation projects, but things can get tricky when we get to the final phase of segmentation, building a typing tool. Typing requires that we reduce the list of every possible MaxDiff question down to a small, ideal set that can most efficiently (and accurately) classify new respondents into existing segments. We will cover the methods behind common typing approaches and show how to use Sawtooth’s MaxDiff Typing Tool software to build typing tool deliverables that will add that much more value to your next segmentation study.
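The core selection problem can be sketched as greedy forward selection, adding whichever item most improves classification accuracy (LDA here is a stand-in for whatever classifier the typing tool uses; data are toy):

```r
library(MASS)  # lda()
set.seed(21)
df <- data.frame(segment = factor(rep(1:3, each = 100)),
                 matrix(rnorm(300 * 8), 300, 8))   # toy MaxDiff item scores
hit_rate <- function(vars) {
  fit <- lda(reformulate(vars, "segment"), data = df)
  mean(predict(fit)$class == df$segment)
}
chosen <- c(); pool <- names(df)[-1]
for (step in 1:3) {                      # pick a 3-item short form
  gains <- sapply(pool, function(v) hit_rate(c(chosen, v)))
  chosen <- c(chosen, names(which.max(gains)))
  pool <- setdiff(pool, chosen)
}
chosen
```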