If you knew box office performance, Rotten Tomatoes rating, and online sentiment, could you predict the appeal of an intellectual property (IP) to the average theme park guest? It turns out you can! Learn about the ways UDX is using publicly available data to supplement survey research, how we’re dealing with cross-country differences in scale use, and how we communicate the appeal of 120+ IPs to a busy executive audience.
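As a rough illustration of the kind of mapping this session describes (not UDX’s actual model or data), a simple regression from public signals to a surveyed appeal score might look like the sketch below; every feature name and number is made up.

```python
# Illustrative only: none of these numbers are UDX data.
import numpy as np
from sklearn.linear_model import LinearRegression

# Features per IP: [log box office ($M), Rotten Tomatoes %, net online sentiment]
X = np.array([
    [6.9, 93, 0.62],
    [5.8, 47, 0.10],
    [7.4, 88, 0.55],
    [4.9, 35, -0.20],
    [6.2, 71, 0.30],
    [5.5, 60, 0.05],
])
y = np.array([78, 52, 81, 41, 63, 57])  # surveyed appeal (0-100), made up

model = LinearRegression().fit(X, y)
print(dict(zip(["log_box_office", "rt_score", "sentiment"], model.coef_.round(2))))
print("predicted appeal for a new IP:",
      round(float(model.predict([[6.5, 80, 0.40]])[0]), 1))
```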
Details coming soon
Procter & Gamble (P&G) conducted 73 studies in oral health care, producing 1,554 unique claims. To compare these claims, a Sparse MaxDiff study was designed, using a subset of claims to create a database. AI models were then trained to predict claim success, with the most successful model using OpenAI embeddings and a neural network. The study demonstrated AI’s potential for predicting claim outcomes but highlighted the need for a robust claims database for accurate predictions.
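A minimal sketch of the modeling idea named above, assuming the OpenAI embeddings API and a small scikit-learn network as stand-ins; the claims, scores, and model settings are illustrative, not P&G’s pipeline.

```python
# Illustrative sketch only (toy data; embedding model and network size are
# assumptions): embed claim text and fit a small neural network to predict a
# MaxDiff-style preference score.
from openai import OpenAI
from sklearn.neural_network import MLPRegressor

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

claims = [
    "Strengthens enamel in just two weeks",
    "Removes years of surface stains",
    "Dentist-recommended for sensitive teeth",
    "Clinically proven gum protection",
]
scores = [0.8, 0.3, 0.6, 0.5]  # toy preference scores

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [item.embedding for item in resp.data]

model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
model.fit(embed(claims), scores)
print(model.predict(embed(["Whitens teeth without sensitivity"])))
```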
The Microsoft Research + Insights team, working with PSB, has been exploring the use of AI and unstructured data to improve data quality in conjoint studies. Initial experiments explored using AI to create or augment samples, and conversational AI to probe respondent choices and use the answers as covariates. Results suggest AI enhances respondent engagement and model strength. Building on this, we report on an experiment comparing voice and text responses, and on other work using AI to scale small-sample research for more robustness.
Creating actionable and engaging segmentation models is part science and part art, ideally producing a mathematically sound and easily explainable model. To avoid the overwhelm (and myriad technical problems) of too many variables, we drew on various sources to tackle two key questions that helped us streamline our approach and create a solid, actionable – and, dare I say, fun – segmentation model faster than you can say “factor analysis.” All set to music, of course.
Comparing a traditional CBC methodology to two new approaches, this paper will explore new mobile-friendly ways to conduct conjoint exercises that are faster and more enjoyable than the traditional approach. Attendees will learn how well the results from the two new approaches compare to those generated by traditional choice exercises, as well as how to optimize the accuracy of the new approaches.
Designing experiments with a large number of product attributes can be challenging. We propose a novel approach, Token-Based Conjoint (TBC), which reframes stated choice experiments to manage complexity more effectively. TBC is especially useful for products like subscription services with many binary features. By dynamically adjusting feature selection and incorporating a dual-response likelihood question, TBC delivers deeper insights into which feature combinations drive adoption, providing marketers with actionable results.
Synthetic AI Avatars are entering the survey space, bringing a new approach to boost engagement and enhance data quality. This study examines their potential across various use cases, from complete survey interactions to personalized intros and optional chatbot assistance, while addressing challenges like bias and monotony in both online and offline settings. Join us to find out if AI Avatars are the next big thing in market research or just another passing trend!
Manual coding of open-ended survey responses is a labor-intensive, error-prone task. Discover how transformer-based AI methods such as text embeddings and large language models (LLMs) can automate and enhance the analysis of open-ends. This presentation compares these advanced methods with manual coding and traditional techniques, highlighting their strengths in understanding context and semantics. Attendees will gain practical expertise for integrating these technologies into their analysis of open-ends, and a clear grasp of their benefits and limitations.
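As one hedged example of the embedding-based approach (the library, model name, and code frame below are assumptions, not the presenters’ setup), open-ends can be assigned to a code frame by cosine similarity between response embeddings and embeddings of short code descriptions:

```python
# Minimal sketch: nearest-code assignment of open-ends via text embeddings.
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence-embedding model

code_frame = {
    "price": "comments about cost, price, or value for money",
    "taste": "comments about flavor or taste",
    "packaging": "comments about the package, bottle, or label",
}
responses = ["way too expensive for what you get", "loved the minty flavor"]

code_vecs = model.encode(list(code_frame.values()))
resp_vecs = model.encode(responses)
sims = cosine_similarity(resp_vecs, code_vecs)

labels = list(code_frame.keys())
for text, row in zip(responses, sims):
    print(text, "->", labels[row.argmax()])
```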
This paper explores how Generative AI, using Large Language Models and multi-agent systems, is humanizing surveys through conversational interactions. By simulating natural, human-like dialogues, these AI-driven surveys enhance respondent engagement and improve data quality. A real estate app case study shows how conversational AI creates a more intuitive, personalized survey experience, allowing users to ask clarifying questions and provide contextual feedback. Our findings highlight how AI can revolutionize market research by making it more responsive, interactive, and human-centered.
The validity of using conjoint analysis rests on the inclusion of brand names, prices, and an outside “no-choice” option in the choice task. We find that the lack of knowledge of competitive offerings and prices affects the brand part-worths but not the part-worths of other product features. We discuss how a well-designed conjoint study mitigates the effects of this type of learning in conjoint analysis.
This paper explores how Discrete Choice Models (DCMs) help market researchers optimize product portfolios by simulating market scenarios to predict consumer preferences. We present a two-stage approach: First, use algorithms like simulated annealing to find near-optimal portfolios; second, refine these solutions by testing all possible remaining SKU combinations. This method balances mathematical optimization with real-world constraints, providing a practical, adaptable solution for both simple and complex market situations, and delivering actionable insights for business strategy optimization.
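A compact sketch of the two-stage idea, with a toy objective standing in for the DCM market simulator and single-SKU swaps standing in for the fuller exhaustive refinement described above:

```python
# Illustrative two-stage search (assumed share function; not the authors' code).
import itertools
import math
import random

N_SKUS, PORTFOLIO_SIZE = 20, 5
random.seed(0)
appeal = [random.random() for _ in range(N_SKUS)]  # toy stand-in for simulator output

def portfolio_share(portfolio):
    # Placeholder objective; in practice this calls the DCM market simulator.
    return sum(appeal[i] for i in portfolio)

def anneal(iters=2000, temp=1.0, cooling=0.995):
    # Stage 1: simulated annealing over random single-SKU swaps.
    current = set(random.sample(range(N_SKUS), PORTFOLIO_SIZE))
    best = set(current)
    for _ in range(iters):
        candidate = set(current)
        candidate.remove(random.choice(list(candidate)))
        candidate.add(random.choice([i for i in range(N_SKUS) if i not in candidate]))
        delta = portfolio_share(candidate) - portfolio_share(current)
        if delta > 0 or random.random() < math.exp(delta / temp):
            current = candidate
            if portfolio_share(current) > portfolio_share(best):
                best = set(current)
        temp *= cooling
    return best

def refine(portfolio):
    # Stage 2: exhaustively test swapping each held SKU for each excluded SKU.
    best, best_val = set(portfolio), portfolio_share(portfolio)
    for out_sku, in_sku in itertools.product(portfolio, set(range(N_SKUS)) - set(portfolio)):
        trial = (set(portfolio) - {out_sku}) | {in_sku}
        if portfolio_share(trial) > best_val:
            best, best_val = trial, portfolio_share(trial)
    return best

print(refine(anneal()))
```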
A typical MaxDiff is run sequentially: the design comes first, then data collection, with modeling last. Building on ideas from Sawtooth’s Adaptive and Bandit MaxDiff and from the field of computerized adaptive testing, I look for ways to model each individual respondent during the MaxDiff survey and use that model to inform the design in real time for a performance gain. Item response theory and machine learning will be explored as model options. Simulations will be carried out for validation.
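To make the real-time idea concrete, here is a loose stand-in (not the proposed IRT or machine-learning model): keep simple win/loss counts per item and build each next MaxDiff set from the items whose scores are currently least certain.

```python
# Toy adaptive loop: uncertainty-driven item selection with a simulated respondent.
import random
random.seed(1)

N_ITEMS, SET_SIZE, N_TASKS = 12, 4, 8
wins = [1] * N_ITEMS    # pseudo-counts (uniform prior)
losses = [1] * N_ITEMS

def uncertainty(i):
    n = wins[i] + losses[i]
    p = wins[i] / n
    return p * (1 - p) / n   # variance of the estimated "best" probability

def next_task():
    # Show the items we currently know the least about.
    return sorted(range(N_ITEMS), key=uncertainty, reverse=True)[:SET_SIZE]

for _ in range(N_TASKS):
    shown = next_task()
    # Simulated respondent: hidden true preference follows the item index.
    best, worst = max(shown), min(shown)
    wins[best] += 1
    losses[worst] += 1

scores = [wins[i] / (wins[i] + losses[i]) for i in range(N_ITEMS)]
print([round(s, 2) for s in scores])
```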
We measure brand equity (in $s) via conjoint analysis and choice experiments. Brand associations can be integrated but may suffer from response biases. A new approach leverages an open-ended question to extract brand associations, avoiding halo effects and capturing both positive and negative perceptions. We use AI to derive a brand score that correlates highly with market share. By integrating these associations into conjoint we can put a $ value on a brand association.
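The dollar conversion at the end can be illustrated with a back-of-envelope calculation; the numbers below are invented, and in the actual study the utility lift comes from the AI-scored associations integrated into the conjoint model.

```python
# Made-up numbers: divide the utility lift attributed to an association by the
# absolute price utility per dollar from the conjoint model.
association_utility_lift = 0.30    # e.g., lift from a positive "reliable" association
price_utility_per_dollar = -0.06   # estimated part-worth slope for price

dollar_value = association_utility_lift / abs(price_utility_per_dollar)
print(f"implied willingness to pay: ${dollar_value:.2f}")   # $5.00 here
```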
The traditional approach to segmentation is highly iterative, with a great deal of manual evaluation of competing segmentation solutions. An analyst may estimate dozens of competing solutions, each of which needs to be evaluated on statistical and interpretative considerations. We will illustrate how we create scoring rules and leverage AI to provide guidance on a more efficient process for both developing and evaluating segmentation solutions.
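As a hedged example of what such a scoring rule might look like (the criteria, weights, and use of k-means are assumptions for illustration), each candidate solution can be graded on statistical separation and segment-size balance, and the solutions ranked automatically:

```python
# Illustrative composite scoring of competing segmentation solutions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)

def score_solution(labels):
    sil = silhouette_score(X, labels)             # separation
    sizes = np.bincount(labels) / len(labels)
    balance = 1 - (sizes.max() - sizes.min())     # penalize tiny or dominant segments
    return 0.7 * sil + 0.3 * balance              # assumed weights

candidates = {k: KMeans(n_clusters=k, random_state=0, n_init=10).fit_predict(X)
              for k in range(2, 8)}
ranked = sorted(candidates, key=lambda k: score_solution(candidates[k]), reverse=True)
print("segment counts ranked by composite score:", ranked)
```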
Recent marketing theories challenge traditional concepts like differentiation and niche targeting, favoring broader market penetration. Is this the end of segmentation? We explore how these theories are transforming segmentation practices and pushing analytical boundaries. New analytics are proposed to assess segment differences and similarities, effectively blending mass marketing with targeted efforts to help marketers achieve both immediate sales activation and sustained brand-building goals.
This work critically examines modern data visualization practices in market research, focusing on perceptual science, accessibility, and context-driven design. Attendees will explore how traditional methods can obscure insights and learn strategies for modernizing visualizations. Real-world examples in R, Python, and PowerPoint will highlight best practices for clarity and impact. The session offers actionable insights for improving data communication and decision-making in today’s business environment.
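A small Python example of one practice in this spirit (illustrative only, with toy data): sort categories by value, use a horizontal layout, label bars directly, and trim non-data ink.

```python
import matplotlib.pyplot as plt

brands = {"Brand D": 12, "Brand A": 34, "Brand C": 22, "Brand B": 27}  # toy shares (%)
items = sorted(brands.items(), key=lambda kv: kv[1])
labels, values = zip(*items)

fig, ax = plt.subplots(figsize=(6, 3))
ax.barh(labels, values)
for y, v in enumerate(values):
    ax.text(v + 0.5, y, f"{v}%", va="center")   # direct labels instead of a legend
for side in ("top", "right"):
    ax.spines[side].set_visible(False)           # reduce non-data ink
ax.set_xlabel("Preference share (%)")
plt.tight_layout()
plt.show()
```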
Although volumetric analysis is common in practice, it has been a long time (Eagle 2010) since a paper outlined a clear, practitioner-focussed methodology for undertaking it. In this paper we will undertake a series of analyses to identify an approach that maximises out-of-sample predictive validity whilst maintaining ease of use for analysts undertaking volumetric choice studies.
New product design typically begins with designers curating a limited set of stimuli based on their interpretation of market needs, consumer preferences, and feasibility constraints. This a priori approach restricts the exploration of potential designs to what the designers anticipate as viable and appealing, effectively narrowing the creative space. Inspired by crowdsourcing, we propose an AI-driven approach that expands the design space by directly incorporating respondent input throughout the new product development process.
This paper will present useful and reproducible feature engineering techniques utilizing R's tidyverse workflows. It will focus primarily on identifying and resolving redundant measures of underlying constructs. In addition, it will explore the use of deep learning embeddings for non-linear dimensionality reduction and anomaly detection; unlike, for example, principal components, embeddings also allow for complete reconstruction of the data. The objective is to achieve high-quality partitions leading to more accurate predictive models for scoring.
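The paper itself works in R's tidyverse; as a language-agnostic sketch of the first step (flagging redundant measures of a single construct), the snippet below groups variables whose pairwise correlations exceed an assumed threshold, using simulated data:

```python
# Simulated data: three items measuring one construct plus an unrelated measure.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
base = rng.normal(size=300)
df = pd.DataFrame({
    "trust_1": base + rng.normal(scale=0.2, size=300),
    "trust_2": base + rng.normal(scale=0.2, size=300),
    "trust_3": base + rng.normal(scale=0.2, size=300),
    "price_sensitivity": rng.normal(size=300),
})

corr = df.corr().abs()
threshold = 0.8   # assumed cutoff for "redundant"
redundant_pairs = [(a, b) for a in corr.columns for b in corr.columns
                   if a < b and corr.loc[a, b] > threshold]
print("candidate redundant pairs:", redundant_pairs)
```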
Internal investigation at Sawtooth Software suggests that latent class clustering (via Latent Gold) does very well on its own for clustering, but potentially even better as part of a CCEA ensemble (Sawtooth’s cluster ensemble package). We’ll investigate with numerous synthetic datasets whether adding latent class solutions to the CCEA ensemble improves the accuracy of our predictions of the known number of segments and of the membership of those segments.
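A rough stand-in for the experimental setup (GaussianMixture substituting for Latent Gold, and a simple co-occurrence consensus substituting for CCEA) shows the mechanics of the comparison on synthetic data with known segments:

```python
# Compare ensemble recovery of known segments, with and without a model-based run.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score
from sklearn.mixture import GaussianMixture

X, truth = make_blobs(n_samples=400, centers=4, cluster_std=2.0, random_state=1)

def consensus(label_sets, k):
    # Average co-assignment across solutions, then cut a hierarchical tree.
    n = len(label_sets[0])
    co = np.zeros((n, n))
    for labels in label_sets:
        co += (labels[:, None] == labels[None, :])
    dist = 1 - co / len(label_sets)
    Z = linkage(dist[np.triu_indices(n, k=1)], method="average")
    return fcluster(Z, t=k, criterion="maxclust")

kmeans_runs = [KMeans(n_clusters=4, n_init=10, random_state=s).fit_predict(X) for s in range(5)]
model_based = GaussianMixture(n_components=4, random_state=0).fit_predict(X)

print("ARI without model-based run:", adjusted_rand_score(truth, consensus(kmeans_runs, 4)))
print("ARI with model-based run:   ", adjusted_rand_score(truth, consensus(kmeans_runs + [model_based], 4)))
```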
Clients across multiple industries have become intrigued by the idea of price thresholds, or price cliffs, in the price elasticities of their products, and these thresholds can be central to their pricing strategies. In this presentation, by testing post-hoc and modelled threshold options, we will illustrate whether pricing thresholds add value to our pricing models, which type of threshold (post-hoc or modelled) works best, and which types of pricing studies thresholds are most appropriate for.
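One way to picture a "modelled" threshold (purely illustrative; the studies themselves estimate this within choice models rather than with OLS on simulated data) is a piecewise-linear price term whose extra slope switches on above a candidate cliff:

```python
# Simulated response with a steeper drop beyond $10; compare linear vs piecewise fit.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
price = rng.uniform(5, 15, size=400)
utility = 2.0 - 0.15 * price - 0.40 * np.clip(price - 10, 0, None) \
          + rng.normal(scale=0.5, size=400)

X_linear = sm.add_constant(price)
X_cliff = sm.add_constant(np.column_stack([price, np.clip(price - 10, 0, None)]))

linear_fit = sm.OLS(utility, X_linear).fit()
cliff_fit = sm.OLS(utility, X_cliff).fit()
print("linear AIC:", round(linear_fit.aic, 1), " piecewise AIC:", round(cliff_fit.aic, 1))
```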
A persistent challenge in conjoint analysis is the discrepancy between preference shares and actual market shares. This gap often arises from the omission of critical market dynamics and assumptions such as 100% awareness and distribution in our models. We propose an innovative approach that integrates the 4P marketing framework (Product, Price, Place, and Promotion) into the calibration process of conjoint analysis. This method offers a more holistic and accurate representation of market behavior, thereby enhancing the predictive validity of conjoint models.
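A simplified illustration of the calibration idea, with made-up numbers: weight raw preference shares by awareness and distribution before renormalizing, rather than assuming both are 100%.

```python
# Toy calibration: adjust conjoint preference shares for awareness and distribution.
preference_share = {"Brand A": 0.45, "Brand B": 0.35, "Brand C": 0.20}
awareness        = {"Brand A": 0.90, "Brand B": 0.60, "Brand C": 0.30}
distribution     = {"Brand A": 0.80, "Brand B": 0.70, "Brand C": 0.50}

adjusted = {b: preference_share[b] * awareness[b] * distribution[b] for b in preference_share}
total = sum(adjusted.values())
calibrated = {b: round(v / total, 3) for b, v in adjusted.items()}
print(calibrated)   # Brand A rises and Brand C falls relative to the raw shares
```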