Every year many researchers and insights professionals submit papers to present at the Analytics & Insights Summit. A selection committee carefully reviews and selects papers that fit best within the conference to ensure the conference is unique and provides value and learning for attendees.
Researchers are frequently challenged to quickly bring stakeholders up to speed on complex and unfamiliar approaches to obtaining insights through choice-based methods. Despite the researcher's confidence in the methods or experience using them successfully, gaining alignment with stakeholders is critical to move projects forward and deliver the outcomes needed. Some well-tested approaches can help the research team effectively address resistance, take feedback, concede where needed, and hold the line appropriately to ultimately satisfy stakeholders' needs for quality insights.
Clients often want to test many items using MaxDiff. Previous research shows that Sparse MaxDiff is a valid technique for testing these conditions; however, these designs typically include many alternatives per task (5 or more). What happens when the items are extremely wordy or long? With triplets or, even worse, paired comparisons, we get much less information per task. Can Sparse approaches still work under these more extreme conditions?
The 2021 Kurz/Binner Sawtooth presentation introduced 9 priming questions to include prior to showing a CBC. Although we wanted to implement these priming questions, we got pushback from clients regarding the number of extra questions. Is there a subset of the 9 questions that leads to the same improvement in out-of-sample prediction? We ran a factor analysis and found 4 questions that could account for a majority of the variance. Will asking just these 4 questions have the same priming benefit?
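As an illustration of the variable-reduction step described above, the sketch below fits a factor analysis to the nine priming ratings and keeps the highest-loading question per factor. It is not the authors' actual analysis; the file name, column names, and the choice of four factors are assumptions.

```python
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Hypothetical data: one row per respondent, columns prime_1 ... prime_9
# holding the ratings for the nine priming questions.
ratings = pd.read_csv("priming_questions.csv")
items = [f"prime_{i}" for i in range(1, 10)]

# Four factors with a varimax rotation; components_ is the (n_factors, n_items) loadings matrix.
fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0)
fa.fit(ratings[items])

# Keep the single highest-loading question per factor as the reduced 4-question set.
loadings = pd.DataFrame(fa.components_, columns=items)
reduced_set = loadings.abs().idxmax(axis=1).tolist()
print(reduced_set)
```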
We discuss how best to set up the price attribute in conjoint studies when specific product prices cannot be tested. We discuss the pros and cons of each approach and provide practical recommendations. This will give practitioners the confidence to expand the situations in which they use conjoint analysis to conduct pricing research, which will ultimately help promote conjoint analysis as a methodology across the market research industry.
Sometimes one attribute is a primary factor in the decision process compared to the other attributes. In these instances, trade-offs involving the dominant attribute need special attention. The present study investigates various design and analytic strategies for dealing with dominant attributes. Results suggest that building in overlap at the design stage and/or incorporating alternative-specific effects helps tame the dominant attribute and permits better understanding of, and movement in, the non-dominant attributes in the design.
System 1 Implicit Reaction Tests (IRT) are being utilized more frequently within marketing research to assess brand preferences and emotional reactions. Quick-reaction MaxDiff paired comparison tests (MaxDiff) are another approach that can be used to determine emotional perceptions. In this paper, we compare IRT vs. MaxDiff vs. adaptive MaxDiff (combining the two approaches) to assess which provides a more accurate assessment of individuals' preferences and likely behaviors.
Adapting conjoint tasks from a respondent's previous answers has been a longstanding need in conjoint research. In this study, we will compare two different methods of implementing adaptive conjoint: preference-based conjoint and an approach with on-the-fly latent class. In addition, we will attempt to determine how to optimize within each method.
An alternative-specific Choice-Based Conjoint (CBC) design could be the most appropriate approach for a feature-optimization or pricing study for complex products or offerings in categories such as tech, durables, telecoms, and financial services. In our presentation, we will compare an alternative-specific approach with other variations of CBC, such as a traditional CBC, a shelf test, and a partial-profile design, and demonstrate the impact of the design on the estimation, findings, and recommendations of a conjoint study.
Analyzing open-ended questions in the most efficient, rapid and automatic way is a significant challenge. We propose to address this issue by creating an algorithm that combines descriptive statistics with machine learning, automatically segmenting the natural language of respondents. We will illustrate the results using a study that KNACK did for Unicef during the COVID pandemic in 2020. We provide Python code.
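The study's own Python code is not reproduced here. As a rough sketch of the general idea of machine-assisted segmentation of open-ended answers, one could combine a TF-IDF representation with k-means and then report descriptive statistics per segment; the file name, column name, and number of segments below are assumptions.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical input: one open-ended answer per row in a column named "verbatim".
answers = pd.read_csv("open_ends.csv")["verbatim"].fillna("")

# Represent the natural language as TF-IDF vectors and segment the respondents.
tfidf = TfidfVectorizer(max_features=2000, stop_words="english")
X = tfidf.fit_transform(answers)
segments = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

# Simple descriptive statistics: segment sizes (top terms per segment could follow).
print(pd.Series(segments).value_counts().sort_index())
```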
Marketers are interested in how much they can charge for enhanced product features. Conjoint analysis is also accepted in litigation, where assessed damages rely on defensible WTP estimation. Some WTP methods are more defensible and accepted than others in litigation cases. Yet, WTP approaches that are most relevant and explainable for marketers may not necessarily be the ones that are most defensible in court cases.
We propose a novel approach to simplifying the choice-based conjoint questionnaire to reduce the burden it places upon respondents. We investigate the viability of conducting conjoint studies using singleton choice sets, where the respondent repeatedly makes a choice between buying and not buying. We find that choice sets of size one are easier and more enjoyable for respondents to complete and yield higher-quality data relative to existing mobile-based conjoint approaches.
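To make the singleton-choice-set idea concrete, the sketch below simulates buy/no-buy responses to single concepts and fits an aggregate binary logit. The data are synthetic and the aggregate model is for illustration only; the authors' estimation on real respondents (e.g., hierarchical Bayes) may differ.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Synthetic singleton tasks: each row is one shown concept with four binary features,
# and the response is a simple buy / no-buy flag.
n_tasks = 2000
X = rng.integers(0, 2, size=(n_tasks, 4)).astype(float)
true_beta = np.array([0.8, -0.3, 0.5, 0.1])
p_buy = 1.0 / (1.0 + np.exp(-(X @ true_beta - 0.4)))
bought = rng.binomial(1, p_buy)

# Aggregate binary logit on the buy / no-buy choices; with real data one would
# typically estimate this hierarchically to recover individual-level utilities.
fit = sm.Logit(bought, sm.add_constant(X)).fit(disp=0)
print(fit.params)
```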
Traditionally, market segmentation utilizes cluster analysis techniques such as k-means and latent class to identify market segments. These techniques are often used with high success. However, these algorithms focus on cluster homogeneity, while the researcher only needs to identify and exploit segment differences. This disconnect can lead to missed opportunities for the researcher. "Archetypal analysis" is a powerful alternative that focuses on segment contrasts rather than homogeneity, frequently leading to richer and more actionable market segments.
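For readers unfamiliar with the technique, here is a bare-bones archetypal analysis in the spirit of Cutler and Breiman (1994), using non-negative least squares with a heavily weighted sum-to-one row to approximate the simplex constraints. It is a generic illustration, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import nnls

def _simplex_ls(Y, Z, penalty=200.0):
    """Solve min ||Y - W @ Z||^2 with each row of W non-negative and summing to one,
    by appending a heavily weighted sum-to-one row to an NNLS problem."""
    A = np.vstack([Z.T, penalty * np.ones((1, Z.shape[0]))])   # (d+1, k)
    W = np.zeros((Y.shape[0], Z.shape[0]))
    for i, y in enumerate(Y):
        W[i], _ = nnls(A, np.concatenate([y, [penalty]]))
    return W

def archetypal_analysis(X, k, n_iter=50, seed=0):
    """Alternating least-squares archetypal analysis: X ~ A @ Z, with Z = B @ X."""
    rng = np.random.default_rng(seed)
    Z = X[rng.choice(X.shape[0], k, replace=False)]            # init archetypes from data rows
    for _ in range(n_iter):
        A = _simplex_ls(X, Z)                                  # respondents as mixtures of archetypes
        Zt = np.linalg.lstsq(A, X, rcond=None)[0]              # unconstrained archetype update
        B = _simplex_ls(Zt, X)                                 # re-express archetypes as convex combos of respondents
        Z = B @ X
    return A, Z
```

Rows of A give each respondent's mixture weights over the archetypes, so contrast-focused segments can be formed by assigning each respondent to his or her dominant archetype.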
In this study, we integrate data across two commercial studies and across four different types of data: attitudes, benefits, goals, and conjoint utilities. We use an archetypal analysis approach to capture the heterogeneity across goals and benefits. We link the goal and benefit archetypes, the conjoint segments, and the attitudinal segments through an ensemble approach. This analysis yields rich insights that bridge product strategy and advertising strategy.
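One common way to link several segmentations is consensus (ensemble) clustering on a co-assignment matrix; the sketch below shows that generic idea and is not necessarily the ensemble approach used in the study.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def consensus_segments(label_sets, n_segments):
    """Combine several segmentations (e.g., archetype, conjoint, and attitudinal
    assignments) into one consensus solution via a co-assignment matrix."""
    label_sets = np.asarray(label_sets)               # (n_schemes, n_respondents)
    n = label_sets.shape[1]
    co = np.zeros((n, n))
    for labels in label_sets:
        co += labels[:, None] == labels[None, :]      # pairwise agreement per scheme
    co /= len(label_sets)                             # share of schemes agreeing on each pair
    dist = 1.0 - co
    np.fill_diagonal(dist, 0.0)
    tree = linkage(squareform(dist, checks=False), method="average")
    return fcluster(tree, t=n_segments, criterion="maxclust")
```

For example, consensus_segments([archetype_labels, conjoint_labels, attitude_labels], n_segments=5) returns one consolidated segment assignment per respondent.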
Previous research (Chrzan and White 2022) identified algorithmic variable selection techniques that were helpful in identifying the key dimensional components of a known segment structure in the presence of redundancies, cluster imbalance, and masking variables. This paper extends that work to consider the additional challenge of multiple segment structures with different dimensional components. We will explore the impact of different relative structural characteristics on the ability of our algorithms to identify key dimensional components and the consequences for segmentation solutions.
We designed and modelled a CBC with 12 binary features and price. This is not usually done by researchers because of correlation issues between predictors. We show how it is possible once the design is constructed ad hoc, with key details taken into account, and the predictive model is estimated with the standalone Sawtooth Software CBC/HB module using an interaction term between a variable capturing the number of features and price.
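As a minimal sketch of the kind of interaction covariate described above (file and column names are assumptions, and the exact coding passed to CBC/HB may differ), one could count the switched-on features and multiply by price:

```python
import pandas as pd

# Hypothetical long-format design: one row per concept shown,
# columns feat_1 ... feat_12 coded 0/1, plus a numeric price column.
design = pd.read_csv("cbc_design.csv")
feature_cols = [f"feat_{i}" for i in range(1, 13)]

# A variable capturing how many of the 12 binary features are switched on,
# and its interaction with price, to be carried into the HB estimation.
design["n_features"] = design[feature_cols].sum(axis=1)
design["n_features_x_price"] = design["n_features"] * design["price"]
design.to_csv("cbc_design_with_interaction.csv", index=False)
```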
Marketing practitioners choose to run individual-level models (HB) to capture heterogeneity in preferences when it comes to trade-offs, but the majority of designs built for choice research are focused on improving insights at the aggregate level. Our hypothesis is that there are better design strategies to capture individual heterogeneity and ultimately improve model predictions, particularly when it comes to looking at subgroups within the data. This paper will explore several methods of optimizing designs, including relabeling-swapping routines and utility-balanced designs, and give the audience access to an open-source package, built in Julia by the Numerious team, to build these types of designs.
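The open-source Julia package itself is not shown here. As a reference point, the sketch below computes the conventional aggregate D-error of a coded choice design under an MNL prior, the aggregate-level criterion that relabeling/swapping routines typically start from; the array shapes and the zero prior are assumptions.

```python
import numpy as np

def d_error(X, beta):
    """Aggregate D-error of a choice design under an MNL model with prior utilities beta.
    X has shape (n_tasks, n_alts, n_params) and holds the coded choice tasks."""
    n_tasks, n_alts, p = X.shape
    info = np.zeros((p, p))
    for task in X:
        u = task @ beta
        prob = np.exp(u - u.max())
        prob /= prob.sum()
        xbar = prob @ task                        # probability-weighted mean profile
        centered = task - xbar
        info += centered.T @ (centered * prob[:, None])
    return np.linalg.det(np.linalg.inv(info)) ** (1.0 / p)

# Example: a random 2-alternative, 30-task design with 4 effects-coded parameters.
rng = np.random.default_rng(0)
X = rng.choice([-1.0, 1.0], size=(30, 2, 4))
print(d_error(X, beta=np.zeros(4)))
```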
Our Monte Carlo study explores the right number of burn-in and used draws in hierarchical Bayes estimation. Experimental factors are the number of choice tasks, the number of parameters to be estimated, and the heterogeneity in the data, and their influence on the draw settings. Criteria for validation are convergence of the hierarchical Bayes model, capturing of long-term oscillations, and goodness of fit against the known simulated data. As a result, we present a guideline for everyday work.
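One widely used convergence diagnostic relevant to choosing burn-in and used draws (not necessarily the exact criterion applied in this study) is the split-R-hat statistic, which can be computed from the saved draws of a parameter:

```python
import numpy as np

def split_rhat(draws):
    """Gelman-Rubin split-R-hat for one parameter.
    draws: array of shape (n_chains, n_draws) of post-burn-in MCMC draws."""
    n_chains, n_draws = draws.shape
    half = n_draws // 2
    chains = np.concatenate([draws[:, :half], draws[:, half:2 * half]], axis=0)
    n = chains.shape[1]
    chain_means = chains.mean(axis=1)
    between = n * chain_means.var(ddof=1)          # between-chain variance
    within = chains.var(axis=1, ddof=1).mean()     # within-chain variance
    var_plus = (n - 1) / n * within + between / n
    return np.sqrt(var_plus / within)

# Example with two synthetic, well-mixed chains: values close to 1 indicate convergence.
rng = np.random.default_rng(0)
print(split_rhat(rng.normal(size=(2, 4000))))
```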
Clients of CBC studies frequently ask if their results are statistically significant. Are you sure you can answer such questions with confidence? We propose a novel, practical approach to conducting statistical significance tests in the context of CBC studies. While practical, it also has a solid theoretical foundation. We argue it is necessary to use the total KPI variance, which consists of sampling variance and modelling variance.
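A minimal sketch of the general idea, under the assumption that the sampling and modelling components of the variance of a KPI difference have already been estimated (the paper's exact decomposition and estimators may differ):

```python
import numpy as np
from scipy.stats import norm

def kpi_significance(kpi_a, kpi_b, sampling_var, modelling_var):
    """Two-sided z-test for the difference between two simulated KPIs.
    sampling_var and modelling_var are the two components of the variance
    of the KPI difference; the total variance is their sum."""
    total_var = sampling_var + modelling_var
    z = (kpi_a - kpi_b) / np.sqrt(total_var)
    p_value = 2 * norm.sf(abs(z))
    return z, p_value

# Example: shares of preference of 34% vs. 30% with assumed variance components.
print(kpi_significance(0.34, 0.30, sampling_var=0.00015, modelling_var=0.00010))
```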