Every year, many researchers and insights professionals submit papers to present at the Analytics & Insights Summit. A selection committee carefully reviews the submissions and selects the papers that best fit the conference, ensuring it remains unique and provides value and learning for attendees.
Researchers are frequently challenged to quickly bring stakeholders up to speed on complex and unfamiliar choice-based methods for obtaining insights. However confident the researcher is in the methods, or experienced in using them successfully, gaining alignment with stakeholders is critical to moving projects forward and delivering the outcomes needed. Some well-tested approaches can help the research team effectively address resistance, take feedback, concede where needed, and hold the line appropriately, ultimately satisfying stakeholders' needs for quality insights.
Clients often want to test many items using MaxDiff. Previous research shows that Sparse MaxDiff is a valid technique for handling such long item lists; however, those designs typically show many alternatives per task (5 or more). What happens when the items are extremely wordy or long? With triplets or, even worse, paired comparisons, we get much less information per task. Can Sparse approaches still work under these more extreme conditions?
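To make the sparse setting concrete, here is a minimal Python sketch (our own illustration, not the authors' design algorithm; the function name and parameters are invented) of a paired-comparison design in which each respondent sees each item at most once, so estimation must pool information across respondents, e.g., via HB:

```python
import random

def sparse_maxdiff_pairs(n_items, n_tasks, seed=None):
    """One respondent's sparse paired-comparison MaxDiff design.

    Each item appears at most once per respondent, so individual-level
    information is minimal and estimation must borrow strength across
    respondents (e.g., via HB).
    """
    assert 2 * n_tasks <= n_items, "sparse: each item shown at most once"
    rng = random.Random(seed)
    items = rng.sample(range(n_items), 2 * n_tasks)
    return [(items[2 * t], items[2 * t + 1]) for t in range(n_tasks)]

# Example: 120 wordy items, 10 pairs per respondent
print(sparse_maxdiff_pairs(n_items=120, n_tasks=10, seed=42)[:3])
```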
The 2021 Kurz/Binner Sawtooth presentation introduced 9 priming questions to include before showing a CBC. Although we wanted to implement these priming questions, we got pushback from clients about the number of extra questions. Is there a subset of the 9 questions that leads to the same improvement in out-of-sample prediction? We ran a factor analysis and found 4 questions that account for the majority of the variance. Will asking just those 4 questions deliver the same priming benefit?
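As an illustration of the question-reduction step, the sketch below fits a 4-factor model to placeholder ratings on the 9 questions and keeps the question loading most strongly on each factor; the simulated data and variable names are hypothetical, and this may not match the authors' exact procedure:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Placeholder ratings on the 9 priming questions (real survey data in practice)
rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(500, 9)).astype(float)

# Fit a 4-factor model; keep the question loading most strongly on each factor
fa = FactorAnalysis(n_components=4, random_state=0).fit(ratings)
loadings = fa.components_                 # shape: (4 factors, 9 questions)
keep = sorted({int(np.argmax(np.abs(row))) for row in loadings})
print("candidate question subset (0-indexed):", keep)
```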
We discuss how best to set up the price attribute in conjoint studies when specific product prices cannot be tested. We weigh the pros and cons of each approach and provide practical recommendations. This will give practitioners the confidence to expand the situations in which they use conjoint analysis for pricing research, ultimately helping to promote conjoint analysis as a methodology across the market research industry.
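One widely used setup when exact prices cannot be tested is to define the price attribute as percentage adjustments around each product's own base price and compute the displayed price on the fly. The sketch below illustrates the idea with invented SKUs and levels; it is not necessarily the recommendation the paper arrives at:

```python
# Base prices differ by SKU, so shared exact price points can't be tested;
# instead, test relative levels and compute the displayed price on the fly.
base_prices = {"SKU_A": 4.99, "SKU_B": 7.49, "SKU_C": 12.99}   # hypothetical
relative_levels = [-0.20, -0.10, 0.00, 0.10, 0.20]             # % around base

def displayed_price(sku, level_index):
    """Price shown in the task: the SKU's base adjusted by the tested level."""
    return round(base_prices[sku] * (1 + relative_levels[level_index]), 2)

print(displayed_price("SKU_B", 4))   # SKU_B at +20% -> 8.99
```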
Sometimes one attribute is the primary factor in the decision process compared to the others. In these instances, the trade-offs within that dominant attribute need particular attention. The present study investigates various design and analytic strategies for dealing with dominant attributes. Results suggest that building in overlap at the design stage and/or incorporating alternative-specific effects helps tame the dominant attribute and permits better understanding of, and more movement in, the non-dominant attributes.
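A minimal sketch of the design-stage overlap idea, under our own simplified assumptions (attribute names and levels are invented): restricting each task to only two distinct levels of the dominant attribute guarantees that at least two concepts tie on it, forcing the choice onto the remaining attributes:

```python
import random

def task_with_overlap(levels, n_concepts, dominant, rng):
    """Build one CBC task forcing level overlap on the dominant attribute.

    Only two distinct levels of the dominant attribute may appear, so at
    least two concepts tie on it and the choice must be resolved by the
    remaining attributes.
    """
    allowed = rng.sample(levels[dominant], 2)
    task = []
    for _ in range(n_concepts):
        concept = {a: rng.choice(lv) for a, lv in levels.items()}
        concept[dominant] = rng.choice(allowed)
        task.append(concept)
    return task

rng = random.Random(1)
levels = {"price": ["$10", "$20", "$30", "$40"],
          "brand": ["A", "B", "C"],
          "size": ["S", "M", "L"]}
print(task_with_overlap(levels, n_concepts=4, dominant="price", rng=rng))
```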
System 1 Implicit Reaction Tests (IRT) are being used more frequently in marketing research to assess brand preferences and emotional reactions. Quick-reaction MaxDiff paired-comparison tests (MaxDiff) are another approach for measuring emotional perceptions. In this paper, we compare IRT vs. MaxDiff vs. adaptive MaxDiff (a combination of the two approaches) to assess which provides a more accurate picture of individuals' preferences and likely behaviors.
Adapting conjoint tasks based on a respondent's previous answers has been a longstanding need in conjoint research. In this study, we will compare two different methods of implementing adaptive conjoint: preference-based conjoint and an approach using on-the-fly latent class assignment. In addition, we will attempt to determine how to optimize within each method.
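As a rough sketch of the on-the-fly latent class idea, assuming class-level utilities have been pre-estimated from earlier respondents (all names, shapes, and the update rule below are our own, not the authors' implementation), each answered task updates the respondent's posterior class probabilities under a multinomial logit:

```python
import numpy as np

def class_posteriors(choices, tasks, betas, priors):
    """Posterior class membership after each answered task.

    tasks: list of arrays (n_concepts, n_params) coded for the logit model;
    choices: index of the chosen concept in each task;
    betas: (n_classes, n_params) pre-estimated class utilities.
    """
    logp = np.log(np.asarray(priors, dtype=float))
    for task, choice in zip(tasks, choices):
        v = task @ betas.T                      # (n_concepts, n_classes)
        logp += v[choice] - np.log(np.exp(v).sum(axis=0))
    p = np.exp(logp - logp.max())
    return p / p.sum()

# Two hypothetical classes, one answered task: concept 0 was chosen
betas = np.array([[1.0, -2.0], [-1.0, 2.0]])
task = np.array([[1.0, 0.0], [0.0, 1.0]])
print(class_posteriors([0], [task], betas, [0.5, 0.5]))
```

The next task could then be selected, for instance, to maximize the predicted disagreement between the two most probable classes.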
An alternative-specific Choice-Based Conjoint (CBC) design can be the most appropriate approach for a feature-optimization or pricing study of complex products or offerings in categories like tech, durables, telecom, and financial services. In our presentation, we will compare an alternative-specific approach with other variations of CBC, such as traditional CBC, a shelf test, and a partial-profile design, and demonstrate the impact of the design on the estimation, findings, and recommendations in a conjoint study.
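To show what "alternative-specific" means structurally, here is a small sketch in which each concept carries the common attributes plus attributes that exist only for its alternative type (the category, attributes, and levels are invented for illustration):

```python
import random

# Attributes shared by all alternatives vs. attributes that exist only for
# certain alternatives (all categories and levels here are invented).
COMMON = {"price": ["$30", "$50", "$70"]}
SPECIFIC = {
    "streaming": {"simultaneous_streams": ["1", "2", "4"]},
    "cable": {"contract": ["none", "12 mo", "24 mo"], "dvr": ["no", "yes"]},
}

def concept(alt, rng):
    """One concept: the alternative label, common attrs, and its own attrs."""
    attrs = {"alternative": alt}
    for name, lv in {**COMMON, **SPECIFIC[alt]}.items():
        attrs[name] = rng.choice(lv)
    return attrs

rng = random.Random(7)
print([concept("streaming", rng), concept("cable", rng)])
```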
Analyzing open-ended questions efficiently, rapidly, and automatically is a significant challenge. We propose to address this issue with an algorithm that combines descriptive statistics and machine learning to automatically segment respondents' natural language. We illustrate the results using a study that KNACK conducted for UNICEF during the COVID-19 pandemic in 2020. We provide Python code.
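Pending the authors' own code, here is a stand-in illustration of the general recipe, with TF-IDF term weighting on the descriptive-statistics side and k-means on the machine-learning side (the toy answers are invented, and this is not the authors' algorithm):

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented toy answers; the paper would use the UNICEF study's open ends
answers = [
    "worried about school closures for my kids",
    "lost my job and income during lockdown",
    "children falling behind at school",
    "no income, cannot pay rent",
]

# Descriptive side: TF-IDF term weights; ML side: k-means segments
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(answers)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Top terms per segment characterize each language-based group
terms = vec.get_feature_names_out()
for c in range(2):
    top = km.cluster_centers_[c].argsort()[::-1][:3]
    print(f"segment {c}:", [terms[i] for i in top])
```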