Tutorials

April 30, 2024

Dig deeper with these half-day tutorial sessions led by experienced, plain-spoken practitioners. You’re sure to come away with a deeper understanding of how to execute your upcoming quant research studies. The longer format also creates more opportunities for Q&A and networking.

The tutorials are offered in a morning and an afternoon session, each with two classes to choose from.

Register Now

Morning Session

Tuesday, April 30 — 9:00 am - 1:00 pm
Choose one of two courses

9:00 am

Expand Your Toolbox with Advanced MaxDiff

Maximum Difference Scaling (MaxDiff, also known as Best-Worst Scaling) is a replacement for traditional rating-scale and ranking questions. It provides a robust and reliable approach to measuring preference and importance while capturing greater differentiation, and it even includes a handy respondent consistency score! In this workshop, we’ll go beyond the basics and cover popular extensions to MaxDiff and the unique solutions they offer. We’ll cover methods for handling very large lists of items, reducing survey length and respondent burden, and tailoring each exercise to the individual. We’ll also show some unique applications of MaxDiff from a few different industries that you might not have thought of before. More than just slides, we’ll walk through programming examples using Sawtooth Software’s Discover and Lighthouse Studio platforms to reinforce learning and give you the confidence you need to apply these methods yourself.
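For readers new to MaxDiff, the basic idea can be sketched in a few lines: respondents repeatedly pick the best and worst items from small subsets, and a simple best-minus-worst count gives a first-pass preference score. The task data below are purely hypothetical, and the counting approach shown here is a simplified illustration, not the HB estimation the workshop's software tools would use.

```python
from collections import Counter

# Hypothetical MaxDiff tasks: in each, a respondent marks the best
# and the worst item from the subset shown (illustrative data only).
tasks = [
    {"shown": ["A", "B", "C", "D"], "best": "A", "worst": "D"},
    {"shown": ["B", "C", "D", "E"], "best": "B", "worst": "D"},
    {"shown": ["A", "C", "E", "F"], "best": "A", "worst": "F"},
]

best, worst, shown = Counter(), Counter(), Counter()
for t in tasks:
    best[t["best"]] += 1
    worst[t["worst"]] += 1
    shown.update(t["shown"])

# Simple best-minus-worst count score, normalized by times shown.
scores = {item: (best[item] - worst[item]) / shown[item] for item in shown}
```

Because every score is anchored by explicit best and worst choices rather than a rating scale, the resulting ranking avoids the scale-use bias that plagues ratings.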

Brian McEwan
Sawtooth Software
Option 1

9:00 am

Practical Steps to Ensure Higher Data Quality

Quality insights are predicated on quality data. Nevertheless, online survey research is regularly plagued by fraudulent data, including responses from bots and click farms, and by respondents who give less-than-thoughtful answers in a race to complete as many research studies as possible. In the best-case scenario, low-quality data add noise to your dataset, limiting your ability to find statistically meaningful results; at worst, low-quality data introduce bias, causing you to make incorrect inferences and business decisions. In this session, we provide an overview of concrete steps that you as a researcher can take to mitigate the threat of low-quality data in your studies. We discuss survey design, how to create an engaging survey, and how to measure attention throughout the survey. We also discuss paradata you can collect from potential respondents to identify fraud – including the use of VPN/VPS, geolocation, Sawtooth Software's RLH score, and the use of copy/paste to complete open-ended questions. Finally, we consider common techniques for cleaning out inattentive respondents and compare data cleaning to our tech-enabled fraud prevention.
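To give a sense of the RLH score mentioned above: root likelihood (RLH) is the geometric mean of a choice model's predicted probabilities for the alternatives a respondent actually chose, so values near the chance rate (1 divided by the number of alternatives per task) flag random-looking responding. A minimal sketch, with hypothetical per-task probabilities rather than output from any real estimation:

```python
import math

def rlh(chosen_probs):
    """Root likelihood: geometric mean of the model's predicted
    probabilities for the chosen alternative in each task."""
    n_tasks = len(chosen_probs)
    return math.exp(sum(math.log(p) for p in chosen_probs) / n_tasks)

# Hypothetical respondents answering 4-alternative tasks
# (chance rate = 0.25). Probabilities are illustrative only.
consistent = rlh([0.70, 0.65, 0.80, 0.75])   # well above chance
random_like = rlh([0.26, 0.24, 0.25, 0.27])  # hovering near chance
```

A respondent whose RLH sits near the chance rate is answering no better than a coin flip, which is one signal, alongside paradata, that the record deserves scrutiny.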

Steven Snell
Rep Data
Vignesh Krishnan
Rep Data
Cameron Halversen
Sawtooth Software
Option 2


Afternoon Session

Tuesday, April 30 — 2:00 pm - 5:30 pm
Choose one of two courses

2:00 pm

Running a Pricing Conjoint Project from Start to Finish

Determining optimal product or portfolio pricing has always been a critical strategic decision, and in light of the pricing dynamics we are observing these days, it is becoming even more important to have data-driven evidence behind those decisions. While the theory of conjoint analysis is important and much discussed at the A&I Summit, this workshop takes a practical approach and runs you through an entire pricing (shelf) CBC study. We will first describe different pricing research approaches, touching on the pros and cons of methods such as Van Westendorp, Gabor-Granger, monadic experiments, and conjoint. Then we will take you through both the setup and the analysis steps of a shelf conjoint, including building a customized statistical design, data re-coding, counts, logit, and HB. At each step we will share tips and tricks for getting the most out of your data: how do you model the data, how do you interpret the results, and what are the key things to look out for? All of this will be discussed using a real dataset in Lighthouse Studio.
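As a taste of the simpler end of the methods the workshop compares, a Gabor-Granger exercise asks purchase intent at a ladder of price points and reads off a revenue-maximizing price. The numbers below are hypothetical, not results from any study, and real analyses would interpolate between tested points rather than simply pick the best one:

```python
# Hypothetical Gabor-Granger data: share of respondents who say
# they would buy at each tested price point (illustrative only).
prices = [4.99, 5.99, 6.99, 7.99, 8.99]
pct_willing = [0.82, 0.70, 0.55, 0.38, 0.22]

# Expected revenue index at each price = price x purchase likelihood.
revenue = [p * w for p, w in zip(prices, pct_willing)]
best_price = prices[max(range(len(prices)), key=revenue.__getitem__)]
```

Gabor-Granger is quick and cheap but, unlike a shelf CBC, captures no competitive context, which is exactly the trade-off the session walks through.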

Remco Don
SKIM
Egle Meskauskaite
SKIM
Option 1

2:00 pm

Situational Choice Experiments

The go-to analysis engine for choice modelers is the conditional multinomial logit (MNL) model (McFadden 1974; Ben-Akiva and Lerman 1985; Train 2003). Conditional MNL predicts choices among alternatives conditioned on the attributes and levels of those alternatives. A less well-known special case of MNL is the polytomous multinomial logit (P-MNL) model (Theil 1969; Hoffman and Duncan 1988). In P-MNL, the attributes and levels describe not the products but the chooser, the situation, or the context in which a decision occurs: respondents choose among an invariant set of alternatives, because the attributes and levels describe the situation, not the alternatives. If we also apply experimental control to an experiment featuring multi-profile choice sets analyzed via conditional logit, we have the well-known choice-based conjoint (CBC) experiment (Louviere and Woodworth 1983; Louviere 1988; Louviere et al. 2000). A situational choice experiment (SCE), however, differs from a CBC in that each question shows one experimentally designed profile describing the attributes and levels of the choice situation, context, or chooser, and elicits a choice among a fixed set of alternatives; the SCE then uses polytomous MNL to generate utilities. Few marketers know about SCEs, and there doesn't seem to be a single-source reference about them, two limitations we hope to remedy in this presentation. After introducing polytomous logit and distinguishing SCE from CBC, we'll show how to design and program SCEs in Lighthouse Studio, how to analyze them in MBC software, and how to simulate them in Excel.
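The conditional/polytomous distinction can be sketched in a few lines: conditional MNL varies the attribute vector across alternatives and shares one coefficient vector, while polytomous MNL shares one situation vector and gives each alternative its own coefficients. All attribute codings and coefficient values below are hypothetical, purely to show the structural difference; they are not output from Lighthouse Studio or MBC.

```python
import math

def conditional_mnl(X, beta):
    """Conditional MNL: each alternative j has its own attribute
    vector X[j]; one coefficient vector beta is shared by all."""
    utils = [sum(x * b for x, b in zip(row, beta)) for row in X]
    exps = [math.exp(u) for u in utils]
    total = sum(exps)
    return [e / total for e in exps]

def polytomous_mnl(z, gammas):
    """Polytomous MNL: one situation vector z is shared; each
    alternative j has its own coefficients gammas[j] (one
    alternative's coefficients fixed at zero for identification)."""
    utils = [sum(zi * g for zi, g in zip(z, gamma)) for gamma in gammas]
    exps = [math.exp(u) for u in utils]
    total = sum(exps)
    return [e / total for e in exps]

# Conditional: three alternatives described by dummy-coded attributes.
p = conditional_mnl([[1, 0], [0, 1], [0, 0]], beta=[1.0, 0.5])

# Polytomous: one situation vector, alternative-specific coefficients.
q = polytomous_mnl([1, 1], gammas=[[0, 0], [0.5, 0.2], [-0.3, 0.1]])
```

In an SCE, the experimental design varies `z` (the situation profile) across questions while the alternative set stays fixed, which is exactly why polytomous rather than conditional logit is the right engine.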

Keith Chrzan
Sawtooth Software
Dean Tindall
Sawtooth Software
Option 2

Don’t delay!

Get tickets to attend the 2024 A&I Summit