Why Do Respondents Skip Questions in Surveys: A Visually Integrative Representation of User Types

Given at UXPA-DC's User Focus Conference, Oct. 19, 2012

Ricardo Carvalho, Joseph Luchman, Michael Paraloglou, Vanessa Patterson, Ron Vega
Outline
• Background
• Our Research
• Our Findings
• Our Conclusions & Recommendations
• Future Research
Background
• DoD Youth Poll December 2011 survey
  – Mailed to 50,000 youth ages 16 to 24 with no prior or current military experience through stratified, probability-based sampling
  – Address-based sample drawn from a list frame estimated to cover 92% of the target population
  – Standard mailing methodology (Dillman 2007)
  – Scantron survey; double-entered and verified
  – Up-front and contingent monetary incentives upon completion
  – Response Rate 3: 17%; Contact Rate 2: 92%; n = 7,210
• Note: although our study was specific to completing a paper survey, much of the theory (and more importantly, the tool) can be applied to other survey modes and experiences
Background: The Issue
– The amount of refusals per item was very small (<1.5%) up to Q29
– But at Q30 and thereafter, it increases to about 5-6%
• The behavior of certain users changes in a consistent manner
• Can we understand the users' experience and behavior?
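The refusal jump described above is the kind of pattern a per-item nonresponse check surfaces. As a minimal sketch (not the authors' code, and with hypothetical data shaped like the pattern described), item refusal rates can be computed from a respondent-by-question matrix of answered flags:

```python
# Illustrative sketch: per-item refusal rates from a matrix of
# answered flags. All data below are hypothetical.

def item_refusal_rates(responses):
    """responses: list of per-respondent lists of booleans
    (True = item answered). Returns the refusal rate per item."""
    n = len(responses)
    n_items = len(responses[0])
    rates = []
    for j in range(n_items):
        refused = sum(1 for r in responses if not r[j])
        rates.append(refused / n)
    return rates

# Tiny hypothetical example: 100 respondents, 3 items;
# 5 respondents skip the last item.
responses = [[True, True, i >= 5] for i in range(100)]
print(item_refusal_rates(responses))  # → [0.0, 0.0, 0.05]
```

A sudden step in these rates at one question (as at Q30 here) is the trigger for the deeper look that follows.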
[Chart: # Refusals to Q29 and Q30 — refusal rates jump from ~1% at Q29 to 5-6% at Q30, comparing respondents with 1-18 refusals (one to many but not all items) against those with 19 refusals (all items)]
Our Research
• We noticed 370 respondents whose behavior seemed fundamentally different
  – Are these different "user types"?
  – Or was there a usability issue with the survey ("troublesome areas")?
  – How can we identify a "user type" or a "troublesome area"?
• Does this kind of information tell us what to change in the experience? No…
• The behavior we noticed is characteristic of "satisficing" (Simon 1957)
  – Economic phenomenon: "satisfying" and "sacrificing"
  – We exercise an acceptable level of effort to achieve a satisfactory but less-than-optimal outcome
  – Example: driving around for the cheapest gas price
  – There is substantial literature on this topic and how it applies to surveys (Krosnick 1991)
  – Behavior points to this phenomenon, but it is very difficult to be certain
• The focus of our presentation is NOT on exploring this behavior but on understanding and visualizing different user types
  – How does the survey experience impact users?
  – Are there usability issues we can notice or isolate?
  – Can we build a tool to help improve the overall user experience and hence obtain more complete and accurate information?
Our Findings
• What we did:
  – Examined only the 370 respondents who refused all of Q30
  – Determined whether unique user types existed through mixture modeling
  – Wrote code to visually map these user types and their refusals for the remainder of the survey
  – Marked page breaks and "grid" questions in this map
• What we found:
  – 3 distinct user types
    • The Quitters
    • The Returners
    • The Completers
  – The map allows us to easily identify these user types
  – The map also allows us to easily identify "troublesome areas"
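The mapping code itself is not shown in the deck; as an illustrative sketch of the idea (not the authors' implementation), a respondent-by-question "answered" matrix can be rendered as a text map with page breaks marked, so refusal patterns like the ones below become visible at a glance. All data here are hypothetical:

```python
# Illustrative sketch: render an answered/refused matrix as text rows,
# one row per respondent, with page breaks marked.

def render_map(answered, page_breaks):
    """answered: list of per-respondent lists of booleans
    (True = answered). page_breaks: set of question indices where a
    new survey page starts. Returns one text row per respondent:
    '#' = answered, '.' = refused, '|' = page break."""
    rows = []
    for resp in answered:
        cells = []
        for j, a in enumerate(resp):
            if j in page_breaks:
                cells.append("|")
            cells.append("#" if a else ".")
        rows.append("".join(cells))
    return rows

# Two hypothetical respondents, six questions, page break before Q4:
quitter = [True, True, True, False, False, False]
completer = [True, True, True, True, False, True]
for row in render_map([quitter, completer], {3}):
    print(row)
# → ###|...
#   ###|#.#
```

The actual tool uses colored squares and marks grid questions as well; the principle, aligning respondent behavior with survey layout, is the same.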
Our Findings
(Black line = pages; orange line = grid question; colored squares = question was ANSWERED)
[Map of the 370 respondents]
Our Findings: The Quitters
[Map highlights: last page of survey; Demographics]
• Engagement clearly breaks off and users flip to the back of the survey
• The paper survey presents users with its full workload immediately
• Users complete the Demographic questions ("essential" and easy items) for the token of appreciation
Our Findings: The Returners
[Map highlight: grid questions]
• Engagement terminates after the long second grid question (Q30) but returns
• Users selectively respond to "taskful" questions (i.e., grid questions) to minimize effort
Our Findings: The Completers
[Map note: only a few questions are left unanswered by this user type]
• Most conscientious and engaged group
• Engagement terminates only at Q30
• Occasional refusals
Our Findings: Profiling the User Types
[Chart: user-type composition by race (White, Black, Hispanic, Asian, Other) — Whites account for almost all of the Returners; Asians show more Completers]
[Slide: census-block sociodemographic variables used in the predictive model]
• Demographics: overall population; population by age (16-17, 18-20, 21-24); median age; male and female population; non-Hispanic and Hispanic population by race (White, Black, American Indian, Asian, Native Hawaiian or Pacific Islander, Other)
• Household and housing: average household size (overall, family, non-family); population in group quarters (nursing homes, other institutionalized, college dorms, military barracks, non-institutionalized); population speaking only English or speaking Spanish at home, age 5 and older; housing units owned or rented by occupant; average length of residence
• Education, income, and poverty: educational attainment, age 25 and up (less than 9th grade through doctorate); median household income; families at and above poverty level, with and without children
• Labor force, age 16 and up: employed, unemployed (count and percent), and not in labor force; employment sector (private for-profit and not-for-profit; local, state, and federal government; self-employed; unpaid family work; armed forces); occupation class (blue collar, white collar, service and farm)
• Mixture model: seeks homogeneous distributions within the data based on the number of questions refused after Q30
• Predictive model based on census-block sociodemographic data linked to respondent scores
  – Exploratory predictive model (i.e., empirically driven)
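The deck does not show the mixture model itself. As a minimal sketch of the general technique (not the authors' model, and with hypothetical refusal counts and a hypothetical choice of two components), a one-dimensional Gaussian mixture fit by EM separates respondents by how many questions they refused after Q30:

```python
# Illustrative sketch: a minimal 1-D Gaussian mixture fit by EM.
# Data and number of components are hypothetical.
import math

def fit_gmm_1d(x, k=2, iters=50):
    """Fit a k-component 1-D Gaussian mixture to data x via EM.
    Returns (weights, means, variances)."""
    n = len(x)
    lo, hi = min(x), max(x)
    # Crude initialization: spread means across the data range.
    means = [lo + (hi - lo) * (i + 1) / (k + 1) for i in range(k)]
    varis = [max((hi - lo) ** 2 / 12.0, 1e-3)] * k
    weights = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for xi in x:
            p = [w * math.exp(-((xi - m) ** 2) / (2 * v)) / math.sqrt(2 * math.pi * v)
                 for w, m, v in zip(weights, means, varis)]
            s = sum(p) or 1e-12
            resp.append([pi / s for pi in p])
        # M-step: update weights, means, and variances.
        for j in range(k):
            nj = sum(r[j] for r in resp) or 1e-12
            weights[j] = nj / n
            means[j] = sum(r[j] * xi for r, xi in zip(resp, x)) / nj
            varis[j] = max(sum(r[j] * (xi - means[j]) ** 2
                               for r, xi in zip(resp, x)) / nj, 1e-3)
    return weights, means, varis

# Hypothetical refusal counts: one cluster near 2, one near 20.
counts = [1, 2, 2, 3, 1, 19, 20, 21, 20, 18]
w, m, v = fit_gmm_1d([float(c) for c in counts], k=2)
print(sorted(round(mi, 1) for mi in m))  # two cluster means, near 2 and ~20
```

In practice a model-fit criterion (e.g., BIC across candidate values of k) would decide how many user types the data support; here three components emerged for the 370 respondents.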
User Type           Median Income   Unemployment Rate   % with Bachelor's Degree   Summary of other socio-economic variables
Quitters (n=111)    $63,000         7.1%                13.4%                      Government and private, not-for-profit employment with large-household-size conditions
Returners (n=180)   $58,000         8.5%                10.7%                      More transient, socioeconomically disadvantageous conditions
Completers (n=79)   $61,000         8.1%                11%                        Less transient, socioeconomically advantageous conditions

Quitters: "I'm tired of being surveyed! The government is wasting our time/money!"
Returners: "I'm doing the best I can, but you're asking a lot."
Completers: "I should do a good job at this, my opinions are helping."
Our Conclusions & Recommendations
• The tool gives us immediately visual, easy-to-interpret results that clearly bring out patterns
  – We "knew" the user types before we modeled them
  – Very easy to explain to clients or share across professionals
• With visual mapping, we can:
  – Easily see the entire user experience
  – Determine whether unique user types exist
  – See whether usability problems exist with certain questions and people
    • Length interactions
    • Placement issues
    • Factual vs. attitudinal questions
• Provides a great alternative to complex statistical investigation, which:
  – May not give you anything useful or helpful
  – Is usually empirically driven, so results can change frequently
  – Makes it hard to determine what to ACTUALLY do!
• A simple and effective way to communicate and examine a survey's effectiveness with clients and other researchers
  – Can overlay respondent behavior with survey design
  – Natural extension of pilot testing and cognitive testing
• Most concerns are with total non-response, but this suggests specific item non-response patterns
  – Allows us to pinpoint the characteristics of those items
  – As well as the people behind that non-response
• This suggests that item-level non-response adjustments may be necessary if a variable is of key interest
  – Weigh the option against client interests
  – Complexity can grow exponentially
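One common form of such an adjustment (a sketch of the general weighting-class technique, not a method from the presentation) inflates the weights of item respondents within each class so that the class's weight total is preserved. The classes, weights, and data below are hypothetical:

```python
# Illustrative sketch: weighting-class adjustment for item-level
# non-response, using user type as the class. All data hypothetical.

def item_nonresponse_adjust(records, item):
    """records: list of dicts with 'type', 'weight', and item keys
    (None = item refused). Returns new weights (keyed by record id)
    for item respondents, scaled so each class's total is preserved."""
    adjusted = {}
    types = {r["type"] for r in records}
    for t in types:
        in_class = [r for r in records if r["type"] == t]
        total = sum(r["weight"] for r in in_class)
        answered = [r for r in in_class if r[item] is not None]
        answered_total = sum(r["weight"] for r in answered)
        factor = total / answered_total if answered_total else 0.0
        for r in answered:
            adjusted[id(r)] = r["weight"] * factor
    return adjusted

# Hypothetical: two Returners, one of whom refused Q30.
recs = [
    {"type": "Returner", "weight": 10.0, "q30": 4},
    {"type": "Returner", "weight": 10.0, "q30": None},
    {"type": "Completer", "weight": 8.0, "q30": 2},
]
adj = item_nonresponse_adjust(recs, "q30")
print(sorted(adj.values()))  # → [8.0, 20.0]
```

Doing this per key item (rather than once for the whole survey) is what makes the bookkeeping grow quickly, which is the complexity concern noted above.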
Our Recommendations
1. Everyone knows the value of pre-testing a survey. This emphasizes it, along with the need for "true" test conditions.
2. Avoid areas where respondents are forced to engage for long periods.
3. The design of a survey is critical and should not be left just to the statisticians! Example: paper surveys and the interaction of questions and page location.
4. Different users may require different persuasion techniques.
5. Remember that a reassessment of your key variables is always a good idea and can uncover significant issues (try this new tool!).
Future Research
– Incentive levels
– Survey instructions
– Customized invitations
– Different layout