

Page 1: Who Cares? Measuring Preference Intensity in a Polarized ...users.nber.org/~dlchen/papers/Who_Cares.pdfQuadratic Voting for Survey Research (QVSR). This method mimics real world trade-offs

Who Cares? Measuring Preference Intensity in a Polarized Environment∗

Charlotte Cavaillé (Ford School of Public Policy, University of Michigan)

Daniel L. Chen (Toulouse School of Economics - IAST)

Karine Van der Straeten (Toulouse School of Economics - IAST)

15 January 2020

PRELIMINARY, Please do not quote

Abstract

Among all the policy issues voters have preferences over, some are more likely than others to matter for people's thoughts, intentions and behavior. This variation, often described as variation in preference intensity, is difficult to measure empirically. In this paper, we test a new survey technology designed to address this methodological gap. This technology, called Quadratic Voting for Survey Research (QVSR), gives respondents a fixed budget to "buy" votes in favor of (or against) a set of policy proposals. Because the price for each vote is quadratic, it becomes increasingly costly to acquire additional votes to express support for (or opposition to) the same proposal. In a polarized political context, we expect QVSR to better measure preference intensity relative to existing survey technologies. To test this expectation, we randomly assign individuals to take the same survey, varying only the technology used to measure preferences. We compare how each survey technology predicts real-world outcomes and find that QVSR significantly outperforms alternative methods. We conclude by discussing the benefits of QVSR for research in political science.

Keywords: Survey research, Public opinion, Attitude strength, Preference intensity, Likert item, Quadratic voting for survey research.

∗We would like to thank Jonathan Ladd, Matthew Levendusky, Thomas Leeper, James Druckman, Samara Klar, Adam S. Levine, Gregory Huber, Kosuke Imai, Rich Nielsen and Michele Margolis for extensive help in the design phase of the pilot. Alisha Holland, Scott Page and Glen Weyl also provided important feedback. Presentations at the Institute for Advanced Study in Toulouse (IAST), Georgetown University's Political Economy workshop, Columbia University's Political Economy workshop, Oxford University's Comparative Politics workshop and Princeton's CSDP seminar generated many helpful comments. We would also like to acknowledge generous funding from the IAST inter-disciplinary seed grant, Carnegie's Bridging the Gap grant, as well as funding from Georgetown University's School of Foreign Service Summer Grant. Chen and Van der Straeten acknowledge IAST funding from the French National Research Agency (ANR) under the Investments for the Future (Investissements d'Avenir) program, grant ANR-17-EUR-0010.


Introduction

Social scientists collect attitudinal survey data to better understand what people think, prefer and want.1 This information is used by political scientists as a benchmark to judge the quality of democracy in a given country (Gilens 2012; Dahl 1973). It is also central to studies of political change in democracies.2 Both research endeavors require knowing not only what people want (preference orientation) but also how much they want it (preference intensity). For example, when assessing democratic accountability, researchers have to decide whether a policy intensely preferred by a minority should be given more or less weight than a policy supported by a mostly indifferent majority. Empirical studies of political change also emphasize differences in preference intensity: individuals have many political preferences, but only those that are salient, i.e. intense, are expected to decisively influence political behavior.3

Despite its relevance, preference intensity is poorly measured in surveys. Most studies rely on some variant of the Likert item, a survey technology developed in the 1930s by the psychologist Rensis Likert. Survey respondents are asked to evaluate a policy statement by picking one of 5 (or 7) ordered responses ranging from 'strongly agree/favor' to 'strongly disagree/oppose.' Answers to Likert items capture a mix of preference orientation and preference intensity, yet disentangling one from the other is difficult. This survey technology raises two additional concerns. First, respondents who answer a battery of Likert items have only limited incentives to consider trade-offs across policy issues (the abundance problem).4 In other words, the data collected provide limited information on the issues people might prioritize when faced with a forced choice between two policies (e.g., in the case of Brexit, immigration control at the expense of access to the single market). A second concern, most common in studies of contemporary American politics, is the distortion introduced by partisan motives, by which survey answers cluster at both extremities of the ordinal response scale (the bunching problem). The same response category (strongly favor/oppose) combines respondents who care about the issue and respondents who, while expressing the same position, do

not care as intensely and are merely "paying lip service to the party norm" (Zaller 2012).

1 In this paper, we define 'preferences' as evaluations of statements, in this case policy-relevant statements, ranging from positive to negative. We also distinguish between 'preference orientation' and 'preference intensity.' Our definitions are in broad agreement with similar concepts in social psychology such as 'attitude', 'attitude orientation' and 'attitude strength.'

2 In bottom-up models of democratic politics, preference change is assumed to precede policy change (Grossman and Helpman 2018; Gennaioli and Tabellini 2019). In top-down models, while elites are often insulated from popular demand, they nevertheless operate within boundary constraints set by mass policy preferences (Pierson 2001). A shared assumption is that change is more likely when existing policies do not reflect mass preferences.

3 Throughout the remainder of the paper, we avoid the concept of salience, as its definition in political science is somewhat vague, describing in some instances something akin to preference intensity as defined in this paper and, in other instances, something that is "on top of voters' minds" because of the issues elites decide to politicize over others. A salient issue according to the latter definition need not be salient according to the former.

4 Such trade-offs can be due to the incompatibility of options (e.g., one cannot simultaneously support more government spending and tax cuts), resource scarcity (politicians cannot implement all policies), or the specific features of the choice space (one's favorite policy bundle might not be offered by any existing candidate or party).

In rare instances, a Likert item will be followed by a question asking how important a given issue is to the respondent. Responses are collected using a similar categorical scale ranging from 'not at all important' (1) to 'very important' (5) (Miller and Peterson 2004; Howe and Krosnick 2017). This survey technique improves measures of preference intensity by explicitly asking respondents about it. Yet, it does not address the abundance and bunching problems.

Our goal in this paper is to identify and test a survey technology that can improve the way we measure preference intensity. Several solutions to the abundance problem have been proposed (e.g. issue ranking), with limited success. Recent examples, such as Hanretty, Lauderdale and Vivyan (2019), build on conjoint analysis. However, this type of forced-choice design only provides a population-level estimate of preference intensity, limiting its relevance to specific areas of inquiry that do not require individual-level measures.5 To the best of our knowledge, no solution to the bunching problem has been suggested.

Like any measurement exercise, successfully measuring variation in preference intensity requires answering three questions. First, conceptually, what is preference intensity? Second, theoretically, which survey technology best measures preference intensity? Third, empirically, is this survey technology indeed measuring preference intensity?

In response to the first question, we follow common practice in political economy and conceptualize preference intensity as the willingness to incur the costs of aligning one's political behavior with one's preference orientation. In other words, the more intense the preference, the more likely an individual is to act to move the status quo closer to her preferred political outcome. We also posit that the psychological cost of misreporting one's preference depends positively on preference intensity.

To answer the second question, we propose a simple model of survey answering and use it to compare different survey technologies, including a new way of measuring preference intensity called Quadratic Voting for Survey Research (QVSR). This method mimics real-world trade-offs by asking respondents to vote on a bundle of issues under conditions of scarcity: respondents are constrained by a fixed budget with which to 'buy' votes. Because the price for each vote is quadratic, it becomes increasingly costly to acquire additional votes to express more intense support for (or opposition to) the same issue. The budget constraint and quadratic pricing compel respondents to arbitrate between the issues in the bundle and make it costly to express intense preferences by voting repeatedly for the same issue (Lalley and Weyl 2016). This setup contrasts sharply with the Likert items' world of abundance, in which respondents face no trade-offs and can pick end-of-scale responses (e.g. strongly favor/oppose) at no cost. With QVSR, respondents, faced with scarcity, have to decide where to allocate their votes.

5 See Abramson, Kocak and Magazinnik (2019) for the misuse of conjoint analysis beyond these areas of application.


In line with the previously stated assumption, we expect respondents to prioritize the issues they care about the most, i.e. intense preferences, relative to the issues they care about the least, i.e. weak preferences. In other words, QVSR forces respondents to arbitrate between issues and, compared to Likert, to "de-bunch."6 We expect the resulting variance in survey answers (i.e. QVSR votes) to contain more information on differences in preference intensity than survey answers measured using a succession of Likert-type ordinal scales.

To answer the third question, i.e. whether or not QVSR better measures preference intensity, we randomly assign individuals to take the same survey, varying only the technology used to measure preferences. In addition to Likert items ('Likert' treatment) and QVSR ('QVSR' treatment), we test a third survey tool which combines a Likert item with an issue-importance item asking respondents whether an issue is personally important to them ('Likert +' treatment). Because Likert + also places respondents in a world of abundance and does not penalize the partisan motive, it should theoretically face some of the same limits as Likert. Nevertheless, under certain conditions (e.g. a weak partisan motive), Likert + may perform reasonably well, so it remains to be seen whether, and in which situations, the improvement offered by QVSR is large enough to justify switching to it.

To examine which survey technology best measures preference intensity, we derive observable implications regarding the empirical manifestations of preference intensity. First, in line with our definition of preference intensity, we expect it to be correlated with real-world behavior. To probe the extent to which survey answers predict behavior, we asked respondents to perform three behavioral tasks. We also collected information on turnout in the 2016 elections. Second, building on workhorse assumptions in political economy, we assume that people directly and significantly affected by a policy hold more intense preferences over this policy than individuals not affected by it. We consequently recorded the likelihood that one will benefit from or be harmed by a policy by collecting information on socio-economic factors that are correlated with exposure to a reform.7

The survey technology that best measures preference intensity should generate survey answers that are more predictive of individual behavior or personal exposure. What does it mean for a measurement tool to be 'more predictive'? To examine this issue, we focus on two facets of prediction adapted to binary outcomes, i.e. discrimination and classification. Simply put, discrimination examines the extent to which a given survey tool successfully sorts respondents into increasingly discriminating categories. In practice, this implies a monotonic relationship between a) probability scores derived using relevant survey answers and b) the relative frequency of the outcome of interest among individuals with the same probability score, i.e. P(Y = 1 | X = x). The better tool is the one for which the relative frequency of a given outcome increases monotonically with the probability score, i.e. xj > xi implies P(Y = 1 | X = xj) ≥ P(Y = 1 | X = xi) for all possible values of X. By classification we mean the overall share of observations correctly classified as either Y = 1 or Y = 0. Note that, with binary outcomes, the same classification rate can be achieved despite very different discrimination. Assuming the same classification rate, the tool with better discrimination will always be preferred.

6 In a previous study on the feasibility of QVSR using smart-phone technology, Quarfoot et al. (2017) asked respondents to answer the same survey using both Likert items and QVSR. According to this study, QVSR forces respondents to de-bunch. Indeed, we could check using the authors' data that among the respondents who signaled strong opposition to Obamacare in Likert, some cast -7 votes in QVSR (7 votes against, at the cost of 49 credits), while some cast 0 votes, with other respondents falling somewhere in between. In this case, a group that provides the same end-of-scale response in Likert ends up sorting itself across 7 different response categories (i.e. votes) in QVSR. (We would like to thank the authors of this study for sharing their data.)

7 Given the psychological cost of misreporting preferences one strongly cares about, we expect that the more intense the preference, the less likely one's corresponding survey answer is to be affected by transient contextual factors, such as partisan cues and primes. To examine the relationship between survey answers and resistance to contextual cues, we primed respondents' partisan identity, with the goal of examining differences in response patterns between the primed group and the control group. This manipulation failed and we ultimately postpone this part of the analysis to a future follow-up study.
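These two criteria can be made concrete with a short sketch. This is our own illustration of the definitions above, not code from the study: discrimination checks whether the outcome frequency P(Y = 1 | X = x) rises monotonically with the probability score, and classification counts correct 0/1 predictions at a 0.5 cutoff.

```python
from collections import defaultdict

def discriminates(scores, outcomes):
    """True if P(Y=1 | X=x), the outcome frequency among observations sharing
    probability score x, is weakly increasing in x (the definition in the text)."""
    by_score = defaultdict(list)
    for x, y in zip(scores, outcomes):
        by_score[x].append(y)
    rates = [sum(ys) / len(ys) for _, ys in sorted(by_score.items())]
    return all(a <= b for a, b in zip(rates, rates[1:]))

def classification_rate(scores, outcomes, cutoff=0.5):
    """Overall share of observations correctly classified as Y=1 or Y=0."""
    return sum((x >= cutoff) == y for x, y in zip(scores, outcomes)) / len(outcomes)

# Two hypothetical tools scoring the same eight respondents:
y      = [0, 1, 0, 0, 1, 1, 0, 1]
tool_a = [0.2, 0.8, 0.2, 0.2, 0.8, 0.8, 0.8, 0.2]  # monotone sorting
tool_b = [0.1, 0.1, 0.3, 0.3, 0.7, 0.7, 0.9, 0.9]  # same accuracy, worse sorting

print(discriminates(tool_a, y), discriminates(tool_b, y))              # True False
print(classification_rate(tool_a, y), classification_rate(tool_b, y))  # 0.75 0.75
```

The two toy tools misclassify the same share of observations (0.25) yet differ in discrimination, mirroring the point that identical classification rates can hide very different sorting.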

Our experimental results indicate that QVSR significantly outperforms Likert items and more consistently outperforms Likert + items in predicting both behavior and exposure. For example, we find that, when measured using Likert items, attitudes toward policies that benefit or hurt a given group (e.g. women or gun owners) say little about the respondent's own membership in that group. In contrast, knowing, for example, the number of votes someone gave in QVSR in favor of gender equality is much more informative of this person's actual gender. We also find that predictive scores derived from a model regressing turnout on answers to all survey items are more informative of actual turnout in the 2016 election when answers in QVSR are used. In light of our results, including limited evidence that QVSR significantly increases survey costs, we see few downsides to switching to QVSR. This is especially true for research questions and designs that would benefit from QVSR's higher discrimination abilities (e.g. research on the micro-foundations of models of democratic politics). In the conclusion, we point to important next steps for further testing applications of QVSR to political science and political economy. We also discuss how important debates might be profoundly reshaped once preference intensity is accounted for both theoretically and empirically.

Measuring Preference Intensity

Likert items are the standard tools used to measure policy preferences. Nevertheless, there are (at least) two potential issues with standard Likert items when one wants to measure policy preferences in general and preference intensity in particular.

The first conceptual issue with standard Likert items is the confusion regarding what they are trying to measure: what do the wording and survey structure capture? Are they fit to measure preference intensity as defined above? The second conceptual issue with Likert scales, and with any survey instrument more generally, is that respondents may not answer truthfully.

We propose a model that allows us to formalize both problems and to examine how alternative survey tools may help address them. We focus on two survey tools: a Likert item followed by an issue-importance item (Likert +) and the previously mentioned QVSR, which we present in detail below. The formal model is presented in the appendix. Below, we explain its main ingredients, intuitions, and results. For clarity of exposition, we discuss each potential flaw of standard Likert items in turn, but in the model both are addressed simultaneously.

What do Surveys Actually Try to Measure? Preference Orientation vs Preference Intensity

Conceptually, when one considers a number of proposed policy reforms, e.g. building a wall on the border between the US and Mexico or legislating to give same sex couples the right to adopt a child, one may assume that an individual's policy preferences on these reforms are characterized by two parameters: her preference orientation and her preference intensity.

Preference orientation describes whether or not the individual agrees with the reform as described. It is a subjective evaluation of the "quality" of the reform, as considered from the individual's perspective. Formally, one's preference orientation on each issue k = 1, ..., K can be described by a real number αik in the interval [−1, +1], where αik = +1 means perfect agreement with the reform and αik = −1 total disagreement with it. Intermediate values correspond to intermediate opinions, which might be due, for instance, to ambivalence (e.g. support for the policy principle but not its specific suggested implementation).

Preference intensity describes how important the issue at stake is to the individual. Formally, how much an individual cares about an issue, i.e. preference intensity, can be captured by a positive number βik in the interval [0, +1], where βik = 0 means that this individual does not care about this issue, and βik = 1 means that the question is of the utmost importance to her.

What does it mean for an individual to care about a given issue? In line with common practice in economics, our definition of preference intensity is inherently tied to observable behavior. Following Arrow, we define preference intensity as the willingness to incur the costs of aligning one's political behavior with one's political preference orientation. By that we mean any behavior that moves the status quo closer to one's preference orientation on a given issue.8

When deciding whether or not to act on one's preferences, individuals face at least two types of costs: the direct personal cost of engaging in a given behavior (e.g. the cost of gathering information on a candidate, the cost of turning out to vote, etc.) and the opportunity cost of not engaging in an alternative course of action. The latter cost is especially important in representative democracies. Indeed, voting for a candidate with a policy platform that includes one's preferred policy A can also mean expressing support for a policy B one dislikes (if it is part of the same platform) or failing to express support for a policy C one agrees with (if it is advocated by an opponent). The more someone cares about policy A relative to B and C, i.e. the more intense the preference for issue A, the more one is willing to incur the cost of behaving in ways that align with one's preference orientation on issue A. In practice, βik incorporates both direct costs and trade-off costs.

8 Note that the concept of preference intensity, as defined here, assumes a disconnect between the status quo and one's policy preferences. When the two align, "status quo changing behavior" is a meaningless concept. This requires researchers to pay close attention to item wording: support for the status quo can be captured by asking about a policy that unravels the status quo.

Ideally, one would like to collect information about both αik and βik, since both are relevant to social scientists studying policy preferences and to policymakers.

Standard Likert items With this framework in mind, we can ask what Likert items actually try to measure. The wording of the Likert question in our survey is the following: "Do you favor, oppose, or neither favor nor oppose: [Example] Giving same sex couples the legal right to adopt a child?", and, if favor (resp. oppose), "Do you favor (resp. oppose) that a great deal, moderately, or a little?". The wording ("favor"/"oppose") seems to imply that the question should predominantly tap into differences in preference orientation. Yet, when an individual reports that she opposes giving same sex couples the legal right to adopt a child a great deal, it might mean not only that she fully disagrees with the terms of this reform (preference orientation) but also that she has strong feelings about this issue (preference intensity). In summary, the wording of the Likert question suggests that while it should mostly capture preference orientation, respondents might also take into account the intensity of their preferences when answering.
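The branched question above is typically recoded into a single 7-point ordinal scale. A minimal sketch of such a recode; the −3 to +3 coding is our illustrative convention, not one taken from the paper:

```python
# Recode the branched Likert question ("favor/oppose/neither" plus a
# follow-up on strength) into one 7-point ordinal scale.
# The -3..+3 coding below is an illustrative convention, not the authors'.
STRENGTH = {"a great deal": 3, "moderately": 2, "a little": 1}

def likert7(direction, strength=None):
    """Return a score in {-3, ..., +3}; negative = oppose, positive = favor."""
    if direction == "neither":
        return 0
    sign = 1 if direction == "favor" else -1
    return sign * STRENGTH[strength]

# Someone who opposes "a great deal" lands at the end of the scale:
print(likert7("oppose", "a great deal"))  # -3
print(likert7("favor", "a little"))       # 1
```

Under such a coding, the end-of-scale categories ±3 are exactly where the bunching problem discussed earlier concentrates.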

Formally, we denote by xik ∈ [−1, +1] the sincere answer that individual i would like to give to this question, which, as discussed above, is likely to include information about αik (the preference orientation) and βik (the preference intensity). We mention the notion of sincerity here because, later in this section, we explain why there are reasons to believe that, when answering the survey, respondents may have incentives to systematically misreport their 'true' views.

We examine two alternative ways of measuring policy preferences, each promising to improve on the standard Likert item.

Likert + In Likert +, there is a follow-up question: "How important is this issue to you personally?" The options are: "Extremely important; Very important; Somewhat important; Not too important; Not important at all." Assuming that people accurately and sincerely report the importance of the issue to them, Likert + aims at collecting two pieces of information: xik (as in standard Likert items) and βik (the intensity of the preference). The expected improvement with Likert + is quite straightforward: the objective is to measure preference intensity directly.

Quadratic Voting for Survey Research QVSR draws on research on quadratic voting (QV) (Posner and Weyl 2018). Each person starts with an artificial budget, which she can use to "buy" votes in favor of or against a proposal. Table 1 shows the costs of each vote, displaying both the total and marginal costs. It costs twice as much at the margin to cast 4 votes as to cast 2 votes ($7 rather than $3), twice as much to cast 8 votes as to cast 4 votes ($15 rather than $7), and so on. Such a quadratic increase in the price of "buying" a vote has been shown mathematically to give fully informed, rational individuals incentives to report their preferences truthfully (Lalley and Weyl 2016).

[Table 1 here]

In its original formulation, QV was intended as a means of arriving at efficient social decisions when voting on policies that have a high probability of being implemented. Lalley and Weyl (2018) primarily assume that influencing policy is the main motivation of citizens. In contrast, we examine the implications of QV in a very different context, namely survey research.

Figure 1 shows a sample of what a hypothetical survey would look like under QVSR. The first three issues involve positions on immigration, the Affordable Care Act, and abortion. Respondents can scroll down to vote on the other seven issues (the order is randomized). Notice that respondents can buy votes in favor of and against each issue proposal. The cost of each vote increases according to the quadratic form and is displayed below each question. Respondents can go back and revise their answers so as to spend their entire budget. The maximum that a respondent can spend in favor of or against any question is 10 votes (100 credits).
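The pricing just described, where v votes on one issue cost v² credits out of a budget of 100, can be sketched as follows; the helper names are ours:

```python
# Quadratic vote pricing as described in the text: casting v votes on one
# issue costs v**2 credits, so the marginal cost of the v-th vote is 2v - 1.
BUDGET = 100      # credits per respondent
MAX_VOTES = 10    # 10 votes cost exactly 100 credits

def total_cost(votes):
    """Credits spent to cast |votes| votes for (+) or against (-) one issue."""
    return votes ** 2

def marginal_cost(v):
    """Extra credits needed to go from v-1 to v votes on the same issue."""
    return total_cost(v) - total_cost(v - 1)  # = 2v - 1

def is_feasible(allocation):
    """Check that a vector of per-issue votes fits the budget."""
    return (all(abs(v) <= MAX_VOTES for v in allocation)
            and sum(total_cost(abs(v)) for v in allocation) <= BUDGET)

# The examples from the text: the 4th vote costs $7 at the margin, the 2nd
# vote $3; 7 votes against a proposal cost 49 credits.
assert marginal_cost(4) == 7 and marginal_cost(2) == 3
assert total_cost(7) == 49
print(is_feasible([7, -5, 3, 1]))  # 49 + 25 + 9 + 1 = 84 <= 100 -> True
```

Note that the 100-credit budget buys at most 10 votes on a single issue, while spreading credits across issues buys more total votes (e.g. 3 votes on each of 10 issues cost 90 credits for 30 votes in total), which is what pushes respondents to arbitrate.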

[Figure 1 here]

How does QVSR help improve the measurement of preference intensity? Compared to Likert +, where the respondent is simply asked the "How important" question directly, the mechanism at play in QVSR is more subtle. Instead of asking about preference intensity directly, the idea is to capture it indirectly by forcing some trade-off across issues. Under QVSR, the individual faces a budget constraint, which prevents her from opposing or favoring "a great deal" too many options. When the budget constraint is binding, an individual cannot report her policy preferences as she would have under standard Likert items. Due to the budget constraint, she will have to move away from her unconstrained responses on each issue. How is she going to allocate these costly votes across issues? We assume in our model that the reluctance to deviate from one's unconstrained answer on an issue is positively correlated with preference intensity. In that case, we formally show that, compared to standard Likert items, she will allocate relatively more points to issues over which she has intense preferences. Compared to standard Likert, answers are therefore still expected to be a mix of her preference orientation and her preference intensity, but the weight on the preference intensity parameter is expected to be larger. As a result, the covariance between survey answers and preference intensity should be higher for QVSR than for standard Likert items.
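This prediction can be illustrated with a small simulation. Everything below is our own stylization, not the paper's formal model: Likert-style answers depend on orientation alone and bunch at the scale ends, while a greedy spender of the quadratic budget directs votes toward issues with high intensity-weighted stakes, so the resulting answers co-move more with intensity.

```python
import random

random.seed(0)
K, BUDGET, MAX_VOTES, N = 10, 100, 10, 300

def greedy_votes(alpha, beta):
    """Spend the quadratic budget greedily: the next vote on issue k costs
    2*votes[k] + 1 credits and yields utility beta[k] * |alpha[k]|
    (our stylization of 'intensity-weighted stakes')."""
    votes, spent = [0] * K, 0
    while True:
        best, best_ratio = None, 0.0
        for k in range(K):
            cost = 2 * votes[k] + 1
            if votes[k] >= MAX_VOTES or spent + cost > BUDGET:
                continue
            ratio = beta[k] * abs(alpha[k]) / cost
            if ratio > best_ratio:
                best, best_ratio = k, ratio
        if best is None:
            break
        spent += 2 * votes[best] + 1
        votes[best] += 1
    return votes

def corr(x, y):
    """Pearson correlation, stdlib only."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

likert_strength, qvsr_strength, intensity = [], [], []
for _ in range(N):
    alpha = [random.uniform(-1, 1) for _ in range(K)]  # orientations
    beta = [random.random() for _ in range(K)]         # intensities
    # Likert: answers reflect orientation only and bunch at the end points.
    likert = [max(-3, min(3, round(4 * a))) for a in alpha]
    likert_strength += [abs(v) for v in likert]
    qvsr_strength += greedy_votes(alpha, beta)
    intensity += beta

# QVSR answers should co-move more strongly with intensity than Likert answers.
print(corr(qvsr_strength, intensity) > corr(likert_strength, intensity))  # True
```

In this stylization the Likert strength is uninformative about intensity by construction, so the gap is mechanical; the paper's empirical question is whether real respondents behave enough like the greedy spender for the gap to appear in data.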

What can be said about the comparison between QVSR and Likert +? Compared to the Likert + instrument, QVSR relies on only one question per issue instead of two, which might be seen as a positive or a negative depending on how much one values parsimony. Another difference is that QVSR compels individuals to explicitly consider the trade-off costs mentioned earlier. In Likert +, the comparison across issues is implicit, through the "How important" question: an individual answering this question on a given issue will likely use her answer to the previous "How important" question as a benchmark. Under QVSR, the procedure is different: respondents are forced to compare across issues. A previous study of how people engage with QVSR finds that respondents proceed by trial and error, moving back and forth across items to best distribute their votes and credits (Quarfoot et al. 2017).

Another possible advantage of QVSR over Likert + stems from the fact that respondents' answers to surveys are likely to be only partially sincere, a point we discuss at more length in the next subsection.

Non-sincere responses: Measuring policy preferences in a polarized environment

Another potential problem with Likert, as with any survey instrument, is that respondents’ answers

to surveys should be expected to be only partially sincere. In particular, strategic or partisan mo-

tives may induce respondents to inflate their opposition to or support for a reform, as well as the

importance of the issue. Consider the example of a Republican voter being asked whether she

favors or opposes building a wall between the US and Mexico, and whether this issue is important to her

personally. Assume further that, in reality, this respondent has some doubts about whether

this policy adequately addresses immigration, or thinks that this issue is only of secondary importance

compared to more pressing economic issues. Still, since this issue is high on the agenda of the

Republican party, in a highly polarized political environment, and out of loyalty to her preferred party,

she may feel psychological pressure to answer that she strongly supports this reform, and that this

issue is of the utmost importance. In the model, we formally study how respondents answer

surveys under different techniques (standard Likert, Likert +, QVSR) when they are cross-pressured

between different motives.

This problem of cross-pressured respondents and of the interaction between survey technology

and the political context has been somewhat overlooked by existing methodological studies. Yet,

we know that respondents’ answers are shaped by motives other than sincerity. For example, one

such motive is partisan cheerleading, which can lead strong partisans to purposefully misreport

their preferences out of a desire to behave as good partisans (Bullock et al. 2013).

In our model, we formally study how respondents answer surveys under different techniques,

when they have potentially conflicting motives. To do so, we build on a recent article by Cavaille,

Chen and Van Der Straeten (2019), which tackles this issue of non-sincere answers under standard

Likert items and QVSR when the relevant parameter the survey designer is ultimately interested in

capturing is the xik parameter (that is, whether a respondent opposes or favors a reform, moderately

or a great deal). We extend the model in Cavaille, Chen and Van Der Straeten (2019) by adding a


formal distinction between the preference orientation vs preference intensity, and by studying the

properties of Likert +.

Formally, we model respondents as being intrinsically motivated to report their true views and

preferences (the sincerity motive), but as simultaneously pursuing other objectives when answering

surveys. For example, they might also want to strategically influence policy, or to signal a

partisan identity (the signaling motive). We assume these conflicting motives to be

present whatever the survey technology. We show how, depending on the survey tool, answers will

reflect a different mix of the two motives.

Standard Likert items Under standard Likert items, we show that answers are typically a

weighted average between the sincere answer (the xik parameter) and some signaling target. In

particular, in a polarized environment where parties are quite extreme, we expect the signaling

motive to push respondents to systematically inflate their views, and to report stronger support or

opposition than what they actually feel. In the model, we denote by xLik the answers under standard

Likert, and our expectation in a polarized environment is that |xLik| > |xik|.

Indeed, in a polarized context, signaling partisan loyalty becomes especially important, making

it more likely that respondents will mechanically default to signaling intense opposition to (support

for) policies associated with the opposite (their own) party. As a result, responses will bunch at the

extremities of the response scale (strongly favor - very important/ strongly oppose - very important)

making it difficult to distinguish between 1) respondents who strongly favor (or oppose) a given

policy (i.e. a tax increase) and are ready to act on this preference and 2) respondents who, while

expressing the same position, do not “care” as much and are merely “paying lip service to the party

norm” (Zaller 2012) and expressing their dislike for the opposite party (Mason 2014). In other

words, if political polarization is high, voters polled using a Likert item will appear to strongly

disagree on key issues, when in practice, only a subset of voters “truly care.” When the latter group

is unequally distributed across parties (e.g. members of one party strongly support a policy while

members of the other only weakly oppose it), variants of Likert items can provide an inadequate

picture of public opinion, one that overlooks important asymmetries in preference intensity across

parties.

In a polarized two party-system where political elites provide coherent ideological bundles,

such bunching at the extremities of a scale occurs on more than one key issue: on the one hand, a

large group of respondents strongly favors a bundle of policies associated with their party, while,

on the other hand, members of the opposite party signal strong opposition to the same bundle. In

this case, variance mostly stems from between-party differences. By contrast, the information on

differences within each group is much more limited. Another consequence of this “bunching to the

extremes” on more than one issue is the absence of individual-level differences in survey answers

on important policy issues: respondents appear to care about many things, with little information

on how they might address the policy trade-offs constitutive of democratic politics.


Likert + Under Likert +, the logic is exactly the same as under standard Likert for the “op-

pose or favor a reform, moderately or a great deal” part of the question (answers are xLik). Now,

regarding the “How important?” part, a similar partisan motive may push individuals away from

reporting their ‘true’ βik (preference intensity). For example, if an issue is perceived as very

controversial and hotly debated, with one’s preferred party indicating that

this issue is of the highest priority, loyal partisans may have an incentive to inflate

the importance of the issue. Denoting by βLik the answers to the “How important?” question under

Likert +, one might expect that βLik > βik. In a sense, the “How important” question added under

Likert + suffers from the same problems of inflation and potential bunching as the standard

“oppose/favor” Likert items.

QVSR The distinctive feature of QVSR is that it introduces a budget constraint, which prevents

respondents from opposing or favoring “a great deal” too many options. As explained above, if the

budget constraint is binding, an individual cannot report her policy preferences as she would have

under Likert scales (the xLik). If the reluctance to deviate from one’s unconstrained answer

on one issue is positively correlated with the importance of this issue, then, compared to standard Likert

scales, she will allocate comparatively more points to issues over which she has intense preferences.

A first advantage of QVSR compared to standard Likert items is therefore that answers are

still expected to be a mix of the respondent’s preference orientation and preference intensity, but

the weight on the preference intensity parameter is expected to be larger.

A second advantage, which is more subtle, is that not only does QVSR better capture preference

intensity than standard Likert items, it also better measures xik, i.e. the “sincere” answer to the

standard Likert question. To understand why, remember that when a signaling motive is present,

answers under Likert only imperfectly capture xik, since answers under Likert (the xLik) are typically

a weighted average between the sincere answer (the xik parameter) and some signaling target. The

lower the strength of the sincerity motive compared to the signaling/partisan motive, the further

away the Likert answers will be from the sincere answers. Now, remember that, under QVSR, the

individual comparatively allocates more votes to issues over which she has intense preferences. If

one assumes that there is a negative correlation between preference intensity and the strength of

the signaling motive, the model then predicts that under QVSR, the individual will allocate more

points to issues on which the unconstrained answer (xLik) is closer to the sincere answer (xik), thus

better capturing the ’true’ xik parameter.

Last, we can qualify the predictions made earlier regarding the comparison between QVSR and

Likert+. As already explained above, the comparison between Likert + and QVSR regarding which

best measures policy preferences is ambiguous. When one explicitly takes into account the fact that

some respondents are insincere and may have the incentive to inflate the importance of issues that

are high on the agenda of their preferred party, the model reveals another potential advantage of

QVSR. Compared to Likert +, QVSR brings a “revealed preference” approach to the measurement


of preference intensity. Instead of asking explicitly and openly how important an issue is, QVSR

uses the fact that it forces trade-offs across issues to indirectly measure their importance. In a

highly polarized environment where partisan identities are very salient, answers to the Likert +

“How important” question might not be that informative, and the revealed preference approach of

QVSR (where individuals actually have to choose across issues rather than just report whether an

issue is important or not) might be superior. To conclude, the comparison between Likert + and

QVSR in terms of ability to recover meaningful information about preference intensity depends on

the behavioral importance of the sincerity motive. Which method performs best is an empirical

question.

Empirical Strategy

Based on the argument in the previous section, we expect both Likert + and QVSR to better measure

preference intensity than Likert. Assuming respondents are sincere, Likert + should, relative to

QVSR, better predict behavior and exposure. Yet, in the American context where partisan motives

compete with the sincerity motive, QVSR could ultimately outperform Likert +.

To examine empirically which tool best measures preference intensity, we first need to define

the latter’s main empirical manifestations. As a reminder, preference intensity first and

foremost affects individuals’ likelihood of behaving in ways that bring the status quo closer to their

preferred policy outcome. The relationship between preference intensity and behavior provides

us with our first identifying criterion (the behavior criterion). Beyond behavior, what other

observable features of the world can be tied to preference intensity? A common assumption, and

well-documented finding, in political economy is that individuals significantly affected by a policy

are more likely to mobilize around this policy (see for example Bouton et al. (2018)). We thus

posit that preferences over policies that “directly affect (a voter’s) rights, privileges, or lifestyle in

some concrete manner” (Howe and Krosnick 2017: 328) are more intense than preferences over

policies that have no direct personal implications. This relationship between personal exposure

and preference intensity is our second identifying criterion (the exposure criterion). From these

two identifying criteria, we derive the following empirical predictions:

Prediction 1 : Compared with answers to standard Likert items, answers in Likert + and QVSR

should better predict the likelihood of engaging in costly action regarding a policy issue. The

comparison between Likert+ and QVSR is ambiguous.

Prediction 2 : Compared with answers to standard Likert items, answers in Likert + and QVSR

should better predict the likelihood that one is directly affected by a policy issue. The comparison

between Likert + and QVSR is ambiguous.


To test these predictions, we rely on a large survey experiment composed of two waves of

data collection. Wave 1 randomly varied the survey tool used to measure policy preferences. In

addition, we collected personal information on the likelihood of benefiting from a subset of the

policies respondents are being surveyed about (P2). Most importantly, respondents were asked to

perform tasks with “real world” consequences (P1). In wave 2, we recontacted a subset of individuals (N = 2,400).9

Respondents were asked to express their policy preferences using the same survey tool they were

assigned to in wave 1. They also performed additional real world tasks (P1). Below we list the

survey questions asked:

Do you Favor or Oppose:

– Giving same sex couples the legal right to adopt a child

– Laws making it more difficult for people to buy a gun

– Building a wall on the US Border with Mexico

– Requiring employers to offer paid leave to parents of new children

– Preferential hiring and promotion of blacks to address past discrimination

– Requiring employers to pay women and men the same amount for the same work

– Raising the minimum wage to $15 an hour over the next 6 years

– A nationwide ban on abortion with only very limited exceptions

– A spending cap that prevents the federal government from spending more than it takes in.

– The government regulating business to protect the environment

The survey was administered a few weeks before the 2018 midterm elections to a general pop-

ulation of English-speaking US citizens over the age of 18 who reside in the United States. The

survey company, GfK Knowledge Networks, uses a probability-based web panel designed to be rep-

resentative of the population of the United States. To allocate respondents across the treatment

conditions, we used a randomized-block design. We first formed 27 blocks on the basis of par-

tisan identity (Republican, Independent and Democrat), subjective ideology (Liberal, Middle of

the road, Conservative) and vote in 2016 (Hillary/other, Trump, did not vote/too young to vote).

These variables are important predictors of individuals’ policy positions on politicized issues such

as immigration, gay rights or budget deficits, as well as predictors of partisan identity and parti-

san strength. Within each block, we implemented a complete randomization. Table 2 provides a

general overview of sample sizes. In total, about one in 10 individuals who were assigned to take

QVSR did not complete the survey. Most of the dropout occurred when respondents were asked to

watch a video explaining how QVSR works. We found no evidence of differential attrition tied to

the specific exercise of answering survey questions using QVSR (see Appendix A.?? for more detailed

information on response rates, attrition and balance).
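As an illustration, the blocked assignment described above can be sketched as follows. This is a minimal, hypothetical Python sketch (function and variable names are ours, not the authors' code): respondents are grouped into the 27 blocks defined by partisan identity, ideology and 2016 vote, and each block is then split as evenly as possible across the three survey tools.

```python
import random

# Hypothetical sketch of the randomized-block design: 3 x 3 x 3 = 27 blocks,
# complete randomization across the three survey tools within each block.
TOOLS = ["Likert", "Likert+", "QVSR"]

def assign_tools(respondents, seed=0):
    """respondents: list of dicts with 'party', 'ideology' and 'vote' keys.
    Returns a list of tool assignments, balanced within each block."""
    rng = random.Random(seed)
    blocks = {}
    for i, r in enumerate(respondents):
        blocks.setdefault((r["party"], r["ideology"], r["vote"]), []).append(i)
    assignment = [None] * len(respondents)
    for members in blocks.values():
        rng.shuffle(members)
        for j, i in enumerate(members):
            # cycle through the tools so each block is split as evenly as possible
            assignment[i] = TOOLS[j % len(TOOLS)]
    return assignment
```

With nine respondents in a single block, each tool receives exactly three of them.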

9 We recontacted respondents who did not receive the partisan prime in wave 1.


[Table 2 here]

Survey Tools Attitudes are measured using one of three possible survey tools. A third of survey

respondents was asked to express their opinion on the 10 policy issues using Likert items. The

order of the 10 Likert items was randomized. Following the best practice identified by Malhotra,

Krosnick and Thomas (2009) and commonly used in reputable surveys such as the ANES, we used

a sequence of two branching questions. After an initial three-option question assessing direction

(i.e. favor, oppose, neither), respondents who selected favor or oppose were further prompted to

express the extent to which they favor or oppose the policy (a little, moderately, a great deal).

Respondents who initially selected “neither” were not asked a follow-up question. Answers are then

recoded to construct a variable centered around 0 (i.e. ranging from -3 to 3).
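The branching recode just described can be sketched as follows (an illustrative Python sketch based on the description above; the function name is ours):

```python
# Combine the two branching questions into a single score centered at 0,
# ranging from -3 to 3, as described in the text.
def likert_score(direction, strength=None):
    """direction: 'favor', 'oppose' or 'neither' (first branching question);
    strength: 'a little' (1), 'moderately' (2) or 'a great deal' (3),
    asked only when direction is not 'neither'."""
    if direction == "neither":
        return 0
    magnitude = {"a little": 1, "moderately": 2, "a great deal": 3}[strength]
    return magnitude if direction == "favor" else -magnitude
```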

Another third of respondents was also asked to answer using the same Likert item and, after

having done so, immediately prompted to indicate how important the issue was to them personally

(Not important at all (1) - Extremely important (5)). Answers to these questions are combined

to generate a scale conveying information on preference orientation and intensity. We examined

several ways of combining these two pieces of information into one single measure directly

comparable with Likert and QVSR. Ultimately, we chose to disregard the information measured in

the second Likert branching question. To build the Likert + scale, we multiply values of the first

branching question (favor = 1, oppose = -1, neither = 0) by values of the issue importance scale,

resulting in a variable ranging from -5 to 5.10
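The Likert + scale construction can be sketched as follows (a minimal Python sketch of the multiplication rule described above; the function name is ours):

```python
# Likert + scale: direction of the first branching question times the
# 1-5 issue importance rating, yielding a variable from -5 to 5.
def likert_plus_score(direction, importance):
    """direction: 'favor' (1), 'oppose' (-1) or 'neither' (0);
    importance: 1 (not important at all) to 5 (extremely important)."""
    sign = {"favor": 1, "oppose": -1, "neither": 0}[direction]
    if not 1 <= importance <= 5:
        raise ValueError("importance must be on the 1-5 scale")
    return sign * importance
```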

The last third of the sample is assigned to the QVSR method. Respondents were first required to

watch a 110-second video explaining how to use the new tool. This video is the main reason

for differential attrition rates across methodologies. To make sure respondents all have sound on

their device, we implemented a screening question at the beginning of the survey which asked all

respondents to identify a number being mentioned in an audio recording. After watching the video

(there is no option to skip it), respondents were asked to “vote” on the 10 policy issues (order on

the QVSR screen is randomized). They were given 100 credits, which they could allocate across

issues according to a quadratic cost per issue scheme (see Table 1). They could allocate between 0

and 10 votes per issue. Note that giving 10 votes to one issue ’costs’ 100 credits, that is, the entirety

of the budget. QVSR answers are recorded as 0 if the respondent did not vote on an

issue. If the individual votes in favor (against), votes are recorded as a positive (negative)

value. In theory, the QVSR score thus ranges from -10 to 10. In practice, most respondents seek to

vote on at least a few issues, meaning that the absolute range is below 10; only a few

individuals answered 8 (64 credits) or 9 (81 credits) on a given issue. We pooled these answers

10 We also considered a second solution, which does not discard the information provided by the second branching question. More specifically, using a regression framework, we use both items separately. See the ‘Estimation Strategy’ section for a discussion of this alternative approach.


with the -7 or 7 response categories, producing a variable ranging from -7 to 7.
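The budget arithmetic behind QVSR can be sketched as follows (an illustrative Python sketch of the quadratic pricing described above; function names are ours, not part of any QVSR implementation):

```python
# Quadratic price: casting v votes on an issue costs v**2 credits,
# and total spending cannot exceed the 100-credit budget.
def qvsr_cost(votes):
    """votes: signed vote counts per issue (positive = in favor)."""
    return sum(v * v for v in votes)

def is_feasible(votes, budget=100, max_votes=10):
    """An allocation is valid if each issue gets at most 10 votes in
    absolute value and total cost stays within the budget."""
    return all(abs(v) <= max_votes for v in votes) and qvsr_cost(votes) <= budget
```

Spending all 100 credits on one issue buys exactly 10 votes, while spreading the budget buys more total votes: four issues at 5 votes each also cost 100 credits.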

Behavior Outcomes (Prediction 1) After having expressed their policy preferences using one of

the three survey tools described above, respondents in both waves are asked to perform a task with

“real world” consequences. The tasks vary in terms of cost and structure. The “cheapest” behavioral

task was recorded in wave 2. Respondents were given the option to write a short text about one

of two policy proposals being currently discussed in Congress, one on restricting abortion rights

and the other one on increasing the minimum wage. The texts were then integrated into a letter,

which was ultimately sent to the Senate committee in charge of reviewing this policy proposal.

Contributions were anonymous. Section A in the Appendix presents screenshots of the letter task.

The behavioral task in wave 1 was comparatively more costly. Respondents were asked whether

or not they wanted to donate part of their potential lottery money ($100) to one of four non-profit

advocacy groups working on two issue areas: immigration and gun control. For each issue area,

we chose non-profits that fall on different sides of the political divide: for and against immigration,

for and against gun control. Respondents could choose not to donate or to donate to one of the four

advocacy groups; they could donate to one non-profit only, and whatever they did not donate, they

could keep. The lottery was implemented two weeks after the end of the survey.

Section B in the Appendix presents screenshots of the donation task.

A third task, recorded in wave 2, involved a dictator game where voters were faced with a clear

trade-off: rewarding someone (an Independent) who agrees with them on one issue but disagrees with

them on another. We first asked respondents how they would behave in three dictator games

involving a Republican, a Democrat and an Independent respectively. The order was randomized. We

then returned to the Independent and informed respondents of this person’s donations. We asked

respondents if they wanted to change the amount they had previously decided to donate to this

individual. The independent’s preferences were conveyed using information on donation patterns

in wave 1. More specifically, we presented respondents with the option of donating lottery funds

to an Independent who donated to the anti-gun control organization and to the pro-immigration

organization.11 The outcome of interest is the decision by respondents to punish or reward the

Independent by decreasing or increasing the amount given in the dictator game. Respondents had the

option to donate anywhere between $0 and $100. Note that for most Democrats, the task involved

a clear trade-off: punishing the Independent for donating to a pro-gun organization meant also

punishing her for donating to a pro-immigration organization. Section C in the Appendix presents

screenshots of the dictator game task.

Our fourth outcome is turnout. This information was collected independently by GfK either

right after the 2016 election or, for panel members who joined after 2016, at the time of recruit-

11 In practice, respondents could only donate to one non-profit. After implementing the lottery, we picked an Independent to receive the funds based on her survey answers, not her donation behavior.


ment.

All four outcomes (wrote about a given bill, donated, punished and turned out) are coded such

that Y = 1 when the outcome is observed. Outcomes were designed to echo real world trade-offs

where acting on one set of preferences might mean forgoing acting on another set (e.g. writing

about the minimum wage means one cannot write about abortion).

Exposure Outcomes (Prediction 2) At the beginning and at the end of the survey, we asked

individuals a set of questions about themselves on features that capture the likelihood that one will

be affected by a given policy. We also purchased additional variables from GfK. Table 3 lists these

variables alongside the relevant policy issues.

[Table 3 here]

Estimation Strategy

We examine how well answers collected using a given survey tool predict two types of outcome

variables, one for each prediction: namely individual behavior (Ybev) for Prediction 1, and

individuals’ features correlated with exposure (Yexpo) for Prediction 2. Thanks to randomized blocking, the

distribution of Ybev and Yexpo is the same across the three treatment conditions (see Appendix A.??).

Note that each respondent was assigned to one survey tool only.12

A first step is to transform survey answers into a variable comparable across survey tools. We

use answers collected using a given survey tool to generate individual scores (X). These scores are

of two types. In one case, we use the ‘raw’ survey answers (Xraw) as individual scores. In another,

we use the predicted probabilities (Xpred) derived from regressing Y on the survey answers. We

follow common practice in political science and model the relationship between Ybev and the survey

answers as linear and additive, using a logit link function adapted to binary outcomes. To

make these scores comparable across survey methodology, we normalize them. In the case of Xraw,

0 is equal to the lowest response category provided by a given survey tool and 1 is equal to highest

response category. In the case of Xpred , 0 is equal to the lowest predicted probability derived from

the logit regression, and 1 the highest predicted probability.
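The two normalizations can be sketched as follows (an illustrative Python sketch based on the description above; function names are ours). Raw answers are rescaled using the tool's own scale endpoints, while predicted probabilities are rescaled over their observed range.

```python
# Normalize raw answers using the tool's scale endpoints, e.g. (-3, 3)
# for Likert, (-5, 5) for Likert +, (-7, 7) for QVSR.
def normalize_raw(x, scale_min, scale_max):
    return (x - scale_min) / (scale_max - scale_min)

# Rescale predicted probabilities from the logit so the lowest maps to 0
# and the highest to 1, as described for Xpred.
def normalize_pred(probs):
    lo, hi = min(probs), max(probs)
    return [(p - lo) / (hi - lo) for p in probs]
```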

Whether we use Xraw or Xpred depends on Y. As previously mentioned, most of the behavioral

tasks were designed to mimic real world situations where acting in line with preferences on issue area A

means forgoing the option to act in line with issue preferences on issue area B. For example, in the

12 An alternative research design would have been to ask individuals their answers using all three survey tools, randomly varying which tool gets asked first. Concerned about contamination effects across tools, we ultimately decided against it.


donation task respondents could pick between two advocacy groups, gun rights and immigration,

but could only donate to one. Given the structure of this task, answers to the item on gun control, as

well as answers to the item on the border wall, should both be relevant to predicting a respondent’s

behavior on this task. For this type of outcome we summarize the information available in both

survey items using Xpred. In contrast, for exposure outcomes such as gun ownership or sexual

orientation, we focus on the one item asking about a policy directly relevant to this

group, in this case gun control and same sex couples’ right to adopt. For this type of outcome, we

use Xraw. In the case of turnout, we use answers to all survey items. In this case, we use a different

approach described in detail in the relevant section.

Note that irrespective of Y, Likert + always involves more than one survey item (Likert item

followed by the issue importance item). For Likert +, we use two techniques: either we combine

both items into a single scale as described above, or we use a regression framework and enter both

items separately. This means that in the case where Y is shaped by preferences on two issues, the

regression includes 4 variables in total, two standard Likert items and two issue importance items.

Results are not affected by the approach used. Below, we only present results

using the combined scale ranging from -5 to 5. Results using the alternative empirical strategy are

available in Appendix ??.

We start by examining the relationship between X and P(Y = 1|X). The better tool is the one

for which the relative frequency of a given outcome increases monotonically with X, i.e. xj > xi

implies P(Y = 1|X = xj) ≥ P(Y = 1|X = xi) for all possible values of X. We call this dimension of

prediction discrimination.

Note that relative to Likert, Likert + and QVSR have more response categories. Indeed, compared

to XLraw, which ranges from -3 to 3, XL+raw ranges from -5 to 5 and XQVSRraw ranges from -7 to 7.

If the issue were simply that Likert does not provide enough response categories, then it would be enough

to increase the number of response categories available in Likert. Yet, as shown by Revilla, Saris

and Krosnick (2014), the quality of the data collected deteriorates as the number of categories in-

creases. In other words, the main feature of a survey tool is less the number of response categories

it offers and more the technique it relies on to populate each response category. By comparing the

extent to which xj > xi implies P(Y = 1|X = xj) ≥ P(Y = 1|X = xi) for all possible values of X, we

are examining whether a given tool outperforms others when it comes to distributing individuals

across a set of response categories in such a way that individuals for whom Y = 1 are more likely

to be assigned to higher values of X relative to individuals for whom Y = 0.

To examine the shape of the relationship between P(Y = 1|X = x) and X, we bin the data and

overlay it with a lowess curve. Figure 2 conveys our expectations when only considering the

normalized survey answers (Xraw). If the minimum (0) and maximum (1) values for a given tool better

identify the individuals least and most likely to engage in a given behavior, then the curve relating

the average Y across bins to X should be steeper than for tools that perform less well on discrimination.
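The binning step behind these plots can be sketched as follows (an illustrative Python sketch; the function name is ours): normalized scores in [0, 1] are grouped into equal-width bins, and the within-bin mean of the binary outcome approximates P(Y = 1|X = x).

```python
# Bin normalized scores and compute the average of the binary outcome Y
# within each bin -- the quantity plotted against the lowess curve.
def binned_means(x, y, n_bins=10):
    """x: normalized scores in [0, 1]; y: binary outcomes (0/1).
    Returns (bin midpoint, mean of y) pairs for non-empty bins."""
    sums, counts = [0.0] * n_bins, [0] * n_bins
    for xi, yi in zip(x, y):
        b = min(int(xi * n_bins), n_bins - 1)  # x = 1.0 falls in the top bin
        sums[b] += yi
        counts[b] += 1
    return [((b + 0.5) / n_bins, sums[b] / counts[b])
            for b in range(n_bins) if counts[b] > 0]
```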


[Figure 2 here]

To compare the fit between the predicted probabilities Xpred and P(Y = 1|X = x), we also bin the data

and overlay a lowess curve. Note that there are two sources of error. One is the number of distinct predicted

probabilities: the larger this number, the fewer observations in each bin, which increases the amount

of measurement error for P(Y = 1|X = x). The other is the tool’s ability to discriminate, i.e. to assign

the right score to the right individual such that xj > xi implies P(Y = 1|X = xj) ≥ P(Y = 1|X = xi).

Figure 3 plots three different scenarios. If the noise is too large, meaning that a given survey tool

splits the data in random or uninformative ways, then the lowess curve should be non-monotonic

(red curve). Given two monotonic curves, the one indicating a stronger relationship (the green curve

relative to the blue one) is indicative of better discrimination.

[Figure 3 here]

The main benefit of this binning approach is transparency: binned scatter plots are straightfor-

ward to understand and require limited data transformation. The latter is especially valuable given

that QVSR is a new method whose statistical properties are still being investigated. Most

importantly, answers in QVSR, unlike answers in Likert and Likert +, are a linear combination of each

other, a feature that complicates reliance on simple statistics such as correlations or the slopes and

intercepts in a linear regression. When the visual analysis does not efficiently convey the relative

ranking of each tool (i.e. from most to least predictive of Y), we examine the correlation between

the bin averages, P(Y = 1|X = x), and the predicted probabilities X.

We complement this analysis of discrimination by an analysis of classification using, in all cases

but one, logistic regression. Specifically, we examine how well a logit model using survey answers

generated by a given tool correctly classifies observations as either Y = 1 or Y = 0. Note that such

an exercise requires a model for transforming survey answers into a binary prediction. All models rely

on assumptions that can favor one survey instrument over the other. For consistency, we rely on the

same model as previously mentioned (logit regression assuming a linear additive combination of

the key predictors). We use a well-known goodness of fit measure for binary outcomes in a logistic

regression model, namely the area under the ROC curve (or c-statistic). A value of 1 indicates perfect

classification. With binary outcomes modeled using a logit, better discrimination, while it should

not result in worse classification, need not always translate into better classification. Assuming the

same classification, the tool with better discrimination will always be preferred.
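The c-statistic has a simple pairwise interpretation: it is the probability that a randomly drawn Y = 1 observation receives a higher score than a randomly drawn Y = 0 observation. A minimal numpy sketch of that definition (an illustration; standard statistical software would normally be used):

```python
import numpy as np

def c_statistic(scores, y):
    """Area under the ROC curve: the share of (positive, negative) pairs
    in which the positive case gets the higher score, ties counted 1/2."""
    pos = scores[y == 1]
    neg = scores[y == 0]
    diff = pos[:, None] - neg[None, :]   # all pairwise score comparisons
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

# Perfect separation yields 1.0; an uninformative score yields about 0.5.
```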

Results

We start with basic facts on the types of survey responses generated by QVSR and Likert +, relative

to Likert. We then turn to prediction 1 (behavior criterion). Given the structure of the behavioral


tasks, we mostly rely on X_pred. Evidence using X_raw (i.e. normalized survey responses) is occasionally

included for illustrative purposes. We then turn to prediction 2, which, by construction, only includes

tests using X_raw.

A First Look

Figure 4 provides a first look at the distribution of votes in all three survey tools, taking the example

of the wall on the US border with Mexico. In Likert, many respondents choose an extreme answer

(60 percent of Democrats oppose building the wall a great deal, whereas roughly 45 percent

of Republicans favor it a great deal). By contrast, in QVSR, answers appear to approximate

a normal distribution. Note that Likert + also appears to ‘de-bunch’ the two extreme response

categories in Likert.

[Figure 4 here]

This de-bunching observed in Likert+ and QVSR generates new variance within a given issue

area as well as across issue areas. For example, as Figure 5 shows, with Likert, Democrats with

a BA appear to both strongly oppose the border wall and strongly support the minimum wage.

Under QVSR, when forced to choose, this group overall puts more votes on the border wall issue

than on the minimum wage one. Likert + also provides some information regarding relative

differences across issues. In the next sections, we examine whether the de-bunching in QVSR is more

informative of preference intensity than in Likert +.

[Figure 5 here]

What are the additional costs of running a survey using QVSR? Asking 10 survey items in QVSR

takes about 25 to 30% more time than answering them using Likert items. This is due to the

explanatory video respondents have to watch. Assuming they already know how to use QVSR, the time

needed is actually shorter in QVSR than in Likert. Video included, it takes the same time to answer

10 questions in QVSR as it does to answer a total of 20 questions in Likert + (10

standard Likert items and 10 issue importance items).

Behavior Criterion (1): Predicting Who Writes a Short Text about a Bill

Respondents were given the option to write a short text addressed to members of the relevant

Senate committee on one of two bills: one bill seeks to raise the minimum wage and the other

bill seeks to impose restrictions on abortion. We examine two outcome variables. One variable,

Y_minW, is equal to 1 if respondents chose to write about the minimum wage bill, and equal to zero


otherwise. The other variable, Y_abortion, is equal to 1 when respondents chose to write about the

abortion restriction bill. Note that in each case, the 0 category includes both individuals who did

not write anything for either bill and respondents who wrote about the second bill. In total, 75% of

respondents wrote about one of the two bills and only 25% abstained from writing. Among those

who chose to write a short text, 48% wrote about abortion and 52% wrote about the minimum

wage. Below, we present results for Y_minW. Results for Y_abortion are available in Appendix A.??.

Figure 6 provides a first overview of patterns in the raw data. It bins the data based on the

answers provided on the minimum wage item (X_raw^minW) and, for each bin, plots the percentage of

respondents who wrote about the minimum wage bill (P(Y_minW = 1 | X_raw^minW = x)). Because we do not

know what people wrote, merely that they wrote about the bill, we only use the absolute values of

the response scales, namely the number of ‘votes’ cast for each survey tool, which range from 0 to

7 in QVSR, 0 to 3 in Likert and 0 to 5 in Likert +. As the number of votes increases, so does the

share of individuals who write about the bill. At first glance, the increase appears steeper in QVSR,

with an important caveat: few respondents cast 6 or more votes, meaning that QVSR gives only

extreme scores to a few respondents.

[Figure 6 here]

Next, we predict the decision to write about a given bill using the absolute values of the response

scales for both the minimum wage and abortion items (X_pred,write^minW,abort). We run a logistic regression

and assume a linear additive relationship between predictors and the outcome. We reproduce the

same analysis as in Figure 6, using normalized predicted probabilities as bins. As explained above,

we overlay a lowess curve to examine the relationship between the binned scatter points and the

predicted probabilities. If QVSR splits the data in random or uninformative ways, then the lowess

curve should not be consistently increasing in X. Yet, as shown in Figure 7 (top panel), higher

predicted values are associated with the same or a larger share of individuals who write about the

bill. QVSR’s comparative advantage appears to be in the tails. With regards to classification, we

find no evidence that QVSR performs better or worse than Likert and Likert +: the area under the

ROC curve is the same across the three survey instruments.

[Figure 7 here]
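The logit with a linear additive index used throughout can be made concrete with a self-contained sketch. This hand-rolled Newton-Raphson fit is only illustrative (the paper would use standard statistical software), and it assumes well-behaved, non-separable data:

```python
import numpy as np

def logit_probs(X, y, iters=25):
    """Fit a logistic regression of y on X (intercept added) by
    Newton-Raphson and return in-sample predicted probabilities,
    i.e. the X_pred scores used for binning."""
    X1 = np.column_stack([np.ones(len(y)), X])
    b = np.zeros(X1.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X1 @ b))
        W = p * (1.0 - p)                        # IRLS weights
        b += np.linalg.solve((X1.T * W) @ X1, X1.T @ (y - p))
    return 1.0 / (1.0 + np.exp(-X1 @ b))
```

Binning these predicted probabilities and plotting the bin-level share of Y = 1 reproduces the analysis behind Figure 7.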

As shown in Figure 8, with Likert, there is very little variance to leverage to predict differences

in the behavior of Democrats. In contrast, patterns of survey answers in both QVSR and Likert +

convey more information. Unlike Democrats, Republicans have diverse opinions on the minimum

wage and abortion, even when these are measured using Likert items (see Appendix ??). Based

on this assessment, we reproduced the analysis focusing only on Democrats. Results are plotted

in the bottom panel of Figure 7. Predicted probabilities generated using Likert items have lim-

ited variance: a large group of respondents is assigned the same value. In addition, the share of


individuals who write about the minimum wage in this category is equal to the baseline rate in

the full sample, underscoring Likert’s inability to discriminate across observations. As a result, there

is no systematic relationship between predicted probabilities and Y beyond a higher propensity to

find people who wrote about the bill in the two extremities of the Likert scale. Relative to Likert

+, QVSR also represents a significant improvement in terms of discrimination. For classification,

the difference in the c-statistic, while large when comparing QVSR and Likert (0.67 versus 0.58), is

smaller when comparing QVSR and Likert + (0.67 versus 0.63). The same analysis for Republicans

returns a pattern similar to that for the full sample: QVSR does better on discrimination but not

classification (see Appendix ??).

[Figure 8 here]

Assuming preference intensity predicts who will write about a given bill, we examined which

tool best predicts this behavior. Evidence shows that QVSR outperforms Likert on both discrimina-

tion and classification. This is especially true when zooming in on Democrats. Evidence that QVSR

outperforms Likert + is mostly limited to the sub-sample of Democrats. For this group, we find

better discrimination for QVSR than Likert + and a marginal improvement in terms of classifica-

tion. We now examine whether this pattern holds when moving to more costly tasks, i.e. tasks that

either involve lottery money or real world behavior such as turnout.

Behavior Criterion (2): Donation to an Advocacy Group

Respondents were informed that they had been automatically entered into a lottery with a prize of

$100 for 40 randomly selected respondents among 4000 respondents. They were then prompted to

imagine that they were among the winners and asked whether they wanted to donate some of their

lottery money to an advocacy group. They were given a choice among four advocacy groups: one

pro-gun control, one anti-gun control, one pro-immigration, and one anti-immigration. We examine

four outcomes, one for each advocacy group. Individuals who donated to the specified organization

are coded as 1. All individuals who did not donate or who donated to another organization are

coded as 0. Overall, 60% of respondents donated. Among those who donated, 46% donated to

the advocacy group that lobbies for gun control (Y_donG) and 14% donated to the organization that

lobbies against gun control. With regards to immigration, 21% donated to the pro-immigration

group and 18% to the anti-immigration group. Below, we focus on Y_donG; results for the other

outcomes are available in Appendix A.??.

Figure 9 bins the data based on the answers provided on the gun control survey item (X_raw^gunC).

Individuals who oppose gun control do not donate to this advocacy group. This is true for all

three survey tools. Instead, the action is among people who express support for gun control.

In Likert, most of the variance stems from differences between the “Strongly support” (3) and

the “Neither/nor” (0) response categories. In Likert +, clear differences in donation decisions


emerge when comparing individuals who responded “Neither/nor” (0), “Somewhat important” (3),

“Very important” (4) and “Extremely important” (5). Few respondents chose the 1 and 2 response

categories, and overall, these response categories appear mostly uninformative. In QVSR, the share

of individuals who donate increases steadily with the number of votes cast.

[Figure 9 here]

Next, we predict the decision to donate using responses on both the gun control and border

wall survey items (X_pred,donG^gunC,wall). In line with the previous analysis, we first focus on the overall

sample and then zoom in on Democrats. Figure 10 plots the results. Likert returns a very noisy

pattern, with evidence of a non-monotonic relationship when focusing on Democrats, implying

poor discrimination. For this task, Likert + and QVSR appear roughly similar in their discrimination

abilities. The c-statistics also reveal similar classification performance.

[Figure 10 here]

Based on the letter and donation tasks, we find evidence that QVSR outperforms Likert on

discrimination and classification. Evidence that QVSR better predicts behavior than Likert + is

more mixed. We confirm the expectation that Likert + and QVSR both outperform Likert, but the

jury is still out regarding QVSR’s ability to outperform Likert +.

Behavior Criterion (3): Punishing an Individual for Holding Contrasting Policy Preferences

We now turn to the dictator game in which voters are first asked what they would do if they were

given the option to share some of their lottery money with another respondent. In the first round,

we asked about three respondents: an Independent, a Democrat and a Republican. In the second

round, we gave them the option to change the amount they chose to share with the Independent,

knowing that she opposes gun control and is favorable to immigrants. This combination of attitudes

is very rare in our sample (5%) meaning that most respondents are cross-pressured and thus face a

trade-off: rewarding someone with similar preferences on one issue means also rewarding someone

who has opposite preferences on a second issue. This task is thus costly both materially (forgone

lottery money) and symbolically (support one cause at the expense of another).

A respondent could either reward, punish or do nothing. In total, 39% of respondents did not

share in either the first or the second round. This group is a mix of individuals who either never

share whatever the partisanship of the receiver or never share with Independents in particular (but

do share with a Democrat or a Republican). Given the heterogeneity in the group of respondents

who never donate, and because they have no bandwidth to further punish the Independent, we


drop this group and only focus on the individuals who shared with the Independent in at least one

round. Within this group, 38% punished the Independent (gave less in the second round than in

the first). Less than 2% of respondents rewarded the Independent,13 and 60% did not change the

amount they originally donated. We code the respondents who punish as equal to 1 and the others

as equal to 0.14

As shown in Appendix A.??, raw answers to the gun control and border wall items alone are

uninformative for all three tools. In the paper, we focus on predicted probabilities. We predict the

decision to punish using responses on both the gun control and border wall survey items. Figure 11

plots the results. In Likert and Likert +, the relation between predicted probabilities (X_pred,punish^gunC,wall)

and Y (P(Y_punish = 1 | X_pred,punish^gunC,wall = x)) is non-monotonic, implying poor discrimination (top

panel). The bottom panel represents the same analysis allowing for larger bins (in 0.1 increments

instead of 0.01 increments on a 0 to 1 scale). Larger bins decrease measurement error to the

benefit of Likert + and QVSR, but not Likert. Overall, QVSR outperforms Likert and Likert + on

both discrimination and classification. With a c-statistic higher for QVSR (0.64) than for Likert and

Likert + (0.60), we confirm that, on this task, QVSR better predicts who punishes the independent

voter in the dictator game.

[Figure 11 here]

Behavior Criterion (4): Turnout

We now turn to a real world behavior, namely the decision to turn out to vote.15 We assume that

most of the costs of voting lie in the act of turning out. If preference strength is the willingness

to incur a given cost, and if QVSR better measures preference strength, then QVSR should better

predict who turns out. We examine how answers to all 10 survey items predict turnout.

We dichotomize all the survey answers and run a model with and without interactions between

all dummies. We use penalized regression (elastic net) and k-fold cross-validation to select the “best”

model and generate predicted values. We present results from the models with interactions between

all response dummies. The results without interactions are available in Appendix A.??.
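The dummy-plus-interactions design can be sketched as follows. The dichotomization threshold and the restriction to pairwise interactions are assumptions for illustration; the elastic net fit and k-fold cross-validation steps are omitted:

```python
import numpy as np
from itertools import combinations

def interaction_design(answers, threshold=1):
    """Dichotomize each survey item at `threshold`, then append all
    pairwise interactions between the resulting dummies; the matrix is
    then passed to a penalized (elastic net) regression."""
    D = (answers >= threshold).astype(float)                  # main effects
    pairs = [D[:, i] * D[:, j]
             for i, j in combinations(range(D.shape[1]), 2)]
    return np.column_stack([D] + pairs)

# 10 items -> 10 dummies + C(10, 2) = 45 interactions = 55 columns.
```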

In line with previous analyses, we first plot average turnout rate against the normalized pre-

dicted values. The results are plotted in Figure 12. Discrimination for Likert + appears to be much

13 This group includes individuals who shared their lottery money with the Independent in the first round and increased the amount in the second round (1%) and individuals who, having shared nothing in the first round, did so in the second round (1%).

14 Because of the smaller sample size, we are not able to focus our analysis on Democrats or Republicans only.

15 In our sample, 76% of individuals reported having voted in the 2016 elections. This is much higher than the 56% recorded the day of the election. We thus plan to follow up using a different measure of turnout that relies on administrative data.


worse than for Likert and QVSR. The analysis using Likert items returns 264 different predicted

values. The number for QVSR and Likert + is 335 and 126 respectively. This large number of bins

allows us to complement Figure 12 by examining the correlation between the normalized predic-

tive value associated with each bin and the share of individuals who declared turnout out in each

bin. The correlation for Likert + is 0.2. It increases to 0.49 for Likert. In the case of QVSR, it is

equal to 0.67 indicating much better discrimination for this tool.

With this analysis, we cannot use the c-statistic. Indeed, absent a logit link function, we cannot

interpret our scores as predicted probabilities. Instead, we use Figure 12 to identify a cutoff value

c for each survey tool such that P(Y = 1 | X = c) ≥ 0.5. We then generate a new variable which

assigns respondents a value of 1 if X_pred > c and a value of 0 otherwise. We then compare this

variable to actual turnout. Following this procedure, Likert + correctly classifies 82% of respondents,

Likert 83% and QVSR 87%. QVSR is especially apt at correctly identifying individuals who did not

turn out: 17% are correctly classified in Likert +, 27% in Likert and 47% in QVSR.16

[Figure 12 here]
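The cutoff-based classification step amounts to thresholding the normalized score. A hypothetical sketch, where `cutoff` stands in for the value c read off Figure 12:

```python
import numpy as np

def cutoff_classification(scores, y, cutoff):
    """Classify as 1 when the normalized score exceeds `cutoff`, then
    report overall accuracy and accuracy among actual non-voters, the
    two quantities compared across tools in the text."""
    pred = (scores > cutoff).astype(int)
    overall = float((pred == y).mean())
    among_nonvoters = float((pred[y == 0] == 0).mean())
    return overall, among_nonvoters
```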

Using four different behavioral outcomes, we find evidence that QVSR consistently outperforms

Likert and Likert + on discrimination. Differences in terms of classification vary across outcomes

but overall, QVSR performs either similarly or better on the classification dimension of prediction.

Based on this evidence, we find strong support that QVSR better measures preference strength than

Likert, and suggestive evidence that, at least in the US context, QVSR more consistently measures

preference intensity than Likert +.

Exposure Criterion (Prediction 2)

In this part of the analysis, we focus on raw survey answers (Xraw). Figures 13 through 16 plot

the share of individuals exhibiting a given feature by survey response category (normalized) and

by survey technology. The more discriminatory tool should have a steeper slope (see Figure 2).

Overall, evidence indicates that, with regards to discrimination, QVSR outperforms both Likert and

Likert +.

[Figures 13 to 16 here]

To present the results more succinctly for all individual covariates, we also examine slope dif-

ferences across survey tools within a regression framework. More specifically, we regress outcome

16 This comes at the cost of a 1% increase in false negatives, i.e. people who reported turning out but are incorrectly classified as having stayed home. Given that this group represents a much larger share of respondents, the overall improvement on classification is less dramatic.


Y_expo on normalized survey answers. We interact the latter with two dummies identifying the

survey methods used. Standard errors are clustered by treatment conditions, and we also include

randomized-blocks as covariates. Given the budget constraint, answers in QVSR are a linear com-

bination of each other meaning that the error terms across equations are correlated. We conse-

quently examine all issues jointly and estimate a seemingly unrelated regressions model (Zellner

1962). Table 4 provides an overview of differences in the regression slope for each method. Out of

8 different tests, QVSR outperforms the other two tools half of the time. On three outcomes, both

QVSR and Likert + outperform Likert, but do not outperform each other. For one outcome, i.e.

being in a same sex relationship and support for adoption by same sex couples, none of the tools

are predictive.
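For the slope comparison itself, estimating the fully interacted model is numerically equivalent, for the slopes, to running a separate regression per survey method. A stripped-down OLS sketch (the clustered standard errors, block covariates and SUR system estimation described above are omitted):

```python
import numpy as np

def slope_by_method(x, y, method):
    """Regress Y_expo on normalized answers separately for each survey
    method; a steeper slope indicates better discrimination of the
    exposed group."""
    slopes = {}
    for m in np.unique(method):
        xi, yi = x[method == m], y[method == m]
        X = np.column_stack([np.ones(len(xi)), xi])
        slopes[m] = np.linalg.lstsq(X, yi, rcond=None)[0][1]
    return slopes
```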

[Table 4 here]

To assess classification, we model the relationship between Y_expo and the survey answers as

linear and use a logit link function. On this aspect of prediction, we find no evidence

that QVSR consistently outperforms Likert and Likert +. QVSR either does marginally better at

classifying respondents or returns a c-statistic similar to the one obtained using Likert and Likert +.

In other words, while a higher score in QVSR is associated with a higher likelihood of being exposed

to a reform’s consequences, a model trying to predict exposure using QVSR assuming linearity and

using a logit link function does not necessarily improve classification success.

To Sum Up

Based on our model of survey responses, we expected QVSR to generate variance that, compara-

tively, better overlaps with differences in preference intensity. Assuming this is true, then QVSR

should better predict who cares enough about a given issue to act on it. It should also better

predict who is likely to be exposed to the consequences of a given policy. To examine this issue,

we distinguished between two facets of prediction, namely discrimination and classification. Bet-

ter classification abilities are always preferred. For similar classification ability, a model/tool with

better discrimination abilities is always preferred.

To test the first prediction, we used behavioral tasks that, while more costly than simply an-

swering a Likert item, are nevertheless still relatively low-stakes. We see two advantages to this

design. First, it captures the reality of everyday politics from the point of view of the individual

voter, namely a relatively low-stakes (at least materially) yet highly symbolic activity. Sec-

ond, it provides a conservative test of QVSR. Evidence that QVSR outperforms Likert and Likert +

items indicates that QVSR will outperform them in predicting more costly behaviors.

To test the second prediction, we collected information on objective features of an individual’s

life experience that are associated with the likelihood of benefiting (or not) from a given policy. We


included both material (paid parental leave for young parents) and non-material (the banning of

abortion for evangelicals) benefits. The value of the test comes from the battery of items asked,

allowing us to identify a robust pattern across very different types of covariates.

Overall, QVSR only occasionally outperforms Likert on classification. Yet, for a similar classifi-

cation ability, QVSR always outperforms Likert with regards to discrimination.

Predictions when comparing QVSR and Likert + were more ambiguous. Our evidence indicates

a pattern similar to that found when comparing QVSR and Likert. For a similar classification ability,

QVSR more consistently outperforms Likert + with regards to discrimination. On two behavioral

tasks, namely the dictator game and turnout, QVSR also outperforms Likert + in terms of its

classification abilities.

Conclusion

While many have highlighted the theoretical importance of preference intensity, there currently

exists no routine way of measuring it. Martin Gilens, in his opus on democratic accountability and

political equality, acknowledges this methodological gap and sets the issue of preference intensity

aside, recoding Likert items into a binary Favor/Oppose variable (Gilens 2012: 38). In this paper,

we have discussed and tested a new possible way of measuring preference intensity, namely QVSR,

and compared it to existing alternatives such as Likert +. We draw a few lessons regarding QVSR’s

potential contribution to survey research from our results and describe next steps for improving

QVSR.

QVSR did not necessarily classify outcomes better than alternative survey instruments. Its main

improvement is on the discrimination dimension. This failure to translate better discrimination

into improved classification requires additional probing. One area of investigation is whether using

a logit regression as a classification method while focusing on a small subset of survey answers

is especially favorable to Likert and Likert +. Indeed, by design, answers in QVSR make sense

as a bundle. It is thus to be expected that QVSR would perform better once all survey answers are

included in the same model. Using such an approach combined with machine learning, we found

substantially better prediction of an important electoral indicator like voter turnout. Nevertheless,

it could also be that the quadratic pricing is too constraining relative to existing levels of preference

intensity. While it de-bunches in a meaningful way, improving discrimination, it forces too many

individuals below the classification threshold, increasing the overall number of false negatives, and

limiting gains on classification.

Another possible reason for Likert and Likert +’s relatively strong classification abilities is par-

tisan polarization: the slope generated by comparing the behavior of Republicans and Democrats

is enough to match QVSR on classification. QVSR’s comparative advantage is thus most apparent

when predicting behavior using only data from members of one of the two major parties, as con-


firmed by the analysis of the letter writing and gun donation tasks. Note that one concern with

skewed or bimodal distributions is that they shift a lot of the weight from the data to the assumptions

built into the model: by artificially fitting a line between the preferences of strong Republicans

and strong Democrats, OLS models end up “inventing” a world in between. In addition, when the

distribution of key variables is highly skewed, OLS regression, and many of the straightforward

statistical tests common in political science, become unreliable. Researchers concerned about these

issues might thus prefer QVSR despite the low improvement on classification.

QVSR’s discrimination ability could be especially valuable for work on the micro-foundations

of models of democratic politics. For example, Kuziemko et al. (2013) ran a survey experiment

to understand how support for redistribution is affected by information on inequality. They find

no effect of their informational treatment on support for redistributive social policies, which were

measured using a Likert item. QVSR may have yielded more discriminatory variance and thus a better

‘signal’. Note that such variance is especially valuable when the estimate of interest leverages the

covariance between two variables. For example, support for income redistribution is high among

all Democrats, irrespective of their income. One possibility is that low-income Democrats might

care more about the issue than high-income Democrats relative to other issues constitutive of the

Democratic party’s platform. In other words, rejecting the hypothesis of a positive income gradient

requires a very different survey methodology.

Several issues remain to be investigated. First, we still lack strong evidence supportive of the

claim, mentioned in footnote 7, that intense preferences are less amenable to contextual cues.

Addressing this issue has the potential to settle a core disagreement between behavioral studies

of public opinion and work in political economy. To simplify, political economists assume that

individuals hold political preferences and that these preferences direct, and consequently predict

politically-relevant behavior. Students of public opinion have challenged the accuracy and ade-

quacy of these behavioral assumptions. Achen and Bartels (2016), for instance, argue that the

causal arrow between preferences and behavior should be reversed: citizens first pick a politician

for reasons that have little to do with policy preferences and then adopt that politician’s policy

views (see also Lenz (2013); Freeder, Lenz and Turney (2018)). “Even the more attentive citizens,”

they write, “mostly adopt the policy positions of their parties as their own: they are mirrors of the

parties, not masters” (p 299). One way to rescue political economy’s core assumptions while also

accounting for the empirical patterns documented by those critical of this framework is to empha-

size that preferences vary in intensity.17 Intense attitudes “have the characteristics that” political

economists “assume are possessed by all (preferences)” (Howe and Krosnick 2017: 328). While

intensely held attitudes might result in party switching, weakly held attitudes are more likely to

be affected by partisan elite rhetoric (Zaller 1992, 2012). In other words, weakly held preferences are

easier to change or manipulate, but the result of this change is unlikely to have meaningful behav-

17 See Krosnick (1999); Howe and Krosnick (2017); Miller and Peterson (2004) for more on this theme.


ioral consequences (Lenz 2009; Zaller 2012).18 A tool like QVSR could potentially help revisit this

important debate.

From a methodological standpoint, we have yet to examine the impact of changing the menu

of options, as well as the consequences of switching to other forms of pricing (e.g., linear). Most

importantly, we have set aside the issue of anchors that allow comparability across menus and

surveys. Possible anchors include substituting one of the questions with a non-political item. A

revealed preference anchor might be a small monetary lottery. Here, the unit would be the value

of relaxing the budget constraint in QVSR - how much the user would be willing to pay to express

a more intense preference to the surveyor. The second anchor is analogous to the “Lagrange mul-

tiplier” in microeconomics. An advantage is that the researcher observes on each policy question

the willingness to trade off the marginal benefit (‘vote’) against the marginal cost (loss of money). A

second advantage is bridging survey research to social choice theory and welfare economics. We

leave these explorations for future work.

We hope that the argument and results presented in this study will contribute to innovation in

survey research going forward. The model presented in this paper (see also Appendix) lays the

foundations for fleshing out how competing methods might perform relative to QVSR. We do not

believe the current version of QVSR is the final word. To accelerate innovation in survey design,

we plan to make available online a version of QVSR, with a user friendly interface (both for the

researcher and the user) and flexible enough to minimize the start-up costs for such follow-up

studies.

18 Zaller, in a recent review of the research published since his 1992 opus, similarly distinguishes between strong and weak attitudes. Some voters, he argues, develop strong opinions on one or two of the many major issues that constitute the agenda of national politics. These voters, who form what Converse called “issue publics,” attach themselves to parties on the basis of their strong issue-specific concerns (Leeper and Slothuus 2014). They must then decide what they “think about [the other] issue positions attached to the party they embrace.” Evidence indicates that they will often end up expressing support for the party’s broad agenda. Their preferences on these issues are best understood as a signal of support for the party (Bullock et al. 2013) and are unlikely to be the reflection of deeply held and behaviorally relevant preferences (Freeder, Lenz and Turney 2018).


Page 29: Who Cares? Measuring Preference Intensity in a Polarized ...users.nber.org/~dlchen/papers/Who_Cares.pdfQuadratic Voting for Survey Research (QVSR). This method mimics real world trade-offs

References

Abramson, Scott F, Korhan Kocak and Asya Magazinnik. 2019. What Do We Learn About Voter

Preferences From Conjoint Experiments? Technical report, Working paper. URL: https://scholar.princeton.edu/sites/default/files . . .

Achen, Christopher H and Larry M Bartels. 2016. Democracy for realists: Why elections do not produce responsive government. Princeton University Press.

Alesina, Alberto, Armando Miano and Stefanie Stantcheva. 2018. Immigration and Redistribution.

Technical report National Bureau of Economic Research.

Alesina, Alberto, Stefanie Stantcheva and Edoardo Teso. 2018. “Intergenerational mobility and

preferences for redistribution.” American Economic Review 108(2):521–54.

Bertrand, Marianne and Sendhil Mullainathan. 2003. "Enjoying the quiet life? Corporate governance and managerial preferences." Journal of Political Economy 111(5):1043–1075.

Bouton, Laurent, Paola Conconi, Francisco Pino and Maurizio Zanardi. 2018. “Guns, Environment,

and Abortion: How Single-Minded Voters Shape Politicians’ Decisions.”.

Bullock, John G, Alan S Gerber, Seth J Hill and Gregory A Huber. 2013. Partisan bias in factual

beliefs about politics. Technical report National Bureau of Economic Research.

Cavaille, Charlotte, Daniel L. Chen and Karine Van Der Straeten. 2019. “A Decision-Theoretic Ap-

proach to Understanding Survey Response: Likert vs. Quadratic Voting for Attitudinal Research.”

University of Chicago Law Review.

Dahl, Robert Alan. 1973. Polyarchy: Participation and opposition. Yale University Press.

Freeder, Sean, Gabriel S Lenz and Shad Turney. 2018. “The Importance of Knowing ‘What Goes

With What’.” Journal of Politics .

Gennaioli, Nicola and Guido Tabellini. 2019. “Identity, Beliefs, and Political Conflict.”.

Gilens, Martin. 2012. Affluence and influence: Economic inequality and political power in America.

Princeton University Press.

Grossman, Gene M and Elhanan Helpman. 2018. Identity politics and trade policy. Technical report

National Bureau of Economic Research.

Hanretty, Chris, Benjamin E Lauderdale and Nick Vivyan. 2019. “A Choice-Based Measure of Issue

Importance in the Electorate." American Journal of Political Science.

Howe, Lauren C and Jon A Krosnick. 2017. “Attitude strength.” Annual review of psychology 68:327–

351.

Krosnick, Jon A. 1999. “Survey research.” Annual review of psychology 50(1):537–567.

Kuziemko, Ilyana, Michael I Norton, Emmanuel Saez and Stefanie Stantcheva. 2013. How elastic

are preferences for redistribution? Evidence from randomized survey experiments. Technical

report National Bureau of Economic Research.


Lalley, Steven P and E Glen Weyl. 2016. "Quadratic voting."

Leeper, Thomas J and Rune Slothuus. 2014. “Political parties, motivated reasoning, and public

opinion formation.” Political Psychology 35:129–156.

Lenz, Gabriel S. 2009. “Learning and opinion change, not priming: Reconsidering the priming

hypothesis.” American Journal of Political Science 53(4):821–837.

Lenz, Gabriel S. 2013. Follow the leader?: how voters respond to politicians’ policies and performance.

University of Chicago Press.

Malhotra, Neil, Jon A Krosnick and Randall K Thomas. 2009. “Optimal design of branching ques-

tions to measure bipolar constructs.” Public Opinion Quarterly 73(2):304–324.

Manski, Charles F. 2000. Economic analysis of social interactions. Technical report National bureau

of economic research.

Mason, Lilliana. 2014. "'I Disrespectfully Agree': The Differential Effects of Partisan Sorting on Social and Issue Polarization." American Journal of Political Science.

Miller, Joanne M and David AM Peterson. 2004. “Theoretical and empirical implications of attitude

strength.” The Journal of Politics 66(3):847–867.

Pierson, Paul. 2001. The new politics of the welfare state. Oxford University Press, USA.

Posner, Eric A and E Glen Weyl. 2018. Radical markets: Uprooting capitalism and democracy for a just society. Princeton University Press.

Quarfoot, David, Douglas von Kohorn, Kevin Slavin, Rory Sutherland, David Goldstein and Ellen

Konar. 2017. “Quadratic voting in the wild: real people, real votes.” Public Choice 172(1-2):283–

303.

Revilla, Melanie A, Willem E Saris and Jon A Krosnick. 2014. “Choosing the number of categories

in agree–disagree scales.” Sociological Methods & Research 43(1):73–97.

Zaller, J.R. 1992. The nature and origins of mass opinion. Cambridge university press.

Zaller, J.R. 2012. “What Nature and Origins Leaves Out.” Critical Review 24(4):569–642.

Zellner, Arnold. 1962. “An efficient method of estimating seemingly unrelated regressions and tests

for aggregation bias." Journal of the American Statistical Association 57(298):348–368.


Table 1: Total and Marginal Cost of Voting Under QV

Votes   Total cost   Marginal cost
1       1            1
2       4            3
3       9            5
4       16           7
5       25           9
6       36           11
7       49           13
8       64           15
...     ...          ...
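The schedule in Table 1 follows directly from the quadratic pricing rule: v votes cost v^2 credits in total, so the v-th vote costs v^2 - (v - 1)^2 = 2v - 1 credits at the margin. A minimal sketch in Python (the function names are ours, for illustration only):

```python
def total_cost(votes: int) -> int:
    # Under quadratic voting, v votes on an issue cost v^2 credits in total.
    return votes ** 2

def marginal_cost(votes: int) -> int:
    # Credits needed for the v-th vote: v^2 - (v - 1)^2 = 2v - 1.
    return total_cost(votes) - total_cost(votes - 1)

# Reproduce the rows of Table 1 for 1 through 8 votes.
for v in range(1, 9):
    print(v, total_cost(v), marginal_cost(v))
```

Because the marginal cost grows with every vote already cast on an issue, spreading votes across issues is cheaper than piling them on one, which is what makes intensity costly to express.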

Table 2: Sample Sizes

        Likert   Likert +   QVSR
Total   1263     1276       1127

Table 3: Proxies of Personal Exposure and Selected Policy Proposals

Variable: Sexual orientation
  Measurement: Which of the following best describes how you think of yourself? Gay/Lesbian versus Straight (GfK)
  Policy: Giving same sex couples the legal right to adopt a child

Variable: Gun ownership
  Measurement: Are there any guns or revolvers that are kept in your home or garage? (Y/N) Do any of these guns personally belong to you? (Y/N) (GfK)
  Policy: Laws making it more difficult for people to buy a gun

Variable: Proximity to immigration event
  Measurement: Where were you / your parents / grand-parents born?
  Policy: Building a wall on the US border with Mexico

Variable: Gender
  Measurement: What is your gender? (GfK)
  Policy: Require employers to pay women and men the same amount for the same work

Variable: Race
  Measurement: What is your race/ethnicity? (GfK)
  Policy: Preferential hiring and promotion of blacks to address past discrimination

Variable: Proximity to birth of child
  Measurement: Do you have children? How old is your youngest? Do you plan on having children (another child) one day?
  Policy: Require employers to offer paid leave to parents of new children

Variable: Wage level relative to current/proposed minimum wage
  Measurement: Are you paid at/below/above the minimum wage? Are you paid at/below/above $15/h?
  Policy: Raising the minimum wage to $15/h over the next 3 years

Variable: Religiosity / born-again
  Measurement: Would you describe yourself as a born-again or Evangelical Christian? How often do you attend religious service? (GfK)
  Policy: A nationwide ban on abortion with only very limited exceptions


Table 4: Differences in the regression slope for each method

                          Likert   se     Likert +   se     QVSR   se
Gender                    0.40     0.04   0.63       0.07   0.80   0.02
Religious                 0.40     0.04   0.33       0.02   0.63   0.06
Race                      0.29     0.01   0.35       0.04   0.69   0.05
Gun owner                 0.56     0.03   0.53       0.05   1.11   0.17
Native born               0.16     0.05   0.32       0.05   0.44   0.08
Proximity to childbirth   0.65     0.10   1.12       0.06   1.22   0.06
Min wage                  0.15     0.14   0.54       0.05   0.57   0.35
Same sex rel              0.12     0.01   0.17       0.02   0.24   0.05


Figure 1: QV Pilot Evaluating American Political Issues

Source: Quarfoot et al. (2017)


Figure 2: Assessing Discrimination (1)

[Figure: two histograms of border-wall responses with an overlaid line; panels "Border Wall - Likert" (x-axis from -3 to 3) and "Border Wall - QVSR" (x-axis from -7 to 7).]

Pro-immigration behavior Y against opposition to the border wall X (hypothetical). Y-axis: percent of respondents who engage in pro-immigration behavior (e.g. donate to a pro-immigration non-profit). The green histograms represent responses under Likert and QVSR. The red line represents one possible pattern, assuming the additional variance under QVSR is indeed better at discriminating between the individuals most and least likely to exhibit Y = 1.


Figure 3: Assessing Discrimination (2)

[Figure: four panels of binned scatter plots and histograms; only axis ticks survive the text extraction.]


Figure 4: Favor or Oppose: Building a wall on the US border with Mexico?

[Figure: six histograms of border-wall responses; y-axis: proportion of respondents. Panels: "Republicans - Likert" and "Democrats - Likert" (x-axis from -3 to 3), "Republicans - Likert +" and "Democrats - Likert +" (x-axis from -5 to 5), "Republicans - QVSR" and "Democrats - QVSR" (x-axis from -7 to 7).]

Source: Cavaille et al. 2019, GfK-Ipsos, weighted


Figure 5: Minimum Wage versus Border Wall by Education Level (Democrats Only)

[Figure: six scatter plots of minimum-wage responses (y-axis) against opposition to the border wall (x-axis), for Democrats by education level. Panels: "Likert - BA", "Likert - No BA", "Likert Plus - BA", "Likert Plus - No BA", "QVSR - BA", "QVSR - No BA".]

Question wording: Favor/oppose building a wall on the border with Mexico; raising the minimum wage to $15 an hour over the next 6 years. Anyone with a 4-year degree is coded as having a BA. Random noise has been added to facilitate interpretation.
Source: Cavaille et al. 2019, GfK-Ipsos, weighted


Figure 6: Wrote About the Min Wage Bill —Raw Survey Responses—

[Figure: binned scatter plots (top row, y-axis "% Wrote about Min. Wage Bill") and response histograms (bottom row) under Likert, Likert +, and QVSR, for all voters.]

Binned scatter plot with the y-axis representing the percentage of individuals who decided to write about the minimum wage bill. Bins on the x-axis each represent a response category (Xraw). We use only the absolute values of the response scales.
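The construction behind these binned scatter plots can be sketched as follows: group respondents by (absolute) response category and compute, within each bin, the share who took the real-world action. The function and variable names below are ours, for illustration:

```python
from collections import defaultdict

def binned_shares(responses, actions):
    """responses[i]: |survey response| of respondent i, used as the bin id;
    actions[i]: 1 if the respondent took the action (e.g. wrote about the
    bill), else 0. Returns {bin: share of respondents in the bin who acted}."""
    counts, acted = defaultdict(int), defaultdict(int)
    for r, a in zip(responses, actions):
        counts[r] += 1
        acted[r] += a
    return {r: acted[r] / counts[r] for r in counts}

# Toy data: QVSR-style responses 0..7 (absolute number of votes).
resp = [0, 0, 1, 2, 2, 3, 5, 7, 7]
act = [0, 0, 0, 0, 1, 1, 1, 1, 1]
print(binned_shares(resp, act))
```

A steeper bin-to-bin gradient under one survey technology is, in this framework, evidence that its scale discriminates better between respondents who do and do not act.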


Figure 7: Wrote About the Min Wage Bill —Predicted Probabilities—

[Figure: binned scatter plots under Likert, Likert +, and QVSR, for all voters (top) and Democrats (bottom), with histograms of the predicted probabilities.]

Binned scatter plot with the y-axis representing the percentage of individuals who decided to write about the minimum wage bill. Bins on the x-axis each represent a predicted probability (Xpred) from a logit using survey responses on both the minimum wage and abortion items. We use only the absolute values of the response scales.


Figure 8: Minimum Wage versus Abortion (Democrats Only)

[Figure: scatter plots of restrictions on abortion (y-axis) against raising the minimum wage (x-axis) under Likert (scale -3 to 3), Likert Plus (-5 to 5), and QVSR (-7 to 7).]

Random noise has been added to facilitate interpretation.


Figure 9: Donated to Pro Gun Control Organization —Raw Survey Responses—

[Figure: binned scatter plots (y-axis "% Donation pro gun control") and response histograms under Likert (-3 to 3), Likert + (-5 to 5), and QVSR (-7 to 7).]

Binned scatter plot with the y-axis representing the percentage of individuals who decided to donate to a pro-gun control organization. Bins on the x-axis each represent a response category (Xraw).


Figure 10: Donated to Pro-Gun Control —Predicted Probabilities—

[Figure: binned scatter plots under Likert, Likert +, and QVSR, for all voters (top) and Democrats (bottom), with histograms of the predicted probabilities.]

Binned scatter plot with the y-axis representing the percentage of individuals who decided to donate to a pro-gun control organization. Bins on the x-axis each represent a predicted probability (Xpred) from a logit using survey responses on both the gun control and border wall items.


Figure 11: Punished Independent —Predicted Probabilities—

[Figure: binned scatter plots under Likert, Likert +, and QVSR (two rows), with histograms of the predicted probabilities.]

Binned scatter plot with the y-axis representing the percentage of individuals who chose to punish the Independent, conditional on having previously shared some amount with the Independent voter. Bins on the x-axis each represent a predicted probability (Xpred) from a logit using survey responses on both the gun control and border wall items.


Figure 12: Turnout —Predicted Probabilities—

[Figure: binned scatter plots under Likert, Likert +, and QVSR, with histograms of the predicted probabilities.]

X-axis represents the predicted probabilities generated after running a regression model (see text for more details) predicting turnout, using answers to all questions as predictors and allowing for interactions. Predicted probabilities have been normalized.

Figure 13: Being Black and Support for Affirmative Action

[Figure: binned scatter plots (y-axis "% Black") with response histograms, under Likert, Likert +, and QVSR.]


Figure 14: Being a Woman and Support for Equal Pay

[Figure: binned scatter plots (y-axis "% Women") with response histograms, under Likert, Likert +, and QVSR.]

Figure 15: Temporal Proximity to Childbirth and Support for Parental Paid Leave

[Figure: binned scatter plots (y-axis "Child Score") with response histograms, under Likert, Likert +, and QVSR.]

On the Y-axis: Y = 3 if the respondent is planning to get pregnant or just got pregnant, Y = 2 if she has young children, Y = 1 otherwise.


Figure 16: Likelihood of being Born Again and Opposition to Abortion Restriction

[Figure: binned scatter plots (y-axis "% Born Again") with response histograms, under Likert, Likert +, and QVSR.]

On the Y-axis: Y = 1 if the respondent identifies as born again, Y = 0 otherwise.


A Letter Writing

[Appendix exhibits not recoverable from the text extraction.]

B Donation

[Appendix exhibits not recoverable from the text extraction.]

C Dictator Game

[Appendix exhibits not recoverable from the text extraction.]

D A model of survey answers

Likert scales are the standard tool for measuring policy preferences. Nevertheless, standard Likert scales raise (at least) two potential issues when one wants to measure such preferences.

The first conceptual issue with standard Likert scales rests with what they actually try to mea-

sure: is the wording fit to capture preference intensity?

The second conceptual issue with Likert scales, and with any survey instrument more generally,

is that respondents may not answer truthfully.

We propose a model that allows us to study both potential flaws, and to examine how alternative

survey techniques - Likert+ and QVSR - may help address these issues.

D.1 Assumptions

We build on Cavaille, Chen and Van Der Straeten (2019) to derive predictions about the relative

performance of QVSR, Likert and Likert+ at measuring policy preferences.

Given the potential flaws of standard Likert scales we want to address, we make two main sets

of assumptions: one about policy preferences, and one about respondents' motivations when answering the survey.

Assumptions about policy preferences Consider a number of proposed policy reforms, e.g.

building a wall on the border between the US and Mexico or legislating to give same sex couples the

right to adopt a child.

We assume that an individual’ policy preferences on these different possible reforms are char-

acterized by two parameters: her preference orientation, and her preference intensity.

The preference orientation describes whether the individual agrees or not with the reform as

described. It is a subjective evaluation of the ”quality” of the reform, as considered from the

individual’s perspective. Formally, one’s preference orientation on each issue k = 1, ...,K can be

described by a real number αik in the interval [−1,+1], where +1 means perfect agreement with

the reform, and −1 total disagreement with it. Intermediate values correspond to intermediate opinions, which might, for instance, be due to ambivalence (e.g. support for the policy principle but not for its specific suggested implementation).

The preference intensity describes how important the issue at stake is to the individual. Formally,

how much an individual cares about an issue, i.e. her preference intensity, can be captured by a positive number βik in the interval [0,+1], where βik = 0 means that the individual does not care about this issue, in the sense that she would not be ready to take costly action in favor of or against any policy change on this issue, and βik = 1 means that the question is of the utmost importance to her.


Ideally, one would like to collect information about both parameters, since both are relevant to

social scientists studying policy preferences and to policy makers.

Remark about standard Likert scales With this framework in mind, we can ask what Likert

scales actually try to measure. The wording of the Likert question in our survey is the following:

”Do you favor, oppose, or neither favor nor oppose: [Example] Giving same sex couples the legal

right to adopt a child?”, and if favor (resp. oppose), “Do you favor (resp. oppose) that a great deal,

moderately, or a little?”.

Answers to Likert questions are ambiguous regarding the key parameters we are interested

in measuring, that is, the preference orientation and the preference intensity as defined above.

Indeed, the wording (”favor”/”oppose”) seems to imply that the question should predominantly

tap into the 'preference orientation' dimension. Yet, when an individual reports that she opposes

giving same sex couples the legal right to adopt a child a great deal, it can be interpreted as meaning

that she fully disagrees with the terms of this reform (preference orientation), but it may also mean

that she has strong feelings about this issue (preference intensity). Formally, we will denote by

xik ∈ [−1,+1] the sincere answer that individual i would like to give to this question, which, as

discussed above, is likely to include information mostly about αik (the preference orientation) but

also about βik (the preference intensity).

Assumptions about motivations when answering the survey We assume that an individual may

have (at least) two (potentially) conflicting motives when answering the survey. On the one hand,

she derives some intrinsic utility from answering each question sincerely. This might derive from

some expressive benefits (I am happy to tell who I am, or what I stand for), or this might be

induced by a psychological cost of lying. We call this motive the ”sincerity motive”. This is the

motive generally assumed in the literature using survey data.19 On the other hand, we defend the

view that she may also care about how her answers will be read and interpreted by other people,

which might conflict with this sincerity motive. This additional motivation might encompass a

variety of psychological mechanisms, depending on the context and the question. For example,

imagine that the government is considering whether a specific reform should be adopted or not,

and that a survey is conducted to measure public support for or opposition to this reform. The

respondent might be willing to use her answers to the survey to influence policy making. Another

motivation for the respondent might be to signal to herself, or to whoever is going to read the

survey, that she has some socially desirable traits. For example, she may want to appear altruistic,

non-racist, tolerant, etc. She might also want to signal a group identity. For example, if she is a

Republican, and she expects Republicans to take specific positions on some issues, she may suffer

a psychological cost from moving away from these typical “Republican positions”. Whatever the

19 One assumption commonly made by social scientists using survey data is that surveys provide a faithful - if noisy - measure of respondents' views (see for example Achen 1975 or Ansolabehere et al. 2008).


source of this motivation, because of this ”signaling motive”, one position is particularly attractive

to the respondent, which might be different from where she really stands.

Under all three survey techniques, the individual is asked to answer the ”Favor/oppose” ques-

tion. Additionally, under Likert+, she is also asked to answer the “How important” question. For-

mally, we assume that, on each issue k = 1, ...,K, respondent i is characterized by four parameters:

• the answer she would give to the standard “Favor/oppose” question if she were to answer it

sincerely, xik ∈ [−1,+1],

• the answer to the “Favor/oppose” question that she finds the most attractive because of the

signaling motive, denoted by Xik ∈ [−1,+1],

• her true preference intensity on this issue, βik ∈ [0,+1],

• the answer to the “How important” question that she finds the most attractive because of the

signaling motive, denoted by Bik ∈ [0,+1].

Under standard Likert and QVSR, we assume that the utility a respondent derives from answering the survey, denoted by Vi, depends on her answers to the "Favor/oppose" question, denoted by x̂i = (x̂i1, ..., x̂iK) ∈ [−1,+1]^K, in the following way:

$$V_i(\hat{x}_i) = -\sum_k w_{ik}\left[(1-z_{ik})(\hat{x}_{ik}-x_{ik})^2 + z_{ik}(\hat{x}_{ik}-X_{ik})^2\right], \tag{A1}$$

with wik > 0 and zik ∈ [0,1].20 Parameter wik is the importance put on issue k when answering the "Favor/oppose" question, and parameter zik is the relative weight of the signaling motive compared to the sincerity motive for this question.21
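Absent any constraint from the survey technology, the quadratic objective in (A1) is separable across issues, and the first-order condition pins down each reported answer as a weighted average of the sincere answer and the signaling target. A small numerical check (the function and variable names are ours, for illustration):

```python
def utility(answer: float, sincere: float, signal: float, z: float) -> float:
    # Per-issue term of (A1), up to the positive weight w_ik:
    # a quadratic sincerity loss plus a quadratic signaling loss.
    return -((1 - z) * (answer - sincere) ** 2 + z * (answer - signal) ** 2)

def best_answer(sincere: float, signal: float, z: float) -> float:
    # First-order condition: the optimal report is the z-weighted
    # average of the sincere answer and the signaling target.
    return (1 - z) * sincere + z * signal

# Example: sincere position -0.2, signaling target +1.0, z = 0.25.
a_star = best_answer(-0.2, 1.0, 0.25)   # approximately 0.1
# No small perturbation improves on a_star.
assert all(utility(a_star, -0.2, 1.0, 0.25) > utility(a_star + d, -0.2, 1.0, 0.25)
           for d in (-0.05, 0.05))
```

With z_ik = 0 the respondent reports sincerely; as z_ik grows, the report drifts toward the signaling target X_ik. This is the sense in which answers can be untruthful even before any scale constraint binds.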

Under Likert+, the respondent is additionally asked to report how important each issue is to her personally. We then assume that in this case, the utility a respondent derives from answering the survey, denoted by Wi, depends on both her answers to the "Favor/oppose" question, denoted as previously by x̂i = (x̂i1, ..., x̂iK) ∈ [−1,+1]^K, and her answers to the "How important" question, denoted by β̂i = (β̂i1, ..., β̂iK) ∈ [0,+1]^K. We make the following assumption in that case:

$$W_i(\hat{x}_i, \hat{\beta}_i) = -\sum_k w_{ik}\left[(1-z_{ik})(\hat{x}_{ik}-x_{ik})^2 + z_{ik}(\hat{x}_{ik}-X_{ik})^2\right] - \sum_k w'_{ik}\left[(1-z'_{ik})(\hat{\beta}_{ik}-\beta_{ik})^2 + z'_{ik}(\hat{\beta}_{ik}-B_{ik})^2\right],$$

20 We use these simple quadratic forms to derive some simple closed-form solutions. See Cavaillé, Chen and Van der Straeten (2019) for a more general specification.

21 Our model shares some similarities with that of Bullock et al. (2015), which studies systematic differences between Republican and Democrat voters in how they answer factual questions about the economy.



with w′ik > 0 and z′ik ∈ [0,1]. Parameter w′ik is the importance put on issue k when answering the “How

important” question, and parameter z′ik is the relative weight of the signaling motive compared to

the sincerity motive for this question.

“Survey technology” The survey technology specifies the set of questions that are asked, and the set of answers that are admissible, that is, the set of answers the respondents can choose from. For example, under standard Likert scales, a respondent can pick any answer on a pre-determined scale (e.g. “oppose a great deal”, “oppose moderately”, “neither oppose nor support”, “support moderately”, “support a great deal”). Under QVSR, there is a maximum number of credits that the respondent can use to answer the “Favor/oppose” question, and the marginal cost of moving away from the neutral answer (here 0) increases linearly with the distance to this neutral answer.
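This price schedule can be illustrated with a short sketch (not from the paper): if casting v votes on an issue costs v² credits, then the marginal cost of the v-th vote is v² − (v − 1)² = 2v − 1, which grows linearly in v.

```python
# Illustrative sketch of the quadratic price schedule: v votes cost v**2
# credits, so each additional vote on the same issue costs more than the last.
def total_cost(votes):
    """Credits spent to cast `votes` votes on a single issue."""
    return votes ** 2

for v in range(1, 5):
    marginal = total_cost(v) - total_cost(v - 1)  # 2*v - 1
    print(v, total_cost(v), marginal)  # marginal costs: 1, 3, 5, 7
```

The linearly increasing marginal cost is what forces respondents to trade off intensity of expression on one issue against the others.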

Optimization problem Individuals are assumed to choose answers (x̄_i in all cases, and additionally β̄_i in the case of Likert+) that maximize the utility function V_i (W_i in the case of Likert+), subject to the constraints on answers imposed by the survey technology.

Equipped with this model, we can predict how respondents will answer the survey. In particular, our interest will be to discuss whether these reported views are a good measure of the underlying parameters of interest: preference orientation (α_ik) and preference intensity (β_ik). In the next sections, we derive answers under Likert, Likert+ and QVSR respectively, and discuss the relative performance of each survey technology at measuring policy preferences.

D.2 Optimal responses under standard Likert scales

Under the standard Likert technology, the individual can freely pick any answer she wishes on all issues. She solves the following optimization program:

$$\max_{\bar{x}_i \in [-1,+1]^K} V_i(\bar{x}_i) = -\sum_k w_{ik}\left[(1-z_{ik})(\bar{x}_{ik}-x_{ik})^2 + z_{ik}(\bar{x}_{ik}-X_{ik})^2\right].$$

It is easy to check that the solution of the optimization program for issue k, denoted by x̄^L_ik, is:

$$\bar{x}^L_{ik} = (1-z_{ik})x_{ik} + z_{ik}X_{ik}. \tag{A2}$$

If z_ik = 0 (only the sincerity motive is active), the individual has no incentive to misreport her view, and x̄^L_ik = x_ik. But as soon as z_ik > 0, the individual has an incentive to move away from her true opinion in the direction of the partisan target. In particular, in a situation of intense political polarization, one might expect the |X_ik| to be quite large, which will induce respondents to inflate their support for or opposition to policy reforms (depending on whether their preferred party endorses or opposes them). This might result in massive bunching on issues with a strong and clear partisan divide.
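The bunching mechanism in (A2) can be made concrete with a small sketch (the numbers are hypothetical, not from the paper): three moderate sincere views facing the same extreme partisan target X = +1.

```python
# Illustrative sketch of (A2): the reported Likert answer is a weighted
# average of the sincere view and the signaling target.
def likert_answer(x_sincere, X_target, z):
    """Reported answer mixing the sincere view (weight 1-z) and the target (weight z)."""
    return (1 - z) * x_sincere + z * X_target

# Hypothetical respondent with heterogeneous moderate views on three issues
# whose party line sits at the extreme X = +1.
sincere_views = [0.2, 0.4, 0.6]
for z in (0.0, 0.5, 0.9):  # increasing weight on the signaling motive
    reported = [likert_answer(x, 1.0, z) for x in sincere_views]
    print(z, [round(r, 2) for r in reported])
# As z grows, all three reported answers bunch near +1, erasing most of the
# heterogeneity in the sincere views.
```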

Second, note that how much the individual values her response to this question compared to

other questions in the survey (parameter wik) does not influence her answers. Indeed, each question

is treated in isolation.

Regarding the key parameters we are interested in measuring, that is, the preference orientation (α_ik) and the preference intensity (β_ik), answers under standard Likert scales are a mix of the sincere response x_ik (which itself mostly carries information about α_ik, the preference orientation, and presumably some limited information about β_ik, the preference intensity) and the signaling target X_ik.

D.3 Optimal responses under Likert+

Under Likert+, the individual answers both the “Favor/oppose” question and the “How important” question. Question by question, the individual solves the trade-off between the sincerity motive and the signaling motive, resulting in the following responses, denoted x̄^L+_ik and β̄^L+_ik:

$$\bar{x}^{L+}_{ik} = (1-z_{ik})x_{ik} + z_{ik}X_{ik} = \bar{x}^L_{ik} \tag{A3}$$

$$\bar{\beta}^{L+}_{ik} = (1-z'_{ik})\beta_{ik} + z'_{ik}B_{ik} \tag{A4}$$

In particular, note that the partisan motive may induce individuals to misreport the importance of issues. Some respondents may have an incentive to inflate the importance of issues that are high on the agenda of their preferred party, even if they feel only moderately concerned about them.
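The signaling term in (A4) can even reverse the ranking of issues by reported importance, as in this sketch with hypothetical numbers (not from the paper):

```python
# Illustrative sketch of (A4): reported importance mixes true intensity and
# the party-agenda target, beta_bar = (1 - z') * beta + z' * B.
def reported_importance(beta_true, B_target, z_prime):
    return (1 - z_prime) * beta_true + z_prime * B_target

# Issue 1: genuinely important (beta = 0.7) but low on the party agenda (B = 0.1).
# Issue 2: genuinely minor (beta = 0.3) but central to the party agenda (B = 1.0).
z_prime = 0.6
r1 = reported_importance(0.7, 0.1, z_prime)  # 0.4*0.7 + 0.6*0.1 = 0.34
r2 = reported_importance(0.3, 1.0, z_prime)  # 0.4*0.3 + 0.6*1.0 = 0.72
print(r1, r2)  # reported ranking (r2 > r1) reverses the true ranking
```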

Comparison between Likert+ and standard Likert scales Compared to standard Likert scales, the advantage of Likert+ is obvious: it aims at collecting two pieces of information instead of one: x_ik (as in standard Likert) but also β_ik (the preference intensity). Yet, as highlighted above, this direct measure of preference intensity is likely to be imperfect. In a highly polarized environment, respondents may have an incentive to inflate the importance of the issues that are high on their preferred party's agenda.

D.4 Optimal responses under QVSR

Under QVSR, there is a maximum number of credits that the individual is allowed to spend on answers. Formally, assume that the set of feasible answers under QVSR is:

$$\left\{\bar{x}_i = (\bar{x}_{i1}, ..., \bar{x}_{iK}) \in [-1,+1]^K : \sum_k \bar{x}_{ik}^2 \le m\right\},$$



where m ∈ R_+, with m < K (the individual cannot pick the most extreme answer on all issues). Deriving the optimal answers under QVSR is more complicated since it involves solving a constrained maximization program. The individual solves the following optimization program:

$$\max_{\bar{x}_i \in [-1,+1]^K} \mathcal{L}(\bar{x}_i, \lambda_i) = -\sum_k w_{ik}\left[(1-z_{ik})(\bar{x}_{ik}-x_{ik})^2 + z_{ik}(\bar{x}_{ik}-X_{ik})^2\right] + \lambda_i\left[m - \sum_k \bar{x}_{ik}^2\right],$$

where λ_i is the Lagrange multiplier. The first order condition with respect to x̄_ik is

$$w_{ik}\left[(1-z_{ik})(\bar{x}_{ik}-x_{ik}) + z_{ik}(\bar{x}_{ik}-X_{ik})\right] + \lambda_i \bar{x}_{ik} = 0,$$

which yields:

$$\bar{x}^{QVSR}_{ik} = \frac{w_{ik}(1-z_{ik})}{w_{ik}+\lambda_i}\,x_{ik} + \frac{w_{ik}z_{ik}}{w_{ik}+\lambda_i}\,X_{ik} = \frac{w_{ik}}{w_{ik}+\lambda_i}\,\bar{x}^L_{ik}.$$

If ∑_k (x̄^L_ik)² ≤ m, responses are the same as under Likert (the budget constraint is not binding). If ∑_k (x̄^L_ik)² > m, then satisfying the budget constraint implies that:

$$\sum_k \left(\frac{w_{ik}}{w_{ik}+\lambda_i}\,\bar{x}^L_{ik}\right)^2 = m. \tag{A5}$$

Note that the left-hand side of the equality is strictly decreasing in λ_i, taking the value ∑_k (x̄^L_ik)², strictly higher than m, when λ_i = 0, and converging towards 0 as λ_i goes to +∞. Therefore, there exists a unique positive λ_i such that equality (A5) is satisfied.

Denoting the Lagrange multiplier at the optimum by λ*_i, the optimal response on issue k under QVSR is therefore:

$$\bar{x}^{QVSR}_{ik} = \frac{1}{1+\lambda^*_i/w_{ik}}\,\bar{x}^L_{ik} \tag{A6}$$
$$= \frac{1}{1+\lambda^*_i/w_{ik}}\left[(1-z_{ik})x_{ik} + z_{ik}X_{ik}\right].$$

As soon as the budget constraint is binding, compared to Likert, QVSR “shrinks” all answers towards the neutral answer (0). Expression (A6) shows that this contraction is likely to be heterogeneous across issues: more credits will be given to issues with a higher w_ik, meaning that respondents will put more credits on issues on which they judge their responses as important (see the utility function when answering surveys, (A1)).
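The fixed point defined by (A5) and (A6) is easy to compute numerically. The sketch below (hypothetical numbers, not from the paper) solves (A5) for the Lagrange multiplier by bisection, exploiting the monotonicity noted above, and then applies the shrinkage in (A6):

```python
# Illustrative sketch: compute QVSR answers from Likert answers, issue
# importances w, and credit budget m, per equations (A5)-(A6).
def qvsr_answers(x_likert, w, m, tol=1e-10):
    """Shrink Likert answers x_likert to fit the quadratic credit budget m."""
    if sum(x * x for x in x_likert) <= m:
        return list(x_likert)  # constraint not binding: same as Likert
    lo, hi = 0.0, 1.0
    # Grow the bracket until the budget is satisfied at lam = hi.
    while sum((wk / (wk + hi) * xk) ** 2 for wk, xk in zip(w, x_likert)) > m:
        hi *= 2
    while hi - lo > tol:  # LHS of (A5) is strictly decreasing in lam
        lam = (lo + hi) / 2
        spent = sum((wk / (wk + lam) * xk) ** 2 for wk, xk in zip(w, x_likert))
        lo, hi = (lam, hi) if spent > m else (lo, lam)
    lam = (lo + hi) / 2
    return [wk / (wk + lam) * xk for wk, xk in zip(w, x_likert)]

# Two issues bunched at the Likert extreme but with different importance w:
# the high-w issue keeps more of its answer after the QVSR contraction.
ans = qvsr_answers(x_likert=[1.0, 1.0], w=[1.0, 4.0], m=1.0)
print([round(a, 3) for a in ans])
```

This illustrates the heterogeneous contraction: identical Likert answers are separated under QVSR according to w_ik.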



Comparison between QVSR and standard Likert scales Let us first compare the perfor-

mance of QVSR and standard Likert scales. Note that both standard Likert and QVSR only ask the

“Favor/oppose” question. The relative performance of QVSR vs. Likert at measuring the relevant

preference parameters - to wit, the preference orientation, αik, and the preference intensity, βik -

will depend on the statistical relationship between wik (how much the individual suffers when she

has to deviate from reporting her unconstrained answers on issue k), βik (the intrinsic importance

of the issue), and zik (the relative importance of the partisan motive compared to the sincerity

motive).

One first advantage of QVSR compared to standard Likert is that answers under QVSR also incorporate information about w_ik (how much the individual suffers when she has to deviate from reporting her unconstrained answers on issue k). One might expect this parameter to correlate positively with β_ik (the intrinsic importance of the issue). If this is the case, answers under QVSR are still expected to be a mix of the respondent's preference orientation and her preference intensity, but the weight on the preference intensity parameter is expected to be larger than under standard Likert.

A second, more subtle advantage of QVSR is that if there is a positive correlation between w_ik (how much the individual suffers when she has to deviate from reporting her unconstrained answers on issue k) and the relative importance of the sincerity motive compared to the signaling motive (1 − z_ik), we can show that QVSR not only better captures the preference intensity, it also better measures x_ik (the “sincere” answer to the standard Likert question). To understand why, remember that when a signaling motive is present, answers under Likert only imperfectly capture x_ik, since answers under Likert (the x̄^L_ik) are typically a weighted average between the sincere answer (the x_ik parameter) and some signaling target (see (A2)). The lower the strength of the sincerity motive compared to the signaling/partisan motive, the further away the Likert answers will be from the sincere answers. Now, remember that under QVSR, the individual allocates relatively more credits to issues for which w_ik is large (see (A6)). If one assumes such a positive correlation between w_ik and the relative strength of the sincerity motive, the model then predicts that under QVSR, the individual will allocate more credits to issues on which the unconstrained answers (the x̄^L_ik) are closer to the sincere answers (x_ik), thus better capturing the “true” x_ik parameter.

Comparison between QVSR and Likert+ Compared to the Likert+ instrument, QVSR relies on only one question per issue instead of two, which might be seen as an advantage or a drawback depending on how much one values parsimony. Under Likert+, one gets x̄^L_ik (the same response to the “Favor/oppose” question as under standard Likert, see (A3)), plus an imperfect report about preference intensity, β̄^L+_ik (see (A4)). Under QVSR, one gets x̄^QVSR_ik, which by (A6) incorporates information about both x̄^L_ik and w_ik.

The comparison between Likert+ and QVSR regarding which instrument best measures policy

preferences is ambiguous.



Indeed, one may think at first sight that with two questions instead of one, Likert+ collects

more information than QVSR.22

The model leads us to qualify this claim. When one explicitly takes into account the fact that respondents' answers to surveys are likely to be only partially sincere, the model reveals a potential advantage of QVSR compared to Likert+. Indeed, one may suspect that in a polarized environment, answers to the “How important?” question are likely to be only very noisy measures of the true intensity parameter. In such a context, an advantage of QVSR is to bring a “revealed preference approach” to the measurement of preference intensity. Instead of asking explicitly and openly how important an issue is, QVSR uses the fact that it forces trade-offs across issues to indirectly measure their importance. In a highly polarized environment where partisan identities are very salient, answers to the Likert+ “How important” question might not be that informative, and the revealed preference approach of QVSR, where individuals actually have to choose across issues rather than just report whether an issue is important or not, might be superior. To conclude, the comparison between Likert+ and QVSR in terms of ability to recover meaningful information about preference intensity is ambiguous, and which performs best is an empirical question.

22 Note that the fact that Likert+ relies on two questions per issue instead of one might be seen as a drawback rather than an advantage, depending on how much one values parsimony.
