TRANSCRIPT
Why We Disagree and Why It Matters
Item Type text; Electronic Dissertation
Authors Ballantyne, W Nathan
Publisher The University of Arizona.
Rights Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Download date 28/01/2021 15:22:53
Link to Item http://hdl.handle.net/10150/204303
WHY WE DISAGREE AND WHY IT MATTERS
by
W. Nathan Ballantyne
A Dissertation Submitted to the Faculty of the
DEPARTMENT OF PHILOSOPHY
In Partial Fulfillment of the Requirements
For the Degree of
DOCTOR OF PHILOSOPHY
In the Graduate College
THE UNIVERSITY OF ARIZONA
2011
Copyright (c) W. Nathan Ballantyne 2011
THE UNIVERSITY OF ARIZONA
GRADUATE COLLEGE
As members of the Dissertation Committee, we certify that we have read the dissertation prepared by W. Nathan Ballantyne entitled Why We Disagree and Why It Matters and recommend that it be accepted as fulfilling the dissertation requirement for the Degree of Doctor of Philosophy.
Stewart Cohen    Date: 10 December 2010
Terry Horgan    Date: 10 December 2010
Keith Lehrer    Date: 10 December 2010
Date: 10 December 2010
Date: 10 December 2010
Final approval and acceptance of this dissertation is contingent upon the candidate's submission of the final copies of the dissertation to the Graduate College.

I hereby certify that I have read this dissertation prepared under my direction and recommend that it be accepted as fulfilling the dissertation requirement.

Dissertation Director: Stewart Cohen    Date: 10 December 2010
STATEMENT BY AUTHOR
This dissertation has been submitted in partial fulfillment of requirements for an advanced degree at the University of Arizona and is deposited in the University Library to be made available to borrowers under rules of the Library.

Brief quotations from this dissertation are allowable without special permission, provided that accurate acknowledgment of source is made. Requests for permission for extended quotation from or reproduction of this manuscript in whole or in part may be granted by the copyright holder.
SIGNED: W. Nathan Ballantyne
ACKNOWLEDGEMENTS
The fact that this dissertation is done is a testament to the kindness and support of many people. Blaise Pascal's words strike me as exactly right: "Certain authors, speaking of their works, say: 'My book,' 'My commentary,' 'My history,' and so on. They resemble middle-class people who have a house of their own and always have 'My house' on their tongue. They would do better to say: 'Our book,' 'Our commentary,' 'Our history,' and so on, because there is in them usually more of other people's than their own."

I will begin a career in teaching and research having learned much from each one of my committee members. Stew Cohen's insightful critical comments were matched by his kindness and sense of humor. Our philosophical discussions have been one high point of my time in Arizona. Keith Lehrer has commented upon nearly everything I've written since starting graduate school, gently pushing me to improve; his commitment to my education has been remarkable. Keith's creative energy has inspired me as well. And Terry Horgan impressed me with his collaborative approach to philosophy. We co-taught an epistemology course in spring 2007, while I was preparing for area exams; it featured a unit on peer disagreement and rational uniqueness. Some of our discussions then led me to embark on this project. Terry brought curiosity and sharp questions to drafts of this material, and that helped me keep writing until I was done.

For advice, practical help, or camaraderie, I want to mention several friends and colleagues: Alex Arnold, E.J. Coffman, Thomas Christiano, Thomas Crisp, Will Dyer, Ian Evans, Chris Freiman, Debbie Jackson, Jerry Gaus, Michael Gill, Paul Gooch, Nathan King, Peter King, Matt Mikitka, Alvin Plantinga, Alex Plato, the late John Pollock, Josh Rasmussen, David Schmidtz, Gayle Seigel, Ari Shapiro at xoom juice, Alex Skiles, Mark Timmons, and Craig Warmke.

Conversations with my dear friend Ben Wilson have left a significant mark on these pages. Though working in different fields, we have shared the satisfactions and frustrations of academic life.

My soon-to-be wife Jenna was endlessly patient, encouraging, and helpful while I completed this dissertation and looked for a teaching position. Her wise feedback has considerably improved my thinking and writing, and her behind-the-scenes efforts ensured I was ready for job interviews. She has also brought much hope and laughter to my life.

This work somehow represents the culmination of my years as a student. My parents were there in the beginning and at each step along the way. It is hard to know how to thank them properly for all they have done. For now, I dedicate this work to them.
DEDICATION
For my parents
TABLE OF CONTENTS
ABSTRACT

CHAPTER 1 INTRODUCTION

CHAPTER 2 THE VARIABILITY PROBLEM
    2.1 What's the Problem?
    2.2 The Symmetry Argument
    2.3 The Arbitrariness Argument
    2.4 Conclusion

CHAPTER 3 EVIDENCE WE DON'T HAVE
    3.1 The Question
    3.2 Evidence, Defeat, and Dismissing Counterevidence
    3.3 A Puzzle about Dismissing Evidence
    3.4 The Argument
    3.5 Objections and Replies
    3.6 Conclusion

CHAPTER 4 REASONING ABOUT BIASES
    4.1 The Question
    4.2 The Statistical Syllogism Argument
    4.3 The Unclarity Argument
    4.4 The Irrational Accommodation Argument
    4.5 Conclusion

CHAPTER 5 GENES AND ATTITUDES
    5.1 From Genes to Attitudes
    5.2 Non-shared Perspectives
    5.3 The Alternative Explanation Argument
    5.4 The Arbitrary Perspective Argument
    5.5 Conclusion

CHAPTER 6 KNOCKDOWN ARGUMENTS
    6.1 What is a Knockdown Argument?
    6.2 Distinctions
    6.3 The Case Against Knockdown Philosophical Arguments
    6.4 From Non-Philosophical Knockdown Arguments to Philosophical Knockdown Arguments
    6.5 Conclusion

REFERENCES
ABSTRACT
This dissertation investigates whether controversial beliefs concerning a range of
topics can be rational or reasonable. It proceeds by developing a series of challenges
to the putative rationality of beliefs about such topics. In chapter 1, the project is
introduced and motivated. Several challenges are set out in chapters 2–5. Finally,
a thought behind one solution to these challenges is examined in chapter 6.
CHAPTER 1
INTRODUCTION
“A man’s bewilderment is the measure of his wisdom.”
– Nathaniel Hawthorne (The House of the Seven Gables)
Is it rational or reasonable to maintain controversial beliefs? Or is there something
epistemically unacceptable in doing so, something irrational or unreasonable? I shall
argue in this dissertation that belief in disputed matters is often misplaced.
We are creatures who quite naturally clash in our judgments. Controversy
abounds in everyday life. We divide over many issues—foreign policy, religion, so-
cial justice, the employment practices of Walmart, and so on. Life in the academic
world, even when at a distance from everyday life, is also characterized by conflict;
shared expertise and information do not always produce consensus. Scholarly
work in many fields would look entirely different without it, and some fields might
cease to exist. Finding disagreement often alerts us to the fact that work remains to
be done—expanding the pool of evidence, developing stronger arguments, dispelling
confusions. Disagreement makes for horse races. It suggests we have not reached
the end of inquiry.
Reflecting on our own disagreements can tell us about the rationality of our
disputed attitudes. This dissertation develops a new series of challenges to our
controversial attitudes that arise when we carefully attend to the kind of factors
that produce disagreement among us. I will focus on disagreements where we sur-
mise there is some truth in question, where at most one party to the dispute can
be correct. Taken together, the arguments I will offer lead us toward intellectual
humility in light of facts about our cultural and historical background, our limited
evidential situation, our systematic tendency to err, our idiosyncratic genetic predis-
positions, and our knowledge of informed and intelligent others who disagree. I will
use this introduction to briefly outline each chapter and explain how the arguments
fit into discussions of skepticism. I will begin by saying how I have come to write
on disagreement.
My thinking about disagreement was sparked by a growing literature on “peer
disagreement”, much of which appeared in print or manuscript form as I began
graduate study.1 Papers by Peter van Inwagen [1996], Gideon Rosen [2001], Richard
Feldman ([2006] and [2007]), Thomas Kelly [2005], and David Christensen [2007]
helped to draw discussion to the following question: supposing we learn that we
disagree with someone we know is equally well-informed and equally intelligent—
in a word, an epistemic peer—can we rationally retain our view? Unsurprisingly,
there has been disagreement over the right response and the implications of different
responses.
Considerable discussion of the so-called “epistemology of disagreement” has fol-
lowed in the wake of these initial papers.2 Though I have watched with interest as
this literature has grown, it seems inadequate to address my main questions. Is it ra-
tional or reasonable to maintain controversial beliefs, or is there something epistem-
ically unacceptable in doing so? Peer disagreement is one type of disagreement—an
idealized, unusual one at that, as theorists have noted. My inquiry here attempts to
1 Earlier discussions of related issues are found in Lehrer [1976], Gutting [1982], and Wolterstorff
[1988], as well as discussions of religious pluralism and diversity in the philosophy of religion, such
as Plantinga [2000]; see King [2008] for a useful overview.
2 See the papers collected in Feldman and Warfield [2010], which was released in the days leading
up to the submission of the present work. See also Bergmann [2009], Bogardus [2009], Christensen
[forthcoming], Cohen [ms], King [forthcoming], Lackey [2010], Lehrer [forthcoming], Thune [2009],
and Sosa [2010]. Christensen [2009] offers a nice introduction to the central issues. Not all recent
work on disagreement features peers. Discussion of disagreement with superiors is found in Frances
[2010]; issues concerning deference to majority testimony are found in Pettit [2006].
address a wider range of disagreements. Our controversial beliefs include those re-
jected by known peers, but also many other beliefs that are not subject to recognized
peer disagreement. Discussions of peer disagreement presently have limited value
when figuring out what we should do in most real-world (or “non-peer”) disagree-
ments (see Christensen [2009: 765], King [forthcoming], and Feldman and Warfield
[2010: 2–3]). Though I won’t be able to argue that out in detail, a few points are
in order.
(a) Since the standard notion of epistemic peers involves the idealization of
having equal evidence and equal cognitive ability, lessons regarding how peers should
react to disagreement will need to be applied to non-peer disagreements. It remains
an open question how that can be done, if at all.3 (b) We are not typically involved
in simple ‘one-on-one’ disagreements, as is built into the typical peer disagreement
idealization. As Christensen notes, real-life conflicts feature dimensions such as the
number of people on opposing sides of an issue [2009: 765]. The discussion of peer
disagreement has yet to model such dimensions, though, and so once again we find
a gap between any lessons regarding peers and applications to non-peer cases. (c)
In idealized cases of peer disagreement, it is usually stipulated what the relevant
epistemic factors are. Standard cases—Feldman’s “quad” example [2006: 223] and
Elga’s “horse race” example [2007: 486], for instance—settle the matter for us: such
cases presuppose that we know or reasonably believe we have peers. But in real-
world situations, we are sometimes unsure what epistemic features of a situation are
important, what would make for peerhood in the first place.4
For all that, theorists may eventually show that discussions of peer disagreement
can inform our responses to a wide range of real-world conflicts. But why wait for
further developments? In my view, there are important lessons to learn about
3 See Feldman and Warfield [2010: 2] and King [forthcoming].
4 Christensen [2009: 765] mentions this point.
the rational significance of disagreement without focusing on peers.5 This disserta-
tion sets a new course. It is an attempt to gain a more complete understanding of
the rational significance of our actual, real-world disagreements.6
The general strategy driving the project comes in two stages. First, when we
reflect on controversy between ourselves and others, explanatory questions often
spring to mind. “Why do we disagree?” “How have we come to conflicting conclu-
sions?” “What is it that makes us think differently?” Answers are easy to come by.
Once these questions arise, we quite naturally try to account for the divergence of
attitudes in the following terms: the parties possess different bodies of evidence; one
party is more reliable than the other; the parties have different cognitive or intellec-
tual abilities; the parties have different personal histories or cultural backgrounds;
one party is biased. These ideas allow us to tell a story that explains how we’ve
come to be in the dispute. That is how we get the first part of this dissertation’s
title: Why We Disagree.
5 In "Peers and Other Minds", I argue that conciliationism concerning peer disagreement—
the view on which learning you have a disagreeing peer calls for a revision in your attitude (see
Christensen [2009])—is no threat to many of our disputed attitudes. In order for the conciliationist
argument to work, you need to have reasonable beliefs about another thinker's mental life. Even
supposing we know what it takes to satisfy peerhood in some disagreement, we'll often have no
idea whether another thinker in fact meets the standard. Thus, we sometimes do well to avoid
skepticism about our view by defending a kind of skepticism about other minds—namely, one on
which we don't know enough about other minds to have reasonable beliefs about whether they are
our peers.
6 Although many discussions in the Renaissance and early modern periods were driven by facts
about disagreement, the "peerhood" idealization that's so central to contemporary work on dis-
agreement apparently did not come into play. Yet facts concerning disagreement were thought to
have significant import. (Isn't it hard to imagine Descartes embarking on his philosophical project
without the background of conflict and diversity? See his Discourse on the Method.) In certain
ways, the present work has affinities to earlier disagreement-fueled projects.
It was by asking explanatory questions about typical disagreements concerning
philosophy, politics, and religion that I was led to write this present work.7 But
the etiology of our controversial beliefs is not the central concern here. Suppose
that we ask and answer some explanatory questions regarding the controversies
we find ourselves in. For instance, we gain reason to think that a disagreement
between us, about politics or metaphysics, is due to the different bodies of evidence
that the two of us possess, or our different intellectual abilities, or our different
backgrounds. What might follow for the rationality of our disputed views, once
we become cognizant of various factors that have led to conflict? That question
animates the second part of this dissertation’s title: Why It Matters.
Why do we disagree and why does it matter? Here is my answer in a nutshell: we
disagree because we are affected in different ways by various causal and evidential
factors, and when we come to think that such factors have led us to disagree,
that matters for us, because we will be led toward intellectual humility. Confidence
in controversial matters is often misplaced once we find out why we disagree.
That, anyway, is the overarching picture of the argument of this dissertation.
Let me now briefly summarize the chapters. Reflecting on our own controversial
beliefs, we’ll often discover the following:
7 I should comment on an increasingly common type of argument against one's opponents that
takes off from claims about the causes of their attitudes. Opponents' attitudes are often 'debunked'
or 'explained away' by identifying the origin of the attitudes and then arguing that the origin has
nothing to do with the truth. Recent days have seen growing popularity for arguments that target
particular ethical and metaethical positions by way of claims about the evolutionary origins of
moral judgment. Arguments along these lines are given by Street [2006], Joyce [2006: ch. 6], and
Greene [2007], among others. Are the arguments I give in the family of debunking arguments?
I can't say, because I am unsure what unites that family. Regardless, my central tactics appear
different from typical debunking arguments—and the range of attitudes targeted seems much wider.
Further discussion of the relationship between what I do here and recent debunking arguments
must wait for another time.
1. These beliefs depend on our personal history such that if our history
had been different, we would have believed something different.
[Chapter 2: “THE VARIABILITY PROBLEM”]
2. They are based on evidence that represents only a limited part of
the total relevant evidence. [Chapter 3: “EVIDENCE WE DON’T
HAVE”]
3. They may be subject to the distorting effects of cognitive biases.
[Chapter 4: “REASONING ABOUT BIASES”]
4. They are influenced by our idiosyncratic, biologically-based dispo-
sitions. [Chapter 5: “GENES AND ATTITUDES”]
I explore how acquiring this information about our beliefs (“higher-order informa-
tion”) impacts the rationality of those beliefs.8 Chapters are devoted to each of the
four kinds of higher-order information; every chapter illuminates, in a different way,
the kinds of complexity and contingency that characterize our disputed convictions.
This higher-order information, I argue, sometimes indicates that our thinking is in
error. The problem is not that our convictions have been shown to be false—it is
that they’re held on an inappropriate basis given what we have learned.
To conclude our discussion, I investigate one type of evidence that would by-
pass the above concerns about disagreement [Chapter 6: “KNOCKDOWN ARGU-
MENTS”]. I contend that there may be “knockdown” arguments for some philo-
sophical theses—arguments that compel assent once we understand the premises.
Though taking disagreement seriously means reducing confidence in many of our
controversial convictions, I allow that we can nonetheless enjoy full confidence in a
wide variety of ordinary, commonsense beliefs as well as those claims supported by
knockdown arguments.
8 For more on higher-order evidence, see Feldman [2005] and Christensen [2010].
My overall conclusion is decidedly optimistic. Properly accommodating higher-
order information improves our intellectual lives.9 We can increase our stock of
rational beliefs by reducing confidence in some controversial ones, and thereby give
due respect to evidence of error. Confronted by disagreement, we gain motivation to
seek stronger grounds for our convictions. Information about disagreement exhibits
for us the limits of rational confidence and knowledge, revealing how rationality
depends not only on what we think of ourselves but also on what we think of others.
The reader may have noticed I have so far avoided using the "s" word in de-
scribing the project. Though I would prefer not to call these arguments "skeptical",
I can't hold back. To keep misunderstanding at bay, however, I must say more
about how I will use that term. There is indeed the suggestion of a kind of skepticism
in these pages. For instance, the reader may occasionally notice affinities between
what I say and ideas associated with the ancient Pyrrhonians and an assortment
of later thinkers such as Montaigne, Descartes, and Bayle.10 I invite comparison
between these older skeptical ideas and my own.
But I can’t often say the same when it comes to more recent skeptical projects.
To see why, review the table of contents of a typical “introduction to epistemol-
ogy” course reader or text. There you’ll find a section on skepticism that details
‘closure-based’ and ‘underdetermination’ arguments, complete with a brain-in-the-
vat scenario. Skepticism is normally treated like a puzzle or paradox—not often
enough as a genuine threat to our judgments. Skepticism isn’t usually presented as
a sensible position to adopt. (Don’t epistemology professors sometimes say, when
asked in the classroom who is a skeptic, that there are none?) That is not to say
that puzzles and paradoxes are unimportant. They are valuable for sorting out
9 Compare to Christensen [2007: 216] on the "good news" of epistemic self-improvement in light
of peer disagreement.
10 For useful discussion of ancient Pyrrhonian skepticism, see Annas and Barnes [1985]. Popkin
[2003] offers an overview of the reception and development of skeptical ideas in the late Renaissance
and early modern period; also see the essays in Neto and Popkin [2004].
our thoughts about knowledge, closure, knowledge ascriptions, and so on. But for
whatever reason, puzzles and paradoxes rarely threaten us as believers. Our philo-
sophical theories about knowledge and rationality may be refined through contact
with puzzles and paradoxes, but the vast majority of our judgments will likely be
unchanged. (It is worth noting that Feldman [2006: 217] investigates “a kind of con-
tingent real-world” skepticism that is driven by our awareness of peer disagreement.
He presumably means to contrast his disagreement-fueled skepticism with standard
modal skeptical arguments.)
With some reluctance, then, I offer what we might call skeptical arguments.
I will occasionally use the “s” word, but I remain uneasy about the misleading
associations it may conjure up. Part of my discomfort here owes something to the
limits of the English language. We have only one word to cover much philosophical
ground usually called “skepticism”. (We also have “doubt” and “incredulity”, but
neither sounds very good as an 'ism'.) If we look carefully, however, we'll realize
that there are many forms of skepticism, many skeptical projects. The project I
develop here is not typical and so it is perhaps unsurprising that I am unhappy to
call it by the usual name.
One standard distinction is between global and local skepticism (see, e.g., Feld-
man [2003: 109] and Fumerton [2006: 118ff]). Global skepticism has it that we
don’t (or can’t) have some or other “positive epistemic status.”11 The idea is that
all of our beliefs do not amount to knowledge, do not have whatever is required
to be rational or justified, or otherwise lack some property of positive epistemic
appraisal. For instance, Keith Lehrer [1971] and Peter Unger [1971] once argued
that we never know anything. Timothy Williamson contends that our judgments
11 There are many positive epistemic statuses. Many epistemologists have fallen into the habit
of using the label "epistemic justification" to stand in for many of these different statuses. See
Cohen [1995] and Alston [1993] for critical discussion of the place of the notion of "justification"
in epistemology. Leaving justification shelved, in what follows I shall instead employ what I take
to be ordinary, non-technical notions of rationality and reasonableness.
are never ‘luminous’ [2000: ch. 4]—that is, there is no (non-trivial) mental state
such that being in that state suffices for one to be in a position to know that one is
in it. Local skepticism, on the other hand, is the thought that we don’t (or can’t)
come by some particular positive epistemic status in a specific domain. Arguments
are sometimes given for moral skepticism (Sinnott-Armstrong [2006]) and religious
skepticism (Schellenberg [2007]), among other domains.
The skeptical arguments I offer here are neither global nor local. For one, the
arguments often take it for granted that some of our beliefs are perfectly rational; so
these arguments imply that global skepticism regarding rational belief is false. And
for all I say, some of our beliefs in any particular domain might enjoy any positive
epistemic status; the arguments thus don’t amount to local skepticism.
How then shall we characterize the arguments I will present? One standard map
of different skepticisms does not seem to have a place for these arguments. Yet,
something about them does seem skeptical. The arguments conclude that some of
our beliefs fail to be rational and (supposing rationality is a necessary condition
on knowledge) are not known. The arguments target beliefs regarding a range
of controversial topics—politics, philosophy, environmental law, history of Soviet
biological sciences, and so on. Saying that these arguments give us a “controversial-
belief skepticism” won’t do: some of our controversial beliefs can be rational, I
think. Perhaps the arguments amount to “real-world” skepticism, in some sense (cf.
Feldman [2006: 217]). If that means the arguments make it clear that we in fact
have good reasons to give up some of our beliefs, and that doubting particular beliefs
is a perfectly sensible stance to adopt, then I wholeheartedly approve. That is a fair
description of what I am up to here. The point of skepticism, as Marx might have
said, is to change the world. As I see it, that is an aspiration the skeptic would do
well to embrace.
All of that said, I still worry that it is misleading to call the position on offer
“skeptical”. There are important ways in which it decidedly anti-skeptical—for
18
example, insofar as traditional philosophical arguments for skepticism undermine
commonsense, I say we should do our best to reject them. But it may be that there
is no more fitting epithet for the project than the “s” word. Nevertheless, before
concluding, let me propose another potential characterization of what I’m up to
here.12
Aside from evidence and arguments, there are many factors that lead us to our
particular opinions. This dissertation, as I have promised, can help us improve our
intellectual lives. It helps by making us aware of ourselves as believers—aware of the
kind of factors that influence our judgments and lead us to disagree. Our natural
mode as believers is gluttonous: unchecked, we hoard vast quantities of beliefs. We
continually gather up opinions about the world, ourselves, others, what’s right or
wrong, and so on. We are doxastic pack rats. We may or may not live up to the name
Homo sapiens, but we are, without any doubt, Homo opinator.13 The arguments I
offer help us curb our native doxastic appetites by revealing how easy and typical it
is for our controversial views to be less than fully rational. When we come to realize
that some of our disputed beliefs have been influenced by our culture and history,
our limited evidence, our natural tendency to err, and our genetic predispositions,
we often do well to cast a doubtful eye on them. The arguments serve to highlight a
rational ideal of doxastic frugality by leading us to see precisely where to hold back
in our believings.
No matter whether the reader sees my project as promoting some sort of skepti-
cism or doxastic frugality, be sure that I intend to argue for something more plausible
than puzzling. I offer these ideas as something that we might realistically take up. In
my view, reflecting on our disagreements holds valuable lessons for our intellectual
lives.
12 Conversation with Chris Freiman was beneficial here.
13 In looking for a Latin translation of "opinionated man", I wrote to Peter King, who explained
there is not, for various reasons, a good parallel. Homo opinator is "human being given to having
an opinion".
CHAPTER 2
THE VARIABILITY PROBLEM
“I know no way in which a writer may more fittingly introduce his work to the
public than by giving a brief account of who and what he is. By this means some
of the blame for what he has done is very properly shifted to the extenuating
circumstances of his life.” – Stephen Leacock (Sunshine Sketches of a Little Town)
This essay explores a pervasive and disconcerting worry about intellectual life: our
controversial beliefs regarding morals, politics, religion, and philosophy depend on
facts about our personal history and culture. There is something accidental or
arbitrary about having that particular history or culture, however, and this strongly
suggests that those beliefs are epistemically problematic. Variability is the word I
will give to the idea that our disputed convictions vary with different backgrounds.
Many of us have entertained worries about variability. Such worries may shape
our intellectual outlook, shading particular matters with skepticism. Or variability
may remain a source of disquiet, a threat to our cherished convictions. Either
way, variability has not received sustained philosophical attention. Now and again,
thinkers point toward it, but pointing is typically all we get—worked-out arguments
based on variability are uncommon. My main concern here is to introduce and also
defend some well-developed variability arguments. In a sense, what I will call the
“variability problem” has a double meaning: it challenges our beliefs, all right, but
there’s also some difficulty in seeing what the problem really is.
In section 2.1, I will introduce the variability problem by drawing on some historical and
contemporary sources. It is really a cluster of claims, which taken collectively seem
to recommend doubt or skepticism regarding some disputed convictions. It remains
uncertain, though, just how to spell out or arrange these ideas and thereby turn
them into a good argument. So, in sections 2.2 and 2.3, I shall develop a pair of new arguments,
each one of which I argue is plausible. Both raise a genuine and novel challenge
to the rationality of particular beliefs. I will conclude in section 2.4 by noting some critical
differences between these two arguments.
2.1 What’s the Problem?
Common to nearly all worries about variability is that they are left under-developed.1
Let me briefly summarize three manifestations of the worry.
Going back to the ancient world, we find claims regarding variability in works
from Xenophanes to Sextus.2 Philo of Alexandria confesses that he won’t be sur-
prised if the “labile and heterogeneous mob” believes “whatever has once been
handed down to them,” whether by parents, masters, or culture. Philo appears
vexed, however, by the fact that philosophers, “who profess to hunt down the clear
and the true in things, are divided into brigades and platoons and set down dogmas
that are discordant.” Cultural or social factors seem to inculcate different dogmas in
different groups—and philosophers aren't exempt. This leads Philo to apply a standard Pyrrhonian 'mode', resulting in the cancelling out of opposites and suspension of judgment.3
From the Renaissance to the modern period, we find Montaigne, Descartes, and
Locke concerned with variability, each appreciating the influence of culture on one’s
[1] A few exceptions: Alvin Plantinga [2000], G.A. Cohen [2000: ch. 1], George Sher [2001], Gideon Rosen [2001], and Peter van Inwagen [1995]. But even these arguments are not detailed enough to tell us whether the variability problem is really a problem.

[2] See Julia Annas & Jonathan Barnes [1985: ch. 13]. Celsus, the early critic of Christianity, may have raised the variability problem, too; his work (177–178 CE) survives in fragmentary form in Origen's Contra Celsum.

[3] See Annas & Barnes [1985: 155–156].
thinking.4 And a famous passage from Mill’s On Liberty vividly captures a thought
about variability:
And the world, to each individual, means the part of it with which he comes
in contact; his party, his sect, his church, his class of society. . . Nor is his
faith in this collective authority at all shaken by his being aware that other
ages, countries, sects, churches, classes, and parties have thought, and even
now think, the exact reverse. [...] [I]t never troubles him that mere accident
has decided which of these numerous worlds is the object of his reliance, and
that the same causes which make him a Churchman in London, would have
made him a Buddhist or a Confucian in Pekin. [1859: ch. 2, para. 3]
Mill’s idea appears to be that some of our convictions have cultural or historical
causes, and that those causes are a matter of “mere accident”. Reflecting on the
way certain beliefs are influenced by such causes, our “faith” in our received views
should be “shaken” or “troubled”.5
And variability has recently made an appearance within debates in philosophy of
religion.6 For instance, Peter van Inwagen [1996] asserts that the Christian Church is
God’s “unique instrument of salvation”. Van Inwagen then entertains the following
objection (put in an interlocutor's mouth): "Well, isn't it fortunate for you that you just happen to be a member of this 'unique instrument of salvation'. I suppose you
realize that if you had been raised among Muslims, you would make similar claims
[4] For a survey of Renaissance and early modern skepticism ("from Savonarola to Bayle"), see Popkin [2003]; variability played a serious role here. For more on skepticism in this period, see the papers in Neto & Popkin [2004]. In the mediæval period, I've found mention of variability in Peter Abelard's Dialogue between a Jew, a Christian, and a Philosopher—a work which, uncharacteristically for its time, takes seriously the fact of intellectual diversity. Rosen [2001: 84] (who gives credit to Stephen Menn for the lead) notes a fascinating passage on variability in Al-Ghazali.

[5] T.J. Mawson [2009] discusses Mill's argument.

[6] See Plantinga [2000]. For an overview of recent work on religious pluralism, see Nathan King [2008].
for Islam?” [1996: 237–238]. Van Inwagen concurs: if he had grown up in Mecca,
he’d likely be a devout Muslim. But he isn’t sure what follows from the observation.
He notes that the objection, if it shows anything at all, can hardly be confined to
religion; it spills over to topics like politics, too:
Tell the Marxist or the liberal or the Burkean conservative that if only he had
been raised in Nazi Germany he would probably have belonged to the Hitler
Youth, and he will answer that he is aware of this elementary fact, and ask
what your point is. No one I know of supposes that the undoubted fact that
one’s adherence to a system of political thought and action is conditioned
by one’s upbringing is a reason for doubting that the political system one
favors is—if not the uniquely "correct" one—clearly and markedly superior to its
available rivals. [1995: 238]
Of course, if the objection against van Inwagen's religious belief works, it
works against political belief, too. Van Inwagen claims that since the latter objection
is not compelling, neither is the former.
But is it true there’s no problem with variability? Or is it a good reason for
doubt about controversial convictions? I shall say more shortly. For now, consider
some salient features of worries about variability that are revealed by the above
sources.
1. The beliefs challenged by variability typically concern controversial
topics like politics, morals, religion, and philosophy, not ordinary
or commonplace topics.
2. These controversial beliefs are connected to various causal factors
such that if one’s background is changed, one’s attitude is changed.
More exactly: if one’s background had differed in certain respects,
then one would have had different beliefs.7
3. The fact that a person has a particular history or background is a contingent matter. The background one gets is at least partly beyond one's control.

4. These beliefs are often connected to factors that are non-epistemic. Growing up in one culture rather than another, for instance, is not usually what we would cite as good evidence or grounds for a belief.

[7] Such counterfactuals must be handled with care. Suppose you confess: "I wouldn't believe that political doctrine if I had grown up outside the West". Although that may sound true, it is critically underspecified. Perhaps in the nearest possible world where you grew up elsewhere, your parents were Foreign Service agents in Sudan, who dutifully raised you to hold that political doctrine. On that reading, then, your statement is false. The counterfactual you wanted concerns a more distant world, where (e.g.) you were snatched from the cradle and had different parents altogether. The moral: state these counterfactuals cautiously. Cf. Plantinga [2000: n. 26].

All of this seems sensible if not clearly true. That is part of what has led many thinkers, from antiquity to the present, to grapple with variability. The variability problem should not be dismissed casually. Initial plausibility to the side, a question remains. What should we conclude? What follows? What is at stake when we consider variability?

Befitting its subject, an argument inspired by variability can take many forms. Here, I'll convert some thoughts concerning variability into two arguments for the conclusion that particular convictions are irrational, in the sense that, on balance, there is reason to give up those convictions.

2.2 The Symmetry Argument

We just reviewed some passages that suggest an argument with the following conclusion:

C: Your belief that p is irrational.
What about premises? There is plausibly a causal link between particular beliefs
and backgrounds. As we’ve seen, that link is such that if your background had
been different in certain respects, then you wouldn’t have accepted everything you
now believe (and you’d have accepted claims you don’t now believe). From here,
there are two general ways to develop the premise. We might say that facts about
variability by themselves imply that certain beliefs are irrational. Alternatively, we
might say that someone’s reasons (or knowledge or beliefs) concerning variability
imply that certain of her beliefs are irrational. The sources we surveyed seem to suggest the latter sort of premise.8 To a first approximation, then, the premise:
P1: You have reason to believe that your belief that p is such that if
your background had differed in certain respects, then you would not
have accepted p.
I use ‘not accept p’ as an umbrella term that covers someone’s either disbelieving
p or withholding judgment on p.9 C doesn’t follow from P1. To make a valid
argument, we can add a conditional as a premise whose antecedent is P1 and whose
consequent is C.
P2: If you have reason to believe that your belief that p is such that
if your background had differed in certain respects, then you would not
have accepted p, then your belief that p is irrational.10
[8] For instance, Mill emphasizes awareness of variability: "Nor is his faith in this collective authority at all shaken by his being aware that other ages, countries, sects, churches, classes, and parties have thought, and even now think, the exact reverse."

[9] Daniel Howard-Snyder helped here.

[10] Plantinga discusses and rejects the following related premise: "If S's religious or philosophical beliefs are such that if S had been born elsewhere or elsewhen, she wouldn't have held them, then those beliefs are produced by unreliable belief-producing mechanisms and hence have no warrant" [2000: 188].
An unrestricted P2 is dubious. Your belief that you are now reading this paper
has the following counterfactual feature: if your background had differed in certain
respects, you would not have accepted it—because you’d be working in the garden,
say. To exclude such cases, let us restrict P2 to cases where the truth value of
p stays fixed across the background changes. Consider it done. Two notes on
the significance of this restriction are in order. First, recall that we are mainly
interested in beliefs about controversial topics like politics, morals, and philosophy.
These beliefs are often such that if your background had differed in particular respects, then you would not have accepted p, and p would still have had the same truth value. The restriction does not prevent us from talking about those kinds of convictions. Second, it follows from the restriction that the variability argument we
are developing cannot lead to global skepticism; some beliefs, like your belief that
you are reading this paper, won’t be touched.11
Although P2 needs refinement, notice how it is supported up front by a case:
COIN FLIP. Paul believes that p is true. McCoy tells Paul a story about
a coin. The coin is not common currency—it has unique powers. If it
lands heads, someone will believe p; and if it lands tails, someone won’t
accept p. Then McCoy, who Paul sensibly thinks is a reliable testifier,
continues: “What if the outcome of the coin toss solely caused you to
believe as you do, Paul?” McCoy reveals the coin in an open hand. Paul
has reason to believe this coin affected his believing that p.
The natural reaction here is that if Paul has a reason to think the coin toss is the
sole cause—the complete explanation—of his beliefs, then his belief is not rational.
Learning that you believe something because of a coin flip and continuing to believe
isn’t rational.12 Notice how COIN FLIP supports P2. When we consider that
11Conversation with Ian Evans and Sydney Penner was helpful here.12Since we are keeping fixed p’s truth value across different backgrounds, we’ll assume that the
believed proposition is not something about the coin itself—e.g., that the coin landed heads.
26
case, we judge that once Paul appreciates his situation, his belief is irrational. We
think that the consequent of P2 is true. What’s more, we think the antecedent
of the conditional is satisfied. If P2’s antecedent is true, then we have evidence
that non-epistemically relevant features of Paul’s background play a role in causally
determining his belief. Careful reflection on COIN FLIP can make it quite natural
to generalize to (some unrestricted version of) P2.
P1, P2, and C are a first approximation of one variability argument. The task
of refinement comes next.
As it stands, even the restricted version of P2 faces a trio of counterexamples. In
these cases, though P2’s antecedent is satisfied, it is plausible to deny its consequent
is true. Let me state them in rapid succession before offering the necessary fixes. (a)
Suppose you gain reason to think you wouldn’t have believed p if your background
had differed because, in such an event, you would have lacked the evidence you now
have for p. (b) Suppose you’ve reason to take it that you wouldn’t have believed p
if your background had differed because you would have lacked the cognitive skills
relevant to appropriately believing p that you now have.13 (c) And suppose you
have reason to think that you wouldn’t have accepted p if your background had
differed because you wouldn’t have made use of the evidence and cognitive skills
you now have.14 On (a) through (c), it seems that your belief could be perfectly
rational—yet P2 implies that your belief is irrational.
[13] More carefully: suppose you've reason to think that in close worlds the only circumstances in which you wouldn't have accepted p include some cognitive deficit that's relevant to rationally believing p.

[14] More carefully: suppose you've reason to think that in close worlds the only circumstances in which you wouldn't have accepted p include your failure to use the relevant evidence and cognitive skills.

To repair P2 in light of (a) through (c), I will stipulate three further restrictions. A fixed version of P2 must be such that, for its antecedent to be satisfied, you have reason to believe each of the following: (d) that you and your dissenting counterfactual self have the same (total) evidence for p that you now have; (e) that you and your counterfactual self have the same cognitive skills and/or virtues relevant to rationally believing p; and (f) that you and your counterfactual self both utilize the same relevant evidence and cognitive skills. The stipulations in (d) through (f) allow us to refine P2 as follows:
P2a: If you have reason to believe that your belief that p is such that
if your background had differed in certain respects, then you would not
have accepted p, even though you would have used the same evidence
for p and the cognitive skills relevant to appropriately believing p that
you actually used, then your belief that p is irrational.
More refinement is surely needed, but I’ll let it be for now. There are two
premises, P2a and now this one:
P1a: You have reason to believe that your belief that p is such that if
your background had differed in certain respects, then you would not
have accepted p, even though you would have used the same evidence
for p and the cognitive skills relevant to appropriately believing p that
you actually used.
P1a and P2a together entail C. The argument was arrived at by gradually ruling
out relevant epistemic differences between you and your counterfactual self, and thus
guaranteeing a kind of symmetry. Naturally enough, I will refer to P1a, P2a, and C
as the Symmetry Argument.
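Stripped to its logical form, the refined argument is a simple modus ponens. The following schematic rendering is mine, not the dissertation's; A abbreviates the shared antecedent of P1a and P2a:

```latex
% A := you have reason to believe that, had your background differed in
%      certain respects, you would not have accepted p, even though you
%      would have used the same relevant evidence and cognitive skills.
% R(B_p) := your belief that p is rational.
\begin{align*}
\text{P1a:} \quad & A\\
\text{P2a:} \quad & A \rightarrow \lnot R(B_p)\\
\text{C:}   \quad & \lnot R(B_p) \qquad \text{(by modus ponens)}
\end{align*}
```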
With the Symmetry Argument on the table, we can discuss the premises. Why
think that P2a is true? A case like COIN FLIP may be devised to lend strong
support to P2a. (I’ll leave such a case shelved for now; it is clear enough how it
will go.) So why think that P1a is true with respect to any of our beliefs? Notice
that P1a has been strengthened by assuming someone has reason to think that he
and who-he-would-have-been both possess the same evidence and cognitive skills.
The salient question, then: with respect to some proposition, do we reasonably take
ourselves to have disagreeing counterfactual selves who are our equals with respect to
the relevant evidence and cognitive skills? Insofar as we do, skepticism looms. (Note
an important difference between this variability argument and “peer disagreement”
arguments. Advocates of the latter typically say that only actual disagreement is
epistemically significant; see Christensen [2007: 208], for instance. The Symmetry
Argument gets off the ground with merely possible disagreement.)
I will offer two reasons for thinking that philosophers may plausibly take P1a to be true of some of their philosophical beliefs. The first starts with a case
that is—or, with enough story-telling, will be—analogous to the situation of some
philosophers. (It will be obvious how we could extend this case to controversial
beliefs about other topics.)
Green must decide where to attend grad school. He tosses a fair coin
to decide between two options. If the coin lands heads, Green will at-
tend Brown; if tails, Arizona. No matter whether the coin lands heads
or tails, a few years hence Green and who-he-would-have-been-had-the-
coin-landed-otherwise will have the same relevant evidence and cognitive
skills with respect to certain theses. Years later, in a fit of nostalgia,
Green remembers that fateful coin flip. He believes that his belief that p
is such that if that coin had landed otherwise, he wouldn’t have accepted
p, even though he would have had the same evidence and cognitive skills
he now has.15
At least arguably, Green’s situation is analogous to the situation of many philoso-
phers with respect to some of their philosophical beliefs. But then those philosophers
should accept P1a.
[15] Cf. Cohen [2000: 16–18] on his choice for graduate study between Oxford and Harvard. White [2005] discusses a similar case.
A second reason for philosophers to think that P1a plausibly applies to them
is found in actual cases of disagreement. Some philosophers claim to have reason
to take themselves to have equals with respect to evidence and cognitive skills.16
For example, David Christensen proposes that in philosophy “the parties to the
disputes are fairly often epistemic peers” [2007: 215, emphasis added], by which
he means philosophers are often equally intelligent, informed, and rational. And
Richard Feldman writes thus: “[C]onsider those cases in which the reasonable thing
to think is that another person, every bit as sensible, serious, and careful as oneself,
has reviewed the same information as oneself and has come to a contrary conclusion
to one’s own” [2006: 235]. Feldman says such cases are common, inside philosophy
and beyond. What follows from the claim that philosophers have reason to believe
they have equals regarding evidence and cognitive skills? Well, by reflecting on
actual disagreement, someone may conclude that if she had the background of her
opponent, she would not have accepted what she now believes.
An example, often given by Peter van Inwagen, will make this clear.17 As it
happens, van Inwagen believes that incompatibilism is true and that David Lewis
denies that thesis (by believing compatibilism). Suppose also that, when it comes to
incompatibilism, van Inwagen reasonably takes Lewis to be his equal with respect to
the relevant evidence and cognitive skills.18 Now, to give the example a new twist,
suppose that van Inwagen imagines how Baby Peter might have been switched in his
cradle with Baby David.19 Then Peter considers how he could have grown up in the
Lewis household—in just such a way as to produce a boy relevantly like a compatibilist such as Lewis. That seems possible. If the cradle switch had happened like that, van Inwagen would likely not have accepted that incompatibilism is true, since he would have had Lewis's background. Yet Lewis's background might have given Baby Peter just the evidence and cognitive skills that the actual van Inwagen and Lewis now have. So, if all of that is true, then van Inwagen should believe P1a. And, reflecting on this story, van Inwagen may come to believe as much. The story deserves a more cautious telling, to be sure. But if you can spin a similar tale about yourself, then P1a holds for you, too.

Thus concludes a sketch of one variability argument. P2a is supported by a natural intuition, and there are reasons to think that P1a is true of philosophers with respect to some of their philosophical beliefs. Of course, if we accept the Symmetry Argument, then some philosophers' beliefs are irrational—perhaps even yours and mine.20

[16] See Thomas Kelly [2005], Richard Feldman [2006], and David Christensen [2007].

[17] On van Inwagen's disagreement with Lewis, over incompatibilism and other matters, see his [1996], [2004], and [2010].

[18] Van Inwagen may deny that; see his [1996: 138], where he opines he has, or possibly has, some "[incommunicable] philosophical insight...that, for all his merits, is denied to Lewis." Of course, such an insight would make for an evidential difference between the two philosophers.

[19] Van Inwagen remarks that "if I and some child born in Cairo or Mecca had been exchanged in our cradles, very likely I should be a devout Muslim. (I'm not so sure about the other child, however. I was not raised a Christian.)" [1995: 238]. Cohen [2000: 8] briefly discusses the case of identical siblings separated at birth. In "Genes and Attitudes", I explore the rational significance of "twin studies" in behavioural genetics for our controversial beliefs.

[20] Space won't permit discussion of the important allegation that variability arguments are self-defeating (see Plantinga [2000]). In my view, once aimed at P2a (as opposed to the conditional premise discussed by Plantinga; see footnote 10 above), this objection either fails or is inconclusive. Briefly: it may fail because, for the objection to work, we must suppose that your belief in P2a, the conditional premise, satisfies its own antecedent, P1a. But, plausibly, even if your background had differed in certain respects, you still would have believed P2a had you used the same relevant evidence and cognitive skills you actually have. So, plausibly, your belief in P2a doesn't satisfy P1a. Alternatively, the objection is inconclusive: even if your belief in P2a is variable and thus irrational to hold, that is consistent with P2a being true. The fact that P2a is irrational to hold may be a paradoxical but acceptable feature of a good variability argument. (Compare Christensen [2009: 763] on the self-defeat of conciliationism regarding peer disagreement.) Similar remarks apply to a self-defeat objection for the argument developed in §3.

Even if the Symmetry Argument doesn't persuade, it leaves us wiser. Indeed, it allows us to glimpse a hard-to-discern element of some worries about variability. We will remember that, initially, the Symmetry Argument's unrefined premises let in the following possibility: you could have differed in attitude with your counterfactual self due to some difference in evidence or cognitive skill. And that possibility
revealed just where refinement was needed: after all, such evidential or cognitive
differences make for relevant epistemic differences between you and your dissenting
counterfactual self. Refinements excluded those potential differences.
These refinements were designed to guarantee a sort of epistemic symmetry be-
tween you and your counterfactual self. And here we come to what is suggested—but
not said—by the Symmetry Argument: by your lights, if you and your counterfac-
tual self have that kind of symmetry and yet disagree, you are open to a charge of
arbitrariness. What is your reason for sticking with your view? Given the symme-
try, you have no advantage. Holding your ground—absent reason to favour it—is
therefore irrational. As I think of it, arbitrariness plays an importantly different role in another variability argument, as I will explain in the next section.
2.3 The Arbitrariness Argument
One way to criticize a belief is to say it is arbitrary. Indeed, saying a belief is
“arbitrary” may carry with it an epistemic disparagement.21 Suppose Blue is faced
with incompatible propositions and knows that each one is equally well-supported
by his (total) evidence. If he comes to believe one proposition over the other, Blue
obviously does something epistemically improper. Rationality repels arbitrariness.22
What we find with Blue is evidential arbitrariness:
For propositions p and not-p, and evidence E, p and not-p are evidentially arbitrary for you with respect to E if E no better supports one proposition than the other for you.

[21] Thanks to Keith Lehrer for a helpful conversation on arbitrariness.

[22] Here marks a plausible difference between rational belief and rational action: it can be sensible to base an action on something arbitrary, like a coin toss before a game to decide who goes first, but the same doesn't go for rationally believing. White [2005] takes such assumptions to favour "rational uniqueness". But not all will accept that assumption; see Lehrer [1983] for some discussion.
If we have reason to think some belief is evidentially arbitrary, then that belief
is irrational for us. To see why, recall that I stipulated in §1 that someone's belief
that p is irrational if she takes the considerations in favour of p and against p to
be such that now believing p truly and not falsely is not best achieved by believing
p. Let us suppose you judge that your belief in p is evidentially arbitrary. Then
believing p isn’t the best way for you to achieve the end of now believing truly and
not falsely: since believing not-p is an equally good way to achieve that end, you
are best off withholding judgment on p.
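The definition and the reasoning just given can be regimented as follows. The support function s is my notation, not the author's; it simply records how well evidence E supports a proposition for you:

```latex
% Evidential arbitrariness: E supports neither p nor not-p better.
\mathrm{EvArb}(p, E) \;\iff\;
  \lnot\big(s(p, E) > s(\lnot p, E)\big) \,\wedge\,
  \lnot\big(s(\lnot p, E) > s(p, E)\big)
% If you judge that EvArb(p, E) holds, then believing p is no better a
% means to now believing truly and not falsely than believing not-p,
% so withholding judgment on p is the recommended attitude.
```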
A variability argument based on evidential arbitrariness would begin with the
following idea: if you and your counterfactual self have the same evidence and
disagree, then your belief is evidentially arbitrary and thus irrational. As we’ve
seen, the Symmetry Argument approximates that idea by requiring that you and
your dissenting counterfactual self use the same relevant evidence (among other
things). There is a catch: building this kind of arbitrariness into a variability
argument will quickly bring us full circle to the Symmetry Argument.
Whatever its merits, the Symmetry Argument failed to capture something crit-
ical about variability. We realize that if life had gone differently, then across a vast
range of situations, and regarding many propositions, we would have different ev-
idence; for the most part, we and our dissenting counterfactual selves would be in
evidentially asymmetrical situations. And with different backgrounds, our cognitive
skills would often enough differ and thus make for relevant asymmetries between
ourselves and who-we-would-have-been. The kind of epistemic symmetry required by the Symmetry Argument doesn't get to the bottom of our worry about variability.
So let us assume that in close worlds23 the only circumstances in which you
wouldn’t have accepted p are those in which you failed to have the same relevant
evidence and cognitive skills you’ve actually used to arrive at your belief that p.
The Symmetry Argument founders on the above assumption. Even so, we may still
worry about variability. In a sense that I will soon explain, the worry just goes ‘up’
a level.
To see why variability should still concern us, imagine that you attended, say,
Cornell for graduate school and ended up with evidence E. If you had gone to St.
Andrews, you would have had some different body of evidence, E*.24 And you know
that a coin flip determined where you attended school. This is a case of variability
that features evidential asymmetry. E seems to be good grounds for believing p;
and your counterfactual self also finds that E* is good grounds for believing not-p.
And we can even assume those grounds in fact make rational p and not-p for you
and your counterfactual self, respectively. What is perplexing is this: you could have
easily had E* and it wasn’t as though you had good reasons to select the path that
brought you to E. Isn’t this worrisome? Doesn’t sticking to your view seem somehow
arbitrary? I think so.
For comparison, consider the following case:
FLIP. You know that if a particular coin lands heads, you will get evi-
dence which will lead you to believe p; and if it lands tails, you will get
different evidence which will lead you to believe not-p. The coin lands
tails and you end up believing not-p.
Though you wind up with a belief that is apparently arbitrary, it has nothing to do with evidential arbitrariness. Instead, FLIP puts on display what we can label causal arbitrariness:

For a proposition p and events (or states of affairs or facts) e1 and e2, p is causally arbitrary for you only if (i) were e1 to obtain, you would believe p, and (ii) were e2 to obtain, you would not accept p.25

[23] The worlds must be close to the actual world because, in discussing the Symmetry Argument, we have kept p's truth value fixed across different backgrounds. I will drop that assumption shortly.

[24] I explore issues raised by our learning that there is evidence we don't have in "Evidence We Don't Have".
Notice that in FLIP the proposition you believe is causally arbitrary for you.
(To simplify expression of this idea, I’ll say your belief is causally arbitrary.) Is your
belief irrational due to this sort of arbitrariness? Not plainly. That’s because—
consistent with having a causally arbitrary belief—you might have reason to think
that tails is more likely than heads to furnish you with a true belief. Another case
will bring this out:
MAJOR. Several years ago you decided to major in Philosophy rather
than Physical Education on the basis of a coin flip. One consequence is
that you end up thinking that modus tollens is a valid argument form.
Yet you might have easily become a P.E. major and, if you had, you
wouldn’t have accepted that modus tollens is valid. In the aftermath,
you realize that if you had become a P.E. major, you wouldn’t have
believed truly about modus tollens.
Your belief that modus tollens is valid is causally arbitrary. If the coin had landed
otherwise and you majored in P.E., you wouldn’t have accepted that modus tollens
is valid; but the coin in fact led you to Philosophy and you’ve come to think that
form of argument is valid. Critically, rationality does not repel causal arbitrariness.
Your belief is by all appearances rational. Lucky for you, the major you picked
reliably leads students to think rightly about modus tollens.
MAJOR tells us that a causally arbitrary belief may be rational. What's the matter with your belief in FLIP? Maybe nothing.

[25] By "you would not accept p" I mean, as above, that you would disbelieve p or withhold on p.

Add another detail to the case: let p be the proposition that the coin landed heads, and not-p the proposition that it did not. If it landed heads, you'd have evidence to believe p; if it landed tails, you'd have evidence to believe not-p. Heads makes it more likely than tails that p is true; tails makes it more likely than heads that not-p is true. Thus, given either outcome, you would have reason to think that the 'up' side of the coin is more likely than the other side to give you a true belief.
The lesson is that information about the relative likelihood of your now be-
lieving truly and not falsely, in the wake of events e1 and e2, may prevent causal
arbitrariness from eliminating rational belief. A case will help to illustrate this idea:
BATH. Evelyn has two thermometers and the same reason for thinking each one is reliable—they came from the same box. When she randomly picks one and dunks it in the bathtub, it reads 41°C. She comes to think that the water is 41°C. A moment later, she adds the second thermometer to the tub and it reads 51°C. She immediately realizes that had she initially tried the second thermometer, she would have believed the water is 51°C. Evelyn nonetheless persists in believing the water is 41°C.
Evelyn’s belief seems to be arbitrary and we will call the sort of arbitrariness in play
causal arbitrariness+:
For a proposition p and events (or states of affairs or facts) e1 and e2, p is causally arbitrary+ for you only if (i) were e1 to obtain, you would believe p; (ii) were e2 to obtain, you would not accept p; and (iii) you lack reason to think e1 makes it more likely than e2 that you now believe truly and not falsely whether p.
(In place of the phrase “now believing truly and not falsely whether p”, I will
occasionally opt for “gets p right”.) Although BATH features causal arbitrariness+,
something else is going on in MAJOR. In that example, you have reason to think e1
(the flip that leads you to major in Philosophy) makes it more likely that you now
believe truly and not falsely that modus tollens is a valid argument form than does
e2 (the flip that leads you to major in P.E.). In BATH, Evelyn has no such reason
to prefer e1 to e2.
Does causal arbitrariness+ eliminate rational belief?26 That is, supposing you
have reason to think your belief is causally arbitrary+, must your belief be irrational?
It seems doubtful and a case shows as much:
LOOKING. A coin has just been flipped. Green is looking at the coin and
believes it landed heads. Let event e1 be the coin landed heads and
Green is looking at it. Green has reason to think that e1 is such that
if it obtains, then he believes the coin landed heads. Let event e2 be
the coin landed tails and Green is looking at it. He has reason to think
that e2 is such that if it obtains, then he does not accept that the coin
landed heads. But Green apparently lacks reason to think e1 makes it
more likely than e2 that he now gets it right whether the coin landed
heads. That’s because he sensibly thinks the obtaining of either e1 or e2
makes it highly likely that he will get the matter right.
In LOOKING, Green has reason to think his belief that the coin landed heads
satisfies causal arbitrariness+. His belief is perfectly rational nonetheless.
Having reason to think your belief is causally arbitrary+, then, is not enough
to make that belief irrational. When you have reason to think that each of e1
and e2 makes it highly likely that you will believe truly and not falsely regarding p,
having reason to take your belief in p to be causally arbitrary+ is of little rational
significance. But in the cases of variability we are interested in here, you typically
don’t have reason to think anything like that about e1 and e2. In BATH, for
instance, Evelyn plausibly lacks reason to think that each of e1 (reading the first
26. I am grateful to E.J. Coffman for discussion here.
thermometer) and e2 (reading the second thermometer) makes it highly likely she
will get the temperature right.
A new brand of arbitrariness is at hand. I shall call it causal arbitrariness++. It
is causal arbitrariness+ with one extra condition on the right side of the “only if”:
(iv) You lack reason to think that each of e1 and e2 makes it highly
likely that you will believe truly and not falsely regarding p.
I will recommend that having reason to think one of our beliefs is causally arbi-
trary++ makes that belief irrational for us. And that idea drives a new version of
the variability argument. Here’s the conditional premise:
P3: If you have reason to believe that your belief that p is such that
(i) were event e1 to obtain, you would believe p and (ii) were event e2
to obtain, you would not accept p, and (iii) you lack reason to think e1
makes it more likely than e2 that you now believe truly and not falsely
whether p, and (iv) you lack reason to think that each of e1 and e2 makes
it highly likely that you will believe truly and not falsely regarding p,
then your belief that p is irrational.27
We can label P3’s antecedent ‘P4’. Of course, P3 and P4 together entail C. I will
refer to this as the Arbitrariness Argument.
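The overall shape of the Arbitrariness Argument can be set down schematically. This is an editorial sketch; the abbreviations CA++(p) (“you have reason to believe that your belief that p is causally arbitrary++”, i.e., conditions (i)–(iv) hold) and Rat(p) (“your belief that p is rational”) are not the text’s own notation:

```latex
% Schematic form of the Arbitrariness Argument (abbreviations are editorial):
%   CA++(p) : you have reason to believe your belief that p is causally
%             arbitrary++, i.e., conditions (i)-(iv) all hold
%   Rat(p)  : your belief that p is rational
\begin{align*}
\text{P3:}\quad & \mathit{CA}^{++}(p) \rightarrow \neg\,\mathit{Rat}(p)\\
\text{P4:}\quad & \mathit{CA}^{++}(p)\\
\text{C:}\quad  & \neg\,\mathit{Rat}(p) \qquad \text{(from P3 and P4, by modus ponens)}
\end{align*}
```

The argument’s validity is thus trivial; the philosophical work lies entirely in defending P3 and P4.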
What should we think of these premises? To understand why P3 is reasonable,
ruminate on examples along the lines of BATH. Evelyn’s belief that the water is
41°C is plausibly taken to be irrational. So, the consequent of P3 is true. When
considering the example, we judge that Evelyn is irrational only after she realizes
the two thermometers conflict. She has reason to think that her belief is causally
arbitrary++. That is, Evelyn appreciates that had she initially used the second
27. I am tempted by an alternative formulation of (iii): e1 does not make you more reliable with respect to p than e2 (where the reliability at issue is statistical).
thermometer, she would have instead believed the water is 51°C. And she knows
that she is without reason to think using one thermometer rather than the other
makes it more likely that she will get the water’s temperature right. She also knows
that she is without reason to think using each thermometer makes it highly likely
she’ll get the temperature right. This implies that P3’s antecedent is true. By
reflecting on such examples, we will want to generalize to P3.
What of P4? Why should we think it is true with respect to our controversial
beliefs? As we start toward an answer, I can do no better than quote Peter van Inwa-
gen. He compares the way we actually adopt our philosophical opinions to opening
a book that lists “a thousand mutually inconsistent points of philosophical equilib-
rium” and then choosing one at random. It wouldn’t be rational to believe what we
randomly chose from that book. Yet, by van Inwagen’s reckoning, something like
using that book
[. . . ] is pretty much what nature and nurture and fortune have done with me.
The point of philosophical equilibrium I occupy depends [. . . ] (very likely) on
what my parents taught me about morals and politics and religion when I was
a child, and (certainly) on what university I selected for graduate study in
philosophy, who my departmental colleagues have been, the books and essays
I have read and haven’t read, the conversations I have had at APA divisional
meetings as a result of turning right rather than left when I was wandering
aimlessly about at a reception. . . Other philosophers have reached different
points of philosophical equilibrium simply because these factors have operated
differently in the course of the formation of their opinions. These reflections
suggest—and the suggestion is quite strong indeed—that I ought to withdraw
from the point of philosophical equilibrium I occupy and become a sceptic
about the answers to all or almost all philosophical questions. [2004: 342]
Van Inwagen is thinking of particular backgrounds as leading to particular philo-
sophical opinions—that’s just variability. P3 and P4 account for why reflecting on
the series of events in one’s background, alongside the alternative events that might
have been, carries the “suggestion” of skepticism.
Meditate on your actual background—what you were taught as a child, where
you went to school, the books and essays you’ve read, and so on—and think of it
as consisting in a rather complex event, e1, which has led you to believe p. Then
you realize the untold other ways your history might have gone. Call an alternative
background any event that leads you to not accept p. And so you see there is
an alternative event, e2, such that if it had obtained, you would have wound up
not accepting p. So far, all of that is only causal arbitrariness and, as MAJOR
indicates, that is consistent with rational belief. But causal arbitrariness++ pushes
skepticism: given the above reflections, (a) if you have no reason to think e1 makes
it more likely than e2 that you get p right and (b) if you have no reason to think
each of e1 and e2 makes it highly likely that you will get p right, then your belief
that p is irrational.
When it comes to knowing what to say about P4, the real action lies in deter-
mining whether our situation—what we think of ourselves and our counterfactual
selves, I mean to say—is more like BATH than MAJOR. (To simplify the discussion,
I’ll assume that we don’t usually take ourselves to be in a situation like LOOKING
when it comes to controversial propositions. That is, we don’t have reason to think
that our background and some alternative background make it highly likely that we
get the matter right.) The question is whether our situation features mere causal
arbitrariness or the ‘double plus’ variety. Shall you trust your background as more
likely than any alternative to help you get p right? For plenty of controversial propo-
sitions, to be sure, most of us will grant that our controversial beliefs are causally
arbitrary. But many will deny those beliefs are causally arbitrary++. The following
proposition will help to focus our discussion:
NR: You lack reason to think that event e1 makes it more likely than
event e2 that you now believe truly and not falsely whether p.
NR will be familiar. P4 requires that we have reason to believe that a belief of ours
is causally arbitrary++. If P4 is satisfied, we should judge that we lack reason to
think e1 makes it more likely than e2 that we’ll get p right. That’s the affirmation
of NR. Supposing we think a belief in p is variable while also accepting NR, the
Arbitrariness Argument goes through and renders believing p irrational.
The remaining possibilities for rationally believing p are these: disbelieve NR or
withhold judgment with respect to it. I will argue that neither option is promising.
Suppose first that you withhold judgment regarding NR. The idea is that you
just can’t tell whether you do or don’t have reason to think e1 puts you in a better
position than e2 when it comes to getting p right. An analogy will show why this
will not do. Imagine that McCoy has a pair of thermometers, X and Y. When the
thermometers conflict, McCoy accepts X’s reading. Then we ask why she trusts X
against Y. She confesses that she is unable to tell whether she has reason to think X
makes it more likely than Y that she will get the temperature right. This admission
opens McCoy’s belief to due criticism. The natural reaction is that her belief in
X’s reading is irrational, absent some reason to prefer X to Y. Similarly, if you
think your belief in p is variable, it is irrational for you to retain it by withholding
judgment with respect to NR. Though withholding on NR allows you to circumvent
the Arbitrariness Argument, the evasion also ends with irrationality.
Turn to the possibility of disbelieving NR. Here, you will think that e1 (your
actual background) puts you in a better position than e2 when it comes to getting
p right. This enables you to reject P4. Is it ever reasonable to disbelieve NR?
Surely it is. You think, for instance, that like cases should be treated alike. Yet
had you grown up in certain dire situations, you likely wouldn’t now hold that
conviction. In such cases, though, we will want to insist that variability is not
relevant. We have reason to think that your actual background puts you in a better
position than the alternative backgrounds—your background makes it more likely
than the alternative that you get the matter right. Other variable beliefs are rightly
handled the same way. We should not shudder at the realization that a lot of our
beliefs are variable. Take our convictions in a heliocentric universe, a spherical
and ancient earth, a manned moon landing, the wrongness of killing innocents, the
existence of other minds, and so forth. For such matters, we’re better off than
our dissenting counterfactual selves and we know it—in precisely the way we know
that studying Philosophy, rather than P.E., better positions us to believe correctly
that modus tollens is a valid inference.
But can we sensibly deny NR for all of our variable beliefs? I doubt it. In a
wide range of cases, we should believe NR.
Handling these issues in the abstract is tricky, but here is an argument sketch
to favour accepting NR. It begins with actual disagreement over difficult and con-
troversial propositions from politics to history to metaphysics; such topics do not
typically admit of conclusive, knockdown arguments. For topics like these, a plau-
sible starting assumption is that many of the sensible, careful thinkers who reject
your belief that p have alternative backgrounds which led them to not accept p.
For all that, these people are normal and healthy. They suffer from no cognitive
dysfunction or obvious bias. They are subject to no intellectual, moral, or personal
failure greater than your own. Reflecting on these alternative backgrounds, where
would you locate the ‘mark of failure’? You should point somewhere if you deny NR:
you ought to say where or how their background failed them and yours prospered
you. You need reasons here; mere assertions or speculations of their failure won’t
do. What lies in their past that holds them back from attaining your position with
respect to getting p right? Ruminating in this way, it appears doubtful that you
have reason to think your position is better than theirs. But then you lack reason to
think e1 makes it more likely than e2 that you get p right, and so you have reason
to accept NR.
Not everyone will be moved by such considerations to accept NR. Indeed, I
anticipate that some among us will disbelieve NR—even when it is specified to
difficult and disputed propositions. Here is a general strategy for denying NR. First
off, consult your evidence for believing p. You see that your (total) evidence supports
p. Given that you have reason to think p is true, you have reason to think e1 is
more likely to ensure you now believe truly and not falsely than e2, which leads to
not accepting p. But now you have a reason to deny NR. By simply checking your
evidence for p, you can see that e1 makes it more likely than e2 that you will get p
right.
This bit of reasoning seems unsatisfying. Suppose you are genuinely in doubt
about whether NR is true. And suppose your evidence for p doesn’t entail p. Then,
clearly enough, your question about NR must be settled with more than your evi-
dence for p. For one, the reasoning at issue assumes too much about e2. From the
confines of your own perspective, you often will not possess the epistemic resources
bestowed by e2. How then can you know whether e1 makes it more likely than e2
that you’ll get p right if you don’t even know how things look over at e2?
There is a further difficulty with this general denial of NR: it conflicts with a
plausible principle for evaluating evidence. David Christensen ([2007] and [2010])
and Adam Elga [2007] each propose something like the following principle: in evalu-
ating the epistemic credentials of another thinker’s expressed belief about p, in order
to determine how (or whether) to modify your own belief about p, you should do
so in a way that doesn’t rely on the reasoning behind your initial belief about p.28
We can refer to this principle as Independence. The motivation for Independence
is simple: it prevents blatant question-begging rejections of evidence provided by
disagreement from others.29
28. I have closely followed a statement of Independence due to Christensen [2010].
29. The point requires care, as E.J. Coffman reminded me. Begging the question is a problem with arguments. But here you aren’t—or at least needn’t be—arguing with anyone. So while there may be a problem akin to begging the question with the line of reasoning in question, it might not be what we ordinarily call “begging the question”.
Reflecting on the way to deny NR suggested above, we can see it runs afoul of
Independence. Denying NR is your route to ensuring that the information provided
by your dissenting counterfactual self doesn’t push you to modify your belief that
p. But in doing so, you deploy reasoning that is not independent of your initial
reasoning for your belief. You beg the question on your counterfactual self.30
There are also some less general, piecemeal ways to deny NR. Suppose you pick
a disputed proposition and just insist it is obvious there’s no alternative background
which would make it more (or equally) likely than your background that you get
that matter right. Well, isn’t it especially lucky that you have come out on top?
Even if there is no irrationality in such high-flying self-confidence, surely credulity
is strained.31 Trying to envision all of the alternative backgrounds that might have
been yours, can you really believe you are in a better position than all of these
dissenting counterfactual selves? Are you the best possible you?
2.4 Conclusion
We have been trying to turn some worries about variability into an argument. In
the end, we have two interestingly different and plausible arguments. The premises
are quite reasonable, I’ve argued, and resisting them will take effort. If we were
unsure what to think about inchoate formulations of the variability problem, we
will now see more clearly why it is a problem. Of course, I have not intended to
offer knockdown arguments—rarely, if ever, do we find philosophical arguments that
30. Some theorists have expressed doubts about Independence; Jennifer Lackey [2010] and Ernest Sosa [2010] have each proposed counterexamples to it. In cases where you have exceedingly strong reasons for p, they say, it’s rational to downgrade another thinker’s expressed belief—even though doing so bites its thumb at Independence. (Christensen [2010] responds to such examples.) But even if you don’t endorse Independence, there’s still a palpable sense in which this route to denying NR is dialectically unsatisfying when your initial reasons for p are not especially strong. Absent strong support for p, then, this kind of reasoning won’t do.
31. Cf. Sher [2001: 75].
are—but they help us count the cost of intellectual confidence.
Before I conclude, notice a couple of key differences between the two arguments.
First, the Arbitrariness Argument seems to apply to many more controversial be-
liefs than the Symmetry Argument—on the former, but not the latter, you and your
counterfactual self may well have entirely dissimilar evidence and cognitive skills.
Second, the Arbitrariness Argument raises a new and challenging question, one not
found in the Symmetry Argument. It asks whether all of the epistemic resources be-
queathed to you by your background—lessons from parents and teachers, books and
billboards you’ve read, the subtle nudging of culture and friends, among a thousand
other factors—whether this magnificent series of events that is your history better
equips you to get p right than the resources that would have been bequeathed to
you by an alternative background. I remarked earlier that the Arbitrariness Argu-
ment goes one ‘up’ on the Symmetry Argument. Focusing on bits of evidence and
specific cognitive skills, you may find asymmetry between you and your dissenting
counterfactual selves. It is easy to accept that asymmetry at the first level: perhaps
you and your dissenting counterfactual selves are different. But move up a level
and ask yourself whether your personal history better positions you to get p right
than some alternative background. Here, your nerve may fail. Indeed, if you reflect
on your history and the plethora of alternatives, the idea of rejecting NR may even
seem a touch preposterous.
So what’ll it be? You might accept the Symmetry Argument and end up judging
that your disputed beliefs are irrational. Or perhaps you will accept NR and be
drawn into the Arbitrariness Argument. Maybe you will figure out how to resist
both arguments and continue on with your controversial convictions. It may be,
too, that neither of these arguments fully captures your worries about variability,
and that some other argument does better. As for myself, I am more troubled by
variability than convinced that this pair of arguments will withstand scrutiny. And
that is appropriate, I surmise, if one lesson of variability is that we shouldn’t be too
confident about difficult and disputed matters.32
32. Ancestors of this paper were presented at Lewis & Clark and Brown University in Fall 2007
and the Pacific APA in March 2008. Thanks to the audiences on those occasions and to my
commentators: Daniel Howard-Snyder, Andrew Rotondo, and Peter Murphy. For conversation
and comments, I am grateful to David Christensen, E.J. Coffman, Stew Cohen, Ian Evans, Terry
Horgan, Nathan King, Keith Lehrer, Mark Timmons, and Benjamin Wilson.
CHAPTER 3
EVIDENCE WE DON’T HAVE
“So long as I know it not, it hurteth mee not.”
– G. Pettie, Petit Palace (1576)
All of our beliefs are based on just part of the relevant evidence. There is evidence
we don’t have.
Let us make this vivid. Picture your life unfolding in the places that you have
inhabited. Homes and classrooms. Workplaces and playgrounds. In locations like
these, spread out in time, you have been exposed to an exceedingly complex set of
evidence—books, newspapers, conversations, lectures, and the making-sense of it
all. Label the part you happen to possess your evidence. You are not alone in the
world, of course, and other lives unfold elsewhere, with different sets of evidence.
Though some lives overlap in large part with yours, the vast majority overlap only
a little. You share the experience of blue skies with nearly everyone; with but a
few, you share the experience of reading the local newspaper or your undergraduate
textbooks. Picturing your evidential situation in this way powerfully suggests why
there is much evidence you don’t have. The physical limits of your life circumscribe
the limits of your evidence.
Learning of evidence we don’t have can challenge our beliefs. It is particularly
instructive here to look at our controversial beliefs concerning philosophy, politics,
history, science, and the like. Evidence we don’t have is relevant to whether there
is rational disagreement over such questions. I shall argue that, in many situations,
information indicating there is evidence we don’t have undermines controversial
convictions. I will proceed by asking how to reason about evidence we don’t have.
Learning of such evidence, I’ll contend, can teach us of our potential error. If I
am right, we must reassess many of our controversial beliefs—beliefs in which we
have confidence. The argument I shall offer highlights a novel sort of “contingent
real-world skepticism” (cf. Feldman [2006: 217]).
Before we begin, I should note a point of contact between our discussion here
and a recent discussion in epistemology. Many theorists have lately investigated the
significance of recognized disagreement between “epistemic peers”.1 It is typically
stipulated that peers are “equals with respect to their familiarity with the evidence
and arguments” that bear on some disputed question (Kelly [2005: 174], cf. Feldman
[2006: 219–220]). But peer disagreement seems to be a relatively idealized, and
thus perhaps uncommon, sort of disagreement.2 In many conflicts, we don’t face
off with epistemic peers because there are relevant evidential differences. Often
enough, there is evidence we don’t have (which they do have) and there is evidence
they don’t have (which we do have). Such cases apparently fall outside recent
discussion of peer disagreement, but I intend to address them here. By thinking
about unpossessed evidence, we will illuminate the significance of a wide range of
real-world disagreements.
3.1 The Question
Sometimes we gain evidence that we believe something on a limited share of the rel-
evant and available evidence. The fact that there is evidence we lack has not gone
unnoticed in recent epistemology. Some theorists propose that counterevidence we
don’t have can undermine knowledge (see Lehrer and Paxson [1969] and Harman
[1973]). These discussions focus on cases where we do not learn that there’s evi-
dence we lack. My question concerns the significance of learning, or coming to have
1. For an overview of central issues, see Christensen [2009].
2. King [forthcoming] argues that standard “peerhood” conditions are rarely satisfied in our actual disagreements.
reason to think, that there is some evidence that we don’t have. This is evidence
of evidence. To keep clear the distinction between first- and second-order evidence,
I will usually talk in terms of ‘(second-order) information of (first-order) evidence’
and thereby avoid the potentially confusing phrase ‘(second-order) evidence of (first-
order) evidence’.
Sometimes we gain information that there is some evidence E; that we don’t have
E; that E makes rational some attitude toward p. To pin down our main question, we
can distinguish between three sorts of cases where we acquire information of evidence
we don’t have. A batch of information can vary in richness and whatever information
we get about unpossessed evidence E may or may not tell us what E supports, among
other things. The information may indicate that (i) E is counterevidence for our
attitude A toward p, in the sense that it supports an attitude toward p other than
A; (ii) E supports A; or the information may (iii) leave it undetermined what E
supports. Here, we will investigate cases where we learn there is counterevidence we
don’t have.3
A clarification. Gaining information that there is counterevidence we lack is not
the same as gaining the counterevidence itself. There’s a distinction between having
counterevidence and having information that there is counterevidence. These are dif-
ferent states. The distinction between them holds even though it is highly plausible
that information of evidence is itself evidence (see Feldman [2005] and Christensen
[2010]). On a particular body of evidence, having some counterevidence and having
information of that counterevidence may affect what’s rational for someone to be-
3. A comment on cases like (iii). At least some of the evidence that we learn we don’t have is ‘directionless’ from our perspective. That is, the information leaves it unsettled what the evidence indicates about p. Green informs you, for instance, that there’s a map locked in the glove compartment and that it is evidence relevant to whether p. You get information that there’s some evidence and that it either points toward p or it doesn’t. Whichever way it points isn’t part of the information itself. Does learning of ‘directionless’ evidence ever call for a change in belief? I suspect so, but I will leave this question shelved for now, focusing instead on cases of (i).
lieve in different ways. Suppose, for instance, that McCoy rationally believes p on
her evidence. Then she gains counterevidence CE. If McCoy sees that part of her
evidence defeats CE (e.g., indicates that CE is misleading), it remains rational for
her to believe p. Alternatively, if McCoy learns that CE exists, but doesn’t acquire
CE itself, what is rational for her to believe may be different. Since she doesn’t have
CE, she may not see how her evidence allows her to reject CE.
Why think there is counterevidence we don’t have? Earlier, I set out a ‘picture’
that suggests as much. Our lives have physical limits—beyond those hills there
surely lies counterevidence. Pictures aside, consider two concrete examples, which
I will discuss as we proceed.
You are at the library, meandering through rows of bookshelves. These
books investigate and pronounce upon matters about which you hold
convictions. But you haven’t read them. You judge that free will and
determinism are compatible, for instance, after having read a dozen journal
articles and a couple of books. Now seeing these shelves, you appreciate
that there are hundreds of titles relevant to the question of whether
compatibilism is true, and some authors offer arguments against what
you believe. You hadn’t considered this mass of work until now and you
haven’t yet looked at it.
And here is a second example:
Fifteen years ago, Claire was an active researcher on the anthropology
of human origins. She was then nominated to serve as an ambassador
to Sudan, where her diplomatic work prevented her from keeping pace
with her field. Now returning to her academic post, Claire learns that
some fellow anthropologists accept a thesis about human origins that
she herself rejects. She hasn’t looked at the recent books and articles,
however. Claire reflects on the earlier arguments against that thesis and
they still seem right to her, but she understands there is new evidence
she doesn’t have.
In commonplace cases like these, we believe p and then gain information that some
evidence we lack indicates not-p. Here is our question. How should we accommodate
information about counterevidence we don’t have? I shall propose that, in certain
circumstances, the information at issue rationalizes a change in belief. To arrive at
this view, we must explore how to reason about counterevidence we become aware
of but don’t have.
Let us underline why this question is worth our effort. First, information about
unpossessed counterevidence is a source of evidence about potential error. In fact,
within philosophical discussions, it is an unappreciated and untapped source of evi-
dence of error. (Well, I don’t know of any other discussion of evidence we don’t have;
but there may be evidence I don’t have.) By figuring out how to accommodate this
information, we can improve our intellectual lives. Second, unpossessed counterev-
idence is so commonplace. Choose at random a controversial belief of yours: there
is very likely some counterevidence for it that you haven’t come across. Finally, it
isn’t clear how to accommodate such information even though it apparently has ra-
tional import. Envision those shelves of library books on free will. Confronted with
this unpossessed counterevidence, don’t you find yourself less confident that com-
patibilism is true? What you learn at least unsettles your conviction. But it isn’t
obvious whether or why it should. Thus, we need to investigate how to rationally
accommodate information about counterevidence we don’t have.
3.2 Evidence, Defeat, and Dismissing Counterevidence
Along the way to answering our question, I will invoke the following ideas: evidence,
defeat, rationally dismissing counterevidence. I begin by briefly discussing each one.
We can stipulate that evidence, whatever else it is, indicates the truth (or fal-
sity) of a proposition p.4 Evidence can indicate the truth of p to some greater or
lesser degree. Having a piece of relevant evidence ‘makes rational’ or ‘rationalizes’
or ‘justifies’ a particular attitude toward p.5 Evidence can be misleading. Some-
thing may be evidence for p even when p is false; we can have evidence for false
propositions. When Wilson spots decoy ducks on the lake at a great distance, he
acquires evidence for thinking that ducks are on the lake. His evidence, though mis-
leading, is evidence nonetheless. Beneath the distinction between non-misleading
and misleading evidence is the truth. Non-misleading evidence represents the truth;
misleading evidence doesn’t.
It is rationally significant whether or not we think our evidence is non-misleading.
Suppose we are reflecting on the value of some evidence E that apparently indicates
that p is true. We will ask: “Is E non-misleading?” If our belief in p based on E
is rational upon such reflection, we will have reason to think E is non-misleading.
To see this, notice that if we withhold judgment on whether E is non-misleading or
disbelieve that E is non-misleading, we won’t judge that E is much good as evidence
for p.
Let me draw this out. If we disbelieve that E indicates p, we will think E is
misleading—thinking that it’s not the case that E is non-misleading is equivalent
to thinking E is misleading. And if we withhold judgment on whether E indicates
p—that is, neither believing nor disbelieving that E is non-misleading—we will
not take E to indicate the truth regarding p: we won’t believe that E is non-
misleading. So, either way, we end up believing p on evidence that we do not
happen to take to indicate that p is true. This isn’t rational. Thus, whenever it
is rational for us to believe p on E, we will (at least on reflection) judge that E is
non-misleading. Note well: I do not say that thinking that one’s evidence is non-
4. For an overview of some central conceptions of evidence, see Kelly [2008a].
5. We can set to the side questions about whether a given body of evidence makes rational just one attitude or multiple attitudes. See White [2005] and Ballantyne and Coffman [forthcoming] for more discussion.
52
misleading is necessary for that evidence to rationalize a belief. I don’t impose any
such higher-order requirement on rational belief. My proposal, more simply, is that
certain higher-order attitudes (withholding and disbelieving) are incompatible with
a belief in p being rational, given that the thinker is considering the higher-order
proposition that E indicates p.6
It is no accident that in order for E to rationalize belief, we will need to think
(on reflection) that E is non-misleading. One typical way to defeat E’s ability to
rationalize belief is by gaining reason to believe E is misleading. Having a reason
to think E is misleading is just to have counterevidence for whatever you believe
on E’s basis. As I will speak of it here, counterevidence is relative to thinkers with
attitudes. Suppose that I believe p and you disbelieve p. If E* indicates not-p and
I acquire it, it is counterevidence, whereas if you acquire E*, it is evidence. What
is counterevidence for me is evidence for you, given our initial attitudes.7
At a particular time, counterevidence we have is either defeated or undefeated.
Undefeated counterevidence prevents our evidence from rationalizing a particular
belief. An example will serve to explain why. Matthew sees a Ruby-crowned Kinglet
resting on a tree branch and believes there’s a Ruby-crowned Kinglet. Then I say to
Matthew, who is still eyeing the bird: “You think it’s a Ruby-crowned Kinglet, eh?
Well, it’s actually a decoy.” I thus supply him with counterevidence for his belief
that there is a Ruby-crowned Kinglet. But perhaps he looks closely at the bird us-
ing binoculars. Then he has reason to think the counterevidence I have supplied is
misleading evidence—decoys don’t look like that—and so he has defeated the coun-
terevidence. Now imagine that I tell Matthew that The Sibley Guide to Birds
identifies a sort of female warbler that is often mistaken for the Ruby-crowned Kinglet.
6 Compare the argument offered here to Bergmann's [2005] argument that higher-order doubt
about an attitude's justificatory status renders that attitude unjustified.
7 Suppose that you withhold judgment on p. Then evidence indicating p or evidence indicating
not-p is counterevidence for you; but a set of evidence that equally supports p and not-p is
evidence for you. A further implication is that nothing can be counterevidence, in this sense,
for a thinker who takes no attitudes.
“This bird,” I say, “seems to be an Orange-crowned Warbler, not the Kinglet.” As
far as he can figure, the counterevidence isn’t misleading: I appear to tell the truth
about what’s in this authoritative bird-identification book. Here, Matthew has un-
defeated counterevidence and it prevents his evidence from rationalizing his belief.
This is so even if I’m wrong and the counterevidence is in fact misleading.
To defeat counterevidence, as I speak of it here, we must have it. Yet even when
we don’t possess some counterevidence, but only have reason to think it exists, we
may rationally dismiss or ignore it—that is, set it aside in our reasoning about p.
For example, you may believe that smoking is unhealthy. Then you learn that Big
Smoke, a cigarette manufacturer and stock car racing sponsor, has funded a team
of scientists to investigate the long-term health effects of smoking. Big Smoke’s
research team has just released its final report. Contrary to received expert opinion,
it concludes, smoking is not a health hazard. You haven’t cracked the 700+ page
report and so don’t possess this counterevidence. Yet, very plausibly, it is nonethe-
less rational for you to dismiss it. In section 4, I will ask why we are inclined
to say so. For now, we can say that it is rational for you to dismiss unpossessed
counterevidence if you have some reason to think it is misleading.
So much for clarifying these central ideas—evidence, defeat, rationally dismissing
counterevidence. Let us put them to use.
3.3 A Puzzle about Dismissing Evidence
How do we rationally accommodate information regarding counterevidence we don’t
have? That is our question, and the answer, as I shall argue, depends on what we
have reason to think about our limited evidence. The key issue is whether we’re
rational to think our evidence is non-misleading. Now, as I argued in section 2, one
constraint on rational belief based on some evidence is that we think (on reflection)
that our evidence is non-misleading. Supposing just that we have rational beliefs
and that we tend to reflect on questions like “Is my evidence non-misleading?”, we
presumably enjoy reasons to take our evidence to be non-misleading.
Yet having reason to think our evidence is non-misleading leads to a puzzle.
This puzzle is closely related to Saul Kripke’s so-called “dogmatism paradox”.8
Kripke observed that if you know p, you also know that any evidence indicating
not-p that you subsequently encounter is misleading. But then you seem to know
that any counterevidence is misleading before you even acquire it. So you can simply
ignore incoming counterevidence. The trouble is, this policy seems to be irrationally
dogmatic.
Suppose you believe p on E. Then you’re introduced to the following reasoning:
A: If you have reason to think that E indicating p is non-misleading, then
you have reason to think that any counterevidence to p is misleading.
B: If you have reason to think any counterevidence to p is misleading,
you may rationally dismiss any counterevidence to p.
C: If you have reason to think that E indicating p is non-misleading,
you may rationally dismiss any counterevidence to p. [A and B]
Having reason to think your evidence is non-misleading makes it rational for you
to dismiss any counterevidence—even unpossessed counterevidence. Could this be
right?
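The inference from A and B to C is a hypothetical syllogism. Letting N abbreviate "you have reason to think that E indicating p is non-misleading", M abbreviate "you have reason to think that any counterevidence to p is misleading", and D abbreviate "you may rationally dismiss any counterevidence to p" (the letters serve only as labels for display here), the reasoning has this shape:

```latex
% A and B are conditionals; C follows from them by hypothetical syllogism.
\[
\begin{array}{ll}
\text{A:} & N \rightarrow M \\
\text{B:} & M \rightarrow D \\
\hline
\text{C:} & N \rightarrow D
\end{array}
\]
```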
Well, the idea that we can always rationally dismiss any such counterevidence
is dubious. Recall Claire the anthropologist. Returning from her post as an ambas-
sador in Sudan, she learns of new counterevidence for her belief regarding human
origins. Supposing that Claire’s belief is rational on her initial evidence, can she
sensibly dismiss the unpossessed counterevidence she learns of? Surely not. That
would be a wrongful dismissal. Yet we’re also pulled in another direction. Suppose
8 See Harman [1973: 148–149]. For more discussion, see Sorensen [1988], Conee [2001], and
Kelly [2008b].
we learn of some counterevidence we don’t have. If it is not rational to dismiss
this counterevidence, then it is not rational to think our evidence is non-misleading.
That is because the denial of C’s consequent implies the denial of C’s antecedent.
But, as we have seen, difficulty often meets us when we don’t have reason to think
our evidence is non-misleading—denying that it is non-misleading prevents it from
rationalizing belief.
The puzzle taunts us with either dogmatism or doubt, but there’s an escape.9
Suppose that right now we have reason to think E is non-misleading. Then we find
out later on that there is counterevidence. What we learn later may defeat our
reason for thinking E is non-misleading. Why is that? Information that there is
counterevidence is itself evidence, and so it may tip the balance on what is rational
to think about E. And if we lose our reason to think that E is non-misleading later
on, we thereby lose our reason to think that any counterevidence is misleading.
This puzzle and its solution helpfully suggest the main argument on offer.
3.4 The Argument
I hope to address questions of rational disagreement by understanding the
significance of information about unpossessed counterevidence. Turn then to our beliefs about
difficult and disputed topics like politics, history, philosophy, and science; let us refer
to them as controversial beliefs. I will argue that information regarding unpossessed
counterevidence for our controversial beliefs often undermines those beliefs.
The argument I’ll give assumes that you’ve reason to think there is some coun-
terevidence CE for your controversial belief B based on evidence E but that you
don’t have CE.10 It is a general argument; you can plug in particular controversial
beliefs and particular unpossessed counterevidence as you see fit.
9 It is similar to the standard solution of Kripke's dogmatism paradox first given by Harman
[1973] and later elaborated by Sorensen [1988]. Kelly's [2008b] discussion of Kripkean
dogmatism is useful.
10 I should note that we can reasonably believe there is such counterevidence by armchair
reflection ("Well, it is a gigantic world: there must be some counterevidence out there that
I haven't yet acquired...") or by empirical investigation ("Just look at all of the recently
published articles arguing against my favourite theory...").
P1: If it is not rational for you to dismiss CE for B, then it is not rational
for you to think that E is non-misleading.
P2: It is not rational for you to dismiss CE for B.
P3: It is not rational for you to think that E is non-misleading. [P1 and
P2]
P4: If it is not rational for you to think that E is non-misleading, then
E does not rationalize B for you.
C: E does not rationalize B for you. [P3 and P4]
P1 and P2 are the critical steps. I'll first say why P1 is plausibly thought true.
A reason for taking E to be non-misleading can be defeated. What I contend is
that any such reason will be defeated by learning there is counterevidence we don't
have—unless we can rationally dismiss the counterevidence. An example: Claire
rationally believes p on E and then learns of the new research on human origins
that indicates not-p. She lacks a reason to dismiss this counterevidence. According
to P1, her reason for taking E to be non-misleading is defeated. Once again, that's
because if she gains information that there is unpossessed counterevidence and she
can't rationally dismiss it, then her reason for taking E to be non-misleading is
defeated.
Perhaps this defence of P1 is so far unconvincing. So witness what follows
from the assumption that P1 is false. Given the denial of P1, the following is possible:
Claire learns there's unpossessed counterevidence and she cannot rationally dismiss
it (P1's antecedent), but she is nonetheless rational to think that her evidence is non-
misleading (denial of P1's consequent). It follows that if her reason to take E to be
non-misleading is not defeated by her learning about unpossessed counterevidence,
she can reasonably invoke that very reason in order to dismiss the counterevidence.
But then, we should see, Claire can rationally dismiss the counterevidence (denial
of P1’s antecedent). Contradiction. By assuming P1’s antecedent while denying its
consequent, the denial of P1’s antecedent follows. To avoid this consequence, we do
well to affirm P1.11
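The argument's overall structure can also be displayed schematically. Abbreviate "it is rational for you to dismiss CE for B" as D, "it is rational for you to think that E is non-misleading" as N, and "E rationalizes B for you" as R (again, the letters are labels introduced only for display). The argument is then two applications of modus ponens:

```latex
\[
\begin{array}{lll}
\text{P1:} & \neg D \rightarrow \neg N & \\
\text{P2:} & \neg D & \\
\text{P3:} & \neg N & \text{(from P1 and P2)} \\
\text{P4:} & \neg N \rightarrow \neg R & \\
\text{C:}  & \neg R & \text{(from P3 and P4)}
\end{array}
\]
```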
How about P2? I shall argue that when it comes to many of your controversial
beliefs, it isn’t rational to dismiss CE. To this end, I will identify a pair of principles,
both of which indicate when it is rational to dismiss counterevidence you don’t have.
But neither principle is of help here, I’ll contend. You typically cannot appeal to
either one to dismiss unpossessed counterevidence for your controversial beliefs.
I will begin with two examples. Pretend that you are in the library again, this
time inspecting shelves of books on metaphysics. You know that some of these
volumes argue that persons don’t exist and that there are no objects like hands.
You haven’t read any of them but you dismiss the counterevidence nonetheless. A
second example: Big Smoke’s 700+ page report contains counterevidence for your
conviction that smoking is unhealthy; you dismiss it without reading a page. In both
of these examples, a plausible judgment (which depends on filling in some details)
is that you are fully rational to dismiss the unpossessed counterevidence. It is right
to judge these books by their covers, you might say.12
Why are these judgments rational? I will suggest two different explanations—
each one leads us to a distinct principle for rationally dismissing unpossessed coun-
terevidence. Both explanations begin with the idea that since you don’t have the
11 In this defence of P1, I've assumed the following closure principle for rationality: if it is
rational for you to think that E is non-misleading evidence for p, and it is obvious to you
that E's being non-misleading evidence for p implies that E* for not-p is misleading, then it
is rational for you to dismiss E*.
12 In fact, you may even have rational "unreliability" judgments in hand, the judgments that
enable you to dismiss the unpossessed counterevidence, before learning of the counterevidence.
counterevidence itself, you need to hook onto something else that provides a rea-
son to think the counterevidence is misleading. Since you can’t directly assess the
counterevidence, you must appeal to indirect means. One explanation focuses on
the source of the counterevidence, and the other on your own evidence.
The first explanation notes that what is plausibly in the background of the
two examples is some information regarding the source of the counterevidence. We
judge that you are rational to dismiss Big Smoke’s report. Perhaps that’s because
we think you enjoy reason to doubt the reliability of the (type of) evidence source.
The researchers behind the smoking-friendly report were bankrolled by a cigarette
company, after all. Partnerships like that, you know, tend not to produce or put
forward good evidence, thanks to problematic incentives and the pernicious effects of
cognitive biases.13 We also judge that you are rational to dismiss the metaphysicians’
counterevidence, which they assert indicates there are no persons or objects. It may
be that you have a reason to doubt that the source of the counterevidence—a certain
brand of metaphysical speculation—is reliable. Then it is rational for you to dismiss
the counterevidence.
In this pair of examples, you have reason to think the unpossessed counterevi-
dence is linked to an unreliable source. The critical idea is that the defectiveness
of the source ‘rubs off’ on the counterevidence itself—a kind of epistemic guilt by
association, as it were. You start with a reason to doubt whether the counterevi-
dence comes from, or is otherwise endorsed by, a reliable source. That reason in turn
provides you with a reason to doubt whether the counterevidence is non-misleading.
And with reason to doubt whether the counterevidence is non-misleading, you may
dismiss it. To be sure, this chain of reasoning can snap: any connection between the
source and the counterevidence may be broken. If you were to learn that many cred-
ible researchers, not on the Big Smoke payroll, also endorse the cigarette company’s
report, then you would plausibly lose your reason to doubt the counterevidence is
13 For more, see Tavris and Aronson [2007: ch. 2].
non-misleading.14
When reasoning about the unpossessed counterevidence, what seems critical is
that your reasons for dismissing it are independent from your reasons to believe
either that smoking is unhealthy or that there are persons or objects. Let us see
what this independence amounts to. First, you could have invoked your reasons for
thinking smoking is unhealthy to dismiss Big Smoke’s counterevidence. But doing
so would blatantly beg the question against Big Smoke's research team. Likewise, if you
had relied on your reasons for thinking persons or objects exist in order to dismiss
the metaphysicians’ counterevidence, this dismissal would be a dialectical flop. You
would beg the question.15
The following constraint on evaluating counterevidence looks sensible. We can
call it Independence: in evaluating the epistemic credentials of some counterevidence
indicating not-p (e.g., whether it is misleading or not), in order to determine how
(or whether) to modify your own belief that p, you should do so in a way that does
not rely on the evidence for your initial belief that p.16 By using your evidence
for p to dismiss counterevidence apparently indicating not-p, you do something
irrational and apparently dogmatic. Rationality demands that when you dismiss
the counterevidence, you don’t merely reuse or reassert your evidence for thinking
as you do. Independence explains why. Here then is the first principle for dismissing
unpossessed counterevidence:
If you have an independent reason to doubt that some counterevidence
you don't have is non-misleading, then you are rational to dismiss it.17
14 This implication may hold even if you still have reason to think the partnership of Big Smoke
and the bankrolled researchers makes for unreliable inquiry.
15 Compare to Christensen [forthcoming] on the motivation for his version of Independence,
which I've followed here. But the point requires care. Begging the question is a problem with
arguments. But here you aren't—or at least needn't be—arguing with anyone. So while there may
be a problem akin to begging the question with the line of reasoning in question, it might not be
what we ordinarily call "begging the question". Thanks to E.J. Coffman for discussion.
16 See Christensen ([2007] and [forthcoming]) and Elga [2007] for related principles designed
to evaluate someone's stated belief regarding p.
We may apply this principle to cases where we learn of unpossessed counterevidence
for our controversial beliefs. It is doubtful that we typically enjoy independent reason
to think the sources of such counterevidence are unreliable. Of course, sometimes
this principle does help. At least some of the counterevidence we lack comes from
sources like Big Smoke. If we have reason to think that some source is unreliable,
we can dismiss its counterevidence.
But a more common situation finds us unarmed. We are often without inde-
pendent reason to doubt the source of counterevidence is reliable. Reflect on your
controversial scientific, philosophical, or political beliefs, and the sources thereof.
It is unlikely that you have independent reason for thinking that the multitudi-
nous sources of counterevidence are unreliable to any degree greater than your own
sources. In any case, while thinking about whether other sources are unreliable, you
will want to set aside your evidence for p. Many of the sources of counterevidence
are just like your own sources—albeit with different deliverances. For instance, you
accept that compatibilism is true; but you don’t have an independent reason to
doubt the source of the counterevidence generated by the incompatibilists in those
library books on free will. So far as you know, they use methods quite a lot like
yours. I do not say you won’t ever satisfy the above principle with respect to un-
possessed counterevidence for your controversial beliefs. It is just unusual that you
will.
There is an alternative explanation for why it is rational to dismiss counterev-
idence inside Big Smoke’s report and those nihilistic metaphysical volumes. Many
will notice that you have exceedingly strong reasons—knockdown reasons, we may
17 We might want to state the antecedent so as to rule out cases where you have just such an
independent reason while having (misleading) reason to doubt that you do. In such cases, it is
questionable whether you would be rational to dismiss the unpossessed counterevidence.
call them—for thinking that smoking is unhealthy and that there are persons, hands,
and books. Perhaps your reasons for these claims are such that if a thinker under-
stands them, she is obligated to accept those claims. And perhaps, on occasion,
you have reason to think your evidence is knockdown. Then it is at least plausible
that you can rationally dismiss counterevidence you don’t have. Consider how that
would go. Moore takes himself to know he has two hands; Moore’s colleagues deny
it, chiding him with claims that they’ve devised arguments showing there are in fact
no hands. Moore has learned that there is counterevidence he doesn’t have. But he
dismisses it by invoking his patented knockdown argument: “Here is one hand, and
here is another. . . ”.
All of this suggests a second principle for dismissing unpossessed counterevidence:
If you have a knockdown reason for p, you are rational to dismiss unpos-
sessed counterevidence indicating not-p.18
We will notice that using this principle to dismiss unpossessed counterevidence won’t
obviously square with Independence. It recommends that you evaluate the epistemic
credentials of the counterevidence in a way that depends on the evidence for your
initial belief. In any case, those who hesitate to endorse Independence may welcome
this second principle.
In passing, note how one divide within recent discussion of "peer disagreement"
is mirrored by these two ways to dismiss unpossessed counterevidence.
Christensen ([2007] and [forthcoming]) and Elga [2007] propose that when evaluat-
ing a peer’s stated dissenting belief that not-p, and determining how (or whether)
your view should change, you should not depend on your evidence for p. That
constraint fits comfortably beside the first principle we looked at. Frances [2010],
18 We might want to state the antecedent so as to rule out cases where you have a knockdown
reason while having reason to doubt that you do. In such cases, it is not entirely obvious whether
you would be rational to dismiss the unpossessed counterevidence.
Lackey [2010], and Sosa [2010] each give examples suggesting that we can sensibly
reject the constraint when you have exceedingly strong reasons for p. In such cases,
it is apparently rational to downgrade your peer even though doing so runs afoul of
the constraint in question. That is in concert with the second principle.19
Do we take ourselves to have knockdown reasons for our controversial beliefs?
Not usually. I will briefly look at the example of philosophical convictions to suggest
why. (Similar remarks apply to other sorts of controversial belief.) Some prominent
philosophers—David Lewis [1983: x–xi] and Peter van Inwagen [2004: section 1], for
instance—teach that there are no knockdown arguments for philosophical theses.20
These days at least, philosophical theses are only rarely taken to be supported by
exceedingly strong reasons. But then, for these thinkers, it won’t be knockdown
reasons that make it rational to dismiss unpossessed counterevidence. At any rate,
insisting that your controversial convictions about causation, substance dualism,
socialism, or free will are supported by knockdown reasons betrays no small self-
confidence.
It is doubtful, then, that either of these methods to dismiss counterevidence
will do the trick when it comes to our controversial beliefs. We usually don’t have
an independent reason to doubt that some unpossessed counterevidence is non-
misleading; and we usually don’t have knockdown reasons for the negations of those
propositions apparently supported by counterevidence we don't have. I leave open
the possibility that there are ways to rationally dismiss counterevidence for our
controversial beliefs other than those I have identified here. For now, it is enough
that these two main paths are blocked.
Thus concludes my defence of P2. That’s the premise that it is not rational
for you to dismiss some unpossessed counterevidence for one of your controversial
19 Christensen [forthcoming] discusses examples like those due to Frances, Lackey, and Sosa.
20 For more on these arguments, and whether there are any in philosophy, see my "Knockdown
Arguments".
beliefs. Earlier on, I gave some support for P1, the premise that if it is not rational
for you to dismiss some unpossessed counterevidence for your controversial belief,
then it is not rational for you to think that your evidence for that controversial belief
is non-misleading. Put P1 and P2 together and we get P3, which is the premise that
it is not rational for you to think that your evidence for your controversial belief is
non-misleading.
The final premise of the argument is P4: if it is not rational for you to think
that your evidence for your controversial belief is non-misleading, then your evidence
does not rationalize that belief. This premise needs only a brief defence. I argued in
section 2 that if you deny that your evidence is non-misleading (on reflection), then
it isn’t clear why it is much good as evidence for you. A variation on this theme
will support P4. Suppose you reflect on whether your evidence is non-misleading.
If it isn’t rational to think your evidence is non-misleading, then you have a reason
to deny (withhold judgment or disbelieve) that it is non-misleading—the fact that
it isn’t rational to think your evidence is non-misleading just is your reason. But if
you have a reason to deny that your evidence is non-misleading, then your evidence
does not rationalize your controversial belief. So P4 appears to force itself on us. To
reject P4, you must insist both that you’re not rational to think that your evidence
is non-misleading and that it makes rational your controversial belief. That is
implausible.21
At long last, our conclusion, which follows from P3 and P4: your evidence does
not rationalize your controversial belief.
The argument applies widely. I contend that all of us have reason to think there's
unpossessed counterevidence for at least some of our controversial beliefs, and that
we are unable to rationally dismiss some of that counterevidence. When we take
21 At least according to a plausible understanding of higher-order evidence. What is rational to
think about the value of some evidence should be able to impact what it rationalizes: see
Feldman [2005] and Christensen [2010]. It is only by resisting this idea that P4 can be rejected.
the argument schema and plug in some controversial belief of ours, along with our
information about unpossessed counterevidence, we will come to appreciate that our
evidence doesn’t rationalize that belief. Learning there is counterevidence we lack
undermines our convictions.
Our discussion has implications for the possibility of rational disagreement.
Gaining reason to think that our evidence does not rationalize our controversial
beliefs means we aren’t now rational to disagree. Though we may hope that our
conflicts with thinkers who possess different evidence (non-“epistemic peers”) are
often rational disagreements, the argument here sets some limits. When we aren’t
positioned to dismiss some unpossessed counterevidence for our controversial beliefs,
we are rational to disagree only if we remain ignorant of that counterevidence. We
now have a novel skeptical argument that challenges a range of controversial beliefs
from philosophy to politics to science.
3.5 Objections and Replies
Let us look at two brief objections and replies before concluding.
Here is a natural reaction to learning of unpossessed counterevidence: let’s go
out and get it.22 If you know there are books on free will in the library that contain
evidence against your view, then you could borrow those books. Learning about
such evidence might even sometimes generate a reason for you to look. When you
find out there is unpossessed evidence against some belief of yours, perhaps you
should try to acquire it.23
We can convert this idea into an objection. Let’s say you are a conscientious
22 Paul Thagard raised an objection like this in conversation.
23 Compare this idea to a sensible response to learning about disagreement. Alvin Plantinga
writes: "When you note that others disagree with you...perhaps there is a duty to pay attention
to them and to their objections, a duty to think again, reflect more deeply, consult others, look
for and consider other possible defeaters" [2000b: 252–253].
thinker, disposed to investigate more whenever you have gained information about
unpossessed counterevidence. Careful investigation takes time and resources, so you
make a note of the required investigative work and stick it on your bulletin board.
(“Borrow & study: Essay on Free Will, Living Without Free Will, et al.”) Suppose
you see things like this: until you actually acquire the counterevidence in those
books, you surmise that you are in the clear to maintain your challenged belief.
By your lights, rationally speaking, you’re being all that you can be. But then the
argument I have offered goes wrong.
This objection implies that dispositions to respond to information about counterevidence
with more investigation can make an evidential difference. That is doubtful.
Suppose you think that your belief is rational because it is based on your evidence;
this is typical for beliefs about many controversial topics. Now, by accepting the ar-
gument’s conclusion, you think your evidence doesn’t rationalize your belief. That
serves to undermine your belief. You may be disposed to investigate more when-
ever you learn of unpossessed counterevidence. And it may be that once you do
investigate, you’ll find the counterevidence is actually misleading; but you haven’t
as of yet. Given that the argument undermines your belief, how does having those
dispositions change anything?
To see more clearly how the objection fails, imagine that you and your evidential
twin have both worked through the argument and find yourselves thinking your
evidence does not rationalize belief in p. If this objection succeeds, then so long as
you are disposed to investigate and your twin is not, you are rational to accept p
but your twin is not. Quite curious indeed. Both of you accept that your evidence
does not rationalize belief in p. One difference between you and your twin is your
disposition to investigate more. Yet it’s doubtful that having such a disposition
contributes anything at all to your evidence which might stop belief in p from being
undermined. While I gladly admit that it is good to be well-disposed intellectually
in the teeth of unpossessed counterevidence you can’t dismiss, I doubt that being
so disposed implies you avoid irrationality.
A second worry about the argument is that it is somehow self-defeating.24
Roughly, the idea is that it won’t be rational to accept one or other premise, given
the sort of epistemic principles in play. Suppose you believe p and then learn there
is unpossessed counterevidence for p. You come to believe the premises of the ar-
gument (relative to p). It may be that p is one of the argument’s premises. Now, if
the argument holds relative to one of its premises, then your belief in that premise
stands irrational. The argument may therefore ‘backfire’ on whoever uses it: be-
lieving a premise could lead one to think that very belief is irrational. But then it
appears doubtful that one should accept the argument.
In reply, it is critical to see that the argument is only contingently self-defeating,
if it is self-defeating at all.25 Remember how we applied the argument to your belief that you have
hands. As I observed in section 4, you can sensibly dismiss unpossessed counterev-
idence, the variety contained in nihilistic metaphysical volumes, for the belief that
you have hands. Even so, it is possible that you will one day stumble into an unfor-
tunate situation where you can’t dismiss such evidence. Similarly, you might gain
information about unpossessed counterevidence for your belief in one of the argument's
premises. But you might not. And even supposing you do learn of counterevidence
you don’t have, you might be able to dismiss it. We should wonder: what is the
unpossessed counterevidence that undermines belief in some premise, and can we
dismiss it? Without filling out the proposal that there’s self-defeat, it is difficult to
evaluate this objection.
Even if you’re unable to dismiss some counterevidence for a premise you accept, I
am unsure what that shows about the truth of the premise. Showing that the premise
is not rational to believe is not the same as showing it is false. Indeed, the self-defeat
24 Conversation with Nathan King and Craig Warmke led me to include discussion of this
objection.
25 Christensen [2009: 762] distinguishes between "potential" and "automatic" self-undermining
for principles related to rational disagreement.
worry is consistent with the following possibility: though the premise is true, there
are possibly situations in which we can’t rationally believe it. Such situations may
be paradoxical, but it is far from clear whether they reveal a significant defect with
the argument. So, I am not convinced that potential self-defeat is a good reason to
reject the argument, though it may suggest that there are situations in which using
it is problematic.
3.6 Conclusion
Evidence we don’t have makes up most of the evidence there is. I have tried to take
this fact seriously. It is an unavoidable fact. With the rising tide of specialization
in many fields, any individual thinker can only grasp an infinitesimal share of the
collective evidence.
If the argument I have given comes as a surprise, we should notice that the
importance of unpossessed evidence is regularly highlighted by our commonplace
thinking. Witness, for instance, the student who comes to class with convictions
about matters of serious debate. This student has not studied the canonical books
and articles in the field—in fact, he doesn’t even know about these works. You
explain to this student there is evidence he lacks; he remains confident in his views.
It is right and good, of course, to criticize unbridled belief in the face of unpossessed
evidence and we typically do just that. The argument I have offered here shows how
this criticism can turn inward. Confronted with evidence we don’t have, we may
deserve criticism, too.26
26 For comments, conversation, or correspondence, I’d like to thank Alex Arnold, David Christensen, E.J. Coffman, Stew Cohen, Tom Crisp, Nathan King, Keith Lehrer, Craig Warmke, and Benjamin Wilson. I am grateful to audiences at the University of Waterloo and Saint Mary’s University in Halifax, where ancestors of this paper were presented in early 2010.
CHAPTER 4
REASONING ABOUT BIASES
“The history of human opinion is scarcely anything more than the history of
human errors.” – Voltaire (Philosophical Dictionary)
Psychologists report that humans predictably fail on many tasks of judgment and
reasoning (see Gilovich, Griffin, and Kahneman [2002]; Kahneman and Tversky
[1982]; Nisbett and Ross [1980]). What does the empirical evidence tell us about
human rationality?
Some theorists and observers insist that the empirical results raise doubts. But
what are the reasons for these doubts, and what exactly is being doubted? Theorists
have long waged so-called “rationality wars” over such questions, drawing both
pessimistic and relatively optimistic conclusions from the evidence of our tendency
to err. Some argue that the evidence shows we are not rational (Stein [1996]; Samuels
and Stich [2004]). Other theorists are more optimistic and allege that rationality
(or at least a psychologically realistic version thereof) is vindicated by either a
proper understanding of the empirical results (Gigerenzer [1991]) or philosophical
considerations (Cohen [1981]).
Theorists have typically considered the implications of the empirical results for
humanity in general, including those of us who are not even aware of the results. Cu-
riously, theorists have almost universally left reasons and reasoning on the sidelines.
According to me, this is an oversight. Learning of biases can have a critical impact
on our reasons. We discover our limits and frailties as truth-seekers; we can reason
about the effect of the empirical evidence on our attitudes. I shall here contend that
one central route to understanding the rational significance of the evidence of biases
comes through investigation of our reasons in light of that evidence. To that end,
I will reflect on the import of the empirical results for humans who are themselves
aware of the results. How should learning of the human propensity to err affect our
thinking?
That is our question. It has been largely ignored.1 In section 1, I will explain
what is distinctive about the kind of arguments I’m after by comparing them to one
standard argument often discussed by psychologists and philosophers of psychology.
In section 2, I’ll develop an argument suggested by Richard Foley [2001: ch. 3]. Then
in section 3 I shall offer a new argument that takes off from the argument of section 2. In
section 4, I will consider an entirely different argumentative strategy that shows how
evidence of biases can lead to doubts about how we accommodate counterevidence
for our attitudes. Finally, I’ll conclude in section 5 with reflections on the role of
these empirical results in epistemological discussions. Empirical research on human
judgment offers a rich source of evidence to aid in the evaluation of our attitudes.
4.1 The Question
Experimental work on biases has shown that humans systematically suffer from confirmation biases, covariation illusions, base rate neglect, overconfidence bias, hindsight bias, and self-serving bias, among many other errors. Since these results are now
widely discussed outside of psychology departments, we need not rehearse them.2
My intentions here are purely epistemological, you might say: I want to explore
the rational impact of information about our tendency to err upon our attitudes.
In subsequent sections, I will discuss three argument schemas that clarify how in-
formation regarding biases makes a difference for the rationality of our attitudes.
Allow me first to comment on the nature of the arguments on offer by contrasting
them with one standard biases-to-irrationality argument.

1 But Foley [2001: ch. 3], Elga [2005], and Kelly [2008b] each touch on similar questions. I discuss Foley’s efforts in section 2.

2 Gilovich [1991] offers a useful introduction.
While discussing the evaluative significance of the empirical evidence, theorists
have often proposed normative ideals along the following lines:
[T]o be rational is to reason in accordance with principles of reasoning
that are based on rules of logic, probability theory and so forth. (Stein
[1996: 4])
The presence of an error of judgment is demonstrated by comparing
people’s responses [...] with an accepted rule of arithmetic, logic, or
statistics. (Kahneman and Tversky [1982: 493])
Then it is noted that the empirical evidence indicates that many humans often form
attitudes in violation of the normative canons. It follows that many humans are
often irrational with respect to the attitudes in question. Sketchy as this is, let us
call it the Standard Argument.
This argument has not achieved a consensus. In the “rationality wars”, it has
been subject to voluminous debate (see Samuels and Stich [2004] and Rysiew [2008]
for useful discussion). As I’ve said, I want to investigate how learning about biases
should affect our attitudes. The Standard Argument concludes, again roughly, that
most people are often irrational. Even granting this, some people are not irrational.
It’s helpful to put our question as follows: what happens when those people find out
about the empirical evidence of error? We want to determine the correct response
to learning of that evidence. The Standard Argument is apparently not designed
to provide us with a reason to give up some attitude. And I doubt that it has
been effective in changing anyone’s first-order views. After all, we might not even
be included among those who are in error; or we might not even know about the
premises and thus not be rationally required to accept its conclusion.
We may compare the Standard Argument and the biases-to-irrationality argu-
ments developed below. (i) For the Standard Argument to work, subjects need
not reflect on the empirical evidence; even if many subjects are oblivious of that
evidence, it (allegedly) renders at least some of their attitudes irrational. The argu-
ments I’ll consider, though, require that subjects are aware of that evidence. The
contrast here is between making mistakes and learning that you’re part of a group
that makes mistakes. The Standard Argument goes through if it is a fact that its
premises obtain, whereas the arguments below succeed just when thinkers gain reasons that enable them to move from premises to conclusions. (ii) The Standard
Argument would show that humans have always been irrational, long before the
evidence of biases came to light. With the biases-to-irrationality arguments I’m
setting out, acquiring the empirical evidence makes an epistemic difference. (iii)
The Standard Argument focuses on a certain kind of irrationality that involves a
violation of particular formal ideals. The arguments I’ll give allege that particular
attitudes are epistemically irrational—an evaluative status tied to norms of belief
revision and good reasoning. These arguments furnish us with reasons to change our
views. It may even be that the empirical studies are subtly flawed. If we possess the
evidence and know nothing of the flaws, however, the evidence will have an impact
on us in just the same way it would were the studies not flawed.
Before moving on, let me comment on the scope and nature of the arguments.
First off: what attitudes are targeted by these arguments? It is unlikely that any
of us are or have been biased in our attitudes regarding whether we have hands,
whether clouds are white and sometimes grey, whether heavy objects fall when
dropped, among many other ordinary matters. So, the arguments will focus on
“non-ordinary” attitudes. As we proceed, the reader is invited to keep in mind
attitudes about the following sorts of matters: whether free will is compatible with
determinism, whether capital punishment is an effective deterrent of capital crimes,
whether all value is human-centered, and the like.
There is a real challenge in assessing the impact of evidence of error on our own
judgments. The reason is simple: the processes that give rise to most biases are
not reliably detected by introspection (see Wilson, Centerbar, and Brekke [2002]).
In fact, psychologists have recently identified a “bias blind spot”, a kind of ‘bias’
bias (see Pronin, Lin, and Ross [2002] and Ehrlinger, Gilovich, and Ross [2005]).
Not only are we subject to various biasing effects, but we often err when making
judgments about our susceptibility to biases. More specifically, we tend to think
that our own attitudes are less prone to bias than the attitudes of others, because
we depend on introspection for evidence of bias in ourselves, whereas we use ‘lay’
theories of bias when assessing bias in others. The following reasoning, therefore,
won’t go far: “I see that my attitude is influenced by a bias. So, that attitude is
irrational.” Such reasoning cannot tell us much about the impact of the empirical
evidence. From the inside, biased attitudes are by and large just like non-biased
attitudes.3 We need to be tipped off to the threat of bias more indirectly. The
arguments I will explore in this paper shall attempt to do just that.
4.2 The Statistical Syllogism Argument
I will start with a rough first thought about the connection between our evidence
of error among the general population and our own attitudes. If we have reason to
think that most humans’ attitudes in some domain are biased and thus irrational,
and we’re like most humans in the relevant respects, then we have reason to think our
attitudes in that domain are irrational. Following some refinement, that thought
will yield an interesting biases-to-irrationality argument. The good news is, we
needn’t start from scratch. Foley has noticed some “first-person epistemological issues” raised by empirical evidence of error [2001: ch. 3].

3 Here is how Wilson and Brekke put the point, comparing bias to a contaminant: “In the physical realm... there are often observable symptoms of contamination, even when the process of contamination is unobservable. Although people cannot observe rhinoviruses, a stuffed-up nose tells them they have a cold. If one is wondering whether a gallon of milk is fresh or spoiled, a quick whiff will reveal the answer. There are seldom such observable symptoms, such as smell, temperature, or physical appearance, indicating that a human judgment is contaminated. As a result, people are often unaware that their judgment is “spoiled”... Human judgments—even very bad ones—do not smell” [1994: 120–121].
Foley uses studies of unstructured personal interviews to address his questions.
In the interview studies, one-hour interviews have been shown not to increase the
accuracy of interviewers’ predictions about the future accomplishments or behaviour
of interviewees [2001: 55–56]. For instance, one study of medical school admissions
revealed that when short interviews were used to supplement “impersonally gath-
ered” information (e.g., MCAT scores, GPA, quality of undergraduate institution),
admissions committees’ judgments did not become more accurate. In fact, inter-
views led to judgments that were less accurate on average than judgments formed
solely on the basis of the impersonally gathered information.4
Foley writes of his main questions:
On the assumption that it is reasonable for me to accept at least some of
these studies at face value, the question is what effect the studies should
have on me when I am the one making the judgments. If people in general
are unreliable when they make certain kinds of judgments, should this
undermine my trust in my own ability to make such judgments reliably?
If so, to what extent? [2001: 58]
It is useful to organize Foley’s overall position as an argument that presses us toward
the thought that one of our attitudes (that is, a belief, a disbelief, or a suspension
of judgment) is epistemically irrational. What Foley says is insightful, but we must
clarify and evaluate what he says. Though his work suggests the argument I will
set out, we should not attribute it to him. For one, I will depart from Foley’s
favoured sense of ‘epistemic rationality’, which requires a kind of counterfactual
invulnerability to self-criticism (see his [2001: ch. 2]). Unlike Foley, I won’t cast
the argument in terms of ‘self-trust’. For another, I’ll fill in some important blank
spaces in his argument, elaborating as needed. But Foley’s important discussion will nonetheless shape ours.

4 See, for example, DeVaul et al. [1987] and Milstein et al. [1981].
Here is the argument:
(1) By learning about the empirical evidence of error, I have a reason to
think that my attitude A is unreliably formed.
(2) If I have a reason to think that my attitude A is unreliably formed,
then A is irrational.
Therefore,
(C) My attitude A is irrational.
This argument is valid. (2) should be uncontroversial. It captures what is often
called a “no-defeater” condition—one version of a widely affirmed necessary condi-
tion on epistemically rational, justified, or warranted belief (see Bergmann [1997]).
The critical question is why we should accept (1). Support for (1) involves a statisti-
cal syllogism and so I’ll call the whole argument the Statistical Syllogism Argument.5
A non-numerical version of a statistical syllogism can be put as follows:
Most Xs have feature F.
This is an X.
Therefore,
This has F.
Suppose you have a reason to believe that most Xs have feature F. Then you find an
X. Given what you’ve reason to believe about most Xs, under what conditions can
you sensibly think that this particular X has F? Foley doesn’t himself characterize
his discussion in terms of the statistical syllogism, but it is a natural understanding
of his approach. I take him to be outlining conditions under which having a reason
to think, approximately, that most humans’ attitudes are unreliably formed fails to be a sufficient reason to think my attitudes are unreliably formed.

5 Thanks to Alex Arnold and Stew Cohen for conversations as I was deciding between different formulations of this argument.
The first step in the statistical syllogism is just the gist of the empirical evidence
of error regarding some class of attitudes:
(3) Most humans unreliably form attitudes in a particular class of atti-
tudes CA.
In a particular instance of (3), the relevant class of attitudes, CA, is demarcated by
the empirical evidence. For instance, research on ‘interview bias’ focuses on judg-
ments of a candidate’s future performance that are formed partly on the basis of
a short, unstructured personal interview. In this case, CA includes just those at-
titudes concerning the candidate’s future performance that are formed under those
conditions. To take another example, empirical work on the tendency to overrate
oneself concerns judgments of an individual’s own traits (relative to others) in par-
ticular conditions (see Taylor and Brown [1988]). Here, CA features only attitudes
regarding one’s own traits formed in those conditions. Of course, there are often
many attitudes within any particular class as it is demarcated by some empirical re-
search. If the statistical syllogism holds, we’ll have reason to think that a particular
attitude of ours is included in that general class of attitudes.
(4) My attitude A is unreliably formed (where A is a member of CA).
Of course, (4) gives us (1); so if the statistical syllogism holds, (C) follows. The
critical moment in the argument is clearly the syllogism. Foley’s key contribution
can be understood as specifying three distinct defeat conditions for that step.
The late John Pollock taught us that there are two main types of defeaters:
undercutting and rebutting. All defeaters are reasons for giving up some attitude
A to proposition p. Undercutting defeaters are reasons that attack the connection
between A and its grounds R. So an undercutting defeater is consistent with the
truth of p, but it removes or neutralizes R in such a way that taking A to p be-
comes unsupported by R and thus irrational. Rebutting defeaters, on the other
hand, attack p itself. Rebutting defeaters for A to p are reasons to adopt attitudes
incompatible with A. How do defeaters work in the case of our statistical syllogism?
Undercutting defeaters are reasons that attack the inference from (3) to (4) whereas
rebutting defeaters are reasons that attack (4) itself.
Translated into the terms of the present argument, Foley’s idea is that (3) sup-
ports (4) if we lack reason to deny we are like most humans in the relevant respects.
To defeat the statistical syllogism, Foley says it’s not enough to be unlike most
humans or to assert that we are unlike them—we need a reason to think we’re
relevantly different [2001: 63].6 Here are the three defeaters that Foley proposes:
D1: I am a member of a “protected class”, that is, a group whose members are not unreliable with respect to a class of attitudes CA [2001: 63].
D2: My attitude A was formed (or sustained) using an ‘anti-unreliability’
strategy [2001: 63–68].
D3: For most people, the attitudes in a class CA are unreliably formed due to some biasing factor BF, and, by self-monitoring or recalibrating, I have reason to think my attitude A is not influenced by BF (where A is a member of CA) [2001: 68–72].
The idea is that if we have a reason to think one of D1–D3 obtains, the statistical
syllogism fails. D1 is a rebutting defeater for accepting (4) because having reason
to think D1 obtains gives us a reason to reject (4). D2 and D3 are both under-
cutting defeaters for accepting (4) because gaining reason to think either one holds
neutralizes any support that (3) gives to (4).
6 I will assume here that if we gain reason to think a defeater obtains, the inference fails. I thus overlook some subtleties concerning ‘partial’ defeaters.
The Statistical Syllogism Argument needs some refinement. But let’s first briefly
unpack and discuss D1–D3, beginning with D1. For at least some types of biases,
there are specific groups of humans who do not tend to be subject to those effects.
As Foley notes, economists are less likely to commit “sunk cost” fallacies than the
general population. And clinically (unipolar) depressed people are less likely than
non-depressed people to systematically overrate themselves on ability or attainment
in some domain; they aren’t as likely to harbour “positive illusions” about them-
selves (see Taylor and Brown [1988]). Belonging to such a group means we’re in a
“protected class”—our attitude is likely safe from harmful biases. The statistical
syllogism is defeated when we have reason to accept D1.
We can move along to D2. Psychologists have discovered that the form in which
information is presented can influence the likelihood of particular errors we make in
response to it. For instance, some biases in statistical reasoning can be reduced by
presenting subjects with relative frequencies (e.g., “40/100”) rather than single event
probabilities (e.g., “40%”) (see Gigerenzer [1991]). By evaluating information in a
form that helps us avoid errors, we utilize a strategy that guards our attitude against
unreliability. This is reflected in D2. There are other kinds of anti-unreliability
strategies. For example, collaborating with others can protect us from the effects of
biases (see Heath et al. [1998]).7 When we have reason to think our attitude A has
been formed using such a strategy, the statistical syllogism fails.
With D3, Foley gives us two more ways to block that syllogism: self-monitoring
and recalibrations. Self-monitoring is supposed to detect relevant evidence in the
context of forming or holding some attitude. For instance, interviewers who are
aware of the tendency of short, unstructured interviews to skew the accuracy of atti-
tudes might try to identify introspectively any tendency on their part to be, as Foley
puts it, “unduly swayed by the mannerisms or the appearance of the interviewee”
[2001: 69–70]. Introspection offers but one source of evidence for self-monitoring. According to Foley, effective self-monitoring will also draw on “public evidence”. If there are other interviewers present, for instance, an interviewer can compare her impressions with those of her colleagues [2001: 70–71]. Insofar as the impressions of others conflict with her own, she has occasion to seek out the source of the difference. Closely related to self-monitoring are recalibrations. Even if we’re unable to avoid biasing tendencies, we may be able to adjust our attitudes to compensate appropriately [2001: 71]. If you know that some bias leads you to overestimate some probability, say, then a recalibrated estimate will be lower.

7 Foley appreciates the potential role of groups [2001: 77–78], but doesn’t make much of the point in connection with defeating the statistical syllogism.
Foley rightly sees limits for self-monitoring and recalibrations. They will come
up short when we are ignorant of the factors that lead most subjects to form biased
attitudes:
[I]f I have no sense of what it is about personal interviews that causes
interviewers to be unreliable in their postinterview assessments of inter-
viewees, I will have no clear idea about how to monitor myself for the
error-producing factors or how to recalibrate my opinion in light of them.
[2001: 71–72]
Successful self-monitoring and recalibrations thus require us to reasonably think
that, for most people, attitudes in a class CA are unreliably formed due to a partic-
ular biasing factor BF. Being able to identify BF is crucial. Self-monitoring provides
reason to think that our attitude A is not due to BF. And recalibrations provide reason to think we’ve appropriately compensated for BF’s presence by setting A where it would be absent BF. Either way, if one of these strategies is properly deployed,
Foley suggests we can enjoy reason to think we are not influenced by BF in the way
most people are—and so that we’re not like most people in the relevant respects
[2001: 72]. There is considerably more to say here, but I’ll let it rest for now.8
What shall we say about the Statistical Syllogism Argument? It is crucial to appreciate that a reason to think a defeater obtains isn’t necessarily the end of the story: that reason can be defeated, and the reason that defeats the initial reason can itself be defeated, thereby reinstating the initial reason, and so on. Whether the Statistical Syllogism Argument renders an attitude ultimately irrational may be a complicated matter indeed.

Are D1–D3 the only potential defeaters for the statistical syllogism? No. I will extend what Foley has offered by identifying an important sort of undercutting defeater. Let us say that we have reason to think most Xs have feature F. Then we find a particular X—call it X1. An undercutting defeater prevents our reason for thinking most Xs have F from giving us reason to accept the claim that X1 has F; a rebutting defeater, on the other hand, is a reason for thinking X1 does not have F. D1–D3 aren’t the only relevant defeaters. Another class of defeater depends on the state of being epistemically unclear about whether X1 has F. The idea is that, given all of our evidence, we should suspend judgment on the question of whether X1 has F. Being in a state of unclarity (‘unclarity’ for short) is not a reason that attacks the claim that X1 has F, and so it is not a rebutting defeater. Instead, a state of unclarity attacks the inference from most Xs have F to X1 has F; it is an undercutting defeater.9

8 Foley remarks that “[s]ometimes it can be reasonable to rely on careful self-monitoring [and recalibrations] to guard against [errors due to bias]” [2001: 73]. But he seems to overestimate our ability to self-monitor and recalibrate. Foley writes that “[w]ith respect to many of the mistakes of reasoning and judgment described in the literature, there have been studies of this sort, and, for the most part, they indicate that to be forewarned is to be forearmed” [2001: 74, emphasis added]. That is an overly cheerful outlook. In various discussions of what’s required to ‘debias’—roughly, to identify bias and adjust judgment as needed to prevent its untoward effects (see Wilson and Brekke [1994] and Wilson, Centerbar, and Brekke [2002])—theorists have observed that informing subjects about biases does not always lead subjects to effectively control their responses. For instance, in one experiment, mock jurors heard testimony that was later shown to be inaccurate (e.g., an eyewitness is revealed to have poor vision and admits to not having worn his spectacles at the time of the murder). Jurors were able to discount discredited testimony, though not entirely: jurors’ responses were adjusted, but they were unable to completely eliminate the effects of the discredited testimony. Merely being alerted to the biasing information wasn’t enough to prevent the jurors’ judgments from being influenced by it (see Wilson and Brekke [1994: 130–131]). For a striking real-world example, see Lewandowsky et al. [2005], reporting on their study of the impact of retractions of misinformation during the American-led war in Iraq. It turns out that subjects in the United States did not show sensitivity to the correction of war-related misinformation, whereas subjects in Australia and Germany discounted the corrected misinformation.
With this idea of unclarity in hand, we can return to the Statistical Syllogism
Argument. Foley does not hint at unclarity, but it is easy to see how such a state
is relevant to assessing whether the syllogism goes through. Suppose we know that
most Xs have F and that a small class of Xs have a property D that makes it unlikely
that they have F. Does the evidence regarding the general population of Xs furnish
reason to think X1, one particular X, has F? That depends on what else we have
reason to think. Perhaps we know that X1 has a feature B and that B, on its own,
makes it likely that X1 has D (again, D is a feature that makes it unlikely that an
X has F). We may also suppose we know that X1 has a feature C that, on its
own, makes it likely that X1 has F, and thereby unlikely that X1 has D. Then our
information runs dry. We know that X1 has B and C. But we have no idea whether
an X that has both B and C—just like X1—is likely or unlikely to have F. And so, on
the basis of our overall information, we should suspend judgment regarding whether
X1 has F. To put it another way, it is epistemically unclear to us what the fact that
X1 possesses B and C implies for the likelihood of X1’s having F. By learning that
X1 has both B and C, we should be unsure whether X1 has F even though we know
that most Xs have F. That is one way we might gain an undercutting defeater for
the inference from most Xs have F to X1 has F. Our stock of information keeps us
in darkness, as it were, about whether X1 has F, despite what we know about most
Xs.
9 A discussion with Brandon Warmke pushed me, fittingly, to clarify some of what I say about unclarity.
Here is a further example of unclarity, applied to the topic at hand. Perhaps we
know that most people tend to commit “sunk cost” fallacies and that economists
as a group are one of the few “protected classes”. Then we reflect on whether
Smithers probably tends to commit that fallacy.10 Perhaps we don’t know much
about her background. For all we know, she may be a kindergarten teacher, a cellist,
a Quartermaster General, or whatever. Insofar as it is unlikely that Smithers enjoys
membership in one of the few protected classes, or is otherwise guarded by anti-
unreliability strategies or self-monitoring or recalibration, what we know about the
general population gives us reason to think that Smithers—like most people—tends
to commit sunk cost fallacies.
But then we learn a little more: Smithers is not a professional economist, but her
husband is and several of her friends are as well. Can we now run a simple induction
from most humans tend to commit sunk cost fallacies to Smithers tends to commit
them? I doubt it. That’s because it is now unclear whether the information about
the general population tells us whether Smithers is likely to commit that fallacy,
given the information regarding Smithers’s relationships to people who are squarely
in a protected class. To be sure, knowing that Smithers is close to economists is
no guarantee that she is unlikely to commit sunk cost fallacies. Yet that bit of
evidence appears to be grounds for us to be unclear on whether what we know
about the general population supplies a reason to think she is likely to commit the
fallacy. Learning of Smithers’s close ties to economists can thus be an undercutting
defeater for accepting that Smithers tends to commit sunk cost fallacies. The point
is not that we’ve gained reason for thinking Smithers doesn’t have the tendency to
commit the fallacy—that is, we do not acquire a rebutting defeater for the conclusion
in question. The point is that we should now be unsure what information regarding
the general population tells us about Smithers. Being in that state of unclarity
furnishes an undercutting defeater for the inference in question.

10 We could construct an example in the first-person, but the third-person makes the illustration clearer.
Returning to our Statistical Syllogism Argument, unclarity can defeat the inference
from (3) to (4). Easily enough, we can be unclear regarding the obtaining of D1–D3.
That is, we may be unclear regarding whether we have membership in a protected
class; we may be unclear on whether our attitude A has been formed using an
anti-unreliability strategy; and it may be unclear to us whether, for most people, the attitudes in some class CA are unreliably formed due to a biasing factor BF and also
whether we have effectively used self-monitoring and recalibrations regarding our
attitude A. Unclarity is not hard to come by. As is often enough our predicament,
we lack reasons for thinking one of D1–D3 does or does not obtain. Perhaps our
reasons ‘combine’ in just such a way that we cannot tell whether or not any of
D1–D3 hold for us—some reasons indicate they apply, other reasons indicate they
do not. It just isn’t easy to see. When that happens, we will have an undercutting
defeater for the statistical syllogism.
To understand why, recall again the critical inference:
(3) Most humans unreliably form attitudes in a particular class of atti-
tudes CA.
(4) My attitude A is unreliably formed (where A is a member of CA).
As we have seen, D1 is a rebutting defeater because having reason to think it obtains
gives us a reason to reject (4). D2 and D3 are both undercutting defeaters because
gaining reason to think either one holds neutralizes any support that (3) lends to
(4). Being in a state of unclarity with respect to D1–D3, given our total evidence,
furnishes an undercutting defeater. Here, we don’t have reason to think A is unreli-
ably formed. The idea, once more, is that on the basis of our total evidence—which
includes (3) and whatever supports it—we should not accept (4).
If it is epistemically unclear for us whether or not D1–D3 obtain, then it is not
rational for us to think either that D1–D3 obtain or that D1–D3 don’t obtain.
We should rather suspend judgment (if we reflect) on the matter. If we are in
such a state—if it’s rational for us to be epistemically unclear, that is to say—then
we thereby have an undercutting defeater for the statistical syllogism. All of this
highlights a trio of new defeaters:11
D4: It is unclear to me whether I am a member of a protected class regarding a class of attitudes CA.
D5: It is unclear to me whether my attitude A has been formed (or sustained) using an anti-unreliability strategy.
D6: It is unclear to me whether, for most people, the attitudes in a class CA are unreliably formed due to some biasing factor BF and whether, by self-monitoring or recalibrating, I have reason to think my attitude A is not influenced by BF.
We have doubled the number of possible defeaters for the statistical syllogism. Just
because these six defeaters can block the inference doesn’t mean they do. Does the
Statistical Syllogism Argument show that some of our attitudes are irrational?12
I don’t mean to give a disappointing answer, but here it is: it depends. To give
a more satisfying answer, we will need a grasp of our actual reasons to think D1–D3
obtain in particular situations. Note well: if we lack reasons to think D1–D3 do or
don’t obtain, then D4–D6 enter the mix. That’s because lacking reasons to think
D1–D3 do or don’t obtain implies that we should suspend judgment with respect to
whether D1–D3 obtain. Being required to suspend judgment about that amounts
to being epistemically unclear regarding D1–D3—and that is just D4–D6.
11What follows is supposed to be unclear for us on the basis of our total evidence, but I’ve left that suppressed.
12Recall that Foley suggested D1–D3 defeat the statistical syllogism just if we have reasons to think they obtain. I suspect the same is not true for D4–D6. These defeaters can block the inference from (3) to (4) just in virtue of obtaining, it appears to me, and so their effectiveness does not require us to have reasons to think they obtain. But what I say here is consistent with the assumption that D4–D6 only make a difference if we have reasons to think they obtain.
So, to repeat: do we ever have reason to think D1–D3 do or don’t obtain?
How often are we protected from the argument by D1–D6? Proceeding requires
generalization. That seems to be the best we can muster here anyway, for we can’t
easily set out in detail our reasons for each attitude that is potentially challenged by
the Statistical Syllogism Argument. I shall instead attempt to reflect more generally
on the sorts of reasons we possess, asking whether and how often they leave us
threatened by the Statistical Syllogism Argument. I am trying to assess how much
damage the argument will inflict, given the typical character of our reasons.
To start with, we either have considerations regarding D1–D3 to weigh or we
don’t. Supposing that there are no considerations to reflect on—excluding whatever
supports (3) for us, of course—we can have no reason to think that any of D1–D3 do
obtain and no reason to think they don’t obtain. Then the statistical syllogism will
go through and (C) will follow. But set this possibility aside. Whenever we have
considerations regarding D1–D3, and we often do, there are three distinct situations
we might find ourselves in.
(a) We have reason to think each of D1–D3 does not obtain. Then the
statistical syllogism succeeds.
(b) We have reason to think at least one of D1–D3 obtains. Then the
statistical syllogism fails.
(c) Regarding at least one of D1–D3, our reasons are such that we should
suspend judgment regarding whether it does or does not obtain. And
so we’re in a state of unclarity that entails one of D4–D6 and so the
statistical syllogism fails.
Supposing we’ve got considerations that bear on D1–D3, (a)–(c) reveal what is
required for the Statistical Syllogism Argument to go through. It succeeds only
on (a): we need reasons to think that D1–D3 do not obtain. If (b) or (c) instead
capture our situation, the argument fails.
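The case analysis above can be set out as a small decision table. As an illustrative sketch only, and no part of the dissertation's own apparatus, the following Python fragment (all names mine) records the bookkeeping, supposing we have considerations bearing on D1–D3:

```python
# Illustrative sketch of cases (a)-(c). Each of D1-D3 gets one of
# three evidential statuses for a thinker:
#   "reason_against" - reason to think the defeater does NOT obtain
#   "reason_for"     - reason to think the defeater DOES obtain
#   "unclear"        - our reasons require suspending judgment

def syllogism_succeeds(d1, d2, d3):
    """The statistical syllogism goes through only in case (a):
    we have reason to think each of D1-D3 does not obtain.
    Case (b) (some "reason_for") or case (c) (some "unclear",
    which activates one of D4-D6) makes the inference fail."""
    return all(status == "reason_against" for status in (d1, d2, d3))

# Case (a): the inference from (3) to (4) succeeds.
print(syllogism_succeeds("reason_against", "reason_against", "reason_against"))  # True

# Case (b): reason to think D2 obtains, so the syllogism fails.
print(syllogism_succeeds("reason_against", "reason_for", "reason_against"))  # False

# Case (c): unclarity about D1 entails D4 and defeats the inference.
print(syllogism_succeeds("unclear", "reason_against", "reason_against"))  # False
```

The sketch leaves aside the separate case, discussed above, in which we have no considerations regarding D1–D3 at all, where the syllogism also goes through.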
Here is my contention. Regarding lots of our threatened attitudes, it is going to
be unusual for (b) to sum up our reasons. But then either (a) or (c) will describe our
typical situation. For (b) to hold, of course, we’ll need reason to think at least one of
D1–D3 obtains. I will argue that we regularly lack any such reason regarding each
of D1–D3, in virtue of the character of these defeaters and our relative ignorance
about the operation of biases. Let’s see why, looking at the defeaters in reverse
order.
D3: For most people, the attitudes in a class CA are unreliably formed due to some biasing factor BF and, by self-monitoring or recalibrating, I have reason to think my attitude A is not influenced by BF (where A is a member of CA).
As Foley observed, in a given case, we might not know what the biasing factor is.
But without such information, we necessarily lack reason to think D3 obtains. On
the (doubtful) assumption that we often do know enough about the biasing factor to
acquire reason to favour D3, there is even more trouble here, as I’ve already noted.
The problem owes much to our reliance on introspection to detect potential influence
from biasing factors. To repeat: according to psychologists, the processes that
generate biases are not reliably detected by introspection. This makes it doubtful
that introspection positions us to either self-monitor or recalibrate; after all, both of
these require information about the impact of biasing factors upon reasoning and
judgment. Even if we took ourselves to have an introspective reason R to think that
D3 obtains, learning of the empirical evidence concerning our introspective abilities
should significantly reduce our confidence in R.
Introspection isn’t the only way to support D3. Foley has also invoked the idea of
“public evidence”. If we believe that p and then discover that other thinkers agree,
this is supposed to favour D3. So even if we reject the deliverances of introspection,
D3 may still enjoy some support. I doubt that, in part because pervasive social
biases may lead everyone to think alike—if everyone is subject to “groupthink”,
what good is the agreement of others? That aside, public evidence won’t normally
help when p is an interesting proposition—the kind of claim over which there is
conflict. For instance, return momentarily to Foley’s own example: you are a medical
school admissions interviewer. Following unstructured interviews, there is always
disagreement over candidates—who is the best, who has promise, who is strong on
paper but a dud in person, and so on. It so happens that four of your colleagues
agree with you on some point of contention and five disagree. How can you rely on
public evidence in such a situation? Foley’s suggestion is that “you have an occasion
to seek the source of the differences, which in turn may lead to the discovery of
idiosyncrasies or biases” [2001: 70–71]. The upshot: when there is conflict over
some attitude, as there will be with many non-ordinary matters, and you’re without
a good reason to think your viewpoint is better than an alternative viewpoint, D3
won’t be boosted by public evidence.
So much for D3. How about the other two defeaters?
D2: My attitude A was formed (or sustained) using an ‘anti-unreliability’
strategy.
For most of our attitudes threatened by bias, we likely lack awareness of anti-
unreliability strategies. That isn’t to say there are no such strategies. The concern
is just that we will not be positioned to identify them for many of our attitudes,
absent serious help from social scientists. But let us assume that we have an anti-
unreliability strategy available. Perhaps we work within a scientific community: it
serves to eliminate biases through institutions such as blind peer review and professional incentives. We might come to worry, even here, that an apparently effective strategy is not in fact effective. We might discover that not all science is successful:
one reviewer suggested that the claims put forward in the primary research journals
in physics are “90% false”. If that’s correct, the vast majority of new hypotheses
tabled by scientists are wrong, even though textbooks are “90% true” (see Ziman
[1978: 40]). In light of this, shall we still rely on our community to deliver us from
biases? As I see it, it may be quite difficult to accept D2, given that doubts about
anti-unreliability strategies are easy to conjure up.
Here, finally, is the third defeater:
D1: I am a member of a “protected class”.
Briefly, the main issue with D1 is that we often don’t ourselves have detailed enough
information about the boundary lines between protected and unprotected classes.
For many biases, we’ll have only the vaguest sense of what counts as being protected. Articles in psychology that detail mistakes do not ordinarily come along with a disclosure of what kinds of subjects aren’t prone to the mistakes. That said, work on ‘debiasing’ may in years to come clarify where the lines are in fact drawn (see Wilson,
Centerbar, and Brekke [2002]).
In an impressionistic spirit, I have argued that it is not usually the case that we’ve
reason to think at least one of D1–D3 obtains [i.e., (b)]. Earlier, I noted that the
statistical syllogism fails only on (b) or (c). So one route for avoiding the Statistical
Syllogism Argument is not often open to us. If I’m correct about that, then our
reasons are most often summed up by either (a) or (c)—that is, we either have
reason to think each of D1–D3 does not obtain [i.e., (a)] or our reasons regarding
one of D1–D3 are such that we should suspend judgment regarding whether it does
or does not obtain [i.e., (c)].
We know where (a) leads: if it holds, the statistical syllogism goes through. And
I have said that on (c) the syllogism fails. Insofar as (c) is the right summary of our
reasons regarding D1–D3, the following is true: those reasons don’t seem to point
clearly in one direction. They do not clearly indicate to us that we usually have
the properties that exclude us from being counted in the majority regarding some
biasing tendency. Nor do they clearly indicate that we usually do not have those
properties that would prevent us from being included in the majority. Wherever we
may happen to stand is often, though not always, unclear to us. In the next section,
I will argue that if (c) holds for us, then we will face a different challenge.
4.3 The Unclarity Argument
Before I introduce a second biases-to-irrationality argument, let’s recount one sig-
nificant lesson of the previous section. For the Statistical Syllogism Argument to
succeed, we need reason to think D1–D3 do not obtain or—when we have no rel-
evant considerations—no reason to think that D1–D3 do or do not obtain. If we
have reason to think that one of D1–D3 obtains; or if we should be epistemically
unclear about the obtaining of one of D1–D3 (and so one of D4–D6 obtains); then
the argument fails. The statistical syllogism is fragile. It is relatively easy for the
inference from (3) to (4) to go down in defeat. Short of having reason to think
we’re unlike the majority, we can be thrown into a state of unclarity about whether
we have or lack the properties that would exclude us from being counted in the
majority.
What I would like to observe now is that some of the states that defeat the sta-
tistical syllogism are states that, together with a little reflection, imply irrationality
for some attitude A of ours. Being in a state of unclarity regarding D1–D3 will
render A irrational. In this idea we’ll find a new biases-to-irrationality argument.
Before getting down to details, it will be instructive to consider one salient dif-
ference between the Statistical Syllogism Argument and the new argument. The
new argument is not driven by a statistical syllogism, but it turns out to be in-
timately related to the Statistical Syllogism Argument’s critical step. Let me say
more. Whenever the Statistical Syllogism Argument succeeds, it is our recognition
of evidence about the unreliability of most humans in some domain that is the basis
for our belief that we are unreliable and thus our belief that our attitude is irra-
tional. With the new argument, however, our recognition of the evidence about the
unreliability of most humans plays an essential role in our belief that our attitude is
irrational even though that evidence is not the basis for our belief. To put it other-
wise: although coming to appreciate the evidence of most humans’ tendency to err
plays an indispensable role in our coming to believe our attitude is irrational, that
belief is not based on that evidence.13
Beginning with the critical move in the Statistical Syllogism Argument, another
sort of biases-to-irrationality argument suggests itself. I will call it the Unclarity
Argument.
(3) Most humans unreliably form attitudes in a particular class of atti-
tudes CA.
(4) My attitude A is not reliably formed (where A is a member of CA).
Recall that undercutting defeaters due to being in a state of unclarity block the
inference from (3) to (4). Here is where the Unclarity Argument makes its entrance.
For simplicity’s sake, we can refer to any undercutting defeater that is due to a state
of unclarity as an unclarity defeater. Let’s assume that an unclarity defeater (i.e.,
one of D4–D6) holds in our case for the inference from (3) to (4):14
(5) It is epistemically unclear to me whether or not my attitude A is not
reliably formed.
By joining (3) and (5), we’ll see that together they do not support (4) and hence the
Statistical Syllogism Argument fails. Notice that (5) has an important implication:
(6) If it is epistemically unclear to me whether or not my attitude A
is not reliably formed, then I lack reason to believe that A is reliably
formed.
13Thomas Kelly notes that “[e]ven if one’s recognition that [reason] R plays an essential role in the causal history of one’s x-ing this is not sufficient for one’s x-ing to be based on R” [2002: 173].
14Again, I have suppressed that it is unclear to us on the basis of our total evidence.
(5) straightaway implies (6)’s consequent. If we have a reason to believe our attitude
A is reliably (or unreliably) formed, we will not be epistemically unclear about that.
Having a reason to believe that the attitude is reliably formed makes that matter
clear. But being epistemically unclear regarding a proposition p implies that, on
the basis of our considerations, we should suspend judgment regarding p. The idea
is that when we size up our reasons regarding p, we can’t tell what we should think
about p.
Moving on, (5) and (6) deliver this:
(7) I lack reason to believe that my attitude A is reliably formed.
And then comes a plausible principle:
(8) If I take myself to lack reason to believe my attitude A is reliably
formed, then A is irrational.
For starters, it’s critical to notice that (8) does not imply that having a rational
attitude A requires taking ourselves to have reason for A. That is a kind of ‘higher-
level’ requirement that plays no role in (8); it generates a skepticism-inducing regress
of reasons. By design, (8) avoids all of that.
So why accept (8)? It is the principle that moves us from I take myself to lack
reason to believe my attitude A is reliably formed to the claim that my attitude A
is irrational. What we think about the reliability of our attitudes affects whether
they are rational for us. And coming to think that we lack reason to believe we’ve
reliably formed some attitude is of no small significance. Suppose that you reflect
on the following higher-order proposition: my attitude A is reliably formed. If you
take yourself to lack reason to believe you’re reliable (or unreliable) regarding atti-
tude A, then—once you reflect on the higher-order proposition—you should suspend
judgment on that proposition. The trouble is, suspending judgment here brings ir-
rationality.
To see why, ruminate on a Pollock-esque example (due to Bergmann [2005: 424–
427]). Pretend that you are in a factory, observing a conveyor belt moving widgets
that appear to you to be red. You form the belief that the widgets are red. Then
Wilson approaches, gestures to the passing widgets, and menacingly asks you: “Are
those widgets red? Or do they just appear red because a red light is shining on
them to check for defects? Huh?” You have no reason to think there is no red light.
On reflection, you consider the following higher-order proposition: my belief that the
widgets are red is reliably formed. It is equally likely from your position that the
widgets are deceptively illuminated as that they are not. Here, you have no reason
to accept (or reject) the higher-order proposition and it is eminently plausible that
your belief that the widgets are red is irrational. You should give it up and suspend
judgment instead. (8) perfectly captures this judgment about the widget factory
example.
Finally, we can put (7) and (8) together to arrive at our conclusion:
(C) My attitude A is irrational.
The Unclarity Argument is valid. Each premise is initially plausible. It is another
route from information about biases to irrationality. But do we really have reason to
accept each premise? Does the argument threaten any of our attitudes? It does—if
we avoid the Statistical Syllogism Argument by way of unclarity defeaters. When-
ever we consider claims like most humans unreliably form attitudes in a particular
class CA [i.e., (3)] and find ourselves in a state of unclarity regarding whether we’re
relevantly unlike most people [i.e., (5)], we will need to face up to the Unclarity
Argument.
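The Unclarity Argument's chain from (5) to (C) can also be rendered as a small mechanical sketch. This is only an illustration of the argument's shape, using labels of my own devising, not anything the dissertation itself offers:

```python
# Illustrative sketch of the Unclarity Argument's inferential chain.
# Input models step (5): it is epistemically unclear to me whether
# my attitude A is reliably formed. The conditionals (6) and (8)
# are applied as inference rules (modus ponens).

def unclarity_argument(step_5_holds: bool) -> dict:
    """Trace the chain (5) -> (7) -> (C)."""
    # (6): unclarity about A's reliability implies lacking reason
    # to believe A is reliably formed; with (5), this yields (7).
    step_7 = step_5_holds
    # (8): taking myself to lack reason to believe A is reliably
    # formed implies A is irrational; with (7), this yields (C).
    conclusion_C = step_7
    return {"(7)": step_7, "(C)": conclusion_C}

print(unclarity_argument(True))   # when (5) holds, (C) follows
print(unclarity_argument(False))  # no unclarity defeater, no conclusion
```

The linearity of the sketch reflects the text's point: once (5) is in place, (6) and (8) carry us to (C) with no further premises about the first-order evidence.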
So (3) now pushes us toward (C) through two distinct channels. D1–D6 offer
escape routes from the Statistical Syllogism Argument. But whenever there is an
unclarity defeater (i.e., D4–D6) to block the inference from (3) to (4), we’ll find a fur-
ther challenge awaiting us—the Unclarity Argument. In other words, though unclar-
ity defeaters may save us from the Statistical Syllogism Argument, they threaten
us with the Unclarity Argument. Faced with (3) and hoping to resist (C), the best
escape from irrationality is reason to think one of D1–D3 obtains. As I argued in
the previous section, that is a somewhat unusual condition to find ourselves in.
4.4 The Irrational Accommodation Argument
It is time for something entirely different. Many kinds of bias prevent us from
rationally accommodating or evaluating evidence. Psychologists have observed that
our attitudes sometimes persevere in the face of strong counterevidence, where the
attitude should instead be extinguished (Ross et al. [1975] and Wegner et al. [1985]);
that gaining counterevidence sometimes has the polarizing effect of increasing the
strength of our initial attitude (Lord et al. [1979]); and that we sometimes engage
in motivated reasoning (see Kunda [1990] and Westen et al. [2006]). I shall argue
that learning about such biases will sometimes lead us to give up at least some of
our attitudes. To begin with, I will develop some general ideas about rationality
which will guide us toward a new biases-to-irrationality argument.
Most of us will hope to believe the following:
Expected Rational Accommodation (‘ERA’): If I am presented with coun-
terevidence for my attitude toward p, then I will rationally accommodate
it.
What counts as ‘rationally accommodating counterevidence’ in some situation hangs
on our overall evidential situation. Once the counterevidence is added to our set of
evidence, rational accommodation may call for attitude revision, either in kind or
degree. Or rational accommodation may call for ignoring the counterevidence and
maintaining the attitude. What is important for present purposes is that we may
come to have doubts that ERA holds for us in particular situations, relative to some
attitude toward p. Here is the trouble: suspending judgment on ERA threatens the
rationality of our attitude.15
Let me explain, starting with an example.
DRUG. You’ve just learned there is a drug that when ingested often
makes thinkers fail to rationally accommodate counterevidence they re-
ceive. 95 times out of 100, this drug screws up thinkers so that they
respond irrationally to counterevidence. There are many ways to go
astray. For instance, suppose that a particular batch of counterevidence
should change your attitude toward p—you believe p, but you should
now disbelieve it. Loaded with this nasty drug, you may instead retain
full confidence in your attitude; perhaps you become more confident;
you might even wind up suspending judgment on p. It doesn’t really
matter; all of those responses to the counterevidence are irrational (to
varying degrees). Now let’s say that you have acquired a good reason
to think you yourself have unfortunately ingested this drug. As a result,
you suspend judgment regarding ERA. That is, you are unsure whether
it is true that if you are presented with counterevidence for believing p,
then you will rationally accommodate it. Then I hand you a piece of
counterevidence for your attitude. You take yourself to have considered
it.16
What should we say about your attitude, given these circumstances? All of that is
enough, it appears to me, to appreciate that your doubt about ERA undercuts your
attitude toward p.
15It is worth adding that disbelieving ERA also threatens the rationality of our attitude toward p. I will leave further discussion shelved because I don’t think the empirical evidence normally leads us to disbelieve ERA.
16DRUG is similar to an example given by David Christensen in his discussion of the impact of higher-order evidence upon attitudes [2010: 187].
Even so, I won’t rest my case on an assertion that it is obvious in DRUG that
doubt about ERA (somehow) has an impact on the rationality of your attitude.
That is my contention, alright, but there is more: I shall attempt to articulate
what some might call the ‘epistemology’ of your attitude to p, given that you also
suspend judgment on ERA and take yourself to have considered some apparent
counterevidence. In the end, getting clear on these abstract issues will reveal why
the empirical evidence of error leads us to doubt particular attitudes of ours.
We are supposing that you have suspended judgment regarding ERA—the idea,
again, that if you are presented with counterevidence, you rationally accommodate
it. We’re also assuming that you take yourself to have considered a particular bit of
counterevidence (‘CE’ henceforth). Here, you will be unable to embrace the thought
that your evaluation of CE was rational. The crucial claim—one that you will want
to accept but, according to me, can’t—is as follows:
Rational Accommodation of CE (‘RA’): My accommodation of coun-
terevidence CE for my attitude A to p is rational.
Pretend that you find yourself in an example like DRUG. Reflecting on RA, you
wonder: “Was my evaluation of that counterevidence I just got rational or not?
What should I think of RA?” In answering, keep in mind that you already suspend
judgment on ERA, given that you take yourself to have ingested that terrible drug.
So you should be unsure how to answer your question. As a result, you should
suspend judgment regarding RA.
We are ready for the opening step of what I shall call the Irrational Accommo-
dation Argument :
(9) If (i) I hold attitude A to p and (ii) I have reason to suspend judg-
ment regarding ERA (i.e., that if I am presented with counterevidence
for my attitude toward p, then I will rationally accommodate the coun-
terevidence) and (iii) I take myself to have been presented with some
counterevidence CE for A to p, then I have reason to suspend judgment
regarding RA (i.e., my accommodation of CE for A to p is rational).
It seems to me that (9) is motivated by examples along the lines of DRUG. The
details of DRUG, taken together, entail (9)’s antecedent. Given those details, the
natural reaction is to suspend judgment with respect to RA; that is just (9)’s con-
sequent. So we can generalize from our reaction in DRUG and similar examples to
(9). Yet we can do even more to establish (9). Pretend that you’re in a situation, just
like DRUG, where (9)’s antecedent obtains. Now assume for a moment that you
do not have reason to suspend judgment regarding RA. That assumption leads us
astray, though it takes work to show why. For starters, given (9)’s antecedent, we
know this much: that you hold A to p while taking yourself to have been presented
with CE for A to p and that you also have a reason to suspend judgment whether
ERA is true.
On that basis, what do you have reason to think about RA? There are three
options: you might have reason to (a) accept RA, (b) deny RA, or (c) suspend
judgment regarding it. Of course, our assumption means that (c) is false—we’ve
assumed that you don’t have reason to suspend judgment concerning RA. Appar-
ently, then, options (a) or (b) must hold. Yet if we attend to your situation, we’ll
find that both (a) and (b) look dubious.
First, let’s recall that you suspend judgment whether it is true that if you are
presented with counterevidence for A to p, you will rationally accommodate it (i.e.,
ERA). Second, we know that you have been presented with CE for A to p. The
idea that these two facts underwrite having a reason to accept RA [i.e., (a)] appears
incoherent. For (a) to hold here, it seems you would need reason to accept ERA,
contrary to what we’ve supposed. Similarly, the thought that you have a reason to
deny RA [i.e., (b)] seems like a stretch as well. If you happened to have reason to
deny ERA, then you would also have a reason to deny RA; but you don’t.17 So (c),
the thought that you’ve reason to suspend judgment concerning RA, begins to look inevitable. Since the assumption that you do not have reason to suspend judgment regarding RA forecloses (c), we are best off to reject that assumption. Therefore, when (9)’s antecedent obtains, we should affirm that you do indeed have a reason for suspending judgment regarding RA. And that is (9) itself.
Our next premise is the following:
(10) If (i) I hold A to p and (ii) I take myself to have been presented with CE for A to p and (iii) I have a reason to suspend judgment regarding RA (i.e., my accommodation of CE for A to p is rational), then my attitude A to p is irrational.
It is easy to motivate (10). You wake up from an afternoon nap to a knock at your office door. You’re feeling unusually groggy. It’s your especially eager colleague, asking whether you believe p. You say you do, and she tells you that p is false for reasons she mentions. As she leaves, you realize that you still believe p, though it seems to you that you followed her reasons. Returning to your chair, you spot a bottle of that awful drug, open on your desk. You know the drug prevents thinkers from properly evaluating counterevidence for their attitudes. A large number of the pills are gone. You check the trash bin, the desk drawer, your pockets. Nothing. Though you aren’t sure, you suspect you may have ingested some of the drug, given your grogginess and the missing pills; you certainly can’t deny as much now. Thus, you have reason to suspend judgment regarding whether your accommodation of that counterevidence you received was rational. What shall you now think about p? Given the situation, I say, you should suspend judgment on p. Believing it is no longer rational for you. By reflecting on examples like this, we will be inclined to accept (10).
17Even supposing you had a reason to deny RA when (9)’s antecedent obtains, you would nonetheless be unable to resist the conclusion of the present argument. More fully: a premise with (9)’s antecedent and with I have a reason to deny RA as a consequent would still lead us to the desired conclusion.
So far, we’ve set down two conditional premises. The remaining premises are
the conjuncts in the antecedents of (9) and (10). For ease of reference, it will be
helpful to have everything in one place:
(9) If (i) I hold attitude A to p and (ii) I have reason to suspend judg-
ment regarding ERA (i.e., that if I am presented with counterevidence
for my attitude toward p, then I will rationally accommodate the coun-
terevidence) and (iii) I take myself to have been presented with some
counterevidence CE for A to p, then I have reason to suspend judgment
regarding RA (i.e., my accommodation of CE for A to p is rational).
(10) If (i) I hold A to p and (ii) I take myself to have been presented
with CE for A to p and (iii) I have a reason to suspend judgment re-
garding RA, then my attitude A to p is irrational.
(11) I hold A to p.
(12) I take myself to have been presented with CE for A to p.
(13) I have reason to suspend judgment regarding ERA.
(14) I have reason to suspend judgment regarding RA.
Our conclusion follows from (9)–(14). It is this:
(C) My attitude A to p is irrational.
With a little work, the reader will be able to see that (C) follows from the premises.
For the record, I’ll outline the logic of the Irrational Accommodation Argument. The obtaining of (11)–(13) is sufficient for (9)’s antecedent. So, if (9) itself and (11)–(13) hold, (9)’s consequent follows. (9)’s consequent is (14). The obtaining of (11), (12), and (14) is sufficient for (10)’s antecedent. So if (10) as well as (11), (12), and (14) hold, (10)’s consequent follows, which is just (C).
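The outline just given can be checked mechanically. As an illustrative sketch only, with variable names of my own invention rather than the dissertation's, here is the modus ponens bookkeeping:

```python
# Illustrative check of the Irrational Accommodation Argument's logic.
# Premises (11)-(13) are taken as given; the conditionals (9) and (10)
# are applied as inference rules (modus ponens).

premises = {
    "hold_A_to_p": True,        # (11) I hold A to p
    "presented_with_CE": True,  # (12) I take myself to have been
                                #      presented with CE for A to p
    "suspend_on_ERA": True,     # (13) I have reason to suspend
                                #      judgment regarding ERA
}

# (9): (11), (13), and (12) jointly satisfy its antecedent,
# so its consequent (14) follows.
premise_14 = (premises["hold_A_to_p"]
              and premises["suspend_on_ERA"]
              and premises["presented_with_CE"])

# (10): (11), (12), and (14) jointly satisfy its antecedent,
# so its consequent (C) follows.
conclusion_C = (premises["hold_A_to_p"]
                and premises["presented_with_CE"]
                and premise_14)

print(premise_14)    # (14): I have reason to suspend judgment on RA
print(conclusion_C)  # (C): my attitude A to p is irrational
```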
Thus concludes our foray into some general ideas concerning rationality. Put
briefly, I am recommending that what we’ve reason to think about ERA supplies a
kind of ‘higher-order’ evidence that makes a rational difference for what we think
about RA. By way of sensible principles, all of this can be connected to the ratio-
nality of our attitude to p. At the opening of this section, I promised to develop
an argument that can be pushed forward by empirical evidence of error. How is
that supposed to work? Relative to many of our interesting, “non-ordinary” atti-
tudes, (13) can be bolstered by reflection on empirical evidence of error. The idea
is that learning of the empirical evidence is effectively akin to learning that we’re
in a situation like DRUG. More fully, by way of reflection on experimental results
that indicate humans regularly fail to accommodate evidence in a rational manner,
we’ll gain reason to suspend judgment regarding ERA, specified to some attitudes
toward some propositions. It is worth remembering that ‘ERA’ is shorthand for
Expected Rational Accommodation. As we shall see, the empirical evidence can ruin
our expectations that we will treat counterevidence in a rational way.
Here is what I have in mind for the relevant sort of reflection. Let’s imagine that
you have decided to acquaint yourself with psychological work on belief polarization
(see Lord et al. [1979] and Gilovich [1991: ch. 4]).18 There is a large body of
research indicating that humans subject counterevidence to much more scrutiny than
evidence that supports their attitudes. When faced with counterevidence, you find
out, humans often transform it into evidence that is considerably less hostile to
their viewpoint. You are struck by the clever experiments that reveal this effect.
Psychologists start with two groups who disagree about some political, moral, or
social issue. Then the subjects are presented with the same body of evidence.
Contrary to sensible expectation, that shared evidence does not tend to eliminate
the opposing groups’ initial conflict by bringing them closer together. Instead,
the two parties become increasingly polarized and more confident in their initial
attitudes.
18Kelly [2008b] discusses the significance of this bias, though he offers suggestions different
from mine here.
After learning about belief polarization, you reflect on situations in which this
effect may influence you—disagreements at work, home, and monthly Freemasonry
meetings. ERA (again, if you are presented with counterevidence for your attitude
toward p, then you will rationally accommodate it) can be specified to the attitudes
toward various propositions you form in those conflicts. Now, on the basis of what
you know about belief polarization, it appears that you have reason to suspend
judgment on ERA (specified to those disagreements). That is, after learning about
belief polarization, you are properly unsure whether it is true that if you are pre-
sented with counterevidence for your various attitudes in particular situations, then
you will rationally accommodate it. For instance, when you argue about politics at
the office, or with your spouse about who took out the garbage last, you realize that
you’re at risk for belief polarization. When others present new counterevidence, you
aren’t entirely sure how you’ll respond. You have at least some reason for suspicion
that the processes underlying belief polarization may be operational in your own
situation. And so, by learning about belief polarization and reflecting on your life,
you have gained reason to suspend judgment on ERA (suitably specified). Learning
about all of that will lead you to accept (13).19
Experimental evidence of error can make (13) look attractive, relative to lots of
our attitudes. Indeed, (13) can be bolstered by reflecting on research concerning
belief perseverance (see Ross et al. [1975] and Wegner et al. [1985]), various “or-
dering” effects whereby initial evidence gets weighted more strongly than evidence
acquired later (see Nickerson [1998]), and motivated reasoning more generally (see
Kunda [1990] and Westen et al. [2006]). Once we recognize that we are often subject
to such errors, shall we confidently assert ERA? In many situations, it seems to me,
we shouldn’t be too confident that we rationally accommodate counterevidence.
Where does all of this leave us? As I have suggested, we have reason for (13)
19Can we be protected from this bias? Yes, but "in the wild" you'll rarely find
yourself with good reason to think you are. For more, see section 2.
in light of different lines of empirical evidence, specified to a range of controversial
attitudes. Importantly, (11) and (12)—i.e., I hold A to p and I take myself to have
been presented with CE for A to p, respectively—obviously hold in many of the same
cases where (13) looks plausible. By (9), if (11)–(13) are true, (14) follows. And by
(10), if (11), (12), and (14) hold, (C) follows. Thus, we have reason to judge that a
range of attitudes are irrational. Sorting out which of your own attitudes succumb
to this argument requires more knowledge than I've got. But call to mind
the "non-ordinary" attitudes that we take pains to cultivate and defend: that
one’s favourite (but highly controversial) theory, regarding metaphysics or politics
or history, say, is correct. The Irrational Accommodation Argument, I surmise, may
help set us right regarding such attitudes.
The Irrational Accommodation Argument explains how reflection on evidence
indicating we often fail to rationally accommodate counterevidence leads to the
thought that some of our attitudes are irrational. It deserves further attention, but
it is time to conclude.
4.5 Conclusion
This essay has developed and defended three biases-to-irrationality arguments that
move us from empirical evidence of bias to conclusions about the irrationality of
our attitudes. These arguments stake out a new field for discussions of the rational
significance of empirical research upon human judgment. What we learn about
biases can indeed have an impact on what we should think, and I’ve explained why
that is so.
What I want to underline, as we conclude, is just how susceptible many of our
interesting, non-ordinary attitudes are to these sorts of arguments. When I judge
that each of my hands has five fingers, or that the Evening Star is now visible in
the western sky, I needn’t expose myself to much evidence. For many interesting
topics, however, we form and sustain judgments on the basis of exceedingly complex
bodies of evidence. We search for and gather evidence over months and years, using
a variety of sources and evidence-gathering tools. At each step of the way, mistakes
may creep in. We conduct our investigations in full knowledge of the possibility of
error from bias. It would be valuable to better understand these risks—the ways
in which biasing effects can undermine our intellectual efforts. We would benefit, I
think, from a clearer sense of the situations in which awareness of the risks should
lead us to temper our confidence. That is what I have sought to begin here.
Considerable work remains. By my thinking, psychologists come bearing gifts for
philosophers. Now we must sort out the full normative impact of a detailed portrait
of ourselves as creatures often inclined to err in our judgment and reasoning.20
20I am grateful to Stew Cohen, Chris Freiman, and Keith Lehrer for discussion.
CHAPTER 5
GENES AND ATTITUDES
“The brothers shook hands stiffly, when they saw each other for the first time.
They then hugged and burst into laughter. ‘I looked into his eyes and saw a
reflection of myself... I want to scream or cry, but all I could do was laugh.’”
(New York Times Magazine, 9 December 1979)
Personality, intelligence, psychiatric health, and even attitudes are shaped by our
genes. There is a connection between human genotypes and human psychological
traits: differences between individuals’ traits partly have their basis in genetic differ-
ences. This is a dramatic and relatively new idea. It would be surprising if learning
about it didn’t call for at least some reassessment of ourselves.
In this essay, I will argue that evidence regarding the influence of genes on our
attitudes can challenge the rationality of those attitudes. Evidence indicating a
connection between genes and attitudes tells us something about those attitudes.
It can even lead us to the thought that those attitudes are irrational. But then
evidence about genes can threaten how we think of ourselves.1
To start off, I will tell a story, realistic in its details, to draw the threat out
into the open for a first look. Though you may be a singleton, imagine that you
have an identical twin. You and your twin, let’s pretend, were adopted at birth by
different families and raised in different places. Months ago, researchers learned of
your twinhood in some dusty adoption agency files and now the two of you have been
1The idea that we are influenced by our genes is sometimes thought to threaten human freedom
and responsibility; see Lipton [2004] for critical discussion. As I think of it, the worries I raise here
are considerably more plausible.
flown to a university campus to spend time in a laboratory. The researchers quickly
establish a high correlation between your psychological profiles and behaviours. On
standard measures of personality, you two are highly similar. You and your twin
have also settled into similar careers; and you both feel much the same way about
your lives and relationships. You two even share many of the same attitudes about
political and moral matters. Like other reunited siblings, you initially suffer from
shock and confusion.2
In the aftermath of the reunion, there is something about the discovery that
makes you uneasy. It’s not just that your twin has a similar personality and pat-
terns of behaviour—it is also that your newfound sibling has similar attitudes about
political and moral matters. You had always thought that your attitudes arose
from a careful assessment of the evidence and arguments that you had come across.
You’ve always tried to be intellectually cautious and honest; you’ve always been
interested in the truth. But now it seems that can’t be the entire story behind your
attitudes. The high concordance between your attitudes and those of your twin is no
fluke. Something in your genes disposed you and your twin towards forming similar
attitudes. And there is more. Other multi-birth siblings, you’ve heard, have stories
that often go like yours—separated at birth, reunited later, remarkable similarities.
Other sets of identical siblings, much like you and your twin, share many political
and moral attitudes in common; some identicals, though, have different attitudes
than yours. Something in their genes has disposed them towards their attitudes,
too.
A take-home lesson, you surmise, is that different genetic endowments incline
us to form different attitudes. Somehow, our weighing of evidence and arguments,
2Wright recounts a story of reunited triplets, separated at birth by an adoption agency: “[A]ll
three were on the phone with each other, comparing lives, asking each other questions about school
and food and sports and women. ‘It’s all the same! It’s all the same!’ Eddie [one of the triplets]
kept crying” [1997: 35–38]. See also Segal [2000: chapter 7].
what we find intuitively plausible or reasonable, the way we pursue questions of
truth, is affected by our psychological traits. There is something worrisome and
even disturbing here: what privileges your psychological set-up, you wonder? Why
think your particular way of seeking or seeing political and moral truth is better than
those due to different genes? Have you even formed your attitudes on the basis of
your evidence, or is something else driving what you think? And what if attitudes
regarding other controversial matters are implicated in all of this as well?
This story is inspired by results from behavioural genetics and real-life anecdotes,
and it reveals the issue that I wish to explore here. In the story, you learn of a
connection between genes and psychological traits and attitudes. Given what you’ve
learned about your attitudes, does anything follow regarding their rationality? More
generally, our question is this: what is the rational significance of that evidence
regarding the influence of genes on attitudes? I am concerned with epistemological
issues that arise from reflection on an interesting body of empirical data. For now,
we can leave aside questions of data interpretation and research methodology. I’ll
presume that we can take theorists at their word and then see what follows for the
rationality of particular attitudes.3
5.1 From Genes to Attitudes
Let’s briefly look at central results from behavioural genetics in an attempt to un-
derstand how genes and attitudes could be linked. To begin with, we should note a
standard view about one source of individual differences. Why is one person smarter,
more anxious, or more introverted than another? Over the last couple of decades a
consensus has emerged. The researchers say:
[A]ny dispassionate reading of the evidence leads to the inescapable con-
3As far as I know, no one has discussed the epistemological upshot of this intriguing, though
controversial, research.
clusion that genetic factors play a substantial role in the origins of indi-
vidual differences with respect to all psychological traits. (Rutter [2002:
2])
[D]iscussions regarding genetic influences on psychological traits are not
about whether there is genetic influence, but rather about how much
influence there is, and how genes work to shape the mind. (Bouchard
[2004: 148])
This consensus has come about chiefly through the study of twins.4 Twin studies
aim to quantify the strength of genetic and environmental factors with respect to a
particular trait or feature. If this can be done, researchers can unravel genetic and
environmental influence on traits. Twins are key players because we know something
about their genetic endowment. Identical (“monozygotic”) twins share all of their
genes5 and non-identical (“dizygotic” or “fraternal”) twins have in common roughly
half of their genes.
One popular and misleading perception of twin research is worth addressing. In
the process of studying twins, researchers have often noted striking and bizarre sim-
ilarities between their subjects. Here is a passage from researchers at the University
of Minnesota that lists anecdotes collected during one study of identical twins raised
apart:
There were two “dog people” among the MZA [“monozygotic”] individ-
uals; one showed her dogs, and the other taught obedience classes—they
4For overviews of twins research geared for a general audience, see Segal [2000] and Wright
[1997]. The consensus is also due in part to twin-family and adoption studies, which I don't
discuss here; see Rowe [1994] for an introduction.
5There can be minor genetic differences between a pair of identical twins due to "copy number
variations" of DNA. At conception, identicals may have all the same DNA, but then during
subsequent DNA replications as cells divide, mutations may occur. See Bruder et al. [2008] for
details.
were an MZA pair. Only two of the more than 200 individual twins
reared apart were afraid to enter the acoustically shielded chamber used
in our psychophysiology laboratory, but both separately agreed to con-
tinue if the door was wired open—they were a pair of MZA twins. When
at the beach, both women had always insisted on entering the water
backwards and then only up to their knees; they were thus concordant,
not only in their phobic tendencies but also in the specific manifestations
of that timidity. There were two gunsmith hobbyists among the group
of twins; two women who habitually wore seven rings; two men who of-
fered a (correct) diagnosis of a faulty wheel bearing on [one researcher's]
car; two who obsessively counted things; two who had been married five
times; two captains of volunteer fire departments; two fashion designers;
two who left little love notes around the house for their wives . . . in
each case, an MZA pair. (Lykken et al. [1992: 1565–1566])
Anecdotes like these are the stuff of legend and daytime talk shows. But the impor-
tant results of twin research have nothing to do with such stories.
The standard experimental design for a twin study focuses on populations. These
experiments yield information about the heritability of some trait across a popula-
tion. The typical way to do this is straightforward enough—find pairs of identical
and non-identical twins who have been raised in the same home with the same family
and then measure their traits. If there is a higher correlation of some trait between
pairs of identical twins than there is between non-identical pairs, then that difference
is likely due to the identical pairs’ shared genes, not their shared home environment.
As a matter of fact, researchers have found a higher correlation of psychological and
attitudinal traits between identical twins than between non-identical ones.
That experimental design isn’t perfect, though. It may be that identical pairs
have a more similar environment than non-identical pairs; identical twins are prob-
ably treated more similarly than non-identical twins are, after all. A second exper-
imental design addresses this concern: it involves pairs of identical twins who have
been raised apart. These pairs tend to be just as similar as identical pairs raised
together, as it happens, so there is reason to think identical pairs raised together
are similar because of their shared genes, not their shared environment.
Identical siblings are nature’s experiment and scientists get to peek at the re-
sults. One overall conclusion from behavioural geneticists and psychologists is that
personality, intelligence, interests, psychiatric health, various behavioural traits, and
some attitudes are heritable.6 The correlation of many traits between identical twins
is in the neighbourhood of fifty percent while the correlation between non-identical
ones is somewhat lower. Think about the results like this: since identical twins
share genes, their minds turn out to be quite similar and they often display similar
psychological and attitudinal traits. Non-identical twins share approximately half
of their genes in common, however, so they are less likely to share many of the same
traits.
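To make the variance-partitioning idea concrete, here is a minimal sketch of Falconer's formula, the textbook back-of-envelope estimator used in classical twin studies. The dissertation itself does not state this formula; the function name and the input correlations below are my own illustrative assumptions, chosen to echo the rough figures mentioned in the text (MZ correlations near .50, DZ correlations somewhat lower).

```python
# A minimal sketch of Falconer's method for classical twin studies.
# Given trait correlations for identical (MZ) and fraternal (DZ) pairs:
#   heritability         h^2 = 2 * (r_MZ - r_DZ)
#   shared environment   c^2 = 2 * r_DZ - r_MZ
#   unique environment   e^2 = 1 - r_MZ

def falconer_estimates(r_mz: float, r_dz: float) -> dict:
    """Decompose trait variance from MZ and DZ twin correlations."""
    h2 = 2 * (r_mz - r_dz)   # genetic component
    c2 = 2 * r_dz - r_mz     # shared (family) environment
    e2 = 1 - r_mz            # non-shared environment plus measurement error
    return {"h2": h2, "c2": c2, "e2": e2}

# Hypothetical correlations in the neighbourhood the chapter reports:
estimates = falconer_estimates(r_mz=0.50, r_dz=0.30)
print(estimates)
```

The decomposition assumes, among other things, purely additive genetic effects and equal environments for MZ and DZ pairs; the point here is only to show how the gap between MZ and DZ correlations is turned into a heritability estimate.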
It is critical to note that genes influence attitudes indirectly. Genes require
an environment to express themselves, along with those neurochemical processes
required for the interaction of genes and environment. To be sure, there is not one
single gene for any particular trait. Geneticists think there are thousands of different
genes, in varying combinations, that interact with each other, the environment,
neurochemical processes, and so on, to produce some trait. Whatever brings about
a trait is a dizzyingly complicated matter indeed. For present purposes, nothing
hangs on how it happens to work.7
Earlier on, while setting out the story about you and your reunited twin, I as-
sumed that particular attitudes are heritable. That was no mere story. Researchers
have discovered many heritable attitudes. For example, in one recent study of po-
6See Bouchard [2004] and Rutter [2002]. Rowe [1994] offers a helpful overview of many lines of
research in behavioural genetics.
7For one recent discussion, see Meyer-Lindenberg and Weinberger [2006].
litical attitudes (Alford et al. [2005]), twins were independently asked to express
their agreement, disagreement, or uncertainty on issues. On any particular issue—
property taxes, capitalism, pacifism, socialism, censorship, abortion, and the like—
identical twins registered higher concordance on their answers than non-identical
twins. These results are taken to suggest that the attitudes in question have a
heritable component.8
Other research has used the twin study design to furnish evidence for the her-
itability of attitudes towards topics like the death penalty, jazz, apartheid, cen-
sorship, and divorce. One classic study on social attitudes (Martin et al. [1986])
offered results intended to undermine “cultural” models of attitude transmission, on
which the resemblance of family members’ attitudes can be interpreted or explained
in purely social terms; identical twins showed higher correlations than non-identical
twins on measures of attitudes regarding the death penalty, striptease shows, suicide,
censorship, and other social issues. Another study (Olson et al. [2001]) claimed that
attitudes towards reading books, playing organized sports, roller coaster rides, abor-
tion on demand, and voluntary euthanasia are heritable. Other studies have shown
the heritability of antisocial attitudes (Mednick [1984]) and prosocial attitudes such
as social responsibility (Rushton [2004]). (For the record, I am willing to hypothesize
that if twins researchers presented sets of twins with standard philosophical thought-
experiments—for example, trolley cases, Dretske’s zebra case, Nozick’s “experience
machine”—we would discover a higher correlation of responses to these cases among
identical than non-identical twins. Philosophical judgments of the most low-level
sort may sometimes be heritable.)
Here is a question about these results. Given that attitudes toward socialism,
jazz, roller coasters, property taxes, and so on have only been possible going back
a few generations, how could specific genes end up shaping such attitudes? Isn’t
8Hatemi et al. [2010] extends these conclusions by offering results from the study of parents
and non-twin full siblings of twins.
the time frame far too brief? Well, yes. But what is apparently happening when
researchers discover heritability for attitudes concerning the “issues of the day” is
this: deeper, permanent features of cognition help to frame current questions. To
take an example from Hatemi et al. [2010: 799], there is surely no direct genetic
basis for Americans’ attitudes toward a wall on the Mexico–US border, yet there
may be genes that indirectly influence Americans’ views of outgroups, preference for
ingroup cohesion, and the like.
One interesting contention to arise from twin research is that attitudes with
higher heritabilities are more resistant to change and social pressure and are more
strongly held than attitudes with relatively lower heritabilities (see Tesser [1993]
and Tesser et al. [1994]). It may be that heritable attitudes are what we might
think of as more ‘basic’ amongst our attitudes—much in the way philosophers often
presuppose that spontaneous intuitive judgments are more basic than the theories
built out of them.
So there is evidence that a range of attitudes are heritable. How do genes influ-
ence attitudes? It is comprehensible that extroversion and eye colour are heritable—
but attitudes? A brief answer, which I shall expand below, goes as follows. Genes
clearly influence psychological traits. For example, genes play a significant role
in fixing the standard "big five" personality traits—openness, conscientiousness,
extraversion, agreeableness, and neuroticism (see Jang et al. [1996])—as well as traits
like intelligence and memory (see Bouchard [1998]). Attitudinal traits plausibly en-
ter the mix partly through our psychological traits—specifically, those traits that
furnish us with attitude-forming dispositions. In other words, some of our traits
dispose us to form particular attitudes, given some circumstances or conditions.
Having a particular psychological set-up makes it more likely, on a specified experi-
ence, that we will form some attitudes rather than others. To be sure, a certain set
of psychological traits does not determine anyone’s attitudes. It constrains what
attitudes will arise, given experience, by making the formation of some attitudes
more likely than others. This is merely a sketch, but it will serve us well enough
here.
5.2 Non-shared Perspectives
Does learning about the connection between genes and attitudes affect the ratio-
nality of those attitudes? The empirical evidence of heritability offers us evidence
concerning our attitudes and, as I’ll argue, it sometimes prompts us to change our
attitudes. In order to highlight the relevance of that evidence for our attitudes, I
will suggest that many of our attitudes are linked to particular, non-shared perspec-
tives. When we realize our attitudes fall out of a non-shared perspective, we’ll be
led to wonder what makes that perspective better than some other one. The picture
is suggestive but little more. In the following two sections, I will outline a pair of
arguments that start with evidence of heritability for some attitudes and conclude
that those attitudes are not rational.
We can suppose that you think that your psychological traits are partly based on
your genes and that some of your attitudes are somehow linked to your psychological
traits. What follows is a picture to hint at why evidence of the heritability of
attitudes can have rational significance for those attitudes. You know that those
who disagree with you have different psychological set-ups, different genes. Why
privilege your genes? What reason is there to prefer your set-up? (One common
answer goes as follows: “It is my perspective. It produces what seem to me to be
true beliefs, and I’m just doing my level best here.” Or, more briefly, “I am me, so I
win!” This answer, in its primitive form, may be followed by vigorous fist-pumping.
For now, though, let's ignore it.) Silence in the face of such questions suggests
cause for concern. If there is no good reason to be partial to our own
perspective in such a conflict, our views appear to be called into doubt.
For disputed issues, from politics to morals to metaphysics, one’s perspective
often influences one’s attitudes. A perspective is distinct from attitudes or judg-
ments: as I’m thinking of it, a perspective is a set of abilities or attitude-forming
dispositions, a way of “seeing” things.9 Given a perspective together with some
experiences, it is likely that particular attitudes will result. A perspective allows us
to form non-inferential judgments about what’s plausible or reasonable; it equips
us with theoretical values that help us select between competing theories and ex-
planations; and so forth. Some examples are in order. It is sometimes remarked
that “one man’s modus ponens is another man’s modus tollens”; often enough, I say,
those men have different perspectives. Indeed, we sometimes speak of “clashing”
or “irreconcilable” perspectives—liberal vs. conservative, empiricist vs. rationalist,
Kantian vs. consequentialist. I propose that these sorts of clashes sometimes find
their origin in diverse abilities: from different perspectives, we see the issues dif-
ferently. Our attitude-forming dispositions, just like our personalities, are different.
Our minds don’t work the same way. And as a result, we do not always see eye to
eye, as it were.
We know that identical twins have similar psychological traits and that among
such twins there is a high concordance of attitudes regarding, for instance, social
and political issues. One plausible explanation of this concordance is the thought
that perspectives somehow depend on psychological traits. The idea is that similar
traits give rise to similar perspectives, and those perspectives in turn produce similar
attitudes. Since twins tend to have minds that are alike, they often see issues the
same way, and thus they often think the same way.
Consider a few examples to suggest how different perspectives might give rise
to different attitudes in the same circumstances. For starters, suppose that I have
a shy and reserved perspective and that you’ve got an outgoing perspective. We
can think, albeit crudely, of the two perspectives as being ‘built from’ personality
traits in different measures. When we walk into a loud party, I say to myself,
“How dreadful...” I want nothing more than to leave, and I will within minutes.
9I use “seeing” in a non-factive sense here and below.
You think, “This is splendid”, as your face lights up. Here is another example:
suppose that differing psychological traits dispose us to form different judgments
about risk.10 For better or worse, you’ve got a risk-inclined perspective. And so
you are inclined to think that a dangerous action is worth it so long as its payoff
is big—to hell with the hazards. You judge, for example, that it is a fine idea to
jump the Grand Canyon on a motorcycle in order to gain millions of dollars and
notoriety. It turns out that my perspective is more risk-averse: I’m disposed to
judge that dangerous actions, like that motorcycle stunt, aren’t worth it even with
the potential payoff. One more example. Suppose that some perspectives dispose
people to hold some theoretical value or other, perhaps a preference for what W.
V. O. Quine called “desert landscapes”. You have the desert landscape perspective
and I do not. Accordingly, you are more likely than me to accept ontologically
sparse theories, given our exposure to the same sort of philosophical considerations—
reading all of the same books and attending the same talks, for instance. These
three examples suggest that from our non-shared perspectives, we will sometimes
form different attitudes even when faced with comparable situations.
This thought deserves amplification. When we learn that a particular attitude
is heritable, we gain empirical support for the thesis that idiosyncratic, biologically-
based dispositions partly drive the formation of that attitude. Even if two thinkers
with distinct perspectives have exactly the same (propositional) evidence and are
equally careful in assessing that evidence, they might end up disagreeing. That’s
a startling idea. We might sensibly begin to view ourselves and those we disagree
with as being equipped with different tendencies to process evidence. From that
difference, our disagreements flow.
In this connection, witness what Hilary Putnam wrote about political disagree-
10The tendency to pathological gambling appears to be heritable: see, for example, Slutske et
al. [2010].
ments he had with Robert Nozick:11
Each of us regards the other as lacking a certain kind of sensitivity and
perception. To be perfectly honest, there is in each of us something akin
to contempt, not for the other’s mind—for we each have the highest
regard for each other's minds—nor for the other as a person—for I
have more respect for my colleague’s honesty, integrity, kindness, etc.,
than I do for that of many people who agree with my ‘liberal’ political
views—but for a certain complex of emotions and judgments in the other.
[1982: 165]
According to Putnam, he and Nozick each take themselves to have a “sensitivity”
not shared by the other. Putnam’s suggestion is that their different abilities—
which are independent of intelligence and moral traits—have led them to disagree
about politics. Putnam wrote this passage after reflecting on persistent conflict
between himself and Nozick, but his remarks fit well beside what we know about
the heritability of social and political attitudes. We are apparently built to process
evidence in different ways.
So, we differ in our perspectives and that can lead to conflicts in attitude. For
all that, it is worth observing that most everyone agrees on many commonplace
matters. What underlies this agreement? My contention is simple: our attitudes
regarding such matters are largely in concord because we share large segments of
our perspectives in common. There is, after all, substantial overlap in our shared
cognitive hardware and psychological set-up. Our similarities run deep. We’ve
got the same genes to a large extent, with any genetic differences being rather
small. And most of us grow up in roughly similar cognitive environments. Due to
our similarities, I surmise, virtually everyone has closely matched attitude-forming
dispositions. Again, any differences are at the edges. Almost all of us are disposed
11Thanks to Mark Timmons for directing me to this passage.
to believe countless common things. We will believe many claims like the following,
for example: that a green object is before us when we are appeared to greenly; that
it is wrong to torture innocent people for fun or profit; that Wilson is in pain when
we see that he stubs his toe; that many human bodies other than one’s own have
before now lived on the earth; and so forth.
An unfortunate few lack the standard-issue dispositions. Recall Descartes'
brain-damaged madmen in his Meditations, thinking they are made of glass or that
they are pumpkins. Or there is Oliver Sacks’s man who mistook his wife for a hat;
that man certainly wasn’t disposed as most of us are. Some people confuse green
objects for red ones because of colour blindness. And some people lack commonplace
beliefs about the minds of others because of autism. Yet most of us do concur on
myriad ordinary topics. That should be unsurprising. Most of us have many of the
same attitude-forming dispositions and experiences, and so we tend to form many
of the same sorts of attitudes. We agree in large part because of our cognitive and
psychological similarities.
Despite the extensive similarities between us, differences in perspectives that bear
on attitudes regarding many non-commonplace matters are the rule. To see why,
we might consider all sorts of interesting questions about politics, law, cosmology,
metaphysics, and history. There is little agreement when the questions become
complicated, even among those who are in agreement on commonplace matters.
When politicians and senior diplomats meet to talk about global warming or AIDS
in Africa, everyone will agree that they are sitting on chairs and sipping soda water,
even though they won’t (or can’t) agree about the substantive issues. Plausibly,
such disagreement is partly related to the fact that there is a plethora of overall
ways to be disposed to evaluate arguments and evidence. There is a range of values
to employ in reasoning. It is hard to doubt this: in many disagreements, we face
opponents who are equipped with different perspectives.
All of this leads to questions. Given the range of possible perspectives, why do
we prefer the perspective we find ourselves with when there’s a disagreement? Does
anything make our perspective special? If so, what is it? Often enough, we should
be tempted by the answer that it is pretty much an accident, or a matter of luck,
that we find ourselves with a particular perspective and that, for all we can see, ours
has no advantage over alternatives.
That, anyway, is hinted at by the picture I have set out. Even though
pictures are not arguments, by looking it over, we may well begin to doubt our
disputed attitudes. The fact of non-shared perspectives has some persuasive force, it
seems to me. If the picture isn’t already clear, an analogy will bring it into sharper
focus. Imagine that we have been entered in a lottery that randomly distributes
wrist watches. Once the watches are given out, we form beliefs by reading our
own timepieces. You think the time is 1:40; I believe it is 12:30. We have no
independent way to check the time. But suppose we find out that the watches
have been distributed by lottery. What is reasonable for us now is to take no view
about the present time. It might be 12:30 or 1:40 or some other time; we should
suspend judgment regarding what time it is. Even if one watch tells the right time,
it has been called into doubt. Continuing to trust it is not reasonable. The same
goes for us, at least sometimes, upon discovering that our perspectives dispose us
to form divergent attitudes about disputed claims. We see things our way. But the
way we are, the way we see things, is an accident. Once we think that about our
perspectives, we should doubt our heritable attitudes, absent good reason to think
our perspectives are preferable to alternative ones.
Pictures can take us no further. I am proposing that gaining evidence indicating
a connection between genes and attitudes can undermine those attitudes. In the
next two sections, I will consider two distinct arguments to (further) support my
contention.
5.3 The Alternative Explanation Argument
Given a moment to reflect, we will often think that a particular attitude of ours
is based on our total evidence. Perhaps you believe that you have two hands, you
suspend judgment on whether Lake Wilcox in Ontario is now frozen over, and you
disbelieve that the Apollo moon landings were faked. On reflection, you probably
take those attitudes to be based on all of the relevant evidence that you happen
to possess—you think that your attitudes are sensitive to the whole set of your
evidence. After all, you’ll probably surmise that insofar as the rationality of such
attitudes depends on your evidence, it is your total evidence that counts. Coming
up next, I will argue that learning that an attitude of ours is heritable may prevent
us from thinking that attitude is based on our total evidence. But insofar as we fail
to have reason to think an attitude is based on our total evidence, we’ll want to
reduce confidence in it.12
In certain circumstances—while reading this sentence, say—you will ask whether
you have formed an attitude toward some proposition p on the basis of your total
evidence. So let us suppose you are presently entertaining the proposition ⟨that my
attitude to p is based on my total evidence⟩. I'll label that proposition BTE. We
can think of BTE, specified to some attitude, as an explanation of the fact that you
hold that attitude. If BTE is true, then you wound up with that attitude because
your total evidence supports it.
Oftentimes, you will find good reason to accept BTE. But suppose that you
discover alternative stories that explain why you take the attitude you do, without
appeal to the total evidence. For instance, you’ve been reading books by Sigmund
Freud recently; you now surmise that certain of your attitudes may well be formed
by “wish-fulfillment”, not due to responding to your total evidence. Perhaps you
can’t entirely rule out or discredit that Freudian story. It has at least minimal
12Correspondence with E.J. Coffman and Nathan King on some related matters helped my
thinking here.
credibility for you. Here is what I want to say: once this alternative explanation
of your attitudes has found a place in your thinking, it follows that you should be
less than fully confident that BTE is true. As a result, you should also reduce your
confidence in your attitude toward p. Let me explain.
An analogy will get us started.13 Back in 1794, William Paley argued that a
supernatural creator is by far the best explanation of certain biological facts. More
than half a century later, Charles Darwin offered an alternative, naturalistic expla-
nation for those same biological facts. What happens when Paley’s advocates learn
about Darwin’s theory? We will likely say that their confidence in their supernatu-
ral explanation should dip below pre-On the Origin of Species levels. Thomas Kelly
proposes why this is so:
For a given body of evidence and a given hypothesis that purports to
explain that evidence, how confident one should be that the hypothesis
is true on the basis of the evidence depends on the space of alternative
hypotheses of which one is aware. [2008: 620]
Upon becoming aware of the Darwinian theory, Paley’s advocates should reduce their
confidence in the supernatural explanation of the biological facts. Even supposing—
contrary to what many actually think—that a supernatural explanation of apparent
biological design is better than the Darwinian theory, Paley’s advocates should still
end up with lowered confidence in their explanation. Now suppose that they accept
deism solely on the basis of Paley’s design argument. Pretty obviously, once their
rational confidence in the supernatural explanation is diminished by learning about
Darwin’s naturalistic explanation, they will be led to reduce confidence in their
deistic conviction as well.
This example illuminates my central contention regarding alternative explanations
and your acceptance of BTE (again, ⟨that my attitude to p is based on my total
13I have drawn this example from Thomas Kelly [2008: 621].
evidence⟩). Remember that we have supposed that you accept BTE, relative to
some attitude to p, and that you possess an alternative explanation for your holding
that attitude. The Paley/Darwin example suggests two key ideas: that alternative
explanations for your attitude toward p may lead you (i) to reduce confidence in
BTE and, in turn, (ii) to reduce your confidence in your attitude.
As for (i), the BTE explanation and an alternative explanation, such as a
Freudian “wish-fulfillment” story, both have this feature: if either one is true, you
would form some attitude to p. But these explanations are not compatible. Fol-
lowing Kelly, we should think that the alternative explanation, since it remains in
play for you, ‘steals’ some of the probability space that would otherwise go to BTE.
Turning to (ii), it seems quite plausible that if you should reduce confidence in BTE,
it follows that you should thereby reduce confidence in your attitude to p. The two
are connected. To understand why, let’s assume that you are required to reduce
confidence in BTE, in light of the known alternative explanation for your attitude,
but that you happen to remain just as confident in your attitude to p as you were
prior to learning of the alternative. The trouble with that assumption is that it leaves
your rational confidence in BTE with no impact on your confidence in your first-order
attitude. But shouldn't there be what we might call a 'level connection' here?14 It does
seem so. Yet by assuming that your rational confidence in your attitude should not
diminish along with confidence in BTE, we erroneously disconnect what’s rational
to think about BTE from the rational standing of your first-order attitude. Thus,
we should affirm (ii).
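Kelly's quoted principle admits a rough Bayesian gloss. The sketch below is illustrative only: the numbers are stipulated, not drawn from the text, and the equal-likelihood assumption (that each explanation predicts your holding the attitude equally well) is mine. It shows how admitting a live rival hypothesis into the space 'steals' probability from BTE under renormalization:

```python
# Illustrative only: a rival hypothesis that also predicts the attitude
# must, after renormalization, pull credence away from BTE.

def normalize(weights):
    """Renormalize unnormalized hypothesis weights so they sum to 1."""
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

# Before learning of the Freudian story: BTE dominates the known space.
before = normalize({"BTE": 0.8, "other": 0.2})

# After: the wish-fulfillment story enters as a live alternative that
# also explains why you hold the attitude. (All weights stipulated.)
after = normalize({"BTE": 0.8, "wish-fulfillment": 0.5, "other": 0.2})

assert after["BTE"] < before["BTE"]
```

Nothing here fixes how much confidence should drop; the point is only that, so long as the alternative retains some nonzero weight, BTE's share of the space must shrink.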
To repeat: my contention is that alternative explanations for your attitude to-
ward p may lead you (i) to reduce confidence in BTE and, in turn, (ii) to reduce
your confidence in your attitude. In some more detail, here is the argument schema
I have in mind:
AE1: If you believe BTE regarding your attitude to p and you are aware
14Compare this to the idea of ‘level confusions’ in Alston [1980].
of an alternative to BTE, you have reason to reduce your confidence in
BTE.
AE2: If you have reason to reduce your confidence in BTE, then you
have reason to reduce confidence in your attitude to p.
AE3: As a matter of fact, you believe BTE regarding your attitude to p
and you are aware of an alternative to BTE.
Therefore,
AE4: You have reason to reduce confidence in your attitude to p.
We can refer to this as the Alternative Explanation Argument. I’ll now briefly tie it
back to our discussion of genes.
Evidence about the connection between genes and attitudes gives us an alterna-
tive explanation of certain of our attitudes. Most of us will accept BTE relative to
our social and political attitudes, for instance. We think those attitudes are based
on our total evidence; perhaps we even have plenty of reason to accept that. Later
on, we read some articles by behavioural geneticists and learn that attitudes regard-
ing the death penalty and abortion, among other topics, have a significant heritable
component.15 For all we know now, it may be that we have arrived at particular
attitudes because of the operation of our idiosyncratic perspective, partly built by
the genes we’ve got—not because our total evidence supports those attitudes. BTE
is one explanation for our attitudes; the plausible story about genes is an alterna-
tive explanation. What’s critical is that AE3 holds, in the wake of what we have
learned about genes. By AE1, AE3 gives us AE2’s antecedent. From AE2 and its
antecedent, we get AE4. We thus have reason to reduce confidence in our attitudes
concerning the death penalty and abortion.
15See, for example, Martin et al. [1986], Alford et al. [2005], and Hatemi et al. [2010], all of
which are briefly mentioned in section 1 above.
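Since the argument is a simple chain of modus ponens, its validity can be checked mechanically. As an illustration only, here is the schema in Lean (the propositional labels B, R, and Q are my shorthand for AE1's antecedent and the two 'reason to reduce confidence' claims, not the dissertation's notation):

```lean
-- B : you believe BTE regarding your attitude to p and are aware of an
--     alternative to BTE (AE3)
-- R : you have reason to reduce your confidence in BTE
-- Q : you have reason to reduce confidence in your attitude to p
-- AE1 : B → R,  AE2 : R → Q,  AE3 : B  ⊢  AE4 : Q
example (B R Q : Prop) (ae1 : B → R) (ae2 : R → Q) (ae3 : B) : Q :=
  ae2 (ae1 ae3)
```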
The Alternative Explanation Argument gives us one way to see how genetic
information can undermine our attitudes. Let’s try a second argument.
5.4 The Arbitrary Perspective Argument
According to the picture outlined in section 2, some conflict in our attitudes is
due to our non-shared perspectives. Then I asked why we favour our perspective
over alternative perspectives when we’ve reason to think a difference in perspective
induces a difference in attitude. In situations where we have no reason to favour
our perspective, there’s a strong hint of arbitrariness.
What does arbitrariness amount to here? How should we understand it? Accord-
ing to a paradigmatic kind of arbitrariness, our reasons do not settle what action we
should perform. Imagine you arrive at a fork in a trail. You have no idea whether
home is to the right or the left. You may take one fork over the other here, of course,
but to do so is what we might call practically arbitrary. You have no reason to take
one particular fork over the other—though you may have reason to take one or the
other. To move forward, you must invoke an arbitrary factor in order to select which
way to go.16
That paradigmatic arbitrariness has to do with action, but arbitrariness can
also affect our attitudes. Suppose that the evidence regarding a question equally
supports two incompatible answers. If you know that, it is evidentially arbitrary
for you to accept one answer over the other. As I see it, information about the
connection between genes and attitudes does often suggest a sort of arbitrariness.
It is not the practical or evidential variety, however. What is arbitrary is that we
16Compare yourself here to Buridan’s Ass—the poor creature faced with two equally succulent
bales of hay, unable to decide which one to munch. It is precisely because the animal cannot
invoke an arbitrary factor in his decision-making that he goes hungry. You can decide between the
two trails by flipping a coin, or taking the left fork because you’re left handed, or just starting to
walk whichever way you feel like walking.
stick to our disputed attitude while taking ourselves to lack a reason to think our
perspective is better than some alternative perspective when it comes to getting
some matter right. This is an arbitrariness that afflicts our judgments concerning
our perspective relative to an alternative perspective.17
I have proposed that for many controversial propositions—those we take different
attitudes toward—it is a difference in perspective that often leads us to disagree.
Let us call your perspective P1 and your opponent’s perspective P2. Suppose you
get a reason to think that your disagreement over a proposition p is due in part to
different perspectives; you will have such a reason by way of learning that attitudes
regarding p tend to be heritable. Sometimes, we will be led to the thought that our
controversial attitude is irrational. Here is what we can call the Arbitrary Perspective
Argument :
AP1: For perspectives P1 and P2, where P1 is your actual perspective
and P2 is some alternative perspective, if (a) you believe that p, (b) you
have reason to believe that P1 leads thinkers (yourself included) to belief
in p and P2 leads thinkers to the denial of p and (c) you lack reason to
think that P1 makes it more likely than P2 that you now get p right,
then your belief that p is irrational.
Then we affirm AP1's antecedent:
AP2: (a) You believe that p, (b) you have reason to believe that P1
leads to belief in p and P2 leads to the denial of p, and (c) you lack
reason to think that P1 makes it more likely than P2 that you now get
p right.
The conclusion follows:
AP3: Your belief that p is irrational.
17For more details, see the discussion of arbitrariness in “The Variability Problem”, section 3.
Why accept AP1? And how do we gain reason to accept AP2? I will treat these
questions in order.
AP1 can be bolstered with examples. A brilliant but disturbed scientist has
designed you and Hezekiah. The scientist assembled you two using a ghastly mix of
biological parts, exhumed from a local graveyard, and advanced computer hardware.
Until now, you’ve thought that the scientist created you and Hezekiah in just the
same way. But imagine you learn this: your brain contains a microchip, X, that
leads you to believe capital punishment is always wrong whereas Hezekiah’s brain
features another chip, Y, that leads him to think capital punishment is sometimes
permissible. So far we’ve got enough to satisfy (a) and (b) in AP1. Let us proceed
by supposing that you have no idea whether X makes it more likely than Y that
you reach a right answer concerning the morality of capital punishment. That is,
of course, (c) in AP1. Given what you have learned about Hezekiah and yourself,
it is hard to deny that your belief that capital punishment is wrong is irrational.
That natural conclusion is just AP1’s consequent. By generalizing from examples
like that, we can lend support to AP1.
What about AP2? Here’s my pitch: when controversial propositions are at
issue, and you learn that attitudes toward such propositions are heritable, you will
sometimes lack reason to think P1 makes it more likely than P2 that you get the
matter right. That is, finding out that disputed attitudes are influenced by genes
will lead you to accept AP2. To start to appreciate why, consider first what P1
and P2 are. They are clusters of psychological traits produced by the interaction of
genes and the environment, traits that dispose us in certain ways to form attitudes.
(Some might even take them to be exceedingly complex biological ‘microchips’.)
For some disagreements, you will obviously have a reason to think that P1 makes
it more likely than P2 that you get the matter right, and thereby reason to reject
AP2. For example, it may be
that you and your opponent clash over whether the wall is red. You think it is; your
opponent denies it is red. This dispute, you surmise, traces back to a difference in
perspective. Knowing that your opponent suffers from red-green colour blindness,
you have a reason to think that your perspective makes it more likely than his
perspective that you are right about the wall’s colour. Something similar holds for
disagreements between you and Descartes’ madmen over whether they are made of
glass or pumpkins.
Not all disagreements that trace back to differences in perspective are like those
disagreements, however. We sometimes reasonably think that our opponents are
healthy and normal, suffering no malfunction, subject to no failings greater than our
own. We have no reason to think our perspective is better than their perspectives in
relevant respects. The example of Hilary Putnam and Robert Nozick is instructive
here. Much like these philosophers, we may have respect for our opponent’s mind,
moral and personal qualities, and so on. Supposing that we ultimately conclude that
our opponent lacks “a certain kind of sensitivity and perception”, we will certainly
want reason to back up this judgment. The mere assertion of our superiority here
would seem self-congratulatory. Faced with AP2, then, the critical question is this:
do we have a reason to think P1 makes it more likely than P2 that we get p right?
When we learn that a difference in an attitude goes back to a genetic difference, this
question becomes salient. Within limits—excluding thinkers with cognitive defects
and deficits, that is to say—we will be hard pressed to think that our genotype is
more likely than another to get us the truth about p.
When we find that one of our attitudes is heritable, we face the Arbitrary Per-
spective Argument. I don’t say it is impossible to avoid irrationality. But we will
need to give some preference to our perspective, and to have reason for so doing. I
myself find it difficult to see just where those reasons could come from.18
18For further discussion of this sort of argument, see “The Variability Problem”, section 3.
5.5 Conclusion
I have tried to suggest how evidence about the connection between genes and at-
titudes sometimes undercuts particular attitudes of ours. The arguments on offer
target controversial attitudes, ones due to non-shared perspectives. By learning
about the heritability of attitudes that we hold, we come to see alternative
explanations for the fact that we've arrived at those attitudes. The Alternative Explanation
Argument shows how this calls for a drop in our confidence. And when we are con-
fronted with alternative perspectives, we may doubt that our perspective is more
likely than alternatives to get some questions right. The Arbitrary Perspective
Argument points out why we are pushed to give up our attitudes.
At moments, our discussion has been unapologetically speculative. So perhaps
an apology is now in order. To date, there are empirical results alleging that some
attitudes have a heritable component; but the overwhelming majority of our atti-
tudes have not been studied. Select at random some attitude of ours concerning
a disputed issue—that humans are wholly made of material stuff, say, or that Ne-
anderthals were a contemporary subspecies that bred with Homo sapiens and then
disappeared through interbreeding. It will be rather unlikely that the attitude we
select is among the known-to-be heritable attitudes. (In fact, those attitudes are not
known-to-be heritable.) Vast stretches of our attitudes are unstudied; as I noted in
section 1, this is true for philosophical attitudes. Behavioural genetics is a young
discipline, though, and more research will likely relay a more complete sense of
what’s heritable and what’s not. So, the full extent of the problem I’ve raised in
this essay remains to be seen.
Even still, I am inclined to say that, at the end of inquiry into the genetics of
human attitudes, the rough initial picture will be vindicated. Our attitudes—no
different than our eye colour, height, and personality—are partly in our genes. In
the meantime, I myself cast a suspicious eye upon attitudes that by all appearances
are good candidates for being due to my idiosyncratic, non-shared perspective.
We began with a story about you and your long-lost identical twin. Let us
conclude with another story. Scientists are now isolating particular genes for all sorts
of traits—from mental illnesses to political ideology.19 In some not-so-distant future,
it isn’t hard to envision that a simple genetic test might reveal your dispositions to
form various controversial attitudes. Perhaps you are a proponent of gun control,
an advocate of a flat tax for the rich and poor, or a utilitarian about right action.
The genetic test, administered in the comfort of your home, might surprise you.
It can reliably predict your viewpoint. These predictions may sometimes be off,
but that is in keeping with the nature of the test. There is an incredibly complex
interplay of genes and environments, leading to the manifestation of some attitude.
But given particular clusters of genes, there is a good chance you’ll end up having
particular attitudes.20 Imagine, if you can, that you’re looking over the results from
your genetic test—with a high degree of accuracy, it captures your tendencies to
form various controversial attitudes.
What shall we think of ourselves now? For my part, I can’t see how that test
would be anything short of disorienting. Walking away from it, we should have a
sense that our attitudes are somehow on less firm ground than we initially
thought. We should wonder, here, whether our attitudes are really based on our
total evidence, and whether we’ve reason to think our psychological set-up is more
likely to lead us aright than alternatives.
Long in advance of this genetic test, however, we can begin to worry. We have
been roused to the possibility that many of our attitudes are partly driven by genes
19For instance, dopamine receptor D4 has been linked to everything from ‘liberal’ political at-
titudes to Parkinson’s disease. For the former case, which is the kind that interests us here, see
Settle et al. [2010].
20The test might even control for various environmental assumptions and thereby become even
more accurate: just dial in your socio-economic status, continent of birth and adolescence, and the
like.
and that differences in attitudes sometimes trace back to genetic differences. If I
am right, this information is significant for the assessment of our attitudes. In light
of the lessons of genes, I propose that we would do well to temper at least some of
our disagreements with doubt.21
21An earlier version of this material was presented to a seminar at the University of Arizona in
September 2008. For comments and conversations, I am grateful to E.J. Coffman, Stew Cohen, Ian
Evans, Chris Freiman, Peter Hatemi, Keith Lehrer, and Craig Warmke. I am especially grateful
to Alex Skiles, Benjamin Wilson, and Jennifer Wolfe: each one contributed much to this project.
CHAPTER 6
KNOCKDOWN ARGUMENTS
What is philosophy good for? Some have recently said what it isn’t good for—
devising knockdown arguments. David Lewis confesses:
The reader in search of knockdown arguments in favor of my theories
will go away disappointed. Whether or not it would be nice to knock
disagreeing philosophers down by sheer force of argument, it cannot be
done. Philosophical theories are never refuted conclusively. [1983: x]1
Peter van Inwagen claims:
There are...no knockdown arguments in philosophy. There are no philo-
sophical arguments that all qualified philosophers regard as compelling.
[2002: 83, cf. 27]
1See section 2 below for discussion of Lewis's exceptions [viz., "(...Gödel and Gettier may have
done it.)”]. Cf. Lewis [1993: 150]: “I am an atheist. So you might suspect that my purpose
is to debunk free-will theodicy, and every other theodicy besides, so as to provide—at last!—
a triumphant knockdown refutation of Christianity. I am convinced that philosophical debate
almost always ends in deadlock, and that this case will be no exception.” In a posthumous paper
(edited by Philip Kitcher using Lewis’s notes and Kitcher’s memory of conversation with Lewis),
Lewis says this: “In my view, even the most ambitious version [of the argument from evil] succeeds
conclusively. There is no evasion, unless the standards for success are set unreasonably high”
[2007: 231]. Perhaps this late text represents a change to Lewis’s earlier view: he might be read as
suggesting there is a KD philosophical argument against theism. An alternative reading is that he
thought requiring a knockdown argument as the “standard for success” was “unreasonable” and
that the argument from evil didn’t succeed by such a standard. (But then why would Lewis say
that any version of the argument “succeeds conclusively”?)
To a first approximation, a knockdown argument is one that establishes its con-
clusion conclusively. Many philosophers, perhaps unreflectively, try to do exactly
what Lewis and van Inwagen say can’t be (or at least isn’t) done: they write as
though their conclusions are established or proven and the considerations offered
for these conclusions are decisive.2 So, we're faced with some important questions:
what are knockdown arguments? Are there any in philosophy? If not, why
not? If so, what are they, and why are they knockdown? These questions concern
the nature of the philosophical enterprise, but they have not received due attention.3
In this paper, I will begin to answer those questions. In section 1, I shall char-
acterize knockdown arguments in a way that fits with common talk. Then, after
clarifying in section 2 the position staked out by Lewis and van Inwagen, I will
assess in section 3 some reasons for thinking there are no knockdown philosophical
arguments. In section 4, I’ll propose and defend an argument against van Inwagen’s
claim that there are knockdown arguments in non-philosophical fields, but none in
philosophy.
I suspect that another philosophical question hides beneath questions about
knockdown arguments. How wide are the limits on the doxastic attitudes of ra-
tional, informed people?4 Bertrand Russell raised a similar question to begin The
Problems of Philosophy : “Is there any knowledge in the world which is so certain
2Van Inwagen remarks that most present-day analytical philosophers “believe that there are
knockdown arguments in philosophy. (And it is certainly true that they believe that there could
be.)” [2004: 339]. Notably, he admits that writing as if there are no knockdown arguments in
philosophy is virtually impossible for philosophers (himself included) [2006: 37–38], cf. [2004:
338–339].
3To my knowledge, no recent journal article or book chapter explores knockdown arguments.
The texts I will draw upon are in the introduction of Lewis [1983] and parts of van Inwagen
[2002], [2004], [2006]. Wreen [1995] is entitled “Knockdown Arguments”, though it discusses the
argumentum ad baculum, not what I call knockdown arguments.
4Closely related questions have been raised in discussion of "rational uniqueness": see White
[2005], Feldman [2007: 204–205], Kelly [2010: 117–121], and Ballantyne and Coffman [forthcoming].
that no reasonable man could doubt it?”5 As I will suggest in section 1, if there
are knockdown arguments, then there are theses that no rational, informed person
should doubt—there are some strict limits on the attitudes that such people could
have. Indeed, if rational, informed people may take a range of attitudes toward
all philosophical theses, then there are no knockdown arguments in philosophy.
Whether or not there are (or could be) knockdown arguments reveals something
about the limits of rational inquiry. We may begin to address this further question
by exploring knockdown arguments.
6.1 What is a Knockdown Argument?
A knockdown argument is one that, in some sense, establishes its conclusion conclu-
sively. But what is it to do that? Can we say more? For starters, I will try to spell
out what some philosophers mean when they talk about “knockdown arguments”. I
do not mean to be seeking to define a vague technical term. The goal is to examine
a few suggestions about what a knockdown argument is, and then to explore logical
space using those suggestions as a springboard.
Van Inwagen’s remark that a knockdown argument is one that “all qualified
philosophers regard as compelling” offers a starting point. A given argument, we
may say, lies in a particular field of inquiry (botany, geology, physics, history of
Soviet science, whatever), and that field has experts, namely those who count as
“qualified”. Among philosophers and non-philosophers alike, it is sometimes implied
that knockdown arguments bring about agreement among experts. Van Inwagen
remarks that he once thought that Church’s Thesis (a thesis in the philosophy of
mathematics) could be proven in such a way that meets this high standard. But
then he learned that
5Russell comments on his question: “This question, which at first sight might not seem difficult,
is really one of the most difficult that can be asked” [1997: 7].
an important authority (László Kalmár) had his doubts about the cogency
of the argument I had found so impressive and was in fact inclined to
think that Church’s Thesis was false. Since I was unwilling to suppose
that Kalmár was mad or irrational, I changed my mind. "Back to zero,"
I thought. [2006: 39–40]
The suggestion here, it seems, is that the rejection of an argument by an expert
implies it is not knockdown.
Can this idea be used to characterize knockdown arguments? We can put it as
follows:
A: X is a knockdown argument iff all experts accept X’s conclusion on
the basis of X’s premises.
Notice that A characterizes knockdown arguments in sociological terms—I mean, in
terms of a sociological fact, namely, bringing about agreement among experts. But
if we assume that experts can make mistakes, even when they agree, their agreement
over some thesis need not entail there is a knockdown argument. It is possible, after
all, that fallible experts accept X’s conclusion on the basis of its premises, even
though it is not a knockdown argument. Perhaps X is an awful argument; X is
so patently awful that most non-experts, unblinkered by professional commitment,
can see as much. Yet the experts are biased and thus happen to think it is conclusive.
So, all experts accepting X’s conclusion on the basis of its premises is not sufficient
for it being a knockdown argument. Neither is it necessary. It is possible that some
experts (mistakenly) do not accept X’s conclusion on its premises, even though X is
a knockdown argument; and it is possible that some experts for some reason haven’t
so much as heard of the argument.
Perhaps knockdown arguments are better characterized in normative terms,
rather than sociological ones. The idea might be that a knockdown argument is
one such that, were someone to understand the argument, it would be epistemically
irrational for her not to accept it.6 Though the class of experts was at issue in
A, a better characterization might focus on thinkers more generally. Van Inwagen
appears to presuppose something similar:
[A]nyone who does not agree that the continents are in motion either
does not fully appreciate the data and arguments a geologist could put
forward in support of the thesis that the continents are in motion, or else
is intellectually perverse. (There exists an organization called the Flat
Earth Society, which is, as one might have guessed, devoted to defending
the thesis that the earth is flat. At least some of the members of this
society are very clever and are fully aware of the data and arguments—
including photographs taken from space—that establish that the earth is
spherical. Apparently this is not a joke; they seem to be quite sincere.
What can we say about them except that they are intellectually per-
verse?) [2002: 17]
The reason these flat-earthers are “intellectually perverse” is supposed to be obvious:
there is a knockdown argument that they understand perfectly well but they do not
accept its conclusion. (I suppose that intellectual perversity entails irrationality,
though not vice versa.) This geological argument is supposed to render irrational
anyone who grasps it, expert or non-expert alike.
So, with that idea in hand, I will take an initial stab at a normative characteri-
zation:
B: X is a knockdown argument iff, were any subject S to understand X,
then S would be irrational not to accept X’s conclusion.
6By “epistemically irrational for her not to accept it”, I have nothing technical in mind. It may
mean that if she does not accept the argument’s conclusion, she has erred, failed to do her duty,
or somehow gone wrong from the “epistemic point of view”, as some like to say. What I say below
does not presuppose the details of any particular theory of epistemic rationality.
B faces a counterexample. Suppose that X is a knockdown argument and that Claire
comes to appreciate X. Before accepting X’s conclusion, however, Claire comes to
believe that she has accidentally ingested peyote and that it inhibits proper cognitive
function. In fact, she has not ingested it; but since she has excellent reason to believe she has,
she suspects that she has failed to understand X. To put it another way, Claire has
acquired an undercutting defeater for thinking that she has understood X: she has
understood it, but her reason for thinking as much has been removed. The following
observations apply to the example: (i) X is a knockdown argument, (ii) Claire
understands X, (iii) she does not accept X’s conclusion, but (iv) there is nothing
irrational about her not accepting that conclusion, given that her good reason to
believe she has ingested peyote is thereby a reason for her to doubt she has understood X.
This example indicates that the conditional on the right-side of B is not necessary
for X being a knockdown argument: X can be a knockdown argument without that
conditional coming out true.
One way to repair B is to rule out defeaters for the thought that we have under-
stood the argument (i.e., defeaters for S’s believing that S understands it).
C: X is a knockdown argument iff, if any subject, S, were to understand
X and lack defeaters for believing S understands X, then it would be
irrational for S not to accept X’s conclusion.
There is reason to think C is not a satisfying characterization. Supposing C is
true, any argument—no matter how powerless it is to conclusively establish its
conclusion—might come out as a knockdown argument. Take an example to see why:
Wilson understands a lame, non-knockdown argument, L, while lacking defeaters for
thinking he has understood it. Let’s suppose that L lends some small support for
its conclusion, but if Wilson thought about it for a few minutes, he would find
it inadequate. While none of Wilson’s current beliefs count as good reasons for
rejecting L (that is, denying the premises support L’s conclusion or denying L’s
conclusion itself), he nevertheless rejects L. Keep in mind that Wilson is not aware
that L is lame and he has no reason to reject it. Isn’t there something irrational
about his rejection of L? It appears so. The right-side of C is satisfied and that
implies that L is knockdown. But we should not say a lame argument like L is
knockdown. So, the right-side of C is not sufficient for its left-side.
C stands in need of tinkering, then. It may be fixed by distinguishing between
different degrees of irrationality. In the above example, it seems that something
goes wrong if Wilson rejects a lame argument like L without reason. Yet whatever
goes wrong is not on a par with what goes wrong if someone rejects a knockdown
argument. There are misdemeanors—and then there are felonies. If Wilson rejects
L absent reason to do so, he does something weakly irrational. That is expected be-
cause, as we have assumed, L is a lame argument. Crucially, a knockdown argument
makes a conclusion so evident for a thinker that it would be strongly irrational for
her not to accept it. So here is the fix:
D: X is a knockdown argument iff, if any subject, S, were to understand
X and lack defeaters for believing S understands X, then it would be
strongly irrational for S not to accept X’s conclusion.
There are questions about how to understand degrees of irrationality. I’ll ignore
those here and proceed to discuss knockdown arguments, using D as a working
characterization.
According to D, a knockdown argument is not one that brings about agreement.
Instead, it is one that ought to, were everyone to understand it while lacking de-
featers for thinking they understand it, bring about agreement. Importantly, if X
is a knockdown argument, but those who understand it are biased or intellectually
perverse, X won’t lead to agreement. That may be disappointing, but complaints
should be lodged with the audience, not the argument. So far as I can tell, D fits
with what some philosophers mean when they talk about “knockdown arguments”.7
7Perhaps what philosophers do mean isn’t what they should mean. As best I can tell, D mostly
It is worth adding that knockdown arguments most directly concern evidence
and rationality, not the truth. As a result, there may be knockdown arguments
for false theses. Here is an example from physics.8 In the nineteenth century—
prior to Einstein’s famous 1905 paper on special relativity, and its aftermath—all
of the best physicists agreed that light and other electromagnetic phenomena were
carried by a medium they called the “luminiferous ether”. On the basis of the
most advanced science of the day, physicists agreed that the ether was everywhere.
But for a variety of reasons, including the implications of special relativity, we
now know that luminiferous ether is a fiction; and one of the revolutions in early
twentieth-century physics was the banishment of this idea from dignified science to
the intellectual scrap heap. Plausibly enough, before Einstein’s paper, there were
knockdown arguments for propositions such as “the luminiferous ether exists” and
“the ether has such and such properties”. That is true, I surmise, even though there
are nowadays knockdown arguments for the proposition that the ether doesn’t exist.
The moral of the story: knockdown arguments guarantee the strongest of rational
credentials for our beliefs, but not the truth itself.
6.2 Distinctions
Are there any knockdown arguments? And are there any in philosophy? Van Inwa-
gen answers the first question affirmatively: there are such arguments in fields like
geology, mathematics, physics, history, and the like.9 Lewis is silent on that ques-
fits with van Inwagen’s use of “knockdown argument” in his [2002] and [2004], and one proposal
for “philosophical success” [2006: 39-40] (namely, an argument that “should convert any rational
person”). And van Inwagen tacitly assumes in [2004] that he and Lewis use that term in the same
way. I submit van Inwagen’s tacit assumption as indirect evidence that D also fits with Lewis’s
use of the term. The philosophers I’ve informally polled have agreed that something like B or C
or D, but not A, is what they and their colleagues usually mean by the term.
8For the details, I am indebted to Benjamin Wilson.
9See van Inwagen [2002: 8ff] and [2004: 335, 339].
tion. Both answer the second question negatively: there are (almost) no knockdown
arguments in philosophy. (See the quotations in the introduction.)
The second answer comes with a qualification, however. Both van Inwagen and
Lewis allow that there may be knockdown arguments of a special sort in philos-
ophy. Lewis remarks that “[p]hilosophical theories are never refuted conclusively.
(Or hardly ever: Gödel and Gettier may have done it.)”10 And van Inwagen agrees
with Lewis here,11 bringing the qualification into somewhat sharper relief by distin-
guishing between substantive and minor theses. To follow van Inwagen’s idea, the
two philosophers actually deny that there are knockdown arguments for substan-
tive theses, while allowing there may be some such arguments for minor theses. A
substantive thesis is, for example, the claim that free will is incompatible with de-
terminism or that knowledge is justified true belief. A minor thesis is, for example,
the claim that the analysis of knowledge isn’t justified true belief.12
(But why must the line between substantive and minor theses get drawn there?
I don’t see why the thesis that knowledge isn’t justified true belief, or the thesis
that there is not a complete and consistent set of axioms for mathematics, isn’t a
substantive thesis. If you take epistemologists’ folklore seriously, you will think that
for a long time—from Plato until 1963—philosophers accepted the JTB account
of knowledge. Isn’t it rather significant that Edmund L. Gettier III (or, decades
prior, Russell) showed the error in this way of thinking? And doesn’t that suggest
the thesis Gettier established is indeed substantive? Leaving that aside, how are
we supposed to discern that one thesis is substantive and another is minor? Van
Inwagen doesn’t tell us. That is all just to say this: if we draw the line between
substantive and minor a little differently than van Inwagen has, we will say that
there are knockdown arguments for substantive philosophical theses.)
10Lewis [1983: x], cf. Lewis [1993: 150].
11See van Inwagen [2004: 335] and [2006: 39].
12The distinction between substantive and minor theses won’t be clear cut. Van Inwagen [2006: 40] says that there may be “borderline” cases of substantive theses.
From here on in, when I discuss the claim that there are no knockdown arguments
in philosophy, only arguments for substantive theses are at issue. Unfortunately,
neither Lewis nor van Inwagen has made explicit his reasons for thinking there
are no knockdown arguments in philosophy. True, neither Lewis nor van Inwagen
presents himself as offering knockdown arguments. They both run “cost
counting” methodologies instead: for them, the intuitive and theoretical costs and
benefits of a thesis, not a knockdown argument in its favour or against its negation,
determine whether it is fit for acceptance. But that doesn’t much reveal why there
are no knockdown arguments. So next I will canvass and assess a few reasons
that might be offered to think there are none. Then I’ll criticize the position van
Inwagen has taken up, arguing that it is preferable either to admit that there are
knockdown arguments in philosophy or to deny that there are particular knockdown
arguments outside philosophy. The thought that there are knockdown arguments
outside philosophy, but none inside, is unstable.
6.3 The Case Against Knockdown Philosophical Arguments
Let us suppose that someone is unsure whether or not there are knockdown argu-
ments in philosophy. What reasons might we give this person to think there are
none? Although there are many potential reasons, I will limit the discussion here
to five.
The first reason appeals to a common sentiment: that there is a fundamental dif-
ference between philosophy and the sciences with respect to the sort of conclusions
practitioners can expect from their subject matter.13 The sentiment is captured in
Russell’s memorable quip that “[s]cience is what you know, philosophy is what you
don’t know.”14 (Of course, if Russell counts that claim as the handiwork of philos-
ophy, he should not claim to know it.) Van Inwagen hints at this difference, too,
13Conversations with E.J. Coffman and Stew Cohen helped here.
14Russell [1995: 204].
when he says there are “established facts” in fields like geology and mathematics,
but no such facts in philosophy.15
While there undeniably is a difference between philosophy and other fields, it is
hard to say exactly what it amounts to. One possibility is that the difference just
is that there are no knockdown arguments in philosophy. But that won’t do. It
is question-begging. Remember that we are trying to offer someone who is unsure
whether there are knockdown arguments in philosophy a reason to think there are
none; she will quite properly find this reason unimpressive. An alternative possibility
is that the difference is something that entails, or makes it probable, that there are
no knockdown arguments in philosophy. Then there’s a question: what is that
something? I don’t know. There are certainly ways in which philosophy differs from
the sciences. It is sometimes suggested that the subject matter of philosophy has to
do in part with very general and abstract concepts like knowledge, causation, luck,
and justice. And so the methodology of philosophy depends critically on “rational
insight” or “intuition” in exploring its subject matter. Alternatively, it may be
that the critical distinction between science and philosophy concerns agreement
about what constitutes a proper test or experiment of a thesis: in science there is
agreement, in philosophy there is not.16
Perhaps there is a way to argue from these claims about subject matter and
methodology to the conclusion that there are no knockdown arguments in philos-
ophy. If so, it isn’t obvious how the argument would go—and surely these claims
about the subject matter and methodology will be disputed by some. Maybe there
is a reason in the neighbourhood to think there are no knockdown arguments in
philosophy, but it is not immediately apparent what it is.
Another reason employs a familiar idea about epistemic underdetermination.
The idea is that whenever a thinker’s ‘web of belief’ must be revised, the question
15Van Inwagen [2002: 8ff].
16Keith Lehrer suggested this latter idea to me.
of which belief(s) to revise is underdetermined. It might be claimed on that basis
that there are no knockdown arguments in philosophy. Perhaps, for any putative
knockdown argument, a thinker can understand it but reject one of its premises
or assumptions, and thereby do nothing irrational by not accepting its conclusion.
That may be defensible and I don’t mean to dispute it here. I want to observe,
though, that this reason to think there are no knockdown arguments in philosophy
will count equally against knockdown arguments outside philosophy. It is no help
here if we claim there are knockdown arguments outside philosophy.
A third reason, related to the previous one, connects a thesis about the limits
of rational belief with knockdown arguments. According to “rational uniqueness”,
given any set of evidence E and any proposition p, there is one uniquely rational
doxastic attitude (i.e., believing, disbelieving, suspending judgment) to take toward
p on the basis of E.17 For present purposes, we may limit the scope of p to all
substantive philosophical theses. Roughly put, uniqueness implies that there’s one
rational attitude determined by some bit of evidence. Some epistemologists reject
uniqueness and embrace “rational permissiveness” instead. According to one quite
strong version of permissiveness, for any p and evidence E, E allows someone to
rationally accept p or not accept p. There is a link between this permissiveness and
knockdown arguments: if strong permissiveness is true, then knockdown arguments
are impossible. To see why, suppose that X is an argument with p as its conclusion
and E is the premises including the relevant evidence for those premises. If X is
a knockdown argument, then it is such that if someone were to understand X and
have the relevant evidence E while lacking defeaters for thinking she understands
X, it would be irrational for her not to accept p. Now suppose that, on the basis
of E, there is not only one rational attitude to take toward p; there is more than
one rational attitude—in other words, strong permissiveness is true. But then X
couldn’t be knockdown, for it would be rational to understand X and have E while
17See White [2005] and Ballantyne and Coffman [forthcoming] for discussion.
failing to accept p. If we think uniqueness is false because strong permissiveness is
true, there can be no knockdown arguments.
I doubt this is a compelling reason for someone who happens to be unsure
whether there are knockdown arguments to think there are none. First, there is
disagreement over the truth of uniqueness. Roger White [2005] and Richard Feld-
man [2007] raise some considerations against strong permissiveness.18 Suppose you
claim that given one’s total evidence, it is rational to either accept that p or deny
it; you affirm strong permissiveness. As it happens, you believe that p. The critical
question: why believe p rather than deny it? Given the way you deny uniqueness, it
can’t be that the evidence better supports one attitude over the other; but then
it doesn’t seem as though there is a good (epistemic) reason to believe p rather than
deny it. Absent good reason to take one attitude rather than the other, your atti-
tude ends up looking arbitrary—it is not obviously different than an attitude formed
by a coin flip. But an attitude based on a coin flip isn’t rational. So why think your
attitude is rational, given your denial of uniqueness? A second worry about the
above reason is this: even if strong permissiveness is true, it doesn’t follow that
it is irrational to accept that there are knockdown arguments. To see why, suppose
that some set of evidence, E, bears on the claim that there are no knockdown argu-
ments, N. If strong permissiveness holds, does E determine that belief is the rational
attitude to take toward N? Not necessarily. It could also be rational to disbelieve N
or suspend judgment on it. So, even granting permissiveness, we need not accept N:
we might be able to rationally accept the negation of N, or just suspend judgment
on N. Finally, note again that if the above reason precludes knockdown arguments
in philosophy, it does the same outside philosophy. It thus doesn’t sit comfortably
beside the claim that there are knockdown arguments outside philosophy.
A fourth reason to think there are no knockdown arguments involves a philo-
18The following argument sketch follows a more detailed argument in White [2005]. Ballantyne
and Coffman [forthcoming] critically assess White’s case for uniqueness.
sophical expert who offers some testimony: “I’ve been working in the field for years,
searching carefully for knockdown arguments. But I haven’t found any and I don’t
believe any are to be found.” Experts like Lewis and van Inwagen19 are surely willing
to testify. On the basis of an expert’s say-so, someone who is unsure whether there
are such arguments might come to believe there are none. Yet, as van Inwagen ob-
serves, “most present-day analytical philosophers...do believe there are knockdown
arguments in philosophy. (And it is certainly true that they believe that there could
be.)”20 So, there are many other experts who are willing to supply contrary testi-
mony. Indeed, if van Inwagen has gotten the sociology right, perhaps the majority
view weighs against his view. But without a way to decide which experts to trust,
or how to balance the testimonial evidence, testimony for or against knockdown
arguments in philosophy is of small help.
Finally, consider a reason that begins with disagreement. As we observed in
section 1, characterizing knockdown arguments in terms of bringing about agree-
ment is problematic. Yet disagreement in philosophy may count as evidence against
the claim that there are knockdown arguments in that field. An argument in this
spirit might proceed as follows: if there were knockdown arguments in philosophy,
then there would be widespread agreement among philosophers about some sub-
stantive theses. But there is no such widespread agreement. Therefore, there are
no knockdown arguments in philosophy for any substantive thesis.
In my view, there are reasons to be unsure whether philosophers would agree
even if there were knockdown arguments. If that is so, then the above argument’s
conditional premise is counterbalanced for us; our reasons to accept it are no stronger
than our reasons to deny it.
Let me argue this out. One reason here starts with the doubt that there is ex-
pertise within philosophy. Plausibly, some might say, philosophers can be experts
19See his [2006: 39–40]; cf. [2004: 335 fn. 2].
20Van Inwagen [2004: 338–339].
about what some philosopher once said, for example, or how to construct a valid
argument, but not about what substantive theses are true. Without experts with
respect to substantive philosophical theses, why think that the philosophers’ propos-
als about what’s true are reliable? Accordingly, as the doubt continues, there will
not be agreement over substantive theses. So, there is one reason to expect there
won’t be agreement and this reason should be in force even if there were knockdown
arguments.
A second reason begins with the idea that understanding philosophical argu-
ments is hard and misunderstanding them is easy. Understanding an argument in
theoretical physics or mathematics is hard, too, though in a different sense. There
is a kind of conceptual squishiness or mushiness in philosophical discourse and this
makes presenting philosophy hard in a different way than other fields happen to be
hard.21 This is plausibly so because of the cognitive limitations of practitioners22
and the vagaries of language that philosophical arguments often involve. It isn’t
that philosophical propositions are hard to grasp when presented with clarity—it is
that they are not often presented with clarity. In fairness, perhaps that is because
the philosophical labourers are few. And maybe there is a much higher degree of
agreement in the sciences than philosophy because significantly more people and
resources have been devoted to scientific work. It is possible, I suppose, that more
effort in philosophy will serve to identify and clarify knockdown arguments, leading
to more widespread agreement than we find at present.23
So, given that it is hard to understand, and easy to misunderstand, philosophical
arguments, it is hard to see whether knockdown arguments, if there were any,
21Thanks to David Christensen for the word “squishiness” and for correspondence here.
22This is a major theme in McGinn [1993]. And see van Inwagen [2002: 12] for his comparison between acrobats and metaphysicians.
23Parfit makes a similar point about ethics [1984: 453–454].
would bring about widespread agreement. I propose, then, that we neither affirm
nor deny the conditional that if there were knockdown arguments in philosophy,
there would be widespread agreement. We should suspend judgment. But then the
argument from disagreement to no knockdown philosophical arguments fails.
If you agree with Lewis and van Inwagen, none of the above need shake your
conviction. In fact, some or all of the reasons considered might play into an overall
assessment that, while not conclusive, favours your view. But I want to observe
that these five reasons against knockdown arguments in philosophy are inconclusive.
Reasons may be multiplied, but the inconclusiveness will remain. That should be
unsurprising, for a natural way to conclusively establish that there are no knockdown
arguments in philosophy remains off-limits: a knockdown argument.24
6.4 From Non-Philosophical Knockdown Arguments to Philosophical Knockdown Arguments
I shall now argue against van Inwagen’s answer to the questions in section 2 above,
namely, that there are knockdown arguments in non-philosophical fields but none
in philosophy.
According to van Inwagen, the conclusion of a knockdown argument is what he
calls an “established fact”,25 and there are such facts in non-philosophical fields:
24Someone may insist that the thesis that there are no knockdown arguments in philosophy
is a piece of metaphilosophy and that metaphilosophy is not itself a branch of philosophy. So,
if there can be knockdown arguments in metaphilosophy, that thesis might itself be conclusively
established. It is less than obvious, however, why anyone would think that metaphilosophy isn’t
philosophy. Thanks to Stew Cohen here.
25Van Inwagen writes: “Consider those enviable theoretical disciplines in which ‘pervasive agree-
ment’ is the order of the day. Consider any proposition that, as the result of the researches of the
experts in these disciplines, is generally agreed to be true... Will there not in every case be at
least one knockdown argument for the truth of this proposition (or at least an argument that the
experts regard as a knockdown argument)?” [2004: 339].
It is an established fact, a piece of information, that the continents are in
motion. We call the latter fact “established” because anyone who does
not agree that the continents are in motion either does not fully appre-
ciate the data and arguments a geologist could put forward in support
of the thesis that the continents are in motion, or else is intellectually
perverse. [...] It cannot be said that the existence of an ultimate reality
is an established fact, however... [2002: 17]26
(‘Ultimate reality’ is whatever there really is behind appearances. It is the subject-
matter of metaphysics; if there is no ultimate reality, metaphysics has no subject.27)
Since van Inwagen here admits there are knockdown arguments in geology, yet none
in philosophy, he is willing to say it is an established fact (e.g.) that the continents
are in motion, but not an established fact that there is ultimate reality. This position
seems plausible on its face and it is sometimes endorsed in conversation.
A concern for this position arises when we realize that an established fact in
geology seems to entail that there is ultimate reality. What, after all, could it mean
for it to be an established fact that the continents are in motion if it is not an
established fact that there are continents? And if it is an established fact that there
are continents, then isn’t ultimate reality something that includes continents?28
26Cf.: “Unlike the physical sciences, history does not have “a large body of settled, usable,
uncontroversial theory” at its disposal but, like the physical sciences, it does have a large body
of established and incontrovertible fact to work with. In philosophy, however, there is neither
settled theory nor incontrovertible fact” [2004: 335]. Van Inwagen’s last observation applies to
his philosophical position on the absence of knockdown arguments in philosophy: his position is
neither settled theory nor incontrovertible fact (though he might think it is either one or the other).
27Van Inwagen [2002: 1–3].
28Terry Horgan and Jonathan Schaffer independently reminded me that van Inwagen [1990]
paraphrases statements about (e.g.) continents in terms of ‘simples arranged continent-wise’. This
doesn’t supply reason for van Inwagen to doubt the entailment in question, for after the required
translation, he will say that it is an established fact that there are simples arranged continent-wise.
And if it is an established fact there are simples arranged continent-wise, it is an established fact
We discover the worry creeping into other fields as well. For many a philosophical
thesis, there is some putative established fact that appears to entail it. Take, for
instance, the thesis that the world wasn’t created five minutes ago. Although van
Inwagen may deny there is a knockdown argument for that thesis, he will grant that
there are established facts in history. Oscar Peterson, the Canadian piano master,
was born in 1925, for example. But doesn’t this established fact entail that the
world wasn’t created five minutes ago? Or consider the anti-Zenoian thesis that
there is motion. It is a philosophical thesis, I surmise, and van Inwagen will deny
that there is a knockdown argument for it. Astronomers, however, say that it is an
established fact that the earth is rotating, and this fact seems to entail that there
is motion.
Let us look at some objections to the above argument.
Objection 1. “The meaning of ‘continent’29 can be defined in terms of appear-
ances. Then there remains a further question whether or not there is anything
‘behind’ appearances. Accordingly, that there are continents does not entail that
there is ultimate reality.”
Reply. Defining ‘continent’ in terms of appearances leaves a residual question:
what is the status of appearances? Presumably, continents are real, for things must
be real if there are any. It seems, then, that if there are appearances, there is
ultimate reality in the sense that there is something. Maybe there being something
isn’t enough for what van Inwagen intends ‘ultimate reality’ to mean. Until he says
more about what ultimate reality is supposed to be, I surmise that the entailment
goes through.
A salient question here concerns the meaning of the term ‘ultimate reality’.
We might worry that any argument for the existence of ultimate reality won’t be
knockdown precisely because there isn’t a way to determine what ‘ultimate reality’
that there are simples. Van Inwagen is back to ultimate reality.
29Or ‘simples arranged continent-wise’. See the previous footnote.
means. But that worry can be short-circuited. Consider the thesis that there exists
at least something. Surely, that is a substantial thesis and a philosophical one,
too. And just as surely, that thesis is entailed by the established fact that there
are continents. Even if ‘ultimate reality’ remains problematic, there is at least one
substantial, philosophical thesis entailed by particular established facts.
Objection 2. “In a geological or historical context, there are certain sorts of
assumptions that hold. But in a philosophical context, quite different assumptions
hold. For example, geologists don’t doubt for a moment whether there is an external,
material world or whether it can be known. They just assume as much. That is
not so with philosophy, I say. When we begin with the established fact in geology
that there are continents and then transport it to philosophy, we must notice that
the assumptions change. What was an established fact in geology is disputed in
philosophy. Consequently, the putative entailments from established facts in non-
philosophical fields to philosophical theses fail to hold, or we’d fail to know they
hold: since there are assumptions in philosophy not shared with, say, geology, what
is an established fact in the latter isn’t established in the former. The two fields are
separate epistemic realms.”30
Reply. Perhaps that is so. But according to many philosophers, van Inwagen in-
cluded, philosophy and non-philosophical fields are not “separate epistemic realms”
in the sense that completely different assumptions hold in them. Other philoso-
phers have different conceptions of the assumptions and aims of philosophy—and
they sometimes harbour doubts about whether there is an external, material world and
whether it can be known. These doubts serve to block, for them, the entailments
from established facts in non-philosophical fields to philosophical theses. But van
Inwagen seems to think the sorts of assumptions that hold in fields like geology are
a subset of the assumptions that hold in philosophy. Given this overlap of assump-
tions, I don’t see why van Inwagen should say the entailments in question fail.
30Stephanie Wykstra raised an objection along these lines.
Objection 3. “When we say, for example, that it is an ‘established fact’ that the
continents are in motion, here is what we don’t mean: that if Smith denies that
the continents really are in motion, he either doesn’t understand the arguments to
support that thesis or is irrational. Instead, we mean that if Smith denies that the
continents appear to be in motion, he either doesn’t understand the arguments to
support that thesis or is irrational. Consequently, the established fact in question
does not entail anything about what really is, namely, ultimate reality; it only entails
what appears to be the case.”31
Reply. The objector’s response appears out of step with van Inwagen’s view.
Maybe someone could so respond; but that someone is not van Inwagen. First off,
it sits uncomfortably beside some of van Inwagen’s remarks. For instance, when
clarifying his use of ‘ultimate reality’, he writes:
[I]t is sometimes possible to ‘get behind’ the appearances the world
presents us with and to discover how things really are: we have discov-
ered that the earth is really rotating, despite the fact that it is apparently
stationary. [2002: 2, emphasis in original]
Here, he contrasts the reality of the earth’s motion with the appearance of its being
unmoving. The idea is evidently not that we have (merely) discovered that reality
appears to involve the motion of the earth. The thought is that it really is in motion.
Otherwise, the contrast doesn’t quite make sense. Second, the objection implies that
the term ‘established fact’ is not univocal—some established facts are established
in the sense that they really are so, whereas others are established in the sense that
they appear to be so. Van Inwagen does not seem to trade on two senses of the term,
however. He seems to think that if there were established facts in philosophy, these
would be just like established facts in geology or history or mathematics. When
31Josh Rasmussen raised an objection like this.
van Inwagen says that philosophy is different from non-philosophical fields because
it lacks established facts,32 he assumes there is only one sense of ‘established fact’.
I myself find no good reason to deny the entailments from established facts
in non-philosophical fields to philosophical theses. Why not think that there are
knockdown arguments for ultimate reality (or there being something), motion, the
world’s being older than five minutes, and so on? The question raises a further issue:
what sort of reason is there to think that the entailments hold?
To answer we’ll need some distinctions. A thinker may have good reason(s)
to think that an entailment E holds; it may be immediately obvious to someone
that E holds; or someone may have a knockdown argument that E holds. The
entailments in question are, I think, immediately obvious. Now suppose we set out
an argument beginning with (i) its being immediately obvious to us that (e.g.) if
it’s an established fact that the earth is in motion, then it’s an established fact that
there is motion and (ii) concluding with that selfsame conditional. Add whatever
premise(s) you need to see that the move from (i) to (ii) is acceptable. It seems that
if we were to understand this argument and lack (undefeated) defeaters for thinking
we understand it, then it would be irrational for us not to accept its conclusion.
I think, then, that there are knockdown arguments for the entailments; so, the
entailments are established facts. Since the antecedents of those entailments express
established facts (e.g., it is an established fact that the earth is in motion), that
is enough for the arguments from established facts in non-philosophical fields to
philosophical theses to be knockdown.33
What is the status of the argument that there are such knockdown arguments?
Do I mean to offer a knockdown argument that there are such knockdown argu-
ments? To begin with, there is a difference between giving an argument that X
32See especially [2004: 335] and [2002: 8ff].
33That there are knockdown arguments in philosophy is certainly a (meta)philosophical thesis. Is it substantive? I leave that question aside, in part because I am unsure how to draw the distinction between substantive and minor theses. For more, see section 2 above.
is knockdown and giving a knockdown argument that X is knockdown. X may be
knockdown, even if an argument for it being so is not itself knockdown. I do not
mean to provide a knockdown argument that the arguments for ultimate reality,
motion or the world’s being older than five minutes are knockdown. Instead, I mean
to argue that van Inwagen and others like him should think that those sorts of ar-
guments are knockdown. There may be escape routes (see the objections above),
but I can’t see how any are available to van Inwagen. He has good reason to think
there are knockdown arguments, even if not a knockdown argument to think as much.
I admit that it is curious to say that commonsense claims about continents, plan-
ets, and a jazz pianist entail philosophical theses. But that’s just the (uncommon)
beauty of commonsense arguments. Van Inwagen, or whoever shares his position,
faces trouble if he comes to appreciate the entailment. Astonishingly perhaps, he
would then possess knockdown arguments for philosophical theses—ones for which
he would have geologists, astronomers, and historians to thank. (Who ever said
analytic philosophy isn’t interdisciplinary?)
When reflecting on widespread agreement in non-philosophical fields and
widespread disagreement in philosophy, we may feel the pull and plausibility of
van Inwagen’s claim that there are knockdown arguments outside philosophy, but
none inside. But the above argument highlights a difficulty for that claim (when
it comes to certain sorts of arguments). Doubt about knockdown arguments inside
philosophy should be paired with doubt about knockdown arguments outside philos-
ophy, or acceptance of knockdown arguments outside philosophy should be paired
with acceptance of such arguments inside. I conclude that, for all Lewis and van
Inwagen say, there may be knockdown arguments in philosophy if there are such
arguments in other fields. It remains open for us to admit that there are knockdown
arguments in philosophy or to deny that there are particular knockdown arguments
outside philosophy.
6.5 Conclusion
Dare we think there are, or could be, knockdown arguments in philosophy? Answer-
ing in anything but a tentative way seems ill-advised. The example of G.E. Moore
might be instructive here, however. Moore laments the “peculiarly unsatisfactory
state” of philosophy: there is no agreement about answers to philosophical questions
“as there is about the existence of chairs and lights and benches”. He continues:
I should therefore be a fool if I hoped to settle one great point of contro-
versy, now and once for all. It is extremely improbable I shall convince.
[...] Philosophical questions are so difficult, the problems they raise are
so complex, that no one can fairly expect, now, any more than in the
past, to win more than a very limited assent. And yet I confess that the
considerations which I am about to present appear to me to be abso-
lutely convincing. I do think that they ought to convince, if only I can
put them well. In any case, I can but try. I shall try now to put an end
to that unsatisfactory state of things, of which I have been speaking.
[1903: III, 45]
Moore judges that the considerations he has are strong and that they ought to con-
vince, even though he expects they won’t bring about agreement. (He might add,
echoing David Lewis [1986: 115], that agreement would need a spell, not an argu-
ment.) Moore’s expectation for continued disagreement does not lead him to doubt
the rational strength of his argument. Perhaps it is uncommon for philosophers to
think, like Moore, that they have a knockdown argument in hand. But suppose
we take ourselves to have an argument that ought to convince. Why can’t we join
Moore and try to put an end to that unsatisfactory state of things?34
34Ancestors of this paper were presented at Yale University in October 2006 and Arizona State
University in March 2007. Thanks to the audiences and commentators, Stephanie Wykstra and
Craig Carley, on those occasions. (Thanks especially to the young child at Yale who, out in
the hallway, briefly interrupted the Q&A when he exclaimed, “Sarah, they have cookies!”) For
comments and conversation, I am grateful to Tomas Bogardus, E.J. Coffman, Stew Cohen, Tom
Crisp, William Dyer, Ian Evans, Chris Freiman, Terry Horgan, Victor Kumar, Keith Lehrer, Josh
Rasmussen, Mark Timmons, Benjamin Wilson, and Jeffrey Wisdom. I can’t now express my
gratitude to the late John Pollock, who devised many knockdown arguments. He will be missed.
Alford, J. et al. 2005. “Are Political Orientations Genetically Transmitted?” American Political Science Review 99: 153–167.
Alston, William. 1980. “Level-Confusions in Epistemology.” Midwest Studies In
Philosophy 5: 135–150.
Alston, William. 1993. “Epistemic Desiderata.” Philosophy and Phenomenological
Research 53: 527–551.
Annas, Julia and Jonathan Barnes. 1985. The Modes of Skepticism: Ancient Texts
and Modern Interpretation. New York: Cambridge University Press.
Ballantyne, Nathan and E.J. Coffman. Forthcoming. “Uniqueness, Evidence, and
Rationality.” Philosophers’ Imprint.
Bergmann, Michael. 1997. “Internalism, Externalism, and the No-Defeater Condi-
tion.” Synthese 110: 399–417.
Bergmann, Michael. 2005. “Defeaters and Higher-Level Requirements.” Philosoph-
ical Quarterly 55: 419–436.
Bergmann, Michael. 2009. “Rational Disagreement After Full Disclosure.” Epis-
teme 6: 336–353.
Bogardus, Tomas. 2009. “A Vindication of the Equal-Weight View.” Episteme 6:
324–335.
Bouchard, T.J. 2004. “Genetic Influence on Human Psychological Traits.” Current Directions in Psychological Science 13: 148–151.
Bruder, Carl et al. 2008. “Phenotypically Concordant and Discordant Monozygotic Twins Display Different DNA Copy-Number-Variation Profiles.” American Journal of Human Genetics 82: 763–771.
Christensen, David. 2007. “Epistemology of Disagreement: The Good News.” The Philosophical Review 116: 187–217.
Christensen, David. 2009. “Disagreement as Evidence: The Epistemology of Con-
troversy.” Philosophy Compass 4: 756–767.
Christensen, David. 2010. “Higher-Order Evidence.” Philosophy and Phenomeno-
logical Research 81: 185–215.
Christensen, David. Forthcoming. “Disagreement, Question-Begging and Epis-
temic Self-Criticism.” Philosophers’ Imprint.
Cohen, G.A. 2000. If You’re an Egalitarian, How Come You’re So Rich? Cambridge, MA: Harvard University Press.
Cohen, L. J. 1981. “Can Human Irrationality Be Experimentally Demonstrated?” Behavioral and Brain Sciences 4: 317–370.
Cohen, Stewart. 1995. “Is there an Issue about Justified Belief?” Philosophical Top-
ics 23: 113–127.
Cohen, Stewart. Manuscript. “A Defense of the (Approximately) Equal Weight
View.”
Conee, Earl. 2001. “Heeding Misleading Evidence.” Philosophical Studies 103: 99–
120.
DeVaul, R. A., et al. 1987. “Medical School Performance of Initially Rejected Students.” Journal of the American Medical Association 257: 47–51.
Ehrlinger, Joyce, Thomas Gilovich, and Lee Ross. 2005. “Peering Into the Bias Blind Spot: People’s Assessments of Bias in Themselves and Others.” Personality and Social Psychology Bulletin 31: 680–692.
Elga, Adam. 2005. “On Overrating Oneself and Knowing It.” Philosophical Studies 123: 115–124.
Elga, Adam. 2007. “Reflection and Disagreement.” Nous 41: 478–502.
Feldman, Richard. 2003. Epistemology. Upper Saddle River, NJ: Prentice-Hall.
Feldman, Richard. 2005. “Respecting the Evidence.” Philosophical Perspectives 19:
95–119.
Feldman, Richard. 2006. “Epistemological puzzles about disagreement.” Epistemol-
ogy Futures, ed. S. Hetherington. Oxford: Oxford University Press, 216–236.
Feldman, Richard. 2007. “Reasonable Religious Disagreements.” Philosophers
without Gods ed. L. Antony. Oxford: Oxford University Press, 194–214.
Feldman, Richard and Ted Warfield (eds). 2010. Disagreement. New York: Oxford
University Press.
Foley, Richard. 1987. The Theory of Epistemic Rationality. Cambridge, MA: Har-
vard University Press.
Foley, Richard. 2001. Intellectual Trust in Oneself and Others. New York: Cam-
bridge University Press.
Frances, Bryan. 2010. “The Reflective Epistemic Renegade.” Philosophy and Phe-
nomenological Research 81: 419–463.
Fumerton, Richard. 2006. Epistemology. Malden, MA: Blackwell.
Gigerenzer, Gerd. 1991. “How to make cognitive illusions disappear: Beyond heuristics and biases.” European Review of Social Psychology 2: 83–115.
Greene, Joshua. 2007. “The Secret Joke of Kant’s Soul.” Moral Psychology, Vol. 3: The Neuroscience of Morality: Emotion, Disease, and Development, ed. W. Sinnott-Armstrong. Cambridge, MA: MIT Press, 35–79.
Gutting, Gary. 1982. Religious Belief and Religious Skepticism. Notre Dame, IN:
Notre Dame University Press.
Harman, Gilbert. 1973. Thought. Princeton: Princeton University Press.
Hatemi, Peter K. et al. 2010. “Not by Twins Alone: Using the Extended Twin Family Design to Investigate the Genetic Basis of Political Beliefs.” American Journal of Political Science 54: 798–814.
Heath, Chip, Richard P. Larrick, and Joshua Klayman. 1998. “Cognitive Repairs: How Organizational Practices Can Compensate for Individual Shortcomings.” Research in Organizational Behavior 20: 1–37.
Jang, K., Livesley, W. J., and Vernon, P. A. 1996. “Heritability of the Big Five Personality Dimensions and Their Facets: A Twin Study.” Journal of Personality 64: 577–591.
Joyce, Richard. 2006. The Evolution of Morality. Cambridge, MA: MIT Press.
Kelly, Thomas. 2002. “The Rationality of Belief and Some Other Propositional Attitudes.” Philosophical Studies 110: 163–196.
Kelly, Thomas. 2005. “The Epistemic Significance of Disagreement.” Oxford Studies in Epistemology, eds. J. Hawthorne and T. Gendler. Oxford: Oxford University Press, 167–196.
Kelly, Thomas. 2008a. “Evidence: Fundamental Conceptions and the Phenomenal
Conception.” Philosophy Compass 3: 933–955.
Kelly, Thomas. 2008b. “Disagreement, Dogmatism, and Belief Polarization.” Jour-
nal of Philosophy 105: 611–633.
Kelly, Thomas. 2010. “Peer Disagreement and Higher Order Evidence.” Disagree-
ment, eds. R. Feldman and T. Warfield. Oxford: Oxford University Press, pp.
111-174.
King, Nathan. 2008. “Religious Diversity and its Challenges to Religious Belief.” Philosophy Compass 3: 830–853.
King, Nathan. Forthcoming. “Disagreement: What’s the Problem? or A Good Peer
is Hard to Find.” Philosophy and Phenomenological Research.
Kunda, Ziva. 1990. “The Case for Motivated Reasoning.” Psychological Bulletin 108: 480–498.
Lackey, Jennifer. 2010. “A Justificationist View of Disagreement’s Epistemic Sig-
nificance.” Social Epistemology, eds. A. Haddock, A. Millar and D. Pritchard.
Oxford: Oxford University Press, 298–325.
Lehrer, Keith and Thomas Paxon Jr. 1969. “Knowledge: Undefeated Justified
True Belief.” The Journal of Philosophy 66: 225–237.
Lehrer, Keith. 1971. “Why Not Scepticism?” The Philosophical Forum 2: 289–298.
Lehrer, Keith. 1976. “When Rational Disagreement is Impossible.” Nous 10: 327–
332.
Lehrer, Keith. 1983. “Sellars on Induction Reconsidered.” Nous 17: 469–473.
Lehrer, Keith. Forthcoming. “Evidentialism and the Paradox of Parity.” Eviden-
tialism and Its Discontents, ed. T. Dougherty. Oxford: Oxford University
Press.
Lewandowsky, Stephan, Werner Stritzke, Klaus Oberauer, and Michael Morales. 2005.
“Memory for Fact, Fiction, and Misinformation: The Iraq War 2003.” Psy-
chological Science 16: 190–195.
Lewis, David. 1983. Philosophical Papers, Vol. I. New York: Oxford University
Press.
Lewis, David. 1986. On the Plurality of Worlds. Oxford: Blackwell.
Lewis, David. 1993. “Evil for Freedom’s Sake.” Philosophical Papers 22: 149–172.
Lewis, David. 2007. “Divine Evil” in Philosophers without Gods, ed. L. Antony.
New York: Oxford University Press, 231–242.
Lipton, Peter. 2004. “Genetic and Generic Determinism: A New Threat to Free Will?” The New Brain Sciences: Perils and Prospects, eds. D. Rees and S. Rose. New York: Cambridge University Press, 88–100.
Lord, Charles, Lee Ross, and Mark Lepper. 1979. “Biased Assimilation and Attitude Polarization: The Effects of Prior Theories on Subsequently Considered Evidence.” Journal of Personality and Social Psychology 37: 2098–2109.
Lykken, D.T., M. McGue, A. Tellegen, and T.J. Bouchard. 1992. “Emergenesis: genetic traits that may not run in families.” American Psychologist 47: 1565–1577.
Mawson, T.J. 2009. “Mill’s argument against religious knowledge.” Religious Stud-
ies 45: 417–434.
Martin, N.G., et al. 1986. “Transmission of Social Attitudes.” Proceedings of the National Academy of Sciences 83: 4364–4368.
McGinn, Colin. 1993. Problems in Philosophy: The Limits of Inquiry. Malden,
MA: Wiley-Blackwell.
Mednick, S. A., W.F. Gabrielli and B. Hutchings. 1984. “Genetic influences in criminal convictions: evidence from an adoption cohort.” Science 224: 891–894.
Meyer-Lindenberg, Andreas and Daniel M. Weinberger. 2006. “Intermediate phenotypes and genetic mechanisms of psychiatric disorders.” Nature Reviews Neuroscience 7: 818–827.
Milstein, R. M., L. Wilkinson, G. N. Burrow, and W. Kessen. 1981. “Admission Decisions and Performance During Medical School.” Journal of Medical Education 56: 77–82.
Moore, G.E. 1903. Principia Ethica. Cambridge: Cambridge University Press.
Neto, Jose Raimundo and Richard Popkin, eds. 2004. Skepticism in Renaissance
and Post-Renaissance Thought. Amherst, NY: Humanity/Prometheus Books.
Nickerson, Raymond. 1998. “Confirmation Bias: A Ubiquitous Phenomenon in
Many Guises.” Review of General Psychology 2: 175–220.
Nisbett and Ross. 1980. Human Inference: Strategies and Shortcomings of Social
Judgement. Englewood Cliffs, NJ: Prentice-Hall.
Olson, J.M., et al. 2001. “The Heritability of Attitudes: A Study of Twins.” Journal of Personality and Social Psychology 80: 845–860.
Parfit, Derek. 1984. Reasons and Persons. Oxford: Clarendon Press.
Pettit, Philip. 2006. “When to defer to majority testimony – and when not.” Anal-
ysis 66: 179–87.
Plantinga, Alvin. 2000a. “Pluralism: A Defense of Religious Exclusivism.” The Philosophical Challenge of Religious Diversity, eds. P. Quinn and K. Meeker. Oxford: Oxford University Press, 172–192.
Plantinga, Alvin. 2000b. Warranted Christian Belief. New York: Oxford Univer-
sity Press.
Popkin, Richard. 2003. The History of Scepticism from Savonarola to Bayle. New
York: Oxford University Press.
Pronin, Emily, Daniel Y. Lin, and Lee Ross. 2002. “The Bias Blind Spot: Perceptions of Bias in Self Versus Others.” Personality and Social Psychology Bulletin 28: 369–381.
Putnam, Hilary. 1982. Reason, Truth and History. New York: Cambridge Univer-
sity Press.
Rosen, Gideon. 2001. “Nominalism, Naturalism, Epistemic Relativism.” Philosophical Perspectives 15: 60–91.
Rowe, David C. 1994. The Limits of Family Influence: Genes, Experience, and Be-
havior. New York: The Guilford Press.
Russell, Bertrand. 1995. My Philosophical Development. London and New York:
Routledge.
Russell, Bertrand. 1997. The Problems of Philosophy. New York: Oxford Univer-
sity Press.
Rutter, M. 2002. “Nature, Nurture, and Development: From Evangelism through Science toward Policy and Practice.” Child Development 73: 1–21.
Rysiew, Patrick. 2008. “Rationality Disputes – Psychology and Epistemology.” Philosophy Compass 3: 1153–1176.
Schellenberg, John. 2007. The Wisdom to Doubt: A Justification of Religious Skep-
ticism. Ithaca, NY: Cornell University Press.
Segal, Nancy L. 2000. Entwined Lives: Twins and What They Tell Us About Hu-
man Behavior. New York: Plume.
Settle, Jaime et al. 2010. “Friendships Moderate an Association between a Dopamine Gene Variant and Political Ideology.” The Journal of Politics 72: 1189–1198.
Sher, George. 2001. “But I Could Be Wrong.” Social Philosophy and Policy 18: 64–78.
Sinnott-Armstrong, Walter. 2006. Moral Skepticisms. New York: Oxford Univer-
sity Press.
Slutske, Wendy S., et al. 2010. “Genetic and Environmental Influences on Disor-
dered Gambling in Men and Women.” Archives of General Psychiatry 67:
624–630.
Sorensen, Roy. 1988. “Dogmatism, Junk Knowledge, and Conditionals.” The Philo-
sophical Quarterly 38: 433–454.
Sosa, Ernest. 2010. “The Epistemology of Disagreement.” Armchair Philosophy.
Princeton: Princeton University Press.
Stein, Edward. 1996. Without Good Reason: The Rationality Debate in Philosophy
and Cognitive Science. New York: Oxford University Press.
Street, Sharon. 2006. “A Darwinian Dilemma for Realist Theories of Value.” Philosophical Studies 127: 109–166.
Tavris, Carol and Elliot Aronson. 2007. Mistakes Were Made (but Not by Me). New York: Harcourt.
Taylor, S. E. and Brown, J. 1988. “Illusion and well-being: A social psychological perspective on mental health.” Psychological Bulletin 103: 193–210.
Tesser, Abraham. 1993. “The Importance of Heritability in Psychological Research: The Case of Attitudes.” Psychological Review 100: 129–142.
Tesser, Abraham and Rick Crelia. 1994. “Attitude Heritability and Attitude Reinforcement: A Test of the Niche Building Hypothesis.” Personality and Individual Differences 16: 571–577.
Thune, Michael. 2010. “‘Partial Defeaters’ and the Epistemology of Disagreement.” Philosophical Quarterly 60: 355–372.
Unger, Peter. 1971. “A Defense of Skepticism.” The Philosophical Review 80: 198–
219.
Van Inwagen, Peter. 1990. Material Beings. Ithaca, NY: Cornell University Press.
Van Inwagen, Peter. 1995. “Non Est Hick.” The Rationality of Belief and the Plurality of Faith: Essays in Honor of William P. Alston, ed. T. Senor. Ithaca, NY: Cornell University Press, 216–241.
Van Inwagen, Peter. 1996. “It Is Wrong, Everywhere, Always, and for Anyone, to Believe Anything upon Insufficient Evidence.” Faith, Freedom, and Rationality, eds. J. Jordan and D. Howard-Snyder. Lanham, MD: Rowman and Littlefield, 137–153.
Van Inwagen, Peter. 2002. Metaphysics (2nd edition). Boulder, CO: Westview
Press.
Van Inwagen, Peter. 2004. “Freedom to Break the Laws.” Midwest Studies in Phi-
losophy XXVII: 334–350.
Van Inwagen, Peter. 2006. The Problem of Evil. New York: Oxford University
Press.
Van Inwagen, Peter. 2010. “We’re Right. They’re Wrong.” Disagreement, eds. R. Feldman and T. Warfield. Oxford: Oxford University Press, 10–28.
Wegner, D. M., Coulton, G., & Wenzlaff, R. 1985. “The transparency of denial:
Briefing in the debriefing paradigm”. Journal of Personality and Social Psy-
chology 49: 382–391.
Westen, Drew et al. 2006. “Neural Bases of Motivated Reasoning: An fMRI Study
of Emotional Constraints on Partisan Political Judgment in the 2004 U.S.
Presidential Election. Journal of Cognitive Neuroscience 18: 1947–1958.
White, Roger. 2005. “Epistemic Permissiveness.” Philosophical Perspectives 19:
445–459.
Williamson, Timothy. 2000. Knowledge and Its Limits. New York: Oxford Univer-
sity Press.
Wilson, Timothy and Brekke, Nancy. 1994. “Mental Contamination and Mental Correction: Unwanted Influences on Judgments and Evaluations.” Psychological Bulletin 116: 117–142.
Wilson, Timothy, Centerbar, David, and Brekke, Nancy. 2002. “Mental Contamination and the Debiasing Problem.” Heuristics and Biases: The Psychology of Intuitive Judgment, eds. T. Gilovich, D. Griffin, D. Kahneman. New York: Cambridge University Press, 185–200.
Wolterstorff, Nicholas. 1988. “Once Again, Evidentialism – This Time, Social.” Philosophical Topics 16: 53–74.
Wreen, Michael. 1995. “Knockdown Arguments.” Informal Logic 3: 316–336.
Ziman, John. 1978. Reliable Knowledge: An Exploration of the Grounds for Belief
in Science. Cambridge: Cambridge University Press.