
ANTI-LUMINOSITY: FOUR UNSUCCESSFUL STRATEGIES

Murali Ramachandran

In Knowledge and Its Limits Timothy Williamson argues against the luminosity of phenomenal states in general by way of arguing against the luminosity of feeling cold, that is, against the view that if one feels cold, one is at least in a position to know that one does. In this paper I consider four strategies that emerge from his discussion, and argue that none succeeds.

I. Aim

Timothy Williamson [2000] reckons that hardly any mental state is luminous, i.e. is such that if one were in it, then one would invariably be in a position to know that one was. To this end, he presents an argument against the luminosity of feeling cold—which he claims generalizes to other phenomenal states, such as being in pain. As we shall see, however, no fewer than four lines of argument for that conclusion can be extracted from Williamson’s remarks. This is not to suggest that it is unclear which of these strategies is the one Williamson intends to present; but it is instructive to consider the others for the light they shed on the issue and on his own reasoning. However, all of them fail; so I shall argue. My aim here is not to defend the luminosity of phenomenal states per se—indeed, I am undecided about the matter—but, rather, to uncover the different strategies which emerge from Williamson’s discussion, and show that they fall short of refuting luminosity.

II. The Shape of the Argument

Here is an initial sketch of Williamson’s argument to get an idea of the overall structure. Consider a morning on which one feels freezing cold at dawn and very gradually warms up so that by noon one is very warm. Suppose one’s feelings of heat and cold vary so slowly during this process that one is not aware of any change in them over one millisecond (I shall call this the no-awareness-of-change hypothesis, or (NAC)). Let t0, t1, . . . , tn be a series of times at one millisecond intervals from dawn to noon, and let C be the condition of one’s feeling cold. Williamson argues against the luminosity of C by way of a reductio—by showing that (C0), (Cn)*, (R) and (L) below lead to a contradiction [2000: 96–7][1]:

(C0) At t0 C obtains.

(Cn)* At tn C does not obtain.

(R) If at ti one knows C obtains, then at ti+1 C obtains (for any i, 0 ≤ i < n).

(L) If at ti C obtains, then at ti one knows C obtains (for any i, 0 ≤ i ≤ n).

(C0) and (L) entail:

(L0) At t0 one knows C obtains.

(L0) and (R) entail

(C1) At t1 C obtains,

which, together with (L), entails

(L1) At t1 one knows C obtains.

(L1) and (R) entail

(C2) At t2 C obtains,

and so on. Repeated use of this line of reasoning eventually yields:

(Cn) At tn C obtains,

which contradicts (Cn)*. Q. E. D.
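To make the inductive structure explicit, here is a minimal sketch of the reductio (my own illustration, not from the paper): granting (C0) and applying (L) and then (R) at each millisecond step delivers (Cn) for any n, which is all the contradiction with (Cn)* requires.

```python
# A minimal sketch (not from the paper) of the reductio's inductive structure:
# starting from (C0) and applying (L) then (R) at every step yields (Cn).

def derive_cn(n: int) -> bool:
    c_obtains = True                 # (C0): at t_0, C obtains
    for _ in range(n):
        knows_c = c_obtains          # (L): C obtains at t_i, so one knows C obtains at t_i
        c_obtains = knows_c          # (R): one knows C obtains at t_i, so C obtains at t_{i+1}
    return c_obtains                 # (Cn): at t_n, C obtains

# With (C0), (L) and (R) all granted, (Cn) follows however many millisecond
# steps separate dawn from noon, contradicting (Cn)*.
assert derive_cn(100)                # n is arbitrary here; any value gives the same verdict
```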

(L), the luminosity claim, can be saved only by denying (R), so we need to assess Williamson’s case for (R) [2000: 97]. The four lines of argument against (L) we shall be considering correspond to four ways of defending (R) that emerge from Williamson’s discussion. But here is the basic idea:

Argument A

(A1) At ti one knows that one feels cold only if one is at least reasonably confident that one feels cold—for, otherwise, one would not know it—and, moreover, this confidence must be reliably based.

(A2) At ti+1 one is almost equally confident that one feels cold, by the description of the case.

(A3) So, if one does not feel cold at ti+1, then one’s confidence at ti that one feels cold is not reliably based. (For, one’s almost equal confidence a millisecond later that one felt cold is mistaken.)

[1] I am simplifying the argument in two ways. First, where Williamson talks of cases a1–an, and e.g. knowing or believing C in case ai, I talk just of times and what one knows (believes) at time ti. Second, the luminosity claim is strictly just the claim that if one is in C, then one is in a position to know that one is. Williamson arrives at (L) by way of the additional hypothesis that the subject has reflected adequately on his feelings; I will take that for granted.

(A1) and (A3) jointly entail (R). This is easier to see if one considers the contrapositive of (A3): if at ti one’s confidence that one feels cold is reliably based, then one feels cold at ti+1. Before presenting my misgivings, and the alternative lines of argument, let us consider a couple of objections made by Brueckner and Fiocco [2002].

III. Time and Safety

In another connection, Williamson provides the following example:

The Dead President Example

Consider . . . the situation of a generally well-informed citizen N. N. who has not yet heard the news from the theatre where Lincoln has just been assassinated. Since Lincoln is dead, he is no longer President, so N. N. no longer knows that Lincoln is President (knowing is factive). However, N. N. is in no position to know that anything is amiss . . . . Although N. N. does not know that Lincoln is President, he is in no position to know that he does not know.

[Williamson 2000: 23]

Brueckner and Fiocco object that what Williamson maintains here conflicts with his position in the anti-luminosity argument:

Let t be one millisecond before Lincoln dies, and let ‘L’ stand for Lincoln is President. In this example we have KL [N. N. knows L] at t and ∼KL at t+1. In this transition, N. N.’s confidence regarding L remains constant. According to the reasoning behind (R), it would follow that (1) N. N.’s confidence regarding L at t was not reliably based, and hence that (2) at t, ∼KL. But this seems clearly mistaken: by Williamson’s own lights, N. N. does know at t that Lincoln is President.

[Brueckner and Fiocco 2002: 288]

But this objection reads more into the dead President example than Williamson is committed to. He nowhere claims that N. N. knows L immediately before Lincoln’s death—all he is committed to is that after Lincoln’s death N. N. doesn’t know L but is in no position to know this. This is compatible with N. N. not knowing L immediately before the death even though his belief in L is not mistaken then.

Brueckner and Fiocco’s misunderstanding undermines another of their objections. They claim that Williamson’s reasoning is underpinned by the view that knowledge requires ‘safety from error’, a requirement they understand as follows:

(Safety) If S knows p, then S would believe p (in sufficiently similar circumstances) only if p were true.


They argue against (Safety) by way of the dead President example. Consider a world, w, just like the actual world but where Lincoln dies at t not at t+1. By (Safety), if N. N. knows L at t in the actual world, the following conditional holds (at t): N. N. would believe L in sufficiently similar circumstances only if L were true. But it does not hold; for w is sufficiently close to actuality, and in w N. N. believes L at t even though L is false at t. Hence, Brueckner and Fiocco conclude, since by hypothesis N. N. does know L at t, (Safety) is refuted. But, as noted earlier, that N. N. knows L at t was not part of the original hypothesis; and advocates of (Safety) would—and perhaps should—deny it for the very reason Brueckner and Fiocco give. Note: this does not mean that N. N. never knows L; there will be a time before t at which his belief is safe from error.

The points I wish to pursue against Williamson do not question (Safety). Neta and Rohrbaugh [2004] provide alleged counterexamples to (Safety) which may force one to qualify it, but I find (Safety) plausible in the present context: it strikes me that if one falsely believes C at a time tk, then one indeed does not know C a millisecond earlier at tk–1.[2] This squares with Williamson’s remarks on his appeal to reliability:

The use of the concept ‘is reliable’ here is a way of drawing attention to an aspect of the case relevant to the application of the concept ‘knows’ . . . The aim is not to establish a universal generalization . . . we are not required to derive our judgement as to whether the concept applies in a particular case from general principles.

[2000: 101]

On this understanding, let us proceed to the first anti-luminosity strategy.

IV. Strategy #1: Argument A With Belief-entailing Confidence

Williamson’s elaboration of the notion of confidence at play in Argument A actually gives rise to two of the lines of argument we’ll be considering. He says that degrees of confidence in his sense ‘should not be equated with subjective probabilities as measured by one’s betting behaviour’ [2000: 98, my emphases]; and he distinguishes between believing that p is very likely (or assigning p a high subjective probability) and, what he calls, outright belief in p. He then remarks:

What incurs the charge of unreliability is believing a false proposition outright, not assigning it a high subjective probability . . . . The degrees of confidence mentioned in the argument should therefore be understood as degrees of outright belief.

[Williamson 2000: 99]

[2] In this paper I will sometimes use ‘C’ to signify the condition of one’s feeling cold, as in the original specification of Williamson’s overall strategy, and sometimes for the proposition that one feels cold, as in the preceding sentence. The appropriate reading should be obvious from the context.


It is natural to read this passage as affirming that confidence in the relevant sense—henceforth, confidenceB—simply has no application in cases where one does not believe the proposition in question. On this view, ‘S has some degree of confidenceB in the truth of p’ entails ‘S believes that p’.[3]

So, the first defence of (R) I should like to consider is Argument A with confidence understood as confidenceB above, i.e. as belief-entailing. In that case, I suggest, (A2) remains to be proved:

(A2) At ti+1 one is almost equally confident that one feels cold [as one is at ti], by the description of the case [for any i, 0 ≤ i < n].

The reason is this. Suppose there is a pertinent time tk at which one believes C, that one feels cold, but that at tk+1 one does not believe C. Then, at tk one has some degree of confidenceB that C, and at tk+1 one has no degree of confidenceB in that proposition, since one does not believe it outright. So, contra (A2), it would not be true that at tk+1 one is almost equally confidentB that one feels cold. Without (A2), the case for (A3), and, consequently, this case for (R), falls.

One may attempt to defuse the objection by questioning the feasibility of the supposition that one goes from believing C at some time tk to not believing C at the next instant, tk+1. But the onus is surely on defenders of Williamson’s argument to show that the supposition is untenable. Especially as Williamson’s own views on vagueness [1994, 2005], roughly, that there is no vagueness in reality, suggest that he would endorse that supposition unreservedly.

Another counter to the objection might be that at the pivotal time tk one’s degree of confidenceB in C would be so low that it would still be true, even taking the degree of confidenceB at tk+1 to be zero, that the latter was almost equal to the former. However, if this manoeuvre is maintained by defenders of Argument A, they must hold that one’s confidenceB in C at a time may be rendered unreliable solely by virtue of one’s having zero confidenceB in C at the next instant, when C is indeed false! This view of unreliability is highly counterintuitive.

So, I think this first case for (R) stands refuted. However, other remarks of Williamson’s make clear that he does not intend confidence to be understood as confidenceB, for they countenance the possibility of one’s having confidence in C when one doesn’t believe C:

Even if one’s confidence at ti was just enough to count as belief, while one’s confidence at ti+1 falls just short of belief, what constituted that belief at ti was largely misplaced confidence; the belief fell short of knowledge.

[2000: 97]

[3] One might query my reading of the quoted passage (as a referee has done). Just as ‘S feels some degree of anger’ does not entail that ‘S is angry’, one might argue, ‘S has some degree of outright belief in p’ does not entail that ‘S outright believes p’. But the phrase ‘outright’ just blocks that weak reading to my ears.


We will reconsider Argument A in this light shortly. But, I think it will be instructive to consider one other strategy in support of (R) before that. It emerges from the following passage:

The intuitive idea is that if one believes outright to some degree that a condition C obtains, when in fact it does, and at a very slightly later time one believes outright on a very similar basis to a very slightly lower degree that C obtains, when in fact it does not, then one’s earlier belief is not reliable enough to constitute knowledge.

[Williamson 2000: 101, my emphasis]

Notice that deleting the emphasized phrases makes no salient difference to the central claim: that one’s true belief that C obtains is unreliable if one could falsely believe that C obtains on a ‘very similar basis’; the appeal to degrees of outright belief here is simply superfluous.

So, let us consider a cousin of Argument A that understands reliability in terms of (outright) belief.

V. Strategy #2: Outright Belief and ‘No Awareness of Change’

Argument B

(B1) At ti one knows that one feels cold only if one believes that one feels cold, and this belief is reliably based.

(B2) If at ti one knows that one feels cold, then at ti+1 one believes that one feels cold.

(B3) So, if one does not feel cold at ti+1, then one’s belief at ti that one feels cold is not reliably based. (For one’s belief a millisecond later that one felt cold is mistaken.)

(B1) and (B3) jointly entail (R). (B1) is compelling, and (B3)’s take on reliability in this context seems correct (as we have noted earlier). So, the weight of the argument rests on (B2): why should one’s knowing C at time ti entail one’s believing C at ti+1?

Berker [2008] is sceptical that Williamson has independent motivation for (B2)—‘other than its being what Williamson needs in order to derive (MAR) [= (R)]’ [Berker 2008: 8]. But, actually, there is independent motivation. Think back to the Dead President example, and the response I ventured on Williamson’s behalf against Brueckner and Fiocco’s first objection. That response took N. N. to stop knowing L, that Lincoln is president, some time before their belief in L became false; for a short period, N. N. believed L truly, but this belief was not reliable enough to count as knowledge; the belief became less reliable as the assassination attempt grew closer. Prima facie, the same seems true of one’s attitude towards C in the luminosity case. To begin with, at dawn, one will affirm that one feels cold without any hesitation or protracted reflection; but, as one gradually warms up, and approaches the critical period where one goes from feeling cold to not feeling cold, one’s affirmations will become less and less strident; at some stage, attentive reflection seems necessary to decide whether or not C; one may affirm C, but less confidently. In this ‘indecisive’ period, one might think, one’s reliability is less than it was at the beginning; it is then natural to think that, as in the case of N. N., one does not go immediately from knowing C at one time to not even believing it at the next instant: one goes from knowing, to merely believing truly, and then perhaps to believing falsely.

Hence, contra Berker, I think the onus is in fact on those who dispute (B2) to provide an argument against it, rather than on Williamson to defend it. Weatherson [2004] identifies a promising line of attack. He argues that Williamson’s reasoning ignores the possibility of a physical state that constitutes both the state of feeling cold and the state of believing that one feels cold. As has been noted (e.g. by Berker [2008]), one can press the underlying point without such concession to physicalism. One’s knowing at ti that one feels cold is threatened by one’s not feeling cold at ti+1 only if it might easily have turned out at ti+1 that one falsely believes that one feels cold. But why should we accept that one might easily have had a false belief of this kind? Suppose one’s belief that one feels cold normally co-varies with one’s feeling cold:

(Cv1) If one were to feel cold, then one would believe one felt cold, and

(Cv2) If one were not to feel cold, then one would not believe one felt cold.[4]

Williamson himself claims that the ‘invocation of reliability [as a necessary condition for knowledge] does not presuppose that whether one feels cold is independent of one’s disposition to judge that one does’ [2000: 99–100].[5] So, he does not dispute that there might be a constitutive connection between feeling cold and believing that one feels cold. Granting such a constitutive connection, it is not a far step to the co-variance thesis, that, except for some special (abnormal) circumstances—in extreme cases of coldness perhaps, or when one is misled by false conflicting beliefs—one would believe one felt cold if and only if one did feel cold. On such a view, there won’t be a time ti such that at ti one knows that one feels cold but at ti+1 one falsely believes that one feels cold, even if ti+1 is the precise time at which one stops feeling cold.[6] On the face of it, a defender of luminosity and (Cv2) can maintain that one simply goes from knowing that one feels cold at ti to not believing that one feels cold at ti+1. (B2) is thereby compromised, and the present case for (R) is undermined.

[4] It is important to distinguish feeling cold from being cold; I am not claiming that one’s belief that one feels cold (or one’s belief that one is cold) co-varies with one’s being cold. It would be easy, in any case, to refute the claim that being cold is luminous: one may have a fever—i.e. be hot—and yet feel (and know that one feels) cold. Nor am I claiming that the co-variance theses hold tout court: there may well be extreme cases where one’s reflective abilities are distorted; the co-variance is being claimed simply for ordinary circumstances, such as Williamson’s scenario—hence, the qualifier ‘normally’.

[5] Again, this claim is plausible for the condition of feeling cold but not for being cold.

[6] Williamson anticipates, and counters, challenges to his argument based on the supposed vagueness of the predicate feels cold [2000: 102 ff.]; the point of the ‘even if’ clause is that my objection applies even if there is a precise cut-off point, i.e. even if feels cold is not vague.

So one must discredit the co-variance thesis in order to defend Argument B for (R). The burden of proof lies with the anti-luminosity theorist. For a defender of the luminosity of feeling cold ipso facto maintains that knowledge of C co-varies with C’s obtaining; such a view may well be accompanied, if not underpinned, by the view that belief in C also co-varies with C’s obtaining. Hence, no case against luminosity can simply presuppose the falsity of the belief co-variance thesis.

This puts a defender of Argument B in an awkward position: any satisfactory case against (Cv1) would constitute an independent argument against the luminosity of feeling cold. For if one establishes that one can feel cold without believing that one feels cold, one thereby establishes that one can feel cold without knowing that one feels cold. Hence, Argument B is apparently unsound or unnecessary![7]

In allowing that at some point one goes from knowing C at one instant to not believing it the next, are we perhaps just begging the question against (NAC), Williamson’s hypothesis that one’s feelings of hot and cold vary so gradually that one is not aware of any change in them over one millisecond? I think not. What (NAC) precludes is one’s going from believing C at one time to believing not-C the next millisecond; it does not preclude one’s going from believing (or knowing) C to not believing C.[8] Obviously, one cannot simply preclude the second possibility by stipulation; and, given Williamson’s denial of vagueness in the world, without the second possibility one could not go from knowing that one felt cold at dawn to knowing that one felt warm at noon. So Williamson himself should accept that the present objection to Argument B is not prohibited by (NAC). (But, as you will recall, Argument B is not the anti-luminosity strategy he intends.)

What of the considerations I presented earlier as prima facie support for (B2)? Doesn’t one’s belief in C become less reliable as one gets warmer, as suggested by one’s decreasing self-assurance? No—not if the co-variance thesis, comprising (Cv1) and (Cv2), is correct, and reliability is understood as compliance with (Safety); given the former, one is in no danger of believing C falsely, even when one is getting close to not feeling cold; one’s belief in C is constantly safe from error, and thus reliable. One should not take any hesitation or ‘difficulty’ we may have in making a judgement about C as signalling unreliability of that judgement, not if one accepts the co-variance thesis.
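A toy model may help fix ideas; the following sketch is my own illustration (the dawn-to-noon timing and the placement of the cut-off are invented for the example, not taken from the paper). If belief in C tracks feeling cold in the way (Cv1) and (Cv2) describe, then there is no time at which one believes C falsely, so the belief complies with (Safety) throughout; and one simply goes from believing C at the cut-off to not believing C a millisecond later, which is the transition (NAC) does not preclude.

```python
# A toy sketch (my own illustration, not from the paper) of the co-variance point.
# Under (Cv1)/(Cv2) the belief that one feels cold tracks the feeling exactly,
# so no millisecond exhibits a false belief in C, even with a sharp cut-off.

N = 21_600_000                       # milliseconds from dawn to noon (assumed: a 6 a.m. dawn)
CUT_OFF = N // 3                     # assumed: last millisecond at which one still feels cold

def feels_cold(i: int) -> bool:
    return i <= CUT_OFF

def believes_cold(i: int) -> bool:
    return feels_cold(i)             # (Cv1) and (Cv2): belief co-varies with the feeling

# No sampled millisecond has one believing C while C fails to obtain,
# so the belief is never false and (Safety) is respected throughout.
sample = range(0, N, 1000)
assert not any(believes_cold(i) and not feels_cold(i) for i in sample)

# One goes straight from believing C to not believing C across the cut-off,
# which is the transition the text argues (NAC) does not rule out.
assert believes_cold(CUT_OFF) and not believes_cold(CUT_OFF + 1)
```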

[7] One may object that to show that one can feel cold without believing that one does is not to show that one can feel cold without being in a position to know that one does—so to refute (Cv1) is not to refute luminosity. But, I take a ‘satisfactory’ case against (Cv1) to be one that applies in cases where one has duly reflected on whether one feels cold or not, as Williamson’s example has it. If (Cv1) does not hold in such circumstances, luminosity, as Williamson understands it, does not either.

[8] Thus, I dispute Blackson’s [2007] take on the issue: ‘If a condition is luminous and does not obtain, a subject who has turned his attention to the matter will know it does not obtain and presumably not believe otherwise’ [Blackson 2007: 403–4]. If this were true, one would go from knowing C at one time to knowing not-C the next millisecond (assuming a sharp cut-off point, as Williamson does), and, consequently, from believing C to believing not-C. This would surely conflict with the hypothesis that one is not aware of any change in one’s feelings (of cold) over any millisecond. In any case, it is clear from Williamson’s characterization of luminosity [2000: 24, 95] that a condition may be luminous without being ‘transparent’, i.e. without one always being in a position to know whether or not it obtains.


VI. Strategy #3: Argument A with Non-belief-entailing Confidence

Let us turn now to Williamson’s intended strategy. Here is Argument A again:

Argument A

(A1) At ti one knows that one feels cold only if one is at least reasonably confident that one feels cold—for, otherwise, one would not know it—and, moreover, this confidence must be reliably based.

(A2) At ti+1 one is almost equally confident that one feels cold, by the description of the case.

(A3) So, if one does not feel cold at ti+1, then one’s confidence at ti that one feels cold is not reliably based. (For one’s almost equal confidence a millisecond later that one felt cold is mistaken.)

As noted earlier, (A1) and (A3) jointly entail (R). When we understood confidence as belief-entailing in §IV, I disputed (A2). On the strategy we are considering here, confidence is understood as being available in the absence of outright belief (call it confidenceA). (A2) seems entirely plausible given (NAC): intuitively, confidence in C, on any viable understanding of confidence, should not drop significantly over a millisecond if one is not aware of any change over that time.

The premise I dispute in this case is (A1): not the idea that knowing C requires some reasonable degree of confidenceA in C, but the claim that this confidenceA has to be reliably based in Williamson’s sense. Let us say a belief in a proposition p is safe from belief-error (B-safe) if: one would not believe p (in sufficiently similar circumstances) unless p were true; and safe from confidenceA-error (C-safe) if: one would not have almost as much confidenceA in p (in sufficiently similar circumstances) unless p were true. I am assuming that one’s knowing C at a time t does require one’s belief in C at t to be B-safe; the question facing us is whether C-safety is also required.[9]

[9] I take the ‘sufficiently similar circumstances’ to include the situation a millisecond earlier and later.
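For concreteness, here is a toy rendering of the two conditions (my own illustration, not from the paper). Times are millisecond-indexed list positions; following footnote [9], ‘sufficiently similar circumstances’ is modelled as the situation a millisecond earlier and later, and the margin for ‘almost as much’ confidenceA is an assumed parameter. The sample values at the end are invented, and simply show a point at which a belief can be B-safe while failing C-safety.

```python
# A toy rendering (my own, not from the paper) of B-safety and C-safety over
# millisecond-indexed lists. 'Nearby' times stand in for sufficiently similar
# circumstances; the 0.05 margin for 'almost as much' confidence is assumed.

def neighbours(i: int, n: int) -> list[int]:
    return [j for j in (i - 1, i, i + 1) if 0 <= j < n]

def b_safe(believes: list[bool], truth: list[bool], i: int) -> bool:
    """B-safety at t_i: no nearby time at which one believes C while C is false."""
    return all(not (believes[j] and not truth[j]) for j in neighbours(i, len(truth)))

def c_safe(conf: list[float], truth: list[bool], i: int, margin: float = 0.05) -> bool:
    """C-safety at t_i: no nearby time with almost as much confidence in C while C is false."""
    return all(not (conf[j] >= conf[i] - margin and not truth[j])
               for j in neighbours(i, len(truth)))

# Invented sample: C stops obtaining at index 5, belief co-varies with C,
# and confidence drains slowly across the cut-off.
truth = [True] * 5 + [False] * 5
belief = truth[:]
conf = [0.9, 0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3, 0.2]

print(b_safe(belief, truth, 4), c_safe(conf, truth, 4))   # True False: B-safe but not C-safe
```

The question the rest of this section pursues is whether the stronger, C-safety verdict is genuinely required for knowing.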

Here is a line of argument against the latter requirement advanced by Leitgeb [2002] and, more recently, Berker [2008]. Suppose one’s degree of confidenceA in C, one’s degree of outright belief in C, co-varied perfectly with how cold one actually felt. So, if (at all) it is vague that C obtains at a time, it is vague to the same degree that one believes C at that time; if there is a sharp cut-off point between feeling cold and not, there is at the same time a sharp cut-off between one’s believing C and not believing C. Now, even if all this were the case, Williamson’s reasoning against luminosity would still go through, and this seems wrong:

[A]t some point during the morning one’s belief that one feels cold is too unreliable [on Williamson’s measure] to constitute knowledge. However, this just seems wrong: one’s beliefs about whether one feels cold appear to be as reliable as they possibly could be.

[Berker 2008: 13, my emphasis]

As Leitgeb puts it, there would be no margin for error in this case,

because a calibrated system [such as we would have under the above supposition] simply draws the relevant distinctions appropriately, such that there is no danger of misapprehending a total state in which one does not feel cold by having the simultaneous belief that one does.

[Leitgeb 2002: 216, my emphasis]

The point, in short, is this: C-safety is too strong a requirement because it would rule out luminosity in the hypothesized ‘perfect-calibration’ situation, which is daft, because one couldn’t be any more reliable.

The trouble with the above argument is that, on the face of it, the perfectly attuned subject it envisages would not be one for whom (NAC) holds: such an individual would be aware of changes in her feelings of cold from one millisecond to the next. So, Williamson could simply grant luminosity in the ideal case, but insist that luminosity is only possible because (NAC) does not hold; for less perfect individuals, i.e. every actual person, for whom (NAC) is obviously correct, feeling cold is not luminous. More work needs to be done, then, if one wishes to pursue the Leitgeb–Berker strategy.

I am going to be less ambitious; rather than presenting an argument against C-safety as a requirement for knowing, I am going to attempt to undermine the case in its favour.

In the quoted passage above, Leitgeb remarks that in the perfect-calibration case, there would be no danger of misapprehending a state in which one does not feel cold (by virtue of having the belief that one does). But we have seen in §V that this could hold in ordinary situations too. If the co-variance thesis, comprising (Cv1) and (Cv2), is correct, there is simply no danger of one falsely believing C, even if there isn’t a fine-grained match between the degree of one’s confidenceA in C and the degree to which one feels cold. Thus, Leitgeb’s avowed reasons for endorsing luminosity in the perfect-calibration case should apply in the ordinary, ‘un-idealized’ case envisaged by Williamson too: for, there is, in effect, ‘no margin for error’ here either. So, why not settle for B-safety as the sole reliability condition for knowing?

One motivation for requiring C-safety is what I call the continuity view. When one believes a proposition p, one is willing to use it as a premise in practical reasoning; according to Williamson [2000: 99], the degree of confidenceA one has in a proposition is meant to be a measure of one’s willingness to so use it. So, Williamson is naturally read as taking confidenceA to be ‘continuous’ with (outright) belief in the sense that one believes a given proposition solely in virtue of one’s having enough confidenceA in it (see e.g. [Berker 2008: 11, n. 18]). I find the continuity view implausible: it strikes me that there should be a difference in kind, not just degree, between belief and confidence. Moreover, there appear to be cases where one may have a very high degree of confidenceA in a proposition while falling short of believing it outright. Consider, for example, the proposition (p) that my lottery ticket was not the winning ticket last night. One might be very willing to use p as a premise in practical reasoning, but deny that one believed p until one had checked the results. If asked whether one had won the lottery, one would rightly respond, ‘I don’t know’; presumably, if one did indeed believe p, one should have answered, ‘No’. So, the continuity view is at least questionable. It might be objected that the fact that one would not throw away one’s ticket on the basis of the following piece of practical reasoning[10]

My lottery ticket was not the winning ticket.

So, it won’t matter if I throw it away without checking the results.
So, I’ll throw it away without checking the results.

suggests that one does not have a high degree of confidenceA in p after all. But, this strategy can be used in the reverse direction too. One might well be prepared to sell the ticket for £50, say, to someone who also did not know the result. This suggests that one is tacitly employing p in something like the following piece of reasoning:

My lottery ticket was not the winning ticket.

Jones has offered me £50 for it and he has no more information than I have.
If I get £50, I’ll have made a hefty profit.
So, I will sell the ticket to Jones.

Given that one still accepts that for all one knows, the ticket might be the winning ticket, and, hence, that one might have won well over a million pounds, one must surely be very confident that the ticket is not the winning ticket if one sells it for £50.

[10] Thanks to a referee for the objection and the following example.

Hence, I am not persuaded by the continuity view. Of course, one might take confidenceA to be a technical notion, so that confidenceA can be stipulated to have whatever properties Williamson wishes to specify. In that case, whether I find it implausible or not is beside the point. But, clearly, this tack won’t do: no real phenomenon may fit Williamson’s specifications, and, in any case, if his anti-luminosity argument is to persuade us, it must employ a notion of confidence that we recognize.

I grant, however, that one might find the C-safety requirement plausible independently of the continuity view. What I shall do is offer two putative explanations of its initial plausibility and argue that they do not in fact fully support it.

The first putative explanation relies on the observation that C-safety goes hand in hand with B-safety in many ordinary cases. Consider the Dead President example again. In that case, the day (at any rate, the week) begins with the subject, N. N., knowing L, that Lincoln is President; later, as the assassination approaches, she stops knowing L, but continues to believe L; later still, at time tE, she hears of the assassination and stops believing L. Her confidenceA in L remains pretty much constant over a period encompassing a time where she knows L and the whole time she merely believes L truly; her confidenceA plummets at tE when she hears of the assassination, but she stops believing L then too. Thus, in this case, whenever N. N.’s belief in L is B-safe, it is C-safe, and vice versa. (Of course, this is not to say that her belief is always B-safe or C-safe.)

Here is another example, one where the subject’s confidenceA does not fall dramatically when she stops believing the proposition in question. Suppose Annie has been assured by her good, and generally reliable, friend Bill that he will attend her dinner party that evening. At 6 p.m., Annie knows (S) that she will see Bill that evening, and her confidenceA in S is high. However, as she and her other guests are sitting down to dinner at 8 p.m., she assumes that Bill has been delayed slightly. Annie still knows S, but her confidenceA in S is much lower. By 8.30 p.m., while she still believes S, her confidenceA is sufficiently low that she cannot be said to know S. And by 9 p.m., her confidence is so low that she has stopped believing S. But Bill turns up in time for dessert! His train had been delayed owing to an earlier suicide attempt further down the tracks by a failed epistemologist. In this example, Annie’s belief in S is B-safe and C-safe when she knows S earlier in the evening, and, arguably, for the whole period she believes S.

I suggest that our tacit recognition of the compliance with C-safety in such cases is perhaps responsible for our finding its application to Williamson’s case plausible. But, one could argue that it was B-safety in these cases that actually secured knowledge: C-safety was an inessential bystander.

The second putative explanation of why the appeal to C-safety is initially plausible is that, intuitively, knowledge is jeopardized by misplaced high confidence, however confidence is understood: i.e. one cannot know a proposition p if one could have had a high degree of confidence in p in sufficiently similar circumstances and yet p not be the case. In particular, one cannot know C at a time ti if one is highly confidentA of C at ti+1 even though C does not obtain at ti+1. The trouble is, our intuitions regarding misplaced very low confidence are far from clear: we do not have unequivocal intuitive support for C-safety across the whole confidence-scale here. B-safety, by contrast, seems a definite requirement, at least in the Williamson scenario.

I contend that Williamson’s intended argument against luminosity falls short of establishing its conclusion until a more persuasive case for requiring C-safety is provided.

VII. Strategy #4: Knowing Requires a Reasonably High Degree of Confidence

The fourth anti-luminosity strategy suggested by Williamson’s remarks has been neglected in the literature but is perhaps the most promising. (A1), the first premise of Argument A, proposes that knowing p requires reasonable confidence in p, by which Williamson presumably means a reasonably high degree of confidence. But although this requirement is mentioned, it actually plays no pivotal part in his reasoning: for example, replacing (A1) with (A1*) below makes no difference to the workings of Argument A:

(A1*) At ti one knows that one feels cold only if one has some confidence that one feels cold, and this confidence is reliably based.

The fourth strategy gives the notion of ‘reasonable confidence’ an essential role. The leading idea, which serves as a lemma for the following arguments, goes something like this:

(CL) Confidence Lemma: One cannot go from believing C (that one feels cold) with a reasonably high degree of confidence to not believing C a millisecond later, when one is not aware of any change in one’s feelings.

Argument C

(C1) At ti one knows C (that one feels cold) only if one believes C with a reasonably high degree of confidence, and this belief is reliably based.

(C2) If at ti one believes C with a reasonably high degree of confidence, then at ti+1 one believes C (from (CL)).

(C3) So, if one does not feel cold at ti+1, then one’s belief in C at ti that one feels cold is not reliably based. (For, one’s belief a millisecond later that one felt cold is mistaken.)

(C1) and (C3) establish (R), that if one knows C at ti, then C obtains at ti+1. The precise nature of confidence here is immaterial; it could even be understood as subjective probability. The fact is, (C2) seems plausible whatever notion of confidence is in play.

We saw that earlier strategies were scuppered by the co-variance thesis, viz. that one’s believing that one feels cold (normally) co-varies with one’s actually feeling cold. The present strategy, on the other hand, appears compatible with that thesis, as attested to by the following argument:

Argument D For (R)

(D1) At ti one knows C (that one feels cold) only if one believes C with a reasonably high degree of confidence.

(D2) If at ti one believes C with a reasonably high degree of confidence, then at ti+1 one believes C (from (CL)).

(D3) If at ti+1 one believes C, then C is true at ti+1 (from belief co-variance).

(R) So, if one knows C at time ti, then C is true at ti+1 (from (D1)–(D3)).


Another point to note is that Argument D is available to those who deny that knowledge invariably, or even normally, requires belief-reliability.

So, as I say, this fourth strategy, where the appeal to reasonably high confidence is essential, seems to have more going for it than the ones we previously considered. There is scope for resistance, however. Suppose we replaced the phrase ‘reasonably high’ with ‘fairly high’ throughout; then the Confidence Lemma, (CL), is certainly compelling, but the claim that knowledge requires a fairly high degree of confidence seems incorrect. I suggest that (CL) seems plausible because we read ‘reasonably high’ as ‘fairly high’, but that we hear (C1) as plausible because we read ‘reasonably high’ as something like ‘not negligible’. On the first reading, (C1) need not be true at times close to the critical period during which we go from feeling cold to not feeling cold; on the second reading, the move from (C1) to (C2) is not sanctioned by (CL) in situations where the confidence is not fairly high. (The same points apply to (D1) and (D2), respectively.)

That the phrase ‘reasonably high’ admits of such divergent readings is borne out by the following reductio of (CL) (that makes no reference to knowledge):

(E1) At time t0 one believes C. (hypothesis)

(E2) At time tn one does not believe C. (hypothesis)

(E3) At ti one believes C only if one’s confidence in C at that time is reasonably high. (assumption for reductio)

(E4) If at ti one believes C with a reasonably high degree of confidence, then at ti+1 one believes C. (from (CL))

(E5) So, if at ti one believes C, then at ti+1 one believes C. (from (E3/4))

(E6) Hence, at time tn one believes C. (from (E1) and (E3)–(E5))

If one has negligible confidence in C, then presumably one does not believe it. So, there seems to be a natural reading of ‘reasonably high’ on which (E3) is true. In that case, the argument would appear to be a reductio of (CL), since (E6) contradicts (E2). But this is at odds with the initial plausibility of (CL). These two ‘verdicts’ give succour to my suggestion that arguments (C) and (D) equivocate between two senses of ‘reasonably high’: the sense of ‘reasonably high’ which makes (CL) plausible is not the same as that which makes (C1) or (D1) plausible.
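The equivocation can be made vivid with a toy numerical model (my own illustration; the thresholds and the rate of decline are invented, not taken from the paper). Read ‘reasonably high’ as ‘not negligible’ and (E3) holds by construction; read it as ‘fairly high’ and (E4) holds; but the two readings come apart in the stretch where confidence is above the belief threshold yet no longer fairly high, and it is exactly there that the chain to (E5) breaks.

```python
# A toy numerical sketch (my own, not from the paper) of the equivocation:
# belief requires only 'not negligible' confidence, while (CL)/(E4) is compelling
# only for 'fairly high' confidence. The chain (E3)-(E5) snaps in between.

STEP = 0.001            # assumed drop in confidence per millisecond (tiny, per (NAC))
NOT_NEGLIGIBLE = 0.2    # reading of 'reasonably high' that makes (E3) plausible
FAIRLY_HIGH = 0.7       # reading of 'reasonably high' that makes (E4)/(CL) plausible

confidence = [1.0 - STEP * i for i in range(1001)]     # t_0 ... t_1000
believes = [c >= NOT_NEGLIGIBLE for c in confidence]

# (E4) on the 'fairly high' reading: fairly high confidence at t_i guarantees belief at t_{i+1}.
e4_holds = all(believes[i + 1] for i in range(1000) if confidence[i] >= FAIRLY_HIGH)

# (E5) would need: believing at t_i guarantees believing at t_{i+1}. It fails at
# the single step where confidence slips below the belief threshold.
e5_failures = [i for i in range(1000) if believes[i] and not believes[i + 1]]

print(e4_holds)         # True
print(e5_failures)      # one index, near i = 800, where belief lapses
```

So the reductio tells against (CL) only if ‘reasonably high’ is read uniformly, which is just the diagnosis offered above.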

One might attempt to save the present strategy by way of the idea that the minimum degree of confidence required for knowledge, dK, is higher than the minimum degree of confidence required for belief, dB.[11] On that assumption, if one knows C at ti, one’s degree of confidence in C at ti+1 will be at least dK or just negligibly smaller; so it will, at any rate, be higher than dB. Hence, (CL) and (C1) are secured in one fell swoop. But why should having the minimum degree of confidence required for believing always be sufficient for belief? In §VI, I suggested that one may be very confident that one has not won the lottery, but this falls short of actually believing it outright if one has not checked the results. If this is right, a defender of luminosity could continue to maintain that one stops knowing C precisely when one stops believing C. So, more needs to be said to make this fourth strategy fly.

[11] Another point suggested by a referee.

To sum up: we have considered four anti-luminosity strategies inspired by Williamson’s remarks, and seen that none succeeds. I would say that the prospects for an essentially logical refutation of the luminosity of phenomenal states, such as Williamson aims to provide, do not look good.[12]

University of Sussex
Received: May 2008; Revised: August 2008

References

Berker, S. 2008. Luminosity Regained, Philosophers’ Imprint 8/2, URL = <www.philosophersimprint.org/008002/>.

Blackson, T. 2007. On Williamson’s Argument for (Ii) in His Anti-Luminosity Argument, Philosophy and Phenomenological Research 74/2: 397–405.

Brueckner, A. and M. O. Fiocco 2002. Williamson’s Anti-Luminosity Argument, Philosophical Studies 110/3: 285–93.

Leitgeb, H. 2002. Review of ‘Timothy Williamson, Knowledge and its Limits’, Grazer Philosophische Studien 65: 207–17.

Neta, R. and G. Rohrbaugh 2004. Luminosity and the Safety of Knowledge, Pacific Philosophical Quarterly 85/4: 396–406.

Weatherson, B. 2004. Luminous Margins, Australasian Journal of Philosophy 82/3: 373–83.

Williamson, T. 1994. Vagueness, London: Routledge.

Williamson, T. 2000. Knowledge and Its Limits, Oxford: Oxford University Press.

Williamson, T. 2005. Vagueness in Reality, in The Oxford Handbook of Metaphysics, M. Loux and D. Zimmerman, eds, Oxford: Oxford University Press: 690–715.

[12] Research for this paper was supported by the University of Sussex Humanities Research Fund, and the Spanish Ministry of Science and Education: project HUM2007-63797/FISO. Thanks to Jacob Berkson, David Efird, Michael Morris, Andrew Rebera, Tom Stoneham, and Timothy Williamson for comments on earlier versions. Thanks also to the referees of this journal for exceptionally constructive and detailed reports.
