In his book, Philosophical Explanations, in the chapter “Knowledge and Skepticism,” Robert Nozick endeavors to construct a set of conditions that are jointly sufficient for knowledge. Let’s call this the ‘truth-tracking’ account of knowledge. Truth-tracking is meant to deal with a range of hard epistemological cases, including Gettier-style problems and Brain-in-the-Vat-style skeptical arguments. In this paper, I aim to show that, while Nozick’s truth-tracking account doesn’t succeed as a whole (largely for reasons noted by Saul Kripke), the driving intuition of Nozick’s account may still be relevant – or even necessary – for any good theory of knowledge. I will refer to this driving intuition as ‘Sensitivity.’ What I mean by this term will become clearer as we go along.
PART 1: Truth-Tracking Explained
Nozick begins his project by taking for granted the usual first two conditions for knowledge: (1) p is true; (2) S believes that p. I’ll take these for granted as well. To these he adds a third condition, a subjunctive conditional: (3) If p weren’t true, S wouldn’t believe that p. We may also express this in Lewisian language as ~p□→~(SBp).[1. Nozick expresses it as: not-p → not-(S believes that p) (page 172).] I will, however, avoid such formalisms, because I want to be semantically clear about what I’m referring to. Nozick also uses ‘possible worlds’ semantics to express (3), though he is not committed to any particular such account.[2. “I do not mean to endorse any particular possible-world account of subjunctives, nor am I committed to this type of account” (page 174).] I will make use of this semantics. For example, S’s belief should be counterfactually sensitive so that, in the closest worlds where not-p, S wouldn’t mistakenly believe that p. Note that (3) is the heart of Sensitivity, though it doesn’t capture the whole truth-tracking account. To develop that, let’s apply our conditions thus far to some classic hard cases.
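For reference, the three conditions so far can be set out together. Here is a sketch in standard Lewis-style box-arrow notation, with Bp abbreviating ‘S believes that p’ (Nozick’s own formalization, cited in the footnote above, differs only notationally):

```latex
\begin{align*}
&(1)\quad p \text{ is true}\\
&(2)\quad S \text{ believes that } p \quad (\mathrm{B}p)\\
&(3)\quad \lnot p \;\Box\!\!\rightarrow\; \lnot \mathrm{B}p
  \qquad \text{(in the closest $\lnot p$-worlds, $S$ does not believe that $p$)}
\end{align*}
```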
Clock Case: It’s 12:00. S sees a stopped clock and comes to believe that it’s 12:00. Apply (3): If it were 12:15, S would (mistakenly) believe that it was 12:00. Or: In the closest 12:15-worlds, S would mistakenly believe that it’s 12:00. So, S doesn’t know that it’s 12:00.
Sheep Case: S sees a realistic fake sheep on a hill, and so comes to believe that there’s a real sheep on the hill. There is a real sheep hidden on the hill, so S’s belief is correct. Apply (3) as in the above Clock Case, and we see that S doesn’t know there’s a sheep on the hill.
Barn Case: S drives past a barn, and comes to believe that he has seen a barn. What S doesn’t know, however, is that there are several convincing fake barns in the area. This means that, had there not been a real barn there, there might[3. Note that “might” is enough here. We include close-world “might”s, and exclude remote ones; e.g., I know the coin will land heads or tails, even though it also might be carried off by an errant hummingbird.] have been a fake barn there, and so there are a substantial number of worlds in the neighborhood of actuality in which S would have mistakenly come to believe that there was a barn there. Therefore, S doesn’t know that there’s a barn there.
Grandma Case: Grandma Smith is visited by her healthy grandson, whom she comes to believe to be in good health because she sees that he’s in good health. Had he been in poor health, however, he would not have visited, and relatives would have told Grandma Smith that he was well. We see here that Grandma’s belief fails to satisfy (3), yet it seems wrong to say that she doesn’t know her grandson is well. Nozick’s solution here is to add a rule that I’ll call ‘Fixed Method’: S must come to believe that p via the same method in the actual and counterfactual situations. Now (3) is satisfied: In the closest worlds in which she sees her grandson, she doesn’t believe he is well unless he appears well.
Dictator Case: S reads, in a reliable newspaper, the true report of a dictator’s assassination. Later, a fake story is planted, in which it is claimed that the dictator was not really assassinated. The reliable newspaper now retracts the true report of the dictator’s assassination. Everyone in the world comes to believe the retraction, except for the conspirators and except for S, who by mere and bizarre chance never sees the retraction. (3) is satisfied here; however, it seems clear that S doesn’t know that p, and thus the intuition that drives Sensitivity is not satisfied, even though Sensitivity’s abstraction as condition (3) is.
The problem here is that, in the closest possible worlds, S sees and believes the retraction. So, Nozick adds what Kripke refers to as a “factual counterfactual”[4. Saul Kripke, “Nozick on Knowledge,” in Philosophical Troubles (p. 177).]: If p were true, then S would believe that p. Or: In the closest p-worlds, S believes that p.[5. Nozick formalizes this as p → not-(S believes that not-p) (page 178).] Let’s call this fourth condition ‘Stability.’
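In the same notation as before, Stability can be sketched as a fourth condition (note that the paraphrase in the text, ‘S would believe that p,’ is slightly stronger than Nozick’s official form cited in the footnote, ‘S would not believe that not-p’):

```latex
(4)\quad p \;\Box\!\!\rightarrow\; \mathrm{B}p
  \qquad \text{(in the closest $p$-worlds, $S$ believes that $p$)}
```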
Stability seems to take care of the Dictator Case. As we will see in a moment, there is little hope for Stability, and I don’t intend to defend it here. However, Stability rounds out Nozick’s truth-tracking account, and figures into Nozick’s notion of Sensitivity: If p were false, S wouldn’t believe it true; if p were true, S wouldn’t believe it false. S’s belief is fully sensitive to the way things actually are, and thus tracks the truth. I think that such a broad formulation of Sensitivity is a mistake on Nozick’s part, one that arises from his desire to solve knowledge wholesale rather than to develop a single but solid necessary condition. I’ll consider this more closely in a moment, after commenting on another hard case: The Brain in the Vat.
PART 2: Skepticism
The following Brain in the Vat Case is a variation of the classic skeptical argument: (1) I (think I) know that I have two hands; (2) Knowing that I have two hands entails knowing that I am not a handless brain in a vat (BiV); (3) I don’t know that I’m not a BiV; (4) THEREFORE, I don’t know that I have two hands.
One thing that’s special about this argument, and indeed that makes it so difficult to challenge, is that the evidence available to me is exactly the same whether or not I’m a BiV, while in our other cases, S could investigate further to fortify justification of belief. The BiV argument also seems to be perfectly valid. Nozick, however, wants to challenge the second premise, which is derived from the principle that knowledge is closed under known entailment (PCKE).
One way he can accomplish this is by applying his truth-tracking account. In the closest worlds in which I have two hands, I believe that I have two hands; and in the closest worlds where I don’t have two hands, I don’t believe that I have two hands (e.g., I lost one or both in an accident, and believe accordingly). BiV-world is a remote world, so my belief tracks the truth: I know that I have two hands. A consequence of this result is that PCKE fails. That is, I know that having two hands entails that I’m not a handless BiV. Yet, I do indeed know that I have two hands, despite not knowing that I’m not a BiV.
Let’s assume for a moment that we need some sort of Sensitivity norm. Does this mean we must do away with PCKE? I don’t think so; however, I think that this does suggest that refinement of PCKE may be needed in order to preserve its validity. The formulation Nozick is challenging is: If S knows that p, and S knows that p entails q, then S knows that q. Suppose, however, that p is a fairly simple scientific principle that S understands, and q is a complex theoretical scientific principle that is entailed by p, but that S doesn’t understand. S may believe that p entails q (perhaps because of Stephen Hawking’s testimony, which may change tomorrow), but does S really know that p entails q? Arguably not.
We can revise PCKE to exclude such cases: If S knows that p, and S knows that p entails q because S has competently inferred q from p, then S knows that q.[6. I would actually go so far as to argue that S does not currently know that q even if S was able to get to q from p while in college twenty years ago, and remembers that he once knew how to do this.] I imagine that this is a formulation of PCKE with which Nozick would have agreed. But what does this mean for Sensitivity?
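Schematically, writing K for ‘S knows that’ and leaving the competent-inference clause informal, the revision amounts to adding a conjunct to PCKE’s antecedent:

```latex
\begin{align*}
\text{PCKE:}\qquad & \bigl(\mathrm{K}p \land \mathrm{K}(p \rightarrow q)\bigr) \rightarrow \mathrm{K}q\\
\text{Revised:}\qquad & \bigl(\mathrm{K}p \land \mathrm{K}(p \rightarrow q) \land S \text{ competently infers } q \text{ from } p\bigr) \rightarrow \mathrm{K}q
\end{align*}
```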
I think this is an odd case, and don’t have space to get into all the possible ways of dealing with it, but it seems to me that one can challenge the premise that I don’t know I’m not a BiV. One might appeal, for example, to Contextualism or a Putnam-style argument. I don’t see, however, that Sensitivity can help us here: I believe that the BiV-world is far from the actual world. But how do I know I’m not in a world that’s far from the actual world (i.e., a BiV)? If I were in a world that’s far from the actual world, I wouldn’t believe that I was.[7. The usual interpretation is more like, “If BiV-world were the actual world, I would mistakenly believe that it wasn’t.” My point is simply that odd results can be had, depending on how we translate Sensitivity’s underlying intuition into possible world semantics.]
Odd indeed, though I’m not sure whether this bodes worse for skepticism, Sensitivity, or possible world semantics. As it stands, I leave skepticism an open question, PCKE intact, and Sensitivity still in the running as a necessary condition for knowledge.
PART 3: Problems with Truth-Tracking
Let’s review the truth-tracking account in its entirety: (1) p is true; (2) S believes that p via method (M); (3) If p were false, S wouldn’t mistakenly come to believe that p (via M); (4) If p were true, S would believe that p (via M).[8. It seems that Nozick ultimately considers the combination of (3) and (4) to represent modal sensitivity; truth-tracking as a whole also includes Fixed Method. My intention is to restrict the term ‘Sensitivity’ to (3), and ‘Stability’ to (4). I apologize for any confusion that may arise. The issue is that ‘Sensitivity’ is, on my view, an intuition rather than any specific abstraction of that intuition as a condition for knowledge.] I intend to defend a version of condition (3), but first I’ll mention the aspects of truth-tracking that I won’t defend: Fixed Method and Stability.
It seems that Fixed Method isn’t going to work, because it allows for an ad hoc restriction on the possible-worlds similarity (i.e., closeness) relation, so that only those worlds in which knowledge is vindicated are included. While this isn’t a problem in the Grandma case (seeing versus being lied to are different methods indeed), Nozick himself mentions a problematic example: Jesse James’ mask happens to slide off, and S recognizes Jesse James. In many close worlds, the mask doesn’t slide off; thus: S doesn’t know that it’s Jesse James. Nozick claims, however, that to hold the method fixed we must include the mask’s falling off: In the closest worlds where the mask falls off and it is Jesse James, S knows it’s Jesse James. This seems to guarantee knowledge arbitrarily, and so leads us to question Fixed Method.
It also reveals an issue with Stability: It’s possible to set conditions that guarantee knowledge by ensuring that S remains in appropriately close worlds. Kripke, in “Nozick on Knowledge,” observes that we can likely always satisfy Stability by prefixing belief claims with “I believe (via M) that…”[9. Kripke, 183.] So, S need only think, “I believe that I correctly believe the dictator was assassinated…” and thus restrict Stability tests to the closest worlds in which he has not seen and believed the retraction.
It’s becoming clear that much of what’s at issue here is determining how close or far to extend our sphere of relevant counterfactual worlds. This is an issue for Nozick’s condition (3) as well. Consider the Barn Case. If S hadn’t seen THAT real barn, S would have seen some other real barn. One with two windows fewer, perhaps, or moved a foot to the left (or whatever the smallest change needed in order for it to no longer be THAT barn). Surely these are closer worlds than the fake-barn worlds. Kripke points out in a footnote that he avoids discussion of metaphysical issues such as these and agrees to follow the intuition that had some real barn not been there, then a fake one might have been.[10. Kripke, 171, Footnote 23. Note, however, that Kripke does emphasize that it is “that particular field” in the actual and counterfactual worlds.] I think, however, that Kripke undermines this intuition in his critique when he particularizes the barn – i.e., makes it about THAT barn – by adding to the barn (or to its very barn-ness?) a specific secondary property q (more on how this works below).
This is a consequence, though, of a powerful observation on Kripke’s part[11. Kripke makes many other observations, but I’ll restrict myself mostly to this one.], which is that, often when p fails to satisfy (3), p&q will satisfy (3). Suppose the fake barns are blue and the real barn is red. S doesn’t know that he saw a barn, but he does know that he saw a red barn.
A similar result can be had if we alter the Barn example slightly.[12. I consider this a loose variation on Kripke’s ‘woman in the acting profession’ example, which I don’t have space for here.] Suppose the real barn is green and the fake barns are red. S doesn’t know that he saw a real barn, but he does know that he saw a green barn, because, in many of the closest worlds where the barn isn’t real and green, the barn is still real, but is (the more common color) red. Thus, again, p (the barn is real) fails to satisfy, but p&q (the barn is real and is green) satisfies. (I.e.: In the closest worlds where p is false, S might have come to believe p; in the closest worlds where p&q is false, S might have come to believe p, but not q; and so would not have come to believe p&q.)
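The asymmetry in the parenthetical above can be sketched using Lewis’s might-conditional (◇→) alongside the would-conditional, again with Bp for ‘S believes that p’:

```latex
\begin{align*}
&\lnot p \;\Diamond\!\!\rightarrow\; \mathrm{B}p
  && \text{((3) fails for $p$: were $p$ false, $S$ might still believe $p$)}\\
&\lnot(p \land q) \;\Box\!\!\rightarrow\; \lnot \mathrm{B}(p \land q)
  && \text{((3) holds for $p \land q$: were $p \land q$ false, $S$ would not believe $p \land q$)}
\end{align*}
```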
PART 4: Defending Sensitivity
Kripke for the most part seems to think that the intuition that motivates Nozick’s condition (3) is a strong one.[13. Well, until the end of his paper, where he brings up whether Sensitivity can help me know that I’m not supercredulous. While a clever observation along the lines of Russell’s Paradox, I don’t think it’s a good reason to abandon Sensitivity. Perhaps Russell’s types will aid us here. Or perhaps it breaks down at the belief norm, making it epistemologically unintelligible: S believes p and ~p.] I agree, and in fact would like to defend some version of (3) as a necessary condition for knowledge. My expression of that intuition is as follows: If S’s belief would assign the same truth value to p, whether or not p is true, then S doesn’t know that p.
The trick is to come up with a Sensitivity norm that doesn’t apply this intuition ad hoc. I propose the following guidelines as a starting point, while keeping in mind that Stability has (for now) been ruled out:
(3): In the closest not-p worlds, S doesn’t mistakenly believe that p; (3’): Rule of Application: Apply (3) when S’s evidence for p is the same or substantially similar within the neighborhood of actuality (i.e., within the sphere of worlds closest to, or most similar to, the actual world);[14. Note that Kripke himself points out that Sensitivity is unproblematic in cases where S’s experience is exactly the same (p. 188).] (3’’): The Rule of Predicate Exclusivity Across Propositions. [NOTE: I mean “propositional units” here, i.e., p and q are each a propositional unit.]
With Kripke in mind, I’ll demonstrate these guidelines by holding them up to our cases. In the BiV[15. I include this while keeping in mind the caveats noted in the above section on Skepticism.], Clock, and Sheep cases, (3’) obtains, so we apply (3), which is satisfied. In the Grandma case, evidence is substantially different, so (3’) doesn’t obtain, and so we don’t apply (3). I realize that there is a concern here about (3’) that is similar to our concerns about the ad hocery of Fixed Method, and arbitrarily designated degrees of possible-worlds closeness (and perhaps David Lewis’ Rule of Resemblance; see his paper, “Elusive Knowledge”).
At the same time, it seems that this vagueness will be an issue for any modal account of knowledge and must be overcome. Restricting our account to qualifications about (3) seems like a step in that direction. That said, let’s consider the case I’m most interested in here, Kripke’s Fake Blue Barns example.
As I noted above when I introduced that example, Kripke particularizes the real barn by adding a second predicate, q: When S believes he’s seen a red barn, he believes he’s seen a real barn (p), AND that the barn is red (q). According to (3”), however, we should maintain predicate exclusivity across propositions. That is, predicates should not be repeated across propositions, so that it’s very explicit what we are testing for when we test p&q. One way to do this is to examine that about which we are predicating, which in this case is the thing that is the barn, and the thing that is red, so that p = a barn thing and q = a red thing. Done this way, p&q fails to satisfy (3), so that S doesn’t know he’s seen a barn, though he does know he’s seen a red structure.
What I’m suggesting is that we should take more care in how we parse out S’s belief in relation to the “it” here, for the sake of testing Sensitivity – done, of course, in some consistent, principled way.
My suggestion works in other particularizing examples as well: S doesn’t know that he saw a barn, but he does know that he saw a structure with a water stain on it (here I’m applying the word ‘predicate’ loosely, but the sort of exclusivity – or perhaps ‘uniqueness’ would be a better word for it – that I’m aiming for is clear). And considered more broadly: S doesn’t know that he saw a barn, red or otherwise, because, if some barn hadn’t been there, among the things that might have been there – a house; a silo; an empty lot; in a remote world, a real barn erected on fake-barn-repellent soil – is the relevant alternative of a fake barn, whatever color it might have been.
A brief note about (3’), in which I appeal to evidence rather than experience. Though S’s experience differs between actuality and counterfactuality in the above Fake Blue Barns scenario, his evidence is substantially similar: S sees a barn-like structure. I think the difference in color doesn’t result in a failure to satisfy (3’), because, though the experience of blue is different from that of red, S’s evidence for believing he’s seen a barn is substantially similar. It might also be that there is available evidence that could have, but didn’t, factor into S’s experience (e.g., had S looked more closely at the structure). Surely there are vaguer cases, but I’ll leave these alone for now.
In conclusion, Nozick’s truth-tracking account fails to provide us with jointly sufficient conditions for recognizing instances of knowledge; nor does it convince (most of) us of the failure of closure. However, there remains the intuition that S doesn’t know he’s seen a barn, because there are relevant fake-barn alternatives. If an account of knowledge doesn’t bear this out, perhaps it’s a failing of our (subjunctive conditional) account, or a failing of the logic we use to get at our accounts, but it seems to me that the intuition remains one that must be dealt with.
[NOTE: A critical point I’m making is that the barn’s red-ness is not part of its ontological makeup — i.e., not essential to its being a real barn — and therefore is not a relevant piece of evidence for S’s conclusion about the barn being real. So that aspect of S’s belief isn’t what’s being tested when we test for whether he knows there’s a barn there.
A response to this might be that the propositional units could go further down, then, and thus my account is too vague. Why stop at red-ness? E.g., S might be said to believe that there is a window in a certain region of space, and another window in another region of space, and so on, until we’ve got all the visible parts — phenomenological units — of the barn accounted for, and now we need to figure out what’s essential to the barn, etc. I don’t think this is a legitimate concern, for reasons already noted in the previous section. The belief is that there’s a barn there, whatever the barn’s essential features may be. But even if we are concerned about the barn’s essential features, red-ness and water stains would be uncontroversially off the list.
I suppose I’m suggesting that the belief in question should be parsed out into propositional units in a way that honors the belief’s “ontological correspondence” (let’s say) to the things in the world the belief refers to, which corresponds to the external evidence for each proposition. Here, p corresponds to barn-thing and q corresponds to red-ness. When conjoined as p&q, p fails Sensitivity but q passes.
Which is to say that S believes there’s a real red barn there but might have mistakenly believed there’s a real blue barn there; the “real” part fails, the “red” part passes: S wouldn’t have mistakenly come to believe that there’s a real barn there that’s red, but he would have mistakenly come to believe that there’s a real barn there that’s blue; so, he would have mistakenly come to believe that there’s a real barn there.]