Inquiry, 2016
http://dx.doi.org/10.1080/0020174X.2017.1261995

One’s own reasoning

Michael G. Titelbaum
Department of Philosophy, University of Wisconsin-Madison, Madison, WI, USA

ABSTRACT

Responding to Cappelen and Dever’s claim that there is no distinctive role for perspectivality in epistemology, I argue that facts about the outcomes of one’s own reasoning processes may have a different evidential significance than facts about the outcomes of others’.

ARTICLE HISTORY Received 13 April 2016; Accepted 8 June 2016
KEYWORDS Reasoning; higher-order evidence; peer disagreement; Uniqueness Thesis; indexicals; first-person perspective

The thesis of Herman Cappelen and Josh Dever’s The Inessential Indexical is that ‘There is no philosophically distinctive role to be played by perspectivality in the explanation of action, inquiry, or perception’ (Cappelen and Dever 2013, 2). They apply this thesis to the epistemological case as follows:

If evidence is holistic, then a Bayesian picture can be achieved by having the evidential relation hold between total credence states. But any of these epistemic pictures is made a mockery of if first-person states are given epistemological center stage, because then the holder of the evidence mysteriously enters into the evidential relation. (178)

Similarly, Ofra Magidor expresses skepticism about any special status for the de se, and challenges ‘the thought that de se attitudes have a certain kind of epistemically privileged status’ (Magidor 2015, §4). Against Cappelen and Dever and Magidor, I will argue that there is a substantive asymmetry in epistemology between information about oneself and information about others. In particular, information about the outcome of one’s own reasoning processes bears an evidential feature that information about the outcome of other agents’ reasoning need not.

1. Peer disagreement and screening off

Though epistemologists have carried on related discussions for some time, current discussion of peer disagreement cases was kick-started – and largely shaped – by Elga’s (2007). Elga imagined a case in which two agents – say, you and me – have formed divergent opinions about some particular hypothesis. Interestingly, we have each formed our opinions on the basis of the same batch of relevant total evidence. Also, we consider each other to be epistemic peers: we antecedently take each other to be equally good at assessing the bearing of bodies of evidence on hypotheses like this one. We then interact and discover our disagreement (and its evidential basis). How (if at all) should each of us change our opinions about the hypothesis upon discovering the disagreement?

Systematic answers to this question tend to fall into two groups: conciliationist responses hold that each of us should move our opinion about the hypothesis at least some distance toward that of the disagreeing party; steadfast views hold that at least one of us may rationally keep her opinion unchanged. For instance, Elga endorses the conciliationist Equal Weight View, which implies that in a peer disagreement case the agents should come to have the same attitude toward the hypothesis after discovering their disagreement, with that attitude lying somewhere between their initial positions. (If one believed and the other disbelieved, they should now suspend judgment; if we’re working with credences, the final credence might be an average of their initial degrees of belief.) Elga also discusses the steadfast Right Reasons View, which holds that whichever of the two agents initially read the evidence correctly should stick to her opinion even in light of the disagreement. Of course, it may turn out that there is no systematic, general rational requirement on responses to peer disagreement; Kelly (2010) endorses a Total Evidence View on which either steadfast or conciliatory responses may be called for, depending on various features of the evidence at hand.

Weatherson (Unpublished manuscript) characterizes the idea behind the Equal Weight View in a particularly perspicuous fashion.


To understand his approach, we need the notion of screening off. Screening off is a technical notion defined in terms of conditional probabilities, but for our purposes we can employ a rough, intuitive gloss. Roughly, S screens off E from H when E generally has an evidential influence on H, but if S is incorporated first then taking E into account has no further bearing on H.[1]

Screening off can occur in cases of undercutting defeat. Suppose E is the fact that no one has ever liked a Facebook post I put up, and H is the hypothesis that everybody hates me. E is presumably some evidence for H. But now suppose S is the fact that I don’t have a Facebook account. In light of S, E no longer has any bearing on H at all. S screens off E from H, by undercutting the evidential connection between E and H.

Screening off can also occur in cases that involve not undercutting, but evidential redundancy. If you discover E, that my office light is on in the middle of the day, you might suspect that I’m on campus today, and therefore H that I will be attending afternoon tea. But if you’re already certain that I’m on campus today (perhaps you saw me in the hall), this fact S will screen off E from H.

Notice that this office light case isn’t one of undercutting defeat. In undercutting cases, the possibility that some particular relationship holds between E and H makes E evidentially relevant to H; the undercutter demonstrates that this relationship does not obtain and thereby cancels the evidential relevance.[2] In the office light case, the presumed relationship generating E’s evidential relevance to H is a causal fork; my being on campus today both causes my light to be on and causes me to attend tea. Because E is evidence that I’m on campus, it’s also evidence that I will be at tea. When you establish S (that I’m on campus), this makes office light facts redundant with respect to establishing my attendance at tea. But this isn’t by negating the relationship that originally provided E’s relevance to H; you’re still just as sure of the causal fork structure between E, H, and S.
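For reference, here is one standard way the conditional-probability definition gets spelled out (a rough sketch only, consistent with the gloss above and with the partition point in note [1]):

\[
P(H \mid E \wedge S) = P(H \mid S) \quad\text{and}\quad P(H \mid E \wedge \neg S) = P(H \mid \neg S), \qquad\text{even though}\quad P(H \mid E) \neq P(H).
\]

That is, once S (or its negation) is settled, conditioning on E makes no further difference to H, although E bears on H unconditionally.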

[1] Among many other subtleties I’m avoiding, strictly speaking S should screen off E from H when facts about whether S is true mute E’s influence on H. That is, screening off requires both S and ~S (or, more generally, every element of a partition of which S is a part) to render E irrelevant to H. This complication won’t matter for the discussion that follows.
[2] John Pollock, who introduced the distinction between undercutting and rebutting defeat, wrote that undercutting defeaters ‘attack the connection between the reason and the conclusion’ (Pollock 1987, 485).


Reichenbach (1956) introduced screening off in discussing common cause cases. Causal forks give rise to evidential redundancy screeners rather than undercutting screeners. Generally, redundancy screening arises whenever the evidential pathway between E and H passes entirely through S. Besides causal forks, we have cases in which E causes S which in turn causes H. Or we may have cases in which the evidential connection is generated by something other than a causal relation. Consider the hypothesis H that Bob is unmarried, the evidence E that Norma says Bob is a bachelor, and the proposition S that Bob is indeed a bachelor. S screens off E from H by making E redundant, but the connection between S and H is not causal.

To apply screening off to peer disagreement, let’s walk carefully through each stage of what happens in such a disagreement:

Stage 0: Neither of us has ever considered the hypothesis in question, H. We each possess the same total evidence, E, relevant to H.

Stage 1: We have now each considered H in light of E, and adopted an attitude toward H in light of E. Our two attitudes are different from each other’s, but neither of us is aware of that.

Stage 2: Each of us notices the judgment he has made about H in light of E. I thereby come to possess the fact that I have judged some particular attitude toward H appropriate in light of E. You come to possess the fact that you have judged some attitude toward H appropriate. Again, the two attitudes are different, but neither you nor I is aware of that because I don’t possess the facts about what you have judged and vice versa.

Stage 3: We have now interacted and discussed our opinions. Each of us now possesses not only the original E, but also all the facts about who judged what.

I’ll admit it’s a bit artificial to separate out these stages – especially Stages 1 and 2 – as if they happen at distinct, identifiable times. But distinguishing them makes it easier to separate out the evidential effects of various facts possessed by each party in the disagreement.


Theories of peer disagreement differ on what attitude you should have toward H at Stage 3. At Stage 3, each of us possesses a bundle of total evidence containing both E (the initial evidence relevant to H) and some facts about who judged what, when. Let’s call those latter facts the J-facts. Weatherson suggests that maintaining that J screens off E from H is both necessary and sufficient for the Equal Weight View – he calls this the ‘Judgments Screen Evidence’, or ‘JSE’, view. To understand why, consider Elga’s characterization of the Equal Weight View:

Upon finding out that an advisor disagrees, your probability that you are right should equal your prior conditional probability that you would be right. Prior to what? Prior to your thinking through the disputed issue, and finding out what the advisor thinks of it. Conditional on what? On whatever you have learned about the circumstances of the disagreement. (2007, §11)

This characterization of the Equal Weight View works with credences (subjective probabilities), and is designed to apply generally to all disagreements (not just disagreements with epistemic peers). Yet the important idea for our purposes is there. Upon finding yourself in a disagreement, your attitude toward the hypothesis should be based on the circumstances of the disagreement (the fact that the parties drew opposite conclusions) and should isolate out whatever you took to follow from the evidence (your thinking through the disputed issue). In our case, your Stage 3 attitude toward H should be determined by the J-facts, with no influence from E. E may have had an impact on your attitude toward H at Stage 1, but at Stage 3 E is rendered irrelevant by the J-facts. The J-facts have screened off E from H.

Having gone this far, we might also identify the Right Reasons View on peer disagreement with ESJ, Evidence Screens Judgments. Let’s suppose that of the two of us, I was the one who initially gauged the significance of E for H correctly. That is, at Stage 1 I had a rational attitude toward H while yours was irrational. According to Right Reasons, I should maintain this attitude toward H even after learning of our disagreement. That is, at Stage 3 I should maintain the attitude toward H I possessed at Stage 1. Put another way, once I have incorporated the influence of E into my attitude toward H, the addition of J-facts into my total evidence between Stage 1 and Stage 3 should not change that attitude at all. E screens off the J-facts from H. Absent other evidence bearing on a hypothesis, it might be rational to form one’s views based on information about the judgments various agents (including oneself) have made. But according to ESJ, when one possesses the very evidence on which those judgments were based, that evidence renders facts about the judgments irrelevant to the hypothesis.
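Using the rough probabilistic gloss from earlier (my own shorthand, writing J for the full collection of J-facts and leaving aside the partition subtlety of note [1]), the two views just described amount to:

\[
\text{JSE:}\quad P(H \mid E \wedge J) = P(H \mid J) \qquad\qquad \text{ESJ:}\quad P(H \mid E \wedge J) = P(H \mid E).
\]

On JSE, once the J-facts are in hand, the original evidence makes no further difference to H; on ESJ, once E is in hand, the J-facts make no further difference.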

Finally, we should note that since Total Evidence doesn’t make any systematic general recommendations, it doesn’t subscribe to JSE, or ESJ, or any other screening off thesis. I, on the other hand, am eventually going to defend a thesis about peer disagreement cases that does say something systematic involving screening off. Yet my thesis will be identical to neither JSE nor ESJ. Before we get to that, though, we need to make some finer-grained distinctions among peer disagreements.

2. Focusing the issue

Our first key distinction will be between disagreements in permissive cases and disagreements in non-permissive cases. We will suppose that agents draw conclusions from evidence by applying evidential standards. In the abstract, we can characterize an agent’s evidential standards as a function that moves her from bodies of total evidence to attitudes toward particular hypotheses.[3] More substantively, we can think of your evidential standards as embodying your approach to evidence: maybe you analyze information using particular statistical techniques, or by looking for the simplest hypothesis consistent with the evidence, or by thinking about what your mother would say in this case, etc. Some evidential standards – perhaps counterinductive ones? – are irrational. A rational agent possesses rational evidential standards and adopts the attitudes toward hypotheses recommended by those evidential standards when applied to her total evidence.

[3] White (2005) and Schoenfield (2014) refer to these as ‘epistemic standards’; I prefer a terminology that avoids the association with knowledge and focuses on these standards’ operating on evidence. Bayesians represent evidential standards using what’s known as a ‘hypothetical prior’ or ‘ur-prior’; see Meacham (Forthcoming).

A number of philosophers maintain that there is exactly one rationally permissible set of evidential standards. They defend the

Uniqueness Thesis: Given any body of evidence and hypothesis, rationality permits only one attitude toward that hypothesis for any agent with that body of total evidence.

Though the general issue has been debated by epistemologists for decades if not centuries, the Uniqueness Thesis was given that name by Richard Feldman in his (2007), then forcefully defended by White (2005).[4]

If the Uniqueness Thesis is true, then any time two agents with the same total evidence disagree, at least one of them is being irrational. Two kinds of rational mistake might be present. First, it might be that the agents draw different conclusions about the hypothesis because they apply different evidential standards to their shared evidence. In this case, at least one of the agents is applying a rationally incorrect set of standards, since the Uniqueness Thesis says only one set of standards is permissible. Second, it might be that the agents are applying the same evidential standards to the same total evidence, but (at least) one of them has made a performance error – she has implemented her standards incorrectly in drawing a conclusion from the evidence. This is what happens in oft-discussed tipping cases, in which two diners each mentally add a 15% tip to the same restaurant bill and then split the amount equally.[5] Presumably, the diners are applying the same arithmetic standards of what it is for a bill to gain 15% then be split in two; if the diners reach different conclusions about how much each owes, it must be that one of them has failed at applying those standards, by calculating incorrectly.[6] Of course, it’s consistent with the Uniqueness Thesis that there be disagreements in which both kinds of rational error occur; at least one agent applies the wrong standards and at least one agent fails to apply her standards correctly.

But if Uniqueness is false, a completely different type of disagreement becomes possible. In such disagreements, the two agents have the same relevant evidence, the agents draw different conclusions about some hypothesis, yet no one has made a rational error. This is because the agents reach their divergent conclusions by correctly applying different evidential standards, and both evidential standards are rationally permissible.

[4] My statement of the Uniqueness Thesis here is somewhat rough, but good enough for our purposes. For more precise versions, and citations to philosophers and arguments on each side of the thesis, see Kopec and Titelbaum (2016).
[5] I believe this sort of example originated at Christensen (2007), 193.
[6] Note that performance errors can be of either commission or omission. For instance, an agent’s evidential standards may demand that she believe a hypothesis whenever it stands in a certain relation to her total evidence, yet the agent may fail to notice in a particular case that the hypothesis under consideration stands in that relation.

White calls people who deny the Uniqueness Thesis ‘permissivists’. Permissivism need not be anything-goes; the permissivist can think that there are some bodies of evidence and hypotheses on which all rational evidential standards agree. For instance, rational standards might be required to get deductive and arithmetic cases right. If so, the tipping example would be a non-permissive case, and any disagreement found in such an example must be attributable to irrationality. The permissivist earns her label by thinking that at least some cases are permissive; there are at least some cases in which two agents with the same relevant evidence may disagree without anyone’s making a rational error.

In my view, the correct response to a peer disagreement case depends on whether it’s permissive or non-permissive. In a non-permissive case, Right Reasons holds: if either agent drew the rationally mandated conclusion prior to discovering the disagreement, she should maintain that conclusion after conferring with her peer. I argue for this conclusion in my (2015); the main premise of my argument is that rationality prohibits akratic states. I won’t go through the details here, but I will mention one stage in the argument that will matter for our discussion later. For me, the Right Reasons position on peer disagreement follows from the Fixed Point Thesis, which states (in slogan form) that mistakes about the requirements of rationality are mistakes of rationality. In other words, if a particular conclusion is rationally required in a particular situation, it is a rational mistake to have false beliefs about whether that conclusion is so required.

What about peer disagreement in permissive cases? For reasons to be revealed presently, I think that in at least some permissive cases agents should be conciliatory when a peer disagreement becomes apparent. Before I explain this position, though, I want to make a couple of stipulations that will delimit our discussion going forward:

(1) Our discussion will focus exclusively on permissive cases. These are the sorts of cases to which the thesis I will ultimately propose applies. Of course, if Uniqueness is true then there are no permissive cases. So I am effectively stipulating that the Uniqueness Thesis is false. If you want to know why I think it’s false, see Titelbaum (2010) and Titelbaum and Kopec (Unpublished manuscript).

(2) In the cases we will discuss, no performance errors occur, and all the agents involved are certain no performance errors have occurred. So any disagreements we encounter when agents share the same evidence will be attributable entirely to differences in their evidential standards, and the agents involved know that. I make this stipulation not because I think performance errors are unimportant; on the contrary, I think understanding them is crucial to epistemology. But eliminating performance errors reduces the number of moving parts in our examples, thereby simplifying the analysis. I will return to the significance of this stipulation later.

3. The reasoning room

I now want to refute a couple of claims philosophers have made about permissive cases. First, some philosophers (such as Cohen 2013) grant that permissive cases are possible, but deny the possibility of ‘acknowledged permissive cases’. In other words, they claim that while two agents with the same evidence may rationally draw opposing conclusions, it is irrational for either agent to acknowledge that other conclusions would be rationally permissible.[7] Second, Kelly argues that if acknowledged permissive cases do exist, then conciliatory responses to peer disagreement in such cases are unmotivated. Considering a case in which two agents with differing, rational evidential standards discover a disagreement between them, he writes,

Ex hypothesi, the opinion that I hold about [the hypothesis] is within the range of perfectly reasonable opinion, as is the opinion that you hold. Moreover, both of us have recognized this all along. Why then would we be rationally required to change? (2010, 119)

[7] Again, see Kopec and Titelbaum (2016) for more citations.


According to Kelly, once we recognize that we’re in a permissive case, the question of what to do in the light of disagreement is settled: since each party recognizes the rational acceptability of the other party’s position, there can be no rational pressure for either party to move.

Both of these claims can be rebutted using an example invented by Matthew Kopec and myself:

You are standing in a room with nine other people. Over time the group will be given a sequence of hypotheses to evaluate. Each person in the room currently possesses the same total evidence relevant to those hypotheses. But each person has different ways of reasoning about that evidence (and therefore different evidential standards). When you are given a hypothesis, you will reason about it in light of your evidence, and your reasoning will suggest either that the evidence supports belief in the hypothesis, or that the evidence supports belief in its negation. Each other person in the room will also engage in reasoning that will yield exactly one of these two results.

This group has a well-established track record, and its judgments always fall in a very particular pattern: For each hypothesis, 9 people reach the same conclusion about which belief the evidence supports, while the remaining person concludes the opposite. Moreover, the majority opinion is always accurate, in the sense that whatever belief the majority takes to be supported always turns out to be true. Despite this precise coordination, it’s unpredictable who will be the odd person out for any given hypothesis. The identity of the outlier jumps around the room, so that in the long run each agent is odd-person-out exactly 10% of the time. This means that each person in the room takes the evidence to support a belief that turns out to be true 90% of the time. (Titelbaum and Kopec Unpublished manuscript)

Kopec and I defend two intuitions about the Reasoning Room. First, we maintain that when you first receive a hypothesis and reason about it in light of your evidence, it is rational for you to believe whatever conclusion you judge the evidence to support. (Or at least for you to assign a high credence to that conclusion.) Notice that if this intuition is correct, then at least one person in the room will rationally form a belief at odds with yours in light of the same evidence. So the Reasoning Room is a permissive case. Moreover, if you believe this intuition is correct, your knowledge of the group’s track record will allow you to realize that someone else in the room has formed a rational belief at odds with yours. Yet this realization doesn’t change what it’s rational for you to believe about the hypothesis. So the Reasoning Room is an acknowledged permissive case, refuting the first claim made above.

Now let’s suppose that after forming your own opinion about the hypothesis, you select one other inhabitant of the room at random and inquire what she concluded. Our second intuition is that if she reveals she has drawn the opposite conclusion from you, it is rational for you to suspend belief about (or adopt a 0.5 credence toward) the hypothesis at issue. This position can be defended by some probability mathematics, and Kopec and I go through those mathematics in the paper. But the basic idea is this: If the conclusion you initially drew is true, it would be highly unlikely that a randomly selected peer in the room would disagree with your opinion (since 8 out of 9 of them agree). On the other hand, if your initial conclusion is false, such disagreement is much more likely (indeed, guaranteed). Encountering disagreement from a randomly selected peer is much more likely if your initial conclusion was false, so upon encountering such disagreement you should decrease your confidence in that conclusion.
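To make the sort of probability mathematics just gestured at concrete, here is a quick sketch in my own notation (using the track record figures from the example; the details in the paper may differ). Let R be the proposition that your initial conclusion is true and D the proposition that a randomly selected peer disagrees with you. The track record gives P(R) = 0.9; if you are right, 1 of the other 9 people disagrees with you, and if you are wrong, all 9 do. So

\[
P(R \mid D) \;=\; \frac{P(R)\,P(D \mid R)}{P(R)\,P(D \mid R) + P(\neg R)\,P(D \mid \neg R)} \;=\; \frac{0.9 \times \tfrac{1}{9}}{0.9 \times \tfrac{1}{9} + 0.1 \times 1} \;=\; \frac{0.1}{0.2} \;=\; 0.5,
\]

which is just the suspension of belief (0.5 credence) recommended above.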

Notice that if this second intuition is correct, the Reasoning Room is an acknowledged permissive case in which it is rational to conciliate upon encountering a disagreeing peer. This refutes Kelly’s claim above.

Obviously both of these intuitions could be disputed. A defender of the Uniqueness Thesis could, for instance, deny that it’s rational for all of the parties in the room to believe their initial conclusions. At most one of the evidential standards in the room may be rational; whoever applies an irrational standard has drawn a conclusion not supported by the evidence, so that agent shouldn’t believe the conclusion of her own reasoning. But I think it’s more interesting to notice that a Uniqueness defender (or a defender of the two claims discussed above) could grant both of our intuitions about the Reasoning Room, yet still dispute our interpretation of the case. To see how, notice that the four stages of a peer disagreement I identified above are present in the Reasoning Room:

Stage 0: No one in the Reasoning Room has yet considered the hypothesis, H. Each member of the Room possesses the same total evidence, E, relevant to H.

Stage 1: Each person in the Room has now considered H in light of E, and adopted an attitude toward H in light of E. Nine people have adopted the same attitude toward H, while one person is the outlier. Yet no one knows who the outlier is.

Stage 2: Each person in the Room notices the judgment she has made about H in light of E, and comes to possess a fact about what judgment she has made.

Stage 3: You have randomly selected a peer, and come to possess the fact that that randomly selected peer formed the opposite judgment from yours about H in light of E.

The Uniqueness defender can maintain that our first intuition concerning the Reasoning Room – that each agent should believe the conclusion of her own reasoning – is ambiguous. Does it apply at Stage 1, or Stage 2? For Uniqueness to be maintained, it must be the case that at Stage 1, at least one person in the Reasoning Room is making a rational error. That’s because all the people in the room have the same total evidence at that time, yet distinct attitudes toward H are adopted. However, the Uniqueness defender can claim that at Stage 2, it is perfectly rational for each agent to adopt the attitude toward H that she does. That’s because at Stage 2, agents with different attitudes toward H possess different bodies of total evidence. (Whoever believes H has as part of her evidence ‘I judged that belief in H is supported by E’, while whoever disbelieves H has ‘I judged that disbelief in H is supported by E’.[8]) Our intuition that it’s rational for the agents to have different beliefs about H is really an intuition about Stage 2, when the agents have taken into account facts about the outcomes of their own reasoning (perhaps in combination with their background knowledge about the track records of the people in the room). This position may be buttressed by the claim that in real life it’s difficult to separate out Stage 1 from Stage 2; there’s not much lag between the time when an agent renders a judgment and the time when she has available facts about what she’s judged. So it’s easy to mistake an intuition about Stage 2 for an intuition about Stage 1.

[8] Compare Feldman’s discussion of ‘private evidence’, in which he entertains the suggestion that ‘Another way to think about private evidence is to identify it with the clear sense one has that the body of shared evidence – the arguments – really do support one’s own view. The theist’s evidence is whatever is present in the arguments, plus her strong sense or intuition or “insight” that the arguments on balance support her view.’ (2007, §IIIC) Feldman credits this suggestion to van Inwagen (1996).


Notice that for this interpretation to work, the evidence gained at Stage 2 must change what it’s rational for at least one agent to believe about H. If Uniqueness is true, there is a unique rational conclusion about H to be drawn from E. Suppose (without loss of generality) that belief in H is the attitude uniquely supported by evidence E on its own. At least one person in the room concludes ~H from E at Stage 1. According to the Uniqueness defender, it is irrational for this agent to believe ~H at Stage 1. But then at Stage 2, this agent notices that she has concluded ~H from E, and adds this fact to her stock of total evidence. On the interpretation being proposed, it is now rational for the agent to believe ~H (supporting the intuition that it’s rational for the agents in the room to have different attitudes about the hypothesis at Stage 2). So adding a fact about her own reasoning to her total evidence has changed what that agent’s total evidence supports with respect to H.

Could this interpretation be correct? Can noticing the outcome of your own reasoning about a hypothesis change the rational attitude for you to adopt toward that hypothesis?[9] I think the answer is no, for a number of reasons.[10]

First, the proposed interpretation of the Reasoning Room seems to license illicit forms of evidential bootstrapping. By hypothesis, the agent who believed ~H at Stage 1 did not have a body of total evidence supporting that conclusion. To make matters more dramatic, we can imagine that none of the contents of E supported ~H at Stage 1. Yet the claim is that by noticing at Stage 2 the outcome of her Stage 1 reasoning, the agent has come to possess evidence supporting ~H. So an agent who lacked any evidence for ~H has, by faulty reasoning, given herself evidence for ~H?

To make matters worse, imagine that the agent adopts attitudes not only toward H but also toward propositions about whether E supports H. Presumably at Stage 1 she believes (falsely) that E supports ~H. And presumably, she maintains this belief at Stage 2. Is that Stage 2 belief rational? If it’s rational for her to believe the support proposition at Stage 2, then irrationally judging a false proposition to be true has made it rational for the agent to believe that proposition. On the other hand, if it’s irrational for the agent to believe this proposition at Stage 2, then the agent’s continued belief in ~H is made rational by a judgment that itself is irrational. Either way we’ve developed a highly unattractive epistemology.

[9] I take it there are a few obvious cases involving self-referential hypotheses for which the answer must be ‘yes’. Consider, for instance, the hypothesis ‘I have never previously reasoned about this hypothesis’, or hypotheses like ‘The total number of hypotheses I have reasoned about in my life is odd.’ Yet these strike me as fairly trivial cases, and I will assume in what follows that the hypotheses at issue are not of this type.
[10] Some of the reasons that follow are adapted from Titelbaum and Kopec (Unpublished manuscript).

The interpretation in question also suffers from the embarrassment that the key piece of evidence making it rational for the agent to believe ~H at Stage 2 – namely, the fact that she judged E to support ~H – is not the kind of thing that the agent herself would ever explicitly cite as a crucial piece of evidence for ~H were you to ask her. So despite being rational in her beliefs at Stage 2, the agent is misled about what’s making those beliefs rational.

Continuing our criticism of the proposed interpretation, let’s focus our attention on an agent who did judge matters correctly at Stage 1. In other words, she (rationally) concluded H from E. On the interpretation under consideration, noticing that one concluded ~H from E decreases the confidence it’s rational to assign to H. (Because E rationalizes a high confidence in H, while E plus this J-fact rationalizes a low confidence in H.) Presumably, then, when one has already assigned a high confidence to H on the basis of E, noticing that one has done so makes it rational to be even more confident of H.[11] But this feels like another form of illicit bootstrapping. I consider E, and reason from it to a high confidence in H. Then I notice that I myself approved of H on the basis of E, and in light of this good news for H decide to bump my confidence in H even higher? Do I do this while patting myself on the back for such an excellent bit of reasoning? And where does it end? Once I give H the extra boost, do I then notice that I reasoned from the fact that I approved of H on the basis of E to a boost in H, take that as further evidence for H, and grow even more confident in the hypothesis? Every time I take a particular piece of evidence to rationalize an increase in my confidence in H, I may then notice that I did so, and take that as even further evidence for H. Perhaps the confidence boosts to H decrease at the margin, and I asymptotically approach some limiting maximal degree of belief. But this process sure doesn’t look good.

[11] For the Bayesians in the audience, this claim can be backed up by some simple probability mathematics. At Stage 1, your evidence doesn’t contain any facts about whether you reason to a high credence in H based on E or a low credence. But given your background knowledge about the Reasoning Room, you’re certain that you do exactly one of these two things. So the rational credence in H at Stage 1 (based strictly on E) is a weighted average of the rational credence in H conditional on the supposition that you assign high credence to H in light of E and the rational credence in H conditional on the supposition that you assign low credence to H in light of E. On the interpretation in question the latter is lower than your unconditional Stage 1 credence in H, so the former must be higher.
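The weighted-average point in note [11] can be written out explicitly (a sketch in my own notation, where A is the proposition that you assign high credence to H in light of E, and cr is the rational credence function relative to your standards):

\[
cr(H \mid E) \;=\; cr(A \mid E)\,cr(H \mid E \wedge A) \;+\; cr(\neg A \mid E)\,cr(H \mid E \wedge \neg A).
\]

Since both weights are positive, if conditioning on the supposition that you assign low credence (that is, on ¬A) lowers the rational credence in H below cr(H | E), then conditioning on A must raise it above cr(H | E); that is just the confidence boost criticized in the main text.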

4. The self-screening thesis

Yet now a tension has developed in my position. I have just argued that at Stage 2 in the Reasoning Room, when an agent incorporates into her total evidence facts about what she concluded from E at Stage 1, these facts should not influence her opinions about H. Strictly speaking my arguments were in the context of an interpretation that took the Reasoning Room to be a non-permissive case. But they apply equally well to my preferred interpretation of the Reasoning Room, on which it’s rational for the agents to draw differing conclusions about H at Stage 1. Either way, an agent’s noticing the outcome of her reasoning about H in light of E should not affect her attitude toward H. But recall the second intuition I endorsed about the Reasoning Room: At Stage 3, when the agent learns the outcome of another agent’s reasoning about H in light of E, this information should affect her attitude toward H. This seems odd. If we like, we can formulate an argument to bring out the tension:

First Symmetry Argument: Interpersonal Symmetry
• Learning the outcome of another agent’s reasoning about a hypothesis can rationally affect your opinion about that hypothesis.
• Therefore, noticing the outcome of your own reasoning about a hypothesis can rationally affect your opinion about that hypothesis.

My response to this argument is to deny its validity. In other words, I deny that the symmetry in question is probative. Instead, I endorse the following position, which is the main thesis of this article:

Self-Screening Thesis: Facts about what one has concluded concerning a hypothesis from a particular body of evidence are screened off from that hypothesis by that evidence, while facts about what others have concluded need not be.
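In the probabilistic shorthand used earlier (my notation: J_me for facts about what I concluded about H from E, J_other for facts about what some other agent concluded), the thesis says that relative to my rational evidential standards

\[
P(H \mid E \wedge J_{me}) = P(H \mid E),
\]

while it may well be that P(H | E ∧ J_other) differs from P(H | E).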

This thesis resembles Weatherson’s ESJ, but doesn’t go quite as far: E screens off J-facts about one’s own reasoning, but isn’t guaranteed to screen off J-facts involving others’.[12]

To illustrate the Self-Screening Thesis more clearly, let’s see how it would respond to another putative symmetry argument against it. Imagine a case in which you’re very aware that sometime in the past you concluded H on the basis of strong evidence, but now you’ve forgotten what that evidence was.[13] In such cases, it nevertheless seems rational to maintain your confidence in H. But then we have the following apparent argument against the Self-Screening Thesis:

Second Symmetry Argument: Temporal Symmetry
• Facts about the outcome of your reasoning in the past can rationally affect your opinion about a hypothesis.
• Therefore, facts about the outcome of your present reasoning can rationally affect your opinion about a hypothesis.

Once more, my response is that the imagined case is not appropriately symmetrical to the case that drives the Self-Screening Thesis. The symmetrical case would be one in which you knew the outcome of past reasoning from some evidence, but also retained that evidence. In that case, the Self-Screening Thesis would say that the evidence screens off facts about your reasoning from any conclusions of that reasoning, and I think that’s the right result. But in the present case, you lack the original evidence, so it has no ability to screen off facts about your past reasoning from the conclusions of that reasoning. I agree that those past reasoning facts should affect your present attitudes, but this concession yields no counterexample to the Self-Screening Thesis.

[12] One might think that the symmetry assumed by the Interpersonal Symmetry Argument is an instance of a more general parity principle in epistemology requiring an agent to treat information about her own cognition the same way she treats information about others’. For what it’s worth, I think there are many potential counterexamples to that parity principle. First, there may be principles requiring a rational agent’s higher-order beliefs (or credences) about what her own first-order beliefs/credences actually are to match those actual beliefs/credences. No rule requires a rational agent’s beliefs about another agent’s beliefs to match that agent’s actual beliefs. Second, anti-akratic principles may require a match between a rational agent’s beliefs concerning what she ought to believe and what she actually believes. No rational rule requires my beliefs about what you ought to believe to match what you actually believe. Finally (and perhaps more controversially), if Uniqueness is false then a principle of rational diachronic consistency may require my present beliefs to match up with my past beliefs in certain ways, but not to match up with your past beliefs in the same ways.
[13] I find myself in this position with respect to a number of philosophical theses of which I convinced myself in the past.

5. Arguments for the thesis

I now want to suggest a couple of arguments for the Self-Screening Thesis, which will in turn lead to an explanation of why it’s true.

My first argument for the thesis requires thinking a bit more about why one might doubt it – why one might think facts about the outcome of an agent’s own reasoning could be evidentially relevant for that agent. Philosophers who argue that such facts can be relevant tend to focus on cases in which the agent has reliability information about attitudes suggested by her reasoning. So let’s focus on a case like that.

Suppose you’ve recently become aware that you’re not very reliable when it comes to a particular type of reasoning. I don’t mean that when performing this sort of reasoning you’re prone to frequent performance errors; for purposes of this article I’ve stipulated performance errors out of consideration. I mean that you’ve recently been informed by an absolutely unimpeachable source that in the past when you’ve executed this type of reasoning impeccably according to your own evidential standards, the conclusion you drew was often false. Now suppose that an instance of that type of reasoning is before you. You have evidence E, you (flawlessly) apply your reasoning according to your evidential standards, and you draw conclusion H. But then you note that you’ve concluded H from E by applying this type of reasoning, and reflect on what you’ve recently learned about your own track record. It seems very plausible that upon so reflecting, you should decrease your confidence in H. But then we have the following argument:

Third Symmetry Argument: Reliability Symmetry
• Given a track record demonstrating unreliability, facts about the outcome of your reasoning can rationally downgrade your confidence in a hypothesis.
• Therefore, given a track record demonstrating reliability, facts about the outcome of your reasoning can rationally improve your confidence in a hypothesis.

If this argument is sound, then I was wrong above; for a proven excellent reasoner, reflecting on the fact that you’ve reasoned to a particular conclusion should rationally permit you to boost your confidence in that conclusion.

My response to this argument is that its premise is false. Our first reactions notwithstanding, it’s not rational for an agent aware of a poor past track record to downgrade her confidence in a hypothesis after noting that she’s reasoned to it. To see why, let’s take a concrete example.

In constructing this example, I want to avoid a common mistake among philosophers working on higher-order evidence. It’s often tempting, in discussing an agent’s reasoning, to treat that reasoning as some sort of black box. But while I said above that evidential standards can be abstractly characterized as a function from bodies of total evidence to attitudes toward hypotheses, realistically they are more like algorithms. An agent’s reasoning doesn’t just magically move from evidence to conclusion; it has to pick up on some relationship between the evidence and a hypothesis in order to endorse belief in that hypothesis.

So let’s imagine an agent possesses evidence E, which is the conjunction of two propositions: (1) if H then G; and (2) G. On the basis of E, this agent reasons to H, and comes to believe H. Moreover, this agent has reasoned according to this pattern many times before – she has a strong tendency to affirm the consequent.[14] So after carrying out the reasoning and forming a belief in H, the agent notices that she’s concluded H by affirming the consequent, and then recalls that in the past when she’s affirmed the consequent, her conclusions have turned out to be true about half the time. According to the thinking behind our third symmetry argument, the agent should at this point decrease her confidence in H (perhaps by suspending judgment in that hypothesis). And that’s exactly what I would deny. Because for a rational agent, there would be no need to decrease her confidence in H after noting the outcome of her reasoning – she would never form a belief in H to begin with!

[14] I realize that affirming the consequent is a deductively invalid argument form, but that doesn’t mean that affirming the consequent must be a performance error. I’m imagining that this agent has a set of evidential standards that positively direct her to affirm the consequent whenever she has the opportunity to do so, and she is faithfully executing those standards. This need not be irrational, just because affirming the consequent is invalid. After all, all patterns of inductive reasoning are deductively invalid, yet that doesn’t make all of them irrational. Still, if you don’t like the example, feel free to substitute in something else more subtle. Perhaps whenever this agent confronts a hypothesis, she runs a Fisherian significance test on it, and if the resulting p-value is less than 5% she disbelieves the hypothesis. But it turns out that in a large proportion of these cases the hypothesis is true….

Here’s why: If the agent possesses a track record about the outcomes of past affirmations of the consequent, she shouldn’t just conclude something about her own reasoning in response to a particular evidential pattern; she should conclude something about that evidential pattern itself. Given her track record, she should no longer take an affirming-the-consequent relation between evidence and hypothesis to support the truth of the hypothesis. She should think to herself, ‘In the past when a hypothesis stood in this particular relation to my evidence, the hypothesis turned out to be true only half the time. So standing in this relation to my evidence is no indicator of truth’.

Notice that I’m not saying track record facts should be irrelevant to an agent’s reasoning. The Self-Screening Thesis doesn’t say that E screens off reliability facts from H; it says that E screens off facts about the outcome of one’s reasoning from H. What I am saying is that whatever an agent learns from her track record should be baked into her initial reasoning from E to H; it shouldn’t have to wait until after that reasoning issues in an attitude to play a role. In terms of our stages from earlier, the track record evidence should influence what goes on at Stage 1, when the agent initially assesses H in terms of E. It shouldn’t have to wait until Stage 2, when the agent has already concluded her reasoning and then noted the resulting belief. If the agent was reasoning rationally all along, then by the time Stage 2 arrives there should be no more work for the track record information to do, and noting the outcomes of her own reasoning process shouldn’t affect the agent’s opinions about H one whit.

To use a Bayesian term, rational reasoning is well-calibrated: An agent’s degree of confidence in a hypothesis should track how often she thinks hypotheses like that (hypotheses with that kind of evidential backing) turn out to be true. So I call this the Calibration Argument for the Self-Screening Thesis. It shows that if an agent has rational (among other things, well-calibrated) evidential standards, then reasoning from the initial evidence E takes up all the work with respect to H that reasoning from facts about her own conclusions might do down the road. E screens off such facts from having any relevance to H.

Earlier I noted that there are at least two kinds of screening off: undercutting defeat and evidential redundancy. Though I’m not certain of this, the Calibration Argument suggests that the Self-Screening Thesis involves evidential redundancy screening. Return to the case of past reasoning considered in the Temporal Symmetry Argument. There, a fact about your own reasoning in the past suggested a certain kind of relationship held between the evidence you had in the past and H. Absent the evidence itself, this reasoning fact is evidence for H. But when you have E itself, you have access to the crucial relationship in which E stands to H. This relationship explains both why H should get some evidential boost, and why you rendered the judgment that you did. Due to this forking structure, E makes the fact about the outcome of your own reasoning redundant with respect to establishing H; E screens off facts about your reasoning from the hypothesis it is reasoning about.

We can use the Fixed Point Thesis I mentioned earlier to build another argument for the Self-Screening Thesis (though I suspect this argument may ultimately come to the same thing as the Calibration Argument).[15] The Fixed Point Thesis says that it’s a rational mistake for an agent to misunderstand what rationality requires of her. In my other work I developed the Fixed Point Thesis with non-permissive cases in mind, so I didn’t really draw out its consequences for permissive situations. Still, the most natural interpretation is that the Fixed Point Thesis requires an agent to be able to anticipate what her own rational standards will direct her to conclude from a given body of evidence. In other words, rational evidential standards can never be misled about how they assess particular bodies of evidence. If that’s right, then when a rational agent receives a body of evidence E, that body of evidence will already (relative to the agent’s evidential standards) support the truth about how those standards will rule on any given H in light of E. When applied to E at Stage 1, the agent’s evidential standards already anticipate the information she will come to gain at Stage 2. (Keep in mind that we’ve stipulated there are no performance errors, and the agent is certain of that. So the evidential standards’ knowing how they assess E is tantamount to their knowing what the agent will conclude from E.) When the information about the outcome of her reasoning actually arrives at Stage 2, there’s no more evidential work for it to accomplish.

[15] In Titelbaum (2015) I argued for the Fixed Point Thesis from the claim that rationality forbids akrasia. I realize some people don’t find this a convincing reason to support the Fixed Point Thesis, and so will find this second argument unconvincing. Nonetheless, here goes.

To put this argument in Bayesian terms: If an agent’s evidential standards satisfy the Fixed Point Thesis, then relative to those standards any truths about the outcome of that agent’s reasoning from E are already certain conditional on E. If a claim is certain conditional on an agent’s previous evidence, then learning that claim cannot alter her credences in any way from what she already believed on that previous evidence. So when she learns the outcome of her own reasoning, this cannot affect her credences. E screens off from H any proposition that’s certain given E, including facts about the outcome of the agent’s reasoning from E.
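For those who want the Bayesian point spelled out, here is the one-line derivation (a sketch in my notation, with J any proposition about the outcome of the agent’s reasoning from E): if the standards make J certain given E, so that P(J | E) = 1, then

\[
P(H \mid E \wedge J) \;=\; \frac{P(H \wedge J \mid E)}{P(J \mid E)} \;=\; P(H \wedge J \mid E) \;=\; P(H \mid E),
\]

where the last step holds because P(H ∧ ¬J | E) ≤ P(¬J | E) = 0. Learning J thus leaves the credence in H exactly where E already put it.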

6. Asymmetrical access

The two arguments I’ve just provided for the Self-Screening Thesis explain why an agent’s evidence should screen off from a hypothesis facts about that agent’s reasoning from that evidence concerning that hypothesis. But the Self-Screening Thesis also says that the evidence need not have this effect on facts about the reasoning of others. My arguments haven’t said anything about the reasoning of others, so how can they ground this asymmetry?

To understand where I think the asymmetry comes from, it’s helpful to contrast my approach with David Enoch’s in his (2010). Enoch first characterizes philosophers who might offer the Interpersonal or Reliability Symmetry arguments. He says that they view agents (and agents’ reasoning processes) as ‘truthometers’: When an agent reasons to a particular conclusion, this is usually an indicator of that conclusion’s truth, much as the reading on a thermometer is an indicator of the current temperature. Of course things can go wrong – agents can be unreliable, as can thermometers. But in standard cases, the truthometer view says we should treat the outcomes of agential reasoning just like we treat readings on thermometers. And just as all reliable thermometers should be treated identically, so should an agent treat identically facts about the outcomes of all (reliable) reasoning processes, whether those processes were carried out by others or by herself.

Responding to the truthometer view, Enoch writes:

In forming and revising your beliefs, you have a unique and ineliminable role. You cannot treat yourself as just one truthometer among many, because even if you decide to do so, it will be very much you – the full, not merely the one-truthometer-among-many, you – who so decides.… The point is naturally put Nagelianly: even though from a third-person perspective – which you can take towards yourself – we are all truthometers, still the first-person perspective – from which, when it is your beliefs which are being revised, it is you doing the revisions – is ineliminable… Once you reflect on a question, asking yourself, as it were, what is the truth of the matter, and so what is to be believed – once the believing self is fully engaged – you can no longer eliminate yourself and your reflection. (Enoch 2010, §3)

Like me, Enoch wants to support an asymmetry between one’s own reasoning and the reasoning of others. But the grounds on which he wants to do so are different from mine. Enoch focuses on something we can’t do: we can’t separate ourselves from our reasoning in the way that we can separate ourselves from others’; we can’t treat ourselves ‘as just one truthometer among many’.[16] I, on the other hand, want to argue from something we can do with respect to our own reasoning that we can’t always do with respect to others’. That is, we have a certain sort of privileged access to our own reasoning processes.

Recall how the Calibration Argument for Self-Screening worked. That argument relied on the fact that when an agent has track record information about the reliability of her own reasoning processes, she can convert that information into reliability data about certain evidential relations, namely the relations operative in those processes. She can do this because she can determine what relations between evidence and hypotheses factor into her own reasoning. And this, in turn, allows her to bake the reliability information into the Stage 1 reasoning that works with such relations.

[16] Compare also the ‘perspectivalism’ of Kvanvig (2013).


An asymmetry then arises when the agent doesn’t have access to the evidential standards of others. Knowing that your reasoning has a particular degree of reliability doesn’t tell me anything about what to do with a particular batch of evidence, unless I also know what your reasoning would do with that evidence. Sometimes I may be in such a position; perhaps you’ve laid out for me in complete detail how you move from evidence to belief. In that case, I can at Stage 1 look for the relations you employ in reasoning, apply what I know about your reliability, and render any observation of the actual outcome of your reasoning processes redundant. But this case is hardly typical. Most of the time I don’t know exactly how my peers reason, and so despite knowing that their reasoning is reliable I can’t predict what the outcome of that reasoning will be. In that case, interacting with them and learning their views can give me information I didn’t have before, and couldn’t have recovered from E. Thus, when I lack access to the evidential standards of others, E does not screen off facts about the outcome of their reasoning processes from the hypotheses under consideration in those processes.[17]

This is exactly what happens in the Reasoning Room. Exchanges with my peers are interesting and informative because I lack access to their standards and so genuinely don’t know what they’re going to conclude.[18] If I already knew how the standards of everyone else in the Reasoning Room worked, I could skip any interaction with them entirely, and pass directly to the truth about any hypothesis just by reasoning on my own.

[17] The Self-Screening Thesis describes an asymmetry between facts about one’s own reasoning and facts about the reasoning of ‘others’. When I described the Temporal Symmetry example near the end of Section 4, I assumed that in that case your evidential standards had remained constant from past to present (and that you were certain of this constancy). If your evidential standards were different in the past, and you aren’t sure now exactly what they were, then possession of the same evidence you had in the past may not screen off facts about the outcome of your past reasoning. That is, for purposes of the Self-Screening Thesis a past version of yourself with unknown evidential standards counts as an ‘other’.
[18] To highlight the informativeness of peer interactions in the Room and the distinction between learning about peers’ reasoning outcomes and noticing my own, consider what happens if I randomly select a peer from the room and find out that she’s drawn the same conclusion as me about H. Given my background track record information, this should make me certain that my belief is true. The same does not happen when I notice what conclusion I’ve drawn from E about H.
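As with the disagreement calculation earlier, the claim in note [18] can be checked directly (again a sketch in my own notation, with R the proposition that my conclusion is true and A the proposition that a randomly selected peer agrees with me): P(A | R) = 8/9, while P(A | ¬R) = 0, since the outlier disagrees with everyone else. So

\[
P(R \mid A) \;=\; \frac{0.9 \times \tfrac{8}{9}}{0.9 \times \tfrac{8}{9} + 0.1 \times 0} \;=\; 1.
\]

Agreement from a randomly chosen peer is possible only if I am in the majority, and the majority is always right.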

24 

 M. G. TITELBAUM

transparency of the mental’.19 My line of argument may be accused of over-intellectualizing; surely typical agents just reason, without being able to recognize and report the relations between evidence and hypotheses to which their reasoning is attuned. Here I’d respond, first, that the discussion in which we’re engaged is already one about atypically reflective agents. After all, we are asking how an agent should behave who, having reasoned to a conclusion, explicitly notices that she’s reasoned to that conclusion, adds this fact to her stock of evidence, and then considers what to make of it. But second, we should keep in mind that the screening-off relation is about what a body of evidence supports, or renders irrelevant, relative to a particular set of evidential standards, regardless of whether an agent manages to work those implications out. Going back to my earlier example of screening off and evidential redundancy, you may fail to consciously realize that evidence about my office light affects your confidence that I’ll be at tea by way of your opinions about whether I’m on campus. But that doesn’t change the fact that on-campus facts screen off office-light facts from tea-attendance facts relative to your evidential standards.20 In the present case, possessing a set of evidential standards is sufficient for you to figure out (if you set your mind to it) what those standards will make of a particular body of evidence. At worst, even if you can’t identify and articulate the relations to which your reasoning processes are keyed, you can determine what those reasoning processes would say about a given hypothesis by running them in a sort of simulation mode. (Much as one does when one engages in conditional reasoning.) An agent need not wait around until her reasoning process has concluded and she has actually formed a belief about the hypothesis to see which way

Those unequppied with the requisite dread may acquire it from Williamson (2000). This point also helps address a question Garrett Cullity raised to me. In the Calibration Argument I suggested that a rational agent would incorporate track record reliability data into her reasoning at Stage 1 rather than Stage 2. But what if the agent somehow neglects to incorporate this data at Stage 1 – doesn’t rationality then demand that she do so at Stage 2 (upon noticing the attitude she has assigned)? Even if the answer is yes, this doesn’t get at our ultimate question of which pieces of evidence screen off which others. It may be that if you find out I’m on campus but fail to incorporate this fact into your opinions about whether I’ll be at tea, noticing later that my light is on rationally requires you to alter those tea opinions. But this doesn’t change the fact that relative to your evidential standards, whether I’m on campus screens off office-light facts from tea facts. (And notice further that this case involves a performance error of omission.)

19 20

INQUIRY 

 25

her evidential standards lean.21 The key difference between one’s own reasoning and the reasoning of others is that this worst-case approach isn’t always available for other people – one can’t always run a simulation to determine how the reasoning of others would come out.22 This explanation for the asymmetry in the Self-Screening Thesis highlights the importance of the stipulations I made at the end of Section 2. First, it’s important that we’re working in a permissive case in which rational agents apply different evidential standards. (Otherwise concluding her own reasoning would allow each agent to predict the outcome of the others’.) Second, it’s important that the agent has ruled out any performance errors on her own part. For an agent with the possibility of performance errors, running a simulation does not guarantee determination of what her evidential standards say about a matter – after all, that simulation may have applied the standards incorrectly. In such a case, observing the attitude actually issued by her reasoning may provide a significant further data point as to what those standards require. I still believe there’s an important access asymmetry in performance error cases, and that even in those cases an agent should treat information about other agents’ reasoning differently from information about her own. But the differences in such cases may not be as stark as the Self-Screening Thesis maintains.
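To make the screening-off relation at work above fully explicit, here is one minimal Bayesian gloss; modelling a set of evidential standards by a credence function Cr is an illustrative choice on this picture, not the only way to do it. Relative to those standards, S screens off E from H just in case learning E makes no further difference to H once S is settled either way:

$$\mathrm{Cr}(H \mid E \wedge S) = \mathrm{Cr}(H \mid S) \qquad \text{and} \qquad \mathrm{Cr}(H \mid E \wedge \lnot S) = \mathrm{Cr}(H \mid \lnot S).$$

In the tea example, take S to be the proposition that I'm on campus, E the proposition that my office light is on, and H the proposition that I'll be at tea; relative to your evidential standards both equalities hold, whether or not you ever consciously work them out.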

7.  An essential indexical?

I have argued that in a certain range of cases, facts about the outcome of one's own reasoning are screened off by the evidence from which that reasoning proceeds, while facts about the outcomes of other agents' reasoning from that evidence need not be. I have suggested that this asymmetry is due to the fact that in such cases our reasoning processes always have access to the standards on which they are based, while we are not guaranteed access to the evidential standards of others.

Returning now to the questions with which this article began, have we found any distinctive role for indexicality – or for the de se – in reasoning? It's not clear that we have, because it's not clear that the facts about one's own reasoning mentioned in the Self-Screening Thesis must be expressed indexically. Perhaps Mike Titelbaum could have been wired up so as to have special access to what he thinks of as 'Mike Titelbaum's reasoning processes', and therefore to treat facts about the outcomes of 'Mike Titelbaum's reasoning processes' differently from how he treats facts about the deliverances of 'Herman Cappelen's reasoning processes', 'Josh Dever's reasoning processes', etc. – without any indexicals' being involved.23

23 Compare Cappelen and Dever's discussion of the 'Relational Fallacy' at (2013, 44ff), which in turn draws upon (Millikan 1990).

Yet even without identifying a distinctive role for indexicals, we may still have found a distinctive role for perspectivality in inquiry, because perspectivality need not demand expression in any particular linguistic form. To contrast my approach with Enoch's again, the role I've identified for perspectivality is not that of a confinement; it's that of a lookout from which certain sights can always be seen. When one considers a hypothesis in light of a body of evidence, one's own reasoning processes are the engine by which that consideration proceeds. This distinctive role allows certain tests, certain inquiries to be conducted in the course of reasoning that can't necessarily be run for reasoning processes not so directly engaged. One consequence is a peculiar evidential status for facts about the reasoning processes themselves – they may be screened off by the very bodies of evidence from which those processes proceed. There's a similarity here to the self's special role in Descartes' cogito. Purely by thinking one can establish the existence of oneself as a thinking thing, but not the existence of others (regardless of whether what's thereby established is expressed indexically). The thinking in question – in this case about existence – is carried out by the thing doing the thinking, and therefore establishes the existence of that thing. I play a special role in my thinking, as the thinker; so thoughts about myself, and about the outcomes of my thinking, may play a role in that thinking not played by thoughts about others.24

24 I'm grateful to audiences at the University of Melbourne, Monash University, the University of Sydney, the University of Adelaide, and the Workshop on the First-Person Perspective at the Norwegian Institute at Athens for discussion of this material. This article was completed while I was supported by a Visiting Fellowship at the Australian National University.

Disclosure statement
No potential conflict of interest was reported by the author.

References

Cappelen, H., and J. Dever. 2013. The Inessential Indexical. Oxford: Oxford University Press.
Christensen, D. 2007. “Epistemology of Disagreement: The Good News.” Philosophical Review 116: 187–217.
Cohen, S. 2013. “A Defense of the (Almost) Equal Weight View.” In The Epistemology of Disagreement: New Essays, edited by J. Lackey and D. Christensen, 98–119. Oxford: Oxford University Press.
Eells, E. 1982. Rational Decision and Causality. Cambridge Studies in Philosophy. Cambridge: Cambridge University Press.
Elga, A. 2007. “Reflection and Disagreement.” Noûs 41: 478–502.
Enoch, D. 2010. “Not Just a Truthometer: Taking Oneself Seriously (but not Too Seriously) in Cases of Peer Disagreement.” Mind 119: 953–997.
Feldman, R. 2007. “Reasonable Religious Disagreements.” In Philosophers without Gods: Meditations on Atheism and the Secular Life, edited by L. M. Antony, 194–214. Oxford: Oxford University Press.
Kelly, T. 2010. “Peer Disagreement and Higher-order Evidence.” In Disagreement, edited by R. Feldman and T. A. Warfield, 111–174. Oxford: Oxford University Press.
Kopec, M., and M. G. Titelbaum. 2016. “The Uniqueness Thesis.” Philosophy Compass 11: 189–200.
Kvanvig, J. L. 2013. “Perspectivalism and Reflective Ascent.” In The Epistemology of Disagreement: New Essays, edited by J. Lackey and D. Christensen, 223–243. Oxford: Oxford University Press.
Magidor, O. 2015. “The Myth of the de se.” Philosophical Perspectives 29: 249–283.
Meacham, C. J. G. Forthcoming. “Ur-priors, Conditionalization, and Ur-prior Conditionalization.” Ergo.
Millikan, R. G. 1990. “The Myth of the Essential Indexical.” Noûs 24: 723–734.
Pollock, J. L. 1987. “Defeasible Reasoning.” Cognitive Science 11: 481–518.
Reichenbach, H. 1956. “The Principle of Common Cause.” In The Direction of Time, 157–160. Berkeley: University of California Press.
Schoenfield, M. 2014. “Permission to Believe: Why Permissivism is True and What it Tells us about Irrelevant Influences on Belief.” Noûs 48: 193–218.
Titelbaum, M. G. 2010. “Not Enough There There: Evidence, Reasons, and Language Independence.” Philosophical Perspectives 24: 477–528.
Titelbaum, M. G. 2015. “Rationality’s Fixed Point (or: In Defense of Right Reason).” In Oxford Studies in Epistemology Vol. 5, edited by T. S. Gendler and J. Hawthorne, 253–294. Oxford: Oxford University Press.
Titelbaum, M. G., and M. Kopec. XXXX. “Plausible Permissivism.” Unpublished manuscript.
van Inwagen, P. 1996. “Is it Wrong Everywhere, Always, and for Anyone to Believe Anything on Insufficient Evidence?” In Faith, Freedom, and Rationality: Philosophy of Religion Today, edited by J. Jordan and D. Howard-Snyder, 136–153. Lanham, MD: Rowman and Littlefield.
Weatherson, B. XXXX. “Do Judgments Screen Evidence?” Unpublished manuscript.
White, R. 2005. “Epistemic Permissiveness.” Philosophical Perspectives 19: 445–459.
Williamson, T. 2000. Knowledge and Its Limits. Oxford: Oxford University Press.
