What is justified credence?

March 5, 2017

Aafira and Halim are both 90% confident that it will be sunny tomorrow. Aafira bases her credence on her observation of the weather today and her past experience of the weather on days that follow days like today — around nine out of ten of them have been sunny. Halim bases his credence on wishful thinking — he just really likes the sun. Aafira, it seems, is justified in her credence, while Halim is not. Just as one of your full beliefs might be justified if it is based on visual perception under good conditions, or on memories of recent important events to which you attended carefully, or on testimony from experts you know to be reliable, so might one of your credences be; and just as one of your full beliefs might be unjustified if it is based on wishful thinking, or on racially-biased stereotypical associations, or on testimony from ideologically driven news outlets, so might your credences be. In this paper, we seek an account of justified credence — in particular, we seek necessary and sufficient conditions for a credence to be justified. Our account will be reliabilist. Reliabilism about justified beliefs comes in two varieties: process reliabilism (Goldman, 1979, 2008) and indicator reliabilism (Alston, 1988, 2005). Roughly, process reliabilism says that a belief is justified if it is formed by a reliable process, while indicator reliabilism says that a belief is justified if it is based on a ground that renders it likely to be true. Reliabilism about justified credence also comes in two varieties; indeed, it comes in the same two varieties. And, in fact, of the two existing proposals, Jeff Dunn’s is a version of process reliabilism (Dunn, 2015) while Weng Hong Tang offers a version of indicator reliabilism (Tang, 2016). As we will see, both face the same objection. If they are right about what justification is, it is mysterious why we care about justification, for neither of the accounts explains how justification is connected to anything of epistemic value. We will call this the Connection Problem. It begins from the observation that we value justified belief more than we value unjustified belief. And it proceeds by noting that, if Dunn’s account of justified credence is correct, or if Tang’s is, then this fact calls out for an explanation that neither of those accounts can provide. Neither can explain why we value justified belief more than unjustified belief.


I begin by describing Dunn's process reliabilism and Tang's indicator reliabilism. I argue that, understood correctly, they are, in fact, extensionally equivalent. That is, Dunn and Tang reach the top of the same mountain, albeit by different routes. However, I argue that both face the Connection Problem. In response, I offer my own version of reliabilism, which is a version of both process and indicator reliabilism, and I argue that it solves that problem. Furthermore, I show that it is also extensionally equivalent to Dunn's reliabilism and Tang's. Thus, I reach the top of the same mountain as well. Having done that in the first half of the paper, I spend the second half considering some objections to the account I propose and some consequences of it: I consider the Swamping Problem for reliabilism, make it precise, and argue that it poses no problem; I consider potential objections to the solution to the Generality Problem that I favour; I consider the New Evil Demon objection to reliabilism and argue that it fails; and I show how we might use my favoured account of justified credence to argue for Probabilism, the central tenet of Bayesian epistemology.

1 Reliabilism and Dunn on reliable credence

Let us begin with Dunn’s process reliabilism for justified credences. Now, to be clear, Dunn takes himself only to be providing an account of reliability for credences and credence-forming processes. He doesn’t necessarily endorse the third conjunct of reliabilism, which connects reliability and justification, saying that a credence is justified just in case it is reliable. Instead, Dunn speculates that perhaps reliability is just one from an array of epistemic virtues, each of which is required for justification. Nonetheless, I will consider a version of reliabilism for justified credences that is based on Dunn’s account of reliable credence. For reasons that will become clear, I will call this the calibrationist version of process reliabilism for justified credence. Dunn rejects it based on what I will call below the Graining Problem. As we will see, I think we can solve that problem. For Dunn, a credence-forming process is perfectly reliable if it is well calibrated (or nearly so). Here’s what it means for a process ρ to be well calibrated. • First, we construct a set of all and only the outputs of the process ρ in the actual world and in nearby counterfactual scenarios. An output of ρ consists of a credence x in a proposition X at a particular time t in a particular possible world w — so we represent it by the tuple ( x, X, w, t). If w is a nearby world and t a nearby time, we call ( x, X, w, t) a nearby output. Let Oρ be the set of nearby outputs — that is, the set of tuples ( x, X, w, t), where w is a nearby world, t is a nearby time, and ρ assigns credence x to proposition X in world w at time t. 2

• Second, we say that the truth-ratio of ρ for credence x is the proportion of nearby outputs (x, X, w, t) in Oρ such that X is true at w and t.¹

• Finally, we say that ρ is well calibrated (or nearly so) if, for each credence x that ρ assigns, x is equal to (or approximately equal to) the truth-ratio of ρ for x.²

For instance, suppose a process only ever assigns credence 0.6 or 0.7. And suppose that, 60% of the time that it assigns 0.6 in the actual world or a nearby world, it assigns it to a proposition that is true; and 70% of the time it assigns 0.7, it assigns it to a true proposition. Then that process is well calibrated. If, on the other hand, it assigns 0.6 to a true proposition only 59% of the time and 0.7 to a true proposition 71% of the time, it is not well calibrated, but it is nearly so. And if it assigns 0.6 to a truth 95% of the time and 0.7 to a truth only 5% of the time, then it is neither well calibrated nor nearly so.

This, then, is Dunn's calibrationist account of the reliability of a credence-forming process — a credence-forming process is reliable iff it is well calibrated or nearly so. Any version of reliabilism about justified credences that is based on it requires two further ingredients. First, we must use the account to say when an individual credence is reliable; second, we must add the claim that a credence is justified iff it is reliable. Each of these moves creates problems. We will address them below. But first it will be useful to present Tang's version of indicator reliabilism for justified credence. It will provide an important clue that helps us solve one of the problems that Dunn's account faces. And, having it in hand, it will be easier to see how these two accounts end up coinciding.

¹ In symbols: TR(ρ, x) = |{(x, X, w, t) ∈ Oρ : X is true at w and t}| / |{(x, X, w, t) ∈ Oρ}|

² That is:
– ρ is well calibrated if TR(ρ, x) = x for all x;
– ρ is nearly well calibrated if TR(ρ, x) ≈ x for all x.
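Before turning to Tang, it may help to make the calibration test concrete. The following is a minimal computational sketch of the definitions above; it is only an illustration, not anything from Dunn's paper, and the list of outputs, the tolerance, and the helper names are all invented for the example.

```python
from collections import defaultdict

def truth_ratios(outputs):
    """Compute the truth-ratio of a process for each credence it assigns.

    `outputs` is a list of (credence, is_true) pairs, one per nearby
    output (x, X, w, t); `is_true` records whether X is true at w and t.
    """
    counts = defaultdict(lambda: [0, 0])  # credence -> [true outputs, all outputs]
    for credence, is_true in outputs:
        counts[credence][0] += int(is_true)
        counts[credence][1] += 1
    return {x: t / n for x, (t, n) in counts.items()}

def nearly_well_calibrated(outputs, tolerance=0.05):
    """A process is (nearly) well calibrated if each credence x it
    assigns is within `tolerance` of its truth-ratio for x."""
    return all(abs(x - ratio) <= tolerance
               for x, ratio in truth_ratios(outputs).items())

# The example from the text: 0.6 assigned to truths 60% of the time,
# and 0.7 assigned to truths 70% of the time -> well calibrated.
outputs = [(0.6, True)] * 6 + [(0.6, False)] * 4 + \
          [(0.7, True)] * 7 + [(0.7, False)] * 3
print(truth_ratios(outputs))            # {0.6: 0.6, 0.7: 0.7}
print(nearly_well_calibrated(outputs))  # True
```

Replacing the frequencies with the 95%/5% pattern from the text makes `nearly_well_calibrated` return False, as the definition requires.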

2 Tang's indicator reliabilism for justified credence

According to indicator reliabilism for justified belief, a belief is justified if the ground on which it is based is a good indicator of the truth of that belief. Thus, beliefs formed on the basis of visual experiences tend to be justified because the fact that the agent had the visual experience in question makes it likely that the belief they based on it is true.³ Wishful thinking, on the other hand, usually does not give rise to justified belief because the fact that an agent hopes that a particular proposition will be true — which in this case is the ground of their belief — does not make it likely that the proposition is true.⁴ Tang follows William Alston's account of grounds, and I will too: the ground "for a belief is not what we might call the total complete input to the belief-forming mechanism, but rather those features of that input that are actually taken account of in forming the belief" (Alston, 1988, 268).

Tang seeks to extend this account of justified belief to the case of credence. Here is his first attempt at an account:

Tang's Indicator Reliabilism for Justified Credence (first attempt) A credence of x in X by an agent S is justified iff

(TIC1-α) S has ground g;
(TIC2-α) the credence x in X by S is based on ground g;
(TIC3-α) the objective probability of X given that the agent has ground g approximates or equals x — we write this P(X | S has g) ≈ x.

Thus, just as an agent's full belief in a proposition is justified if its ground makes the objective probability of that proposition close to 1, a credence x in a proposition is justified if its ground makes the objective probability of that proposition close to x. There is a substantial problem here in identifying the notion of objective probability to which Tang wishes to appeal. But we will leave that aside for the moment, other than to say that he conceives of it along the lines of hypothetical frequentism — that is, the objective probability of X given Y is the hypothetical frequency with which propositions like X are true when propositions like Y are true. We return to the issue of objective probabilities in section 8.

However, Tang notes that, as it is stated, his version of indicator reliabilism faces a problem. Suppose I am presented with an urn. I know that it is filled with 100 balls, numbered 1 to 100, half of which are white, and half of which are black; but I do not know which are black and which white. I shake the urn vigorously and extract a ball. It's number 73 and it's white. I look at its colour and the number printed on it. I have a visual experience of a white ball with the numeral '73' on it. On the basis of my visual experience of the numeral alone, I assign credence 0.5 to the proposition that ball 73 is white. According to Tang's first version of indicator reliabilism for justified credence, my credence is justified. My ground is the visual experience of the number on the ball; I have that ground; I base my credence on that ground; and the objective probability that ball 73 is white given that I have a visual experience of the numeral '73' printed on it is 50% — after all, half the balls are white.

Of course, the problem is that I have not used my total evidence — or, in the language of grounds, I have not based my belief on my most inclusive ground. I had the visual experience of the numeral on the ball as a ground; but I also had the visual experience of the numeral on the ball and the colour of the ball as a ground. The resulting credence is unjustified because the objective probability that ball 73 is white given I have the more inclusive ground is not 0.5 — it is close to 1, since my visual system is so reliable. This leads Tang to amend his account of justified credence as follows, where we write g′ ⊆ g when g and g′ are grounds, and g′ is at least as inclusive as g — that is, g′ is at least as strong as g, so that if g is the ground visual experience of '73' printed on a ball and g′ is the ground visual experience of the numeral '73' printed on a white ball, then g′ ⊆ g; indeed, in this case, g′ ⊊ g, since g′ is strictly stronger or more inclusive than g.

Tang's Indicator Reliabilism for Justified Credence A credence of x in X by an agent S is justified iff

(TIC1) S has ground g;
(TIC2) the credence x in X by S is based on ground g;
(TIC3) if S has ground g′ ⊆ g, then the objective probability of X given that the agent has ground g′ approximates or equals x — that is, P(X | S has g′) ≈ x.

That is, x in X by S based on g is justified if the objective probability of X given that the agent has g is approximately x, and remains so as we condition on S having any more inclusive grounds g′ that she also has. This, then, is Tang's version of indicator reliabilism for justified credences.

³ Obvious exceptions: when the lighting conditions are poor; when the agent is under the influence of hallucinogenics.
⁴ Obvious exceptions: when, by hoping that you will win a competition, you make your success more likely, because your hope makes you perform better.
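To see how (TIC3) handles the urn case, here is a small sketch. The representation of grounds as sets of observed features, the frequencies, and the tolerance are invented for illustration; they are not Tang's formalism.

```python
# Hypothetical frequencies for the urn case: for each total ground, the
# frequency with which 'ball 73 is white' is true among nearby cases in
# which the agent has that ground (numbers invented for the example).
objective_prob = {
    frozenset({"saw numeral 73"}): 0.5,
    frozenset({"saw numeral 73", "saw white ball"}): 0.99,
}

def tic3_justified(credence, ground, had_grounds, tol=0.05):
    """(TIC3): the credence must approximate P(X | S has g') both for
    the ground it is based on and for every more inclusive ground the
    agent also has (modelled here as supersets of features)."""
    return all(abs(credence - objective_prob[g2]) <= tol
               for g2 in had_grounds if g2 >= ground)

base = frozenset({"saw numeral 73"})
total = frozenset({"saw numeral 73", "saw white ball"})

# Credence 0.5 based on the numeral alone fails, because the agent also
# has the more inclusive ground, on which P(white) is about 0.99.
print(tic3_justified(0.5, base, had_grounds=[base, total]))    # False
print(tic3_justified(0.99, total, had_grounds=[base, total]))  # True
```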

3 Same mountain, different routes

Thus, we have now seen Dunn's process reliabilism and Tang's indicator reliabilism for justified credences. Is either correct? If so, which? In one sense, both are correct; in another, neither is. Less mysteriously: Dunn's process reliabilism and Tang's indicator reliabilism are extensionally equivalent — that is, the same credences are justified on both accounts. What's more, both are extensionally equivalent to the correct account of justified credence, which is thus a version of both process and indicator reliabilism. However, while they get the extension right, they do so for the wrong reasons. It may well be that a credence is justified just in case it is formed by a well calibrated process; but it is not justified because it is formed by a well calibrated process. And it may well be that a credence is justified just in case it matches the objective chance given its grounds; but it is not justified because it has that feature. Thus, Dunn and Tang delimit the correct extension, but they use the wrong intension. In section 4, I will offer what I take to be the correct intension. But first, let's see why it is that the routes that Dunn and Tang take lead them both to the top of the same mountain.

We begin with Dunn's calibrationist account of the reliability of a credence-forming process. Any version of reliabilism about justified credences that is based on this account requires two further ingredients. First, we must use the account to say when an individual credence is reliable. The natural answer: when it is formed by a reliable credence-forming process. But then we must be able to identify, for a given credence, the process of which it is an output. The problem is that, for any credence, there are a great many processes of which it might be the output. I have a visual experience of a piece of red cloth on my desk, and I form a high credence that there is a piece of red cloth on my desk. Is this credence the output of a process that assigns a high credence that there is a piece of red cloth on my desk whenever I have that visual experience? Or is it the output of a process that assigns a high credence that there is a piece of red cloth on my desk whenever I have that visual experience and the lighting conditions in my office are good, while it assigns a middling credence that there is a piece of red cloth on my desk whenever I have that visual experience and the lighting conditions in my office are bad? It is easy to see that this is important. The first process is poorly calibrated, and thus unreliable on Dunn's account; the second process is better calibrated and thus more reliable on Dunn's account. This is the so-called Generality Problem, and it is a challenge that faces any version of reliabilism. I will offer a version of Juan Comesaña's solution to this problem below (Comesaña, 2006). As we will see, that solution also clears the way for a natural solution to the Graining Problem, which we consider next.

Dunn provides an account of when a credence-forming process is reliable. And, once we have a solution to the Generality Problem, we can use that to say when a credence is reliable — it is reliable when formed by a reliable credence-forming process. Finally, to complete the version of process reliabilism about justified credence based on Dunn's account, we just need the claim that a credence is justified iff it is reliable. But this too faces a problem, which we call the Graining Problem. As above, suppose I am presented with an urn. I know that it is filled with 100 balls, numbered 1 to 100, half of which are white, and half of which are black; but I don't know which are black and which white. I shake the urn vigorously and extract a ball. I look at its colour and the numeral printed on it. I have two processes at my disposal. Process 1 takes my visual experience of the numeral only, say 'n', and assigns the credence 0.5 to the proposition that ball n is white. Process 2 takes my visual experience of the numeral, 'n', and my visual experience of the colour of the ball, and assigns credence 1 to the proposition that ball n is white if my visual experience was of a white ball, and assigns credence 1 to the proposition that ball n is black if my visual experience was of a black ball. Note that both processes are well calibrated (or nearly so, if we allow that my visual system is very slightly fallible). But we would usually judge the credence formed by the second to be better justified than the credence formed by the first. Indeed, we would typically say that a Process 1 credence is unjustified, while a Process 2 credence is justified. Thus, being formed by a well calibrated or nearly well calibrated process is not sufficient for justification. And, if reliability is calibration, then reliability is not justification and reliabilism fails. It is this problem that leads Dunn to reject reliabilism about justified credence.

So, two problems face our calibrationist account of reliabilism for justified credence: the Generality Problem and the Graining Problem. We consider the Generality Problem first. To this problem, Juan Comesaña offers the following solution (Comesaña, 2006). Every account of doxastic justification — that is, every account of when a given doxastic attitude of a particular agent is justified for that agent — must recognize that it is possible that two agents have the same doxastic attitude and the same evidence while the doxastic attitude of one is justified and the doxastic attitude of the other is not, because they do not base that doxastic attitude on the same evidence. One might base their belief on the total evidence, for instance, whilst the other ignores that evidence and bases their belief purely on wishful thinking. Thus, Comesaña claims, every theory of justification needs a notion of the grounds or basis of a doxastic attitude. But, once we have that, a solution to the Generality Problem is very close. Comesaña spells out the solution for process reliabilism about full beliefs:

Well-Founded Process Reliabilism for Justified Full Beliefs A belief that X by an agent S is justified iff

(WPB1) S has ground g;
(WPB2) the belief that X by S is based on ground g;
(WPB3) the process producing a belief that X based on ground g is a reliable process.

This is easily adapted to the credal case:

Well-Founded Process Reliabilism for Justified Credences A credence of x in X by an agent S is justified iff

(WPC1) S has ground g;
(WPC2) the credence x in X by S is based on ground g;
(WPC3) the process producing a credence of x in X based on ground g is a reliable process.

Let us now try to apply Comesaña's solution to the Generality Problem to help Dunn's calibrationist reliabilism about justified credences. Recall: according to Dunn, a process ρ is reliable if it is well calibrated, or nearly so. Consider the process producing a credence of x in X based on ground g — for convenience, we'll write it ρ^g_{X,x}. There is only one credence that it assigns, namely x. So it is well calibrated if the truth-ratio of ρ^g_{X,x} for x is equal to x. Now, O_{ρ^g_{X,x}} is the set of tuples (x, X, w, t) where w is a nearby world and t a nearby time at which ρ^g_{X,x} assigns credence x to proposition X. But, by the definition of ρ^g_{X,x}, those are the nearby worlds and nearby times at which the agent has the ground g. Thus, the truth-ratio of ρ^g_{X,x} for x is the proportion of those nearby worlds and times at which the agent has the ground g at which X is true. And that, it seems to me, is something like the objective probability of X conditional on the agent having ground g, at least given a hypothetical frequentist account of objective probability of the sort that Tang favours. As above, we denote the objective probability of X conditional on the agent S having grounds g as follows: P(X | S has g). Thus, P(X | S has g) is the truth-ratio of ρ^g_{X,x} for x. And thus, a credence x in X based on ground g is reliable on the calibrationist account iff x is close to P(X | S has g). That is,

Well-Founded Calibrationist Process Reliabilism for Justified Credences (first attempt) A credence of x in X by an agent S is justified iff

(WCPC1-α) S has ground g;
(WCPC2-α) the credence x in X by S is based on ground g;
(WCPC3-α) the process producing a credence of x in X based on ground g is a reliable process — that is, P(X | S has g) ≈ x.

But now compare this first attempt to formulate well-founded calibrationist process reliabilism, based on Dunn's account of reliable processes and Comesaña's solution to the Generality Problem, with Tang's first attempt to formulate indicator reliabilism. Consider the necessary and sufficient conditions that each imposes for justification: TIC1-α = WCPC1-α; TIC2-α = WCPC2-α; TIC3-α = WCPC3-α. Thus, these are the same account. However, as we saw above, Tang's first pass at indicator reliabilism fails because it counts as justified a credence that is not based on an agent's total evidence; and we also saw that, once the Generality Problem is solved for Dunn's calibrationist process reliabilism, it faces a similar problem, namely, the Graining Problem from above. Tang amends his version of indicator reliabilism by strengthening the third condition TIC3-α to give TIC3. Might we amend Dunn's calibrationist process reliabilism in a similar way, and thereby solve the Graining Problem? Yes, we can:


Well-Founded Calibrationist Process Reliabilism for Justified Credences A credence of x in X by an agent S is justified iff

(WCPC1) S has ground g;
(WCPC2) the credence x in X by S is based on ground g;
(WCPC3) if S has ground g′ ⊆ g, then the process producing a credence of x in X based on ground g′ is a reliable process — that is, P(X | S has g′) ≈ x.

Since TIC3 is equivalent to WCPC3, this final version of process reliabilism for justified credences is equivalent to Tang's final version of his indicator reliabilism for justified credences. Thus, Dunn and Tang have reached the top of the same mountain, albeit by different routes.
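The key step in the route just travelled is that the truth-ratio of the one-credence process ρ^g_{X,x} simply is the hypothetical frequency P(X | S has g). That identity can be made vivid on toy data; the worlds and frequencies below are invented, and the identity itself is definitional.

```python
import random
random.seed(0)

# Toy 'nearby worlds': in each, record whether the agent has ground g
# and whether X is true there; X is true in about 80% of the g-worlds.
worlds = []
for _ in range(10_000):
    has_g = random.random() < 0.5
    x_true = random.random() < (0.8 if has_g else 0.3)
    worlds.append({"has_g": has_g, "X": x_true})

# Hypothetical-frequency surrogate for P(X | S has g).
g_worlds = [w for w in worlds if w["has_g"]]
p_X_given_g = sum(w["X"] for w in g_worlds) / len(g_worlds)

# The process rho^g_{X,x} assigns credence x to X exactly at the worlds
# where the agent has g, so its truth-ratio for x is computed over the
# very same set of worlds.
truth_ratio = sum(w["X"] for w in g_worlds) / len(g_worlds)

# Identical by construction: credence x passes the calibrationist test
# (x close to the truth-ratio) iff it passes Tang's test (x close to
# P(X | S has g)).
print(p_X_given_g == truth_ratio)  # True
```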

4 Two other routes up the mountain

Once we have addressed certain problems with a calibrationist version of process reliabilism for justified credence, we see that it agrees with the current best version of indicator reliabilism, namely, Tang's. This gives us a little hope that both have hit upon the correct account of justification. In the end, I will conclude that both have indeed hit upon the correct extension of the concept of justified credence; but they have done so for the wrong reasons, for they have not hit upon the correct intension. There are three sorts of route you might take when pursuing an account of justification for a given sort of doxastic attitude, such as a credence or a full belief. You might look to intuitions concerning particular cases and try to discern a set of necessary and sufficient conditions that sort these cases in the same way that your intuitions do. Or, you might begin with an existing account of justification for another sort of doxastic attitude and formulate your account for the sort of attitude that interests you by analogy. Or, you might begin with an account of epistemic value, assume that justification must be linked in some natural way to the promotion of epistemic value, and then provide an account of justification that vindicates that assumption. Dunn and Tang have both taken routes of the second sort, though both also make some appeal to intuitions. They take themselves to be generalising reliabilist accounts of justified full belief. In Tang's case, Alston's; in Dunn's case, Goldman's. I will follow a route of the third sort. I will adopt the veritist's account of epistemic value. In the case of categorical doxastic attitudes — that is, belief, disbelief, suspension — veritism says that, if such an attitude is directed towards a true proposition, it is most valuable if it is a belief, least valuable if it is a disbelief, and neutrally valuable if it is a suspension of judgment; if the attitude is directed towards a false proposition, the order is reversed. In the case of graded doxastic states — that is, credences — veritism says that, if such an attitude is directed towards a truth, it is better the higher it is; and if it is directed towards a falsehood, it is better the lower it is. Given this account of epistemic value, what is the natural account of justification? Well, at first sight, there are two: one is process reliabilist; the other is indicator reliabilist. But, in a twist that should come as little surprise given the conclusions of the previous section, it will turn out that these two accounts coincide, and indeed coincide with the final versions of Dunn's and Tang's accounts that we reached above. Thus, I too will reach the top of the same mountain, but by another pair of routes.

4.1 Epistemic value version of indicator reliabilism

In the case of full beliefs, indicator reliabilism says this: a belief in X by S on the basis of grounds g is justified iff the objective probability of X given that S has grounds g is high — that is, close to 1. Tang's indicator reliabilism for justified credence is inspired by this. It is an attempt to generalise this account to the case of credence. However, I think he generalises in the wrong direction; that is, he takes the wrong feature to be salient and uses that to formulate his indicator reliabilism for justified credence. He takes the general form of indicator reliabilism to be something like this: a doxastic attitude s towards X by S on the basis of grounds g is justified iff the attitude s 'matches' (or comes close to 'matching') the objective probability of X given that S has grounds g. And he takes belief to 'match' high objective probability, and credence x to 'match' objective probability of x. The problem with this account is that it leaves mysterious why justification is valuable. Unless we say that matching objective probabilities is somehow epistemically valuable in itself, it isn't clear why we should want to have justified doxastic attitudes in this sense.⁵ After all, there are many quantities pertaining to X and g that a credence might match. It might match the proportion of people with ground g who form a credence in X. Or it might match the proportion of people with credences in X who formed them on the basis of ground g. And so on. What is epistemically relevant about matching the objective probability of X given that you have ground g that isn't relevant about matching those other quantities?

⁵ See (Hájek, ms) and (Pettigrew, 2012) for the claim that credences aim to match the objective probability just as belief aims to match the truth. On this account, credences have greater epistemic value the closer they come to achieving that aim. See (Pettigrew, 2016a, Section 9.4) for what I take to be a decisive refutation.

I contend instead that the general form of indicator reliabilism is this:

Indicator Reliabilism for Justified Doxastic Attitude (epistemic value version) A doxastic attitude s towards proposition X by agent S is justified iff


(EIA1) S has g;
(EIA2) attitude s towards X by S is based on g;
(EIA3) if S has ground g′ ⊆ g, then for every doxastic attitude s′ of the same sort as s, the objective expected epistemic value of attitude s′ towards X given that S has g′ is at most (or not much more than) the objective expected epistemic value of attitude s towards X given that S has g′.

Here, when we say that attitude s′ is of the same sort as attitude s, we mean that both are credences in X, or both are categorical doxastic attitudes towards X. Thus, a full belief in X is of the same sort as a full disbelief in X or a suspension in X; but it is of a different sort from a credence of 0.7 in X. And when we talk of the objective expected epistemic value of an attitude, we mean its expected epistemic value calculated using the objective probability function. Thus, for the veritist, attitude s towards X by S is justified if s is based on a ground g that S has, and s is the attitude towards X that has (nearly) the highest objective expected accuracy conditional on S having the most inclusive grounds she has. The parenthetical qualifications simply make the account slightly less demanding than it would otherwise be. Let's consider this in the full belief case. We have:

Indicator Reliabilism for Justified Belief (epistemic value version) A belief in proposition X by agent S is justified iff

(EIB1) S has g;
(EIB2) the belief that X by S is based on g;
(EIB3) if S has ground g′ ⊆ g, then
(a) the objective expected epistemic value of disbelief in X, given that S has g′, is at most (or not much more than) the objective expected epistemic value of belief in X, given that S has g′;
(b) the objective expected epistemic value of suspension in X, given that S has g′, is at most (or not much more than) the objective expected epistemic value of belief in X, given that S has g′.

To complete this, we need only the veritist's account of the epistemic value of these categorical doxastic attitudes. As described above: if the proposition is true, belief has greatest epistemic value, then suspension of judgment, then disbelief; if it is false, the order is reversed. It is natural to say that a belief in a truth and a disbelief in a falsehood have the same high epistemic value — following Kenny Easwaran, we denote this R (for 'getting it Right'), and assume R > 0 (Easwaran, 2016). And it is natural to say that a disbelief in a truth and a belief in a falsehood have the same low epistemic value — again following Easwaran, we denote this −W (for 'getting it Wrong'), and assume W > 0. And finally it is natural to say that suspension of belief in a truth has the same epistemic value as suspension of belief in a falsehood, and both have epistemic value 0. Following Easwaran, we assume that W > R — that is, we disvalue getting things wrong more than we value getting things right.⁶

Now, suppose proposition X has objective probability p. Then the expected epistemic utility of the different categorical doxastic attitudes towards X is given below:

• Expected epistemic value of belief in X = p · R + (1 − p) · (−W).
• Expected epistemic value of suspension in X = p · 0 + (1 − p) · 0.
• Expected epistemic value of disbelief in X = p · (−W) + (1 − p) · R.

Thus, belief in X has greatest epistemic value amongst the possible categorical doxastic attitudes to X if p > W/(R+W); disbelief in X has greatest epistemic value if p < R/(R+W); and suspension in X has greatest value if R/(R+W) < p < W/(R+W) (at p = W/(R+W), belief ties with suspension; at p = R/(R+W), disbelief ties with suspension). With this in hand, we have the following version of indicator reliabilism for justified beliefs:

Indicator Reliabilism for Justified Belief (veritist version) A belief in X by agent S is justified iff

(EIB1∗) S has g;
(EIB2∗) the belief in X by S is based on g;
(EIB3∗) if S has ground g′ ⊆ g, then the objective probability of X given that S has g′ is (nearly) greater than W/(R+W).

And of course this is simply a more explicit version of the standard version of indicator reliabilism. It is more explicit because it gives a particular threshold above which the objective probability of X given that S has g counts as 'high', and above which (or not much below which) the belief in X by S counts as justified — that threshold is W/(R+W). Note that this also gives a straightforward account of when a suspension of judgment is justified. Replace (EIB3∗) with:

(EIS3∗) if S has ground g′ ⊆ g, then the objective probability of X given that S has g′ is (nearly) between R/(R+W) and W/(R+W).

⁶ Easwaran notes that, if W = R, and if we calculate the epistemic value of a set of doxastic attitudes by summing the value of the individual attitudes, then it is just as valuable to believe both a proposition and its negation as it is to suspend judgment on both; and if W < R, it is better to believe both a proposition and its negation than to suspend judgement on both. Thus, he assumes W > R.
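To make the thresholds just derived concrete, here is a small sketch with invented values of R and W (the particular numbers are mine, chosen only for illustration):

```python
from fractions import Fraction as F

def best_attitudes(p, R=F(1), W=F(2)):
    """Return the categorical attitude(s) with maximal objective
    expected epistemic value, on the veritist scoring above."""
    values = {
        "belief": p * R + (1 - p) * (-W),
        "suspension": F(0),
        "disbelief": p * (-W) + (1 - p) * R,
    }
    top = max(values.values())
    return [a for a, v in values.items() if v == top]

# With R = 1 and W = 2: belief is best when p > W/(R+W) = 2/3,
# disbelief when p < R/(R+W) = 1/3, and suspension in between,
# with exact ties at the two thresholds.
for p in (F(1, 5), F(1, 3), F(1, 2), F(2, 3), F(9, 10)):
    print(p, best_attitudes(p))
```

Exact fractions are used so that the ties at the thresholds come out exactly, matching the parenthetical remark in the text.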


And when a disbelief is justified. Replace (EIB3∗) with:

(EID3∗) if S has ground g′ ⊆ g, then the objective probability of X given that S has g′ is (nearly) less than R/(R+W).

Next, let's turn to indicator reliabilism for justified credence. Here's the epistemic value version:

Indicator Reliabilism for Justified Credence (epistemic value version) A credence of x in proposition X by agent S is justified iff

(EIC1) S has g;
(EIC2) credence x in X by S is based on g;
(EIC3) if g′ ⊆ g is a ground that S has, then for every credence x′, the objective expected epistemic value of credence x′ in X given that S has g′ is at most (or not much more than) the objective expected epistemic value of credence x in X given that S has g′.

Again, to complete this, we need an account of epistemic value for credences. As noted above, the veritist holds that the epistemic value of a credence is given by its accuracy. There is a lot to be said about different potential measures of the accuracy of a credence. Such measures are often called scoring rules or local accuracy measures, and there are many that are used for a variety of different purposes in epistemology, statistics, and psychology (Savage, 1971; de Finetti, 1974; Joyce, 1998, 2009; Predd et al., 2009; Pettigrew, 2016a).⁷ Here, I will say only this: we assume that those measures are continuous and strictly proper. That is: first, the accuracy of a credence in a given proposition varies continuously with that credence; second, any probability p in a proposition X expects credence p to be more accurate than it expects any other credence x ≠ p in X to be.⁸ These assumptions are widespread in the literature on accuracy-first epistemology, and they are among the basic assumptions on which many of the central arguments in that area rest (Greaves & Wallace, 2006; Predd et al., 2009; Moss, 2011; Pettigrew, 2013; Horowitz, 2014; Schoenfield, 2015; Pettigrew, 2016b; Levinstein, 2015; Pettigrew, ta). Given both assumptions, (EIC3) is provably equivalent to:

(EIC3′) if S has ground g′ ⊆ g, then the objective probability of X given that the agent has ground g′ approximates or equals x — that is, P(X | S has g′) ≈ x.

But of course EIC3′ = TIC3 = WCPC3 from above. Thus, the veritist version of indicator reliabilism for justified credences is equivalent to Tang's indicator reliabilism, and to the calibrationist version of process reliabilism.

⁷ As we will use it here, a scoring rule is a measure of the accuracy of a credence. It is a function s : {0, 1} × [0, 1] → [−∞, 0], so that s(1, x) is the accuracy of having credence x in a true proposition, while s(0, x) is the accuracy of having credence x in a false proposition. Sometimes in the literature, scoring rules are taken to be measures of inaccuracy; that is, they are the negatives of the functions considered here. Here is an example: q(1, x) = −(1 − x)² and q(0, x) = −x². This is sometimes called the quadratic scoring rule.

⁸ More precisely: s(1, x) and s(0, x) are continuous functions of x; and, for any 0 ≤ p ≤ 1, p·s(1, x) + (1 − p)·s(0, x) is maximised, as a function of x, at x = p. Recall the quadratic scoring rule q from footnote 7. It is clearly continuous, and a little calculus shows that p·q(1, x) + (1 − p)·q(0, x) = −p(1 − x)² − (1 − p)x² is maximised, as a function of x, at x = p.
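The equivalence of (EIC3) and (EIC3′) turns on strict propriety: the objective expected accuracy of a credence x in X is uniquely maximised when x equals the objective probability of X. Here is a minimal numerical check of that fact for the quadratic scoring rule; the probability value and the grid are chosen only for illustration.

```python
def quadratic_score(truth, x):
    # q(1, x) = -(1 - x)^2 on a truth; q(0, x) = -x^2 on a falsehood.
    return -((1 - x) ** 2) if truth else -(x ** 2)

def expected_score(p, x):
    # Objective expected accuracy of credence x when P(X) = p.
    return p * quadratic_score(1, x) + (1 - p) * quadratic_score(0, x)

p = 0.73  # stand-in for P(X | S has g')
grid = [i / 1000 for i in range(1001)]
best = max(grid, key=lambda x: expected_score(p, x))
print(best)  # 0.73: expected accuracy peaks where credence = probability
```

So a credence maximises objective expected accuracy given a ground, as (EIC3) demands, exactly when it (approximately) matches the objective probability on that ground, as (EIC3′) demands.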

4.2 Epistemic value version of process reliabilism

Next, let’s turn to process reliabilism. How might we give an epistemic value version of that? The mistake made by the calibrationist’s version of process reliabilism is of the same sort as the mistake made by Tang in his formulation of indicator reliabilism — both generalise from the case of full beliefs in the wrong way by mistaking an accidental feature for the salient feature. For the calibrationist, a full belief is justified if it is formed by a reliable process, and a process is reliable if a high proportion of the beliefs it produces are true. Now, notice that there is a sense in which such a process is calibrated: a belief is associated with a high degree of confidence, and that matches, at least approximately, the high truth-ratio of a reliable process. In fact, we want to say that this process is belief -reliable. For it is possible for a process to be reliable in its formation of beliefs, but not in its formation of disbeliefs. A process is disbelief -reliable if a high proportion of the disbeliefs it produces are false. And we might say that a process is suspension-reliable if a middling proportion of the suspensions it forms are true and a middling proportion are false. In each case, we think that there is, corresponding to each sort of categorical doxastic attitude s, a fitting proportion x such that a process is s-reliable if x is (approximately) the proportion of truths amongst the propositions to which it assigns s. Applying this in the credal case gives us the calibrationist version of process reliabilism that we have already met — a credence x in S is justified if it is formed by a process whose truth-ratio for a given credence is equal to that credence. However, being the product of a belief-reliable process is not the feature of a belief in virtue of which it is justified. After all, it is not a feature that is connected to epistemic value. Thus, the calibrationist account of justified belief or credence faces the Connection Problem. Why should we want to have beliefs or credences that are justified in this sense? Why should

14

we want to have beliefs or credences that are formed by well calibrated processes? What epistemic value does this promote? One possibility is that forming beliefs or credences using a well calibrated process makes it more likely that your credences themselves will be well calibrated, something that some philosophers take to be a source of epistemic value (van Fraassen, 1983; Shimony, 1988; Lange, 1999). We say that a set of credences is well calibrated if, for each 0 ≤ x ≤ 1, the proportion of true propositions to which we assign x is x. The problem is that, as is well known, being well calibrated is not in fact the aim of credence nor a source of epistemic value (Seidenfeld, 1985). For one thing, it is too easy to come by — providing you have a credence in the negation of a proposition whenever you have a credence in that proposition, you can ensure that your credences are well calibrated by assigning 0.5 to each proposition — whatever the truth values of the propositions, your credences will be well calibrated. Also, for an agent with credences only in a proposition and its negation, while credences of 0.5 in both are guaranteed to be well calibrated, a credence of 0.99 in the proposition and a credence of 0.01 in its negation are guaranteed not to be; thus, they would count as less epistemically valuable according to the calibrationist account of epistemic value, even if the proposition is true and its negation false. Thus, being the product of a belief-reliable process is not the feature of a belief in virtue of which it is justified. Rather, a belief is justified if it is the product of a process that has high expected epistemic value. Process Reliabilism for Justified Doxastic Attitude (epistemic value version) Doxastic attitude s towards proposition X by agent S is justified iff (EPA1-α) s is produced by a process ρ; (EPA2-α) If ρ0 is a process that is available to S, then the expected epistemic value of ρ0 is at most the expected epistemic value of ρ. To complete this account, we must say which processes count as available ˜ to an agent? To answer this, recall Comesana’s solution to the Generality Problem. On this solution, the only processes that interest us have the form, process producing doxastic attitude s towards X on basis of ground g. Clearly, a process of this form is available to an agent exactly when the agent has ground g. This gives Process Reliabilism about Justified Doxastic Attitudes (epistemic value version) Attitude s towards proposition X by S is justified iff g

(EPA1) s is produced by process ρs,X ; 15

(EPA2) If S has ground g0 ⊆ g, then for every doxastic attitude s0 , g0

the expected epistemic value of process ρs0 ,X is at most the g0

expected epistemic value of process ρs,X . Thus, in the case of full beliefs, we have: Process Reliabilism for Justified Belief (epistemic value version) A belief in proposition X by agent S is justified iff g

(EPB1) Belief in X is produced by process ρbel,X ; (EPB2) if S has ground g0 ⊆ g, then g0

(a) the expected epistemic value of process ρdis,X is at most (or not much more than) the expected epistemic value g0

of process ρbel,X ; g0

(b) the expected epistemic value of process ρsus,X is at most (or not much more than) the expected epistemic value g0

of process ρbel,X ; And it is easy to see that (EPB1) = (EIB1) + (EIB2), since belief in X by S is g produced by process ρbel,X iff S has ground g and her belief in X is based on g. Also, (EPB2) is equivalent to (EIB3). Thus, as for the epistemic version of indicator reliabilism, we get: Process Reliabilism for Justified Beliefs (veritist version) A belief in X by agent S is justified iff (EPB1∗ ) S has g; (EPB2∗ ) the belief in X by S is based on g; (EPB3∗ ) if S has ground g0 ⊆ g, then the objective probability of X W given that S has g is (nearly) greater than R+ W. Next, consider how the epistemic value version of process reliabilism applies to credences. Process Reliabilism for Justified Credence (epistemic value version) A credence of x in proposition X by agent S is justified iff g

(EPC1) the credence in x is produced by process ρ x,X ; (EPC2) if S has ground g0 ⊆ g, then for any credence x 0 the exg0

pected epistemic value of process ρ x0 ,X is at most (or not much more than) the expected epistemic value of process g0

ρ x,X . 16

As before, we see that (EPC1) is equivalent to (EIC1) + (EIC2). And, providing the measure of accuracy is continuous and strictly proper, we get that (EPC2) is equivalent to (EIC3). So, once again, we arrive at the same summit. The routes taken by Tang, Dunn, and the epistemic value versions of process and indicator reliabilism lead to the same spot, namely, the following account of justified credence:

Reliabilism for Justified Credence (epistemic value version) A credence of x in proposition X by agent S is justified iff

(ERC1) S has g;
(ERC2) credence x in X by S is based on g;
(ERC3) if S has ground g′ ⊆ g, then the objective probability of X given that the agent has ground g′ approximates or equals x — that is, P(X | S has g′) ≈ x.

In the remainder of the paper, we take up some objections to the account just given, clarify certain aspects of it, and explore its consequences.
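Pulling the pieces together, the summit account can be phrased as a single checkable procedure. The following is merely a sketch under invented assumptions — grounds are modelled as sets of features, and a hypothetical frequency stands in for objective probability — but it shows how one test serves the process route (the conditional frequency is exactly the truth-ratio of ρ^g_{X,x}) and the indicator route at once.

```python
def conditional_freq(worlds, ground):
    """Hypothetical-frequency surrogate for P(X | S has ground): the
    proportion of nearby worlds possessing that ground at which X is true."""
    relevant = [w for w in worlds if ground <= w["grounds"]]
    return sum(w["X"] for w in relevant) / len(relevant)

def justified(credence, ground, had_grounds, worlds, tol=0.05):
    """(ERC1)-(ERC3): the credence is based on a ground S has, and it
    approximates P(X | g') for every at-least-as-inclusive ground g'
    that S also has."""
    if ground not in had_grounds:                       # (ERC1), (ERC2)
        return False
    return all(abs(credence - conditional_freq(worlds, g2)) <= tol
               for g2 in had_grounds if g2 >= ground)   # (ERC3)

# Example: a world is a dict {"grounds": frozenset of features, "X": bool}.
worlds = ([{"grounds": frozenset({"g"}), "X": True}] * 8 +
          [{"grounds": frozenset({"g"}), "X": False}] * 2)
g = frozenset({"g"})
print(justified(0.8, g, had_grounds=[g], worlds=worlds))  # True
```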

5 The Swamping Problem

In this section, we consider a standard objection to reliabilism, called the Swamping Problem. In its usual formulation, this problem arises because of an inconsistency between three plausible principles: the first is an intuition about the added value that justification bestows, the second is a putative norm of rationality, and the third is reliabilism itself. We state this formulation precisely, using the framework of decision theory, and we show that, in fact, the putative norm of rationality is false. So the problem poses no threat to reliabilism.

Above, I motivated the epistemic value versions of process and indicator reliabilism by noting that the two existing proposals — Dunn's and Tang's — face the Connection Problem, which arises from the fact that we value a doxastic attitude more if it is justified than if it is unjustified. If either Tang's or Dunn's proposal is correct, this fact is mysterious — it is left unexplained — for neither Tang nor Dunn posits a connection between justification and epistemic value. The epistemic value versions of indicator and process reliabilism, on the other hand, clearly do posit such a connection. And, moreover, that connection is sufficient to explain why we value a doxastic attitude more when it is justified than when it is unjustified. According to the epistemic value version of indicator reliabilism, a doxastic attitude is justified if it has maximal objective expected epistemic value given the ground on which it is based. And according to the epistemic value version of process reliabilism, a doxastic attitude is justified if the process by which it is formed has maximal objective expected epistemic value amongst the processes that are available to the agent. Unlike Dunn's and Tang's accounts, both epistemic value accounts, therefore, explain why, if I have a doxastic attitude towards a proposition, I assign higher value to that attitude being justified than I do to that attitude being unjustified. After all, if you have something, it is rational to prefer that it has high objective expected utility; it is rational to prefer that possibility to one in which it has low objective expected utility. For instance, suppose I find a lottery ticket on the street; it comes either from a 10-ticket lottery or from a 100-ticket lottery; both lotteries pay out the same amount to the holder of the winning ticket. Then it is rational for me to hope that the ticket I hold belongs to the smaller lottery, since that would maximise my chance of winning and thus maximise the expected utility of the ticket.

However, while epistemic value versions of reliabilism explain why we assign greater value to my doxastic attitude towards X being justified than we do to it being unjustified, it might seem that they cannot explain why we assign greater value to my doxastic attitude to X actually having a certain amount of epistemic value and being justified than we do to my attitude actually having the same amount of epistemic value and being unjustified. And, it is claimed, we do have that preference. For instance, I prefer a justified true belief to an unjustified true belief; I prefer a justified false belief to an unjustified false belief; I prefer a justified credence with accuracy score 0.01 to an unjustified credence with accuracy score 0.01. If reliabilists cannot explain that preference, then so much the worse for reliabilism — or so runs the Swamping Problem.

Let's begin by explaining in a little more detail how this objection plays out in the case of full beliefs for someone who subscribes to the epistemic value version of reliabilism about those attitudes — that is, someone who claims that an agent's doxastic attitude based on a given ground is justified just in case it has maximal objective expected epistemic value given that the agent has that ground. (It will be easier to explain the issues in the case of beliefs. We will return to credences at the end of the section.) For such a reliabilist, if you prefer a justified true belief to an unjustified true belief, you must prefer a belief that actually has maximal epistemic value and had a high chance of having maximal epistemic value to a belief that actually has maximal epistemic value but had a low chance of having maximal epistemic value. And this, certain philosophers claim, is irrational. It is rational to value a high chance of a high utility more than a low chance of a high utility; but it is not rational to value an actual high utility that had a high chance of coming about more than an actual high utility that had a low chance of coming about. The actual high utility 'swamps' any consideration of the chance of obtaining that utility. For instance, in the example above, it is rational for me to value having a ticket from the smaller lottery more than I value having a ticket from the larger lottery; but it is irrational for me to value having the winning ticket from the smaller lottery more than I value having the winning ticket from the larger lottery. The fact that I actually have the winning ticket 'swamps' any consideration of the chance that I would have it.

Thus, there is a particular way we value beliefs: we value them when they are true and justified more than when they are true and unjustified. Call this the Value Intuition. If reliabilism is true, then this means that I value something that actually has high utility and that had a high chance of having high utility more than I value something that actually has high utility and that had a low chance of having high utility. But this is irrational. We must preserve the Value Intuition and the irrationality judgment; so we must reject reliabilism. This is known variously as the Swamping Problem or the Value Problem for reliabilism about justification (Zagzebski, 2003; Kvanvig, 2003).

The central assumption of the Swamping Problem is a principle that, in another context, H. Orri Stefánsson and Richard Bradley call Chance Neutrality (Stefánsson & Bradley, 2015). They state it precisely within the framework of Richard Jeffrey's decision theory (Jeffrey, 1983). In that framework, we have a desirability function V and a credence function c, both of which are defined on an algebra of propositions F. For a proposition A in F, V(A) measures how strongly our agent desires A, or how greatly she values it, while c(A) measures how strongly she believes A, or her credence in A. The central principle of the decision theory is this:

Desirability Suppose the propositions X_1, ..., X_n form a partition. Then

V(X) = ∑_{i=1}^n c(X_i | X) V(X & X_i)

That is, roughly, the value of X is its expected value. This is calculated by taking the credence in each element of the partition given X and using that to weight the value of X being true in that element of the partition. Now, suppose the algebra on which V and c are defined includes some propositions that concern the objective probabilities of other propositions in the algebra. Then we suppose throughout that the credence function c obeys Lewis' Principal Principle:

Principal Principle Suppose the propositions X_1, ..., X_n form a partition. And suppose 0 ≤ α_1, ..., α_n ≤ 1 and ∑_{i=1}^n α_i = 1. Then

c(X_j | ⋀_{i=1}^n Objective probability of X_i is α_i) = α_j

That is, an agent's credence in X_j, conditional on information that gives the objective probability of X_j and other members of a partition to which X_j belongs, should be equal to the objective probability of X_j. In this framework, Chance Neutrality can be stated as follows:

Chance Neutrality Suppose X_1, ..., X_n form a partition. And suppose 0 ≤ α_1, ..., α_n ≤ 1 and ∑_{i=1}^n α_i = 1. Then

V(X_j & ⋀_{i=1}^n Objective probability of X_i is α_i) = V(X_j)

That is, the actual outcome of the chance process that picks between X_1, ..., X_n 'swamps' information about the chance process itself in our evaluation, which is recorded in our value or desirability function V. A simple consequence of this: if 0 ≤ α_1, α′_1, ..., α_n, α′_n ≤ 1 and ∑_{i=1}^n α_i = 1 and ∑_{i=1}^n α′_i = 1, then

V(X_j & ⋀_{i=1}^n Objective probability of X_i is α_i) = V(X_j & ⋀_{i=1}^n Objective probability of X_i is α′_i)
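To see how these principles interact, here is a minimal numerical sketch of the framework; the partition, credences, and desirabilities are all invented. With Chance Neutrality in place, Desirability and the Principal Principle force the chance proposition to be valued at ∑_{i=1}^n α_i V(X_i), which is the Linearity identity derived in general in footnote 9 below.

```python
# A two-cell partition {X1, X2} and a chance proposition assigning them
# objective probabilities (a1, a2). All numbers are invented.
a = [0.3, 0.7]           # objective probabilities of X1, X2
V_outcome = [10.0, 2.0]  # desirabilities V(X1), V(X2)

# Principal Principle: credence in Xi given the chance proposition is ai.
credence_given_chance = a

# Chance Neutrality: V(Xi & chance proposition) = V(Xi).
V_conjunction = V_outcome

# Desirability: the value of the chance proposition is the
# credence-weighted value of its conjunctions with the partition cells.
V_chance = sum(c * v for c, v in zip(credence_given_chance, V_conjunction))

print(V_chance)                                       # 4.4
print(sum(ai * vi for ai, vi in zip(a, V_outcome)))   # 4.4, Linearity
```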

Now consider the particular case of this that is cited in the Swamping Problem. Suppose I believe X on the basis of ground g. I assign greater value to this belief being true and justified than I do to it being true and unjustified. Moreover, for any given way of being justified and any given way of being unjustified, I assign greater value to this belief being true and justified in that way than I do to it being true and unjustified in that way. That is the Value Intuition. Now, recall the reliabilist's account of justification:

• Belief in X is justified ⇔ the objective probability of X given I have grounds g is at least W/(R+W);
• Belief in X is unjustified ⇔ the objective probability of X given I have grounds g is less than W/(R+W).

Thus, the Value Intuition can be stated as follows in this framework:

Value Intuition For any α′ < W/(R+W) < α,

V(X & Objective probability of X given I have g is α′) < V(X & Objective probability of X given I have g is α)

And this violates Chance Neutrality. Thus, in this framework, the argument of the Swamping Problem can be presented as follows. The following three claims are inconsistent:

(SPB1) The Value Intuition.
(SPB2) Reliabilism for Justified Belief (veritist version).

(SPB3) Chance Neutrality.

Thus, to evaluate the Swamping Problem, as stated here, we must evaluate Chance Neutrality. Is it a requirement of rationality? Stefánsson and Bradley argue that it is not (Stefánsson & Bradley, 2015, Section 3). They show that, in the presence of Desirability and the Principal Principle, Chance Neutrality entails a principle called Linearity.⁹ Then they claim that Linearity is not a requirement of rationality. If it is permissible to violate Linearity, then it cannot be a requirement to satisfy a principle that entails it. So Chance Neutrality is not a requirement of rationality. Linearity is the following principle:

Linearity V(⋀_{i=1}^n Objective probability of X_i is α_i) = ∑_{i=1}^n α_i V(X_i)

That is, an agent should value a lottery at the objective expected value of its outcome. Now, as is well known, real agents often violate Linearity (Buchak, 2013). The most famous violations are known as the Allais preferences (Allais, 1953). Suppose there are 100 tickets numbered 1 to 100. One ticket will be drawn and you will be given a prize depending on which option you have chosen from L1, ..., L4. The table below shows how the four options pay out.

      Tickets 1–89    Tickets 90–99   Ticket 100
L1    £1,000,000      £1,000,000      £1,000,000
L2    £1,000,000      £5,000,000      £0
L3    £0              £1,000,000      £1,000,000
L4    £0              £5,000,000      £0

⁹ We make the following abbreviation: P^{α}_{X} is the proposition The objective probability of X given I have g is α — that is, P(X | I have g) = α. By Desirability,

V(⋀_{i=1}^n P^{α_i}_{X_i}) = ∑_{j=1}^n c(X_j | ⋀_{i=1}^n P^{α_i}_{X_i}) V(X_j & ⋀_{i=1}^n P^{α_i}_{X_i})

By the Principal Principle,

c(X_j | ⋀_{i=1}^n P^{α_i}_{X_i}) = α_j

By Chance Neutrality,

V(X_j & ⋀_{i=1}^n P^{α_i}_{X_i}) = V(X_j)

Therefore,

V(⋀_{i=1}^n P^{α_i}_{X_i}) = ∑_{i=1}^n α_i V(X_i)

which is just Linearity, as required.

Each ticket has an equal chance of winning. Now, it turns out that many people have preferences recorded in the following desirability function V:

V(L1) > V(L2) and V(L3) < V(L4)

That is, they strictly prefer L1 to L2 and L4 to L3. When there is an option that guarantees them a high payout (£1m), they prefer that over something with a 1% chance of nothing (£0), even if it also provides a 10% chance of a much greater payout (£5m). On the other hand, when there is no guarantee of a high payout, they prefer the chance of the much greater payout (£5m), even if there is also a slightly greater chance of nothing (£0). The problem is that there is no way to assign values to V(£0), V(£1m), and V(£5m) so that V satisfies Linearity and also these inequalities.¹⁰ Stefánsson and Bradley show that, in the presence of Desirability and the Principal Principle, Chance Neutrality entails Linearity; and they argue that there are rational violations of Linearity (such as the Allais preferences); so they conclude that there are rational violations of Chance Neutrality.

So far, so good for the reliabilist: the Swamping Problem assumes that Chance Neutrality is a requirement of rationality; and we have seen that it is not. It might seem, however, that reliabilism is not out of the woods just yet. For it might seem that we in fact appealed to Linearity when we formulated our epistemic value versions of reliabilism! Let's recall how we proceeded there. We began by positing an epistemic value R for a true belief and an epistemic value −W for a false belief. And we said that S's belief in X on the basis of g is justified iff its objective expected epistemic value given S has g is maximal amongst other possible attitudes — that is, if αR − (1 − α)W is no less than either 0 or −αW + (1 − α)R, where α = P(X | S has g). We did this because we wanted to ensure that justified beliefs are more valuable than unjustified beliefs — that is, we wanted to avoid the Connection Problem.

¹⁰ Suppose, for a reductio, that there is. By Linearity,

V(L1) = 0.89·V(£1m) + 0.1·V(£1m) + 0.01·V(£1m)
V(L2) = 0.89·V(£1m) + 0.1·V(£5m) + 0.01·V(£0m)

Then, since V(L1) > V(L2), we have:

0.1·V(£1m) + 0.01·V(£1m) > 0.1·V(£5m) + 0.01·V(£0m)

But also by Linearity,

V(L3) = 0.89·V(£0m) + 0.1·V(£1m) + 0.01·V(£1m)
V(L4) = 0.89·V(£0m) + 0.1·V(£5m) + 0.01·V(£0m)

Then, since V(L3) < V(L4), we have:

0.1·V(£1m) + 0.01·V(£1m) < 0.1·V(£5m) + 0.01·V(£0m)

And this gives a contradiction.
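The algebra in footnote 10 can also be checked mechanically. The sketch below searches a grid of candidate desirabilities for the three prizes and finds none that satisfies Linearity together with both Allais preferences; the grid is only illustrative, but the footnote's algebra shows that the failure is fully general.

```python
from itertools import product

def linear_value(payout_values, probs):
    # Linearity: value a lottery at the expected value of its outcomes.
    return sum(p * v for p, v in zip(probs, payout_values))

found = False
for v0, v1m, v5m in product(range(0, 101, 5), repeat=3):
    V = {"£0": v0, "£1m": v1m, "£5m": v5m}
    L1 = linear_value([V["£1m"]], [1.0])
    L2 = linear_value([V["£1m"], V["£5m"], V["£0"]], [0.89, 0.10, 0.01])
    L3 = linear_value([V["£0"], V["£1m"]], [0.89, 0.11])
    L4 = linear_value([V["£0"], V["£5m"]], [0.90, 0.10])
    if L1 > L2 and L3 < L4:
        found = True
print(found)  # False: no assignment recovers the Allais preferences
```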


But in order to ensure that we did this, we must have taken the value of a justified belief to be its objective expected epistemic value, and the value of an unjustified belief to be its objective expected epistemic value. And that, it might seem, is simply Linearity. Before we see why this argument is mistaken, let's see how bad it would be for the reliabilist if it were correct and they were committed to Linearity. The problem is that we can tweak Value Intuition to give an alternative principle, Value Intuition∗, so that, just as Chance Neutrality, Reliabilism, and Value Intuition form an inconsistent triad, so do Linearity, Reliabilism, and Value Intuition∗. Value Intuition∗ says that there is some 0 < α∗ < 1 such that: (i) the value of a true belief in X with objective probability α∗ given its grounds is greater than the value of a true belief in X; (ii) the value of a false belief in X with objective probability α∗ given its grounds is greater than the value of a false belief in X. In other words, justification gives a boost in value both to true beliefs and to false beliefs. In symbols:

Value Intuition∗ There is 0 < α∗ < 1 such that
(i) V(X) < V(X & Objective probability of X given I have g is α∗)
(ii) V(¬X) < V(¬X & Objective probability of X given I have g is α∗)

The following three claims are inconsistent:11

(SPB1∗) The Value Intuition∗.
(SPB2∗) Reliabilism for Justified Belief (veritist version).
(SPB3∗) Linearity.

11 We make the following abbreviations:

• P^α_X is the proposition Objective probability of X given I have g is α
• f(α) = V(X & P^α_X) and g(α) = V(¬X & P^α_X)
• F = V(X) and G = V(¬X)

By Desirability and the Principal Principle, we have, for 0 ≤ α ≤ 1,

    V(P^α_X) = c(X | P^α_X) V(X & P^α_X) + c(¬X | P^α_X) V(¬X & P^α_X)
             = α V(X & P^α_X) + (1 − α) V(¬X & P^α_X)

So V(P^α_X) = α f(α) + (1 − α) g(α). Moreover, by Linearity,

    V(P^α_X) = α V(X) + (1 − α) V(¬X)

So V(P^α_X) = αF + (1 − α)G. Thus, for all 0 ≤ α ≤ 1, αF + (1 − α)G = V(P^α_X) = α f(α) + (1 − α) g(α). In particular, α∗F + (1 − α∗)G = α∗ f(α∗) + (1 − α∗) g(α∗). But by Value Intuition∗, F < f(α∗) and G < g(α∗). So α∗F + (1 − α∗)G < α∗ f(α∗) + (1 − α∗) g(α∗). And this gives our contradiction.
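For a concrete feel for the footnote's argument, here is a toy numerical check; the particular values of F, G, J, and α∗ are my own illustrative choices, not anything from the text.

```python
# Toy illustration of footnote 11: pick a boost J > 0 so that Value
# Intuition* holds, and watch Linearity's identity
#     alpha*F + (1-alpha)*G = alpha*f(alpha) + (1-alpha)*g(alpha)
# fail at alpha = alpha*.
alpha_star = 0.8
F, G = 1.0, -1.0      # V(X) and V(not-X): toy primary values
J = 0.5               # value boost for justification
f = F + J             # f(alpha*) = V(X & P) once the boost applies
g = G + J             # g(alpha*) = V(not-X & P) once the boost applies

lhs = alpha_star * F + (1 - alpha_star) * G
rhs = alpha_star * f + (1 - alpha_star) * g
print(lhs, rhs)       # 0.6 vs 1.1: the identity Linearity requires fails
assert lhs < rhs      # exactly the strict inequality derived in the footnote
```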


Thus, if reliabilism is to preserve the intuitions about the value of justification that are set out in Value Intuition∗, it must not be committed to Linearity. Fortunately, I think it is not. The problem with the argument given above lies in the final step. It assumes that, since we talk of R as the epistemic value of a true belief and −W as the epistemic value of a false belief, we must take V(X) to be R, V(¬X) to be −W, αR − (1 − α)W to be αV(X) + (1 − α)V(¬X), and αV(X) + (1 − α)V(¬X) to be the value of my belief in X when the objective probability of X given I have g is α — which, in turn, we take to be V(P^α_X), as Linearity demands. But that's not how this works. When you deny Chance Neutrality, and allow the chance of an outcome to influence the value of that outcome, you have to determine the value of an outcome in two stages. First, you determine what we might call its primary value. In the case of the epistemic value of a belief, the primary value of a true belief in X is R, while the primary value of a false belief in X is −W. Next, you use that primary value to determine how you'll assign value to different objective probabilities of X being true. In this case, we note that, if I believe X on the basis of g, then my belief has maximal objective expected primary value if the objective probability of X given I have g is at least W/(R + W). Thus, I might decide that I will assign no extra primary value to a belief if it is unjustified, but I will assign a boost of primary value if it is justified — let's say I add J to the existing primary value of the belief if it is justified. This allows me to assign what we might call a secondary value of R + J to a belief in X when it is true and justified; when it is false and justified, it receives −W + J; when it is true and unjustified, it receives R; and when it is false and unjustified, it receives −W. Now, it is the secondary value of states of affairs that V encodes. So, if I believe X on the basis of g, we have:

    V(X & Obj. probability of X given I have g is α) = R        if α < W/(R + W)
                                                     = R + J    if α ≥ W/(R + W)

and

    V(¬X & Obj. probability of X given I have g is α) = −W       if α < W/(R + W)
                                                      = −W + J   if α ≥ W/(R + W)
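The two-stage assignment just described is easy to state programmatically. The following Python sketch is my own illustration; the particular values of R, W, and J are free parameters, not anything the text fixes.

```python
def secondary_value(truth, alpha, R=1.0, W=2.0, J=0.5):
    """Secondary value of (belief in X) & (obj. probability of X given g is alpha).
    truth: True if X is true, False otherwise. The belief earns the boost J,
    i.e. counts as justified, iff alpha >= W / (R + W)."""
    primary = R if truth else -W          # stage one: primary value
    justified = alpha >= W / (R + W)      # stage two: where the boost applies
    return primary + (J if justified else 0.0)

# With R=1, W=2 the threshold is 2/3, so:
print(secondary_value(True, 0.9))   # 1.5  (true and justified: R + J)
print(secondary_value(False, 0.9))  # -1.5 (false and justified: -W + J)
print(secondary_value(True, 0.5))   # 1.0  (true and unjustified: R)
```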

Now, by Desirability, V(X) is the expectation of V(X & Obj. prob. of X given I have g is α), where the probabilities are given by c(− | X). And, similarly, V(¬X) is the expectation of V(¬X & Obj. prob. of X given I have g is α), where the probabilities are given by the same credence function. Thus, if I let p = c(P(X | I have g) ≥ W/(R + W) | X), then:

• V(X) = (1 − p)R + p(R + J) = R + pJ
• V(¬X) = −(1 − p)W + p(−W + J) = −W + pJ

Thus, notice that, if p, J > 0, V(X) ≠ R and V(¬X) ≠ −W. R gives the primary value of X when I believe X — that is the value that the truth of my belief initially contributes to the value of the situation in which it is true. That is then used to determine the value of different objective probabilities of X given I have grounds g for my belief. Those different values also contribute to the value of a situation in which X is true and it has a particular objective probability given I have g. And, given Desirability, the values of those situations determine a new value for X when I believe X — we call this the secondary value of X and record it in V.

Something similar happens in the case of credences. As noted above, the primary value of a credence x in X based on g is measured by a continuous and strictly proper local accuracy measure s(i, x), where i = 1 if X is true and i = 0 if X is false. Next, we use this primary value to determine the value of different objective probabilities of X given I have g. Given that s is strictly proper, the objective expected primary value of a credence x in X given I have g is maximal when x equals the objective probability of X given I have g. Thus, as before, I might decide that I will assign no extra primary value to a credence if it is unjustified, but I will assign a boost of primary value if it is justified — again, let's say I add J to the existing primary value of the credence if it is justified. Then

    V(X & Obj. probability of X given I have g is α) = s(1, x)        if x ≠ α
                                                     = s(1, x) + J    if x = α

and

    V(¬X & Obj. probability of X given I have g is α) = s(0, x)        if x ≠ α
                                                      = s(0, x) + J    if x = α

Now, again by Desirability:

• V(X) is the expectation of V(X & Obj. prob. of X given I have g is α);
• V(¬X) is the expectation of V(¬X & Obj. prob. of X given I have g is α).

Thus, if we let p = c(P(X | I have g) = x | X), then:

• V(X) = s(1, x) + pJ
• V(¬X) = s(0, x) + pJ

So, again, if p, J > 0, then V(X) ≠ s(1, x) and V(¬X) ≠ s(0, x). However, note also that, if we let s∗(1, x) = V(X) and s∗(0, x) = V(¬X), then the resulting scoring rule s∗ is continuous and strictly proper if the original scoring rule s is — indeed, s∗ is just a positive affine transformation of s. Thus, all of the usual results of accuracy-first epistemology apply.
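It may help to verify the propriety claim numerically. The sketch below is illustrative only: it uses the quadratic scoring rule as the local measure and, for simplicity, treats p as a constant, so the boosted rule s∗ is a constant shift of s. It checks that both rules are maximised in objective expectation at x = α.

```python
import numpy as np

def s(i, x):             # quadratic scoring rule as the local accuracy measure
    return -(i - x) ** 2

p, J = 0.3, 0.5          # toy values: p as in the text, J the justification boost

def s_star(i, x):        # the boosted rule: s shifted by the constant p*J
    return s(i, x) + p * J

xs = np.linspace(0, 1, 1001)
for alpha in [0.1, 0.5, 0.9]:
    for rule in (s, s_star):
        expected = alpha * rule(1, xs) + (1 - alpha) * rule(0, xs)
        best = xs[np.argmax(expected)]
        assert abs(best - alpha) < 1e-3   # maximised at x = alpha: strict propriety
```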

Thus, I conclude: the Swamping Problem can be answered. It relies on Chance Neutrality, but that is not a requirement of rationality. There is an alternative Swamping Problem based on Linearity, but that is not a requirement of rationality either; nor is it something to which the reliabilist is committed. Thus, it is perfectly rational for the reliabilist to assign greater value to a justified true belief than to an unjustified true belief; and it is rational for her to assign greater value to a justified credence of a certain degree of accuracy than to an unjustified credence of the same degree of accuracy.

6 The Well-Founded Solution to the Generality Problem

Above, we treated two versions of process reliabilism: Dunn's calibrationist version and the epistemic value version. In both cases, we invoked Juan Comesaña's well-founded solution to the generality problem. This says, first: if doxastic attitude s towards X is based on ground g, then it is formed by the process producing attitude s towards X based on ground g — we denote this process ρ^g_{s,X}. Second, it says: it is this process ρ^g_{s,X} that is relevant to the epistemic evaluation of attitude s towards X. Thus, s towards X based on g is justified iff the process ρ^g_{s,X} is reliable. But is this correct?

Suppose I am a poor identifier of wildflowers. When presented with agrimony, I believe it's birdsfoot trefoil, and vice versa; when presented with common mallow, I believe it's corncockle, and vice versa. With the exception of germander speedwell, whenever I am presented with one type of flower, I believe it's another. But when presented with germander speedwell, I always get it right. According to Comesaña, my belief that there's germander speedwell in front of me based on my visual experience of that plant is reliable and thus justified, because the relevant process is the reliable process producing a belief that there's germander speedwell in front of me on the basis of a perceptual experience of that plant. The relevant process is not the extremely unreliable process producing a belief that there's germander speedwell in front of me on the basis of a perceptual experience of that plant or a belief that there's agrimony in front of me on the basis of a perceptual experience of birdsfoot trefoil or a belief that there's common mallow in front of me on the basis of a visual experience of corncockle or . . . Is that the right result?

I suspect that, whether or not you go so far as to say that my beliefs concerning germander speedwell are unjustified, you'll likely feel that there is something epistemically flawed either about me or about my beliefs concerning germander speedwell. But we can account for this while adhering to Comesaña's account of the epistemically relevant process. Firstly, my epistemic flaw is that, while my beliefs concerning germander speedwell may be justified, all of my other beliefs about the identity of wildflowers are not. Secondly, we might identify the epistemic flaw in my beliefs concerning germander speedwell by appealing to the notion of epistemic luck. What seems to be flawed about my belief that there is germander speedwell in front of me on the basis of a visual experience of that plant is not that the belief is unjustified, I submit, but rather that, while it is justified, it was lucky to be. Had I been presented with a visual experience of chicory or devil's-bit scabious or indeed any other wildflower, I would have formed an unjustified (and false) belief. This is the epistemic flaw in my belief. And indeed, it may be serious enough that, while my beliefs concerning germander speedwell are justified, in line with Comesaña's account, they could not count as knowledge — after all, knowledge is usually taken to be incompatible with epistemic luck of any sort. If so, they fail to count as knowledge not because they are not safe or not sensitive, for they are both of these things: were there not to be germander speedwell in front of me, I wouldn't believe there were; and there is no nearby situation in which I believe that there is germander speedwell in front of me while there is no such plant in my vicinity. However, there is a nearby situation — one in which my eye catches not the germander speedwell in front of me but the chicory to its left — in which I form a false belief. Perhaps that is sufficient to preclude knowledge. But it is not sufficient to preclude justification. So I submit that Comesaña's account remains untouched.

7 The New Evil Demon Objection

A standard objection to reliabilism about justified belief is Stewart Cohen's New Evil Demon Objection (Cohen, 1984). We are asked to consider someone in a different possible world. While this person has exactly the same phenomenological experiences that I have, hers are the result of an evil demon's manipulations; they are not the outputs of properly functioning perceptual or mnemonic or cognitive or other belief-forming faculties. This person, we might suppose, forms the same beliefs and credences on the basis of the same experiences that I do. Whereas mine are, for the most part, formed by highly reliable processes, however, hers are formed by maximally unreliable processes. The grounds on which I base my beliefs and credences — my perceptual experience, my memory, the testimony of others, for instance — make the resulting beliefs likely to be true, whereas the grounds on which she bases hers — the same grounds, in fact, on which I base mine — make the resulting beliefs unlikely to be true. The reason is simply that the objective probabilities differ between my world and hers. While she and I base the same doxastic attitudes on the same grounds, the sort of reliabilism proposed in this paper says that hers are unjustified whereas mine are, for the most part, justified. Cohen submits, however, that intuitively,

whenever one of my beliefs is justified, so is her corresponding belief. Thus, reliabilism is false.

Above, we noted that all epistemologists will allow that it is possible that two agents have the same doxastic attitude towards a proposition and the same evidence and yet the doxastic attitude of the first is justified while the attitude of the second is not. This could happen, for instance, if the first based her doxastic attitude on the evidence while the second did not. Cohen's argument assumes that it is not possible that two agents have the same doxastic attitude towards a proposition, have the same evidence, and base that doxastic attitude on the same evidence, and yet the first is justified while the second is not. The justificatory status of a doxastic attitude, he assumes, depends only on the attitude, the proposition towards which it is directed, and the ground on which it is based. Since these are always identical for me and my phenomenological twin in the evil demon world, she is justified exactly when I am.

The problem with Cohen's argument is that this assumption is false. To see that, we need only note that we often discover that a belief or credence isn't justified by discovering an empirical fact about the way in which it was formed. Take, for instance, the famous Invisible Gorilla Test that establishes the phenomenon of inattentional blindness (Simons & Chabris, 1999). Subjects are asked to watch a video clip of a group of people passing a basketball around. And they are asked to count the number of passes. During the video, someone in a gorilla suit walks through the scene. After the video clip has finished and the test is over, subjects report that they remember no gorilla. In this case, we learn that beliefs based on certain grounds are unjustified: a belief is unjustified if it is based on memories concerning an aspect of a scene that were formed whilst the subject was attending to another aspect of that scene. We learn that because we learn something about their reliability at the actual world we inhabit. In an alternative world, where we do not suffer from inattentional blindness, such beliefs may well be justified.

Suppose we read an eyewitness account of a murder committed in 1893, long before the prevalence, severity, and systematicity of inattentional blindness were known. The murder takes place during a tennis match that the eyewitness attended. By their own account, they were concentrating hard on the tennis match, watching every rally attentively. The solution to the case turns on whether someone walked along the side of the court in full view of the whole audience. The eyewitness tells us that they saw no-one do that.12 The belief turns out to be true. But is it justified? Before we know of inattentional blindness, I think we are inclined to say that it is — our default is to say that beliefs based on perception, memory, and testimony are justified.

12 The Father Brown story, 'The Invisible Man', by G. K. Chesterton, is based on just such a plot device.

But when we learn of this cognitive phenomenon, we revise our evaluation. However, all we have learned is that the grounds on which the belief was based are not a good indication of its truth. We used to think that memories formed about events in one's visual range while one is concentrating are reliable; now we learn that they are not. The difference between what we thought was the case and what turned out to be the case seems to me analogous to the difference between the actual world and Cohen's New Evil Demon world. Thus, the same analysis is appropriate in the two cases — in the new evil demon world, my twin's beliefs are unjustified, just as reliabilism predicts.

Of course, inattentional blindness is just one case where we have learned how, in certain situations, our most basic belief-forming mechanisms — perception, memory, and testimony — are unreliable. For instance, Joseph Shieber (2011) points to research that shows that we do not reliably discriminate true testimony from false (though Fricker (2016) disputes the interpretation of the empirical results). There are further worries about beliefs based on memory — the misinformation effect, for instance, where memories become less reliable as time passes because of new information the agent receives that corrupts her original memories (Loftus & Hoffman, 1989). And there are many well-known types of case in which our perceptual faculties are highly unreliable — we perceive objects we desire as closer to us than they are, and objects that we fear as further away (Balcetis & Dunning, 2010); our racial stereotypes lead us to perceive weapons where there are none (Payne, 2006). In each of these cases, it seems to me, we learn that the beliefs we form in the cases covered are not justified. And, I submit, we should use our responses to these real cases to inform our response to the evil demon case. There, we learn that the grounds on which our agent bases her beliefs and credences are highly unreliable. And this should lead us to conclude that those beliefs are not in fact justified.

8 Objective probabilities

According to our favoured epistemic value version of reliabilism for justified credences, a credence of x in X by S based on g is justified if, for each ground g′ ⊆ g that S has, the objective probability of X given that S has g′ is x, or close to x — that is, P(X | S has g′) ≈ x. That is, our account makes a crucial appeal to a notion of objective probability. In this section, we ask what sort of probability this is. Given Jenann Ismael's distinction between single-case objective probabilities and general objective probabilities, our notion falls under the latter heading (Ismael, 2011). They are what Elliott Sober (2010) calls macro-probabilities; they are what David Albert (2000) finds in classical statistical mechanics. For Ismael, single-case probabilities tend to be unconditional, and assign

values to particular token events, such as this particular die landing six on this particular roll. They are sometimes called chances, and they are the sort of probabilities we find in quantum mechanics. They are the objective probabilities that propensity accounts and best-system analyses aim to explicate. General probabilities, in contrast, tend to be conditional, and they take as arguments a pair of event types, such as dice of a certain sort landing six given that they are rolled. These are the sorts of probabilities that are found in statistical mechanics and evolutionary biology. They are what frequentist accounts attempt to explicate. Crucially, for our purposes, non-trivial general probabilities are possible even in deterministic worlds, whereas non-trivial chances are not — indeed, it is part of what it means for a world to be deterministic that the chance of an event at any time is either 0 or 1. Thus, in such a world, any particular roll of any particular die either is determined to come up six or is determined not to come up six. Nonetheless, it is still possible in such a world that the general objective probability of a die with certain general physical properties landing six given that it is rolled is 1/6. Similarly, while it is determined by the deterministic laws whether any particular egg in any human reproductive system will survive to reproductive age or will not, there is nonetheless a non-trivial probability that an egg will survive to reproductive age, given that it is a human egg — and this is a general probability. And while it is determined by the deterministic laws whether any particular block of ice in warm water will or will not melt, there is nonetheless a non-trivial (though very high) probability that a block of ice will melt given that it is in warm water — again, this is a general probability.

So, when I speak of P(X | S has g′), the objective probability of X given that S has g′, I speak of a general probability; that is, a probability of the same sort as appears in the examples from evolutionary theory and statistical mechanics above. S has g′ is an event type; so is X is true. P(X | S has g′) is the general probability of the latter event type given the former. And, as in the case of those examples, if A and B are event types, P(A | B) = x means roughly the following: within the volume of phase space that realises event type B, the proportion that realises event type A is x. So P(X | S has g′) = x means: within the volume of phase space that includes all and only those states in which agent S has grounds g′, the proportion occupied by states in which X is true is x. Of course, to complete this account, we have to specify the phase space, and we have to say what determines the measure over it that gives these volumes. But these are tasks for anyone who wishes to appeal to general probabilities, whether they appeal to them in evolutionary biology, classical statistical mechanics, or reliabilism about justified belief and credence. So I will leave their treatment for another time.
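Here is a minimal computational picture of that proportion-of-phase-space idea, on a toy discrete state space with a uniform measure. Both the space and the measure are my own illustrative choices; specifying them for a real case is exactly the task left open above.

```python
import itertools

# Toy 'phase space': all states of three binary degrees of freedom.
# General probability P(A | B) = proportion (by the uniform measure) of
# the B-region of the space that also realises event type A.
states = list(itertools.product([0, 1], repeat=3))

def general_prob(A, B):
    b_region = [s for s in states if B(s)]
    return sum(1 for s in b_region if A(s)) / len(b_region)

# e.g. P(first coordinate is 1 | at least two coordinates are 1)
A = lambda s: s[0] == 1
B = lambda s: sum(s) >= 2
print(general_prob(A, B))   # 0.75
```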


9 Epistemic utility arguments for credal norms

One appealing feature of the account of justified credence presented above is that it gives a quick and straightforward argument for Probabilism. Probabilism is one of the core norms of Bayesian epistemology. It demands that an agent's credences satisfy the laws of the probability calculus: that is, her credence in a tautology should be maximal, i.e. 1; her credence in a contradiction should be minimal, i.e. 0; and her credence in a disjunction of mutually exclusive propositions should be the sum of her credences in the disjuncts. The standard epistemic value argument for Probabilism runs as follows (Joyce, 1998, 2009; Pettigrew, 2016a). Firstly, whereas, in this paper, we have followed the veritist in taking the epistemic value of an individual credence to be its accuracy, the standard epistemic value argument for Probabilism goes further, for it says that, just as we can measure the accuracy of an individual credence, so we can also measure the accuracy of an entire credal state, which comprises many different credences in different propositions; and, as with individual credences, it takes the epistemic value of an entire credal state to be given by its accuracy. Next, the argument lays down conditions on these global accuracy measures for entire credal states, much as we demanded above that our local accuracy measures be continuous and strictly proper. And it shows that, for any accuracy measure that satisfies these conditions and any credal state that violates Probabilism, there is an alternative state that satisfies Probabilism that is guaranteed to be more accurate — and thus guaranteed to have greater epistemic value.

One problem with this sort of argument is that these global measures of accuracy — those that measure the accuracy of entire credal states — allow for epistemic trade-offs (Greaves, 2013; Berker, 2013a,b; Carr, ms; Konek & Levinstein, ta). Suppose I am standing in an election. I know that the 100 voters in my constituency are put off by candidates who lend any credence at all to the existence of anthropogenic climate change. More precisely: if I have positive credence in man-made global warming, I know that exactly half the electorate will vote for me, but I don't know which voters in particular. On the other hand, if I manage to assign credence 0 to it, everyone will vote for me. I assign credences to 101 propositions: for i = 1, . . . , 100, proposition Vi says that voter i will vote for me; and proposition C says that human activity has contributed to climate change. I have extremely strong evidence for C, having read and understood the IPCC reports. By hypothesis, if my credence in C is positive, I know that half of the Vi are true and nothing more, so it would seem rational to assign 0.5 to each. But if my credence in C is 0, I know that all of the Vi are true, so it would seem rational to assign 1 to each. Now, according to many global accuracy measures, a positive credence in C and credence 0.5 in each Vi is less accurate at a world at which C is true and half of the Vi are true than a credence

of 0 in C and 1 in each Vi at a world at which C is true and all of the Vi are true.13 That is, I have an epistemic incentive — one based purely on considerations of epistemic value — to trade off the accuracy (and justification) of a high credence in C in order to increase the accuracy (though not the justification) of my credences in the Vi. And this, many philosophers claim, is a bad consequence. And thus, these philosophers might conclude, it is a mistake to think that the epistemic value of an entire credal state is given by its accuracy, as measured by a global accuracy measure.

Since they appeal only to the accuracy of individual credences, the epistemic value versions of reliabilism about justified credence are not vulnerable to the trade-offs objection. From the point of view of justification, such trade-offs just aren't permissible. In the sort of situation described above, the agent chooses between having 101 justified credences — if she has credence 1 in C and 0.5 in each Vi — and having 100 justified credences and 1 unjustified credence — if she has credence 0 in C and 1 in each Vi. Thus, from the point of view of justification, the former is not worse than the latter — indeed, it is better.

In any case, the present argument for Probabilism does not rely on the existence of legitimate global accuracy measures, nor on any claim to the effect that they provide the epistemic value of an entire credal state. First, let F be the set of propositions to which our agent, S, assigns a credence. Represent her credal state by her credence function c : F → [0, 1], which takes each proposition X in F and returns her credence c(X) in X. Now, for each X in F, let gX be the ground on which the agent's credence of c(X) in X is based. And let g = ∧_{X∈F} gX. That is, g is the conjunction of all the grounds gX for X in F; in other words, g is the weakest ground that is nonetheless at least as strong as all the grounds gX for X in F. Now suppose that each credence that our agent assigns is justified. Thus, for each X in F,

    c(X) = P(X | S has gX)

What's more, for each X in F, since g ⊆ gX,

    c(X) = P(X | S has g)

13 For instance, recall the quadratic scoring rule from footnote 7. Suppose my credence in C is 1; so exactly half of the Vi are true; and I assign credence 0.5 to each Vi. And suppose we take the global accuracy of my entire credal state to be the sum of the local accuracies of my individual credences, as measured by the quadratic scoring rule. Then my accuracy is:

    −(1 − 1)² − 50(0 − 0.5)² − 50(1 − 0.5)² = −25

Now suppose my credence in C is 0; so every Vi is true; and I assign credence 1 to each Vi. Then my accuracy is:

    −(1 − 0)² − 100(1 − 1)² = −1
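The footnote's arithmetic can be reproduced directly — a sketch of my own, using the quadratic rule as the local measure and summation as the global aggregate, as in the footnote:

```python
def brier_accuracy(credences, truths):
    """Global accuracy as the (negative) sum of quadratic penalties."""
    return -sum((t - c) ** 2 for c, t in zip(credences, truths))

# World 1: credence 1 in C (true), 0.5 in each Vi, half of the 100 Vi true.
world1 = brier_accuracy([1.0] + [0.5] * 100, [1] + [0] * 50 + [1] * 50)
# World 2: credence 0 in C (still true), 1 in each Vi, all Vi true.
world2 = brier_accuracy([0.0] + [1.0] * 100, [1] + [1] * 100)
print(world1, world2)   # -25.0 and -1.0, matching the footnote
```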


That is, c(−) = P(−|S has g). Since P(−|S has g) obeys the axioms of the probability calculus, so does c(−). Therefore, Probabilism.
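The closing step can also be checked on a toy model: if c(−) is literally P(−|S has g) for some genuine probability P, then c inherits the probability axioms. Here is a small sketch; the four-state space and the particular propositions are my own illustrative choices.

```python
import itertools

# Toy check: let P(-|g) be the uniform measure on four states, and let the
# justified agent's credence function be c = P, per the argument above.
states = list(itertools.product([0, 1], repeat=2))
P = lambda prop: sum(1 for s in states if prop(s)) / len(states)
c = P

X = lambda s: s[0] == 1
Y = lambda s: s[1] == 1
tautology = lambda s: True
contradiction = lambda s: False
xor = lambda s: X(s) != Y(s)   # disjunction of the exclusive X&~Y and ~X&Y

assert c(tautology) == 1 and c(contradiction) == 0
# Additivity over mutually exclusive disjuncts:
assert c(xor) == c(lambda s: X(s) and not Y(s)) + c(lambda s: not X(s) and Y(s))
```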

10 Conclusion

In sum, here is our account of justified credence:

Reliabilism for Justified Credence (epistemic value version) A credence of x in proposition X by agent S is justified iff
(ERC1) S has g;
(ERC2) credence x in X by S is based on g;
(ERC3) if S has ground g′ ⊆ g, then the objective probability of X given that the agent has ground g′ approximates or equals x — that is, P(X | S has g′) ≈ x.

It is the point at which Dunn meets Tang, and both meet the veritist. As we have seen, it solves the Connection Problem and the Swamping Problem, for it permits a rational preference for justified credence over unjustified credence, and a rational preference for a justified credence of a certain degree of accuracy over an unjustified credence with that same degree of accuracy. Nonetheless, our treatment leaves open a number of questions — I conclude here with two of them:

(I) For which other doxastic attitudes might we appeal to our account of reliabilism for justified doxastic attitudes in order to derive an epistemic value version of reliabilism for that attitude? Comparative probabilities (Fitelson, ms)? Imprecise credences (Konek, ta)?

(II) As Gettier (1963) observed, knowledge is not justified true belief. Since then, much has been written on the epistemic features that a belief must have in order to count as knowledge. How might these features manifest themselves in credences (Moss, 2013, ms)? We have discussed justification itself here; but can we follow the epistemic value approach to provide credal versions of features such as safety, sensitivity, or the absence of epistemic luck (Konek, 2016)?
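As a summary illustration of the account just stated, here is a hypothetical sketch of conditions (ERC1)–(ERC3). Every name in it, and the numeric tolerance standing in for "≈", is my own, not the paper's.

```python
def is_justified_credence(x, grounds_of_S, P, X, tol=0.05):
    """Illustrative check of (ERC1)-(ERC3); all names here are hypothetical.
    grounds_of_S: every ground g' <= g that S has, where g is the ground
    the credence is based on; P(X, g): the general objective probability
    of X given that S has g."""
    # (ERC1) and (ERC2) are presupposed: S has g and bases credence x on it.
    # (ERC3): for each ground g' <= g that S has, P(X | S has g') ~ x.
    return all(abs(P(X, g_prime) - x) <= tol for g_prime in grounds_of_S)
```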

References

Albert, D. (2000). Time and Chance. Cambridge, Mass.: Harvard University Press.

Allais, M. (1953). Le comportement de l'homme rationnel devant le risque: critique des postulats et axiomes de l'école américaine. Econometrica, 21(4), 503–546.

Alston, W. (1988). An Internalist Externalism. Synthese, 74, 265–83.

Alston, W. (2005). Beyond "Justification": Dimensions of Epistemic Evaluation. Ithaca, NY: Cornell University Press.

Balcetis, E., & Dunning, D. (2010). Wishful seeing: more desired objects are seen as closer. Psychological Science, 21(1), 147–152.

Berker, S. (2013a). Epistemic Teleology and the Separateness of Propositions. Philosophical Review, 122(3), 337–393.

Berker, S. (2013b). The Rejection of Epistemic Consequentialism. Philosophical Issues (Supp. Noûs), 23(1), 363–387.

Buchak, L. (2013). Risk and Rationality. Oxford: Oxford University Press.

Carr, J. (ms). Epistemic Utility Theory and the Aim of Belief. Unpublished manuscript.

Cohen, S. (1984). Justification and Truth. Philosophical Studies, 46(3), 279–95.

Comesaña, J. (2006). A well-founded solution to the generality problem. Philosophical Studies, 129, 27–47.

de Finetti, B. (1974). Theory of Probability, vol. I. New York: John Wiley & Sons.

Dunn, J. (2015). Reliability for degrees of belief. Philosophical Studies, 172(7), 1929–1952.

Easwaran, K. (2013). Expected Accuracy Supports Conditionalization — and Conglomerability and Reflection. Philosophy of Science, 80(1), 119–142.

Easwaran, K. (2016). Dr Truthlove, Or: How I Learned to Stop Worrying and Love Bayesian Probabilities. Noûs, 50(4), 816–853.

Fitelson, B. (ms). Coherence. Oxford University Press.

Fricker, E. (2016). Unreliable Testimony. In B. P. McLaughlin, & H. Kornblith (Eds.) Goldman and His Critics. Hoboken, NJ: John Wiley & Sons.

Gettier, E. (1963). Is Justified True Belief Knowledge? Analysis, 23(6), 121–123.

Goldman, A. (1979). What is justified belief? In G. S. Pappas (Ed.) Justification and Knowledge. Dordrecht: Reidel.

Goldman, A. (2008). Immediate justification and process reliabilism. In Q. Smith (Ed.) Epistemology: New Essays, (pp. 63–82). New York: Oxford University Press.

Greaves, H. (2013). Epistemic Decision Theory. Mind, 122(488), 915–952.

Greaves, H., & Wallace, D. (2006). Justifying Conditionalization: Conditionalization Maximizes Expected Epistemic Utility. Mind, 115(459), 607–632.

Hájek, A. (ms). A Puzzle about Partial Belief. Unpublished manuscript.

Horowitz, S. (2014). Immoderately rational. Philosophical Studies, 167, 41–56.

Ismael, J. (2011). A Modest Proposal about Chance. Journal of Philosophy, 108(8), 416–442.

Jeffrey, R. C. (1983). The Logic of Decision. Chicago and London: University of Chicago Press, 2nd ed.

Joyce, J. M. (1998). A Nonpragmatic Vindication of Probabilism. Philosophy of Science, 65(4), 575–603.

Joyce, J. M. (2009). Accuracy and Coherence: Prospects for an Alethic Epistemology of Partial Belief. In F. Huber, & C. Schmidt-Petri (Eds.) Degrees of Belief. Springer.

Konek, J. (2016). Probabilistic Knowledge and Cognitive Ability. Philosophical Review, 125(4), 509–587.

Konek, J. (ta). Epistemic Conservativity and Imprecise Credence. Philosophy and Phenomenological Research.

Konek, J., & Levinstein, B. A. (ta). The Foundations of Epistemic Decision Theory. Mind.

Kvanvig, J. (2003). The Value of Knowledge and the Pursuit of Understanding. Cambridge: Cambridge University Press.

Lange, M. (1999). Calibration and the Epistemological Role of Bayesian Conditionalization. The Journal of Philosophy, 96(6), 294–324.

Leitgeb, H., & Pettigrew, R. (2010). An Objective Justification of Bayesianism II: The Consequences of Minimizing Inaccuracy. Philosophy of Science, 77, 236–272.

Levinstein, B. A. (2015). With All Due Respect: The Macro-Epistemology of Disagreement. Philosophers' Imprint, 15(3), 1–20.

Loftus, E., & Hoffman, H. G. (1989). Misinformation and memory: The creation of new memories. Journal of Experimental Psychology: General, 118(1), 100–104.

Moss, S. (2011). Scoring Rules and Epistemic Compromise. Mind, 120(480), 1053–1069.

Moss, S. (2013). Epistemology Formalized. Philosophical Review, 122(1), 1–43.

Moss, S. (ms). Probabilistic Knowledge. Oxford University Press.

Payne, B. K. (2006). Weapon Bias: Split-Second Decisions and Unintended Stereotyping. Current Directions in Psychological Science, 15(6), 287–291.

Pettigrew, R. (2012). Accuracy, Chance, and the Principal Principle. Philosophical Review, 121(2), 241–275.

Pettigrew, R. (2013). A New Epistemic Utility Argument for the Principal Principle. Episteme, 10(1), 19–35.

Pettigrew, R. (2016a). Accuracy and the Laws of Credence. Oxford: Oxford University Press.

Pettigrew, R. (2016b). Jamesian epistemology formalised: an explication of 'The Will to Believe'. Episteme, 13(3), 253–268.

Pettigrew, R. (ta). On the Accuracy of Group Credences. In T. S. Gendler, & J. Hawthorne (Eds.) Oxford Studies in Epistemology, vol. 6. Oxford: Oxford University Press.

Predd, J., Seiringer, R., Lieb, E. H., Osherson, D., Poor, V., & Kulkarni, S. (2009). Probabilistic Coherence and Proper Scoring Rules. IEEE Transactions on Information Theory, 55(10), 4786–4792.

Savage, L. J. (1971). Elicitation of Personal Probabilities and Expectations. Journal of the American Statistical Association, 66(336), 783–801.

Schoenfield, M. (2015). Bridging Rationality and Accuracy. Journal of Philosophy, 112(12), 633–657.

Seidenfeld, T. (1985). Calibration, Coherence, and Scoring Rules. Philosophy of Science, 52(2), 274–294.

Shieber, J. (2011). Against Credibility. Australasian Journal of Philosophy, 90(1), 1–18.

Shimony, A. (1988). An Adamite Derivation of the Calculus of Probability. In J. Fetzer (Ed.) Probability and Causality. Reidel.

Simons, D. J., & Chabris, C. F. (1999). Gorillas in our midst: sustained inattentional blindness for dynamic events. Perception, 28, 1059–1074.

Sober, E. (2010). Evolutionary Theory and the Reality of Macro-Probabilities. In J. H. Fetzer, & E. Eells (Eds.) The Place of Probability in Science, (pp. 133–61). New York: Springer.

Stefánsson, H. O., & Bradley, R. (2015). How Valuable Are Chances? Philosophy of Science, 82, 602–625.

Tang, W. H. (2016). Reliability Theories of Justified Credence. Mind, 125(497), 63–94.

van Fraassen, B. C. (1983). Calibration: Frequency Justification for Personal Probability. In R. S. Cohen, & L. Laudan (Eds.) Physics, Philosophy, and Psychoanalysis. Dordrecht: Springer.

Zagzebski, L. (2003). The search for the source of the epistemic good. Metaphilosophy, 34(1–2), 12–28.
