Erkenntnis (2015) 80:1237–1243. DOI 10.1007/s10670-015-9748-8

CRITICAL DISCUSSION

Reply to Kim's "Two Versions of Sleeping Beauty"

Michael G. Titelbaum

Received: 22 January 2015 / Accepted: 12 July 2015 / Published online: 10 October 2015
© Springer Science+Business Media Dordrecht 2015

Abstract  I begin by discussing a conundrum that arises when Bayesian models attempt to assess the relevance of one claim to another. I then explain how my formal modeling framework (the "Certainty-Loss Framework") manages this conundrum. Finally, I apply my modeling methodology to respond to Namjoong Kim's objection to my framework.

Keywords  Relevance · Modeling · Sleeping Beauty · Self-locating belief · De se · Updating · Certainty loss · Bayesianism

In my (2008) and (2013), I developed a formal framework (the "Certainty-Loss Framework", or "CLF") for building models of stories in which agents assign degrees of belief to claims over time. These models can be used to answer a variety of questions about such stories: Do the confidence assignments described in the story violate particular rational rules? What degrees of belief should the agent assign besides those explicitly described in the story? Should the agent's confidence in particular claims change over time? At a given time, which pieces of information should the agent consider relevant to a particular claim?

One great advantage of Bayesian modeling frameworks is their ability to assess relevance among claims. On the Bayesian approach, one claim is relevant to a second claim for a given agent at a given time just in case learning the first claim at that time would rationally require the agent to change her degree of belief in the second claim. With this test for relevance in hand, the Bayesian can answer a synchronic question about relevance by investigating a diachronic question concerning attitude change.

Michael G. Titelbaum
[email protected]

1 Department of Philosophy, University of Wisconsin-Madison, 5113 Helen C. White Hall, 600 North Park Street, Madison, WI 53706, USA


Yet relevance relations also create a specific obstacle to Bayesian modeling. Before she builds a formal model, the Bayesian must specify the set of objects to which that model will assign numerical credences. If the model assigns credences over sets of possible worlds, the modeler must decide how fine-grained those sets of worlds will be. CLF doesn't work with sets of possible worlds; instead, CLF models assign credences to sentences in formal languages. So before she builds a CLF model, the modeler must specify a language of sentences over which the model's credences will be assigned.

The need to specify a modeling language prior to doing any formal modeling creates a conundrum for Bayesian assessments of relevance: On the one hand, Bayesians want to use their formal models to determine which claims are relevant to which. On the other hand, a modeling language must be selected before a model can be built. And it seems that decisions about which sentences to include in that language must depend on prior judgments about which claims are relevant to the situation we want to model. We had hoped that our model would reveal to us which claims are relevant to which, but it looks like we must adjudicate such questions before our model can be built.[1]

[1] Let me emphasize that this problem is not unique to CLF; it is faced by every Bayesian modeling approach. In practice, Bayesians often begin writing down equations without explicitly specifying the modeling language they're using, or how fine-grained they've decided to make their sets of possible worlds. But somewhere in there a decision must be made. CLF has the advantage of forcing a modeler to make her modeling language explicit before any modeling begins. This draws attention to the conundrum about judgments of relevance I've just described, but the problem was there all along.

The nonmonotonicity of probabilistic relations makes this conundrum particularly acute for Bayesian models. To take a simple example that I have used before: Suppose our story concerns an agent's being informed about the outcome of a fair die roll. We are particularly interested in the agent's degree of belief that the roll came out 3. So we build a model whose language contains one atomic sentence, representing the claim that the die came up 3. Between the two times in the story (call them t1 and t2) the agent learns neither this claim nor its negation. So our model makes it look as if the agent learned nothing between t1 and t2; assuming it employs traditional Bayesian diachronic norms, the model will recommend to the agent that (in light of the constancy of her evidence) she keep her credence that the die came up 3 constant.

But now suppose the story clearly specifies that between t1 and t2 the agent learned that the die roll came up odd. This information cannot be captured in our original model of the story, because there is no sentence in that model's language to represent it. If we built another model, one whose modeling language represented not only the claim that the die came up 3 but also the claim that it came up odd, that model would show the agent learning something between t1 and t2. On standard Bayesian diachronic norms the expanded model would also require the agent's degree of belief in 3 to increase between t1 and t2, as our intuitions suggest it should in the story described.

Here we see the nonmonotonicity of probabilistic relations. Classical deductive relations are monotonic, in the sense that they are not affected by adding sentences to a modeling language. Any deductive relations that obtain in our simple die model containing just one atomic sentence (for instance, the fact that "the roll came up 3" and "the roll did not come up 3" are logically inconsistent) continue to obtain when we add further sentences. But probabilistic questions (should the agent's credence in 3 change between t1 and t2?) can receive different answers when sentences are added to the modeling language.

The example I've given may seem silly. After all, it's intuitively obvious before we start modeling the die story that facts about oddness are relevant to the die's coming up 3, and therefore should be represented in our modeling language. (Perhaps the most natural language for modeling this story contains one atomic sentence for each possible outcome of the die roll.) But there are other examples in which questions about relevance, and therefore questions about what sentences to include in our model, have less obvious answers. Nick Bostrom, for instance, argued in his (2007) that extant models of the Sleeping Beauty Problem all went wrong because they failed to represent a relevant piece of information Beauty learns between Monday morning and Monday night.[2] Besides learning that it's Monday between those two times, Beauty also learns that she learns that it's Monday. We already know that properly modeling certain stories in the Bayesian literature, such as the Monty Hall Problem and Thomason's example,[3] requires us to represent not only an agent's direct evidence but also the fact that she learns that evidence (at a particular time, in a particular manner, etc.). Bostrom argued that Sleeping Beauty is just such a story. Now ask yourself: In order to gauge Beauty's rational confidence at various times that the coin came up heads, do we need to take into account the fact that she not only learns what day it is but also learns that she has learned the day? I take it the answer isn't intuitively obvious, or at least isn't obvious enough to discount considerations Bostrom presents that the answer is "yes".
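The die-roll example above can be checked with a quick computation. The sketch below is purely illustrative (CLF assigns credences to sentences via formal constraints rather than by enumerating outcomes); it is ordinary Bayesian conditionalization, showing how the same question receives different answers in the coarse and expanded representations:

```python
# Illustrative sketch (not CLF itself): Bayesian conditionalization
# over two modeling languages for the fair-die story.
from fractions import Fraction

# Fine-grained representation: one possibility per outcome, uniform prior.
prior = {outcome: Fraction(1, 6) for outcome in range(1, 7)}

def conditionalize(credence, evidence):
    """Renormalize credence over the outcomes consistent with the evidence."""
    total = sum(p for o, p in credence.items() if o in evidence)
    return {o: (p / total if o in evidence else Fraction(0))
            for o, p in credence.items()}

# Coarse model: the language represents only "came up 3". Between t1 and
# t2 the agent learns neither that claim nor its negation, so this model
# registers no evidence and keeps the credence in 3 fixed.
coarse_credence_in_3 = Fraction(1, 6)

# Expanded model: the language also represents "came up odd", so the
# evidence learned between t1 and t2 can be captured and applied.
posterior = conditionalize(prior, evidence={1, 3, 5})

print(coarse_credence_in_3)  # 1/6
print(posterior[3])          # 1/3: the verdict changes once "odd" is
                             # representable, exhibiting nonmonotonicity
```

The coarse model's verdict (keep credence at 1/6) is not reproduced by the expanded model (raise it to 1/3), which is exactly the pattern the Multiple Models Principle below is designed to police.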
And so we face our conundrum: We want to use a Bayesian model to assess the relevance of the additional evidence Bostrom highlights, but it seems that in order to build such a model we must first decide whether that evidence is relevant or not (and so whether it should be represented in the model's language).

CLF addresses this conundrum with a modeling rule and a methodology. The rule, which my (2013, Section 8.1.3) calls the Multiple Models Principle, is simple: If we have two models of the same story, one has a more expansive language than the other, and the verdicts of the less expansive model are not confirmed by the verdicts of the more expansive, then the verdicts of the less expansive model are not to be trusted.

This modeling rule reflects an asymmetry in the relevance concern. The Bayesian worry is always that a model may have left something out (like the die roll's coming up odd). If irrelevant claims are included in a model's language, the model will reveal that they're irrelevant: the model will show that learning one claim would require no change in the agent's confidence in the other. But if relevant claims are excluded, the formal model will have no way to detect that fact (because it fails to represent, and so cannot even register, the information that has been left out). So we should always favor the more expansive language; the Multiple Models Principle makes sure we do.

The modeling rule then gives rise to a modeling methodology. When building a model of a particular story, we should try to represent in its language all the claims we think are relevant to the degrees of belief of interest. However, we should also be open to the possibility that we have left something relevant out. If someone suggests a further claim that might be relevant, we respond by building a new model that adds a sentence representing that claim to its modeling language. Then we apply the Multiple Models Principle: If particular verdicts produced by the old model are not reproduced by the new, we should not rely on those verdicts to represent genuine rational requirements. We have received an indication that the old model's language may be impoverished by virtue of failing to represent relevant claims; in light of this indication, we treat the old model's verdicts as unreliable.

Take the Sleeping Beauty case: The initial authors working on the Sleeping Beauty Problem (Elga 2000; Lewis 2001; etc.) did a responsible job of taking into account what seemed to them all the claims relevant to Beauty's confidence in Heads.[4] Bostrom then proposed that those authors had overlooked a relevant claim (the claim that Beauty learns that it's Monday). On CLF's methodology, we do not respond to Bostrom by engaging in philosophical debate about whether this claim is relevant or not. The proper response is to build a new model that represents Bostrom's claim, then see whether the new claim disrupts verdicts we obtained without including it. In my (2013, Section 9.2.4) I apply this methodology, build the model in question, and show that it reproduces the verdicts obtained by earlier models. CLF thereby demonstrates that Bostrom's new claim is not relevant to Beauty's credence in Heads.[5]

[2] Bostrom was working with a version of Sleeping Beauty like the one Kim calls Sleeping BeautyB.

[3] See Hutchison (1999) and Bradley (2010), respectively.





In his "Titelbaum's Theory of De Se Updating and Two Versions of Sleeping Beauty" (Kim 2015), Namjoong Kim proposes a novel objection to CLF, based on a new version of the Sleeping Beauty Problem. Kim presents one version of the problem, his "Sleeping BeautyB", which I analyzed using CLF. He then introduces a new version of the problem, "Sleeping BeautyC", which he also analyzes by building a model in CLF. Kim finds that while CLF recommends Beauty assign a Heads degree of belief less than 1/2 upon awakening in Sleeping BeautyB, his CLF model of Sleeping BeautyC recommends a Heads confidence at that moment of exactly 1/2. Kim then argues that whatever Heads degree of belief rationality requires of Beauty at that moment, it should be the same in both stories. So CLF gets things wrong.

I agree with Kim that in each story Beauty should have the same degree of belief in Heads upon first awakening Monday morning. However, I do not agree that CLF gives different recommendations for the two stories. According to CLF's methodology, the model Kim builds of Sleeping BeautyC is not to be relied upon; so its verdicts do not genuinely represent CLF's recommendations for that story.

[4] Notice, by the way, that the initial Sleeping Beauty debate between Elga and Lewis was explicitly cast by Lewis as an argument over whether certain types of claims were relevant to Beauty's confidence in Heads. This is exactly the kind of debate that CLF allows us to adjudicate on purely formal grounds, instead of by weighing up relevance intuitions.

[5] (Titelbaum 2013) also provides tools to assist in applying this methodology. In Chapter 8 I prove a series of results that limit the conditions under which adding sentences to a model's language can undermine that model's verdicts. For instance, if a claim never goes from less-than-certain to certain (or vice versa) for the agent over the course of a story, adding a sentence representing that claim will not alter extant verdicts (Theorem E.2 in Chapter 8). Results like this allow us to anticipate how most potential additions to a model's language will alter (or fail to alter) that model's verdicts without extensive further calculation.


Kim is correct that the CLF model he builds (which he calls "M^{C2}") yields the verdict that Beauty's Monday-morning Heads confidence should be 1/2. However, the language of that model does not include a sentence representing the claim "Today is Monday." (Kim calls that sentence "MON".) If we built a new CLF model of Sleeping BeautyC and included in its modeling language not only the sentences in Kim's model but also MON, that CLF model would not reproduce the verdict that Beauty's Monday-morning Heads confidence should be 1/2.[6] So by the Multiple Models Principle, that verdict is not to be relied upon. That verdict does not represent a genuine commitment of CLF, so CLF cannot be criticized on the grounds that it is committed to one thing in Sleeping BeautyC but a different thing in Sleeping BeautyB.

Kim anticipates this response on my part, and writes:

    Probably, Titelbaum will try to show that...M^{C2}, is impoverished because it fails to include an important sentence, such as MON.... He will have to explain why the given agent has to take MON into consideration in updating her credence in HEADS although MON is not part of the evidence on which she is conditioning. This is likely to be a difficult job.

Here Kim misunderstands CLF's methodology. I do not have to offer any informal argument that the agent needs to take the claim represented by MON into consideration. I have given up the job of adjudicating on informal bases which claims are relevant to which; I have handed that job off to my formal modeling system. Instead of arguing about whether MON represents a relevant claim or not, CLF directs us to build a new formal model and let that model tell us the answer.
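For readers who want to see the shape of "letting the model tell us the answer", here is a sketch. It is not Kim's M^{C2} or a CLF model; it is Elga-style (2000) reasoning over equally weighted centered possibilities, in a representation rich enough to express MON, showing how the Heads verdict and the claim that today is Monday interact once both are representable:

```python
# Illustrative sketch (not CLF's sentential machinery): centered worlds
# (coin outcome, day) in which Beauty is awake, weighted equally in the
# style of Elga's (2000) indifference reasoning.
from fractions import Fraction

awakenings = [("Heads", "Mon"), ("Tails", "Mon"), ("Tails", "Tue")]
credence = {w: Fraction(1, len(awakenings)) for w in awakenings}

# Unconditional Heads credence in a representation that can express MON.
heads = sum(p for (coin, day), p in credence.items() if coin == "Heads")
print(heads)  # 1/3

# Conditioning on MON ("today is Monday") shifts the Heads credence,
# so MON is plainly relevant to Heads in this expanded representation.
mon = sum(p for (coin, day), p in credence.items() if day == "Mon")
heads_given_mon = sum(p for (coin, day), p in credence.items()
                      if coin == "Heads" and day == "Mon") / mon
print(heads_given_mon)  # 1/2
```

The point of the sketch is methodological, not doctrinal: once MON is representable, learning it changes the Heads verdict, which is the formal signal that a MON-free model's verdicts should not be trusted.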
That adding MON to a model's language changes the model's verdicts is all the argument I need that this sentence should be included.[7]

Kim does make a point that could be taken as an objection to CLF's modeling methodology and the Multiple Models Principle.[8] He offers another variant of Sleeping Beauty called "Normal BeautyA" (for our purposes the details are not important). Kim notes that it's possible to build a CLF model of Normal BeautyA with verdicts matching what we intuitively think rationality requires in the story. But that model's language does not include a sentence representing the claim "It is now 8 am."[9] Kim shows that if a sentence representing "It is now 8 am" is added to the modeling language, CLF no longer obtains the desirable intuitive verdicts. He then writes:

    What lesson can we learn from this? Too much detail can be bad. Applying the straightforward strategy...we could derive an intuitive verdict on the Normal BeautyA case. However, if we include [the claim that it's 8 am] in the domain, we suddenly become unable to derive this verdict. Thus, we should not be permitted to include it in the domain, sticking to [the old model].

I disagree with the lesson Kim draws from this example. To see why, imagine confronting a person who genuinely thinks that the entire Sleeping Beauty literature has been misled up to this point because it ignores the significance of claims about the time.[10] This person might be focused on an underspecification in the typical way Sleeping Beauty is presented: Once it's been determined that Beauty will awaken on a particular day, how do the experimenters determine what time to awaken her? Contrast two (of the many) possible responses:

• The experimenters awaken Beauty at 7 am on Monday and leave her awake for two hours. If they awaken Beauty at all on Tuesday, they don't do so until 9 am.

• The experimenters awaken Beauty at the same time of day, and for the same duration, on every day on which she awakens. Those times are not influenced by the outcome of the coin flip.

[6] Some technical details about how this would go: Construct a model M^{C2+} that is an expansion of model M^{C2}, with MON the only atomic sentence added to the modeling language. At t_m Beauty assigns a nonextreme credence to MON, and there is no claim represented in the modeling language of M^{C2} that she is certain at that time has the same truth-value as the claim represented by MON. So M^{C2} is not a proper reduction of M^{C2+}, and not all the verdicts of M^{C2} will be verdicts of M^{C2+}. In particular, Kim's verdict that P_w^{C2}(HEADS) = P_m^{C2}(HEADS) will have no analogue verdict in M^{C2+}.

[7] Strictly speaking, Kim says that I have to explain why Beauty has to take MON into consideration, not just argue that she needs to do so. While it would be nice to have explanations of the rational requirements in each story we analyze, if we're simply looking to settle what rationality requires then Kim's demand goes too far. On CLF's methodology, the fact that adding MON changes the verdicts of M^{C2} demonstrates that those verdicts should not be trusted. To respond to Kim's charge that CLF produces contradictory results for Sleeping BeautyB and Sleeping BeautyC, I simply need to demonstrate that when the framework and its methodology are applied as stated in my (2013), the results obtained for the two stories are consistent.

[8] Kim actually offers this point to demonstrate that a principle he calls "Universal Inclusion" is too strong. I don't endorse Universal Inclusion; I will assess whether the same point could be used to show that the Multiple Models Principle is too strong.

We now have two different Sleeping Beauty stories that could be modeled. In the first story, whether it's currently 8 am could be very relevant to Beauty's confidence in Heads, especially if she has evidence (for instance, the amount of sunlight pouring through a window) that indicates one time of day over another. In the second Sleeping Beauty story (with consistent awakening times), on the other hand, we would expect time-of-day information to be irrelevant to Beauty's degree of belief in Heads.

Questions about the relevance of "It is now 8 am" force us to be clearer about precisely which Sleeping Beauty story we're modeling. Kim is right that if we move from a model with no sentences representing time-of-day claims to another model that differs only by adding such sentences to the modeling language, the latter model will not recreate the intuitively correct verdicts obtained from the original model. But when we add sentences representing time-of-day claims to our model, we also need to specify how times of day work in the Sleeping Beauty story under consideration, then add those specifications to the model as well. For instance, if we decide to analyze the second Sleeping Beauty story above, we will need to add constraints to our model specifying that awakening times are the same every day and are independent of the coin flip.[11] If those constraints are included when we add time-of-day sentences to the modeling language, our CLF model will once more recover the intuitive verdicts Kim is after.

One might respond, "Okay, I understand how to get verdicts for the two modified Sleeping Beauty versions above specifying how awakening times work, but I wanted to know whether claims about the time are relevant to Heads in the original Sleeping Beauty story." Yet the original story is underspecified with respect to times of day; there simply is no answer to the relevance question until more information has been supplied.

Kim seems to think that in certain cases, particular claims should not be represented in our models of a story, even if those claims are suggested as potentially relevant to matters of interest. If that were true, then CLF's Multiple Models Principle and modeling methodology would be incorrect. Yet I think we should always respond to relevance suggestions by adding claims to our model and witnessing the formal results. This will sometimes generate new questions about the story being modeled, and point out ambiguities we hadn't noticed before. But in a nonmonotonic setting more detail is better than less. If adding detail to our models' languages forces us to ask new questions, we should answer those questions, incorporate the answers into our models, and then rely on verdicts from the most informed models we have. This is the best method I know for answering questions about relevance using probabilistic modeling systems.[12]

[9] Kim actually focuses on a sentence representing the claim "It is now 8 am on Monday." But the models in question already include a sentence representing the claim "It is now Monday." I will read "It is now 8 am on Monday" as a conjunction of the claim that it's Monday and the claim that it's 8 am, and will focus on whether a sentence representing this last claim should be added to the modeling language.

[10] Judging from some of the suggestions in his (2012), Joel Pust may actually be such a person.
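The contrast between the two awakening-time specifications can be checked with a small enumeration. This is an illustrative sketch only: the equal weighting of awakenings follows Elga-style thirder reasoning, the world lists are my own stipulations, and CLF itself would carry the same information through a model's extrasystematic constraints rather than by enumeration:

```python
# Hypothetical sketch (not CLF): centered worlds (coin, day, hour of
# awakening), weighted equally, for the two awakening-time variants.
from fractions import Fraction

def heads_given(worlds, condition):
    """Credence in Heads after conditioning equally weighted centered
    worlds on `condition`."""
    live = [w for w in worlds if condition(w)]
    return Fraction(sum(1 for (coin, _day, _hour) in live if coin == "H"),
                    len(live))

# Variant 1: the Monday awakening covers 8 am (7-9 am window); any
# Tuesday awakening starts at 9 am, so "it is 8 am" rules out Tuesday.
v1 = [("H", "Mon", 8), ("T", "Mon", 8), ("T", "Tue", 9)]
# Variant 2: the same awakening time every day, independent of the coin.
v2 = [("H", "Mon", 8), ("T", "Mon", 8), ("T", "Tue", 8)]

for name, worlds in (("variant 1", v1), ("variant 2", v2)):
    print(name,
          heads_given(worlds, lambda w: True),       # unconditional Heads
          heads_given(worlds, lambda w: w[2] == 8))  # Heads given "8 am"
# variant 1: 1/3 unconditionally, 1/2 given 8 am -- time is relevant.
# variant 2: 1/3 either way -- time carries no information about Heads.
```

In the first variant, conditioning on "it is 8 am" eliminates the Tuesday awakening and moves the Heads credence; in the second, it eliminates nothing, matching the intuition that time-of-day claims are irrelevant there. Which answer the "original" story gives depends entirely on which specification is added, which is the underspecification point above.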

References

Bostrom, N. (2007). Sleeping Beauty and self-location: A hybrid model. Synthese, 157, 59–78.
Bradley, D. (2010). Conditionalization and belief De Se. Dialectica, 64, 247–250.
Elga, A. (2000). Self-locating belief and the Sleeping Beauty problem. Analysis, 60, 143–147.
Hutchison, K. (1999). What are conditional probabilities conditional upon? British Journal for the Philosophy of Science, 50, 665–695.
Kim, N. (2015). Titelbaum's theory of De Se updating and two versions of Sleeping Beauty. Erkenntnis. doi:10.1007/s10670-015-9721-6.
Lewis, D. (2001). Sleeping Beauty: Reply to Elga. Analysis, 61, 171–176.
Pust, J. (2012). Conditionalization and essentially indexical credence. Journal of Philosophy, 109, 295–315.
Titelbaum, M. G. (2008). The relevance of self-locating beliefs. Philosophical Review, 117, 555–605.
Titelbaum, M. G. (2013). Quitting certainties: A Bayesian framework modeling degrees of belief. Oxford: Oxford University Press.

[11] In the parlance of CLF, Beauty's information about the determination of daily waking times would be represented in the model's extrasystematic constraints.

[12] Thanks to Namjoong Kim for an extended and extremely helpful correspondence about these issues.
