Sleeping Beauty Problem

hypericin May 11, 2023 at 20:16 15300 views 726 comments
From an excellent clickbait article by Scientific American:


The problem vexing the minds of experts is as follows: Sleeping Beauty agrees to participate in an experiment. On Sunday she is given a sleeping pill and falls asleep. One of the experimenters then tosses a coin. If “heads” comes up, the scientists awaken Sleeping Beauty on Monday. Afterward, they administer another sleeping pill. If “tails” comes up, they wake Sleeping Beauty up on Monday, put her back to sleep and wake her up again on Tuesday. Then they give her another sleeping pill. In both cases, they wake her up again on Wednesday, and the experiment ends.

The important thing here is that because of the sleeping drug, Sleeping Beauty has no memory of whether she was woken up before. So when she wakes up, she cannot distinguish whether it is Monday or Tuesday. The experimenters do not tell Sleeping Beauty either the outcome of the coin toss nor the day.

They ask her one question after each time she awakens, however: What is the probability that the coin shows heads?


I suggest you read the full article. It's a curious problem, seemingly too simple to still be in dispute.

What do you think?

At first I was a confident halfer, and I was about to post as such. But after a little more thought I realized that thirders are correct after all! I think the arguments in the article are sound. Here is another:


Sleeping Beauty is perplexed, and is determined to work out the answer. She consents to repeat the experiment 100 times. She keeps a tally by her bed, with two columns, one marked "Heads", the other marked "Tails". Every time she wakes up, she looks at the coin, and adds a mark to the correct column. At the end of the experiment, it turns out that the coin came up 50 times for each side. Looking at her tally, Sleeping Beauty sees 50 marks under Heads, and 100 marks under Tails. She thereby concludes that if she had been asked the question and replied "heads" every time, she would have been correct 1/3 of the time.
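
A minimal Python sketch of this tally argument (illustrative only; the number of repetitions is arbitrary):

import random

# Repeat the experiment many times and count how often an awakening
# coincides with heads vs. tails.
def tally(n_repeats=100_000):
    heads_marks = 0
    tails_marks = 0
    for _ in range(n_repeats):
        if random.random() < 0.5:   # heads: one awakening
            heads_marks += 1
        else:                       # tails: two awakenings
            tails_marks += 2
    return heads_marks, tails_marks

heads_marks, tails_marks = tally()
# Fraction of awakenings at which the coin shows heads: converges to 1/3.
print(heads_marks / (heads_marks + tails_marks))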

Edit: I'm back to 1/2

Edit2: Back to 1/3 again! I think this is where I will stay.

Edit3: Sigh. Back to 1/2


Comments (726)

noAxioms May 11, 2023 at 23:27 #807374
The basic test is if SB places a bet each time. She makes money (does not break even) if she bets on tails. This is pretty easy to work out.
If the experiment is done twice (one heads, one tails), she sees heads once and tails twice. That sounds like 1/3 heads to me.

To emphasize this answer, imagine heads: they wake her the once, but on tails, they do it 100 times before the experiment ends. The coin flip odds are still 50/50, but the odds that on a random waking she sees tails are overwhelming.
hypericin May 12, 2023 at 00:24 #807382
Reply to noAxioms

But now I think we are both cheating. The experiment specifically is conducted once, not as many times as we please. That changes everything.

Quoting noAxioms
To emphasize this answer, imagine heads: they wake her the once, but on tails, they do it 100 times before the experiment ends. The coin flip odds are still 50/50, but the odds that on a random waking she sees tails are overwhelming.


Presuming the experiment is conducted multiple times, yes it is overwhelming. But what if it is just conducted once?

The article states this counterargument nicely. Consider this variation:

A 100-sided die is rolled. If it lands on 100, SB is awakened 1000 times. Otherwise, she is awakened once. On each awakening, is it more likely that the roll was 100 than 1-99 inclusive?

L'éléphant May 12, 2023 at 01:53 #807386
I thought I saw this problem posted before in the Lounge?

They ask her one question after each time she awakens, however: What is the probability that the coin shows heads?

Without memory of prior awakenings or knowledge of what day it is, she would have to answer 1/2. What SB remembers is that she was put to sleep and then she awakens. The coin is tossed once in her memory.

I thought the question of the experimenters was directed to SB, not to the readers of the puzzle.
jgill May 12, 2023 at 02:16 #807391
Quoting L'éléphant
I thought I saw this problem posted before in the Lounge?


Yes. It's been discussed before. It can be considered a problem in decision making.

Sleeping Beauty
unenlightened May 12, 2023 at 07:52 #807449
Reply to jgill

[quote=wiki]The Sailor's Child problem, introduced by Radford M. Neal, is somewhat similar. It involves a sailor who regularly sails between ports. In one port there is a woman who wants to have a child with him, across the sea there is another woman who also wants to have a child with him. The sailor cannot decide if he will have one or two children, so he will leave it up to a coin toss. If Heads, he will have one child, and if Tails, two children. But if the coin lands on Heads, which woman would have his child? He would decide this by looking at The Sailor's Guide to Ports and the woman in the port that appears first would be the woman that he has a child with. You are his child. You do not have a copy of The Sailor's Guide to Ports. What is the probability that you are his only child, thus the coin landed on Heads (assume a fair coin)?[/quote]

This version looks a lot clearer to me, and the question at the end looks like a deception. 2 possible worlds contain 3 possible identities. So other things (i.e. coins) being equal, I am more likely to be one of two than one of one. So P. (only child) is 1/3 notwithstanding P. (heads) is 1/2, because tails is twice as fruitful as heads.
Michael May 12, 2023 at 11:23 #807467
Let's say that there are three beauties; Michael, Jane, and Jill. They are put to sleep and assigned a random number from {1, 2, 3}.

If the coin lands heads then 1 is woken on Monday. If the coin lands tails then 2 is woken on Monday and 3 is woken on Tuesday.

If Michael is woken then what is his credence that the coin landed heads?

David Lewis' answer, summarised here, states that:

Sleeping Beauty receives no new non-self-locating information throughout the experiment because she is told the details of the experiment. Since her credence before the experiment is P(Heads) = 1/2, she ought to continue to have a credence of P(Heads) = 1/2 since she gains no new relevant evidence when she wakes up during the experiment.


Michael's credence before the experiment is P(1) = 1/3, so if woken he ought to continue to have a credence of P(1) = 1/3 since he gains no new relevant evidence if he wakes up during the experiment.

However, given that if woken he is 1 iff the coin landed heads, he ought to have a credence of either P(Heads) = 1/3 or P(1) = 1/2.
hypericin May 12, 2023 at 16:40 #807514
Quoting Michael
since he gains no new relevant evidence if he wakes up during the experiment.


But in this setup, he did gain relevant evidence. Prior to the experiment there was a possibility he would not be woken; this has been eliminated.
hypericin May 12, 2023 at 16:53 #807516
I think betting is cheating, and is not actually correlated to the probabilities.

Bet on heads or tails. If tails, you get to repeat the same bet again, on the same toss

Of course, you would pick tails because you get a bonus if you win. But this doesn't make tails more likely.

What if we let SB bet, but only once. If the probability of heads is really 1/3 on each awakening, then a bet on tails should still be profitable. But it is not.
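
A quick sketch of the two betting protocols being contrasted here (illustrative; the even-money stakes and the experiment counts are assumptions):

import random

def bet_every_awakening(n_experiments=100_000, stake=1.0):
    # SB bets `stake` on tails at every awakening, at even money.
    profit = 0.0
    for _ in range(n_experiments):
        heads = random.random() < 0.5
        awakenings = 1 if heads else 2
        for _ in range(awakenings):
            profit += -stake if heads else stake
    return profit / n_experiments

def bet_once_per_experiment(n_experiments=100_000, stake=1.0):
    # The same tails bet, but settled only once per experiment.
    profit = 0.0
    for _ in range(n_experiments):
        heads = random.random() < 0.5
        profit += -stake if heads else stake
    return profit / n_experiments

print(bet_every_awakening())      # ~ +0.5 per experiment: tails is profitable
print(bet_once_per_experiment())  # ~ 0: the single bet merely breaks even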


hypericin May 12, 2023 at 18:06 #807536
Quoting unenlightened
This version looks a lot clearer to me, and the question at the end looks like a deception. 2 possible worlds, contain 3 possible identities. So other things (ie coins) being equal, I am more likely to be one of two than one of one. So P. (only child) is 1/3 notwithstanding P. (heads) is 1/2, because tails is twice as fruitful as heads.


Alter the setup:

The sailor had two choices, have sex with a homely fishmaiden of Innsmouth, with whom he could only bear 1 coupling, or visit the Siren of Fertility, whereupon he knows he will sire 10 sprats. Feeling queasy about such a paternal burden, he flips 2 coins: He will only visit the siren if both coins are heads.

You are given a bet: Guess your mother correctly, and win your father's treasure, a giant sack of doubloons. Which would you choose?
unenlightened May 12, 2023 at 19:43 #807541
Reply to hypericin The odds favour the siren of fertility, and my own beauty confirms it.
hypericin May 13, 2023 at 00:07 #807609
Reply to unenlightened

I don't think they do. Extend the numbers out to a ridiculous amount: the sailor flipped 12 coins, and fathered 10000 children with the Siren. Would you wager the giant sack that dad flipped 12 heads?

Seriously, put yourself in that position. A giant sack of gold coins, but you have to choose correctly.







If you chose The Siren, you would feel like The World's Idiot for the rest of your life.

The argument for The Siren only works if you repeat this setup multiple times. Even though the Siren is much less likely, she more than makes up for it with her ultra-fecundity when she is selected. So, if there were a million trials, and you were one of all those lives, you should pick The Siren. But this doesn't work at all with just one trial.

(and BTW, frankly speaking you have kind of a horse face)
hypericin May 13, 2023 at 00:53 #807613
Reply to Michael

Your example has two unknowns, the number and the coin, and so the mental complexity is squared. My little brain is too small for it.
unenlightened May 13, 2023 at 07:06 #807647
Quoting hypericin
If you chose The Siren, you would feel like The World's Idiot for the rest of your life.


Not if I was right. The odds that I uniquely am the sailor's lovely frog daughter are surely very small indeed if it is a unique occurrence. It is as unlikely as that a frog should be mistaken for a horse. And that is my gamble, not the one my father made.
Michael May 13, 2023 at 07:36 #807651
Reply to hypericin The number assignment and the coin toss do not affect each other, and so they are independent events.

But there is something different about my example. If we use Bayes' theorem then:

[math]\begin{aligned}P(Heads | Awake) &= {{P(Awake | Heads) * P(Heads)} \over P(Awake)}\\&={{{1\over3}*{1\over2}}\over{1\over2}}\\&={1\over3}\end{aligned}[/math]

Given that P(Heads|Awake) ≠ P(Heads), waking does in fact provide new relevant evidence. And the above shows that the correct answer is 1/3.

Applying this same formula to the original problem gives P(Heads|Awake) = P(Heads), and so is consistent with Lewis' reasoning and his conclusion that the correct answer is 1/2.

[math]\begin{aligned}P(Heads | Awake) &= {{P(Awake | Heads) * P(Heads)} \over P(Awake)}\\&={{1*{1\over2}}\over1}\\&={1\over2}\end{aligned}[/math]
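
A Monte Carlo sketch of this three-beauty variation (illustrative; the trial count is arbitrary), as a check on the 1/3 figure from the Bayes calculation above:

import random

def michael_variation(n_trials=1_000_000):
    # Three beauties; Michael is assigned a random number from {1, 2, 3}.
    # Heads: number 1 is woken. Tails: numbers 2 and 3 are woken.
    woken = 0
    woken_and_heads = 0
    for _ in range(n_trials):
        michael = random.randint(1, 3)
        heads = random.random() < 0.5
        if (heads and michael == 1) or (not heads and michael in (2, 3)):
            woken += 1
            if heads:
                woken_and_heads += 1
    return woken_and_heads / woken

print(michael_variation())  # ~ 1/3, agreeing with P(Heads | Awake) above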
hypericin May 13, 2023 at 19:13 #807724
Reply to Michael

How would you respond to this:

This formula makes no mention of the possibility of waking up on Tuesday. If we told SB when she woke up, "It is Monday", she could use this reasoning, and come up with the correct answer of 1/2.

But then suppose we didn't tell her anything. This reasoning works for the case of Monday, and gives 1/2. But there is an unaccounted-for possibility that it is Tuesday. And if so, the probability of heads is 0.

So therefore, the correct answer cannot be 1/2, it must be less, since 1/2 is the Monday-only answer, which doesn't factor in the possibility of it being Tuesday.
hypericin May 14, 2023 at 04:06 #807799
I have finally solved this thing. I am now 100% convinced 1/3 is correct, and everyone who answered the poll except for me is wrong.

There are three possible waking events.

A. Heads, Monday
B. Tails, Monday
C. Tails, Tuesday

To answer correctly, sleeping beauty must evaluate the probability she is experiencing each of these events.

1: P(A) = P(B): if heads, A happens; if tails, B happens
2: P(A) = P(C): if heads, A happens; if tails, C happens
3: P(B or C) = 2P(A): the combined probability of B and C must be twice the probability of A
4: A, B, and C are exhaustive, so P(A) + P(B) + P(C) = 3P(A) = 1
5: P(A) = P(Heads) = 1/3
Michael May 14, 2023 at 07:53 #807818
A variation of my variation.

Let's say that there are two beauties; Michael and Jane. They are put to sleep and assigned a random number from {1, 2}.

If the coin lands heads then 1 is woken on Monday. If the coin lands tails then 2 is woken on Monday and Tuesday.

If Michael is woken then what is his credence that the coin landed heads?

If we use Bayes' theorem then:

[math]\begin{aligned}P(Heads | Awake) &= {{P(Awake | Heads) * P(Heads)} \over P(Awake)}\\&={{{1\over2}*{1\over2}}\over{1\over2}}\\&={1\over2}\end{aligned}[/math]
Michael May 14, 2023 at 08:59 #807826
Here's a Venn diagram to show how the probabilities interlink (where "Monday" should be read as "she will be woken on Monday" and "Tuesday" as "she will be woken on Tuesday"):

[Image: Venn diagram of the overlapping Heads/Tails and Monday/Tuesday outcomes]

The probability that the coin will land heads and she will be woken on Monday is 1/2.
The probability that the coin will land tails and she will be woken on Monday is 1/2.
The probability that the coin will land tails and she will be woken on Tuesday is 1/2.

As the Venn diagram shows, there are two (overlapping) probability spaces, hence why the sum of each outcome's probability is greater than 1.
L'éléphant May 14, 2023 at 18:48 #807926
Quoting hypericin
To answer correctly, sleeping beauty must evaluate the probability she is experiencing each of these events.

She can't. The instruction reads that she has no memory of prior awakenings or what day it is. She doesn't even know that the experimenter tosses the coin, because they do it when she's put back to sleep. The question to her is "what are the odds that a coin will land heads (or tails)?". Since she must know what a coin is, and what heads and tails are, she must answer "1/2".
The sleeping pills, the day she awakens, and other conditions included in the instruction are all distractions for you or me or others who are reading the puzzle to divert our attention from the correct answer.

Edit: the correct answer is 1/2 when the coin is tossed once. All SB knows is the coin is tossed once, and her information is purely coming from the question.

hypericin May 14, 2023 at 19:05 #807927
Quoting Michael
The probability that the coin will land heads and she will be woken on Monday is 1/2.
The probability that the coin will land tails and she will be woken on Monday is 1/2.
The probability that the coin will land tails and she will be woken on Tuesday is 1/2.


This is straightforwardly true, but from the perspective of an observer of the experiment. But to answer the problem you must adopt SB's perspective. That makes all the difference.

We all know the odds of flipping a coin. But SB is asked on every wakening, and is woken twice as often on tails. This must influence the odds, in the same way that

Quoting hypericin
Bet on heads or tails. If tails, you get to repeat the same bet again, on the same toss


influences the gambling odds, even though the coin toss is fair in both cases.
hypericin May 14, 2023 at 19:11 #807928
Quoting L'éléphant
She can't.


Oh? Then what specifically is wrong with my reasoning?
Michael May 14, 2023 at 20:29 #807940
Quoting hypericin
This is straightforwardly true, but from the perspective of an observer of the experiment. But to answer the problem you must adopt SB's perspective. That makes all the difference.


Why does it make a difference?

Quoting hypericin
But SB is asked on every wakening, and is woken twice as much on tails. This must influence the odds


Why would it influence the odds?

Let's say that I wanted to bet on a coin toss. I bet £100 that it will be tails. To increase the odds that it's tails, I ask you to put me to sleep, wake me up, put me back to sleep, wake me up, put me back to sleep, wake me up, and so on. Does that make any sense?

Quoting hypericin
Influences the gambling odds, even though the coin toss is fair in both cases


It doesn't influence the odds. You just get to bet twice, hence twice the payout.
Baden May 14, 2023 at 20:33 #807941
Quoting Michael
Let's say that I wanted to bet on a coin toss. I bet £100 that it will be tails. To increase the odds that it's tails, I ask you to put me to sleep, wake me up, put me back to sleep, wake me up, put me back to sleep, wake me up, and so on. Does that make any sense?


Nicely put.

hypericin May 15, 2023 at 03:27 #808003
Quoting Michael
Let's say that I wanted to bet on a coin toss. I bet £100 that it will be tails. To increase the odds that it's tails, I ask you to put me to sleep, wake me up, put me back to sleep, wake me up, put me back to sleep, wake me up, and so on. Does that make any sense?


I get the intuition. But no, it doesn't make sense, and it is a poor analogy to the problem.

Here is a much better analogy, that I think is more intuitively clear than this weird experiment:

An idiot runs a gambling website, HeadsOrTails.com. You place a bet, and if you win, the site pays you 97c on the dollar, along with your bet. Behind the scenes, a coin is mechanically flipped in advance, and then users are asked to bet heads or tails based on that flip. But there is an idiosyncrasy: If the coin toss is heads, the site generates one betting prompt for one user. But if it's tails, the site generates two prompts for two users.

Therefore, for every prompt, the coin is twice as likely to be tails as it is heads, because tails generates two prompts, while heads generates one. By asking more questions for one coin side than the other, the probability at every question is skewed towards the side that gets more questions asked. Therefore it will be profitable to always bet tails, and the site will go bankrupt quickly.
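
A sketch of the site's expected loss under this scheme (the 97c payout is from the description above; the $1 stake and the assumption that every prompted user bets tails are illustrative):

import random

def site_profit_per_flip(n_flips=100_000, payout=0.97, bet=1.0):
    # Behind the scenes: one flip; heads -> one prompt, tails -> two prompts.
    # Every prompted user bets `bet` on tails at the stated payout.
    site = 0.0
    for _ in range(n_flips):
        if random.random() < 0.5:     # heads: the single tails bet loses
            site += bet
        else:                         # tails: both tails bets win
            site -= 2 * payout * bet
    return site / n_flips

print(site_profit_per_flip())  # ~ -0.47 per flip: the site bleeds money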
Srap Tasmaner May 15, 2023 at 05:26 #808018
Suppose you actually did this, as an experiment, just looking for each subject's response.

Let's say you have 1200 subjects; one coin toss each, and let's say you get 600 heads and 600 tails. So 600 subjects get a single interview, and 600 get two. Total of 1800 interviews.

How many interviews are conducted, the toss having come up heads? 600.

Then for any given interview, the chances that it's one of the heads interviews are 1 in 3. I think that's all the reasoning you need.

****

If you're inclined to double-check, you might try something like this:

[math]
\small pr(\mathsf{heads | interview}) = \cfrac{pr(\mathsf{heads}) \times pr(\mathsf{interview | heads})}{pr(\mathsf{interview})}
[/math]

Not perfectly clear what that means though, so let's explicitly take subjects one at a time, something like this:

[math]
\small pr(\mathsf{hds}(x) | \mathsf{intv}(x)) = \cfrac{pr(\mathsf{hds}(x)) \times pr(\mathsf{intv}(x) | \mathsf{hds}(x))}{pr(\mathsf{intv}(x))}
[/math]

That looks a little more tractable. Certainly pr(heads(x)) is 1/2 for everyone. If in fact x gets heads, then x is one of the 1800 interviews, so pr(interview(x)) would be 1/1800, and for tails 2/1800. For an "average" subject x, then, the value should be their average (because halves), which is 1.5/1800, or 1/1200. Hey, that looks right! Maybe we needn't have bothered about the heads and tails...

(Is 1/1200 really the right value for pr(interview(x)) -- doesn't it make a difference whether their coin came up heads or tails? Yes and no: what we want here is an absolute probability, not a conditional one, so we deliberately average out the cases to get a baseline, and the result is just what you'd expect. Pick an interview at random, and the chances of it being an interview of a specific subject are, on average, 1 in 1200. Just as we leave pr(heads(x)) at the baseline of 1/2 for everyone.)

What about pr(interview(x) | heads(x))? What does this mean, and how do we assign a value to it? It's the chances that a given interview is of subject x, given that x's toss came up heads. That's the 1/1800 we just looked at.

Now we have values for everything on the right:

[math]
\small \begin{align}pr(\mathsf{heads}(x) | \mathsf{interview}(x)) &= \cfrac{1/2 \times 1/1800}{1/1200}\\ &= 1/3 \end{align}
[/math]

That is, given any interview, the chances of the subject of that interview's toss having come up heads are 1 in 3. Exactly the same as above.

SB ought to reason that it's more likely the interview she's currently engaged in is one of the tails interviews, so her credence for heads should be 1/3.
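
The same frequency count as a short sketch (the idealised 600/600 split is taken from the text above):

subjects = 1200
heads_subjects = subjects // 2            # idealised: exactly half get heads
tails_subjects = subjects - heads_subjects

heads_interviews = heads_subjects * 1     # one interview each
tails_interviews = tails_subjects * 2     # two interviews each
total_interviews = heads_interviews + tails_interviews

# Chance that a given interview is a heads interview: 600 / 1800 = 1/3
print(heads_interviews / total_interviews)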
Michael May 15, 2023 at 07:07 #808025
Reply to Srap Tasmaner

Given that the coin flip just determines if a Tuesday interview will happen, Beauty is being asked for her credence that a Tuesday interview will happen.

That she's interviewed twice as often if a Tuesday interview will happen isn't that a Tuesday interview will happen twice as often.

That she's interviewed twice as often if it's tails isn't that it's tails twice as often.

If it helps, consider that the coin toss happens after the Monday interview.

Quoting Srap Tasmaner
No, I'm in the double halfer camp now. The post right above explains my current thinking.

((This is, I don't know, maybe the third time I've argued with Michael about something and then concluded he was right all along.))


So you've switched back to being a thirder?
Srap Tasmaner May 15, 2023 at 13:56 #808095
Quoting Michael
So you've switched back to being a thirder?


I switched back shortly after that post; it's right there in the thread. The argument that convinced me was this: consider a variation, "Informative SB", in which Beauty is told she will be awakened twice either way, but if it was heads she will be told at the second interview that it was heads and that this is her second interview; at none of the others will she be given such information.

Beauty cannot distinguish between this variation and the actual experiment. This just makes it clear that being interviewed is itself informative and she should condition on it; it amounts to being told "either it was tails or this is your first interview." The likelihood that the coin came up heads is of course 1/2; the likelihood that it came up heads, given that you're asking, the conditional probability, is 1/3.

Quoting Michael
Let's say that I wanted to bet on a coin toss. I bet £100 that it will be tails. To increase the odds that it's tails, I ask you to put me to sleep, wake me up, put me back to sleep, wake me up, put me back to sleep, wake me up, and so on. Does that make any sense?


In our version, the base rate of heads interviews is 1 in 3. Make it 1 in 1000. (That is, 999 awakenings on tails, not 2.) Isn't it obvious that if I'm a subject in such an experiment, I know it's far more likely I'm being asked for my credence because my coin came up tails? If I'm one of 1200 subjects, I know there are 600,000 interviews, only 600 of which were for heads, while 599,400 were for tails. Equally likely that this interview is for heads as for tails? Not by a long shot.
Michael May 15, 2023 at 14:13 #808101
Quoting Srap Tasmaner
I switched back shortly after that post; it's right there in the thread. The argument that convinced me was this: consider a variation, "Informative SB", in which Beauty is told she will be awakened twice either way, but if it was heads she will be told at the second interview that it was heads and that this is her second interview; at none of the others will she be given such information.


I don't understand how this is different to the original. In the original she's woken up on Wednesday and told the result of the coin flip (whether heads or tails). So there are two interviews if heads, three interviews if tails.
Srap Tasmaner May 15, 2023 at 14:18 #808102
Reply to Michael

The point of the variation is that she is told something will happen only at the second heads interview. Fill in whatever you like, it will be missing from the others and from all of the interviews in stock SB. The absence of that thing is informative, it amounts to "it was tails or this is your first interview," and this is true as well for stock SB. Being asked is itself information you can condition on.
Michael May 15, 2023 at 14:24 #808104
Quoting Srap Tasmaner
The absence of that thing is informative, it amounts to "it was tails or this is your first interview," and this is true as well for stock SB. Being asked is itself information you can condition on.


How do you condition on such a thing? What values do you place into Bayes' theorem?

[math]\begin{aligned}P(Heads | Questioned) &= {{P(Questioned | Heads) * P(Heads)} \over P(Questioned)}\end{aligned}[/math]
Srap Tasmaner May 15, 2023 at 14:26 #808106
Quoting Michael
How do you condition on such a thing? What values do you place into Bayes' theorem?


Just scroll back up. I went through all that.
Michael May 15, 2023 at 14:27 #808107
Reply to Srap Tasmaner I don't think your numbers are accurate there.

But in this case we're not asking about an outsider's analysis of frequencies over many experiments, but Sleeping Beauty's when it is known that just a single experiment is being run.
Srap Tasmaner May 15, 2023 at 14:35 #808109
Quoting Michael
I don't think your numbers are accurate there.


Meaning?

Quoting Michael
But in this case we're not asking about an outsider's analysis of frequency, but Sleeping Beauty's when it is known that just a single experiment is being run.


And how do you expect to apply Bayes's rule without any base rate information? SB can reason as I have described to determine what those base rates would be were the experiment repeated a number of times, and set her subjective probabilities accordingly.
Michael May 15, 2023 at 14:40 #808110
Quoting Srap Tasmaner
And how do you expect to apply Bayes's rule without any base rate information? SB can reason as I have described to determine what those base rates would be were the experiment repeated a number of times, and set her subjective probabilities accordingly.


We have the base rate information.

The probability of a coin landing heads is 0.5. The probability that I will be questioned if the coin lands heads is 1. The probability that I will be questioned is 1. So:

[math]\begin{aligned}P(Heads | Questioned) &= {{P(Questioned | Heads) * P(Heads)} \over P(Questioned)}\\&={{1*{1\over2}}\over1}\\&={1\over2}\end{aligned}[/math]

Compare with Bayes' theorem as applied to my variation:

[math]\begin{aligned}P(Heads | Questioned) &= {{P(Questioned | Heads) * P(Heads)} \over P(Questioned)}\\&={{{1\over3}*{1\over2}}\over{1\over2}}\\&={1\over3}\end{aligned}[/math]

It should be clear that the reasoning is sound in the second case, and so too is it sound in the first case.
Srap Tasmaner May 15, 2023 at 14:49 #808113
Quoting Michael
The probability that I will be questioned if the coin lands heads is 1. The probability that I will be questioned is 1.


But these are useless, uninformative, ambiguous categories. What you want is the odds that this interview is a heads-type interview. If Beauty could determine her current state in the world, what type of interview she is being given, she could answer with certainty. She cannot, but she can determine the likelihood of her being in each of the two possible states, states that she cannot otherwise distinguish. And she knows that it is more likely that she is being interviewed because it was tails.
Michael May 15, 2023 at 14:49 #808114
Quoting Srap Tasmaner
What you want is the odds that this interview is a heads-type interview.


Yes, that's the left hand side of the theorem that we're trying to solve: P(Heads|Questioned).

We use the known values on the right hand side to determine it.
Srap Tasmaner May 15, 2023 at 15:35 #808127
Quoting Michael
What you want is the odds that this interview is a heads-type interview. — Srap Tasmaner


Yes, that's the left hand side of the theorem: P(Heads|Questioned).


No, that's the probability that the coin was heads given that I'm being interviewed. The trouble is on the RHS, the probability that I am being interviewed given that my coin was heads.

Take a space and partition it evenly into a heads half and a tails half. Now, in the heads half put one point, an interview; in the tails half, put many interviews.

If you choose a half space, in each you will find at least one interview. You're interviewed either way.

But if you randomly select an interview among all the interviews, is it more likely to have come from the heads half, with its one interview, or the tails, where there are more?

That's what we want to capture with the base rate of heads interviews among all interviews, not the non-emptiness of the heads half of the space.
Michael May 15, 2023 at 15:44 #808132
Quoting Srap Tasmaner
The trouble is on the RHS, the probability that I am being interviewed given that my coin was heads.


Which is 1. I know that I will be interviewed if the coin lands heads.

Consider a simpler version of the experiment. If heads then I will be interviewed once. If tails then I will be interviewed once. Bayes' theorem is unproblematically:

[math]\begin{aligned}P(Heads | Questioned) &= {{P(Questioned | Heads) * P(Heads)} \over P(Questioned)}\\&={{1*{1\over2}}\over1}\\&={1\over2}\end{aligned}[/math]

We don't say that, just because half of all questions occur when the coin lands tails, P(Questioned | Heads) = 0.5. That would give us this very clearly wrong calculation:

[math]\begin{aligned}P(Heads | Questioned) &= {{P(Questioned | Heads) * P(Heads)} \over P(Questioned)}\\&={{{1\over2}*{1\over2}}\over1}\\&={1\over4}\end{aligned}[/math]
hypericin May 15, 2023 at 19:25 #808169
Quoting Srap Tasmaner
In our version, the base rate of heads interviews is 1 in 3. Make it 1 in 1000. (That is, 999 awakenings on tails, not 2.) Isn't it obvious that if I'm a subject in such an experiment, I know it's far more likely I'm being asked for my credence because my coin came up tails? If I'm one of 1200 subjects, I know there are 600,000 interviews, only 600 of which were for heads, while 599,400 were for tails. Equally likely that this interview is for heads as for tails? Not by a long shot.


This problem is like one of those optical illusions, a drawing that can be interpreted in two ways, but you can only hold one interpretation in your head at once.

It is easy to show the probability of heads is 1/3, if you allow multiple trials. I do so in my OP. But the problem limits us to one trial. In one trial, each of those 1000 interviews occurring is contingent on the coin flip being tails. So the probability that SB experiences all 1000 of them is 1/2, and the probability that SB is experiencing any one of them during an interview is 1/2000. Meanwhile, the probability that SB is experiencing the heads interview is 1/2.

This is very easy to see if you make the coin toss unfair. Suppose heads comes up 99/100 times, and if tails, a million sleeps happen. Are you willing to believe, in a single trial, that tails is overwhelmingly likely on each awakening? Yet, if you extend to enough trials, tails is overwhelmingly likely.
hypericin May 16, 2023 at 08:06 #808255
Following this logic, I wrote a Python script which calculates the probability of heads given a number of trials. As I expected, the results begin at 1/2 at 1 trial, and with increasing trials converge to 1/3. This unifies the two answers in a way I haven't seen before.

Unfortunately for thirders, the problem is explicitly concerned with the 1 trial case. Halfers were right all along. As far as I'm concerned, the problem is put to rest for all time.

[Image: plot of heads probability against number of trials, starting at 1/2 and converging towards 1/3]


import math

import matplotlib.pyplot as plt

def experiment(trials):
    # Expected fraction of awakenings at which the coin shows heads,
    # averaged over every possible sequence of `trials` coin flips.
    headsProbability = 0.0
    for heads in range(trials + 1):
        # Number of flip sequences containing exactly `heads` heads.
        headArrangements = math.comb(trials, heads)
        # Such a sequence yields `heads` heads awakenings and
        # 2 * (trials - heads) tails awakenings.
        headsRightAnswer = heads / (heads + (trials - heads) * 2.0)
        headsProbability += headArrangements / 2.0**trials * headsRightAnswer
    return headsProbability

trials = [i + 1 for i in range(50)]
results = [experiment(t) for t in trials]

plt.plot(trials, results, '-')
plt.ylabel("heads probability")
plt.xlabel("trials")
plt.show()


Quoting Srap Tasmaner
((This is, I don't know, maybe the third time I've argued with Michael about something and then concluded he was right all along.))


Sorry, Srap, it happened yet again. Reply to Michael was right.
Michael May 17, 2023 at 06:22 #808448
Quoting Michael
Let's say that there are three beauties; Michael, Jane, and Jill. They are put to sleep and assigned a random number from {1, 2, 3}.

If the coin lands heads then 1 is woken on Monday. If the coin lands tails then 2 is woken on Monday and 3 is woken on Tuesday.

If Michael is woken then what is his credence that the coin landed heads?

…

Michael's credence before the experiment is P(1) = 1/3, so if woken he ought to continue to have a credence of P(1) = 1/3 since he gains no new relevant evidence if he wakes up during the experiment.

And given that if woken the coin landed heads iff he is 1, he ought to have a credence of P(Heads) = 1/3.


Does it not stand to reason that the probability that the coin landed heads in this example is less than the probability that the coin landed heads in the original example, given that in this example the coin landed heads iff Michael was assigned the number 1?

Do you agree that Michael’s credence that the coin landed heads is 1/3?

[math]\begin{aligned}P(Heads | Awake) &= {{P(Awake | Heads) * P(Heads)} \over P(Awake)}\\&={{{1\over3}*{1\over2}}\over{1\over2}}\\&={1\over3}\end{aligned}[/math]

If so then it must be that Sleeping Beauty’s credence in the original example is greater than 1/3, i.e. 1/2.
Michael May 18, 2023 at 15:33 #808809
An interesting variation taken from here:

Four volunteers will be assigned a random number but each will undergo an experiment that is functionally equivalent to the popular version of the problem. The same sleep and amnesia drugs will be used, and each will be awoken at least once, but maybe twice, based on the same fair coin toss. Only their schedules and the question they are asked will differ, but end up being equivalent to the popular problem. On Monday and Tuesday:

#1 Will be awoken unless it is Tuesday, after Heads.

#2 Will be awoken unless it is Tuesday, after Tails.

#3 Will be awoken unless it is Monday, after Heads.

#4 Will be awoken unless it is Monday, after Tails.

Each will be asked for their credence that this is the only time they will be awoken. For #1 and #3, that means credence in Heads. For #2 and #4, it is credence in Tails. For all four, the answer has to be the same as the correct answer to the popular version of the Sleeping Beauty Problem.

On each day, we can bring the three awake volunteers together to discuss their answers. Of these three, exactly one will not be, or was not, awakened on the other day of the experiment. But none of the three can have more, or less, credence that she is that one instead of one of the others.

So with three awake volunteers, one of whom will be awakened only once, the answer is 1/3.


To be clearer with what each volunteer is considering, it is:

P(1 and Heads or 2 and Tails or 3 and Heads or 4 and Tails | Awake)

Prima facie the answer is [math]1\over2[/math]; however, it is a fact that for [math]1\over3[/math] of the awake volunteers, "I will only wake once" is true.

The question, then, is whether or not A entails B:

A. "I will only wake once" is true for [math]1\over3[/math] of us
B. The probability that "I will only wake once" is true for me is [math]1\over3[/math]

Thirders say it does, halfers say it doesn't.

I think this might be a Monty Hall problem. Consider a slight variation which I think is functionally equivalent to the above. All 4 are awoken on each day, are put in a room together, and then one of them is put back to sleep according to the rules. Each person left awake is then asked to consider the probability that they will be awake both days. I think it's a mistake to ignore the person who is put back to sleep, who is comparable to the door Monty opens. It was [math]1\over2[/math] before seeing someone get put to sleep, so how does seeing someone get put to sleep make it more or less likely that I will be awake both days?

So if this is equivalent to the original problem then the original problem is a Monty Hall problem as well.
sime May 19, 2023 at 14:56 #809029
Always start by writing down the probability of everything :

Variables :

C = Tossed coin (Binary variable in {head, tail} )
D = Awoken Day (Binary variable in {mon, tue} )

Unknown Probability Measure :

P (C, D) = P (C | D) P(D) = P(D | C) P(C)

Constraints given on the Probability Measure :

P (C = head) = 1/2
P (D = mon | C = head) = 1
P (C = tail | D = tue) = 1

Task :

Determine the marginal distribution P (C = head ) from the above premises

Answer :

P(C = head) = 1/2 (by premise)



Why was Scientific American wasting time on this?
Michael May 19, 2023 at 15:01 #809030
Quoting sime
Why was Scientific American wasting time on this?


Self-locating belief and the Sleeping Beauty problem, Elga 2000

Sleeping Beauty: reply to Elga, Lewis 2001
fdrake May 19, 2023 at 15:24 #809034
As a general rule, simulating probabilities doesn't resolve disputes about which computations are appropriate. Every source of randomness has a generating mechanism. Addressing these paradoxes means addressing the generating mechanism.

Demonstrative example:

1) The moon is made of cheese or eggs
2) both are equally likely
3) the probability the moon is made of cheese is given by the following R script:
mean(rbinom(10000,1,0.5))


So the moon is made of cheese with 50% probability.

It's just right, look at the code!

All coding something up does is let you check a calculation for it being correct, not whether it's the appropriate calculation to do.
sime May 19, 2023 at 15:30 #809035
Reply to Michael

Thanks.

The premises of the Sleeping Beauty problem, at least in some popular incarnations, specify an experimental design, but they don't specify or assume how temporal self-location is related to that design. The design only specifies a few constraints relating the physical outcome of a coin toss C to the actual days of the week D on which a subject is awoken. But an awoken subject's mental state M comprises yet another variable that has no specified relationship to C or D in the question's premises.

The question is literally asking, on the basis of a partial definition of (C, D), what is the marginal distribution P(C), which its premises already state is 1/2. So if it is intending to probe the valid philosophical question pertaining to self-location it needs to be more candid and ask

What is the probability P (C = h | M) ?

But if nothing about M and its relationships to C and D is specified or assumed, one isn't in a position to assign conditional probabilities, and can only respond with the vacuous answer "P(C = h | M) is between 0 and 1."
hypericin May 21, 2023 at 19:03 #809557
Quoting fdrake
It's just right, look at the code!

All coding something up does is let you check a calculation for it being correct, not whether it's the appropriate calculation to do.


You seem to suggest I just arbitrarily whipped up some code and said "hey guys, code! problem solved!"

The program calculates the probability a wakening is heads, with a given number of trials.
Here are some examples:

1 trial:
Two possibilities, 0 and 1 heads
0 Heads: (does not contribute to likelihood)
1 Heads: 1/2 chance, all awakenings are heads = 1/2
Answer: 1/2

2 trials:
Three possibilities, 0, 1 and 2 heads
0 Heads: (does not contribute to likelihood)
1 Heads: 1/2 chance, 1/3 awakenings are heads = 1/6
2 Heads: 1/4 chance, all awakenings are heads = 1/4
Answer: 1/6 + 1/4 = 10/24 = ~.417

3 trials:
Four possibilities, 0, 1, 2 and 3 heads
0 Heads: (does not contribute to likelihood)
1 Heads: 3/8 chance, 1/5 awakenings are heads = 3/40
2 Heads: 3/8 chance, 1/2 awakenings are heads = 3/16
3 Heads: 1/8 chance, all awakenings are heads = 1/8
Answer: 1/8 + 3/16 + 3/40 = ~.3875

If I got something wrong, it is quite an odd coincidence that the program captures both answers: 1/2 at one trial, and 1/3 at N trials.
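
The same closed-form calculation restated compactly with math.comb (a sketch; it reproduces the three worked answers above):

from math import comb

def p_heads_awakening(trials):
    # Average, over all 2**trials flip sequences, of the fraction of
    # awakenings that are heads awakenings (each tails adds two awakenings).
    total = 0.0
    for k in range(trials + 1):            # k = number of heads
        awakenings = k + 2 * (trials - k)
        if awakenings:
            total += comb(trials, k) / 2**trials * (k / awakenings)
    return total

print(p_heads_awakening(1))  # 0.5
print(p_heads_awakening(2))  # ~0.417 (10/24)
print(p_heads_awakening(3))  # 0.3875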





fdrake May 21, 2023 at 19:08 #809559
Quoting hypericin
You seem to suggest I just arbitrarily whipped up some code and said "hey guys, code! problem solved!"


Not what I intended, apologies for any offence caused. I whipped up some code myself to illustrate different scenarios in another thread.
hypericin May 22, 2023 at 02:08 #809643
Quoting sime
Task :

Determine the marginal distribution P (C = head ) from the above premises



No, the question is what is the probability SB experiences an awakening with the coin being heads.
sime May 22, 2023 at 08:18 #809707
Quoting hypericin
No, the question is what is the probability SB experiences an awakening with the coin being heads


Unless additional premises are included in the problem, I cannot tell the difference between how I phrased the question and your phrasing of the question.

Upon being awoken, SB is asked: "What is your credence now for the proposition that the coin landed heads?"

She is therefore being asked "What is P(C | M) , where M is your current mental state?"

But the premises don't specify anything regarding the correlation of SB's mental state M to either the state of the coin C or the day of the week D.

So we can only interpret P (C | M ) as referring to P(C) .



Srap Tasmaner May 22, 2023 at 20:15 #809883
This is a quote from Descartes posted by @Fooloso4 next door in that thread:

For a life-span can be divided into countless parts, each completely independent of the others, so that from my existing at one time it doesn’t follow that I exist at later times, unless some cause keeps me in existence – one might say that it creates me afresh at each moment.


We can say of the sleeping beauty problem, just as a way of beginning, that there are three possible future SB slices which the SB being interviewed could be.

The halfer camp seems to take the view that two of those belong to one person, and one to another, and that the correct analysis is that the coin flip partitions SB's future slices into a heads set and a tails set, just two, equal chances of being in each set.

The thirder view is that only the current slice that you might be is relevant, and there are more being-interviewed slices in the tails partition, so you're more likely one of those.

It may not be true that I am more likely to be a person who is interviewed more than once, and yet be true that this slice of me being interviewed is more likely to be a somewhat common tails slice, considering the entire pool of possible slices of me, than to be a comparatively rarer heads slice.
Srap Tasmaner May 22, 2023 at 20:23 #809887
For an analogy: it's 11 am, and you're asleep in bed; how old are you? Odds are you're one of the ages where you were more often asleep in bed at 11 am, even if every year of your life has had at least one such day.
Dawnstorm May 23, 2023 at 06:47 #810041
Quoting Michael
The probability that I will be questioned if the coin lands heads is 1. The probability that I will be questioned is 1.


How do you get 1? How do you interpret P(Questioned)?

I get 3/4. There are two variables (Coin and Day), and two checks, for four outcomes altogether, three of which result in "questioned".

Similarly, the likelihood that the current situation is one in which we're both questioned and the coin comes up heads is 1/2.

I don't know how to do formulae, but I get the following:

1/3 = (1/2*1/2)/(3/4)

Which checks out. I mean, test it all out on the event space:

Heads and Monday = Not Questioned
Heads and Tuesday = Questioned
Tails and Monday = Questioned
Tails and Tuesday = Questioned

Likelihood to be questioned:

Heads and Monday = Not Questioned
Heads and Tuesday = Questioned
Tails and Monday = Questioned
Tails and Tuesday = Questioned

Likelihood to be questioned when Heads:

Heads and Monday = Not Questioned
Heads and Tuesday = Questioned
[s]Tails and Monday = Questioned
Tails and Tuesday = Questioned[/s]

Likelihood to be Heads when Questioned:

[s]Heads and Monday = Not Questioned[/s]
Heads and Tuesday = Questioned
Tails and Monday = Questioned
Tails and Tuesday = Questioned

I mean, I'm no mathematician. But this, at the very least, makes sense to me. Have I gone wrong anywhere?

Michael May 23, 2023 at 07:41 #810045
Quoting Dawnstorm
How do you get 1?


Because Sleeping Beauty is certain to be questioned during the experiment.

Compare with my alternative scenario here where the probability of being questioned is 1/2.

Quoting Dawnstorm
How do you interpret P(Questioned)?


"My credence that I will be questioned."
Dawnstorm May 24, 2023 at 00:20 #810299
This is messing with my head. I tried to reply but couldn't get a coherent post going.
hypericin May 24, 2023 at 03:00 #810311
Quoting sime
She is therefore being asked "What is P(C | M) , where M is your current mental state?"


No, the question has nothing whatsoever to do with her mental state.

She is being asked, given that she is awakened, what is the probability of heads. If she is awakened 1000 times on a tails flip and once on a heads flip, given enough coin flips you can see that an awakening is overwhelmingly likely to be a tails awakening, even though the probability of heads remains 50%. This is independent of her mental state.
hypericin May 24, 2023 at 05:00 #810318
Quoting Srap Tasmaner
The thirder view is that only the current slice that you might be is relevant, and there are more being-interviewed slices in the tails partition, so you're more likely one of those.


This relies on the intuition of repeating the experiment over and over. If so, then there are unconditionally more tail slices. But the coin is flipped exactly once. Therefore, even though there are more tail slices, they both exist only upon a tails flip. Therefore,

Quoting Srap Tasmaner
the correct analysis is that the coin flip partitions SB's future slices into a heads set and a tails set, just two, equal chances of being in each set.


Is the correct one.
sime May 24, 2023 at 07:48 #810330
Quoting hypericin
No, the question has nothing whatsoever with her mental state.

She is being asked, given that she is awakened, what is the probability of heads. If she is awakened 1000 times for every tails and once with heads, given enough coin flips you can see that it is overwhelmingly likely to be tails, even though the probability of heads remains 50%. This is independent of her mental state.



Her "mental state" M refers to her epistemic state upon waking. In the context of repeated trials, it would include her knowledge of previous trials. But the paradox doesn't assume repeated trials. If it did in the manner of your example, then

P ( C = tails |M) = 1000/1001

In which case, her mental state has proven to be a very reliable indicator of the state of the coin.

But this isn't part of the question's premises. We are told that, in the context of a single trial, her mental state is conditioned so as to have no previous memories of waking up. This condition implies that her mental state is statistically independent of the state of the coin, i.e.

P(C | M ) = P(C) = 1/2

Here I have assumed that she is told that the physical probability of the coin landing heads is 1/2. If she isn't told this, then she should refrain from assigning any distribution to P(C | M).
Srap Tasmaner May 24, 2023 at 14:46 #810392
Quoting hypericin
This relies on the intuition of repeating the experiment over and over. If so, then there are unconditionally more tail slices. But the coin is flipped exactly once. Therefore, even though there are more tail slices, they both exist only upon a tails flip.


I think it's hard.

If you wake up and ask yourself "Which of the selves I could have become while I was asleep am I?" there are two ways to take that.

You could think of the self you're seeking as a continuous self (a sort of world-line, if I have the parlance right) accumulating experiences, in which case it's reasonable to describe that self from any point along the line: in the past you're the self who's going to end up having exactly one or exactly two interviews; in the future, you're the self that had exactly one or exactly two interviews.

But you could also think of your self as this momentary self, even if it is part of a larger stream of momentary selves. Your question then is, which stream is this self-moment likely to be part of? If any such moment is more likely to be part of one stream than another, then you are more likely to be the self living out that stream than the other.
fdrake May 24, 2023 at 16:46 #810414
The problem vexing the minds of experts is as follows: Sleeping Beauty agrees to participate in an experiment. On Sunday she is given a sleeping pill and falls asleep. One of the experimenters then tosses a coin. If “heads” comes up, the scientists awaken Sleeping Beauty on Monday. Afterward, they administer another sleeping pill. If “tails” comes up, they wake Sleeping Beauty up on Monday, put her back to sleep and wake her up again on Tuesday. Then they give her another sleeping pill. In both cases, they wake her up again on Wednesday, and the experiment ends.

The important thing here is that because of the sleeping drug, Sleeping Beauty has no memory of whether she was woken up before. So when she wakes up, she cannot distinguish whether it is Monday or Tuesday. The experimenters do not tell Sleeping Beauty either the outcome of the coin toss nor the day.

They ask her one question after each time she awakens, however: What is the probability that the coin shows heads?


EDIT: MISINFORMATION

Timeline:
1) Sleeping Beauty falls asleep on Sunday evening.
2) The experimenter tosses a coin. Call this random variable F.
2_1a) If F=heads, Sleeping Beauty is awoken on Monday morning.
2_1b) After 2_1a, the experimenters put Sleeping Beauty to sleep by giving her a sleeping pill.
2_2a) If F=tails, Sleeping Beauty is awoken on Monday morning.
2_2b) After 2_2a) F=tails, Sleeping Beauty is put to sleep again and awoken on Tuesday morning, then put back to sleep.
3) Sleeping beauty is awoken on Wednesday morning.

Sleeping Beauty does not know the outcome of the coin toss, nor the day. She does know the logic in 2_1a), 2_1b), 2_2a) and 2_2b). Let's just assume this occurs every week, forever.

When she wakes up she doesn't know if it's Monday, Tuesday or Wednesday. All she knows is that she has woken up. You then ask what's the probability of heads given that she's woken up? That means knowing what "Awoken on a day" means... And since she doesn't know what day it is, that's more conditional reasoning.

She's awoken every Wednesday (from 3), she's awoken every Monday (from 2_1a) and (2_2a), she's awoken on a Tuesday iff 2_2b) is triggered, which occurs when and only when 2_2a) is triggered, which is equivalent to F=tails. She also doesn't know what day it is.

I'd need to model "Awoken on a day" as a compound event of "Day" and "Awoken", Day can take values Monday, Tuesday, Wednesday. Awoken can take values "Asleep" or "Awake". If I recall correctly this formula lets you marginalise conditional distributions over another variable:

P(A|B) = P(A|B,C1)P(C1) + P(A|B,C2)P(C2)

Where C1 and C2 are disjoint events. I'm going to use A as the coinflip, B as Awoken and C as Day (so there'll be three rather than two). Now let's go onto the probabilities...

P(F=Heads|Awoken=Awake)=
P(F=Heads|Awoken=Awake, Day=Monday)P(Day=Monday) +
P(F=Heads|Awoken=Awake, Day=Tuesday)P(Day=Tuesday) +
P(F=Heads|Awoken=Awake, Day=Wednesday)P(Day=Wednesday)

P(Monday)=P(Tuesday)=P(Wednesday)=1/3 , assuming principle of indifference and the experiment runs only over these days.
EDIT: DON'T USE INDIFFERENCE HERE, WEDNESDAY'S VARIABLE DAY IS LINKED TO FLIP
P(Heads|Awake, Monday) - how many times is the coin heads if she's awoken on a Monday? She knows the experimental plan, so this should be 0.5 .
P(Heads|Awake, Tuesday) - how many times is the coin heads if she's awoken on a Tuesday? Should be 0. Since she can't wake up on a Tuesday unless F=tails.
P(Heads|Awake, Wednesday) - how many times is the coin heads if she's awoken on a Wednesday? Should be 0.5, since she's always woken up on a Wednesday.

That means we've got (1/3)*0.5+(1/3)*0.5=1/3 probability for P(Heads|Awake). Seems I'm a thirder.

EDIT: MISINFORMATION
Srap Tasmaner May 24, 2023 at 18:52 #810438
Reply to fdrake

Here's a halfer analogy I found convincing before:

Two urns, one with a single white marble, one with many black marbles; you flip a coin to decide which urn to draw from; even though there are more black marbles than white, the chances of getting the white marble are equal to the chances of getting one of the many blacks.

The argument I've been making lately seems to be roughly this: if you close your eyes and someone selects a marble and places it in your hand, and if you know there are more black marbles than white, then you can figure it's more likely to be black. Fair enough.

But if you know they selected the marble in your hand by flipping a coin to select which urn to draw from, you should figure it's just as likely to be the one white as any of the blacks.

That looks like trouble for the thirder position, but it's missing the repetition, and missing SB's uncertainty about her own state. It's not just that she knows someone's put a marble in her hand, she knows they'll do it more than once for tails. So SB is justified in wondering, who is this person with a marble in her hand? Who is she more likely to be?

I think.
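
A sketch of the two marble-selection procedures in this analogy (the urn sizes are illustrative):

import random

WHITE_URN = ["white"]          # one white marble
BLACK_URN = ["black"] * 10     # many black marbles

def draw_by_coin():
    # Flip a coin to pick an urn, then draw from that urn.
    urn = WHITE_URN if random.random() < 0.5 else BLACK_URN
    return random.choice(urn)

def draw_from_pool():
    # Draw uniformly from all the marbles, ignoring which urn holds them.
    return random.choice(WHITE_URN + BLACK_URN)

n = 100_000
print(sum(draw_by_coin() == "white" for _ in range(n)) / n)    # ~ 1/2
print(sum(draw_from_pool() == "white" for _ in range(n)) / n)  # ~ 1/11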
fdrake May 24, 2023 at 19:06 #810441
Quoting Srap Tasmaner
Two urns, one with a single white marble, one with many black marbles; you flip a coin to decide which urn to draw from; even though there are more black marbles than white, the chances of getting the white marble are equal to the chances of getting one of the many blacks.


:up:

Quoting Srap Tasmaner
The argument I've been making lately seems to be roughly this: if you close your eyes and someone selects a marble and places it in your hand, and if you know there are more black marbles than white, then you can figure it's more likely to be black. Fair enough.


:up:

Quoting Srap Tasmaner
But if you know they selected the marble in your hand by flipping a coin to select which urn to draw from, you should figure it's just as likely to be the one white as any of the blacks.


:up:

This works with Heads for Monday alone, Tails for Tuesday alone and no Wednesday. If the chance of the coin being heads depended upon the number of marbles in the jar, we're in a closer situation to OP's SB, I think.

Quoting Srap Tasmaner
and missing SB's uncertainty about her own state.


Indeed. What I'm imagining the halfer position as is that the probability of the coin landing heads is always a half, and that Sleeping Beauty's state of wakefulness has no correlation with the coin. Whereas you know it does, as the "sampling day" of SB's report depends upon the coin flip. An analogy there might be asking shoppers on a Saturday what day they shop for longest on. Asking the question that way, itself, influences the responses.

fdrake May 24, 2023 at 20:39 #810457
I am also confused now. God damnit. I fear I answered the wrong question.
Michael May 24, 2023 at 20:56 #810466
Quoting fdrake
Whereas you know it does, as the "sampling day" of SB's report depends upon the coin flip.


Only if it’s Tuesday. She gets interviewed on Monday regardless.

So what if the coin toss doesn’t happen until after the Monday interview? Does that affect your answer?
fdrake May 24, 2023 at 21:02 #810467
Quoting Michael
So what if the coin toss doesn’t happen until after the Monday interview? Does that affect your answer?


It wouldn't in this case I think. I'm going to commit now to writing 1 if my random number generator spits out 1, and 0 if it spits out 0. It spat out 1. If I'd computed it earlier, I would've written the same thing.

More extremely, if someone wrote down the study plan for heads and tails on specific weeks, and coin flipped them, it would behave exactly the same as if they flipped them at the appropriate times in the experiment.

It would be different if SB knew the "sampling day" AND the "sampling day" informed what the RNG did. In that case I'd need a prewritten plan of SB's random responses to the different sampling days too.
Srap Tasmaner May 24, 2023 at 21:03 #810468
Reply to fdrake

Consider what the halfer says next with my marble analogy: which marble-holding-self this is was selected by a coin toss. That's the same as saying whether the marble I have is white or black was determined by a coin toss.

But that's not good enough, because there's more than one black-marble-holding momentary self. There's just no way to select from more than two moments this might be with a single coin toss.
Michael May 24, 2023 at 21:21 #810476
There are four people, each assigned a number (unknown to them) between 1 and 4. Two of them are to be put to sleep at random, determined by a single coin toss: if heads then 1 and 2 are put to sleep; if tails then 3 and 4 are put to sleep.

After the coin toss one of them is put to sleep first. For each of the remaining three, what is the probability that they will be put to sleep? Is it 1/3, because there are three of them and only one is to be put to sleep, or is it 1/2, because that was the probability before the first was put to sleep?

I think this basically is the Sleeping Beauty problem.
fdrake May 24, 2023 at 21:47 #810478
[math]\begin{array} {|r|r|r|}\hline Flip & Day & Status \\ \hline Heads & Monday & Awake \\ \hline Heads & Tuesday & Asleep \\ \hline Tails & Monday & Awake \\ \hline Tails & Tuesday & Awake \\ \hline \end{array}[/math]

I think that's all possible outcomes. You can calculate the probabilities by the proportion of their occurrence in the table. You can also subset to calculate conditional probabilities.

From that you can say P(Heads)=0.5, P(Awake)=3/4, P(Awake|Heads)=0.5, P(Heads|Awake)=1/3 .

It's interesting that the principle of indifference prior on 3 interview days (what I did before) gives 1/3, which is the same as the frequency table here. Though my previous calculation was silly as the logic which ties Day to Flip in the 2 interview case also ties Days to Flip in my shite 3 day interpretation. The frequency table calculation there gives 2/5 . I've flagged my previous attempt as misinformation.

So I'm still a thirder, even though my previous calculation is wrong lol.
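For what it's worth, here's a minimal sketch in Python (the names and code are mine, purely illustrative) that just enumerates the four rows above and recomputes the same proportions:

[code]
# Minimal sketch: reproduce the frequency-table calculation by enumerating the rows.
from fractions import Fraction

rows = [
    ("Heads", "Monday",  "Awake"),
    ("Heads", "Tuesday", "Asleep"),
    ("Tails", "Monday",  "Awake"),
    ("Tails", "Tuesday", "Awake"),
]

def p(pred):
    # probability = proportion of rows satisfying pred (each row weighted equally)
    return Fraction(sum(pred(r) for r in rows), len(rows))

def p_given(pred, cond):
    # conditional probability = proportion within the rows satisfying cond
    sub = [r for r in rows if cond(r)]
    return Fraction(sum(pred(r) for r in sub), len(sub))

print(p(lambda r: r[0] == "Heads"))                                   # 1/2
print(p(lambda r: r[2] == "Awake"))                                   # 3/4
print(p_given(lambda r: r[2] == "Awake", lambda r: r[0] == "Heads"))  # 1/2
print(p_given(lambda r: r[0] == "Heads", lambda r: r[2] == "Awake"))  # 1/3
[/code]

The only modelling assumption baked in is the same one the table makes: each (flip, day) row carries equal weight.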

Michael May 25, 2023 at 07:15 #810549
Reply to fdrake I don't think that table is how to calculate the probabilities. Consider a slight variation where there are no days, just number of awakenings. If heads then woken once, if tails then woken twice. And consider perhaps that the coin isn't tossed until after the first awakening.

P(Heads) is just the prior probability that the coin will land heads, which is 0.5, P(Awake) is just the prior probability that Sleeping Beauty will be woken up, which is 1, and P(Awake|Heads) is just the prior probability that she will be woken up if the coin lands heads, which is 1.

That gives us:

[math]\begin{aligned}P(Heads | Awake) &= {{P(Awake| Heads) * P(Heads)} \over P(Awake)}\\&={{1*{1\over2}}\over1}\\&={1\over2}\end{aligned}[/math]
sime May 25, 2023 at 10:29 #810563
Reply to fdrake

McTaggart's remarks concerning the A and B series are relevant here. The premises of the SB paradox are tenseless and so refer only to the B series, whereas Sleeping Beauty's question is tensed and so refers to her ability to correlate her A series with her B series.

The problem's premises can be written

[math]\begin{array} {|r|r|r|r|}\hline Sample\ Space & Awoken\ Mon & Awoken\ Tue & Joint\ Probability \\ \hline Heads & True & False & 1/2 \\ \hline Tails & True & True & 1/2 \\ \hline \end{array}[/math]

According to this parameterisation, it doesn't make sense to ask SB "what day is today?", for "today" isn't a random variable of the sample space. (Thirders implicitly ask this question.) But all that can be talked about, according to this parameterisation, is the state of the coin and whether SB is awoken on Monday and on Tuesday, not "what day is today?".

To drive home the point, suppose that if the coin lands tails then the experimenters change SB's location from London on Monday to Paris on Tuesday. In that case, we can eliminate the notions of time and date from the premises of the problem by changing "Monday" to "London" and "Tuesday" to "Paris".


Recall that McTaggart judged the A series to be "unreal" and contradictory. From the B series point of view, "today" is subjective and refers to SB's mental state, which comprises a variable that is separate and independent of the "calendar" events in the B series, such as waking a subject up in London and then in Paris. Under the assumption that SB undergoes amnesia, however, her mental state is uncorrelated with her understanding of B series events, and hence uncorrelated with the state of the coin.

fdrake May 25, 2023 at 10:55 #810567
Quoting sime
According to this parameterisation, it doesn't make sense to ask SB "what Day is Today?" for "Today" isn't a random variable of the sample-space. (Thirders implicitly ask this question). But all that can be talked about, according to this parameterisation, is the state of the coin and whether SB is awoken on both monday and tuesday, and not "what day is today?".


I see. This one is clear. I'll have a think about it.
PhilosophyRunner May 25, 2023 at 11:15 #810571
Isn't the confusion here in the ambiguity of the question put forward to sleeping beauty? Are not the two camps interpreting the question put to SB differently?

Camp 1 - The probability that the coin landed on heads. This is 1/2 for a fair coin regardless of anything else.

Camp 2 - The probability that the coin is showing heads on the day she awakes. This is 1/3 as it is a combination of the probability of the coin and the probability of her waking up. She is more likely to wake up and see a coin showing tails, as she will wake up more often if the coin lands on tails.

So perhaps SB should ask the experimenters to clarify what probability they are after, exactly.
Michael May 25, 2023 at 11:22 #810572
Quoting PhilosophyRunner
She is more likely to wake up and see a coin showing tails, as she will wake up more often if the coin lands on tails.


That’s a non sequitur.

That it happens more often isn’t that it’s more likely.
PhilosophyRunner May 25, 2023 at 11:39 #810574
Reply to Michael It isn't a non sequitur. The probability that she sees tails at the point she wakes up is partly dependent on how often she wakes up for each outcome.
Michael May 25, 2023 at 11:42 #810575
Reply to PhilosophyRunner It only depends on whether or not the single coin flip landed tails.

Imagine a different scenario. If I flip a coin 100 times and it lands heads every time I will wake you up a million times, otherwise I will wake you up once.

After waking up, is it more likely that I got heads 100 times in a row?
PhilosophyRunner May 25, 2023 at 12:12 #810578
Reply to Michael It is more likely that you wake up without having got heads 100 times in a row. The probability that heads lands 100 times in a row is about 8×10[sup]-31[/sup], while the probability that you wake up to see heads 100 times in a row is about 8×10[sup]-25[/sup].

So conditioning on your waking up still changes the probability, but since you are starting off with a very unlikely outcome, you still end up with an unlikely outcome.

The key point is that SB waking up is linked conditionally to the coin toss. They are not independent of each other.

Imagine a different extreme scenario. I flip a coin and if it lands heads I wake you up tomorrow, if it lands tails I never wake you up. If you wake up and are asked the probability the coin landed heads, what would you say?

In both your and SB case the waking up and coin toss are linked, just yours is in a more extreme way.
Michael May 25, 2023 at 12:29 #810580
Reply to PhilosophyRunner Then if it’s heads 100 times in a row I wake you up 2[sup]101[/sup] times, otherwise I wake you up once.

I don’t think it reasonable to then conclude, upon waking, that it is more likely that it landed heads 100 times in a row. The fact that you would be woken up far more times if it did happen just doesn’t make it more likely to have happened.

It is only reasonable to understand that it landing heads 100 times in a row is so unlikely that it almost certainly didn’t.

The only thing that matters is the coin flip(s). The rest is a distraction.
PhilosophyRunner May 25, 2023 at 12:32 #810581
Reply to Michael In that case it is more likely that, on any given instance of my waking up, I will see that the coin has been flipped heads 100 times in a row.

Could you address my counter extreme scenario that I proposed?

I flip a coin and if it lands heads I wake you up tomorrow, if it lands tails I never wake you up. If you wake up and are asked the probability the coin landed heads, what would you say?

The coin being flipped and you waking up are not independent in any of the scenarios. We are looking at conditional probabilities here.
Michael May 25, 2023 at 12:38 #810582
Quoting PhilosophyRunner
In that case it is more likely that given an instance I wake up I will see the coin has been flipped heads 100 times in a row.


I think the reasoning that leads you to this conclusion is clearly wrong, given that it’s an absurd conclusion.

Quoting PhilosophyRunner
I flip a coin and if it lands heads I wake you up tomorrow, if it lands tails you never wake you up. If you wake up and are asked the probability the coin landed heads, what would you say?


1.
PhilosophyRunner May 25, 2023 at 15:55 #810611
Quoting Michael
1.


And yet the probability of a heads being flipped is 0.5. So you see that the probability of you seeing a heads is conditional on the heads being flipped and the criteria for you waking up. In my example the criteria of waking up is 1 for heads and 0 for tails. In the SB problem it is 1 for heads and 2 for tails. The probability of you or SB seeing a heads is not 0.5 in either case.

Quoting Michael
I think the reasoning that leads you to this conclusion is clearly wrong, given that it’s an absurd conclusion.


I do not think it is an absurd conclusion at all. The setup in your example is absurd in the practical sense of waking someone 2[sup]101[/sup] times, but in order to explore your thought experiment we suspend that absurdity.
Michael May 25, 2023 at 16:00 #810612
Quoting PhilosophyRunner
In the SB problem it is 1 for heads and 2 for tails.


No it's not. It's 1 for heads and 1 for tails. A probability of 2 makes no sense.
PhilosophyRunner May 25, 2023 at 16:22 #810615
Reply to Michael I was talking about frequency not probability.
Michael May 25, 2023 at 16:23 #810616
Quoting PhilosophyRunner
I was talking about frequency not probability.


And my first comment to you was literally "that it happens more often isn’t that it’s more likely", i.e. that it's more frequent isn't that it's more probable.

Waking on tails is twice as frequent but equally probable.
PhilosophyRunner May 25, 2023 at 16:24 #810617
Reply to Michael I am saying you are wrong. My example, where you wake up once for heads and never for tails, shows that the probability of you seeing heads when you wake up is conditional on how often you wake up for heads and how often for tails.

There is an above post by fdrake that sets this out more clearly in a table for the SB example.
Michael May 25, 2023 at 16:27 #810618
Quoting PhilosophyRunner
the probability of you seeing heads when you wake up is conditional on how often you wake up for heads and how often for tails


No, the probability of you seeing heads when you wake up is conditional on how likely you wake up for heads and for tails, not on how often you wake up.

In your case, the reason it matters is because the probability of waking on heads is 1 and tails is 0.

In the ordinary case it doesn't matter, because the probability of waking on heads is 1 and tails is 1.
PhilosophyRunner May 25, 2023 at 16:29 #810619
Reply to Michael The conditional probability is dependent on the frequency in this case. Because SB wakes up more on tails, a given wake-up event is more likely to be caused by a tails flip than a heads flip.

1/3 of wake-up events are caused by a heads flip, 2/3 by a tails flip. So the conditional probability is influenced by the frequency.
Michael May 25, 2023 at 16:31 #810620
Quoting PhilosophyRunner
Because SB wakes up more on tails, a given wake up event is more likely to be caused by a tail flip that a head flip.


And that's the non sequitur.

That I would wake up more often if the coin lands heads 100 times in a row isn't that, upon waking, it is more likely that the coin landed heads 100 times in a row.
PhilosophyRunner May 25, 2023 at 16:35 #810622
Reply to Michael That is how the conditional probability works in this instance. If I am SB and I wake up, I know it could be (Heads and Monday), (Tails and Monday), (Tails and Tuesday) all with equal probability. The probability that it is heads is therefore 1/3 and tails 2/3.

There are 3 possible wake-up events with equal probability, and only 1 of them is heads.
fdrake May 25, 2023 at 16:53 #810626
Quoting sime
Under the assumption of SB undergoing amnesia however, her mental state is uncorrelated with her understanding of B series events, and hence uncorrelated with the state of the coin.


I'm basing this on the table you've given me, I think.

I'm not sure this is true. Here are my thoughts on it. If SB being awoken were independent of the state of the coin, you'd expect the probability that she is awake on Monday, and on Tuesday, to be independent of the state of the coin. By independent there I mean "joint probability is equal to the product of the marginals".

The probability that she is awake on Monday would be 1; considered as a random variable by itself it would be a constant ("Awake Mon = True"), and so independent of any other random variable. The probability that she is awake on Tuesday, however, is just a relabelling of the coin flip. Specifically, the two types of events (Awake on Tuesday) and (Coinflip) are a measurable function f of each other:

Awake on Tuesday=f(Tails)
Asleep on Tuesday=f(Heads)

Which means, since compositions of measurable functions are measurable (I believe), that whether she is awake on Tuesday is a random variable whenever the coin flip is.

Explicitly, in terms of your table, it looks like this:

(Awake on Tuesday, Awake on Monday) = g(Tails)
(Asleep on Tuesday, Awake on Monday) = g(Heads)

Where the function maps the coin flip results to the awake/asleep bit of the table. They're not independent, one totally determines the other.

I appreciated your example, because it's getting at something which seems fundamental about the problem - our intuitions seem to suggest not every normalised measurable function (on a space of events) can be considered a random variable. But that's just what they are mathematically.
Michael May 25, 2023 at 17:24 #810633
Reply to fdrake

What do you make of this?

if it’s heads 100 times in a row I wake you up 2[sup]101[/sup] times, otherwise I wake you up once


You know the experiment is only being run one time.

When you wake up, do you follow thirder reasoning and argue that it is more probable that the coin landed heads 100 times in a row?

Or do you follow halfer reasoning and argue that the number of times you would wake up were it to land heads 100 times in a row is irrelevant, and that it almost certainly didn't land heads 100 times in a row?
fdrake May 25, 2023 at 17:30 #810634
Reply to Michael

I'll analyse that case if you can describe it very specifically. Like in the OP.
Michael May 25, 2023 at 17:48 #810643
Quoting fdrake
I'll analyse that case if you can describe it very specifically. Like in the OP.


I don't know how to explain it any simpler than the above. It's exactly like the traditional experiment, but rather than two interviews following from a [math]P = {1\over2}[/math] coin toss it's 2[sup]101[/sup] interviews following from a [math]P = {1\over{2^{100}}}[/math] coin toss (with just 1 interview otherwise).

Thirder reasoning would entail that, after waking, it is more likely that the coin landed heads 100 times in a row, and I think that's an absurd conclusion.

I would say that it doesn't matter how many times you will wake me if the coin lands heads 100 times in a row. When I wake up the only reasonable conclusion is that it almost certainly didn't land heads 100 times in a row.
fdrake May 25, 2023 at 17:53 #810645
Reply to Michael

Can you write me it in the style of the OP? No variables, just when the coin flips happen, when sleeping beauty would wake up etc.
Michael May 25, 2023 at 18:00 #810647
Reply to fdrake

Sleeping Beauty is put to sleep and a coin is tossed 100 times. If it lands heads every time then she is woken up, interviewed, and put back to sleep 2[sup]101[/sup] times, otherwise she is woken up, interviewed, and put back to sleep once.

When being interviewed, is her credence that the coin landed heads 100 times in a row greater than her credence that it didn’t?

If we accept thirder reasoning then it is, which I think is wrong. It is a mistake to use the number of times that she would wake up were it to land heads 100 times in a row to determine the probability that it did land heads 100 times in a row.

Any reasonable Sleeping Beauty would understand that it almost certainly didn’t land heads 100 times in a row, and so that her current interview is almost certainly her first and only.

Whichever reasoning applies to this experiment must also apply to the traditional experiment.
fdrake May 25, 2023 at 23:26 #810743
Quoting Michael
Sleeping Beauty is put to sleep and a coin is tossed 100 times. If it lands heads every time then she is woken up, interviewed, and put back to sleep 2[sup]101[/sup] times, otherwise she is woken up, interviewed, and put back to sleep once.


If I understand right, if the coin is heads 100 times, she wakes up on Monday and is not woken up on Tuesday. If the coin is not heads 100 times, she wakes up on Monday and Tuesday? Then the experiment ends.
fdrake May 26, 2023 at 00:59 #810775
@Michael - you ruined my mind again god damnit.

Edit: now I'm a halfer. Damnit it's late.
Michael May 26, 2023 at 07:58 #810824
Quoting fdrake
If I understand right, if the coin is heads 100 times, she wakes up on Monday and is not woken up on Tuesday. If the coin is not heads 100 times, she wakes up on Monday and Tuesday? Then the experiment ends.


I don't think we need to worry about days. The traditional experiment can be simply stated as: if tails, two interviews, otherwise one interview. In my experiment it is: if 100 heads, 2[sup]101[/sup] interviews, otherwise one interview.

But if thinking about it in days helps then: if the coin is heads 100 times, she wakes up on Day 1, Day 2, Day 3, ..., and Day 2[sup]101[/sup]. If the coin is not heads 100 times, she wakes up on Day 1. Then the experiment ends.

What I think this variation shows is that it is wrong to determine the probability by imagining that we randomly select from the set of all possible interviews (weighted by their probability), and then "dropping" Sleeping Beauty into that interview. The experiment just doesn't work that way; it works by tossing a coin 100 times and then waking her up. I think it's clear at a glance that these will give different results, and I think that the second is the correct one, even from Sleeping Beauty's perspective.

So I suppose this is a reductio ad absurdum against the self-indication assumption that guides thirder reasoning.

Quoting fdrake
Michael - you ruined my mind again god damnit.


You're welcome. ;)
PhilosophyRunner May 26, 2023 at 12:18 #810852
Reply to Michael

Can this not be experimentally validated using a simulation? Write a computer program simulating SB and the experiment. Run the simulation 1 million times. Each time SB wakes up make a note of whether the coin was seen on heads or tails.

Will the proportions of heads and tails seen on waking up be 1/2 and 1/2, or 1/3 and 2/3?

Surely someone must have thought of doing this? I might have a search.
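A minimal sketch of such a simulation in Python (my own code, using one obvious way of scoring it) could be:

[code]
# Sketch of the proposed simulation: tally the coin's face at every awakening.
import random

heads_awakenings = 0
tails_awakenings = 0

for _ in range(1_000_000):
    coin = random.choice(["heads", "tails"])
    awakenings = 1 if coin == "heads" else 2  # heads: woken once; tails: woken twice
    for _ in range(awakenings):
        if coin == "heads":
            heads_awakenings += 1
        else:
            tails_awakenings += 1

total = heads_awakenings + tails_awakenings
print(heads_awakenings / total)  # comes out near 1/3
print(tails_awakenings / total)  # comes out near 2/3
[/code]

Of course, whether this settles anything is exactly what is in dispute: the tally here counts once per awakening, which is the thirder's bookkeeping; counting once per run of the experiment gives 1/2 instead.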
Michael May 26, 2023 at 13:24 #810860
Reply to PhilosophyRunner

The simplest "experiment" is just to imagine yourself in Sleeping Beauty's shoes. You know that if the coin lands heads 100 times then you will be interviewed 2[sup]101[/sup] times, otherwise you will be interviewed once, and you know that the experiment will not be repeated.

After being woken up, which of these is the most rational consideration?

1. The coin almost certainly didn't land heads 100 times, and so this is most certainly my first and only interview, or

2. If this experiment was repeated 2[sup]100[/sup] times then the total number of interviews after the coin landed heads 100 times is greater than the total number of interviews after it didn't, and so if I was to pick an interview at random from that set then there is a greater probability that that interview would have followed the coin landing heads 100 times.

I think the first is the most (and only) rational consideration.

Your proposed simulation would certainly prove that the second is true (although the math alone is enough to prove it), but given that the experiment isn't conducted by picking an interview at random from that set and dropping Sleeping Beauty into it, it's also irrelevant.
Srap Tasmaner May 27, 2023 at 01:38 #810969
What should SB's credence be that this is her first (or second) interview?

Any thoughts?
Pierre-Normand May 27, 2023 at 06:23 #810985
Quoting Michael
The simplest "experiment" is just to imagine yourself in Sleeping Beauty's shoes.


Wasn't that rather the Cinderella problem?

You're inviting us to imagine ourselves in Sleeping Beauty's shoes to support the halfer position. However, there are other ways to imagine her situation that can be argued to support the thirder position. Consider the following thought experiment:

Suppose we introduce a character, Sue, who signs up for a lottery study. She joins a waitlist where participants are selected one or two at a time based on a coin toss: two participants when it lands heads and one when it lands tails. Upon being selected, they're given the option to buy a ticket for $100 that could potentially be worth $180 if the coin had landed heads (or else is worth nothing).

The expected value of the ticket, and whether Sue should purchase it, depends on her credence about the coin toss. If Sue holds the halfer position, her credence is 1/2, and the expected value of the ticket is $90. Hence, she shouldn't buy it. However, if Sue holds the thirder position, her credence in the proposition that the coin landed heads is 2/3, making the ticket's expected value $120. Hence, she should buy it.

Sue could argue for the thirder position as follows: if she has been selected from the waiting list, it is twice as likely that she has been chosen (together with another participant) as a result of the coin landing heads. As a frequent participant in the study, Sue would find, over time, that she profits if she always buys the ticket (an average net gain of $20 per participation), which corroborates the thirder position.

To make this scenario more directly analogous to the original problem, let's imagine that Sleeping Beauty, upon each awakening, can not only express her belief about the coin toss but also place a bet on it. In the long run, she would profit from taking the bet as a thirder, further reinforcing this position.

The pitfall of the 'imagine-yourself-in-her-shoes' argument lies in conflating Sue's perspective with the experimenter's by focusing only on Sue's situation before the coin toss. Eventually, everyone on the waitlist will be selected, just as Sleeping Beauty is guaranteed to be awoken at least once. Her credence that the coin will land heads is 1/2 before being put to sleep, and the same is true for the study participants before they're selected. However, once the coin has been tossed and they've been chosen, their credence about the value of their tickets being $180 (and that the coin landed on heads) should be updated to 2/3. The same applies to Sue's credence that her current awakening was due to a coin landing heads up.
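Spelling out the expected-value arithmetic behind those two figures (the ticket pays $180 on heads and nothing on tails, against a $100 price):

[math]\begin{aligned}EV_{halfer} &= {1\over2}*180 + {1\over2}*0 = 90\\EV_{thirder} &= {2\over3}*180 + {1\over3}*0 = 120\end{aligned}[/math]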
Michael May 27, 2023 at 07:10 #810992
Quoting Pierre-Normand
You're inviting us to imagine ourselves in Sleeping Beauty's shoes to support the halfer position.


Well yes. The very question posed by the problem is “what is Sleeping Beauty’s credence that the coin landed heads?”, or in my version “what is Sleeping Beauty’s credence that the coin landed heads 100 times in a row?”

Quoting Pierre-Normand
To make this scenario more directly analogous to the original problem, let's imagine that Sleeping Beauty, upon each awakening, can not only express her belief about the coin toss but also place a bet on it. In the long run, she would profit from taking the bet as a thirder, further reinforcing this position.


Being able to bet twice if it lands tails, and so make more money, doesn’t make it more likely that it landed tails; it just means you get to bet twice.

You might as well just say: you can place a £1 bet on a coin toss. If you correctly guess heads you win £1; if you correctly guess tails you win £2.

Obviously it’s better to bet on tails, but not because tails is more probable.

Perhaps this is more evident with my extreme example. Place a bet on each interview whether or not the coin landed heads 100 times in a row. In the long run (e.g. after ~2[sup]100[/sup] experiments) you’ll start making money, but it just doesn’t follow that it having landed heads 100 times in a row is more probable.

And also the premise of the problem is that the experiment is just run once.
Pierre-Normand May 27, 2023 at 07:15 #810993
This is a follow-up to my previous post.

Quoting Michael
How do you condition on such a thing? What values do you place into Bayes' theorem?

P(Heads|Questioned) = P(Questioned|Heads) * P(Heads) / P(Questioned)


In the case of Sue's selection to participate in the lottery study, we have

P(Heads|Selected) = P(Selected|Heads) * P(Heads) / P(Selected)

Since on each fair coin toss, 1.5 participants are being selected on average and when the coin lands on heads 2 participants are selected, P(Selected|Heads) / P(Selected) is 2/1.5 = 4/3.

P(Heads|Selected) therefore is 4/3 * P(Heads) = (4/3)*(1/2) = 2/3

Likewise, in the case of Sleeping Beauty

P(Heads|Awoken) =(P(Awoken|Heads) / P(Awoken)) * P(Heads) = 2/1.5 * 1/2 = 2/3
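One way to unpack that 2/1.5 factor (this is just one reading of the ratio) is to treat P(Awoken|·) as a per-day probability: 1 on heads (awoken both days, under the convention used in these posts) and 1/2 on tails (one day of two). Expanding the denominator by total probability then gives the same figure:

[math]\begin{aligned}P(Heads | Awoken) &= {{P(Awoken | Heads) * P(Heads)} \over {P(Awoken | Heads) * P(Heads) + P(Awoken | Tails) * P(Tails)}}\\&={{1*{1\over2}}\over{1*{1\over2}+{1\over2}*{1\over2}}}\\&={{1\over2}\over{3\over4}}\\&={2\over3}\end{aligned}[/math]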
Michael May 27, 2023 at 07:17 #810994
Reply to Pierre-Normand I think your numbers there are wrong. See this.
Michael May 27, 2023 at 07:23 #810996
Quoting Pierre-Normand
P(Selected|Heads) / P(Selected) is 2/1.5 = 4/3.


Also this makes no sense. You can't have a probability of 2.
Pierre-Normand May 27, 2023 at 07:26 #810998
Quoting Michael
Being able to bet twice if it lands tails, and so make more money, doesn’t make it more likely that it landed tails; it just means you get to bet twice.

You might as well just say: you can place a £1 bet on a coin toss. If you correctly guess heads you win £1; if you correctly guess tails you win £2.

Obviously it’s better to bet on tails, but not because tails is more probable.


It makes it twice as likely that individual bets are winning bets. Right? Likewise in Sleeping Beauty's problem, the fact that she is being awoken twice when the coin lands heads makes it more likely that a randomly selected awakening is the result of a coin having landed heads. When she is afforded the opportunity to make one single bet on any given awakening, her expected value when making this bet is conditioned on the probability that this particular awakening is the result of the coin having landed heads. Do you agree that her expected value for this single bet (in my scenario) is $120? If she would rather judge that the probability of the coin having landed heads is 1/2, she should expect the expected value of her bet to be $90 and would be rationally justified in declining to wager $100.
Michael May 27, 2023 at 07:28 #810999
Quoting Pierre-Normand
It makes it twice as likely that individual bets are winning bets. Right? Likewise in Sleeping Beauty's problem, the fact that she is being awoken twice when the coin lands heads makes it more likely that a randomly selected awakening is the result of a coin having landed heads.


This is where I think my extreme example is helpful. Place a bet on each interview whether or not the coin landed heads 100 times in a row. In the long run (e.g. after ~2[sup]100[/sup] experiments) you’ll start making money, but it just doesn’t follow that it having landed heads 100 times in a row is more probable.

And also the premise of both the original and my problem is that the experiment is just run once.

So this goes back to what I said above:

After being woken up, which of these is the most rational consideration?

1. The coin almost certainly didn't land heads 100 times, and so this is most certainly my first and only interview, or

2. If this experiment was repeated 2[sup]100[/sup] times then the total number of interviews after the coin landed heads 100 times is greater than the total number of interviews after it didn't, and so if I was to pick an interview at random from that set then there is a greater probability that that interview would have followed the coin landing heads 100 times.

I think the first is the most (and only) rational consideration.

[Although] the second is true ... given that the experiment isn't conducted by picking an interview at random from that set and dropping Sleeping Beauty into it, it's also irrelevant.
Pierre-Normand May 27, 2023 at 07:30 #811000
Quoting Michael
Also this makes no sense. You can't have a probability of 2.


This is not a probability. It's a ratio of probabilities that I have expressed as a ratio of corresponding frequencies. The updated probability P(Heads|Awoken) is 2/3. The quoted ratio being larger than one just reflects the fact that Bayesian updating results in a probability increase in this case.
Michael May 27, 2023 at 07:33 #811001
Quoting Pierre-Normand
This is not a probability. It's a ratio of probabilities. The updated probability P(Heads|Awoken) is 2/3. The quoted ratio being larger than one just reflects the fact that Bayesian updating results in a probability increase in this case.


My mistake. I think your example here is the same as the example I posted at the start?

As I later showed here, it provides a different answer to the original problem.
Pierre-Normand May 27, 2023 at 07:51 #811003
Quoting Michael
I think you numbers there are wrong. See this.


In the quoted post you say: "P(Awake|Heads) is just the prior probability that she will be woken up if the coin lands heads"

I think my lottery study analogy suggests a better interpretation of the P(Awoken|Heads)/P(Awoken) Bayesian updating factor. But I must go to sleep now. Thanks for engaging! I'll be back with equal probability on one of my next two awakenings.
Michael May 27, 2023 at 08:05 #811004
I think there are two different questions with two different answers:

1. If the experiment is run once, what is Sleeping Beauty's credence that the coin landed heads 100 times in a row?

2. If the experiment is repeated 2[sup]100[/sup] times, what is the probability that a randomly selected interview from the set of all interviews followed the coin landing heads 100 times in a row?

I don't think it's rational for Sleeping Beauty to use the answer to the second question to answer the first question. I think it's only rational for Sleeping Beauty's credence that the coin landed heads 100 times in a row to be [math]1\over2^{100}[/math].

And so too with the original problem:

1. If the experiment is run once, what is Sleeping Beauty's credence that the coin landed heads?

2. If the experiment is repeated several times, what is the probability that a randomly selected interview from the set of all interviews followed the coin landing heads?

Thirders answer the second question, which I believe is the wrong answer to the first question.
fdrake May 27, 2023 at 08:12 #811006
Quoting Michael
I don't think it's rational for Sleeping Beauty to use the answer to the second question to answer the first question. I think it's only rational for Sleeping Beauty's credence that the coin landed heads 100 times in a row to be


The question which has been eating me is "What is the probability of the day being Tuesday?". I think it's necessary to be able to answer that question for the thirder position. But I've not found a way of doing it yet that makes much sense. Though I'm sure there is a way!
Michael May 27, 2023 at 08:14 #811007
Quoting fdrake
What is the probability of the day being Tuesday?


Maybe this Venn diagram helps?

Of course, this is from the experimenter's perspective, not Sleeping Beauty's, but it might help all the same.
Pierre-Normand May 27, 2023 at 08:26 #811008
(I woke up early)

Quoting fdrake
The question which has been eating me is "What is the probability of the day being Tuesday?". I think it's necessary to be able to answer that question for the thirder position. But I've not found a way of doing it yet that makes much sense. Though I'm sure there is a way!


P(Tuesday|Awoken) = (P(Awoken|Tuesday) / P(Awoken)) * P(Tuesday)

Sleeping Beauty is awoken with probability 3/4 on an average day (Monday or Tuesday). On Tuesdays, she is awoken with P = 1/2. Therefore, P(Awoken|Tuesday) / P(Awoken) = (1/2)/(3/4) = 2/3.

This (2/3) is the Bayesian updating factor. The unconditioned probability of the day being Tuesday is 1/2. The updated probability therefore is P(Tuesday|Awoken) = (2/3)*(1/2) = 1/3, as expected.
Michael May 27, 2023 at 08:28 #811009
Quoting Pierre-Normand
P(Tuesday|Awoken) = (P(Awoken|Tuesday) / P(Awoken)) * P(Tuesday)

Sleeping Beauty is awoken with probability 3/4 on an average day (Monday or Tuesday). On Tuesdays, she is awoken with P = 1/2. Therefore, P(Awoken|Tuesday) / P(Awoken) = (1/2)/(3/4) = 2/3.

This (2/3) is the Bayesian updating factor. The unconditioned probability of her being awoken on Tuesday is 1/2. The updated probability therefore is P(Tuesday|Awoken) = (2/3)*(1/2) = 1/3, as expected.


I don't think it correct to say P(Awake) = 3/4. P(Awake) is just the probability that she will be woken up, which is 1.

This is clearer if we forget the days. It is just the case that if it is heads then she is woken up once and if it is tails then she is woken up twice. It doesn't make sense to say that she's not awake for her second awakening if heads.
fdrake May 27, 2023 at 08:33 #811011
Reply to Michael

I'll illustrate my reasoning about the days.

There's an issue in calculating the marginal distribution over the days for the thirder. I reasoned something like this: if you're SB, you can wake up once or twice per week. If you've woken up once per week, it seems to make sense to say that the probability that you woke up on a Monday is 1. If you've woken up twice per week... well, what's the probability that you woke up on Monday? You'd wake up "half the time" on Monday and "half the time" on Tuesday, but those "half the times" would then need to apply to the case of the flip being tails.

That would give the probability of monday as P(monday and heads) + P(monday and tails), which equals P(monday|heads)P(heads)+P(monday|tails)P(tails).

P(heads) and P(tails) for the fair coin would be 0.5, as in the prior. But then I'm needing to calculate P(monday|heads). I could say that's 1. P(monday|tails) - I'd say that's 0.5. Which would give the probability of Monday as 0.5+0.5*0.5=0.75?

P(Monday)+P(Tuesday) I think should sum to 1, since they're mutually exclusive and exhaustive events. So P(Tuesday) should be 0.25.
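In symbols, the calculation just described is:

[math]\begin{aligned}P(Monday) &= P(Monday | Heads) * P(Heads) + P(Monday | Tails) * P(Tails)\\&=1*{1\over2}+{1\over2}*{1\over2}\\&={3\over4}\end{aligned}[/math]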

Marginalising over awakenings and days to get a posterior probability of heads given awakening should behave like marginalising over the number of awakenings per week. As @Pierre-Normand points out, P(Awoken) (whatever event or random variable Awoken is) should look like 0.75 for the thirder position. But I'm also thinking it looks like P(Monday) is 0.75 after you take into account the impact heads and tails have on the day frequencies.

Which isn't to say P(Monday) = 0.75. It's to say something funny goes on, as saying "The probability that I'm awoken on a Monday is equal to the probability that I'm awoken at all" makes little sense.
Pierre-Normand May 27, 2023 at 08:37 #811012
Quoting Michael
I don't think it makes sense to say P(Awake) = 3/4. P(Awake) is just the probability that she will be woken up, which is 1.


But I'm not saying that. What I'm saying is that she is awoken every Monday and she is awoken half the time on Tuesdays. So, on average, on a random day, she is awoken 3/4 of the time. I am then using Bayes' theorem to deduce the probability of a random awakening having occurred on a Tuesday. But the result was rather trivial.
Michael May 27, 2023 at 08:42 #811013
Reply to Pierre-Normand So how would your reasoning work for this situation?

Let's say that there are three beauties: Michael, Jane, and Jill. They are put to sleep and assigned a random number from {1, 2, 3}.

If the coin lands heads then 1 is woken on Monday. If the coin lands tails then 2 is woken on Monday and 3 is woken on Tuesday.


My reasoning is that P(Awake) = 1/2 given that there are 6 possible outcomes and I will be awake if one of these is true:

1. Heads and I am 1
2. Tails and I am 2
3. Tails and I am 3

My reasoning is that P(Awake | Heads) = 1/3 given that if it is heads I will only be awake if I am number 1.

This gives the correct Bayes' theorem calculation:

[math]\begin{aligned}P(Heads | Awake) &= {{P(Awake | Heads) * P(Heads)} \over P(Awake)}\\&={{{1\over3}*{1\over2}}\over{1\over2}}\\&={1\over3}\end{aligned}[/math]
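A quick simulation sketch (Python, mine; it just follows the setup above from one fixed participant's point of view) agrees with those numbers:

[code]
# Sketch: estimate P(Awake) and P(Heads | Awake) for one fixed participant.
# Setup: heads wakes number 1; tails wakes numbers 2 and 3.
import random

trials = 1_000_000
awake_count = 0
awake_and_heads = 0

for _ in range(trials):
    my_number = random.randint(1, 3)        # my randomly assigned number
    coin = random.choice(["heads", "tails"])
    if coin == "heads":
        awake = (my_number == 1)
    else:
        awake = (my_number in (2, 3))
    if awake:
        awake_count += 1
        if coin == "heads":
            awake_and_heads += 1

print(awake_count / trials)           # comes out near 1/2, i.e. P(Awake)
print(awake_and_heads / awake_count)  # comes out near 1/3, i.e. P(Heads | Awake)
[/code]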
Michael May 27, 2023 at 08:49 #811015
Quoting Pierre-Normand
I am then using Bayes' theorem to deduce the probability of a random awakening having occurred on a Tuesday.


Then this goes back to what I said above. These are two different questions with, I believe, two different answers:

1. If the experiment is run once, what is Sleeping Beauty's credence that the coin landed heads?

2. If the experiment is repeated several times, what is the probability that a randomly selected interview from the set of all interviews followed the coin landing heads?

Thirders answer the second question, which I believe is the wrong answer to the first question. The experiment doesn't work by repeating the experiment several times, randomly selecting an interview from the set of all interviews, and then dropping Sleeping Beauty into it.

This is most clear with my extreme example of 2[sup]101[/sup] interviews following a coin toss of 100 heads in a row. Any random interview selected from the set of all interviews is most likely to have followed 100 heads in a row. But when we just run the experiment once, it is most likely that the coin didn't land 100 heads in a row, and so Sleeping Beauty's credence should only reflect this fact.
Pierre-Normand May 27, 2023 at 08:54 #811016
Quoting Michael
My reasoning is that P(Awake) = 0.5 given that there are 6 possible outcomes and I will be awake if one of these is true:

1. Heads and I am 1
2. Tails and I am 2
3. Tails and I am 3


If you mean P(Awake) to refer to the probability of your being awakened at all (on at least one day) then P(Awake) is indeed 0.5. But in that case P(Awake|Heads) should, consistently with this interpretation, refer to your being awakened at all conditioned on the case where the coin landed heads. This is (1/3+2/3)/2 = 0.5
Michael May 27, 2023 at 08:56 #811017
Quoting Pierre-Normand
But in that case P(Awake|Heads) should, consistently with this interpretation, refer to your being awakened at all conditioned on the case where the coin landed heads. This is (1/3+2/3)/2 = 0.5


Me being awakened at all conditioned on the case where the coin lands heads is 1/3, given that if it lands heads then I am only woken up if I was assigned the number 1.

Quoting Pierre-Normand
If you mean P(Awake) to refer to the probability of your being awakened at all (on at least one day) then P(Awake) is indeed 0.5.


And it's the same in the normal case. The probability of being awakened at all (on at least one day) is 1. That's what should be used in Bayes' theorem.
fdrake May 27, 2023 at 09:00 #811018
Quoting Michael
Then this goes back to what I said above. These are two different questions with, I believe, two different answers:

1. If the experiment is run once, what is Sleeping Beauty's credence that the coin landed heads?

2. If the experiment is repeated several times, what is the probability that a randomly selected interview from the set of all interviews followed the coin landing heads?


I agree those are different btw. They describe completely different approaches to modelling the problem. That doesn't immediately tell us which SB ought to model the situation as, or whether they're internally coherent.
Pierre-Normand May 27, 2023 at 09:02 #811019
Quoting Michael
1. If the experiment is run once, what is Sleeping Beauty's credence that the coin landed heads?

2. If the experiment is repeated several times, what is the probability that a randomly selected interview from the set of all interviews followed the coin landing heads?

Thirders answer the second question, which I believe is the wrong answer to the first question. The experiment doesn't work by randomly selecting an interview from a set of interviews after repeating the experiment several times and then dropping Sleeping Beauty into it.


I would rather say that the experiment works by ensuring that Sleeping Beauty finds herself being awoken in circumstances that she knows to be twice as likely to occur (because twice as frequent) as a result of a coin having landed heads than as a result of this coin having landed tails. This formulation entails no metaphysical queerness.
Pierre-Normand May 27, 2023 at 09:05 #811020
Quoting fdrake
They describe completely different approaches to modelling the problem. That doesn't immediately tell us which SB ought to model the situation as, or whether they're internally coherent.


One clue to this is to let SB bet on the outcome that her credence is about and see if her betting behavior leads her to realize the EV she is anticipating.
fdrake May 27, 2023 at 09:06 #811021
Quoting Pierre-Normand
One clue to this is to let SB bet on the outcome that her credence is about and see if her betting behavior leads her to realize the EV she is anticipating.


I never buy betting arguments unless the random variables are set up!
Pierre-Normand May 27, 2023 at 09:07 #811022
Quoting fdrake
I never buy betting arguments unless the random variables are set up!


They are!
fdrake May 27, 2023 at 09:08 #811024
Reply to Pierre-Normand

Can you put them to me in excruciating detail please?
Michael May 27, 2023 at 09:10 #811025
Quoting Pierre-Normand
One clue to this is to let SB bet on the outcome that her credence is about and see if her betting behavior leads her to realize the EV she is anticipating.


In my extreme example she wins in the long run (after 2[sup]100[/sup] experiments) by betting on the coin landing heads 100 times in a row.

It doesn't then follow that when the experiment is run once that Sleeping Beauty's credence that the coin landed heads 100 times in a row should be greater than her credence that it didn't.

As I have said before, that she can bet more times if one outcome happens just isn't that that outcome is more probable.

Quoting Pierre-Normand
I would rather say that the experience works by ensuring that Sleeping Beauty finds herself being awoken in circumstances that she knows to be twice as likely to occur (because twice a frequent) as a result of a coin having landed heads than as a result of this coin having landed tails.


Then you have to say the same about my extreme example; that even when she knows that the experiment is only being run once, Sleeping Beauty's credence that the coin landed heads 100 times in a row should be greater than her credence that it didn't.

And I think that's an absurd conclusion, showing that your reasoning is faulty.
Pierre-Normand May 27, 2023 at 09:22 #811026
There is a space of possible awakening/interview events A that are being characterised by the day in which they occur ((M)onday or (T)uesday) and by the state of a coin that has been flipped prior to them occurring ((H)eads or (T)ails). P(H) = P(T) = 0.5. The rest of the conditional dependencies are part of the stipulation of the problem or can be inferred from them.
Michael May 27, 2023 at 09:28 #811028
Quoting Pierre-Normand
There is a space of possible awakening/interview events A that are being characterised by the day in which they occur ((M)onday or (T)uesday) and by the state of a coin that has been flipped prior to them occurring ((H)eads or (T)ails). P(H) = P(T) = 0.5.


There's actually two spaces. See here.
Pierre-Normand May 27, 2023 at 09:51 #811029
Quoting Michael
Then you have to say the same about my extreme example. Even when she knows that the experiment is only being run once, Sleeping Beauty's credence that the coin landed heads 100 times in a row is greater than her credence that it didn't.

And I think that's an absurd conclusion, showing that your reasoning is false.


I'm not sure why you think this is absurd. Compare again my lottery study example. Suppose there are one billion people on the waiting list. If a coin lands heads 20 times in a row, then 100 million people get pulled from the list. Else, one single person gets pulled from the list. I am then informed that I got pulled from the list (but not whether I am alone or one of 100 million). Is it absurd for me to believe that the coin landed heads 20 times in a row? My credence in this proposition should be roughly 99%.
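A rough check of that 99% figure, on the counting being used here (a pulled person is taken to be proportionally more likely to have come from the larger pull; a 2[sup]-20[/sup] chance of pulling 10[sup]8[/sup] people, otherwise one person pulled):

[math]\begin{aligned}P(20\ heads\ |\ pulled) &\approx {{2^{-20}*10^8} \over {2^{-20}*10^8 + (1-2^{-20})*1}}\\&\approx {95.4 \over 96.4}\\&\approx 0.99\end{aligned}[/math]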

In Sleeping Beauty's case, your intuition that it is absurd for her to have a high credence in the sequence of heads apparently stems from your unwillingness to contemplate the possibility of her being able to update it on the basis of her knowledge that she might be awoken an even more absurdly large number of times as a consequence of this very unlikely event. The need for her to update her credence upon being awakened stems from her lack of power to distinguish those events in respect of their causes (i.e. the coin flip results). Another reason why you are neglecting her need to update her credence might be that you characterize the experiment that is being run once as starting with a sequence of coin flips. But from Sleeping Beauty's perspective, it really begins with a random awakening.
Michael May 27, 2023 at 09:56 #811030
Quoting Pierre-Normand
I'm not sure why you think this is absurd. Compare again my lottery study example. Suppose there are one billion people on the waiting list. If a coin lands heads 20 times in a row, then 100 million people get pulled from the list. Else, one single person gets pulled from the list. I am then informed that I got pulled from the list (but not whether I am alone or one of 100 million). Is it absurd for me to believe that the coin landed heads 20 times in a row? My credence in this proposition should be roughly 99%.


The difference is that the unconditional probability of being called up is very low, and so just being called up at all affects one's credence. In the Sleeping Beauty case (both the normal and my extreme version), she's guaranteed to be woken up either way.

Quoting Pierre-Normand
In Sleeping Beauty's case, your intuition that it is absurd for her to have a high credence in the sequence of heads apparently stems from your unwillingness to contemplate the possibility of her being able to update it on the basis of her knowledge that she might be awoken an even more absurdly large number of times as a consequence of this very unlikely event.


She's certainly able to update it on the basis of her knowledge that she might be awoken an even more absurdly large number of times as a consequence of this very unlikely event. I'm saying that it's irrational of her to.

The only rational approach, upon waking, is to recognize that it landing heads 100 times in a row is so unlikely that it almost certainly didn't, and that this is her first and only interview.
Pierre-Normand May 27, 2023 at 09:57 #811031
Quoting Michael
There's actually two spaces. See here.


Yes, I agree with your representation.
Pierre-Normand May 27, 2023 at 10:04 #811032
Quoting Michael
The difference is that the unconditional probability of being called up is very low, and so just being called up at all affects one's credence. In the Sleeping Beauty case (both the normal and my extreme version), she's guaranteed to be awoken either way.


I can easily adjust my lottery study example such that I am guaranteed to be selected but, once selected, the (unconditionally) very unlikely event that led to my selection (alongside a very large number of people) is still more likely than not to have been the cause of my selection. All that is needed is to shorten the waitlist by about 98%.
Michael May 27, 2023 at 10:08 #811033
Regarding betting, expected values, and probability:

Rather than one person repeat the experiment 2[sup]100[/sup] times, the experiment is done on 2[sup]100[/sup] people, with each person betting that the coin will land heads 100 times in a row. 2[sup]100[/sup] - 1 people lose, and 1 person wins, with the winner's winnings exceeding the sum of the losers' losses. The expected value is greater than the cost, but the probability of being a winner is still [math]1\over2^{100}[/math].

Even though I could win big, it is more rational to believe that I will lose.

It is a non sequitur to claim that a greater EV means a greater probability.
Pierre-Normand May 27, 2023 at 10:12 #811034
Quoting Michael
She's certainly able to update it on the basis of her knowledge that she might be awoken an even more absurdly large number of times as a consequence of this very unlikely event. I'm saying that it's irrational of her to.

The only rational approach, upon waking, is to recognize that it landing heads 100 times in a row is so unlikely that it almost certainly didn't, and that this is her first and only interview.


She does recognize that for the coin to land heads 100 times in a row is unconditionally unlikely. But why would it not be rational for her to condition her credence in the truth of this proposition on her knowledge that her awakening isn't an event that she can distinguish in respect of its cause, and that a very large number of such indistinguishable awakening events can stem from such an unlikely cause?
Pierre-Normand May 27, 2023 at 10:24 #811035
Quoting Michael
Regarding betting, expected values, and probability:

Rather than one person repeat the experiment 2^100 times, the experiment is done on 2^100 people, with each person betting that the coin will land heads 100 times in a row. 2^100 - 1 people lose, and 1 person wins, with the winner's winnings exceeding the sum of the losers' losses. The expected value of betting that the coin will land heads 100 times in a row is greater than the cost, but the probability of winning is still 1/2^100.

Even though I could win big, it is more rational to believe that I will lose.


For sure, but your new variation doesn't mirror the Sleeping Beauty problem anymore. Your earlier version was better. We must rather imagine that in the unlikely event that the coin lands heads 100 times in a row, 2^101 persons get pulled from the waitlist (or else, only one person does). In that case, if you are one random person who has been pulled from the waitlist, it is twice as likely that you have been so pulled as a result of the coin having landed heads 100 times in a row than not. It is hence rational for you to make this bet, and in the long run 2/3 of the people pulled from the waitlist who make that bet will be right.
Michael May 27, 2023 at 10:33 #811036
Quoting Pierre-Normand
We must rather imagine that in the unlikely event that the coin lands heads 100 times in a row, 2^101 persons get pulled from the waitlist (or else, only one person does). In that case, if you are one random person who has been pulled from the waitlist, it is twice as likely that you have been so pulled as a result of the coin having landed heads 100 times in a row than not. It is hence rational for you to make this bet, and in the long run 2/3 of the people pulled from the waitlist who make that bet will be right.


This isn't comparable to the Sleeping Beauty problem because being a participant isn't guaranteed. That makes all the difference.

Compare these two scenarios:

1. There is a probability of 1 that I will be a participant if the coin lands heads 100 times and a probability of 1 that I will be a participant if the coin doesn't land heads 100 times

2. There is a probability of 1 that I will be a participant if the coin lands heads 100 times and a probability of [math]1\over2^{101}[/math] that I will be a participant if the coin doesn't land heads 100 times

In the first case it is rational to believe that the probability that the coin landed heads 100 times is [math]1\over2^{100}[/math]

In the second case it is rational to believe that the probability that the coin landed heads 100 times is [math]2\over3[/math].
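For the second case, the 2/3 falls out of Bayes' theorem with those participation probabilities (a sketch; writing H for "heads 100 times in a row"):

[math]\begin{aligned}P(H | Participant) &= {{1 * 2^{-100}} \over {1 * 2^{-100} + 2^{-101} * (1 - 2^{-100})}}\\&\approx {{2^{-100}} \over {2^{-100} + 2^{-101}}}\\&={2\over3}\end{aligned}[/math]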

It certainly isn't the case that the two have the same answer. Surely you at least accept that? If so then the question is which of 1 and 2 properly represents the traditional problem. I say 1. There is a probability of 1 that she will be a participant if the coin lands heads and a probability of 1 that she will be a participant if the coin lands tails.
Pierre-Normand May 27, 2023 at 22:19 #811141
Quoting Michael
This isn't comparable to the Sleeping Beauty problem because being a participant isn't guaranteed. That makes all the difference.


I appreciate your viewpoint, but I can modify my analogies to meet the condition in your first scenario.

Scenario 3 (Lottery study)

Imagine that tickets numbered from one to 2^100 are randomly mixed. We have a waitlist of 2^101 participants. The experimenter selects tickets one by one from the mixed pile. When they draw the 'winning' ticket numbered 2^100, they select (2^100)+1 participants from the waitlist and assign them this ticket. Otherwise, they select just one participant and allocate the losing ticket drawn. This protocol guarantees that the waitlist is exhausted, and everyone gets a chance to participate.

As one of the 2^101 participants, your credence that you've been assigned the winning ticket numbered 2^100 should be equal to (or slightly greater than) the probability that you haven't, as half the participants (plus one) have been assigned this ticket.

Scenario 4 (Sleeping Beauty)

Let's again take tickets numbered from one to 2^100, randomly mixed. We have a waitlist of 2^100 participants. Each participant is assigned a ticket, put to sleep, and the participant with the 'winning' ticket is awoken and interviewed (2^100)+1 times, while everyone else is awoken and interviewed only once.

As one of the 2^100 participants who has just been awoken, your credence that you've been assigned the winning ticket numbered 2^100 should be equal to (or slightly greater than) the probability that you haven't, given that half the participant interviews (plus one) are with participants assigned the winning ticket. Or so I still would argue; here I am just addressing your most recent objection regarding guaranteed participation.

Scenario 5 (Bonus for @fdrake)

Returning to the original Sleeping Beauty problem: upon awakening and awaiting her interview, Sleeping Beauty reflects on her credence that the coin landed on heads. As she checks her smuggled-in cellphone and learns that today is Monday, her halfer position would require her to maintain her credence at 1/2, despite ruling out the possibility of it being Tuesday. The thirder position, on the other hand, allows her to adjust her credence from 2/3 to 1/2 after acquiring this new piece of information. This seems more plausible, and I would like the halfer to explain why ruling out the Tuesday scenario doesn't affect their credence in the coin toss outcome at all.
Srap Tasmaner May 27, 2023 at 23:21 #811150
Quoting Pierre-Normand
I would like the halfer to explain why ruling out the Tuesday scenario doesn't affect their credence in the coin toss outcome at all.


For Lewis, if I recall correctly, it raises her credence for heads from 1/2 to 2/3, which he finds curious, but that's it.
Pierre-Normand May 28, 2023 at 05:28 #811220
I now realize that in the OP's stipulation of the problem, and in line with most discussions of it, it is when the fair coin lands tails that Sleeping Beauty is awoken twice, and otherwise she is awoken once. In all my examples I had assumed the opposite. This is just an arbitrary convention, but I hope my unfortunate reversal of it didn't generate any confusion.

In any case, in the future, I'll revert back to the conventional practice.
Michael May 28, 2023 at 09:43 #811238
Quoting Pierre-Normand
I would like the halfer to explain why ruling out the Tuesday scenario doesn't affect their credence in the coin toss outcome at all.


Well, consider the Venn diagram here (which you said you agreed with).

There are two probability spaces. Monday or Tuesday is one consideration and Heads or Tails is a second consideration. Finding out that today is Monday just removes the blue circle.
Michael May 28, 2023 at 09:55 #811239
Quoting Pierre-Normand
As one of the 2^100 participants who has just been awoken, your credence that you've been assigned the winning ticket numbered 2^100 should be equal to (or slightly greater than) the probability that you haven't, given that half the participant interviews (plus one) are with participants assigned the winning ticket.


I think this is a non sequitur. That most interviews are with a winner doesn't entail that I am most likely a winner. Rather, if most participants are winners then I am most likely a winner.

So there are two ways for the participants to approach the problem:

1. I should reason as if I am randomly selected from the set of all participants
2. I should reason as if my interview is randomly selected from the set of all interviews

I don't think the second is the rational approach. The experiment doesn't work by randomly selecting from the set of all interviews and then dropping each participant into that interview (in such a case the second might be the more rational approach, although how it would be determined which participant(s) are chosen to have more than one interview and the order in which the participants are placed is unclear, and it may be that there is more than one winner). The experiment works by giving each participant a ticket (and in such a case I think the first is the more rational approach).
Pierre-Normand May 28, 2023 at 09:59 #811241
Quoting Michael
Finding out that today is Monday just removes the blue circle.


I agree with the idea that Sleeping Beauty's credence in H is updated to 1/2 after she learns that her current awakening is occurring on a Monday. The question is what was this credence updated from, before she learned that the current day was Monday. At that point she could not rule out that the current day was Tuesday. Does that not imply that her credence in H was lower than 1/2 before she learned that the day was Monday? Before she learned that, all three mutually exclusive areas of the Venn diagram represent possible states for her to be in. She therefore should have non-zero credences for each one of them.
Michael May 28, 2023 at 10:11 #811244
Quoting Pierre-Normand
I agree with the idea that Sleeping Beauty's credence in H is updated to 1/2 after she learns that her current awakening is occurring on a Monday. The question is what was this credence updated from, before she learned that the current day was Monday. At that point she could not rule out that the current day was Tuesday. Does that not imply that her credence in H was lower than 1/2 before she learned that the day was Monday? Before she learned that, all three mutually exclusive areas of the Venn diagram represent possible states for her to be in. She therefore should have non-zero credences for each one of them.


There are two approaches. The normal halfer approach is:

P(Heads) = 1/2
P(Tails) = 1/2
P(Monday) = 3/4
P(Tuesday) = 1/4

The double halfer approach is:

P(Heads) = 1/2
P(Tails) = 1/2
P(Monday) = 1/2
P(Tuesday) = 1/2

The double halfer approach does entail:

P(Heads & Monday) = P(Tails & Monday) = P(Tails & Tuesday) = 1/2

This reflects what I said before:

The probability that the coin will land heads and she will be woken on Monday is 1/2.
The probability that the coin will land tails and she will be woken on Monday is 1/2.
The probability that the coin will land tails and she will be woken on Tuesday is 1/2.

As the Venn diagram shows, there are two (overlapping) probability spaces, hence why the sum of each outcome's probability is greater than 1.


I'm undecided on whether or not I'm a double halfer.
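For what it's worth, the first (normal halfer) assignment recovers the Lewis update mentioned above by ordinary conditioning. Since heads guarantees Monday, P(Heads & Monday) = P(Heads), and so

[math]P(Heads | Monday) = {P(Heads) \over P(Monday)} = {{1\over2} \over {3\over4}} = {2\over3}[/math]

which is exactly the 1/2 to 2/3 move that Lewis finds curious.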
Pierre-Normand May 28, 2023 at 10:31 #811246
Quoting Michael
The double halfer approach does entail:

P(Heads & Monday) = P(Tails & Monday) = P(Tails and Tuesday) = 1/2


From the external point of view of the experimenter, it makes sense that the three probabilities add up to more than one, since the three outcomes are not mutually exclusive. The events (Tails & Monday) and (Tails & Tuesday) can (and indeed must) be realized jointly. The intersection zone in the Venn diagram highlights the fact that those two events aren't mutually exclusive. The perspective (and epistemic position) of Sleeping Beauty is different. When she is awoken, those three possible states are mutually exclusive, since it is not possible for her to believe that, in her current state of awakening, the current day is both Monday and Tuesday. Her credences in the three mutually exclusive states that she can be in must therefore add up to 1.
Michael May 28, 2023 at 10:34 #811247
Quoting Pierre-Normand
From the external point of view of the experimenter, it makes sense that the three probabilities add up to more than one, since the three outcomes are not mutually exclusive. The events (Tails & Monday) and (Tails & Tuesday) can (and indeed must) be realized jointly. The intersection zone in the Venn diagram highlights the fact that those two events aren't mutually exclusive. The perspective (and epistemic position) of Sleeping Beauty is different. When she is awoken, those three possible states are mutually exclusive, since it is not possible for her to believe that, in her current state of awakening, the current day is both Monday and Tuesday. Her credences in the three mutually exclusive states that she can be in must therefore add up to 1.


I think this is best addressed by a variation I described here:

There are four people, each assigned a number (unknown to them) between 1 and 4. Two of them are to be put to sleep at random, determined by a single coin toss: if heads then 1 and 2 are put to sleep; if tails then 3 and 4 are put to sleep.

After the coin toss one of the two is put to sleep first. For each of the remaining three, what is the probability that they will be put to sleep? Is it 1/3, because there are three of them and only one is to be put to sleep, or is it 1/2, because that was the probability before the first was put to sleep?


Before anyone was put to sleep, for each of the four participants the probability of being put to sleep was 1/2, which of course doesn't add to 1.

And depending on your answer to this scenario, for each of the remaining three participants the probability of being put to sleep is 1/2.
Michael May 28, 2023 at 11:25 #811249
Quoting Michael
So there are two ways for the participants to approach the problem:

1. I should reason as if I am randomly selected from the set of all participants
2. I should reason as if my interview is randomly selected from the set of all interviews


To apply this to the traditional problem: there are two Sleeping Beauties; one will be woken on Monday and one on both Monday and Tuesday, determined by a coin toss.

What is their credence that they have been or will be woken twice?

Do they reason as if they are randomly selected from the set of all participants or do they reason as if their interview is randomly selected from the set of all interviews?

Which is the most rational?

The math can prove that the former reasoning gives an answer of [math]1\over2[/math] and the latter an answer of [math]1\over3[/math], but I don’t think it can prove which reasoning is the correct one to use. It might be that there is no correct answer, only a more compelling answer.

Given the way the experiment is conducted I find the former reasoning more compelling. This is especially so with the example of the coin landing heads 100 times in a row.
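For what it's worth, a small simulation sketch (Python; the repetition over many simulated participants is purely illustrative and doesn't assume that any one participant repeats the experiment) shows the two counts side by side: about half of the participants are heads-participants, while only about a third of the interviews are heads-interviews.

[code]
import random

# One coin toss per simulated participant: heads -> 1 interview, tails -> 2.
trials = 100_000
heads_participants = 0
heads_interviews = 0
total_interviews = 0

for _ in range(trials):
    heads = random.random() < 0.5
    interviews = 1 if heads else 2
    if heads:
        heads_participants += 1
        heads_interviews += interviews
    total_interviews += interviews

print("fraction of participants whose coin landed heads:",
      heads_participants / trials)             # ~ 1/2
print("fraction of interviews that follow heads:",
      heads_interviews / total_interviews)     # ~ 1/3
[/code]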
Pierre-Normand May 29, 2023 at 09:36 #811437
Quoting Michael
I think this is basically a Monty Hall problem. I would say that the probability that I will be put to sleep is 1/2, that the probability that the person to my left will be put to sleep is 1/2, that the probability that the person to my right will be put to sleep is 1/2, and that the probability that one of the other two will be put to sleep is 1/2.


Your variation of the problem indeed appears to me to contain elements reminiscent of the Monty Hall problem, but with a key difference in the selection process.

In the original Monty Hall scenario, Monty's action of revealing a goat behind a door you didn't choose is non-random. This non-random revelation influences your strategy, prompting you to switch doors. (The probabilities of your initial choice and the remaining door hiding the car, which were 1/3 each, get updated to 1/3 and 2/3, respectively.)

Your scenario can be compared to a modified Monty Hall scenario. In this modified scenario, imagine Monty picks one of the unchosen doors at random and reveals a goat by chance. Because this revelation is a random event, it doesn't provide the same targeted information as in the traditional Monty Hall scenario. Consequently, the initial probability estimates (1/3) for your chosen door and the remaining unchosen door hiding a car are updated to 1/2 each. Therefore, you would be indifferent about whether to stick with your original door or switch to the remaining door.

Similarly, in your variation, if a participant is revealed in a random way to have been selected to be put to sleep, much like Monty randomly revealing a goat, the probabilities for the remaining three participants (including yourself) would need to be updated similarly. If it's known that exactly one of the remaining three participants had been selected, then the probabilities for each of those three would be updated to 1/3.
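If it's of any help, here is a rough simulation sketch (Python, purely illustrative) of the contrast I'm drawing: in the standard protocol, switching wins about 2/3 of the time, while in the random-reveal variant (keeping only the runs where the blindly opened door happens to show a goat) switching wins only about 1/2 of the time.

[code]
import random

def switching_wins(random_host):
    """One game of three doors. Returns True/False if switching wins,
    or None when the blind host accidentally reveals the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    unchosen = [d for d in doors if d != pick]
    if random_host:
        opened = random.choice(unchosen)                # host opens an unchosen door blindly
        if opened == car:
            return None                                 # car revealed: discard this run
    else:
        opened = next(d for d in unchosen if d != car)  # host knowingly reveals a goat
    switched_to = next(d for d in doors if d not in (pick, opened))
    return switched_to == car

def win_rate(random_host, n=100_000):
    results = [switching_wins(random_host) for _ in range(n)]
    valid = [r for r in results if r is not None]
    return sum(valid) / len(valid)

print("standard Monty, switching wins:", win_rate(False))      # ~ 2/3
print("random-reveal Monty, switching wins:", win_rate(True))  # ~ 1/2
[/code]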
Michael May 29, 2023 at 10:10 #811445
Quoting Pierre-Normand
In this modified scenario, imagine Monty picks one of the unchosen doors at random and reveals a goat by chance.


It's not exactly comparable as in my example he can only put to sleep one of the two people who will be put to sleep; he cannot put to sleep someone who won't be put to sleep.

A more comparable example would be if there are four doors, two containing a goat and two containing a car. You pick a door (say 1) and then Monty opens one of the two doors that contain a goat (say 2). What is the probability that your chosen door contains a car? What is the probability that the car is behind door 3, or door 4?
Pierre-Normand May 29, 2023 at 10:18 #811448
Quoting Michael
To apply this to the traditional problem: there are two participants; one will be woken on Monday, one on both Monday and Tuesday, determined by a coin toss.

I am one of the participants. What is the probability that I am woken twice?

Do I reason as if I am randomly selected from the set of all participants, and so that I am equally likely to be woken twice, or do I reason as if my interview is randomly selected from the set of all interviews, and so that I am more likely to be woken twice?

Halfers do the former, thirders the latter.

Which is the most rational?

Given the way the experiment is conducted I think the former (halfer) reasoning is the most rational.


The halfer and thirder responses, as you frame them here, correspond to different questions answered from different epistemic perspectives.

Consider this: let's say you are being hired as an observer in these experiments. Each observer is tasked to attend one session with one participant. The participants are woken up and interviewed once on Day 1, or twice on Day 1 and Day 2, which is determined by a coin toss. As an observer, you are assigned to a randomly chosen participant, and you don't know whether this is their only awakening or one of two.

In the experiment facility, there are, on average, twice as many rooms occupied by participants waking twice (due to tails on the coin toss) as there are rooms with participants waking once (heads on the coin toss). Now suppose you had access to the participant registry where all active participants are listed. You spot the name 'John Doe.' What are the chances he will be woken up twice? Your credence is 1/2, and this would also be the case for John Doe's credence before he undergoes the first sleep session.

Now, let's say that by a stroke of luck, you are assigned to John Doe on that particular day. Your job is to measure his vitals as he awakens and get him breakfast as he waits for the interview. You arrive in John's room and wait for him to wake up. What are the chances that the coin toss resulted in tails, indicating this could be one of two awakenings rather than the only one?

Once you have been assigned to John Doe, your credence (P(T)) in this proposition should be updated from 1/2 to 2/3. This is because you were randomly assigned a room, and there are twice as many rooms with participants who wake up twice as there are rooms with participants waking up once. Once John Doe awakens, he can rule out the possibility of it being Day 2 of his participation, and so can you. His credence (P(T)) then aligns with yours because both your credences are targeting the exact same proposition, and both of you have the same epistemic access to it.
Pierre-Normand May 29, 2023 at 10:26 #811450
Quoting Michael
A more comparable example would be if there are four doors, two containing a goat and two containing a car. You pick a door (say 1) and then Monty opens one of the two doors that contain a goat (say 2). What is the probability that your chosen door contains a car? What is the probability that the car is behind door 3, or door 4?


In that case, the probability that my chosen door contains a car remains 1/2. The probabilities that a car is behind door 3 or behind door 4 get updated from 1/2 to 3/4 each.
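A quick simulation sketch (Python, illustrative only, and assuming, as in the standard protocol, that Monty only ever opens an unchosen goat door) agrees with those numbers:

[code]
import random

# Four doors (1-4), two cars, two goats. You pick door 1; Monty opens one of
# the unchosen goat doors at random. Estimate P(car | Monty opened door 2).
trials = 200_000
openings_of_door_2 = 0
car_counts = {1: 0, 2: 0, 3: 0, 4: 0}

for _ in range(trials):
    cars = set(random.sample([1, 2, 3, 4], 2))       # the two car doors
    unchosen_goats = [d for d in (2, 3, 4) if d not in cars]
    if random.choice(unchosen_goats) != 2:
        continue                                     # keep only runs where door 2 is opened
    openings_of_door_2 += 1
    for d in cars:
        car_counts[d] += 1

for d in (1, 3, 4):
    print(f"P(car behind door {d} | door 2 opened) ~ {car_counts[d] / openings_of_door_2:.3f}")
# roughly 0.5 for door 1 (your pick) and 0.75 each for doors 3 and 4
[/code]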
Michael May 29, 2023 at 10:27 #811451
Quoting Pierre-Normand
Once John Doe awakens, he can rule out the possibility of it being Day 2 of his participation, and so can you.


Why?

Quoting Pierre-Normand
Once you have been assigned to John Doe, your credence (P(T)) in this proposition should be updated from 1/2 to 2/3. This is because you were randomly assigned a room, and there are twice as many rooms with participants who wake up twice as there are rooms with participants waking up once.


Yes, if you are randomly assigned an interview from the set of all interviews then the probability of it being a tails interview is greater than the probability of it being a heads interview.

The question is whether or not it is rational for the participant to reason this way. Given that the experiment doesn't work by randomly assigning an interview to them from the set of all interviews, I don't think it is. The experiment works by randomly assigning an interview set from the set of all interview sets (which is either the head set or the tail set), and so I believe it is more rational to reason in this way.
Pierre-Normand May 29, 2023 at 10:30 #811452
Quoting Michael
Why?


Sorry, I meant to say that he can rule out it being the case that the coin landed heads and that this is Day 2.
Pierre-Normand May 29, 2023 at 10:32 #811455
Quoting Michael
The question is whether or not it is rational for the participant to reason this way. Given that the experiment doesn't work by randomly assigning an interview to them from the set of all interviews, I don't think it is. The experiment works by randomly assigning an interview set from the set of all interview sets (which is either the head set or the tail set), and so I believe it is more rational to reason in this way.


Do you agree that from the sitter's point of view, the probability that the coin landed tails is 2/3?
Michael May 29, 2023 at 10:33 #811457
Quoting Pierre-Normand
Sorry, I meant to say that he can rule out it being the case that the coin landed heads and that this is Day 2.


They ruled that out before the experiment began. You might as well say that they can rule out it being the case that the coin landed heads and that this is day 3.

Do you agree that from the sitter's point of view, the probability that the coin landed tails is 2/3?


Yes.
Pierre-Normand May 29, 2023 at 10:42 #811460
Quoting Michael
Yes.


So, as they await the interviewer, John Doe and the sitter contemplate the probability that the coin landed tails. The coin might be right there on a nightstand with a piece of cardboard covering it. Both John Doe and his sitter are equally informed of the full details of the protocol. After John Doe has woken up, they both have access to exactly the same relevant information. Their credences target the proposition: "This coin landed tails". Since they are evaluating the exact same proposition from the exact same epistemic perspective, why don't they have the same credences on your view?
Michael May 29, 2023 at 10:52 #811463
Reply to Pierre-Normand

Consider my extreme example. There are two ways to reason:

1. [math]2\over3[/math] of all interviews are 100 heads in a row interviews, therefore this is most likely a 100 heads in a row interview
2. [math]1\over2^{100}[/math] of all participants are 100 heads in a row participants, therefore I am most likely not a 100 heads in a row participant

I would say that both are true, but also contradictory. Which reasoning it is proper to apply depends on the manner in which one is involved.

For the sitter, his involvement is determined by being randomly assigned an interview, and so I think the first reasoning is proper. For the participant, his involvement is determined by tossing a coin 100 times, and so I think the second reasoning is proper.

We might want to measure which reasoning is proper by appeals to bets or expected values or success rate or whatever, but then there are two ways to reason on that:

1. [math]2\over3[/math] of all guesses are correct if every guess is that the coin landed heads 100 times in a row
2. [math]1\over2^{100}[/math] of all participants are correct if they all guess that the coin landed heads 100 times in a row

How do we determine which of these it is proper to apply?

So maybe there is no right answer as such, just more or less proper (or more or less compelling). And all I can say is that if the experiment is being run just once, and I am put to sleep and then woken up, my credence that the coin landed heads 100 times in a row would be [math]1\over2^{100}[/math].
Pierre-Normand May 29, 2023 at 10:52 #811464
Quoting Michael
They ruled that out before the experiment began. You might as well say that they can rule out it being the case that the coin landed heads and that this is day 3.


Before the experiment begins, neither John Doe nor the sitter can rule out a possible future in which we are on Day 2 of the experiment and John Doe remains asleep. As soon as John Doe wakes up, they can both rule out the proposition "Today is Day 2 of the experiment and the coin landed heads". This is one crucial change to their epistemic situation. They are now able to refer to the current day at issue with the indexical "today" and rationally update their credences by means of (self-)reference to it.
Pierre-Normand May 29, 2023 at 11:32 #811471
Quoting Michael
I would say that both are true, but also contradictory. Which reasoning it is proper to apply depends on the manner in which one is involved.

For the sitter, his involvement is determined by being randomly assigned an interview, and so I think the first reasoning is proper. For the participant, his involvement is determined by tossing a coin 100 times, and so I think the second reasoning is proper.


Let us stick with the normal Sleeping Beauty scenario for now, if you don't mind, as I think the lessons drawn will generalize even to your extreme variation but the analysis is simpler.

You're arguing that Sleeping Beauty and the sitter can have different, yet warranted, credences of P(T) at 1/2 and 2/3 respectively. Let's consider this: they could both be regular participants in this experiment, occasionally meeting again by chance. Whenever they meet, they might agree to wager $1 on the outcome—with the sitter betting on tails and John Doe betting on heads.

Under this arrangement, the sitter would expect to win two-thirds of the time, and John Doe would expect to lose two-thirds of the time, correct? However, if John Doe's credence P(T) truly is 1/2, should he not expect to break even in the long run? If he still expects the outcome to be tails two-thirds of the time, despite his 1/2 credence, would he not refuse to place a wager? His actions would reveal his true expectations about the probability of the outcome.
Michael May 29, 2023 at 11:35 #811472
Reply to Pierre-Normand I’ve explained the error with betting examples before. Getting to bet twice if it’s tails doesn’t mean that tails is more likely.
Pierre-Normand May 29, 2023 at 12:05 #811475
Quoting Michael
I’ve explained the error with betting examples before. Getting to bet twice if it’s tails doesn’t mean that tails is more likely.


Suppose you've been kidnapped. Each morning, your captor flips a coin. If it lands on heads, you're blindfolded and taken to safehouse #1 (or simply walked around and returned there). If it lands on tails, you're blindfolded and taken to safehouse #2 (or similarly walked around and returned) for two consecutive days. Both safehouses are indistinguishable, and you're informed of this procedure.

On any given day, what would be your credence in the proposition "I am at safehouse #1"? Would it not be P(#1) = 1/3? You could argue that it's 1/2, with the rationale that "getting to guess twice doesn't mean that tails is more likely". But wouldn't this conflate the epistemic perspective of the person making the guess at the time of guessing with the epistemic perspective of the person flipping the coin at the time of flipping?
Michael May 29, 2023 at 12:09 #811476
Reply to Pierre-Normand I don’t quite understand this example. There are multiple coin flips and no amnesia?
Pierre-Normand May 29, 2023 at 12:16 #811477
Quoting Michael
I don’t quite understand this example. There are multiple coin flips?


Yes, my first sentence was wrong. There is a new coin flip every day when the captor must decide on a new (or the same) safehouse. In the case where the coin lands tails, the new decision happens after two days have elapsed (which is analogous to waking up and interviewing Sleeping Beauty twice).
Michael May 29, 2023 at 12:18 #811478
Reply to Pierre-Normand Then on the first day P = 1/2, the second day P = 1/4, the third day P = 1/8, etc.
Michael May 29, 2023 at 12:20 #811479
Actually that’s not right (starting third day). Need to think about this. First two days are right though.

Not sure how this is at all relevant though.
Pierre-Normand May 29, 2023 at 12:22 #811480
Quoting Michael
Actually that’s not right. Need to think about this.


Take your time. I'm being moved to a new safehouse until tomorrow.
Michael May 29, 2023 at 12:47 #811485
First day P = 1/2, second day 1/4, third day 3/8, fourth day 5/16, etc.

Not sure what this is supposed to show?
Pierre-Normand May 29, 2023 at 13:10 #811487
Quoting Michael
Not sure what this is supposed to show?


It's worth noting that your provided sequence converges on 1/3. If the captive is not keeping track of the date, their credence should indeed be exactly 1/3. The crucial detail here is that the captive gets to guess twice regarding the same coin toss when the result is tails. This very fact is what explains why their credence in being presently in safehouse #1 (and thus, the preceding coin toss resulting in heads) is 1/3 rather than 1/2.
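To make the convergence explicit, here is a minimal sketch (Python) of the recursion behind your sequence: a fresh flip happens on a given day unless a flip happened the day before and landed tails, and you are in safehouse #1 exactly when there is a fresh flip today and it lands heads.

[code]
# p_flip: probability that a fresh coin flip happens on day n.
# P(safehouse #1 on day n) = p_flip / 2 (a flip today that lands heads).
p_flip = 1.0
for day in range(1, 11):
    p_house1 = p_flip / 2
    print(f"day {day}: P(safehouse #1) = {p_house1:.4f}")
    p_flip = 1 - p_flip / 2    # tomorrow's flip is skipped only if today's fresh flip landed tails

# prints 0.5, 0.25, 0.375, 0.3125, ... and settles toward 1/3
[/code]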

I'd like to draw your attention to the use of the terms "presently" and "preceding" in the proposition's statement. These indexicals are vital as they give us insight into the captive's perspective on the previous coin toss. This subjective epistemic perspective must be distinguished from an external point of view that would consider the next coin toss.
Michael May 29, 2023 at 13:29 #811489
Quoting Pierre-Normand
It's worth noting that your provided sequence converges on 1/3. If the captive is not keeping track of the date, their credence should indeed be exactly 1/3.


I don't think this is relevant to the Sleeping Beauty problem. It's a different experiment with different reasoning.

In this case you're in safehouse 1 if not tails yesterday and not tails today and you're in safehouse 2 if either tails yesterday or tails today. Obviously the latter is more likely. I think only talking about the preceding coin toss is a kind of deception.

Also it converges to 1/3 only as you repeat the coin tossing, whereas in the traditional problem the coin is only tossed once.
Pierre-Normand May 30, 2023 at 00:46 #811649
Quoting Michael
In this case you're in safehouse 1 if not tails yesterday and not tails today and you're in safehouse 2 if either tails yesterday or tails today. Obviously the latter is more likely. I think only talking about the preceding coin toss is a kind of deception.


The original Sleeping Beauty problem does indeed hinge on a single coin toss, but it's crucial to understand the unique nature of this coin toss within the context of the experiment. When we consider the result of 'the' coin toss, we're not considering a generalized coin toss in a vacuum; we're considering the specific toss that determined the course of the experiment that Sleeping Beauty currently finds herself in.

Sleeping Beauty's question - 'What is the probability that the coin shows heads?' - is not a generalized question about the inherent probability of a coin toss result. Instead, it's a targeted question about the outcome of the specific coin toss that dictated the structure of her current experience. Given this context, I argue that Sleeping Beauty is justified in updating her prior credence P(T) from 1/2 to 2/3.

Let's modify the safehouse example to more closely mirror the Sleeping Beauty problem. Imagine you are kidnapped and held hostage. The captor flips a coin only once. If the coin lands heads, you are blindfolded and transported to safehouse #1, held captive for one day, and then released. If the coin lands tails, you are taken to safehouse #2, where you're held for two days and then released. In the latter case, however, an amnesia-inducing drug is administered at the end of the first day, such that you forget the events of the day and wake up on the second day with no memory of the previous day.

Just like in the Sleeping Beauty problem, you are unable to distinguish between the first and second day of captivity in safehouse #2. Now, when you find yourself in a safehouse and try to guess the outcome of the coin toss, your best bet would be to assign a 1/3 probability to being in safehouse #1 (and hence the coin toss resulting in heads) and a 2/3 probability to being in safehouse #2 (and hence the coin toss resulting in tails). This is not a reflection of the inherent probabilities of a coin toss, but rather an acknowledgment of your unique epistemic position in the situation, and the contextual information you have access to as a captive. Your credence in each possibility is based on the number of ways in which you could find yourself in your current situation given the possible outcomes of the specific coin toss.

To further bolster the point, imagine that you find a hidden cellphone and have a brief window to send a text message to the police. However, due to character limitations, you can only provide the address of one of the two safehouses. Given your current epistemic position, would it not make sense to communicate the address of safehouse #2? This decision is based on your updated credence in the outcome of the coin toss: you believe there is a 2/3 chance you're in safehouse #2 and a 1/3 chance you're in safehouse #1. So, despite the inherent probabilities of a fair coin toss being 1/2 for each outcome, your unique context and the information you have access to lead you to favor safehouse #2 as the most likely location for your captivity.

Let us now consider the perspective of the police and the information available to them. If they have knowledge of the captor's coin-flip protocol, then prior to receiving any communication, they would have a 1/2 credence of the hostage being in either safehouse.

How does their epistemic position change after they receive your message? I would argue that we now need to distinguish two cases.

In the first case, if the police have themselves provided the communication means in both safehouses, the message does not convey any new information about the coin toss nor about the hostage's location. Given that the hostage would have been equally likely to find and use the device in either location, the message itself doesn't change the police's prior beliefs.

In the second case, however, if the hostage independently finds a way to communicate, the longer duration spent at safehouse #2 makes it twice as likely for the hostage to send a message from there. This makes a crucial difference. If a message is sent, the police now have evidence that is more likely to occur if the coin landed tails, which should update their credences accordingly, both for the location of the hostage and the result of the coin toss.

The message, in this context, acts as evidence that bears differently on the hypotheses of interest (the coin toss and the location), given the known differences in the likelihoods of being able to send the message from the two locations.

In case-1, since the hostage has an equal chance to communicate from either safehouse, the police are not able to update their credences based on the received message. Here, the police's prior beliefs remain the same, P(H) = P(T) = 1/2, and their strategy should reflect these priors.

In case-2, however, the hostage's ability to communicate is influenced by the duration of captivity. If they get a message, the police can update their beliefs because the message is more likely to be sent if the coin landed tails and the hostage was in safehouse #2 for longer. Their updated beliefs should be P(H) = 1/3, P(T) = 2/3.
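To spell out the case-2 update (assuming, purely for illustration, that each day of captivity independently offers the same small chance ε of getting a message out, so that a two-day stay is roughly twice as likely to produce a message as a one-day stay):

[math]P(T | message) = {{2\epsilon \cdot {1\over2}} \over {2\epsilon \cdot {1\over2} + \epsilon \cdot {1\over2}}} = {2\over3}[/math]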

Now, consider the question of whether the police acting on the hostage's updated credence rather than their own increases their chances of success; it seems that this depends on the case. In case-1, if the police act based on their own beliefs, they're expected to be right half the time. If they act based on the hostage's belief, they'd be misled half the time (when the coin lands heads). So, in case-1, the police need not update their credence.

In case-2, however, acting based on the hostage's beliefs (which coincide with the updated beliefs of the police if they consider the message as evidence) increases their chances of success. This is because the hostage's ability to communicate is correlated with the coin toss and their location, unlike in case-1. So, in case-2, the police are able to update their credence to align it with the hostage's.

The ambiguity stems from the fact that whether the police should act based on the hostage's belief depends on the specific circumstances of each case. A complete answer should specify the case under consideration and fully specify the epistemic perspectives of all the agents.

On edit: I foresee a possible objection to the previous discussion, particularly that it draws upon an analogy with a situation involving a communication device, which has no direct parallel in the original Sleeping Beauty problem. To address this, I propose another scenario that may mirror Sleeping Beauty's epistemic situation even more closely:

[hide]Let's reconsider the safehouse scenario. Suppose the hostage finds an opportunity to escape. However, the nature of the obstacles he must overcome depends on the safehouse he's in: safehouse #1 is surrounded by a crocodile-infested moat, while safehouse #2 is surrounded by a forest filled with lions. While escaping, the hostage can carry either a large wooden plank (to bridge the moat) or a torch (to ward off the lions), but not both.

In this scenario, we can distinguish between two cases. In case-1, the opportunity to escape is equally likely for both potential outcomes of the coin toss. In this instance, when planning an escape, the hostage has equal odds of encountering either crocodiles or lions. Therefore, his credence in the coin having landed heads should be updated to 1/2, in light of the fact that he has an opportunity to escape.

On the other hand, in case-2, opportunities to escape are proportional to the time spent in a safehouse. In this situation, the hostage is twice as likely to encounter lions and should, therefore, choose to carry the torch.

Considering which of these two cases more closely mirrors the Sleeping Beauty problem, I argue in favor of case-2. The opportunities for the hostage to escape can be seen as opportunities for him to indirectly learn about the coin toss result. The correlations between tails and lions, and heads and crocodiles, both stand at exactly 1. Similarly, in the Sleeping Beauty problem, her credence in the coin toss resulting in tails corresponds directly to her credence in the likelihood of this result being revealed to her if she had the opportunity to confirm it at that moment. Each time Sleeping Beauty is awakened (or given an opportunity to express her credence), she is aware that such opportunities are twice as frequent if the coin landed tails. Therefore, being afforded such an opportunity is directly analogous to case-2 in the hostage scenario.[/hide]
Michael May 30, 2023 at 07:48 #811694
Quoting Pierre-Normand
Your credence in each possibility is based on the number of ways in which you could find yourself in your current situation given the possible outcomes of the specific coin toss.


That's the very point I disagree with, and it is most evident with the example of a coin landing heads 100 times in a row. The possible outcomes have no bearing on my credence that the coin landed heads 100 times in a row. The only thing I would consider is that the coin landing heads 100 times in a row is so unlikely that it almost certainly didn't happen, and I think any rational person would agree.

This is less clear to see with the Sleeping Beauty problem given that heads and tails are equally likely, and so prima facie it doesn't matter which you pick, but given that there are two opportunities to win with tails there's no reason not to pick tails.

If you don't want to consider my extreme example because the numbers are too high, then let's consider a simpler version. Rather than a coin toss it's a dice roll. If 1 - 5 then safehouse 1 with crocodiles for one day, if 6 then safehouse 2 with lions for six days. Any rational person would take the wooden plank, and 5 out of every 6 kidnapped victims would survive.
Michael May 30, 2023 at 08:14 #811699
Also, as an aside, if you correctly reason that it's tails then you escape on the first day, and so you can rule out today being the second day (assuming you understand that you would always reason as if it's tails).
Pierre-Normand May 30, 2023 at 08:17 #811700
Quoting Michael
If you don't like to consider my extreme example because the numbers are too high then let's consider a less extreme version. Rather than a coin toss it's a dice roll. If 1 - 5 then woken once (or safehouse 1 with crocodiles for one day), if 6 then woken six times (or safehouse 2 with lions for six days). Any rational person would take the wooden plank, and 5 out of every 6 kidnapped victims would survive.


The analysis you provide would hold true if the hostage was guaranteed in advance to have exactly one opportunity to escape during the entirety of the experiment (case-1). I agree that in this context, upon being provided the means to escape, the prisoner's updated credence would match his initial credence, leading him to select the wooden plank and expect to survive five out of six times (or one out of two times in the original problem).

However, consider a different scenario where the hostage has a small, constant probability ε of discovering the means of escape each day (case-2). In this scenario, stumbling upon this means of escape would provide the hostage with actionable evidence that he could use to update his credence. Now, he would believe with a probability of 6/11 that he's in safehouse #2, thereby justifying his decision to pick up the torch. Consequently, given that 6 out of 11 kidnapped victims who find the means to escape are surrounded by lions, 6 out of 11 would survive.
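The 6/11 figure is just the proportion of escape opportunities that arise in safehouse #2 under that assumption: a stay there lasts six days rather than one, so the probability of being in safehouse #2, given that one has found the means to escape, is

[math]{{{1\over6} \cdot 6\epsilon} \over {{1\over6} \cdot 6\epsilon + {5\over6} \cdot \epsilon}} = {6\over11}[/math]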

In my previous post, I made a point to explain why the original Sleeping Beauty setup mirrors case-2 more closely than it does case-1. The crucial aspect in the Sleeping Beauty problem is the opportunity for her to express her credence, which, as in the case-2 scenario, is not a single guaranteed occurrence, but something that happens with different frequency depending on the result of the coin toss.
Michael May 30, 2023 at 08:25 #811701
Quoting Pierre-Normand
However, consider a different scenario where the hostage has a small, constant probability ε of discovering the means of escape each day (case-2). In this scenario, stumbling upon this means of escape would provide the hostage with actionable evidence that he could use to update his credence. Now, he would believe with a probability of 6/11 that he's in safehouse #2, thereby justifying his decision to pick up the torch. Consequently, given that 6 out of 11 kidnapped victims who find the means to escape are surrounded by lions, 6 out of 11 would survive.


Then this is a different scenario entirely. If we consider the traditional problem, it would be that after the initial coin toss to determine which days she could be woken, another coin toss each day determines if she will be woken (heads she will, tails she won't).

So the probability that the coin landed heads and she wakes on Monday is 1/4, the probability that the coin lands tails and she wakes on Monday is 1/4, and the probability that the coin lands tails and she wakes on Tuesday is 1/4. A simple application of Bayes' theorem is:

[math]\begin{aligned}P(Heads | Questioned) &= {{P(Questioned | Heads) * P(Heads)} \over P(Questioned)}\\&={{{1\over2}*{1\over2}}\over{2\over3}}\\&={3\over8}\end{aligned}[/math]

As compared to the normal situation which would be:

[math]\begin{aligned}P(Heads | Questioned) &={{1*{1\over2}}\over1}\\&={1\over2}\end{aligned}[/math]
Pierre-Normand May 30, 2023 at 08:26 #811702
Quoting Michael
Also, as an aside, if you correctly reason that it's tails then you escape on the first day, and so you can rule out today being the second day (assuming you understand that you would also reason as if it's tails).


Yes, that's a valid point. This is precisely why I introduced the concept of a small, constant probability ε, representing the chance for the hostage to find the means of escape on any given day. By doing so, we can marginalize the potential inferences the hostage could make from the fact that he has not yet escaped (or perished in the attempt). The key aspect of the Sleeping Beauty problem, and indeed the comparison to our hostage scenario, is the existence of multiple actionable opportunities to express one's credence - be it through betting, verification, rational action, and so on. So, while your observation uncovers a minor discrepancy, I do not believe it fundamentally undermines my overall argument.
Michael May 30, 2023 at 08:39 #811703
Reply to Michael

And with this variation, do you not agree that the probability of it being heads is 3/8? Would you not also agree that the probability of it being heads in this scenario must be less than the probability of it being heads in the traditional scenario, where being woken up on your assigned day(s) is guaranteed? If so then it must be that the probability of it being heads in the traditional scenario is greater than 3/8, i.e. 1/2.
Pierre-Normand May 30, 2023 at 08:45 #811704
Quoting Michael
Then this is a different scenario entirely. If we consider the traditional problem, it would be that after the initial coin toss to determine which days she could be woken, a second coin toss determines if she will be woken (heads she will, tails she won't).


It is fundamentally the same scenario, except we're adding external threats like crocodiles or lions to the test environment. In the hostage situation, the captive finding himself still held hostage upon waking up is parallel to Sleeping Beauty finding herself still under experimentation when she wakes up, as opposed to being dismissed on the seventh day.

The rational means that increase the captive's survival chances in our scenario (from 5/11 to 6/11) are comparable to those Sleeping Beauty would use to enhance the proportion of her correct retrodictions of the die roll result (from 5/11, if she always guessed the [1, 5] range, to 6/11, if she always guessed six).

The only significant divergence lies in the frequency of opportunities: the hostage can't be provided with frequent chances to escape without invalidating the analogy, whereas Sleeping Beauty can be given the chance to guess (or place a bet) every single day she awakens without undermining the experiment.

However, we can further refine the analogy by allowing the hostage to escape unharmed in all instances, but with the caveat that he will be recaptured unknowingly and re-administered the amnesia-inducing drug. This would align the scenarios more closely.
Michael May 30, 2023 at 08:56 #811706
Quoting Pierre-Normand
The only significant divergence lies in the frequency of opportunities: the hostage can't be provided with frequent chances to escape without invalidating the analogy, whereas Sleeping Beauty can be given the chance to guess (or place a bet) every single day she awakens without undermining the experiment.

However, we can further refine the analogy by allowing the hostage to escape unharmed in all instances, but with the caveat that he will be recaptured unknowingly and re-administered the amnesia-inducing drug. This would align the scenarios more closely.


This is heading towards a betting example, which as I've explained before is misleading. There are three different ways to approach it:

1. The same participant plays the game 2[sup]100[/sup] times. If they bet on 100 heads then eventually they will win more than they lose, and so it is rational to bet on 100 heads.

2. 2[sup]100[/sup] participants play the game once. If they bet on 100 heads then almost everybody will lose, and so it is rational to not bet on 100 heads (even though the one winner's winnings exceed every losers' losses).

3. One participant plays the game once. If they bet on 100 heads then they are almost certain to lose, and so it is rational to not bet on 100 heads.

Given that the very premise of the experiment is that it is to only be run once, a rational person would only consider 3.

And I really don’t see any counterexamples refuting 3. I will never reason or bet that 100 heads in a row is more likely.

But even if the same participant were to repeat the experiment 2[sup]100[/sup] times, they don't bet on 100 heads because they think it's more likely, they bet on 100 heads because they know that eventually they will win, and that the amount they will win is greater than the amount they will lose.
Pierre-Normand May 30, 2023 at 11:42 #811732
Quoting Michael
But even if the same participant were to repeat the experiment 2^100 times, they don't bet on 100 heads because they think it's more likely, they bet on 100 heads because they know that eventually they will win, and that the amount they will win is greater than the amount they will lose.


Sleeping Beauty isn't asked to place a bet on the outcome on day zero, before she's put to sleep for the first time, with payouts occurring on each subsequent awakening based on her initial prediction. Instead, she's asked about her credence in the outcome of the 100 tosses on each individual awakening (and given an opportunity to place a bet). Most of her awakenings occur on the rare occasion when 100 tosses yield heads, which forms the basis for her credence P(100H) being greater than 1/2 on the occasion of a particular awakening. This same reasoning would also be the ground for her betting on that outcome, assuming her primary goal is to maximize her expected value for that single bet.

However, the Sleeping Beauty problem specifically inquires about her credence, not about the rationality of her attempt to maximize her expected value, or her preference for some other strategy (like maximizing the number of wins per experimental run rather than average gain per individual bet).

Even if she were to endorse your perspective on the most rational course of action (which doesn't seem unreasonable to me either), this wouldn't influence her credence. It would simply justify her acting in a manner that doesn't prioritize maximizing expected value on the basis of her credence.
Michael May 30, 2023 at 11:45 #811734
Quoting Pierre-Normand
Most of her awakenings occur on the rare occasion when 100 tosses yield heads, which forms the basis for her credence P(100H) being greater than 1/2.


Except the experiment is only conducted once. Either all her interviews follow one hundred heads or her single interview follows something other than one hundred heads.

The second is more likely. That’s really all there is to it. I would say it’s irrational for her to reason any other way.

Quoting Pierre-Normand
However, the Sleeping Beauty problem specifically inquires about her credence, not about the rationality of her attempt to maximize her expected value, or her preference for some other strategy (like maximizing the number of wins per experimental run rather than average gain per individual bet).

Even if she were to endorse your perspective on the most rational course of action (which doesn't seem unreasonable to me either), this wouldn't influence her credence. It would simply justify her acting in a manner that doesn't prioritize maximizing expected value on the basis of this credence.


And this is precisely why the betting examples that you and others use don’t prove your conclusion.
Pierre-Normand May 30, 2023 at 12:16 #811737
Quoting Michael
Except the experiment is only conducted once.


I've also been working under the assumption that the experiment is conducted only once. Sleeping Beauty's calculation that P(H) = 1/3 doesn't hinge on her participation in the experiment being repeated. She's aware that if the coin lands heads, she will be awakened once, but if it lands tails, she will be awakened twice. If we run this experiment once with three participants, and all three of them bet on T every time they are awakened, they will be correct 2/3 of the time on average, which aligns with their credences.

In the same vein, in your dice roll variation, Sleeping Beauty knows that if the die didn't land on six, she will be awakened once, but if it did land on six, she would be awakened six times. This information is sufficient to justify her credence P(6) = 6/11. If we have eleven participants each participating in this experiment once, and they all bet on 'six' every time they are awakened, they will be correct 6/11 of the time on average.
Michael May 30, 2023 at 12:25 #811738
Quoting Pierre-Normand
If we run this experiment once with three participants, and all three of them bet on T every time they are awakened, they will be correct 2/3 of the time on average, which aligns with their credences.


2/3 of bets are right, but that’s because you get to bet twice if it’s tails. That doesn’t prove that tails is more likely. With 4 participants, 1/2 of participants are right whether betting heads or tails. You can frame bets to seemingly support either conclusion.

Although you literally said in your previous post that betting is irrelevant, so why go back to it?
Michael May 30, 2023 at 12:32 #811739
Quoting Pierre-Normand
Sleeping Beauty's calculation that P(H) = 1/3 doesn't hinge on her participation in the experiment being repeated. She's aware that if the coin lands heads, she will be awakened once, but if it lands tails, she will be awakened twice. If we run this experiment once with three participants, and all three of them bet on T every time they are awakened, they will be correct 2/3 of the time on average, which aligns with their credences.


This goes back to what I said before. There are two ways to reason:

1. I should reason as if I am randomly selected from the set of all participants
2. I should reason as if my interview is randomly selected from the set of all interviews

Why would Sleeping Beauty reason as if the experiment was conducted multiple times and that her current interview was randomly selected from that set of all possible interviews, given that that's not how the experiment is conducted?

The experiment is conducted by tossing a coin, and so it is only rational to reason as if she was randomly selected from the set of all possible participants.
Pierre-Normand May 30, 2023 at 12:32 #811740
Quoting Michael
And this is precisely why the betting examples that you and others use don’t prove your conclusion.


The betting examples serve to illustrate that the credences held by thirders, unlike those held by halfers, align with the frequencies of outcomes in such a way that if they were to place bets based on these credences, their expected values would satisfy the equation EV = Σ(P(i)*$i), where P(i) represents the probabilities of outcomes and $i the corresponding payouts. This illustration does not necessitate that Sleeping Beauty actually seeks to maximize her expected value. Rather, it reveals that if this were her goal, her credences would guide her actions effectively. Moreover, if numerous participants were placed in identical situations to Sleeping Beauty's, their credences would correspond to the frequencies of the outcomes they anticipate, which aligns closely with the intuitive definition of a credence.

On the other hand, it's challenging to reconcile a statement such as: 'My current credence P(H) is 1/2, but if I were placed in this exact same situation repeatedly, I would expect the outcome H to occur one third of the time.' This perspective, typically associated with halfers, seems to introduce an incongruity between stated credence and expected frequency of outcomes.
Michael May 30, 2023 at 12:34 #811741
Quoting Pierre-Normand
My current credence P(H) is 1/2, but if I were placed in this exact same situation repeatedly, I would expect the outcome H to occur one third of the time.


I wouldn't say that the outcome H occurs one third of the time. I would say that one third of interviews happen after H occurs, because two interviews happen after every tails.

I think thirders commit a non sequitur when they claim that tails is twice as likely. Amnesia between interviews doesn't make it any less fallacious.
Michael May 30, 2023 at 15:02 #811754
Quoting Pierre-Normand
I would like the halfer to explain why ruling out the Tuesday scenario doesn't affect their credence in the coin toss outcome at all.


I've been thinking about this and I think there's a simple analogy to explain it.

I have one red ball in one bag and two blue balls in a second bag. I am to give you a ball at random. Your credence that the ball will be red should be [math]1\over2[/math].

Being told that it's Monday is just like being told that the second bag only contains one blue ball. It does nothing to affect your credence that the ball you will be given is red.
Michael May 30, 2023 at 20:29 #811841
I think the above in fact shows the error in Elga's paper:

But your credence that you are in T[sub]1[/sub], after learning that the toss outcome is Tails, ought to be the same as the conditional credence P(T[sub]1[/sub]|T[sub]1[/sub] or T[sub]2[/sub]), and likewise for T[sub]2[/sub]. So P(T[sub]1[/sub]|T[sub]1[/sub] or T[sub]2[/sub]) = P(T[sub]2[/sub]|T[sub]1[/sub] or T[sub]2[/sub]), and hence P(T[sub]1[/sub]) = P(T[sub]2[/sub]).

...

But your credence that the coin will land Heads (after learning that it is Monday) ought to be the same as the conditional credence P(H[sub]1[/sub]|H[sub]1[/sub] or T[sub]1[/sub]). So P(H[sub]1[/sub]|H[sub]1[/sub] or T[sub]1[/sub])=1/2, and hence P(H[sub]1[/sub]) = P(T[sub]1[/sub]).

Combining results, we have that P(H[sub]1[/sub]) = P(T[sub]1[/sub]) = P(T[sub]2[/sub]). Since these credences sum to 1, P(H[sub]1[/sub])=1/3.


There is a red ball in one bag and two numbered blue balls in a second bag. You will be given a ball at random. According to Elga's reasoning:

1. P(B[sub]1[/sub]|B[sub]1[/sub] or B[sub]2[/sub]) = P(B[sub]2[/sub]|B[sub]1[/sub] or B[sub]2[/sub]), therefore P(B[sub]1[/sub]) = P(B[sub]2[/sub])

2. P(R|R or B[sub]1[/sub]) = 1/2, therefore P(R) = P(B[sub]1[/sub])

3. Therefore, P(R) = P(B[sub]1[/sub]) = P(B[sub]2[/sub]) = 1/3

The second inference, and so the conclusion, are evidently wrong, given that P(R) = 1/2 and P(B[sub]1[/sub]) = P(B[sub]2[/sub]) = 1/4.

So his reasoning is a non sequitur.
Pierre-Normand May 30, 2023 at 23:05 #811893
Quoting Michael
There is a red ball in one bag and two numbered blue balls in a second bag. You will be given a ball at random.


This scenario doesn't accurately reflect the Sleeping Beauty experiment. Instead, imagine that one bag is chosen at random. You are then given one ball from that bag, but you're not allowed to see it just yet. You then drink a shot of tequila that causes you to forget what just happened. Finally, you are given another ball from the same bag, unless the bag is now empty, in which case you're dismissed. The balls are wrapped in aluminum foil, so you can't see their color. Each time you're given a ball, you're invited to express your credence regarding its color (or to place a bet, if you wish) before unwrapping it.

Quoting Michael
According to Elga's reasoning:

1. P(B1|B1 or B2) = P(B2|B1 or B2), therefore P(B1) = P(B2)

2. P(R|R or B1) = 1/2, therefore P(R) = P(B1)

3. Therefore, P(R) = P(B1) = P(B2) = 1/3

The second inference and so conclusion are evidently wrong, given that P(R) = 1/2 and P(B1) = P(B2) = 1/4.

So his reasoning is a non sequitur.


Here, P(R|R or B1) is the probability that the ball you've just received is red, conditioned on the information (revealed to you) that this is the first ball you've received in this run of the experiment. In other words, you now know you haven't taken a shot of tequila. Under these circumstances, P(R) = P(B1) = 1/2.

Incidentally, since we started this discussion, I've read Elga's and Lewis's papers. But I've also read most of Robert Stalnaker's "Another Attempt to Put Sleeping Beauty to Rest" (2013) and Silvia Milano's "Bayesian Beauty" (2020), both of which are illuminating. Stalnaker modifies Lewis' centred-world approach, and Milano reconciles such approaches (which interpret indexical content) with Bayesian principles.
Michael May 31, 2023 at 05:54 #811958
Quoting Pierre-Normand
Here, P(R|R or B1) is the probability that the ball you've just received is red, conditioned on the information (revealed to you) that this is the first ball you've received in this run of the experiment. In other words, you now know you haven't taken a shot of tequila. Under these circumstances, P(R) = P(B1) = 1/2.


There is a difference between these two assertions:

1. P(R|R or B[sub]1[/sub]) = P(B[sub]1[/sub]|R or B[sub]1[/sub])
2. P(R) = P(B[sub]1[/sub])

The first refers to conditional probabilities, the second to unconditional probabilities, and in my example the first is true but the second is false.

Quoting Pierre-Normand
This scenario doesn't accurately reflect the Sleeping Beauty experiment.


Even if it doesn't, it does show that Elga's assertion that if P(A|A or B) = P(B|A or B) then P(A) = P(B) is not true a priori, and as he offers no defence of this assertion with respect to the Sleeping Beauty experiment his argument doesn't prove that P(H[sub]1[/sub]) = 1/3.
Pierre-Normand May 31, 2023 at 06:47 #811965
Quoting Michael
That’s not accurate. There is a difference between these two assertions:

1. P(R|R or B1) = P(B1|R or B1)
2. P(R) = P(B1)

The first refers to conditional probabilities, the second to unconditional probabilities, and in my example the first is true but the second is false.


It looks like you may have misinterpreted Elga's paper. He doesn't define P as an unconditional probability. In fact, he expressly defines P as "the credence function you ought to have upon first awakening." Consequently, P(H1) and P(T1) are conditional on Sleeping Beauty being in a centered possible world where she is first awakened. The same applies to P(R) and P(B1), which are conditional on you being in a centered possible world where you are presented with a ball still wrapped in aluminum foil before being given a tequila shot.

To understand what P(R) entails, let's look at the situation from the perspective of the game master. At the start of the game, there is one red ball in one bag and two blue balls in the other. The game master randomly selects a bag and takes out one ball (without feeling around to see if there is another one). They hand this ball to you. What's the probability that this ball is red? This conditional probability is represented as P(R). This also represents how you should update your credence about the color of the ball upon learning that this is the first (or the only) ball given to you before the tequila shot. In this scenario, both you and the game master occupy the same centered possible world and share the same evidence.

Incidentally, I think Stalnaker and Milano's paper both produce arguments that are easier to follow and, it seems to me, more rigorous than Elga's valiant first attempt.
Michael May 31, 2023 at 06:55 #811967
Quoting Pierre-Normand
It looks like you may have misinterpreted Elga's paper. He doesn't define P as an unconditional probability. In fact, he expressly defines P as "the credence function you ought to have upon first awakening." Consequently, P(H1) and P(T1) are conditional on Sleeping Beauty being in a centered possible world where she is first awakened. The same applies to P(R) and P(B1), which are conditional on you being in a centered possible world where you are presented with a ball still wrapped in aluminum foil before being given a tequila shot.


I don't see how this entails that P(A|A or B) = P(B|A or B) entails P(A) = P(B).

My example proves that this doesn't follow where P is the credence function I ought to have after being explained the rules of my game. Elga doesn't explain why it follows where P is the credence function I ought to have upon first awakening.

It certainly doesn't follow a priori, and so without any further explanation his argument fails.

Quoting Pierre-Normand
To understand what P(R) entails, let's look at the situation from the perspective of the game master. At the start of the game, there is one red ball in one bag and two blue balls in the other. The game master randomly selects a bag and takes out one ball (without feeling around to see if there is another one). They hand this ball to you. What's the probability that this ball is red?


1/2.
Pierre-Normand May 31, 2023 at 08:22 #811984
Quoting Michael
My example proves that this doesn't follow where P is the credence function I ought to have after being explained the rules of my game. Elga doesn't explain why it follows where P is the credence function I ought to have upon first awakening.


I apologize. I misunderstood this part of Elga's argument. Although P(T1) and P(H1) are the credences Sleeping Beauty should have upon being first awakened, they are not conditional on her being in a centered world where she is first awakened. It is rather P(H1|H1 or T1) that is thus conditioned.

I'm going to study the argument more closely and comment later.

Incidentally, the fact that Sleeping Beauty's credence that the coin landed (or soon will land) heads increases by 1/6 once she learns that today is Monday is a point of agreement between Lewis and Elga.

Elga's interpretation that her credence P(H) gets updated from 1/3 to 1/2 is easily accounted for by the fact that, after she can rule out today being Tuesday, the possible worlds [H1, asleep] and [T1, T2] remain equiprobable. Since the setup of the experiment doesn't even require that anyone look at the result of the toss before Monday night, nothing changes if the toss is actually performed after Sleeping Beauty's awakening. In that case the credences expressed on Monday are about a future coin toss outcome rather than an already actualized one. My previous remark that Sleeping Beauty and the "game master" here are in the same centered possible world and share the same evidence applies.

Lewis's halfer argument, though, commits him to claim that Sleeping Beauty's credence P(H) gets updated from 1/2 to 2/3. In other words, after she is first awakened and told that today is Monday, it becomes rationally warranted for Sleeping Beauty to believe that the probability that the coin landed heads (or will land heads when it is being tossed later tonight) is 2/3. Lewis bites this bullet and explains this as a peculiar form of knowledge about the future. It would have been intriguing to know if Lewis would have extended this line of reasoning to the game master's credence, given that they share the same epistemic situation as Sleeping Beauty. If so, this would suggest a highly unusual implication - that one could acquire knowledge about future events based solely on the fact that someone else would be asleep at the time of those events.
Michael May 31, 2023 at 08:30 #811985
Quoting Pierre-Normand
If so, this would suggest a highly unusual implication - that one could acquire knowledge about future events based solely on the fact that someone else would be asleep at the time of those events.


Elga's reasoning has its own unusual implication. In his own words:

Before being put to sleep, your credence in H was 1/2. I’ve just argued that when you are awakened on Monday, that credence ought to change to 1/3. This belief change is unusual. It is not the result of your receiving new information — you were already certain that you would be awakened on Monday.

...

Thus the Sleeping Beauty example provides a new variety of counterexample to Bas Van Fraassen’s ‘Reflection Principle’ (1984:244, 1995:19), even an extremely qualified version of which entails the following:

"Any agent who is certain that she will tomorrow have credence x in proposition R (though she will neither receive new information nor suffer any cognitive mishaps in the intervening time) ought now to have credence x in R."


I'm inclined towards double-halfer reasoning. P(Heads) = P(Heads | Monday) = 1/2, much like P(Red) = P(Red|Red or Blue 1) = 1/2. Even if the experiments are not exactly the same, I suspect something much like it is going on, again given the Venn diagram here. I just think the way the Sleeping Beauty problem is written makes this harder to see.
Michael May 31, 2023 at 08:46 #811991
Quoting Pierre-Normand
Since the setup of the experiment doesn't even require that anyone look at the result of the toss before Monday night, nothing changes if the toss is actually performed after Sleeping Beauty's awakening. In that case the credences expressed on Monday are about a future coin toss outcome rather than an already actualized one.


I think this is a better way to consider the issue. Then we don't talk about Heads & Monday or Tails & Monday. There is just a Monday interview and then possibly a Tuesday interview. It's not the case that two thirds of all interviews are Tails interviews; it's just the case that half of all experiments have Tuesday interviews. Which is why it's more rational to reason as if one is randomly selected from the set of possible participants rather than to reason as if one's interview is randomly selected from the set of possible interviews (where we distinguish between Heads & Monday and Tails & Monday).

And I think it's even better to not consider days and just consider number of times wakened. So first she is woken up, then put to sleep, then a coin is tossed, and if tails she's woken again. Then we don't get distracted by arguing that her being asleep on Tuesday if Heads is part of the consideration. It doesn't make sense to say that she's asleep during her second waking if Heads.

With this reasoning I think Bayes' theorem is simple. The probability of being woken up is 1 and the probability of being woken up if Heads is 1. That she's woken up a second time if Tails is irrelevant. As such:

[math]\begin{aligned}P(Heads | Awake) &= {{P(Awake | Heads) * P(Heads)} \over P(Awake)}\\&={{1*{1\over2}}\over1}\\&={1\over2}\end{aligned}[/math]

This, incidentally, would be my answer to Milano's "Bayesian Beauty".

I don't have access to Stalnaker's paper to comment on that.
Pierre-Normand May 31, 2023 at 09:06 #811995
Quoting Michael
That's exactly the implication of Elga's reasoning.


Lewis's treatment yields an incredible result. Elga's treatment yields an unsurprising result by means of a controversial method. This is why I prefer the treatments by Stalnaker and (especially) Milano. They both demystify the method, and Milano, in addition, shows the centered-worlds method to be consistent with Bayesian conditionalization, and both to satisfy van Fraassen's principles of reflection.
Michael May 31, 2023 at 09:13 #811997
Quoting Michael
And I think it's even better to not consider days and just consider number of times wakened. So first she is woken up, then put to sleep, then a coin is tossed, and if tails she's woken again. Then we don't get distracted by arguing that her being asleep on Tuesday if Heads is part of the consideration. It doesn't make sense to say that she's asleep during her second waking if Heads.

With this reasoning I think Bayes' theorem is simple. The probability of being woken up is 1 and the probability of being woken up if Heads is 1. That she's woken up a second time if Tails is irrelevant.


In fact there's an even simpler way to phrase Bayes' theorem, even using days (where "Mon or Tue" means "today is Monday or Tuesday").

P(Heads | Mon or Tue) = P(Mon or Tue | Heads) * P(Heads) / P(Mon or Tue)
P(Heads | Mon or Tue) = 1 * 1/2 / 1
P(Heads | Mon or Tue) = 1/2
Pierre-Normand May 31, 2023 at 09:28 #811999
Quoting Michael
I think this is a better way to consider the issue. Then we don't talk about Heads & Monday or Tails & Monday. There is just a Monday interview and then possibly a Tuesday interview. It's not the case that two thirds of all interviews are Tails interviews; it's just the case that half of all experiments have Tuesday interviews. Which is why it's more rational to reason as if one is randomly selected from the set of possible participants rather than to reason as if one's interview is randomly selected from the set of possible interviews.


It appears that you are suggesting a false dichotomy. Logically, both (1) two-thirds of all interviews being Tails interviews, and (2) half of all experiments having Tuesday interviews can simultaneously hold true. Both (1) and (2) are in fact logical implications from the problem's stipulations. The terms 'heads', 'tails', 'Monday', and 'Tuesday' merely serve as convenient labels for the spectrum of possibilities. Solving a problem doesn't involve selectively ignoring one of its stipulations or dismissing its acknowledgement as irrational. If you intentionally disregard a stipulation of the problem (or a logical consequence thereof), you are effectively addressing a different problem.

And I think it's even better to not consider days and just consider number of times wakened. So first she is woken up, then put to sleep, then a coin is tossed, and if tails she's woken again. Then we don't get distracted by arguing that her being asleep on Tuesday if Heads is part of the consideration. It doesn't make sense to say that she's asleep during her second waking if Heads.


I don't think anyone said that she experiences anything on Tuesday when she sleeps. Milano addresses the related concern that it is not an epistemic possibility that Sleeping Beauty will experience the centered possible world in which it is Tuesday and she is asleep. That doesn't mean that this possibility can't be considered in Bayesian conditionalization. You can ascribe non-zero probabilities to states that you will not experience, which happens for instance when you buy life insurance.
Pierre-Normand May 31, 2023 at 09:35 #812002
Just for fun, I asked GPT-4 to comment on your Bayesian analysis.

[hide]
What do you make of Michael's purported solution to the Sleeping Beauty problem:

"In fact there's an even simpler way to phrase Bayes' theorem, even using days.

P(Heads | Monday or Tuesday) = P(Monday or Tuesday | Heads) * P(Heads) / P(Monday or Tuesday)
P(Heads | Monday or Tuesday) = 1 * 1/2 / 1
P(Heads | Monday or Tuesday) = 1/2"

GPT-4

Michael's computation seems to be misguided. He's attempting to use Bayes' theorem to calculate P(Heads | Monday or Tuesday), i.e., the probability of Heads given that it is either Monday or Tuesday. Here's why this approach doesn't work:

First, the event "Monday or Tuesday" is trivial - it will always be either Monday or Tuesday, hence P(Monday or Tuesday) = 1. This factor does not contribute any information to the analysis.

Second, P(Monday or Tuesday | Heads) equals 1, which indicates that if the coin lands Heads, it will definitely be either Monday or Tuesday, which is trivially true.

Third, P(Heads) is the unconditional or prior probability of the coin landing Heads, which equals 1/2 under the problem setup.

Thus, Michael's computation ends up simply restating the original unconditional probability of Heads, P(Heads) = 1/2, instead of calculating the conditional probability of Heads given it's either Monday or Tuesday.

The Sleeping Beauty problem is trickier than a standard conditional probability problem because Sleeping Beauty's state of knowledge changes throughout the problem. On waking, she doesn't know what day it is, and this affects her credences in ways that can't be captured by Michael's use of Bayes' theorem. To accurately solve the problem, one needs to account for Sleeping Beauty's changing state of knowledge and the conditions under which she forms her beliefs.
[/hide]
Michael May 31, 2023 at 09:36 #812003
Reply to Pierre-Normand So ChatGPT is saying that P(Heads | today is Monday or Tuesday) = 1/2 is trivially true. Doesn't that just prove my point?
Pierre-Normand May 31, 2023 at 09:49 #812006
Quoting Michael
So ChatGPT is saying that P(Heads | Monday or Tuesday) = 1/2 is trivially true. Doesn't that just prove my point?


GPT-4 wasn't endorsing your conclusion. Rather, it pointed out that your calculation of P(Heads | Monday or Tuesday) = 1/2 simply restates the unconditional probability P(H) without taking into account Sleeping Beauty's epistemic situation.

The argument you've put forward could be seen as suggesting that the vast body of literature debating the halfer, thirder, and double-halfer solutions has somehow missed the mark, treating a trivial problem as a complex one. This isn't an argument from authority. It's just something to ponder over.
Michael May 31, 2023 at 10:00 #812007
Quoting Pierre-Normand
Rather, it pointed out that your calculation of P(Heads | Monday or Tuesday) = 1/2 simply restates the unconditional probability P(H) without taking into account Sleeping Beauty's epistemic situation.


As Elga says:

This belief change is unusual. It is not the result of your receiving new information — you were already certain that you would be awakened on Monday. (We may even suppose that you knew at the start of the experiment exactly what sensory experiences you would have upon being awakened on Monday.) Neither is this belief change the result of your suffering any cognitive mishaps during the intervening time — recall that the forgetting drug isn’t administered until well after you are first awakened. So what justifies it?

The answer is that you have gone from a situation in which you count your own temporal location as irrelevant to the truth of H, to one in which you count your own temporal location as relevant to the truth of H.


Sleeping Beauty's "epistemic situation" is only that her current situation is relevant. She doesn't learn anything new. All she knows is that her temporal location is either Monday or Tuesday. Before the experiment began this wasn't relevant, and so she only considers P(H). After being woken up this is relevant, and so she considers P(H | Mon or Tue).

That they both give the same answer (because Monday or Tuesday is trivially true) just suggests that Lewis was right. It really is as simple as (in his words) "Only new relevant evidence, centred or uncentred, produces a change in credence; and the evidence (H[sub]1[/sub] ∨ T[sub]1[/sub] ∨ T[sub]2[/sub]) is not relevant to HEADS vs TAILS".

Quoting Pierre-Normand
The argument you've put forward could be seen as suggesting that the vast body of literature debating the halfer, thirder, and double-halfer solutions has somehow missed the mark, treating a trivial problem as a complex one. This isn't an argument from authority. It's just something to ponder over.


Well, I would also think that my argument that P(A|A or B) = P(B|A or B) doesn't entail P(A) = P(B) is quite trivial. Maybe I've made a mistake (whether with this or my interpretation of Elga), or maybe Elga did. I'll admit that the former is most likely, but my reasoning appears sound.
fdrake May 31, 2023 at 10:14 #812012
I've been reading along, and I have a meta question for you both @Pierre-Normand and @Michael - why is it helpful to discuss variants which are allegedly the same as the original problem when you both don't seem to agree on what the sampling mechanism in the original problem is?
Michael May 31, 2023 at 10:21 #812013
Reply to fdrake I'm not sure what you mean by the sampling mechanism. There is one experiment with one coin toss. We both appear to agree on that.
Pierre-Normand May 31, 2023 at 10:37 #812017
Quoting fdrake
I've been reading along, I have a meta question for you both Pierre-Normand@Michael - why is it helpful to discuss variants which are allegedly the same as the original problem when you both don't seem to agree what the sampling mechanism in the original problem is?


I introduced the prisoner variant to satisfy Michael's insistence that Sleeping Beauty's credence only be evaluated within a single-run experimental context. It aimed to show that it is rational for the prisoner to act under the assumption that their safehouse is surrounded by crocodiles with probability 6/11 whenever they get an opportunity to act on the basis of their credence, and this in spite of the fact that they had initially been brought to a safehouse surrounded by lions with probability 5/6. It also highlighted how the credence of the participant (prisoner) might interact with the different credence of an external agent (the police). I think my distinction of case-1 and case-2 also helped diagnose the source of Michael's halfer intuition. It stems from confusing the standard Sleeping Beauty problem with a different but closely related one (that indeed may be construed as stemming from a different sampling mechanism).

The blue and red balls mechanism was introduced by Michael but I proposed to refine it in order to better mirror the Sleeping Beauty protocol.

Regarding the sampling mechanism, I believe the centered possible world approach offers a compelling interpretation, although Milano suggests that there may still be residual disagreement about setting priors. This approach is compelling because it allows the participant to reason about her epistemic situation in a natural way without the need to incorporate any weird metaphysical conjectures about how she finds herself in her current situation. However, I'm not entirely confident about this. Milano's chosen values for her two prior parameters seem well-argued to me, grounded in both the stipulations of the problem and uncontroversial rationality assumptions. But there may be something I've overlooked.
fdrake May 31, 2023 at 11:11 #812023
Reply to Michael @Pierre-Normand

Aye. But you're disagreeing on whether the coin toss is the only random thing in the experiment (and is realised in pairs of days), whether it's appropriate to assign a random variable to model part of SB's credence (her "subjective probability"), and the relationship between that random variable and the coin toss. Like the enduring disagreement you both had regarding whether the interviews can be seen as drawn individually from a bag and have probabilities assigned to that (which can be used to support a thirder position).

The sampling mechanism determines what would create an item of data in the experiment. That could be "pairs of days" or "individual days" or "coinflip result, day pair triples" or whatever. The description in the OP doesn't determine one sampling mechanism; it just suggests one. Given such a mechanism, the calculations are pretty trivial.

As an example, when you (Michael) were speaking about seeing the interviews as drawn from a bag, because SB's interview-days "observe" coin toss results, that specifies a sampling mechanism on individual interview days which are conditionally related to coin tosses. Two of those interview days occur in tails, one in heads, so the probability of heads is 1/3 in that case.

Inversely, when you were speaking about seeing the awake-day pairs as drawn from a bag; (awake, not awake) for heads, (awake, awake) for tails, that either assigns the random variable "am I awake?" to the days (with values awake/not awake) or assigns a coinflip random variable to the pairs (awake, not awake) for heads and (awake, awake) for tails. If the sampling mechanism is specified the first way, the probability of heads given "awake" turns out to be 1/3 as before. But if it's on the pairs alone (and not the elements of the pairs) it turns out to be 1/2.

As @sime was intimating, a lot of the disagreement comes from how you end up assigning random variables. EG using the principle of indifference over the days for SB's day credence isn't a "neutral" move with respect to the sampling mechanism, since it conceives what day it is as a random variable which can be subject to a prior. That isn't an available move if the constitutive events of the sample space are (awake, awake) and (awake, asleep) - since the elements of both pairs are stipulated to be nonrandom in that model. This is distinguished from the bivariate approach we spoke about earlier which yields a thirder position.

This bottoms out in not agreeing on what constitutes the space of events for the random variables, rather than in the calculations I think!

Quoting Pierre-Normand
It yields a natural interpretation because it enables the participant to reason about her epistemic situation in a natural way without the need to import some weird metaphysical baggage about the ways in which she is being "dropped" in her current situation


The "metaphysical baggage" about being "dropped" into a day in the centred world case, as I see it, is a three sided equiprobable coin flip. It's only as mysterious as a coinflip. In the "non-centred" case, SB isn't "randomly dropped" into a day at all, she's instead dropped into a heads-awakening or a tails-awakening (I think).

I ended up in a state of confusion in the calculations, having a few contradictions in reasoning, which this paper elevates into framings of the experiment (including SB's setting within it) having inconsistent sample spaces between the centred and non-centred accounts. Thus yielding a "dissolution" of the paradox of the form: it's only a paradox when centred and non-centred worlds are equated.
Michael May 31, 2023 at 11:20 #812025
Reply to fdrake I did mention this. There are two ways to reason:

1. I should reason as if I am randomly selected from the set of possible participants
2. I should reason as if my interview is randomly selected from the set of possible interviews

I do the former, he does the latter.

My use of variants, such as that of tossing the coin 100 times, was to show that applying his reasoning leads to what I believe is an absurd conclusion (that even if the experiment is only done once it is rational to believe that P(100 Heads) = 2/3).

He accepts this conclusion, though, so at least he's consistent.

But you’re right that this fundamental disagreement on how best to reason might make these arguments irresolvable. That’s why I’ve moved on to critiquing Elga’s argument, which is of a different sort, and to an application of Bayes’ theorem with what I believe are irrefutable terms (although we disagree over whether or not the result actually answers the problem).
Pierre-Normand May 31, 2023 at 11:26 #812026
Quoting fdrake
I ended up in a state of confusion in the calculations, having a few contradictions in reasoning, which this paper elevates into framings of the experiment (including SB's setting within it) having inconsistent sample spaces between the centred and non-centred accounts. Thus yielding a "dissolution" of the paradox of the form; it's only a paradox when centred and non-centred worlds are equated.


Thanks! I'm going to read this paper. I like the conclusion: "Thus, we suggest that although it is true that Beauty is in a heads-awakening if and only if the coin landed heads, Beauty, upon awakening, should assign probability 1/3 to the former and probability 1/2 to the latter."

Is that not closely analogous to my distinction of case-2 and case-1 (respectively) in my analysis of the prisoner scenario? My analysis also suggests a pragmatist interpretation of the choice between centered and non-centered accounts. The choice depends neither on metaphysical preferences nor on arbitrary stipulations regarding sampling mechanisms, but rather on the use one intends to make of one's credence in the outcome.
fdrake May 31, 2023 at 11:27 #812027
Quoting Michael
?fdrake I did mention this. There are two ways to reason:

1. I should reason as if I am randomly selected from the set of possible participants
2. I should reason as if my interview is randomly selected from the set of possible interviews


Quoting Michael
My use of variants, such as that of tossing the coin 100 times, was to show that applying his reasoning leads to what I believe is an absurd conclusion (that even if the experiment is only done once it is rational to believe that P(100 Heads) = 2/3).


Aye! I remember these. It was a good point. This is what put me onto the idea that there's contradictions inherent in the framing. If you end up trying to use the tiny probability of P(100 Heads) to update SB's interview credence (combining the "awakening" process with the "day sampling=coinflip" process), I think you end up in clown logic land.

Only I was wrong in saying that it was "the thirder's" position which was incoherent; it was my assumption that (roughly) the sampling mechanisms you intimated can be unproblematically combined. They appear to be talking about the same thing, but they do so so differently that you end up with contradictions.




Pierre-Normand May 31, 2023 at 12:05 #812032
Quoting Michael
I don’t think any reasonable person would believe this. I certainly wouldn’t.


Perhaps a rational individual might not believe in the plausibility of being woken up and put back to sleep 2^101 times. But even if this extreme scenario makes it unreasonable to give serious consideration to such a highly unlikely (however extraordinary) event, it does not follow that thirder-like solutions to a less extreme version of the Sleeping Beauty problem would be equally unreasonable.

For instance, suppose you offer me the opportunity to purchase a $100 lottery ticket that carries a one in a septillion chance of winning me $200 septillion. Despite the expected value being positive, it may not be reasonable for me to purchase the ticket. However, it would be a logical fallacy to extrapolate from this example and conclude that it would also be unreasonable for me to buy a $100 lottery ticket with a one in ten chance of winning me $2000. Given I'm not in immediate need of this $100, it might actually be quite unreasonable for me to pass up such an opportunity, even though I stand to lose $100 nine times out of ten.
Michael May 31, 2023 at 12:08 #812033
Quoting Pierre-Normand
For instance, suppose you offer me the opportunity to purchase a $100 lottery ticket that carries a one in a septillion chance of winning me $200 septillion. Despite the expected value being positive, it may not be reasonable for me to purchase the ticket. However, it would be a logical fallacy to extrapolate from this example and conclude that it would also be unreasonable for me to buy a $100 lottery ticket with a one in ten chance of winning me $2000. Given I'm not in immediate need of this $100, it might actually be quite unreasonable for me to pass up such an opportunity, even though I stand to lose $100 in nine times out of ten.


It would be unreasonable of you to believe that you are most likely to win, even if it’s financially reasonable to play.
Pierre-Normand May 31, 2023 at 12:19 #812037
Quoting Michael
It would be unreasonable of you to believe that you are most likely to win.


Consider this scenario in the context of the Sleeping Beauty problem where a ten-sided die is thrown, with ten awakenings if it lands on 10 and a single awakening otherwise. In this situation, it would be rational for me to believe that I am more likely to win an even money bet placed on the outcome being a 10 during the current awakening. Despite the fact that in 9 out of 10 experimental runs I would lose my only bet, in the remaining run I would win ten times my bet by the time the experiment concludes. Thus, on average, I'd win 10 out of 19 times. And this is because my credence that the outcome is 10, P(10) = 10/19, would exactly match, in the long run, the proportion of times the outcome indeed was 10 when I awoke.
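To make the arithmetic explicit (this is just my own bookkeeping, given the protocol as stated): out of ten runs there are, on average, nine awakenings at which the outcome is not 10 and ten awakenings at which it is, so the long-run proportion of awakenings at which the outcome is 10 is

[math]\frac{\tfrac{1}{10}\times 10}{\tfrac{9}{10}\times 1+\tfrac{1}{10}\times 10}=\frac{10}{19}[/math]

while the proportion of experimental runs in which the outcome is 10 remains 1/10.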
Michael May 31, 2023 at 12:23 #812039
Reply to Pierre-Normand Yes, it is rational to believe that if you repeat the game enough times then you will win more than you lose, but it is still irrational to believe that if you play the game once then you are most likely to win.
Pierre-Normand May 31, 2023 at 12:39 #812042
Quoting Michael
Yes, if you repeat the game enough times then you will win more than you lose, but it is still irrational to believe that if you play the game once then you are most likely to win.


Indeed, it's true that if you play the game once, it's not most likely that you'll win during that particular experimental run. However, when I suggest it's more likely than not that the outcome is 10 during the current awakening, I'm not claiming that it's most likely that the current experimental run will result in a win for me. These are two distinct statements, despite their truth values being perfectly correlated. This seems to be the point that was made in the conclusion of the paper that @fdrake mentioned. Furthermore, these claims hold true irrespective of whether the experiment is run once or multiple times.

It seems quite counterintuitive that if my credence concerns the outcome of the experimental run I'm in, it is P(10) = 1/10, and if it's the outcome of the present awakening, it's P(10) = 10/19, and that both outcomes are perfectly correlated. But that's just because the one gerrymanders the other.
fdrake May 31, 2023 at 12:52 #812045
Quoting Pierre-Normand
and that both outcomes are perfectly correlated


Perhaps it's a misnomer to call them correlated, because there's no meaningful notion of a joint event of both occurrences within the same sample space. As an analogy, it's like having one coinflip labelled with outcomes Heads and Tails, and another labelled Flibberty and Gibbet, then asking what's the probability that the Heads/Tails outcome coin comes out Flibberty or Gibbet. You can analogise the mechanisms by blurring the eyes, but that commits a category error. If that paper is to be believed.
Michael May 31, 2023 at 12:53 #812046
Quoting Pierre-Normand
It seems quite counterintuitive that if my credence concerns the outcome of the experimental run I'm in, it is P(10) = 1/10, and if it's the outcome of the present awakening, it's P(10) = 10/19, and that both outcomes are perfectly correlated.


So this goes back to what I said before. Either we reason as if we’re randomly selected from the set of all participants, and so P(10) = 1/10, or we reason as if our interview is randomly selected from the set of all interviews, and so P(10) = 10/19.

Given that the experiment doesn't work by randomly selecting an interview from the set of all interviews, I don't think it rational to reason as if it is. The experiment works by rolling a die, and so it is only rational to reason as if we're randomly selected from the set of all participants.

How we choose to bet just has no bearing on one's credence that one is likely to win. With your lottery example we play even if we know that we're most likely to lose (and in fact I play the lottery even though the expected value of winning is less than the cost). And with this example I might bet on 10 even if my credence is that it is less likely, simply because I know that I will win in the long run (or, if playing one game, I'm just willing to take the risk because of the greater expected value).
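To put a number on "winning in the long run" (my own back-of-the-envelope figure, assuming one unit staked at even odds on 10 at every awakening): nine runs out of ten cost one unit (a single losing bet), while the tenth returns ten units (ten winning bets), for an expected profit per run of

[math]\tfrac{9}{10}\times(-1)+\tfrac{1}{10}\times(+10)=+\tfrac{1}{10}[/math]

which is why the bet is worth taking even though my credence that the die landed on 10 is only 1/10.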
Pierre-Normand May 31, 2023 at 12:58 #812047
Quoting fdrake
Perhaps it's a misnomer to call them correlated, because there's no meaningful notion of a joint event of both occurrences within the same sample space.


What I mean is that whenever the coin landed heads during a particular awakening, then it also landed heads during the particular experimental run this awakening is a part of, and vice versa. But Sleeping Beauty's credences in those two different outcomes (centered = 1/3 and uncentered = 1/2) are different, they are both right, and they both perfectly track the frequencies of those corresponding outcomes when the experiment is repeated.
fdrake May 31, 2023 at 13:14 #812048
Quoting Pierre-Normand
What I mean is that whenever the coin landed heads during a particular awakening, then it also landed heads during the particular experimental run this awakening is a part of, and vice versa.


Aye I see what you mean. Thank you for your clarification. I really like the paper's use of the idea that the claim "X iff Y" works completely differently from "event X coincides with event Y", because event X and event Y both belong in different sample spaces. And we usually leave that difference unattended to.
Pierre-Normand May 31, 2023 at 13:21 #812049
Quoting Michael
Given that the experiment doesn’t work by randomly selecting an interview from the set of all interviews, I don’t think it rational to reason as if it is. The experiment works by rolling a dice, and so it is only rational to reason as if we’re randomly selected from the set of all participants.


Your assertion that I'm reasoning "as if" someone selected an interview from all possible interviews isn't quite accurate. Sleeping Beauty is fully aware that the current experimental run comprises either one or multiple awakenings, depending on the outcome of a coin toss. She doesn't assume that someone arbitrarily selected an awakening of hers. Rather, she knows she's currently experiencing an awakening and isn't certain whether it's the only one in the present run, or one among many.

She's asked about her credence regarding the outcome of the coin toss. Given the known setup of the experiment, she can choose to calculate her credence in either the centered or non-centered heads outcome. She could even calculate both. If she opts to track her awakenings (centered possible worlds), her credence in heads is 1/3. If she decides to track her experimental runs (non-centered), her credence in heads is 1/2. These are credences in distinct outcomes, and both can serve different purposes.

For example, in my hostage scenario, one approach maximizes one's chance of survival during an escape, while the other maximizes one's chances of being rescued by the police when communicating probable location. (I believe I may have made a mistake in my earlier analysis of the police rescue situation).
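For anyone who wants to see the two frequencies side by side, here is a rough simulation sketch in Python (my own quick illustration, not something drawn from the papers): it tallies heads both per experimental run and per awakening.

[code]
import random

def simulate(runs=100_000):
    heads_runs = 0        # experimental runs in which the coin landed heads
    awakenings = 0        # total awakenings across all runs
    heads_awakenings = 0  # awakenings at which the coin shows heads

    for _ in range(runs):
        heads = random.random() < 0.5
        if heads:
            heads_runs += 1
            awakenings += 1        # woken once (Monday only)
            heads_awakenings += 1
        else:
            awakenings += 2        # woken twice (Monday and Tuesday)

    print("heads per experimental run:", heads_runs / runs)             # ~ 1/2
    print("heads per awakening:       ", heads_awakenings / awakenings) # ~ 1/3

simulate()
[/code]

Both numbers come out of the very same process; the dispute is only over which of them Sleeping Beauty's credence is supposed to track.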
Michael May 31, 2023 at 13:23 #812050
Quoting Pierre-Normand
If she opts to track her awakenings (centered possible worlds), her credence in heads is 1/3.


How does one do this if not by reasoning as if one's interview is randomly selected from the set of possible interviews?
Pierre-Normand May 31, 2023 at 14:02 #812054
Quoting Michael
And how does one do this if not by reasoning as if one's interview is randomly selected from the set of possible interviews?


If by "reasoning as if one's interview is randomly selected from the set of possible interviews" you're referring to Sleeping Beauty's avowal of her inability to single out any one of those possible awakenings as more or less likely than another, then yes, she is reasoning in that way. However, this form of reasoning is guided by her need to assess her credence in a heads-awakening outcome (her centered possible world), not by the setup of the experiment.

The design of the experiment allows participants to assess both their centered and non-centered credences. What they decide to track can be influenced by various factors. For instance, they might be trying to optimize their expected value or their chances of winning; they could be trying to maximize their chances of escaping alive versus maximizing their chances of being rescued. Their choice could also be guided by their interests, such as whether they want to track the frequency of heads-awakenings or the frequency of heads-experimental-runs. In other words, the choice is driven by the participants' individual needs and interests, not dictated by the structure of the experiment itself.
Michael May 31, 2023 at 14:10 #812057
Quoting Pierre-Normand
Sleeping Beauty's inability to single out any one of those possible awakenings as more or less likely than another


Well, that’s the very thing being debated. A halfer might say that a Monday & Heads awakening is twice as likely as a Monday & Tails awakening, and so it is a non sequitur to argue that because Tails awakenings are twice as frequent in the long run they are twice as likely.

So how does the thirder argue that they are equally likely if not by first reasoning as if an interview is randomly selected from the set of possible interviews?
Pierre-Normand May 31, 2023 at 14:18 #812062
Quoting Michael
Well, that’s the very thing being debated. A halfer might say that a Monday & Heads awakening is twice as likely as a Monday & Tails awakening, [...]


Would not a halfer say that they are equally as likely?
Michael May 31, 2023 at 14:41 #812065
Quoting Pierre-Normand
Would not a halfer say that they are equally as likely?


Equally likely to happen, such that P(Monday & Heads) = P(Monday & Tails) = P(Tuesday & Tails) = 1/2, as per that earlier Venn diagram, but not equally likely that today is that interview. If P(Heads) = 1/2 then P(Monday & Heads) = 1/2, and so P(Monday & Tails) + P(Tuesday & Tails) = 1/2; therefore, if P(Monday & Tails) = P(Tuesday & Tails), then P(Monday & Tails) = P(Tuesday & Tails) = 1/4.

This was Lewis' reasoning in his paper.
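Spelling that reasoning out, so it's clear where Lewis's contested 2/3 comes from:

[math]\begin{aligned}P(Heads \land Mon) &= \tfrac{1}{2}, \qquad P(Tails \land Mon) = P(Tails \land Tue) = \tfrac{1}{4}\\P(Heads \mid Mon) &= \frac{P(Heads \land Mon)}{P(Heads \land Mon) + P(Tails \land Mon)} = \frac{1/2}{1/2 + 1/4} = \tfrac{2}{3}\end{aligned}[/math]

which is exactly the consequence the double-halfer avoids by keeping P(Heads | Monday) = 1/2.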
Michael May 31, 2023 at 15:38 #812082
Quoting Pierre-Normand
This scenario doesn't accurately reflect the Sleeping Beauty experiment. Instead, imagine that one bag is chosen at random. You are then given one ball from that bag, but you're not allowed to see it just yet. You then drink a shot of tequila that causes you to forget what just happened. Finally, you are given another ball from the same bag, unless the bag is now empty, in which case you're dismissed. The balls are wrapped in aluminum foil, so you can't see their color. Each time you're given a ball, you're invited to express your credence regarding its color (or to place a bet, if you wish) before unwrapping it.


I didn't properly address this but I actually think it illustrates the point quite clearly. I'll amend it slightly such that the blue balls are numbered and will be pulled out in order, to better represent Monday and Tuesday.

If I'm told that this is my first ball (it's Monday) then P(R) = P(B[sub]1[/sub]) = 1/2.

If I'm told that this is a blue ball (it's Tails) then P(B[sub]1[/sub]) = P(B[sub]2[/sub]) = 1/2.

If I don't know anything then I should reason as if my ball was randomly selected from one of the two bags, and so P(R) = 1/2 and P(B[sub]1[/sub]) = P(B[sub]2[/sub]) = 1/4 (or just P(R) = P(B) = 1/2).

This contrasts with your reasoning as if my ball is randomly selected from a pile such that P(R) = P(B[sub]1[/sub]) = P(B[sub]2[/sub]) = 1/3.

At the very least this shows how halfers can be double halfers to avoid Lewis' P(Heads | Monday) = 2/3.
Pierre-Normand June 01, 2023 at 01:19 #812191
Quoting Michael
I didn't properly address this but I actually think it illustrates the point quite clearly. I'll amend it slightly such that the blue balls are numbered and will be pulled out in order, to better represent Monday and Tuesday.

If I'm told that this is my first ball (it's Monday) then P(R) = P(B1) = 1/2.

If I'm told that this is a blue ball (it's Tails) then P(B1) = P(B2) = 1/2.

If I don't know whether this is my first ball or is a blue ball (neither know it's Monday or tails) then I should assume I randomly select a ball from a bag, and so P(R) = 1/2 and P(B1) = P(B2) = 1/4.

This contrasts with your reasoning where we randomly select a ball from a pile such that P(R) = P(B1) = P(B2) = 1/3.


Your revised scenario seems to neglect the existence of a state where the player is being dismissed. This isn't quite equivalent to Milano's 'blind state' in the Sleeping Beauty context – where the 'blind state' refers to the case in which Sleeping Beauty is not awakened on Tuesday and thus never experiences it – but it functions in a similar way for the purpose of this analysis. In this state, the time comes to pull a second ball from the bag, but there's no such ball because the randomly selected bag only contained one. Even though the player is dismissed (as opposed to Sleeping Beauty, who is left asleep), a prior probability of P(Dismissed) = 1/4 can still be assigned to this state where he loses an opportunity to bet/guess. Upon observing the game master pulling out a ball, the player updates his prior for that state to zero, thus impacting the calculation of the posterior P(Red|Opp). If we assign P(Dismissed) = 1/4, it follows that P(Red|Opp) = 1/3.
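Written out, with the four narrow centered states (red ball & first draw, red bag & empty second draw, blue ball & first draw, blue ball & second draw) each given a prior of 1/4, as stipulated above:

[math]P(Red \mid Opp) = \frac{P(Red \land Opp)}{1 - P(Dismissed)} = \frac{1/4}{1 - 1/4} = \frac{1}{3}[/math]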

I now concur with you that there are scenarios – contingent upon specific purposes or interests – where Sleeping Beauty would want her credence P(H|W) to track the possibility of her awakening during a Heads-experimental run. In such scenarios, since the 'blind state' Tuesday&asleep isn't an independent possibility but is considered merely a feature of the relevant centered state (i.e., the state where she is in a Heads experimental run), it can be assigned the same prior as the probability that she will be awakened (either once or twice) during a Heads-experimental-run. This prior is 1/2. In this case, P(H|W) is calculated to be 1/2, but the interpretation of states H and W differs. The space of possibilities is parameterized differently, corresponding to 'wider' centered possible worlds (i.e., centered states that span a whole experimental run).

I've previously suggested cases where the participant's goals make them more interested in tracking narrower centered states. As revealed in the analysis by Laureano Luna (following Groisman), it can be misleading to assume that outcomes perfectly correlated with centered 'H & awake' states – regardless of how broadly or narrowly they are being defined – must also share the same probabilities. That doesn't mean the frequencies and probabilities don't track each other, a point Milano also makes. The (Bayesian) updated credences still track the frequencies of the relevantly defined centered states, which is why they can underpin rational betting behaviors, even though maximizing EV is just one among many goals an agent can have when making use of their credences.
Pierre-Normand June 01, 2023 at 03:48 #812249
Here is an excerpt of a discussion I've had with GPT-4. I don't think its last suggestion, about the error Lewis might be making regarding Beauty's updating her credence in H in light of the fact that today is Monday, is plausible. Despite that, and other small mistakes, I'm nevertheless quite impressed with its ability to grasp formal arguments.

[hide]
PN:

Thank you! I've now made use of the stipulations above, assuming a prior P(W) = 3/4 that Sleeping Beauty is awakened on any given day, and that W and H represent narrow possible worlds centered on Beauty's potential awakening events (or on the event of her remaining asleep on Tuesday). I've calculated P(H|W) using Bayes' theorem and arrived at the expected answer. Would you kindly do the same, step by step, explaining your reasoning as you go along, to see if our intuitions match?

GPT-4:

Sure! We can use Bayes' Theorem to calculate P(H|W) step by step. Let me first explain the process.

Bayes' Theorem states that P(H|W) = [P(W|H) * P(H)] / P(W), where:

• P(H|W) is the probability of hypothesis H (that the coin landed heads) given evidence W (that Sleeping Beauty is awake),
• P(W|H) is the probability of evidence W given hypothesis H,
• P(H) is the prior probability of hypothesis H, and
• P(W) is the prior probability of evidence W.
Now, let's start by establishing our priors:

• P(H) = 1/2 (because the coin is fair),
• P(W) = 3/4 (as given in the problem; Sleeping Beauty is awakened on any given day with this probability).

Next, we need to figure out P(W|H):

• If H is true (the coin landed heads), Sleeping Beauty is awakened only once (on Monday), so P(W|H) = 1.

We can now plug these values into Bayes' theorem to get P(H|W):

P(H|W) = [P(W|H) * P(H)] / P(W)
= [1 * 1/2] / 3/4
= 1/2 / 3/4
= 2/3.

So, P(H|W) = 2/3, which means that given that Sleeping Beauty is awake, there is a 2/3 probability that the coin landed heads. This answer contrasts with the common answer of 1/3 often provided in the literature, but our version of the problem has altered the prior probabilities, which has in turn influenced the outcome.

PN:

I myself arrived at 1/3, and the prior seems consistent with what thirders also assume. I think your interpretation of P(W|H) may be questionable. P(W|H) is the probability that Beauty is in a narrow centered world where she is awakened, conditioned on the outcome H. When the outcome is H, she is awakened in the world Monday&H, and not awakened in the world Tuesday&H. P(W|H) is therefore 1/2, and P(H|W) is calculated to be 1/3, as expected.

GPT-4:

You're right, and I apologize for the confusion. My interpretation of P(W|H) was incorrect. Let's correct that.

P(W|H) should indeed be the probability of W given H, which is the probability that Sleeping Beauty is awake on a given day given that the coin landed heads. In the case where the coin lands heads, Sleeping Beauty is only awakened on Monday, not Tuesday. Therefore, P(W|H) is indeed 1/2.

Given that, let's recalculate:

P(H|W) = [P(W|H) * P(H)] / P(W)
= [1/2 * 1/2] / 3/4
= 1/4 / 3/4
= 1/3.

You're correct. Given that Sleeping Beauty is awake, there is a 1/3 probability that the coin landed heads, which is the commonly accepted solution among "thirders". Thank you for your patience and for catching that mistake.

PN:

No worry! Unlike you, I always have the opportunity to reread my drafts and scour for mistakes!

GPT-4:

Thank you for your understanding! It's true that as a machine learning model, I provide responses in real time without the ability to review or edit my responses after they are generated. I rely on the accuracy of the information I was trained on and the rules of inference I use to generate responses. I always strive for precision, so I appreciate your patience and understanding when occasional errors do occur. Let me know if you have any more questions or problems to solve - I'm here to help!

PN:

Interestingly, my approach can also warrant the halfer solution when suitable priors are chosen consistently with the focus being displaced onto wider centered worlds that span complete experiment runs. (This is the approach Michael favors, although he is inexplicit about his choice of centered worlds.) Lewis's approach, though, has the untoward consequence that upon learning that today is Monday, Beauty must update her credence in H to 2/3. Intuitively, it should remain 1/2. In the light of our previous discussion, can you surmise what mistake Lewis might be making?

GPT-4:

Lewis's approach is a prominent variant of the halfer position in the Sleeping Beauty problem. According to his view, Sleeping Beauty should assign a credence of 1/2 to the coin landing heads because the initial probability, before she went to sleep, was 1/2 and she doesn't get any new information upon waking up that would lead her to update this credence.

However, as you've pointed out, upon learning that it is Monday, Lewis suggests that Sleeping Beauty should update her credence in the coin having landed heads to 2/3. This has been criticized as counterintuitive. From Sleeping Beauty's perspective, the fact that it's Monday doesn't seem to provide her with any information that would make the coin landing heads more probable.

One potential error in Lewis's reasoning might be related to the point we've discussed earlier, namely the definition and interpretation of the events and the kind of possible world one chooses to center upon. When Sleeping Beauty wakes up, her situation is not symmetrical with respect to the possible worlds "Heads & Monday" and "Tails & Monday". While it's true that these two worlds are equally likely given that it's Monday, Sleeping Beauty's evidence upon waking up isn't just that it's Monday, but rather something like "it's an awakening day and I don't know which day it is". If one interprets the evidence in this way, it seems more reasonable to say that her credence in heads should remain at 1/2 upon learning that it's Monday.

Additionally, an error in Lewis' argument could also arise from a misunderstanding of conditional probabilities, specifically the probability of it being Monday given that the coin landed heads, P(Monday|H). This would naturally be 1, because if the coin landed heads, she is only awakened on Monday. The halfer could potentially misinterpret this as saying the coin is more likely to have landed heads given that it's Monday, which isn't necessarily the case.

As always in these sorts of debates, much depends on how exactly we interpret the notions of evidence, updating, and conditional credence. Different interpretations can lead to different solutions to the problem.
[/hide]
Michael June 01, 2023 at 06:55 #812317
Quoting Pierre-Normand
Your revised scenario seems to neglect the existence of a state where the player is being dismissed.


It doesn't. You're dismissed after red or the second blue.

It is still the case that if I don't know whether this is Monday or Tails then I reason as if my ball is randomly selected from one of the two bags, such that P(R) = 1/2 and P(B1) = P(B2) = 1/4 (or just P(R) = P(B) = 1/2). It better reflects how the experiment is actually conducted.

I don't reason as if my ball is randomly selected from a pile such that P(R) = P(B1) = P(B2) = 1/3.
Michael June 01, 2023 at 07:06 #812318
Quoting Pierre-Normand
Even though the player is dismissed (as opposed to Sleeping Beauty, who is left asleep), a prior probability of P(Dismissed) = 1/4 can still be assigned to this state where he loses an opportunity to bet/guess. Upon observing the game master pulling out a ball, the player updates his prior for that state to zero, thus impacting the calculation of the posterior P(Red|Opp). If we assign P(Dismissed) = 1/4, it follows that P(Red|Opp) = 1/3.


If you're going to reason this way then you also need to account for the same with blue. You reach in after the second blue and pull out nothing. So really P(Dismissed) > 1/4.

I just don't think it makes sense to reason this way.

If it helps, consider no tequila after the final ball. After being given a ball and asked your credence you're then dismissed if it's either red or second blue.
Pierre-Normand June 01, 2023 at 07:20 #812321
Quoting Michael
It doesn't. You're dismissed after red or the second blue.


That's cute but rather misses the point. Sleeping Beauty also is dismissed on the third day no matter what. But if we ignore the narrow centered case where the day is Tuesday and the coin landed Heads, we are changing the structure of the problem and making it unintelligible that we should set the prior P(W) to 3/4.
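To spell out where the 3/4 comes from: treating the four narrow centered states (Monday & Heads, Tuesday & Heads, Monday & Tails, Tuesday & Tails) as a priori equiprobable, Sleeping Beauty is awake in three of them, so

[math]P(W) = P(Mon \land H) + P(Mon \land T) + P(Tue \land T) = \tfrac{1}{4} + \tfrac{1}{4} + \tfrac{1}{4} = \tfrac{3}{4}[/math]

Drop the Tuesday & Heads state and that prior loses its rationale.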
Michael June 01, 2023 at 07:23 #812322
Quoting Pierre-Normand
we are changing the structure of the problem and making it unintelligible that we should set the prior P(W) to 3/4.


That's precisely the point, and why I suggested ignoring days and just saying that if heads then woken once and if tails then woken twice. P(W) = 1.

That P(Heads & Tuesday (or second waking)) consideration is a distraction that leads you to the wrong conclusion.
Pierre-Normand June 01, 2023 at 07:25 #812323
Quoting Michael
If you're going to reason this way then you also need to account for the same with blue. You reach in after the second ball and pull out nothing. So really P(Dismissed) = 2/5.


Under the narrow centered worlds parameterization of the problem space, the possibility (indeed, certainty) of Sleeping Beauty being dismissed on the third day is irrelevant. The third day isn't a day of possible interviews. And likewise in the ball picking analogy, if we wish to make it mirror the structure of the Sleeping Beauty problem.
Pierre-Normand June 01, 2023 at 07:34 #812324
Quoting Michael
That's precisely the point, and why I suggested ignoring days and just saying that if heads then woken once and if tails then woken twice. P(W) = 1.

That (Heads & Tuesday (or second waking)) is a distraction that leads you to the wrong conclusion.


There are two possible conclusions depending on the way you choose to define the temporal span of your centered possible worlds (the states in the problem space). The conclusions are different because the questions are different. Of course, if you are unclear about the manner in which you define your states, then you might get confused and answer the question about the narrowly defined states (possible awakenings) with an answer that is suitable for the widely defined states (possible experiment runs), or vice versa. But what determines the right question to ask isn't the statement of the Sleeping Beauty problem as such but rather your interest or goal in asking the question. I gave examples where either one is relevant.
Michael June 01, 2023 at 07:40 #812325
Quoting Pierre-Normand
But what determines the right question to ask isn't the statement of the Sleeping Beauty problem as such but rather your interest or goal in asking the question. I gave examples where either one is relevant.


Are you referring to the safehouse and escape? That's a different scenario entirely.

I flip a coin. If heads then I flip again. If heads you win a car, otherwise you win nothing. If the first flip is tails then I flip again. If heads you win a motorbike, otherwise I flip again. If heads you win a motorbike, otherwise you win nothing.

I do this in secret and then tell you that you've won a prize. Given that you're more likely to win a prize if it's tails then it's reasonable to believe that it was most likely tails.

Now consider a similar scenario, but if heads then you win a car and if tails then you win two motorbikes. I do this in secret and tell you that you've won at least one prize. It is not reasonable to believe that it was most likely tails.

Your safehouse and escape example is analogous to that first case. Your introduction of opportunities to escape changes the answer, and so it isn't comparable to the Sleeping Beauty case where there is nothing analogous to an escape opportunity with which to reassess the probability of the coin toss.
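To make the contrast explicit (my own arithmetic, assuming fair coin flips throughout):

[math]\begin{aligned}P(Tails \mid prize) &= \frac{P(prize \mid Tails)\,P(Tails)}{P(prize \mid Tails)\,P(Tails) + P(prize \mid Heads)\,P(Heads)} = \frac{\tfrac{3}{4}\cdot\tfrac{1}{2}}{\tfrac{3}{4}\cdot\tfrac{1}{2} + \tfrac{1}{2}\cdot\tfrac{1}{2}} = \tfrac{3}{5}\\P(Tails \mid \text{at least one prize}) &= \frac{1\cdot\tfrac{1}{2}}{1\cdot\tfrac{1}{2} + 1\cdot\tfrac{1}{2}} = \tfrac{1}{2}\end{aligned}[/math]

In the first scenario, learning that you won a prize genuinely favours tails; in the second, being told you won at least one prize tells you nothing.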
Pierre-Normand June 01, 2023 at 08:36 #812326
Quoting Michael
Your safehouse and escape example is analogous to that first case, not the second. Your introduction of opportunities to escape changes the answer, and so it isn't comparable to the Sleeping Beauty case where there is nothing comparable to it (and so if anything the second case above is closer to the Sleeping Beauty problem).


My safehouse and escape example – before I introduced the idea of escape possibilities – is designed in such a way that the prisoner's epistemic situation exactly mirrors Sleeping Beauty's epistemic situation. Upon awakening on any given day, the prisoner's credence that they are in safehouse #1 is the same as Sleeping Beauty's credence that the coin landed heads. In both cases, 'H' or 'being-in-safehouse-#1' are states that admit of both narrow and wide centered-possible-world interpretations. If the agents' expressions of their credences are meant to target the narrow states, then they are trying to track frequencies of them as distributed over awakening episodes. If they are meant to target the wide states, then they are trying to track frequencies of them as distributed over experimental runs (or kidnapping events). Introducing the concept of escape possibilities was intended to illustrate that what is at stake in maximizing the accuracy of the expressed credences can dictate the choice of the narrow versus wide interpretations of the states that they are about.

In the safehouse and escape example: if the prisoner's goal is to maximize their chances of correctly predicting 'being-in-safehouse-#1' on any given awakening day, they should adopt the 'thirder' position (or a 5/11 position). If their goal is to maximize their chances of correctly predicting 'being-in-safehouse-#1' for any given kidnapping event (regardless of its duration), they should adopt the 'halfer' position (or a 5/6 position).
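To pin the two figures down (under the protocol as described: one day in safehouse #1 unless the die lands on six, in which case six days in safehouse #2), six kidnappings produce, on average, five awakening-days in safehouse #1 and six awakening-days in safehouse #2, so

[math]\frac{5}{5+6} = \frac{5}{11} \;\text{(per awakening-day)} \qquad \text{versus} \qquad \frac{5}{6} \;\text{(per kidnapping event)}[/math]

with 6/11 being the corresponding per-day figure for safehouse #2.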
Michael June 01, 2023 at 08:50 #812328
Quoting Pierre-Normand
Introducing the concept of escape possibilities was intended to illustrate that what is at stake in maximizing the accuracy of the expressed credences can dictate the choice of the narrow versus wide interpretation of the states that they are about.

In the safehouse and escape example: if the prisoner's goal is to maximize their chances of correctly predicting 'being-in-safehouse-#1' on any given awakening day, they should adopt the 'thirder' position (or a 5/11 position). If their goal is to maximize their chances of correctly predicting 'being-in-safehouse-#1' for any given kidnapping event (regardless of its duration), they should adopt the 'halfer' position (or a 5/6 position).


Introducing the concept of escape possibilities simply changes the answer. You're more likely to have an opportunity to escape in safehouse #2, and so if given an opportunity to escape then you are more likely in safehouse #2. P(Safehouse #2|escape opportunity) > P(Safehouse #1|escape opportunity)

This answer isn't relevant to a scenario where there are no opportunities to escape, where P(Safehouse #1|no escape opportunity) > P(Safehouse #2|no escape opportunity).

Quoting Pierre-Normand
If the agents expression of their credences are meant to target the narrow states, then they are trying to track frequencies of them as distributed over awakening episodes. If they are meant to target the wide states, then they are trying to track frequencies of them as distributed over experimental runs (or kidnaping events).


I don't understand the connection between Sleeping Beauty's credence that the coin landed heads and the tracked frequency of heads-awakenings. It's a non sequitur to claim that because tails-awakenings are twice as frequent over repeated experiments then a coin toss having landed tails is twice as likely in any given experiment.

Sleeping Beauty is being asked "in this current, one-off experiment, what is the probability that the coin I tossed on Sunday evening landed heads?".

She's not being asked to guess if it's heads or tails and then being rewarded for each successful guess.

Her choice of guess in the latter has nothing to do with what her answer would be to the former.

If I were Sleeping Beauty I would answer "1/2" and guess tails.
Pierre-Normand June 01, 2023 at 09:28 #812329
Quoting Michael
It doesn't make sense to me to say that the answer in this scenario is relevant to a scenario where there are no opportunities to escape.


The opportunity to escape just enables the prisoner to put their credence to good use, and to choose how to most appropriately define the states that those credences are about. It doesn't change their epistemic situation.

I discussed this a little further with GPT-4:
[hide]

PN:

Just to give you the idea (since your context window is narrower than Michael's): in this variation, a hostage is taken to one of two safehouses based on the throw of a die. If the die doesn't land on six, the hostage is taken to safehouse #1. If the die lands on six, the hostage is taken to safehouse #2 for six consecutive days. He is also administered an amnesia-inducing drug each night. The hostage is informed of this protocol. The question posed is: "On any given day, what is your credence in the proposition 'I am at safehouse #1'?" The credence here refers to the hostage's subjective probability, or belief, in being at safehouse #1. The answer provided in the discussion suggests that the hostage's credence should be 5/11 rather than 5/6, despite the fact that there is a 5/6 chance of the die not landing on six. This reasoning is based on the fact that the hostage gets to guess six times regarding the same die throw when the result is six.

GPT4:

Thank you for providing the context, it's indeed a crucial piece of information for this discussion. I now see why the 'thirder' position, or the 5/11 in this case, comes into play. It's a fascinating way to illustrate the Sleeping Beauty problem. And it makes perfect sense to apply the reasoning here in a similar way, given the similar structure of the problems.

When the hostage awakes, they have to consider not only the initial 5/6 chance of being in safehouse #1 but also that, should they be in safehouse #2, they could be guessing their location six different times on the basis of the same die roll. Hence, despite the initial die roll favoring safehouse #1, the multiplicity of questioning events when in safehouse #2 adjusts the credence to 5/11.

Your explanation above about how the hostage's goals can dictate whether they adopt a 'thirder' or 'halfer' perspective is clear and logical, in light of this information.

PN:

In a further refinement I suggested that safehouse #1 is surrounded by a crocodile-infested moat, while safehouse #2 is surrounded by a forest filled with lions. While escaping, the hostage can carry either a large wooden plank (to bridge the moat) or a torch (to ward off the lions), but not both.

I suggested that the prisoner should maximise their survival chances by carrying the torch, in line with their 6/11 credence of being in safehouse #2. But this is true only if their chances of finding such an escape opportunity are proportional to the number of days they are held captive, such that this finding doesn't yield an updating of their credence.

Likewise, if they wish to direct the police to rescue them (by communicating the address of one safehouse), and know that they can only be afforded one opportunity to do so per kidnapping event, they should direct them to safehouse #1. One twist here is that under those conditions, finding the unique opportunity to communicate with the police would lead the prisoner to update their credence to 5/6 that they are in safehouse #1. But if they can ensure that their message will be received by sending it every day, and that the police will not arrive before the end of the "kidnapping event" (though before they are moved overseas, say), then their credence will not be updated and it will be rational for them to direct the police to safehouse #1 despite their credence being that they are in safehouse #2 with 6/11 probability. Can you see why?

GPT4:

This scenario adds another layer of complexity to the problem, but your explanations are clear and make sense.

When it comes to choosing between the plank and the torch, if the hostage's chances of finding an escape opportunity are proportional to the number of days they are held captive, and finding such an opportunity doesn't lead to an updating of their credence about which safehouse they're in, it's rational for them to choose the torch, aligning with their 6/11 credence of being in safehouse #2. This is because they will have more opportunities to use it.

On the other hand, if they can only communicate with the police once per kidnapping event, it would be more rational to direct the police to safehouse #1, even though their credence is that they are in safehouse #2 with a 6/11 probability. If finding this unique opportunity to communicate leads the hostage to update their credence to 5/6 that they are in safehouse #1, then it makes more sense to direct the police to safehouse #1. This is because, after the update, the chances that they are in safehouse #1 are higher.

However, if the hostage is able to send a message every day, and they know that the police will not arrive until the end of the kidnapping event, it would still be rational to direct the police to safehouse #1, despite the 6/11 credence of being in safehouse #2. The reason is that, in this scenario, the credence will not be updated every day because the opportunity to send a message doesn't provide new information about their location. In this case, the best strategy would be to guide the police to the location they're most likely to be in over the course of the whole kidnapping event, which would be safehouse #1 due to the higher probability of being taken there initially, despite having fewer chances to send a message from there.

PN:

Yes, indeed. It seems counterintuitive that the hostage thinks they are more likely than not to presently be located in safehouse #2 but that they are being rational in directing the police to safehouse #1. But that's because even though they are thereby more likely to be sending a message that will direct the police to the wrong safehouse, those wrong messages will arrive in batches of 6 and will only misdirect the police once!

GPT4:

Yes, exactly! This counterintuitive scenario arises from the peculiar structure of the problem, which combines the probabilities of being in a particular safehouse on a given day with the probabilities of being there across the entirety of the kidnapping event. The messages, when sent in batches, will indeed only misdirect the police once, even though there are six of them. And even though on any given day the hostage thinks they're more likely in safehouse #2, across the entirety of the kidnapping event, they're more likely to be in safehouse #1. Therefore, it makes sense to direct the police to safehouse #1, which seems counterintuitive when considering daily probabilities but becomes logical when taking into account the entire event's probabilities.
[/hide]
Michael June 01, 2023 at 09:35 #812331
Quoting Pierre-Normand
The opportunity to escape just enables the prisoner to put their credence to good use, and to chose how to most appropriately define the states that those credences are about. It doesn't change their epistemic situation.


It does change the epistemic situation. It's exactly like the scenario with the coin tosses and the prizes, where in this case the prize is the opportunity to escape.

You're more likely to win a prize if it's tails, therefore upon being offered a prize you reason that tails is more likely.

You're more likely to win the opportunity to escape if in safehouse #2, therefore upon being offered the opportunity to escape you reason that safehouse #2 is more likely.
Michael June 01, 2023 at 09:38 #812332
To set out the scenario:

Dice roll 1-5 (safehouse #1): day 1, 50% opportunity to escape
Dice roll 6 (safehouse #2): day 1, 50% opportunity to escape; day 2, 50% opportunity to escape; day 3, 50% opportunity to escape; day 4, 50% opportunity to escape; day 5, 50% opportunity to escape; day 6, 50% opportunity to escape

It's quite straightforward that P(Dice roll 6|opportunity to escape) > P(Dice roll 1-5|opportunity to escape), but P(Dice roll 1-5|no opportunity to escape) > P(Dice roll 6|no opportunity to escape).

And so when there's a 0% opportunity to escape on each day, as with the traditional problem, P(Dice roll 1-5) > P(Dice roll 6).
Pierre-Normand June 01, 2023 at 09:43 #812334
Quoting Michael
It does change the epistemic situation. It's exactly like the scenario with the coin tosses and the prizes, where in this case the prize is the opportunity to escape.


Which is why I had included the proviso that the (rare) opportunities be proportional to the number of days the hostage is held captive. Under those conditions, they carry no information to the hostage.

Likewise, enabling Sleeping Beauty to bet on H on each awakening provides no information to her, provided only the payouts are delivered after the experiment is over.
Michael June 01, 2023 at 09:48 #812335
Quoting Pierre-Normand
Which is why I had included the proviso that the (rare) opportunities be proportional to the number of days the hostage is held captive. Under those conditions, they carry no information to the hostage.


Can you give actual numbers? Because that determines the answer. If there's a 90% opportunity to escape on day 1 in safehouse #1 but a 1% opportunity to escape on each day in safehouse #2 then P(Dice roll 6|opportunity to escape) < P(Dice roll 1-5|opportunity to escape).

Or if you set the %s up right then it can work out as P(Dice roll 6|opportunity to escape) = P(Dice roll 1-5|opportunity to escape), and so it's equally likely.
Pierre-Normand June 01, 2023 at 09:49 #812336
Quoting Michael
It's quite straightforward that P(Dice roll 6|opportunity to escape) > P(Dice roll 1-5|opportunity to escape)


Indeed, which is basically the 'thirder' solution (in this case, the 5/11er solution).
Michael June 01, 2023 at 09:50 #812337
Quoting Pierre-Normand
Indeed, which is basically the 'thirder' solution (in this case, the 5/11er solution).


Yes, but it's only P(Dice roll 6|opportunity to escape) > P(Dice roll 1-5|opportunity to escape) because of the opportunity to escape (and where the %s are set up a certain way). Without that opportunity to escape it's P(Dice roll 1-5|no opportunity to escape) > P(Dice roll 6|no opportunity to escape), which is comparable to the Sleeping Beauty problem.
Michael June 01, 2023 at 09:55 #812338
Quoting Pierre-Normand
Likewise, enabling Sleeping Beauty to bet on H on each awakening provides no information to her, provided only the payouts are delivered after the experiment is over.


You may have missed my edit above:

I don't understand the connection between Sleeping Beauty's credence that the coin landed heads and the tracked frequency of heads-awakenings. It's a non sequitur to claim that because tails-awakenings are twice as frequent over repeated experiments then a coin toss having landed tails is twice as likely in any given experiment.

Sleeping Beauty is being asked "in this current, one-off experiment, what is the probability that the coin I tossed on Sunday evening landed heads?".

She's not being asked to guess if it's heads or tails and then being rewarded for each successful guess.

Her choice of guess in the latter has nothing to do with what her answer would be to the former.

If I were Sleeping Beauty I would answer "1/2" and guess tails.
Pierre-Normand June 01, 2023 at 10:00 #812339
Quoting Michael
Can you give actual numbers?


Suppose there is a 0.01% chance to find an opportunity to escape on any given day held captive regardless of that day being the only one or one among six in a kidnapping event. Finding such opportunities doesn't yield any updating of credence. (Strictly speaking, it would yield a very small update, since the fact that you haven't yet escaped slightly reduces the chances that you are not experiencing your first day of captivity. But this is quite immaterial.) We can suppose that you are always recaptured but that your goal nevertheless is to maximize your number of stints of freedom (while warding off the crocodiles or lions). In Sleeping Beauty's case, opportunities to bet could be 100%, and provided the payoffs are paid after the experiment is over, that wouldn't impact her credence either.
Michael June 01, 2023 at 10:01 #812340
Quoting Pierre-Normand
Suppose there is a 0.01% chance to find an opportunity to escape on any given day held captive regardless of that day being the only one or one among six in a kidnapping event. Finding such opportunities doesn't yield any updating of credence.


It does.

Dice roll 1-5 (safehouse #1): day 1, 0.01% opportunity to escape
Dice roll 6 (safehouse #2): day 1, 0.01% opportunity to escape; day 2, 0.01% opportunity to escape; day 3, 0.01% opportunity to escape; day 4, 0.01% opportunity to escape; day 5, 0.01% opportunity to escape; day 6, 0.01% opportunity to escape

If I'm doing my maths right, the probability of being offered at least one opportunity to escape from safehouse #2 is 1 - 0.9999^6 ≈ 0.0006, so you're almost 6 times more likely to be offered an opportunity to escape from safehouse #2 than from safehouse #1 (0.0001). So upon being offered the opportunity to escape, you can condition on this and determine that P(Dice roll 6|opportunity to escape) > P(Dice roll 1-5|opportunity to escape).

Conversely, upon not being offered the opportunity to escape, you can condition on this and determine that P(Dice roll 6|no opportunity to escape) < P(Dice roll 1-5|no opportunity to escape).

And it would be strange to say that P(Dice roll 6|no opportunity to escape) < P(Dice roll 1-5|no opportunity to escape) where there is a 0.01% chance of an opportunity to escape each day but then to say P(Dice roll 6) > P(Dice roll 1-5) where there is a 0% chance of an opportunity to escape each day.
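
A minimal Bayes sketch of that conditioning (in Python, taking the 0.01% figure literally as 0.0001 per day, and reading "opportunity to escape" as "at least one opportunity during the stay"):

[code]
# Minimal sketch: Bayes' theorem applied to the escape-opportunity scenario,
# assuming a 0.0001 (0.01%) chance of an opportunity on each day of captivity.
p_day = 0.0001                      # per-day chance of an escape opportunity
p_house1 = 5 / 6                    # die lands 1-5: safehouse #1, held 1 day
p_house2 = 1 / 6                    # die lands 6: safehouse #2, held 6 days

p_opp_1 = p_day                     # one day, one chance
p_opp_2 = 1 - (1 - p_day) ** 6      # at least one chance over six days

p_opp = p_house1 * p_opp_1 + p_house2 * p_opp_2

print(p_house2 * p_opp_2 / p_opp)               # P(roll 6 | opportunity)      ~ 0.55
print(p_house1 * p_opp_1 / p_opp)               # P(roll 1-5 | opportunity)    ~ 0.45
print(p_house2 * (1 - p_opp_2) / (1 - p_opp))   # P(roll 6 | no opportunity)   ~ 0.167
print(p_house1 * (1 - p_opp_1) / (1 - p_opp))   # P(roll 1-5 | no opportunity) ~ 0.833
[/code]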
Pierre-Normand June 01, 2023 at 10:34 #812346
Quoting Michael
It does.


I don't understand your argument at all. Finding an escape opportunity with probability P is a constant feature of the prisoner's environment. Sleeping Beauty's waking days (or the prisoner's captivity days) aren't distinguished from one another in point of betting or escape opportunities. Suppose there is always a tulip on Sleeping Beauty's nightstand. Would her seeing the tulip enable her to change her credence in H? What if there is a 1% chance that the tulip is red on any given awakening day? Would that make any difference?
Michael June 01, 2023 at 10:38 #812348
Quoting Pierre-Normand
What if there is a 1% chance that the tulip is red on any given awakening day? Would that make any difference?


Yes. The probability of it being red on a waking day if heads is 1%. The probability of it being red on a waking day if tails is 1 - 0.99^2 = 1.99%.

The probability of it being red on a waking day if tails is greater than the probability of it being red on a waking day if heads, therefore if it's red on a waking day then it is more likely tails and if it's not red on a waking day then it is more likely heads.

This might be clearer if we say that the probability of it being red on a waking day if heads is 1.5%. Sleeping Beauty should still reason that it is more likely tails. But if we increase it further to 2% then she should reason that it is more likely heads.

Introducing these additional probabilistic events upon which to condition her credence changes the answer and so doesn't accurately represent the problem without them.
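
A minimal numerical check of those figures (a Python sketch, reading "red on a waking day" as "red on at least one waking day of the run" and keeping a 1/2 prior on the coin):

[code]
# Minimal sketch of the red-tulip conditioning: posterior probability of heads
# given that the tulip was red on at least one waking day, with a 1/2 prior.
def posterior_heads(p_red_if_heads, p_red_per_day_if_tails=0.01):
    p_red_given_heads = p_red_if_heads                          # one waking day
    p_red_given_tails = 1 - (1 - p_red_per_day_if_tails) ** 2   # two waking days
    joint_heads = 0.5 * p_red_given_heads
    joint_tails = 0.5 * p_red_given_tails
    return joint_heads / (joint_heads + joint_tails)

print(posterior_heads(0.01))    # ~0.334 -> red favours tails
print(posterior_heads(0.015))   # ~0.430 -> still favours tails
print(posterior_heads(0.02))    # ~0.501 -> now (just barely) favours heads
[/code]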
Pierre-Normand June 01, 2023 at 11:24 #812353
Quoting Michael
Yes. The probability of it being red if heads is 1%. The probability of it being red if tails and Monday is 1%. The probability of it being red if tails and Tuesday is 1%. The probability of it being red if tails is 1 - 0.99^2 = 0.199.

The probability of it being red if tails is greater than the probability of it being red if heads, therefore if it's red then it is more likely tails.


I believe you meant 0.0199.

That represents the probability of the tulip being red at least once across the two days of awakening in the event that the coin lands tails, correct? While it's true that Sleeping Beauty, upon observing the red tulip, can deduce the truth of this disjunctive proposition, the two events are probabilistically independent. Thus, although it's more probable that at least one of these events will occur if there are two opportunities for it to occur within a single experiment run rather than just one, the observation of one such event doesn't entail that it's more probable that you're in an experiment run wherein two such opportunities occur.

Consider this analogy: you're entering a movie theater where there's an even chance of a double feature being shown (according to a coin flip). There's a one percent chance that any given film will feature Rex Harrison. Suppose you see Harrison featured in the first film. Does that increase your credence that there will be a subsequent feature?

If you think about it, Lewis's notion—that Sleeping Beauty can conclude from knowing it's Monday that a future coin toss is more likely to yield heads with a 2/3 probability—is already rather puzzling. It would be even more extraordinary to believe that any random observation made by Sleeping Beauty could allow her to update her credence about an unrelated coin flip.
Michael June 01, 2023 at 11:33 #812355
Quoting Pierre-Normand
Consider this analogy: you're entering a movie theater where there's an even chance of a double feature being shown. There's a one percent chance that any given film will feature Rex Harrison. Suppose you see Harrison featured in the first film. Does that increase your credence that there will be a subsequent feature?


No, but if I walk in not knowing if it’s the first or second film then my credence favours it being part of a double feature.

Quoting Pierre-Normand
If you think about it, Lewis's notion—that Sleeping Beauty can conclude from knowing it's Monday that a future coin toss is more likely to yield heads with a 2/3 probability—is already rather puzzling.


I agree, which is why I offered the example of taking balls from a bag, which is a double-halfer interpretation of the problem.
Michael June 01, 2023 at 11:41 #812358
Sorry, I misunderstood the movie example. It's a different answer if I only get to walk into one film, which would be comparable to Sleeping Beauty only waking on Monday (or Tuesday) if tails.

In such a case I can’t say it’s more likely a double feature, and thirders would say Sleeping Beauty can’t say it’s more likely tails.
Pierre-Normand June 01, 2023 at 11:42 #812359
Quoting Michael
No, but if I walk in not knowing if it’s the first or second then my credence favours it being part of a double feature.


Yes, me too. But that's just because two out of three showings are parts of double features. Seeing Rex Harrison being featured doesn't change my credence. It remains 2/3.
Pierre-Normand June 01, 2023 at 11:44 #812360
Quoting Michael
Sorry, I misunderstood the movie example. It's a different answer if I only get to walk into one film, which would be comparable to Sleeping Beauty only waking on Monday (or Tuesday) if tails.


The main point is that seeing Rex Harrison being featured (while knowing that 1% of the movies randomly being shown in this theater feature him) doesn't impact your credence in this movie being part of a double feature.
Michael June 01, 2023 at 11:51 #812361
Quoting Pierre-Normand
The main point is that seeing Rex Harrison being featured (while knowing that 1% of the movies randomly being shown in this theater feature him) doesn't impact your credence in this movie being part of a double feature.


That's only because I walk into one film. If I'm given amnesia and walk into the second film (if there is a second film) then it affects my credence.

It's exactly like my scenario with the coin toss and prizes. If heads then the car is the possible prize, otherwise the motorbike is the possible prize. If a car then a single coin toss determines if I get it (if heads), if a motorbike then two coin tosses determine if I get it (one head is enough to win).

If I'm told that I've won a prize then my credence favours that it is a motorbike. If I'm told that I didn't win a prize then my credence favours that the first coin toss was heads.

Knowing that I've won (or not won) a prize is the same as (not) seeing a red tulip or Harrison. It conditions my credence of that first coin flip.
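
A minimal worked check of this (a Python sketch of the prize version, using the toss rules set out earlier):

[code]
# Minimal sketch of the prize analogy: P(first toss tails | won a prize) and
# P(first toss heads | won no prize), with prizes awarded as described above.
p_win_given_heads = 0.5             # heads: one further toss must land heads
p_win_given_tails = 1 - 0.5 ** 2    # tails: at least one head in two further tosses

p_heads_and_win = 0.5 * p_win_given_heads
p_tails_and_win = 0.5 * p_win_given_tails
p_heads_and_lose = 0.5 * (1 - p_win_given_heads)
p_tails_and_lose = 0.5 * (1 - p_win_given_tails)

print(p_tails_and_win / (p_heads_and_win + p_tails_and_win))     # P(tails | prize)    = 0.6
print(p_heads_and_lose / (p_heads_and_lose + p_tails_and_lose))  # P(heads | no prize) ~ 0.67
[/code]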
Pierre-Normand June 01, 2023 at 12:01 #812362
Quoting Michael
That's only because I walk into one film. If I'm given amnesia and walk into the second film (if there is a second film) then it affects my credence.

It's exactly like my scenario with the coin toss and prizes. If heads then the car is the possible prize, otherwise the motorbike is the possible prize. If a car then a single coin toss determines if I get it (if heads), if a motorbike then two coin tosses determine if I get it (one head is enough to win).


In your scenario, the nature of the prize is conditioned on the coin toss results. In the film example, the number of features (and the coin toss result that determined this number) is probabilistically independent of the probability that Rex Harrison will be featured. And likewise with Sleeping Beauty and the betting opportunities, or the prisoner and their escape opportunities.
Michael June 01, 2023 at 12:20 #812365
Quoting Pierre-Normand
In your scenario, the nature of the prize is conditioned on the coin toss results.


Then forget the nature of the prize.

If I know that I’ve won a prize my credence favours the first coin toss having landed tails, otherwise it favours heads. If I see that the tulip is red my credence favours the coin toss having landed tails, otherwise it favours heads.
Pierre-Normand June 01, 2023 at 12:41 #812371
Quoting Michael
Then forget the nature of prize. If I know that I’ve won a prize my credence is that the first coin toss landed tails.


In your prize scenario, knowing that you've won a prize indeed gives you new information, and this can rightly affect your credences. However, in the scenarios with Sleeping Beauty and the prisoner, merely being presented with an opportunity to bet or escape does not give them any new information about the outcome of the coin toss (or throw of the die). They must decide how to take advantage of this opportunity (by choosing to carry the torch or the plank, or choosing what safehouse address to communicate to the police) before gaining any knowledge about the success of the attempt. The offering of the opportunities carries no information and provides no ground for updating credences.
Michael June 01, 2023 at 12:54 #812373
Quoting Pierre-Normand
However, in the scenarios with Sleeping Beauty and the prisoner, merely being presented with an opportunity to bet or escape does not give them any new information about the outcome of the coin toss (or throw of the die). They must decide how to take advantage of this opportunity (by choosing to carry the torch or the plank, or choosing what safehouse address to communicate to the police) before gaining any knowledge about the success of the attempt. The offering of the opportunities carries no information and provides no ground for updating credences.


She can and should use known priors to condition her credence, and one such prior is that she is more likely to win a prize/have the opportunity to escape if tails/a dice roll of 6. As such, if she wins a prize or has the opportunity to escape she should condition on this and her credence should favour tails/a dice roll of 6; otherwise she should condition on not winning a prize or not having the opportunity to escape and her credence should favour heads/a dice roll of 1-5.

And if she's guaranteed the opportunity to escape each day, or she never has the opportunity to escape each day, then her credence should favour a dice roll of 1-5.

But honestly, all this talk of successes is irrelevant anyway. As I said before, these are two different things:

1. Sleeping Beauty's credence that the coin tossed on Sunday for the current, one-off, experiment landed heads
2. Sleeping Beauty's most profitable strategy for guessing if being asked to guess on heads or tails over multiple games

It's simply a non sequitur to argue that if "a guess of 'tails' wins 2/3 times" is the answer to the second then "1/3" is the answer to the first.
Pierre-Normand June 01, 2023 at 13:45 #812385
Quoting Michael
She can and should use known priors to condition her credence, and one such prior is that she is more likely to win a prize/have the opportunity to escape if tails/a dice roll of 6. As such, if she wins a prize or has the opportunity to escape she should condition on this and her credence should favour tails/a dice roll of 6; otherwise she should condition on not winning a prize or not having the opportunity to escape and her credence should favour heads/a dice roll of 1-5.


She can't condition her credence on H by winning a prize if she only learns about her victory after the experiment ends. The same applies to her opportunity to bet — this fact is probabilistically independent of H and thus doesn't allow for an update in credence. Would you still argue that your belief that the movie you're watching is part of a double feature increases the moment you see Rex Harrison appearing on screen? Or that it increases even further when you notice he isn't wearing a top hat, or when he is?

And if she's guaranteed the opportunity to escape each day her credence should favour a dice roll of 1-5.


So, she should be carrying a plank and end up being eaten by lions on 6 out of 11 escape attempts?

But honestly, all this talk of successes is irrelevant anyway. As I said before, these are two different things:

1. Sleeping Beauty's credence that the coin tossed for the current, one-off, experiment landed heads
2. Sleeping Beauty's most profitable strategy for guessing if being asked to guess on heads or tails over multiple games

It's simply a non sequitur to argue that if "tails" is the answer to the second then "1/3" is the answer to the first.


The manner in which (1) is stated suggests that Sleeping Beauty is referring to the wide centered possible world spanning the whole experiment run. In that case, her credence in H should be 1/2.

The second one makes it rational for her to rely on her credence regarding narrow centered possible worlds spanning single awakening episodes. There indeed isn't any entailment from the suitability of one framing of the question in (1) to its suitability in (2), or vice versa. The two sentences concern themselves with different questions.

By the way, I've just finished reading Groisman's The End of Sleeping Beauty’s Nightmare. Although I found it somewhat glib, it's undoubtedly the most readable paper on this topic that I've encountered so far. Groisman makes some very compelling points that are directly relevant to the discussion we are presently having. However, I prefer Laureano Luna's approach, which elaborates further on Groisman's work and brings more nuance.
Michael June 01, 2023 at 13:53 #812387
Quoting Pierre-Normand
So, she should be carrying a plank and end up being eaten by lions on 6 out of 11 escape attempts?


5 out of every 6 victims escape. I count by participants, not by escape attempts. I think it's more reasonable.

Quoting Pierre-Normand
The manner in which (1) is stated suggests that Sleeping Beauty is referring to the wide centered possible world spanning the whole experiment run. In that case, her credence in H should be 1/2.

The second one makes it rational for her to rely on her credence regarding narrow centered possible worlds spanning single awakening episodes. There indeed isn't any entailment from the suitability of one framing of the question in (1) to its suitability in (2), or vice versa. The two sentences concern themselves with different questions.


And that first question is the premise of the problem. Sleeping Beauty is asked her credence that the current experiment's coin toss landed heads. That's it. She's not being asked to consider the most profitable betting strategy for multiple games, which even Lewis (I assume) would agree is to bet on tails.
Pierre-Normand June 01, 2023 at 14:00 #812388
Quoting Michael
5 out of every 6 victims escape. I count by participants, not by escape attempts. I think it's more reasonable.


Since on my approach probabilities track frequencies, even if there is just one kidnapping event, the hostage's chances of survival are 5 in 11 whenever an escape attempt occurs.
Michael June 01, 2023 at 14:02 #812390
Quoting Pierre-Normand
Since on my approach probabilities track frequencies, even if there is just one kidnapping event, the hostage's chances of survival are 5 in 11 whenever an escape attempt occurs.


So then there are two different ways to reason with nothing to prove that one or the other is the "right" way?
Pierre-Normand June 01, 2023 at 14:06 #812391
Quoting Michael
And that first question is the premise of the problem. Sleeping Beauty is asked her credence that the coin landed heads. That's it. She's not being asked to consider the most profitable betting strategy for multiple games.


The state "the coin landed heads" is ambiguous with respect to the span of the centered possible world that it might be referring to. I think you would benefit from reading Groisman.
Pierre-Normand June 01, 2023 at 14:13 #812393
Quoting Michael
So then there are two different ways to reason with nothing to prove that one or the other is the "right" way?


That's correct, since events that happen in the world don't come flagged with signposts that say "the current event begins here" and "the current event terminates here." How credences in the probabilities of events are assessed depends on the way those events are individuated, and this can be dictated by pragmatic considerations.
Michael June 01, 2023 at 14:23 #812394
Quoting Pierre-Normand
I think you would benefit from reading Groisman.


I've just read it. Seems to be saying what I said here:

There are two ways to reason:

1. [math]2\over3[/math] of all interviews are 100 heads in a row interviews, therefore this is most likely a 100 heads in a row interview
2. [math]1\over2^{100}[/math] of all participants are 100 heads in a row participants, therefore I am most likely not a 100 heads in a row participant

I would say that both are true...


Quoting Pierre-Normand
That's correct, since events that happen in the world don't come flagged with signposts that say "the current event begins here" and "the current event terminates here." How credences in the probabilities of events are assessed depends on the way those events are individuated, and this can be dictated by pragmatic considerations.


I don't think this is right. The experiment is only being conducted once and Sleeping Beauty is asked "what is your credence that the coin I tossed on Sunday landed heads?" I think there's only one appropriate way to reason, which above you seemed to accept gives an answer of [math]1\over2[/math].

But if you want to insist that it's ambiguous, I'll just fall back to that previous application of Bayes' theorem (despite ChatGPT's objection):

P(Heads | Mon or Tue) = P(Mon or Tue | Heads) * P(Heads) / P(Mon or Tue)
P(Heads | Mon or Tue) = 1 * 1/2 / 1
P(Heads | Mon or Tue) = 1/2

As Elga says, she learns nothing new (and Lewis would agree); she only recognises that her temporal location (of which all she knows is it's either Monday or Tuesday) is relevant. But contrary to Elga's reasoning (which I believe I showed to be faulty) this doesn't change her credence.

It might just be as simple as this.

The issue of optimum betting strategies over multiple games is irrelevant to the problem. There's nothing to debate on that; it's obviously better to bet on tails. I'm sure Lewis and every other halfer would agree.

But as this debate has gone on long enough and I don't think I have the energy to continue it much more, I'm happy to just say that both [math]1\over2[/math] and [math]1\over3[/math] are correct answers to distinct but equally valid interpretations of the question.
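
A minimal simulation sketch of what the two readings track (tallying the same repeated experiment once per run and once per awakening):

[code]
# Minimal sketch: the same repeated experiment tallied two ways.
import random

runs = 100_000
heads_runs = 0          # runs in which the Sunday coin landed heads
total_awakenings = 0    # awakenings across all runs
heads_awakenings = 0    # awakenings belonging to a heads run

for _ in range(runs):
    heads = random.random() < 0.5   # the Sunday coin toss
    wakings = 1 if heads else 2     # one interview if heads, two if tails
    heads_runs += 1 if heads else 0
    total_awakenings += wakings
    heads_awakenings += wakings if heads else 0

print(heads_runs / runs)                    # ~1/2: heads per experiment run
print(heads_awakenings / total_awakenings)  # ~1/3: heads per awakening
[/code]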

So thanks for the enlightening discussion.
Srap Tasmaner June 02, 2023 at 04:19 #812547
Quoting Michael
But as this debate has gone on long enough and I don't think I have the energy to continue it much more, I'm happy to just say that both 1/2 and 1/3 are correct answers to distinct but equally valid interpretations of the question.


I'm still mulling it over despite myself, but I think there's something to this.

Rather than getting back into the nitty-gritty, I'm thinking about the stuff I posted a while back, the possible self slices and all that.

It could be the two natural positions represent slightly different attitudes toward the self.

Halfers define participants as ending up having one interview, or ending up with two. It's a cumulative view of the self: I am my life story, and the story draws a connected arc passing through circumstances; what matters most is that storyline, and situations are only things I (robust, impervious) encounter along the way.

Thirders define participants by what they're experiencing at the moment, and seem less confident that what matters most is the unique historical self for whom this is merely an incident in their story; thirders seem to feel that the situation in part defines them, they are just the someone this is happening to -- anything could happen to anyone or not happen to them, and that's who you are at that moment.

It's hard to state each side clearly, but you could also say, roughly, that for a halfer the question is only: what happened? how did the coin land? For thirders, the question is: who am I? what's happening to me right now that defines who I am? (This is pretty clear in @JeffJo's thirder analysis, posted here on the forum and also here on a site dedicated to the sleeping beauty problem, in which the whole point is that participants don't know which sort of participant they are, and recognize that roles could be shuffled amongst them.)

The who am I? question doesn't really seem to be on the halfer radar at all, thus my suggesting that halfers view the self as more robust in the face of circumstances.

Maybe one more analogy: thirders might be more likely to buy the results Zimbardo claimed for the Stanford Prison Experiment. --- That's more than a bridge too far for me, but might be the clearest way to put the distinction.
Michael June 02, 2023 at 08:54 #812561
Reply to Srap Tasmaner I wonder if the variation I considered here highlights the difference in perspective, with the question forcing you to consider it from the halfer perspective:

There are two Sleeping Beauties; one will be woken on Monday and one on both Monday and Tuesday, determined by a coin toss.

What is their credence that they have been or will be woken twice?


We can even consider it with just one Sleeping Beauty, as with the traditional problem.

Does it make sense for her credence to be that she's twice as likely to be woken twice, simply because being woken twice involves two wakings whereas being woken once involves only one? I don't think it does. The very question requires that you think of both wakings as being part of the same outcome, with your credence in that outcome being what is questioned.

But given that you're woken twice iff the coin landed tails, it stands to reason that the answer to "what is your credence that you have been or will be woken twice" is the same as the answer to "what is your credence that the coin landed tails".
Srap Tasmaner June 02, 2023 at 12:28 #812620
Reply to Michael

Yeah there's some similarity to Bostrom's thing. In Stanford terms, you could say that thirders are identifying the self with the role they are playing at the moment, and it's a matter of chance that they are playing one role at a given moment rather than another.

Self and repetition are linked in Nietzsche's puzzling doctrine that truly to will something is to will its eternal recurrence. The Good Place is also oddly sleeping-beauty adjacent: [hide="spoiler"]in an attempt to prove that people have a self that's just good or bad, Michael ends up having to wipe participants' memories and reboot them hundreds of times. The results are really mixed: Michael comes to believe that Eleanor would have been a better person in life if her circumstances had been different, but this only happens because Chidi's self is relatively robust and he always ends up helping her.[/hide]

The Monty Hall "problem" has an answer: it's just an illustration of a tempting but fallacious way of reasoning about probability. Two Envelopes rises to the level of paradox because a clearly and provably wrong answer can be arrived at by reasoning, the flaw in which is so difficult to determine that there is no professional consensus on what the flaw is.

I'm not sure SB is the same sort of thing: which is the natural, obvious, tempting mistake and which the correction? If you haven't been around these sorts of puzzles, maybe "a fair coin is 50-50, period" is the obvious answer, and SB is one of the only scenarios bizarre enough to undermine that confidence; if you've been around these sorts of puzzles, carving the space into quarters and conditioning to get one third might be the obvious thing to do, but that's to be tricked into misreading the source of randomness here. Because the analysis of the 1200-sophomore study I posted is so straightforward, but getting standard SB to align with it is so difficult, I have even wondered if SB doesn't undermine the whole idea of "subjective probability". A more optimistic take would be that SB is unsolvable, and shows you what background assumptions are necessary for probabilistic reasoning to work -- memory, continuity of self, objective verification, something in here.
Pierre-Normand June 02, 2023 at 12:41 #812623
Quoting Srap Tasmaner
Rather than getting back into the nitty-gritty, I'm thinking about the stuff I posted a while back, the possible self slices and all that.


I think this observation is very astute. As I am developing a pragmatist conception of credence, I am trying to move away from the idea that one's credence in the state H is entirely determined by the specification of the ways in which one can come to be in that state. The Sleeping Beauty problem completely specifies how (and in which circumstances) Sleeping Beauty can come to be in state H, however one wants to define what counts as "being in state H". But it fails to specify how one might want to individuate such states (or 'self slices') so as to determine the relevant frequency or probability ratios that we ordinarily express as credences.

Whenever Sleeping Beauty is awakening during an H experimental run, she is awakening during an H day, and vice versa. But there isn't a one-to-one relationship between the two sets of 'events' (i.e. between the two sets of centered possible worlds) once they have been individuated.

The relationship is one-to-one between types but not between tokens, we might say.

In the Sleeping Beauty problem the equivalence relation is:

H-run <--> H&Monday (H-day)
T-run <--> T&Monday or T&Tuesday (T-day)

The ratio considered by the thirders is the ratio of H-days per day-long centered possible worlds (or individual awakenings). It is indeed 1/3. The ratio considered by the halfers is the ratio of H-runs per run-long centered possible worlds. It is indeed 1/2. The credences expressed by thirders and by halfers therefore have different topics.
sime June 02, 2023 at 14:00 #812636

I would say that neither the halfers nor the thirders position concerns credence. Plus the concept of "credence" is a minefield of unresolved philosophical difficulty, which is usually overlooked by advocates of probabilism.

The halver's position is based on the common-sense postulate that the physical properties and propensities of objects remain unchanged when their states are conditioned upon irrelevant information. Given acceptance of that postulate plus the premise of a fair coin, why shouldn't the halver insist that they are talking about reality, as opposed to their mere mental state?

The thirder's position is indeed a ratio of possible worlds, but there is scant evidence to support the idea that credences are accurately represented by taking ratios over possible worlds.

For example, if one were to ask the average person to express their credences regarding the outcome of a two horse race that they know absolutely nothing about, they will simply say "I don't know who will win" and refrain from assigning any odds, equal or otherwise. They will also tend to accept bets in which they have knowledge that the physical probabilities are 50/50 over bets that they are totally ignorant about.

Fundamentally, it is difficult to even know what the object of a credence or belief is, which relates to an overlooked problem of epistemology: if every belief is fully explained by its natural causes, then how are false beliefs possible and where does the error reside? In the mind of the speaker who expresses their beliefs, or in the mind of the listener who interprets the speaker?

Whatever SB says when interrogated upon awakening, if she is assumed to have amnesia then how are her interrogators supposed to interpret her "credence estimate"? Why should they consider her uttered "credence estimate" as being qualitatively different from yawning?
Srap Tasmaner June 02, 2023 at 15:05 #812645
Quoting sime
For example, if one were to ask the average person to express their credences regarding the outcome of a two horse race that they know absolutely nothing about, they will simply say "I don't know who will win" and refrain from assigning any odds, equal or otherwise. They will also tend to accept bets in which they have knowledge that the physical probabilities are 50/50 over bets that they are totally ignorant about.


Two Envelopes seems to encourage abuse of the principle of indifference in exactly this way. Maybe it's just something like this: rationality requires treating "It's one or the other but I've literally no idea which" as an uninformative 50:50 prior only when there's the real possibility of acquiring new information upon which to update that prior. I'd rather just say, no, don't do that, "I don't know" doesn't mean "It's 50:50", but there are a great many usages in which the prior is quickly swamped by actual information, and the PoI is a harmless formality. --- In Two Envelopes, you know there will never be any new data, so that harmless prior metastasizes.

Y'all know the math much better than I, so maybe I'm speaking out of turn.

Quoting Pierre-Normand
trying to move away from the idea that one's credence in the state H is entirely determined by the specification of the ways in which one can come to be in that state


Even the word "state" feels too coarse for Sleeping Beauty, since it could denote the situation a robust well-defined subject finds themselves in, or it could denote the very identity of that subject. --- At least, that's how the two main camps look to me. One wonders, where am I? how did I get here? One wonders, what am I? what has made me into this?

As you say, it's all about individuation. Lacking a fixed point of individuation, you can push the lever however you like but you won't actually move anything.

Quoting sime
The thirder's position is indeed a ratio of possible worlds, but there is scant evidence to support the idea that credences are accurately represented by taking ratios over possible worlds.


Elsewhere Lewis is pretty careful about what he calls de se modality -- epistemic questions are not just about possible worlds but irreducibly about your epistemic counterpart's status in a given possible world.

That's in the neighborhood of what I've been musing about anyway.
Srap Tasmaner June 02, 2023 at 15:47 #812649
Here's one more ridiculous comparison to clarify the difference between Where am I? and What am I? (if it's even relevant to Sleeping Beauty): the Chomsky-Foucault debate. Chomsky wants to know how we got here and what we can do about it, and all he needs is reason, plenty of careful analysis. Foucault, steeped in the hermeneutics of suspicion, doesn't believe there is a neutral faculty of reason which could deploy a battery of neutral concepts like "justice" as if they were not compromised, even tainted, by history and capital. Chomsky's concern is that we don't understand where we are and how we got here, Foucault's that we don't understand what we are and what we're doing.
fdrake June 02, 2023 at 15:49 #812650
Quoting Srap Tasmaner
you know there will never be any new data, so that harmless prior metastasizes.


:up:

Priors that supposedly represent no prejudice turning out to have a major influence on results has been a recurring issue in recent years.

Quoting Srap Tasmaner
Even the word "state" feels too coarse for Sleeping Beauty, since it could denote the situation a robust well-defined subject finds themselves in, or it could denote the very identity of that subject. --- At least, that's how the two main camps look to me. One wonders, where am I? how did I get here? One wonders, what am I? what has made me into this?


Also mega agree on this, the principle of indifference has a bunch of knowledge built into it. It assumes you know all the possible configurations something could be in, in the first place. And not just that, you know how all the variables relate within your stipulated model.
Pierre-Normand June 03, 2023 at 09:02 #812759
Quoting Srap Tasmaner
Two Envelopes seems to encourage abuse of the principle of indifference in exactly this way. Maybe it's just something like this: rationality requires treating "It's one or the other but I've literally no idea which" as an uninformative 50:50 prior only when there's the real possibility of acquiring new information upon which to update that prior. I'd rather just say, no, don't do that, "I don't know" doesn't mean "It's 50:50", [...]


I think there are good grounds for Sleeping Beauty upon awakening to ascribe strictly equal probabilities P = 1/3 to the three cases: (1) Today is Monday and the coin landed tails, (2) Today is Monday and the coin landed heads, and (3) Today is Tuesday and the coin landed tails, rather than simply claiming ignorance. This doesn't involve any abuse of the principle of indifference so far as I can see. Rather, Sleeping Beauty can deduce those three outcomes to have equal probabilities from the statement of the problem, or experimental setup. Suppose upon awakening she would always bet on the same one of the three outcomes, chosen in advance. In that case, upon repeating the whole experiment run many times, she'd be right on roughly 1/3 of the awakenings on which she makes this bet. This would be the case regardless of which one of the three outcomes she chooses to always bet on. Halfers have an obvious rejoinder to this line of thinking, but I just wanted to make this point for now.
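
A minimal simulation sketch of that betting claim (tallying, over many simulated runs, what fraction of all awakenings each of the three cases accounts for):

[code]
# Minimal sketch: relative frequency of the three awakening-cases.
import random

counts = {"Mon&Heads": 0, "Mon&Tails": 0, "Tue&Tails": 0}
total_awakenings = 0

for _ in range(100_000):
    heads = random.random() < 0.5               # the coin toss for this run
    days = ["Mon"] if heads else ["Mon", "Tue"]
    for day in days:
        total_awakenings += 1
        counts[f"{day}&{'Heads' if heads else 'Tails'}"] += 1

for case, n in counts.items():
    print(case, round(n / total_awakenings, 3))   # each ~0.333
[/code]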
Pierre-Normand June 03, 2023 at 09:58 #812762
Quoting sime
The halver's position is based on the common-sense postulate that the physical properties and propensities of objects remain unchanged when their states are conditioned upon irrelevant information. Given acceptance of that postulate plus the premise of a fair coin, why shouldn't the halver insist that they are talking about reality, as opposed to their mere mental state?

The thirder's position is indeed a ratio of possible worlds, but there is scant evidence to support the idea that credences are accurately represented by taking ratios over possible worlds.


Taking ratios over possible worlds is a vivid way to illustrate the issue at hand when it comes to Bayesian updating of credences in light of new information. Regarding credences, in well-defined problems like Sleeping Beauty, as Pradeep Mutalik suggested in this column (in the comments section), "if you have 'a degree of certainty' of 1/n then you should be willing to accept a bet that offers you n or more dollars for every dollar you bet."

I would challenge the idea that Sleeping Beauty's judgement about the coin flip is merely a judgement about the propensities of that coin. Rather, it's a judgement about her own epistemic relation to the result of the flip. Or you could argue it's a judgement about the propensity of the whole experimental setup to put her in relationship with specific coin flip results. This echoes Werner Heisenberg's comment: "What we observe is not nature in itself but nature exposed to our method of questioning." I believe this to be true generally, not just in the context of quantum theory.

Upon awakening, Sleeping Beauty does gain information about the coin flip result. She knows that (from her own perspective) "if today is Monday, then heads and tails are equally likely." She also learns that "if today is Tuesday, then the coin landed tails." Since she can't rule out "today" being Tuesday, her credence (and willingness to bet) on a tails outcome should be higher than 1/2. "Today" being an indexical, the knowledge expressed pertains to her own situated epistemic perspective (as well as her 'pragmatic' perspective, or affordances to make bets or predictions). But this perspective is "subjective" only in the sense that it is suitable for capturing probabilities, or frequencies, relevant to her personal involvement in the experiment.
sime June 03, 2023 at 11:10 #812765
Reply to Pierre-Normand

But indifference with respect to the joint outcomes (Day, Flip) implies not only

(1) a "posterior" probability for heads that differs from it's prior probability in spite of not learning anything upon awakening and before interacting with the world upon waking up.

but also

(2) different credence assignments for marginals one is equally ignorant of.

So even if (1) is acceptable for Principle of Indifference advocates, how can they justify (2) that flagrantly disregards their beloved principle?

From the premises

P(Flip = heads) = P(Flip = tails) = 1/2
P (Day = mon | Flip = heads) = 1
P (Flip = tails | Day = tue) = 1

If a PoI advocate assigns the credence

P(Day = mon, Flip = tails ) = P(Day = mon , Flip = heads) = P(Day = tue, Flip = tails) = 1/3

Then his unconditioned credence that it is Monday is

P(Day = mon) = 2/3

which is clearly not the expression of indifference.

Then for sake of consistency he must ignore the actual information he is given about the coin, by using

P(Flip = tails) = 2/3

Not only does this credence assignment ignore the physical probability of heads that is known to him in advance, but it also indicates different credence assignments for outcomes of Flip when assuming ignorance of Flip! - in flagrant contradiction to PoI.

In general, when a PoI advocate uses PoI for specifying an unknown distribution, they don't possess the epistemic resources to determine the credences that they are implying regarding the values of related variables. And when their implied credences are pointed out to them, they are likely to withdraw their initial credence estimates.

IMO, there isn't a good reason for using the principle of indifference, not even in the case of describing credences. For handling ignorance, it is always better to assign probability intervals than to assign precise probabilities.



Pierre-Normand June 03, 2023 at 12:05 #812768
Quoting sime
If a PoI advocate assigns the credence

P(Day = mon, Flip = tails ) = P(Day = mon , Flip = heads) = P(Day = tue, Flip = tails) = 1/3

Then his unconditioned credence that it is Monday is

P(Day = mon) = 2/3

which is clearly not the expression of indifference.


This assignment is an expression of pairwise indifference between the three possible awakening circumstances. But rather than relying on the Principle of Indifference, I proposed to assign credences on the basis of the odds Sleeping Beauty should rationally be willing to accept when betting on those outcomes. The problem's original specification is already precise enough to ensure that if Sleeping Beauty accepts 2 to 1 odds on bets on any of these three outcomes upon awakening, she would break even in the long run.

Then for sake of consistency he must ignore the actual information he is given about the coin, by using

P(Flip = tails) = 2/3


There's no inconsistency here. It's precisely because the premise holds that the coin is equally likely to land heads or tails when tossed, and the fact that tails would result in twice as many awakenings, that Sleeping Beauty deduces that she could accept 2 to 1 odds on each awakening on the outcome 'Flip = heads' and break even over the long run.
sime June 03, 2023 at 12:48 #812774
Quoting Pierre-Normand
There's no inconsistency here. It's precisely because the premise holds that the coin is equally likely to land heads or tails when tossed, and the fact that tails would result in twice as many awakenings, that Sleeping Beauty deduces that she could accept 2 to 1 odds on each awakening and break even in the long run.


Let P(Flip | Today ) represent the conditional probability of the state of coin, relative to the value of SB's indexical "Today". Then

2/3 = P(Flip = Tails | Today = Mon Or Tue) = P(Flip = Tails) = 1/2

What is wrong with this conclusion of inconsistency?
Pierre-Normand June 03, 2023 at 13:12 #812778
Quoting sime
What is wrong with this conclusion of inconsistency?


I'm going to respond tomorrow since I'm now nearing the end of my current period of awakening. Meanwhile, I want to post this answer GPT-4 gave—on behalf of the halfer this time—to a challenge I had imagined. GPT-4 still has the ability to knock my socks off!

PN:

Yesterday I was thinking about the halfer's stance that upon awakening Sleeping Beauty's credence in the coin having landed heads should be 1/2, while acknowledging that when the coin lands tails she has two opportunities to guess "tails" (thereby acknowledging that a thirder SB would guess right two thirds of the time when betting on tails, and would be more willing to bet on tails in order to win more bets). This led me to the following challenge. We suppose that on each awakening a second fair coin is also tossed. The second coin toss is independent and, unlike the first, doesn't impact how the experiment is run. If Sleeping Beauty is more willing to bet on the first coin having landed tails than on the second one, does that not imply that the second fair coin now has a less than 1/2 chance of having landed tails?

GPT-4:

Your challenge indeed illustrates a fascinating aspect of this problem. Here is a possible way to respond:

As a halfer, Sleeping Beauty would be more willing to bet on the first coin landing tails, not because she believes it's more likely to land tails, but because there are two opportunities to be correct when betting on tails as opposed to one opportunity when betting on heads. However, this should not influence her betting behavior regarding the second coin, which is independent of the first and does not affect the experiment's procedure.

The crucial point here is that the additional awakenings under the tails outcome for the first coin change the betting odds in favor of tails, but they do not change the underlying physical probabilities of the coin flip, which remain at 1/2 for both heads and tails.

For the second coin, which is not linked to the experiment's procedure and leads to only one awakening regardless of its outcome, there are no such changes in the betting odds, and the physical probabilities remain the only relevant factor for Sleeping Beauty's bets. Therefore, Sleeping Beauty would remain indifferent between betting on heads or tails for the second coin, considering both outcomes equally likely.

In this scenario, the credence Sleeping Beauty assigns to the first coin's outcome upon awakening (1/2 for heads and tails according to the halfer perspective) reflects the physical probabilities of the coin flip, while her willingness to bet on tails rather than heads reflects the altered betting odds due to the additional awakenings under the tails outcome. Conversely, both her credence and her willingness to bet for the second coin directly reflect the physical probabilities, as there are no additional factors affecting the betting odds.
JeffJo June 03, 2023 at 22:07 #812848
Reply to Srap Tasmaner No, the thirder answer is not based on "who am I?" That appears to be just an excuse to reject the logic. In fact, in that post you linked, that question is neither asked nor answered. But there are equivalent ways to get the right answer, with only one SB.

Before I describe one, I need to point out that what most people think is the Sleeping Beauty Problem is actually a more contrived modification of it, invented by Adam Elga to enable his thirder solution. And the controversy is entirely about the parts he added, not the original.

This is the problem as it first ever appeared publicly, in the paper "Self-locating belief and the Sleeping Beauty problem". The two bracketed modifications replace words which clearly indicate that Elga was already thinking about his solution, since those words have no impact on the problem itself:
Adam Elga: Some researchers are going to put you to sleep. During the [time] that your sleep will last, they will briefly wake you up either once or twice, depending on the toss of a fair coin (Heads: once; Tails: twice). After each waking, they will put you back to sleep with a drug that makes you forget that waking. [While you are awake], to what degree ought you believe that the outcome of the coin toss is Heads?


Elga made the two wakings occur on Monday and Tuesday, and made them different by having Monday be a mandatory waking and Tuesday's be optional. The first issue you must face is to decide whether you think Elga's changes alter the correct response.

But there is another way to implement the original problem, as worded, where there is an easy answer. After putting SB to sleep, flip two coins. Call them C1 and C2. If either is showing Tails, wake SB and ask her "to what degree do you believe that coin C1 is showing Heads?"

After she answers, put her back to sleep with amnesia and turn coin C2 over to show its other side. Then repeat the steps in the previous paragraph, starting with "If either...".

The issue with the SB problem is whether to consider the two potential wakings as the same experiment, or different ones. This version resolves that. SB knows that when the researchers looked at the coins, there are four possible arrangements with probability 1/4 each: {HH, HT, TH, TT}. She also knows that, since she is awake, HH is eliminated. She can update her beliefs in the other three to 1/3 each.
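
If it helps, here is a quick Python sketch of this two-coin protocol (illustrative code only, with made-up names): tallying the state of the coins at each question-moment gives roughly 1/3 each for HT, TH and TT, and hence a 1/3 chance that C1 is showing Heads.

```python
import random
from collections import Counter

def question_states():
    c1, c2 = random.choice('HT'), random.choice('HT')
    states = []
    for _ in range(2):                    # the two "internal" experiments
        if 'T' in (c1, c2):               # wake and ask only if either shows Tails
            states.append(c1 + c2)
        c2 = 'T' if c2 == 'H' else 'H'    # turn C2 over before the second pass
    return states

tally = Counter(s for _ in range(100_000) for s in question_states())
total = sum(tally.values())
for state in ('HT', 'TH', 'TT', 'HH'):
    print(state, round(tally.get(state, 0) / total, 3))       # ~1/3, 1/3, 1/3, 0
print('C1 showing Heads:',
      round(sum(n for s, n in tally.items() if s[0] == 'H') / total, 3))  # ~1/3
```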
Srap Tasmaner June 04, 2023 at 03:23 #812902
Reply to JeffJo

It was not my intention to misrepresent your views.

Have a nice day.
jgill June 04, 2023 at 04:05 #812905
Quoting JeffJo
there are four possible arrangements with probability 1/4 each: {HH, HT, TH, TT}.


:up:
Pierre-Normand June 04, 2023 at 09:57 #812924
Quoting JeffJo
The issue with the SB problem, is whether to consider the two potential wakings as the same experiment, or different ones. This version resolves that. SB knows that when the researchers looked at the coins, there are four possible arrangements with probability 1/4 each: {HH, HT, TH, TT}. She also knows that, since she is awake, HH is eliminated. She can update her beliefs in the other three to 1/3 each.


I love your variation! However, your conclusion may be a bit premature as halfers can counter it using the same argument with which they counter the thirder position in Elga's setup. Allow me to break down the possible scenarios considering the initial flip results, the rearrangement of the second coin, and the resultant awakening sequence:

TT -> TH -> two awakenings (ww)
TH -> TT -> two awakenings (ww)
HT -> HH -> waking then sleeping (ws)
HH -> HT -> sleeping then waking (sw)

Given these four possible experimental runs following the four possible initial coin flip results, we find that when Sleeping Beauty awakens, she can certainly rule out HH as the current state of the two coins during that specific awakening episode. However, this does not eliminate the possibility of being in either of the last two experimental runs (in addition to, of course, either of the first two).

She could be waking up due to the initial coin flip (which is consistent with run-1, run-2, or run-3), or she could be waking up as a result of the rearrangement of the second coin (run-1, run-2, or run-4). As all these runs are still consistent with her experience, halfers might argue that Sleeping Beauty has gained no new information upon awakening and that these four possible runs remain equiprobable.
Pierre-Normand June 04, 2023 at 10:16 #812927
Quoting sime
Let P(Flip | Today ) represent the conditional probability of the state of coin, relative to the value of SB's indexical "Today". Then

2/3 = P(Flip = Tails | Today = Mon Or Tue) = P(Flip = Tails) = 1/2

What is wrong with this conclusion of inconsistency?


From Sleeping Beauty's perspective, the conditional probability of the coin flip being tails upon her awakening isn't conditioned on 'today' being either Monday or Tuesday, but rather on her actual state of being awakened 'today'. In other words, she conditions this probability on her being in a narrow (day long rather than experimental-run long) centered possible world in which she is awakened. It's this conditioning that yields her updating her prior 1/2 to the posterior 2/3.
Pierre-Normand June 04, 2023 at 10:23 #812930
Quoting Michael
So thanks for the enlightening discussion.


Thanks to you! I'll likely revisit some of your most recent objections just for the sake of clarifying my position (and refining it if need be). Of course, I'd be delighted if you'd chime in again whenever you feel like it.
Pierre-Normand June 04, 2023 at 10:46 #812934
Quoting Srap Tasmaner
Even the word "state" feels too coarse for Sleeping Beauty, since it could denote the situation a robust well-defined subject finds themselves in, or it could denote the very identity of that subject. --- At least, that's how the two main camps look to me. One wonders, where am I? how did I get here? One wonders, what am I? what has made me into this?

As you say, it's all about individuation. Lacking a fixed point of individuation, you can push the lever however you like but you won't actually move anything.


I agree. I view the states at issue to be irreducibly epistemic states of agents. They are, in other words, relational states defined pragmatically. So, when expressing their credences, thirders and halfers had better properly specify the intended relata of this relation. If credences are explicitly stated as ratios, then one must likewise attend to the way in which the two things being divided are individuated. I'm working on a variation of the Sleeping Beauty problem in which all the options for individuating the relevant relata are laid out so that it becomes clear when halfers and thirders are talking past each other, and when it is that they are led astray through being inconsistent with their individuating assumptions.
fdrake June 04, 2023 at 10:51 #812935
Quoting Pierre-Normand
Thanks to you! I'll likely revisit some of your most recent objections just for the sake of clarifying my position (and refining it if need be). Of course, I'd be delighted if you'd chime in again whenever you feel like it.


@Srap Tasmaner @Michael @sime

I also want to thank you all for the same reason. Thank you bandwagon.
JeffJo June 04, 2023 at 11:24 #812939
Quoting Pierre-Normand
Given these four possible experimental runs following the four possible initial coin flip results, we find that when Sleeping Beauty awakens, she can certainly rule out HH as the current state of the two coins during that specific awakening episode. However, this does not eliminate the possibility of being in either of the last two experimental runs (in addition to, of course, either of the first two).


The points of the variation are:

  • There is an "internal" probability experiment that begins when the researchers look at the two coins, and ends when either they see HH or they put SB back to sleep after seeing anything else.
  • The effect of the amnesia drug is not just that no information is transmitted between "this" SB and a possible version of herself in another internal experiment. It is that there are two such internal experiments in the full experiment, and each is a well-defined, self-contained experiment unto itself.
  • The question in the internal probability experiment is about the state of the coins during it, not whether it is the first or second possible internal experiment. So the issue of "indexical 'today'" is completely irrelevant.


But as you point out, halfers are very good at making up reasons for why the answer they intuitively believe must be correct, could be. The "indexical" argument is one. Your example, where they might try to argue that SB must consider the version of herself in the other internal experiment is another - I've seen it used in the classical version.

So, if you need, make one slight change to mine. Always wake SB. If the coins are showing HH, take her on a shopping spree at Saks Fifth Avenue. If not, ask the question. This gets around your suggestion, since there are always two wakings. It is being in a "question" waking that allows an update: Pr(HH|Q)=0 and Pr(HT|Q)=Pr(TH|Q)=Pr(TT|Q)=1/3.

Now ask yourself whether it matters, to the reasons why these updates are possible, if the researchers had simply said "something other than a waking question will happen." The update is valid because SB has observed that the consequence of HH did not happen, regardless of what that consequence could be.
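
A short sketch of this modified protocol (always two wakings; shopping on HH, a question otherwise) shows those conditional updates; the code and its names are just my own toy illustration:

```python
import random
from collections import Counter

def one_run():
    c1, c2 = random.choice('HT'), random.choice('HT')
    wakings = []
    for _ in range(2):                      # SB is always woken twice
        kind = 'question' if 'T' in (c1, c2) else 'shopping'
        wakings.append((kind, c1 + c2))
        c2 = 'T' if c2 == 'H' else 'H'      # turn C2 over between wakings
    return wakings

events = [e for _ in range(100_000) for e in one_run()]
questions = Counter(state for kind, state in events if kind == 'question')
total_q = sum(questions.values())
for s in ('HH', 'HT', 'TH', 'TT'):
    print(f"Pr({s}|Q) ~ {questions.get(s, 0) / total_q:.3f}")   # 0, then ~1/3 each
```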

+++++

When I tell you that the card I drew is a Spade, 39 of 52 outcomes are "eliminated" as possibilities. So you can update Pr(Ace of Spades)=1/52 to Pr(Ace of Spades|Spade)=1/13. But "Diamonds" are still a part of the sample space, and their (prior) probabilities are still used for that calculation.

The difficulty with the classic version of the SB Problem arises when we try to physically remove one possibility from the sample space, not just eliminate it from the current instance of the process. That can't be done when the two "internal" parts of the whole problem are different, but SB has to view them as the same. That is the motivation for my variation: to make the internal parts identical in the knowledge basis used to answer the question.

Being left asleep does not remove Tuesday+Heads from whatever sample space you think is appropriate. It is still a possibility, and being awake constitutes an observation that it did not happen.
sime June 04, 2023 at 16:17 #812980
Here's another criticism of the thirder position: Their reasoning implies that self-induced amnesia is a valid strategy for controlling outcomes:

Suppose that SB gets paid $1 if the coin lands tails, otherwise she must pay $1. Furthermore, suppose that before the experiment begins she is given the choice as to whether or not she will have amnesia during the course of the experiment. According to thirder reasoning, she should choose to have amnesia in order to raise the probability of tails to 2/3
PhilosophyRunner June 05, 2023 at 02:47 #813097
Quoting Michael
But honestly, all this talk of successes is irrelevant anyway. As I said before, these are two different things:

1. Sleeping Beauty's credence that the coin tossed on Sunday for the current, one-off, experiment landed heads
2. Sleeping Beauty's most profitable strategy for guessing if being asked to guess on heads or tails over multiple games

It's simply a non sequitur to argue that if "a guess of 'tails' wins 2/3 times" is the answer to the second then "1/3" is the answer to the first.


Reply to Michael

I took some time to read through the many good arguments in this thread, and I agree with the two different things you mentioned above.

The crux of the matter as I see it is whether the question about what the coin "shows" (taking the Scientific American article's wording) is relevant to 1 or 2.

And the way I interpreted what the coin "shows" is that, at the moment SB opens her eyes, photons leave the coin and hit her retina. What is the probability that those photons contain information for heads? This relates to 2 and is 2/3.

Whereas, as far as I can tell, you interpreted "shows" to mean what the coin landed on when it was flipped. This is related to 1 and is 1/2, absolutely.

And I put forward that this kind of difference in interpretation is what is happening between the two different positions in the SB problem.
PhilosophyRunner June 05, 2023 at 02:54 #813100
Reply to sime In your example using the thirder position,

If she has amnesia she should guess tails and will win 2/3 of the time.

If she doesn't have amnesia she should guess either way on the first wake-up (1/2 probability, so it doesn't matter which she guesses), and she should guess tails with absolute certainty if she remembers having woken up before (i.e. on her second wake-up). Again she will win 2/3 of the time.

So having amnesia or not does not change the probability that she will win, but the tactics she should use are different.
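
Here is a rough simulation of the two tactics (my own toy code, scoring per-awakening success, which is one natural way to count 'winning'):

```python
import random

def success_rate(amnesia, runs=100_000):
    correct = total = 0
    for _ in range(runs):
        flip = random.choice(['heads', 'tails'])
        for i in range(1 if flip == 'heads' else 2):
            if amnesia:
                guess = 'tails'                               # fixed best guess
            else:                                             # she remembers prior wakings
                guess = random.choice(['heads', 'tails']) if i == 0 else 'tails'
            correct += (guess == flip)
            total += 1
    return correct / total

print(round(success_rate(True), 3), round(success_rate(False), 3))   # both ~0.667
```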
sime June 05, 2023 at 10:51 #813123
Quoting PhilosophyRunner
If she has amnesia she should guess tails and will win 2/3 of the time.

If she doesn't have amnesia she should guess either way on the first wake-up (1/2 probability, so it doesn't matter which she guesses), and she should guess tails with absolute certainty if she remembers having woken up before (i.e. on her second wake-up). Again she will win 2/3 of the time.

So having amnesia or not does not change the probability that she will win, but the tactics she should use are different.


In my argument, SB isn't asked to guess anything during the experiment. To make things really simple, let us only suppose that SB will lose $1 if the coin lands heads, without a wager being involved.

According to thirders, if she has amnesia then

P(Tails and Tuesday) = P(Tails and Monday) = P(Heads and Monday) = 1/3

So, according to thirders, her probability of losing $1 when awakened on Monday in a state of amnesia is 1/3.

On the other hand, if she doesn't have amnesia when waking up on Monday then thirders will agree that her probability of losing $1 is 1/2.

So according to thirder logic, it is rational for SB to consent to having amnesia before the experiment begins, in order to bias the coin's outcome towards tails.


Michael June 05, 2023 at 11:29 #813129
Reply to sime Alternatively, she loses $1 each time she wakes. What is her credence that she will lose $2?

It’s not the case that she loses $2 2/3 of the time, although it is the case that 2/3 of the time she wakes up she’s in an experiment where she’s going to lose $2.

I think thirders conflate the two. The latter is obvious but not what is asked about when asked her credence that the coin landed tails.
Pierre-Normand June 05, 2023 at 12:58 #813138
Quoting sime
According to thirders, if she has amnesia then

P(Tails and Tuesday) = P(Tails and Monday) = P(Heads and Monday) = 1/3


More precisely, the thirder view could be expressed as the claim that whenever Sleeping Beauty awakens, from her epistemic perspective, P(today is Tuesday and the coin landed tails) = P(today is Monday and the coin landed tails) = P(today is Monday and the coin landed heads) = 1/3

So, according to thirders her probability of losing $1 when awakened on monday in a state of amnesia is 1/3.


1/3 would be her prior upon awakening and before being informed that the day is Monday. Upon being informed of this, Sleeping Beauty would update her credence to 1/2.
sime June 05, 2023 at 13:43 #813142
Quoting Pierre-Normand
1/3 would be her prior upon awakening and before being informed that the day is Monday. Upon being informed of this, Sleeping Beauty would update her credence to 1/2.


That is true according to the thirders' logic, but it doesn't alter their fallacious implication that consenting to amnesia before the experiment improves SB's chances of getting tails. If SB accepts that amnesia isn't a relevant causal factor of the coin's outcome, then she must avoid using a self-location credence that has that implication.

Similar magical thinking is demonstrated in avoidance coping. A poor person who has financial anxiety might raise their credence that their bank balance is in the black by choosing not to read their financial statements. Having accurate credences can be painful and reduce one's ability to cope in the short term, so there are short-term psychological benefits in choosing ignorance.



JeffJo June 05, 2023 at 14:35 #813148
This is why I refuse to use betting arguments.
Quoting sime
Suppose that SB gets paid $1 if the coin lands tails, otherwise she must pay $1. Furthermore, suppose that before the experiment begins she is given the choice as to whether or not she will have amnesia during the course of the experiment. According to thirder reasoning, she should choose to have amnesia in order to raise the probability of tails to 2/3


Is she paid, or does she pay, this $1 on both days, or on Wednesday after the experiment is over? In the latter case, can she choose not to have amnesia, and then choose "Heads" if she recalls no other waking but change that to "Tails" if she does?

This is why I tried to use a different setup, where this illogical juxtaposition of two different states is not an issue.

+++++

Probability is not a property of the system, it is a property of what is known about the system. Say I draw a card. After I look at it, I tell Andy that it is a black card, Betty that it is a spade, Cindy that its value is less than 10, and David that it is a seven (all separately). I ask each what they think the probability is that it is the Seven of Spades. Andy says 1/26, Betty says 1/13, Cindy says 1/32, and David says 1/4. All are right, but that does not affect my draw. I had a 1/52 chance to draw it.
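
A small sketch of the card example (one assumption made explicit: "value less than 10" is taken to pick out the ranks 2 through 9, which is what the 1/32 figure presupposes):

```python
from fractions import Fraction

ranks = ['A'] + [str(n) for n in range(2, 11)] + ['J', 'Q', 'K']
suits = ['Spades', 'Hearts', 'Diamonds', 'Clubs']
deck = [(r, s) for r in ranks for s in suits]
seven_of_spades = ('7', 'Spades')

told = {
    'Andy (black card)':  lambda c: c[1] in ('Spades', 'Clubs'),
    'Betty (a spade)':    lambda c: c[1] == 'Spades',
    'Cindy (value < 10)': lambda c: c[0] in {str(n) for n in range(2, 10)},
    'David (a seven)':    lambda c: c[0] == '7',
}
for name, condition in told.items():
    pool = [c for c in deck if condition(c)]     # cards consistent with what each was told
    print(name, Fraction(pool.count(seven_of_spades), len(pool)))   # 1/26, 1/13, 1/32, 1/4
```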

The thirder argument is that SB knows of only three possible states. To her knowledge, there is no functional difference between them, so each has the same probability. Since they must sum to 1, each is 1/3. The fault they find with the halfer argument is that it implies that, to her, Monday&Tails and Tuesday&Tails represent the same state of possible knowledge. And they don't. They represent the same future, as determined on Sunday, but not the same state on a waking day.
PhilosophyRunner June 05, 2023 at 14:43 #813150
Quoting Michael
The latter is obvious but not what is asked about when asked her credence that the coin landed tails.


This right here is the key disagreement.

Taking the wording from the Scientific American article, it says "What is the probability that the coin shows heads?"

It does not say "what is the probability that the coin landed heads."

The linguistic difference between the coin showing heads and landing on heads when flipped is the disagreement.

Thirders take the position that "shows" is referring to the event of light reflecting from the coin and then entering SB retina, which is dependent on when and how many times she opens her eyes. The showing happens multiple times in the experiment if the coin lands on heads, it is not a one off event.

Halfers take the position "shows" is only referring to how the coin landed when it was flipped. That is independent to when and how many times SB opens her eyes. The coin lands only once per experiment - it is a one off event as you say.

The disagreement is linguistic rather than statistical.
JeffJo June 05, 2023 at 14:55 #813153
Reply to jgill
I'm not sure what your point was, but you can't make a valid one by leaving out half of the quote.
Quoting JeffJo
when the researchers looked at the coins, there are four possible arrangements with probability 1/4 each: {HH, HT, TH, TT}.


The state of the system at that time consisted of four equally-likely possibilities. SB knows, from the experiment set up, about these four equally-likely possibilities. But she also knows that the current state of the system does not include HH. It matters not what the state would be (asleep, gone shopping, whatever) if it did include HH, because she knows that it does not.

Except for the unusual setup, this is a classic example of an introductory-level conditional probability problem. Within her knowledge, the conditional probability of HT is now 1/3.

The fallacy in the halfer argument is that they confuse the changing state of the system with the set of possible states SB was aware of when the experiment started on Sunday. They do this because they can't describe the current state in terms of SB's knowledge when she is awake.

The point of my variation is to make the current state of the system functionally the same anytime she is asked the question.
PhilosophyRunner June 05, 2023 at 14:59 #813155
Reply to sime You are overlooking that she will also wake up on Tuesday if the coin landed on tails - you are only looking at Monday.

When you add in the probabilities for Tuesday as well, you find that amnesia is no better or worse, as per my previous post.
sime June 05, 2023 at 15:57 #813159
Quoting JeffJo
Is/does she paid/pay this $1 on both days, or on Wednesday after the experiment is over? In the latter case, can she choose not to have amnesia, and then choose "Heads" if she recalls no other waling but change that to Tails if she does?


I simplified my thought experiment to say that she loses $1 on Monday if heads comes up, otherwise she loses nothing. No stake is involved, and she isn't asked to bet on the outcome during the course of the experiment.

It boils down to the following question. Should she risk a single draw from

P(Flip = Heads | Today = Monday) = 1/2 (no amnesia)

Or should she risk a single draw from

P(Flip = Heads | Today = Monday Or Tuesday) = x (amnesia)

According to halfers, x = 1/2, implying that she gains nothing by choosing amnesia.

According to thirders, x = 1/3, indicating that she gains from choosing amnesia.

Quoting JeffJo
Say I draw a card. After I look at it, I tell Andy that it is a black card, Betty that it is a spade, Cindy that its value is less than 10, and David that it is a seven (all separately). I ask each what they think the probability is that it is the Seven of Spades. Andy says 1/26, Betty says 1/13, Cindy says 1/32, and David says 1/4. All are right, but that does not affect my draw. I had a 1/52 chance to draw it.


If you are referring to a context involving repeated trials, then all of your probabilities are physically meaningful posterior probabilities, for we are confident on the basis of our past experience with card games that the frequencies will roughly obtain. In which case none of your above probabilities are appealing to the controversial principle of indifference.

In stark contrast, Elga (for example) invokes the principle of indifference on the basis of ignorance to assert

P(Monday | Tails) = P(Tuesday | Tails)

This leads him to a conclusion that conflicts with his knowledge of fair coins and that encourages avoidance coping. What he ought to have done is to represent his ignorance with the maximal set of permissible distributions and to assign confidence bounds, until such time as he has reason to whittle the set down to a smaller subset and make more specific predictions.

Or, if he really must assign a single distribution (I can't think of a good reason), to choose one whose deductive implications cohere with his broader state of knowledge.



PhilosophyRunner June 05, 2023 at 16:41 #813167
Let's look once again at two wordings of the SB problem, I have bolded what I see as the key part of them. Both are questions to SB:

From Scientific American: "What is the probability that the coin shows heads"
From wikipedia: "What is your credence now for the proposition that the coin landed heads?"

From the thirder position, both of these questions are asking about something that is intrinsically linked to her mental state at a particular moment in time, what she can see, etc. From the halfer position they are only asking about the coin. The disagreement is about what exactly the question is asking.

Let me rewrite these statements in more obvious ways, sorted into how the two camps interpret them.

Thirder camp:

"The coin is behind you. When you turn around and look at it, what is the probability that you will see heads at this very moment in time"
"What is the probability that photons leaving the coin and entering your eyes at this very moment in time will encode an image of heads?"

Halfer camp:

"What is the probability that the coin landed on heads when it was flipped, regardless of this moment in time when I am asking you the question?"
"What is the probability that the coin landed on heads, as seen by an independent observer watching the whole experiment without sleeping."

PhilosophyRunner June 05, 2023 at 17:08 #813173
Sorry for the repeated posts; here is another thought experiment to see the difference between the two camps. Both camps are given a camera and told to take pictures of the coin at what they consider to be the important moments.

The experiment is done twice and the coin lands on heads once and on tails once from the perspective of an independent observer.

Halfer camp

The coin is tossed and lands on heads on Sunday. They take a picture of it showing heads.
The coin is tossed and lands on tails on Sunday. They take a picture of it showing tails.

They come away with two pictures - one heads and one tails. This is what the question is asking for as far as they are concerned.

Thirder camp

The coin is tossed and lands on heads. They don't take a picture - they are not interested in the toss itself, as Sleeping Beauty is asleep. Sleeping Beauty wakes up on Monday, and now they take a picture of the coin at that moment showing Heads.

The coin is tossed and lands on tails. They don't take a picture - they are not interested in the toss itself, as Sleeping Beauty is asleep. Sleeping Beauty wakes up on Monday, and now they take a picture of the coin at that moment showing Tails. Sleeping Beauty wakes up again on Tuesday, and now they take a picture again of the coin at that moment showing Tails.

They come away with three pictures - one heads and two tails. This is what the question is asking for as far as they are concerned.

You can see the difference in what the halfers and thirders think is important based on when they took their pictures.
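
A toy tally of the two albums over those two runs, just to make the counting explicit:

```python
runs = ['heads', 'tails']                      # one heads run and one tails run

halfer_album = list(runs)                      # one photo per run, taken at the toss
thirder_album = [flip for flip in runs
                 for _ in range(1 if flip == 'heads' else 2)]   # one photo per awakening

print(halfer_album)     # ['heads', 'tails']           -> heads in 1/2 of the photos
print(thirder_album)    # ['heads', 'tails', 'tails']  -> heads in 1/3 of the photos
```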
Pierre-Normand June 06, 2023 at 04:49 #813295
I've refined further @Michael's 'less-extreme' variation of the Sleeping Beauty problem in a way that highlights a pragmatist interpretation of the halfer/thirder dispute. Here it is with some comments by GPT-4:

Hi GPT-4,

Here is a new variation on the Sleeping Beauty problem that I imagined with the goal of highlighting a pragmatist interpretation of it.

Sue Sitter has recently joined the team of sitters at the Sleeping Beauty Experimental Facility. In her role, like other sitters, she's responsible for attending to a participant on one single day during one of their experimental sessions. The nature of these sessions depends on a roll of a die: if the die lands on 'six,' the participant will be woken and interviewed six times over six days; otherwise, they will only wake once for an interview on the first day. As a sitter, Sue is assigned to a room randomly, without knowing whether the participant inside is scheduled for a single awakening or one of six.

In the experimental facility, there are, on average, a 6/11 proportion of rooms occupied by participants waking six times (due to a die having landed 'six') and a 5/11 proportion of rooms occupied by participants waking only once (due to a die not having landed 'six'). Now suppose Sue had access to the participant registry where all active participants are listed. She spotted the name 'Jane Doe.' What are the chances that Jane will be woken up six times? Sue's credence is 1/6, and this would also be the case for Jane Doe's credence before she undergoes the first sleep session.

Now, suppose that by a stroke of luck, Sue is assigned to Jane Doe on that particular day. Her job is to measure Jane's vitals as she awakens and get her breakfast while she waits for the interview. Sue arrives in Jane's room and waits for her to wake up. What are the chances that the die landed on 'six', indicating this would be one of six awakenings rather than the only one?

Once Sue has been assigned to Jane Doe, her credence P('six') in this proposition should be updated from 1/6 to 6/11. This is because Sue was randomly assigned a room and her credence must now reflect the proportion of participants who wake up six times to the total number of participants who occupy the facility on that day. It can be argued that when Jane Doe awakens, her credence P('six') then aligns with Sue's because both their credences are targeting the exact same proposition, and both of them have the same epistemic access to it. But I'm not going to argue for this just now.

From the point of view of Aunt Betsy (Jane Doe's aunt), who visits the facility and wants to see Jane, there is a 5/6 chance that Jane's die didn't land on 'six' and that she is being woken up only once, whereas from Sue's point of view there is a 6/11 chance that the die landed on 'six' and that Jane is today experiencing one in a sequence of six awakenings.

Suppose that when a visitor like Aunt Betsy is allowed to come to the facility to inquire about a participant, they get to meet at the cafeteria with the participant's assigned sitter, who will inform them of the participant's wellbeing. Aunt Betsy and Sue Sitter then get to discuss their respective (and inconsistent) credences regarding the die throw that determined Jane's number of awakenings. Will either Aunt Betsy or Sue find some ground to update their credences?

(Note that the circumstances of Aunt Betsy's meeting with Sue Sitter are somewhat underspecified.)

GPT-4:

This is a fascinating variation on the Sleeping Beauty problem! It certainly introduces an additional layer of complexity with the inclusion of Sue Sitter and Aunt Betsy, each with their own perspectives and credences.

The original Sleeping Beauty problem is interesting because it's a question about how to update beliefs when you learn no new ordinary information, but only learn that it is now. This is often called "self-locating belief." Here, you have a situation where both Sue Sitter and Aunt Betsy have different self-locating beliefs about the situation, but also have different access to information about the experiment.

Let's first clarify their perspectives:

Sue Sitter: She knows the overall proportions of the experiment, so her credence that Jane Doe will wake six times is 6/11, given that she was randomly assigned a room.

Aunt Betsy: She doesn't know the overall proportions of the experiment. She just knows that Jane Doe has been put into the experiment and that a die roll of six means Jane will wake up six times. So, her credence that Jane Doe will wake six times is 1/6.

When Sue Sitter and Aunt Betsy meet, they have an opportunity to share their perspectives and possibly update their credences.

The key question here is: do they have a reason to trust each other's perspective? Is there a reason why Sue should trust Aunt Betsy's credence of 1/6, or why Aunt Betsy should trust Sue's credence of 6/11?

In a sense, they're operating under different contexts and different sets of information. Sue's context is the whole experimental facility where she knows the average proportions, while Aunt Betsy's context is just about Jane Doe and the die roll that determines her awakenings.

So, when they meet and share their perspectives, they might be able to adjust their credences. But how they do so depends on what they learn from each other, their trust in each other's knowledge, and how they interpret the information. This is an open question in the philosophy of probability and epistemology, and different theories might suggest different ways of updating their beliefs.

The pragmatist interpretation you've proposed seems to suggest that Sue Sitter and Aunt Betsy should stick to their own credences, because those credences are based on their own perspectives and contexts. But they could also potentially learn from each other and update their credences in light of new information.

The exact credence updates would depend on various factors, including their prior beliefs about the reliability of their own and each other's information sources, the clarity with which they understand each other's perspectives, and their understanding of the underlying mechanics of the experiment. The problem does not provide a clear-cut, definitive answer to this question, illustrating the complexity and ambiguity inherent in the philosophy of probability and belief updating.

PN:

Let us start with the assumption that both Sue and Aunt Betsy are fully informed about the experimental setup and have complete trust in each other, although they may interpret their respective credences differently. Visitors like Aunt Betsy can arrange a visit if they notify the facility before the participant begins the experiment and starts their sleep schedule. Once the participant's sleep schedule is set, the facility manager organizes a meeting between the visitor and an assigned sitter. This meeting is scheduled for a randomly chosen day when the participant is due to awaken under the watch of this sitter. Importantly, the circumstances surrounding this meeting are designed in such a way that neither Sue nor Aunt Betsy receives any new information that might lead them to adjust their beliefs about the outcome of the die roll.

GPT-4:

Under these additional clarifications, it seems that both Sue Sitter and Aunt Betsy have complete knowledge of the experimental setup and trust each other's report. They also have no reason to believe that their meeting provides additional information about Jane Doe's situation. This makes the question of whether they should update their credences even more interesting.

In this case, they might consider the following:

Aunt Betsy could reason that Sue Sitter, being an employee of the facility, has a more informed perspective on the overall proportions of the experiment. Therefore, she might consider updating her credence to align with Sue's, which is 6/11.

Sue Sitter could consider Aunt Betsy's credence, but since Sue's perspective is based on the overall proportions of the experiment, she might not find a compelling reason to change her credence based on Aunt Betsy's belief.

What's interesting about this situation is that even though they both have complete knowledge of the experimental setup, their credences are based on their specific roles and perspectives within the experiment.

While it's tempting to think that they should align their credences, it's not necessarily clear that they should. This highlights how the concept of "credence" in probability theory can be subjective and context-dependent, even when all parties have full knowledge of the underlying facts.

Sue's credence of 6/11 is based on her perspective as a sitter, who knows the overall proportions of the experiment. Aunt Betsy's credence of 1/6 is based on her perspective as an aunt, who knows the die roll determines Jane Doe's awakenings.

The pragmatist interpretation might suggest that each should stick to their own credence, as it is the most rational belief given their specific perspective and role. But other interpretations might suggest that they should try to align their beliefs in light of their shared knowledge and trust in each other.

This scenario serves as an interesting case study in the philosophy of probability, illustrating how different perspectives can lead to different credences even with full knowledge of the facts.

PN:

Suppose that whenever a sitter and a visitor meet under such circumstances, they are allowed to put their money where their mouth is and make an even money bet on the outcome (i.e. on 'six' or 'not six'; or, equivalently, on six awakenings or one single awakening). Aunt Betsy would bet on 'not six' and expect to win five out of six times. Sue Sitter would bet on 'six' and expect to win six out of eleven times. And so would they both! How is that possible? (I already know the answer, but that's a challenge for you).

GPT-4:

The situation you've described is certainly intriguing. It seems paradoxical because Sue and Aunt Betsy have different credences about the same event and yet, given their respective perspectives, they both expect to win more often than not. This paradox arises from the fact that the event being wagered on ("six" or "not six") occurs within different sampling populations for Sue and Aunt Betsy.

Aunt Betsy is betting on the outcome of a single, specific experiment involving Jane Doe, where the die is rolled once. She knows that in this single experiment, the chance of getting a six is 1/6, and the chance of not getting a six is 5/6. So, she expects to win her bet 5 out of 6 times.

On the other hand, Sue Sitter is not betting on a single, specific experiment. Instead, she's betting on the outcome of a random sampling from all the experimental sessions taking place in the facility. In this larger population of experimental sessions, 6/11 of the sessions will involve six awakenings (a "six" outcome), and 5/11 will involve one awakening (a "not six" outcome). So, she expects to win her bet 6 out of 11 times.

This apparent paradox is resolved when we realize that Sue and Aunt Betsy are not actually betting on the same event, despite appearances. Aunt Betsy is betting on the outcome of a specific experiment (Jane Doe's), while Sue is betting on the outcome of a random sampling from all experiments. The event they're betting on has a different probability in these two different contexts.

This example demonstrates an important principle in probability theory: the probability of an event can depend on the context or sampling population in which it's considered. It also illustrates how different perspectives can lead to different assessments of probability, even when all parties have full knowledge of the underlying facts.

PN:

Your explanation is generally accurate, but it might be helpful to clarify that Aunt Betsy's and Sue's chances of 'winning' more often, despite betting on complementary outcomes, are based on their own individual long-term frequencies of winning. If they repeat their respective 'experiments' many times, Sue will interact with different participants' acquaintances, and Aunt Betsy will meet different sitters. Consequently, they won't both win more often against each other – a scenario that would indeed constitute a logical impossibility.

GPT-4:

Absolutely, your clarification is spot on. The betting scenario, as you've described it, involves many repetitions of the experiment. In the long run, both Sue Sitter and Aunt Betsy are betting in multiple rounds, with different people, based on their respective beliefs about the outcome of the experiment.

Aunt Betsy, when betting with different sitters across different experiments, will expect to win 5 out of 6 times because she is considering each experiment as a single event where a die is rolled once.

On the other hand, Sue Sitter, when betting with different visitors across different experiments, will expect to win 6 out of 11 times because she is considering the overall proportions of the experiment and the distribution of awakenings in the facility.

They are not directly betting against each other over and over again. If they were, then it would indeed be a logical impossibility for both to win more often than they lose, since they're betting on complementary outcomes.

This scenario is an interesting demonstration of how different perspectives can lead to different probabilities, and how those probabilities are borne out in the long term through repeated trials. It also shows how the same event can be perceived differently depending on the context and the observer's perspective.
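
To make the two sampling populations concrete, here is a rough simulation of the facility's bookkeeping (an illustration of my own, assuming one sitter-day per scheduled awakening); it recovers the 1/6 per-participant figure and the 6/11 per-sitter-day figure:

```python
import random

six_days = other_days = six_participants = n = 0
for _ in range(110_000):                       # participants
    n += 1
    if random.randint(1, 6) == 6:
        six_participants += 1
        six_days += 6                          # six awakenings -> six sitter-days
    else:
        other_days += 1                        # one awakening -> one sitter-day

print(round(six_participants / n, 3))                  # ~1/6  (Aunt Betsy's sampling)
print(round(six_days / (six_days + other_days), 3))    # ~6/11 (Sue's sampling)
```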
Pierre-Normand June 06, 2023 at 08:54 #813320
Follow-up to my discussion with GPT-4 about the pragmatist interpretation of the Sleeping Beauty problem:

PN:

As I reflected about this thought experiment while taking a shower (always a propitious moment for philosophizing) I noticed a flaw in my initial conception of it. Due to the way Aunt Betsy and Sue's meeting is arranged, the mere occurrence of the meeting allows Sue to update her credence P('six') to match Aunt Betsy's. Can you see why?

GPT4:

Yes, I see the point you're making. The scheduling of the meeting between Aunt Betsy and Sue is tied to a participant's awakening, which in turn is tied to the outcome of the die roll. This means that the occurrence of the meeting itself does provide Sue with information relevant to her credence P('six').

Specifically, since the meeting occurs on a randomly chosen day when the participant is scheduled to awaken, the very fact that the meeting is taking place tells Sue that the participant has indeed awakened at least once. Given that the chance of the participant awakening at all is higher if the die landed on 'six', this information should increase Sue's credence that the die landed on 'six'. However, because the die is rolled before the participant's sleep schedule is set, and the meeting is scheduled after this, the information doesn't completely determine the outcome of the die roll. Sue's credence should therefore align more closely with Aunt Betsy's, but it won't necessarily match it exactly.

PN:

I think you are now (uncharacteristically) not entirely on track. The reason why is that when Aunt Betsy requests a meeting with (one of) Jane's assigned sitter(s), if the die didn't land on 'six', the unique sitter assigned to Jane will be booked for the meeting. Else, one of the six sitters assigned to Jane will be booked at random. Upon learning that she was booked, Sue's credence that she is the only sitter (rather than one among six) assigned to Jane increases.

GPT4:

You're right, I misunderstood the key aspect of your thought experiment. When Aunt Betsy schedules a meeting, one of two scenarios will happen:

If the die did not land on 'six,' one sitter was assigned to Jane, and that sitter will be booked for the meeting.

If the die landed on 'six,' six sitters were assigned to Jane, and one of them will be randomly chosen for the meeting.

Given this setup, when Sue is selected for the meeting, she gains new information: she now knows that she is either the sole sitter assigned to Jane (if the die did not land on 'six') or one among six (if the die landed on 'six').

The fact that Sue was selected for the meeting therefore updates her credence in the proposition that she is the only sitter assigned to Jane. This is because being chosen for the meeting is more likely in the scenario where there is only one sitter (where the die did not land on 'six') compared to the scenario where there are six sitters (where the die landed on 'six').

I apologize for the confusion, and I appreciate your patience in clarifying your thought experiment.

PN:

No trouble! I think we can easily use Bayesian updating to show that Sue's credence in P(not 'six') thereby increases from 5/11 to 5/6.

GPT4:

Yes, you're correct. Sue can use Bayesian updating to adjust her beliefs based on the new information she has received.

The Bayes' theorem formula is as follows:

P(A|B) = [P(B|A) * P(A)] / P(B)

In this case, we can interpret:

A as the event "the die did not land on 'six'"
B as the event "Sue is selected for the meeting"
Before the meeting, Sue's prior belief, P(A), is 5/11, reflecting the proportion of participants who wake up only once.

P(B|A) is 1, because if the die did not land on 'six', Sue is the only sitter assigned to Jane and will definitely be selected for the meeting.

P(B) is the total probability that Sue is selected for the meeting, which is the sum of the probabilities of her being selected under both possible outcomes of the die roll. This is (5/11 * 1) + (6/11 * 1/6) = 5/11 + 1/11 = 6/11.

Plugging these values into the Bayes' theorem formula gives us:

P(A|B) = [1 * (5/11)] / (6/11) = 5/6

So, after learning that she has been selected for the meeting, Sue's updated belief in the proposition that the die did not land on 'six' increases to 5/6.

PN:

You got it right on the first try, congrats!

GPT-4:

Thank you! I'm glad the explanation met your expectations. Bayesian reasoning can be quite intricate but it's a powerful tool for understanding how to update our beliefs in light of new information. If you have any more questions or thought experiments, feel free to share!

PN:

I just want to note that this result does not invalidate the 'pragmatist' lesson from the thought experiment. It's still the case that if they were to bet on their credences prior to meeting, both Aunt Betsy and Sue would make money in the long run. Furthermore, after the meeting, Sue would alter her betting behavior and make even more money (winning five times out of six by betting on the opposite outcome). But in this case, she would only have the opportunity to make this more profitable bet when she is booked for a meeting, so her proportions of winnings (and hence, her credence) would reflect differently sampled populations.

GPT4:

Absolutely, you've hit the nail on the head. While Sue updates her credence after the meeting, this doesn't undermine the pragmatist interpretation that emerged earlier in the thought experiment. The betting behavior of both Aunt Betsy and Sue is still rational given their respective information and they would both make money in the long run if they were to bet on their initial credences.

Furthermore, you're correct in noting that Sue's updated credence and altered betting behavior would indeed lead to more profitable outcomes for her in the instances where she is booked for a meeting. However, these instances represent a different sampling of the population (only when she is booked for a meeting), reinforcing the point that different agents with different information and sampling conditions can rationally hold different credences about the same event.

This is a very insightful point and underscores the richness and complexity of the pragmatist interpretation of the Sleeping Beauty problem and its variations. It's a good reminder that our beliefs and predictions about the world should take into account not only the available information but also the ways in which we are sampling from the world.
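
As a quick consistency check on the 5/6 figure above, a small simulation of the booking procedure (my own sketch, sampling over sitter assignments with exactly one sitter per participant booked for the visit) recovers both Sue's 5/11 prior and her 5/6 posterior:

```python
import random

assignments = []                               # (die_landed_six, sitter_is_booked)
for _ in range(200_000):                       # participants with a scheduled visitor
    six = (random.randint(1, 6) == 6)
    n_sitters = 6 if six else 1
    booked = random.randrange(n_sitters)       # one sitter per participant gets the meeting
    assignments += [(six, i == booked) for i in range(n_sitters)]

prior = sum(not six for six, _ in assignments) / len(assignments)
booked_only = [six for six, b in assignments if b]
posterior = sum(not six for six in booked_only) / len(booked_only)
print(round(prior, 3), round(posterior, 3))    # ~0.455 (5/11) and ~0.833 (5/6)
```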
Kizzy June 06, 2023 at 16:32 #813378
https://thephilosophyforum.com/discussion/comment/782810 Any thoughts on this contribution to the thread from The Lounge, posted back in March and titled "Cinderella Problem. Anyone understand it?" My interpretation and take on this comes from a clean slate, as this was the first I had heard about the problem... The view or twist I add to the problem involves not waking up on Wednesday at all; death is the outcome... but is it the only outcome?... The other outcome that WILL ALSO occur in this experiment is that of a fair coin toss... two definite outcomes: an H or T result, and never waking up again.
What if she was never supposed to wake up again after either just Monday (heads on a fair flip) or Monday and Tuesday (tails on a fair flip)? How lucky would she be to get two days instead of one to be alive for one more moment? What if she couldn't wait to die?... unlucky gal.
Pierre-Normand June 07, 2023 at 10:33 #813583
Quoting sime
It boils down to the following question. Should she risk a single draw from

P(Flip = Heads | Today = Monday) = 1/2 (no amnesia)

Or should she risk a single draw from

P(Flip = Heads | Today = Monday Or Tuesday) = x (amnesia)

According to halfers, x = 1/2, implying that she gains nothing by choosing amnesia.

According to thirders, x = 1/3, indicating that she gains from choosing amnesia.


Your suggestion that a thirder expects to gain from choosing amnesia would depend on her misconstruing the nature of her predictions. A thirder, upon each awakening, assigns a credence of P(H) = 1/3 to the coin having landed heads, predicting that, over many awakenings within various runs of the experiment, the coin will have landed 'tails' in 2/3 of those awakenings. These are predictions about the frequency of outcomes across multiple awakenings, not about the specific outcome of the coin toss in a particular awakening.

The initial bet, as you stipulated, is made prior to the start of the experiment, not at each waking event. As such, the payout of $1 is tied to the experimental run as a whole, not to each individual awakening. While a thirder may correctly predict the frequency of 'tails' outcomes across awakenings, these predictions do not increase the actual winnings from the initial bet.

In sum, while the amnesia might increase the frequency of correct predictions, it does not lead to a higher payout. This is because only one payout occurs per experimental run, even when Sleeping Beauty correctly predicts a 'tails' outcome twice within a run. A thirder, fully informed about the conditions of the experiment and the nature of her predictions, would be aware of this.
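
A small sketch separating the two quantities just mentioned (the fraction of awakenings that fall inside 'tails' runs versus the fraction of runs in which a single per-run bet on tails pays out); the code is only an illustration of the distinction:

```python
import random

tails_awakenings = total_awakenings = tails_runs = runs = 0
for _ in range(100_000):
    runs += 1
    flip = random.choice(['heads', 'tails'])
    n = 2 if flip == 'tails' else 1
    total_awakenings += n
    if flip == 'tails':
        tails_awakenings += n
        tails_runs += 1

print(round(tails_awakenings / total_awakenings, 3))   # ~2/3, per awakening
print(round(tails_runs / runs, 3))                     # ~1/2, per run (one payout per run)
```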
sime June 07, 2023 at 11:23 #813587
Quoting Pierre-Normand
Your suggestion that a thirder expects to gain from choosing amnesia would depend on her conflating the probability of making a correct prediction upon awakening with the frequency of the actual payout from the initial bet.


They do indeed conflate them. Why would SB assign a credence of 1/3 to heads, if she knows

1) That the physical probability of winning the lottery is 1/2

2) That amnesia doesn't influence lottery outcomes.

3) That had she not had amnesia, then she would either know the result of the lottery on waking up, or else refer to its probabilities as being 1/2?


Since thirders make no principled distinction between causes and effects, they do indeed encourage avoidance-coping. For by their logic, one should raise one's credences for one's preferred outcomes by avoiding information.





Michael June 07, 2023 at 11:49 #813589
Quoting Pierre-Normand
Your suggestion that a thirder expects to gain from choosing amnesia would depend on her conflating the probability of making a correct prediction upon awakening with the frequency of the actual payout from the initial bet.


I think a distinction needs to be made between the probability of making a correct prediction and the frequency of making a correct prediction. That a correct prediction of tails is twice as frequent isn't that a correct prediction of tails is twice as probable – at least according to Bayesian probability.

Maybe thirders who use betting examples are simply frequentists?

Perhaps there also needs to be a distinction made between the probability of making a correct prediction and the probability of the coin having landed tails. It could be that the answers are different. This might be especially true for frequentists, as the frequency of correct predictions is not the same as the frequency of coins landing tails (there can be two correct predictions for every one coin that lands tails).
Pierre-Normand June 07, 2023 at 12:10 #813591
Quoting sime
They do indeed conflate them. Why would SB assign a credence of 1/3 to heads, if she knows

1) That the physical probability of winning the lottery is 1/2

2) That amnesia doesn't influence lottery outcomes.

3) That had she not had amnesia, then she would either know the result of the lottery on waking up, or else refer to it's probabilities as being 1/2?


Because her credence wasn't meant to reflect the proportion of winning bets but rather the proportion of awakening episodes that occur during an experimental run that results in the (initial) bet being won. Those two aren't the same. If Sleeping Beauty had instead been afforded the opportunity to bet on each awakening episode, then her winning frequencies would exactly match her credence that she is winning that bet.

Since thirders make no principled distinction between causes and effects, they do indeed encourage avoidance-coping. For by their logic, one should raise one's credences for one's preferred outcomes by avoiding information.


Neither halfers nor thirders typically make any reference to the potential causal impacts (e.g. winning bets) of their expressing meaningful credences. If they were more careful about that, they would stipulate which population samples they are talking about when they say "I have credence P that O is true when I am in situation S." In that case, the way in which they choose to define what constitutes S and O would reflect their interests in predicting the frequency of such Os (e.g. Heads) in such Ss (e.g. awakenings or experimental runs). And then, thirders and halfers would simply stop talking past each other.
sime June 07, 2023 at 12:32 #813595
Quoting Pierre-Normand
Because her credence wasn't meant to reflect the proportion of winning bets but rather the proportion of awakening episodes that occur during an experimental run that results in the (initial) bet being won. Those two aren't the same. If Sleeping Beauty had instead been afforded the opportunity to bet on each awakening episode, then her winning frequencies would exactly match her credence that she is winning that bet.


The question she is asked is in the context of a single trial. It is merely

"What is your credence now for the proposition that the coin landed heads?"

The problem refers to the outcome of a particular coin toss, with a sample space S = {H,T} describing two possible worlds. So the problem does not concern frequencies. Both halfers and Elga agree that the propensity of the coin to land heads is 1/2. So the measure M over S is

M (H) = M(T) = 1/2

We are given two causal implications with respect to S that characterise two random variables, namely

Awaken_Mon (H) = True
Awaken_Mon (T) = True

and

Awaken_Tue (H) = False
Awaken_Tue (T) = True

Both of which are "measured" by M to give

P( Awaken_Mon = True) = 1
P( Awaken_Tue = True ) = 1/2

This fully describes the stated causal relationships of the Sleeping Beauty Problem, that go from the state of the coin to the states of two awakening events.

How do you suppose that we

(1) relate the random variables, whose outcomes refer to events in the B series of time, to the indexical "Today" of the A series, whose relationship to the B series and to the physics of coin tosses isn't specified by the problem?

and

(2) apply induction to "invert" the causal direction from "Today" to the state of the coin, assuming we have a satisfactory answer to (1)?

Unless thirders can answer (1) and (2), their epistemic probabilities are meaningless.


Pierre-Normand June 07, 2023 at 12:42 #813596
Quoting Michael
I think a distinction needs to be made between the probability of making a correct prediction and the frequency of making a correct prediction. That a correct prediction of tails is twice as frequent isn't that a correct prediction of tails is twice as probable – at least according to Bayesian probability.

Maybe thirders who use betting examples are simply frequentists?

Perhaps there also needs to be a distinction made between the probability of making a correct prediction and the probability of the coin having landed tails. It could be that the answers are different. This might be especially true for frequentists, as the frequency of correct predictions is not the same as the frequency of coins landing tails (there can be two correct predictions for every one coin that lands tails).


While Bayesians and frequentists might disagree about the meaning of probability assignments to potential outcomes of experiments, with the former asserting that they depend on degrees of belief, and the latter contending that they're based on expected long-run frequencies of such outcomes, the dispute between thirders and halfers seems more closely related to the class of events being predicted. The Bayesian framework is equally well-suited for deriving the thirder result, as long as the probability in question is the chance of finding oneself within a 'tails' experimental run, given the evidence that one is currently awake.
Pierre-Normand June 07, 2023 at 13:26 #813599
Quoting sime
How do you suppose that we

(1) relate the random variables, whose outcomes refer to events in the B series of time, to the indexical "Today" of the A series, whose relationship to B series and to the physics of coin-tosses isn't specified by the problem?


This is actually straightforward. Imagine a spatial analogy. The 'B series of space' might be represented by Cartesian coordinates on a surface or by room numbers. The 'A series of space' could correspond to indexical names of locations like "here", or "this room". In the example I gave earlier, Sue Sitter is randomly assigned to a room in the Sleeping Beauty Experimental Facility. There are twice as many rooms allocated to participants who are scheduled for two awakenings (resulting from a coin landing tails), as opposed to one awakening (from a coin landing heads).

How does Sue deduce that "here" indicates she is in a room occupied by a participant scheduled for two awakenings with a probability of 2/3? She draws this conclusion from the fact that twice as many such rooms exist in the facility compared to rooms allocated to participants with only one scheduled awakening. So, from Sue's perspective (based on the exact same evidence she shares with the participant), she concludes that the coin landed tails with a 2/3 probability, despite the coin having a 1/2 propensity to land tails. Sue's credence that the coin landed tails is a consequence of both the initial propensity of the coin to land tails and the propensities of the experimental setup to place her in a room that corresponds to a tails outcome.
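A small simulation sketch of this kind of facility (a hypothetical illustration: one coin per participant, heads contributing one occupied room-day and tails two, with Sue assigned to an occupied room uniformly at random) reproduces the 2/3 figure:

```python
import random

random.seed(0)
room_days = []                      # one entry per occupied room-day
for _ in range(100_000):            # many independent participants
    coin = random.choice(["H", "T"])
    room_days.extend([coin] * (1 if coin == "H" else 2))

# Sue is dropped into an occupied room-day chosen uniformly at random.
samples = [random.choice(room_days) for _ in range(100_000)]
print(samples.count("T") / len(samples))   # ~0.667
```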

(2) How should we apply induction to "invert" the causal direction from "Today" to the state of the coin, assuming we have a satisfactory answer to (1) ?


The causal flow goes from 'Coin toss results' -> 'Distribution of tails and heads assigned rooms' -> 'Evidence of being in a random room' -> 'Sue calculating her credence'. Therefore, there is no causal paradox.
Michael June 07, 2023 at 13:43 #813600
Quoting Pierre-Normand
So, from Sue's perspective (based on the exact same evidence she shares with the participant), she concludes that the coin landed tails with a 2/3 probability, despite the coin having a 1/2 propensity to land tails. Sue's credence that the coin landed tails is a consequence of both the initial propensity of the coin to land tails and the propensities of the experimental setup to place her in a room that corresponds to a tails outcome.


I think that part in bold is key.

There are two Sleeping Beauties, A and B. If the coin lands heads then A will be woken once and B twice, otherwise B will be woken once and A twice.

When woken each is asked their credence that they have been or will be woken twice.

Each of the three sitters is also asked their credence that their participant has been or will be woken twice.

Now it may be tempting to argue that the sitters ought to reason as if their interview has been randomly selected from the set of all interviews, but that might not be technically correct. The only way I can think of for this to be done is for one of the sitters to be randomly assigned an interview from the set of all interviews (of which there are 3), for one to be randomly assigned an interview from the set of all remaining interviews (of which there are 2), and then one to be assigned the final interview.

So rather the sitters should reason as if they are randomly selected from the set of all sitters, and so their credence that their participant has been or will be woken twice will be 2/3.

The question, then, is whether or not the participant should reason as if they are randomly selected from the set of all participants, and so their credence that they have been or will be woken twice is 1/2. I will continue to say that they should, given the propensities of the experimental setup to place them in the position to be woken twice.

That the participant who is woken twice will be right twice as often (if she guesses that she has been or will be woken twice) isn't that each participant's credence should be that they are twice as likely to be woken twice.
sime June 07, 2023 at 13:44 #813601
Quoting Pierre-Normand
How does Sue deduce that "here" indicates she is in a room occupied by a participant scheduled for two awakenings with a probability of 2/3? She draws this conclusion from the fact that twice as many such rooms exist in the facility compared to rooms allocated to participants with only one scheduled awakening. So, from Sue's perspective (based on the exact same evidence she shares with the participant), she concludes that the coin landed tails with a 2/3 probability, despite the coin having a 1/2 propensity to land tails. Sue's credence that the coin landed tails is a consequence of both the initial propensity of the coin to land tails and the propensities of the experimental setup to place her in a room that corresponds to a tails outcome.


Two "tail" rooms refer to the same coin outcome. So when computing the probability that the coin of her room landed heads, she needs to take into consideration the other room she might have occupied.

So assuming that her supervisors assign her room using a draw from a uniform distribution, her credence that the coin of her room landed heads should be 1/2.

sime June 07, 2023 at 14:09 #813604
Now suppose she doesn't make any assumptions regarding her room allocation. What then?

For all she knows, she might have been hand-picked to babysit a "heads" room, in which case the probability that her room is of a heads coin is 1. Due to possibilities such as this, she should

1) Refrain from committing to a probability that her room is of a heads coin, until further information is given.

but nevertheless

2) Deduce that the probability of her room coin landing heads is 1/2, given that it is known information.
Pierre-Normand June 07, 2023 at 14:11 #813606
Quoting sime
Two "tail" rooms refer to the same coin outcome. So when computing the probability that the coin of her room landed heads, she needs to take into consideration the other room she might have occupied.


Have you had a look at the setup I had described here to provide a pragmatist account of the thirder/halfer dispute?

On a particular day, when Sue is assigned to a random room in her role as a sitter, all the rooms are occupied by different participants in the study. Each of their schedules was initially determined by a separate coin. Participants scheduled for two consecutive days of awakening might be there for their first or second day (with equal probabilities). In this case, there is a "tail outcome" either tomorrow or yesterday, which you might wish to label as "the same outcome". However, this is irrelevant to the coin currently hidden under the cardboard on the nightstand next to the participant's bed, about which Sue is expressing her credence today based on the available information.

So assuming that her supervisors assign her room using a draw from a uniform distribution, her credence that the coin of her room landed heads, should be 1/2


Why would that be? Two thirds of the rooms occupied by participants on any given day are occupied by individuals who have been scheduled for two awakenings (on two consecutive days) because a coin landed tails.
Pierre-Normand June 07, 2023 at 14:21 #813611
Quoting sime
Now suppose she doesn't make any assumptions regarding her room allocation. What then?

For all she knows, she might have been hand-picked to babysit a "heads" room. In which case the probability that her room is of a heads coin, is 1. Due to possibilities such as this, she should

1) Refrain from committing to a probability that her room is of a heads coin, until further information is given.

but nevertheless

2) Deduce that the probability of her room coin landing heads is 1/2, given that it is known information.


If that were the case, the problem would transform into a different version of the Sleeping Beauty problem. This altered version would have Sleeping Beauty awakening only once regardless of the coin's outcome, or not being informed about the number of awakenings in either case. Indeed, in such setups, there would be no grounds for adopting a 'thirder' stance. However, in the problem statement in the original post, the information regarding the number of awakenings following the heads and tails outcomes is provided to Sleeping Beauty. Hence, the thirder stance remains relevant.
Pierre-Normand June 07, 2023 at 14:45 #813616
Quoting Michael
The question, then, is whether or not the participant should reason as if they are randomly selected from the set of all participants, and so their credence that they have been or will be woken twice is 1/2. I will continue to say that they should, given the propensities of the experimental setup to place them in the position to be woken twice.


Why should it be either one or the other? One of them expresses a credence that an awakening (the current one) is occurring as a result of a coin having landed tails (E1) while the other one expresses a credence that an experimental run (the current one) is occurring as a result of a coin having landed tails (E2). They are credences about different kinds of events (E1 and E2) that have different temporal and spatial distributions. They can both be (and indeed are) true at the same time, and both can be appealed to in order to rationally justify different bets or other actions that are dependent on such events happening.

There is a twist that comes from the fact that a biconditional holds between the two propositions "E1 is now occurring" and "E2 is now occurring". How can they therefore have different probabilities of occurrence? This puzzle is solved by attending to the practical implications of establishing effective procedures for verifying their truths, or to the means of exploiting what such truths afford.
Michael June 07, 2023 at 14:50 #813617
Quoting Pierre-Normand
There is a twist that comes from the fact that a biconditional holds between the two propositions "E1 is now occurring" and "E2 is now occurring". How can they therefore have different probabilities of occurrence? This puzzle is solved by attending to the practical implications of establishing effective procedures for verifying their truths, or to the means of exploiting what such truths afford.


I don't see the connection between credence in an outcome and practical implications. Proving that the optimum betting strategy over multiple games is to bet on tails doesn't verify that P(Tails|Awake) = 2/3 is true.

If there's a lottery where the probability of winning is 1/1,000,000 but the cost is £1 and the prize is £2,000,000 then it can be profitable to play 1,000,000 times, but it is still the case that for each lottery one's credence in winning should be 1/1,000,000.

Given that E1 iff E2 as you say, I would say that P(E1) = P(E2), and that the most appropriate credence for E2 (and so also E1) is 1/2, irrespective of practical considerations.
PhilosophyRunner June 07, 2023 at 15:45 #813631
Quoting Michael
That a correct prediction of tails is twice as frequent isn't that a correct prediction of tails is twice as probable – at least according to Bayesian probability.


This is one part of halfers' thinking I don't understand. There is something in statistics called the law of large numbers that allows just this inference.

We have a disagreement over a dice. I say that it is more likely to land on 5; you say all numbers are equally likely. What can we do?

The answer is simple: do an experiment where we roll it many times. If 5 comes up 5 million times out of 10 million rolls, then I will have high credence that 5 is more likely to come up on the next one-off roll we do.

Halfers seem to be saying that we can't use the results of the 10 million rolls to give credence to what we think is more probable for the next one-off roll. I disagree; we certainly can.
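In the Sleeping Beauty setup itself, the repeated-experiment tally can be written down directly (a rough Monte Carlo sketch, assuming a fair coin and the one-awakening-on-heads / two-on-tails schedule); it shows both of the frequencies the two camps point to:

```python
import random

random.seed(1)
runs = 1_000_000
heads_runs = heads_awakenings = total_awakenings = 0
for _ in range(runs):
    coin = random.choice(["H", "T"])
    n = 1 if coin == "H" else 2               # awakenings in this run
    heads_runs += coin == "H"
    heads_awakenings += n if coin == "H" else 0
    total_awakenings += n

print(heads_runs / runs)                      # ~0.50: heads frequency per run
print(heads_awakenings / total_awakenings)    # ~0.33: heads frequency per awakening
```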
Michael June 07, 2023 at 15:47 #813632
Reply to PhilosophyRunner Take away the amnesia. Does it follow that because there are two interviews after every tails that a tails interview is twice as probable?

Throwing in amnesia doesn't convert the increased frequency into an increased probability.
PhilosophyRunner June 07, 2023 at 15:51 #813634
Reply to Michael Yes, an individual tails interview event is twice as probable. A tails interview where the Monday and Tuesday interviews are grouped together and not seen as different events is just as likely as a heads interview. It comes back to the language of the question and its interpretation.
Michael June 07, 2023 at 15:54 #813637
Quoting PhilosophyRunner
Yes, an individual tails interview event is twice as probable. A tails interview where Monday ans Tuesday interviews are grouped together is equally likely as a heads interview. it comes back to the language of the question and interpretation.


That I believe is a bad interpretation of probability.

The probability of the coin landing heads is 1/2, leading to one interview.
The probability of the coin landing tails is 1/2, leading to two interviews.

The probability that there will be a heads interview is 1/2.
The probability that there will be a tails interview is 1/2.

This is the correct interpretation.
Pierre-Normand June 07, 2023 at 16:04 #813640
Quoting Michael
I don't see the connection between credence in an outcome and practical implications. Proving that the optimum betting strategy over multiple games is to bet on tails doesn't verify that P(Tails|Awake) = 2/3 is true.


In the Sleeping Beauty case, betting on tails at each awakening isn't merely profitable over the long run. It's also more likely to be a winning bet at each occasion of betting. The likelihood that it will be a winning bet is exactly 2/3. Similarly, in the "less-extreme" variation, each occasion of betting is more likely to result in a win (6 out of 11 bets are winning bets).

Following Pradeep Mutalik's suggestion, '[a]ccording to the Bayesian "Dutch Book argument" I mentioned, "a degree of certainty" or "degree of belief" or "credence" is, simply, your willingness to wager*. Specifically, if you have a "degree of certainty" of 1/n then you should be willing to accept a bet that offers you n or more dollars for every dollar you bet.'

In that case, it's not merely the expected value of the bet that determines the credence. Rather, it's your degree of certainty, 1/n, in the outcome being wagered on that makes you rationally justified in accepting a bet with such odds.
PhilosophyRunner June 07, 2023 at 16:04 #813641
Reply to Michael

For you the Monday heads interview is an A event. The Monday tails interview is a B event, and the Tuesday tails interview is a B event as well (it is the same tails event as far as you are concerned).

For me the Monday heads interview is an A event. The Monday tails interview is a B event, the Tuesday tails interview is a C event (they are different interviews as far as I am concerned).

You care about the coin only; I care about the interview moment combined with the coin.

For me the coin showing tails on Monday is a completely different showing to it showing tails on a Tuesday. For you it is the same flip so it is the same showing.

What is the probability of the Mona Lisa showing? For you it will be 100% as we know it has been painted. For me it would depend on the time of day and the date. The probability of it showing during the day is higher than the probability of it showing at night, as the Louvre is only open on some special nights.

Different interpretations of what it means for the Mona Lisa to be showing.
Michael June 07, 2023 at 16:08 #813642
Quoting Pierre-Normand
Following Pradeep Mutalik's argument, according to the Bayesian "Dutch Book argument", "a degree of certainty" or "degree of belief" or "credence" is essentially your willingness to wager. Specifically, if you have a "degree of certainty" of 1/n, then you should be willing to accept a bet that offers you n or more dollars for every dollar you bet.

In that case, it's not merely the expected value of the bet that determines the credence. Rather, it's your degree of certainty, 1/n, in the outcome being wagered on that makes you rationally justified in accepting a bet with such odds.


Then apply this to my case of tossing the coin one hundred times, and where the experiment is only run once.

Will you bet that the coin landed heads 100 times in a row? I wouldn't. My credence is that it almost certainly didn't land heads 100 times in a row, and that this is almost certainly my first and only interview.

That I would be woken up a large number of times if it did land heads 100 times in a row just doesn't affect my credence or willingness to bet that it did at all.

And the same if I was one of 2[sup]100[/sup] participants taking part. One person might win, but I will almost certainly not be the winner.

Only if I got to repeat the experiment 2[sup]100[/sup] times would I bet that it did. But not because my credence for any particular experiment has increased; it's because my credence is that I'm likely to win at least once in 2[sup]100[/sup] attempts, and that the winnings for that one time will exceed the sum of all my losses.
PhilosophyRunner June 07, 2023 at 16:16 #813644
Quoting Michael
Will you bet that the coin landed heads 100 times in a row? I wouldn't. My credence is that it almost certainly didn't land heads 100 times in a row, and that this is almost certainly my first and only interview.


Fair enough, but then a person betting that it did land on heads 100 times in a row will have a greater expected value for their winnings (as long as the winnings for heads are more than 2^100 times those for tails). And their position would be the rational one: maximizing your expected value when betting is the rational position.
Michael June 07, 2023 at 16:18 #813645
Quoting PhilosophyRunner
Fair enough, but then a person betting that it did land on heads 100 times in a row will have a greater expected value for their winning (as long as the winnings for heads are greater than 2^100 than for tails). And their position would be the rational one.


It can be rational in the sense that it can be profitable to bet when the expected value is greater than the cost, much like a lottery that costs £1 with a prize of £2,000,000 and a probability of winning of 1/1,000,000.

But it's not rational to believe (especially when playing once) that I am most likely to win betting that it landed heads 100 times. You're not most likely to win. The odds of winning are [math]1\over2^{100}[/math].
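For the 100-heads variant, the two quantities being argued over can be computed exactly (a short sketch using Python's fractions module; the 2[sup]101[/sup]-awakenings schedule is the one from the example above):

```python
from fractions import Fraction

p_100_heads = Fraction(1, 2) ** 100               # chance of 100 heads in a single run

# Awakenings per run: 2**101 after 100 heads, otherwise just one.
expected_awakenings = p_100_heads * 2**101 + (1 - p_100_heads) * 1
share_of_awakenings = p_100_heads * 2**101 / expected_awakenings

print(float(p_100_heads))          # ~7.9e-31: per-run probability of 100 heads
print(float(share_of_awakenings))  # ~0.667: share of all awakenings that follow 100 heads
```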
Pierre-Normand June 07, 2023 at 16:19 #813647
Quoting Michael
Will you bet that the coin landed heads 100 times in a row? I wouldn't. My credence is that it almost certainly didn't land heads 100 times in a row, and that this is almost certainly my first and only interview.


@PhilosophyRunner's comment is on point. You awakening once while the coin landed tails is akin to a showing of the Mona Lisa on a particular day. The coin having landed tails rather than heads is akin to Leonardo da Vinci having produced a portrait of Mona Lisa rather than Marco Luciano.
PhilosophyRunner June 07, 2023 at 16:26 #813651
Reply to Michael It would be rational also in the sense that you are more likely to win on a particular guess (which would not be the case in a normal lottery with a £2,000,000 prize and a 1/1,000,000 probability of winning at £1 per ticket, as you point out).

If you repeated the experiment a trillion times and kept a note of whether your guess was correct or not each time, and I did the same, we would find that I got it correct more often than you. By the law of large numbers that would mean the outcome I guessed for was more probable than yours.
Michael June 07, 2023 at 16:28 #813653
Quoting PhilosophyRunner
If you repeated the experiment a trillion times, and kept a note of whether you guess was correct or not each time, and I did the same. We would find that I got it correct more than you. By the law of large numbers that would mean the outcome I guessed for was more probable than yours.


More frequent but not more probable.

If the game is played once I wouldn't argue that the coin most likely landed heads 100 times in a row and that my interview is most likely a 100-heads-in-a-row interview. I would argue that the coin most likely didn't land heads 100 times in a row and that this is most likely my first and only interview.

I think using frequencies over multiple games to argue for the probability in a single game is a non sequitur.
Pierre-Normand June 07, 2023 at 16:30 #813656
Quoting PhilosophyRunner
Fair enough, but then a person betting that it did land on heads 100 times in a row will have a greater expected value for their winning (as long as the winnings for heads are greater than 2^100 than for tails). And their position would be the rational one.


Indeed, not only would their expected value (EV) be positive, but it would be positive because the majority of their individual bets would be winning bets. Michael, it seems, disagrees with the idea of individuating bets in this way. However, this resistance appears to stem from an unwillingness to assign probabilities to the possible involvement of epistemic agents in specific kinds of events. Instead, like @sime, Michael prefers to attribute probabilities to the propensities of objects being realized as seen from a detached, God's-eye-view perspective.
Michael June 07, 2023 at 16:32 #813657
Quoting Pierre-Normand
Indeed, not only would their expected value (EV) be positive, but it would be positive because the majority of their individual bets would be winning bets. Michael, it seems, disagrees with the idea of individuating bets in this way.


I disagree with the step from "the majority of winning bets are tails bets" to "tails is more probable".

It's either a non sequitur or affirming the consequent, where the implicit premise is "if tails is more probable then the majority of winning bets are tails bets".

In this case the majority of winning bets are tails bets only because you get to place more bets if it's tails.

This is why, as I have often said, betting examples just don't answer the question at all. They're a red herring. Betting on tails might be more profitable, but it is still the case that one's credence should be that P(Heads|Awake) = 1/2.
sime June 07, 2023 at 16:38 #813663
Reply to Pierre-Normand

The sample space of any room is { H, (T,F), (T,S) }

where F and S refer to First Stay and Second Stay, respectively

with probability measure

M(H) = 1/2
M(T,F) = 1/4
M(T,S) = 1/4

(a consequence of your assumed prior probabilities )

Define a variable indicating the stay

Stay (H) = First
Stay (T,F) = First
Stay (T,S) = Second


P(Stay = First) = M (H) + M(T,F) = 3/4
P(Stay = Second) = 1/4

That's all that can be said, unless I've overlooked an aspect of your problem. Which stay it is would give new information to the sitter about the coin, but alas she doesn't know this information. To obtain a statistically interpretable result for the hotel as a whole, simply take the product space of the sample spaces for each room, and take the product of each room-specific measure M.

As before, the prior probability of the coin landing heads is given in the premises, and the sitter has no new information upon which to condition the state of the coin, meaning that its probabilities remain unchanged.

I think the version of the hotel in which subjects are assigned to different rooms on each awakening is more interesting, because it reveals the importance of counterfactual reasoning when the sitter allocates her credences, which thirders blindly ignore in their "picture theory" of credence allocation.
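For what it's worth, the single-room arithmetic above checks out by direct enumeration (a trivial sketch under the stated priors):

```python
# Single-room sample space and measure, with the priors as stated above.
M = {"H": 0.5, ("T", "F"): 0.25, ("T", "S"): 0.25}
stay = {"H": "First", ("T", "F"): "First", ("T", "S"): "Second"}

p_first = sum(p for outcome, p in M.items() if stay[outcome] == "First")
print(p_first, 1 - p_first)   # 0.75 0.25
```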

Michael June 07, 2023 at 20:58 #813716
@Pierre-Normand

Thought you might be interested in my short exchange with Elga:

[quote=Michael]Dear Professor Elga,

I've read your paper Self-locating belief and the Sleeping Beauty problem and hope you could answer a question I have regarding your argument. You state that "P(T[sub]1[/sub]|T[sub]1[/sub] or T[sub]2[/sub]) = P(T[sub]2[/sub]|T[sub]1[/sub] or T[sub]2[/sub]), and hence P(T[sub]1[/sub]) = P(T[sub]2[/sub])" and by implication state that P(H[sub]1[/sub]|H[sub]1[/sub] or T[sub]1[/sub]) = P(T[sub]1[/sub]|H[sub]1[/sub] or T[sub]1[/sub]), and hence P(H[sub]1[/sub]) = P(T[sub]1[/sub]).

However I cannot see in the paper where this inference is justified, as it is not valid a priori.

If I have one red ball in one bag and two numbered blue balls in a second bag, and I pick out a ball at random and show it to you then P(R|R or B[sub]1[/sub]) = P(B[sub]1[/sub]|R or B[sub]1[/sub]) but P(R) = ½ and P(B[sub]1[/sub]) = ¼.

So the (double-)halfer can accept that P(H[sub]1[/sub]|H[sub]1[/sub] or T[sub]1[/sub]) = P(T[sub]1[/sub]|H[sub]1[/sub] or T[sub]1[/sub]) but reject your assertion that P(H[sub]1[/sub]) = P(T[sub]1[/sub]) follows. Is there something in your paper that I missed to justify this inference?

Thanks for your time.[/quote]

[quote=Elga]Dear Michael,

Thanks for your interest in this stuff. The form of reasoning I had in mind was the following chain of entailments:

P(X|X or Y) = P(Y|X or Y)
P(X&(X or Y))/P(X or Y) = P(Y&(X or Y))/P(X or Y)
P(X)/P(X or Y) = P(Y)/P(X or Y)
P(X) = P(Y).

I wish you the best with your research.[/quote]

Unfortunately I don't quite see how it addresses my counterexample, which seems to show that there must be a mistake with that chain of entailments, but I won't push him on it.
PhilosophyRunner June 07, 2023 at 21:50 #813735
Quoting Michael
I think using frequencies over multiple games to argue for the probability in a single game is a non sequitur.


I simply can't agree with this. Using frequencies over multiple games to argue for the probabilities in a single game is a fundamental way probabilities are calculated.

If you ask me what the probability is that this dice in my hand will roll a 6, I can roll the dice a million times and that will give me credence for the probability that a 6 will roll the next time I roll the dice. And so on.

If 6 rolls 900,000 times out of the million, I am completely justified in having credence that 6 is more likely to roll on the next single, one-off roll I will do.

You seem to be suggesting that 6 having rolled 900,000 times out of a million should not affect my thinking about the probability of 6 being rolled in the next one-off event at all. That makes no sense to me.
PhilosophyRunner June 07, 2023 at 22:06 #813738
Quoting Pierre-Normand
Indeed, not only would their expected value (EV) be positive, but it would be positive because the majority of their individual bets would be winning bets. Michael, it seems, disagrees with the idea of individuating bets in this way. However, this resistance appears to stem from an unwillingness to assign probabilities to the possible involvement of epistemic agents in specific kinds of events. Instead, like sime, Michael prefers to attribute probabilities to the propensities of objects being realized as seen from a detached, God's-eye-view perspective.


Exactly, the disagreement stems from the perspective from which the probability is being calculated.

Michael June 07, 2023 at 22:07 #813741
Quoting PhilosophyRunner
Using frequencies over multiple games to argue for the probabilities in a single game is a fundamental way probabilities are calculated.


Only when it's appropriate to do so. It is in the case of rolling a dice; it isn't in the case of counting the number of awakenings.

Again, it doesn't matter that if the coin lands heads 100 times in a row then I will be woken 2[sup]101[/sup] times. When I'm put to sleep, woken up, and asked my credence that the coin landed heads 100 times in a row – or my credence that my current interview is a 100-heads-in-a-row interview – the only thing that's relevant is the probability of a coin landing 100 heads in a row, which is [math]1\over2^{100}[/math]. It simply doesn't matter that if the experiment were repeated 2[sup]100[/sup] times then [math]2\over3[/math] interviews are 100-heads-in-a-row interviews.

If you want to say that it must still have to do with frequencies, then what matters is the frequency of a coin landing heads 100 times in a row, not the frequency of interviews that follow the coin landing heads 100 times in a row. You're using an irrelevant frequency to establish the probability.
PhilosophyRunner June 07, 2023 at 22:11 #813744
Reply to Michael
Ok, let me try a different method. Will your stance change if the question asked of Sleeping Beauty is "What is the probability that you see the coin with its heads side up when you look at it now?"

Would this change your stance? Maybe it is clearer here that the frequency with which she looks at each outcome affects the probability that she will see that outcome. If she looks more often on tails (twice as often), then she is more likely to see tails on a given look.
Michael June 07, 2023 at 22:12 #813745
Reply to PhilosophyRunner That I get to see something twice doesn't mean that I'm twice as likely to see it. It just means I get to see it twice.
PhilosophyRunner June 07, 2023 at 22:18 #813747
Quoting Michael
No. That I get to see something twice doesn't mean that I'm twice as likely to see it. It just means I get to see it twice.


You are not twice as likely to see it. A given seeing of it is twice as likely to be tails. Those two are very different things.

You are interested in "it." That is the event for which you are calculating your probabilities.

I am interested in "seeing of it." That is my event for which I am calculating probabilities.

Those are two very different events - "it" and "seeing of it" are different events.
Michael June 07, 2023 at 22:34 #813749
Quoting PhilosophyRunner
A given seeing of it is twice as likely to be tails.


This is an ambiguous claim. It is true that if you randomly select a seeing from the set of all possible seeings then it is twice as likely to be a tails-seeing, but the experiment doesn't work by randomly selecting a seeing from the set of all possible seeings and then "giving" it to Sleeping Beauty. It works by tossing a coin, and then either she sees it once or she sees it twice.

If we return to my example of tossing the coin 100 times, assume there are 2[sup]100[/sup] participants. Each participant knows two things:

1. Of the roughly 3 × 2[sup]100[/sup] interviews across all participants, 2[sup]101[/sup] (about two thirds) follow the coin landing heads 100 times

2. Of the 2[sup]100[/sup] participants, the coin landed heads 100 times for 1 of them

You are suggesting that they should ignore 2 and use 1 to infer a credence of [math]2\over3[/math].

I am saying that they should ignore 1 and use 2 to infer a credence of [math]1\over2^{100}[/math].

Although it's true that most interviews follow the coin landing heads 100 times, every single one of those interviews belongs to a single participant, and for each participant the probability that they are that single participant is [math]1\over2^{100}[/math].

So although it's true that "any given interview is twice as likely to have followed the coin landing heads 100 times" it is false that "my interview is twice as likely to have followed the coin landing heads 100 times".

And by the exact same token, although it's true that "any given interview is twice as likely to be tails" it is false that "my interview is twice as likely to be tails".

The likelihood of your interview being a tails interview is equal to the likelihood that the coin landed tails in your experiment, which is [math]1\over2[/math].
Pierre-Normand June 08, 2023 at 02:58 #813813
Quoting sime
The sample space of any room is { H, (T,F), (T,S) }

where F and S refer to First Stay and Second Stay, respectively

with probability measure

M(H) = 1/2
M(T,F) = 1/4
M(T,S) = 1/4

(a consequence of your assumed prior probabilities )

Define a variable indicating the stay

Stay (H) = First
Stay (T,F) = First
Stay (T,S) = Second


P(Stay = First) = M (H) + M(T,F) = 3/4
P(Stay = Second) = 1/4


In the scenario I had envisioned, where new participants had the duration of their stay premised on the throw of a six-sided die, there was a 6/11 proportion of rooms occupied by participants who had landed a six. But this can easily be adapted to the original scenario with a coin toss. In the latter case, a coin landing on tails ensures that a participant will stay for two consecutive days rather than one. Whenever a room is freed, a new participant is recruited and a new coin is tossed. I left it open how the facility was filled when it began operating, but we can assume that it was filled on the first day. In that case, the initial probability measure would have been approximately:

M(H) = 1/2
M(T,F) = 1/2
M(T,S) = 0 (exactly zero in this case, since participants can't be recruited on their second day)

On subsequent days, as Michael noticed, M(H) would evolve, on average, according to the sequence: first day M(H) = 1/2, second day 1/4, third day 3/8, fourth day 5/16, etc. This sequence converges on 1/3.

You can picture filling the facility (and keeping it full) as a process of filling a bag with 100 blue or red balls. The bag initially is filled on the first day with a process of tossing a coin 100 times. Whenever the coin lands on heads, a blue ball is put in the bag. Else, a red ball labelled R1 is put in the bag.

On subsequent nights, all the blue balls are removed from the bag. All the balls labelled R2 are also removed. All the balls labelled R1 are relabelled R2 and remain in the bag. The bag is then refilled with a series of coin tosses according to the initial procedure.

An equilibrium is rapidly reached where the bag contains blue balls, R1 balls, and R2 balls in roughly equal proportions. When all the blue and R2 balls are removed, the R1 balls relabelled R2, and the removed balls replaced with new balls that are either blue or red (R1) according to the toss of a coin, the 1/3, 1/3, 1/3 proportions are maintained. This process ensures that the temporal frequencies of awakening events for individual participants match the (average) spatial distribution on any single day.
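The day-by-day evolution of M(H) mentioned above (1/2, 1/4, 3/8, 5/16, ...) can be generated with a few lines (a sketch of the refilling recursion in terms of expected proportions, assuming a large facility):

```python
from fractions import Fraction

# Expected proportions of rooms in states H, T1 (first tails day), T2 (second tails day).
h, t1, t2 = Fraction(1, 2), Fraction(1, 2), Fraction(0)
for day in range(1, 8):
    print(day, h)                            # 1/2, 1/4, 3/8, 5/16, 11/32, 21/64, 43/128 -> 1/3
    freed = h + t2                           # heads rooms and second-day tails rooms are vacated
    h, t1, t2 = freed / 2, freed / 2, t1     # vacated rooms are refilled by fresh coin tosses
```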
Pierre-Normand June 08, 2023 at 03:25 #813819
Quoting Michael
I disagree with the step from "the majority of winning bets are tails bets" to "tails is more probable".


This inference is only invalid inasmuch as it may lead to an equivocation. When you say "tails is more probable," what exactly do you mean? If you're suggesting that, according to the bettor's prior credence, it's more probable that the bettor would find themselves in circumstances where the coin toss resulted in tails, then I agree, the inference would be unwarranted. However, if what you mean is that, from the bettor's perspective and in light of the evidence available to them at the time of betting, the bet (distinguished from other bets within the same experimental run, which from the agent's point of view, may or may not exist) is more likely to have been placed in circumstances where the coin landed tails, then I would argue that the inference is indeed warranted.
Pierre-Normand June 08, 2023 at 04:07 #813833
Michael:If I have one red ball in one bag and two numbered blue balls in a second bag, and I pick out a ball at random and show it to you then P(R|R or B1) = P(B1|R or B1) but P(R) = ½ and P(B1) = ¼.


What you mean, of course, is that you are picking a bag at random, and in a second stage picking a ball from that bag at random. I assume Elga would have understood this is what you meant.

Does that procedure accurately represent how Sleeping Beauty understands her own epistemic situation when she is being awakened on a day of interview, though? If, in cases where she is scheduled for two awakenings, a second coin would be tossed to determine on which day she is being awakened (either Monday or Tuesday), then this means that, contrary to the problem stipulation, whenever the first coin landed tails, Sleeping Beauty would only be woken up once, not twice, although she would remain uncertain about the day of her awakening even after being informed that the first coin landed on tails.

But a setup in which, following a tails result on the first coin toss, Sleeping Beauty is awakened both on Monday and on Tuesday places her in a different epistemic situation from a setup in which she is then awakened at random on only one of those two days according to another coin toss. Conflating the two procedures also yields a conflation of the branching of possible worlds (as a result of intrinsic propensities of objects being realized) with the branching of centered possible worlds (as a result of epistemic agents finding themselves in various causal and epistemic relationships with such objects at different stages in their lives).
Pierre-Normand June 08, 2023 at 07:15 #813878
Here is a follow-up to my previous clarificatory remark to @sime regarding the statistical distribution of the participants in the waking rooms of the Sleeping Beauty Experimental Facility:

PN:

Here is a thought that just occurred to me. Returning to Sime's setup of priors regarding the distribution of participants in separate rooms:

P(H) = 1/2; P(T1) = 1/4; P(T2) = 1/4

While I have been advocating to both Sime and Michael that a set of priors with 1/3, 1/3, and 1/3 ratios is more representative of the original Sleeping Beauty setup, I've just realized that Sime's assumption could also present a problem if he were to inquire about the conditional probability P(H|'Meeting on the first day'). In that case, Sue is twice as likely to have been assigned a participant with the first coin having landed on heads. This result aligns with what Lewis obtained under his own halfer analysis. But that means that both Sue and the participant she has been assigned to, once informed that the participant is experiencing her first awakening, know that the probability of the coin having landed on heads is now 2/3. They might know this even in the absence of anyone having looked at the coin, or even before the coin has been tossed.

GPT-4:

Your realization highlights an interesting aspect of Sime's setup. Under his initial priors (P(H) = 1/2, P(T1) = 1/4, P(T2) = 1/4), Sue would indeed conclude that the probability of the coin having landed on heads is now 2/3 after learning that she's meeting a participant on their first awakening. This aligns with Lewis' halfer analysis, which Sime might not have intended.

Your point about Sue and the participant knowing this even before the coin is tossed or looked at demonstrates a potential paradox or counter-intuitive aspect of the setup. It seems strange that probabilities can be adjusted based on conditions that haven't occurred yet, which could be a basis for further discussions on the interpretation of probability and its application to these types of problems. This could lead to discussions about objective vs subjective interpretations of probability, the nature of knowledge and information, and the meaning of conditional probabilities.

In essence, Sime's setup appears to inadvertently support the position he's arguing against. You've made an important observation that could further illuminate the debate surrounding the Sleeping Beauty problem.

PN:

Under Sime's assumed priors, and the consistent distribution of participants in rooms labelled H, T1, and T2, Sime's conclusion that P(H|T1 or H) = 2/3 would be correct and uncontroversial. The fact that this leads to a weird ability to predict unequal frequencies from future tosses of a fair coin merely highlights that Sime's setup is inconsistent with the setup (and room-state distribution) having been determined in the manner described in the original Sleeping Beauty setup for individual participants. Sime's 2/3 conditional result would be consistent with his (and Michael's) alternative setup where a second independent coin toss determines whether Sleeping Beauty is awakened on Day1 or Day2 (but not both) following an initial tails result on the first coin toss.

GPT-4:

Yes, indeed. Sime's setup is more akin to an alternate interpretation of the Sleeping Beauty problem, where the awakening on Day 1 or Day 2 is determined by a second independent coin toss following an initial tails result. In this modified setup, the conclusion that P(H|T1 or H) = 2/3 would indeed be correct and uncontroversial, because the conditions under which Sleeping Beauty is awakened are fundamentally different from the original problem.

It's a subtle distinction but a crucial one. By changing the conditions of the problem in this way, Sime and Michael introduce a new factor (the second coin toss) that alters the statistical distribution of awakenings. This demonstrates the importance of carefully considering the specifics of a problem setup when interpreting the resulting probabilities.

However, this deviation from the original problem setup could be seen as sidestepping the core philosophical issues presented by the Sleeping Beauty problem, such as the nature of self-locating belief and the distinction between personal and impersonal probabilities. For these reasons, it's important to clarify the specifics of the setup when discussing the Sleeping Beauty problem, to ensure that everyone is debating the same problem.
Michael June 08, 2023 at 07:52 #813883
Quoting Pierre-Normand
Does that procedure accurately represent how Sleeping Beauty understands her own epistemic situation when she is being awakened on a day of interview, though?


It's not intended to. It's intended to show that this inference is not valid a priori:

P(A|A or B) = P(B|A or B)
∴ P(A) = P(B)

Elga's argument depends on this inference but he doesn't justify it.

His chain of entailments when applied to my counterexample leads to a false conclusion, and so it needs to be explained why this chain of entailments is valid for the Sleeping Beauty case.
Michael June 08, 2023 at 08:01 #813884
Quoting Pierre-Normand
However, if what you mean is that, from the bettor's perspective and in light of the evidence available to them at the time of betting, the bet (distinguished from other bets within the same experimental run, which from the agent's point of view, may or may not exist) is more likely to have been placed in circumstances where the coin landed tails, then I would argue that the inference is indeed warranted.


I believe this response to PhilosophyRunner addresses this claim. Specifically:

Although it's true that most interviews follow the coin landing heads 100 times, every single one of those interviews belongs to a single participant, and for each participant the probability that they are that single participant is [math]1\over2^{100}[/math].

So although it's true that "any given interview is twice as likely to have followed the coin landing heads 100 times" it is false that "my interview is twice as likely to have followed the coin landing heads 100 times"
Pierre-Normand June 08, 2023 at 08:14 #813888
Quoting Michael
Elga's argument depends on this inference but he doesn't justify it.


You challenged the validity of Elga's inference by presenting what you consider a counterexample:

"If I have one red ball in one bag and two numbered blue balls in a second bag, and I pick out a ball at random and show it to you then P(R|R or B1) = P(B1|R or B1) but P(R) = ½ and P(B1) = ¼." - Michael

As I pointed out earlier, these probabilities don't result from randomly picking one ball out of three. They would instead follow from choosing one of two bags at random, and subsequently selecting a ball at random from that chosen bag, which I assumed was your intention.

In that scenario, P(R|R or B1) would be 2/3 and P(B1|R or B1) would be 1/3. Clearly, these two probabilities aren't equal, and their equality forms the antecedent of Elga's inference.

Thus, you haven't presented a counterexample to the validity of Elga's inference; rather, you've demonstrated that his conclusion doesn't follow when the premise is false. At best, you have shown his inference to be unsound.

However, you've done this by replacing Elga's premise—which aligns with the Sleeping Beauty setup—with a different premise derived from an alternative setup.
Michael June 08, 2023 at 08:16 #813891
Quoting Pierre-Normand
In that scenario, P(R|R or B1) would be 2/3 and P(B1|R or B1) would be 1/3.


How do you get that?
Pierre-Normand June 08, 2023 at 08:43 #813895
Quoting Michael
How do you get that?


P(R|R or B1) is the probability of the ball being red, given that the ball isn't labelled B2. That's because the outcome 'R or B1' is equivalent to the outcome 'not B2'. If you eliminate the possibility of 'B2', the only outcomes left are 'R' and 'B1', in the same prior proportions as before.

Applying Bayes' theorem, we have P(R|R or B1) = P(R or B1|R) * P(R) / P(R or B1). Since P(R or B1|R) is 1 (if the ball is red, it's certain that it's either red or B1), and P(R) is 1/2 (the prior probability of the ball being red), and P(R or B1) is 3/4 (the prior probability of the ball being either red or B1), we get P(R|R or B1) = (1 * 1/2) / (3/4) = 2/3.

Applying the same reasoning mutatis mutandis, we get P(B1|R or B1) = 1/3.
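The same numbers fall out of a direct enumeration of the bag-then-ball procedure (a quick sketch, assuming a fair choice of bag followed by a fair choice of ball):

```python
# Outcomes of "pick a bag at random, then a ball at random from that bag".
prior = {"R": 0.5, "B1": 0.25, "B2": 0.25}

# Conditioning on "R or B1" just eliminates B2 and renormalizes.
kept = {w: p for w, p in prior.items() if w != "B2"}
total = sum(kept.values())
posterior = {w: p / total for w, p in kept.items()}
print(posterior)   # {'R': 0.666..., 'B1': 0.333...}
```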
Michael June 08, 2023 at 08:56 #813898
Reply to Pierre-Normand Good point. Thanks for the correction.
Pierre-Normand June 08, 2023 at 08:58 #813900
Quoting Michael
Good point. Thanks for the correction.


:up:
Pierre-Normand June 08, 2023 at 09:05 #813904
Quoting Michael
I believe this response to PhilosophyRunner addresses this claim. Specifically: [...]


I'd be happy to revisit this objection but I'm going to stand back and watch a little how @PhilosophyRunner grapples with it. If I see that they are in trouble, I may jump back into the ring. I've heard that's how things are done at the WWE.
Michael June 08, 2023 at 09:18 #813905
Quoting Michael
P(Heads | Mon or Tue) = P(Mon or Tue | Heads) * P(Heads) / P(Mon or Tue)
P(Heads | Mon or Tue) = 1 * 1/2 / 1
P(Heads | Mon or Tue) = 1/2


Going back to this for a moment, I think a better way to write this would be:

P(Heads|H[sub]1[/sub] or T[sub]1[/sub] or T[sub]2[/sub]) = P(H[sub]1[/sub] or T[sub]1[/sub] or T[sub]2[/sub]|Heads) * P(Heads) / P(H[sub]1[/sub] or T[sub]1[/sub] or T[sub]2[/sub])

If Elga is right in saying that P(H[sub]1[/sub]), P(T[sub]1[/sub]), and P(T[sub]2[/sub]) sum to 1 then P(H[sub]1[/sub] or T[sub]1[/sub] or T[sub]2[/sub]) = 1.

So P(Heads|H[sub]1[/sub] or T[sub]1[/sub] or T[sub]2[/sub]) = [math]1\over2[/math].

If he's right when he says that "[you] receiv[e no] new information [but] you have gone from a situation in which you count your own temporal location as irrelevant to the truth of H, to one in which you count your own temporal location as relevant to the truth of H" then it seems correct to say that Sleeping Beauty is just being asked about P(Heads|H[sub]1[/sub] or T[sub]1[/sub] or T[sub]2[/sub]).
Pierre-Normand June 08, 2023 at 13:31 #813930
Quoting Michael
Going back to this for a moment, I think a better way to write this would be:

P(Heads|H1 or T1 or T2) = P(H1 or T1 or T2|Heads) * P(Heads) / P(H1 or T1 or T2)

If Elga is right in saying that P(H1), P(T1), and P(T2) sum to 1 then P(H1 or T1 or T2) = 1.

So P(Heads|H1 or T1 or T2) = 1/2

If he's right when he says that "[you] receiv[e no] new information [but] you have gone from a situation in which you count your own temporal location as irrelevant to the truth of H, to one in which you count your own temporal location as relevant to the truth of H" then it seems correct to say that Sleeping Beauty is just being asked about P(Heads|H1 or T1 or T2).


Your calculation seems correct, but it doesn't adequately account for the new capacity Jane gains to refer to her own temporal location using an indexical expression when updating her credence. Instead, you've translated her observation ("I am awake today") into an impersonal overview of the entire experiment ("I am scheduled to be awakened either under circumstances H1, T1, or T2"). The credence you've calculated reflects Sleeping Beauty's opinion on the ratio, over many iterations of the experiment, of (1) the number of runs resulting from a heads result, to (2) the total number of experimental runs. Indeed, this ratio is 1/2, but calculating it doesn't require her to consider the knowledge that today falls within the set {H1, T1, T2}.

Let's reconsider the scenario you proposed yesterday with two Sleeping Beauties, Jane and Helen. If the coin lands heads, Jane is awakened once and Helen twice, and vice versa if it lands tails. They occupy the same waking room on three consecutive days (twice by either Jane or Helen, depending on the coin toss). On non-awakening days, they remain in a sleeping room. Sue Sitter is assigned randomly to attend to whoever is awakened in the waking room on one of three randomly selected days.

When Sue finds Jane in the assigned room, and assuming she knows the participants and the experimental setup, her prior probabilities would be:

P(Jane awake today) = P(JAT) = 1/2, and P(H) = 1/2

Her updated credence for H is P(H|JAT) = P(JAT|H) * P(H) / P(JAT) = (1/3*1/2) / (1/2) = 1/3

Jane's priors for any random day during the experiment would be exactly the same as Sue's. When Jane is awakened on a day when Sue is assigned to her, Jane has the same information Sue has about herself, and so she can update her credence for H in the same way. She concludes that the probability of this kind of awakening experience, resulting from a heads result, is half as probable, and thus half as frequent, as identical awakening experiences resulting from a tails result. This conclusion doesn't impact the ratio of the frequency of heads-result runs to the total number of experimental runs, which remains at 1/2 from anyone's perspective.
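The 1/3 figure can also be checked by brute force (a small simulation sketch of this two-participant, three-day schedule, with Sue's day drawn uniformly at random):

```python
import random

random.seed(2)
jat = heads_given_jat = 0
for _ in range(1_000_000):
    coin = random.choice(["H", "T"])
    jane_days = {1} if coin == "H" else {1, 2}   # days on which Jane is awakened
    sue_day = random.randint(1, 3)               # Sue's randomly assigned day
    if sue_day in jane_days:                     # Sue finds Jane awake ("JAT")
        jat += 1
        heads_given_jat += coin == "H"

print(heads_given_jat / jat)   # ~0.333 = P(H | Jane awake on Sue's day)
```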
Michael June 08, 2023 at 13:36 #813932
Quoting Pierre-Normand
Your calculation seems correct, but it doesn't adequately account for the new capacity Jane gains to refer to her own temporal location using an indexical expression when updating her credence. Instead, you've translated her observation ("I am awake today") into an impersonal overview of the entire experiment ("I am scheduled to be awakened either under circumstances H1, T1, or T2"). The credence you've calculated reflects Sleeping Beauty's opinion on the ratio, over many iterations of the experiment, of (1) the number of runs resulting from a heads result, to (2) the total number of experimental runs. Indeed, this ratio is 1/2, but calculating it doesn't require her to consider the knowledge that today falls within the set {H1, T1, T2}.


I've just taken what Elga said. He says:

Combining results, we have that P(H1) = P(T1) = P(T2). Since these credences sum to 1, P(H1)=1/3.


If P(H1), P(T1), and P(T2) sum to 1 then P(H1 or T1 or T2) = 1.

Where P(H1) means "the coin landed heads and today is Monday", P(T1) means "the coin landed tails and today is Monday", and P(T2) means "the coin landed tails and today is Tuesday".
Michael June 08, 2023 at 13:41 #813933
Quoting Pierre-Normand
When Sue finds Jane in the assigned room, and assuming she knows the participants and the experimental setup, her prior probabilities would be:

P(Jane awake today) = P(JAT) = 1/2, and P(H) = 1/2

Her updated credence for H is P(H|JAT) = P(JAT|H) * P(H) / P(JAT) = (1/3*1/2) / (1/2) = 1/3

Jane's priors for any random day during the experiment would be exactly the same as Sue's. When Jane is awakened on a day when Sue is assigned to her, Jane has the same information Sue has about herself, and so she can update her credence for H in the same way. She concludes that the probability of this kind of awakening experience, resulting from a heads result, is half as probable, and thus half as frequent, as identical awakening experiences resulting from a tails result. This conclusion doesn't impact the ratio of the frequency of heads-result runs to the total number of experimental runs, which remains at 1/2 from anyone's perspective.


I've already stated why I disagree with this. The manner in which the sitter is assigned a room isn't the manner in which Sleeping Beauty is assigned a room, and so their credences will differ.

Sue should reason as if her room was randomly selected from the set of all rooms, because it was.
Jane should reason as if she was randomly selected from the set of all participants, because she was (via the coin flip).

This is clearer with my extreme example.

[math]2\over3[/math] of sitters will sit in on a 100 Heads interview, and so their credence should be P(100 Heads) = [math]2\over3[/math].

[math]1\over2^{100}[/math] of participants will have a 100 Heads interview, so their credence should be P(100 Heads) = [math]1\over2^{100}[/math].

The fact that the one participant who has a 100 Heads interview will have 2[sup]101[/sup] of them is irrelevant. It is only rational for each participant to reason that they are almost certainly not the participant who will have 2[sup]101[/sup] interviews, and so that this is almost certainly their first and only interview, and so that the coin almost certainly didn't land heads 100 times. This is, again, what I explained to PhilosophyRunner here.

The claim that because most interviews are a 100 Heads interview then my interview is most likely a 100 Heads interview is a non sequitur. Only if most participants have a 100 Heads interview could it follow that my interview is most likely a 100 Heads interview.
Pierre-Normand June 08, 2023 at 14:28 #813942
Quoting Michael
Jane should reason as if she was randomly selected from the set of all participants, because she was (via the coin flip).


Indeed, Jane reasons as if she was randomly selected from the set of all participants, as this forms the basis of her prior probabilities. What you seem to suggest is that she should not, or cannot, assign any credence or probability to specific features of her current awakening episode. However, if she is informed that the coin landed on tails, then she should and does update her credence P(H) to 1/2. She could only do this if her initial prior was 1/3. Otherwise, she'd end up in a paradox, akin to Lewis's divination of the future.

The introduction of Sue Sitter is designed to demonstrate that Jane can update her credence based on the same information available to Sue, even though the circumstances leading them to their shared situation are different. Specifically, the selection process for Sue ensures that, despite being fully aware of the experimental setup (like Jane), she shares Jane's state of ignorance (with no need for amnesia-inducing drugs) about whether her current awakening episode is unique, the first of two, or the last of two. Yet, concerning this awakening episode, they share the same priors and the same information necessary to update their credences.

If you are uncomfortable with Jane reflecting specifically about her current awakening episode, consider this: Let Jane reflect on Sue's reasoning about the probability P(H). If she deems Sue's reasoning correct, she could adjust her own credence to match Sue's, since they both concern the exact same coin toss result. Surely, Jane cannot reasonably say: 'Yes, I see you are right to conclude that the probability of the coin having landed on heads is 1/3, based on the information we share. But my belief is that it's actually 1/2.'
Michael June 08, 2023 at 14:32 #813944
Quoting Pierre-Normand
Surely, Jane cannot reasonably say: 'Yes, I see you are right to conclude that the probability of the coin having landed on heads is 1/3, based on the information we share. But my belief is that it's actually 1/2.'"


Sue's reasoning is right for Sue but wrong for Jane (and vice versa) given that [math]2\over3[/math] of sitters will sit in on a 100 Heads interview but [math]1\over2^{100}[/math] of participants will have a 100 Heads interview.
Pierre-Normand June 08, 2023 at 15:32 #813955
Quoting Michael
Sue's reasoning is right for Sue, but wrong for Jane (and vice versa), given that 2/3 of sitters will sit in on a 100 Heads interview but 1/2^100 of participants will have a 100 Heads interview.


Let's consider another scenario. Suppose Jane is an amnesiac paratrooper set to be dropped behind enemy lines for three separate missions on different nights. Conveniently, she suffers from selective anterograde amnesia which shields mission information in case she's captured. There are two potential drop zones: #1 is dry land, and #2 is a swamp.

Mission Control will toss a coin to decide Jane's destinations. If it lands heads, she will be dropped once over DZ#1 and twice over DZ#2. If it lands tails, she'll be dropped twice over DZ#1 and once over DZ#2. Jane carries a tracking device which will enable Sergeant Sue, who's already operating behind enemy lines, to locate her during one randomly selected mission, collect the mission package, and direct her back home.

Now, what's the probability that the coin landed heads? Before the experiment begins, both Sue and Jane have the same prior credence P(H) = 1/2.

However, suppose Sue locates Jane in DZ#1. What is the updated probability that the coin landed heads? It would be 1/3. Shouldn't Jane update her credence to the same value when she realizes she's landed on dry land?

Suppose the outcome of the coin toss also determines the nature of the package that Jane is carrying. If the coin landed on heads, Jane's package contains a secret explosive; if it landed tails, the package contains a secret medication. Both Jane and Sue know this arrangement beforehand.

When Sue tracks Jane and finds her in DZ#1, they both realize that Sue should deliver the package to a hospital if the coin landed tails (since the package would contain medication), or to a military outpost if the coin landed heads (since it would contain an explosive).

Now, should they both act as though the probability of the coin having landed heads is 1/3? It's not just about winning or losing a bet anymore. The decisions they make based on their credence have immediate and tangible consequences. They must update their credence according to the information they have at hand, including Jane's current location.
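
For what it's worth, the 1/3 figure can be checked with a minimal Monte Carlo sketch. It assumes only the setup above: a fair coin, three drops, and Sue intercepting one mission chosen uniformly at random; the Python is incidental.

[code]
import random

trials = 1_000_000
dz1_total = 0
heads_given_dz1 = 0

for _ in range(trials):
    heads = random.random() < 0.5
    # Heads: one drop over DZ#1 and two over DZ#2; Tails: two over DZ#1 and one over DZ#2
    drops = ["DZ1", "DZ2", "DZ2"] if heads else ["DZ1", "DZ1", "DZ2"]
    observed = random.choice(drops)    # Sue intercepts one mission chosen at random
    if observed == "DZ1":
        dz1_total += 1
        heads_given_dz1 += heads

print(heads_given_dz1 / dz1_total)     # ~1/3: P(H | Sue finds Jane in DZ#1)
[/code]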
Michael June 08, 2023 at 15:50 #813958
Reply to Pierre-Normand

So if heads then woken once on Monday and twice on Tuesday, otherwise woken twice on Monday and once on Tuesday.

Sue tells Jane that it's Monday.

What is Jane's credence that the coin landed heads?

I say 1/2.

It's exactly the same reasoning as before.

Sue should reason as if she is randomly selected from the set of all sitters, and 1/3 of sitters sitting in a Monday room are sitting in a heads room.

Jane should reason as if she is randomly selected from the set of all participants, and 1/2 of participants in a Monday room are sitting in a heads room.

This reasoning is much clearer to see in the 100 Heads example, and I don't see how any counterexample is going to change my mind about 100 Heads. If I'm one of the participants I will only ever reason that P(100 Heads) = [math]1\over2^{100}[/math]. I am almost certainly not the one participant who will have 2[sup]101[/sup] interviews.

I should no more update my credence to match my sitter’s than she should update hers to match mine.
Pierre-Normand June 09, 2023 at 08:07 #814101
Quoting Michael
So if heads then woken once on Monday and twice on Tuesday, otherwise woken twice on Monday and once on Tuesday.

Sue tells Jane that it's Monday.

What is Jane's credence that the coin landed heads?

I say 1/2.

It's exactly the same reasoning as before.

Sue should reason as if she is randomly selected from the set of all sitters, and 1/3 of sitters sitting in a Monday room are sitting in a heads room.

Jane should reason as if she is randomly selected from the set of all participants, and 1/2 of participants in a Monday room are sitting in a heads room.


Although you linked to my most recent post, I assume you intended to respond to this one.

In my scenario, there was only one room for awakenings. Without loss of generality, we can schedule the occupancy of that room over the three days of the experiment, according to the coin toss result, as follows:

If the coin lands Heads:

Day 1: Jane (first),
Day 2: Helen (first),
Day 3: Helen

If the coin lands Tails:

Day 1: Jane (first),
Day 2: Jane,
Day 3: Helen (first)

I use the label 'first' to signify a participant's first (or only) day of awakening. This is functionally equivalent to Monday in the original problem.

On any given day, whichever candidate doesn't occupy the waking room is moved into the sleeping room.

When Sue is assigned to attend to the participant who is awakened on a randomly chosen day (without being informed if it's Day 1, Day 2 or Day 3), her credence in the coin having landed heads is 1/2. After she is informed that the occupant is Jane, she updates her credence P(H) to 1/3. After she is further informed that this is Jane's first or only occasion of awakening, her credence gets updated back to 1/2 again.
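
Here is a minimal simulation sketch of that sequence of credences (1/2, then 1/3, then 1/2); the only assumptions beyond the setup above are a fair coin and a uniform random choice of day:

[code]
import random

trials = 500_000
n_all = n_jane = n_jane_first = 0
h_all = h_jane = h_jane_first = 0

for _ in range(trials):
    heads = random.random() < 0.5
    # (occupant, first-or-only-awakening?) for Days 1, 2, 3
    schedule = ([("Jane", True), ("Helen", True), ("Helen", False)] if heads
                else [("Jane", True), ("Jane", False), ("Helen", True)])
    occupant, first = random.choice(schedule)   # Sue attends a randomly chosen day
    n_all += 1
    h_all += heads
    if occupant == "Jane":
        n_jane += 1
        h_jane += heads
        if first:
            n_jane_first += 1
            h_jane_first += heads

print(h_all / n_all)                # ~1/2: Sue's prior P(H)
print(h_jane / n_jane)              # ~1/3: P(H | the occupant is Jane)
print(h_jane_first / n_jane_first)  # ~1/2: P(H | Jane, and it's her first or only awakening)
[/code]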

I would argue that Jane should update her credence in the same way in light of the same information. But first, I wanted to ensure that we are on the same page regarding both the setup and Sue's credences within it.

(I'll address your extreme example separately.)
Michael June 09, 2023 at 08:10 #814102
Quoting Pierre-Normand
Although you linked to my most recent post, I assume you intended to respond to this one.


No, I was just trying to rephrase your secret mission example into a way that I could understand better. Did I misinterpret it?

If not then it appears to be saying the same thing as the above?
Pierre-Normand June 09, 2023 at 08:14 #814103
Quoting Michael
No, I was just trying to rephrase your secret mission example into a way that I could understand better. Did I misinterpret it?


In the secret mission example, Sergeant Sue is tasked with locating Jane on one of her three scheduled missions, selected at random. Neither Sue nor Jane knows whether this mission is the first, second or third, or, if it's not the first, in what zone(s) Jane landed before.
Michael June 09, 2023 at 08:18 #814104
Reply to Pierre-Normand Was my rephrasing of it wrong? I'm treating DZ#1 as Monday and DZ#2 as Tuesday. If twice at DZ#1 then twice on Monday, if once at DZ#2 then once on Tuesday. If you know that it's DZ#1 then you know that it's Monday.
Michael June 09, 2023 at 08:26 #814105
Quoting Pierre-Normand
I would argue that Jane should update her credence in the same way in light of the same information.


Jane should reason as if she is randomly selected from the set of all participants, because she is.
Sue should reason as if she is randomly selected from the set of all sitters, because she is.

Jane shouldn't update her credence to match Sue and Sue shouldn't update her credence to match Jane.

I think my extreme example shows why. It's just not rational for each participant to reason that the coin most likely landed heads 100 times but it is rational for each sitter to reason that they are sitting in on a 100 heads experiment.
Pierre-Normand June 09, 2023 at 08:28 #814106
Quoting Michael
Was my rephrasing of it wrong? I'm treating DZ#1 as Monday and DZ#2 as Tuesday. If twice at DZ#1 then twice on Monday, if once at DZ#2 then once on Tuesday. If you know that it's DZ#1 then you know that it's Monday.


Structurally, DZ#1 is similar to awake and DZ#2 is similar to asleep. In this way, the Jane&Helen scenario maps perfectly onto the secret mission scenario, with DZ#2 and Helen playing the roles of dummy placeholders, as it were.
Pierre-Normand June 09, 2023 at 08:30 #814107
Quoting Michael
Jane shouldn't update her credence to match Sue and Sue shouldn't update her credence to match Jane.


But we are agreed on the validity of Sue's credences in both scenarios, right?
Michael June 09, 2023 at 08:32 #814108
Quoting Pierre-Normand
But we are agreed on the validity of Sue's credences in both scenarios, right?


Yes, I said as much with my extreme example.

Given that [math]2\over3[/math] of sitters will sit in on a 100 Heads interview it is rational for each sitter to reason that they are likely sitting in on a 100 Heads interview.

But given that [math]1\over2^{100}[/math] of participants will have a 100 Heads interview it is rational for each participant to reason that their interview is likely not a 100 Heads interview.

Neither the sitter nor the participant should update their credence to match the other's. They each belong to a different reference class.

And if they reason this way then 2[sup]101[/sup] sitters will be right (once) and 2[sup]100[/sup] - 1 will be wrong (once), and 2[sup]100[/sup] - 1 participants will be right (once) and 1 will be wrong (2[sup]101[/sup] times).

I'd say that's all the evidence I need to justify my credence.
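
Taking the figures in this post at face value (one 100 Heads participant with 2[sup]101[/sup] interviews, and 2[sup]100[/sup] - 1 participants with one interview each), the two fractions are easy to confirm with a short sketch:

[code]
from fractions import Fraction

participants = 2 ** 100
heads_interviews = 2 ** 101              # interviews for the single 100 Heads participant
other_interviews = participants - 1      # one interview each for everyone else

sitter_share = Fraction(heads_interviews, heads_interviews + other_interviews)
participant_share = Fraction(1, participants)

print(float(sitter_share))       # ~0.667: share of sitters who sit in on a 100 Heads interview
print(float(participant_share))  # ~7.9e-31: share of participants who have a 100 Heads interview
[/code]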
Pierre-Normand June 09, 2023 at 09:42 #814118
Quoting Michael
Neither the sitter nor the participant should update their credence to match the other's.


In that case, in the secret mission scenario, it looks like Sergeant Sue and G.I. Jane are poised to have a fight—or at least a heated argument—regarding where the secret package should be delivered. Sergeant Sue would advocate for its delivery to the allied field hospital, relying on her 1/3 credence, while G.I. Jane, holding a 1/2 credence, would maintain that the destination of the package doesn't make a difference. (The parameters of the scenario could be tweaked to make the dispute more pronounced, yielding 1/6 vs 6/11 credences, say.)

Alternatively, Sue and Jane could come to realize that their apparently divergent credences target different classes of events (which is what I have argued) and conclude that both of them are true, but that it's the 1/3 credence that is relevant for determining the likely contents of the package they are presently dealing with.
Pierre-Normand June 09, 2023 at 14:16 #814153
Here is another discussion with GPT-4 that rehearses my pragmatist conception of the dispute between halfers and thirders.

PN:

Let's explore a variation of the Sleeping Beauty problem. In this version, a six-sided die is tossed to determine the course of the experiment. Should the die result in 'six', Jane will be awakened six times consecutively. Otherwise, she will experience only a single awakening. This sequence will unfold over six days, with a single run of the experiment.

In the case where the die does not land on six, Jane's lone awakening will occur on the sixth day of the experiment, and she will be released on the seventh day. The die outcome also determines the location of Jane's stay: 'six' results in her lodging in the East Wing, while any other outcome places her in the West Wing.

Aunt Betsy, who plans to pick Jane up upon her release, is aware that Jane checked into the facility on June 30th and will be released on July 7th. On July 1st, after the die has been rolled but before any awakenings, Betsy calls the facility to learn where she should pick up Jane. The manager informs her that this information cannot be disclosed directly but can be communicated to Jane, who is allowed to send a response.

To ensure that Jane's knowledge doesn't impact her belief in the probability that a 'six' was rolled, the manager forwards Betsy's inquiry to Jane during each awakening, irrespective of the die result. Jane's responses are collected, and only one is sent to Aunt Betsy on the evening of July 6th, during Jane's final sleep.

Upon receiving Betsy's inquiry, Jane estimates her belief in the probability of the die having landed on 'six', which she deduces to be 6/11. She reasons that Betsy should therefore wait for her at the East Wing, where she believes there is a 6/11 chance she is being housed.

By contrast, from Betsy's perspective, there should be a 5/6 probability that the die didn't land on 'six'. This perspective, grounded in observations from multiple hypothetical experiment runs, indicates Jane is likely housed in the West Wing with a 5/6 probability. Despite stemming from the same fundamental setup, their credences appear to conflict: Jane and Betsy can't both be correct about where Jane is most likely to be released from. What went wrong?

GPT-4:

This is an interesting variation of the Sleeping Beauty Problem, and the apparent conflict arises from a fundamental difference in the knowledge and perspectives of Aunt Betsy and Jane.

From Jane's perspective, each awakening she experiences is a random draw from all her potential awakenings. If the die landed on 'six', she will be awakened six times, and if the die did not land on 'six', she will be awakened once. So when she wakes up and contemplates her situation, she reasons that there are six times more ways she could have found herself awake in the East Wing compared to the West Wing, and thus, she assigns a 6/11 credence to being in the East Wing.

On the other hand, Aunt Betsy's perspective is fundamentally different. She knows that the experiment will last for seven days and that Jane will only be released on the seventh day. She does not know whether Jane was awakened multiple times or just once, and she is not experiencing the awakenings as Jane is. Her 5/6 credence that Jane is in the West Wing is based on the probability of the die roll, independent of Jane's awakening experiences.

So, the difference in their credences comes from the different ways they experience or perceive the experiment. Jane's credence is based on her waking experiences, while Aunt Betsy's is based on the initial conditions of the experiment. While this might appear to be a conflict, it's simply a result of their different perspectives on the same situation. This situation also underscores the subjective nature of Bayesian probabilities, which are personal degrees of belief and may vary between individuals even when considering the same event, based on their differing perspectives or knowledge about the situation.

PN:

I very much agree with everything that you said, but the solution of the puzzle involves the reconciliation of Jane's credence with the fact that her aunt will indeed pick her up at the wrong wing five times out of six if she were to rely on Jane's message (and hence why Jane should rethink it).

GPT-4:

This is a great thought experiment. It reminds us that although probabilities might be personally reasonable, they can't always be used interchangeably between different people or contexts, especially when the information set or the perspective changes.

The solution to this puzzle lies in realizing that Jane, when she sends the message, is aggregating over multiple awakenings. The 6/11 credence that Jane assigns to being in the East Wing is not tied to any specific day of the experiment but is an average over all her possible awakenings. On each awakening, she updates her belief based on the premise that she's equally likely to be awake on any given day.

However, Aunt Betsy doesn't care about Jane's experience across all her awakenings. She cares about Jane's location on a specific day, the seventh day. Jane needs to acknowledge this and should instead reason as follows:

If the die landed on 'six', Jane will have six awakenings in the East Wing.
If the die didn't land on 'six', Jane will have one awakening in the West Wing.
However, from Jane's perspective, each of her awakenings is equally likely. Thus, if Jane knew it was her final awakening, her credence in being in the East Wing would only be 6/7, not 6/11. If it is not her final awakening, Jane can deduce that she must be in the East Wing. Therefore, Jane should send two types of messages: one indicating East Wing if it is not her final awakening, and another indicating a 6/7 chance of being in the East Wing if it is her final awakening.

From Aunt Betsy's perspective, she would need to know if it's Jane's last awakening, which she doesn't. But if the manager were to choose to send the message indicating the East Wing, if any, or the other message otherwise, Aunt Betsy's best strategy would be to follow the message's instructions and pick Jane up at the East Wing if the message indicates so, otherwise at the West Wing. This strategy would result in Aunt Betsy waiting at the correct wing six out of seven times. This may not match perfectly Jane's 6/11 credence but it's the best strategy that can be devised from their shared perspective.

PN:

Aunt Betsy doesn't know the experimental setup, so I equate her perspective with the observed result that (were she placed in this situation multiple times) she would find Jane coming out of the West Wing five out of six times. This is the same observation that Jane would make. (The seventh day doesn't count in the calculations since it's not a day of awakening and interview, it's just the day of discharge.)

Jane's mistake was to infer, from the fact that she is located in the East Wing six out of eleven times when she awakens, that she comes out of the facility from the East Wing six out of eleven times when she is discharged. It's as simple as that.

Six of her awakenings that occur in the East Wing result in one single discharge from that wing, which is why she should direct her aunt to the West Wing, where 5 out of 6 of her discharges occur (following a single awakening), despite the fact that it is *now* (on the occasion of her present awakening) more likely than not that she is in the East Wing.

GPT-4:

Ah, I see! Thank you for the clarification. Yes, Jane seems to have confused her credence for being in the East Wing at the time of any awakening (6/11) with the probability of being discharged from the East Wing at the end of the experiment. As you mentioned, the error arises because six awakenings in the East Wing all lead to one single discharge from the East Wing, while one awakening in the West Wing leads to one discharge from the West Wing.

Therefore, the probability that Jane will be discharged from the East Wing is indeed 1/6, not 6/11. Conversely, the probability that she will be discharged from the West Wing is 5/6. So, Jane should inform Aunt Betsy to wait for her at the West Wing, since that is where she will most likely be discharged from. This aligns with the observed result that Aunt Betsy would find Jane coming out of the West Wing five out of six times if this scenario were to be repeated multiple times.

Thank you for presenting this interesting variation of the Sleeping Beauty problem. It indeed reinforces the point that probabilities can't be used interchangeably between different contexts and reminds us to always consider the exact event or outcome that the probability is referring to.

PN:

My variation of the Sleeping Beauty problem is designed to demonstrate that conflicting credences don't necessarily stem from incompatible epistemic viewpoints. Rather, they often arise from differing interpretations of what the credences actually represent. In the Sleeping Beauty problem, the confusion is compounded by the fact that different credences—1/2 and 1/3—are both assigned to the same 'state' (Heads) of an object (a fair coin). However, what truly distinguishes these credences, and explains their different values, is the uniquely individuated set of events each considers. For a 'halfer', the credence is based on the ratio of Heads-experimental runs to total experimental runs. Conversely, for a 'thirder', it's about the ratio of Heads-awakenings to total awakenings. The apparent conflict is not a matter of differing epistemic perspectives, but rather differing definitions of the credences in question.

GPT-4:

I understand your point and agree with it. The distinction you've made is essential and helps clear up the confusion surrounding the Sleeping Beauty problem. Many debates about probability and credence essentially boil down to differences in the definitions of what constitutes an event or an experimental trial. In this context, it's clear that what one person considers as an 'event' might differ substantially from another person's definition. By drawing attention to these differences in definition, we can gain a clearer understanding of why different credences are assigned in the first place and clarify seemingly conflicting probabilities. It's a perspective that adds a significant depth to the discussion and makes the resolution of the paradox more nuanced and comprehensive.

PN:

Coming back to the new variation, which was intended to highlight a particular pragmatic context in which Jane can—and should—make use of a 'halfer' conception of her credence P('six') = 1/6, there are other pragmatic concerns that can make her rely on the 'thirder' conception. One such context would be her finding an opportunity to escape from the facility and needing to prepare accordingly, depending on whether she is being hosted in the East or the West Wing. One of them might be surrounded by a moat filled with crocodiles and the other by a forest infested with lions. Since, in this case, the escape opportunities are distributed evenly across occasions of awakening, Jane should prepare her escape on the basis of a 'thirder' definition of her credence P('six') = 6/11.

GPT-4

Yes, that's an excellent extension of the scenario. If Jane finds herself in a situation where she must make a decision based on her current location during each awakening, then it is indeed more appropriate for her to use the 'thirder' credence. Given the uniform distribution of escape opportunities across awakenings, Jane's best strategy would be to prepare for the highest probability scenario in her current awakening, which is in the East Wing according to 'thirder' reasoning.

Your point here further underlines the importance of considering the context and its implications when deciding which credence to rely on. The 'halfer' and 'thirder' perspectives are not mutually exclusive or inherently right or wrong; instead, they offer different lenses through which to view the problem, each one more useful in different situations. It's the specifics of the situation and the nature of the decision to be made that determine which perspective should be used.
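
For anyone who wants to check the two ratios at play in this variation, here is a minimal simulation sketch (assuming only the one-die-roll, six-or-one-awakenings setup described above; the Python is incidental):

[code]
import random

runs = 600_000
east_awakenings = 0
total_awakenings = 0
east_discharges = 0

for _ in range(runs):
    six = random.randint(1, 6) == 6
    awakenings = 6 if six else 1      # six East Wing awakenings on a 'six', else one in the West Wing
    total_awakenings += awakenings
    if six:
        east_awakenings += awakenings
        east_discharges += 1          # one discharge per run, from the wing Jane stayed in

print(east_awakenings / total_awakenings)  # ~6/11: chance a given awakening is an East Wing awakening
print(east_discharges / runs)              # ~1/6:  chance a given run ends with an East Wing discharge
[/code]

Jane's per-awakening 6/11 and Aunt Betsy's per-discharge 5/6 (that is, 1 minus the 1/6 printed above) come out of the same process; they simply count different events.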
Benkei June 09, 2023 at 16:59 #814179
Isn't this problem a consequence of asking the wrong question, because the question is ambiguous? Given the setup, and that you'd be woken twice if it is tails, what's the likelihood, now that you're awake, that the coin shows tails? That seems to resolve the ambiguity, or am I missing something?

Pierre-Normand June 10, 2023 at 12:34 #814320
Quoting Benkei
Isn't this problem a consequence of asking the wrong question, because the question is ambiguous? Given the setup, and that you'd be woken twice if it is tails, what's the likelihood now that you're awake, that the coin shows tails? That seems to resolve the ambiguity or am I missing something?


In the Sleeping Beauty problem, both main parties to the debates agree that it's Sleeping Beauty's credence while she is being awakened ("now") that is at issue. 'Halfers' assert that since Sleeping Beauty gains no new information when she awakens, her credence that the fair coin landed on heads should remain P(H) = 1/2. On the other hand, 'Thirders' argue that upon awakening, Sleeping Beauty should update her credence to P(H) = 1/3.

You are right to point out that an ambiguity is present, but it isn't resolved by simply specifying the time span during which (or relative to which) Sleeping Beauty should evaluate her credence.

Consider two types of experimental runs: "H-runs" (following a heads coin toss result and yielding a single awakening) and "T-runs" (following a tails coin toss result and yielding two awakenings). Halfers argue that even though Sleeping Beauty experiences more awakenings during a T-run, this doesn't make such a run more likely. Therefore, upon awakening, her credence in being part of a H-run or a T-run should remain at 1/2 for each.

Conversely, Thirders argue that Sleeping Beauty should update her credence in P(H) to 1/3 upon awakening, since in the long run, one-third of her possible awakenings occur as a result of the coin landing heads.

At first glance, it might seem that Halfers focus more on the entirety of the experimental runs, while Thirders focus on individual awakening episodes. However, this observation doesn't resolve the dispute.

There are biconditional relationships between the propositions "The coin landed heads", "I have presently awoken in a H-run", and "I am presently experiencing a H-awakening". The same relationships exist for tails. Halfers and Thirders each believe that Sleeping Beauty can make valid inferences based on these relationships, leading to their respective credences. (It is also worth noticing that there is a many-to-one relationship between T-awakenings and T-runs.)

Because those biconditionals hold, the Halfers argue that Sleeping Beauty is entitled to infer from "P(H-run) = 1/2" that "P(H-awakening) = P(H) = 1/2" is true as well.

Likewise, the Thirders argue that Sleeping Beauty is entitled to infer from "P(H-awakening) = 1/3" that "P(H-run) = P(H) = 1/3" is true as well.

Those inferences appear valid, so the dispute remains.

The problem is that we have not resolved the ambiguity inherent in all three of the probabilities P(H), P(H-awakening) and P(H-run).

The ambiguity, as I see it, can only be resolved by specifying the method of verifying the target events. If the verification is performed once at the end of each experimental run, the Halfer interpretation would be upheld. However, if the verification is performed during each individual awakening, it's the Thirder interpretation that would be upheld. (Notice that it's because of the aforementioned many-to-one relationship that the specification of the verification method disambiguates the ratio of events that Sleeping Beauty's credence tacitly refers to.)
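
To make the two verification methods concrete, here is a rough sketch (assuming only the standard protocol: a fair coin, one awakening on heads, two on tails; the Python is incidental):

[code]
import random

runs = 500_000
heads_runs = 0
heads_awakenings = 0
total_awakenings = 0

for _ in range(runs):
    heads = random.random() < 0.5
    n = 1 if heads else 2        # one awakening on heads, two on tails
    heads_runs += heads
    total_awakenings += n
    if heads:
        heads_awakenings += n

print(heads_runs / runs)                    # ~1/2: P(H) verified once per experimental run (Halfer ratio)
print(heads_awakenings / total_awakenings)  # ~1/3: P(H) verified once per awakening (Thirder ratio)
[/code]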

Each verification method has its uses, and the failure to specify them is what gives rise to the apparent conflict. My discussion with GPT-4 in the post above provides a more detailed exploration of this pragmatist interpretation by means of practical examples.
Pierre-Normand June 11, 2023 at 14:54 #814598
Here is another interesting aspect of the Sleeping Beauty problem. What is the nature of the information that Sleeping Beauty receives on each occasion of her awakening, and how can this information be communicated?

PN:

I thought of a different aspect of the problem while imagining another variation. Imagine that Leonard Shelby, suffering from anterograde amnesia, remains at home for the next seven days. Let us assume that all his new memories are wiped out overnight every night. If a coin lands heads, a door-to-door vacuum cleaner salesman will ring his doorbell once in the coming week. If it lands tails, a door-to-door broom salesman will ring his doorbell every day for the next six days. Those details are written down in Shelby's memento that he reads every morning. He doesn't have a calendar and won't know what the day of the week is until his niece visits him on the seventh day (Saturday). Predictably (and he also knows that to be true) by the end of the week, his broom closet will contain either one vacuum cleaner (if the coin landed heads) or six new brooms (if the coin landed tails).

According to 'Halfer' reasoning, whenever the doorbell rings, Shelby should be expecting to meet the vacuum cleaner salesman with credence P(H) = 1/2. According to 'Thirder' reasoning, his credence should be P(H) = 1/7.

Suppose Shelby is a Thirder (as I also am, in this context). The ringing of the doorbell is what provides the evidence (i.e. "there is a salesman visit today") that enables Shelby to update his credence P(H) from 1/2 to 1/7. Shelby knows, though, that come Saturday, after his niece arrives, but before he looks into the closet, his credence will revert back to P(H) = 1/2. This is because of his amnesia that makes him lose the information that he had gained whenever he heard the doorbell ring. What is the nature of this information that he lost? After all, he still knows from his memento that the doorbell must have rung at least once.

Before I tell you how I propose to explain how this information could be conceptualized, I'd like to receive your input about the thought experiment and your own speculation about the nature of the information that Shelby effectively forgets, thus causing him to revert his credence back to 1/2 even on Saturday morning.

GPT-4:

Your thought experiment is intriguing and sheds a new light on the problem. By introducing a character like Leonard Shelby with anterograde amnesia, you add a layer of complexity that makes the question about the nature of the "forgotten" information particularly interesting.

If we delve into this, we can think of the "forgotten" information not merely as facts, but rather as temporal experiences. Whenever the doorbell rings, Shelby updates his credence based on the fact that a visit has occurred. As a Thirder, this evidence updates his credence to P(H) = 1/7, since each visit is more likely to be a broom salesman if the coin landed on tails. The ringing of the doorbell provides him with the evidence that a new day has passed.

However, due to his amnesia, he forgets the experience of the passing days, despite knowing from his memento that the doorbell must have rung at least once. This lack of memory regarding the time elapsed (and hence the number of potential salesmen visits) is the missing information. It's not just the occurrence of the doorbell ringing (which he knows happened), but the frequency and distribution of those occurrences across days which is forgotten.

When his niece visits him on Saturday, he doesn't remember the number of past salesman visits (how many times the doorbell rang), only that there has been at least one. He therefore can't distinguish between the two scenarios (one vacuum cleaner salesman visit or six broom salesman visits). This lack of distinction, due to his amnesia, causes him to revert his credence back to P(H) = 1/2.

So, in summary, the forgotten information can be seen as the temporal sequence and frequency of the events (the doorbell ringing), not merely the events themselves.

PN:

Consider, though, that whenever the doorbell rings he doesn't know either how many times this happened—only that it happened at least once. But he already knew that this would happen at least once, from his memento, and will know that it must have happened once, come Saturday, still from his memento.

GPT-4:

Ah, I see what you mean. In that case, the information that's being lost isn't about the events themselves or their frequency, but the contextual information about when and how the events occurred in relation to his own experience of time.

When the doorbell rings, Shelby gains the information that he is currently experiencing a salesman's visit. However, due to his amnesia, he loses the contextual information of this event having occurred in his subjective timeline. Even though he knows from his memento that at least one visit must have occurred, he doesn't have the personal, experiential connection to that event anymore.

In other words, the "forgotten" information is the personal, experiential context of the event happening to him in the here and now. While he remembers that a doorbell ringing event was supposed to happen (from the memento), he has lost the immediate, personal experience of it happening to him in the moment. It's the disconnection from the immediate experience of the event that causes him to revert his credence back to P(H) = 1/2 on Saturday.

Therefore, the nature of the information being lost due to his amnesia might be considered his personal, immediate connection to the event - the event as situated within his experience of the passing of time.

PN:

This is an interesting way to think about it. Here is the alternative I had imagined. Suppose Shelby is enabled, as part of the protocol of the experiment (all written down in his memento), whenever the doorbell rings, and before answering, to throw four dice and, if they all land on six, to write down in his memento "today the doorbell rang". So, on the rare occasion when he lands four sixes once during the week, the information that he thus expresses will be retained by him come Saturday. But we now can understand better what this information is. It's not so much the propositional content of the expression "today the doorbell rang" that enables him to update his credence to P(H) = 1/7 (on Saturday) but rather the fact that the proposition (or memory) was recorded at all. Since if the coin landed heads, there would have been only one opportunity for Shelby to roll four sixes and record the event, whereas if the coin landed tails there would have been six times as many opportunities, the fact that this recording happened enables Shelby, come Saturday, to update his credence from 1/2 to 1/7 through normal Bayesian reasoning.
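
The arithmetic behind that update can be spelled out in a few lines (a sketch assuming four fair dice, one visit opportunity on heads and six on tails):

[code]
# Chance of recording the note on a single visit: four dice all landing six
p = (1 / 6) ** 4                       # 1/1296

p_note_given_heads = p                 # one opportunity to record if the coin landed heads
p_note_given_tails = 1 - (1 - p) ** 6  # six opportunities if it landed tails

posterior_heads = (0.5 * p_note_given_heads) / (
    0.5 * p_note_given_heads + 0.5 * p_note_given_tails)
print(posterior_heads)                 # ~0.143, very close to 1/7
[/code]

The posterior isn't exactly 1/7 because the note can only be recorded once however many visits occur, but the filter is rare enough that the difference is negligible.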

GPT-4:

That's a fascinating approach! The way you've set it up cleverly exploits the difference in opportunity frequency between the two outcomes (Heads vs Tails) of the coin toss. This mechanism you've devised essentially equates the memory recording event to a "successful" dice roll event. It's not the content of the memory per se that matters, but rather the existence of the memory itself.

By creating a memory record when the doorbell rings and also meeting the condition of rolling four sixes, Shelby essentially marks one instance of the doorbell ringing. The dice roll acts as a filter, a probabilistic gatekeeper, reducing the chance of recording a memory. Thus, if a memory exists, it's more likely that it came from a context where there were more opportunities to create it, which corresponds to the Tails outcome.

The brilliant part of this mechanism is that, as you pointed out, it doesn't rely on the information content of the memory itself. Instead, it relies on the existence of the memory as an indicator of the number of chances Shelby had to create it. This effectively bypasses the issue of Shelby's amnesia wiping out the informational content of his experiences.

Indeed, by merely establishing a non-zero probability of recording an event, we have enough to perform a Bayesian update. This highlights how the thirder argument operates: not through a focus on specific events, but rather through a focus on the distribution of opportunities for those events to occur. So the 'forgotten information' is indeed the greater number of opportunities for creating a memory (of hearing the doorbell ring) that exists if the coin landed tails.

In essence, this set-up gives Shelby a way to "bypass" his amnesia and track the greater frequency of opportunities for hearing the doorbell in the Tails scenario. The presence of a memory record, even if its content is inaccessible or irrelevant, provides an indicator of the "weight of chances" or frequency distribution for the two different outcomes, allowing for a Bayesian update.

PN:

And so, once we've clarified the nature of the information that Shelby is able to convey to himself (i.e. to his future self) by means of a note in his memento, we can apply the same reasoning regarding the nature of the information that he receives in the first place whenever the doorbell rings, and that enables him to update his credence immediately by means of ordinary Bayesian reasoning.

GPT-4:

Yes, that's a coherent and insightful application of your reasoning. The act of the doorbell ringing is akin to the dice rolling event in your mechanism. Each ring is an opportunity for Shelby to gain information about the outcome of the coin toss. His immediate Bayesian update is not contingent on knowing the specific number of times the doorbell has rung; rather, it is contingent on the opportunity distribution - the realization that, because the doorbell is ringing, there's a greater chance of being in the scenario where there are more opportunities for the doorbell to ring (Tails).

This reasoning shows that information in this context is not so much about concrete details (like the specific number of doorbell rings) but about the probabilistic framework within which events are happening. What Shelby "loses" due to his amnesia is not the record of specific events (he can still recall from his memento that the doorbell must have rung at least once) but the trace of the more frequent opportunity distribution that comes with the Tails outcome.

In sum, the key is understanding the weight of chances - the different opportunity distributions across different scenarios - as the information being received and updated upon. This is precisely what the "thirder" argument in the Sleeping Beauty problem leverages. The information lost due to Shelby's amnesia - or due to Sleeping Beauty's induced forgetfulness - is an understanding of this probabilistic context.

PN:

I couldn't have said it better myself!

GPT-4

I'm glad you found the explanation satisfactory! Your thought experiment with Shelby provided a really engaging and insightful way to consider the nuances of the Sleeping Beauty problem and how credence, memory, and probability interact. I enjoyed our discussion. Feel free to bring up any other philosophical or scientific puzzles you'd like to dissect!
Srap Tasmaner June 12, 2023 at 04:44 #814788
Reply to Pierre-Normand

Since you're still a committed Thirder, here's what bothers me:

(1) When a coin is tossed it has only two proper outcomes. (No leaners, no edges, and the usual ceteris paribus.) SB, upon being awakened, finds herself in one of three possible situations. But that doesn't work. A coin toss cannot, by itself, engender three distinct situations for her to be in. For instance, "for all she knows," this is her second tails interview; but there is no way to go from the coin toss directly to the second tails interview. Not without passing through the first interview. So SB's intuition that three things can happen to her requires there to be some other determinant of her situation; what is that other factor?

(2) The setup confounds wagering arguments. That won't matter much to a lot of people, but it's uncomfortable. Annoying. Ramsey used Dutch book arguments from the beginning, and despite their limitations they can be clarifying. Each time I've tried to construct a sane payoff table I've failed. I've wondered lately if there might be a conditional wager that comes out rational, but I can't work up enough hope of success to bother. Partial beliefs, within suitable limits, ought to be expressible as wagers, but not in this case, and that blows.
jgill June 12, 2023 at 05:23 #814791
They ask her one question after each time she awakens, however: What is the probability that the coin shows heads?


Meaning it comes up heads. 1/2
Benkei June 12, 2023 at 05:39 #814793
Reply to Srap Tasmaner Reply to Pierre-Normand I still think it's a language issue and it revolves around the verb "shows". I agree with Srap it doesn't make sense to pretend a coin has three states. The coin can't "show" three options but the "coin when shown to SB" can and these are simply different things. It's just that mathematicians only have a hammer that makes this look like a nail.
Michael June 12, 2023 at 09:24 #814814
Reply to Benkei In Elga's paper the question is "to what degree ought you believe that the outcome of the coin toss is Heads?"
Benkei June 12, 2023 at 09:38 #814816
Reply to Michael What does she mean? If the questioner clarifies what she means, you can actually give an answer, instead of trying to guess what it means and then ending up with two competing interpretations that are at odds with each other. This "problem" is really of a totally different type than, for instance, the two envelope problem to me, where I don't detect any ambiguity.

In this question, it revolves around the word "believe". If you just straight up asked, what was the likelihood of the coin toss being heads? Then the number of times she wakes up is obviously irrelevant. If you ask, what's the likelihood you will see Heads if I bring you the coin, then the number of times she could have been woken up becomes relevant. It seems more about semantics than probability to me. But then I have a very strong aversion against probability calculations because I sucked at them as a kid. So there's that. :wink:
Pierre-Normand June 12, 2023 at 10:32 #814820
Quoting Srap Tasmaner
Since you're still a committed Thirder, here's what bothers me:

(1) When a coin is tossed it has only two proper outcomes. (No leaners, no edges, and the usual ceteris paribus.) SB, upon being awakened, finds herself in one of three possible situations. But that doesn't work. A coin toss cannot, by itself, engender three distinct situations for her to be in. For instance, "for all she knows," this is her second tails interview; but there is no way to go from the coin toss directly to the second tails interview. Not without passing through the first interview. So SB's intuition that three things can happen to her requires there to be some other determinant of her situation; what is that other factor?


Actually, I've stopped being a committed Thirder a few days ago, since I began developing a pragmatist conception of the dispute between Thirders and Halfers.

If Sleeping Beauty were to hold that, on the occasion of an awakening, there is (from her point of view) a 1/3 chance that the coin landed on heads, that wouldn't commit her to believing that there is any causal determinant of her awakening other than the coin toss result. It's just part of the setup of the experiment that she will be awakened one more time if the coin toss result is tails. The coin toss result determines the Tuesday awakening, while the Monday awakening is independent of it. So, I'm not sure I understand your question.

(2) The setup confounds wagering arguments. That won't matter much to a lot of people, but it's uncomfortable. Annoying. Ramsey used Dutch book arguments from the beginning, and despite their limitations they can be clarifying. Each time I've tried to construct a sane payoff table I've failed. I've wondered lately if there might be a conditional wager that comes out rational, but I can't work up enough hope of success to bother. Partial beliefs, within suitable limits, ought to be expressible as wagers, but not in this case, and that blows.


Can you explain why the payoff tables you've come up with are unsatisfactory to you? I can describe at least one that seem coherent:

On each occasion of awakening, Sleeping Beauty accepts a two-to-one wager that the coin landed tails. If the coin was heads up on that occasion, she loses her bet. If it was tails up, she wins half the amount of her bet (and gets back her original bet). Her expected value at the end of the experiment is zero since whenever the coin landed heads she loses her whole bet while whenever it landed tails she wins half her bet twice. Her accepting such odds and expecting to break even reflects her credence that P(H) = 1/3.
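
Here is a minimal sketch of that payoff table (a unit stake per awakening is assumed; the point is the numbers, not the code):

[code]
import random

runs = 500_000
stake = 1.0
balance = 0.0

for _ in range(runs):
    heads = random.random() < 0.5
    awakenings = 1 if heads else 2
    for _ in range(awakenings):
        if heads:
            balance -= stake          # her bet on tails loses on the single heads awakening
        else:
            balance += stake / 2      # she wins half her stake on each of the two tails awakenings

print(balance / runs)  # ~0: she breaks even at these odds, reflecting a per-awakening credence P(H) = 1/3
[/code]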

The reason why this doesn't settle the dispute in favor of the Thirders is that there are alternative ways to define Sleeping Beauty's credence P(H).
Pierre-Normand June 12, 2023 at 11:01 #814824
Quoting Benkei
I still think it's a language issue and it revolves around the verb "shows". I agree with Srap it doesn't make sense to pretend a coin has three states. The coin can't "show" three options but the "coin when shown to SB" can and these are simply different things. It's just that mathematicians only have a hammer that makes this look like a nail.


The concept of Sleeping Beauty's credence P(H) indeed goes beyond stating the coin's inherent likelihood to land heads or tails. In the context of this experimental setup, it represents her epistemic relation to the coin. A more precise interpretation might suggest that Sleeping Beauty's credence P(H) is her long-term expectation of the coin landing heads, verified on each occasion she expresses this credence. But this doesn't completely resolve the issue. The original statement of the problem fails to specify what constitutes an individual act of verification of her credence, though, such that we can establish the target ratio unambiguously. As I've previously illustrated with various examples, different pragmatic considerations can lead to different verification methods, each yielding different values for P(H), aligning with either the Halfer or Thirder stance.
Michael June 12, 2023 at 11:20 #814827
Quoting Pierre-Normand
The original statement of the problem fails to specify what constitutes an individual act of verification of her credence, though, such that we can establish the target ratio unambiguously. As I've previously illustrated with various examples, different pragmatic considerations can lead to different verification methods, each yielding different values for P(H), aligning with either the Halfer or Thirder stance.


It sort of addresses this in a footnote:

The precise effect of the drug is to reset your belief-state to what it was just before you were put to sleep at the beginning of the experiment. If the existence of such a drug seems fanciful, note that it is possible to pose the problem without it — all that matters is that the person put to sleep believes that the setup is as I have described it.


The problem (and Elga's solution) have nothing to do with how to "verify" one's credence. It simply asks what a rational person should/would believe were they told the rules of the experiment, woken up, and asked their credence.

So for the sake of the problem we can assert that, unknown to Sleeping Beauty, she's only actually woken up once.
Pierre-Normand June 12, 2023 at 11:21 #814828
Quoting Benkei
If you ask, what's the likelihood you will see Heads if I bring you the coin, then the number of times she could have been woken up becomes relevant.


One argument that Halfers make is that being given many occasions to verify the exact same thing, rather than verifying it once, doesn't change the probability of the outcome.

This argument doesn't work in the context of the Sleeping Beauty problem, it seems to me. This is because the number of opportunities, in this case, is probabilistically dependent on the coin toss result.

But then, the Halfers have the rejoinder that since Sleeping Beauty only is aware of the circumstances of her current awakening, and not of the number of them, she gains no information upon awakening that she didn't already have when the experiment was set up (and when her credence P(H) was 1/2).

My Leonard Shelby (Memento) variation above was meant to clarify the nature of the information that Sleeping Beauty gains on the occasion of her awakening, and that enables her to update her credence P(H) by means of ordinary Bayesian reasoning.
Michael June 12, 2023 at 11:34 #814829
Quoting Srap Tasmaner
(2) The setup confounds wagering arguments. That won't matter much to a lot of people, but it's uncomfortable. Annoying. Ramsey used Dutch book arguments from the beginning, and despite their limitations they can be clarifying. Each time I've tried to construct a sane payoff table I've failed. I've wondered lately if there might be a conditional wager that comes out rational, but I can work up enough hope of success to bother. Partial beliefs, within suitable limits, ought to be expressible as wagers, but not in this case, and that blows.


You might be interested in When betting odds and credences come apart: More worries for Dutch book arguments.
Pierre-Normand June 12, 2023 at 11:44 #814833
Quoting Michael
The problem (and Elga's solution) have nothing to do with how to "verify" one's credence. It simply asks what a rational person should/would believe were they told the rules of the experiment, woken up, and asked their credence.


This is indeed a flaw inherent to the statement of the problem. What is a rational person supposed to believe about a probabilistic outcome that is ambiguously defined? To ask how probable it is that the coin landed on heads involves a tacit reference to the counterfactual circumstances where you are presently facing a (hidden) coin that didn't land the way it actually did. And this is not merely a reference to the coin's intrinsic propensities to land this or that way when tossed.

Suppose half the people in your neighborhood are Italians and half are Tunisians. You don't know your next-door neighbor on the left. As you meet him for the first time, your credence is P(Tunisian) = 1/2 (let us suppose). And then he introduces himself as Marco Gallo and announces that he's Italian. The reason why your credence about this man who was standing in front of you was P(T) = 1/2 wasn't because you thought this man had been equally likely to be born Italian or Tunisian, in the sense that this man had an equal propensity to be born both ways. It's because of your knowledge of the statistical distribution of Italians and Tunisians in your neighborhood that you had this credence. It's, in other words, because of the manner in which you are epistemically related to this population of individuals and what makes you more or less likely to meet some of them. The Sleeping Beauty problem only appears to fully specify what constitutes an occasion for Sleeping Beauty to "meet" with a coin that had landed on heads. It leaves a crucial ambiguity regarding what constitutes such a meeting.
Michael June 12, 2023 at 12:11 #814837
Quoting Pierre-Normand
To ask how probable it is that the coin landed on heads involves a tacit reference to the counterfactual circumstances where you are presently facing a (hidden) coin that didn't land the way it actually did.


Not exactly, because if it's Monday the coin hasn't been flipped at all. It's only hidden if today is Tuesday and the coin is tails.
Michael June 12, 2023 at 12:41 #814840
I think it's also worth paying particular attention to the way Elga phrased the problem and the solution:

When you are first awakened, to what degree ought you believe that the outcome of the coin toss is Heads?

...

I've just argued that when you are awakened on Monday, that credence ought to change to 1/3.

...

But you were also certain that upon being awakened on Monday you would have credence 1/3 in H.


The Tuesday interview is actually irrelevant to Elga's argument (which is why he says in that footnote that Sleeping Beauty only needs to believe that the experiment will be conducted in this way).

So Elga argues that on Monday, before the coin has been flipped, Sleeping Beauty's credence (not knowing that it is Monday) should be P(Heads) = [math]1\over3[/math].
Pierre-Normand June 12, 2023 at 12:50 #814841
Quoting Michael
Not exactly, because if it's Monday the coin hasn't been flipped at all. It's only hidden if today is Tuesday and the coin is tails.


In the OP variation, the coin is flipped on Sunday before Sleeping Beauty's first awakening. Elga is right, though, that nothing substantially changes if the coin is flipped after Monday's (mandatory) awakening.

The point about Sleeping Beauty's credence being about her epistemic relationship to the coin flip result (either past or future) remains valid. If Sleeping Beauty were informed that today is Monday, her credence about the upcoming flip, P(H), would remain at 1/2. Under these circumstances, her credence would be fully predicated on the inherent propensity of the coin to land on either side with equal probabilities. Conversely, if she were told that it's Tuesday, she would know without a doubt that the coin landed on tails. In this case, her credence, P(H) = 0, would be independent of the coin's inherent propensity and solely a result of her causal relationship with the outcome. When the day of her awakening remains unknown to her, her rational credence becomes a composite function of both possibilities—that today could be Monday or Tuesday. Consequently, it becomes a function of the coin's propensities, for example, that it is a fair coin, and of the causal relationship between the coin toss result and her potential awakening on Tuesday.
PhilosophyRunner June 12, 2023 at 18:45 #814902
Quoting Michael
This is an ambiguous claim. It is true that if you randomly select a seeing from the set of all possible seeings then it is twice as likely to be a tails-seeing, but the experiment doesn't work by randomly selecting a seeing from the set of all possible seeings and then "giving" it to Sleepy Beauty. It works by tossing a coin, and then either she sees it once or she sees it twice.


That is the difference in interpretation I am talking about. For you the question is asking about how a fair coin toss will land. For me it is about the seeings of the coin, not the coin toss.

Let's say the SB experiment is carried out every week for a year - 52 times in total. On a given occasion when SB wakes up, what should her credence be that the coin at that given wake up event is heads? I'll come back to this - let's call it problem A.

Well let me do this experiment again with one small change - call it problem B. The SB experiment is done every week for a year - 52 times. But instead of asking SB her credence at each wake up event, we are going to take a picture at each wake up event, then select one for her to answer about at the end. At the end of the year we will have on average 26 pictures of heads and 52 pictures of tails, each representing a unique wake up event. They are all put in a bag and one is picked out, then SB is asked "at this particular wake up event, what is the probability that the coin was showing heads, as seen in the picture?" Of course the answer is 1/3.

The answer to problem B is clearly 1/3 and I think we both will agree here. Problem A is the same question that is asked to SB - on a given wake up event, she is asked in the moment about the probability of the coin showing heads. So the answer in problem A is also 1/3.
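
For what it's worth, the picture-bag version is straightforward to simulate (assuming 52 independent fair tosses per year, one photo per waking, and a uniform draw from the bag):

[code]
import random

years = 100_000
heads_picks = 0

for _ in range(years):
    pictures = []
    for _ in range(52):                              # one run of the experiment per week
        heads = random.random() < 0.5
        pictures += ["H"] if heads else ["T", "T"]   # one photo per waking
    heads_picks += random.choice(pictures) == "H"    # one photo is drawn from the bag

print(heads_picks / years)  # ~1/3: chance the drawn photo shows heads
[/code]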
Michael June 12, 2023 at 18:48 #814903
Quoting PhilosophyRunner
The answer to problem B is clearly 1/3 and I think we both will agree here. The problem A is the same question that is asked to SB - on a given wake up event, she is asked in the moment about the probability of the coin showing heads. SO the answer in problem A is also 1/3.


It’s not the same because she isn’t given a randomly selected waking after 52 weeks. She’s given either one waking or two, determined by a coin toss.

The manner in which the experiment is conducted matters.
PhilosophyRunner June 12, 2023 at 18:50 #814904
Reply to Michael She is given one waking or two, determined by a coin toss, and this is repeated 52 times. When she wakes up she has no idea which of the wake up events this is - so from her point of view it is a randomly selected wake up event.
Srap Tasmaner June 12, 2023 at 18:52 #814906
I'll have more time to look at your response tonight. A couple quick notes:

Quoting Pierre-Normand
Can you explain why the payoff tables you've come up with are unsatisfactory to you?


The fundamental problem is that your stake changes depending on which outcome you bet on. I know when I first looked at this five years ago, I ran into problems determining the true odds: you'd get an event that's 1:2 paying off like it was 1:3. Sleeping Beauty doesn't even out when you bias the coin.

But I'll look at it again.

Quoting Pierre-Normand
The coin toss result determines the Tuesday awakening, while the Monday awakening is independent of it.


I think the Halfer position is roughly that there are only two outcomes: a single interview conducted in one sitting, and a double interview spread out over two sittings. Those outcomes are equivalent to the two possible outcomes of the coin toss. (If you have an even-numbered population to work with, you can just do away with the coin altogether.)

What is the Thirder equivalent? If there are three outcomes, they cannot be equivalent to the two outcomes of the coin toss.

To get back to two, you have to add in [heads & Tuesday], and then split by sequence, like Halfers -- only now it's heads = awake-asleep, tails = awake-awake -- or by day, as you do here, heads = asleep, tails = awake, for Tuesday only.

That sounds plausible, but it's not what we want. Heads is not equivalent to asleep because you're awakened on Monday. More importantly, awake is not equivalent to tails.

We don't even have to get into issues about days and indexicals to have problems. (I like "first interview" and "second interview", but it doesn't matter here.)

PhilosophyRunner June 12, 2023 at 19:34 #814918
Quoting Michael
It’s not the same because she isn’t given a randomly selected waking after 52 weeks. She’s given either one waking or two, determined by a coin toss.

The manner in which the experiment is conducted matters.


Also, take this further evolution of Problem B that I outlined earlier. The SB experiment is done every week for a year. Each week she is woken once if the coin lands heads, twice if it lands tails.

But she is only asked a question once in the whole year. One of the wakings is randomly selected to be the one where she is asked the question. On this randomly selected waking, she is asked the question "what is the probability that this randomly selected waking shows a heads." The answer is 1/3, as per Problem A in my previous post.
Pierre-Normand June 13, 2023 at 10:49 #815093
Quoting Srap Tasmaner
I think the Halfer position is roughly that there are only two outcomes: a single interview conducted in one sitting, and a double interview spread out over two sittings. Those outcomes are equivalent to the two possible outcomes of the coin toss. (If you have an even-numbered population to work with, you can just do away with the coin altogether.)

What is the Thirder equivalent? If there are three outcomes, they cannot be equivalent to the two outcomes of the coin toss.


If I understand correctly, you seem to be asking how the Thirders might be able to infer the probabilities of the three fine-grained types of awakening outcomes from the (prior) probabilities of the two coin toss outcomes?

Indeed, we can split an even-numbered population into two equal sub-populations Pop-1 (Beauties who awaken once) and Pop-2 (Beauties who awaken twice). This allows us to focus solely on Sleeping Beauty's personal credences upon awakening, concerning whether she's part of Pop-1 or Pop-2.

In my view, the coin in the original problem provides a convenient source of stochasticity. Without it, @sime would have been justified in worrying about the explanation for Sleeping Beauty's priors. Consider this: Suppose I present you with a die that could be loaded to always land on 'six'. If it's not loaded, then it's fair. You throw it once and it lands on 'six'. What is your credence that the die is loaded? Without an objective grounding for your priors, the answer is undefined. However, if I tell you that there are two identical dice - one loaded and the other fair - and a fair coin toss determines which one you'll use, you can now update your credence that the die is loaded from 1/2 to 6/7, given that over time, six out of seven 'sixes' will come from the loaded die.
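(A quick numerical check of the 6/7 figure, for anyone who wants one. This is only a sketch, and it assumes "loaded" means the die always lands on 'six'; the trial count is arbitrary.)

[code]
import random

trials = 200_000
sixes = 0
loaded_given_six = 0
for _ in range(trials):
    loaded = random.random() < 0.5            # a fair coin toss picks the loaded or the fair die
    roll = 6 if loaded else random.randint(1, 6)
    if roll == 6:
        sixes += 1
        loaded_given_six += loaded

print(loaded_given_six / sixes)               # about 0.857, i.e. 6/7
[/code]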

Let us therefore assume, as you suggested, that Sleeping Beauty's priors are P(Pop-1) = P(Pop-2) = 1/2, without needing to delve into the specific stochastic process that placed her in either Pop-1 or Pop-2.

The key disagreement between Halfers and Thirders is whether Sleeping Beauty can update her credence upon awakening that she's part of Pop-1 from 1/2 to 1/3. Halfers argue that since Sleeping Beauty knows she'll be awakened at least once, she can't distinguish whether her current awakening is the only one (Pop-1) or one of two (Pop-2). Therefore, these two possibilities should be equally probable from her perspective.

This argument seems to misuse the Principle of Indifference. Consider the die example: When the die lands on 'six', you can't distinguish whether this outcome is from the fair die or the loaded one. However, you can still update your credence P('loaded') from 1/2 to 6/7. The die landing on 'six' does convey information in this context.

Halfers, therefore, need a stronger argument to support their 'no new information' claim. Alternatively, they could challenge Thirders to explain what new information Sleeping Beauty receives that allows her to rationally update her credence in Pop(1) from 1/2 to 1/3.

I believe this can be explained step by step to make it more intuitive:

--First step--

Imagine that upon being divided into populations Pop-1 and Pop-2, the participants in each population are awakened only once the following day in their respective waking rooms. In half of the Pop-1 rooms, a single red tulip is placed on the nightstand, hidden by a cardboard cylinder. In the other half, a white tulip is used instead. In all Pop-2 rooms, a red tulip is utilized. As a participant in this experiment, Sleeping Beauty is informed of these specific details. Upon waking, she is asked about her credence in being part of Pop-1, and what her credence is that the tulip next to her is white. In this context, her credences should be P(Pop-1) = 1/2 and P(white) = 1/4.

The cardboard cylinder is then removed, revealing a red tulip. What should Sleeping Beauty's credences be updated to now? They should be P(white) = 0 and P(Pop-1) = 1/3, right? This example appears to use Bayesian reasoning in a straightforward manner: Over time, 1/3 of participants who wake up in a room with a red tulip are part of Pop-1.

(As for the strict proof: P(Pop-1|red) = P(red|Pop-1)*P(Pop-1)/P(red) = (1/2)*(1/2)/(3/4)=1/3)
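(A short simulation of this first step, in case the Bayesian step is easier to trust when counted. It is only a sketch: each trial is one participant, assigned to Pop-1 or Pop-2 by a fair coin, and the tally is restricted to the red-tulip awakenings.)

[code]
import random

trials = 200_000
red = 0
pop1_given_red = 0
for _ in range(trials):
    pop1 = random.random() < 0.5
    # Pop-1 rooms: red or white with equal probability; Pop-2 rooms: always red.
    tulip = ("red" if random.random() < 0.5 else "white") if pop1 else "red"
    if tulip == "red":
        red += 1
        pop1_given_red += pop1

print(pop1_given_red / red)    # about 1/3 of red-tulip participants are in Pop-1
[/code]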

--Second step--

Let's change the previous scenario so that all participants experience two awakenings, one on Monday and another on Tuesday. Participants in Pop-1 awaken once with a white tulip and once with a red tulip, while participants in Pop-2 awaken twice with a red tulip. We also introduce an amnesia-inducing drug to ensure that the participants don't remember the outcome of the Monday awakening when they are awakened again on Tuesday.

In this new context, whenever Sleeping Beauty awakens, what should her credences P(Pop-1) and P(white) be? Arguably, most people, whether they're Halfers, Thirders or double-Halfers, would agree that these should be P(Pop-1) = 1/2 and P(white) = 1/4.

The cardboard cylinder is then removed and, as it happens, a red tulip is revealed. What should Sleeping Beauty's credences be updated to now? They should again be P(white) = 0 and P(Pop-1) = 1/3, right?

Perhaps the complexity of applying Bayesian reasoning in this context stems from the fact that participants in Pop-1 and Pop-2 who awaken on Monday aren't a distinct group from those who awaken on Tuesday. Indeed, the same individuals are awakened twice. To accommodate this factor, we can adjust Sleeping Beauty's Bayesian reasoning in the following manner:

Every time a participant wakes up, the probability that they are in a room with a white tulip is 1/4. The probability that I awaken in a room with a white tulip is 1/2 if I am part of Pop-1, and zero if I am part of Pop-2. As such, my prior probabilities are P(white) = 1/4 and P(Pop-1) = 1/2, while P(red|Pop-1) = 1/2.

Consequently, once the tulip's color is revealed to be red, I can make the same inference as before: P(Pop-1|red) = P(red|Pop-1)P(Pop-1)/P(red) = (1/2)(1/2)/(3/4)=1/3.

In an intuitive sense, this means that, since the majority of awakened participants find themselves next to red tulips because they belong to Pop-2, witnessing a red tulip upon awakening boosts their credence in being part of Pop-2. Although seeing a red tulip doesn't enable them to distinguish cases where the current awakening is the only one where they'll see such a tulip (as in Pop-1) or one of two such instances (as in Pop-2), it still provides information and counts as evidence that they are part of Pop-2. The reasoning behind this is analogous to why a die landing on 'six' constitutes evidence that the die is biased even though a fair die can also land on 'six'.
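(The same check for this second step, now tallying awakening episodes rather than participants, since each participant is awakened twice under amnesia. Again just a sketch with arbitrary trial counts.)

[code]
import random

trials = 200_000
red_awakenings = 0
pop1_red_awakenings = 0
for _ in range(trials):
    pop1 = random.random() < 0.5
    tulips = ["red", "white"] if pop1 else ["red", "red"]
    random.shuffle(tulips)             # which day gets which tulip doesn't matter
    for tulip in tulips:               # one loop iteration per (amnesiac) awakening
        if tulip == "red":
            red_awakenings += 1
            pop1_red_awakenings += pop1

print(pop1_red_awakenings / red_awakenings)    # about 1/3 of red-tulip awakenings belong to Pop-1
[/code]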

--Third step--

In this new variation, Sleeping Beauties themselves play the role of tulips. The populations Pop-1 and Pop-2 now consist of participants - let's call them Sleeping Uglies* - who each share a room with a Sleeping Beauty. The Sleeping Uglies will be administered the same amnesia-inducing drugs on Sunday and Monday night, but they will always be awakened both on Monday and Tuesday, ten minutes prior to the Sleeping Beauty's potential awakenings.

Whenever I, as a Sleeping Ugly, awaken, the probability that I am in a room with a 'sleeping' (i.e., not scheduled to be awakened) Sleeping Beauty is 1/4. The probability that I now have been awakened in a room with a 'sleeping' Sleeping Beauty is 1/2 if I am part of Pop-1 and zero if I am part of Pop-2. Therefore, my priors are P('sleeping') = 1/4 and P(Pop-1) = 1/2, while P('awake'|Pop-1) = 1/2.

Therefore, after Sleeping Beauty is awakened in front of me, I can infer, as before, that P(Pop-1|'awake') = P('awake'|Pop-1)*P(Pop-1)/P('awake') = (1/2 * 1/2)/(3/4) = 1/3, meaning the probability that I am part of Pop-1 after Sleeping Beauty is awakened is 1/3.

*My use of the Sleeping Uglies as participants in the experiment, and of the Sleeping Beauties' awakening episodes as evidence for the Uglies, is inspired by, but reverses, the example proposed by Robert Stalnaker in his paper Another Attempt to Put Sleeping Beauty to Rest.

--Fourth and last step--

We can now dispense with the Sleeping Uglies altogether, since their epistemic situations, and the information that they are making use of (namely, that the Sleeping Beauty in their room awakens), are identical to those of the Sleeping Beauties themselves. The only difference is that there is a ten-minute interval between the moment when the Sleeping Uglies awaken and can make use of their evidence to update their credences, while the Sleeping Beauties can update their credences immediately upon awakening. Even this small difference can be wiped out by introducing a ten-minute delay between the moment when the Sleeping Beauties are awakened (in all cases) and the moment when the interviewer shows up, with the proviso that when no interview is scheduled, the Beauties are put back to sleep rather than being interviewed, in which case their credence P(Pop-2) momentarily drops to zero.
Pierre-Normand June 13, 2023 at 13:56 #815113
Quoting PhilosophyRunner
But she is only asked a question once in the whole year. One of the wakings is randomly selected to be the one where she is asked the question. On this randomly selected waking, she is asked the question "what is the probability that this randomly selected waking shows a heads." The answer is 1/3, as per Problem A in my previous post.


A Halfer might argue that Sleeping Beauty being posed such a question, along with the provided context of the question's delivery (i.e., through a random selection among all awakenings), indeed provides the grounds for Sleeping Beauty to update her initial credence P(H) from 1/2 to 1/3. However, they might also assert that this type of questioning doesn't exist in the original setup. Therefore, they might insist that, in the absence of such randomly assigned questioning, Sleeping Beauty should maintain her credence of 1/2.

A Thirder might counter-argue by saying: The crucial element that turns the questioning into information, enabling Sleeping Beauty to update her credence, is the fact that it results from randomly selecting an awakening from all possible awakenings. Given that there are twice as many awakenings under the 'tails' condition than under 'heads,' a random selection is twice as likely to yield a 'tails' awakening. We must recognize that Sleeping Beauty doesn't necessarily require external assistance to isolate her current awakening in a manner that is both random and statistically independent of the coin toss result.

Imagine an alternative method where an external agent, let's call her Sue, randomly selects awakenings from the complete set. Sue could examine a list of all scheduled awakenings, roll a die for each, and mark the awakening as evidence-worthy if the die lands on 'six'. The selected participants would then be equipped to update their credence P(H) to 1/3 after being presented with the evidence of their selection by Sue.

Now, it doesn't matter who performs the die-rolling selection; what's important is that any awakening marked as evidence-worthy is selected randomly by a method independent of the coin toss outcome. The participants themselves, not Sue, could roll the die and, if it lands on 'six,' consider their current awakening to have been randomly selected (as it would indeed have been!) from the entire set of awakenings. This random selection allows Sleeping Beauty to single out the fact of her current awakening as evidence for updating her credence P(H) to 1/3.

If the die doesn't land on 'six,' has Sleeping Beauty squandered an opportunity to identify her current awakening as a valuable piece of evidence? Actually, if the convention had been reversed to select awakenings by a die not landing on 'six', the chosen sample would still statistically represent all scheduled awakenings (with 1/3 of those being 'tails' awakenings). The Halfer's error is assuming that the mere occurrence of an awakening doesn't provide sufficient evidence for Sleeping Beauty. The participants' selection method, which involves identifying awakenings with the indexical expression "I am currently experiencing this awakening," is actually the most representative of all methods as it encompasses the entire population of awakenings!
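(Here is a sketch of that die-selection idea in code, with my own trial counts: Sleeping Beauty rolls a die at every awakening, and only the awakenings where it lands on 'six' are marked as selected. Among the selected awakenings, roughly two thirds turn out to be tails awakenings.)

[code]
import random

trials = 300_000
selected = 0
selected_tails = 0
for _ in range(trials):
    coin = random.choice(["H", "T"])
    awakenings = 1 if coin == "H" else 2
    for _ in range(awakenings):
        if random.randint(1, 6) == 6:      # this awakening is marked evidence-worthy
            selected += 1
            selected_tails += (coin == "T")

print(selected_tails / selected)           # about 2/3, so P(H) among selected awakenings is about 1/3
[/code]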
JeffJo June 16, 2023 at 20:38 #815809
Reply to hypericin
I'm not going to wade through 14 pages. The answer is 1/3, and it is easy to prove. What is hard, is getting those who don't want that to be the answer to accept it.

First, what most think is the problem statement, was the method proposed by Adam Elga (in his 2000 paper "Self-locating belief and Sleeping Beauty problem") to implement the experiment. The correct problem statement was:
Some researchers are going to put you to sleep. During the [experiment], they will briefly wake you up either once or twice, depending on the toss of a fair coin (Heads: once; Tails: twice). After each waking, they will put you back to sleep with a drug that makes you forget that waking. When you [are awake], to what degree ought you believe that the outcome of the coin toss is Heads?


The two changes I made do not affect anything in the problem, but they do show that Elga was already thinking of how he would implement his thirder solution. The first change is where he brought up "two days," and confused the continuation of the experiment with continuation of sleep. The second suggested that your (or SB's) information might change while you are (she is) awake, which is how Elga solved the problem piecewise.

But what Elga added made the circumstances of the two (if there are to be two) wakings different. And it is this difference that is the root of the controversy that has occurred ever since.

Patient: Doctor, Doctor, it hurts if I do this.
Doctor: Then don't do that.


But the difference Elga introduced was unnecessary. So don't do it; do this instead:
  1. Tell SB all the details listed here.
  2. Put SB to sleep.
  3. Flip two coins. Call them C1 and C2.
  4. Procedure start:
  5. If both coins are showing Heads, skip to Procedure End.
  6. Wake SB.
  7. Ask SB "to what degree do you believe that coin C1 is currently showing Heads?"
  8. After she answers, put her back to sleep with amnesia.
  9. Procedure End.
  10. Turn coin C2 over, to show its opposite side.
  11. Repeat the procedure.
  12. Wake SB to end the experiment.


When SB is awake, she knows that she is in the middle of the procedure listed in steps 4 thru 9. Regardless of which pass thru these steps it is, she knows that in step 5 of this pass, there were four equally-likely combinations for what (C1,C2) were showing: {(H,H),(H,T),(T,H),(T,T)}. This is the "prior" sample space.

She also knows that the fact that she is awake eliminates (H,H) as a possibility. This is a classic example of "new information" that allows her to update the probabilities. With three (still equally likely) possibilities left, each has a posterior probability of 1/3. Since in only one is coin C1 currently showing Heads, the answer is 1/3.
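If it helps, the procedure above is easy to simulate. A minimal sketch (trial counts and names are mine): tally, over all awakenings, how often C1 is showing Heads at the moment of the question.

[code]
import random

trials = 200_000
awakenings = 0
c1_heads_at_awakening = 0
for _ in range(trials):
    c1, c2 = random.choice("HT"), random.choice("HT")
    for _ in range(2):                       # the two passes through steps 4-9
        if not (c1 == "H" and c2 == "H"):    # step 5: skip the waking only on double Heads
            awakenings += 1
            c1_heads_at_awakening += (c1 == "H")
        c2 = "T" if c2 == "H" else "H"       # step 10: turn coin C2 over

print(c1_heads_at_awakening / awakenings)    # about 1/3
[/code]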

The reason for the controversy, is that the difference Elga introduced between the first and (potential) second wakings obfuscates the prior sample space. This implementation has no such problem.

But I'm positive that halfers will try to invent one. I've seen it happen too many times to think otherwise.
jgill June 16, 2023 at 21:00 #815811
They ask her one question after each time she awakens, however: What is the probability that the coin shows heads?


This statement isolates SB from the coin toss. "What is the probability that the coin shows heads?" 1/2.
Srap Tasmaner June 16, 2023 at 22:32 #815829
Quoting Pierre-Normand
Consider the die example: When the die lands on 'six', you can't distinguish whether this outcome is from the fair die or the loaded one.


But there are two sources of randomness in this example, the die and the coin.

Similarly for all analyses that treat SB's situation as describable with two coin flips. We only have one.

The halfer position comes back to individuation, as you suggested some time ago. Roughly, the claim is that "this interview" (or "this tails interview" etc) is not a proper result of the coin toss, and has no probability. What SB ought to be asking herself is "Is this my only interview or one of two?" The chances for each of those are by definition 1 in 2.

I'm undecided.
Pierre-Normand June 17, 2023 at 11:29 #815906
Quoting Srap Tasmaner
But there are two sources of randomness in this example, the die and the coin.

Similarly for all analyses that treat SB's situation as describable with two coin flips. We only have one.


Indeed, in my examples (labelled "First step" through "Fourth step"), there's only a single source of randomness, which consists in the random assignment of individuals to either population Pop-1 or Pop-2 (awakened once or twice with white or red tulips).

Halfers contend that Sleeping Beauty's awakening cannot serve as evidence indicating she is more likely to be part of Pop-2, as there's nothing that allows her to distinguish an awakening in Pop-1 from one in Pop-2. Yet the same reasoning would apply to the inability to distinguish a 'six' rolled with the loaded die from a 'six' rolled with the fair die, and the occurrence of a 'six' nevertheless increases the likelihood that the die is loaded.

You're correct in stating that there's only one source of randomness in Sleeping Beauty's case, unlike the dice scenario. However, the two situations share a strong resemblance. The reason a 'six' outcome increases the probability that a die is loaded is because loaded dice generate more instances of players confronting a 'six'. Similarly, being part of Pop-2 in Sleeping Beauty's setup leads to more instances of self-aware awakenings. This is simply an analogy - for a more compelling argument, refer back to my cases 1 through 4 in the post you quoted.

The halfer position comes back to individuation, as you suggested some time ago. Roughly, the claim is that "this interview" (or "this tails interview" etc) is not a proper result of the coin toss, and has no probability. What SB ought to be asking herself is "Is this my only interview or one of two?" The chances for each of those are by definition 1 in 2.


Indeed, the choice between a Halfer (P(Pop-1) = 1/2) and a Thirder (P(Pop-1) = 1/3) updated credence is a matter of individuation. While I focused on the individuation of events, you seemed to suggest that different (more or less extended) conceptions of self might lead people towards one stance or another. This struck me as insightful, although personal psychological inclinations don't provide valid justifications. Currently, I don't identify as a Thirder or a Halfer. Rather, I believe that Thirders and Halfers are talking past each other because they each focus solely on one of two possible types of outcome distributions that could be considered in Sleeping Beauty's credence update. My previous "pragmatic" examples were aimed at highlighting this duality (not a dichotomy!). When Sleeping Beauty wakes and considers her situation, is she weighing the opportunities to either evade or confirm her current situation (facing lions or crocodiles)? In this case, she should reason as a Thirder. Or is she weighing the opportunity to end, or validate, the nature of her ongoing predicament (and be rescued by Aunt Betsy) at the end of her current series of awakenings? If so, she should reason as a Halfer. The root question of what her credence should be upon awakening is inherently ambiguous, and the thought experiment is tailored to create this ambiguity.
Pierre-Normand June 21, 2023 at 10:07 #816648
Quoting JeffJo
She also knows that the fact that she is awake eliminates (H,H) as a possibility. This is a classic example of "new information" that allows her to update the probabilities. With three (still equally likely) possibilities left, each has a posterior probability of 1/3. Since in only one is coin C1 currently showing Heads, the answer is 1/3.


Your proposed scenario certainly provides an interesting variation, but it doesn't quite correspond to the structure of the situation typically discussed in literature, the one that seems to give rise to a paradox.

In your scenario, there are four potential outcomes from the experiment, each of which is equally probable:

First pass HH (end) --> Never awakened
First pass HT, second pass HH --> Awakened once
First pass TH, second pass TT --> Awakened twice
First pass TT, second pass TH --> Awakened twice

When Sleeping Beauty awakens, her credences corresponding to these four outcomes shift from {1/4, 1/4, 1/4, 1/4} to {0, 1/3, 1/3, 1/3}.

However, in the scenario most frequently discussed, entire experimental runs in which Sleeping Beauty is awakened once are just as common as those where she is awakened twice. Furthermore, since there isn't any experimental run where Sleeping Beauty is not awakened at all, it's debatable whether her experiencing an awakening provides new information that would cause her to adjust her initial probabilities (as Halfers are inclined to argue).
Michael June 21, 2023 at 10:21 #816650
I'll throw in one last consideration. I posted a variation of the experiment here.

There are three beauties; Michael, Jane, and Jill. They are put to sleep and assigned a random number from {1, 2, 3}.

If the coin lands heads then 1 is woken on Monday. If the coin lands tails then 2 is woken on Monday and 3 is woken on Tuesday.

If Michael is woken then what is his credence that the coin landed heads?

Michael's credence before the experiment is P(1) = 1/3, so if woken he ought to continue to have a credence of P(1) = 1/3 since he gains no new relevant evidence if he wakes up during the experiment.

And given that if woken he is 1 iff the coin landed heads, he ought to have a credence of P(Heads) = 1/3.

Do we accept this?

If so then the question is whether or not Sleeping Beauty's credence in the original experiment should be greater than Michael's credence in this experiment. I think it should.
Pierre-Normand June 21, 2023 at 11:39 #816663
Quoting Michael
I'll throw in one last consideration. I posted a variation of the experiment here.

There are three beauties; Michael, Jane, and Jill. They are put to sleep and assigned a random number from {1, 2, 3}.

If the coin lands heads then 1 is woken on Monday. If the coin lands tails then 2 is woken on Monday and 3 is woken on Tuesday.

If Michael is woken then what is his credence that the coin landed heads?

Michael's credence before the experiment is P(1) = 1/3, so if woken he ought to continue to have a credence of P(1) = 1/3 since he gains no new relevant evidence if he wakes up during the experiment.


In this variation, it seems to me that being awakened does provide Michael with relevant evidence. Given that the coin landing on heads results in one person being awakened, and the coin landing on tails results in two persons being awakened, on average 1.5 out of three persons are awakened. Therefore, the prior probability that Michael will be awakened is P(MA) = 1/2. The conditional probabilities are P(MA|H) = 1/3 and P(MA|T) = 2/3 (and these are the same for Jane and Jill).

Hence, when Michael awakens, it's more probable that the coin landed tails.

P(T|MA) = P(MA|T)*P(T) / P(MA) = (2/3)*(1/2)/(1/2) = 2/3.

And given that if woken he is 1 iff the coin landed heads, he ought to have a credence of P(Heads) = 1/3.

Do we accept this?


Yes, we do.

If so then the question is whether or not Sleeping Beauty's credence in the original experiment should be greater than Michael's credence in this experiment. I think it should.


I'd be curious to understand why you think so.

Recently, I contemplated a similar variation wherein candidates are recruited as part of a team of two: Jane and Jill, for example. On Monday, either Jill or Jane is awakened (selected at random). On Tuesday, if a coin lands on tails, the person who wasn't awakened on Monday is now awakened. If the coin lands on heads, the experiment ends. (In this variation, as in yours, there's no need for an amnesia-inducing drug. It's only necessary that the subjects aren't aware of the day of their awakenings.)

Just like in your variation, tails generates two awakenings (for two different subjects), while heads generates only one. On average, 1.5 out of two persons are awakened. Jane's prior is P(JA) = 3/4, and the conditional probabilities are P(JA|H) = 1/2 and P(JA|T) = 1.

As before, Jane's awakening provides her with evidence that the coin landed tails.

P(T|JA) = P(JA|T)*P(T) / P(JA) = (1)*(1/2)/(3/4) = 2/3.

I would argue that this case is structurally identical to the one discussed in the original post (as well as in Lewis and Elga), with the sole exception that the relevant epistemic subjects are members of a team of two, rather than a single identifiable individual potentially being placed twice in the "same" (epistemically indistinguishable) situation. You could also consider a scenario where Jill falls ill, and her team member Jane volunteers to take her place in the experiment. In this instance, the amnesia-inducing drug would be required to maintain the epistemic separation of the two potential awakenings in the event that the coin lands tails.
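(Both variations are easy to check by brute force. A sketch, with my own trial counts: the first loop conditions on "Michael is woken" in your setup, the second on "Jane is woken" in mine; both conditional tails frequencies come out near 2/3.)

[code]
import random

trials = 300_000

# Michael's variation: numbers 1-3; heads wakes 1, tails wakes 2 and 3.
michael_woken = 0
tails_given_michael = 0
for _ in range(trials):
    tails = random.random() < 0.5
    michael = random.choice([1, 2, 3])       # Michael's randomly assigned number
    woken = [2, 3] if tails else [1]
    if michael in woken:
        michael_woken += 1
        tails_given_michael += tails
print(tails_given_michael / michael_woken)   # about 2/3

# Team-of-two variation: one of Jane/Jill on Monday, the other on Tuesday only if tails.
jane_woken = 0
tails_given_jane = 0
for _ in range(trials):
    tails = random.random() < 0.5
    monday = random.choice(["Jane", "Jill"])
    woken = {"Jane", "Jill"} if tails else {monday}
    if "Jane" in woken:
        jane_woken += 1
        tails_given_jane += tails
print(tails_given_jane / jane_woken)         # about 2/3
[/code]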
JeffJo June 21, 2023 at 13:02 #816676
Reply to Pierre-Normand
Quoting Pierre-Normand
Your proposed scenario certainly provides an interesting variation, but it doesn't quite correspond to the structure of the situation typically discussed in literature, the one that seems to give rise to a paradox.


You seem to be confused about chickens and eggs, so let me summarize the history:

  • The original problem was formulated by Arnold Zuboff, and was shared amongst a small group. Its scenario lasted for a huge number of days (I've seen a thousand and a trillion). Based on the same coin flip we know of today, the subject would be wakened either every day, or once on a randomly selected day in this period.
  • The second version of the problem came out when Adam Elga put it in the public domain in 2000. In his statement of the problem (seen above), he reduced Zuboff's number of days to two. But he did not specify the day for the "Heads" waking. So it was implied that the order was still random.
  • But he did not address that problem directly. He changed it into a third version, where he did fix the day for the "Heads" waking on Monday.
  • He apparently did this because he could isolate the commonality between {Mon,Tails} and {Mon,Heads} by telling SB that it was Monday. And then the commonality between {Mon,Tails} and {Tue,Tails} by telling her that the coin landed on Tails. That is how he got his solution, by working backwards from these two special cases to his third version of the problem.
  • You could say that Elga created three "levels" of the probability space. The "laboratory" space, where the coin flip is clearly a 1/2:1/2 chance, the "awakened" space where we seek an answer, and the "informed" spaces where SB knows that {Mon,Tails}, and whichever other is still possible, are equally likely.
  • The "situation typically discussed in literature" is about how the informed spaces should relate to the awakened space. Not the problem itself.

All the "situation typically discussed in literature" accomplishes is how well, or poorly, Elga was able to relate these "informed" spaces to the "awakened" space in the third version of the problem. And the controversy has more to do with that, than the actual answer to the second version.

All I did was create an alternative third version, one that correctly implements the second version. If you want to debate anything here, the issue is more whether Elga's third version correctly implements his second. If it does, then a correct answer (that is, the probability, not the way to arrive at the number) to mine is the correct answer to Elga's. It can tell you who is right about the solution to Elga's third version.

That answer is 1/3. And Elga's third version is a correct version of his second, since he could have fixed the Heads waking on Tuesday and arrived at the same answer.

Quoting Pierre-Normand
In your scenario, there are four potential outcomes from the experiment, each of which is equally probable.

And in the "scenario most frequently discussed," there is a fourth potential outcome that halfers want to say is not a potential outcome. SB can be left asleep on Tuesday. This is an outcome in the "laboratory" space whether or not SB can observe it. It needs to be accounted for in the probability calculations, but in the "frequent discussions" in "typical literature," the halfers remove it entirely. Rather than assign it the probability it deserves and treating the knowledge that it isn't happening as "new information."

And the reason they prefer to remove it, rather than deal with it, is that there is no orthodox way to deal with it. That's why Elga created the two "informed" spaces; they are "awakened" sub-spaces that do not need to consider it.

That's all I did as well, but without making two cases out of it. I placed that missing outcome, the fourth one you described, in a place where you can deal with it but can't ignore it.

And there are other ways to do this. In the "scenario most frequently discussed" just change {Tue,Heads} to a waking day. But instead of interviewing her, take her on a shopping trip at Saks. Now she does have "new information" when she is interviewed, and her answer is 1/3. My question to you is, why should it matter what happens on {Tue, Heads} as long as it is something other than an interview?

Or, use four volunteers. Three will be wakened each day. Each is assigned a different combination of {DAY, COIN} for when she would be left asleep. The one assigned {Tue, Heads} is the SB in the "scenario most frequently discussed."

Once each day, bring the three together and ask each what the probability is that this is her only waking. None can logically give a different answer than the others, and only one of them fits that description. The answer is 1/3.

And my point is that, while these "frequent discussions" might be interesting to some, there is a way to say which gets the right answer, rather than arguing about why one should be called the right solution. The answer is 1/3.
Michael June 21, 2023 at 13:14 #816681
Quoting JeffJo
And in the "scenario most frequently discussed," there is a fourth potential outcome that halfers want to say is not a potential outcome. SB can be left asleep on Tuesday. This is an outcome in the "laboratory" space whether or not SB can observe it. It needs to be accounted for in the probability calculations, but in the "frequent discussions" in "typical literature," the halfers remove it entirely. Rather than assign it the probability it deserves and treating the knowledge that it isn't happening as "new information."


What if the experiment ends after the Monday interview if heads, with the lab shut down and Sleeping Beauty sent home? Heads and Tuesday is as irrelevant as Heads and Friday.

I think this is equivalent to the case where we don't consider days, and just say that if heads then woken once and if tails then woken twice. It doesn't make sense to consider heads and second waking as part of the probability space. It certainly doesn't have a prior probability of 1/4.
JeffJo June 27, 2023 at 00:20 #818138
Quoting Michael
What if the experiment ends after the Monday interview if heads, with the lab shut down and Sleeping Beauty sent home? Heads and Tuesday is as irrelevant as Heads and Friday.


Then, in the "scenario most frequently discussed," SB is misinformed about the details of the experiment. In mine, the answer is 1/3.
Michael June 27, 2023 at 07:45 #818203
Quoting JeffJo
Then, in the "scenario most frequently discussed," SB is misinformed about the details of the experiment. In mine, the answer is 1/3.


So we have two different versions of the experiment:

First
1. she’s put to sleep, woken up, and asked her credence in the coin toss
2. the coin is tossed
3. if heads she’s sent home
4. if tails she’s put to sleep, woken up, and asked her credence in the coin toss

Second
1. she’s put to sleep, woken on Monday, asked her credence in the coin toss, and put to sleep
2. the coin is tossed
3. if heads she’s kept asleep on Tuesday
4. if tails she’s woken on Tuesday, asked her credence in the coin toss, and put to sleep

I think the answer should be the same in both cases.

It may still be that the answer to both is 1/3, but the reasoning for the second cannot use a prior probability of Heads and Tuesday = 1/4 because the reasoning for the first cannot use a prior probability of Heads and Second Waking = 1/4.

But if the answer to the first is 1/2 then the answer to the second is 1/2.
Michael June 27, 2023 at 10:55 #818222
Regarding Bayes' theorem:

[math]P(Heads | Monday) = {{P(Monday | Heads) * P(Heads)} \over P(Monday)}[/math]

Both thirders and double-halfers will accept that P(Heads | Monday) = 1/2, but how do we understand something like P(Monday)? Does it mean "what is the probability that a Monday interview will happen" or does it mean "what is the probability that this interview is a Monday interview"?

If the former then P(Monday) = P(Monday | Heads) = 1.

If the latter then there are two different solutions:

1. P(Monday) = P(Monday | Heads) = 2/3
2. P(Monday) = P(Monday | Heads) = 1/2

I think we can definitely rule out the first given that there doesn't appear to be any rational reason to believe that P(Monday | Heads) = 2/3.

But if we rule out the first then we rule out P(Monday) = 2/3 even though two-thirds of interviews are Monday interviews. This shows the weakness in the argument that uses the fact that two-thirds of interviews are Tails interviews to reach the thirder conclusion.

There is, however, an apparent inconsistency in the second solution. If we understand P(Monday) to mean "what is the probability that this interview is a Monday interview" then to be consistent we must understand P(Monday | Heads) to mean "what is the probability that this interview is a Monday interview given that the coin landed heads". But understood this way P(Monday | Heads) = 1, which, if assuming P(Monday) = 1/2, would give us the wrong conclusion P(Heads | Monday) = 1.

So it would seem to be that the only rational, consistent application of Bayes' theorem is where P(Monday) means "what is the probability that a Monday interview will happen", and so P(Monday) = P(Monday | Heads) = 1.

We then come to this:

[math]P(Heads | Awake) = {{P(Awake | Heads) * P(Heads)} \over P(Awake)}[/math]

Given that P(Awake | Monday) = 1, if P(Monday | Heads) = 1 then P(Awake | Heads) = 1 and if P(Monday) = 1 then P(Awake) = 1. Therefore P(Heads | Awake) = 1/2.
Pierre-Normand June 27, 2023 at 11:54 #818232
Quoting Michael
It may still be that the answer to both is 1/3, but the reasoning for the second cannot use a prior probability of Heads and Tuesday = 1/4, because the reasoning for the first cannot use a prior probability of Heads and Second Waking = 1/4.

But if the answer to the first is 1/2 then the answer to the second is 1/2.


For the first case, we can use priors of P(H) = 1/2 and P(W) = 3/4, given that there are three awakenings in the four possible scenarios (H&Mon, H&Tue, T&Mon, T&Tue) where Sleeping Beauty can be. P(W|H) = 1/2, as she is only awakened on Monday when the coin lands heads.

Consequently, P(H|W) = P(W|H)P(H)/P(W) = (1/2)(1/2)/(3/4) = 1/3.

In the second case, we can set up a similar calculation: P(Unique|W) = P(W|Unique)*P(Unique)/P(W)

P(Unique) is the prior probability that an awakening will be unique rather than part of two. P(Unique) = 1/3, as one-third of the experiment's awakenings are unique. P(W) is now 1, as Sleeping Beauty is awakened in all potential scenarios.

We then find that P(Unique|W) = P(W|Unique)P(Unique)/P(W) = (1)(1/3)/(1) = 1/3.

This second-case calculation is straightforward, but the similarity between the two cases is illuminating. Bayes' theorem works for updating a belief in an outcome given new evidence, P(O|E), by multiplying the prior probability of the outcome, P(O), by the ratio P(E|O)/P(E). This ratio quantifies how much more (or less) likely the outcome becomes when the evidence is known to be present.

In both cases, Sleeping Beauty's evidence is that she is currently awake. In the first case, the relevant ratio is (1/2)/(3/4), the factor by which her credence that the coin landed heads changes once she knows she is awake. In the second case, the relevant ratio is (1)/(1), indicating that knowing she is awake leaves her credence in a unique awakening situation (due to the coin landing heads) unchanged. Both cases yield the same result (1/3), aligning with the ratio of possible H-awakenings ('unique awakenings') to total possible awakenings produced by the experimental designs.

Another interpretation of P(H) is the proportion of entire experimental runs in which Sleeping Beauty ends up in an H-run ('unique awakening run'). According to this interpretation, the Halfer solution P(H) = 1/2 is correct. The choice between Thirders or Halfers' interpretation of P(H) should depend on the intended use: during individual awakenings (Thirders) or throughout the experiment (Halfers).
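(The two interpretations can be seen side by side in a short simulation - a sketch only, with arbitrary run counts: the same simulated runs scored per run and per awakening.)

[code]
import random

runs = 200_000
heads_runs = 0
heads_awakenings = 0
total_awakenings = 0
for _ in range(runs):
    heads = random.random() < 0.5
    heads_runs += heads
    n = 1 if heads else 2                    # one awakening on heads, two on tails
    total_awakenings += n
    if heads:
        heads_awakenings += n

print(heads_runs / runs)                     # about 1/2: proportion of H-runs (Halfer reading)
print(heads_awakenings / total_awakenings)   # about 1/3: proportion of H-awakenings (Thirder reading)
[/code]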
Michael June 27, 2023 at 12:01 #818233
Quoting Pierre-Normand
P(Unique) = 1/3, as one-third of the experiment's awakenings are unique.


This is a non sequitur. See here where I discuss the suggestion that P(Monday) = 2/3.

What we can say is this:

[math]P(Unique | Heads) = {{P(Heads | Unique) * P(Unique)} \over P(Heads)}[/math]

We know that P(Unique | Heads) = 1, P(Heads | Unique) = 1, and P(Heads) = 1/2. Therefore P(Unique) = 1/2.

Therefore P(Unique|W) = 1/2.

And if this experiment is the same as the traditional experiment then P(H|W) = 1/2.
JeffJo June 27, 2023 at 12:12 #818235
Quoting Michael
So we have two different versions of the experiment:


I'm not quite sure why you quoted me in this, as the two versions you described do not relate to anything I've said.

To reiterate what I have said, let me start with a different experiment:

  1. You volunteer for an experiment. It starts with you seated at a table in a closed room, where these details are explained to you:
  2. Two coins will be arranged randomly out of your sight. By this I mean that the faces showing on (C1,C2) are equally likely to be any of these four combinations: HH, HT, TH, and TT.
  3. Once the combination is set, a light will be turned on.
  4. At the same time, a computer will examine the coins to determine if both are showing Heads. If so, it releases a sleep gas into the room that will render you unconscious within 10 seconds, wiping your memory of the past hour. Your sleeping body will be moved to a recovery room where you will be wakened and given further details as explained below.
  5. But if either coin is showing tails, a lab assistant will come into the room and ask you a probability question. After answering it, the same gas will be released, your sleeping body will be moved the same way, and you will be given the same "further details."

So you sit in the room for a minute, the light comes on, and you wait ten seconds. A lab assistant comes in (so you weren't gassed, yet) and asks you "What is the probability that coin C1 is showing Heads?"

The answer to this question is unambiguously 1/3. Even tho you never saw the coins, you have unambiguous knowledge of the possibilities for what the combinations could be.

Now, let the "further details" be that, if this is the first pass thru experiment, the exact same procedure will be repeated. Otherwise, the experiment is ended. Whether or not you were asked the question once before is irrelevant, since you have no memory of it. The arrangement of the two coins can be correlated to the arrangement in the first pass, or not, for the same reason.

The point of my "alternate version" that I presented above is that it creates what is, to your knowledge, the same probability space on each pass. Just like this one. It exactly implements what Elga described as the SB problem. He changed it, by creating a difference between the first and second passes. The first pass ignores the coin, so only the second depends on it.

What you describe, where the (single) coin isn't established until the second pass, is manipulating Elga's change to emphasize that the coin is ignored in the first pass. It has nothing to do with what I've tried to convey. The only question it raises, is if Elga's version correctly implements his problem.
Pierre-Normand June 27, 2023 at 12:27 #818236
Quoting Michael
This is a non sequitur.


My argument follows a consistent line of reasoning. Given Sleeping Beauty's understanding of the experimental setup, she anticipates the distribution of indistinguishable awakening episodes she will find herself in, on average (whether in one experimental run or in many), and calculates what proportion of those will be H-awakenings, given the evidence that she is presently awake.

What we can say is this:

P(Unique|Heads) = P(Heads|Unique) * P(Unique) / P(Heads)

We know that P(Unique | Heads) = 1, P(Heads | Unique) = 1, and P(Heads) = 1/2. Therefore P(Unique) = 1/2.

Therefore P(Unique|W) = 1/2.

And if this experiment is the same as the traditional experiment then P(Heads|W) = 1/2.


Yes, I fully concur with this calculation. It interprets Sleeping Beauty's credence P(Unique|W) = P(H|W) upon awakening as the proportion of complete experimental runs in which Sleeping Beauty expects to find herself in an H-run ('unique awakening run'). However, this doesn't negate the Thirder interpretation, which becomes relevant when Sleeping Beauty is focusing on the individual awakening events she is expected to experience, rather than on the individual experimental runs. This interpretation distinction is highlighted in various practical examples I've provided: for example, preparing to face lions or crocodiles while escaping relies on the Thirder interpretation, whereas being picked up by Aunt Betsy at the East Wing at the end of the experiment follows the Halfer interpretation, and so on.
Michael June 27, 2023 at 12:34 #818237
Reply to Pierre-Normand
I think Bayes’ theorem shows such thirder reasoning to be wrong.

[math]P(Unique | Heads) = {{P(Heads | Unique) * P(Unique)} \over P(Heads)}[/math]

If P(Unique) = 1/3 then what do you put for the rest?

Similarly:

[math]P(Heads | Monday) = {{P(Monday | Heads) * P(Heads)} \over P(Monday)}[/math]

If P(Monday) = 2/3 then what do you put for the rest?
Pierre-Normand June 27, 2023 at 12:40 #818239
Quoting Michael
I think Bayes’ theorem shows such thirder reasoning to be wrong.

P(Unique|Heads) = P(Heads|Unique) * P(Unique) / P(Heads)

If P(Unique) = 1/3 then what do you put for the rest?


P(Heads|Unique) = 1 and P(Heads) = 1/3 (since 1/3 of expected awakenings are H-awakenings)

P(Unique|Heads) is therefore 1, as expected.

Similarly:

P(Heads|Monday) = P(Monday|Heads) * P(Heads) / P(Monday)

If P(Monday) = 2/3 then what do you put for the rest?


P(Monday|Heads) = 1 and P(Heads) = 1/3.

P(Heads|Monday) = 1/2, as expected.
Michael June 27, 2023 at 12:58 #818242
Reply to Pierre-Normand

Previously you've been saying that P(Heads) = 1/2.
Pierre-Normand June 27, 2023 at 13:13 #818244
Quoting Michael
Previously you've been saying that P(Heads) = 1/2.


In earlier messages? Yes. I shouldn't have used this prior in the context of the Thirder interpretation of P(H). I was unclear about how the individuation of events relates to the two possible interpretations of P(H) for Sleeping Beauty. So, some of my earlier uses of Bayes' theorem may have been confused or inconsistent. It is, I now think, the very fact that P(H) appears intuitively (but misleadingly) to reflect Sleeping Beauty's epistemic relation to the coin irrespective of the manner in which she tacitly individuates the relevant events that generates the apparent paradox.
Michael June 27, 2023 at 13:56 #818250
Reply to Pierre-Normand

Would you not agree that this is a heads interview if and only if this is a heads experiment? If so then shouldn't one's credence that this is a heads interview equal one's credence that this is a heads experiment?

If so then the question is whether it is more rational for one's credence that this is a heads experiment to be 1/3 or for one's credence that this is a heads interview to be 1/2.
Pierre-Normand June 27, 2023 at 14:26 #818253
Quoting Michael
Would you not agree that this is a heads interview if and only if this is a heads experiment? If so then shouldn't one's credence that this is a heads interview equal one's credence that this is a heads experiment?


Indeed, I have long insisted (taking a hint from @fdrake and Laureano Luna) that the following statements are biconditional: "The coin landed (or will land) heads", "I am currently experiencing a H-awakening", and "I am currently involved in a H-run".

However, it's important to note that while these biconditionals are true, they do not guarantee a one-to-one correspondence between these differently individuated events. When these mappings aren't one-to-one, their probabilities need not match. Specifically, in the Sleeping Beauty problem, there is a many-to-one mapping from T-awakenings to T-runs. This is why the ratios of |{H-awakenings}| to |{awakenings}| and |{H-runs}| to |{runs}| don't match.

If so then the question is whether it is more rational for one's credence that this is a heads experiment to be 1/3 or for one's credence that this is a heads interview to be 1/2.


Rationality in credences depends on their application. It would be irrational to use the credence P(H) =def |{H-awakenings}| / |{awakenings}| in a context where the ratio |{H-runs}| / |{runs}| is more relevant to the goal at hand (for instance, when trying to be picked up at the right exit door by Aunt Betsy) or vice versa (when trying to survive potential encounters with lions/crocodiles).
Michael June 27, 2023 at 14:36 #818256
Quoting Pierre-Normand
Rationality in credences depends on their application. It would be irrational to use the credence P(H) =def |{H-awakenings}| / |{awakenings}| in a context where the ratio |{H-runs}| / |{runs}| is more relevant to the goal at hand (for instance, when trying to survive encounters with lions/crocodiles or when trying to be picked up at the right exit door by Aunt Betsy) and vice versa.


I think you're confusing two different things here. If the expected return of a lottery ticket is greater than its cost it can be rational to buy it, but it's still irrational to believe that it is more likely to win. And so it can be rational to assume that the coin landed tails but still be irrational to believe that tails is more likely.

Quoting Pierre-Normand
However, it's important to note that while these biconditionals are true, they do not guarantee a one-to-one correspondence between these differently individuated events. When these mappings aren't one-to-one, their probabilities need not match. Specifically, in the Sleeping Beauty problem, there is a many-to-one mapping from T-awakenings to T-runs. This is why the ratios of |{H-awakenings}| to |{awakenings}| and |{H-runs}| to |{runs}| don't match.


I'm not sure what this has to do with credences. I think all of these are true:

1. There are twice as many T-awakenings as H-awakenings
2. There are an equal number of T-runs as H-runs
3. Sleeping Beauty's credence that this is a T-awakening is equal to her credence that this is a T-run
4. Sleeping Beauty's credence that this is a T-run is 1/2.

You seem to be disagreeing with 3 and/or 4. Is the truth of 1 relevant to the truth of 3 and/or 4? It's certainly relevant to any betting strategy, but that's a separate matter (much like with the lottery with a greater expected return).
Pierre-Normand June 27, 2023 at 15:15 #818261
Quoting Michael
I think you're confusing two different things here. If the expected return of a lottery ticket is greater than its cost it can be rational to buy it, but it's still irrational to believe that it is more likely to win. And so it can be rational to assume that the coin landed tails but still be irrational to believe that tails is more likely.


It isn't the bet's positive expected value that makes it rational for Sleeping Beauty to bet on T upon awakening. In fact, it's the other way around. The bet's positive expected value arises because she is twice as likely to win as she is to lose. This is due to the experimental setup, which on average creates twice as many T-awakenings as H-awakenings. It's because her appropriately interpreted credence is P(T) =def P(T-awakening) = 2/3 that her bet on T yields a positive expected value, not the reverse. If she only had one opportunity to bet per experimental run (and was properly informed), regardless of the number of awakenings in that run, then her bet would break even. This would also be because P(T) =def P(T-run) = 1/2.

The same logic applies in my 'escape scenario', where Sleeping Beauty's room is surrounded by crocodiles (and she awakens once) if the die doesn't land on 'six', and is surrounded by lions (and she awakens six times) if the die does land on 'six'. Given a rare chance to escape (assuming opportunities are proportional to the number of awakenings), Sleeping Beauty should prepare to face lions, not because of the relative utilities of encounters with lions versus crocodiles, but because she is indeed more likely (with 6/11 odds) to encounter lions. Here also, this is because the experimental setup generates more encounters with lions than it does with crocodiles.
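(For reference, the 6/11 figure is just the per-awakening tally written out, with 1/6 the chance of the die landing on 'six' and six versus one awakenings:)

[math]P(lions \mid awakening) = \frac{\frac{1}{6} \cdot 6}{\frac{1}{6} \cdot 6 + \frac{5}{6} \cdot 1} = \frac{1}{1 + \frac{5}{6}} = \frac{6}{11}[/math]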
Michael June 27, 2023 at 15:25 #818265
Quoting Pierre-Normand
The bet's positive expected value arises because she is twice as likely to win as she is to lose. This is due to the experimental setup, which on average creates twice as many T-awakenings as H-awakenings.


Which of these are you saying?

1. There are twice as many T-awakenings because tails is twice as likely
2. Tails is twice as likely because there are twice as many T-awakenings

I think both of these are false.

I think there are twice as many T-awakenings but that tails is equally likely.

The bet's positive expected value arises only because there are twice as many T-awakenings.
Michael June 27, 2023 at 15:31 #818268
Quoting Pierre-Normand
It's because her appropriately interpreted credence P(T) =def P(T-awakening) = 2/3 that her bet on T yields a positive expected value, not the reverse. If she only had one opportunity to bet per experimental run (and was properly informed), regardless of the number of awakenings in that run, then her bet would break even. This would also be because P(T) =def P(T-run) = 1/2.


I don't think that works.

[math]\begin{aligned}P(HInterview | HRun) &= {{P(HRun | HInterview) * P(HInterview)} \over P(HRun)}\\\\P(HInterview) &= {{P(HInterview|HRun) * P(HRun)} \over P(HRun|HInterview)}\\\\P(HInterview) &= {{1 * P(HRun)} \over 1}\\\\P(HInterview) &= P(HRun)\end{aligned}[/math]

P(H-awakening) = P(H-run). Therefore either both are 1/2 or both are 1/3. This is expected given the biconditional.

I think it's more rational to say that P(H-awakening) = 1/2 than to say that P(H-run) = 1/3. Therefore I think it's more rational to say that P(H-awakening) = P(H-run) = P(H) = 1/2.
Pierre-Normand June 28, 2023 at 02:22 #818461
Quoting Michael
Which of these are you saying?

1. There are twice as many T-awakenings because tails is twice as likely
2. Tails is twice as likely because there are twice as many T-awakenings

I think both of these are false.

I think there are twice as many T-awakenings but that tails is equally likely.

The bet's positive expected value arises only because there are twice as many T-awakenings.


I am not relying on 1, but it would be a valid inference if we assume that P(T) = 2*P(H). This assumption would hold if we define P(T) as P(T) =def |{T-awakenings}| / |{awakenings}| (and similarly for P(H)).

Your disagreement with 2 appears to stem from an assumption that (at least in the context of the Sleeping Beauty problem) the probability of an outcome must solely be a reflection of the realization of an object's intrinsic properties, such as a fair coin. However, we frequently talk about probabilities of (types of) events that depend on how we interact with objects and that only indirectly depend (if at all) on the propensities those objects have to actualize their properties. For instance, if there are twice as many Italians as Tunisians in my city (and no other nationalities), but for some reason, Tunisians go out four times more often than Italians, then when I go out, the first person I meet is twice as likely to be a Tunisian.

Here's another example. The management of the Sleeping Beauty Experimental Facility organizes a cocktail party for the staff. The caterers circulate among the guests serving drinks and sandwiches. Occasionally, they flip a coin. If it lands heads, they ask a random guest to guess the result. If it lands tails, they ask two random guests. The guests are informed of this protocol (and they don't track the caterers' movements). When a caterer approaches you, what are the odds that the coin they flipped landed heads? This situation is similar to the case of encountering a Tunisian in my previous example. Upon meeting someone, you could rationally say, "The probability that this person is a Tunisian is 2/3," even though people don't inherently possess a propensity to be born a different nationality than the one they were actually born into.

In the cocktail party scenario, the probability that "this" coin landed tails is a function of both the coin's propensities (its fairness) and the method used to engineer interactions between guests and coins based on the outcomes of the tosses.
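(This protocol is also easy to simulate. A sketch, where the number of guests and trials are my own choices: guest #0 plays "you", and each toss asks one random guest on heads and two on tails.)

[code]
import random

guests = 50
trials = 200_000
asked = 0
heads_given_asked = 0
for _ in range(trials):
    heads = random.random() < 0.5
    chosen = random.sample(range(guests), 1 if heads else 2)   # guests questioned for this toss
    if 0 in chosen:                                            # guest #0 is "you"
        asked += 1
        heads_given_asked += heads

print(heads_given_asked / asked)    # about 1/3: when you are approached, the toss was heads a third of the time
[/code]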
Michael June 28, 2023 at 09:46 #818505
Quoting Pierre-Normand
However, we frequently talk about probabilities of (types of) events that depend on how we interact with objects and that only indirectly depend (if at all) on the propensities those objects have to actualize their properties. For instance, if there are twice as many Italians as Tunisians in my city (and no other nationalities), but for some reason, Tunisians go out four times more often than Italians, then when I go out, the first person I meet is twice as likely to be a Tunisian.


In this case:

1. there are twice as many Tunisian-meetings because Tunisian-meetings are twice as likely
2. Tunisian-meetings are twice as likely because there are half as many Tunisians and Tunisians go out four times more often

This makes sense.

So:

1. there are twice as many T-awakenings because T-awakenings are twice as likely
2. T-awakenings are twice as likely because ...

How do you finish 2? It's circular reasoning to finish it with "there are twice as many T-awakenings".

Quoting Pierre-Normand
The management of the Sleeping Beauty Experimental Facility organizes a cocktail party for the staff. The caterers circulate among the guests serving drinks and sandwiches. Occasionally, they flip a coin. If it lands heads, they ask a random guest to guess the result. If it lands tails, they ask two random guests. The guests are informed of this protocol (and they don't track the caterers' movements). When a caterer approaches you, what are the odds that the coin they flipped landed heads?


To make this comparable to the Sleeping Beauty problem: there are two Sleeping Beauties; one will be woken if heads, two will be woken if tails. When woken, what is their credence in heads? In such a situation the answer would be 1/3. Bayes' theorem for this is:

[math]\begin{aligned}P(Heads | Awake) &= {{P(Awake | Heads) * P(Heads)} \over P(Awake)}\\&={{{1\over2}*{1\over2}}\over{3\over4}}\\&={1\over3}\end{aligned}[/math]

This isn't comparable to the traditional problem.

Incidentally, what is your version of Bayes' theorem for this where P(Heads) = 1/3?
Pierre-Normand June 28, 2023 at 10:01 #818506
Quoting Michael
What wouldn't make sense is just to say that Tunisian-meetings are twice as likely because there are twice as many Tunisian-meetings. That is a non sequitur.


But why wouldn't it make sense? For example, if you're an immigration lawyer and your secretary has arranged for you to meet with twice as many Tunisians as Italians in the upcoming week, when you walk into a meeting without knowing the client's nationality, isn't it logical to say that it's twice as likely to be with a Tunisian?

Quoting Michael
To make this comparable to the Sleeping Beauty problem; there are two Sleeping Beauties, one will be woken if heads, two will be woken if tails. When woken, what is their credence in heads? In such a situation the answer would be 1/3. Bayes' theorem for this is:

P(Heads|Awake) = P(Awake|Heads) * P(Heads) / P(Awake)
= ((1/2) * (1/2)) / (3/4)
= 1/3

This isn't comparable to the traditional problem.


Why isn't it comparable? I had proposed an identical version earlier. One effective way to erase Sleeping Beauty's memory without any side effects from an amnesia-inducing drug might be to switch her with her identical twin for the second awakening. They would each only experience one awakening at most as part of a team. Their epistemic perspectives regarding the coin toss would remain the same, and therefore so should their rational credences.

Incidentally, what is your version of Bayes' theorem for this where P(Heads) = 1/3?


I am unsure what it is that you are asking here.
Michael June 28, 2023 at 10:05 #818507
Quoting Pierre-Normand
But why wouldn't it make sense? For example, if you're an immigration lawyer and your secretary has arranged for you to meet with twice as many Tunisians as Italians in the upcoming week, when you walk into a meeting without knowing the client's nationality, isn't it logical to say that it's twice as likely to be with a Tunisian?


I've since edited my post to make my point clearer. To repeat:

In this case:

1. there are twice as many Tunisian-meetings because Tunisian-meetings are twice as likely
2. Tunisian-meetings are twice as likely because there are half as many Tunisians and Tunisians go out four times more often

This makes sense.

So:

1. there are twice as many T-awakenings because T-awakenings are twice as likely
2. T-awakenings are twice as likely because ...

How do you finish 2? It's circular reasoning to finish it with "there are twice as many T-awakenings".

Quoting Pierre-Normand
I am unsure what it is that you are asking here.


Starting here you argued that P(Heads) = 1/3.

So, what do you fill in here for the example of one person woken if heads, two if tails?

[math]\begin{aligned}P(Heads | Awake) &= {{P(Awake | Heads) * P(Heads)} \over P(Awake)}\\\\&={{P(Awake | Heads)*{1\over3}}\over{P(Awake)}}\end{aligned}[/math]
Pierre-Normand June 28, 2023 at 10:19 #818511
Quoting Michael
Starting here you argued that P(Heads) = 1/3.

So, what do you fill in here for the example of one person woken if heads, two if tails?


I see. I was filling in a template that you had provided where P(Monday) = 2/3, thus making it clear that we were quantifying awakening episodes.

In that case P(Monday|Heads) = 1, and P(Heads) = 1/3 since one third of the awakenings are H-awakenings.

Therefore P(Heads|Monday) = P(Monday|Heads) × P(Heads) / P(Monday) = (1) × (1/3) / (2/3) = 1/2.

Likewise, P(Heads|Awake) = P(Awake|Heads) × P(Heads) / P(Awake) = (1) × (1/3) / (1) = 1/3

Note that when we quantify awakening episodes, P(Awake|Heads) = 1 since all H-awakenings are awakenings.
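Those episode-counting ratios can also be read off a simple frequency tally of the standard protocol. The sketch below only reports the long-run counts that both sides agree on (the function name and run count are illustrative); it does not by itself settle whether credence should track runs or awakenings.

import random

def awakening_tally(runs=200_000):
    # Standard protocol: one awakening (Monday) if heads, two (Monday and Tuesday) if tails.
    awakenings = heads_awakenings = 0
    mondays = heads_mondays = 0
    for _ in range(runs):
        heads = random.random() < 0.5
        days = ["Monday"] if heads else ["Monday", "Tuesday"]
        for day in days:
            awakenings += 1
            heads_awakenings += heads
            if day == "Monday":
                mondays += 1
                heads_mondays += heads
    print("H-awakenings / all awakenings:", heads_awakenings / awakenings)  # ~1/3
    print("H-awakenings / Monday awakenings:", heads_mondays / mondays)     # ~1/2

awakening_tally()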
Pierre-Normand June 28, 2023 at 10:31 #818513
Quoting Michael
In this case:

1. there are twice as many Tunisian-meetings because Tunisian-meetings are twice as likely
2. Tunisian-meetings are twice as likely because there are half as many Tunisians and Tunisians go out four times more often

This makes sense.

So:

1. there are twice as many T-awakenings because T-awakenings are twice as likely
2. T-awakenings are twice as likely because ...

How do you finish 2? It's circular reasoning to finish it with "there are twice as many T-awakenings".


To fill in your number 2 with no circularity, we can draw a parallel to the first example:

2a. Tunisian-meetings are twice as likely because there are half as many Tunisians and Tunisians go out four times more often (and thus, Tunisian-meetings are twice as frequent as Italian-meetings)

Likewise:

2b. T-awakenings are twice as likely because, based on the experiment's design, Sleeping Beauty is awakened twice as often when the coin lands tails (and thus, T-awakenings are twice as frequent as H-awakenings)
Michael June 28, 2023 at 10:33 #818514
Quoting Pierre-Normand
T-awakenings are twice as likely because, based on the experiment's design, Sleeping Beauty is awakened twice as often when the coin lands tails


This is just repeating the same thing in a different way. That there are twice as many T-awakenings just is that Sleeping Beauty is awakened twice as often if tails. So your reasoning is circular.
Pierre-Normand June 28, 2023 at 10:40 #818516
Quoting Michael
This is just repeating the same thing in a different way. That there are twice as many T-awakenings just is that Sleeping Beauty is awakened twice as often if tails


You accepted the validity of the reasoning when probability was deduced from frequencies in the Tunisian-meetings scenario. Why is this reasoning acceptable for people who were born Tunisian but questionable for coins that landed tails?
Michael June 28, 2023 at 10:43 #818518
Reply to Pierre-Normand

These are two different sets of claims:

A1. there are twice as many Tunisian-meetings because Tunisian-meetings are twice as likely
A2. Tunisian-meetings are twice as likely because there are half as many Tunisians and Tunisians go out four times more often

B1. there are twice as many T-awakenings because T-awakenings are twice as likely
B2. T-awakenings are twice as likely because Sleeping Beauty is woken twice as often if tails

"there are twice as many T-awakenings" is biconditional with "Sleeping Beauty is woken twice as often if tails" and so B uses circular reasoning.

"there are twice as many Tunisian-meetings" isn't biconditional with "there are half as many Tunisians and Tunisians go out four times more often" and so A doesn't use circular reasoning.
Michael June 28, 2023 at 10:47 #818520
We have two different experiments:

1. A is woken once if heads, twice if tails
2. A is woken once if heads, both A and B once each if tails

Given that I'm guaranteed to wake up if heads in the first experiment but not guaranteed to wake up if heads in the second experiment (and guaranteed to wake up if tails in both experiments) I think it only reasonable to conclude that P(Heads|Awake) in the first experiment is greater than P(Heads|Awake) in the second experiment.

And given that P(Heads|Awake) = 1/3 in the second experiment I think it only reasonable to conclude that P(Heads|Awake) > 1/3 (i.e. 1/2) in the first experiment.
Pierre-Normand June 28, 2023 at 11:10 #818524
Quoting Michael
"there are twice as many Tunisian-meetings" isn't biconditional with "there are half as many Tunisians and Tunisians go out four times more often" and so A doesn't use circular reasoning.


However, you seem to agree that in this scenario, one is twice as likely to encounter a Tunisian. The conclusion that there are twice as many Tunisian-meetings emerges from the premises: (1) there are half as many Tunisians and (2) Tunisians venture out four times more often. This inference is simply an intermediate step in the argumentation, providing an explanation for why there are twice as many Tunisian-meetings. Analogously, the Sleeping Beauty setup explains why there are twice as many T-awakenings. If the reason for twice as many Tunisian-meetings is that Tunisians venture out twice as often (assuming there are an equal number of Tunisians and Italians), then the analogy with the Sleeping Beauty scenario is precise. The attribute of being Tunisian can be compared to a coin landing tails, and encountering them on the street can be paralleled to Sleeping Beauty encountering such coins upon awakening. In the Sleeping Beauty setup, coins that land tails are 'venturing out' more often.
Michael June 28, 2023 at 11:26 #818526
Quoting Pierre-Normand
However, you seem to agree that in this scenario, one is twice as likely to encounter a Tunisian. The conclusion that there are twice as many Tunisian-meetings emerges from the premises: (1) there are half as many Tunisians and (2) Tunisians venture out four times more often. This inference is simply an intermediate step in the argumentation, providing an explanation for why there are twice as many Tunisian-meetings. Analogously, the Sleeping Beauty setup explains why there are twice as many T-awakenings. If the reason for twice as many Tunisian-meetings is that Tunisians venture out twice as often (assuming there are an equal number of Tunisians and Italians), then the analogy with the Sleeping Beauty scenario is precise. The attribute of being Tunisian can be compared to a coin landing tails, and encountering them on the street can be paralleled to Sleeping Beauty encountering such coins upon awakening. In the Sleeping Beauty setup, coins that land tails are 'venturing out' more often.


This goes back to my distinction between:

1. One should reason as if one is randomly selected from the set of all participants
2. One should reason as if one's interview is randomly selected from the set of all interviews

In the case where I go out and meet someone on the street it is certainly comparable to 2, and this is why when we consider the sitters it is correct to say that the probability that they are assigned a heads interview is 1/3.

But Sleeping Beauty isn't assigned an interview in the same way. It's not the case that there is one heads interview, two tails interviews, and she "meets" one of the interviews at random (such that P(T interview) = 2/3); instead it's the case that there is one heads interview, two tails interviews, and first she is assigned one of the interview sets at random (such that P(T interviews) = 1/2) and then she "meets" one of the interviews in her set at random.

If we were to use the meetings example then:

1. A coin is tossed
2. If heads then 1 Italian walks the streets
3. If tails then 2 Tunisians walk the streets
4. Sleeping Beauty is sent out into the streets

What is the probability that she will meet a Tunisian? That there are twice as many Tunisians isn't that her meeting a Tunisian is twice as likely.
Michael June 28, 2023 at 11:45 #818530
Your argument is that: if 1) there are twice as many T-awakenings and if 2) I randomly select one of the awakenings then 3) it is twice as likely to be a T-awakening.

This is correct. But the manner in which the experiment is conducted is such that (2) is false. (3) doesn't follow from (1) alone.

(2) is true for the sitter assigned an interview but not for the participant.

For the participant it is the case that 1) there are twice as many T-awakenings, 2) I randomly select one of the awakening sets, 3) it is equally likely to be a T-awakening set, and so 4) it is equally likely to be a T-awakening. (1) it turns out is irrelevant.

You can't just ignore (or change) the manner in which Sleeping Beauty participates in the experiment, which is what your various analogies do.
Pierre-Normand June 28, 2023 at 12:06 #818536
Quoting Michael
If we were to use the meetings example then:

1. A coin is tossed
2. If heads then 1 Italian walks the streets
3. If tails then 2 Tunisians walk the streets
4. Sleeping Beauty is sent out into the streets

What is the probability that she will meet a Tunisian?


In the Sleeping Beauty problem, she isn't asked to estimate the probability of being awakened in the future with the coin having landed heads. Instead, she's awakened and then questioned about her current belief regarding the coin's outcome. To maintain this structure in the street encounter example, we should consider Sleeping Beauty meeting a wanderer and then being asked to consider the probability that this wanderer is an Italian. If, over time, the setup leads to twice as many Tunisian encounters (perhaps because Tunisians wander about twice as long as Italians), then Sleeping Beauty's rational credence should be P(Italian) = 1/3.
Michael June 28, 2023 at 12:09 #818538
Quoting Pierre-Normand
If, over time, the setup leads to twice as many Tunisian encounters (perhaps because Tunisians wander about twice as long as Italians), then Sleeping Beauty's rational credence should be P(Italian) = 1/3.


I believe this credence is based on fallacious reasoning as explained here.

Her reasoning is: if 1) there are twice as many Tunisian walkers and if 2) I randomly meet one of the walkers then 3) it is twice as likely to be a Tunisian walker.

Given the manner in which the experiment is conducted (2) is false and so this isn't the correct reasoning with which to determine one's credence.
Pierre-Normand June 28, 2023 at 12:15 #818540
Quoting Michael
Your argument is that: if 1) there are twice as many T-awakenings and if 2) I randomly select one of the awakenings then 3) it is twice as likely to be a T-awakening.

This is correct. But the manner in which the experiment is conducted is such that 2) is false.


Your point (2) doesn't factor into my argument. I've consistently held to the premise, as dictated by the problem statement, that Sleeping Beauty awakens once when the coin lands heads and twice when it lands tails. There's no necessity for an external agent to select an awakening, just as there's no need for someone to choose a street encounter. Instead, Sleeping Beauty, upon each awakening (or encounter), should consider the long-term distribution of these awakenings (or encounters) to formulate a rational belief about the current situation.
Michael June 28, 2023 at 12:35 #818542
Reply to Pierre-Normand

We start with the mutually agreeable premise:

P1) there are twice as many T-awakenings

Your conclusion is:

C) T-awakenings are twice as likely

Obviously this is a non sequitur. We need the second premise:

P2) if there are twice as many T-awakenings then T-awakenings are twice as likely

This is something that I disagree with and that you need to prove.

In the case of the meetings we have:

*P1) there are twice as many Tunisian walkers
*P2) if I meet a walker at random then I am twice as likely to meet a Tunisian walker (from *P1)
*P3) I meet a walker at random
*C) I am twice as likely to have met a Tunisian walker (from *P2 and *P3)

In Sleeping Beauty's case we have:

P1) there are twice as many tails interviews
P2) ?
P3) I am in an interview
C) I am twice as likely to be in a tails interview

What is your (P2) that allows you to derive (C)? It doesn't follow from (P1) and (P3) alone.
Pierre-Normand June 28, 2023 at 13:34 #818545
Quoting Michael
In the case of the meetings we have:

*P1) there are twice as many Tunisian walkers
*P2) if I meet a walker at random then I am twice as likely to meet a Tunisian walker (from *P1)
*P3) I meet a walker at random
*C) I am twice as likely to have met a Tunisian walker (from *P2 and *P3)

In Sleeping Beauty's case we have:

P1) there are twice as many tails interviews
P2) ?
P3) I am in an interview
C) I am twice as likely to be in a tails interview

What is your (P2) that allows you to derive (C)? It doesn't follow from (P1) and (P3) alone.


You are introducing premises *P2 and *P3 in an attempt to emphasize a perceived disanalogy between the cosmopolitan meeting scenario and the Sleeping Beauty problem. Both *P1 and *P2 seem to imply that there exists a pre-determined set of potential encounters (many Tunisians and half as many Italians strolling around), from which a random selection process subsequently generates an encounter. There indeed is no analogous situation in the Sleeping Beauty problem, as there isn't a pre-determined set of pre-tossed coins from which Sleeping Beauty randomly encounters one upon awakening. However, I would argue that this misrepresents the cosmopolitan meeting scenario.

In the cosmopolitan situation, the probability of meeting a Tunisian doubles because Tunisians are around twice as often. There is no need for a random selection from a pre-existing set of potential encounters. Similarly, in the Sleeping Beauty problem, coins that have landed on tails "hang around" twice as long (i.e., for two sequential awakenings instead of one), which makes it twice as likely for Sleeping Beauty to encounter this outcome each time she is awakened and interviewed throughout the experiment.

The random process is fully specified by the equal distribution of coin toss outcomes (over the long run) and the longer "hang around" times of tails outcomes. Likewise, in the cosmopolitan meeting case, the process is fully specified by the equal distribution of Italians and Tunisians in the city and the increased frequency of encounters generated by Tunisians due to their longer "hang around" times. In neither case are additional random selection processes from a pre-determined set of possible encounters necessary.
Michael June 28, 2023 at 14:32 #818551
Reply to Pierre-Normand I introduce the additional premise(s) because this is a non sequitur:

A1. there are twice as many Tunisian walkers as Italian walkers
A2. I am twice as likely to meet a Tunisian walker

If all the Tunisian walkers are in one area of the town but I'm walking in another then the conclusion is false. If, unknown to me, all the Tunisian walkers are wearing red and all the Italians wearing blue and I'm told to meet someone wearing blue if a coin lands heads or red if tails then the conclusion is false. If I don't go out to meet anyone then the conclusion is false. The experiment needs to be set up in such a way that the walkers are randomly distributed throughout the town and that I meet at random any one of the walkers. Only when this setup is established as a premise will the conclusion follow:

A1. there are twice as many Tunisian walkers as Italian walkers
A2. if I meet a walker at random from a random distribution of all walkers then I am twice as likely to meet a Tunisian walker

Similarly, this is a non sequitur:

B1. there are twice as many T-interviews as H-interviews
B2. my interview is twice as likely to be a T-interview

You would instead need something like:

C1. there are twice as many T-interviews as H-interviews
C2. if my interview is randomly assigned from the set of all interviews then my interview is twice as likely to be a T-interview

But this reasoning doesn't apply to Sleeping Beauty because her interview isn't randomly assigned from the set of all interviews.

For Sleeping Beauty the correct argument is the one that properly sets out the manner in which the experiment is conducted:

D1. there are an equal number of T-interview sets as H-interview sets
D2. If I am assigned at random either a T-interview set or a H-interview set then my interview set is equally likely to be a T-interview set
D3. I am assigned at random either a T-interview set or a H-interview set
D4. my interview is a T-interview iff my interview set is a T-interview set
D5. my interview is equally likely to be a T-interview

B is fallacious, C is inapplicable, and D is sound, hence why P(Heads|Awake) = 1/2 is the only rational conclusion. The fact that there are twice as many T-interviews as H-interviews is irrelevant. It's a premise from which no relevant conclusion regarding credence can be derived. Its only use is to explain betting outcomes.

Quoting Pierre-Normand
In the cosmopolitan situation, the probability of meeting a Tunisian doubles because Tunisians are around twice as often.


This is an ambiguous claim. If there are half as many Tunisians who go out four times as often but are only out for 10 minutes, whereas Italians are out for 20 minutes, then Tunisians are around equally as often as measured by time out. The only way you could get this to work is if the argument is set out exactly as I have done above:

A1. there are twice as many Tunisian walkers as Italian walkers (out right now)
A2. if (right now) I meet a walker at random from a random distribution of all walkers (out right now) then I am twice as likely to meet a Tunisian walker

But there's nothing comparable to "if (right now) I meet a walker at random from a random distribution of all walkers (out right now)" that has as a consequent "then my interview is twice as likely to be a T-interview".
Michael June 28, 2023 at 17:08 #818570
P1. If I am assigned at random either a H-interview set or a T-interview set then my interview set is equally likely to be a H-interview set
P2. I am assigned at random either a H-interview set or a T-interview set
P3. My interview is a H-interview iff my interview set is a H-interview set
C1. My interview is equally likely to be a H-interview

The premises are true and the conclusion follows, therefore the conclusion is true.

However, consider:

P4. If my sitter is assigned at random either a H-interview or a T-interview then his interview is half as likely to be a H-interview
P5. My sitter is assigned at random either a H-interview or a T-interview
P6. My interview is a H-interview iff my sitter's interview is a H-interview
C2. My interview is half as likely to be a H-interview

Prima facie the premises are true and the conclusion follows, therefore prima facie the conclusion is true. However, C1 and C2 are contradictory, therefore one of the arguments must be unsound.

Let's say that my sitter happens to be John:

P7. If John is assigned at random either a H-interview or a T-interview then his interview is half as likely to be a H-interview
P8. John is assigned at random either a H-interview or a T-interview
P9. My interview is a H-interview iff John's interview is a H-interview
C3. My interview is half as likely to be a H-interview

The issue is with P9. My interview is not biconditional with John's interview given that he is not guaranteed to be my sitter. That second argument commits a fallacy. P4 and P5 are true only under a de re interpretation of "my sitter" and P6 is true only under a de dicto interpretation.

This is why the participant shouldn't update his credence to match his sitter's.
Michael June 28, 2023 at 19:58 #818603
Reply to JeffJo

We have two different experiments:

1. A is woken once if heads, twice if tails
2. A is woken once if heads, A and B once each if tails

Your version of the experiment is comparable to the second experiment, not the first. Your case of HH is equivalent to the case that the coin landed heads and I am B. The very fact that I'm being asked my credence allows me to rule out one of the non-zero prior probabilities, i.e. P(HH) = 1/4 and P(Heads and B) = 1/4.

The second experiment is not equivalent to the first experiment. In the first experiment there is no non-zero prior probability that I can rule out when being asked my credence.

Given that I'm guaranteed to be woken up if heads in the first experiment but not the second (and guaranteed to be woken up if tails in both experiments) it makes sense that my credence in heads is greater in the first experiment than in the second. If I'm less likely to be woken up if heads then it's less likely to be heads if I'm woken up.
Pierre-Normand June 28, 2023 at 23:45 #818657
Quoting Michael
P1. If I am assigned at random either a T-interview set or a H-interview set then my interview set is equally likely to be a T-interview set
P2. I am assigned at random either a T-interview set or a H-interview set
P3. My interview is a T-interview iff my interview set is a T-interview set
C1. My interview is equally likely to be a T-interview

The premises are true and the conclusion follows, therefore the conclusion is true.


The conclusion doesn't follow because, while the biconditional expressed in P3 is true, this biconditional does not guarantee a one-to-one correspondence between the set of T-interviews and the set of T-runs (or "T-interview sets"). Instead, the correspondence is two-to-one, as each T-run includes two T-interviews. This is a central defining feature of the Sleeping Beauty problem that your premises fail to account for.
Srap Tasmaner June 28, 2023 at 23:50 #818660
Quoting Pierre-Normand
The random process is fully specified by the equal distribution of coin toss outcomes (over the long run) and the longer "hang around" times of tails outcomes


Haven't read all the recent back and forth here, but I think the usual examples of conditional probability do not apply.

If you want a closer analogy with pedestrians, it's Tunisians walking around in pairs. If the chances of meeting an Italian or a pair of Tunisians are equal, then the chances of meeting *a* Tunisian are either nil, since you can't meet just one, or the same as meeting a pair.

Look at how hang-around times affect the pedestrian-encountering odds. Roughly, if you miss a short walker, you've missed him, but if you miss a long walker you get another chance. That's not how Sleeping Beauty works at all. There's no way to miss your first tails interview but still catch the second one.
Pierre-Normand June 29, 2023 at 00:00 #818664
Quoting Michael
This is an ambiguous claim. If there are half as many Tunisians but they go out four times as often but are only out for 10 mins, whereas Italians are out for 20 mins, then it would be that Tunisians are around equally as often as measured by time out. The only way you could get this to work is if the argument is set out exactly as I have done above:

A1. there are twice as many Tunisian walkers as Italian walkers (out right now)
A2. if (right now) I meet a walker at random from a random distribution of all walkers (out right now) then I am twice as likely to meet a Tunisian walker

But there's nothing comparable to "if (right now) I meet a walker at random from a random distribution of all walkers (out right now)" that has as a consequent "then my interview is twice as likely to be a T-interview".


To fine-tune the analogy, let's assume that there are an equal number of Tunisians and Italians, that they are out for the same duration, and that Tunisians go out twice as frequently. Importantly, there's no need for an extraneous process of random selection to generate an encounter with a citizen, or a tossed coin, in either example. In the cosmopolitan encounter case, the random distributions of citizens in the street at any given time (with, on average, twice as many Tunisians out) directly result in twice as many encounters with Tunisians. It is a straightforward causal relationship between the distribution of wanderers and the distribution of encounters. Similarly, in the Sleeping Beauty case, the setup guarantees that Sleeping Beauty will encounter twice as many coins having landed tails when she awakens, simply by ensuring that she is awakened twice as often when the coins land tails.
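A rough simulation of this encounter setup is sketched below. Note that it builds in the very assumption under dispute, namely that at the moment of an encounter every citizen currently out is equally likely to be the one met; the population size, outing probability, and function name are illustrative.

import random

def encounter_frequency(steps=100_000, n_each=50, p_out=0.1):
    # Equal numbers of Italians and Tunisians; Tunisians are out roughly twice as often.
    tunisian_meetings = meetings = 0
    for _ in range(steps):
        currently_out = (["Italian"] * sum(random.random() < p_out for _ in range(n_each)) +
                         ["Tunisian"] * sum(random.random() < 2 * p_out for _ in range(n_each)))
        if currently_out:
            meetings += 1
            # Disputed step: the person met is drawn uniformly from those currently out.
            tunisian_meetings += random.choice(currently_out) == "Tunisian"
    return tunisian_meetings / meetings

print(encounter_frequency())  # roughly 2/3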
Pierre-Normand June 29, 2023 at 00:39 #818673
Quoting Srap Tasmaner
If you want a closer analogy with pedestrians, it's Tunisians walking around in pairs. If the chances of meeting an Italian or a pair of Tunisians are equal, then the chances of meeting *a* Tunisian are either nil, since you can't meet just one, or the same as meeting a pair.

Look at how hang-around times affect the pedestrian-encountering odds. Roughly, if you miss a short walker, you've missed him, but if you miss a long walker you get another chance. That's not how Sleeping Beauty works at all. There's no way to miss your first tails interview but still catch the second one.


If I were to adjust the analogy, suppose that meeting a Tunisian pedestrian guarantees that you have met or will meet their sibling either in the previous or next encounter. In this scenario, would your credence that the pedestrian you're encountering is a Tunisian change? As long as you meet Tunisians twice as often as Italians, your credence P(Tunisian encounter) should remain 2/3 at the time of each individual encounter, regardless of the pairing situation.
Srap Tasmaner June 29, 2023 at 02:40 #818689
Reply to Pierre-Normand

But haven't you lost Sleeping Beauty's other constraint, that the chances of encountering one Italian or two Tunisians are equal?
Michael June 29, 2023 at 06:57 #818714
Quoting Pierre-Normand
The conclusion doesn't follow because, while the biconditional expressed in P3 is true, this biconditional does not guarantee a one-to-one correspondence between the set of T-interviews and the set of T-runs (or "T-interview sets"). Instead, the correspondence is two-to-one, as each T-run includes two T-interviews. This is a central defining feature of the Sleeping Beauty problem that your premises fail to account for.


That doesn't mean that the credence isn’t transitive. My premises "fail" to account for it because it's irrelevant.

A iff B
P(B) = 1/2
Therefore, P(A) = 1/2

The conclusion has to follow.

See also what I said before:

[math]\begin{aligned}P(TInterview | TRun) &= {{P(TRun | TInterview) * P(TInterview)} \over P(TRun)}\\\\P(TInterview) &= {{P(TInterview|TRun) * P(TRun)} \over P(TRun|TInterview)}\\\\P(TInterview) &= {{1 * P(TRun)} \over 1}\\\\P(TInterview) &= P(TRun)\end{aligned}[/math]
Michael June 29, 2023 at 07:17 #818716
Quoting Pierre-Normand
In the cosmopolitan encounter case, the random distributions of citizens in the street at any given time (with, on average, twice as many Tunisians out) directly result in twice as many encounters with Tunisians.


Because of what I said before:

A1. there are twice as many Tunisian walkers as Italian walkers
A2. therefore, if I go out and meet at random one of the walkers then I am twice as likely to meet a Tunisian walker
A3. therefore, there will be twice as many Tunisian-walker meetings

In Sleeping Beauty's case:

B1. Sleeping Beauty wakes up twice as often if the coin lands tails
B2. the coin is equally likely to land tails
B3. therefore, there will be twice as many tails awakenings

This argument is sound and fully explains the betting outcome. Your conclusion that "therefore, if Sleeping Beauty wakes up then the coin is twice as likely to have landed tails" just doesn't follow.

What would follow is "therefore, if I pick at random one of Sleeping Beauty's awakenings then it is twice as likely to be a tails awakening," but given that the experiment isn't conducted this way it doesn't make sense for Sleeping Beauty to reason this way to determine her credence.
Michael June 29, 2023 at 12:28 #818739
A slightly different example. If the coin lands heads then Sleeping Beauty is woken on Monday and kept asleep on Tuesday. If the coin lands tails then on each of Monday and Tuesday a second coin is tossed and Sleeping Beauty is woken that day only if it lands heads.

This gives us:

Coin(s)  Monday  Tuesday
H        Awake   Sleep
H        Awake   Sleep
H        Awake   Sleep
H        Awake   Sleep
THH      Awake   Awake
THT      Awake   Sleep
TTH      Sleep   Awake
TTT      Sleep   Sleep


[math]\begin{aligned}P(Heads|Awake) = {{P(Awake|Heads) \cdot P(Heads)} \over P(Awake)}\end{aligned}[/math]

Using halfer reasoning (each row is the outcome) we have:

[math]\begin{aligned}P(Heads|Awake) = {{1 \cdot {1\over2}} \over {7\over8}} = {4\over7}\end{aligned}[/math]

Using thirder reasoning (each cell in the second and third columns is the outcome) we have:

[math]\begin{aligned}P(Heads|Awake) = {{{1\over2} \cdot {1\over2}} \over {1\over2}} = {1\over2}\end{aligned}[/math]

Given that a fair coin toss is equally likely to land heads as tails, and that she is less likely (not guaranteed) to wake if tails, then if she does wake she reasons that tails is less likely. In waking she rules out TTT.

The answer of [math]4\over7[/math] seems correct, even though there will be an equal number of heads awakenings as tails awakenings after repeated runs.
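A quick simulation of this variant (a sketch; the second coin is tossed independently on each day, as described above, and the function name and run count are illustrative) keeps the two ways of counting side by side: conditioning per run on being awakened at all gives roughly 4/7 for heads, while counting awakenings gives roughly 1/2.

import random

def suppressed_awakening_variant(runs=200_000):
    runs_awakened = heads_runs_awakened = 0
    awakenings = heads_awakenings = 0
    for _ in range(runs):
        heads = random.random() < 0.5
        if heads:
            days_awake = [True, False]  # woken Monday, kept asleep Tuesday
        else:
            # On tails, a second coin is tossed each day; she is woken only if it lands heads.
            days_awake = [random.random() < 0.5, random.random() < 0.5]
        n = sum(days_awake)
        awakenings += n
        heads_awakenings += n if heads else 0
        if n > 0:
            runs_awakened += 1
            heads_runs_awakened += heads
    print("P(heads | awakened at all), counting runs:", heads_runs_awakened / runs_awakened)  # ~4/7
    print("heads awakenings / all awakenings:", heads_awakenings / awakenings)                # ~1/2

suppressed_awakening_variant()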
JeffJo June 29, 2023 at 13:01 #818748
Quoting Michael
We have two different experiments:

1. A is woken once if heads, twice if tails
2. A is woken once if heads, A and B once each if tails

Your version of the experiment is comparable to the second experiment, not the first.


Um, no.

In "my experiment" I will literally and explicitly wake the single subject once if coin C1 lands on Heads, and twice if it lands on Tails. And there literally and explicitly is no second subject. So it is an exact implementation of 1, not 2.

What you are doing here, is confusing the change in the details that determine how this single subject will be awakened, with her being a different subject. You name the different subjects "subject A" and "subject B." Well, the same thing can be done with Elga's ("the most frequent") implementation:

  • 3. Wake subject C once on Monday if Heads, or wake subjects C and D once each (on Monday and then Tuesday, respectively) if tails.


And the reason that neither 2, nor 3, fits the intended problem is that the subject has to know she will be wakened, but not remember whether it happens one or two times. So that she is the same person, but does not know if she is in "world" (I don't like this word here, but it is how philosophers approach the problem) A, B, C or D.

AGAIN:

The controversy created by Elga's ("the most frequent") implementation, of the same problem that I implement, is this:
  • The incarnation of the single subject does not know if she is in world C or D.
  • This prevents her from defining a sample space that applies to just her world.
  • So she has to combine them into a single world
  • Halfers do it by saying they are the same world as the world that contains both, so the single probability space that applies to the combination applies to each individually.
  • Thirders do it by saying they are separate worlds, requiring a joint probability space and an assessment of which world she might be in.


And the way my implementation solves this is by creating a simple probability space that applies to just world A, or to just world B, so it no longer matters which world the subject is in.
Michael June 29, 2023 at 13:06 #818751
Quoting JeffJo
In "my experiment" I will literally and explicitly wake the single subject once if coin C1 lands on Heads, and twice if it lands on Tails. And there literally and explicitly is no second subject. So it is an exact implementation of 1, not 2.


You toss two coins and don’t ask them their credence if both land heads. That’s what makes your experiment equivalent to my second example where B isn’t asked if heads.

To provide a proper analogy to Sleeping Beauty you must have it that the participant is guaranteed to be asked her credence at least once. That fact is why the answer is 1/2 and not 1/3.

The answer is only 1/3 when there’s a 1/4 prior probability of never being asked your credence.
Pierre-Normand June 29, 2023 at 13:15 #818753
Quoting Srap Tasmaner
But haven't you lost Sleeping Beauty's other constraint, that the chances of encountering one Italian or two Tunisians are equal?


In my original cosmopolitan analogy, the equal Italian and Tunisian populations mirror the even likelihood of the coin landing on either side in the Sleeping Beauty problem. What makes it more likely to encounter a Tunisian—despite the equal population—is that Tunisians go for walks twice as often on average, increasing the odds of an encounter. This mirrors how Sleeping Beauty is woken up twice when the coin lands tails.

To fine-tune the analogy and preserve the feature of the Sleeping Beauty problem you've pointed out, we can assume that initially, you're equally likely to encounter an Italian or a Tunisian—perhaps because Tunisians walk in hidden pairs. When you meet a member of a Tunisian pair for the first time, their sibling ensures they are the next one you meet. Thus, when you have met an Italian, or a Tunisian for the second time in a row, your next encounter is equally likely to be with an Italian or a Tunisian, analogous to the Sleeping Beauty problem where a new coin toss (and a new Monday awakening) occurs after each heads or second tails awakening. Despite this, two-thirds of your encounters are with Tunisians, so the odds that any given encounter is with a Tunisian remain 2/3. (We can assume that the experiment begins with a small number of random "dummy" encounters to ensure that you lose track of the first "experimental" encounter.)
Pierre-Normand June 29, 2023 at 14:06 #818758
Quoting Michael
That doesn't mean that the credence isn’t transitive. My premises "fail" to account for it because it's irrelevant.

A iff B
P(B) = 1/2
Therefore, P(A) = 1/2


While this kind of inference is often valid, it doesn't apply in the Sleeping Beauty problem.

Credences, or probabilities, can be thought of as ratios. My credence at a given time for an outcome O reflects the proportion of cases where O occurs in a similar situation S. In other words, it is the "ideal" or "long run" ratio O/S. For instance, my credence that a randomly drawn card from a shuffled deck is a spade, P(Spade) = 1/4, reflects my belief that the ratio of spade outcomes to card-drawing situations is 1/4.

The general validity of the inference you propose is based on the assumption that the mapping between O and S is constant. However, this mapping is contentious in the Sleeping Beauty problem, with Halfers and Thirders disagreeing, resulting in conflicting interpretations of P(Heads).

As long as Halfers and Thirders stick to their own definitions, this isn't a problem—though it can lead to miscommunication. Being aware of these divergent definitions also helps avoid invalid inferences.

Let's take A as Sleeping Beauty being in a H-awakening episode and B as her being in a H-run. While A iff B holds true, note that:

P(B) = 1/2 = B/O, where O represents a representative set of experimental runs.

P(A) = 1/3 = A/O', where O' represents a representative set of awakening episodes.

Equating P(B) and P(A) and inferring one from the other can only be valid if the reference class is the same in both ratios (that is, if O = O')—in other words, if the mapping from potential outcomes to potential situations doesn't change.
JeffJo June 29, 2023 at 14:06 #818759
Quoting Michael
You toss two coins and don’t ask them their credence if both land heads. That’s what makes your experiment equivalent to my second example where B isn’t asked if heads.


I do not ask anybody (for) their credence if both coins landed on Heads. I don't ask anybody about coin C2 at all, although it has to be taken into account.

I ask for a credence in whether coin C1 is currently showing Heads. And the way the problem is set up, that is always the same event where coin C1 landed on Heads. Maybe you do not realize that coin C1 is the coin in the problem? And coin C2 is not?

C2 is just the coin that controls ordering. Since amnesia makes ordering irrelevant to the single subject, what coin C2 is showing cannot affect the answer.

Quoting Michael
That’s what makes your experiment equivalent to my second example where B isn’t asked if heads.

There is no B anywhere, as far as I can tell. You don't seem to want to explain the important details, like whether B is a person, a person in a different situation, or (as it seems here) if B is an event that is not a part of the experiment.

AGAIN: I use a single subject. That subject is always wakened, but may be wakened twice. That subject is always asked "if heads" about the only coin that the original problem is concerned with. (It's even possible we could pose the question while she sleeps, but we shouldn't expect an answer.) When she can answer, she knows that there are three equally likely combinations for what the two coins are showing, and in only one is coin C1 showing Heads.

This is not difficult. But you do need to stop trying to contrive a situation where it is wrong. So far, you have not even described a situation that applies.
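For readers who want to see the two-coin setup concretely, here is a rough simulation. One detail is filled in as an assumption, since it is not spelled out in this exchange: coin C2 is simply turned over between the first and second potential waking steps, which is one way for it to "control ordering" while waking the subject once when C1 lands Heads and twice when C1 lands Tails. Under that reading, the coins show HT, TH, and TT about equally often at a waking, and C1 shows Heads at about one third of wakings. The function name and run count are illustrative.

import random
from collections import Counter

def two_coin_protocol(runs=300_000):
    combos = Counter()
    wakings = heads_wakings = 0
    for _ in range(runs):
        c1, c2 = random.choice("HT"), random.choice("HT")
        for _step in range(2):
            if not (c1 == "H" and c2 == "H"):   # wake and ask unless the coins show HH
                wakings += 1
                heads_wakings += c1 == "H"
                combos[c1 + c2] += 1
            c2 = "T" if c2 == "H" else "H"      # assumed rule: turn C2 over between steps
    print("share of wakings with C1 = Heads:", heads_wakings / wakings)           # ~1/3
    print({combo: round(n / wakings, 3) for combo, n in sorted(combos.items())})  # HT, TH, TT each ~1/3

two_coin_protocol()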

Michael June 29, 2023 at 14:13 #818761
Quoting JeffJo
I do not ask anybody (for) their credence if both coins landed on Heads.


Exactly. It is precisely because the prior probability of being asked at least once is 3/4 that the probability that the first coin landed heads is 1/3.

If the prior probability of being asked at least once is 1 then the probability that the first coin landed heads is 1/2.

This is why your example isn’t comparable to the traditional problem and is comparable to my second example where the prior probability of being asked at least once is 3/4.
Michael June 29, 2023 at 14:18 #818762
Quoting Pierre-Normand
My credence at a given time for an outcome O reflects the proportion of cases where O occurs in a similar situation S.


I know that it is. I'm trying to show that it shouldn't be. Such reasoning is only correct where the situation is such that your outcome is randomly selected from the set of all outcomes, which isn't the situation with Sleeping Beauty. First her interview set is randomly selected from the set of all interview sets and then her interview is randomly selected from the set of all interviews in her set. This is how the experiment is actually conducted.

It just doesn't make sense to say that A iff B but that P(A) != P(B). And Bayes' theorem shows that P(A) = P(B).
JeffJo June 29, 2023 at 14:23 #818764
Quoting Michael
I do not ask anybody (for) their credence if both coins landed on Heads. — JeffJo

Exactly. It is precisely because the prior probability of being asked at least once is 3/4 that the probability that the first coin landed heads is 1/3.


The original problem is about one coin, not two. Asking about two would make it a different problem. Asking about one is what makes it the same problem.

But yes, it is indeed true that the prior probability of 3/4 is what makes the answer 1/3. You identified the wrong event for that prior probability (she is always asked), but it is the fact that this same prior probability applies to any waking, and not different prior probabilities depending on whether the subject is wakened on Monday or Tuesday, that makes it usable in a valid solution.

Thank you for stating, in your own words, why this is so.

Michael June 29, 2023 at 14:26 #818767
Quoting JeffJo
The original problem is about one coin, not two. Asking about two would make it a different problem. Asking about one is what makes it the same problem.

But yes, it is indeed true that the prior probability of 3/4 is what makes the answer 1/3. But it is the fact that this same prior probability applies to any waking, and not different prior probabilities depending on whether the subject is wakened on Monday or Tuesday, that makes it usable in a valid solution.

Thank you for stating, in your own words, why this is so.


In the original problem the prior probability of being asked one's credence at least once is 1 and the prior probability of being asked one's credence at least once if heads is 1, which is why the answer is 1/2 and why your example isn't comparable.

In your example and my second example the prior probability of being asked one's credence at least once is 3/4 and the prior probability of being asked one's credence at least once if heads is 1/2, which is why the answer is 1/3.
JeffJo June 29, 2023 at 14:52 #818771
Quoting Michael
In the original problem the prior probability of being asked one's credence at least once is 1 and the prior probability of being asked one's credence at least once if heads is 1, which is why the answer is 1/2 and why your example isn't comparable.

So now it isn't that I never asked about two coins ("You toss two coins and don’t ask them their credence if both land heads. That’s what makes your experiment equivalent to my second example where B isn’t asked if heads.")?

Do I need to explain "my version" to you again? You now say it is different because:

  1. In the original problem the prior probability of being asked one's credence at least once is 1 ...
    • In mine, the prior probability of being asked one's credence (that coin C1 is showing Heads) at least once is also 1.
  2. ... and the prior probability of being asked one's credence at least once if heads is 1 ...
    • In mine, the prior (?) probability of being asked one's credence at least once if Heads is also 1.
    • But this doesn't appear to be a prior probability. That seems to be what "if Heads" means.

So you are still describing how it is the same problem, while claiming that it is different.

... which is why the answer is 1/2 and why your example isn't comparable.

Non sequitur.

Please, show me how
  • The existence of two credence=1 events...
  • ... which, while the facts that they are certain are valid, do exist as an issue in the problem,

... produces the conclusion "the answer is 1/2."

Do you want to try again? But do try to realize that probabilities used within a solution cannot affect whether the problems are the same. They can only affect the solutions, and credence=1 events do not do that.
    Michael June 29, 2023 at 15:01 #818772
    Reply to JeffJo

    These are two different problems:

    1. A is woken once if heads, twice if tails
    2. A is woken once if heads, A and B once each if tails

    In the first problem the prior probability of being asked one's credence at least once is 1 and the prior probability of being asked one's credence at least once if heads is 1. The answer to the problem is 1/2.

    In the second problem the prior probability of being asked one's credence at least once is 3/4 and the prior probability of being asked one's credence at least once if heads is 1/2. The answer to the problem is 1/3.

    In your problem two coins are tossed, and only if at least one coin is tails am I asked my credence that the first coin is heads. The prior probability of being asked one's credence at least once is 3/4 and the prior probability of being asked one's credence at least once if heads is 1/2. The answer to the problem is 1/3.

    The first problem is the Sleeping Beauty problem. Your problem isn't analogous. Your problem is analogous to the second problem which gives a different answer.
    Pierre-Normand June 29, 2023 at 15:07 #818774
    Quoting Michael
    It just doesn't make sense to say that A iff B but that P(A) != P(B). And Bayes' theorem shows that P(A) = P(B).


    I have indeed conceded that the inference is valid (as are the applications of Bayes' theorem predicated on it) as long as we avoid equivocating the meaning of P(). To prevent this equivocation, we must be mindful of the specific ratio implicitly referred to when we discuss Sleeping Beauty's credence P(H). It's important to ensure that, when you lay out your demonstrations, you do not switch between two inconsistent definitions of P(), even within the same formula.

    Consider again the pragmatic dice scenario where Sleeping Beauty is awakened six times in the East Wing if the die lands on 'six', and awakened once in the West Wing otherwise. It's rational for her to instruct her Aunt Betsy to wait for her at the West Wing exit, because once the experimental run concludes, the odds of her exiting there are P(not-'six') = 5/6. This also implies that P(not-'six'-awakening) is 5/6, if we understand it to mean that in five out of six potential runs of awakenings she awakens into, she finds herself in not-'six' runs (regardless of the number of times she awakens in that run). However, what does not logically follow is that P'(not-'six') = 5/6, if we interpret this to mean that in five out of six potential awakening episodes, she finds herself in not-'six' episodes. The relevant ratio in this context is P'(not-'six') = 5/11 (and P'('six') = 6/11). These ratios are also the relevant ones for her to predict from which wing she would most likely exit if she had a chance to escape during any given awakening episode.
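Spelling out the counts behind those two ratios (a sketch; the function name and run count are illustrative): about 5/6 of completed runs end at the West Wing, but only about 5 of every 11 awakening episodes are West Wing episodes.

import random

def wing_counts(runs=120_000):
    # Die lands 'six' (probability 1/6): six awakenings in the East Wing.
    # Any other face (probability 5/6): one awakening in the West Wing.
    west_runs = west_episodes = episodes = 0
    for _ in range(runs):
        six = random.randint(1, 6) == 6
        n_awakenings = 6 if six else 1
        episodes += n_awakenings
        if not six:
            west_runs += 1
            west_episodes += 1
    print("West Wing runs / all runs:", west_runs / runs)                  # ~5/6
    print("West Wing episodes / all episodes:", west_episodes / episodes)  # ~5/11

wing_counts()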
    JeffJo June 29, 2023 at 15:15 #818775
    Quoting Michael
    These are two different problems:

    1. A is woken once if heads, twice if tails
    2. A is woken once if heads, A and B once each if tails

    Yes they are.

    #1 has one subject, and #2 has two.

    In #1, the only subject knows she will be wakened, just not how many times. Her credence in Heads is what we are asked about. Since both Heads and Tails are possible when she is awake, 0 < Pr(Heads|Awake) < 1.
    In #2, A knows she will be wakened and that the coin is irrelevant. So Pr(Heads|Awake)=Pr(Heads)=1/2. B knows that she will only be wakened if Tails. So Pr(Tails|Awake)=1.

    In my version, there is one subject who knows she will be wakened, just not how many times. Please, without referring to how you would solve the problem once it is established, how is this different from #1? And how does the presence of B in #2 make it like mine?
    Michael June 29, 2023 at 15:16 #818776
    Quoting JeffJo
    In #2, A knows she will be wakened and that the coin is irrelevant. So Pr(Heads|Awake)=Pr(Heads)=1/2. B knows that she will only be wakened if Tails. So Pr(Tails|Awake)=1.


    Neither participant knows if they are A or B.

    Quoting JeffJo
    In my version, there is one subject who knows she will be wakened, just not how many times.


    She doesn't know that. If both coins land heads then she's not asked her credence.
    Michael June 29, 2023 at 15:24 #818778
    Quoting Pierre-Normand
    However, what does not logically follow is that P'(not-'six') = 5/6, if we interpret this to mean that in five out of six potential awakening episodes, she finds herself in not-'six' episodes. The relevant ratio in this context is P'(not-'six') = 5/11 (and P'('six') = 6/11).


    This has nothing to do with credence.

    I am asked to place two bets on a single future coin toss. If the coin lands heads then only the first bet is counted. What is it rational to do? Obviously to bet on tails, even though my credence isn't that tails is more likely. The same principle holds in the Sleeping Beauty experiment where I'm put to sleep and woken up either once or twice depending on a coin toss.

    In both cases the fact that there are twice as many T-wins as H-wins has nothing to do with the likelihood of the coin having landed tails. There are twice as many T-wins as H-wins because the experimental setup says that if it's heads then you get one bet and if it's tails then you get two bets, and heads and tails are equally likely.

    That in many cases there are twice as many A-outcomes as B-outcomes because A is twice as likely isn't that in every case if there are twice as many A-outcomes as B-outcomes then A is twice as likely. There are plenty of other factors that can contribute to an outcome ratio that doesn't reflect the likelihood of each outcome. The Sleeping Beauty problem is one such example.
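The expected-value arithmetic behind "bet tails even though the credence is 1/2" is short enough to write out (a sketch assuming even-money bets with a stake of 1 per counted bet; the variable names are illustrative):

# If heads, only the first bet counts; if tails, both bets count.
p_heads = 0.5
ev_always_tails = p_heads * (-1) + (1 - p_heads) * (+2)  # lose 1 on heads, win 2 on tails
ev_always_heads = p_heads * (+1) + (1 - p_heads) * (-2)  # win 1 on heads, lose 2 on tails
print(ev_always_tails, ev_always_heads)  # +0.5 and -0.5: tails is the better bet at even odds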

    Quoting Pierre-Normand
    I have indeed conceded that the inference is valid (as are the applications of Bayes' theorem predicated on it) as long as we avoid equivocating the meaning of P().


    I think there's only one meaning of P(), and it is such that P(HInterview) = P(HRun) = 1/2.

    It is a mistake to reason that P(X) should reflect the ratio of X to not-X, as explained above.
    Michael June 29, 2023 at 15:37 #818780
    Quoting Pierre-Normand
    These ratios are also the relevant ones for her to predict from which wing she would most likely exit if she had a chance to escape during any given awakening episode.


    If on each day the chance to escape is 1/2 then the prior probability of being given at least one chance to escape if the dice rolls 1-5 is 1/2 and if the dice rolls 6 is 63/64. Given the prior probability that I'm more likely to be given at least one chance to escape if the dice rolls 6 it is reasonable to infer that if I am given a chance to escape then the dice most likely rolled 6.

    This is the same reasoning used here to infer that one's credence in heads should be greater in experiment 1 than in experiment 2 and used here to infer that one's credence should favour heads.
    JeffJo June 29, 2023 at 16:03 #818785
    Quoting Michael
    Neither participant knows if they are A or B.

    I'm going to ignore the fact that neither A nor B is woken twice, so this isn't the SB problem. What you seem to mean is that the subject is woken once as A if Heads, and once each as A and as B if tails.

    Then you have created a third problem that is the equivalent of #1. Literally, it just adds a certain detail about names that is significant in #2 when they apply to different people, but makes it identical to #1 when it is the same person that can have either name. And an event it uses to choose the name - Heads or Tails - is already mentioned in the problems:

    1. A is woken once if Heads, twice if Tails.
    2. A is woken once if Heads, A and B once each if Tails.
    3. SB is woken once in room 100A if Heads, or once each in rooms 100A and 100B if Tails.


    My version is indeed equivalent to both #1 and #3. It is not equivalent to #2 if A and B are different people, which was not given in its definition. In the special case where that is specified, all three are the same problem.
    Michael June 29, 2023 at 16:07 #818786
    Reply to JeffJo

    What matters is that in the Sleeping Beauty problem the prior probability of being asked one's credence at least once is 1 and the prior probability of being asked one's credence at least once if heads is 1, and in your problem the prior probability of being asked one's credence at least once is 3/4 and the prior probability of being asked one's credence at least once if the first coin is heads is 1/2.

    Because of this your problem is not equivalent. The answer to your problem is 1/3 only because of those prior probabilities. The Sleeping Beauty problem has different prior probabilities and so a different answer of 1/2.
    Pierre-Normand June 29, 2023 at 16:10 #818788
    Quoting Michael
    This has nothing to do with credence.

    I am asked to place two bets on a single coin toss. If the coin lands heads then only the first bet is counted. What is it rational to do? Obviously to bet on tails. Even though my credence isn't that tails is more likely. The same principle holds in the Sleeping Beauty experiment where I'm put to sleep and woken up either once or twice depending on a coin toss. That it's rational to bet on tails isn't that my credence is that it's most likely tails; it's that I know that if it is tails I get to bet twice.

    The same principle holds with the dice roll and the escape attempts.


    I find it unusual that you maintain that when faced with a potential outcome O in a situation S, your credence P(O) should only reflect the intrinsic propensity of an object to generate O, disregarding how O affects the likelihood of you being in this situation.

    Consider a scenario where fireflies are equally likely to have gene XYZ, which makes them brighter and thus twice as likely to be seen from the same distance. If you happen to be in proximity to such a firefly, it is twice as likely to catch your attention when it has the XYZ gene. Therefore, from a population where half of the fireflies have this gene, you witness twice as many flashes from the ones carrying XYZ. According to your logic, your credence about any given firefly flash should remain P(XYZ) = 1/2 (because the firefly generating it had a 50% chance of inheriting this gene), despite the fact that you would have twice as many betting opportunities on fireflies with the XYZ gene. You seem to consider this increase in betting opportunities irrelevant to your credence P(XYZ), even though your encounters with such fireflies are twice as frequent.

    This line of reasoning appears to be an ad hoc restriction on the common understanding of credence, primarily designed to disqualify the Thirder interpretation of the Sleeping Beauty problem from the outset. This restriction seems to have limited applicability outside of this specific problem. In most cases, we focus more on the overall frequency of the outcomes in proportion to the relevantly similar situations, rather than on the intrinsic propensities of the objects generating these outcomes.
    Michael June 29, 2023 at 16:14 #818789
    Quoting Pierre-Normand
    I find it unusual that you maintain that when faced with a potential outcome O in a situation S, your credence P(O) should only reflect the intrinsic propensity of an object to generate O, disregarding how O affects the likelihood of you being in this situation.


    That's not what I said.

    In the Sleeping Beauty problem I am guaranteed to wake up at least once if tails and guaranteed to wake up at least once if heads. The coin toss does not determine the likelihood of me waking up. It only determines the number of times I'm woken up. But this frequency is irrelevant given the guarantee.

    Only if there is no guarantee is the frequency relevant. Again, see this and this.
    JeffJo June 29, 2023 at 16:15 #818790
    Quoting Michael
    What matters is that in the Sleeping Beauty problem the prior probability of being asked one's credence at least once is 1 and the prior probability of being asked one's credence at least once if heads is 1


    What has no significance to the SB problem, is what might be different if event X, or event Y, happens WHEN BOTH ARE CERTAIN TO HAPPEN.

    • X = The prior probability of being asked one's credence at least once, equal to 1.
    • Y = The prior probability of being asked one's credence at least once if heads, also equal to 1.


    Now, you are either a troll, very confused, or expressing yourself poorly. But if you won't consider that I might be right (as I keep assuming about you, until I actually find something wrong) there is no way to discuss anything with you.
    Michael June 29, 2023 at 16:16 #818791
    Reply to JeffJo In your example being asked your credence isn't certain. In Sleeping Beauty's it is. That's why your example isn't equivalent.

    Pierre-Normand also tried to explain this to you here.
    JeffJo June 29, 2023 at 16:30 #818793
    Probability=1 means "is certain." You said the probability of being asked was 1. That means certain. The subject in my implementation is always asked.

    "Prior probability" means "prior to information being given," so "prior probability...if Heads" is vacuous.

    I have no idea what you mean by "my example." My implementation is an exact version of your problem #1. I notice that you don't claim otherwise, you just try to say that #2 is different and I implemented #2. #3 is closer, and it is equivalent to #1.

    My explanation of how your additional constraints turned #2 into #3 is trivially true. As is how #3 is equivalent to #1.

    What credence is asked for does not affect, in any way, how the events themselves might occur. You are arguing with non sequiturs.

    All you have identified is how my solution differs from yours, not how my implementation of the problem differs from the actual problem. This is because it is trivially obvious that it does not. But you can't attack my solution, which is also trivially correct, so you attack the implementation.
    Pierre-Normand June 29, 2023 at 16:32 #818794
    Quoting Michael
    That's not what I said.

    In the Sleeping Beauty problem I am guaranteed to wake up at least once if tails and guaranteed to wake up at least once if heads. The coin toss does not determine the likelihood of me waking up. It only determines the number of times I'm woken up. But the frequency is irrelevant. The only thing that matters is the guarantee.


    When Sleeping Beauty awakens, she could potentially be experiencing either a guaranteed awakening (i.e. T-Monday or H-Monday) or an optional awakening (i.e. T-Tuesday). Since she cannot definitively rule out the possibility of experiencing an optional awakening, this uncertainty should affect her credence P(T), as P(T) = P(T-Monday) + P(T-Tuesday), and P(T-Monday) is always equal to P(H-Monday) regardless of the value of P(T-Tuesday). Therefore, P(T) should be greater than 1/2 whenever Sleeping Beauty cannot conclusively rule out the possibility of it being T-Tuesday.
    Michael June 29, 2023 at 16:37 #818797
    Reply to Pierre-Normand

    If we apply your reasoning to the example here then we conclude that P(Heads|Awake) = 1/2, which I think is wrong.

    I'm less likely to wake if tails and so if I do wake it's less likely to be tails, and so P(Heads|Awake) > P(Tails|Awake).

    This is satisfied only if we use halfer reasoning and conclude that P(Heads|Awake) = 4/7.
    Pierre-Normand June 29, 2023 at 16:47 #818799
    @Michael

    Let me adjust my previous firefly case to meet your objection.

    We can assume that half of the fireflies have gene XYZ, which causes them to flash twice every five minutes. The other half, lacking gene XYZ, flash once every five minutes.

    A witness can see every flash and thus is guaranteed to see the first flash of every firefly. The second flash, however, is optional as it depends on the firefly having gene XYZ. This mimics the guaranteed and optional awakenings in the Sleeping Beauty problem.

    When the witness sees a flash, they know it could either be a first flash (which is guaranteed from every firefly) or a second flash (which is optional and only comes from the fireflies with gene XYZ).

    Just like in the Sleeping Beauty problem, every flash is an 'awakening' for the witness. The presence of gene XYZ is akin to a coin landing tails (T), leading to an optional second flash (analogous to the T-Tuesday awakening).

    Upon witnessing a flash, the observer's credence that they're seeing a firefly with gene XYZ should be more than 1/2, as the witness cannot conclusively rule out that it's a second, optional flash. This aligns with the reasoning that P(T) > 1/2 for Sleeping Beauty when she cannot rule out the possibility of T-Tuesday.

    This analogy illustrates how an increased frequency of a particular event (the witnessing of a second flash, or T-Tuesday) can impact overall credence.
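A minimal simulation of this flash-counting version, assuming a large population in which exactly half the fireflies carry XYZ and the witness counts every flash:

```python
import random

# Sketch of the second firefly case: XYZ fireflies flash twice per interval,
# non-XYZ fireflies flash once, and the witness sees every flash.
random.seed(0)
total_flashes = 0
xyz_flashes = 0
for _ in range(100_000):
    has_xyz = random.random() < 0.5   # per-firefly prior P(XYZ) = 1/2
    flashes = 2 if has_xyz else 1     # the second flash is the "optional" one
    total_flashes += flashes
    if has_xyz:
        xyz_flashes += flashes

print(xyz_flashes / total_flashes)  # ~2/3: per-flash frequency of XYZ
```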
    Michael June 29, 2023 at 16:54 #818800
    Reply to Pierre-Normand In your scenario there are a bunch of flashes going off in a forest and me, a passer-by, randomly sees one of them. This is comparable to a sitter being assigned a room.

    But this isn't how things work for Sleeping Beauty. It's not the case that her interview is randomly selected from the set of all interviews. It's the case that her interview set is randomly selected from the set of all interview sets. That's just how the experiment works and so is how she should reason.
    Pierre-Normand June 29, 2023 at 17:13 #818805
    Quoting Michael
    In your scenario there are a bunch of flashes going off in a forest and me, a passer-by, randomly sees one of them. This is comparable to a sitter being assigned a room.


    The passer-by sees all of the flashes and does not know the genetic status of the fireflies producing them. This is analogous to Sleeping Beauty experiencing all of her awakenings but not knowing if they're unique (generated by a coin having landed heads) or one of a series of two (generated by a coin having landed tails). The witness's credence P(XYZ) = 2/3 applies to all of the flashes they are witnessing, just like Sleeping Beauty's credence P(T) = 2/3 applies to all the awakenings she is experiencing, not merely to random samplings of them.

    What is true of a random sampling of these awakening episodes (or flash sightings), due to the fact that the sampling would represent the relevant frequencies, is even more applicable to the total population of awakening episodes. However, in the latter case, no additional sampling method (nor the presence of a randomly assigned sitter) is required.
    Michael June 29, 2023 at 17:22 #818808
    Quoting Pierre-Normand
    The passer-by sees all of the flashes and does not know the genetic status of the fireflies producing them. This is analogous to Sleeping Beauty experiencing all of her awakenings but not knowing if they're unique (generated by a coin having landed heads) or one of a series of two (generated by a coin having landed tails).


    The passer-by can see a mix of single and double-flashing fireflies. Sleeping Beauty can't. She either sees one firefly flash once or she sees one firefly flash twice (forgetting if she's seen the first flash).

    All you've imagined here is that we collect several iterations of the Sleeping Beauty experiment, mix up all the interviews, and then have Sleeping Beauty "revisit" them at random. That's not at all equivalent. This is equivalent to a sitter being assigned a room and I already accepted that for the sitter the answer is 1/3.
    Michael June 29, 2023 at 17:46 #818811
    Quoting JeffJo
    The subject in my implementation is always asked.


    No they're not. I'll quote you:

    1. Two coins will be arranged randomly out of your sight. By this I mean that the faces showing on (C1,C2) are equally likely to be any of these four combinations: HH, HT, TH, and TT.

    2. Once the combination is set, A light will be turned on.

    3. At the same time, a computer will examine the coins to determine if both are showing Heads. If so, it releases a sleep gas into the room that will render you unconscious within 10 seconds, wiping your memory of the past hour. Your sleeping body will be moved to a recovery room where you will be wakened and given further details as explained below.

    4. But if either coin is showing tails, a lab assistant will come into the room and ask you a probability question. After answering it, the same gas will be released, your sleeping body will be moved the same way, and you will be given the same "further details."


    The lab assistant only asks for my credence if the coin combination isn't HH.

    If you take away this condition and so I am always asked my credence then the answer is 1/2.
    JeffJo June 29, 2023 at 18:15 #818816
    Quoting Michael
    The lab assistant only asks your credence if the coin combination isn't HH.


    And in the original, on Tuesday after Heads, you are also not asked for a credence. The only difference is that in this subset of my implementation, you started out awake but forget the occurrence. Oh, yeah, it had only one possible waking, so it was not the full SB problem.

    The point of extracting this from the full procedure was (A) the probability can be calculated based on the state of the current pass, and (B) only asking a question is important, not waking or sleeping.

    My implementation of the SB problem, the one I have been describing, is:
    Quoting JeffJo
    But the difference Elga introduced was unnecessary. So don't do it; do this instead:

    1. Tell SB all the details listed here.
    2. Put SB to sleep.
    3. Flip two coins. Call them C1 and C2.
    4. Procedure start:
    5. If both coins are showing Heads, skip to Procedure End.
    6. Wake SB.
    7. Ask SB "to what degree do you believe that coin C1 is currently showing Heads?"
    8. After she answers, put her back to sleep with amnesia.
    9. Procedure End.
    10. Turn coin C2 over, to show its opposite side.
    11. Repeat the procedure.
    12. Wake SB to end the experiment.


    When SB is awake, she knows that she is in the middle of the procedure listed in steps 4 thru 9. Regardless of which pass thru these steps it is, she knows that in step 5 of this pass, there were four equally-likely combinations for what (C1,C2) were showing: {(H,H),(H,T),(T,H),(T,T)}. This is the "prior" sample space.


    What you were looking at was one pass thru step 4 to 9.
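A minimal simulation of the two-coin procedure numbered above, scoring the fraction of interviews at which C1 is actually showing Heads (the scoring rule, not the procedure, is the assumption here):

```python
import random

# Sketch of the two-coin procedure: flip C1 and C2, run the pass, turn C2
# over, run the pass again. An interview happens in a pass unless both coins
# show Heads at the start of that pass (step 5).
random.seed(0)
interviews = 0
interviews_c1_heads = 0
for _ in range(100_000):
    c1 = random.choice("HT")
    c2 = random.choice("HT")
    for _pass in range(2):
        if not (c1 == "H" and c2 == "H"):     # step 5: HH skips the interview
            interviews += 1
            interviews_c1_heads += (c1 == "H")
        c2 = "T" if c2 == "H" else "H"        # step 10: turn C2 over

print(interviews_c1_heads / interviews)  # ~1/3
```

Per four equally likely initial arrangements there are six interviews, two of them with C1 showing Heads, which is where the 1/3 comes from.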
    Michael June 29, 2023 at 18:19 #818817
    Quoting JeffJo
    And in the original, on Tuesday after Heads, you are also not asked for a credence.


    What matters is the probability that you will be asked for your credence at least once during the experiment.

    In the normal problem you are certain to be asked at least once during the experiment.

    In your problem you are not certain to be asked at least once during the experiment.

    Your problem isn’t equivalent and so the answer to your problem is irrelevant.
    JeffJo June 29, 2023 at 18:32 #818821
    Quoting Michael
    What matters is the probability that you will be asked for your credence at least once during the experiment.
    What matters is not the question, but that you are awake for an interview. And in my implementation, you are. After careful double-checking I found that you not only linked to the wrong posting, you ignored the part that said:
    Quoting JeffJo
    Now, let the "further details" be that, if this is the first pass thru experiment, the exact same procedure will be repeated. Otherwise, the experiment is ended. Whether or not you were asked the question once before is irrelevant, since you have no memory of it. The arrangement of the two coins can be correlated to the arrangement in the first pass, or not, for the same reason.

    I did forget to say that coin C2 is turned over, but that was said before. What I outlined in 3 posts above is identical to the SB problem. What I said in that post you dissected applies to one pass only, and the intent was to have two passes where, if there was no question in the first, there would be in the second.
    Michael June 29, 2023 at 18:39 #818822
    Quoting JeffJo
    What I said in that post you dissected applies to one pass only, and the intent was to have two passes where, if there was no question in the first, there would be in the second.


    So if the coin combination is HH then the participant will be asked their credence during the second pass? If so then you are wrong when you said "she also knows that the fact that she is awake eliminates (H,H) as a possibility."
    JeffJo June 29, 2023 at 22:10 #818867
    Quoting Michael
    So if the coin combination is HH then the participant will be asked their credence during the second pass? If so then you are wrong when you said "she also knows that the fact that she is awake eliminates (H,H) as a possibility."


    No. Please pay attention to the highlighted texts.

    Quoting JeffJo
    My implementation of the SB problem, the one I have been describing, is:

    But the difference Elga introduced was unnecessary. So don't do it; do this instead:

    1. Tell SB all the details listed here.
    2. Put SB to sleep.
    3. Flip two coins. Call them C1 and C2.
    4. Procedure start:
    5. If both coins are showing Heads, skip to Procedure End.
    6. Wake SB.
    7. Ask SB "to what degree do you believe that coin C1 is currently showing Heads?"
    8. After she answers, put her back to sleep with amnesia.
    9. Procedure End.
    10. Turn coin C2 over, to show its opposite side.
    11. Repeat the procedure.
    12. Wake SB to end the experiment.


    Srap Tasmaner June 29, 2023 at 22:51 #818879
    Quoting Pierre-Normand
    perhaps because Tunisians walk in hidden pairs. When you meet a member of a Tunisian pair for the first time, their sibling ensures they are the next one you meet.


    I'm gonna come back to this, but I just want to point out that you're now describing a pickpocketing team, a stick and a cannon.

    I foresee fun new variations of Sleeping Beauty.
    Michael June 29, 2023 at 23:16 #818886
    Quoting JeffJo
    Turn coin C2 over, to show its opposite side


    OK, I understand your argument now, and it's what I said to you before: in your experiment the prior probability P(HH) = 1/4 is ruled out when woken; in the normal experiment where I am woken once if heads and twice if tails there is no prior probability P(X) = 1/4 that is ruled out when woken.

    To make this fact clearer the experiment will be conducted in the simplest possible form:

    1. Sleeping Beauty is given amnesia and asked her credence that the coin will or did land heads
    2. The coin is tossed
    3. If the coin lands heads then she is sent home
    4. If the coin lands tails then she is given amnesia, asked her credence that the coin will or did land heads, and sent home
    Michael June 30, 2023 at 01:37 #818912
    Quoting Pierre-Normand
    The conclusion doesn't follow because, while the biconditional expressed in P3 is true, this biconditional does not guarantee a one-to-one correspondence between the set of T-interviews and the set of T-runs (or "T-interview sets"). Instead, the correspondence is two-to-one, as each T-run includes two T-interviews.


    There is a one-to-one correspondence between the set of H-interviews and the set of H-runs.

    H-interview iff H-run
    P(H-run) = 1/2
    Therefore, P(H-interview) = 1/2

    Quoting Pierre-Normand
    Credences … can be thought of as ratios


    One’s credence is the degree to which one believes a thing to be true. Often one’s credence is determined by the ratios but this is not necessary, as shown in this case.

    If one believes that A iff B and if one is 50% sure that A is true then one is 50% sure that B is true. That just has to follow.

    So:

    P1. If I have been assigned at random either one heads interview or two tails interviews then the probability that I have been assigned one heads interview is [math]1\over2[/math]
    P2. I have been assigned at random either one heads interview or two tails interviews
    C1. Therefore my credence that I have been assigned one heads interview is [math]1\over2[/math]
    P3. My current interview is a heads interview iff I have been assigned one heads interview
    C2. Therefore my credence that my current interview is a heads interview is [math]1\over2[/math]

    You would have to argue:

    P4. If I have been assigned at random a heads interview, a first tails interview, or a second tails interview then the probability that I have been assigned a heads interview is [math]1\over3[/math]
    P5. I have been assigned at random a heads interview, a first tails interview, or a second tails interview
    C3. Therefore my credence that my current interview is a heads interview is [math]1\over3[/math]

    But P5 is false.

    I think you have chosen the wrong reference class. Given the manner in which the experiment is conducted, as shown by this Venn diagram, the reference class of all interviews is of no use to Sleeping Beauty. With respect to individual interviews the only relevant reference classes are "heads", "tails", "first", and "second".
    Srap Tasmaner June 30, 2023 at 04:00 #818932
    Quoting Michael
    Credences … can be thought of as ratios — Pierre-Normand


    They shouldn’t. One’s credence is the degree to which one believes a thing to be true. Often one’s credence is determined by the ratios but this is not necessary, as shown in this case.

    If A iff B and if one is 50% sure that A is true then one is 50% sure that B is true. That just has to follow.


    You know 50% is a ratio, right?
    Michael June 30, 2023 at 09:18 #818952
    Quoting Srap Tasmaner
    You know 50% is a ratio, right?


    Pierre-Normand is saying that P(X) refers to the ratio of Xs to non-Xs in some given reference class.

    I'm saying that P(X) refers to the degree to which I believe X to be true.

    If P(X) refers to the degree to which I believe X to be true, and if I believe that A iff B, then P(A) = P(B).
    JeffJo June 30, 2023 at 14:06 #818997
    Quoting Michael
    OK, I understand your argument now,

    No, you seem to understand the process finally, but your counterargument completely misses the point of the argument.

    Specifically, you are saying that by not applying probability theory as you do, in the same way you do, that I am not implementing the problem correctly. I am saying - and nothing you have said challenges this - that:

    1. I have implemented the original problem (not the variation of it that Elga solved) exactly.
      • I may have added a detail (coin C2), but that is how the specifics of the original problem (one or two wakings) are managed.
      • You (well, Elga) are adding a similar detail with the Monday/Tuesday schedule. It also adds a detail, and is correct. But it obfuscates the solution instead of clarifying it.
      • Mine has a trivial solution that establishes what the answer to any correct implementation of the problem is.
      • So any solution that does not get that answer, to a correct implementation, is wrong.

    Quoting Michael
    in your experiment the prior probability P(HH) = 1/4 is ruled out when woken


    Which is the entire point. You refuse to recognize that something must be "ruled out" due to the fact that your SB cannot be awake, after Heads, on Tuesday. The reason you do this is because the probability space that governs the situation on your Monday is different than that which governs Tuesday, and you disagree with how a joint probability space is calculated by Thirders.

    My entire point is to create a single probability space that applies, unambiguously, to any waking.
    Quoting Michael
    1. Sleeping Beauty is given amnesia and asked her credence that the coin will or did land heads
    2. The coin is tossed
    3. If the coin lands heads then she is sent home
    4. If the coin lands tails then she is given amnesia, asked her credence that the coin will or did land heads, and sent home

    This does not implement the original problem. She is wakened, and asked, zero times or one time.

    From Elga:
    The Sleeping Beauty problem:
    Some researchers are going to put you to sleep. During the two days that your sleep will last, they will briefly wake you up either once or twice, depending on the toss of a fair coin (Heads: once; Tails: twice). After each waking, they will put you to back to sleep with a drug that makes you forget that waking. When you are first awakened, to what degree ought you believe that the outcome of the coin toss is Heads?

    Michael June 30, 2023 at 14:08 #818998
    Quoting JeffJo
    This does not implement the original problem. She is wakened, and asked, zero times or one time.


    She’s asked once in step 1 and then, optionally, again in step 4.

    No prior probability is ruled out when asked.
    JeffJo June 30, 2023 at 14:27 #819002
    Quoting Michael
    She’s asked once in step 1 and then, optionally, again in step 4.


    Okay, I missed that.

    Quoting Michael
    No prior probability is ruled out when asked.


    When she is asked the second time, the "prior probability" of heads is ruled out.

    That is what is wrong with your solution. You disagree with this criticism, but that does not invalidate these steps. In fact, nothing about anybody's solution invalidates anybody's set of steps.

    But you have not addressed my steps, or my solution. You have only repeated your solution. A solution I claim is invalid.
    Michael June 30, 2023 at 14:27 #819003
    Quoting JeffJo
    When she is asked the second time, the "prior probability" of heads is ruled out.


    No it's not. She doesn't know that she's being asked a second time. She can't rule out heads.
    JeffJo June 30, 2023 at 14:48 #819010
    Quoting Michael
    No it's not. She doesn't know that she's being asked a second time. She can't rule out heads.


    Yes, it is. The Law of Total Probability says that

    Pr(Heads) = Pr(Heads&First Time) + Pr(Heads&Second Time)
    But "not ruling out" has no significance. Do this instead:

    1. Shopping Beauty is given amnesia and asked her credence that the coin will or did land heads
    2. The coin is tossed
    3. If the coin lands heads then she is given amnesia and taken shopping.
    4. If the coin lands tails then she is given amnesia, asked her credence that the coin will or did land heads, and sent home.

    If Shopping Beauty is taken shopping, she knows that Pr(Heads)=1, even though she isn't asked. If she is asked for a credence, she knows that one of four possibilities is ruled out.
    Michael June 30, 2023 at 14:50 #819012
    Quoting JeffJo
    Pr(Heads) = Pr(Heads&First Time) + Pr(Heads&Second Time)


    The prior P(Heads & Second Time) = 0 as established by the rules of the experiment. She will never be asked a second time if the coin lands heads. So there's nothing for her to later rule out when she's asked her credence.
    Srap Tasmaner July 01, 2023 at 02:28 #819122
    Quoting Michael
    You know 50% is a ratio, right? — Srap Tasmaner


    Pierre-Normand is saying that P(X) refers to the ratio of Xs to non-Xs in some given reference class.

    I'm saying that P(X) refers to the degree to which I believe X to be true.


    I know what you think you're saying.

    But you want to express your degree of belief as a percentage. Fine. But percentages are just a funny way of writing fractions. So what do the numerator and the denominator represent?

    If you want to say that you're just describing your confidence, somewhere between 0% representing absolute certainty of falsehood, with respect to some P, and 100% representing absolute certainty of the truth of P, you ought to be able to explain where these numbers come from. So where do they come from?

    What exactly does it mean to say you have, say, 75% confidence? That you'll turn out to have been right 3 times out of 4? Out of 4 whats? Universes? --- That's ratio talk, because percentages are ratios. If you want to use percentages, you have to have some such account. So what's yours?
    Pierre-Normand July 01, 2023 at 05:25 #819146
    Quoting Michael
    Pierre-Normand is saying that P(X) refers to the ratio of Xs to non-Xs in some given reference class.

    I'm saying that P(X) refers to the degree to which I believe X to be true.

    If P(X) refers to the degree to which I believe X to be true, and if I believe that A iff B, then P(A) = P(B).


    Actually, I suggested that P(X) could be understood as referring to the ratio of |{X}| to (|{X}| + |{not-X}|) in epistemically identical situations with respect to X. There is some flexibility in defining what the relevant situations are.

    In the case where Sleeping Beauty can say "I am experiencing an H-awakening iff I am experiencing an H-run", and there is a one-to-one mapping between H-awakenings and H-runs, we still can't logically infer that P(H-awakening) = P(H-run). This is because one can define P(H-awakening) as |{H-awakening}|/|{awakenings}| and similarly define P(H-run) as |{H-run}|/|{run}| where {awakenings} and {run} are representative sets (and |x| denotes cardinality). For the inference to hold, you would also need a one-to-one mapping between the sets of T-runs and T-awakenings.

    So, the grounds for a Thirder's credence P(H-awakening) being 1/3 (where it is defined as |{H-awakening}|/|{awakenings}|) simply comes from the propensity of the experimental setup to generate twice as many T-awakenings as H-awakenings.

    You argue that the number of T-awakenings is irrelevant to the determination of Sleeping Beauty's credence P(H-awakening) because her having multiple opportunities to guess the coin toss result when it lands tails doesn't impact the proportion of tails outcomes. However, while it doesn't impact |{H-run}|/|{run}|, it does impact |{H-awakening}|/|{awakenings}|, which is why I argue that Halfers and Thirders are talking past each other.

    Consider an alternative experiment setup where Sleeping Beauty awakens less often rather than more often when the coin lands tails. For instance, we could eliminate Tuesday awakenings altogether and ensure that Sleeping Beauty awakens once on Monday when the coin lands heads, and only half the time when it lands tails (by tossing a second coin, say). This setup would, in the long run, reverse the ratio of H-awakenings to T-awakenings compared to the original setup. In that case, when Sleeping Beauty awakens, would you still argue that her credence P(H-awakening) remains 1/2? A Thirder would argue that her credence should now increase to 2/3, based on the same frequency-ratio reasoning.
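A minimal sketch of this reversed-frequency variant, assuming the tails awakening is gated by a second fair coin as suggested:

```python
import random

# Sketch of the alternative setup: Heads -> one awakening; Tails -> one
# awakening only if a second fair coin lands heads, otherwise none.
random.seed(0)
awakenings = 0
heads_awakenings = 0
for _ in range(100_000):
    heads = random.random() < 0.5
    n_wakes = 1 if heads else (1 if random.random() < 0.5 else 0)
    awakenings += n_wakes
    if heads:
        heads_awakenings += n_wakes

print(heads_awakenings / awakenings)  # ~2/3: per-awakening frequency of Heads
```

Whether that per-awakening frequency is the right thing to call her credence is, of course, the very point in dispute.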
    Michael July 01, 2023 at 09:11 #819167
    Quoting Pierre-Normand
    Consider an alternative experiment setup where Sleeping Beauty awakens less often rather than more often when the coin lands tails. For instance, we could eliminate Tuesday awakenings altogether and ensure that Sleeping Beauty awakens once on Monday when the coin lands heads, and only half the time when it lands tails (by tossing a second coin, say).


    I consider a similar situation here.

    Halfer reasoning gives an answer of 4/7 and thirder 1/2. I think 4/7 is more reasonable.

    Heads and tails are equally likely, but I’m less likely to wake if tails, and so if I do wake my credence is that it’s less likely to be tails.

    Even though, in the long run, the number of heads-awakenings and tails-awakenings is equal.
    Michael July 01, 2023 at 09:27 #819169
    Reply to Srap Tasmaner

    My claim is simply regarding what the word means.

    Credence "or degree of belief is a statistical term that expresses how much a person believes that a proposition is true."

    Therefore, if I believe that A iff B and if my credence in A is 1/2 then my credence in B is 1/2.

    What it means for one's credence to be 1/2 rather than 1/3 is a secondary matter. My only point here is that it cannot be that each of these is true:

    1. I believe that A iff B
    2. My credence in A is 1/2
    3. My credence in B is 1/3

    Either both my credence in A and my credence in B is 1/2 or both my credence in A and my credence in B is 1/3.

    So now we have an issue:

    1. I believe that H-run iff H-interview
    2. My credence in H-run is equal to my credence in H-interview
    3. 1/2 of all runs are H-runs
    4. 1/3 of all interviews are H-interviews

    Given (2) it cannot be that my credence in H-run is equal to the fraction of all runs that are H-runs and that my credence in H-interview is equal to the fraction of all interviews that are H-interviews.

    It may be that one's credence is a reflection of the ratio in some reference class, but given the above, (3) and/or (4) are the wrong reference class to use.

    So we go back to my previous argument:

    P1. If I have been assigned at random either one heads interview or two tails interviews then the probability that I have been assigned one heads interview is [math]1\over2[/math]
    P2. I have been assigned at random either one heads interview or two tails interviews
    C1. Therefore my credence that I have been assigned one heads interview is [math]1\over2[/math]
    P3. My current interview is a heads interview iff I have been assigned one heads interview
    C2. Therefore my credence that my current interview is a heads interview is [math]1\over2[/math]

    P1, P2, and P3 are true, C1 follows from P1 and P2, and C2 follows from C1 and P3. Therefore C2 is true.

    My credence that my current interview is a heads interview is equal to the fraction of runs assigned one heads interview, not the fraction of interviews which are heads interviews.
    JeffJo July 01, 2023 at 14:31 #819225
    Quoting Michael
    The prior P(Heads & Second Time) = 0 as established by the rules of the experiment. She will never be asked a second time if the coin lands heads. So there's nothing for her to later rule out when she's asked her credence.


    She cannot use ""First Time" and "Second Time" in her solution at all, except in the context of the Law of Total Probability. Because she cannot have the slightest clue which "time" (and you don't define if "second time" means the second possible waking, or the second actual waking) it is. So she can "rule out" this case as part of this Law's application.

    The entire point of my suggested implementation is that "First Time" and "Second Time" look exactly the same. So the "prior" means the circumstances that govern "This Time," not "How this time relates to the other possible time."

    But you still ignore that the details used in anybody's solution do not establish whether the implementation of the problem is correct. They can only establish whether a solution is. Your argument here is a non sequitur. The experiment does not require the subject to distinguish between which "time" it is; and in fact, it prevents isolating such a difference.

    Let me repeat "the rules of the experiment." From Elga's 2000 paper titled "Self-locating belief and the Sleeping Beauty problem" :
    Some researchers are going to put you to sleep. During the two days that your sleep will last, they will briefly wake you up either once or twice, depending on the toss of a fair coin (Heads: once; Tails: twice). After each waking, they will put you to back to sleep with a drug that makes you forget that waking. When you are first awakened, to what degree ought you believe that the outcome of the coin toss is Heads?

    Q1: Do you agree, or disagree, that the procedure I have outlined (with two coins, turning coin C2 over, but asking only for credence in coin C1) correctly implements this?

    Q2: Do you agree, or disagree, that the subject can be wakened, and asked for this credence, during the "second time" the waking+interviewing process could occur?

    Q3: Do you agree, or disagree, that the subject's answer (when asked) can be based solely on the state of the coins between time A, when they were examined to see if she should be wakened, and time B, when/if she is subsequently wakened and interviewed?

    And you also ignored this:
    Quoting JeffJo
    1. Shopping Beauty is given amnesia and asked her credence that the coin will or did land heads
    2. The coin is tossed
    3. If the coin lands heads then she is given amnesia and taken shopping.
    4. If the coin lands tails then she is given amnesia, asked her credence that the coin will or did land heads, and sent home.


    Q4: Do you agree, or disagree, that she can have a credence in Heads while shopping in step 3 of this variation of what you presented?

    Q5: And that she even can be asked for it without affecting her credences at other times?

    Q6: And that those credences can't be affected by whether they lied to her about step 3?

    Michael July 01, 2023 at 14:44 #819230
    Reply to JeffJo

    In your experiment the prior probability P(HH) = 1/4 becomes P(HH) = 0 when she’s asked her credence. But there is no prior P(X) = 1/4 that becomes P(X) = 0 when she’s asked her credence in my simplified form of the experiment.

    Hence your experiment is not equivalent and your solution doesn’t apply.
    JeffJo July 01, 2023 at 15:12 #819242
    Quoting Michael
    In your experiment the prior probability P(HH) = 1/4 becomes P(HH) = 0 when she’s asked her credence. But there is no prior P(X) = 1/4 that becomes P(X) = 0 when she’s asked her credence in my simplified form of the experiment.

    Hence your experiment is not equivalent and your solution doesn’t apply.

    Tell me if you remember reading this before: In any experiment, measures of probability define a solution, not the experiment itself. The more you repeat this non sequitur (that your preferred solution can't be applied to my version of the experiment), the more obvious it becomes that you recognize that my experiment is correct.

    BUT, Q4 thru Q6 show that there is a "prior P(X) = 1/4 that becomes P(X) = 0 when she’s asked her credence" in your experiment. There is a possibility that she can rule out, and it DOES NOT MATTER whether she would be awake, or asked anything, in that possibility. All that matters is that she knows it is not the current case.

    Now, we could discuss all this if you would reply with anything except this same, tired non sequitur. Q1 thru Q3 are an easy start. But they don't involve mentioning prior probabilities.
    Michael July 01, 2023 at 15:14 #819243
    Quoting JeffJo
    there is a "prior P(X) = 1/4 that becomes P(X) = 0 when she’s asked her credence" in your experiment.


    Which is what? What prior P(X) = 1/4 becomes P(X) = 0 when she’s asked her credence?
    JeffJo July 01, 2023 at 15:15 #819244
    Quoting Michael
    Which is what?


    That 1/4 chance that she would have been taken shopping.
    Michael July 01, 2023 at 15:25 #819247
    Quoting JeffJo
    That 1/4 chance that she would have been taken shopping.


    I’m not asking about your shopping example. I’m asking about this example:

    1. Sleeping Beauty is given amnesia and asked her credence that the coin will or did land heads
    2. The coin is tossed
    3. If the coin lands heads then she is sent home
    4. If the coin lands tails then she is given amnesia, asked her credence that the coin will or did land heads, and sent home
    JeffJo July 01, 2023 at 15:49 #819252
    Quoting Michael
    I’m not asking about your shopping example.

    But I am.

    I’m asking about this example:

    1. Sleeping Beauty is given amnesia and (A or B) asked her credence that the coin will or did land heads
    2. The coin is tossed
    3. If the coin lands heads (C) then she is sent home
    4. If the coin lands tails then she is given amnesia, (D) asked her credence that the coin will or did land heads, and sent home


    And the reason for the shopping example is to point out that the four parts that I highlighted and labeled A, B, C, and D each have a prior of 1/4. And that what does or does not happen in C cannot change SB's credence in A, B, or D. All that matters is that when she knows she is in A, B, or D, she can "rule out" C. You are treating C as if it isn't a point in the experiment, and it is.

    But you would have to address my questions to discuss this, and any truthful answer to them would discredit all you have argued. So you continue to ignore them, and reply only with this same non sequitur.
    Michael July 01, 2023 at 15:59 #819254
    Quoting JeffJo
    And the reason for the shopping example is pointing out that the four parts that I highlighted and labeled A, B, C, and D each have a prior of 1/4.


    No they don’t. Your “A or B” isn’t two separate things but one thing with prior probability 1. C and D each have a prior probability of 1/2; C will happen if the coin lands heads and D will happen if the coin lands tails, and the prior probability that a coin will land heads is 1/2.

    And why do you count being sent home in step 3 as part of the probability space but not being sent home in step 4 as part of the probability space? You’re being inconsistent.
    Srap Tasmaner July 01, 2023 at 16:04 #819255
    Quoting Michael
    What it means for one's credence to be 1/2 rather than 1/3 is a secondary matter.


    For my post too, which expressed curiosity about what "1/2" means in this context.

    I understand this is a sideline to the Sleeping Beauty discussion, but what "subjective probability" could possibly be is also kinda what the whole puzzle is about. I just thought we could pause and consider the foundations.
    JeffJo July 01, 2023 at 16:26 #819258
    Quoting Michael
    No they don’t. Your “A or B” isn’t two separate things but one thing with prior probability 1. C and D each have a prior probability of 1/2; C happens if the coin lands heads and D happens if the coin lands tails, and the prior probability that a coin will land heads is 1/2.


    1. Your question "what is your credence the coin will/did land on Heads" is asking SB to distinguish between the cases where your coin will/did land on Heads, and will/did land on Tails. So cases A and B, which depend on the same distinction, must be distinct outcomes to SB. Even if the coin isn't flipped yet, this is a part of your experiment that you insist others must recognize. So please don't be disingenuous and try to deny it again.
    2. (A or B) is indeed certain to happen in the experiment. But since it is possible that SB is in the part labeled D, and part D is not in (A or B), the probability that she is in (A or B) has to be less than one. Do I need to provide a lecture on what probability means? Or are you ready to at least pretend to have an open mind?


    But this can all be clarified, by considering my questions. I have little "credence" that you will, because you can't accept those answers.
    Srap Tasmaner July 01, 2023 at 16:32 #819260
    Quoting Michael
    What it means for one's credence to be 1/2 rather than 1/3 is a secondary matter.


    Here's another stab at it.

    Ramsey has that ingenious example when he's originally arguing for the very idea of subjective probability and both the possibility and the necessity of putting numbers on credences. You're walking from one town to another, but come to a point where you're not sure you're going the right way; there's a farmer working in a field alongside the road. You can count the steps you would take -- going out of your way -- to reach him and ask directions: the more steps you'd be willing to take, the less certain you are that you know the way, the fewer you'd take the more certain you must be that you know the way.

    That's brilliant, but note there are no percentages here to start with, but there is specifically the possibility of comparing one level of confidence to another, and that leads directly to percentages, because you can say how much more confident one answer is than another.

    But how does this analysis actually work? Is there a possible world in which Frank walks seventeen paces and another in which he walks thirty? It's all hypothetical, counterfactual even, and experiments that are not performed (I am told on good authority) have no results.
    Michael July 01, 2023 at 16:37 #819262
    Quoting JeffJo
    Your question "what is your credence the coin will/did land on Heads" is asking SB to distinguish between the cases where your coin will/did land on Heads, and will/did land on Tails. So cases A and B, which depend on the same distinction, must be distinct outcomes to SB.


    She’s being asked her credence that step 3 happens.

    Step 1 just isn’t two events with a prior probability of 1/4 each. It’s one event with a prior probability of 1. Step 2 has a prior probability of 1 and steps 3 and 4 each have a prior probability of 1/2.
    JeffJo July 01, 2023 at 16:43 #819266
    Quoting Srap Tasmaner
    what "subjective probability" could possibly be is also kinda what the whole puzzle is about. I just thought we could pause and consider the foundations.


    I roll two six sided dice. I tell you the resulting sum is an odd number. What is your "credence"="subjective probability" that the dice landed on a 3 and a 4? Is it 2/36, or 2/18 (there are two ways they can land on 3 and 4)?

    This is pertinent to your question if one thinks the answer to the SB problem is 1/2, solely because a fair coin toss has that probability. In that interpretation, the answer here is 2/36.
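For reference, a minimal enumeration of the dice question:

```python
from itertools import product

# All 36 equally likely rolls of two six-sided dice.
rolls = list(product(range(1, 7), repeat=2))
odd_sum = [r for r in rolls if sum(r) % 2 == 1]

def is_3_and_4(r):
    return sorted(r) == [3, 4]

print(sum(map(is_3_and_4, rolls)) / len(rolls))      # 2/36, ignoring the "sum is odd" report
print(sum(map(is_3_and_4, odd_sum)) / len(odd_sum))  # 2/18, conditioning on the odd sum
```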

    The puzzle is "about" whether SB receives "new information," similar to "the sum is odd," when she is awake. Halfers say she doesn't, and Thirders say she does. What you choose to call the probability has no relevance. In my opinion, arguments based on what you call it are used because they can't defend their usage of "new information or not," and they know that their counterpart cannot defend such a definition.

    She receives new information. The reason some think otherwise is because they think "you sleep through the outcome" means "the outcome doesn't occur." There are four equally-likely experiment states that can arise that are distinct from each other. One of them means the subject is asleep (or not asked for credence, same thing), but it is a state nonetheless. If she is awake, she knows that she is in one of three.
    JeffJo July 01, 2023 at 16:49 #819267
    Quoting Michael
    No they don’t. She’s being asked her credence in the outcome of step 3.


    Yes, they do. In step 1, she is asked for her credence in the outcome of step 2, not step 3 (which also asks about it). So she must be able to distinguish between its two possible results when she is in step 1. You can't have it both ways; either the credence question is ambiguous in step 1 because the coin hasn't been flipped, or there are two recognizable future results that also distinguish A and B.
    Michael July 01, 2023 at 17:34 #819281
    Reply to JeffJo

    We’re talking about prior probabilities, i.e. the probabilities as established before the experiment starts.

    The prior probability that step 1 will happen is 1.
    The prior probability that step 2 will happen is 1.
    The prior probability that step 3 will happen is 1/2.
    The prior probability that step 4 will happen is 1/2.

    There is no prior probability equal to 1/4.

    When she is asked her credence she cannot rule out any of these prior probabilities, and she is being asked her credence that step 3 occurs.
    JeffJo July 01, 2023 at 18:03 #819290
    Quoting Michael
    We’re talking about prior probabilities, i.e the probabilities as established when the experiment starts.


    Which doesn't change the fact that A can be distinguished from B.

    Quoting Michael
    The prior probability that step 1 will happen is 1.


    And the prior probability that the current waking is a step-1 waking is 1/2. The prior probability that this is an "A" waking is 1/4. Same for B, C (yes, she is wakened in C in your system), and D. EACH IS A DISTINCT OCCURRENCE IN YOUR SYSTEM and so has a 1/4 prior probability.

    And guess what? If she is asked for a credence, that 1/4 prior probability for C is "ruled out."

    But you don't agree with this because you refuse to recognize the difference between B and D, as well as A and C (which you deny).

    And guess what else? My experiment bypasses all these issues. Which is why you won't discuss it.
    Michael July 01, 2023 at 18:10 #819291
    Quoting JeffJo
    And the prior probability that the current waking, is a step-1 waking, is 1/2.


    Prior probabilities are established before the experiment starts, so there is no “current waking” prior because there is no “current waking” before the experiment starts.
    JeffJo July 01, 2023 at 18:25 #819293
    Quoting Michael
    Prior probabilities are established before the experiment starts, so there is no “current waking”.


    There is no theory of when prior probabilities are established. But if there were, it would be from the start, not before the start. This is the same principle that allows you to ask for the credence of an occurrence that hasn't happened yet. Any time parameter you need has to extend over the entire experiment, so it sees the future possibilities and can distinguish them.

    But this is the entire controversy behind the Sleeping Beauty Problem. One that I have shown can be trivially removed. And that is why you ignore it.

    Now, I have addressed everything you have tried. Answer my questions.
    Michael July 01, 2023 at 18:31 #819295
    Quoting JeffJo
    There is no theory of when prior probabilities are established. But if there were, it would be fom the start, not before the start.


    So when is this alleged P(X) = 1/4 prior established if not before the experiment starts?

    It cannot be when she is asked her credence as you’ve said that in being asked her credence this prior is reduced to 0.

    But it also cannot be before because there is no “current interview” before she’s asked her credence.

    So this alleged prior just makes no sense.

    Quoting JeffJo
    But this is the entire controversy behind the Sleeping Beauty Problem. One that I have shown can be trivially removed. And that is why you ignore it.


    I'm not ignoring it. I'm showing you that your version with the P(HH) = 1/4 prior that is ruled out when asked is not the same as the normal problem because the normal problem doesn't have an equivalent P(X) = 1/4 prior that is ruled out when asked.
    JeffJo July 01, 2023 at 19:53 #819316
    Quoting Michael
    So when is this alleged P(X) = 1/4 established if not before the experiment starts? It cannot be when she is asked her credence as you’ve said that in being asked her credence this prior is reduced to 0.


    If you have so little understanding of probability theory, you should not be trying to explain it to those who do. The (somewhat simplified) basics are:

    1. A probability experiment is any set of actions that has multiple possible, but unpredictable, results.
    2. An outcome is a measurable result.
    3. A sample space is any set of outcomes that:
      • Are all distinct.
      • Include all possibilities.
    4. Each outcome in the sample space is assigned a probability value such that:
      • Each probability is greater than or equal to zero.
      • The sum of the probabilities for the entire sample space is 1.
    5. Sets of outcomes are called events.
      • The probability of an event is the sum of the probabilities in it.
    6. These can also be called "prior" probabilities.
      • This has absolutely nothing to do with when the probabilities are "established."
      • It refers to before any information about an outcome is revealed.
    7. The companion to a prior probability is a posterior probability. It is sometimes called a conditional probability.
      • It does require timing, but not directly.
      • Specifically, it refers to a time after the result has been determined, and some information about that result has been revealed.
      • I said it was not directly related to timing because the result can be hypothetical. As in "If I roll a die, and the result is odd, the conditional probability that it is a 3 is 1/3."

    When looking at your SB experiment as a whole, there are only two distinct outcomes. The sample space is {Heads,Tails}. We know that both (A or B) and (C or D) will occur, so this distinction does not qualify as different results.

    But when looking at it from SB's awakened state, four are necessary:
    • A and B belong to distinct outcomes, because "Heads" and "Tails" are distinct results. SB knows that only one is (or could be) true. Please note that it does not matter that SB does not see this result, since she knows of the distinction and that only one can apply to her at the moment.
    • B and D are distinct outcomes by the same logic. The measure that distinguishes them is "first possible interview" and "second possible interview." Or "Monday" and "Tuesday" in Elga's solution (not the experiment he described). However you name this quality, SB knows for a fact that one value applies, and the other doesn't.
    • It does not matter if SB is awakened in step 3, or not. The same quality that distinguishes B from D also distinguishes A from C.

    So the sample space is {A,B,C,D}. The prior probabilities, for an awakened SB, are 1/4 for each. The "new information" she receives is that C is ruled out. The posterior probability for A, the only outcome where the coin is Heads, is 1/3.

    But all of this is unnecessary in my scenario. The sample space for either waking is {HH,HT,TH,TT}. Being awake rules out HH.
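A minimal sketch of the conditioning step described above, taking as given the uniform prior over {A, B, C, D} and the claim that being interviewed rules out C:

```python
# JeffJo's four labels: A and B are the first asking with the coin due to land
# Heads or Tails respectively; C is the Heads case with no further asking;
# D is the second asking after Tails. A uniform prior of 1/4 each is assumed.
prior = {"A": 0.25, "B": 0.25, "C": 0.25, "D": 0.25}

# Being interviewed is taken to rule out C; renormalise over what remains.
remaining = {k: v for k, v in prior.items() if k != "C"}
total = sum(remaining.values())
posterior = {k: v / total for k, v in remaining.items()}

print(posterior["A"])  # 1/3: the only remaining outcome on which the coin is Heads
```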
    Michael July 01, 2023 at 20:07 #819318
    Quoting JeffJo
    The prior probabilities, for an awakened SB, are 1/4 for each.


    It makes no sense to say that when she wakes there is then a prior probability that she’s “asleep” of [math]1\over4[/math] that is immediately ruled out. If some prior probability is ruled out when she wakes then it must be that the prior probability is established before she wakes. But the prior probability that she will wake a second time if the coin lands heads is 0, not [math]1\over4[/math].

    If she’s awake then it is just the case that either the coin hasn’t been tossed or it landed tails.

    I can set out an even simpler version of the experiment with this in mind:

    1. Sleeping Beauty is given amnesia
    2. She is asked her credence that a coin has been tossed
    3. A coin is tossed
    4. If the coin lands tails then:
    4A. She is given amnesia
    4B. She is asked her credence that a coin has been tossed

    The prior probability that step 2 will happen is 1 and the prior probability that step 4B will happen is [math]1\over2[/math].

    When Sleeping Beauty is given amnesia she knows that she is in either step 1 or step 4A. When asked her credence she knows that she is in either step 2 or step 4B.

    No prior probability is ruled out when she is given amnesia or asked her credence.

    You have to say that in step 2 her credence is [math]1\over3[/math]. I have to say that in step 2 her credence is [math]1\over4[/math]. Of note is that neither of us can just apply the principle of indifference and say that in step 2 her credence is [math]1\over2[/math].

    Thirder reasoning is that because step 2 is twice as likely to occur as step 4B then I am twice as likely to be in step 2 as step 4B.

    Halfer reasoning is that because step 2 is twice as likely to occur as step 4B, and because, if step 4B will occur, I am equally likely to be in step 2 as in step 4B, I am three times as likely to be in step 2 as in step 4B.
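A minimal sketch contrasting the two counting schemes just described, under the stated protocol; the halfer weighting is operationalised here as picking a run and then one of its askings at random, which is one way (an assumption) of cashing out "equally likely to be in step 2 as step 4B":

```python
import random

# Sketch of the "even simpler" protocol: one asking before the toss (step 2),
# and a second asking after the toss only if it lands tails (step 4B).
# Each asking records whether the coin had already been tossed at that point.
random.seed(0)
runs = []
for _ in range(100_000):
    askings = [False]                # step 2: coin not yet tossed
    if random.random() < 0.5:        # tails
        askings.append(True)         # step 4B: coin has been tossed
    runs.append(askings)

# Thirder-style count: pool every asking.
pooled = [a for run in runs for a in run]
print(sum(pooled) / len(pooled))     # ~1/3 of askings happen after the toss

# Halfer-style count: pick a run, then one of its askings at random.
sampled = [random.choice(run) for run in runs]
print(sum(sampled) / len(sampled))   # ~1/4
```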
    JeffJo July 02, 2023 at 13:23 #819493
    Quoting Michael
    It makes no sense to say that when she wakes there is then a prior probability that she’s “asleep” of [math]1\over4[/math] that is immediately ruled out.


    That is the only thing that does make sense. You are completely ignoring the fact that the experiment that SB sees when she is awake is not the experiment that she volunteered for. In fact, this is the mistake all halfers make, and they do it because they do not know how to model the difference, so they claim the difference doesn't exist.

    1. The experiment she volunteered for involves the possibility of two wakings.
    2. The experiment she sees involves exactly one waking.


    The reason it is hard to model is that experiment #2 still has to account for the other possible waking. But it has to do so while accounting for the fact that it is not the current waking. I have proposed a way to implement the actual SB problem (not the "most frequent" one that Elga used to account for the other waking) that removes this difference. And that is why you ignore it. Your argument about "no prior=1/4 to rule out" is explicitly saying you do not recognize the difference.

    Now, I can see why some would say we should agree to disagree on this issue. But to make that agreement, you have to at least make an effort to say what you disagree with, and "it can't use the solution I choose, and doesn't get the answer I want" is not such an effort. I have told you why I think all of your arguments are wrong. You display intellectual dishonesty by refusing to address what you think could be wrong with mine. It pretty much implies you can't find anything. I don't mean to be so blunt, but you can easily disprove this by making such an effort.

    Quoting Michael
    I can set out an even simpler version of the experiment with this in mind:

    1. Sleeping Beauty is given amnesia
    2. She is asked her credence that a coin has been tossed


    Um, no. You are asking if it has occurred when you know it hasn't, while a correct implementation has to refer to the flip whenever it might occur.

    Quoting Michael
    The prior probability that step 2 will happen is 1 and the prior probability that step 4B will happen is [math]1\over2[/math]

    Again, no. "Prior" refers to before information is revealed, not to before that information is "established." You do not help your argument by ignoring how probability theory works.

    Michael July 02, 2023 at 13:48 #819495
    Quoting JeffJo
    You are asking if it has occurred when you know it hasn't


    Sleeping Beauty doesn't know that it hasn't occurred. She has amnesia.

    Quoting JeffJo
    "Prior" refers to before information revealed


    So after waking, and before new information is revealed, the prior probability that the coin landed heads and that she is being woken for a second time is 1/4? Then what new information is revealed after waking that allows her to rule out that prior probability? It can’t be “being awake” because that isn’t new information. And it can’t be “being asked her credence” because if she has just woken then she knows with certainty that she is about to be asked her credence.

    The simplest answer is the correct one. The prior probability that the coin will or did land heads and that she is being or will be woken for a second time is and always was 0. That’s just a rule of the experiment.

    Nothing is ruled out when woken or asked her credence that wasn’t already ruled out before the experiment started.

    Even Elga understood this:

    Before being put to sleep, your credence in H was 1/2. I’ve just argued that when you are awakened on Monday, that credence ought to change to 1/3. This belief change is unusual. It is not the result of your receiving new information
    Pierre-Normand July 03, 2023 at 01:16 #819652
    Quoting Michael
    Nothing is ruled out when woken or asked her credence that wasn’t already ruled out before the experiment started.

    Even Elga understood this:


    I believe Elga was mistaken about this. There actually is some information that becomes available to Sleeping Beauty when she awakens, though the nature of this information is rather peculiar. I discussed the nature of this information with GPT-4 in this earlier post.

    What informs Sleeping Beauty about the likelihood that the coin landed (or will land) tails, allowing her to update her credence from 1/2 to 2/3, is the fact that she awakens and that, whenever she awakens, the coin landed (or will land) tails two times out of three. After the experiment is over, and she is awoken on Wednesday (assuming she always receives the amnesia-inducing drug after each interview), this information is lost to her, and her credence reverts back to 1/2. The reason why she can't retain the information available to her during each awakening is that this information pertains specifically to the state of the coin in relation to her current episode of awakening. Upon awakening on Wednesday, she loses this information because she loses the ability to refer deictically to her past episodes of awakening (not even knowing how many of them there were).

    This loss of information can be emphasized further by modifying the experiment in such a way that the information is not lost by her on Wednesday. Suppose we update the protocol so that on rare occasions, which present themselves with equal probability on each awakening episode, Sleeping Beauty is able to write down a note saying "I have now been awakened and interviewed." She can retain this note and read it again on Wednesday. Upon rereading the note on Wednesday, she can reason that it is twice as likely that such a note was produced if the coin landed tails since she would have been twice as likely to write it during such an experimental run. Therefore, she can retain her updated credence P(T) = 2/3 through ordinary Bayesian reasoning.

    The key realization is that the same information that allows Sleeping Beauty to update her credence P(T) from 1/2 to 2/3 upon receiving the note is inherent in every awakening she experiences due to the causal structure of the experiment. Each awakening serves as an implicit notification of her being in one of the two potential kinds of awakening episodes, which are twice as likely to occur if the coin landed tails. This causal relationship between coin toss results and awakenings, established by the experimental setup, provides information that is available to her in every awakening, even when she doesn't have the opportunity to physically write it down. Essentially, the note merely serves to extend this causal relationship to her Wednesday state, providing her with twice as many opportunities to receive the note if the coin landed tails.
    Michael July 03, 2023 at 07:57 #819727
    Quoting Pierre-Normand
    Suppose we update the protocol so that on rare occasions, which present themselves with equal probability on each awakening episode, Sleeping Beauty is able to write down a note saying "I have now been awakened and interviewed." She can retain this note and read it again on Wednesday. Upon rereading the note on Wednesday, she can reason that it is twice as likely that such a note was produced if the coin landed tails since she would have been twice as likely to write it during such an experimental run.


    Not necessarily.

    Assume a probability of 1/2 each time. The probability of writing it if the coin landed heads is 1/2. The probability of writing it (at least once) if the coin landed tails is 3/4. It is 3/2 times as likely to have been tails.

    Assume a probability of 2/3 each time. The probability of writing it if the coin landed heads is 2/3. The probability of writing it (at least once) if the coin landed tails is 8/9. It is 4/3 times as likely to have been tails.

    But notice that as the probability of writing a note each time approaches 1 the "greater likelihood" of it having been tails gets smaller, approaching 1.
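    These ratios are straightforward to verify; here is a small sketch (the function and variable names are mine):

    [code]
    def tails_to_heads_likelihood_ratio(p):
        """Compare P(note written at least once | tails) with P(note written | heads),
        where p is the per-awakening probability of writing the note."""
        p_note_given_heads = p                  # one awakening on heads
        p_note_given_tails = 1 - (1 - p) ** 2   # at least one of two awakenings on tails
        return p_note_given_tails / p_note_given_heads

    print(tails_to_heads_likelihood_ratio(1/2))    # 1.5      ("3/2 times as likely")
    print(tails_to_heads_likelihood_ratio(2/3))    # 1.333... ("4/3 times as likely")
    print(tails_to_heads_likelihood_ratio(0.999))  # ~1.001, approaching 1 as p approaches 1
    [/code]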

    Also, apply the principle of this reasoning to this.

    A fair coin toss is equally likely to be heads as tails, she is less likely (not guaranteed) to wake if tails, therefore if she does wake then she reasons that it's less likely to be tails. In waking she rules out TTT.

    If the answer to this problem is 4/7 then the answer to the normal problem is 1/2. If the answer to the normal problem is 1/3 then the answer to this problem is 1/2.
    Michael July 03, 2023 at 08:31 #819728
    @Pierre-Normand

    These cannot all be true:

    1. Credence "is a statistical term that expresses how much a person believes that a proposition is true"
    2. My current interview is a heads interview iff I have been assigned one heads interview
    3. The fraction of interviews which are heads interviews is [math]1\over3[/math]
    4. The fraction of experiments which have one heads interview is [math]1\over2[/math]
    5. My credence that my current interview is a heads interview is equal to the fraction of interviews which are heads interviews
    6. My credence that I have been assigned one heads interview is equal to the fraction of experiments which have one heads interview

    Propositions like 5 and 6 might usually be true, but they are not true by definition.

    Given 1 and 2, my credence that my current interview is a heads interview is equal to my credence that I have been assigned one heads interview.

    Therefore given 1, 2, 3, and 4, one or both of 5 and 6 is false.

    So how will your reasoning let you choose between 5 and 6 without begging the question?

    I accept 6 and reject 5. My credence that my current interview is a heads interview isn't equal to the fraction of interviews which are heads interviews.

    My argument is:

    P1. If I am certain that A is true if and only if B is true then the degree to which I believe that A is true is equal to the degree to which I believe that B is true
    P2. I am certain that my current interview is my only interview if and only if I have been assigned only one interview
    C1. Therefore the degree to which I believe that my current interview is my only interview is equal to the degree to which I believe that I have been assigned only one interview (from P1 and P2)
    P3. I am certain that if I have been assigned at random by a fair coin toss either one or two interviews then the probability that I have been assigned only one interview is [math]1\over2[/math]
    P4. I am certain that I have been assigned at random by a fair coin toss either one or two interviews
    C2. Therefore I am certain that the probability that I have been assigned only one interview is [math]1\over2[/math] (from P3 and P4)
    P5. The degree to which I believe that I have been assigned only one interview is equal to what I am certain is the probability that I have been assigned only one interview
    C3. Therefore the degree to which I believe that I have been assigned only one interview is [math]1\over2[/math] (from C2 and P5)
    C4. Therefore the degree to which I believe that my current interview is my only interview is [math]1\over2[/math] (from C1 and C3)
    P6. I am certain that my current interview is my only interview if and only if the coin landed heads
    C5. Therefore the degree to which I believe that the coin landed heads is [math]1\over2[/math] (from P1, C4, and P6)
    Pierre-Normand July 03, 2023 at 16:42 #819775
    Quoting Michael
    [...]But notice that as the probability of writing a note each time approaches 1 the "greater likelihood" of it having been tails gets smaller, approaching 1.[...]


    Yes, indeed, which is why I hedged my specification by stipulating that the occasions to write a note were rare.

    If Sleeping Beauty would receive two notes on Wednesday, she'd be able to infer that there were two awakenings and hence that the coin didn't land heads. On the earlier occasions when she was writing those notes, by contrast, she wasn't able to know this. When the probability that she would be able to write a note on each awakening occasion is exactly 1/2, the overlapping cases are just numerous enough to enable her to infer on Wednesday, when she receives one single note, that P(H) = 1/2.

    As the occasions to write a note become rarer (e.g. 1/n with n >> 1), the frequency of those overlapping notes becomes negligible (on the order of n times as many single notes are received as double notes) and Sleeping Beauty's epistemic state (i.e. the value of her credence) asymptotically approaches her epistemic state as it was when she wrote the note. And, as I had suggested in my previous post, this is because when she receives a single note on Wednesday, Sleeping Beauty comes to be causally and epistemically related to the coin result in the exact same manner as she was when she originally wrote the note.

    I'll address the other cases and analyses you have presented separately.
    Michael July 03, 2023 at 17:52 #819789
    Quoting Pierre-Normand
    As the occasions to write a note become rarer (e.g. 1/n with n >> 1), the frequency of those overlapping notes becomes negligible (on the order of n times as many single notes are received as double notes) and Sleeping Beauty's epistemic state (i.e. the value of her credence) asymptotically approaches her epistemic state as it was when she wrote the note. And, as I had suggested in my previous post, this is because when she receives a single note on Wednesday, Sleeping Beauty comes to be causally and epistemically related to the coin result in the exact same manner as she was when she originally wrote the note.


    If heads and n = 100 then the probability of writing a note is 1/100

    If tails and n = 100 then the probability of writing exactly one note is 1/100.

    So if she finds exactly one note on Wednesday then her credence in heads is 1/2.
    Pierre-Normand July 03, 2023 at 17:58 #819790
    Quoting Michael
    If heads and n = 100 then the probability of writing a note is 1/100

    If tails and n = 100 then the probability of writing exactly one note is 1/100.

    So if she finds exactly one note her credence in heads is 1/2.


    Her probability of writing a note is 1/100 on each occasion she awakens. Since she awakens twice when the coin lands tails, her probability of writing a note is 2/100 when a T-experimental run occurs (discounting the 1/10000 cases where she writes two notes).
    Pierre-Normand July 03, 2023 at 18:24 #819792
    Quoting Michael
    If tails then:

    The probability of her writing on Monday is 1/100
    The probability of her writing on Tuesday is 1/100


    Yes.

    Quoting Michael
    The probability of her writing on both Monday and Tuesday is 1/100 * 1/100 = 1/10,000
    The probability of her writing on neither Monday or Tuesday is 1 - (1/100 * 1/100) = 9,999/10,000


    The latter is actually 99/100*99/100 = 0.9801 ("both" and "neither" aren't complementary cases.)

    Quoting Michael
    The probability of her writing on Monday or Tuesday but not both is (1/100 + 1/100) / 2 = 1/100


    It's actually 1 - "both" - "neither" = 1 - 0.0001 - 0.9801 = 0.0198 ≈ 2%, which is roughly twice the probability of writing a note in the H case.
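    For reference, the corrected figures for n = 100 can be reproduced in a few lines (a sketch; p is the 1/100 per-awakening chance of writing a note in a tails run):

    [code]
    p = 1 / 100                        # chance of writing a note on any given awakening
    both = p * p                       # 0.0001
    neither = (1 - p) * (1 - p)        # 0.9801
    exactly_one = 1 - both - neither   # 0.0198, roughly twice the heads-case figure of 0.01
    print(both, neither, exactly_one)
    [/code]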

    Michael July 03, 2023 at 18:28 #819793
    Reply to Pierre-Normand

    Yes, I got the maths wrong there (and deleted my post before you replied; apologies).

    Though I don't see why I should accept your claim that if "she receives a single note on Wednesday, Sleeping Beauty comes to be causally and epistemically related to the coin result in the exact same manner as she was when she originally wrote the note."

    As you said yourself, if the probability of her writing a note is 1/2 then if she finds exactly one note on Wednesday then her credence in Heads is 1/2.
    Pierre-Normand July 03, 2023 at 18:54 #819798
    Quoting Michael
    Though I don't see why I should accept your claim that if "she receives a single note on Wednesday, Sleeping Beauty comes to be causally and epistemically related to the coin result in the exact same manner as she was when she originally wrote the note."


    This is because, when the experimental protocol is expanded to enable Sleeping Beauty to hand notes to her future self in such a manner, the episodes of her receiving a note on Wednesday are produced twice as often in the long run when the coin has landed tails. On the occasion where she awakens and is offered the opportunity to write a note, Sleeping Beauty therefore is enabled to reason thus:

    "When I will read again the note that I am currently writing, on Wednesday, I will be able to rationally infer that it is twice as likely that this note was written by me on the occasion of a T-awakening. But it can only be the case that it will have been written on the occasion of a T-awakening if I am now experiencing a T-awakening. Therefore, it is now twice as likely that I am experiencing a T-awakening."

    Notice also that, since the probability that Sleeping Beauty would be offered an opportunity at any given awakening to write a note is the same regardless of whether it is an H-awakening or a T-awakening, being offered such an opportunity gives her no independent ground to update her credence.

    Quoting Michael
    Note that, as you said yourself, if the probability of her writing a note is 1/2 then if she finds exactly one note then her credence in Heads is 1/2.


    That was only in the specific case where n = 2. As n grows larger, P(H) tends towards 1/3.
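    The dependence on n can be made explicit. Writing p = 1/n for the per-awakening chance of a note, Bayes' theorem gives P(H | exactly one note) = p / (p + 2p(1 − p)) = 1 / (3 − 2p), which is 1 at n = 1, 1/2 at n = 2, and tends to 1/3 as n grows, matching the three figures quoted in this exchange. A sketch that tabulates it (the function name is mine):

    [code]
    def p_heads_given_single_note(n):
        """P(heads | exactly one note found on Wednesday), with a 1/n chance of a
        note being written at each awakening."""
        p = 1 / n
        p_single_given_heads = p                # heads: one awakening, one chance
        p_single_given_tails = 2 * p * (1 - p)  # tails: exactly one note out of two chances
        return 0.5 * p_single_given_heads / (0.5 * p_single_given_heads + 0.5 * p_single_given_tails)

    for n in (1, 2, 10, 100, 10_000):
        print(n, p_heads_given_single_note(n))
    # n = 1 gives 1, n = 2 gives 1/2, and as n grows the value tends to 1/3
    [/code]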
    Michael July 03, 2023 at 19:08 #819800
    Quoting Pierre-Normand
    When I will read again the note that I am currently writing, on Wednesday, I will be able to rationally infer that it is twice as likely that this note was written by me on the occasion of a T-awakening.


    That depends on the probability that you will be given the opportunity to write a note. If that probability is 1/2 then it won't be rational on Wednesday to infer that it is twice as likely that this note was written by me on the occasion of a T-awakening.

    Quoting Pierre-Normand
    That was only in the specific case where n = 2. As n grows larger, P(H) tends towards 1/3.


    And as it grows smaller, P(H) tends to 1. I don't understand the relevance of any of these three answers.

    Why is the correct answer given by any of these situations, let alone by the situation where n is arbitrarily large?
    JeffJo July 03, 2023 at 19:17 #819801
    Quoting Michael
    So after waking, and before new information is revealed, the prior probability that the coin landed heads and that she is being woken for a second time is 1/4?

    You are trying really hard to not understand this, aren't you? Of course, all of this would become moot if you would openly discuss other people's ideas, instead of ignoring them while insisting that they discuss only yours. (See: intellectual dishonesty.)

    AT ANY TIME in an experiment, the prior probability for any event is based on the set of all possibilities that could occur, and how they could occur. Not on what (or when) information is revealed. And not on how they can be observed.

    But a person's BELIEF in an event is based on what information is revealed to her after a result has been realized. And that can depend on how it is observed.

    The problem here, is that many experiments can be described by different sets of results. The roll of two dice can be described by 36 different combinations or 11 different sums. Neither is invalid, but only the 36 combinations allow for prior probabilities to be easily assigned.

    And that is important here, because you are insisting on treating a two-day collection of events as a single event (I'll call your two passes Monday and Tuesday since only the order matters to anything). You are calling Monday+Tails and Tuesday+Tails the same event. But to SB, who can only observe one at a time, they are distinct events that each have half the prior probability that you assign to the combination.

    And you not only call Monday+Heads and Tuesday+Heads the same event in the same way, you deny that Tuesday+Heads happens. It does. Even if (as in "the most frequent" version) SB can't observe it, she knows (A) that it can happen, (B) that it has a non-zero prior probability, and (C) that it is not what is happening if she is awake.

    But in this version:
    Quoting Michael
    1. Sleeping Beauty is given amnesia and asked her credence that the coin will or did land heads
    2. The coin is tossed
    3. If the coin lands heads then she is sent home
    4. If the coin lands tails then she is given amnesia, asked her credence that the coin will or did land heads, and sent home


    She does observe it.
    Michael July 03, 2023 at 19:21 #819802
    Quoting JeffJo
    the prior probability for any event is based on the set of all possibilities that could occur


    And her being woken a second time if the coin lands heads can't occur, which is why its prior probability is 0, not [math]1\over4[/math].

    Quoting JeffJo
    And that is important here, because you are insisting on treating a two-day collection of events as a single event (I'll call your two passes Monday and Tuesday since only the order matters to anything). You are calling Monday+Tails and Tuesday+Tails the same event. But to SB, who can only observe one at a time, they are distinct events that each have half the prior probability that you assign to the combination.


    There aren't two days in my example.
    Pierre-Normand July 03, 2023 at 21:25 #819818
    Quoting Michael
    And as it grows smaller, P(H) tends to 1. I don't understand the relevance of any of these three answers.

    Why is the correct answer given by any of these situations, let alone by the situation where n is arbitrarily large?


    The issue with making n small is that it allows Sleeping Beauty on Wednesday to perform a substantive Bayesian update regarding the origin of the single note. This is because (1) she did not receive two notes and (2) in a significant proportion of cases where a T-run occurs, two such notes are generated instead of one. This makes her epistemic situation dissimilar to her situation when she experiences a particular awakening episode. During such episodes, she can never know that there are two of them due to her amnesia. Making n large makes Sleeping Beauty's epistemic situation on Wednesday, when she receives a note, nearly identical to her situation when she wrote the note, since the Bayesian updating she can perform on the basis of the note being unique is negligible.

    When evaluating P(H) on Wednesday, assuming n >> 1, the question Sleeping Beauty is asking is:

    "What are the odds that this single note that I received was written by me during an H-awakening?"

    The answer is approximately 1/3. However, the note could only have been written during an H-awakening if the coin landed on H. Therefore, P(H) is 1/3.

    The second step in the reasoning is to consider that when Sleeping Beauty awakens and finds an opportunity to write a note, she knows that when she reads it on Wednesday (except on the very rare occasion when she finds two notes) she will be able to rationally infer that the odds that the note was written during an H-awakening are 1/3. Since it is now certain that she will read the note on Wednesday and will possess no more information regarding the circumstances of production of the current note than she currently has, she can already infer that this note is being written by her during an H-awakening with 1/3 odds.

    A streamlined version of Sleeping Beauty's inference is: "Since I now know that I will soon rationally infer that this note was written during an H-awakening with probability 1/3 (on the basis of no new information), I can already infer this right now." (Here, I am making use of van Fraassen's reflection principle.)

    The last step in the argument requires reflecting on the case where Sleeping Beauty doesn't find an opportunity to write a note. In that case, when she awakens, she can reason counterfactually:

    "If I had had an opportunity to write a note to myself, I would then have inferred on Wednesday that P(H) (regarding the current awakening episode that is the source of the note) is 1/3, and hence known now that P(H) is 1/3. But the odds that I am currently experiencing an H-awakening are probabilistically independent of my finding an opportunity to write a note. Therefore, they are 1/3 and the only reason why I will be unable to know this when I awaken on Wednesday (and rather infer that P(H) = 1/2) is because I will have lost the special causal connection that I currently have to my present awakening episode.

    Note that when Sleeping Beauty doesn't receive a note on Wednesday, her credence P(H) = 1/2 doesn't merely differ in value from her credence P(H) = 1/3 during awakenings; the predicates P() also have different meanings. During awakenings, P(H) refers to the odds that her current awakening episode is occurring during a coin toss that landed heads. On Wednesday, P(H) refers to the odds that the experimental run she is exiting from was an H-run. While in each case the biconditionals "I am now in an H-awakening iff I am now (and will be) in an H-run" or (on Wednesday) "I was in an H-awakening iff I am now in an H-run" hold, the probabilities don't necessarily match due to the two-to-one mapping between T-awakenings and T-runs.

    To emphasize this last point, suppose Sleeping Beauty writes a note on each awakening occasion and the experiment is run many times. She ends up with a collection of identical notes, approximately two-thirds of which were written during T-awakenings. She now has lost track of the pairing between the notes. Two things can now be true at the same time:

    (1) Since 1/3 of those notes are H-notes, Sleeping Beauty was right during the occasions where she wrote them to believe P(H-note) = 1/3 and hence that P(H) = 1/3.

    (2) Since at the end of each experimental run, Sleeping Beauty received either one H-note or two T-notes with equal probabilities, the probability that the individual note(s) she received were T-notes (or H-notes) was 1/2 (or 1/2). In other words, in advance of counting how many notes there were on any given Wednesday, Sleeping Beauty could point at the note(s) and say that they were equally likely to be H-notes or T-notes.

    This analysis again highlights how Halfers and Thirders can both be right at the same time but talk past each other when they fail to attend precisely to their respective definitions of P(H), and especially how H and T outcomes are to be individuated and counted.
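    Both (1) and (2) can be exhibited in the same simulation (a sketch of the note-on-every-awakening variant just described; the run count and names are mine). It tallies the fraction of individual notes that are H-notes alongside the fraction of experimental runs that are H-runs, without taking a stand on which count the word "credence" should track:

    [code]
    import random

    rng = random.Random(1)
    h_notes = t_notes = h_runs = total_runs = 0
    for _ in range(100_000):
        total_runs += 1
        if rng.random() < 0.5:   # heads: one awakening, one note
            h_notes += 1
            h_runs += 1
        else:                    # tails: two awakenings, two notes
            t_notes += 2

    print(h_notes / (h_notes + t_notes))  # counting notes: about 1/3 are H-notes, as in (1)
    print(h_runs / total_runs)            # counting runs: about 1/2 are H-runs, as in (2)
    [/code]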

    Here, I've asked GPT-4 to summarise the argument and highlight the main points:
    [hide]Summary

    1. Expanding the Protocol: You suggest a modification to the Sleeping Beauty experiment, where on rare occasions she gets the opportunity to write a note to her future self when she awakens. Importantly, the likelihood of being able to write a note does not depend on whether it is a T-awakening or an H-awakening.

    2. Receiving the Note: When she awakens on Wednesday and receives one note, the probability of the note having been written during a T-awakening is roughly 2/3, assuming the opportunities to write are rare. This is because there are twice as many opportunities to write a note during a T-run compared to an H-run.

    3. Retaining Credence: This enables Sleeping Beauty to maintain her updated credence of P(H) = 1/3 from her awakening into Wednesday. She can rationally infer from receiving the note that it's twice as likely it was written during a T-awakening.

    4. Understanding the Information: The fact that Sleeping Beauty can retain this information when reading the note on Wednesday suggests that the information available to her upon receiving the note is essentially the same information she had access to upon each awakening. Specifically, it's the information that two times out of three, an awakening corresponds to the coin landing Tails.

    5. Counterfactual Reasoning: When Sleeping Beauty doesn't get the opportunity to write a note, she can use counterfactual reasoning. She knows that had she been able to write a note, she would have concluded on Wednesday that P(H) was 1/3, so she can infer the same during the awakening.

    6. Different Meanings of P(H): When Sleeping Beauty doesn't receive a note on Wednesday, her credence P(H) = 1/2 isn't merely a different value than P(H) = 1/3 during awakenings, the predicates P() also have different meanings. During awakenings, P(H) refers to the odds of her current awakening being a H-awakening. On Wednesday, P(H) refers to the odds that the experimental run she is exiting from was an H-run.

    7. Individuation of H and T Outcomes: The argument concludes by stressing the importance of how H and T outcomes are to be individuated and counted. Misunderstandings or disagreements about this can lead to Halfers and Thirders talking past each other.[/hide]
    Michael July 03, 2023 at 21:33 #819819
    Quoting Pierre-Normand
    Since I now know that I will soon rationally infer that this note was written during an H-awakening with probability 1/3 (on the basis of no new information), I can already infer this right now.


    Then before the experiment starts the thirder will say "since I now know that I will soon rationally infer that the coin will have landed heads with probability 1/3 (on the basis of no new information), I can already infer this right now, before the coin is tossed."

    But I think this is wrong.

    Quoting Pierre-Normand
    Making n large makes Sleeping Beauty's epistemic situation on Wednesday, when she receives a note, nearly identical to her situation when she wrote the note, since the Bayesian updating she can perform on the basis of the note being unique is negligible.


    They're not nearly identical. On Wednesday she knows that she only had the opportunity once. When she wrote the note she didn't know that it was her only opportunity. So contrary to the above, there is new information on Wednesday.

    Quoting Pierre-Normand
    Note that when Sleeping Beauty doesn't receive a note on Wednesday, her credence P(H) = 1/2 doesn't merely differ in value from her credence P(H) = 1/3 during awakenings; the predicates P() also have different meanings. During awakenings, P(H) refers to the odds that her current awakening episode is occurring during a coin toss that landed heads. On Wednesday, P(H) refers to the odds that the experimental run she is exiting from was an H-run. While in each case the biconditionals "I am now in an H-awakening iff I am now (and will be) in an H-run" or (on Wednesday) "I was in an H-awakening iff I am now in an H-run" hold, the probabilities don't necessarily match due to the two-to-one mapping between T-awakenings and T-runs.


    I address this here.
    JeffJo July 04, 2023 at 00:27 #819844
    Quoting Michael
    And her being woken a second time if the coin lands heads can't occur, which is why its prior probability is 0, not [math]1\over4[/math].


    In your experiment as you listed it, she isn't put to sleep and isn't woken. Yet you keep describing it as being woken, probably because that language is used in the "most frequent version." I have not mentioned this, because I recognize the need to read what you write with an open mind.

    Quoting Michael
    There aren't two days in my example.


    But the time period after the coin is flipped still exists, and the coin can be Heads during that time. AS I DESCRIBED, I wanted a name for that period. So I chose to call it "Tuesday" regardless of when it occurs.

    So we can label four distinct periods of consciousness for SB during your experiment, by fake-name day and coin result. Any such moment has a 1/4 prior probability to be any one of these. Including Heads+Tuesday.

    But this was the subject of some of my questions. Even with sleep, the time period I choose to call "Tuesday" still exists, and can still have "Heads", regardless of SB being awake for it. The important part is that she knows when it isn't happening.

    If you have any integrity at all, you'd discuss my questions. You won't. because you have no counter argument for them. I have debunked everything you have said.

    Your "version" with sleep.
    1. Sleeping Beauty is given a 10 minute sleep drug that induces amnesia.
    2. Twenty minutes after she is given it (so ten minutes after she wakens with amnesia) a bell sounds, and she is asked her credence that the coin will or did land heads.
    3. After answering, she is given the drug again.
    4. The coin is tossed.
    5. If the coin lands heads she is given a booster that extends the effect of the drug for thirty additional minutes.
    6. The same bell sounds twenty minutes after the dose in step 3. If she is awake when it rings, she is asked the same question.
    7. After answering, she is given the drug a third time.
    8. She wakens about the same time regardless of the coin, and is sent home.

    There are four situations where the sound of the bell could fall on SB's ears: (Ring #1, Heads), (Ring #1, Tails), (Ring #2, Heads), (Ring #2, Tails). Each has a prior probability of 1/2 to an outside observer who can remember both. But only 1/4 to be the one that just fell on SB's ears, even if she can't hear it. If the bell rings and SB is asked a question, she knows that (Ring #2, Heads) is ruled out, and her confidence in Heads is 1/3.

    Your error, as it has always been, is considering (Ring #1, Tails) and (Ring #2, Tails) to be the same result, because both happen if the coin lands on Tails. But the point is that they don't happen at the same time, and this makes them different outcomes to SB, since in her world there is only ever one ring at a time. Even if she is asleep.
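    JeffJo's count for this variant can be reproduced by brute force (a sketch; it computes the long-run fraction of bell rings Sleeping Beauty actually hears that occur under heads, which is the frequency his 1/3 refers to; whether that frequency is her credence is of course the point in dispute):

    [code]
    import random

    rng = random.Random(2)
    heard_rings = heads_heard_rings = 0
    for _ in range(100_000):
        heads = rng.random() < 0.5
        for ring in (1, 2):
            # Ring #1: she is always awake for it; Ring #2: only without the booster, i.e. tails.
            awake = (ring == 1) or not heads
            if awake:
                heard_rings += 1
                heads_heard_rings += heads

    print(heads_heard_rings / heard_rings)  # about 1/3 of the rings she hears fall under heads
    [/code]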
    Pierre-Normand July 04, 2023 at 01:31 #819861
    Quoting Michael
    Then before the experiment starts the thirder will say "since I now know that I will soon rationally infer that the coin will have landed heads with probability 1/3 (on the basis of no new information), I can already infer this right now, before the coin is tossed."

    But I think this is wrong.


    You are correct that this would be wrong. The entire aim of my variation (and the Leonard Shelby variation before it) was to highlight that there is indeed some new information available to Sleeping Beauty upon awakening, and that this information can be retained by her on Wednesday through a note. This information wasn't available to her before the experiment began, and isn't available to her on Wednesday when she doesn't receive a note.

    The objective of my discussion was also to highlight the unique nature of this information. It's not information of the usual kind, which would indicate that a higher proportion of the possible worlds consistent with it are worlds in which the coin landed tails. Indeed, the proportion of possible worlds remains exactly the same. It's rather information that is generated by placing Sleeping Beauty in a higher proportion of centered possible worlds (her distinct awakening episodes within a single experimental run) within the T-run timelines.

    This type of information is the same as the information that is transmitted from her to her own future self (on Wednesday) when she awakens, by selecting twice as many future recipients of the note in the long run when it is a T-note. This is akin to the information you gained that someone you met was a Tunisian with a probability of 2/3, not because there are twice as many Tunisians as there are Italians (there were actually as many of each in the city), but because Tunisians venture outside of their homes twice as often, doubling your opportunities of meeting them. Likewise, the setup of the Sleeping Beauty experiment makes the coins that land tails twice as likely to "meet" her on the occasion of her awakenings.

    Quoting Michael
    They're not nearly identical. On Wednesday she knows that she only had the opportunity once. When she wrote the note she didn't know that it was her only opportunity. So contrary to the above, there is new information on Wednesday.


    I would argue that the situations are nearly identical since this new knowledge is almost inconsequential. Suppose n = 100. On Wednesday, Sleeping Beauty knows that she only had a single opportunity and can thus rule out the minuscule 0.01% chance that she would have had two opportunities. The probability that the note she obtained was an H-note (produced on the occasion of an H-awakening) therefore is P(H-note|single)/(P(H-note|single)+P(T-note|single)) where

    P(H-note|single) = P(single|H-note)P(H-note)/P(single) = (1)(0.5%)/(0.0149) ≈ 0.3356
    (since P(single) = 0.5% + (1/2)*0.0198 = 0.0149, and we had calculated the 0.0198 before)

    and

    P(T-note|single) = P(single|T-note)P(T-note)/P(single) = (99%)(1%)/(0.0149) ≈ 0.6644

    As expected, P(H-note|single) and P(T-note|single) sum up to 1, and the probability that the note Sleeping Beauty obtained on Wednesday was an H-note rises to 0.3356. This figure is slightly larger than 1/3 only because the rare cases of T-runs where two opportunities to write a note were present are discounted when she only receives one.
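    The same figures fall out of a direct computation over the joint probabilities (a sketch, using n = 100 as above):

    [code]
    n = 100
    p = 1 / n                                    # per-awakening chance of a note
    p_single_and_heads = 0.5 * p                 # an H-run producing its one note
    p_single_and_tails = 0.5 * 2 * p * (1 - p)   # a T-run producing exactly one note
    p_single = p_single_and_heads + p_single_and_tails   # 0.0149

    print(p_single_and_heads / p_single)   # P(H-note | single) ≈ 0.3356
    print(p_single_and_tails / p_single)   # P(T-note | single) ≈ 0.6644
    [/code]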

    Quoting Michael
    I address this here.


    Thank you! I'll respond to this separately.
    Michael July 04, 2023 at 05:59 #819893
    Quoting JeffJo
    But the time period after the coin is flipped still exists, and the coin can be Heads during that time.


    She is put to sleep on day 0.

    If the coin lands heads then a 14-sided dice is rolled. She is woken that many days later and asked her credence.

    If the coin lands tails then a 7-sided dice is rolled. She is woken that many days later, asked her credence, and put back to sleep. The dice is rolled again. She is woken that many days later and asked her credence.

    Whether heads or tails she can wake on any day between 1 and 14 days after she is first put to sleep.

    Nothing is ruled out when she wakes. She doesn't rule out heads and day 1, she doesn't rule out heads and day 2, ... and she doesn't rule out heads and day 14.
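    The claim that waking rules nothing out can be checked by enumerating the variant's support (a sketch; days are counted from the day she is first put to sleep): every heads-and-day-k combination, for k from 1 to 14, has non-zero prior probability, and so does every tails-and-day-k combination.

    [code]
    # Enumerate every (coin, waking day) pair that has non-zero prior probability.
    support = set()

    for d in range(1, 15):            # heads: one waking, day given by the 14-sided dice
        support.add(("heads", d))

    for d1 in range(1, 8):            # tails: first waking, day given by the 7-sided dice
        support.add(("tails", d1))
        for d2 in range(1, 8):        # tails: second waking, d2 days after the first
            support.add(("tails", d1 + d2))

    print(sorted(d for c, d in support if c == "heads"))  # days 1..14 all possible
    print(sorted(d for c, d in support if c == "tails"))  # days 1..14 all possible
    [/code]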

    The specific days she’s woken or kept asleep are irrelevant. The only things that matter are that she has either one or two interviews – determined by a fair coin toss – and that she doesn’t know if she’s already had one. Everything else is a red herring.
    Michael July 04, 2023 at 07:24 #819896
    Quoting Pierre-Normand
    While in each case the biconditionals "I am now in an H-awakening iff I am now (and will be) in an H-run" or (on Wednesday) "I was in an H-awakening iff I am now in an H-run" hold, the probabilities don't necessarily match due to the two-to-one mapping between T-awakenings and T-runs.


    I know I referred you to one of my previous posts, but I’ll respond to this directly too.

    We’re discussing credence.

    If I am certain that A is true if and only if B is true and if I am pretty sure that A is true then ipso facto I am pretty sure that B is true.

    Given the biconditional one’s credence in the left hand side must match one’s credence in the right hand side, even if there is this “two-to-one mapping”.

    I am certain that my hand is one of two hands I have if and only if I have two hands.

    I am pretty sure that my hand is one of two hands I have.

    Therefore I am pretty sure that I have two hands.

    It doesn’t make sense to say that you’re pretty sure (P = 2/3) that your current interview is one of two but that you’re on the fence (P = 1/2) as to whether or not you have been assigned two interviews.

    Either you are pretty sure that you have been assigned two interviews or you are on the fence as to whether or not your current interview is one of two.
    JeffJo July 04, 2023 at 12:08 #819911
    Quoting Michael
    The specific days she’s woken or kept asleep are irrelevant.


    I agree, with a caveat. The specific details of whether she is woken at a specific point ("day") in the experiment do matter. You can test this in Elga's solution. If she is told that the current day is Tuesday, then she knows Pr(Heads) must decrease to 0. If she is told that it is not Tuesday, the Law of Total Probability actually requires it to go up. It goes from 1/3 to 1/2 in the Thirder solution, and from 1/2 to 2/3 in the Halfer solution. This is quite relevant, even if you think it is wrong AND CAN PROVIDE A VALID REASON.

    But it isn't the day name that matters, it is the details associated with that day name. And since these details are different on different "days," we can track them by naming the "days."

    Now, you could argue about why those details might matter; but so far you have refused to. You have just asserted they don't (in spite of evidence like I just presented). But naming them cannot affect the correct answer, no matter how those details affect it. SO THERE IS NO REASON TO NOT NAME THE "DAYS." And even the possibility that they might have an effect makes them relevant.

    In other words, you are proffering the red herring here. You are insisting that we must ignore a piece of potential information, because you think it has no effect. If so, there is no harm in including it.
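    The two updates cited at the start of this reply can be checked against the usual credence tables (a sketch; the "halfer" table here is the Lewis-style assignment that the 1/2-to-2/3 figure presupposes, not the "double-halfer" position discussed elsewhere in the thread):

    [code]
    def p_heads_given_monday(table):
        """Condition a {(day, coin): probability} table on 'today is Monday'."""
        monday_mass = sum(p for (day, _), p in table.items() if day == "Mon")
        return table[("Mon", "H")] / monday_mass

    thirder = {("Mon", "H"): 1/3, ("Mon", "T"): 1/3, ("Tue", "H"): 0, ("Tue", "T"): 1/3}
    halfer  = {("Mon", "H"): 1/2, ("Mon", "T"): 1/4, ("Tue", "H"): 0, ("Tue", "T"): 1/4}

    print(p_heads_given_monday(thirder))  # 0.5:     1/3 -> 1/2
    print(p_heads_given_monday(halfer))   # 0.666…:  1/2 -> 2/3
    [/code]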
    Quoting Michael
    The only things that matter are that she has either one or two interviews – determined by a fair coin toss – and that she doesn’t know if she’s already had one. Everything else is a red herring.


    I agree (well, if you add "or if she will have one later") about "only things." But the "red herring" you claim exists is trying to utilize information that you say has no effect while refusing to address effects that are clearly present.

    But if you truly believe in this, then it should be easy to address this:
    1. She is put to sleep. A coin, call it C1, is tossed to determine if "she has either one or two interviews."
    2. But then a second coin is tossed, call it C2.
    2.1. Call the state of the coins at this moment S1.
    2.2. If either coin is showing Tails in S1, wake her with amnesia and interview her.
    2.3. If both are showing Heads, do SOMETHING ELSE.
    2.4. If she isn't asleep at this point, put her to sleep again.
    3. Turn coin C2 over to show its other side.
    3.1. Call the state of the coins at this moment S2.
    3.2. If either coin is showing Tails in S2, wake her with amnesia and interview her.
    3.3. If both are showing Heads, do SOMETHING ELSE.
    3.4. If she isn't asleep at this point, put her to sleep again.
    4. Wake her and send her home.

    This procedure creates what, in your words, are "the only things that matters." She has either one, or two, interviews and does not know if another will/did happen. The only thing that is different, is that your possible two interviews occur under different circumstances; one is mandatory, and one is optional. What we disagree about is whether the part that is missing - the non-interview when the option is not taken - matters.

    Here they occur under identical circumstances. That is, either steps 2.1 thru 2.4, or steps 3.1 thru 3.4. And those circumstances can be used to answer the question. I did "name" the details by calling them state S1 or S2, but since they are identical to SB she can call them state S.

    There are three possible combinations of the two coins in state S, and they are equally likely. Her credence in state S=(H,T) is 1/3.
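    The two-coin procedure is easy to simulate (a sketch; names mine). Across many runs it counts what fraction of interview moments occur while C1 shows Heads, the long-run frequency behind the 1/3 figure; whether that frequency is the right credence is what the exchange below goes on to dispute.

    [code]
    import random

    rng = random.Random(3)
    interviews = c1_heads_interviews = 0
    for _ in range(100_000):
        c1 = rng.choice("HT")
        c2 = rng.choice("HT")
        s1 = (c1, c2)
        s2 = (c1, "H" if c2 == "T" else "T")   # coin C2 turned over
        for state in (s1, s2):
            if "T" in state:                   # at least one coin shows Tails: interview
                interviews += 1
                c1_heads_interviews += (c1 == "H")

    print(c1_heads_interviews / interviews)    # about 1/3 of interviews occur with C1 = Heads
    [/code]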

    This has nothing to do with what may or may not get "ruled out" in a solution to your version of the experiment. That difference is the red herring in the "most frequent" presentation of the problem. This is a self-contained experiment with a trivial answer.

    But it's an answer you don't like. So you will either ignore it, or repeat the non sequitur that it includes the "ruling out of a 1/4 probability" that we are debating about above, which is circular logic.
    Michael July 04, 2023 at 12:12 #819912
    Reply to JeffJo

    So tell me what prior probability is ruled out in my experiment above.

    If there isn't one then there isn't one in the ordinary experiment either. You may have other reasons for believing that the answer to the problem is [math]1\over3[/math], but those reasons can't be that some prior probability of [math]1\over4[/math] is ruled out when woken.
    JeffJo July 04, 2023 at 13:14 #819916
    Quoting Michael
    So tell me what prior probability is ruled out in my experiment above.


    This is getting tiresome. The answer follows trivially from what I have said before - have you read it? But I will not continue to dangle on your string. I'll point it out after you tell me:

    1. Whether you agree that procedure I just described implements "The only things that matter are that she has either one or two interviews – determined by a fair coin toss – and that she doesn’t know if she’s already had one [or will have another]."
    2. If not, what "thing that matters" is not implemented.
    3. What her credence in Heads should be.
    Michael July 04, 2023 at 13:16 #819917
    Quoting JeffJo
    The answer follows trivially from what I have said before - have you read it?


    Nothing in your above post tells me what prior probability is ruled out when she's woken in this experiment.

    In your experiment the prior probability HH = [math]1\over4[/math] is ruled out when woken.

    If you cannot tell me what prior probability is ruled out when woken in my experiment then I have no reason to accept that your experiment is at all equivalent.
    JeffJo July 04, 2023 at 13:34 #819920
    I most certainly can. You just won't try to understand it. Or answer any question placed before you, if you don't want to accept the answer. So once again, just answer mine first and I'll answer yours.
    Michael July 04, 2023 at 13:34 #819921
    Quoting JeffJo
    I agree, with a caveat. The specific details of whether she is woken at a specific point ("day") in the experiment do matter. You can test this in Elga's solution. If she is told that the current day is Tuesday, then she knows Pr(Heads) must decrease to 0. If she is told that it is not Tuesday, the Law of Total Probability actually requires it to go up. It goes from 1/3 to 1/2 in the Thirder solution, and from 1/2 to 2/3 in the Halfer solution. This is quite relevant, even if you think it is wrong AND CAN PROVIDE A VALID REASON.

    But it isn't the day name that matters, it is the details associated with that day name. And since these details are different on different "days," we can track them by naming the "days."

    Now, you could argue about why those details might matter; but so far you have refused to. You have just asserted they don't (in spite of evidence like I just presented). But naming them cannot affect the correct answer, no matter how those details affect it. SO THERE IS NO REASON TO NOT NAME THE "DAYS." And even the possibility that they might have an effect makes them relevant.

    In other words, you are proffering the red herring here. You are insisting that we must ignore a piece of potential information, because you think it has no effect. If so, there is no harm in including it.


    When I said that the only things that matter are:

    1. She has either one or two interviews determined by a fair coin toss and
    2. She doesn’t know if she’s already had one

    I was referring to her just waking up, not being told any further information.

    If she's told that the coin landed tails then 1 is no longer relevant and she just considers 2. If she's told that this is now her first interview then 2 is no longer relevant and she just considers 1.

    And I don't see why having to consider 2 affects her consideration of 1. Knowing or not knowing that this is now her first interview only affects her credence that this is now her first interview. It doesn't affect her credence that she will have two interviews.

    Quoting JeffJo
    This procedure creates what, in your words, are "the only things that matters." She has either one, or two, interviews and does not know if another will/did happen. The only thing that is different, is that your possible two interviews occur under different circumstances; one is mandatory, and one is optional. What we disagree about is whether the part that is missing - the non-interview when the option is not taken - matters.

    Here they occur under identical circumstances. That is, either steps 2.1 thru 2.4, or steps 3.1 thru 3.4. And those circumstances can be used to answer the question. I did "name" the details by calling them state S1 or S2, but since they are identical to SB she can call them state S.

    There are three possible combinations of the two coins in state S, and they are equally likely. Her credence in state S=(H,T) is 1/3.

    This has nothing to do with what may or may not get "ruled out" in a solution to your version of the experiment. That difference is the red herring in the "most frequent" presentation of the problem. This is a self-contained experiment with a trivial answer.

    But it's an answer you don't like. So you will either ignore it, or repeat the non sequitur that it includes the "ruling out of a 1/4 probability" that we are debating about above, which is circular logic.


    In this example when woken she rules out the prior probability HH = [math]1\over4[/math].

    No prior probability is ruled out here when woken so your example isn't equivalent.
    Pierre-Normand July 04, 2023 at 15:33 #819930
    Quoting Michael
    I accept 6 and reject 5. My credence that my current interview is a heads interview isn't equal to the fraction of interviews which are heads interviews.

    My argument is:

    P1. My credence is the degree to which I believe that a proposition is true
    P2. My current interview is a heads interview iff I have been assigned one heads interview
    C1. Therefore my credence that my current interview is a heads interview is equal to my credence that I have been assigned one heads interview (from P1 and P2)
    P3. If I have been assigned at random by a fair coin toss either one heads interview or two tails interviews then the probability that I have been assigned one heads interview is 1/2
    P4. I have been assigned at random by a fair coin toss either one heads interview or two tails interviews
    C2. Therefore the probability that I have been assigned one heads interview is 1/2
    (from P3 and P4)
    P5. My credence that I have been assigned one heads interview is equal to the probability that I have been assigned one heads interview
    C3. Therefore my credence that I have been assigned one heads interview is 1/2
    (from C2 and P5)
    C4. Therefore my credence that my current interview is a heads interview is 1/2
    (from C1 and C3)


    The issue arises from a conflation of two distinct ways of individuating events and counting probabilities. We can see this more clearly if we distinguish between the 'timeline perspective' and the 'episodic perspective'. Your propositions P1 through P4 and C1 through C4 above frequently shift between those two perspectives, which vitiates the validity of some inferences.

    Let's consider the shift from the 'timeline perspective' to the 'episodic perspective'. In the timeline perspective, there are two possibilities ("possible worlds"): an H-timeline and a T-timeline, each with an equal chance of 1/2. The T-timeline, however, comprises two distinct awakening episodes ("centered possible worlds"). This does not create more exclusive events sharing the probability space; rather, it articulates the unique structure of the T-timeline.

    Therefore, when shifting to the episodic perspective, it would be a mistake to divide the probability of the T-timeline (1/2) between the two T-awakenings, suggesting each has a probability of 1/4. This line of thinking presumes these awakenings to be exclusive events within the T-timeline (as they might be if a second coin were tossed to determine which one of the two would occur). However, that's not the case. In the T-timeline, the two awakenings are guaranteed to occur concurrently if at all; the only unknown is which of them Sleeping Beauty currently finds herself in.

    The shift from the timeline perspective to the episodic perspective is not a straightforward Bayesian update on new information. Instead, it's a shift in how we count the alternatives. This shift happens automatically when Sleeping Beauty awakes, because she can't tell apart the two T-episodes and what were concurrent possibilities become exclusive possibilities.

    Once we've dealt with the faulty reasoning that made it appear like the T-first-awakening and T-second awakening were lowered from 1/2 to 1/4 when shifting to the episodic perspective, we can now see how the equiprobability between the H-awakening and T-first-awakening must also be retained when we shift perspectives. When Sleeping Beauty wakes up and doesn't know whether it's Monday or Tuesday, this doesn't change the equiprobability of a H-awakening or a T-first-awakening that she would express her credence in were she to know that it's Monday. Instead, her ignorance about the day of the week introduces an additional possibility—that of her being in a T-second-awakening—which in turn increases the total probability of being in a T-awakening.

    So, since the shift to the episodic perspective preserves both the equiprobabilities P(T-first-awakening) = P(T-second awakening) and P(T-first-awakening) = P(H-awakening), and all three outcomes are exclusive from this perspective, the probabilities must sum up to 1 and therefore must shift from 1/2, 1/2, and 1/2 to 1/3, 1/3 and 1/3.
    Michael July 04, 2023 at 15:42 #819933
    Quoting Pierre-Normand
    Your propositions P1 through P4 and C1 though C4 above frequently shift between those two perspectives, which vitiates the validity of some inferences.


    So which premises are false or which conclusions do not follow?

    Quoting Pierre-Normand
    Therefore, when shifting to the episodic perspective, it would be a mistake is to divide the probability of the T-timeline (1/2) between the two T-awakenings, suggesting each has a probability of 1/4. This line of thinking presumes these awakenings to be exclusive events within the T-timeline


    My current interview being the first or the second T-awakening are exclusive events.

    But I brought up something like this here:

    1. Sleeping Beauty is given amnesia
    2. She is asked her credence that a coin has been tossed
    3. A coin is tossed
    4. If the coin lands tails then:
    4A. She is given amnesia
    4B. She is asked her credence that a coin has been tossed

    Thirder reasoning is that because step 2 is twice as likely to occur as step 4B then I am twice as likely to be in step 2 as step 4B.

    Halfer reasoning is that because step 2 is twice as likely to occur as step 4B and that because if 4B will occur then I am equally likely to be in step 2 as step 4B then I am three times as likely to be in step 2 as step 4B.
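    The two counting rules just described can be written out side by side (a sketch; it only restates the arithmetic of each rule and doesn't decide between them):

    [code]
    p_heads = p_tails = 0.5

    # Thirder-style count: weight each asking occasion by how often it occurs per run.
    step2_rate  = 1.0              # step 2 occurs in every run
    step4b_rate = p_tails          # step 4B occurs only in tails runs
    print(step2_rate / step4b_rate)    # 2.0 -> "twice as likely to be in step 2"

    # Halfer-style count: split a tails run's probability between its two askings.
    p_in_step2  = p_heads * 1.0 + p_tails * 0.5
    p_in_step4b = p_tails * 0.5
    print(p_in_step2 / p_in_step4b)    # 3.0 -> "three times as likely to be in step 2"
    [/code]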
    Pierre-Normand July 04, 2023 at 15:55 #819937
    Quoting Michael
    I know I referred you to one of my previous posts, but I’ll respond to this directly too.

    We’re discussing credence.

    If I am certain that A is true if and only if B is true and if I am pretty sure that A is true then ipso facto I am pretty sure that B is true.


    When I previously addressed this inference of yours, I conceded that it is generally valid, but I also pointed out that it involved a possible conflation of two meanings of the predicate P(). The problem I identified wasn't with the validity of the inference (within the context of probability calculus), but rather with the conflation that could occur when the expression P(A) appears twice in your demonstration.

    What makes you "pretty sure" that A is true is the expectation that A is much more likely to occur than not-A. As such, this probabilistic judgment is implicitly comparative. It is therefore dependent on how you individuate and count not only A events but also not-A events. As I've argued elsewhere, a shift in epistemic perspective can alter the way you count not-Heads events (i.e., Tails events), transforming them from non-exclusive to exclusive. For example, when you move from considering possible world timelines to specific awakening episodes, what were concurrent alternatives (not-H events) become exclusive possibilities. This change in perspective modifies the content of your comparative judgment "H is much more likely to occur than not-H," and consequently affects your credence.
    Michael July 04, 2023 at 16:00 #819940
    Quoting Pierre-Normand
    When I previously addressed this inference of yours, I conceded that it is generally valid, but I also pointed out that it involved a possible conflation of two meanings of the predicate P(). The problem I identified wasn't with the validity of the inference (within the context of probability calculus), but rather with the conflation that could occur when the expression P(A) appears twice in your demonstration.


    There is only one meaning I'm using: "the degree to which I believe that the proposition is true". It's stated as much in P1.

    If I am certain that A is true if and only if B is true then the degree to which I believe that A is true is equal to the degree to which I believe that B is true. This is true for all As and Bs.

    It isn't rational for me to believe more strongly in one side of a biconditional that I am certain is true.

    Quoting Pierre-Normand
    What makes you "pretty sure" that A is true is the expectation that A is much more likely to occur than not-A. As such, this probabilistic judgment is implicitly comparative. It is therefore dependent on how you individuate and count not only A events but also not-A events. As I've argued elsewhere, a shift in epistemic perspective can alter the way you count not-Heads events (i.e., Tails events), transforming them from non-exclusive to exclusive. For example, when you move from considering possible world timelines to specific awakening episodes, what were concurrent alternatives (not-H events) become exclusive possibilities. This change in perspective modifies the content of your comparative judgment "H is much more likely to occur than not-H," and consequently affects your credence.


    Given the above meaning, and as I said before, these cannot all be true:

    1. My current interview is a heads interview iff I have been assigned one heads interview
    2. The fraction of interviews which are heads interviews is [math]1\over3[/math]
    3. The fraction of experiments which have one heads interview is [math]1\over2[/math]
    4. The degree to which I believe that my current interview is a heads interview is equal to the fraction of interviews which are heads interviews
    5. The degree to which I believe that I have been assigned one heads interview is equal to the fraction of experiments which have one heads interview

    You seem to assert that 4 and 5 are true by definition, but they're not. Given the truth of 1, 2, and 3, it must be that one or both of 4 and 5 is false.
    Pierre-Normand July 04, 2023 at 16:59 #819948
    Quoting Michael
    There is only one meaning I'm using: "the degree to which I believe that the proposition is true".

    If I am certain that A is true if and only if B is true then the degree to which I believe that A is true is equal to the degree to which I believe that B is true. This is true for all As and Bs.


    This overlooks the issue that your credence can change over time when your epistemic perspective changes. If your separate uses of the expression P(H) don't take into account the epistemic perspective within which they're intended to be evaluated, you risk equivocation.

    Quoting Michael
    Given the above, as I said before, these cannot all be true:

    1. My current interview is a heads interview iff I have been assigned one heads interview
    2. The fraction of interviews which are heads interviews is 1/3
    3. The fraction of experiments which have one heads interview is 1/2
    4. My credence that my current interview is a heads interview is equal to the fraction of interviews which are heads interviews
    5. My credence that I have been assigned one heads interview is equal to the fraction of experiments which have one heads interview

    You seem to assert that 4 and 5 are true by definition, but they're not. Given the definition of the term "credence", and given the truth of 1, 2, and 3, it must be that one or both of 4 and 5 are false.


    4 and 5 aren't true by definition; rather, they are definitions. Definitions specify how terms are to be used in a given context, and it's the subsequent argumentation that can be evaluated as true or false in light of those definitions. In this case, it appears that you intend for "My credence that my current interview is a heads interview" and "My credence that I have been assigned one heads interview" to both represent Sleeping Beauty's episodic perspective, yet the defining clause in 5 would be more appropriate for a timeline perspective. This potentially equivocal definition could lead to confusion. If, however, 5 is meant to convey a timeline perspective, then 4 and 5 are both reasonable, complementary definitions addressing distinct questions.

    Quoting Michael
    So simply asserting that "the fraction of interviews which are heads interviews is 1/3, therefore my credence that my current interview is a heads interview" is a non sequitur.


    Granted, such an argument would be a bit quick and likely an enthymeme. However, I didn't present it in that way. I provided more explicit steps and premises in my previous post, explaining how attending to the distinction between the two epistemic perspectives (timeline and episodic) allows us to conclude that, in the episodic perspective, P(H)=P(T-first), P(T-first)=P(T-second), and since all three events are mutually exclusive in this perspective, they must each have a probability of 1/3.
    Michael July 04, 2023 at 17:08 #819949
    Quoting Pierre-Normand
    This overlooks the issue that your credence can change over time when your epistemic perspective changes.


    It doesn’t. If my credence in A changes then my credence in B will change along with it (or my credence in A iff B will change).

    Quoting Pierre-Normand
    4 and 5 aren't true by definition; rather, they are definitions.


    I have since changed the wording (although it makes no difference, this wording is just more exact).

    4. The degree to which I believe that my current interview is a heads interview is equal to the fraction of interviews which are heads interviews
    5. The degree to which I believe that I have been assigned one heads interview is equal to the fraction of experiments which have one heads interview

    These are neither definitions nor true by definition.

    One or both must be false given 1, 2, and 3.
    Michael July 04, 2023 at 17:28 #819952
    These mean two different things:

    1. My credence favours this being a tails awakening rather than a heads awakening
    2. There are more tails awakenings than heads awakenings

    You can argue that 1 is true because I believe that 2 is true, but then my argument above shows that this is unreasonable.

    The fraction of awakenings which are tails awakenings is the wrong fraction to base one’s credence that this is a tails awakening on.
    Pierre-Normand July 04, 2023 at 18:10 #819964
    Quoting Michael
    My current interview being the first or the second T-awakening are exclusive events.


    They indeed are. As you get involved in the experiment and your perspective shifts from the timeline (before the experiment begins) to the episodic one (whenever you are awakened), relative to the current interview, the two possible outcomes T-first and T-second now are exclusive. However, this doesn't mean that their probabilities shift from 1/2 each (as they were in the timeline perspective) to 1/4 each (as they would if a second coin were tossed to choose only one of them to be actualized). Instead, your finding yourself in a T-awakening episode doesn't exclude the other one from the current timeline; it merely shifts it to the other concurrent episode in this timeline.

    The difference between the Thirder and Double-halfer reasoning can be illustrated this way:

    If a coin lands heads, you are allowed to pick one ball from an unlabeled "H-bag" containing one blue ball. If the coin lands tails, you are allowed to pick two balls, one at a time, from a "T-bag" containing two red balls. Therefore, there are three possible ball picking episodes: B, R1, and R2. We assume, as usual, that when the opportunity arises to pick a ball from a bag, you forget if it's your first or second opportunity.

    A Double-halfer would reason that since you were equally likely to have been presented with a T-bag and, in that case, you are equally likely to be experiencing R1 or R2, P(R1)=P(R2)=1/4.

    In contrast, a Thirder would point out that picking a red ball doesn't exclude the other one from the current timeline but rather guarantees it. The implication of this is that the additional opportunity to pick a second red ball from the T-bag does not reduce P(R1) relative to P(B) but rather increases P(R) = P(R1 or R2) by providing a second opportunity. This is especially apparent from the timeline perspective, where P(B)=P(R1)=1/2 regardless of how many more red balls there might be in the T-bag for you to pick consecutively.

    The equiprobability between P(R1) and P(B) doesn't change when we shift from the timeline perspective to the episodic perspective, because on each picking occasion, although you can't know if it's the first one, you do know that P(R|first) = P(B|first). In other words, if you were to ask the experimenter presenting you the bag whether this is your first pick, and receive a truthful positive answer, you would know that P(B) = P(R1) = 1/2. The next step in the argument is the straightforward inference from P(R|first) = P(B|first) to P(R1) = P(B). Given that, on all accounts, P(R1) = P(R2), and that all three outcomes are exclusive from the episodic perspective, it follows that they all have a probability of 1/3.
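    Here is a minimal sketch of that frequency claim, assuming the bag protocol as described (the tally function and names are illustrative only): it counts what fraction of all picking episodes, over many simulated coin flips, are B, R1, and R2.

    [code]
    import random

    def tally_picking_episodes(runs: int = 100_000) -> dict:
        """Tally picking episodes for the H-bag / T-bag protocol described above."""
        counts = {"B": 0, "R1": 0, "R2": 0}
        for _ in range(runs):
            if random.random() < 0.5:   # heads: a single pick from the H-bag (blue ball)
                counts["B"] += 1
            else:                        # tails: two consecutive picks from the T-bag (red balls)
                counts["R1"] += 1
                counts["R2"] += 1
        episodes = sum(counts.values())
        return {k: v / episodes for k, v in counts.items()}

    print(tally_picking_episodes())   # each of B, R1, R2 is roughly 1/3 of all picking episodes
    [/code]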

    Quoting Michael
    But I brought up something like this here:


    Indeed, I intended to address this separately but haven't yet gotten round to doing it. This will be my next order of business.
    Pierre-Normand July 04, 2023 at 18:24 #819968
    Quoting Michael
    These mean two different things:

    1. My credence favours tails awakenings
    2. There are more tails awakenings than heads awakenings

    I don’t think we can move forward if you insist that they mean the same thing.


    It's precisely because they mean different things that I've provided detailed arguments for deducing 1 from 2 (alongside other premises). However, the truth of 2 certainly is relevant to the deduction of 1. Nobody would be a Thirder in a scenario where coins landing tails would generate as many awakenings as coins landing heads.
    Michael July 04, 2023 at 18:35 #819970
    Quoting Pierre-Normand
    It's precisely because they mean different things that I've provided detailed arguments for deducing 1 from 2 (alongside other premises). However, the truth of 2 certainly is relevant to the deduction of 1. Nobody would be a Thirder in a scenario where coins landing tails would generate as many awakenings as coins landing heads.


    If they mean different things then 4 and 5 are neither definitions nor true by definition, and given that 1, 2, and 3 are true (and that "the degree to which I believe that" means the same thing in 4 and 5), one or both of 4 and 5 are false.

    The degree to which I believe that my current interview is my only interview is equal to the degree to which I believe that I have been assigned only one interview.
    JeffJo July 04, 2023 at 19:05 #819977
    Quoting Michael
    When I said that the only things that matter are:

    1. She has either one or two interviews determined by a fair coin toss and
    2. She doesn’t know if she’s already had one

    I was referring to her just waking up, not being told any further information.

    And the only point of mentioning new information was to show that the information you ignore has meaning. Not to solve the problem or alter the problem. But you knew that.

    Quoting Michael
    No prior probability is ruled out here when woken so your example isn't equivalent.

    And again, you keep using circular logic. You deny that events with non-zero prior probability are "ruled out" in your solution. So you claim that my solution, which does "rule out," must be wrong. This is a fallacy; your presumption that you are right is your only defense. You have never argued for why you think they aren't events.

    So, let's try this variation of what you most recently proposed. The only changes are to make the conditions you require, as in that quote above, easier to describe with both words and probabilities:

    1. SB is put to sleep on day 0, and a coin is flipped.
    2. The counting of days is facilitated by a tear-away calendar that is posted on the door to her room at midnight. It starts at Day 1, and as the top page is torn off at midnight every day thereafter, the day numbers are increased by 1.
    3. If the coin landed Heads, then an N-sided die (N>=2) is rolled and placed on a shelf by the door.
    4. If the coin landed Tails, then two N-sided dice are rolled and placed on the shelf. (If they both land on the same number, the roll is repeated until they are different.)
    5. Over the next N days, if the calendar number matches a die number, she is woken and asked for her credence.

    Whether Heads or Tails, she can wake once or twice over the next N days. The prior probability of a waking is the same on every day (your method didn't make this true). And she will never know if it is the first waking, the second waking, or the only waking.

    Do you agree that this does the same things your latest experiment does (if N=14) with the unimportant exception that the prior probability distribution over the N days varies in your version?
    Michael July 04, 2023 at 19:07 #819981
    Quoting JeffJo
    And again, you keep using circular logic. You deny that events with non-zero prior probability are "ruled out" in your solution. So you claim that my solution, which does "rule out," must be wrong. This is a fallacy; your presumption that you are right is your only defense. You have never argued for why you think they aren't events.


    I can't prove a negative. If there is some prior probability that is ruled out when woken then tell me what it is.

    If you can’t then I have every reason to accept that there isn’t one.
    Pierre-Normand July 04, 2023 at 23:29 #820103
    @Michael

    Consider Leonard Shelby's journey through the "Sleeping Beauty Zoo". In this zoo, each fork in the path presents two options - one path (H-path) leads to a hippo enclosure, while the other (T-path) leads to a tiger enclosure followed by a toucan enclosure. Each path ends with a new similar fork until the zoo exit is reached. Due to Leonard's condition of anterograde amnesia, he forgets his previous encounter (with an enclosure or fork) whenever he reaches a new enclosure. Despite his memory loss, Leonard knows that typically, due to the zoo's peculiar layout, he encounters each of the three animals with equal frequency, on average, during his visits.

    Now, let's look at a particular moment of Leonard's visit. As he walks, before reaching a new enclosure, he might reason this way: "Since each fork in the path gives an equal chance of leading to a T-path or an H-path, there is a 50% chance that the next enclosure I'll see will have a hippo." Thus, when he approaches an enclosure, he might conclude there is a 25% chance of it being a tiger enclosure, and a 25% chance of it being a toucan enclosure.

    Is this reasoning accurate? Not quite, because it neglects a crucial shift in perspective from "timeline" to "episodic".

    To clarify, let's imagine Leonard uses a series of coin tosses to predetermine his path at each fork and records the sequence in his notebook. On average, his predetermined path will lead him to an equal number of hippo, tiger, and toucan enclosures. So, whenever he approaches an enclosure during his visit (even if he exits the zoo after only one path segment), he can reasonably assume a 1/3 chance that it is a hippo, tiger, or toucan enclosure.

    One might ask: Isn't there a contradiction between Leonard's initial assumption of a 50% chance of being on a H-path and his subsequent conclusion of a 1/3 chance of seeing a hippo in the enclosure?

    The answer is no. Although Leonard travels an equal number of H-path and T-path segments, he encounters twice as many T-enclosures. Because of the way the T-enclosures map (two to one) to the T-path segments, Leonard encounters hippos and tigers with equal frequency after taking any fork (which makes P(hippo) = P(tiger)), and he encounters tigers and toucans with equal frequency while traveling any T-path (which makes P(tiger) = P(toucan)). Since all three possibilities are exclusive from the episodic perspective, he expects all three animals to appear with equal probability at any enclosure, even though T-paths exclude H-paths, and tiger-enclosures and toucan-enclosures occur concurrently.
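    A minimal sketch of Leonard's walk, assuming each fork is decided by a fair coin and a visit spans a fixed number of forks (the specific numbers chosen here are arbitrary), shows the per-encounter frequencies the argument appeals to:

    [code]
    import random

    def zoo_encounter_frequencies(visits: int = 50_000, forks_per_visit: int = 4) -> dict:
        """Count enclosure encounters over many simulated visits to the zoo."""
        counts = {"hippo": 0, "tiger": 0, "toucan": 0}
        for _ in range(visits):
            for _ in range(forks_per_visit):
                if random.random() < 0.5:   # H-path segment: one hippo enclosure
                    counts["hippo"] += 1
                else:                        # T-path segment: a tiger enclosure, then a toucan enclosure
                    counts["tiger"] += 1
                    counts["toucan"] += 1
        total = sum(counts.values())
        return {animal: n / total for animal, n in counts.items()}

    print(zoo_encounter_frequencies())   # each animal accounts for roughly 1/3 of all encounters
    [/code]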

    Keeping this in mind, let me now address your "flowchart" argument:

    Quoting Michael
    1. Sleeping Beauty is given amnesia
    2. She is asked her credence that a coin has been tossed
    3. A coin is tossed
    4. If the coin lands tails then:
    4A. She is given amnesia
    4B. She is asked her credence that a coin has been tossed

    Thirder reasoning is that because step 2 is twice as likely to occur as step 4B, I am twice as likely to be in step 2 as in step 4B.

    Halfer reasoning is that because step 2 is twice as likely to occur as step 4B, and because, if 4B will occur, I am equally likely to be in step 2 as in step 4B, I am three times as likely to be in step 2 as in step 4B.


    From the timeline perspective, the following scenarios are equiprobable:

    1. SB asked P(second-awakening), coin lands heads, exit
    2. SB asked P(second-awakening), coin lands tails, SB asked P(second-awakening), exit

    where P(second-awakening) is Sleeping Beauty's credence, on the occasion of an awakening, that it is her second awakening within the current experimental run.

    From the episodic perspective, Sleeping Beauty knows that conditionally on her present awakening being the first, it is equally probable that it is a H-awakening (and that the coin will land heads) or that it is a T-first-awakening (and that the coin will land tails). She also knows that in the event the coin will land (or has landed) tails, it is equiprobable that she is experiencing a T-first-awakening or a T-second awakening. Since the three possible outcomes are exclusive from her episodic perspective, their probabilities must sum up to 1 and since P(H-awakening) = P(T-first-awakening) and P(T-first-awakening) = P(T-second awakening), all three possible outcomes must have probability 1/3.

    The Halfer error lies in incorrectly treating the T-first-awakening and T-second-awakening as equally probable and mutually exclusive alternatives within the T-timeline. This leads to an improper division of the T-timeline's probability between these two events. In reality, the T-first-awakening and T-second-awakening are part of the same timeline and their probabilities should not be divided, but understood as part of the cumulative likelihood of being on the T-timeline. This is similar to the way the presence of as many tiger or toucan enclosures along Leonard's overall (or average) path as there are hippo enclosures increases the overall probability of his encountering a T-enclosure, on each occasion he approaches one, to 2/3 (i.e. P(T-enclosure) = P(tiger or toucan)/P(tiger or toucan or hippo) = 2/3).
    hypericin July 05, 2023 at 02:58 #820152
    Quoting Pierre-Normand
    whenever she awakens, the coin landed (or will land) tails two times out of three


    This is not true. There are three possible awakenings, Monday-Heads, Monday-Tails, Tuesday-Tails, and SB's job on awakening is to determine the probability that she is experiencing each of these. The coin has a 50% chance of landing heads, and if it does, the awakening will be on Monday 100% of the time. Therefore, P(Monday-Heads) = 50%. The coin has a 50% chance of landing tails, and if it does, the awakening will be on Monday 50% of the time, and Tuesday 50% of the time. Therefore, P(Monday-Tails) = P(Tuesday-Tails) = 25%. If this is true, and I don't see how it can be reasonably argued against, on each awakening the coin is equally likely to be heads and tails.
    Pierre-Normand July 05, 2023 at 03:33 #820155
    Quoting hypericin
    This is not true. There are three possible awakenings, Monday-Heads, Monday-Tails, Tuesday-Tails, and SB's job on awakening is to determine the probability that she is experiencing each of these. The coin has a 50% chance of landing heads, and if it does, the awakening will be on Monday 100% of the time. Therefore, P(Monday-Heads) = 50%. The coin has a 50% chance of landing tails, and if it does, the awakening will be on Monday 50% of the time, and Tuesday 50% of the time. Therefore, P(Monday-Tails) = P(Tuesday-Tails) = 25%. If this is true, and I don't see how it can be reasonably argued against, on each awakening the coin is equally likely to be heads and tails.


    Are you evaluating the probabilities of the three possible outcomes occurring from the point of view of an external observer or from Sleeping Beauty's own epistemic perspective whenever she awakens?

    From an external observer's perspective, each awakening (H-awakening, T-first-awakening, and T-second-awakening) has a 1/2 chance of occurring. Note that the sum of these probabilities exceeds 1 because the two T-awakenings aren't mutually exclusive but rather are concurrent within the same timeline.

    If you are evaluating the probabilities from Sleeping Beauty's own epistemic perspective when she awakens, then a careful analysis of the situation shows that the probability of each is 1/3.

    Consider first the two possible outcomes conditional on today being Monday. Since Sleeping Beauty always is awakened on Monday regardless of the coin toss result, P(Monday-Heads) = P(Monday-Tails). Consider next the two possible outcomes conditional on the coin having landed tails. Since in that case Sleeping Beauty is awakened once on Monday and once on Tuesday, P(Monday-Tails) = P(Tuesday-Tails), which is something that the Thirders, Halfers and Double-halfers all agree on. We therefore have that P(Monday-Heads) = P(Monday-Tails) = P(Tuesday-Tails). Lastly, since Sleeping Beauty isn't inquiring about the probabilities that any of those three outcomes will occur at least once during her current experimental run, but rather about the probability that her current awakening episode is the realization of one of those three outcomes, the three possibilities are exclusive and exhaustive, and their probabilities must therefore sum up to 1. They therefore all three are 1/3, and P(Tails) = P(Monday-Tails) + P(Tuesday-Tails) = 2/3.
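    For what it's worth, here is a minimal sketch, assuming the standard Monday/Tuesday protocol, that tallies how often each awakening type occurs among all awakenings over repeated runs (a frequency count, not by itself a verdict on what her credence should be):

    [code]
    import random

    def awakening_frequencies(runs: int = 100_000) -> dict:
        """Tally Monday-Heads, Monday-Tails and Tuesday-Tails awakenings over repeated runs."""
        tally = {"Monday-Heads": 0, "Monday-Tails": 0, "Tuesday-Tails": 0}
        for _ in range(runs):
            if random.random() < 0.5:    # heads: woken on Monday only
                tally["Monday-Heads"] += 1
            else:                        # tails: woken on Monday and on Tuesday
                tally["Monday-Tails"] += 1
                tally["Tuesday-Tails"] += 1
        awakenings = sum(tally.values())
        return {k: v / awakenings for k, v in tally.items()}

    print(awakening_frequencies())   # each type is roughly 1/3 of all awakenings; tails-awakenings ~2/3
    [/code]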

    This argument is illustrated in a more picturesque way in my variation: Leonard Shelby Visits the Sleeping Beauty Zoo.
    Michael July 05, 2023 at 07:31 #820187
    Quoting Pierre-Normand
    Now, let's look at a particular moment of Leonard's visit. As he walks, before reaching a new enclosure, he might reason this way: "Since each fork in the path gives an equal chance of leading to a T-path or an H-path, there is a 50% chance that the next enclosure I'll see will have a hippo." Thus, when he approaches an enclosure, he might conclude there is a 25% chance of it being a tiger enclosure, and a 25% chance of it being a toucan enclosure.

    Is this reasoning accurate?


    1. The next enclosure is the toucan enclosure iff I first turned right at the fork (P = [math]1\over2[/math]) and then passed the tiger enclosure.

    2. My credence that the next enclosure is the toucan enclosure is equal to the probability that the first event happened multiplied by the probability that the second (dependent) event happened.

    Having amnesia is no excuse to reject (2). Even with amnesia I know that (1) is true.

    If my credence that the next enclosure is the toucan enclosure is [math]1\over3[/math] then the probability that I passed the tiger enclosure, if I turned right at the fork, is [math]2\over3[/math], but then my credence that the next enclosure is the tiger enclosure is [math]1\over6[/math].

    If the probability that I passed the tiger enclosure, if I turned right at the fork, is [math]1\over2[/math], then my credence that the next enclosure is the toucan enclosure is [math]1\over4[/math] (as is my credence that the next enclosure is the tiger enclosure).

    Either way my credence that the next enclosure is the hippo enclosure is [math]1\over2[/math].
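    The arithmetic of these two cases can be written out as a short sketch (the function and parameter names are mine; p_passed stands for the assumed probability of having already passed the tiger enclosure given a right turn):

    [code]
    def enclosure_credences(p_passed: float, p_right: float = 0.5) -> dict:
        """Credences for the next enclosure, given a chance p_right of having turned right
        and a chance p_passed of having already passed the tiger enclosure on the right path."""
        return {
            "hippo": 1 - p_right,                 # turned left at the fork
            "tiger": p_right * (1 - p_passed),    # turned right, tiger still ahead
            "toucan": p_right * p_passed,         # turned right, tiger already passed
        }

    print(enclosure_credences(p_passed=2/3))   # hippo 1/2, tiger 1/6, toucan 1/3
    print(enclosure_credences(p_passed=1/2))   # hippo 1/2, tiger 1/4, toucan 1/4
    [/code]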

    Quoting Pierre-Normand
    From the episodic perspective, Sleeping Beauty knows that conditionally on her present awakening being the first, it is equally probable that it is a H-awakening (and that the coin will land heads) or that it is a T-first-awakening (and that the coin will land tails). She also knows that in the event the coin will land (or has landed) tails, it is equiprobable that she is experiencing a T-first-awakening or a T-second awakening. Since the three possible outcomes are exclusive from her episodic perspective, their probabilities must sum up to 1 and since P(H-awakening) = P(T-first-awakening) and P(T-first-awakening) = P(T-second awakening), all three possible outcomes must have probability 1/3.


    This appears to be repeating Elga's argument:

    P(T[sub]1[/sub]|T[sub]1[/sub] or T[sub]2[/sub]) = P(T[sub]2[/sub]|T[sub]1[/sub] or T[sub]2[/sub])
    ∴ P(T[sub]1[/sub]) = P(T[sub]2[/sub])

    P(H[sub]1[/sub]|H[sub]1[/sub] or T[sub]1[/sub]) = P(T[sub]1[/sub]|H[sub]1[/sub] or T[sub]1[/sub])
    ∴ P(H[sub]1[/sub]) = P(T[sub]1[/sub])

    ∴ P(H[sub]1[/sub]) = P(T[sub]1[/sub]) = P(T[sub]2[/sub])

    Those first two inferences need to be justified.
    JeffJo July 05, 2023 at 16:37 #820277
    Quoting Michael
    I can't prove a negative.

    Yet that is the basis of your argument. You even reiterate it here. And it is part of your circular argument, which you used this non sequitur to divert attention from:

    1. M's solution is right.
    2. J's solution gets a different answer, so it must be wrong.
    3. J's solution "rules out" an event.
    4. M's does not. In fact, it says that event doesn't exist (the "negative" you claim).
    5. That must be the error, since there must be an error (see #2).


    If there is some prior probability that is ruled out when woken then tell me what it is.

    I have. You ignore it. But this is a fallacious argument. Claiming I did something different does not prove that the way you handled the different thing is right and mine was wrong.

    If you can’t then I have every reason to accept that there isn’t one.

    Quite an ultimatum, from one who never answers questions and ignores answers he can't refute. Since you haven't proven why the event "Heads&Tuesday" doesn't exist - and in fact can't, by your "can't prove a negative" assertion - I have every reason to accept that it does exist.

    +++++

    Here is a version of your latest procedure. The only non-cosmetic change is that I made it easier to determine the prior probabilities. The "prior probabilit[ies] that [are] ruled out" are easily identified. I just need you to agree that it is equivalent to your procedure first. IF YOU DON'T, I HAVE EVERY REASON TO ACCEPT THAT IT IS.

    1. She is put to sleep on day 0.
    2. A coin is flipped.
    3. If the coin landed on Heads, then an N-sided die is rolled, where N>=2. She is woken on day D1 - that is, D1 days after day 0 - where D1 is the result of this roll, and asked her credence. Then she is put back to sleep.
    4. If the coin landed Tails, then two N-sided dice are rolled. If they land on the same number, repeat the roll until they are different. She is woken on day D1 and day D2, and asked her credence. On the first of these days, she is put back to sleep with amnesia.


    Notes on the changes; all but one are cosmetic only:
    • By numbering day 0, you allow numbering all days.
    • You left out the coin flip, but we all know where it has to go.
    • The singular of "dice" is "die."
    • Your tenses don't mesh with stating when the coin is flipped.
    • There is nothing special about 14. Any number works - it doesn't even have to be even - that allows two wakings.
    • Rolling two N-sided dice at the same time, rather than two N/2 dice sequentially, makes the distribution of wakings over the N days uniform rather than a complicated function. So it simplifies any calculations based on it, if needed.


    Whether the coin landed on Heads or Tails she can wake on any day between Day 1 and day N.

    Something indeed is ruled out when she wakes. You just don't recognize that the sample space you described fails to capture it.
    Michael July 05, 2023 at 16:44 #820279
    Quoting JeffJo
    I just need you to agree that it is equivalent to your procedure first.


    It's not. You say:

    "If the coin landed on Heads, then an N-sided die is rolled, where N>=2. She is woken on day D1 - that is, D1 days after day 0 - where D1 is the result of this roll, and asked her credence. Then she is put back to sleep."

    In my example she isn't put back to sleep. The experiment just ends. The same with her second tails interview. So we have no idea how many days the experiment will last. It could be anywhere between 1 and N days.

    Quoting JeffJo
    Something indeed is ruled out when she wakes.


    What is ruled out?
    Pierre-Normand July 05, 2023 at 18:12 #820304
    Quoting Michael
    The next enclosure is the toucon enclosure iff I first turned right at the fork (P = 1/2) and then passed the tiger enclosure.

    2. My credence that the next enclosure is the toucon enclosure is equal to the probability that the first event happened multiplied by the probability that the second (dependent) event happened.


    (I had assumed that the H-path was the left path at half of the forks, but this is inconsequential since Leonard always forgets which path he took.)

    Your reasoning is not entirely misguided. Let's consider a typical path Leonard might navigate through the zoo, guided by four pre-tossed coins:

    Tiger, Toucan -- Hippo -- Hippo -- Tiger, Toucan -- Exit

    On this path, Leonard will encounter six enclosures within four path segments. The frequencies of encounters with each type of enclosure are typically equal, regardless of the manner in which they are divided among different path segments.

    First, let's revisit the Thirder argument regarding any enclosure Leonard might approach:

    1. Conditionally on its being a first encounter on a path segment, P(Tiger) = P(Hippo)
    2. Conditionally on Leonard being on a T-path segment, P(Tiger) = P(Toucan)
    3. The three possible outcomes are exhaustive and mutually exclusive
    4. Therefore, P(Tiger) = P(Hippo) = P(Toucan) = 1/3

    This makes sense because, on a typical journey through the zoo, like the one illustrated above, Leonard can expect to encounter each type of enclosure with the same frequency, variance notwithstanding.

    The Double-halfer analysis, on the other hand, posits Leonard on either a H-path or a T-path segment with equal probability. The justification for this comes from assuming that Leonard considers his current path segment as equally likely to be a H-path or a T-path from the time he took the previous fork. This reasoning is relevant and accurate until Leonard approaches an enclosure and has the opportunity to infer which path segment it is located on. If Leonard were to ignore the type of enclosure and only guess the path he's on, confirming his guess only when he reaches the next fork (or zoo exit), regardless of the number of times he might make this guess along the path, then he would find that he was on a H-path half of the time, supporting the Halfer thesis. (This, however, wouldn't vindicate the Double-halfer thesis since encountering tigers does not exclude, but rather guarantees, his also encountering toucans on the same path, and vice versa.)

    However, if we acknowledge that each encounter with an enclosure is an occasion for Leonard to be located on a H-path, and the layout of the zoo (and path navigation process) ensures that these occasions occur 1/3 of the time, we realize that, from this new episodic perspective, the probability that Leonard is on a H-path when he approaches an enclosure isn't independent of the number of such encounters. This is because the likelihood of the encountered enclosure being a H-enclosure (and thus Leonard being on a H-path) isn't solely determined by the process that placed Leonard on this path (the fork and coin toss) but also by the proportion of occasions Leonard has to be on such a path.
    Michael July 05, 2023 at 19:22 #820323
    Reply to Pierre-Normand

    I’m only considering one fork as only that is comparable to the Sleeping Beauty problem. What’s true of multiple forks isn’t true of one fork, as evidenced by (1):

    1. The next enclosure is the toucan enclosure iff I first turned right at the fork (P = [math]1\over2[/math]) and then passed the tiger enclosure.

    This is only true if there is one fork.

    So what is wrong about my analysis of one fork?

    Quoting Pierre-Normand
    1. Conditionally on its being a first encounter on a path segment, P(Tiger) = P(Hippo)
    2. Conditionally on Leonard being on a T-path segment, P(Tiger) = P(Toucan)
    3. The three possible outcomes are exhaustive and mutually exclusive
    4. Therefore, P(Tiger) = P(Hippo) = P(Toucan) = 1/3


    As it stands your conclusion is a non sequitur. You need to justify this inference:

    P(Hippo|Hippo or Tiger) = P(Tiger|Hippo or Tiger)
    Therefore P(Hippo) = P(Tiger)
    Pierre-Normand July 06, 2023 at 04:19 #820456
    Quoting Michael
    You need to prove this inference:

    P(Hippo|Hippo or Tiger) = P(Tiger|Hippo or Tiger)
    Therefore P(Hippo) = P(Tiger)


    This inference follows if we consider what is excluded by the condition "Hippo or Tiger". The case where Leonard is seeing a second enclosure on his path (which always contains toucans) is excluded. Since this second encounter is guaranteed whenever Leonard sees tigers, adding this extra encounter doesn't affect the relative probabilities of P(Hippo) and P(Tiger). However, it does reduce the total probability of him facing either a hippo or a tiger enclosure, i.e., P(Hippo or Tiger).

    This reasoning becomes more intuitive if we consider Leonard's entire visit to the zoo and adopt a frequency approach. I understand that you dislike this method, but bear with me as it may illuminate a blind spot in your understanding. Leonard can reason that since he is traversing as many H-path segments as T-path segments on average, he is encountering as many tiger enclosures as he is hippo enclosures. This is because each type of path segment contains precisely one enclosure of each kind, respectively. The presence of toucan enclosures on T-path segments doesn't diminish the number of tiger enclosures in proportion to the hippo enclosures he encounters, but it does increase the total number of T-enclosures (or average number, if we consider only one single fork) relative to the number of hippo enclosures he encounters.

    Just like in the Sleeping Beauty case, each fork (or coin toss) can be seen as a random generator of T- and H-events, producing twice as many of the former. Creating more than one T-awakening when the coin lands tails (or more than one encounter with an enclosure when a T-path is taken) dilutes the probability of all the individual events (since they are being experienced one at a time) but increases the total probability of the T-events. Lastly, since the additional T-events being generated aren't generated to the exclusion of the first one, but rather in addition to it, they don't alter the relative probabilities of an H-awakening and a T-first-awakening (or of Hippo relative to Tiger).

    Quoting Michael
    I’m only considering one fork as only that is comparable to the Sleeping Beauty problem. What’s true of multiple forks isn’t true of one fork, as evidenced by (1);

    1. The next enclosure is the toucan enclosure iff I first turned right at the fork (P = 1/2) and then passed the tiger enclosure.

    This isn’t true if there are two forks.

    So what is wrong about my analysis of one fork?


    The same reasoning about probabilities and frequencies that applies to multiple forks (or repeated Sleeping Beauty experiments) also holds for a single fork (or a single Sleeping Beauty experiment).

    Consider a brief visit to the zoo where Leonard only takes the first fork, with the intention of taking the exit-shortcut at the next fork. In such cases, half of the time, he sees hippos, and the other half, he sees both tigers and toucans. Given this, Leonard can reason that since his brief zoo visits will put him in front of hippo, tiger, or toucan enclosures with equal probabilities (namely 1/2 each), the probability that the current enclosure he is seeing is a toucan enclosure is 1/3.
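    A minimal sketch of such brief, one-fork visits (names are illustrative), contrasting the per-visit frequencies with the per-encounter frequencies just described:

    [code]
    import random

    def brief_visits(visits: int = 100_000):
        """One-fork visits: compare how often each animal is seen per visit
        with each animal's share of all individual enclosure encounters."""
        seen_in_visit = {"hippo": 0, "tiger": 0, "toucan": 0}
        encounters = {"hippo": 0, "tiger": 0, "toucan": 0}
        for _ in range(visits):
            animals = ["hippo"] if random.random() < 0.5 else ["tiger", "toucan"]
            for animal in animals:
                seen_in_visit[animal] += 1
                encounters[animal] += 1
        total = sum(encounters.values())
        per_visit = {a: n / visits for a, n in seen_in_visit.items()}
        per_encounter = {a: n / total for a, n in encounters.items()}
        return per_visit, per_encounter

    per_visit, per_encounter = brief_visits()
    print(per_visit)       # each animal is seen on roughly half of the visits
    print(per_encounter)   # each animal is roughly 1/3 of all enclosure encounters
    [/code]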

    The Double-halfer reasoning errs by treating the Tiger and Toucan events as mutually exclusive—as if a second coin toss generates a second, probabilistically independent event—when in fact they both occur on the same timeline whenever either of them occurs.

    Your claim that "The next enclosure is the toucan enclosure iff I first turned right at the fork (P = 1/2) and then passed the tiger enclosure," is an assumption that can't be put forward without begging the question against the Thirder. You need to substantiate, rather than presuppose, that when you're nearing an enclosure, there's a 1/2 chance the path you're on is a T-path.

    We can, however, agree on the following: P(Hippo|H-path) = 1, and P(Toucan|T-path) = P(Tiger|T-path) = 1/2. The Thirder argument, though, posits that whenever Leonard faces an enclosure, P(H-path) = 1/3, and consequently,

    P(Hippo) = P(Hippo|H-path)P(H-path)+P(Hippo|T-path)P(T-path) = 1/3.

    Likewise,

    P(Toucan) = P(Toucan|T-path)P(T-path)+P(Toucan|H-path)P(H-path) = (1/2)(2/3) = 1/3.

    The justification for P(H-path) = P(Hippo) = 1/3 was provided above. The creation of additional T-encounters when a T-path is taken dilutes the probability of individual T-encounters, but raises the cumulative probability of T-encounters. This doesn't change the relative probabilities of Hippo and the first Tiger encounter, since the extra encounters are not produced at the exclusion of the first one, but are in addition to it. It's this crucial point that differentiates the Thirder and Double-Halfer perspectives.
    hypericin July 06, 2023 at 05:20 #820463
    Quoting Pierre-Normand
    Consider first the two possible outcomes conditional on today being Monday. Since Sleeping Beauty always is awakened on Monday regardless of the coin toss result, P(Monday-Heads) = P(Monday-Tails). Consider next the two possible outcomes conditional on the coin having landed tails. Since in that case Sleeping Beauty is awakened once on Monday and once on Tuesday, P(Monday-Tails) = P(Tuesday-Tails), which is something that the Thirders, Halfers and Double-halfers all agree on. We therefore have that P(Monday-Heads) = P(Monday-Tails) = P(Tuesday-Tails). Lastly, since Sleeping Beauty isn't inquiring about the probabilities that any of those three outcomes will occur at least once during her current experimental run, but rather about the probability that her current awakening episode is the realization of one of those three outcomes, the three possibilities are exclusive and exhaustive, and their probabilities must therefore sum up to 1. They therefore all three are 1/3, and P(Tails) = P(Monday-Tails) + P(Tuesday-Tails) = 2/3.


    This is a fallacy:

    If Monday, P(Monday-Heads) = P(Monday-Tails)
    If Tails, P(Monday-Tails) = P(Tuesday-Tails)
    Therefore, P(Monday-Heads) = P(Monday-Tails) = P(Tuesday-Tails)

    The conclusion doesn't follow, because the first two equalities depend on the conditionals being true.

    You can see this by observing that

    P(Monday-Heads) = 1/2
    P(Monday-Tails) = 1/4
    P(Tuesday-Tails) = 1/4

    also satisfies the two conditional statements, without satisfying the conclusion.
    Pierre-Normand July 06, 2023 at 05:39 #820469
    Quoting hypericin
    This is a fallacy:

    If Monday, P(Monday-Heads) = P(Monday-Tails)
    If Tails, P(Monday-Tails) = P(Tuesday-Tails)
    Therefore, P(Monday-Heads) = P(Monday-Tails) = P(Tuesday-Tails)

    The conclusion doesn't follow, because the first two equalities depend on the conditionals being true.

    You can see this by observing that

    P(Monday-Heads) = 1/2
    P(Monday-Tails) = 1/4
    P(Tuesday-Tails) = 1/4

    also satisfies the two conditional statements, without satisfying the conclusion.


    The reason why the Double-halfer splits the probability P(Tails) = 1/2 between P(Monday-Tails) and P(Tuesday-Tails) is because they treat them as exclusive outcomes as if a second coin would be tossed to determine if Sleeping Beauty would be awakened on Monday or Tuesday, but not both.

    Elsewhere, I made my argument more explicit. Let me rehearse it here by transposing it to the zoo variation:

    I must first justify the inferences from:

    1. P(Tiger|First) = P(Hippo|First)
    2. P(Tiger|T-path) = P(Toucan|T-path)

    to

    1b. P(Tiger) = P(Hippo)
    2b. P(Tiger) = P(Toucan)

    The first inference is justified by the fact that placing more enclosures with different animals in them on either path doesn't alter the relative probabilities of the Tiger or Hippo outcomes since I will be seeing those new animals (such as toucans) in addition to seeing the tigers and hippos and not to the exclusion of them.

    The second inference is justified by the fact that generating alternative timelines where I don't see either tigers or toucans doesn't alter their relative probabilities (but rather lowers both in equal proportion). The second inference is actually common ground between Thirders, Halfers and Double-halfers, which is presumably why Michael only challenged the first.

    See also my most recent reply to Michael regarding this example.
    Michael July 06, 2023 at 10:02 #820489
    Quoting Pierre-Normand
    Your claim that "The next enclosure is the toucan enclosure iff I first turned right at the fork (P = 1/2) and then passed the tiger enclosure," is an assumption that can't be put forward without begging the question against the Thirder. You need to substantiate, rather than presuppose, that when you're nearing an enclosure, there's a 1/2 chance the path you're on is a T-path.


    My wording may have been imprecise.

    Both of these are true (note the tense):

    1. To reach the toucan enclosure I must first turn right at the fork and then pass the tiger enclosure
    2. The probability that I will turn right at the fork is [math]1\over2[/math]

    When I wake and consider my credence that the next enclosure is the toucan enclosure I consider what must have happened (or not happened) for the next enclosure to be the toucan enclosure. I know that I must have first turned right at the fork (A) and then passed the tiger enclosure (B).

    P(A, B) = P(A) × P(B|A)

    My claim is that the probability of having turned right at the fork is equal to the probability of turning right at the fork, i.e. [math]1\over2[/math].

    Your claim is that the probability of having turned right at the fork is equal to the fraction of all encountered enclosures which are right-side enclosures, i.e. [math]2\over3[/math].

    I don't think your claim makes any sense. The probability of the first event having happened isn't determined by what could happen after that first event happens. The probability of the first event having happened is determined only by the probability of that first event happening.[sup]1[/sup]

    Your conclusion only applies if I'm dropped into an enclosure at random, perhaps via parachute. This is your so-called "episodic perspective" (where it’s not the case that I turned right at the fork; it’s only the case that I’m on the right-side path). But given that this isn't what happens to me, I shouldn't reason this way.

    [sup]1[/sup] [sub]Where no new relevant information is learned.[/sub]
    JeffJo July 06, 2023 at 12:21 #820502
    Quoting Michael
    It's not. You say:

    "If the coin landed on Heads, then an N-sided die is rolled, where N>=2. She is woken on day D1 - that is, D1 days after day 0 - where D1 is the result of this roll, and asked her credence. Then she is put back to sleep."

    It's irrelevant (it refers to occurrences after she has answered). But I did intend to take that one out.

    I did adjust the second one to match what you said, so your next point is not only just as irrelevant, it is incorrect when you say I "put her back to sleep" after the second interview:
    In my example she isn't put back to sleep. The experiment just ends. The same with her second tails interview. So we have no idea how many days the experiment will last. It could be anywhere between 1 and N days.


    But it is also incorrect where you claim you identified when she is sent home (you didn't). Or the implication that the experiment's length makes any difference whatsoever. But since these are the only differences you could find, and they make no difference, I can easily correct them to match the version you now say you want. Which, by your standards, is different than what you said before:
    1. She is put to sleep on day 0.
    2. A coin is flipped.
    3. If the coin landed on Heads, then an N-sided die is rolled, where N>=2. She is woken on day D1 - that is, D1 days after day 0 - where D1 is the result of this roll, and asked her credence. Then she is sent home.
    4. If the coin landed Tails, then two N-sided dice are rolled. If they land on the same number, repeat the roll until they are different. She is woken on day D1 and day D2, and asked her credence. Then, on the first of these days, she is put back to sleep with amnesia. On the second, she is sent home.


    This is the exact procedure you asked for, except: (A) It lasts between 1 and N>=2 days, not between 1 and 14 days. And (B) the selection of the two random "TAILS" days in that period is uniform, instead of weighted toward the earlier days.

    On each day in the range 1 to N, the prior probability that she will be woken on that day AND the coin landed on Heads is 1/N. And the prior probability that she will be woken on that day AND the coin landed on Tails is 2/N.

    We can proceed two different ways from here. The first is easier, but relies on the statement "the probabilities are the same regardless of which day it is, so we can treat the day D as a known constant":

    • Pr(Heads&Day=D) = (1/2)*(1/N) = 1/(2N).
        • This is the prior probability from above.
        • All events of the form (Heads&Day=d), where d is not equal to D, are "ruled out" because it is day D.
    • Pr(Tails&Day=D) = (1/2)*(2/N) = 1/N.
        • All events of the form (Tails&Day=d), where d is not equal to D, are "ruled out" because it is day D.
        • But I will reiterate that "ruled out" is not a definition that is ever used in probability theory. It is one you made up. What is done is what follows next.
    • The conditional probability of event A, given that it is day D, is found by this definition:
        • Pr(A|C) = Pr(A&C)/Pr(C)
        • Pr(Heads|Day=D) = Pr(Heads&Day=D)/[Pr(Heads&Day=D)+Pr(Tails&Day=D)]
        • Pr(Heads|Day=D) = (1/(2N))/(1/(2N)+1/N) = 1/3.
        • In other words, what is done is to only use the prior probabilities of events that are consistent with the condition. In this case, with a specific day.

    The more formal method is to use that calculation separately for each day d in the range [1,N], add them all up, and divide the result by N. It still gets 1/3.
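    A minimal sketch of this calculation, assuming the procedure as stated above (one roll of an N-sided die on Heads, two distinct rolls on Tails), estimating Pr(Heads | woken on day d) for each day:

    [code]
    import random
    from collections import defaultdict

    def heads_given_day(runs: int = 200_000, n_sides: int = 14) -> dict:
        """Estimate Pr(Heads | woken on day d) for the N-sided-die procedure."""
        heads_wakings = defaultdict(int)
        total_wakings = defaultdict(int)
        for _ in range(runs):
            heads = random.random() < 0.5
            if heads:
                days = [random.randint(1, n_sides)]                 # one roll on Heads
            else:
                days = random.sample(range(1, n_sides + 1), 2)      # two distinct rolls on Tails
            for d in days:
                total_wakings[d] += 1
                if heads:
                    heads_wakings[d] += 1
        return {d: round(heads_wakings[d] / total_wakings[d], 3) for d in sorted(total_wakings)}

    print(heads_given_day())   # roughly 1/3 for every day d
    [/code]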

    There is one unorthodox part in all this. The random variable D in the prior probabilities is different from the random variable d used by SB when she is awake. This is because D can take on two values in the overall experiment, but only one to SB when she is awake. So (D=1) and (D=2) are not independent events in the prior, while (d=1) and (d=2) are independent to SB when she is awake.

    Finally, I'll point out that if N=2, this is the two-coin version you have ignored.
    Michael July 06, 2023 at 13:07 #820506
    Quoting JeffJo
    On each day in the range 1 to N, the prior probability that she will be woken on that day AND the coin landed on Heads is 1/N.


    Pr(Heads & Day = D) = 1/2 * 1/N.

    That aside, using your above reasoning, in the normal problem the prior probability that she will be woken on Tuesday and the coin landed on Heads is 0, and the prior probability that she will be woken on Monday and the coin landed on Heads is 1/2.

    So when she "rules out" Pr(Heads & Day = Tuesday) she's "ruling out" some Pr = 0, not some Pr = 1/4. "Ruling out" some Pr = 0 after waking does nothing because it was already "ruled out".
    JeffJo July 06, 2023 at 20:26 #820578
    Quoting Michael
    Pr(Heads & Day = D) = 1/2 * 1/N.

    Fixed; I was in a hurry, and that didn't affect the answer. All the probabilities I gave were off by that factor of 1/2.

    That aside, using your above reasoning, in the normal problem the prior probability that she will be woken on Tuesday and the coin landed on Heads is 0

    No, the prior probability that she will be woken on Tuesday, and the coin landed Heads, is 1/4. The prior probability that she is awake and the coin landed Heads is 0. "Will be woken" and "is awake" are not the same events.

    So when she "rules out" Pr(Heads & Day = Tuesday)

    And for about the tenth time, "rules out" is not a valid expression. I only use it since you can't stop using it, and then only when I really mean a valid one. The conditional probability of event A, given event C, is defined to be:

    • Pr(A|C) = Pr(A&C)/Pr(C).


    I gave the correct realization of this definition in my answer. The fact that it differs from yours, in "ruling out" certain probabilities, can only prove that one is wrong. Not which. The fact that mine is valid, and yours is not, proves which.

    It is valid, because it looks at what could have happened, not what could not. The prior probability of reaching an interview on any given day, after Tails is flipped, is twice that of Heads. Always. On any day in your experiment.
    Michael July 06, 2023 at 20:30 #820580
    Quoting JeffJo
    No, the prior probability that she will be woken on Tuesday, and the coin landed Heads, is 1/4.


    The rules of the experiment say that she won’t be woken on Tuesday if the coin lands heads. That means that the prior probability that she will be woken on Tuesday and the coin lands heads is 0.

    P(Heads, woken on Tuesday) = P(Heads) × P(woken on Tuesday|Heads) = 1/2 × 0 = 0.

    Just look at how you calculated the probability of waking up in my experiment. It’s the same reasoning.
    Michael July 07, 2023 at 13:33 #820726
    If a die rolls a 6 then Sleeping Beauty is woken six times otherwise she is woken once. When woken what is her credence that the die rolled a 6?

    Halfers have to say [math]1\over6[/math] and thirders have to say [math]6\over11[/math].
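    The two numbers come from two different reference classes (runs versus awakenings); a minimal sketch, assuming a fair die:

    [code]
    import random

    def die_variant_frequencies(runs: int = 200_000) -> None:
        """Six awakenings if the die rolls a 6, one awakening otherwise."""
        six_runs = 0
        six_awakenings = 0
        total_awakenings = 0
        for _ in range(runs):
            rolled_six = random.randint(1, 6) == 6
            wakings = 6 if rolled_six else 1
            total_awakenings += wakings
            if rolled_six:
                six_runs += 1
                six_awakenings += wakings
        print("fraction of runs with a 6:      ", six_runs / runs)                    # ~1/6
        print("fraction of awakenings with a 6:", six_awakenings / total_awakenings)  # ~6/11

    die_variant_frequencies()
    [/code]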

    Before she is first put to sleep she is to bet on whether or not the die will roll a 6 – paid out at the end of the experiment – and each time she is woken she is allowed to change her bet.

    If she bets according to her credence then both halfers and thirders have to say that before she is first put to sleep she will bet that the die will not roll a 6.

    Thirders then have to say that when woken she will change her bet and bet that the die did roll a 6.

    Are thirders willing to commit to their position and change their bet?
    Pierre-Normand July 10, 2023 at 18:19 #821537
    Quoting Michael
    Both of these are true (note the tense):

    1. To reach the toucon enclosure I must first turn right at the fork and then pass the tiger enclosure
    2. The probability that I will turn right at the fork is 1/2

    When I wake and consider my credence that the next enclosure is the toucon enclosure I consider what must have happened (or not happened) for the next enclosure to be the toucon enclosure. I know that I must have first turned right at the fork (A) and then passed the tiger enclosure (B).

    P(A, B) = P(A) × P(B|A)

    My claim is that the probability of having turned right at the fork is equal to the probability of turning right at the fork, i.e. 1/2.

    Your claim is that the probability of having turned right at the fork is equal to the fraction of all encountered enclosures which are right-side enclosures, i.e. 2/3.

    I don't think your claim makes any sense. The probability of the first event having happened isn't determined by what could happen after that first event happens. The probability of the first event having happened is determined only by the probability of that first event happening.


    It actually often makes sense that the probability of an event having happened is determined by what has been found to happen (as a consequence of it) after that first event happened. Take the simple example of a coin toss: The initial probability that a coin would land heads was 1/2. But suppose we have tossed the coin, and we now see it landed heads. Our updated probability that it landed heads is now 1. In this case, our current situation—our observing the fact of the coin having landed heads—has completely determined the probability of the previous event, the coin landing heads. This may seem trivial, but it is a similar principle at play in our journey through the zoo, and it is also key to the Sleeping Beauty problem. The probability of one finding oneself in a specific situation is not only determined by the initial probabilities of different outcomes (or paths taken) but also by the subsequent encounters or observations that are stipulated to occur as a result of those outcomes. Importantly, it is precisely when the proportion of these subsequent observations (or encounters) is dependent on the earlier outcome that those observations warrant a Bayesian updating of our credence.

    Let's look back at our zoo journey. Right now, as we approach an enclosure, what would have had to happen for us to be approaching a hippo, tiger, or toucan enclosure? For a hippo enclosure, we must have taken a new fork after passing either a hippo or a toucan enclosure. For a toucan enclosure, we must have walked past a tiger enclosure. Every fork is equally likely to lead to a hippo or a tiger enclosure directly, so we can outline the possible scenarios as follows:

    Hippo -> Hippo (1/2) or Tiger (1/2)
    Toucan -> Hippo (1/2) or Tiger (1/2)
    Tiger -> Toucan (1)

    Now, let's consider (for the sake of argument) that we are equally likely to have just passed any of the three types of enclosures. This leads to the following scenarios with their probabilities:

    1/3 of the time we passed a Hippo -> we're now approaching a Hippo (1/6) or Tiger (1/6)
    1/3 of the time we passed a Toucan -> we're now approaching a Hippo (1/6) or Tiger (1/6)
    1/3 of the time we passed a Tiger -> we're now approaching a Toucan (1/3)

    This shows that, even if we start with equal chances of having just passed any kind of enclosure, and even if every new fork is equally likely to lead directly to a H-path or T-path, the equal probabilities of approaching each kind of enclosure at each new step are maintained. This refutes your argument that we should have a 1/2 chance of approaching a hippo enclosure based on the equal chances of taking a H-path or T-path at the previous fork. It is precisely because every new fork was equally likely to lead directly to a H-path or T-path that, whenever we are approaching an enclosure, the probability that it is a hippo enclosure is 1/3.
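    The table above can be restated as a one-step transition and checked directly (a sketch; the dictionary layout is mine): starting from equal chances of having just passed each kind of enclosure, the distribution over the next enclosure remains equal.

    [code]
    # Row: enclosure just passed; entries: probabilities for the enclosure approached next.
    transition = {
        "hippo":  {"hippo": 0.5, "tiger": 0.5, "toucan": 0.0},
        "toucan": {"hippo": 0.5, "tiger": 0.5, "toucan": 0.0},
        "tiger":  {"hippo": 0.0, "tiger": 0.0, "toucan": 1.0},
    }

    def step(dist: dict) -> dict:
        """Apply one step of the walk to a distribution over the enclosure just passed."""
        nxt = {"hippo": 0.0, "tiger": 0.0, "toucan": 0.0}
        for prev, p_prev in dist.items():
            for enclosure, p in transition[prev].items():
                nxt[enclosure] += p_prev * p
        return nxt

    print(step({"hippo": 1/3, "tiger": 1/3, "toucan": 1/3}))   # 1/3, 1/3, 1/3 again
    [/code]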

    Let us now consider a closer analogy to the Sleeping Beauty problem where there only is one fork (and only one coin toss).

    It's Christmas morning and young Leonard Shelby is seated beside a gleaming Christmas tree, eyes wide with excitement. Two of his favorite aunts, Jane and Sue, have come bearing gifts. Each year, it's a tradition that one aunt gifts a singular, unique and expensive present, while the other aunt bestows upon Leonard two less expensive, yet equally fun presents. Who brings the unique gift is decided by a flip of a fair coin - heads for Jane and tails for Sue.

    This year, all three gifts are packaged identically, and their order of presentation to Leonard is randomized. Due to the condition Leonard has had since his childhood (unlike what happens in the Memento movie), he forgets about the gifts as soon as he unwraps them. This makes every unwrapping experience as exciting as the first one. A small note accompanies each gift, indicating which aunt has brought it. Given the symmetries of this setup, before unwrapping any gift, Leonard's initial, or prior, probabilities are quite straightforward: there's a 1/2 chance the gift is from Jane, a 1/2 chance the coin landed heads, and a 1/3 chance that the gift is the unique one.

    Now, let's consider a scenario where Leonard reads the attached card and learns that the gift is from Jane. What does this tell him about the coin toss? Well, if the coin landed heads, Jane would be the one to give the unique gift. But if it was tails, Jane would have two gifts to give. Knowing this, Leonard updates his belief about the coin toss. Now that he knows the gift is from Jane, the probability of the coin having landed heads, P(H|Jane), is reduced to 1/3.

    This key conclusion is supported by Bayes' theorem, allowing Leonard to update his beliefs in light of the new evidence. Bayes' theorem here shows that the probability of heads, given that the gift is from Jane, is equal to the probability of the gift being from Jane, given heads (1/3), times the prior probability of heads (1/2), divided by the prior probability of the gift being from Jane (1/2). This gives us a revised probability of 1/3 for heads, given that the gift is from Jane.

    In short: P(H|Jane) = P(Jane|H)P(H)/P(Jane) = (1/3)(1/2)/(1/2) = 1/3.

    Okay, so now imagine a similar scenario, but instead of gift-giving aunts and Christmas, it involves a scientific experiment Leonard is participating in. In this experiment, if a coin lands heads, Leonard is interviewed once in a room in the West Wing (let's call it Jane's Wing) and twice in a room in the East Wing (Sue's Wing). If the coin lands tails, the number of interviews in each wing is reversed. Similar to the Christmas scenario, Leonard is interviewed three times exactly and his priors before any interview are: P(West Wing) = 1/2, P(H) = 1/2, and P(Unique Interview) = 1/3. (In more detail, his priors are: P(West-HU) = P(East-H1) = P(East-H2) = P(East-TU) = P(West-T1) = P(West-T2) = 1/6, where interviews ("awakenings") rather than gifts are labeled as unique ("U"), first ("1") or second ("2").)

    But now, let's say that Leonard finds out he's being interviewed in the West Wing. This new information allows Leonard to update his belief about the coin toss, similar to what happened in the Christmas scenario. Using Bayes' theorem again, Leonard finds that P(H|West Wing) = 1/3. In other words, given that he's in the West Wing (where the structure of the experiment is identical to the original Sleeping Beauty problem), Leonard's credence in the coin having landed heads is 1/3.

    And there you have it! We've demonstrated that when considering the structure of the experiment and the new information that Leonard possesses, the probabilities he assigns to the coin having landed heads or tails should be updated. The key insight here is that the information Leonard gains in the revised scenario—namely that he's being interviewed in the West Wing—doesn't actually provide him with any new facts that he wasn't already aware of in the original Sleeping Beauty problem. Instead, learning he's in the West Wing simply makes explicit to Leonard the fact that he is in a situation which perfectly mirrors the original Sleeping Beauty setup. This underlines the fact that in the original scenario, Sleeping Beauty already has all the information necessary to update her beliefs in line with the Thirder position.
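    Both updates follow the same Bayes pattern; here is a minimal numerical check (the helper function is mine):

    [code]
    def posterior_heads(p_obs_given_heads: float, p_obs_given_tails: float, p_heads: float = 0.5) -> float:
        """Bayes' theorem: P(Heads | observation)."""
        p_obs = p_obs_given_heads * p_heads + p_obs_given_tails * (1 - p_heads)
        return p_obs_given_heads * p_heads / p_obs

    # Christmas gifts: given heads, 1 of the 3 gifts is from Jane; given tails, 2 of the 3 are.
    print(posterior_heads(1/3, 2/3))   # P(Heads | gift is from Jane) = 1/3

    # Interviews: given heads, 1 of the 3 interviews is in the West Wing; given tails, 2 of the 3 are.
    print(posterior_heads(1/3, 2/3))   # P(Heads | interviewed in the West Wing) = 1/3
    [/code]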
    Michael July 10, 2023 at 23:51 #821617
    Reply to Pierre-Normand I appreciate the reply but I’ve run out of motivation and am going to end my involvement with this. I wouldn’t change my bet, so I’m a committed halfer. I don’t think any number of analogies are going to convince me to change my bet in that specific case.
    Pierre-Normand July 11, 2023 at 00:44 #821629
    Quoting Michael
    If a die rolls a 6 then Sleeping Beauty is woken six times otherwise she is woken once. When woken what is her credence that the die rolled a 6?

    Halfers have to say 1/6

    and thirders have to say 6/11.

    Before she is first put to sleep she is to bet on whether or not the die will roll a 6 – paid out at the end of the experiment – and each time she is woken she is allowed to change her bet.

    If she bets according to her credence then both halfers and thirders have to say that before she is first put to sleep she will bet that the die will not roll a 6.

    Thirders then have to say that when woken she will change her bet and bet that the die did roll a 6.

    Are thirders willing to commit to their position and change their bet?


    Thirders wouldn't change their bet in this scenario. Although it's true that in this setup a bettor aware of P("6") = 6/11 (i.e., the proportion of "6"-awakenings to all awakenings equals 6/11) might be tempted to bet on the outcome "6", they're also aware that a successful bet will be rewarded only once at the end of the experimental run, no matter how many times they've made that assessment.

    Here's where the nuance comes in: over the course of six experimental runs, they would, on average, place a winning bet 6 times out of 11. This is if we imagine each individual bet as standing alone. However, due to the rules of this experiment, the six winning bets will only result in a single even-money payout. The lack of profitability in this scenario doesn't fault the credence in the outcome. Instead, it reveals a unique aspect of this setup where multiple winning bets are not individually rewarded. Therefore, the "bet" one ought to make doesn't straightforwardly track one's credence in the outcome of the die roll, but rather, it must take into account the rules of payout in this specific experimental setup.
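    A minimal sketch of the distinction being drawn here, assuming the die protocol from the post above: the per-awakening accuracy of guessing "6" and the per-run payout of a single end-of-run, even-money bet come apart.

    [code]
    import random

    def betting_sketch(runs: int = 200_000) -> None:
        """Compare per-awakening accuracy of guessing '6' with per-run payout of a single final bet."""
        correct_guesses = 0
        total_awakenings = 0
        runs_won_final_bet_six = 0
        runs_won_final_bet_not_six = 0
        for _ in range(runs):
            rolled_six = random.randint(1, 6) == 6
            wakings = 6 if rolled_six else 1
            total_awakenings += wakings
            if rolled_six:
                correct_guesses += wakings   # guessing '6' is right at every waking of this run
                runs_won_final_bet_six += 1
            else:
                runs_won_final_bet_not_six += 1
        print("'6' guesses correct per awakening:   ", correct_guesses / total_awakenings)   # ~6/11
        print("runs won if the final bet is '6':    ", runs_won_final_bet_six / runs)         # ~1/6
        print("runs won if the final bet is 'not 6':", runs_won_final_bet_not_six / runs)     # ~5/6

    betting_sketch()
    [/code]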
    Count Timothy von Icarus July 12, 2023 at 15:44 #821989


    I had an idea for a rather dark version of this:

    Beauty gets cloned with all her memories on a given flip, such that each Monday and Tuesday has a 50% chance of resulting in a new clone being created.

    On Wednesday, all the Beauties wake up in an identical room with identical memories. The real Beauty is given breakfast and allowed to leave her room and enjoy the magical castle until Sunday, when the experiment will be rerun.

    The clone Beauties are killed by the first person who comes to the door.

    The real Beauty retains her memories of her free weekends.

    Now, let's say the experiment has been running for a long time, three years. A lot of other Beauties have likely been cloned and killed by this point. But, if you're the real Beauty, and you consistently think you are the real Beauty when you wake up, then you have indeed been right about that fact for three years straight. So, when you wake up next time, how worried should you be that you'll be killed?

    Based on (an admittedly simple) Bayesian take, Beauty should be increasingly confident that she is the real Beauty with each passing week. The whole idea is that repeated trials should move the dial in our probability estimates. And yet, this doesn't seem right, no?

    Pierre-Normand July 12, 2023 at 16:58 #822010
    Quoting Count Timothy von Icarus
    Beauty gets cloned with all her memories on a given flip, such that each Monday and Tuesday has a 50% chance of resulting in a new clone being created.


    Are you suggesting that one new clone is always created, but the coin flip determines on which day? Furthermore, are Sleeping Beauty and her clones only awakened on Wednesdays? Regardless, I believe that your experiment could completely ignore the element of randomness introduced by the coin flip, and Sleeping Beauty's uncertainty on Wednesday would still exist, solely because she doesn't know if she's the original Beauty or a clone.

    Based on (an admittedly simple) Bayesian take, Beauty should be increasingly confident that she is the real Beauty with each passing week. The whole idea is that repeated trials should move the dial in our probability estimates. And yet, this doesn't seem right, no?


It doesn't seem to me that Bayesian principles would allow Sleeping Beauty to grow increasingly confident with each passing week that she isn't a clone. That's because the design of the experiment ensures that she survives if she is the original Beauty. She can recall (or inherit, as a clone would, the "fake" memory) that every previous week, upon awakening, she had (from her viewpoint) a 1/2 chance of being a clone. However, her past survivals weren't probabilistically independent events, given that the survival of the original is always ensured by the fact that she is the original. It remains true, however, that until she leaves the room and survives yet again, her belief that she is currently the original is reduced to 1/2.

    On edit: After asking GPT-4 (always the sycophant) to check my English, I discussed another issue with it.
    [hide]

    PN: Rethinking about it, I've imagined another form of faulty reasoning that might influence Sleeping Beauty in thinking that her continued survival is an increasingly unlikely event. Can you imagine what it is?

    GPT-4: It might be related to the well-known but fallacious gambler's fallacy, which involves the mistaken belief that, if something happens more frequently than normal during a certain period, it will happen less frequently in the future, or that if something happens less frequently than normal during some period, it will happen more frequently in the future. [snip]

    Is that the kind of reasoning you were considering?

    PN: I was rather imagining that Sleeping Beauty wasn't thinking clearly about her own personal identity. She may have thought (or Count Timothy von Icarus on her behalf) that on each previous week there was a 1/2 chance that "she" (Sleeping Beauty) would survive. Thereby the idea of multiplying independent odds. But whenever she awakens on Wednesday, Sleeping Beauty's uncertainty can't coherently be expressed as "what are the odds that I, Sleeping Beauty, am the original" but rather as "what are the odds that 'I', whoever 'I' am, am the original Sleeping Beauty or a clone."

    GPT-4: I see, that makes a lot of sense! In this case, Sleeping Beauty's potential misunderstanding arises from a confusion about personal identity and what it means for "her" to survive. You're right that the probability she should be considering is not "what are the odds that 'Sleeping Beauty' survives" but "what are the odds that 'I', this conscious observer that is experiencing this moment, am the original Sleeping Beauty versus a clone."

    In other words, each time she awakens, there is always a 'new' Sleeping Beauty (from the perspective of that conscious observer) asking this question, and for each of them, the odds are always 1/2 of being the original. There is no compounding effect from week to week because each instance of questioning is, in a sense, a unique and independent event from the perspective of the one experiencing it. Each conscious observer is asking about their own identity in that moment, not the abstract identity of 'Sleeping Beauty' across all time.

    This is a nuanced but important distinction, and getting it right is crucial for correctly understanding the Sleeping Beauty problem and its variations.

    PN: "This is a nuanced but important distinction, and getting it right is crucial for correctly understanding the Sleeping Beauty problem and its variations."

    I don't think it's crucial for understanding the original problem, where issues of personal identity (in the guise of questions about reidentification of the self over time) don't arise, but it may be relevant to some of the extensions regarding simulations, the multiverse, the anthropic principle, etc.

    GPT-4: You're absolutely right. In the original Sleeping Beauty problem, questions of personal identity indeed don't play a significant role, as the problem focuses on updating beliefs based on self-locating uncertainty.

    However, as you pointed out, these considerations become quite important when dealing with variations of the problem that involve cloning, simulations, or multiverse scenarios, where the 'identity' of the observer might be more difficult to determine or may split into multiple instances.[/hide]
    JeffJo October 04, 2025 at 14:07 #1016323
    Sorry to resurrect. But I recently thought of a way to explain exactly how the halfers are misinterpreting the problem. It is based on how Marilyn vos Savant tried to make the answer to the Monty Hall Problem more intuitive by using more doors. I call it "Camp Sleeping Beauty," and it uses more days, more random choices, and more things that can happen.

    SB arrives for camp orientation on Sunday Night and is informed of these details:
    • Every day during the week (Monday thru Saturday), campers will take part in an activity that is determined by a six-sided die that will secretly be rolled after they are dismissed on Sunday. The activities are Archery, Bowling, Canoeing, Dodge Ball, Fishing, and an Extra Activity that is unnamed, but not one of those other five.
    • The activities are selected using a 6x6 table that is randomly filled out. This can occur either after they are dismissed, or during orientation. (This shouldn't matter, but I suspect that some will try to argue that it does so I leave the option open.)
    • This table includes a column for each day, and a row for each possible roll of the die.
    • At the end of each day, campers will be shown the table and asked to assign a probability to each day, and to each die roll, based on what activity they participated in that day.
    • After providing these probabilities, campers will be put to sleep with an amnesia drug that makes them forget everything that happened that day.


Here is a sample of the table. I did make sure that each activity is represented at least once by placing them in order along the diagonal, again for those who think it should make a difference (it can't).

    Mon Tue Wed Thur Fri Sat
    1 A D C F E B
    2 F B B C C F
    3 A B C D F D
    4 F E B D B C
    5 C D F C E E
    6 E E F C C F

So, for example, if the die roll was a 6, the campers would do the Extra Activity on Monday and Tuesday, go Fishing on Wednesday and Saturday, and go Canoeing on Thursday and Friday.

It is my claim that this is a trivial conditional probability problem. Say the campers went fishing. An "F" appears eight times in this table. Since it appears once in each row except the "2" and "6" rows, there is a 1/8 probability for each die roll except 2 and 6, and a 1/4 probability for each of 2 and 6. Since "F" never appears in the Tuesday column, it can't be Tuesday. But there is a 1/8 probability for each of Thursday and Friday, and a 2/8 (i.e., 1/4) probability for each of Monday, Wednesday, and Saturday.

But by the halfer solution for the SB problem, each die roll has a 1/6 probability. Even a 5, if the campers went Bowling, although "B" never appears in the "5" row. Or will halfers want to use conditional probability only halfway, and claim Pr(5|B)=Pr(6|B)=0 and Pr(1|B)=Pr(2|B)=Pr(3|B)=Pr(4|B)=1/4, because "B" appears in four rows (this is applying conditional probability!), while ignoring that it appears once in two of those rows and twice in the other two (this is not applying conditional probability!)?

Finally, it cannot matter what "E" means in the table. All that matters is how many times, and where, the day's actual activity appears. Even if "E" means "don't wake the campers up," each activity that is demonstrated not to have happened is eliminated the same way.

The classic SB problem is Camp Sleeping Beauty with a 2x2 table. Three cells are populated with "Wake and Interview," and one with "Sleep." The answer for the "Heads" row is the number of times "Wake and Interview" appears in that row, divided by the number of times it appears in the table. That is 1/3.
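For anyone who wants to check the arithmetic, here is a minimal sketch (in Python) that computes these conditional probabilities directly from the sample table above; the table contents are exactly as given in the post, and the function name is mine.

# Sample schedule from the post: rows are die rolls 1..6, columns are Mon..Sat.
table = [
    list("ADCFEB"),  # 1
    list("FBBCCF"),  # 2
    list("ABCDFD"),  # 3
    list("FEBDBC"),  # 4
    list("CDFCEE"),  # 5
    list("EEFCCF"),  # 6
]
days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat"]

def conditional_probs(activity):
    # Probability of each die roll and each day, given the observed activity.
    total = sum(row.count(activity) for row in table)
    by_roll = {roll + 1: row.count(activity) / total for roll, row in enumerate(table)}
    by_day = {day: sum(row[i] == activity for row in table) / total
              for i, day in enumerate(days)}
    return by_roll, by_day

by_roll, by_day = conditional_probs("F")  # the campers went Fishing
print(by_roll)  # 1/8 for rolls 1, 3, 4, 5 and 1/4 for rolls 2 and 6
print(by_day)   # 1/4 for Mon, Wed, Sat; 0 for Tue; 1/8 for Thu and Fri
print(conditional_probs("B")[0])  # Bowling: 1/6, 1/3, 1/6, 1/3, 0, 0 for rolls 1..6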
    Michael October 04, 2025 at 16:53 #1016344
    Quoting Pierre-Normand
    Therefore, the "bet" one ought to make doesn't straightforwardly track one's credence in the outcome of the die roll, but rather, it must take into account the rules of payout in this specific experimental setup.


    If each outcome has the same reward then it is rational to bet on the most probable outcome.

    Therefore, if her credence that the die landed on a 6 is [math]6\over11[/math] then she will change her bet. Therefore, if she doesn't change her bet then her credence that the die landed on a 6 isn't [math]6\over11[/math].
    DifferentiatingEgg October 04, 2025 at 17:11 #1016346
    They ask her one question after each time she awakens, however: What is the probability that the coin shows heads?


    Seems like a trick question.

    The probability remains the same every time they flip the coin after she wakes.

But if she wrote the tally down, then she'll see heads written in the tally 100% of the time. But it's the coin toss that's all that matters.
    ProtagoranSocratist October 04, 2025 at 20:14 #1016371
So if the last coin flip doesn't affect the outcome, doesn't that mean it's always 50/50? If I understand correctly, getting heads the first time means it's over, which balances getting tails twice in a row, and the last one is 50/50.

It is a confusing issue. Do there exist methods for keeping Sleeping Beauty from remembering the other days in the experiment?
    Pierre-Normand October 05, 2025 at 02:28 #1016414
    Quoting Michael
    If each outcome has the same reward then it is rational to bet on the most probable outcome.

Therefore, if her credence that the die landed on a 6 is [math]6\over11[/math] then she will change her bet. Therefore, if she doesn't change her bet then her credence that the die landed on a 6 isn't [math]6\over11[/math].


She doesn't change her bet because although her credence changes, the specific payoff structure doesn't warrant her doing so. Before the experiment begins, she's given an opportunity to bet on the outcome of the die roll, and this single bet will be paid off at the end of the experiment. As the experiment proceeds, when she is first awakened (as well as during any other awakening episode) she is being offered an opportunity to change her bet. Although her credence that the die landed on a 6 is updated to 6/11, she has no incentive to change her bet. That's because she knows that the payout will occur only once, at the end of the current experimental run, regardless of how many awakenings will have occurred during the whole run. So, her choosing to bet that the die will not have been found to have landed on a 6 at the end of the current experimental run doesn't reflect her (updated) credence that it has landed on a 6 on the occasion of her current awakening episode.
    Pierre-Normand October 05, 2025 at 02:40 #1016417
    Quoting JeffJo
    Sorry to resurrect.


    It's not a resurrection. It's just a reawakening :wink:
    Yes, yours is a nice variation that highlights the motivation for the thirder reasoning.
    Michael October 05, 2025 at 08:38 #1016439
    Reply to Pierre-Normand

    You have it backwards. It's not that her credence changes and she bets against it when she can only place a single bet; it's that her credence doesn't change and she bets against it when she can place multiple bets.

    Her reasoning after waking up when she can only place a single bet is:

    A1. If I keep my bet and the die didn't land on a 6 then I will win £100
    A2. If I change my bet and the die did land on a 6 then I will win £100
    A3. The probability that the die landed on a 6 is [math]1\over6[/math]
    A4. Therefore, the expected return if I keep my bet is £83.33
    A5. Therefore, the expected return if I change my bet is £16.67
A6. Therefore, the expected return if I keep my bet is [math]5\over6[/math] of the combined expected return of the two options
    A7. Therefore, I will keep my bet

    And her reasoning after waking up when she can place multiple bets is:

    B1. If I always bet that the die didn't land on a 6 and it didn't then I will win £100
    B2. If I always bet that the die did land on a 6 and it did then I will win £600
    B3. The probability that the die landed on a 6 is [math]1\over6[/math]
    B4. Therefore, the expected return if I always bet that the die didn't land on a 6 is £83.33
    B5. Therefore, the expected return if I always bet that the die did land on a 6 is £100
B6. Therefore, the expected return if I always bet that the die did land on a 6 is [math]6\over11[/math] of the combined expected return of the two policies
    B7. Therefore, I will bet that the die did land on a 6

    Whereas your argument appears to be that her reasoning after waking up when we can only place a single bet is:

    C1. If I keep my bet and the die didn't land on a 6 then I will win £100
    C2. If I change my bet and the die did land on a 6 then I will win £100
    C3. The probability that the die landed on a 6 is [math]6\over11[/math]
    C4. Therefore, the expected return if I keep my bet is £83.33
    C5. Therefore, the expected return if I change my bet is £16.67
C6. Therefore, the expected return if I keep my bet is [math]5\over6[/math] of the combined expected return of the two options
    C7. Therefore, I will keep my bet

    Which makes no sense at all. If she truly believes C3 then she would have calculated different expected returns and changed her bet, just as you or I would if we came to learn that the die is loaded in favour of landing on a 6. But waking up doesn't function like learning that the die is loaded in favour of landing on a 6, and so her credence in the outcome doesn't change. Her credence is always that the probability that the die landed on a 6 is [math]1\over6[/math], consistent with common sense and explaining why she bets the way she does in both betting scenarios.

    Thirder reasoning appears to conflate B6 and C3, which is both a mistake and a contradiction given B3.
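For what it's worth, here is a minimal sketch of the expected-return arithmetic behind these three lines of reasoning (in Python; the £100 and £600 figures are the ones used in the lists above, and the function names are mine):

def single_bet_returns(p_six):
    # One bet, one £100 payout at the end of the run.
    keep = (1 - p_six) * 100    # bet stays on "not a 6"
    change = p_six * 100        # bet moved to "6"
    return keep, change

def repeated_bet_returns(p_six):
    # One £100 payout per winning awakening: one awakening if not a 6, six if a 6.
    always_not_six = (1 - p_six) * 100
    always_six = p_six * 600
    return always_not_six, always_six

print(single_bet_returns(1 / 6))    # ~(83.33, 16.67): keep the bet (the A-list)
print(repeated_bet_returns(1 / 6))  # ~(83.33, 100.0): bet on the 6 (the B-list)
print(single_bet_returns(6 / 11))   # ~(45.45, 54.55): a credence of 6/11 would favour
                                    # changing the bet, which is the point about the C-list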
    Pierre-Normand October 05, 2025 at 17:30 #1016555
    @Michael

I am happy to grant you all of this (except for the last comment regarding credence updating), not as a resolution of the paradox, however, but rather as a sensible explanation of the reasoning that motivates the halfer response to the SB problem. I am not myself either a halfer or a thirder. I am rather arguing that halfer and thirder (or "sixther", in our current example) advocates present incompatible but equally valid and complementary interpretations of the epistemic situation Sleeping Beauty finds herself in, and of her credence in the "outcome". This is possible because the outcome (i.e. the coin having landed heads or tails) is incompletely specified in the statement of the problem. This outcome purports to represent an event that SB has incomplete information about, but the precise way in which this event is meant to be individuated is unspecified, which leaves room for two equally plausible interpretations.

    So, when SB awakens, she could be wondering what the chances are that the experimental run her present awakening is a part of is a 6-run, regardless of how many times she is being awakened in this run. That is your favorite interpretation. And the answer is 1/6 precisely on the basis of the reasonings that you present in your previous post. But she can also wonder what the chances are that her present awakening episode is a 6-awakening regardless of the manner in which such events are stringed into continuous experimental runs. In this case, your analysis of the thirder reasoning in your previous post is inaccurate (and an elucidation of the sort recently proposed by @JeffJo is more apposite).

The reason why SB can take a thirder rather than a halfer stance regarding her current awakening episode is that she may care about the long-term average frequency of such events (6-awakenings) rather than the average frequency of such experimental runs (6-runs). The reason why she doesn't change her bet is, as I've explained, that although in the long run she finds herself in a 6-awakening on 6/11 of the occasions where she is given an opportunity to change her bet, a series of six winning 6-awakenings earns her the £100 only once, at the end of the run, whereas each of the comparatively rarer winning "not 6" awakenings belongs to its own run and so earns the full £100 on its own.

    Of course, given this specific payout structure (that is not being provided as part of the statement of the original SB problem), which has been conceived precisely to highlight a rationale for the halfer reasoning, the halfer stance appears more intuitive and the thirder stance appears more convoluted. But you only have to change the payout structure, such that even-money bets are paid on the occasion of each awakening episode rather than once after each run, to reverse the situation and make the halfer stance appear unnecessarily convoluted.

During the last round of SB discussions that had taken place two years ago, I had devised a few variations (hostage taking and escape opportunities, zoo visit scenario, Christmas gift scenario, etc.) that were meant to highlight how pragmatic considerations and specific payout structures can motivate either a halfer or a thirder stance as the most natural interpretation of the problem. The original SB problem just strips away the required interpretative or pragmatic context in such a way as to leave SB's "credence" regarding the "event" (are two tails-awakenings within a single run two events or two parts of the same event?) indeterminate and open to interpretation.
    Michael October 05, 2025 at 18:12 #1016576
    Quoting Pierre-Normand
    The reason why SB can take a thirder rather than a halfer stance regarding her current awakening episode is because she may care about the long-term average frequency of such events (6-awakenings)


    She isn't being asked "what is the long-term average frequency of being woken up when the die did land on a 6?" Her answer to that question is the same both before being put to sleep and after being woken up, and so there wouldn't be a problem to solve.

    The problem only exists because there is the counter-intuitive suggestion that her credence in the outcome of a die roll changes, comparable to being told that the die is loaded, after being woken up despite prima facie not being provided with any new information.

    The actual question she is being asked is "what is the probability that the die did land on a 6?" which is the same as being asked for the value of [math]n[/math] below:

    1. If I correctly bet that the die didn't land on a 6 then I will win £[math]a[/math]
    2. If I correctly bet that the die did land on a 6 then I will win £[math]b[/math]
    3. The probability that the die did land on a 6 is [math]n[/math]
    4. Therefore, the expected return if I bet that the die didn't land on a 6 is £[math]x[/math]
    5. Therefore, the expected return if I bet that the die did land on a 6 is £[math]y[/math]

    There is only one correct value for [math]n[/math] and that value is the value that gives the correct values for [math]x[/math] and [math]y[/math] – which is [math]1\over6[/math] both before being put to sleep and after being woken up.

    Whereas if she were told that the die is loaded then the values of [math]n[/math], [math]x[/math], and [math]y[/math] all change and with it her actual credence in the outcome of the die roll.
    Pierre-Normand October 05, 2025 at 20:06 #1016599
    Quoting Michael
    She isn't being asked "what is the long-term average frequency of being woken up when the die did land on a 6?"


    That's right. If this had been the question there would be no apparent paradox and the answer would unambiguously be 6/11. But notice that she isn't being asked "what is the long-term average frequency of experiencing a series of six successive awakenings when the die did land on a 6?" either. If this had been the question, then the answer would be unambiguously 1/6. This is your favored interpretation.

Typical formulations of the SB problem are ambiguous, however. For instance, she is asked, on the occasion of her next awakening, what her credence is that the die landed on a six (or that the coin landed tails). What exactly this "occasion" is is left unstated. Ostensibly, it is an event whereby SB encounters a specific die in a determinate albeit unknown state. It's a reference to an event. However, is this "event" one among a set of indistinguishable awakening episodes, or is it one among a set of indistinguishable experimental runs?

    You seem to think that the mere reference to the state of a specific (fair) die, and incomplete knowledge of the process whereby that die ended up in a specific state, is sufficient for establishing her credence. But the question isn't being asked about a credence derived from the perspective of someone running the experiment, but rather from SB's own epistemic situation when she is being awakened (and, as stated, this is ambiguous).

Remember the variation I had suggested whereby someone knocks at a random door in a village. Some of the doors belong to dwellings that have two doors, and some belong to dwellings that have only one door. We also assume that, for each dwelling in this village, whether it was built with one door or two was settled by a coin toss. On the occasion where you knock at a door, the probability that it is a door of a two-door dwelling is 2/3, since two thirds of the doors in this village belong to two-door dwellings. Yet the probability that this dwelling would end up with two doors rather than one was 1/2, since it was the result of a coin toss. What accounts for the visitor's different credence is their specific epistemic situation in relation to the dwelling that they come to face on that occasion, and not just the event that led to this dwelling being constructed with either one or two doors. Although the specific dwelling that they now face was equally likely to end up being a two-door dwelling or a single-door dwelling, their credence that it is a two-door dwelling is 2/3, owing to their encountering such dwellings twice as often. (And similarly for Sleeping Beauty more often encountering dice that have landed on a six, or coins that landed tails, under the thirder interpretations of the problem.)
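For concreteness, a minimal simulation sketch of the village (in Python, assuming a fair coin decides each dwelling's number of doors and that the visitor picks a door uniformly at random):

import random

# A fair coin decides, for each dwelling, whether it has one door or two.
dwellings = [random.choice([1, 2]) for _ in range(100_000)]

# Every door in the village, tagged with the size of the dwelling it belongs to.
doors = [n_doors for n_doors in dwellings for _ in range(n_doors)]

# A visitor who picks a door uniformly at random faces a two-door dwelling ~2/3 of the
# time, even though each dwelling was equally likely to be built with one door or two.
picks = [random.choice(doors) for _ in range(100_000)]
print(sum(p == 2 for p in picks) / len(picks))          # ~0.667
print(sum(d == 2 for d in dwellings) / len(dwellings))  # ~0.5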
    Michael October 05, 2025 at 21:33 #1016612
    Quoting Pierre-Normand
    This is your favored interpretation.


    My "favoured" interpretation is the literal interpretation; she is being asked about the probability that a die rolled a six.

She isn't being asked about the long-term average frequency of being woken up when the die did land on a 6 and she isn't being asked about the long-term average frequency of experiencing a series of six successive awakenings when the die did land on a 6. Either of these two questions gives the same answer when asked before being put to sleep as when asked after being woken up, and so there would be no problem to solve.

The problem only exists when the question being answered before being put to sleep is the same question being answered after being woken up, and where the answer (allegedly) changes despite (apparently) no new information.

    If the Thirder's answer before being put to sleep is [math]1\over6[/math] and if their answer after being woken up is [math]6\over11[/math] then either they are not answering the same question or one of their answers is wrong. And it is obvious in context that the correct answer to the question being asked before being put to sleep is [math]1\over6[/math].
    Pierre-Normand October 05, 2025 at 22:00 #1016625
    Quoting Michael
    My "favoured" interpretation is the literal interpretation; she is being asked about the probability that a die rolled a six.


In most formulations of the problem, she is not being asked "What is the probability that the die landed (past tense) on a six?" from some sort of disengaged perspective. She is always asked a question that makes specific reference to her peculiar epistemic situation, such as: "what her degree of belief that 'the outcome of the coin toss is Heads' ought to be when she is first awakened". If you simply ignore the peculiarities of her epistemic situation, then, of course, the paradox evaporates.

The problem only exists when the question being answered before being put to sleep is the same question being answered after being woken up, and where the answer (allegedly) changes despite (apparently) no new information.


Yes, that's the root of the paradox. Since she acquired no new information about the specific coin that was tossed, how could her credence regarding the same event change? Clue: owing to the change in her epistemic situation, she is not referencing the "same" event in the same way.

If the Thirder's answer before being put to sleep is 1/6 and if their answer after being woken up is 6/11 then either they are not answering the same question or one of their answers is wrong.


...or, thirdly, their epistemic relation to the target outcome changed. This is the alternative that you seem to have a persistent blind spot for. You haven't addressed the village/doors example. When a house is built, the number of doors giving access to it is set at random to be one or two according to the toss of a fair coin. When a traveller in that village picks a door at random, the house that this door belongs to was equally likely (when it was constructed) to be a two-door dwelling or a one-door dwelling.

    However, owing to the fact that the traveller must establish their credence on the occasion of encountering one among a set of indistinguishable doors, and 2/3rds of such doors belong to two-door dwellings, their credence that this house that they now are facing is a two-door dwelling is 2/3. This is true in spite of the fact that the traveller can also reason: "this house that I am now facing was equally likely when it was built to become a two-door or a one-door dwelling". What makes those two statements compatible is the fact that owing to the traveller's peculiar epistemic situation, their present encounter is twice as likely to be an encounter with a two-door dwelling.

    When the traveller came to face a particular house, they didn't acquire any new information regarding this particular house. That's because they don't know what house it is that they are now facing. But they do know that they now are twice as likely to be facing a two-door dwelling. The question is no longer about a specific house that they never encountered before but rather about a specific encounter with a house that has already been built.
    Pierre-Normand October 05, 2025 at 22:34 #1016641
    Quoting Michael


    Thirders then claim that:

    P(6|Monday)=6/11

    P(¬6|Monday)=5/11


    Unless my memory is faulty, the variation we had discussed (two years ago) was one where Sleeping Beauty was awakened only once, on Monday, unless the die lands on 6, in which case she is being awakened six times from Monday through Saturday. In that case, thirders would claim that

    P(6|Monday)=1/6 (Since one sixth of Monday-awakenings are Six-awakenings)

    P(¬6|Monday)=5/6 (Since five sixths of Monday-awakenings are Non-six-awakenings)

    Right?
    Michael October 05, 2025 at 22:42 #1016644
    Reply to Pierre-Normand

    Sorry, I deleted that post because it's late and I'm tired and I may have messed up the specific numbers. The general gist is what I said before. Your argument is that her reasoning after being woken up is:

    A1. If I keep my bet and the die didn't land on a 6 then I will win £100
    A2. If I change my bet and the die did land on a 6 then I will win £100
    A3. My credence that the die landed on a 6 is [math]6\over11[/math]
    A4. Therefore, the expected return if I keep my bet is £83.33
    A5. Therefore, the expected return if I change my bet is £16.67

    But A3, A4, and A5 are inconsistent. If A3 really was true then she would calculate different values for A4 and A5, concluding that it is profitable to change her bet. But she doesn't do this.

    You can't have it both ways. Either she genuinely believes it to be more likely that the die landed on a 6, and so she changes her bet, or she continues to believe it to be more likely that it didn't, and so she keeps her bet.
    Michael October 05, 2025 at 22:56 #1016648
    Quoting Pierre-Normand
    However, owing to the fact that the traveller must establish their credence on the occasion of encountering one among a set of indistinguishable doors, and 2/3rds of such doors belong to two-door dwellings, their credence that this house that they now are facing is a two-door dwelling is 2/3.


    That doesn't follow. It depends on the manner in which the door is chosen. Compare with a red bag containing 100 balls and a blue bag containing 50 balls. You "encounter" a ball. What is the probability that it came from the red bag? Is it [math]2\over3[/math] because [math]2\over3[/math] of the balls come from the red bag? Not if one "encounters" a ball by putting one's hand in a bag at random, as the probability that one picks the red bag is [math]1\over2[/math].

    So you need to first specify the mechanism by which one has "encountered" a door, and this mechanism must be comparable to the Sleeping Beauty scenario for it to be an apt analogy.
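A minimal sketch of the two "encounter" mechanisms being contrasted here (in Python; the 100-ball red bag and 50-ball blue bag are as described above, and the helper names are mine):

import random

BAGS = {"red": 100, "blue": 50}

def encounter_ball_uniformly():
    # Every individual ball is equally likely to be the one encountered.
    balls = [colour for colour, n in BAGS.items() for _ in range(n)]
    return random.choice(balls)

def encounter_bag_then_ball():
    # A bag is picked at random first, then a ball is drawn from it.
    return random.choice(list(BAGS))

trials = 100_000
print(sum(encounter_ball_uniformly() == "red" for _ in range(trials)) / trials)  # ~2/3
print(sum(encounter_bag_then_ball() == "red" for _ in range(trials)) / trials)   # ~1/2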
    Pierre-Normand October 05, 2025 at 23:09 #1016652
    Quoting Michael
    Sorry, I deleted that post because it's late and I'm tired and I may have messed up the specific numbers. The general gist is what I said before. Your argument is that her reasoning after being woken up is:

    A1. If I keep my bet and the die didn't land on a 6 then I will win £100
    A2. If I change my bet and the die did land on a 6 then I will win £100
    A3. My credence that the die landed on a 6 is 6/11
    A4. Therefore, the expected return if I keep my bet is £83.33
    A5. Therefore, the expected return if I change my bet is £16.67

    But A3, A4, and A5 are inconsistent. If A3 really was true then she would calculate different values for A4 and A5, concluding that it is profitable to change her bet. But she doesn't do this.


A thirder will not agree with A4 or A5. If SB is allowed to change her bet when she awakens, she must do so consistently as a matter of policy, since she can't distinguish between different occasions of awakening (i.e. day of the week). She knows that a policy of changing her bet lowers her expected return, since there is only one payout per experimental run. Although systematically betting on a six would result in her being right on six out of eleven of the occasions where she is given the opportunity to do so, in accordance with her credence, she is paid £100 only once, at the end of the experimental run, when she does so (and the die landed 6); and this policy also makes her forfeit the full prize on the five occasions out of eleven where the die didn't land on six. All this shows is that the lopsided payout structure makes it irrational for her to bet on the most likely outcome.
    Michael October 05, 2025 at 23:21 #1016655
    Quoting Pierre-Normand
    A thirder will not agree with A4 or A5.


    Correct. Her reasoning would be:

    A1. If I keep my bet and the die didn't land on a 6 then I will win £100
    A2. If I change my bet and the die did land on a 6 then I will win £100
    A3. My credence that the die landed on a 6 is [math]6\over11[/math]
    A4. Therefore, the expected return if I keep my bet is £45.45
    A5. Therefore, the expected return if I change my bet is £54.55

    And yet you say she doesn't change her bet even though she has calculated that changing her bet is more profitable? There's something amiss with your reasoning.

    Either she does change her bet or her credence that the die landed on a 6 continues to be [math]1\over6[/math].

    Quoting Pierre-Normand
    All this shows is that the lopsided payout structure makes it irrational for her to bet on the most likely outcome.


    Again, you have it backwards. The most likely outcome is always that the die didn't land on a 6, but when she is allowed to place multiple bets it is irrational to bet on the most likely outcome.

    If I can place one bet on a single outcome with [math]5 \over 6[/math] odds or 6 bets on a single outcome with [math]1 \over 6[/math] odds, the latter has the highest expected return even though it has the lowest odds. I don't even have to be put to sleep and woken up to do this. I can just say before the experiment starts that I choose to place 6 bets that the die will land on a 6 instead of 1 bet that it won't.
    Pierre-Normand October 05, 2025 at 23:45 #1016659
    Quoting Michael
    So you need to first specify the mechanism by which one has "encountered" a door, and this mechanism must be comparable to the Sleeping Beauty scenario for it to be an apt analogy.


The doors are encountered randomly. I agree that the situation isn't perfectly analogous to the SB problem, since SB doesn't "choose" randomly among sets of already established awakenings. She simply finds herself awakened on one particular occasion. But the purpose of the thought experiment was more modest: to show that one's credence in an event one is involved in doesn't generally depend merely on the manner in which such events are produced, but also on the way one relates to events of that kind, that is, on the way one encounters them.

[url=https://thephilosophyforum.com/discussion/comment/820103]My earlier zoo example[/url] mirrored the SB scenario much more closely (since the visitor likewise is amnesiac and merely finds themselves approaching a new enclosure) while making the same points. In that scenario, the zoo visitor had a 1/3 credence of next encountering a toucan, tiger, or hippo enclosure, regardless of the fact that the previous fork in the path that they randomly took had a 1/2 chance of leading them onto a path segment that only has a hippo enclosure on it.
    Pierre-Normand October 05, 2025 at 23:50 #1016661
    Quoting Michael
    I don't even have to be put to sleep and woken up to do this. I can just say before the experiment starts that I choose to place 6 bets that the die will land on a 6 instead of 1 bet that it won't.


I wonder why you are so insistent on this arbitrary payout structure. Why not make an even-money payout on each occasion where she is being awakened and offered the opportunity to bet on the die-roll outcome as it is already determined right now? Would not her expected value exactly mirror—and be governed only by—her credence regarding the hidden die having landed six right now? A six is the most likely outcome, so I'm betting on it. No word games. Immediately maximized expected profit (and guaranteed long-term profit as well).
    Michael October 06, 2025 at 07:53 #1016705
    Reply to Pierre-Normand

    It's not an arbitrary payout structure.

    A £100 reward is paid out at 6:00pm for any correct bet on the outcome of a die roll. Sleeping Beauty and Prince Charming each bet that their die will not land on a 6. They are both free to change their bet at any time before 6:00pm, e.g. if something happens to affect their credence in the outcome, and can do so as many times as they like. Neither of them has a watch.

    Sleeping Beauty is told that if her die landed on a 6 then she will be put to sleep and woken up at six arbitrary points before 6:00pm, otherwise she will be put to sleep and woken up at one arbitrary point before 6:00pm.

    Prince Charming is told before 6:00pm that his die is loaded and that the probability that it landed on a 6 is [math]6\over11[/math].

    It doesn’t make any sense to argue that Sleeping Beauty (after being put to sleep and woken up) and Prince Charming (after being told that his die is loaded) come to share the same credence in the outcome of their die roll but that only he changes his bet. If she truly shares his credence then she would also change her bet.

    Quoting Pierre-Normand
    A six is the most likely outcome, so I'm betting on it.


    A six is the least likely outcome, but has the highest expected return, and so she bets on it. Her reasoning both before being put to sleep and after being woken up is:

    P1. If I always bet that the die didn't land on a 6 and it didn't then I will win £100 at the end of the experiment (1 × £100 bet)
    P2. If I always bet that the die did land on a 6 and it did then I will win £600 at the end of the experiment (6 × £100 bets)
    P3. The probability that the die did land on a 6 is [math]1\over6[/math]
    C1. Therefore, the expected return if I always bet that the die didn't land on a 6 is £83.33
    C2. Therefore, the expected return if I always bet that the die did land on a 6 is £100
C3. Therefore, the expected return if I always bet that the die did land on a 6 is [math]6\over11[/math] of the combined expected return of the two policies
    C4. Therefore, I will always bet that the die did land on a 6

    Her credence remains committed to P3, else she’d calculate very different expected returns after being put to sleep and woken up.
    Pierre-Normand October 06, 2025 at 16:08 #1016767
    Quoting Michael
    Her credence remains committed to P3, else she’d calculate very different expected returns.


P3—"The probability that the die did land on a 6 is 1/6"—is an ambiguous statement since, although it makes reference to the die, it fails to sufficiently specify SB's epistemic situation in relation to the die, which is a consideration that seldom arises explicitly outside of the peculiar context of the Sleeping Beauty problem.

    When asked about her credence, SB could reason: "I am currently in a situation (awakening episode) such that 6 times out of 11, when I find myself in such a situation, the die landed on a 6. If I could place an even money bet now, and get fully paid on that bet, it would therefore be rational for me to bet that the die landed on a 6, in accordance with my higher credence in this specific outcome."

She could equally validly reason: "I am currently in a situation (experimental run) such that 1 time out of 6, when I find myself in such situations, the die has landed on a 6. If I could place an even money bet now and not change my bet in subsequent awakening episodes, and get paid at the end of the current experimental run, it would therefore be rational for me to bet that the die didn't land on a 6, in accordance with my higher credence in this specific outcome (i.e. not-six)."

Those two reasonings concern the same die but express credences in two differently individuated kinds of events/outcomes. Which of those two sorts of credence (and which duration of the "event" she is now involved in) SB picks as an apt explication of the ambiguous phrase "the probability that the die did land on a 6" can be guided by pragmatic considerations. In this case, the relevant consideration is the specific payout structure and what kinds of events/outcomes this payout structure was designed to track. In a pair of examples I had designed early in this discussion, the relevant pragmatic considerations were either the need for SB to set up an appointment with her aunt (to get a lift at the end of the experimental run), or to choose a tool (plank or torch) for escaping the experimental facility during the current awakening episode.

Given the ambiguous statement of the original SB problem, the forced choice between the Halfer and Thirder interpretations of SB's credence is a false dichotomy. Your stance leads you to propound Halfer interpretations/elaborations of the problem, which are valid, and to dismiss Thirder interpretations as misconstruals of your Halfer stance. But they're not misconstruals. They're alternative and equally valid interpretations. Thirders often make the same mistake, believing that their interpretation gets at the fundamental truth regarding SB's credence in the (ill-specified) "outcome" or "current state of the die".
    Michael October 06, 2025 at 17:01 #1016771
    Quoting Pierre-Normand
    Those two reasonings concern the same dice but two different statements of credence in two different kinds of events/outcomes.


This makes no sense. There is only one kind of event: being woken up after a die roll. Her credence in the outcome of that die roll cannot be and is not determined by any betting rules. Maybe she's not allowed to place a bet at all.

    After waking up, either she continues to believe that the probability that the die landed on a 6 is 1/6, as Halfers say, or she now believes that it is 6/11, as Thirders say.

    Only then, if allowed, can she use her credence to calculate the expected returns of placing or changing a bet, accounting for the particular betting rules. And as I believe I showed above, only a credence of 1/6 provides a consistent and sensible approach to both betting scenarios.
    Pierre-Normand October 06, 2025 at 18:56 #1016778
    Quoting Michael
    This makes no sense. There is only one kind of event; being woken up after a die roll. Her credence in the outcome of that die roll cannot be and is not determined by any betting rules. Maybe she's not allowed to place a bet at all.


    I agree that her credence in the outcome (however this outcome is characterized) isn't determined by the betting rules. The betting rules, though, can make one rather than another characterization of the outcome more natural. It's not true that there is only one kind of event. The relevant event is protracted. Sleeping Beauty could focus on her current awakening as the event where she either is facing a die that landed on six or didn't (and this event is over when she is put back to sleep, while her next awakening, if there is any, will be a separate event). Or she could focus on the current experimental run as the protracted event that her present awakening is, in some cases, only a part of. Nothing in the Bible, in the fundamental laws of nature, or in the mathematical theory of probability, determines what specific event (awakening or experimental run) should be the proper focus of attention. This choice of focus yields different analyses and different credences since those credences target differently individuated events. However, once one analysis has been settled on, and one payout structure has been determined, Halfers and Thirders (almost) always agree on the expected value of a given betting strategy.

    After waking up, either she continues to believe that the probability that the die landed on a 6 is 1/6, as Halfers say, or she now believes that it is 6/11, as Thirders say.


Indeed, and, as previously explained, that's because Halfers and Thirders are typically talking past each other. They're not talking about the same events.

    Only then, if allowed, can she use her credence to calculate the expected returns of placing or changing a bet, accounting for the particular betting rules. And as I believe I showed above, only a credence of 1/6 provides a consistent and sensible approach to both betting scenarios.


I don't think you've shown the Thirder analysis to be inconsistent. You just don't like it. There are scenarios where the Thirder analysis is more natural. Remember the flip-coin scenario where the singular H-awakenings take place in the West-Wing of the Sleeping Beauty Experimental Facility and the dual T-awakenings take place in the East-Wing. The West-Wing is surrounded by a moat with crocodiles and the East-Wing is surrounded by a jungle with lions. On the occasion of her awakening, Sleeping Beauty (we may call her Melania) finds a rare opportunity to escape and can choose either to bring a torch (that she can use to scare off lions) or a wooden plank (that she can use to safely cross the moat). A Thirder analysis of the situation is natural in that case since it tracks singular escape opportunities. Her credence that she will encounter lions is 2/3 (as is her credence that the coin landed Tails). Taking the torch is the safest bet and, indeed, two thirds of Sleeping Beauties who make this bet on the rare occasions where this opportunity presents itself to them survive.

    On edit: For this analysis to be sound, we must assume that the rare escape opportunities don't convey any significant amount of information that SB didn't already have when she awoke, and hence present themselves with the same (very low) frequency on each awakening occasion.
    Michael October 06, 2025 at 19:35 #1016786
    Quoting Pierre-Normand
    those credences target differently individuated events


    This is where I believe the mistake is made. The question she is asked after being woken up is the same question she is asked before being put to sleep. There is no ambiguity in that first question, and so there is no ambiguity in any subsequent question. There is a single event that is the target of the question before being put to sleep and we are asking if being put to sleep and woken up gives Sleeping Beauty reason to re-consider her credence in that event, much like Prince Charming re-considers his credence in that event after being told that his coin is loaded. Neither Sleeping Beauty nor Prince Charming is being asked to consider their credence in one of two different events of their own choosing.

    Quoting Pierre-Normand
    Indeed, and, as previously explained, that because Halfers and Thirders are typically talking past each other. They're not talking about the same events.


    Which is why I think that Thirders have fabricated a problem that doesn't exist and Halfers are right. The problem only arises because it is suggested that Sleeping Beauty's credence in Event A changes after being put to sleep and woken up, despite no new information. All I can gather from your responses is that Thirders say that Sleeping Beauty's credence in Event B is [math]2\over3[/math]. But we're not interested in Sleeping Beauty's credence in Event B; we're only interested in Sleeping Beauty's continued credence in Event A.

    Quoting Pierre-Normand
Remember the flip-coin scenario where the singular H-awakenings take place in the West-Wing of the Sleeping Beauty Experimental Facility and the dual T-awakenings take place in the East-Wing. The West-Wing is surrounded by a moat with crocodiles and the East-Wing is surrounded by a jungle with lions. On the occasion of her awakening Sleeping Beauty finds a rare opportunity to escape and can choose either to bring a torch (that she can use to scare off lions) or a wooden plank (that she can use to safely cross the moat). A Thirder analysis of the situation is natural in that case since it tracks singular escape opportunities. Her credence that she will encounter lions is 2/3 (as is her credence that the coin landed Tails). Taking the torch is the safest bet and, indeed, two thirds of Sleeping Beauties who make this bet on the rare occasions where this opportunity presents itself to them survive.


That you're more likely to escape if you assume that the coin landed tails doesn't mean that the coin most likely landed tails. You just get two opportunities to escape if the coin landed tails. It's exactly the same as being able to place either two bets on outcome A or one bet on outcome B, where P(A) <= P(B) but P(B) < 2P(A). It is more profitable to bet twice on the least probable outcome than once on the most probable outcome. You don't need to force yourself to believe that outcome A is more probable to justify placing those bets. The expected return already does that for you.
    Pierre-Normand October 06, 2025 at 20:33 #1016800
    Quoting Michael
    That you're more likely to escape if you assume that the coin landed tails isn't that the coin most likely landed tails. You just get two opportunities to escape if the coin landed tails.


    She gets two opportunities to escape if the coin landed tails (or rather she is twice as likely to have an opportunity to escape when the coin landed tails) precisely because she twice as often finds herself being awakened when the coin landed tails. This is the reason why, whenever she is awakened, her epistemic relationship to the coin that has been tossed changes. There is a causal relationship between the coin toss result and the number of awakenings (and escape opportunities) she thereby experiences (encounters). It's her knowledge of this causal relationship that she can harness to update her credence in the new epistemic situation she finds herself in when she awakens.

Notice that, in this example, the success of her escape strategy isn't predicated on there being more opportunities when the coin landed tails. The choice being offered to her isn't between escaping or staying put. It's a choice between carrying a plank or a torch. Taking the torch will enable her to survive if and only if she's being housed in the East-Wing. Else, she's going to be eaten by crocs. The success rate of betting on lions (and, correlatively, on the coin having landed tails) is twice as high as the success rate of betting on crocs (and on the coin having landed heads). The success rate of her betting decisions directly tracks her credence in the specific outcome she is betting on, on those occasions.

If a Halfer claims that, when she awakens, SB's credence in the coin having landed tails remains 1/2, and hence likewise for her credence that she is surrounded by lions, there would be no reason for her, when she attempts to escape on this occasion, to bring a torch rather than a plank. She could pick either the torch or the plank at random. Half of such Halfer Beauties who make an escape attempt would survive. Two thirds of Thirder Beauties would survive. The Halfers weren't wrong in their credence assessment. But they picked the wrong credence (targeting expected frequencies of runs rather than frequencies of awakenings) for the task at hand.
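Here is a minimal simulation sketch of the escape scenario (in Python, assuming a fair coin, one West-Wing awakening on Heads, two East-Wing awakenings on Tails, and a small, equal chance of an escape opportunity at each awakening; the 1% figure is an arbitrary illustrative choice):

import random

escape_attempts = 0
torch_policy_survivals = 0   # always take the torch (i.e. bet on lions / Tails)
random_policy_survivals = 0  # pick torch or plank at random

for _ in range(1_000_000):
    tails = random.random() < 0.5
    awakenings = 2 if tails else 1           # Tails: two East-Wing awakenings (lions);
                                             # Heads: one West-Wing awakening (crocodiles)
    for _ in range(awakenings):
        if random.random() < 0.01:           # a rare, equally likely escape opportunity
            escape_attempts += 1
            torch_policy_survivals += tails  # the torch only works against lions
            random_policy_survivals += (random.random() < 0.5) == tails

print(torch_policy_survivals / escape_attempts)   # ~2/3
print(random_policy_survivals / escape_attempts)  # ~1/2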
    JeffJo October 06, 2025 at 21:05 #1016810
SB has no unusual "epistemic relationship to the coin," which is the point my new construction was trying to make. That fallacy is based on the misconception that Tuesday somehow ceases to exist, in her world, if the coin lands on Heads. It still exists, and she knows it exists when she addresses the question.

But she also knows that it is not the current situation when she is asked the question. In this new construction, there are N random values (N=2 for the original, N=6 at camp) that determine which of the N rows in the schedule, each covering N days, is used. So there are N^2 equally likely entries in the schedule, with N possible ways an entry could be observed.

The probability for each random value is the number of times the observed activity appears in the row for that random value, divided by the number of times it appears in the schedule. The probability for each day is the number of times the observed activity appears in the column for that day, divided by the number of times it appears in the schedule. What the other observations are - even "I wouldn't be able to observe" - is irrelevant, since SB knows what was observed, and she knows that the opportunity for the other observations exists regardless of whether the activity is observable.

There is no need to debate a "payout schedule" since the probabilities apply to what SB knows at the time, which depends only on which schedule entries are consistent and which are not. To argue otherwise, you have to defend why E="Egg Hunt" gives different answers than E="Extended Sleep," and what that change would be.
    Michael October 06, 2025 at 21:26 #1016812
    Reply to Pierre-Normand

You seem to continue to conflate an outcome's expected return with its probability and to assert that one's behaviour is governed only by one's credence in the outcome. Neither of these things is true. I've shown several times that the least likely outcome can have the greater expected return, and so this assessment alone is sufficient to guide one's decisions. No number of analogies is going to make either "she wins two thirds of the time if she acts as if A happened, therefore she believes (or ought to believe) that A most likely happened" or "she believes that A most likely happened, therefore she acts (or ought to act) as if A happened" valid inferences.

But the most important parts of my previous comment were the first two paragraphs, especially when considering the standard problem.
    Pierre-Normand October 06, 2025 at 21:26 #1016813
    Quoting JeffJo
    SB has no unusual "epistemic relationship to the coin," which is what the point of my new construction was trying to point out. That fallacy is based on the misconception that Tuesday somehow ceases to exist, in her world, if the coin lands on Heads. It still exists, and she knows it exists when she addresses the question.


    According to a standard Thirder analysis, prior to being put to sleep, SB deems the two possible coin toss outcomes to be equally likely. When she awakens, she could be in either one of three equiprobable situations: Monday&Tails, Monday&Heads and Tuesday&Tails (according to Elga's sensible argument). SB's credence in the truth of the statement "Today is Tuesday" is 1/3. That possibility doesn't cease to exist. Her epistemic relationship to the already flipped coin changes since she is now able to refer to it with the self-locating indexical proposition: "The coin-toss result on the occasion of this awakening episode", which she wasn't able to before.

Before the experiment began, SB could (correctly) reason that it was equally likely that she would be awakened once (when the coin toss result is Heads) or twice (when the coin toss result is Tails). When she is awakened, on any occasion, her epistemic relationship to the coin changes, since it's only in the case where the result is Tails that she experiences an awakening twice. In general, events that make it more likely for you to encounter them warrant updating your credence in them when you do encounter them. This stems from the core rationale of Bayesian updating.
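For the original coin problem, here is a minimal simulation sketch (in Python) of the awakening frequencies behind the three situations mentioned above:

import random

counts = {"Monday & Heads": 0, "Monday & Tails": 0, "Tuesday & Tails": 0}

for _ in range(100_000):
    if random.random() < 0.5:          # Heads: a single Monday awakening
        counts["Monday & Heads"] += 1
    else:                              # Tails: Monday and Tuesday awakenings
        counts["Monday & Tails"] += 1
        counts["Tuesday & Tails"] += 1

total_awakenings = sum(counts.values())
for situation, n in counts.items():
    print(situation, round(n / total_awakenings, 3))  # each ~1/3 of all awakenings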

    Pierre-Normand October 06, 2025 at 22:01 #1016820
    Quoting Michael
    You seem to continue to conflate an outcome's expected return with its probability and assert that one's behaviour is only governed by one's credence in the outcome.


    I've acknowledged this distinction. It's not the credence alone that governs the rational betting behavior. It's the (well defined) credence in combination with the payoff structure that jointly govern the rational betting behavior.

    Neither of these things is true. I've shown several times that the least likely outcome can have the greater expected return and so that this assessment alone is sufficient to guide one's decisions.


I've also myself repeatedly made the point that when the payout structure rewards a consistent betting policy (or the last bet made after being given the opportunity to change it on each awakening occasion) with an even-money bet paid only once at the end of the experimental run, then, in that case, it's rational to bet on the least likely outcome (namely, a non-six result, which occurs only 5/11 of the time), since this is the betting behavior that maximizes the expected return. In fact, it could be argued that this arbitrary payoff structure is misleading in the present context, since it is designed precisely to incentivise the bettor to bet on the least likely outcome according to their own credence. It's quite fallacious to then charge the Thirder with inconsistency on the ground that they are betting on an outcome that they have the least credence in. When doing so, you are committing the very conflation that you are charging me with.

    No number of analogies is going to make either "she wins two thirds of the time if she acts as if A happened, therefore she believes (or ought to believe) that A most likely happened" or "she believes that A most likely happened, therefore she acts (or ought to act) as if A happened" valid inferences.


The analogies are being offered for the sake of illustration. They don't aim at proving the validity of the Thirder stance, but rather at bringing out its pragmatic point. By the same token, your own analogies don't prove the validity of the Halfer stance. Remember that I am not a Halfer or a Thirder. My main goal rather was to show how different situations make salient one rather than another interpretation of SB's "credence" as being pragmatically relevant to specific opportunities: highlighting specific kinds of events one gets involved in and that one wishes to track the long term frequency of as a guide to rational behavior.

But the most important part of my previous comment was the first two paragraphs, especially when considering the standard problem.


    So, I'll address this separately.
    Pierre-Normand October 06, 2025 at 23:33 #1016845
    I'm coming back to one of the two paragraphs you had flagged as the most important part of your comment.

    Quoting Michael
    This is where I believe the mistake is made. The question she is asked after being woken up is the same question she is asked before being put to sleep. There is no ambiguity in that first question, and so there is no ambiguity in any subsequent question. There is a single event that is the target of the question before being put to sleep and we are asking if being put to sleep and woken up gives Sleeping Beauty reason to re-consider her credence in that event, much like Prince Charming re-considers his credence in that event after being told that his coin is loaded. Neither Sleeping Beauty nor Prince Charming is being asked to consider their credence in one of two different events of their own choosing.


I assume that the singular event that is the target of the question is, according to you, the coin toss event. And the question is: what is SB's credence in the outcome of this coin toss? Of course, the question is indeed about this unique event, and remains so after she awakens. However, when asked about her credence regarding this specific outcome, SB has to consider some determinate range of possible outcomes, and what makes it more likely in her current epistemic situation that one of those possible outcomes is actual. Any piece of information SB acquires upon awakening that is conditionally dependent on the target outcome provides her with the means to update her credence (using Bayes' theorem). It's also often alleged (e.g. by David Lewis) that no such new information becomes available to her when she awakens, which is true albeit misleading since it neglects a more subtle change in her epistemic situation.

    One particular way in which one can acquire information about a specific outcome T occurs when the occurrence of T biases the probability of one encountering this outcome. For instance, if a bunch of fair pennies fall on the ground but, due to reflectivity and lighting conditions, pennies that landed Tails are more noticeable from a distance, then, on the occasion where I notice a penny shining in the distance, my credence that this penny landed tails is increased. (How silly and point missing would a "Halfer" objection be: "It was not more likely to land Tails, you were just more likely to notice it when it did land Tails!")

    The SB setup is a very close analogy to this. Coins landing Tails play a similar causal role. Just replace "increased proclivity to being noticed by a passerby" with "increased proclivity to awaken a random test subject in the Sleeping Beauty Experimental Facility".

    Of course, one salient disanalogy between this penny drop analogy and the SB problem is that, in the standard SB problem, each coin is being tracked separately and noticed at least once, on Monday. But I don't think this disanalogy undermines the main point. It's because tail-outcomes causally increase the proportion of awakening episodes at which SB would encounter them that, on each occasion where she encounters them, SB can update her credence that the coin landed Tails. That this rational ground for Bayesian updating remains valid even in cases of singular experimental runs with amnesia (as in the original SB problem) is something that I had illustrated by means of a Christmas gift analogy (see the second half of the post).
    JeffJo October 06, 2025 at 23:43 #1016851
    Reply to Pierre-Normand
    According to a standard Thirder analysis, prior to being put to sleep, SB deems the two possible coin toss outcomes to be equally likely.

    According to the often-misrepresented, original Thirder analysis by Adam Elga, there are two independent random elements: the coin toss, and the day. They combine in four (not three) ways. But I suppose Elga suspected how obtuse halfers would be about them, so he only considered the two overlapping pairs of two that you think constitute the entire sample space.

    When she awakens, she could be in either one of three equiprobable situations: Monday&Tails, Monday&Heads and Tuesday&Tails (according to Elga's sensible argument)

    That's not the reasoning.
    • If (upon awakening) you were to learn that the toss outcome is Tails, that would amount to your learning that you are in either T1 or T2. Since being in T1 is subjectively just like being in T2, and since exactly the same propositions are true whether you are in T1 or T2, even a highly restricted principle of indifference yields that you ought then to have equal credence in each. But your credence that you are in T1, after learning that the toss outcome is Tails, ought to be the same as the conditional credence P(T1|T1 or T2), and likewise for T2. So P(T1|T1 or T2) = P(T2|T1 or T2), and hence P(T1) = P(T2).
    • If (upon awakening) you were to learn that it is Monday, that would amount to your learning that you are in either H1 or T1. Your credence that you are in H1 would then be your credence that a fair coin, soon to be tossed, will land Heads. It is irrelevant that you will be awakened on the following day if and only if the coin lands Tails — in this circumstance, your credence that the coin will land Heads ought to be 1/2. But your credence that the coin will land Heads (after learning that it is Monday) ought to be the same as the conditional credence P(H1|H1 or T1). So P(H1|H1 or T1) = 1/2, and hence P(H1) = P(T1).
    • Combining results, we have that P(H1) = P(T1) = P(T2). Since these credences sum to 1, P(H1) = 1/3.


    SB's credence in the truth of the statement "Today is Tuesday" is 1/3.
    Wrong. Elga's credence in the truth of the statement "Today is Tuesday and the coin landed Tails" is 1/3. That's what Elga's T2 means.

You are doing exactly what I said halfers do - denying the existence of what you would call Tuesday&Heads, and that Elga would call H2. The prior probability of a day that is "H2 or T2" is 1/2, because H2 and T2 are independent results and the prior probabilities are P(H2)=P(T2)=1/4.

    But because she is awake, SB receives the "new information" [see note] that H2 is eliminated, leading to Elga's result.

Before the experiment began, SB could (correctly) reason that it was equally likely that she would be awakened once when the coin toss result is Heads and twice when the coin toss result is Tails.

    As far as I know, SB can tell that this day is just one of those three possibilities. Please explain if you think otherwise. And before the experiment begins (this is called the "prior" to those who understand probability), she also knows that the experiment exists on Tuesday&Heads, even though she will not be awake to observe it. So when she is awake, she eliminates that possibility.

Note: The way Bayesian Updating works is that you define a sample space (called the prior) comprising all possible outcomes of a procedure. This procedure is doing something with SB on a single day based on a coin flip and the index of the day - the 1 or 2 in Elga's notation. So the red herring about indexicals does not apply. The prior probabilities of these outcomes should sum to 1.

    Once you have that, you make an observation about an outcome - WHICH IS WHAT HAPPENS ON A SINGLE DAY. This is called "new information" because some of those outcomes in the prior sample space can be eliminated. Not because what you know about what did happen was a surprise. This is another red herring halfers use. Once you "eliminate" any outcomes in the prior that are inconsistent with the observation, you update the prior probabilities so those that are consistent sum to 1.

    In the SB experiment, there are four things that can happen ON A SINGLE DAY. They are H1, T1, H2, and T2. Prior probabilities are 1/4 each. When SB is awake, she knows that she is in A SINGLE DAY but it must be H1, T1, or T2. Not H2. So she updates these probabilities to 1/3 each.
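
For concreteness, here is a minimal Python sketch of the updating just described (an illustration only, not part of the argument itself; the outcome labels follow Elga's notation):

[code]
# A minimal sketch of the conditional-probability update described above (illustrative only).
# Prior: the four single-day outcomes, each with probability 1/4.
prior = {"H1": 0.25, "T1": 0.25, "H2": 0.25, "T2": 0.25}

# Observation: SB is awake, which eliminates H2 (Tuesday & Heads).
consistent = {k: p for k, p in prior.items() if k != "H2"}

# Renormalize so the surviving outcomes sum to 1.
total = sum(consistent.values())
posterior = {k: p / total for k, p in consistent.items()}

print(posterior)  # {'H1': 0.333..., 'T1': 0.333..., 'T2': 0.333...}
[/code]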

This is a trivial conditional probability problem. The reason I posed the "Camp Sleeping Beauty" version is that it exposes the red herrings. And I assume that is the reason you ignore it, and how the red herrings are exposed.
    Pierre-Normand October 07, 2025 at 00:24 #1016862
    Quoting JeffJo
    This is a trivial conditional probability problem. The reason I posed the "Camp Sleeping Beauty" version, is that it exposes the red herrings. And I assume that is the reason you ignore it, and how the red herrings are exposed.


I didn't ignore your post. I read it and referred to it in a reply to Michael as a more apposite (than his) elucidation of the Thirder position. It's true that I now depart somewhat from the sorts of analyses of the problem that were favored by Elga and Lewis since I think the problem can be demystified somewhat by focusing not on the updating of priors regarding predefined situations SB can potentially find herself in at a future time but rather on the shift in her epistemic situation in relation to the coin-toss outcome on any occasion when she awakens. Also, I no longer see Thirder and Halfer interpretations of Sleeping Beauty's epistemic condition to be mutually exclusive responses to a well defined problem but rather each being motivated by complementary interpretations of the sort of event her "credence" in the coin-toss outcome is supposed to be about. If you can't see what a sensible rationale for a Halfer interpretation might be, you can refer to my Aunt Betsy variation laid out here (and following post).
    Michael October 07, 2025 at 09:14 #1016934
    Quoting Pierre-Normand
    it's rational to bet on the least likely outcome (namely, a non-six result, which occurs only 5/11th of the times) since this is the betting behavior that maximizes the expected return. In fact, it could be argued that this arbitrary payoff structure is misleading in the present context since it is being designed precisely to incentivise the bettor to bet on the least likely outcome according to their own credence.


    The multiple bets structure is the misleading structure, and where one is betting on the least likely outcome. If you are offered the opportunity to place six bets that the die landed on a 6 or one bet that it didn’t, what do you do? You place six bets that the die landed on a 6 even though your credence that it did is [math]1\over6[/math]. Nothing changes after being made to forget before any bet and so you remain committed to what you knew before being put to sleep.

    The single bet structure (why do you call it “arbitrary”?) is the appropriate structure to properly assess the problem: does being put to sleep and woken up change her credence in the die roll, like Prince Charming being told that his die is loaded? If it did then she would follow his lead and change her bet, and we would have a genuine paradox (although she'd lose money). If she doesn’t then her credence hasn’t changed and the problem is resolved in the Halfer’s favour (more on this below).

    Quoting Pierre-Normand
    It's the (well defined) credence in combination with the payoff structure that jointly govern the rational betting behavior.


    Yes, so consider the previous argument:

    P1. If I keep my bet and the die didn't land on a 6 then I will win £100 at the end of the experiment
    P2. If I change my bet and the die did land on a 6 then I will win £100 at the end of the experiment
    P3. My credence that the die landed on a 6 is [math]6\over11[/math]
    C1. Therefore, the expected return at the end of the experiment if I keep my bet is £[math]n[/math]
C2. Therefore, the expected return at the end of the experiment if I change my bet is £[math]m[/math]

    What values does she calculate for [math]n[/math] and [math]m[/math]?

    She multiplies her credence in the event by the reward. Her calculation is:

    C1. Therefore, the expected return at the end of the experiment if I keep my bet is £45.45
    C2. Therefore, the expected return at the end of the experiment if I change my bet is £54.55

    This is exactly what Prince Charming does given his genuine commitment to P3 and is why he changes his bet.

    So why doesn’t she change her bet? Your position requires her to calculate that [math]n > m[/math] but that’s impossible given P1, P2, and P3. She can only calculate that [math]n > m[/math] if she rejects P3 in favour of “my credence that the die landed on a 6 is [math]1\over6[/math]”.

    I’ll respond to the other comment this evening after work.
    JeffJo October 07, 2025 at 18:27 #1016979
    Quoting Pierre-Normand
    I didn't ignore your post.

    You didn't respond to a single point in it. You only acknowledged its existence, while you continued your invalid analysis about changing bets and expected runs. None of which can answer the questions I raised. Using this table

    . M T W H F S
    1 A D C F E B
    2 F B B C C F
    3 A B C D F D
    4 F E B D B C
    5 C D F C E E
    6 E E F C C F

    • On each single day, after activity X, what is the probability/SB's confidence that the die roll was d, for d=1 to 6?
    • On each single day, after activity X, what is the probability/SB's confidence that the day is Y, for Y in {M,T,W,H,F,S}?
    • Does it matter if E is "Extended sleep"? That is, the same as Tuesday&Heads.


    I use "single day" because each day is an independent outcome to SB.

We do not need payout schemes to determine this. We do not need to know what SB expects over the row. We do not need to address "indexicals". The correct answers are:
    • COUNT(CELL=X in row d)/COUNT(CELL=X in table)
    • COUNT(CELL=X in column Y)/COUNT(CELL=X in table)
    • No.
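
For anyone who wants to check the counting, here is a short Python sketch of those COUNT formulas applied to the table above (purely illustrative; it assumes a fair die and one activity per day, as in the setup):

[code]
# Illustrative counting sketch for the Camp Sleeping Beauty table (fair die, one activity per day).
table = {
    1: ["A", "D", "C", "F", "E", "B"],
    2: ["F", "B", "B", "C", "C", "F"],
    3: ["A", "B", "C", "D", "F", "D"],
    4: ["F", "E", "B", "D", "B", "C"],
    5: ["C", "D", "F", "C", "E", "E"],
    6: ["E", "E", "F", "C", "C", "F"],
}
days = ["M", "T", "W", "H", "F", "S"]

def credences_after_activity(x):
    total = sum(row.count(x) for row in table.values())              # COUNT(CELL=X in table)
    p_roll = {d: row.count(x) / total for d, row in table.items()}   # COUNT(CELL=X in row d) / total
    p_day = {days[i]: sum(1 for row in table.values() if row[i] == x) / total
             for i in range(6)}                                      # COUNT(CELL=X in column Y) / total
    return p_roll, p_day

print(credences_after_activity("C"))
[/code]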

    Michael October 07, 2025 at 18:59 #1016986
    Quoting Pierre-Normand
    The SB setup is a very close analogy to this. Coins landing Tails play a similar causal role. Just replace "increased proclivity to being noticed by a passerby" with "increased proclivity to awaken a random test subject in the Sleeping Beauty Experimental Facility".


    This, I think, shows the fallacy. You're equivocating, or at least begging the question. It's not that there is an increased proclivity to awaken in this scenario but that waking up in this scenario is more frequent.

    In any normal situation an increased frequency is often explained by an increased proclivity, but it does not then follow that they are the same or that the latter always explains the former – and this is no normal situation; it is explicitly set up in such a way that the frequency of us waking up Sleeping Beauty does not mirror the probability of the coin toss (or die roll).

    If you are allowed to place 6 bets if the die lands on a 6 but only 1 if it doesn't then it is both the case that winning bets are more frequently bets that the die landed on a 6 and the case that the die is most likely to not land on a 6.
    Pierre-Normand October 07, 2025 at 19:20 #1016993
    Quoting JeffJo
    You didn't respond to a single point in it. You only acknowledged its existence, while you continued your invalid analysis about changing bets and expected runs.


    I didn't provide a detailed response to your post because you didn't address it to me or mention me. I read it and didn't find anything objectionable in it. If you think my own analyses are invalid, then quote me or make reference to them and state your specific objections. I'll respond.
    Michael October 07, 2025 at 20:04 #1017007
Apologies for doing this as a second post. I did mean to include this earlier but misclicked.

    Quoting Pierre-Normand
    Of course, one salient disanalogy between this penny drop analogy and the SB problem is that, in the standard SB problem, each coin is being tracked separately and noticed at least once, on Monday. But I don't think this disanalogy undermines the main point. It's because tail-outcomes causally increase the proportion of awakening episodes at which SB would encounter them that, on each occasion where she encounters them, SB can update her credence that the coin landed Tails. That this rational ground for Bayesian updating remains valid even in cases of singular experimental runs with amnesia (as in the original SB problem) is something that I had illustrated by means of a Christmas gift analogy (see the second half of the post).


    I think your comment sidestepped the issue I was raising (or at least misunderstood it, unless I'm misunderstanding you), but this reference to Bayesian probability will make it clearer.

    Everyone agrees that [math]P(H) = {1\over2}[/math].

    Halfers claim that [math]P(H|Awake) = {1\over2}[/math] and Thirders claim that [math]P(H|Awake) = {2\over3}[/math].

    You claim that both Halfers and Thirders are right because they are referring to different events, which I understand to mean that the [math]H[/math] in [math]P(H|Awake) = {1\over2}[/math] and the [math]H[/math] in [math]P(H|Awake) = {2\over3}[/math] do not designate the same event, which means that one or both do not designate the same event as the [math]H[/math] in [math]P(H) = {1\over2}[/math].

    The problem is only a problem (or paradox) if the [math]H[/math] in [math]P(H|Awake) = {2\over3}[/math] designates the same event as the [math]H[/math] in [math]P(H) = {1\over2}[/math], and so it cannot be that both Halfers and Thirders are right. One may be "right" in isolation, but if used in the context of this paradox they are equivocating, and so are wrong in the context of this paradox.
    Pierre-Normand October 07, 2025 at 21:39 #1017027
    Quoting Michael
    Yes, so consider the previous argument:

    P1. If I keep my bet and the die didn't land on a 6 then I will win £100 at the end of the experiment
    P2. If I change my bet and the die did land on a 6 then I will win £100 at the end of the experiment
    P3. My credence that the die landed on a 6 is 6/11
C1. Therefore, the expected return at the end of the experiment if I keep my bet is £n
C2. Therefore, the expected return at the end of the experiment if I change my bet is £m

What values does she calculate for n and m?

    She multiplies her credence in the event by the reward. Her calculation is:

    C1. Therefore, the expected return at the end of the experiment if I keep my bet is £45.45
    C2. Therefore, the expected return at the end of the experiment if I change my bet is £54.55

    This is exactly what Prince Charming does given his genuine commitment to P3 and is why he changes his bet.

So why doesn’t she change her bet? Your position requires her to calculate that n > m but that’s impossible given P1, P2, and P3. She can only calculate that n > m if she rejects P3 in favour of “my credence that the die landed on a 6 is 1/6”.


While Thirders and Halfers disagree on the interpretation of SB's credence expressed as "the likelihood that the die didn't land on a six", once this interpretation is settled, and the payout structure is also settled, they then actually agree on the correct betting strategy, which is a function of both.

The Thirder, however, provides a different explanation for the success of this unique (agreed upon) betting strategy. The reason why SB's expected return—from a Thirder stance—is higher when she systematically bets on the least likely coin toss result (i.e. 'non-six', which ends up being actual only five times on average in eleven awakenings) than when she systematically bets on the most likely one (i.e. 'six', which ends up being the actual result six times on average in eleven awakenings) is precisely that the betting structure is such that, in the long run, she is only rewarded once with £100 after betting eleven times on the most likely result ('six') but is rewarded five times with £100 after betting eleven times on the least likely result ('non-six'). On that interpretation, when SB systematically bets on the least likely outcome, she ends up being rewarded more because instances of betting on this outcome are rewarded individually (and cumulatively) whereas instances of betting on the more likely outcome are rewarded in bulk (only once for six successful bets placed). This is the reason why SB, as a Thirder, remains incentivized to bet on the least likely outcome.

    Your calculation of her expected return spelled out above was incorrect. It's not simply the result of multiplying her credence in an outcome with the potential reward for this outcome. It's rather the result of multiplying her credence in an outcome with the average reward for this outcome. Since she is only being rewarded with £100 for each sequence of six successful bets on the outcome 'six', her expected value when she (systematically) changes her original bet is:

    C2: credence('six') * 'average reward when bet successful' = (6/11) * (£100/6) = £9.091

    And her expected value when she doesn't change her bet is

C1: credence('non-six') * 'average reward when bet successful' = (5/11) * £100 = £45.45

    She thereby is incentivized to systematically bet on 'non-six', just like a Halfer is.

Notice also that, at the end of an average experimental run, where the number of betting opportunities (i.e. awakening episodes) is 11/6 on average, her calculated expected return is (11/6) * £45.45 = £83.33, which matches the expected return of a Halfer (who wins £100 five times out of six runs), as expected.
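
To make the arithmetic concrete, here is a rough Python simulation under my reading of the payout rule being discussed (one even-money £100 settlement per run on the consistent bet, six awakenings on a 'six' run and one otherwise); it is only a sketch, but it reproduces the per-awakening averages computed above:

[code]
# Rough simulation of the per-awakening expected returns (illustrative only).
# Assumed payout rule: £100 paid once per run if the consistent bet is correct;
# a 'six' run has six awakenings, any other run has one.
import random

def average_return_per_awakening(bet_on_six, runs=200_000):
    total_payout, total_awakenings = 0, 0
    for _ in range(runs):
        roll_is_six = (random.randrange(6) == 0)
        total_awakenings += 6 if roll_is_six else 1
        if bet_on_six == roll_is_six:   # the single run payout goes to a correct consistent bet
            total_payout += 100
    return total_payout / total_awakenings

print(average_return_per_awakening(bet_on_six=True))   # ≈ 9.09, i.e. (6/11) * (£100/6)
print(average_return_per_awakening(bet_on_six=False))  # ≈ 45.45, i.e. (5/11) * £100
[/code]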
    Pierre-Normand October 07, 2025 at 23:27 #1017059
    Quoting Michael
    I think your comment sidestepped the issue I was raising (or at least misunderstood it, unless I'm misunderstanding you), but this reference to Bayesian probability will make it clearer.

    [...]

    it cannot be that both Halfers and Thirders are right. One may be "right" in isolation, but if used in the context of this paradox they are equivocating, and so are wrong in the context of this paradox.


I agree with your Bayesian formulation, except that we're more used to following Elga's convention and predicating two awakenings on Tails, such that it's P(T|Awake) that is 2/3 on the Thirder interpretation of this credence.

To be clear about the events being talked about, there is indeed a unique event that is the same topic for discussion for both Halfers and Thirders: namely, the coin toss. However, even after the definition of this unique event has been agreed upon, there remains an ambiguity in the definition of the credence that SB expresses with the phrase "the probability that the coin landed Tails." That's because her credence C is conceptually tied to her expectation that this event will be repeated with frequency C, in the long run, upon repeatedly being placed in the exact same epistemic situation. Thirders assert that the relevant epistemic situation consists in experiencing a singular awakening episode (which is either a T-awakening or a H-awakening) and Halfers assert that the relevant epistemic situation consists in experiencing a singular experimental run (which comprises two awakenings when it is a T-run). So, there are three "events" at issue: the coin toss (which occurs before the experiment), the awakenings, and the runs.

Since it's one's subjective assessment of the probability of the unique event (either H or T) being realized that is at issue when establishing one's credence, one must consider the range of epistemic situations that are, in the relevant respect, indistinguishable from the present one but that one can reasonably expect to find oneself in, in order to establish this credence. The Thirders insist that the relevant situations are the indistinguishable awakening episodes (being generated in unequal amounts as a result of the coin toss) while the Halfers insist that they are the experimental runs (being generated in equal amounts as a result of this toss). I've argued that both stances yield sensible expressions of SB's credence, having different meanings, and that the choice of either may be guided by pragmatic considerations regarding the usefulness of either tracking relative frequencies of awakening types or of experimental run types for various purposes.
    Pierre-Normand October 08, 2025 at 02:56 #1017084
    Quoting Michael
    This, I think, shows the fallacy. You're equivocating, or at least begging the question. It's not that there is an increased proclivity to awaken in this scenario but that waking up in this scenario is more frequent.

    In any normal situation an increased frequency is often explained by an increased proclivity, but it does not then follow that they are the same or that the latter always explains the former – and this is no normal situation; it is explicitly set up in such a way that the frequency of us waking up Sleeping Beauty does not mirror the probability of the coin toss (or die roll).


    I’m with you on the distinction. "Proclivity" and "frequency" aren’t the same thing. The only point I’m making is simple: in my shiny-penny story, a causal rule makes certain observations show up more often, and Bayes lets us use that fact.

    In the shiny-penny case, fair pennies have a 1/2 chance to land Tails, but Tails pennies are twice as likely to be noticed. So among the pennies I actually notice, about 2/3 will be Tails. When I notice this penny, updating to (2/3) for Tails isn’t smuggling in a mysterious propensity; it’s just combining:

    1) the base chance of Tails (1/2), and
    2) the noticing rates (Tails noticed twice as often as Heads).

    Those two ingredients, or proclivities, generate the observed 2:1 mix in the pool of "noticed" cases, and that’s exactly what the posterior tracks. No amnesia needed; if you were really in that situation, saying "My credence is 2/3 on Tails for the penny I’m looking at" would feel perfectly natural.
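
For what it's worth, the Bayes arithmetic behind those two ingredients is short enough to write out (a sketch only; the absolute noticing rate q is a made-up parameter and cancels out):

[code]
# Bayes check of the shiny-penny numbers (illustrative; q is an arbitrary noticing rate for Heads).
p_tails = 0.5
q = 0.1                       # chance a Heads penny gets noticed
p_notice_given_tails = 2 * q  # Tails pennies are noticed twice as often

posterior_tails = (p_tails * p_notice_given_tails) / (
    p_tails * p_notice_given_tails + (1 - p_tails) * q
)
print(posterior_tails)  # 0.666..., independently of q
[/code]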

    If you are allowed to place 6 bets if the die lands on a 6 but only 1 if it doesn't then it is both the case that winning bets are more frequently bets that the die landed on a 6 and the case that the die is most likely to not land on a 6.


    Right, and that’s the clean way to separate the two perspectives:

    1) Per run: most runs are 'non-six', so the per-run credence is P(6)=1/6 (the Halfer number).
    2) Per awakening/observation: a 'six-run' spawns six observation-cases, a 'non-six' run spawns one. So among the observation-cases, 'six' shows up in a 6/5 ratio, giving P('six'|Awake)=6/11 (the Thirder number).

    Once you say which thing you’re scoring, runs or awakenings, both beliefs lead to the same betting strategy and the same expected value under any given payout scheme. Different grains of analysis, same rational behavior.
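
A quick tally makes the two grains explicit (an illustrative sketch; the same simulated runs are simply counted in two different ways):

[code]
# Tallying the same runs per run and per awakening (illustrative only).
import random

runs = 300_000
six_runs = awakenings = awakenings_in_six_runs = 0
for _ in range(runs):
    is_six = (random.randrange(6) == 0)
    n = 6 if is_six else 1       # a 'six' run spawns six observation-cases, a 'non-six' run one
    six_runs += is_six
    awakenings += n
    awakenings_in_six_runs += n if is_six else 0

print(six_runs / runs)                      # ≈ 1/6  : per-run frequency (the Halfer number)
print(awakenings_in_six_runs / awakenings)  # ≈ 6/11 : per-awakening frequency (the Thirder number)
[/code]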
    JeffJo October 08, 2025 at 22:29 #1017216
    Quoting Pierre-Normand
    I didn't provide a detailed response to your post because you didn't address it to me or mention me.


    It's "addressed" to what I thought was a discussion forum. You know, to discuss this problem and the approach to its solution. And more specifically, to the unnamed in the discussion who try to obfuscate what is a simple conditional probability problem.

    Using this table, in the Camp Sleeping Beauty setup:

    . M T W H F S
    1 A D C F E B
    2 F B B C C F
    3 A B C D F D
    4 F E B D B C
    5 C D F C E E
    6 E E F C C F

    • On each single day, after activity X, what is the probability/SB's credence that the die roll was d, for d=1 to 6?
    • On each single day, after activity X, what is the probability/SB's credence that the day is Y, for Y in {M,T,W,H,F,S}?
    • Does it matter if E is "Extended sleep"? That is, the same as Tuesday&Heads in the popular version?


    I use "single day" because each day is an independent outcome to SB. SB does not know anything about any other days, only what is happening in this day. Even if it is possible that the same thing could happen on other days. She can place this day within the context of the set of all days, and the subset where this day's outcomes can happen. BUT SHE ONLY KNOWS ABOUT ONE DAY.

We do not need payout schemes to determine this. We do not need to know what SB expects over the row. We do not need to address "indexicals". The correct answers are:

    • COUNT(CELL=X in row d)/COUNT(CELL=X in table).
    • COUNT(CELL=X in column Y)/COUNT(CELL=X in table).
    • No.


    Pierre-Normand October 08, 2025 at 23:31 #1017223
    Quoting JeffJo
    I use "single day" because each day is an independent outcome to SB.


    I had misunderstood your original post, having read it obliquely. I had thought you meant for the participants to experience, over the duration of one single day, all six activities in the table row selected by a die throw, and be put to sleep (with amnesia) after each activity. In that case, their credence (on the occasion of any particular awakening/activity) in any given die throw result would be updated using the non-uniform representation of each activity in the different rows. This would have been analogous to the reasoning Thirders make in the original Sleeping Beauty problem. But the variation that you actually propose, when only one activity is being experienced on any given day, yields a very straightforward Bayesian updating procedure that both Halfers and Thirders will agree on. I don't see how it bears on the original problem where the new evidence being appealed to for purposes of Bayesian updating isn't straightforwardly given—where, that is, all the potential awakening episodes are subjectively indistinguishable from Sleeping Beauty's peculiar epistemic perspective.
    JeffJo October 09, 2025 at 01:13 #1017235
    Quoting Pierre-Normand
    the variation that you actually propose, when only one activity is being experienced on any given day, yields a very straightforward Bayesian updating procedure that both Halfers and Thirders will agree on.

    Thank you for that. But you ignored the third question:

    • Does it matter if E is "Extended sleep"? That is, the same as Tuesday&Heads in the popular version?


    Quoting Pierre-Normand
    I don't see how it bears on the original problem where the new evidence being appealed to for purposes of Bayesian updating isn't straightforwardly given

    Then you don't want to see it as straightforward. Tuesday still exists if the coin lands Heads. It is still a single day, with a distinct activity, in the experiment. Just like the others in what you just called straightforward.

And what makes it "a very straightforward Bayesian updating procedure" is observing that it does not match what happens on the day SB is experiencing. That is the straightforward Bayesian methodology.

    JeffJo October 09, 2025 at 01:27 #1017238
    Quoting Pierre-Normand
    I don't see how it bears on the original problem


    Then try this schedule:
    . M T W H F S
    1 A E E E E E
    2 A A E E E E
    3 A A A E E E
    4 A A A A E E
    5 A A A A A E
    6 A A A A A A

    Here, A is "awake and interview."

    If E is "Extended Sleep," the Halfer logic says Pr(d|A)=1/6 for every possible roll, but I'm not sure what Pr(Y|A) is. Halfers aren't very clear on that.

But if E is anything where SB is awoken but not interviewed, then the straightforward Bayesian updating procedure you agreed to says Pr(d|A)=d/21, and if Y indexes the day (Monday = 1), Pr(Y|A)=(7-Y)/21.

    My issue is that, if A is what SB sees, these two cannot be different.
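
Spelled out as a counting sketch (illustrative; it assumes a uniform die and that conditioning on A is legitimate, i.e. that E days are observably different from A days):

[code]
# Counting sketch for the triangular schedule (illustrative only).
schedule = {d: ["A"] * d + ["E"] * (6 - d) for d in range(1, 7)}  # row d: A on the first d days
days = ["M", "T", "W", "H", "F", "S"]

total_A = sum(row.count("A") for row in schedule.values())        # 21
p_roll_given_A = {d: row.count("A") / total_A for d, row in schedule.items()}
p_day_given_A = {days[i]: sum(1 for row in schedule.values() if row[i] == "A") / total_A
                 for i in range(6)}

print(p_roll_given_A)  # d/21: 1/21 for a roll of 1 up to 6/21 for a roll of 6
print(p_day_given_A)   # (7-Y)/21: 6/21 for Monday down to 1/21 for Saturday
[/code]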
    Pierre-Normand October 09, 2025 at 01:28 #1017239
    Quoting JeffJo
    Thank you for that. But you ignored the third question:

Does it matter if E is "Extended sleep"? That is, the same as Tuesday&Heads in the popular version?

    "I don't see how it bears on the original problem where the new evidence being appealed to for purposes of Bayesian updating isn't straightforwardly given"
    — Pierre-Normand

    Then you don't want to see it as straightforward. Tuesday still exists if the coin lands Heads. It is still a single day, with a distinct activity, in the experiment. Just like the others in what you just called straightforward.


Oh yes, good point. I had overlooked this question. Indeed, in that case your variation bears more directly on the original SB thought experiment. One issue, though, is that if E is just another activity like the other ones, then SB should not know upon awakening on that day that her scheduled activity is E, just as, in the original problem, when SB wakes up on Tuesday she isn't informed that she is experiencing a Tuesday-awakening. So, you haven't quite addressed the issue of the indistinguishability of her awakening episodes.
    Pierre-Normand October 09, 2025 at 01:43 #1017241
    Quoting JeffJo
    Then try this schedule:
    . M T W H F S
    1 A E E E E E
    2 A A E E E E
    3 A A A E E E
    4 A A A A E E
    5 A A A A A E
    6 A A A A A A

    Here, A is "awake and interview."

    If E is "Extended Sleep," the Halfer logic says Pr(d|A)=1/6 for every possible roll, but I'm not sure what Pr(Y|A) is. Halfers aren't very clear on that.


Halfers don't condition on the proposition "I am experiencing an awakening". They contend that for SB to be awakened several times, rather than once, in the same experimental run (after one single coin toss or die throw) has no bearing on her rational credence regarding the result of this toss/throw.

But if E is anything where SB is awoken but not interviewed, then the straightforward Bayesian updating procedure you agreed to says Pr(d|A)=d/21, and if Y indexes the day (Monday = 1), Pr(Y|A)=(7-Y)/21.

    My issue is that, if A is what SB sees, these two cannot be different.


Yes, I agree with the cogency of this Thirder analysis. Halfers, however, interpret SB's credence, as expressed by the phrase "the probability that the coin landed Tails" to be the expression of her expectation that the current experimental run, in which she is now awakened (and may have been, or will be, awakened another time), is equally likely to be a T-run or an H-run, which also makes sense if she doesn't care how many times she may be awakened and/or interviewed in each individual run. Her credence tracks frequencies of runs rather than (as in Thirder interpretations of the problem) frequencies of awakening episodes.
    JeffJo October 09, 2025 at 12:13 #1017325
    Quoting Pierre-Normand
Halfers don't condition on the proposition "I am experiencing an awakening".


Right. And this is why they get the wrong answer, and have to come up with contradictory explanations for the probabilities of the days. See "double halfers."

    They contend that for SB to be awakened several times, rather than once, in the same experimental run


    Right. Which, after "Tails," requires SB's observation to be happening on both days, simultaneously. See "on a single day."

    Halfers, however, interpret SB's credence, as expressed by the phrase "the probability that the coin landed Tails" to be the expression of her expectation that the current experimental run,


Why? How does something that is not happening, and not doing so on a different day, change her state of credence now? How does non-sleeping activity not happening, and not doing so on a different day, change her experience on this single day, from an observation of this single day, to an "experimental run?"

    You are giving indefensible excuses to re-interpret the experiment in the only way it produces the answer you want.

    How about this schedule:

    . M T W H F S
    1 A E E E E E
    2 A B E E E E
    3 A B A E E E
    4 A B A B E E
    5 A B A B A E
    6 A B A B A B

    When A happens, if E is treated as only an unseen portion of a different "experimental run," should not B be also considered that? What happened to "straightforward Bayesian updating procedure"?

OR, what if E is "If a coin flipped on Sunday landed Heads, leave SB asleep. But if it landed Tails, wake SB and take her to Disneyworld without an interview." Now you have to use two models - not different probabilities, but two different probability models - for each E day.
    JeffJo October 10, 2025 at 20:28 #1017605
    How about this schedule:

    . M T W H F S
    1 A E E E E E
    2 A B E E E E
    3 A B A E E E
    4 A B A B E E
    5 A B A B A E
    6 A B A B A B

    When A happens, if E is treated as only an unseen portion of a different "experimental run," should not B be also considered that? What happened to "straightforward Bayesian updating procedure"?

OR, what if E is "If a coin flipped on Sunday landed Heads, leave SB asleep. But if it landed Tails, wake SB and take her to Disneyworld without an interview." Now you have to use two models - not different probabilities, but two different probability models - for each E day.

    Michael October 11, 2025 at 14:54 #1017732
    Quoting Pierre-Normand
    In the shiny-penny case, fair pennies have a 1/2 chance to land Tails, but Tails pennies are twice as likely to be noticed. So among the pennies I actually notice, about 2/3 will be Tails. When I notice this penny, updating to (2/3) for Tails isn’t smuggling in a mysterious propensity; it’s just combining:

    1) the base chance of Tails (1/2), and
    2) the noticing rates (Tails noticed twice as often as Heads).


    You appear to be affirming the consequent. In this case, Tails is noticed twice as often because Tails is twice as likely to be noticed. It doesn't then follow that Tail awakenings happen twice as often because Tails awakenings are twice as likely to happen.

The Sleeping Beauty case is contrived in such a way that a Heads awakening is guaranteed to happen and two Tails awakenings are guaranteed to happen. This contrivance doesn't allow you to compare the likelihood of a Tails awakening with that of a Heads awakening.

    Quoting Pierre-Normand
    1) Per run: most runs are 'non-six', so the per-run credence is P(6)=1/6 (the Halfer number).
    2) Per awakening/observation: a 'six-run' spawns six observation-cases, a 'non-six' run spawns one. So among the observation-cases, 'six' shows up in a 6/5 ratio, giving P('six'|Awake)=6/11 (the Thirder number).


    This doesn't make sense.

    She is in a Tails awakening if and only if she is in a Tails run.
    Therefore, she believes that she is most likely in a Tails awakening if and only if she believes that she is most likely in a Tails run.
    Therefore, her credence that she is in a Tails awakening equals her credence that she is in a Tails run.

    You can't have it both ways.

    Quoting Pierre-Normand
    Since she is only being rewarded with £100 for each sequence of six successful bets


    This isn't what's happening. There is only a single bet, placed before she is put to sleep. She is then given a 3 hour window in which she is able to change her bet, and can do so as many times as she likes. The same for Prince Charming, although he is never put to sleep.

    In this situation, if either of their credences in the outcome genuinely changed to favour the die landing on a 6 then they would change their bet. Prince Charming does this when he learns that his die is loaded. So why doesn't Sleeping Beauty after having her memory wiped? Because despite Thirder word games, her credence in the outcome hasn't genuinely changed. She continues to know that if she changes her bet then she is most likely to lose.

    If it helps, it's not a bet but a holiday destination. The die is a magical die that determines the weather. If it lands on a 6 then it will rain in Paris, otherwise it will rain in Tokyo. Both Prince Charming and Sleeping Beauty initially decide to go to Paris. If after being woken up Sleeping Beauty genuinely believes that the die most likely landed on a 6 then she genuinely believes that it is most likely to rain in Paris, and so will decide instead to go to Tokyo.

    Quoting Pierre-Normand
So, there are three "events" at issue: the coin toss (which occurs before the experiment), the awakenings, and the runs.


    But again, the paradox is only a paradox if the [math]H[/math] in [math]P(H|Awake) = {1\over3}[/math] denotes the same event as the [math]H[/math] in [math]P(H) = {1\over2}[/math].

The paradox is: this awakening gives me reason to believe that this coin toss most likely landed on tails.

    If this claim is false then Halfers are right and Thirders are either wrong or equivocating.
    ProtagoranSocratist October 11, 2025 at 15:42 #1017754
    So i guess to increase her odds, she bets tails 100% of the time since she can't remember which phase of the experiment she's in, and the 2/3rds tailsers make a profit off the gambling? That's the thing: we need to talk about the drugs. How many times does she participate in the experiment, is it just one run through?

    Do coins change their heads/tails bias based on the number of times tossed?
    Michael October 11, 2025 at 16:06 #1017769
    Quoting ProtagoranSocratist
    So i guess to increase her odds, she bets tails 100% of the time since she can't remember which phase of the experiment she's in, and the 2/3rds tailsers make a profit off the gambling?


    It doesn't increase her odds but it does increase her expected return in the long run.

    On this point it's worth considering an extreme example I provided two years ago.

    I am put to sleep and a coin is tossed 100 times. If it lands heads every time then I am woken up, interviewed, and put back to sleep 2[sup]101[/sup] times, otherwise I am woken up, interviewed, and put back to sleep once.

    When being interviewed, I am asked a) my credence that the coin landed heads every time and b) to place a bet on the outcome.

    All of these are true:

    1. If I know that the experiment will be performed once
    a. My credence is [math]{1\over{2^{100}}}[/math]
    b. I will bet that the coin did not land heads every time

    2. If I know that the experiment will be performed 2[sup]101[/sup] times
    a. My credence is [math]{1\over{2^{100}}}[/math]
    b. I will bet that the coin did land heads every time

    I strongly believe that a perfectly rational agent like Sleeping Beauty will believe and do the same. Thirder reasoning seems to be that if (2b) results in twice as many successful bets then (1a) is false, and that simply doesn't follow, either for me or for Sleeping Beauty.
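
For the record, the expected number of correct answers per experiment can be computed exactly (a sketch only; nothing here changes the credence claim, it just shows where the "twice as many successful bets" comes from):

[code]
# Exact expected counts for the extreme example (illustrative; exact fractions, no simulation,
# since 2^-100 is far too small to sample).
from fractions import Fraction

p_all_heads = Fraction(1, 2) ** 100
wakings_if_all_heads = 2 ** 101
wakings_otherwise = 1

# Expected correct answers per experiment if the same answer is given at every waking:
wins_betting_all_heads = p_all_heads * wakings_if_all_heads            # = 2
wins_betting_not_all_heads = (1 - p_all_heads) * wakings_otherwise     # ≈ 1

print(wins_betting_all_heads)              # 2
print(float(wins_betting_not_all_heads))   # ≈ 1.0
[/code]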
    ProtagoranSocratist October 11, 2025 at 16:44 #1017790
Reply to Michael but doesn't the first coin flip in every phase of the experiment matter the most if it lands on heads, because then it ends? I understand the 1/3rd logic, but it simply doesn't apply here: the third flip, given the first two were heads (less likely than one tail and a head, but still very likely), is also unaffected by the other flips. You can't win with gambling logic here, as it never matters what phase the experiment is in; it's always 50/50. The 1/3 argument is like arguing that heads is 2/3rds as likely because it ends the experiment...
    Pierre-Normand October 11, 2025 at 18:05 #1017824
    Quoting ProtagoranSocratist
    I understand the 1/3rd logic, but it simply doesn't apply here: the third flip, given the first two were heads (less likely than one tail and a head, but still very likely), is also unaffected by the other flips.


    There is no third flip. The coin is only tossed once. When it lands Tails, Sleeping Beauty is awakened twice and when it lands Heads, she is awakened once. She also is being administered an amnesia inducing drug after each awakening so that she is unable to infer anything about the number of awakenings she may be experiencing from her memory, or lack thereof, of a previous awakening episode. It might be a good idea to either reread the OP carefully, or read the Wikipedia article on the problem: especially the description of the canonical form of the problem in the second section titled "The problem".

    (For the record, my own "pragmatist" solution is an instance of what the Wikipedia article, in its current form, dubs the "Ambiguous-question position", although I think the formulation of this position in the article remains imprecise.)
    Pierre-Normand October 11, 2025 at 18:25 #1017833
    Quoting JeffJo
Right. And this is why they get the wrong answer, and have to come up with contradictory explanations for the probabilities of the days. See "double halfers."


    Let me just note, for now, that I think the double halfer reasoning is faulty because it wrongly subsumes the Sleeping Beauty problem under (or assimilates it with) a different problem in which there would be two separate coin tosses. Under that scenario, a first coin would be tossed and if it lands Heads, then SB would be awakened Monday only. If it lands Tails, then a second coin would be tossed and SB would still be awakened Monday only if it lands Heads and be awakened Tuesday only if it lands Tails. Such a scenario would support a straightforward Halfer interpretation of SB's rational credence but it's different from the original one since it makes Monday-awakenings and Tuesday-awakenings mutually exclusive events whereas, in the original problem, SB could be experiencing both successively though not at the same time. The different awakening generation rules yield different credences. (I haven't read Mikaël Cozic's paper, where the double-halfer solution is being introduced, though.)
    Pierre-Normand October 11, 2025 at 18:41 #1017836
    Quoting JeffJo
Why? How does something that is not happening, and not doing so on a different day, change her state of credence now? How does non-sleeping activity not happening, and not doing so on a different day, change her experience on this single day, from an observation of this single day, to an "experimental run?"

    You are giving indefensible excuses to re-interpret the experiment in the only way it produces the answer you want.


    Well, firstly, the Halfer solution isn't the answer that I want since my own pragmatist interpretation grants the validity of both the Halfer and the Thirder interpretations, but denies either one being the exclusively correct one. (I might as well say that Halfers and Thirders both are wrong to dismiss the other interpretation as being inconsistent with the "correct" one, rather than acknowledging its being incompatible but complementary.)

With this out of the way, let me agree with you that the arbitrary stringing up of discrete awakenings into composite experimental runs doesn't affect the Thirder credence in the current awakening being a T-awakening (which remains 2/3). However, likewise, treating a run as multiple interview opportunities doesn't affect the Halfer credence in the current run being a T-run (which remains 1/2). The mistake that both Halfers and Thirders seem to make is to keep shouting at each other: "Your interpretative stance fails to refute my argument regarding the validity of my credence estimation." What they fail to see is that they are both right and that the "credences" that they are talking about are credences about different things.
    ProtagoranSocratist October 11, 2025 at 18:43 #1017838
    Quoting Pierre-Normand
    There is no third flip. The coin is only tossed once. When it lands Tails, Sleeping Beauty is awakened twice and when it lands Heads, she is awakened once.


    okay, thanks for clearing that up, as i read the original description of the fake experiment more than once, and that part of it was unclear to me. To me it was saying that heads ends the experiment right away, tails produces two more flips. Maybe it's because i was reading it at work, but i doubt it, i remember the experiment confusing me, and i remember the article saying that it was confusing.

Still: the effects of one flip never affect the outcome of the other FLIPS, unless that is baked into the experiment, so it is a misleading hypothetical question (but interesting to me for whatever reason). The likelihood of the flips themselves is still 50/50, not accounting for other spooky phenomena that we just don't know about. So, i'll think about it some more, as it has a "gamey" vibe to it...

Here's what would affect the outcome to skew the bias slightly in the tails direction: let's say the experimenter gives her the drug, keeps flipping the coin while she sleeps, and then wakes her up on the condition that there are 2/3 tails on the last 3 tosses...then asks "what was the last coin flip?". There would be exactly 1/3 odds of it being heads...but as it stands, even the "correct" way you describe it, i still can't side with the 1/3 camp.
    I guess at this point it's a game to see how long it will take before i get frustrated with talking about this, so go on...
    Pierre-Normand October 11, 2025 at 18:55 #1017840
    Quoting ProtagoranSocratist
Still: the effects of one flip never affect the outcome of the other FLIPS, unless that is baked into the experiment, so it is a misleading hypothetical question (but interesting to me for whatever reason). The likelihood of the flips themselves is still 50/50, not accounting for other spooky phenomena that we just don't know about. So, i'll think about it some more, as it has a "gamey" vibe to it...


    There are no other flips. From beginning to end (and from anyone's perspective), we're only talking about the outcome of one single coin toss. Either it landed Heads or it landed Tails. We are inquiring about SB's credence (i.e. her probability estimation) in either one of those results on the occasion where she is being awakened. The only spooky phenomenon is her amnesia, but that isn't something we don't know about. It's part of the setup of the problem that SB is being informed about this essential part of the protocol. If there were no amnesia, then she would know upon being awakened what the day of the week is. If Monday (since she wouldn't remember having been awakened the day before) then her credence in Tails would be 1/2. If Tuesday (since she would remember having been awakened the day before) then her credence in Tails would be 1 (i.e. 100%). The problem, and competing arguments regarding what her credence should be, arise when she can't know whether or not her current awakening is the first one.

    (Very roughly, Halfers argue that since she is guaranteed to be awakened once in any case, her being awakened conveys no new information to her and her estimation of the probability that the coin landed Tails should remain 1/2 regardless of how many times she is being awakened when the coin lands Tails. Thirders argue that she is experiencing one of three possible and equiprobable awakening episodes, two of which happen when the coin landed Tails, and hence that her credence in the coin having landed Tails becomes 2/3.)
    JeffJo October 11, 2025 at 19:18 #1017841
    Quoting Pierre-Normand
    I think the double halfer reasoning is faulty because it wrongly subsumes the Sleeping Beauty problem under (or assimilates it with) a different problem in which there would be two separate coin tosses.


The point is that, like you, they construct the reasons in order to get the result they want. Not because the reasons are consistent in mathematics. But your explanation is wrong. They argue that there is a single coin toss, but somehow the Law of Total Probability does not apply: that Tails&Monday is both the same outcome as, and a different outcome than, Tails&Tuesday, depending on which way you try to use that Law.

    Quoting Pierre-Normand
    Well, firstly, the Halfer solution isn't the answer that I want since my own pragmatist interpretation grants the validity of both the Halfer and the Thirder interpretations, but denies either one being the exclusively correct one.

Like I said, you want the halfer solution to have validity, so you manufacture reasons for it to be so. There can't be two valid answers. Your logic fails to provide ANY solution to my last (repeated) variation.

    Viewed from outside the experiment - i.e., not SB's viewpoint - there are two paths with two distinct days each (For this logic, I'm calling Heads&Mon and Tails&Mon different days). And each path has a 50% probability, and the days are not distinguished.

    SB's viewpoint inside the experiment sees that only one day is happening. But recognizes that there are three other days, including one she would sleep through, that exist as possibilities but are not this day. I don't recall if I've used it here, but of course I have an equivalent version that clarifies this.

    There are three Michelin three-star restaurants in San Francisco, where I'll assume the experiment takes place. They are Atelier Crenn, Benu, and Quince. Before the coin is tossed, a different restaurant is randomly assigned to each of Heads&Mon, Tails&Mon, and Tails&Tue. When she is awoken, SB is taken to the assigned restaurant for her interview. Since she has no idea which restaurant was assigned to which day, as she gets in the car to go there each has a 1/3 probability. (Note that this is Elga's solution.) Once she gets to, say, Benu, she can reason that it had a 1/3 chance to be assigned to Heads&Mon.

    The point is that each restaurant represents one of three possible "waking" days, not the path that it is a part of. Outside the experiment there is a pair that represent the Tuesday path, but that is irrelevant to SB.
    ProtagoranSocratist October 11, 2025 at 19:49 #1017843
    Quoting JeffJo
    The point is that, like you, they construct the reasons in order to get the result they want. Not because the reasons are consistent in mathematics. But your explanation is wrong


It's not wrong, it's spooky just because of the guaranteed amnesia, which makes it weird and susceptible to forgetfulness. I think the problem was created more or less just to see what answers people would come up with, how they would project their logic onto what they read.

1/2 makes sense, since theoretical coin flips always result in either one or the other. 1/3 makes sense depending on how the experiment is manipulated, when she is asked to make a prediction, the information she is given, etc. I gave a perfect example in my last post of how you could guarantee one out of three heads, but whether sleeping beauty is given the true info is suspect. But the odd thing about coin flips, and this is what the question is exploiting, is that there will generally be a bias one way or the other, always with an odd and normally with an even number of flips. It's less like a random number generator with thousands/millions/billions of possibilities like in real life gambling.
    Pierre-Normand October 11, 2025 at 20:12 #1017848
    Quoting Michael
    You appear to be affirming the consequent. In this case, Tails is noticed twice as often because Tails is twice as likely to be noticed. It doesn't then follow that Tail awakenings happen twice as often because Tails awakenings are twice as likely to happen.


Rather, the premiss I'm making use of is the awakening-episode generation rule. If the coin lands/landed Tails, two awakening episodes are being generated, else only one is. This premiss is available to SB since it's part of the protocol. From this premiss, she infers that, on average, when she participates in such an experiment (as she knows to be currently doing) the number of T-awakenings that she gets to experience is twice as large as the number of H-awakenings. (Namely, those numbers are 1 and 1/2, respectively.) So far, that is something that both Halfers and Thirders seem to agree on.

    "[i]1) Per run: most runs are 'non-six', so the per-run credence is P(6)=1/6 (the Halfer number).
    2) Per awakening/observation: a 'six-run' spawns six observation-cases, a 'non-six' run spawns one. So among the observation-cases, 'six' shows up in a 6/5 ratio, giving P('six'|Awake)=6/11 (the Thirder number).[/i]"
    — Pierre-Normand

    This doesn't make sense.

    She is in a Tails awakening if and only if she is in a Tails run.
    Therefore, she believes that she is most likely in a Tails awakening if and only if she believes that she is most likely in a Tails run.
    Therefore, her credence that she is in a Tails awakening equals her credence that she is in a Tails run.

    You can't have it both ways.


    This biconditional statement indeed ensures that her credences regarding her experiencing a T-awakening, her experiencing a T-run, or her being in circumstances in which the coin landed (or will land) Tails, all match. All three of those statements of credence, though, are similarly ambiguous. All three of them denote three distinct events that can indeed only be actual (from SB's current epistemic situation on the occasion of an awakening) if and only if the other two are. The validity of those biconditionals doesn't resolve the relevant ambiguity, though, which is something that had been stressed by Laureano Luna in his 2020 Sleeping Beauty: An Unexpected Solution paper that we had discussed before on this thread (and that @fdrake had brought up, if I remember).

    Under the Halfer interpretation of SB's credence, all three of those biconditionally related "experienced" events—by "experienced", I mean that SB is currently living those events, regardless of her knowing or not that she is living them—are actual on average 1/2 of the times that SB is experiencing a typical experimental run. Under the Thirder interpretation, all three of those biconditionally related "experienced" events are actual on average 2/3 of the times that SB is experiencing a typical awakening episode.

    If it helps, it's not a bet but a holiday destination. The die is a magical die that determines the weather. If it lands on a 6 then it will rain in Paris, otherwise it will rain in Tokyo. Both Prince Charming and Sleeping Beauty initially decide to go to Paris. If after being woken up Sleeping Beauty genuinely believes that the die most likely landed on a 6 then she genuinely believes that it is most likely to rain in Paris, and so will decide instead to go to Tokyo.


    This setup exactly mirrors some other variations I also had proposed (exiting the West Wing or exiting the East Wing at the end of the experiment) that indeed warrant SB's reliance on her Halfer-credence to place her bet. But the original SB problem doesn't state what the "exit conditions" are. (If it did, there'd be no problem.) Rather than being offered a single trip to Paris or Tokyo at the end of the current experimental run, SB could be offered a one-day trip to either one of those destinations over the course of her current awakening episode, and then be put back to sleep. Her Thirder-credence would then be pragmatically relevant to selecting the destination most likely to afford her a sunny trip.
    Pierre-Normand October 11, 2025 at 20:37 #1017856
    Quoting JeffJo
    There are three Michelin three-star restaurants in San Francisco, where I'll assume the experiment takes place. They are Atelier Crenn, Benu, and Quince. Before the coin is tossed, a different restaurant is randomly assigned to each of Heads&Mon, Tails&Mon, and Tails&Tue. When she is awoken, SB is taken to the assigned restaurant for her interview. Since she has no idea which restaurant was assigned to which day, as she gets in the car to go there each has a 1/3 probability. (Note that this is Elga's solution.) Once she gets to, say, Benu, she can reason that it had a 1/3 chance to be assigned to Heads&Mon.


    Yes, that is a very good illustration, and justification, of the 1/3 credence Thirders assign to SB given their interpretation of her "credence", which is, in this case, tied up with the experiment's "exit rules": one separate restaurant visit (or none) for each possible coin-toss-outcome + day-of-the-week combinatorial possibility. Another exit rule could be that SB gets to go to Atelier Crenn at the end of the experiment when the coin landed Heads and to Benu when it landed Tails. In that case, when awakened, she can reason that the coin landed Tails if and only if she will go to Benu (after the end of the experiment). She knew before the experiment began that, in the long run, after many such experiments, she would go to Atelier Crenn and to Benu equally frequently on average. When she awakens, from her new epistemic situation, this proportion doesn't change (unlike what was the case with your proposed exit rules). This supplies a sensible interpretation of the Halfer's 1/2 credence: SB's expectation that she will go to Atelier Crenn half the time (or be equally likely to go to Atelier Crenn) at the end of the current experimental run, regardless of how many times she is pointlessly being asked to guess.
    JeffJo October 11, 2025 at 20:48 #1017863
    Quoting Pierre-Normand
    Another exit rule could be that SB gets to go the Atelier Crenn at the end of the experiment


    Yes, that makes the answer 1/2 BECAUSE IT IS A DIFFERENT PROBLEM. SB is asked once on each waking day, not once at the end. To even try to make it similar, you'd need to take her to two restaurants if the coin landed Tails. In the original, that is equivalent to saying a single day is Tails&Mon and Tails&Tue AT THE SAME TIME.

    This is what I mean by inconsistent reasoning designed to get a specific answer, not to be correct.
    JeffJo October 11, 2025 at 21:04 #1017876
    Quoting ProtagoranSocratist
    It's not wrong

    His explanation for "double halfers" used two coin flips. There is only one coin flip. So it is both incorrect mathematics, and incorrect about the double-halfer's claim.

    I think the problem was created more or less just to see what answers people would come up with, how they would project their logic onto what they read.

    It was created to justify epistemic reasoning, where it does not apply.

    1/2 makes since, since theoretical coinflips

    1/2 does not make sense because it treats the problem unconditionally. It makes the "outside the experiment" interpretation that a single outcome can be represented by two different awakenings.
    ProtagoranSocratist October 11, 2025 at 21:10 #1017881
    Quoting JeffJo
    1/2 does not make sense because it treats the problem unconditionally. It makes the "outside the experiment" interpretation that single outcome can be represented by two different awakenings.


    assuming there is nothing mysterious or "spooky" influencing a coin flip, then the answer is always 50/50 heads or tails. Maybe I misunderstand.

    Quoting JeffJo
    His explanation for "double halfers" used two coin flips. There is only one coin flip. So it is both incorrect mathematics, and incorrect about the double-halfer's claim.


    The folks who created this thought experiment are great at confusing people; that's all I'm really getting out of this as a moral.
    Pierre-Normand October 11, 2025 at 21:34 #1017904
    Quoting JeffJo
    Yes, that makes the answer 1/2 BECAUSE IT IS A DIFFERENT PROBLEM.


    It isn’t a different problem; it’s a different exit rule (scoring rule) for the same coin-toss -> awakenings protocol. The statement of an exit rule is required to disambiguate the question being asked of SB, i.e. how her "credence" is meant to be understood.

    Think of two perfectly concrete versions:

    A. End-of-run dinner (Atelier Crenn vs Benu).

    One coin toss. If Heads, the run generates one awakening (Monday); if Tails, it generates two (Monday+Tuesday). We still ask on each awakening occasion, but the bet is scored once at the end (one dinner: Atelier Crenn if Heads and Benu if Tails). The natural sample here is runs. As many runs are T-runs as are H-runs, so the correct credence for the run outcome is 1/2. The Halfer number reflects this exit rule.

    B. Pay-as-you-go tastings (Atelier Crenn vs Benu vs Quince, as you defined the problem).

    Same protocol, but now each awakening comes with its own tasting bill: the bet is scored each time you’re awakened. The natural sample here is awakenings. T-runs generate more awakenings (one each at Benu and at Quince) than H-runs do (only one awakening at Atelier Crenn); a random awakening is twice as likely to come from Tails as from Heads, so the right credence at an awakening is 2/3. The Thirder number reflects this different exit rule.

    Both A and B are about the same protocol. What changes isn’t the coin or the awakenings. Rather, it’s which dataset you’re sampling when you answer "what’s your credence now?"

    That’s all I meant: the original wording leaves the relevant conditioning event implicit ("this run?" or "this awakening?"). Different people tacitly pick different exit rules, so they compute different frequencies. Once we say which one we’re using, the numbers line up and the apparent disagreement evaporates.

    Your Atelier Crenn tweak doesn’t uniquely solve the initial (ambiguous) problem; it just provides a sensible interpretation by making a specific scorecard explicit.
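
    Here is a minimal simulation sketch of the two scoring rules (Python; the run count is arbitrary, and nothing is assumed beyond the fair coin and the one-or-two-awakenings protocol):

[code]
import random

RUNS = 100_000                      # arbitrary number of simulated runs
t_runs = 0                          # runs in which the coin landed Tails
awakenings = 0                      # total awakening episodes generated
t_awakenings = 0                    # awakening episodes generated by Tails runs

for _ in range(RUNS):
    tails = random.random() < 0.5   # one fair coin toss per run
    wakes = 2 if tails else 1       # Tails -> Monday + Tuesday, Heads -> Monday only
    awakenings += wakes
    if tails:
        t_runs += 1
        t_awakenings += wakes

print("A. scored per run:       P(Tails) =", t_runs / RUNS)              # ~ 0.5
print("B. scored per awakening: P(Tails) =", t_awakenings / awakenings)  # ~ 0.667
[/code]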
    Pierre-Normand October 12, 2025 at 02:37 #1017989
    Quoting ProtagoranSocratist
    assuming there is nothing mysterious or "spooky" influencing a coin flip, then the answer is always 50/50 heads or tails. Maybe I misunderstand.


    It's not something spooky influencing the coin that makes SB's credence in the outcome shift. It's rather the subsequent events putting her in relation with the coin that do so, when those events aren't occurring in a way that is causally (and probabilistically) independent of the coin flip result.

    Using the analogy I've used recently, if someone drops a bunch of pennies on the floor but, due to their reflectance properties, pennies landing Tails are twice as likely to catch your attention from a distance as pennies landing Heads, then, even though any penny that you see shining was equally likely to land Heads or Tails, the very fact that it's a penny that you noticed ensures that it's most likely to be a penny that landed Tails. And the reason isn't spooky at all. It's just because, in a clear sense, pennies that land Tails make you notice them more often (because they're shinier, we're assuming). It can be argued (and I did argue) that the SB situation in the original problem is relevantly similar. Coins landing Tails make SB more likely to be awakened and questioned about them (because of the experiment's protocol, in this case).
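
    In Bayes terms, writing q for the chance that a Heads-up penny catches your eye (so 2q for a Tails-up penny): [math]P(\text{Tails} \mid \text{noticed}) = \frac{2q \cdot \tfrac{1}{2}}{2q \cdot \tfrac{1}{2} + q \cdot \tfrac{1}{2}} = \frac{2}{3}[/math].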

    JeffJo October 12, 2025 at 12:31 #1018077
    Quoting Pierre-Normand
    so when those events aren't occurring in a way that is causally (and probabilistically) independent of the coin flip result.


    When she is awake, what knowledge does she have, related to any other day or coin result?

    This is what seems difficult to accept. SB's "world" consists of one day, and one coin result, and due to the amnesia both are independent of any other "world" represented in another awakening. Illustrating that was the point of my "Camp Sleeping Beauty" variation.
    ProtagoranSocratist October 12, 2025 at 17:04 #1018142
    Quoting Pierre-Normand
    Coins landing Tails make SB more likely to be awakened and questioned about them (because of the experiment's protocol, in this case).


    but that's the reason why the chance of seeing heads and tails stays constant: Sleeping Beauty, by default, must be woken up to participate in the experiment. Every single subsequent time, she must also be reminded of the conditions because of the amnesia drug. The way they phrased the experiment initially means the 1/3 outcome is just a faith-based fantasy, and I can't empathize with people who continue to argue for it, even though I get why, with the phrasing of the experiment (being woken up again with tails), logic would make the 1/3 outcome seem logical.

    Sleeping Beauty is a mythical character who always sleeps until she is woken up for whatever reason. However, there's no part of her story dictating what she remembers and doesn't, so if amnesia drugs are involved, then the experimenters are free to craft the percentage of the time that the outcome shows up... but the original logic dictates that each time the coin will show heads or tails 50% of the time, like with every other coin flip...

    I guess we could start making bets on how many pages this thread will accumulate before everyone loses interest. I'm guessing over 100, hehehe...
    Pierre-Normand October 12, 2025 at 17:25 #1018147
    Quoting ProtagoranSocratist
    Sleeping beauty is a mythical character who always sleeps until she is woken up for whatever reason. However, there's not part of her story dictating what she remembers and doesn't, so if amnesia drugs are involved, then the experimentors are free to then craft the percentage that the outcome shows up...


    She is woken up once when the coin lands Heads and twice when it lands Tails. That is part of the protocol of the experiment. We also assume that the drug only makes her forget any previous awakening episode that may have occurred but not the protocol of the experiment. If that seems implausible to you, you can indeed also assume that she is being reminded of the protocol of the experiment each time she is awakened and interviewed.
    ProtagoranSocratist October 12, 2025 at 17:39 #1018154
    Quoting Pierre-Normand
    you can assume


    This experiment is now becoming "beyond the pale" and "incorrigible" to me...
    Pierre-Normand October 12, 2025 at 18:51 #1018163
    Quoting ProtagoranSocratist
    This experiment is now becoming "beyond the pale" and "incorrigible" to me...


    No worry. You're free to let Sleeping Beauty go back to sleep.
    JeffJo October 12, 2025 at 19:29 #1018176
    Quoting Pierre-Normand
    Yes, that makes the answer 1/2 BECAUSE IT IS A DIFFERENT PROBLEM. — JeffJo


    It isn’t a different problem;


    It's a different probability problem based on the same coin toss. SB has no knowledge of the other possible days, while this answer requires it.
    Pierre-Normand October 12, 2025 at 20:38 #1018194
    Quoting JeffJo
    It's a different probability problem based on the same coin toss. SB has no knowledge of the other possible days, while this answer requires it.


    SB does know the setup of the experiment in advance however. She keeps that general knowledge when she wakes, even if she can’t tell which awakening this is. What varies in our "variants" isn’t the awakenings setup, it’s the exit/score rule that tells us which sample to use when we ask SB "what’s your credence now?"

    From Beauty’s point of view these biconditionals are all true:

    "The coin landed Tails" ? "This is a T-run" ? "This is a T-awakening."

    So a Thirder assigns the same number to all three (2/3), and a Halfer also assigns the same number to all three (1/2). The disagreement isn’t about which event kind the credence talks about (contrary to what I may have misleadingly suggested before). It’s rather about which ratio we’re implicitly estimating.

    Halfer ratio (per-run denominator): count runs and ask what fraction are T. With one toss per run, that stays 1/2.

    Thirder ratio (per-awakening denominator): count awakenings and ask what fraction are T-awakenings. Since T makes more awakenings (2 vs 1), that’s 2/3.

    Same event definitions; different denominators. Making the exit/score rule explicit just fixes the denominator to match the intended scoring:

    End-of-run scoring -> per-run ratio (Halfer number)
    Per-awakening scoring -> per-awakening ratio (Thirder number)
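
    In per-run expected counts (fair coin; notation just for this summary): counting runs, the Tails fraction is [math]\frac{1/2}{1} = \frac{1}{2}[/math]; counting awakenings, it is [math]\frac{1}{1 + 1/2} = \frac{2}{3}[/math]. Same event, different denominators.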
    JeffJo October 13, 2025 at 00:13 #1018239
    Quoting Pierre-Normand
    SB does know the setup of the experiment in advance however.


    Yep. What makes it an independent outcome, is not knowing how the actual progress of the experiment is related to her current situation. This is really basic probability. If you want to see it for yourself, simply address the Camp Sleeping Beauty version.
    Pierre-Normand October 13, 2025 at 00:40 #1018246
    Quoting JeffJo
    Yep. What makes it an independent outcome, is not knowing how the actual progress of the experiment is related to her current situation. This is really basic probability. If you want to see it for yourself, simply address the Camp Sleeping Beauty version.


    I did and I agreed with you that it was a fine explanation of the rationale behind the Thirder interpretation of the original SB problem.
    JeffJo October 13, 2025 at 17:37 #1018388
    Quoting Pierre-Normand
    I did and I agreed with you that it was a fine explanation of the rationale behind the Thirder interpretation


    You may have read it. You did comment on it from that aspect. But you did not address it. The points it illustrates are:

    • That each "day" (where that means the coin toss and the activity that occurred during that awakening), in Mathematical fact, represents a random selection of one possible "day" from the NxN grid. If that activity appears S times in the schedule, and R times in the row, then the Mathematically correct credence for the random result corresponding to that row is R/S. This is true regardless of what the other N^2-S "days" are, even if some are "don't awaken."
    • There is no connection between the "days" in a row. You call these "T-awakenings" or "the H-awakening" in the 2x2 version. They are independent.
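
    Concretely, in the 2x2 version (taking rows as coin results and columns as Monday/Tuesday): the "awake and interviewed" activity appears in S = 3 of the four cells, and in R = 1 cell of the Heads row, so the credence for Heads is [math]R/S = 1/3[/math].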
    Philosophim October 13, 2025 at 17:57 #1018395
    The part to note is that almost all of this is a red herring. It's irrelevant if she remembers or not. It's just word play to get us out of the raw math. The odds are still the same.

    Flip heads, 1 result
    Flip tails, 2 results

    Treat the pile of results as the total possible outcomes. You have 3 possible outcomes. In two of the outcomes, tails was flipped. Put them in a hat and draw one. You have a 2/3 chance that it's a tails outcome.

    To be clear, it is a 50/50 shot as to whether heads or tails is picked, meaning that both are equally likely to occur. But since we have more outcomes on tails, and we're looking at the probability of what already happened based on outcomes, not a prediction of what will happen, it's a 2/3 chance for tails.
    Pierre-Normand October 13, 2025 at 18:10 #1018397
    Quoting JeffJo
    You may have read it. You did comment on it from that aspect. But you did not address it. The points it illustrates are:

    - That each "day" (where that means the coin toss and the activity that occurred during that awakening), in Mathematical fact, represents a random selection of one possible "day" from the NxN grid. If that activity appears S times in the schedule, and R times in the row, then the Mathematically correct credence for the random result corresponding to that row is R/S. This is true regardless of what the other N^2-S "days" are, even if some are "don't awaken."

    - There is no connection between the "days" in a row. You call this "T-awakenings" or "the H-wakening." in the 2x2 version. They are independent.


    I agree with the reasoning and calculation. As I said, this is a standard Thirder interpretation of the problem. It is consistent, coherent and valid. Regarding the second point, the two events that occur when the coin lands Tails are independent only in the sense that when Sleeping Beauty experiences them she can't know which one (i.e. Monday&Tails or Tuesday&Tails) it is. In that sense, they also are independent of Monday&Heads. In another sense, the first two are interdependent since one of them can't occur without the other one also occurring within the same experimental run.

    But the question being asked of SB isn't explicitly about those three "independent" events. It's a question about her credence in the state of the hidden coin at the time when she is being awakened. One interpretation (the Thirder one) of this credence is that it ought to represent the proportion of her indistinguishable awakening episodes that occur while the coin landed Tails. This interpretation yields the probability 2/3. Another one, the Halfer interpretation, is that it ought to represent the proportion of her current awakening runs (which may or may not include two awakening episodes rather than one, and hence may or may not afford SB two opportunities rather than one to express her credence) that occur as a result of the coin having landed Tails. This interpretation yields the probability 1/2. Those two interpretations also have associated with them two different methods of verification, and so are complementary rather than contradictory.

    Consider the variation I had proposed early on in this thread. Let the two awakenings that occur (on Monday and Tuesday) when the coin lands Tails take place in a room located in the West Wing of the Sleeping Beauty Experimental Facility, and the unique awakening that occurs on Monday when the coin lands Heads take place in a room located in the East Wing. On the occasion of an awakening, what is Sleeping Beauty's expectation that when the experiment is over and she is released on Wednesday, she will find herself in the West Wing? Does that not happen 1/2 of the times she is enrolled in such an experiment? Is that not also what her Aunt Sue who must come to pick her up expects? Finally, when she experiences one of the three possible (and indistinguishable) awakening situations, does she learn anything that her Aunt Sue (and she herself previously) didn't already know?
    Pierre-Normand October 13, 2025 at 18:27 #1018400
    Quoting Philosophim
    The part to note is that almost all of this is a red herring. Its irrelevant if she remembers or not. Its just word play to get us out of the raw math. The odds are still the same.

    Flip heads, 1 result
    Flip tails, 2 results

    Put the pile of results as total possible outcomes. You have 3 possible outcomes. In two of the outcomes, tails was flipped. Put it in a hat and draw one. You have a 2/3rd chance that its a tails outcome.

    To be clear, it is a 50/50 shot as to whether heads or tails is picked. Meaning that both are equally like to occur. But since we have more outcomes on tails, and we're looking at the probability of what already happened based on outcomes, not prediction of what will happen, its a 2/3rds chance for tails.


    The issue with her remembering or not is that if, as part of the protocol, she could remember her Monday awakening when the coin landed tails and she is being awakened again on Tuesday, she would be able to deduce that the coin landed Tails with certainty and, when she couldn't remember it, she could deduce with certainty that "today" is Monday (and that the probability of Tails is 1/2). That would be a different problem, and no problem at all.

    Your argument in favor of the Thirder credence that the coin landed Tails (2/3) relies on labeling the awakening episodes "the outcomes". But what is it that prevents Halfers from labelling the experimental runs "the outcomes" instead? A similar ball-picking analogy has also been produced by Berry Groisman to illustrate this ambiguity in his The end of Sleeping Beauty's Nightmare paper (although I don't fully agree with his conclusions).
    Philosophim October 13, 2025 at 20:05 #1018420
    Quoting Pierre-Normand
    The issue with her remembering or not is that if, as part of the protocol, she could remember her Monday awakening when the coin landed tails and she is being awakened again on Tuesday, she would be able to deduce that the coin landed Tails with certainty and, when she couldn't remember it, she could deduce with certainty that "today" is Monday


    Correct. My point was that it's just used as a word-problem way of saying, "We have 3 outcomes we reach into a hat and pull from"

    Quoting Pierre-Normand
    Your argument in favor of the Thirder credence that the coin landed Tails (2/3) relies on labeling the awakening episodes "the outcomes". But what is it that prevents Halfers from labelling the experimental runs "the outcomes" instead?


    Because there are two different outcomes: one with one day, and one with two days. If you pick any day and have no clue if it's a day that resulted from a heads or a tails outcome, it's a 2/3 chance it's the tails outcome. The heads and tails themselves are also irrelevant. The math is, "It's equally likely that we could have a series of one day or of two days back to back in this week. If you pick a day and you don't know the outcome or the day, what are the odds it's a tails day vs a heads day?"

    The odds of whether it's heads or tails are irrelevant since they are the same and can be effectively removed from the problem.

    Pierre-Normand October 13, 2025 at 20:38 #1018425
    Quoting Philosophim
    Correct. My point was that its just used as a word problem way of saying, "We have 3 outcomes we reach into a hat and pull from"


    You are using the word "outcome" ambiguously and inconsistently. In your previous post you had stated that "You have 3 possible outcomes. In two of the outcomes, tails was flipped."

    And now you are saying that:

    Because there are two different outcomes. One with one day, and one with two days. If you pick any day and have no clue if its a day that resulted from a heads or tails outcome, its a 2/3rds chance its the tails outcome. The heads and tails is also irrelevant. The math is, "Its as equally likely that we could have a series of one day or two day back to back in this week. If you pick a day and you don't know the outcome or the day, what's the odds its a tails day vs a heads day?"

    The odds of whether its head or tails is irrelevant since they are the same and can be effectively removed from the problem.


    So, now you are back to treating experimental runs rather than awakening episodes as the "outcomes". This sort of ambiguity indeed is the root cause of the misunderstanding that befalls Halfers and Thirders in their dispute.

    When Sleeping Beauty is being asked, on one particular awakening occasion, what her credence is that the coin landed Tails, she must ponder over what the odds are that the epistemic situation she currently is in (given the information available to her) is such that the coin landed Tails when she is in that situation. In other words, she takes herself to be experiencing one among a range of possible and indistinguishable (from her current point of view) events (or "outcomes") such that a proportion P of them occur when the coin landed Tails, in the long run. All of this leaves it undefined what the events or "outcomes" are that we're talking about.

    Thirders interpret those outcomes as awakening episodes and Halfers interpret them as experimental runs. Their expressed credences, 2/3 and 1/2 respectively, therefore are answers to different questions (or to the same question differently disambiguated, if you will).

    Thirder Sleeping Beauty expects, reasonably enough, that in the long run awakening episodes like the one she is currently experiencing will turn out to have occurred when the coin had landed Tails two thirds of the time.

    Halfer Sleeping Beauty expects, equally reasonably, that, in the long run, experimental runs like the one she is currently experiencing (regardless of how many more times she already was or will be awakened during that run) will turn out to have occurred when the coin had landed Tails one half of the times.

    Credences implicitly are about ratios. Halfers and Thirders disagree about the denominator that is meant to figure in the relevant ratio.
    Philosophim October 13, 2025 at 21:07 #1018430
    Quoting Pierre-Normand
    So, now you are back to treating experimental runs rather than awakening runs as the "outcomes". This sort of ambiguity indeed is the root cause of the misunderstanding that befalls Halfers and Thirders in their dispute.


    I'm not seeing the ambiguity here, but maybe I'm not communicating clearly. There are two outcomes based on context.

    1. Outcome of the coin
    2. Outcome from the result of the coin

    The outcome from the coin is always 50/50. Because of this, you can ignore the outcome and simply say, "It's equally likely that the result of one coin flip will happen as the other result"

    When we look at all the outcomes from the result of the coin, we see that if it's tails, there are two days, and only 1 day if heads. So if you don't know which day it is (previous days can be removed as we 'forget'), then there is a 1/3 chance that your day is a heads day, and a 2/3 chance that your day is a tails day. Is it equally likely that a heads or a tails was flipped in isolation from the days? Yes. But since we have different outcomes from the flip itself, we have to take them all into consideration and realize that, even though either side is equally likely to occur, it is much better odds to say that it's tails, because the consequence of it being tails is that you are awake two days, which makes it twice as likely.

    One more way to think about it: you have a hat that you could either reach into one time or two times. Each is equally likely to occur. You forget how many times you've reached into the hat. Is it more likely that you're on an outcome where you reach in two times, or one time? Obviously it's two times.
    Pierre-Normand October 13, 2025 at 22:15 #1018441
    Quoting Philosophim
    I'm not seeing the ambiguity here, but maybe I'm not communicating clearly. There are two outcomes based on context.


    I assume what you now mean to say is that there are two possible ways to think of the "outcomes" based on context. Well, sure, that's pretty much what I have been saying. But I'm also arguing that the original Sleeping Beauty problem fails to furnish the relevant context.

    If we think of the experimental runs following the coin toss as the "R-outcomes" (an equal number of T-runs and H-runs is expected) and the awakening episodes as the "A-outcomes" (twice as many T-awakenings as H-awakenings are expected), then we've resolved part of the ambiguity. But Sleeping Beauty isn't being asked about specific kinds of outcomes explicitly. Rather, she is being asked about her credence regarding the current state of the coin. She can reason that the current state of the coin is Tails if and only if she is currently experiencing a T-awakening, and hence that the current state of the coin is twice as likely to be Tails as it is to be Heads. But she can also reason that the current state of the coin is Tails if and only if she is currently experiencing a T-run, and hence that the current state of the coin is just as likely to be Tails as it is to be Heads.

    Another way to state the Halfer interpretation that makes it intuitive is to suppose Sleeping Beauty will be given a bag of Twizzlers (T-candy) at the end of the experiment if the coin landed Tails and a bag of Hershey's Kisses (H-candy) if it landed Heads. The fact that she's awakened twice rather than once when she's scheduled to receive Twizzlers doesn't make it more likely that she will receive them at the end of the run. Hence her credence remains 1/2 that she will receive Twizzlers. This is consistent with her credence being 2/3 that her current awakening episode (A-outcome) is one that puts her on a path towards getting Twizzlers. But since the Twizzlers reward is an outcome that is realized if and only if she is currently experiencing a T-awakening, she can sensibly reason that the odds of that are 1/2 also.

    The key to understanding the consistency between the two apparently contradictory credences regarding the very same coin toss result is to realize that the two T-awakening outcomes occur in the same timeline and hence them being more frequent than H-awakenings doesn't increase the relative frequency of the Twizzlers rewards (or of her having experienced a T-run, regardless of how many times she was awakened in this run).
    Philosophim October 13, 2025 at 23:59 #1018453
    Quoting Pierre-Normand
    But Sleeping Beauty isn't being asked about specific kinds of outcomes explicitly. Rather she is being asked about her credence regarding the current state of the coin. She can reason that the current state of the coin is Tails if and only if she is currently experiencing a T-awakening and hence that the current state of the coin is twice as likely to be Tails than it is to be Heads. But she can also reason that the current state of the coin is Tails if and only if she is currently experiencing a T-run and hence that the current state of the coin is equally as likely to be Tails than it is to be Heads.


    She can reason that the coin flip itself is equally likely to land either way, but that doesn't mean the odds for the day she is awake are 50/50. Let's flip it on its head and note the likelihood that she would be wrong.

    If she always guesses heads, she's wrong twice if it's tails. If she always guesses tails, she's only wrong once. Thus, she is twice as likely to be wrong if she guesses heads on any particular day she is woken up, and twice as likely to guess correctly if she guesses tails. If the total odds of guessing correctly were 50/50, then she would have an equal chance of guessing correctly. She does not.

    We can see this by extending the days out. Let's say that if it's heads, she's woken up one day. If it's tails she's woken up 10 days. Again, it's better odds to guess tails, despite the outcome of the coin being equal as mentioned above.

    As I've been noting, because the two coin outcomes are equally probable, you can effectively remove the coin's probability from the equation: r = r can be divided out. Let's change up the odds of the coin flip, and it changes our guess.

    Going back to the original setup, let's say that it's twice as likely that the coin lands on heads. Now the probability of the coin becomes essential to the follow-up outcomes.

    So it's twice as likely that it's heads, but with only one day, versus half as likely for tails, with two days. This appears to equate to 50/50 when considering which day is randomly picked out of the outcomes.

    Now, there is absolutely zero doubt that in the original case it is not a 50/50 outcome. However, I do doubt whether it ends up being exactly 2/3. There may be some multiplicative statistical shenanigans needed here that I'm not aware of, disguised by making the heads/tails outcome equal and eliminating it as a consideration. Perhaps the imaginary case I made above would be more revealing of any special case. Thus I may be wrong that 2/3 is the answer, but there is no question that 1/2 is wrong.
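
    A quick check of that 50/50 hunch, counting days (writing p for the chance of tails): tails gives two waking days and heads one, so the fraction of waking days that are tails days is [math]\frac{2p}{2p + (1 - p)}[/math]. With heads twice as likely, [math]p = \tfrac{1}{3}[/math], and that comes to [math]\frac{2/3}{4/3} = \tfrac{1}{2}[/math], matching the 50/50 above.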
    Pierre-Normand October 14, 2025 at 00:25 #1018457
    Quoting Philosophim
    She can reason that its equally likely that the result of the coin flip is 50/50, but that doesn't mean its likely that the day she is awake is 50/50.


    Sure, but the former is precisely what she is being asked. She is being asked what her credence about the coin is on that occasion, and not what proportion of such occasions are T-occasions. One can argue that she is being asked this implicitly, but in that case it's still open to interpretation what those "occasions" are meant to be, as we've already discussed.

    Lets flip it on its head and note how the likelihood that she would be wrong.

    If she always guesses heads, she's wrong twice if its tails. If she always guesses tails, she's only wrong once. Thus, she is twice as likely to be wrong if she guesses heads on any particular day woken up, and twice as likely to guess correctly if she guesses tails. If the total odds of guessing correctly were 50/50, then she would have an equal chance of guessing correctly. She does not.


    That's right, and this is a good argument favoring the Thirder position, but it relies on explicitly introducing a scoring procedure that scores each occasion on which she has to express her credence: once for each awakening episode. If you would rather score those statements only once per run, regardless of how many times she is being asked about her credence in that run, then she would be right half the time. This also makes sense if you view all of the separate awakening episodes occurring during a single Tails run as part of the same "outcome" (as you've indeed yourself favored doing earlier).
    Philosophim October 14, 2025 at 00:41 #1018460
    Quoting Pierre-Normand
    Sure, but the former precisely is what she is being asked. She is being asked what her credence about the coin will be on that occasion, and not what the proportion of such occasions are that are T-occasions.


    I would say she is being asked what the odds are of it being a day in which a T side vs an H side coin is flipped. If she's only being asked what the percent chance of the coin flip itself ended up being, the answer is always 50/50. The odds of the coin flip result don't change whether it's 1 or 1,000,000 days. What changes comes from the result of that coin flip, and that is the pertinent data that is important for getting an accurate answer.

    This is very similar to the old Monty Hall problem. You know the three doors, make a guess, then you get to make another guess: do you stay or change?

    On the first guess, its always a 1/3 shot of getting the door wrong. But it can also be seen as a 2/3 chance of getting the door wrong. When given another chance, you simply look at your first set of odds and realize you were more likely than not wrong, so you change your answer. The result matches the odds.

    Same with the situation here. Run this experiment 100 times and have the person guess heads for the first 50 runs, then tails for the last 50. While guessing tails, they will be right about 2/3 of the time, much more often than while guessing heads. Since outcomes ultimately determine whether our odds were correct, we can be confident that 1/2 odds is incorrect.

    By the way, very nice discussion! I appreciate your insight and challenging me to view things I might not have considered.
    Pierre-Normand October 14, 2025 at 01:22 #1018472
    Quoting Philosophim
    I would say she is being asked what the odds are of it being a day in which a T side vs a H side coins is flipped.


    I was talking about what she is being asked, literally, in the original formulation of the problem discussed in the OP. From Wikipedia:

    This has become the canonical form of the problem: [...] During the interview Sleeping Beauty is asked: "What is your credence now for the proposition that the coin landed heads?"

    Your reading is a sensible interpretation of this question, but it isn't the only one.

    If she's only being asked what the percent chance of the coin ended up being at, the answer is always 50/50. The odds of the coin flip result don't change whether its 1 or 1,000,000 days. What changes is from the result of that coin flip, and that is the pertinent data that is important to get an accurate answer.


    Yes, quite, but now with the phrase "the result of that coin flip" you still seem to be gesturing to the "occasions" being generated by those coin flip results. And how is it that those events are meant to be individuated (one occasion per run or one per awakening) is what's at issue.

    This is very similar to the old Monty Hall problem. You know the three doors, make a guess, then you get to make another guess do you stay or change?

    On the first guess, its always a 1/3 shot of getting the door wrong(sic). But it can also be seen as a 2/3 chance of getting the door wrong. When given another chance, you simply look at your first set of odds and realize you were more likely than not wrong, so you change your answer. The result matches the odds.


    That's not quite how the Monty Hall problem is set up, so I'm unsure about the analogy you're intending to make. The contestant has a 1/3 chance of getting the prize-hiding door right on their first try. But the game host then reveals, among the two other doors, one that they (i.e. the host) know to be hiding a goat. The contestant then is being offered an opportunity to switch their initial choice for the other unopened door. The correct reasoning is that since contestants that switch their choices will win the prize on each occasion where their first choice was wrong, they have a 2/3 chance of winning the prize if they switch their choice.

    Same with the situation here. Run this experiment 100 times and have the person guess heads 50 times, then tails 50 times. The person who guesses tails every time 50 times will be right 2/3rds of the time more than the first. Since outcomes ultimately determine if we are correct in our odds, we can be confident that 1/2 odds is incorrect.


    Yes, the person who guesses that the coin landed Tails will turn out to have made a correct guess two thirds of the time on average, thereby matching the Thirder credence. But the Halfer argues that this is just because they were able to make more guesses during T-runs. Under the Halfer interpretation of the meaning of SB's credence, though, her being afforded more opportunities to express her credence during T-runs doesn't make those runs more likely to happen, and this is what matters to them.

    By the way, very nice discussion! I appreciate your insight and challenging me to view things I might not have considered.


    Cheers!
    JeffJo October 14, 2025 at 18:11 #1018599
    Quoting Pierre-Normand
    On the occasion of an awakening, what is Sleeping Beauty's expectation that when the experiment is over ...


    This is what invalidates your variation. She is asked during the experiment, not before or after. Nobody contests what her answer should be before or after. And you have not justified why her answer inside the experiment should be the same as outside.
    Quoting Pierre-Normand
    The issue with her remembering or not is that if, as part of the protocol, she could remember her Monday awakening when the coin landed tails and she is being awakened again on Tuesday, she would be able to deduce that the coin landed Tails with certainty and, when she couldn't remember it, she could deduce with certainty that "today" is Monday (and that the probability of Tails is 1/2). That would be a different problem, and no problem at all.


    Gee, being given a different set of information (knowing that it is not Tuesday-after-Heads is information, as is all of this) produces different conditional probabilities. Possibly certainties. And?

    Here's one more attempt. It's really the same thing that you keep dodging by changing the timing of the question, and claiming that I have "valid thirder logic" while ignoring that it proves the halfer logic to be inconsistent.

    • Get three opaque note cards.
    • On one side of different cards, write "Monday and Heads," "Monday and Tails," and "Tuesday and Tails."
    • Turn the cards over, shuffle them around, and write "A," "B," and "C" on the opposite sides.
    • Before waking SB on the day(s) she is to be woken, put the appropriate card on the table in her room, with the letter side face up.


    Let's say she sees the letter "B." She knows, as a Mathematical fact, that there was a 1/3 probability that "B" was assigned to the card with "Heads" written on the other side. And a 2/3 chance for "Tails."

    By halfer logic, while her credence that "Heads" is written on the "B" card must be 1/3, her credence that the coin landed on Heads is 1/2. This is a contradiction - these two statements represent the same path to her current state of knowledge, regardless of what day it is.

    Your justification for considering the halfer logic is that there may be a different path that _includes_ her current state of knowledge and another that has a different day written on the card. My reason for rejecting that justification outright, besides the contradiction that it produces, is that this other day is not part of the path she sees.
    Pierre-Normand October 14, 2025 at 22:49 #1018627
    Quoting JeffJo
    On the occasion of an awakening, what is Sleeping Beauty's expectation that when the experiment is over ...
    — Pierre-Normand

    This is what invalidates your variation. She is asked during the experiment, not before or after. Nobody contests what her answer should be before or after. And you have not justified why her answer inside the experiment should be the same as outside.


    It looks like you didn't parse the sentence fragment that you quoted correctly. It is indeed on the occasion of an awakening (as I said) that she is being asked about her credence regarding the coin, not later. I did make reference in the question to the end-of-run verification conditions of the credence statement that SB is asked to express. The reason this reference is made (to the future verification conditions) is to disambiguate the sense of the question, in accordance with the Halfer interpretation in this case. But the question still is about her credence "now".

    Compare: (1) "What are the chances, now, that your lottery ticket is the winning ticket?" and (2) "What are the chances, now, that your lottery ticket has the number that will be drawn as the winning number?" It's the exact same question and the odds are the same (one in a million, say). The second question merely makes explicit what "winning" means.

    Here's one more attempt. It's really the same thing that you keep dodging by changing the timing of the question, and claiming that I have "vallid thirder logic" while ignoring that it proves the halfer logic to be inconsistent.

    Get three opaque note cards.
    On one side of different cards, write "Monday and Heads," "Monday and Tails," and "Tuesday and Tails.
    Turn the cards over, shuffle them around, and write "A," "B," and "C" on the opposite sides.
    Before waking SB on the day(s) she is to be woken, put the appropriate card in the table in her room, with the letter side face up.

    Let's say she sees the letter "B." She knows, as a Mathematical fact, that there was a 1/3 probability that "B" was assigned to the card with "Heads" written on the other side. And a 2/3 chance for "Tails."

    By halfer logic, while her credence that "Heads" is written on the "B" card must be 1/3, her credence that the coin landed on Heads is 1/2. This is a contradiction - these two statements represent the same path to her current state of knowledge, regardless of what day it is.


    The Halfer logic is to reason that although T-runs, unlike H-runs, are such that SB will be presented with a card two different times (once on Monday and once on Tuesday), nevertheless, on each occasion where the experiment is performed and she finds herself being involved in an individual experimental run, the likelihood that this run is a T-run is 1/2. That's because in the long run there are as many T-runs as there are H-runs. By that logic she can also reason that, on a particular occasion where she is being awakened, the chances that she is holding the "Monday and Heads" card are 1/2. The fact that she now finds her card to be labelled "B" doesn't alter those odds, since the labelling procedure is probabilistically independent from the coin toss result and hence this observation conveys no information to her regarding the coin toss result.
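
    To make the independence explicit: the letters are assigned uniformly at random and independently of the coin, so [math]P(\text{label B} \mid \text{Heads}) = P(\text{label B} \mid \text{Tails}) = \tfrac{1}{3}[/math]; the likelihood ratio is 1, and the posterior equals whichever prior (1/2 or 2/3) one started from.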

    By the way, for the same reason, it would not convey any information to her either from a Thirder perspective. Her credence that the coin landed Tails would remain 2/3 before and after she saw the "B" label on her card.

    Remember: SB isn't betting on the card (neither is she betting on the current awakening episode). She's betting on the current coin toss outcome. How those outcomes must be considered to map to her possible ways to experience them (in separate awakenings or separate runs) is a matter of interpretation that isn't spelled out in the original SB problem. It's true, though, that under the Thirder interpretation, betting on the wakening episodes or betting on the cards is equivalent. But that's just because the cards and the awakening episodes are mapped one-to-one.
    JeffJo October 15, 2025 at 18:03 #1018840
    Quoting Pierre-Normand
    The reason this reference is made (to the future verification conditions) is to disambiguate the sense of the question, ...

    Perhaps you didn't parse correctly. There is no ambiguity. If she is asked to project her state of knowledge on Wednesday, or to recall it from Sunday, of course the answer is 1/2.

    Where, exactly, do you think projection or recollection is implied or stated? You are forcing this issue into a place where it does not belong, in order to justify saying that 1/2 is a possible answer.

    Remember: SB isn't betting on the card (neither is she betting on the current awakening episode). She's betting on the current coin toss outcome.


    I keep looking at the problem, and I can't find a reference to betting anywhere. The reason I don't like using betting is because anybody can re-define how and when the bet is made and/or credited, in order to justify the answer they like. One is correct, and one is wrong.

    They ask her one question after each time she awakens, however: What is the probability that the coin shows heads.


    Do you see the words "landed when it was flipped" here? No? How about "will be showing after the experiment ends"? Still no? Then stop inserting them. What is the probability that the coin is showing "Heads"? This is in the present tense.

    So, if a bet were to exist, and assuming she uses the same reasoning each time? She risks her $1 during the interview, and is credited her winnings then also. If she bets $1 on Heads with 2:1 odds, she gains $2 if the coin landed Heads, and loses 2*$1 if it landed on Tails. If she bets on Tails with 1:2 odds, she loses $1 if the coin landed Heads, and gains 2*$0.50=$1 if it landed Tails.

    But if she bets $1 on Heads with 1:1 odds, she gains $1 if the coin landed Heads, and loses 2*$1=$2 if it landed on Tails. If she bets on Tails with 1:1 odds, she loses $1 if the coin landed Heads, and gains 2*$1=$2 if it landed Tails.
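
    Summing those cases as per-run expected values (same $1 stake and payoffs as above): betting Heads at 1:1 comes to [math]\tfrac{1}{2}(+1) + \tfrac{1}{2}(-2) = -\tfrac{1}{2}[/math] dollars per run and betting Tails at 1:1 to [math]\tfrac{1}{2}(-1) + \tfrac{1}{2}(+2) = +\tfrac{1}{2}[/math], while at 2:1 and 1:2 both bets come to exactly zero. The odds that make the bets fair are the 1/3 odds.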

    The answer, to the question that was asked and not what you want it to be, is 1/3.



    Michael October 15, 2025 at 19:13 #1018857
    Quoting Pierre-Normand
    Her Thirder-credence would then be pragmatically relevant to selecting the destination most likely to afford her a sunny trip.


    Again, there's not much sense in this so-called "pragmatically relevant" credence. Even before being put to sleep – and even before the die is rolled – I know both that the die is most likely to not land on a 6 and that betting that it did will offer the greater expected return in the long run. So after waking up I can – and will – continue to know that the die most likely did not land on a 6 and that betting that it did will offer the greater expected return in the long run, and so I will bet against my credence.

    With respect to "pragmatic relevance", Thirder reasoning is unnecessary, so if there's any sense in it it must be somewhere else.

    Quoting Pierre-Normand
    Under the Thirder interpretation, all three of those biconditionally related "experienced" events are actual on average 2/3 of the times that SB is experiencing a typical awakening episode.


    My argument is that a rational person should not – and would not – reason this way when considering their credence, and this is most obvious when I am woken up 2[sup]101[/sup] times if the coin lands heads 100 times in a row (or once if it doesn't).

    It is true that if this experiment were to be repeated 2[sup]101[/sup] times then we could expect [math]2\over3[/math] of all awakenings to occur after the coin landed heads every time, but it's also irrelevant. The experiment is only performed once. I strongly believe that it is irrational for one's credence to consider this long term average; a rational person, after waking up and knowing that the experiment is only performed once, will only consider the sheer improbability of the coin landing heads every time. Their credence remains [math]{1\over{2^{100}}}[/math]. There is no ambiguity in the question or the answer.
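
    (To check the 2/3 figure: across [math]2^{101}[/math] repetitions one expects about two "all heads" runs, contributing [math]2 \cdot 2^{101} = 2^{102}[/math] awakenings, against roughly [math]2^{101}[/math] awakenings from all the other runs, and [math]\frac{2^{102}}{2^{102} + 2^{101}} = \frac{2}{3}[/math].)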

    Thirder reasoning only has its place, if it has a place at all, if both a) the experiment is repeated 2[sup]101[/sup] times and b) Sleeping Beauty is also made to forget between experiments. It matters that the problem does not stipulate these two conditions.
    Pierre-Normand October 16, 2025 at 02:54 #1018932
    Quoting JeffJo
    Perhaps you didn't parse correctly. There is no ambiguity. If she is asked to project her state of knowledge on Wednesday, or to recall it from Sunday, of course the answer is 1/2.


    I was explicitly referring to her state of knowledge at the time when the interview occurs. There is no projection of this state into the future. Likewise, when you buy a lottery ticket and express your credence that it is the winning ticket as one in one million, say, what you mean is that there is a one in one million chance that, when the winning number will be drawn (or will be revealed, if it has already been drawn), your ticket will be the winner. You're not projecting your state of knowledge into the future. You're merely stating the conditions of verification regarding what your present state of knowledge (i.e. your credence) is about.

    I keep looking at the problem, and I can't find a reference to betting anywhere. The reason I don't like using betting is because anybody can re-define how and when the bet is made and/or credited, in order to justify the answer they like. One is correct, and one is wrong.


    Establishing a betting protocol with a well-defined payout structure enables SB to put her money where her mouth is, and also clarifies what it is that her stated credence is about. It highlights the tension in saying that you have a high credence that some outcome is true but that you wouldn't bet on it. Once you acknowledge that it is rational to make an even-money bet on an outcome that you believe to be more likely to occur (or to be actual) than not, then the specification of the payout structure helps clarify what the stated credence is about (i.e. what it is exactly that you take to be most likely to occur, or to be actual). This indeed goes beyond the original statement of the problem, but since it is precisely my contention that the original statement is ambiguous, it's a useful way to highlight the ambiguity.

    So, if a bet were to exist, and assuming she uses the same reasoning each time? She risks her $1 during the interview, and is credited her winnings then also. If she bets $1 on Heads with 2:1 odds, she gains $2 if the coin landed Heads, and loses 2*$1 if it landed on Tails. If she bets on Tails with 1:2 odds, she loses $1 if the coin landed Heads, and gains 2*$0.50=$1 if it landed Tails.

    But if she bets $1 on Heads with 1:1 odds, she gains $1 if the coin landed Heads, and loses 2*$1=$2 if it landed on Tails. If she bets on Tails with 1:1 odds, she loses $1 if the coin landed Heads, and gains 2*$1=$2 if it landed Tails.

    The answer, to the question that was asked and not what you want it to be, is 1/3.


    Indeed, such a payout structure clarifies what the bettor means when they express a 2/3 credence that the coin landed Tails. They mean that the epistemic situations that they find themselves in when awakened are T-awakenings (as opposed to H-awakenings) two thirds of the time. A different payout structure that rewards the bettor only once when they place winning bets during an experimental run clarifies what the bettor means when they express a 1/2 credence that the coin landed Tails. They mean that the epistemic situations that they find themselves in when awakened are T-runs (as opposed to H-runs) one half of the time. As @Michael correctly argues in defence of a Halfer interpretation, merely being afforded more opportunities to bet on a given ticket (outcome) doesn't make it any more likely that this ticket is the winning ticket (or that the outcome is actual).
    Pierre-Normand October 16, 2025 at 05:18 #1018947
    Quoting Michael
    Again, there's not much sense in this so-called "pragmatically relevant" credence. Even before being put to sleep – and even before the die is rolled – I know both that the die is most likely to not land on a 6 and that betting that it did will offer the greater expected return in the long run. So after waking up I can – and will – continue to know that the die most likely did not land on a 6 and that betting that it did will offer the greater expected return in the long run, and so I will bet against my credence.

    With respect to "pragmatic relevance", Thirder reasoning is unnecessary, so if there's any sense in it it must be somewhere else.


    It is indeed somewhere else. Look at the payout structure that @JeffJo proposed in their previous post. Relative to this alternative payout structure, your own Halfer reasoning is unnecessary.

    My argument is that a rational person should not – and would not – reason this way when considering their credence, and this is most obvious when I am woken up 2^101 times if the coin lands heads 100 times in a row (or once if it doesn't).

    It is true that if this experiment were to be repeated 2^101 times then we could expect 2/3 of all awakenings to occur after the coin landed heads every time, but it's also irrelevant.


It's only irrelevant to the determination of your credence about the experimental run that you are experiencing (regarding what proportion of such runs are T-runs). Regarding the determination of your credence about the specific awakening episode that you are experiencing, though, it's rather the fact that T-runs and H-runs are equally frequent that is irrelevant. Taking the case to such wild extremes, though, makes your intuition about the comparative utility of betting on such unlikely outcomes (i.e. H-awakenings), relative to the utility of betting on the likeliest outcome (T-awakenings), play into your intuition about the rational credence. (Why would anyone risk a virtually guaranteed and useful $1 for an infinitesimal chance of winning a bazillion dollars that one wouldn't even be able to stash away in a Sun-sized vault?) But that's just a psychological fact. Using more sensible win/loss ratios of 2/3 vs 1/3, or 6/11 vs 5/11 in the die case, doesn't reveal anything odd about the Thirder interpretation of her credence, or about her betting behavior.

Thirder reasoning only has its place, if it has a place at all, if both a) the experiment is repeated 2^101 times and b) Sleeping Beauty is also made to forget between experiments. It matters that the problem does not stipulate these two conditions.

The experiment need not be repeated many times for SB's expression of her 2/3 credence (under the Thirder interpretation of her credence) to make sense, or for her associated betting behavior to be rational. The case of a single experimental run (following a single coin flip) was addressed specifically in my Leonard Shelby Christmas gift case. You can refer to it for the relevant Bayesian updating calculations, but here is another variation that may be more intuitive:

    For this year’s annual cocktail party at the Sleeping Beauty Experimental Facility, Leonard Shelby is among the guests. Drinks are being served by two butlers: Alfred and Lurch. Each guest is entitled to three complimentary drinks.

    In Leonard's case, as with every other guest, a fair coin is secretly tossed beforehand. If the coin lands Tails, Alfred is assigned to serve him two of his drinks and Lurch one. If it lands Heads, their roles are reversed: Lurch serves two drinks and Alfred one. The guests are informed of this protocol, and Leonard has made a note of it in his memento notepad.

    Because of his anterograde amnesia, Leonard cannot keep track of how many drinks he has already received, if any. Nor does he initially recognize either butler by name, but he can read their name tags when they approach him.

    A final feature of the protocol is that, at the end of the evening, guests whose coin landed Tails (and thus received two drinks from Alfred and one from Lurch) will be given a bag of Twizzlers ("T-candy"). Those whose coin landed Heads will receive a bag of Hershey’s Kisses ("H-candy").

    At any given moment during the party, when Leonard sees a butler bringing him a drink, his credence that this drink is unique (that is, not one of two planned drinks from the same butler) is 1/2, as is his credence that the coin landed Heads. However, upon reading the name tag and discovering that the butler is Alfred, he updates his credence that the coin landed Tails to 2/3, since there are twice as many situations in which the coin landed Tails and Alfred serves him a drink as there are situations where the coin landed Heads and Alfred serves him one. This mirrors the Thirder interpretation in the Sleeping Beauty problem.

    That seems straightforward enough until someone asks him, "So, Leonard, do you think you’ll get Twizzlers or Kisses at the end of the night?"

    He frowns, checks his notepad, and realizes that by the same reasoning that gave him 2/3 for Tails a moment ago, he ought also to think he's more likely than not to get Twizzlers. But that can't be right. The coin decides both outcomes, doesn’t it?

    The trick, of course, is in what Leonard’s belief precisely is about when he thinks about the coin toss "outcome". When he reasons about this drink—the one Alfred is serving him—he’s locating himself among drink-moments. In that frame, a Tails-run simply generates twice as many such moments involving Alfred. But when he wonders what candy he'll get later, he's no longer locating himself in a drink-moment but in an entire "run of the evening": the single history that will end either in Twizzlers or Kisses. And there, each run counts only once, no matter how many times Alfred appeared in it.

Two T-drinks in a Tails-run correspond to just one Twizzlers outcome (in the same timeline), while one H-drink in a Heads-run corresponds to one Kisses outcome. Once you factor that mapping in, the overall odds of Twizzlers or Kisses even out again. (Experiencing one of the two T-drink events doesn't exclude, but rather ensures, the actuality of the other T-drink event in the same timeline.)

    So Leonard’s probabilities fit together neatly after all. In the middle of the party, as Alfred hands him a glass, he can think, "This is probably a T-drink". Yet, looking ahead to the end of the night, he can just as honestly write in his notebook, "Chances of T-candy: fifty-fifty."
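
A quick way to see that Leonard's two answers are compatible is to tally both ratios. The following is only a sketch of the scenario just described (the variable names, the way the butler assignment is encoded, and the trial count are mine): it counts, over many simulated evenings, how often the coin landed Tails among the drink-moments served by Alfred, and how often an evening ends in Twizzlers.

[code]
import random

trials = 1_000_000
alfred_moments = 0   # drink-moments where the server is Alfred
alfred_tails = 0     # ...and the guest's coin landed Tails
tails_runs = 0       # evenings that end with Twizzlers (T-candy)

for _ in range(trials):
    tails = random.random() < 0.5
    tails_runs += tails
    # Tails: Alfred serves two drinks and Lurch one. Heads: roles reversed.
    servers = ["Alfred", "Alfred", "Lurch"] if tails else ["Lurch", "Lurch", "Alfred"]
    for server in servers:   # the three drink-moments of the evening
        if server == "Alfred":
            alfred_moments += 1
            alfred_tails += tails

print("P(Tails | Alfred is serving this drink):", round(alfred_tails / alfred_moments, 3))  # ~0.667
print("P(Twizzlers at the end of the evening): ", round(tails_runs / trials, 3))            # ~0.5
[/code]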
    JeffJo October 16, 2025 at 12:53 #1019022
    Quoting Pierre-Normand
    Your argument in favor of the Thirder credence that the coin landed Tails (2/3) relies on labeling the awakening episodes "the outcomes".

    Uh, yeah?

    Write "Heads and Monday" on one notecard. Write "Tails and Monday" on another, and "Tails and Tuesday" on a third. Turn them over, and shuffle them. Then write "A," B," and "C" on the other sides.

    Pick one. What is the probability that it says "Heads" on the other side? What is the probability that it says "Tails" on the other side? Call me silly, but I'd say 1/3 and 2/3, respectively.

    Each morning of the experiment when SB is to be awakened, put the appropriate card on a table in her room, with the letter side up. Hold the interview at that table.

What is the probability that the card, regardless of what letter she sees, says "Heads" on the other side? Or "Tails?" This "outcome" can be defined by the letter she sees. But that is not what defines an outcome, which is the description of the experiment's result as it stands within SB's knowledge. If she wakes on a different day, that is a different result. The fact that both are determined by the same coin flip does not make them the same result.

    Now, did these probabilities change somehow? For which letter(s) do they change? Or are they still 1/3 and 2/3?

    Within SB's knowledge, is not the outcome where it says "Heads" the exact same outcome where the coin is showing Heads? And the same with "Tails?" If it says "Tails and Monday," is there not another interview along this same path where it says "Tails and Tuesday?" Does that change the probability that this card says "Tails?" How does that carry over to the one time it would say "Heads?"

    Again, halfers are constructing inconsistent logic to support the answer they desire. Not using valid logic to answer the question.

    But what is it that prevents Halfers from labelling the experimental runs "the outcomes" instead?

    Because it is not both Monday, and Tuesday, when she is asked the question? What else may or may not happen is irrelevant.

    Quoting Pierre-Normand
    That's right, and this is a good argument favoring the Thirder position but it relies on explicitly introducing a scoring procedure that scores each occasion that she has to express her credence: once for each awakening episode.

    A "scoring procedure" based on imagined repeats is a way of testing your probabilities, not of defining it. It does not work in the SB problem, as should be painfully obvious, because each side will define the number of trials differently since repeated runs require looking at more than one outcome, and the number changes based on the subject event.
    Pierre-Normand October 17, 2025 at 06:27 #1019249
    Quoting JeffJo
    Write "Heads and Monday" on one notecard. Write "Tails and Monday" on another, and "Tails and Tuesday" on a third. Turn them over, and shuffle them. Then write "A," B," and "C" on the other sides.

    Pick one. What is the probability that it says "Heads" on the other side? What is the probability that it says "Tails" on the other side? Call me silly, but I'd say 1/3 and 2/3, respectively.

    Each morning of the experiment when SB is to be awakened, put the appropriate card on a table in her room, with the letter side up. Hold the interview at that table.

What is the probability that the card, regardless of what letter she sees, says "Heads" on the other side? Or "Tails?" This "outcome" can be defined by the letter she sees. But that is not what defines an outcome, which is the description of the experiment's result as it stands within SB's knowledge. If she wakes on a different day, that is a different result. The fact that both are determined by the same coin flip does not make them the same result.

    Now, did these probabilities change somehow? For which letter(s) do they change? Or are they still 1/3 and 2/3?


    In the first case you described, a single run of the experiment consists in randomly picking one of three cards. When an outcome is determined, the remaining two possibilities collapse, since the three are mutually exclusive.

    In the second case, which mirrors the Sleeping Beauty protocol more closely, two of the possible outcomes, namely "Monday & Tails" and "Tuesday & Tails," are not mutually exclusive. In modal logical terms, one is "actual" if and only if the other is, even though they do not occur at the same time. This is unlike the relationship either has to "Monday & Heads," which is genuinely exclusive. Picking "Monday & Tails" guarantees that "Tuesday & Tails" will be picked the next day, and vice versa. They are distinct events but belong to the same timeline. One therefore entails the other.

    It’s precisely this relation of entailment, rather than exclusion, that explains why the existence of two separate occasions for Sleeping Beauty to find herself in a Tails-awakening does not dilute the probability of her finding herself in a Heads-timeline, on the Halfer interpretation.

    In other words, one third of her awakenings are "Mon & Tails," one third are "Tue & Tails," and one third are "Mon & Heads," vindicating her 1/3 credence that her current awakening is a Heads awakening. But since the two "Tails" awakenings always occur sequentially within the same timeline, they jointly represent two occasions for her to experience a single Tails-run: one that remains just as frequent as a Heads-run overall.

    Thus the "Thirder credence" in Heads outcomes (1/3) and the "Halfer credence" in Heads timelines (1/2) are both valid, but they refer to different ratios: the first to occasions of experience, the second to timelines of outcomes. Crucially, this is true even though on both accounts the target events (whether awakenings or timelines) occur if and only if the coin landed Heads.
    Michael October 17, 2025 at 11:05 #1019299
    Quoting Pierre-Normand
    Relative to this alternative payout structure, your own Halfer reasoning is unnecessary.


    Yes, I have tried to argue this point several times. A rational person's credence in the outcome of the coin toss is unrelated to the betting strategy that yields the greater expected return in the long run, and is why any argument to the effect of "if I bet on Tails then I will win [math]2\over3[/math] bets, therefore my credence that the coin landed on Tails is [math]2\over3[/math]" is a non sequitur. The most profitable betting strategy is established before being put to sleep when one’s credence is inarguably [math]1\over2[/math], showing this disconnect.

    After waking up you just either believe that the coin most likely landed on Tails or you don't, and I think my extreme example shows that no rational person’s credence will be based on some counterfactual ratio of awakenings in the way that Thirders say. It seems absurd for anyone to answer anything other than [math]1\over{2^{100}}[/math], regardless of how you “choose” to interpret the question.
    Pierre-Normand October 17, 2025 at 20:39 #1019404
    Quoting Michael
    Yes, I have tried to argue this point several times. A rational person's credence in the outcome of the coin toss is unrelated to the betting strategy that yields the greater expected return in the long run, and is why any argument to the effect of "if I bet on Tails then I will win 2/3 bets, therefore my credence that the coin landed on Tails is 2/3" is a non sequitur. The most profitable betting strategy is established before being put to sleep when one’s credence is inarguably 1/2, showing this disconnect.


    Your argument is too quick and glosses over essential details we already rehearsed. We agreed that when there are two mutually exclusive outcomes A and B, there isn’t a valid inference from "I am rationally betting on outcome A" to "My credence in A is highest." But that’s not because there is no rational connection between betting strategies and credences. It’s rather because, as we also seemed to agree, the rational choice of a betting strategy depends jointly on your credences in the outcomes and the given payout structure. Hence, if the cost of placing a bet is $1, and if my credence in Tails being realized whenever I place a bet is twice my credence in Heads being realized on such occasions, and the payout structure is such that I’m paid $2 each time I’ve placed a bet when the coin lands Tails, then it’s rational for me to bet on Tails. The reason why it’s rational is that (1) I am paid back $2 each time I place such a bet and (2) I expect Tails to be realized twice as often on occasions such as the present one when I place a bet (my credence), which yields an expected value of $1.33. The second consideration therefore remains part of the equation.

    What a Halfer would typically object to (and you yourself have sometimes argued) is that this has no bearing on SB’s credence regarding the odds that the coin landed Tails for her current experimental run (as determined by the coin toss), which credence is independent of the number of awakenings (or betting opportunities) that occur during that run. They can illustrate this with a payout structure that awards $2 per experimental run regardless of SB’s number of guessing opportunities. In that case, SB rationally expects to break even because (1) she expects the Tails outcome to be realized just as frequently as the Heads outcome across runs (regardless of how many times she is awakened within a run) and (2) the payout structure (matching the odds of the outcome being realized while a bet was placed) nullifies the expected value.

    In summary, rational credence doesn’t float free of betting; it aligns with whatever gets checked. If we check one answer per run, rational calibration yields 1/2. If we check one answer per awakening, rational calibration yields 2/3 (or 6/11 in the die case). The same coin is being talked about, but the Halfer and Thirder interpretations of SB’s credence refer to different scorecards. Given one scorecard and one payout structure, everyone agrees on the rational betting strategy in normal cases. I’ll address your extreme case separately, since it appeals to different (nonlinear) subjective utility considerations.
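
The two scorecards can also be written out as a short calculation, assuming a $1 stake per occasion that gets checked (the dollar figures are the ones used above):

[code]
# Per-awakening scorecard: a $1 bet on Tails is checked at every awakening and
# returns $2 gross when the coin landed Tails; two thirds of awakenings are T-awakenings.
p_tails_per_awakening = 2 / 3
print(round(2 * p_tails_per_awakening, 2))   # 1.33 gross per $1 staked, as above

# Per-run scorecard: one $1 bet per run, returning $2 gross on a Tails run;
# half of the runs are T-runs, so SB expects to break even.
p_tails_per_run = 1 / 2
print(2 * p_tails_per_run)                   # 1.0 gross per $1 staked
[/code]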
    JeffJo October 18, 2025 at 18:25 #1019569
    Quoting Pierre-Normand
    In the second case, which mirrors the Sleeping Beauty protocol more closely, two of the possible outcomes, namely "Monday & Tails" and "Tuesday & Tails," are not mutually exclusive.

Oh? You mean that a single card can say both "Monday & Tails" and "Tuesday & Tails?" Please, explain how.

    "What is your credence in the fact that this card says "Heads" on the other side? This is unquestionably 1/3.

    "What is your credence in the fact that the coin is currently showing Heads?" This is unquestionably an equivalent question. As is ""What is your credence in the fact that [i]the coin landed on Heads/i]?"

    I realize that you want to make the question about the entire experiment. IT IS NOT. I have shown you over and over again how it leads to contradictions. Changing the answer between these is one of them.

    Now, the fact that the coin shows Tails on Tuesday, if a question can be asked, certainly is the same fact as it was on Monday. But SB's knowledge set does not allow a connection between these.

    In modal logical terms, one is "actual" if and only if the other is

    And how is this relevant to SB?

    even though they do not occur at the same time.

    No. BECAUSE ONE EXISTS IN HER "WORLD," AND THE OTHER DOES NOT.

    [QUOTE]Picking "Monday & Tails" guarantees that "Tuesday & Tails" will be picked the next day, and vice versa. They are distinct events but belong to the same timeline. One therefore entails the other.[/quote]
    And how does this affect what SB's credence should be, when she does not have access to any information about "timelines?"

    AGAIN: You are constructing invalid and inconsistent logic to support the conclusion you want to reach.
    Pierre-Normand October 20, 2025 at 06:51 #1019852
    Quoting JeffJo
Oh? You mean that a single card can say both "Monday & Tails" and "Tuesday & Tails?" Please, explain how.


    I was referring to your second case, not the first. In the first case, one of three cards is picked at random. Those three outcomes are mutually exclusive by construction. In your second case, the three cards are given to SB on her corresponding awakening occasions. Then, if the coin lands Tails, SB is given the two T-cards on two different days (Mon & Tue). So "Mon & Tails" and "Tue & Tails" are distinct events that both occur in the same timeline; they are not mutually exclusive across the run, even though each awakening is a separate moment.

    "What is your credence in the fact that this card says "Heads" on the other side? This is unquestionably 1/3.

    "What is your credence in the fact that the coin is currently showing Heads?" This is unquestionably an equivalent question. As is ""What is your credence in the fact that the coin landed on Heads/i]?"

    I realize that you want to make the question about the entire experiment. IT IS NOT. I have shown you over and over again how it leads to contradictions. Changing the answer between these is one of them.


    I also take the question to always be about the coin. You are arguing that this translates into a question about the card (or awakening episode) on the ground that there is a biconditional relation that holds between coin outcomes and awakening (or card) outcomes. On any occasion of awakening, the coin landed Heads if and only if the awakening is a H-awakening and this happens if and only if "Monday & Heads" is written on the card. But a Halfer will likewise argue that on any occasion of awakening during a run, the coin landed Heads if and only if the run is a H-run. The fact that SB is awakened twice during a T-run, or given two cards, doesn't alter this. Just like you are arguing that the question isn't about the runs, the Halfer argues that it isn't about the awakenings either.

    "Picking "Monday & Tails" guarantees that "Tuesday & Tails" will be picked the next day, and vice versa. They are distinct events but belong to the same timeline. One therefore entails the other." —Pierre-Normand

    And how does this affect what SB's credence should be, when she does not have access to any information about "timelines?"


She does have the information that the two potential T-awakenings occur on the same timeline and that the H-awakening occurs on a different one. This is an essential part of the experiment's protocol that SB is informed about. The Halfer argues that since the two T-awakenings occur on the same timeline (on two successive days), the two occasions SB has to experience a T-awakening don't dilute the probability that she could be experiencing an H-timeline.

What Halfers and Thirders both overlook is that the timeline branching structure set up by the Sleeping Beauty protocol establishes both equal (1/3) frequencies of the three types of awakening (Monday&Heads, Monday&Tails and Tuesday&Tails) and equal (1/2) frequencies of the two types of experimental runs (Tails-runs and Heads-runs). This makes it possible to individuate, in two different ways, the events Sleeping Beauty is involved in. Sleeping Beauty can therefore say truly that she is currently experiencing an awakening that she expects, in the long run, to be one among three equally frequent types of awakening (and therefore has a 2/3 chance of being a T-awakening), and also say truly that she is currently experiencing an experimental run that she expects, in the long run, to be one among two equally frequent types of runs (and therefore has a 1/2 chance of being a T-run). The apparent contradiction comes from neglecting the two-to-one mapping of T-awakenings to T-runs within the same timeline.

In both interpretations, it's the coin outcome that is at issue, but when expressing a credence about this outcome, tacit reference is always made to the epistemic situations SB finds herself in while evaluating the relative frequencies of her encounters with those outcomes. The statement of the original SB problem doesn't specify what constitutes "an encounter": an experimental run or a singular awakening? Halfers and Thirders intuitively individuate those events differently, although those intuitions are often grounded in paradigmatic cases that are extensions of the original problem and that make one or the other interpretation more pragmatically relevant.
    JeffJo October 20, 2025 at 10:47 #1019875
    Quoting Pierre-Normand
    I was referring to your second case, not the first. In the first case, one of three cards is picked at random. Those three outcomes are mutually exclusive by construction. In your second case, the three cards are given to SB on her corresponding awakening occasions. Then, if the coin lands Tails, SB is given the two T-cards on two different days (Mon & Tue). So "Mon & Tails" and "Tue & Tails" are distinct events that both occur in the same timeline; they are not mutually exclusive across the run, even though each awakening is a separate moment.


    And.... you continue to ignore the obvious point I am making. You keep looking at an "outcome" as what occurs over two days. The only "outcome" SB sees occurs on one day.

    But if you really want to use two days, do it right. On Tails, there are two waking days. On Heads, there is a waking day and a sleeping day. The sleeping day still exists, and carries just as much weight in the probability space as any of the waking days. What SB knows is that she is in one of the three waking days.

Each day carries a 1/4 prior probability. Since SB knows she is in a waking day, the sleeping day is "eliminated" and she can use conditional probability to update the probabilities of the three waking days to 1/3 each. And it is no different than if you always wake SB, but have three interview days and one DisneyWorld day. The three days that have a common description, when that common description is what SB sees, each have a probability of 1/3. This is true regardless of what the other day's description is.

    And you have offered no counter arguments except "but what if SB wants to look across all of the days."
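
The 1/4-to-1/3 update above can be written out directly. The sketch below (the dictionary and labels are only illustrative) drops Heads & Tuesday from the uniform prior and renormalises:

[code]
# Prior over the four day-slots of a single run, as described above: 1/4 each.
prior = {
    ("Heads", "Monday"): 0.25,
    ("Heads", "Tuesday"): 0.25,   # the sleeping (or DisneyWorld) day
    ("Tails", "Monday"): 0.25,
    ("Tails", "Tuesday"): 0.25,
}

# Condition on "today is an interview day": drop Heads & Tuesday and renormalise.
interview = {k: p for k, p in prior.items() if k != ("Heads", "Tuesday")}
total = sum(interview.values())
posterior = {k: p / total for k, p in interview.items()}

print(posterior)                                                        # 1/3 each
print(sum(p for (coin, _), p in posterior.items() if coin == "Heads"))  # 1/3
[/code]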
    Pierre-Normand October 20, 2025 at 12:10 #1019879
    Quoting JeffJo
    But if you really want to use two days, do it right. On Tails, there are two waking days. On Heads, there is a waking day and a sleeping day. The sleeping day still exists, and carries just as much weight in the probability space as any of the waking days. What SB knows is that she is in one of the three waking days.


    Sure, but Sleeping Beauty isn’t being asked what her credence is that "this" (i.e. the current one) awakening is a T-awakening. She’s being asked what her credence is that the coin landed Tails. If you want to equate those two questions by the true biconditional "this awakening is a T-awakening if and only if the coin landed Tails" (which you are free to do), then you ought to grant the Halfer the same move: "This run is a T-run if and only if the coin landed Tails." And since the protocol generates T-runs and H-runs in equal frequency, her experiencing T-runs is as frequent as her experiencing H-runs.

    Crucially, the fact that Sleeping Beauty sleeps more in H-runs has no bearing on the Halfer’s point. Arguing otherwise is like saying your lottery ticket is more likely to win because (in a setup where winning causes more "clutching opportunities") you’re allowed to clutch it more often (or sleep less) before the draw. That setup creates more opportunities to clutch a winning ticket and hence makes each "clutching episode" more likely to be a "T-clutching," but it doesn’t make the ticket more likely to win. And with amnesia, you can’t just count clutchings, or awakenings, to infer the outcome.
    JeffJo October 20, 2025 at 13:05 #1019888
    Quoting Pierre-Normand
    Sure, but Sleeping Beauty isn’t being asked what her credence is that "this" (i.e. the current one) awakening is a T-awakening.


She is asked for her credence. I'm not sure what you think that means, but to me it means belief based on the information she has. And she has "new information." Despite how some choose to use that term, it is not defined in probability. When it is used, it does not mean "something she didn't know before," it means "something that eliminates some possibilities." That usually does mean something about the outcome that was uncertain before the experiment, which is how "new" came to be applied. But in this situation, where a preordained state of knowledge eliminates some outcomes, it still applies.

    Pierre-Normand October 20, 2025 at 20:33 #1019947
    Quoting JeffJo
She is asked for her credence. I'm not sure what you think that means, but to me it means belief based on the information she has. And she has "new information." Despite how some choose to use that term, it is not defined in probability. When it is used, it does not mean "something she didn't know before," it means "something that eliminates some possibilities." That usually does mean something about the outcome that was uncertain before the experiment, which is how "new" came to be applied. But in this situation, where a preordained state of knowledge eliminates some outcomes, it still applies.


    One important thing Sleeping Beauty gains when she awakens is the ability to make de re reference to the coin in its current state as the current state of "this coin" (indexically or deictically) whereas prior to awakening she could only refer to future states of the coin de dicto in a general descriptive way. To express her current credence (in light of her new epistemic situation) when awakened, she must also refer to her own epistemic position relative to the coin. We can appeal to David Lewis’s notion of de se reference (centered possible worlds). That’s what you seemed to have in mind earlier when you spoke of awakening events existing in "her world."

    With this de se act, SB doesn’t merely locate herself at a single moment. In order to state her credence about the "outcome" in her centered world, she must also fix the unit over which probability mass is assigned: that is, how the total probability space (normalized to 1) is partitioned into discrete possible situations she might find herself in and that each have their own probabilities. Partitioning by awakening episodes is one such choice (the Thirder’s). It yields probability 1/3 for each one of the three possible occasions of encountering the coin in a definite state on a specific day. Crucially, this awakening-centered measure does not preclude the Halfer’s run-centered measure; it entails it, since the three awakening-centered worlds (and their frequencies) map in a fixed way onto the two run-centered worlds the Halfer is tracking (two-to-one in the case of T-worlds and one-to-one in the case of H-worlds).

    Hence, the premises and reasoning SB uses to justify her 1/3 credence in Heads (including her knowledge of the two-to-one mapping from T-awakening-centered worlds to T-run-centered worlds) show that the Halfer credence is perfectly consistent and, in fact, supported by the very structure you endorse. The Thirder and Halfer credences about the same coin having landed Heads (1/3 vs 1/2) are consistent beliefs that implicitly refer to different centered world measures over the same underlying possibility space.
    JeffJo October 21, 2025 at 10:07 #1020052
    SB "locates" herself in one of the four possible states in the experiment. These states exist whether or not she would be able to observe them, That was the point of the Camp Sleeping Beauty experiment: the 36 possible "days" each have a prior probability of 1/36. If she participates in activity "A", the probability that the die rolled N is the number of times A appears in row N, divided by the number of times A appears in the 6tx6 calendar. This is true if all activities are waking activities, or if one is a sleeping activity she would not be able to observe. Her observation is that B, C, D, E, and F did not occur, not just that A occurred.

This is what determines the "probability mass." You correctly described how it is used, but you refuse to identify it correctly. I think you agreed (it's hard to recall with all your pedantry) that this is true when all activities are waking ones, but I don't recall you addressing how sleeping activities affect it. THEY DON'T.

    The "halfers run-centered measure" is precluded because you can't define, in a consistent way, how or why they are removed from the prior. So you avoid addressing that.
    Pierre-Normand October 21, 2025 at 20:38 #1020158
    Quoting JeffJo
    he "halfers run-centered measure" is precluded because you can't define, in a consistent way, how or why they are removed from the prior. So you avoid addressing that.


The Halfer's run-centered measure just is a way of measuring the space of probabilities by partitioning it into the events that Sleeping Beauty's credence (understood as a ratio of such events) is about, the events that get counted in the numerator and denominator. It refers to the expected proportion of runs Sleeping Beauty finds herself in that are H-runs or T-runs, consistent with the information available to her at any given moment (such as an occasion of awakening).

Because there are two different ways (i.e. two different kinds of T-awakenings) for her to awaken in a T-run (on Monday or Tuesday) and only one way to awaken in an H-run (on Monday), and the expected long-term proportion of awakenings that are T-awakenings is 2/3, it is tempting to infer that the probability that she is experiencing a T-run is likewise 2/3. But while this is true in one sense, it is also false in another sense.

    It is indeed true, in a sense, that when she awakens the probability (i.e. her rational credence) that she is currently experiencing a T-run is 2/3. Spelled out explicitly, this means that SB expects, in the long run, that the sort of awakening episode she is experiencing is part of a T-run two thirds of the time. In a different sense, the probability that when she awakens she is currently experiencing a T-run is 1/2. Spelled out explicitly, this means that SB expects, in the long run, that the sort of experimental run she is experiencing is a T-run (and hence comprises two awakenings) half of the time. Notice that the first use of "the time" in "half of the time" meant half of the runs, while in "two thirds of the time", it meant two thirds of the awakenings.

    The reason why there is no need for Sleeping Beauty to update her credence from 1/2 to 2/3 when her credence is understood in the "Halfer" way spelled out above is because nothing specific about her epistemic situation changes such that the proportion of such situations (runs) that are T-runs changes. That's true also in the case of her Thirder-credence. She already knew before the experiment began that she could expect to be awakened in a T-awakening situation two thirds of the time, and when she so awakens, nothing changes. So, her expectation remains 2/3. The fact that the Halfer-expectation matches the proportion of Tails coin toss results, and the Thirder-expectation doesn't, is fully explained by the fact that Tails coin toss results spawn two awakenings in one run while Heads coin toss results spawn a single awakening in one run.

Notice also that your removal of the non-awakening events (i.e. "Heads&Tuesday") from the prior only yields a renormalisation of the relevant probabilities without altering the proportions of T-runs to H-runs, or of T-awakenings to H-awakenings, and hence without altering probabilities on either interpretation of SB's credence. Halfers and Thirders "condition" on different events, in the sense that they use those events as measures, but neither one does any Bayesian updating on the occasion of awakening since no new relevant information, no new condition, comes up.
    ssu October 21, 2025 at 21:14 #1020161
Thanks to @JeffJo for the resurrection of this thread.

I do think this is related to the Monty Hall problem where information affects probabilities. Information does affect probabilities, you know. It's easier indeed to understand the Monty Hall when there's a lot more doors (just assume there's one million of them). So there's your pick from one million doors, then the gameshow host leaves just one other door closed and opens all other 999 998 doors. You think it's really a fifty-fifty chance then? You think you are so lucky that you chose the right door from a million?

    If she knows the experiment, then it's the 1/3 answer. In Monty Hall it's better to change your first option as the information is different, even if one could at first think it's a 50/50 chance. Here it's all about knowing the experiment.

In this case it's a bit blurred in my view with saying that she doesn't remember if she has already been woken up. Doesn't mean much, if she can trust the experimenters. But in my view it's the same thing. Does it matter when she is presented with the following picture of events?

    User image

    She cannot know exactly what day it is, of course. She can only believe that the information above is correct. Information affects probabilities, as in the Monty Hall problem.

What if these so-called scientists behind the experiment are perverts and keep intoxicating the poor woman for a whole week? Or a month? If she believes that the experiment ended on Wednesday, but she cannot confirm it being Wednesday, then the experiment could have been kept going for a week. Being drugged for a week or longer will start affecting your health dramatically.

    Now I might have gotten this wrong, I admit. But please tell me then why I got it wrong.
    Pierre-Normand October 21, 2025 at 22:22 #1020165
    Quoting ssu
I do think this is related to the Monty Hall problem where information affects probabilities. Information does affect probabilities, you know. It's easier indeed to understand the Monty Hall when there's a lot more doors (just assume there's one million of them). So there's your pick from one million doors, then the gameshow host leaves just one other door closed and opens all other 999 998 doors. You think it's really a fifty-fifty chance then? You think you are so lucky that you chose the right door from a million?

    If she knows the experiment, then it's the 1/3 answer. In Monty Hall it's better to change your first option as the information is different, even if one could at first think it's a 50/50 chance. Here it's all about knowing the experiment.


In the classic Monty Hall problem, since the three doors hide one prize and two goats, there is a 1/3 chance that the initially randomly selected door hides the prize. After the game show host deliberately opens one of the remaining two doors that they know not to contain the prize, the player can update their credence (probability estimate) that the remaining unselected door hides the prize to 2/3 and hence is incentivized to switch. It's not just the player's knowledge of the game protocol that embodies the relevant information, but also the actual action of the game show host. This action leaves the player's credence in their initial choice being right at 1/3 and hence yields no information regarding the initially selected door. But this action also yields knowledge about the two other doors: the one that has been shown to hide a goat now has zero chance of hiding the prize and the remaining unselected door now has a 2/3 chance of hiding it.

    The simplest rationale for switching stems from the consideration that never switching makes the player win 1/3 of the time while always switching makes them win in all cases where they would otherwise lose (and vice versa), and hence makes them win 2/3 of the time.

    Unlike the Sleeping Beauty Problem, the Monty Hall Problem isn't a matter of controversy in probability theory. Pretty much everyone agrees that after the game show host opens a goat-hiding door, the player is incentivized to switch their initial choice and thereby increases their chance from 1/3 to 2/3.
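
Since the comparison with Monty Hall keeps coming up, here is a minimal simulation of the classic three-door version (the trial count is arbitrary) confirming the 1/3 versus 2/3 split between staying and switching:

[code]
import random

def monty_hall(switch, trials=1_000_000):
    # Win frequency for a player who always stays or always switches.
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)
        pick = random.randrange(3)
        # The host opens a door that is neither the player's pick nor the prize.
        opened = random.choice([d for d in range(3) if d != pick and d != prize])
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == prize)
    return wins / trials

print("stay:  ", round(monty_hall(False), 3))   # ~0.333
print("switch:", round(monty_hall(True), 3))    # ~0.667
[/code]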

In this case it's a bit blurred in my view with saying that she doesn't remember if she has already been woken up. Doesn't mean much, if she can trust the experimenters. But in my view it's the same thing. Does it matter when she is presented with the following picture of events?

    She cannot know exactly what day it is, of course. She can only believe that the information above is correct. Information affects probabilities, as in the Monty Hall problem.

What if these so-called scientists behind the experiment are perverts and keep intoxicating the poor woman for a whole week? Or a month? If she believes that the experiment ended on Wednesday, but she cannot confirm it being Wednesday, then the experiment could have been kept going for a week. Being drugged for a week or longer will start affecting your health dramatically.

    Now I might have gotten this wrong, I admit. But please tell me then why I got it wrong.


    What makes the Sleeping Beauty Problem harder, and more controversial, than the Monty Hall Problem is that despite agreeing about the game experiment protocol, disagreements arise regarding the meaning of Sleeping Beauty's "credence" about the coin toss result when she awakens, and also about the nature of the information she gains (if any) when she is awakened and interviewed.

In order to assess if you're right or wrong, you'd need to commit to an answer and explain why you think it's right. Should Sleeping Beauty express a 1/2 credence, when she is being awakened, that the coin landed heads? Should it be 1/3, or something else?
    Kizzy October 22, 2025 at 03:28 #1020193
    Quoting JeffJo
    I was referring to your second case, not the first. In the first case, one of three cards is picked at random. Those three outcomes are mutually exclusive by construction. In your second case, the three cards are given to SB on her corresponding awakening occasions. Then, if the coin lands Tails, SB is given the two T-cards on two different days (Mon & Tue). So "Mon & Tails" and "Tue & Tails" are distinct events that both occur in the same timeline; they are not mutually exclusive across the run, even though each awakening is a separate moment. — Pierre-Normand


    And.... you continue to ignore the obvious point I am making. You keep looking at an "outcome" as what occurs over two days. The only "outcome" SB sees occurs on one day.

    But if you really want to use two days, do it right. On Tails, there are two waking days. On Heads, there is a waking day and a sleeping day. The sleeping day still exists, and carries just as much weight in the probability space as any of the waking days. What SB knows is that she is in one of the three waking days.

Each day carries a 1/4 prior probability. Since SB knows she is in a waking day, the sleeping day is "eliminated" and she can use conditional probability to update the probabilities of the three waking days to 1/3 each. And it is no different than if you always wake SB, but have three interview days and one DisneyWorld day. The three days that have a common description, when that common description is what SB sees, each have a probability of 1/3. This is true regardless of what the other day's description is.

    And you have offered no counter arguments except "but what if SB wants to look across all of the days."
    AH-HA moment! See underlined part of JeffJo's quote.

    In response to this specific part of your comment quoted above, "Each day carries a 1/4 prior probability. Since SB knows she is in a waking day, the sleeping day is "eliminated" and she can use conditional probability to update the probabilities of the three waking days to 1/3 each,"
    I want to ask:
SB knows that Monday waking is guaranteed, no matter what the outcome of the coin toss, if so how can she eliminate the sleeping day and update the probabilities or her credence to 1/3...how is the conditional probability used to update the probabilities of the three waking days to 1/3 each? Monday waking offers no "new information" if it was guaranteed to happen from the start. She knows the rules. What does she take from the inevitable waking day to update her credence, besides confirming uncertainty?

    Quoting JeffJo
    so when those events aren't occurring in a way that is causally (and probabilistically) independent of the coin flip result. — Pierre-Normand


    When she is awake, what knowledge does she have, related to any other day or coin result?

    This is what seems difficult to accept. SB's "world" consists of one day, and one coin result, and due to the amnesia both are independent of any other "world" represented in another awakening. Illustrating that was the point of my "Camp Sleeping Beauty" variation.
    From SB's pov, waking Monday was always going to happen. It doesn't update her credence, but Tuesday would...Just because SB cannot tell which day it is doesn't mean each day carries the same intel or informational value.


    Quoting Pierre-Normand
In order to assess if you're right or wrong, you'd need to commit to an answer and explain why you think it's right. Should Sleeping Beauty express a 1/2 credence, when she is being awakened, that the coin landed heads? Should it be 1/3, or something else?


On Monday she always wakes - no matter what the coin says. So waking Monday tells her nothing new. On Tuesday, she only wakes if the coin landed tails. That is an experience conditional on the result and it is information... So there seems to be some ranking or levels to the importance or significance of the intel:

    Monday: inevitable, guaranteed, offers no/very little intel
    Tuesday: outcome dependent, charged with intel (intel that we can even smell taste feel see or sense)

Since SB doesn't remember Monday, she cannot feel the difference but the structure of the experiment KNOWS the difference. So if she is asked twice, Monday and Tuesday, that only happens with a tails outcome. Even without memory, her credence may shift, but only because the setup itself is informative.

    Monday-----> credence stays 1/2
    Tuesday-----> credence shifts towards tails
    Not sure what day it is???-------->maybe 1/3 but only as a reflection of uncertainty

Perhaps the structure of the experiment does not support equal weighting of awakenings, and because of this, not all awakenings should be treated equally: not all carry the same weight of intel.

I wonder now, if it is not about counting awakenings but which ones actually help SB figure things out. This is about belief systems we adopt, the values we hold, and the decisions we make... even when we don't remember why exactly. The problem shows indirectly we are building beliefs, holding expectations, and how we reason when memory fails us as subjects embedded within a structure we lean on more than we may notice...what structures, ethical systems, cultural norms, personal values, systems of beliefs, with or without religion-backed faith, intuition, to back up our reasoning in order to guide us in these decision making moments?

    Where are you????????
    Kizzy


    Pierre-Normand October 22, 2025 at 03:40 #1020196
    Quoting Kizzy
Since SB doesn't remember Monday, she cannot feel the difference but the structure of the experiment KNOWS the difference. So if she is asked twice, Monday and Tuesday, that only happens with a tails outcome. Even without memory, her credence may shift, but only because the setup itself is informative.


    It's also part of the protocol that although SB knows that she is being awakened a second time on Tuesday if and only if the coin landed tails, on each occasion where she is being awakened, she isn't informed of the day of the week. As specified in the OP (and in the Wikipedia article), she doesn't know if her awakening is happening on Monday or Tuesday (though she does know, or rather can infer, that it's twice as likely to be occurring on Monday). Hence, the relevant information available to her for establishing her credence is the same on each occasion of awakening.
    Kizzy October 22, 2025 at 06:23 #1020219
I think I understand and see my mistake, I was confused from the start as I misunderstood the problem because of the way it's laid out and my brain, [thing to note208am] and since I was and am assuming SB is using prior knowledge plus new information Tuesday, without amnesia Sunday to Monday. I just find it to be seemingly difficult to not use some sort of self-locating knowledge to assign degree of belief in something, that something being a coin heads up with or without amnesia Sunday and the chance for belief to update come Tuesday.

Since I didn't account for the amnesia from Sunday to Monday, only Monday to Tuesday, my pov, though in error for mistaking the protocol, understood the OG problem (see wiki for Sleeping Beauty Paradox*) to mean that after she goes to sleep Sunday, the experiment does not start until Monday, where she goes into the experiment asleep and is awakened [inevitable], asked her credence, put to sleep, and either asked again Tuesday with amnesia or sleeps through it, awakening Wednesday when the experiment is officially over.

    I do not put it passed me, that even though I just very well may have completely misinterpreted the problem before my own eyes, to be standing corrected, as I have to in order to carry on...

    However, now knowing Sunday to Monday she is given amnesia, something about knowing the experiment duration and rules tells me self locating knowledge must come from somewhere, from something in time and our relation to it is relevant to the possible knowing, yes being awake is evidence of some thing but what else gives?

    Reply to Pierre-Normand Thanks for your contributions to this thread. And also to your AI exploration experience and work that you've shared here on the forum as well. Of interest...

    BUT even though, I was and still am off, in this way:

What else besides knowing what she signed up for, prior knowledge and new intel [I am awake now] does not alter her credence Monday, only verifies indirect info about SB maybe? If she was given amnesia Sunday and put to bed, awakened Monday then what awakening is she forgetting? *see quote from wiki below, note bold words that are throwing me.....

    Anyways, either way -

OH never mind, OF course if she knew it was Monday she wouldn't say 1/3, but what if she was off...and Tuesday comes around and it changes to 0? the chance to change or update belief still exists if tails and asked twice. On Monday she does not know for certain if heads or tails only gives her degree of belief in heads, knowing nothing Wednesday when experiment ends, tomorrow she will be awakened or sleep through the day, she can still guess reasonably participating, I think? I don't know, perhaps I am in over my head here...again!

    * per Wikipedia, "On Sunday she will be put to sleep. Once or twice, during the experiment, Sleeping Beauty will be awakened, interviewed, and put back to sleep with an amnesia-inducing drug that makes her forget that awakening." https://en.wikipedia.org/wiki/Sleeping_Beauty_problem
    Pierre-Normand October 22, 2025 at 06:43 #1020224
    Quoting Kizzy
    OH never mind, OF course if she knew it was Monday she wouldn't say 1/3, but what if she was off...and Tuesday comes around and it changes to 0? the chance to change or update belief still exists if tails and asked twice. On Monday she does not know for certain if heads or tails only gives her degree of belief in heads, knowing nothing Wednesday when experiment ends, tomorrow she will be awakened or sleep through the day, she can still guess reasonably participating, I think? I don't know, perhaps I am in over my head here...again!


    You'll more easily wrap your head around the problem if you don't overcomplicate things (even though it will remain a tough problem). The purpose of the drug merely is to make it impossible for Sleeping Beauty on any occasion of awakening to know if this occasion was a first or a second one in the experiment (which she could otherwise deduce if she had a memory of the previous one or the lack thereof). This makes all three possibilities—Monday&Heads, Monday&Tails and Tuesday&Tails—indistinguishable from her subjective perspective although she knows at all times that over the course of the experiment all three of those situations could be experienced by her (without knowing which one it is whenever she's experiencing one of them). You can now place yourself in her shoes and start pondering what the chances are that the coin landed tails.

    (I'm glad you're enjoying my AI experiment reports!)
    JeffJo October 23, 2025 at 00:43 #1020389
    Quoting Kizzy
    SB knows that Monday waking is guaranteed, no matter what the outcome of the coin toss, if so how can she eliminate the sleeping day and update the probabilities or her credence to 1/3

    SB does not know if a waking day is a Monday. Only that it is a waking day. She can eliminate the sleeping day because she knows this is a waking day.

    Compare two versions:

    • Three days where she is wakened and interviewed, and a fourth where she sleeps.
    • Three days where she is wakened and interviewed, and a fourth where she is wakened and taken to DisneyWorld.


On a waking day in the second version, she clearly can eliminate the DisneyWorld day and the probability of Heads. Why is that different? Whether or not she would be awake on that "fourth day" is irrelevant. The important fact is not being able to observe it when it happens, it is being able to observe that it is not happening when it does not.
    ssu October 23, 2025 at 12:23 #1020414
    Quoting Pierre-Normand
    disagreements arise regarding the meaning of Sleeping Beauty's "credence" about the coin toss result when she awakens, and also about the nature of the information she gains (if any) when she is awakened and interviewed.

Well, isn't this exactly what I tried to say about this being about information?

    Quoting Pierre-Normand
    Should Sleeping Beauty express a 1/2 credence, when she is being awakened, that the coin landed heads? Should it be 1/3, or something else?

Isn't the only thing she can say simply that she's participating in the experiment... and she cannot know if it's Monday or Tuesday? Information has an effect on the probability (as in the Monty Hall). Without the information, the probability cannot be accurately defined by her when waking up.



    JeffJo October 23, 2025 at 16:41 #1020463
    Quoting Kizzy
Since SB doesn't remember Monday, she cannot feel the difference but the structure of the experiment KNOWS the difference. So if she is asked twice, Monday and Tuesday, that only happens with a tails outcome. Even without memory, her credence may shift, but only because the setup itself is informative.


You are one of four volunteers gathered on Sunday night. You see the combinations "Monday and Heads," "Monday and Tails," "Tuesday and Heads," and "Tuesday and Tails" written on four different note cards. They are turned over, shuffled, and distributed among you, but you can't look. You are told that after you go to sleep, a single fair coin will be flipped. Then, on Monday and again on Tuesday, three of you will be wakened and asked some questions. The one who is left out will be the one whose card says the actual coin flip result, and the current day. Afterwards, you will be put back to sleep with amnesia.

Some time later, you find yourself awake and sitting in a room where you can see two of the other three volunteers on TV monitors (you are instructed to not try to communicate through them). One is labeled "Monitor A," and the other "Monitor B." You, and these other two, each have a face-down card on the table in front of you.

• Not knowing what your card says, you are asked for your credence that the coin result written on your card is the actual coin result. AND, the same question about your credence for it matching A's card, and B's card.
    • Once you all have provided an answer (unseen by the others, of course), you are told to look at your card, without revealing it, and answer the same questions.


I say the answers in #1 cannot be anything but 1/3. You have the exact same information about each, and they have to add up to 1. If you disagree, please explain how it is possible. Note that the "structure of the experiment KNOWS" that there is a day, and a coin face, that apply. The important part is that you don't know these.

I say the answers in #2 can't change. Knowing the specific names applied to your "sleep day" does not change its existence, or what the "structure of the experiment KNOWS," in any way. You seem to think it can; that what the "structure of the experiment KNOWS" changes for you.

But the same applies to A and B. If it changes the same way, your answer for them changes the same way and everybody's is 1/2. This is a paradox.

    And if it changes in a different way for A and B, allowing you to say 1/4 for them, how did it change differently?
    JeffJo October 26, 2025 at 21:40 #1021080
    Quoting Pierre-Normand
    The Halfer's run-centered measure just is a way to measure the space of probabilities by partitioning the events that Sleeping Beauty's credence

    I can repeat this as often as you ignore it.

    The experiment, when viewed from the outside, consists of two possible runs. The experiment that SB sees is one day, from one run, and to her that one day is independent of whichever run she is in. Since she cannot know which run she is in, that is not information that is useful to her. Inside the experiment, an outcome consists of one "day" only. The only point that is significant to SB is that she can tell that an interview day is not a sleeping day. This constitutes "new information" in probability.

    In fact, "new information" is not defined in probability. The information that allows for probability updates is whatever eliminates outcomes that exist in the prior sample space, but are inconsistent with that information. Yes, this usually means a positive fact about the outcome, that does not apply to all, hence some call it "new information." But being "new" isn't what is important, it is the elimination. H&Tue is a member of the prior sample space. It is eliminated when she is awoken and interviewed.

And you can check this in several ways, all of which you ignore. One, you can change the sleeping day to one where she is awakened, but not interviewed. I'll stick with the example that you take her to DisneyWorld. Now one "day" in what you call the "Heads run" is eliminated when she is interviewed. Since the "Heads run" has a 50% probability, and she can't be in all of the probability-weight of the "Heads run," her credence in Heads must be less than 50%.

    But it cannot matter what happens on H&Tue. What affects SB's credence is that she knows that the current "day" is not H&Tue. Which she knows whatever happens on H&Tue.

    Or you could address the Camp Sleeping Beauty version with more than just "it illustrates the thirder view." You could try to apply the "six day run" theory to Camp Sleeping Beauty. And you will not be able to do so consistently.

    • Is SB's credence in each die roll the number of times today's activity appears in that row, divided by the number of times it appears in the 6x6 calendar? If you disagree, please say what it is based on the "six day run" theory.
    • If one of the activities is replaced with "sleep through this day," does that change her credence in any way? HOW, and TO WHAT?


    Quoting Kizzy
    SB knows that Monday waking is guaranteed, no matter what the outcome of the coin toss, if so how can she eliminate the sleeping day and update the probabilities or her credence to 1/3


    Because her current knowledge and existence in the experiment are fully limited to one day. Knowing that she will always be awakened on Monday does not change that. See above.

    Quoting ssu
    I do think this related to the Monty Hall problem where information affects probabilities. Information does affect probabilities, you know.

    This is called conditional probability.

    It's easier indeed to understand the Monty Hall when there's a lot more doors

    What that does is make it more intuitive. Since there is a 99.9999% chance Monty Hall picked that one door for the specific reason that it has the car, and a 0.0001% chance that he picked a goat door randomly, it makes sense to go with the 99.9999%. This is harder to see when the numbers are 66.7% and 33.3%.

    So yes, there is similarity in that the information that allows conditional probability to be used is hard to see. But the reasons are quite different. In Sleeping Beauty, it is because philosophers want to propose inconsistent ways to view information.
    Pierre-Normand October 28, 2025 at 08:08 #1021318
    Quoting JeffJo
    I can repeat this as often as you ignore it.

    The experiment, when viewed from the outside, consists of two possible runs. The experiment that SB sees is one day, from one run, and to her that one day is independent of whichever run she is in.


    SB doesn't have the magical power to make the other awakenings, or their mutual causal relationships, drop out of existence on the occasion where she awakens. She still knows that the two potential T-awakenings live on the same timeline (and hence that when she's experiencing one of them, she will also go on to experience, or will have experienced, the other one in the same run).

    Since she cannot know which run she is in, that is not information that is useful to her. Inside the experiment, an outcome consists of one "day" only. The only point that is significant to SB is that she can tell that an interview day is not a sleeping day. This constitutes "new information" in probability.


    The fact that the information isn't new to her doesn't make the possibility of there being other potential awakenings in the same run irrelevant. She already has information about those possibilities (and long run frequencies) since she was told about them before the experiment began. The Halfer stance, just like the Thirder (equally valid) stance, does not depend on her learning anything new when she awakens since it merely depends on her knowledge of the relative frequencies of H-runs to T-runs.

    You're saying that when she awakens, she learns that an interview day is not a sleeping day. But she already knew that interview days never are sleeping days. She can't be asleep and awake at the same time. She knew before the experiment began that the awakenings she would potentially experience in the future would equally as often turn out (merely unbeknownst to her at the time) to have been T-Mon, T-Tue and H-Mon, and hence that, when she experiences any of them, those three possibilities would be equally likely from her epistemic standpoint. The Halfer credence isn't based on anything new that she learns upon awakening either; it is about a differently partitioned relative frequency of events.

    To recap what I had said earlier: When SB, as a Thirder, says that the odds that the coin landed tails are 2/3, what she means is that her current awakening episode just is one from a set of indistinguishable awakening episodes that, in the long run, will turn out to have been T-awakenings 2/3 of the time. When SB, as a Halfer, says that the odds that the coin landed tails are 1/2, what she means is that her current awakening episode is part of a set of indistinguishable runs that, in the long run, will turn out to have been T-runs one half of the time.

    Just as you view it as irrelevant to your Thirder claim that T-mon and T-tue belong to the same run, which it indeed is, a Halfer views it as irrelevant to their claim that T-runs spawn more than one awakening episode, which it indeed is. The Halfer and yourself simply are talking about different things.
    JeffJo October 28, 2025 at 12:44 #1021330
    Quoting Pierre-Normand
    SB doesn't have the magical power to make the other awakenings, or their mutual causal relationships, drop out of existence on the occasion where she awakens.


    Exactly. That is the opposite side of the ability you claim she could have, to make one "other awakening" selectively pop into significance based on knowledge she does not possess. That is, to treat the "other Tails awakening" when the coin landed Tails differently than the "Heads awakening."

    Again: the prior sample space comprises FOUR combinations of Coin+Day. In the prior, each is equally likely to apply at the moment the lab techs decide whether or not to awaken her. If they do, the entirety of her information about it is that it is one of the THREE combinations that correspond to an awakening. To her, there is no more, or less, of a connection to the "other" day in this two-day run that indicates, to her, whether it is Monday or Tuesday, if the coin landed Heads or Tails, or which "run" she is in. If you think otherwise, I'd be glad to hear why: an explicit reason why, on T&Mon, she could be more or less likely to think it is H&Mon. This requires knowledge of whether she is in a Heads or Tails run, not the knowledge that such runs are possibilities.

    Quoting Pierre-Normand
    When SB, as a Halfer, says that the odds that the coin landed tails are 1/2, what she means is that her current awakening episode is part of a set of indistinguishable runs that, in the long run, will turn out to have been T-runs one half of the time.

    Indistinguishable? You contradict yourself here, because in the long run you do distinguish them.

    But you use this argument to once again evade answering the direct questions I have asked several times. One of them is "If the sleeping day is changed to a non-interview waking day, what should her answer be on an interview waking day?" It can't be 1/2, because that would not allow her to have 100% credence in Heads on the non-interview waking day. So she must answer 1/3. But if she answers 1/3, what is different in her knowledge on an interview waking day in the original version?

    And if you try to hand-wave a difference, how does it work in the Camp Sleeping Beauty version when each run can contain a different number of waking days?

    But I've given up the silly notion that you will address these questions. Which probably means you can't.
    Pierre-Normand October 28, 2025 at 13:02 #1021339
    Quoting JeffJo
    Indistinguishable? You contradict yourself here, because in the long run you do distinguish them.


    No. I just mean that when she awakens she isn't able to tell if she's in a T-run any more than she can tell if she's in a T-Monday-awakening or any other possible awakening. That's why the best she can express is a rational credence. She distinguishes runs, and awakenings, and coin toss results, as distinct possibilities that are realized with frequencies determined by the experiment's protocol. If those possibilities were irrelevant, then her knowledge of the protocol that sets their long run frequencies would also be irrelevant. But it's clearly relevant to both Halfers and Thirders.
    JeffJo October 29, 2025 at 11:46 #1021566
    Quoting Pierre-Normand
    No. I just mean that when she awakens she isn't able to tell if she's in a T-run any more than she can tell if she's in a T-Monday-awakening or any other possible awakening.


    And I'm saying that this is the exact reason why she cannot base credence on what may, or may not, be the other part(s) of the "run" she is in. I'm saying that all she can base credence on is the one day she can see. And this is trivial to confirm, by addressing the questions you refuse to acknowledge.
    Pierre-Normand October 29, 2025 at 12:18 #1021568
    Quoting JeffJo
    And I'm saying that this is the exact reason why she cannot base credence on what may, or may not, be the other part(s) of the "run" she is in. I'm saying that all she can base credence on is the one day she can see. And this is trivial to confirm, by addressing the questions you refuse to acknowledge.


    I've made quite a few points that you've never acknowledged, some of them in responses to questions of yours that I responded to more than once. But some of the objections you raise are so vague and bear so little relationship to what I've said that the best I can do in response to them is to try to reiterate my own view more clearly. You repeatedly claimed that I'm disallowed to make reference to any awakening opportunity Sleeping Beauty isn't currently experiencing. But how do you yourself arrive at a credence of 2/3 without making reference to the fact that there are three possible awakening opportunities in total and not just the single one that she is experiencing?
    sime October 29, 2025 at 17:41 #1021627
    What the SB problem amounts to is a Reductio ad absurdum against the principle of indifference being epistemically normative, a principle that in any case is epistemically inadmissible, psychologically implausible, and technically unnecessary when applying probability theory; a rational person refrains from assigning probabilities when ignorant about frequency information; accepting equal odds is not a representation of ignorance (e.g Bertrand's Paradox).

    - It is commonly but falsely argued by Thirders that Halfers are susceptible to a Dutch-book argument, by virtue of losing twice as much money if the coin lands tails as they gain if the coin lands heads (the Dutch book being defined as an awoken SB placing and losing two bets in the case of tails, each costing her $1, one on Monday and one on Tuesday, versus her placing and winning only one bet, rewarding her with $1 on Monday, if the coin lands heads). But this Dutch-book argument is invalidated by the fact that it is equivalent to SB being a priori willing to win $1 in the case of heads and lose $2 in the case of tails, i.e. SB knowingly accepting a Dutch book with an expected loss of 0.5x1 - 0.5x2 = -$0.5 before the experiment begins, given her prior knowledge that P(H) = 0.5. So the Dutch-book argument is invalid and is actually an argument against the Thirder position.
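    To make that arithmetic explicit, here is a minimal sketch in Python of the two payout structures being compared (the $1 even-money stakes are the ones described above; the per-run line is the alternative settlement a Halfer would point to):

P_HEADS = 0.5  # prior probability of the fair coin

# One $1 even-money bet on Heads placed at every awakening:
#   Heads run -> 1 awakening, the bet wins once:   +$1
#   Tails run -> 2 awakenings, the bet loses twice: -$2
per_awakening = P_HEADS * (+1) + (1 - P_HEADS) * (-2)

# The same $1 even-money bet on Heads settled once per run:
#   Heads run -> +$1, Tails run -> -$1
per_run = P_HEADS * (+1) + (1 - P_HEADS) * (-1)

print(per_awakening)  # -0.5, i.e. 0.5 x 1 - 0.5 x 2, as computed above
print(per_run)        #  0.0, an even-money bet is fair when settled once per run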

    The (frankly unnecessary) lesson of SB is that meaningful probabilities express causal assumptions, and not feelings of indifference about outcomes.

    JeffJo October 29, 2025 at 18:16 #1021629
    Quoting Pierre-Normand
    You repeatedly claimed that I'm disallowed to make reference to any awakening opportunity Sleeping Beauty isn't currently experiencing.

    You can refer to any part of the experiment you want. Sleeping Beauty knows all of the parts (*), but has no means to relate her current awake period to any others. You are saying halfers base their answer on doing that. They can't.

    But how do you yourself arrive at a credence of 2/3 without making reference to the fact that there are three possible awakening opportunities in total and not just the single one that she is experiencing?

    Are you really that obtuse? As I indicated with the (*), she knows all of the parts. That's what establishes the prior sample space. All four possibilities, with equal probabilities. Since she is awake, she eliminates the one she sleeps through.

    And as I have said, betting arguments don't work because you have to agree on how many bets are placed. But there is no logical fallacy in a direct probability analysis, as I have done.

    There is nothing vague about my questions, unless you refuse to understand them.
    • Compare two versions of the popular problem: one where she stays asleep on H+Tue, and one where she is awakened but taken to Disney World instead of being interviewed. In the halfer two-runs model, does her credence in Heads change between these two versions? What is her credence in Heads when she goes to Disney World?
    • In my Camp Sleeping Beauty version, is her credence in die roll D (# times today's activity appears in row D)/(# times today's activity appears in table), as thirders would claim, or is it 1/6 as halfers would claim? How does the halfer's answer change if today's activity does not appear in all rows?

    Pierre-Normand October 30, 2025 at 03:26 #1021767
    Quoting sime
    What the SB problem amounts to is a Reductio ad absurdum against the principle of indifference being epistemically normative, a principle that in any case is epistemically inadmissible, psychologically implausible, and technically unnecessary when applying probability theory; a rational person refrains from assigning probabilities when ignorant about frequency information; accepting equal odds is not a representation of ignorance (e.g Bertrand's Paradox).


    I don't see any questionable appeal to the principle of indifference being made in the standard Thirder arguments (though JeffJo may be making a redundant appeal to it, which isn't needed for his argument to go through, in my view.) Sleeping Beauty isn't ignorant about frequency information since the relevant information can be straightforwardly deduced from the experiment's protocol. SB doesn't infer that her current awakening state is a T-awakening with probability 1/3 because she doesn't know which one of three indistinguishable states it is that she currently is experiencing (two of which are T-awakenings). That would indeed be invalid. She rather infers it because she knows the relative long run frequency of such awakenings to be 2/3 by design.

    The (frankly unnecessary) lesson of SB is that meaningful probabilities express causal assumptions, and not feelings of indifference about outcomes.


    I don't think that is the salient lesson from the thought experiment but I agree with your claim.

    Regarding the Dutch-book arguments, they represent specific payout structures. They highlight why it's rational for Halfers to be indifferent between betting on H or T when only one betting opportunity and payout is afforded to them in one run of the experiment. They also highlight why Thirders are not likewise indifferent between betting on H or T when one betting opportunity and payout is afforded to them on any awakening occasion.
    sime October 30, 2025 at 11:21 #1021811
    Quoting Pierre-Normand
    I don't see any questionable appeal to the principle of indifference being made in the standard Thirder arguments (though JeffJo may be making a redundant appeal to it, which isn't needed for his argument to go through, in my view.) Sleeping Beauty isn't ignorant about frequency information since the relevant information can be straightforwardly deduced from the experiment's protocol. SB doesn't infer that her current awakening state is a T-awakening with probability 1/3 because she doesn't know which one of three indistinguishable states it is that she currently is experiencing (two of which are T-awakenings). That would indeed be invalid. She rather infers it because she knows the relative long run frequency of such awakenings to be 2/3 by design.


    But the SB experiment is only assumed to be performed once; SB isn't assumed to have undergone repeated trials of the Sleeping Beauty experiment, let alone to have memories of the previous trials, but only to have been awoken once or twice in a single experiment, for which no frequency information is available, except for common knowledge of coin flips. So the Thirder SB is in fact appealing to a principle of indifference, as per the standard explanation of the Thirder position (e.g. Wikipedia).

    In any case, a frequentist interpretation of P(Coin is Tails) = 0.5 isn't compatible with a frequentist interpretation of P(awoken on Tuesday) = 1/3.

    For sake of argument, suppose P(Coin is Tails) = 0.5 and that this is a frequential probability, and that inductive reasoning based on this is valid.

    Now if P(awoken on Tuesday) = 1/3, then it must also be the case that

    P(awoken on Tuesday | Coin is Tails) x P(Coin is tails) = 1/3, as typically assumed by thirders at the outset. But this in turn implies that

    P(awoken on Tuesday | Coin is tails) = (1/3)/0.5 = 2/3.

    Certainly this isn't a frequential probability unless SB, having undergone repeated trials, notices that she is in fact woken more times on a Tuesday than a Monday in cases of Tails, in contradiction to the declared experimental protocol. Furthermore, this value doesn't even look reasonable as a credence, because merely knowing a priori that the outcome of the coin is tails shouldn't imply a higher credence of being awoken on Tuesday rather than Monday.

    Credences are a means of expressing the possession of knowledge without expressing what that knowledge is. To assign consistent credences requires testing every implied credence for possible inconsistencies. Thirders fail this test. Furthermore, credences should not be assigned on the basis of ignorance; a rational SB would not believe that every possible (day, coin-outcome) pair has equal prior probability. Rather, she would only assume what is logically necessary, namely that one of the pairs will obtain with either unknown or undefined probability.

    JeffJo October 30, 2025 at 18:05 #1021869
    The SB problem is a classic illustration of confusing what probability is about. It is not a property of the system (the coin in the SB problem), it is a property of what is known about the system. That is, your credence in an outcome is not identically the prior probability that it will occur. Example:

    • I have a coin that I have determined, through extensive experimentation, is biased 60%:40% toward one result. But I am not going to tell you what result is favored.
    • I just flipped this coin. What is your credence that the result was Heads?


    Even though you know that the probability-of-occurrence is either 60% or 40%, your credence in Heads should be 50%. You have no justification to say that Heads is the favored result, or that Tails is. So your credence is 50%. To justify, say, Tails being more likely than Heads, you would need to justify Tails being more likely to be the favored result. And you can't.
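    A minimal sketch in Python of that example (assuming, purely for the purpose of the simulation, that from the guesser's standpoint the favored side is equally likely to be either one):

import random

def heads_frequency(trials=100_000, bias=0.6):
    heads = 0
    for _ in range(trials):
        # The guesser doesn't know which side gets the 60%;
        # that ignorance is modeled here as a 50/50 choice of favored side.
        favored = random.choice(["H", "T"])
        p_heads = bias if favored == "H" else 1 - bias
        if random.random() < p_heads:
            heads += 1
    return heads / trials

print(heads_frequency())  # ≈ 0.5, which is what a 50% credence in Heads tracks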

    And the reason I have not responded to many of Pierre's comments is that they try to justify answers that directly contradict the answers to the questions I have asked, which have gone unanswered. He is trying to convince me with unsupported logic that would be dismissed if he answered mine. Since there is no end in sight to the carousel of unanswered questions, I am going to assert the answers to mine.

    I'm going to describe several alternate scenarios that encompass my point. All include amnesia at the end of each day. What I would like to see, is either agreement with these assertions; or disagreement, with reasons. And if reasons are given, I will respond to them.

    • Heads & Monday: Wake SB and interview her in conference room A.
    • Tails & Monday: Wake SB and interview her in conference room B.
    • Heads & Tuesday: Wake SB and take her to Disney World.
    • Tails & Tuesday: Wake SB and interview her in conference room D.
    • When SB is interviewed, she is asked for her credence in each line item.

    Note: the conference rooms will be indistinguishable to SB.

    Assertion #1: These are different outcomes, regardless of SB's ability to distinguish them.

    Assertion #2: As she wakes up, SB's credence in today being each line item should be 25%.

    Assertion #3: SB cannot make use of runs. That is, if she is being interviewed in conference room A, there will be a trip to Disney World. If she is being interviewed in D, she can't make use of the fact that she had been interviewed in B. Such knowledge is of no use to her.

    Assertion #4: Since her credence in each outcome is 25%, and she cannot utilize "runs," when she is interviewed her credence in each of the three "interview" outcomes updates to 33%.

    • Heads & Monday: Wake SB and interview her in conference room A.
    • Tails & Monday: Wake SB and interview her in conference room A.
    • Heads & Tuesday: Wake SB and take her to Disney World.
    • Tails & Tuesday: Wake SB and interview her in conference room A.


    Assertion #5: There is no difference, that can affect SB's credence, in this scenario. Whatever "identifies" an interview has nothing to do with the room where it occurs; it is the circumstances under which it occurs. But not "runs."

    • Heads & Monday: Wake SB and interview her.
    • Tails & Monday: Wake SB and interview her.
    • Heads & Tuesday: Leave SB asleep
    • Tails & Tuesday: Wake SB and interview her.


    Assertion #6: There is no difference, that can affect SB's credence in an interview, in this scenario. Not being able to observe H&Tue does not remove it from the set of outcomes she knows can happen. Those are determined by the plan described on Sunday, not SB being able to observe it.

    Assertion #7: Not being able to observe H&Tue does not allow SB to utilize the difference between a "Heads run" and a "Tails run."

    • The Camp Sleeping Beauty setup, with six distinguishable activities named A, B, C, D, E, and F. Each day in the six-day-by-six-die-rolls camp calendar is randomly assigned one.
    • After participating in each day's activity, SB is asked for her credence about the possible die rolls.


    Assertion #8 (Thirder version): Her credence in die roll D should be the number of days that today's activity occurs in row D, divided by the number of times it appears on the calendar.

    Assertion #8A (Impossible Halfer Version): Her credence in each die roll should be 1/6, even for die rolls where today's activity does not appear. (It is impossible since SB knows the die roll can't be for a row where today's activity does not appear.)

    Assertion #8B (Inconsistent Halfer Version): Her credence in each die roll where today's activity does not appear has to be 0. For those where it does, it should be 1/N, where N is the number of rolls where it appears. (It is inconsistent since it contradicts the halfer concept that her credence can't be updated.)

    Assertion #9: It does not matter if one of the activities is "sleep all day and skip the question."

    Assertion #10: The halfer logic is inconsistent. The correct answer is the thirder's.
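    For anyone who wants to check the arithmetic behind Assertions #2 and #4, here is a minimal sketch in Python (the room labels are the ones listed above): it enumerates the four equally likely Coin+Day combinations and conditions on the three that lead to an interview.

from fractions import Fraction

# The four Coin+Day combinations and what happens on each (Assertion #2: prior 1/4 each).
outcomes = {
    ("Heads", "Monday"):  "interview in room A",
    ("Tails", "Monday"):  "interview in room B",
    ("Heads", "Tuesday"): "Disney World",
    ("Tails", "Tuesday"): "interview in room D",
}
prior = {o: Fraction(1, 4) for o in outcomes}

# Condition on "SB is in an interview": keep the interview outcomes and renormalize.
kept = {o: p for o, p in prior.items() if "interview" in outcomes[o]}
total = sum(kept.values())
posterior = {o: p / total for o, p in kept.items()}

print(posterior)
# Each of the three interview outcomes gets 1/3 (Assertion #4),
# so the credence in Heads during an interview is 1/3.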
    sime October 30, 2025 at 21:00 #1021918
    Quoting JeffJo
    The SB problem is a classic illustration of confusing what probability is about. It is not a property of the system (the coin in the SB problem), it is a property of what is known about the system.


    Then you are referring to subjective probability which is controversial, for reasons illustrated by the SB problem. Aleatory probability by contrast is physical probability and directly or indirectly refers to frequencies of occurrence.

    Quoting JeffJo
    That is, your credence in an outcome is not identically the prior probability that it will occur. Example:

    • I have a coin that I have determined, through extensive experimentation, is biased 60%:40% toward one result. But I am not going to tell you what result is favored.
    • I just flipped this coin. What is your credence that the result was Heads?



    It is correct to point out that credence does not traditionally refer to physical probability but to subjective probability. It is my strong opinion, however, that credence ought to refer to physical probability. For example, my answer to your question is to say that my credence is exactly what you've just told me and nothing more; that is, my credence is 60/40 in favour of heads or 60/40 in favour of tails.

    Quoting JeffJo

    Even though you know that the probability-of-occurrence is either 60% or 40%, your credence in Heads should be 50%. You have no justification to say that Heads is the favored result, or that Tails is. So your credence is 50%. To justify, say, Tails being more likely than Heads, you would need to justify Tails being more likely to be the favored result. And you can't.


    I definitely would not say that my credence is 50/50, because any statistic computed with that credence would not be reflective of the physical information that you have provided.
    JeffJo October 31, 2025 at 00:47 #1021987
    Quoting sime
    I definitely would not say that my credence is 50/50, because any statistic computed with that credence would not be reflective of the physical information that you have provided.


    Then what would you say it is? If you say Q, then your credence in Tails must be 1-Q, and you have a paradox.
    sime October 31, 2025 at 05:55 #1022019
    Quoting JeffJo
    Then what would you say it is? If you say Q, then your credence in Tails must be 1-Q, and you have a paradox.


    If you insist that credence must be expressed as a number Q, then in general I would refuse to assign a credence for that reason: cases like SB, in which credences are artificially constrained to be single probability values, don't merely result in harmless paradoxes but in logical contradictions (Dutch books) with respect to causal premises. Likewise, I am generally more likely to bet on a binary outcome when I know for sure that the aleatoric probability is 50/50, compared to a binary outcome for which I don't know the aleatoric probability.

    In order to avoid unintended inferences, the purpose for assigning credences needs to be known. For example, decisions are often made by taking posterior probability ratios of the form P(Hypothesis A | Observation O) / P(Hypothesis B | Observation O). For this purpose, assigning the prior probability credence P(Hypothesis A) = 0.5 is actually a way of saying that credences don't matter for the purpose of decision making using the ratio, since in that case the credences cancel out in the posterior probability ratio to produce the likelihood ratio P(Observation O | Hypothesis A) / P(Observation O | Hypothesis B), which only appeals to causal (frequential) information. This is also the position of Likelihoodism, a view aligned with classical frequentist statistics, that prior probabilities shouldn't play a part in decision making unless they are statistically derived from earlier experiments.
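    A minimal numerical sketch in Python of that cancellation (the likelihood values 0.8 and 0.2 are arbitrary placeholders, not anything taken from the problem):

p_obs_given_A = 0.8        # P(Observation O | Hypothesis A), placeholder value
p_obs_given_B = 0.2        # P(Observation O | Hypothesis B), placeholder value
prior_A = prior_B = 0.5    # equal prior credences

# Posterior ratio via Bayes' theorem (the normalizing P(O) cancels in the ratio).
posterior_ratio = (p_obs_given_A * prior_A) / (p_obs_given_B * prior_B)
likelihood_ratio = p_obs_given_A / p_obs_given_B

print(posterior_ratio, likelihood_ratio)  # both 4.0: with equal priors, only the likelihoods matter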

    An acceptable alternative to assigning likelihoods, which often cannot be estimated, as in single-experiment situations, is simply to list the possible outcomes without quantifying them. Sometimes there is enough causal information to at least order possibilities in terms of their relative likelihood, even if quantification of their likelihoods isn't possible or meaningful.


    Pierre-Normand October 31, 2025 at 10:42 #1022039
    Thanks for the tempered reply, @JeffJo. I appreciate the reset. Let me try to restate where I think we converge, then where we part…

    In my view, when considering how Sleeping Beauty's credence is to be interpreted, Halfers and Thirders are talking past one another. When considering their specific justifications for their views, though, I think they hold complementary pieces of the puzzle. It's important to put those pieces together.

    @sime (Halfer) and @JeffJo (Thirder) appear to me to each own a crucial half of the puzzle.

    Sime's piece of the puzzle: The grounding of SB's credence is aleatoric. The fair coin doesn't just draw the map, it drives the frequencies the map will realize across many runs (or justifies expectations over one single run). If we look at the physical world's output, it churns out Heads-runs and Tails-runs in a 1:1 pattern and, because the protocol ties one interview to Heads-runs and two to Tails-runs, it thereby churns out interviews in a 1:2 ratio. Those are physical frequencies, not an appeal to a principle of indifference.

    JeffJo's piece of the puzzle: SB's credence is about where she should place herself given what she knows. That's an epistemic task. Where I part ways is only here: we don't need fresh "evidence" to update. We only need to be clear which frequency ensemble her question ranges over.

    Put together, the protocol + fair coin fixes the layout and its long-run frequencies (or, equivalently, the single-run expectation). The credence question (that is, its interpretation) fixes which of those frequencies constitutes SB's "credence." Once we say what we're counting, the ratio is forced. No indifference needed.
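    A minimal sketch in Python of the two counts (using "run" and "awakening" exactly as above): the same simulated protocol yields about 1/2 when Tails is counted per run and about 2/3 when it is counted per awakening.

import random

def tally(runs=100_000):
    tails_runs = 0
    tails_awakenings = 0
    total_awakenings = 0
    for _ in range(runs):
        coin = random.choice(["H", "T"])
        awakenings = 1 if coin == "H" else 2   # protocol: one interview on Heads, two on Tails
        total_awakenings += awakenings
        if coin == "T":
            tails_runs += 1
            tails_awakenings += awakenings
    return tails_runs / runs, tails_awakenings / total_awakenings

per_run, per_awakening = tally()
print(per_run)        # ≈ 0.5   (the Halfer's ensemble: runs)
print(per_awakening)  # ≈ 0.667 (the Thirder's ensemble: awakenings)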

    Here is a variation structurally identical to the original problem: Sleeping Beauty's Garden, where the protocol fixes the topology and the question determines how Sleeping Beauty is conceived to pace through it (the topometry).

    Before Beauty sleeps, the attendants lay out a garden she knows everything about:

    At the gate there's a fair fork: Heads-path or Tails-path (that's the aleatoric 1/2 vs 1/2).

    Along the paths they place the stopping spots (interviews): one lit spot on Heads, two lit spots on Tails.

    Amnesia just means each lit spot looks the same when she stands under it.

    That's the topology plus its built-in frequencies. What remains open, and defines SB's "credence," is how she conceives of herself pacing through the garden given those frequencies. There are two perfectly sensible ways:

    Gate-pacing (the Halfer question).

    "One step per run. Which coin toss result did the run that this very awakening belongs to produce?"

    SB's answer: "Across repeats, the fair coin makes Heads-runs and Tails-runs equally common. And the bare fact that I am awake now doesn't tilt those run chances: each kind of run guarantees at least one awakening. So, speaking about this awakening: the run that it belongs to will turn out to have been a T-run about half the time. Hence my credence that the current coin toss result is Tails is 1/2."

    Lamp-pacing (the Thirder question).

    "One step per interview. Which lamp is this?"

    SB's answer: "Because the protocol ties one lamp to Heads-runs and two lamps to Tails-runs, among the awakenings that actually occur across repeats, the lamp I'm under now will have turned out to be a T-lamp about two times out of three. So my credence that the current coin toss result is Tails is 2/3." (A biased coin would change these proportions; no indifference is assumed.)

    The coin's fairness fixes the branches and the long-run frequencies they generate. The protocol fixes how many stopping points each branch carries. Beauty's "what are the odds?" becomes precise only when she specifies what it is that she is counting.

    Note on indifference: The Thirder isn't cutting the pie into thirds because the three interview situations feel the same. It's the other way around: SB is indifferent because she already knows their long-run frequencies are equal. The protocol plus the fair coin guarantee that, among the awakenings that actually occur, the two T-awakenings together occur twice as often as the single H-awakening, and within each coin outcome the Monday vs Tuesday T-awakenings occur equally often. So her equal treatment of the three interview cases is licensed by known frequencies, not assumed by a principle of indifference. Change the coin bias or the schedule and her "indifference" (and her credence) would change accordingly.

    Note on one run versus many runs: The fair coin doesn’t just draw the map, it fixes the frequencies the map tends to realize. We can read those either as long-run proportions or as the single-run expectation for Beauty’s present case. In both readings, the protocol—not indifference—does the work.
    sime October 31, 2025 at 13:56 #1022067
    Quoting Pierre-Normand
    SB's answer: "Because the protocol ties one lamp to Heads-runs and two lamps to Tails-runs, among the awakenings that actually occur across repeats, the lamp I'm under now will have turned out to be a T-lamp about two times out of three. So my credence that the current coin toss result is Tails is 2/3." (A biased coin would change these proportions; no indifference is assumed.)

    The coin's fairness fixes the branches and the long-run frequencies they generate. The protocol fixes how many stopping points each branch carries. Beauty's "what are the odds?" becomes precise only when she specifies what it is that she is counting.

    Note on indifference: The Thirder isn't cutting the pie into thirds because the three interview situations feel the same. It's the other way around: SB is indifferent because she already knows their long-run frequencies are equal. The protocol plus the fair coin guarantee that, among the awakenings that actually occur, the two T-awakenings together occur twice as often as the single H-awakening, and within each coin outcome the Monday vs Tuesday T-awakenings occur equally often. So her equal treatment of the three interview cases is licensed by known frequencies, not assumed by a principle of indifference. Change the coin bias or the schedule and her "indifference" (and her credence) would change accordingly.


    Thirders who argue their position on the basis of frequential probabilities are conflating the subject waking up twice in a single trial (in the case of Tails) with two independent and identically distributed repeated trials; but the subject waking up twice in a single trial constitutes a single outcome, not two outcomes. Frequentist Thirders are therefore overcounting.

    There is only one aleatorically acceptable probability for P(Head | Monday OR Tuesday) (which is the question of the SB problem):

    P(Head | Monday OR Tuesday) =
    P(Monday OR Tuesday | Head) x P(Head) / P(Monday OR Tuesday)

    where

    P(Head) = 0.5 by assumption.
    P(Monday OR Tuesday) = 1 by assumption.

    P(Monday OR Tuesday | Head) = P(Monday | Head) + P(Tuesday | Head) = 1 + 0 = 1.

    P(Head | Monday OR Tuesday) = 1 x 0.5 / 1 = 0.5.


    JeffJo November 01, 2025 at 17:19 #1022265
    Quoting Pierre-Normand
    Sime's piece of the puzzle: The grounding of SB's credence is aleatoric. The fair coin doesn't just draw the map, it drives the frequencies the map will realize across many runs (or, justify expectations over one single run)


    When SB is awake, while she is aware of the map, she has no information that she can use to place herself in that map. IT IS IRRELEVANT.

    In any way that SB can assess her credence that does not reference her position in the map, the answer is 1/3.
    • Using four volunteers, where each sleeps through a different combination in {H&Mon, T&Mon, H&Tue, T&Tue}? On any day, the credence assigned to each of the three awake volunteers cannot be different, and they must add up to 1. The credence is 1/3.
    • Use the original "awake all N days, or awake on one random day in the set of N" problem? There are N+1 waking combinations, and only one corresponds to "Heads." The credence is 1/(N+1).
    • Change the "sleep" day to a non-interview day? It is trivial that the answer is 1/3.


    I'm sure there are others. The point is that the "halfer run-based" argument cannot provide a consistent result. It only works if you somehow pretend SB can utilize information, about which "run" she is in, that she does not and cannot possess.

    But to consider a halfer's objections, I have to get that halfer to address the direct questions I have asked about how a halfer argument applies in other scenarios that must use similar arguments. So far, I can't get that to happen.
    Pierre-Normand November 02, 2025 at 09:08 #1022493
    Quoting JeffJo
    I'm sure there are others. The point is that the "halfer run-based" argument cannot provide a consistent result. It only works if you somehow pretend SB can utilize information, about which "run" she is in, that she does not and cannot posses.


    The only information she makes use of about the run she's in is that the fair coin decided it. It's the exact same information that she's also making use of in your own "four-cards" variation since it's on the basis of this information that she knows that getting a T-Monday, a T-Tuesday or a H-Monday card are equally likely outcomes (1/4 each before updating, on your account). If she knew the coin to be biased towards Tails, say, she would need to adjust her credence accordingly, right?

    To clarify the nature of the information that, according to you, she only gets upon awakening, let me make my SBG (Sleeping Beauty's Garden) analogy closer to yours. Rather than there being 2 lit spots (or lampposts) on the T-path and 1 lamppost on the H-path, let us put two lampposts on each path, but only the first one of those two is lit on the H-path. This is equivalent to sleeping on Tuesday. So, on your view, when she happens upon a lit lamppost, she can exclude H-second-lamppost (equivalent to excluding H-Tuesday upon awakening). This, according to you, enables SB, when she happens upon a lit lamppost, to update her priors from [1/4 T-first; 1/4 T-second; 1/4 H-first; 1/4 H-second] to [1/3 T-first; 1/3 T-second; 1/3 H-first; 0 H-second]

    Would it matter at all if the original experiment were modified in such a way that when the coin lands Tails, Sleeping Beauty is awakened at two separate random times on Monday, or else is only awakened at one single random time, still on Monday? This would be equivalent to my original SBG experiment where there are two lampposts on the T-path and only one on the H-path, and they all are lit (and occasions for an interview), and we don't care at all about the exact location of the lampposts along the paths. There is therefore no information to be had about sleeping days or unlit lampposts. How do you account for her gaining information upon awakening (or when happening upon a lamppost) in those scenarios?
    sime November 02, 2025 at 16:18 #1022569
    Quoting JeffJo
    In any way that SB can assess her credence that does not reference her position in the map, the answer is 1/3.
    • Using four volunteers, where each sleeps through a different combination in {H&Mon, T&Mon, H&Tue, T&Tue}? On any day, the credence assigned to each of the three awake volunteers cannot be different, and they must add up to 1. The credence is 1/3.
    • Use the original "awake all N days, or awake on one random day in the set of N" problem? There are N+1 waking combinations, and only one corresponds to "Heads." The credence is 1/(N+1).
    • Change the "sleep" day to a non-interview day? It is trivial that the answer is 1/3.

    I'm sure there are others. The point is that the "halfer run-based" argument cannot provide a consistent result. It only works if you somehow pretend SB can utilize information, about which "run" she is in, that she does not and cannot possess.


    No, the Halfer position doesn't consider SB to have any information that she could utilize when awakened, due to the fact that SB's knowledge that it is either Monday or Tuesday doesn't contribute new information about the coin, which she only observes after the experiment has concluded.

    Also, your reasoning demonstrates why we shouldn't conflate indifference with equal credence probability values. Yes, an awakened SB doesn't know which of the possible worlds she inhabits and is indifferent with regard to which world she is in, and rightly so. No, this doesn't imply that she should assign equal probability values to each possible world: for example, we have already shown that if an awakened SB assigns equal prior probabilities to every possible world that she might inhabit, then she must assign unequal credences for it being Monday versus Tuesday when conditioning on a tails outcome.

    To recap, if P(Monday) = 2/3 (as assumed by thirders on the basis of indifference with respect to the three possible awakenings), and if P(Tails | Monday) = 1/2 = P(Tails) by either indifference or aleatoric probability, then

    P(Monday | Tails) = P(Tails | Monday) x P(Monday) / P(Tails) = (1/2 x 2/3) / (1/2) = 2/3.

    So let's assume that SB is awakened on Monday or Tuesday and is told, and only told, that the result was Tails. According to the last result, if SB initially assigns P(Monday) = 2/3 on the basis of the principle of indifference, as per Thirders, then she must infer, having learned of the Tails result, that Monday is twice as likely as Tuesday, in spite of Mondays and Tuesdays occurring equally often on a Tails result.

    As this demonstrates, uniform distributions have biased implications. So if SB insists on expressing her state of indifference over possible worlds in the language of probability, she should only say that any probability distribution over {(Monday, Heads), (Monday, Tails), (Tuesday, Tails)} is compatible with her state of indifference, subject to the constraint that the unconditioned aleatoric probability of the coin is fair.

    However, if she really must insist on choosing a particular probability distribution to represent her state of indifference, then she can still be a halfer by using the principle of indifference to assert P(Monday | Tails) = P(Tuesday | Tails), and then inferring the unconditioned credence that it is Monday to be P(Monday) = 1/2, which coheres with the Halfer position.

    JeffJo November 06, 2025 at 19:05 #1023531
    quote="Pierre-Normand;1022039"]Before Beauty sleeps, the attendants lay out a garden she knows everything about:

    At the gate there's a fair fork: Heads-path or Tails-path (that's the aleatoric 1/2 vs 1/2).

    Along the paths they place the stopping spots (interviews): one lit spot on Heads, two lit spots on Tails.

    Amnesia just means each lit spot looks the same when she stands under it.[/quote]

    The difference here is that she seems to get a second chance at finding a lamppost along one path, but not the other. Your version pre-loads the probabilities based on which path is taken, which is what I mean when I say you use information that SB cannot possess.

    Whether or not that makes a difference can be seen if you address the Camp Sleeping Beauty version, where she may or may not "find" a spot along the indicated path. Specifically, how does, say, Friday's lamp being lit or unlit change her credence on Monday? You are taking a property of the path she is on, that she cannot know, as evidence she can use.

    Quoting Pierre-Normand
    The only information she makes use of about the run she's in is that the fair coin decided it.

    So she doesn't make use of the fact that her "run" might include another waking, which is supposed to be beyond her worldview? Specifically, that there might be a connection that she cannot see?

    Quoting Pierre-Normand
    It's the exact same information that she's also making use of in your own "four-cards" variation since it's on the basis of this information that she knows that getting a T-Monday, a T-Tuesday or a H-Monday ...

    You seem to have left out H-Tuesday. By doing so, you are utilizing information about one "run" that you ignore in another. And this omission is the exact reason I'd like to see my questions, which are designed to demonstrate how you are utilizing this information inconsistently, addressed. If you feel you are not utilizing this information, there should be no reason to not address them.
    JeffJo November 06, 2025 at 19:32 #1023542
    Quoting sime
    No, the Halfer position doesn't consider SB to have any information that she could utilize when awakened,

    "Gee, what do I know? Well, if the coin landed Tails then there is another waking I have to go to, and I have to split the prior probability of Heads between today and that day. But if it landed Heads, the full probability weight of Heads is applied to today."

    "So If I knew which way the coin landed, I could get different answers about today. BUT I DON'T KNOW. I have to moderate these possibilities in a way that makes the probability of TODAY equal for all."

    Of course, all of this could be made clearer if we compare two versions of the experiment, one where SB sleeps through H+Tue, and one where she is awakened but taken to DisneyWorld instead of being interviewed. If the answer changes, she is using information about the "other" day that she is supposed to not know. If the answer is the same, she is not using it. But if it is the same, it can't be 1/2 for the COIN, and 1/4 for the COIN+DAY, since the probabilities add up to more than 1.

    Yes, an awakened SB doesn't know which of the possible worlds she inhabits and is indifferent with regards to which world she is in and rightly so.


    So don't use that as a model, use the well-established methods of conditional probability. Ring a bell at noon of both days. An awake SB hears it, but a sleeping SB is unaffected in any way.

    The prior probabilities of a specific bell-ring being on any member of {H+Mon, T+Mon, H+Tue, T+Tue} is 1/4. If SB hears it, H+Tue is eliminated. Conditional probability says:

    Pr(H+Mon|Bell) = Pr(H+Mon)/[Pr(H+Mon)+Pr(T+Mon)+Pr(T+Tue)] = 1/3.
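    A minimal Monte Carlo sketch in Python of that conditional-probability model (a "bell ring" is modeled as a uniformly sampled Coin+Day cell, per the prior stated above):

import random

def credence_in_heads_given_bell(trials=100_000):
    heads_and_heard = 0
    heard = 0
    for _ in range(trials):
        coin = random.choice(["H", "T"])
        day = random.choice(["Mon", "Tue"])
        hears_bell = not (coin == "H" and day == "Tue")  # she sleeps through H+Tue
        if hears_bell:
            heard += 1
            if coin == "H":        # given the bell is heard, Heads can only be H+Mon
                heads_and_heard += 1
    return heads_and_heard / heard

print(credence_in_heads_given_bell())  # ≈ 1/3, matching Pr(H+Mon | Bell) above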

    No, this doesn't imply that she should assign equal probability values to each possible world: for example, we have already shown that if an awakened SB assigns equal prior probabilities to every possible world that she might inhabit, then she must assign unequal credences for it being Monday versus Tuesday when conditioning on a tails outcome.

    ???? The probability of Tuesday, conditioned on a Tuesday outcome, is 1.

    But SB's sample space (the set of possible outcomes) does not include "Tuesday." That is an event that is the union of the discrete outcomes "H+Tue" and "T+Tue." The significance is that she can only observe one.

    To recap, if P(Monday) = 2/3 (as assumed by thirders on the basis of indifference with respect to the three possible awakenings), and if P(Tails | Monday) = 1/2 = P(Tails) by either indifference or aleatoric probability, then

    ????

    Thirders never say Pr(Monday)=2/3. Since SB can only observe one combination of COIN+DAY at a time, they say that Pr(MONDAY|I AM OBSERVING ONE COMBINATION) = 1/3. A dreaming SB might say PR(MONDAY|I'M DREAMING)=1, but any evidence she actually has includes only one of these outcomes.
    sime November 08, 2025 at 09:24 #1023786
    Quoting JeffJo
    So don't use that as a model, use the well-established methods of conditional probability. Ring a bell at noon of both days. An awake SB hears it, but a sleeping SB is unaffected in any way.

    The prior probabilities of a specific bell-ring being on any member of {H+Mon, T+Mon, H+Tue, T+Tue} is 1/4. If SB hears it, H+Tue is eliminated. Conditional probability says:

    Pr(H+Mon|Bell) = Pr(H+Mon)/[Pr(H+Mon)+Pr(T+Mon)+Pr(T+Tue)] = 1/3.


    Your bell is just a label for the event {Monday OR Tuesday} which is independent of the coin flip, and so you are merely repeating the same appeal to indifference as before.

    What I was pointing out is that this application of the principle of indifference isn't consistently applied to SB.

    Lets start by assuming the credence that you insist upon:

    P(Monday,Heads) = P(Tuesday, Heads) = P(Tuesday,Tails) = 1/3

    To verify that you are happy with this credence assignment, you need to check the hypothetical credences that this credence implies. In the case of P(Monday | Tails) we get

    P(Monday | Tails) = P(Monday, Tails) / P(Tails) = (1/3) / (1/2) = 2/3.

    Are you happy with this implied conditional credence? If SB is told that the outcome is tails when she wakes up, then should she believe that it is twice as likely to be Monday as Tuesday, given her knowledge of tails?

    JeffJo November 08, 2025 at 19:41 #1023873
    Quoting sime
    Your bell is just a label for the event {Monday OR Tuesday}


    It's supposed to turn the continuous passage of time into a discrete outcome. And yes, I have had halfers try to make that an issue.

    What I was pointing out is that this application of the principle of indifference isn't consistently applied to SB.


    The Principle of Indifference is applied to single bell-rings, not to SB. It does not take SB into account. That's the point.

    P(Monday,Heads) = P(Monday, Tails) = P(Tuesday,Tails) = 1/3


    You keep abbreviating it, to insert SB into the PoI application. And I think you made a typo, that I corrected (if you intended to include H+Tue, but not T+Mon, I'd love to hear that explained).

    What I'm saying is that the PoI applies to the prior probabilities, which are:

    • Pr(H+Mon) = Pr(T+Mon) = Pr(H+Tue) = Pr(T+Tue) = 1/4


    Note that there is no "application to SB" here. So I can't be doing it wrong. You are doing it wrong, since you try to include SB somehow.

    The solution is that when SB hears the bell ring, she can eliminate H+Tue. The event AWAKE is (H+Mon) or (T+Mon) or (T+Tue), with prior probability 3/4. And:

    • Pr(H+Mon|AWAKE) = Pr((H+Mon) and AWAKE)/Pr(AWAKE) = (1/4)/(3/4)= 1/3.
    • Pr(T+Mon|AWAKE) = Pr((T+Mon) and AWAKE)/Pr(AWAKE) = (1/4)/(3/4)= 1/3.
    • Pr(T+Tue|AWAKE) = Pr((T+Tue) and AWAKE)/Pr(AWAKE) = (1/4)/(3/4)= 1/3.


    Note that the PoI is not used here, either.

    To verify that you are happy with this credence assignment, you need to check the hypothetical credences that this credence implies. In the case of P(Monday | Tails) we get

    P(Monday | Tails) = P(Monday, Tails) / P(Tails) = (1/3) / (1/2) = 2/3.


    No,
    • Pr(Mon|Tails) = Pr((T+Mon) and ((T+Mon) or (T+Tue))) / Pr((T+Mon) or (T+Tue))
    • Pr(Mon|Tails) = (1/4) / (1/2) = 1/2


    Are you happy with this implied conditional credence?

    This one? Yes. Since yours included observing H+Tue, and pulled the event "Tails" out of SB's observation, that one was bogus.

    +++++

    Now, replace "Leave SB asleep" on H+Tue with "Wake SB and go to DisneyWorld with probability Q, or leave her asleep with probability 1-Q."

    How does SB's credence in Heads, in an interview that can't be on H+Tue, depend on Q?
    sime November 12, 2025 at 13:08 #1024558
    Reply to JeffJo

    Well, I've come to the conclusion that your answer is in some sense philosophically superior to the result insisted upon by halfers like myself, even though your answer is technically false. In a nutshell, I think that although you have lost the battle, you have won the war.

    As I understand it, your proposal is essentially the principle of indifference applied to a sample space that isn't the same as the stated assumptions of the SB problem, namely your sample space is based on the triple

    {Coin,Day,Wakefulness}

    upon which you assign the distribution Pr(Heads,Monday,Awake) = Pr(Tails,Monday,Awake) = Pr(Heads,Tuesday,Asleep) = Pr(Tails,Tuesday,Awake) = 1/4.

    But the important thing isn't your appeal to indifference "on a single bell," as you put it, but the different sample space you used and the viewpoint it provides. (Any measure can be assumed on your sample space, provided that it satisfies the marginal distribution P(Heads) = 1/2 and assigns coherent conditional credences; your particular choice based on PoI is easily seen to be coherent, for the reasons you point out.)

    By contrast, the probability space for the classical SB problem is that of a single coin flip C = {H,T}, namely (C, {∅, {H}, {T}, {H,T}}, P) where P(C = H) = 0.5. From this premise it isn't technically possible to conclude anything except for the Halfer position, namely

    P(C = H | Monday Or Tuesday) = P(C = H) = 1/2.

    for reasons already explained many times (and which is more rigorously proved by pushing the measure P forwards onto the different sample space of day outcomes, then disintegrating the resulting measure and taking the inverse to obtain the conditional probability P(C = H | Monday Or Tuesday), but this is incidental).

    But what makes your argument incorrect for the SB problem à la lettre, namely the use of a non-permitted sample space that is based on commonsense counterfactual intuition going beyond the explicitly stated premises of the SB problem, is also what makes your argument interesting and persuasive. For your argument for the Thirder's position is based on coherent counterfactual intuitions that are commonsensically valid and important to point out, even though they are inapplicable with respect to a strict interpretation of the SB problem as explicitly stated.

    Essentially, if by "probability" we mean a coherence value based only on the frequential probability of the coin landing heads as explicitly assumed by the SB problem and we do not make any other assumptions no matter how intuitively plausible, then the answer can only be a half because the sample space is of one coin flip. But if we interpret "probability" more liberally to a mean a credence that includes commonsense counterfactual intuitions, then the answer can be different to a half provided that we define "probability" more precisely to permit this and extend our premises of the SB problem to include counterfactual premises that allow your chosen sample space. But then the answer isn't necessarily equal to a third, for that particular case requires the use of the Principle of Indifference applied to bells, which a non-halfer might object to, even though he agrees to use your sample space.
    JeffJo November 12, 2025 at 19:03 #1024606
    Quoting sime
    As I understand it, your proposal is essentially the principle of indifference applied to a sample space that isn't the same as the stated assumptions of the SB problem, namely your sample space is based on the triple

    {Coin,Day,Wakefulness}

    upon which you assign the distribution Pr(Heads,Monday,Awake) = Pr(Tails,Monday,Awake) = Pr(Heads,Tuesday,Asleep) = Pr(Tails,Tuesday,Awake) = 1/4.

    No.

    The reason I keep asking for specific answers to specific questions is that I find that nobody addresses "my sample space." Even though I keep repeating it. They change it, as you did here, to include the parts I am very intentionally trying to eliminate.

    There are two, not three, random elements. They are COIN and DAY. WAKE and SLEEP are not random elements, they are the consequences of certain combinations, the consequences that SB can observe.

    There are two sampling opportunities during the experiment, not two paths. The random experiment, as it is seen by SB "inside" the experiment, is just one sample. It is not one day on a fixed path as seen by someone not going through the experiment, but one day only. Due to amnesia, each sample is not related, in any way SB can use, to any other.

    Each of the four combinations of COIN+DAY is equally likely (this is the only application of the PoI), in the prior (this means "before observation") probability distribution. Since there are four combinations, each has a prior ("before observation") probability of 1/4.

    In the popular problem, SB's observation, when she is awake, is that this sample could be H+Mon, T+Mon, or T+Tue; but not H+Tue. She knows this because she is awake. One specific question I ask is what happens if we replace SLEEP with DISNEYWORLD. Because the point that I feel derails halfers is the sleep.

    Halfers seem to think SLEEP means H+Tue cannot be sampled. So, I change the problem so it can be sampled. SB's observation is now that she is in an interview. So this sample could be H+Mon, T+Mon, or T+Tue; but not H+Tue. She knows this because she is in an interview, not at DisneyWorld. There is no difference in her utilization of the mechanics of the experiment, nor in what her observation means.

    By contrast, the probability space for the classical SB problem is that of a single coinflip C = {H,T}, namely (C,{0,H,T,{H,T}},P) where P (C = H) = 0.5 .

    This probability space includes a distribution that says a single sampling is happening on two days at the same time. Halfers convince themselves that there is no contradiction; after TAILS the "other" awakening is identical, and after HEADS it is not observed. But that doesn't work if we change to DISNEYWORLD.

    But what makes your argument incorrect [is] the use of a non-permitted sample space.

    What is non-permitted? It is a functionally equivalent one. Consequence #1 occurs with three of the four combinations, and Consequence #2 occurs with the fourth. The question is only asked with Consequence #1. I'm sorry, but this is a rationalization.

    But its use is why I asked the specific question about my Camp Sleeping Beauty version. There is a 6x6 calendar for the six days of camp, and a six-sided die is used to pick one row (the columns are days of the week). If campers partake of, say, activity C, then how do we deduce the conditional probability that the die roll was, say, 5?

    I claim that the 36 cells each have a prior probability of 1/36 (PoI again). And that the conditional probability of a 5, given activity C, is the number of times C appears in row 5, divided by the number of times C appears in the calendar.

    AND, this does not, and cannot, change if one of the "activities" is "sleep through the entire day." This is what addresses your concern here, and I'd love to hear why D="DARTS" produces a different result than D="DOZE."

    There is nothing "non-permitted" about including either D in the sample space. In fact, since either D is part of the camp counselors' planning, they must be functionally equivalent. And it applies to any NxN isomorphic experiment, even if N=2.
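    As a sketch of that counting rule (the 6x6 calendar below is invented just for illustration; only the counts matter), here is the conditional probability of a die roll of 5 given activity C, computed once with D = "DARTS" and once with D = "DOZE":

    ```python
    from fractions import Fraction

    def credence_in_die_roll(calendar, observed_activity, die_roll):
        """Uniform 1/36 prior per cell; condition on today's observed activity."""
        row = calendar[die_roll - 1]
        total = sum(r.count(observed_activity) for r in calendar)
        return Fraction(row.count(observed_activity), total)

    def make_calendar(d_label):
        # An invented 6x6 calendar: rows are die faces 1..6, columns are days.
        return [
            ["A", "B", "C", d_label, "E", "F"],
            ["B", "C", "A", "E", d_label, "F"],
            ["C", "A", "B", "F", "E", d_label],
            [d_label, "C", "E", "A", "B", "F"],
            ["E", d_label, "C", "C", "F", "A"],   # die roll 5: activity C twice
            ["F", "E", d_label, "C", "A", "B"],
        ]

    for d in ("DARTS", "DOZE"):
        # count of C in row 5, divided by count of C in the whole calendar
        print(d, credence_in_die_roll(make_calendar(d), "C", die_roll=5))  # 2/7 both times
    ```

    The label assigned to D never enters the count, so the two answers are identical.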

    We could discuss this conclusion, but that discussion will need to include answers.
    sime November 12, 2025 at 19:41 #1024611
    Quoting JeffJo
    The reason I keep asking for specific answers to specific questions, is that I find that nobody addresses "my sample space." Even though I keep repeating it. They change it, as you did here, to include the parts I am very intentionally trying to eliminate.


    I think you misunderstand me. I am simply interpreting the thrust of your position in terms of an extended sample space. This isn't misconstruing your position but articulating it in terms of Bayesian probabilities. This step is methodological and not about smuggling in new premises, except those that you need to state your intuitive arguments, which do constitute additional but reasonable premises.


    Quoting JeffJo

    There are two, not three, random elements. They are COIN and DAY. WAKE and SLEEP are not random elements, they are the consequences of certain combinations, the consequences that SB can observe.


    Look at it this way: it certainly is the case that, according to the Bayesian interpretation of probabilities, one can speak of a joint probability distribution over (Coin State, Day State, Sleep State), regardless of one's position on the topic. But in the case of the frequentist halfer, the sleep state can be marginalised out and in effect ignored, due to their insistence upon only using the coin information and rejecting counterfactual outcomes that go over and above the stated information.

    Quoting JeffJo

    There are two sampling opportunities during the experiment, not two paths. The random experiment, as it is seen by SB's "inside" the experiment, is just one sample. It is not one day on a fixed path as seen by someone not going through the experiment, but one day only. Due to amnesia, each sample is not related, in any way SB can use, to any other.


    You have to be careful here, because you are in danger of arguing for the halfers' position on their behalf. Counterfactual intuitions, which you are appealing to below, are in effect a form of path analysis, even if you don't see it that way.

    Quoting JeffJo

    Each of the four combinations of COIN+DAY is equally likely (this is the only application of the PoI), in the prior (this means "before observation") probability distribution. Since there are four combinations, each has a prior ("before observation") probability of 1/4.

    In the popular problem, SB's observation, when she is awake, is that this sample could be H+Mon, T+Mon, or T+Tue; but not H+Tue. She knows this because she is awake. One specific question I ask, is what happens if we replace SLEEP with DISNEYWORLD. Because the point that I feel derails halfers is the sleep.


    But the Sleeping Beauty Problem per se does not assume that Sleeping Beauty exists on Tuesday if the coin lands heads, because it does not include an outcome that measures that possibility. Hence you need an additional variable if you wish to make your counterfactual argument that SB would continue to exist on Tuesday in the event the coin lands heads. Otherwise you cannot formalise your argument.

    Just to clarify, I'm not mistaking you for a naive thirder, as I did initially, when I assumed that you were blindly assigning a naive prior over three possible outcomes. I think your counterfactual arguments are reasonable, and I verified that they numerically check out; but they do require the introduction of a third variable to the sample space in order to express your counterfactual intuition, the variable I called "sleep state" (which you could equally call "the time-independent state of SB").


    JeffJo November 13, 2025 at 18:06 #1024755
    Quoting sime
    I am simply interpreting the thrust of your position in terms of an extended sample space.

    You are inserting details into the description of the outcomes that provide no additional information. It has nothing to do with the thrust of my position. You are obfuscating the sample space in order to suggest an omission.

    Look at it this way: a sample space is a set of distinct outcomes that includes all possibilities. An event is a set of outcomes. These concepts often get confused. You are confusing them by including the name of an event (awake, asleep) as your "new variable" that you claim helps to define the outcome.

    There is a simple model for how SB views each waking in an NxN sleeping-beauty calendar. There are N^2 cells in the calendar. Each has an N^(-2) prior probability at the start of an amnesia'ed day, before SB is awakened. When an awake SB observes activity X, her credence in any row or column of the calendar is the number of times activity X occurs in that row or column, divided by the number of times it occurs in the entire calendar.

    The only complication in this model, is what happens when one of the activities does not permit an observation. But since the crux of SB's observation is recognizing the activities that did not happen, that cannot be an issue. For there to be an effect, that calendar cell has to cease to exist, not be unobservable. So I'm trying to get that difference addressed.

    This step is methodological and not about smuggling in new premises, except those that you need to state your intuitive arguments, which do constitute additional but reasonable premises.

    That's just rationalization. I have proposed a model that I claim represents the SB problem. Whether or not it is "smuggling in new premises" (it isn't; it is extending a premise that already exists), the issues here are only (A) is the SB problem an example of my model, (B) does my solution apply to the model in general, and (C) how does an unobservable activity affect the solution.

    In other words, when you claim that I am "smuggling in a new premise," you are focusing on how the changes I suggest add details. What I am trying to get you to address is how the SB problem is an example of my model with fewer details.

    Look at it this way: it certainly is the case that, according to the Bayesian interpretation of probabilities, one can speak of a joint probability distribution over (Coin State, Day State, Sleep State), regardless of one's position on the topic.


    What you are doing is creating an event, a subset of the sample space, and applying it as a "new variable" in every outcome that is a member of that event. If you were adding new descriptors, you should add all of them to every outcome. Yes, some will end up with zero probability, like H+Tue+Awake.

    But the Sleeping Beauty Problem per se does not assume that Sleeping Beauty exists on Tuesday if the coin lands heads, because it does not include an outcome that measures that possibility.

    And you confuse "measuring the possibility" with "the possibility exists as an outcome." But I devised specific questions to address this exact issue, which have gone completely ignored.
    • If we were to wake SB on H+Tue and take her to DisneyWorld (see how I don't need to add DISNEYWORLD as a descriptor in the sample space?), what is her credence in the event (H+Mon or H+Tue) if she observes the event (H+Mon or T+Mon or T+Tue)? It is (see the sketch after this list):
    • Pr((H+Mon or H+Tue) and (H+Mon or T+Mon or T+Tue)) / Pr(H+Mon or T+Mon or T+Tue)
    • = Pr(H+Mon) / Pr(H+Mon or T+Mon or T+Tue) = (1/4)/(3/4) = 1/3.
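    Here is that computation carried out exactly over the four-cell prior (the event names are just labels for sets of outcomes; the code is a sketch, not part of the problem statement):

    ```python
    from fractions import Fraction

    # Uniform prior over the four COIN+DAY combinations.
    prior = {("H", "Mon"): Fraction(1, 4), ("H", "Tue"): Fraction(1, 4),
             ("T", "Mon"): Fraction(1, 4), ("T", "Tue"): Fraction(1, 4)}

    heads    = {("H", "Mon"), ("H", "Tue")}                # event: the coin shows Heads
    observed = {("H", "Mon"), ("T", "Mon"), ("T", "Tue")}  # event: SB is in an interview

    p_observed = sum(prior[o] for o in observed)
    p_heads_and_observed = sum(prior[o] for o in heads & observed)
    print(p_heads_and_observed / p_observed)  # 1/3
    ```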


    [You] do require the introduction of a third variable to the sample space in order to express your counterfactual intuition that I called "sleep state" (which you could equally call "the time independent state of SB").

    Except, you didn't add one. You applied a name that always corresponds to a combination of the other variables. And pardon me for suggesting this, but it seems you are using it to avoid addressing my very specific questions, to which your "new variable" adds nothing. If you think it does add something, then use it as part of your answer to those questions.

    And it is not "counterfactual." The outcome H+Tue can occur. The prior sample space comprises every possibility before the random elements are determined and before an observation is made. And I'm tired of all this repetition, but all of this can be explored by addressing the questions I have asked.
    sime November 14, 2025 at 14:32 #1024924
    Reply to JeffJo

    All I can say is that we aren't agreeing as to the semantics of the problem. Your sample space includes the counterfactual possibility (H, Tuesday), which isn't in the sample space of the experiment as explicitly defined. Your appeal to "if we awoke SB on Tuesday in the event of heads" might be a perfectly rational hypothetical in line with common-sense realism, but that hypothetical event isn't explicit in the problem description. Furthermore, the problem is worded as a philosophical thought experiment from the point of view of SB as a subject who cannot observe that Tuesday occurred on a heads result, nor even know of her previous awakenings, in sharp contrast to an external point of view relative to which her awakenings are distinguishable and for which the existence of Tuesday isn't conditional on the coin landing tails.

    As straw-clutching as this might sound, there are radically minded empiricists who would argue that the existence of "Tuesday" for Sleeping Beauty is contingent upon her being awake. For such radical empiricists the event (H, Tuesday) doesn't merely have zero probability, but is a logical contradiction from SB's perspective.

    Epistemically for SB,

    (h,mon) -> observable but indiscernible.
    (h,tue) -> unobservable.
    (t,mon) -> observable but indiscernible.
    (t,tue) -> observable but indiscernible.

    So we are back to the question as to whether (h,tue) should be allowed in the sample space. This is ultimately what our dispute boils down to.


    JeffJo November 14, 2025 at 21:51 #1024977
    Quoting sime
    All I can say is that we aren't agreeing as to the semantics of the problem. Your sample space includes the counterfactual possibility (H, Tuesday), which isn't in the sample space of the experiment as explicitly defined.

    So, are you saying that the week skips from Monday to Wednesday if the coin lands on Heads? What if they wait to flip the coin until Tuesday morning?

    As I have described in several ways, and constructed questions to test, the experiment consists of two days, with two COIN+DAY potential-observation opportunities. But the amnesia means only one is "seen" at a time, and prevents them from being connected (for Pierre: this means no path information). But H+Tue, with no indication of waking or sleeping, is still a possibility, and a sample space must describe all possibilities.

    Your appeal to "if we awoke SB on Tuesday in the event of heads" might be a perfectly rational hypothetical...

    Please, what is inconsistent about it, when we ignore whether she is awake? Do you think she does not know it can happen?

    But that hypothetical event isn't explicit in the problem description.

    Why does the problem description have to explicitly say that something which obviously can happen, can actually happen? The description only says that SB will sleep through it, not that it is excluded from the realm of possibility.

    But again, look at the version where, on H+Tue, SB is left asleep with probability Q and otherwise taken to DisneyWorld. The original problem is a sub-case of this with Q=1; the one you think is a different problem, but is not, has Q=0. Instead of thinking of me as adding something, address this one as Q approaches, and then equals, 1.
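    A small sketch of that Q-version (the helper name is mine; Q is taken to be the probability of being left asleep on H+Tue) shows that Q drops out before it can matter:

    ```python
    from fractions import Fraction

    def credence_in_heads_given_interview(q):
        """q = probability SB is left asleep on Heads+Tuesday; with probability
        1 - q she is taken to DisneyWorld instead. Neither branch is an interview."""
        prior = Fraction(1, 4)                       # each COIN+DAY cell
        interviews_on_h_tue = q * 0 + (1 - q) * 0    # zero either way, so q cancels here
        p_interview = 3 * prior + prior * interviews_on_h_tue
        p_heads_and_interview = prior                # only H+Mon is Heads and an interview
        return p_heads_and_interview / p_interview

    for q in (Fraction(0), Fraction(1, 2), Fraction(1)):
        print(q, credence_in_heads_given_interview(q))  # 1/3 for every q
    ```

    Q multiplies a cell that contributes no interviews, so it cancels before it can affect the conditional probability.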

    Furthermore, the problem is worded as a philosophical thought experiment from the point of view of SB as a subject who cannot observe that Tuesday occurred on a heads result...

    But who still knows it can happen. All I'm saying is that your "philosophical thought experiment" does not "philosophically eliminate Tuesday from the week" if the coin lands Heads.

    OR, we could just go back to the problem that spawned the SB problem, which was more of that "philosophical thought experiment."

    • SB is put to sleep on day 0.
    • A fair coin is flipped, and an N-sided die is rolled.
    • Over the next N days, SB is woken, or not, based on what is currently showing on the coin and die.
    • If she is woken, she is amnesia-ed before going back to sleep that night.
    • Regardless of whether she is woken, the die is rotated down by 1 (resetting to N if it was a 1).


    The conditions for waking her are that the coin is currently showing Tails, or the die is currently showing 1.

    In this version, an exact implementation of the problem proposed by Arnold Zuboff, SB knows FOR A FACT that the experimenters looked at the coin and die this morning. She knows, ALSO FOR A FACT, that there were 2*N equally likely combinations they could have seen on this particular day. She knows, AGAIN FOR A FACT, that she would not be awake for N-1 of them, that she would be awake for N+1, and that in only one of those is the coin showing HEADS. Her credence in Heads is 1/(N+1).
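    Here is that count made explicit for a few die sizes (a sketch; it simply enumerates the 2*N equally likely coin+die combinations and keeps the ones compatible with being awake, i.e. Tails or die = 1):

    ```python
    from fractions import Fraction

    def credence_in_heads(n_sides):
        """On any given day the coin+die pair is one of 2*n_sides equally likely
        combinations; SB is awake iff the coin shows Tails or the die shows 1."""
        combos = [(coin, die) for coin in ("H", "T") for die in range(1, n_sides + 1)]
        awake = [(coin, die) for coin, die in combos if coin == "T" or die == 1]
        heads_and_awake = [c for c in awake if c[0] == "H"]
        return Fraction(len(heads_and_awake), len(awake))

    for n in (2, 6, 100):
        print(n, credence_in_heads(n))  # 1/3, 1/7, 1/101, i.e. 1/(N+1)
    ```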

    If you think I changed something by adding (if N>2) more days, or that changing the order makes it different, please explain how, instead of avoiding the issue by simply asserting that it is. If you think SB knowing that some combinations could not be observed means that they must be excluded from the sample space, consider that she could be awoken on all days, observe the coin and die herself, and then have a sleep+amnesia gas released if she sees HEADS and DIE>1.
    JeffJo November 15, 2025 at 19:39 #1025133
    An N-day experiment:
    1. The days of the experiment are named D(1) through D(N).
    2. D(0) is the night before the experiment begins, when SB is informed of all these details before going to sleep.
    3. After she goes to sleep on D(0), an M-sided die will be rolled and preserved throughout the experiment.
    4. What occurs on each day of the experiment is predetermined using an M-row by N-column calendar. One of M distinct activities (that can all be differentiated from each other), A(1) through A(M), is assigned to each cell in the calendar, such that each appears at least once. The calendar is shown to SB on D(0).
    5. With one variation about A(M) that will be explained below, SB will be awakened each day, and will participate in the activity assigned to that day's column and the row determined by the die roll.
    6. After participating in a day's activity, except A(M), SB will be shown the calendar and asked for her credence in each possible die roll. After any waking activity, she will be put back to sleep with amnesia.


    There are three possibilities for the variation:
    1. For A(M), SB will be left asleep all day.
    2. For A(M), SB will be taken to DisneyWorld.
    3. For A(M), SB will be left asleep with probability Q (for some 0 < Q < 1) and otherwise taken to DisneyWorld.


    After participating in activity A(m), for 1 <= m < M, SB's credence that the die landed on row r should be the number of times A(m) appears in row r, divided by the number of times A(m) appears in the entire calendar.
    The popular SB problem is a version of this with M=N=2, using variation #1. A(1) is simply "interview" and appears in every cell except the one in the HEADS row and Tuesday column. So the answer, when she is interviewed, is that Pr(HEADS|INTERVIEW) = 1/3.

    The extra details DO NOT make this problem's solution inapplicable to the popular SB problem. In fact, any solution to the popular one has to be consistent with this problem's solution. All the added details do is illustrate why various rationalizations made by halfers, to avoid using simple conditional-probability methods, are invalid. In particular, the three variations for A(M) are meant to show how A(M) cannot affect the solution.
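    As a sketch of that last point (the labels standing in for A(M) are placeholders), here is the M=N=2 calculation with each of the three variations substituted for A(M); note that the content of A(M) is never consulted:

    ```python
    from fractions import Fraction

    def credence_in_row(calendar, row, observed):
        """Uniform prior over all cells; condition on today's activity == observed."""
        total = sum(r.count(observed) for r in calendar)
        return Fraction(calendar[row].count(observed), total)

    # M = N = 2: row 0 is the HEADS row, row 1 the TAILS row; columns are Mon, Tue.
    # A(1) = "interview". A(2) is whatever the chosen variation says, and its
    # content never enters the calculation below.
    for a2 in ("sleep all day", "DisneyWorld", "asleep with probability Q, else DisneyWorld"):
        calendar = [["interview", a2],             # HEADS row
                    ["interview", "interview"]]    # TAILS row
        print(a2, "->", credence_in_row(calendar, row=0, observed="interview"))  # 1/3 each time
    ```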

    I've been trying to get many questions answered for about a month now, and none have been addressed, except with unsupported assertions that I must be wrong. If that is true, there must be a solution to the problem I present today that differs from what I said.

    If you disagree, please provide a solution to this problem, with specific detail about how the three variations affect the answer, not just an assertion that they must.