LessWrong 2.0 Reader
Yes! There is a 50% chance that the coin is Tails, and so that the room is Red in this experiment.
yimbygeorge on The Cognitive-Theoretic Model of the Universe: A Partial Summary and Review
Falsifiable predictions?
wilkox on My Interview With Cade Metz on His Reporting About Slate Star Codex
The epistemology was not bad behind the scenes, it was just not presented to the readers. That is unfortunate but it is hard to write a NYT article (there are limits on how many receipts you can put in an article and some of the sources may have been off the record).
I'd have more trust in the writing of a journalist who presents what they believe to be the actual facts in support of a claim, than one who publishes vague insinuations because writing articles is hard.
Cade correctly informed the readers that Scott is aligned with Murray on race and IQ.
He really didn’t. Firstly, in the literal sense that Metz carefully avoided making this claim (he stated that Scott aligned himself with Murray, and that Murray holds views on race and IQ, but not that Scott aligns himself with Murray on these views). Secondly, and more importantly, even if I accept the implied claim I still don’t know what Scott supposedly believes about race and IQ. I don’t know what ‘is aligned with Murray on race and IQ’ actually means beyond connotatively ‘is racist’. If this paragraph of Metz’s article was intended to be informative (it was not), I am not informed.
tailcalled on My Interview With Cade Metz on His Reporting About Slate Star Codex
It's totally possible to say taboo things; I do it quite often.
But my point is more, this doesn't seem to disprove the existence of the tension/Motte-Bailey/whatever dynamic that I'm pointing at.
signer on Beauty and the Bets
You observe outcome “Blue” which corresponds to event “Blue or Red”.
So you bet 1:1 on Red after observing this “Blue or Red”?
ape-in-the-coat on Beauty and the Bets
*ethically
No, I'm not making any claims about ethics here, just math.
Works against Thirdism in the Fissure experiment too.
Yep, because it's wrong in Fissure as well. But I'll be talking about it later.
I mean, if you are going to precommit to the right strategy anyway, why do you even need probability theory?
To understand whether you should precommit to any strategy and, if you should, then which one. The fact that
P(Heads|Blue) = P(Heads|Red) = 1/3
but
P(Heads|Blue or Red) = 1/2
means that you may precommit to either Blue or Red and it doesn't matter which, but if you don't precommit, you won't be able to guess Tails better than chance per experiment.
The whole question is how do you decide to ignore that P(Head|Blue) = 1/3, when you chose Red and see Blue. And how is it not "a probabilistic model produces incorrect betting odds", when you need to precommit to ignore it?
You do not ignore it. When you choose Red and see that the walls are blue, you do not observe event "Blue". You observe outcome "Blue", which corresponds to event "Blue or Red", because the sigma-algebra of your probability space is affected by your precommitment [LW · GW].
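The disputed frequencies can be checked numerically. Below is a minimal Monte Carlo sketch of the Technicolor setup; the protocol details are my reconstruction (fair coin, Tails means two awakenings, and each awakening day's color is drawn independently), not taken verbatim from the post:

```python
import random

random.seed(0)

N = 200_000
blue_awakenings = 0
heads_blue_awakenings = 0
heads_experiments = 0

for _ in range(N):
    heads = random.random() < 0.5
    heads_experiments += heads
    awakenings = 1 if heads else 2
    for _ in range(awakenings):
        color = random.choice(["Blue", "Red"])
        if color == "Blue":
            blue_awakenings += 1
            heads_blue_awakenings += heads

# Per awakening: frequency of Heads among Blue awakenings is about 1/3
print(heads_blue_awakenings / blue_awakenings)
# Per experiment: the coin is Heads half the time, whatever the colors
print(heads_experiments / N)
```

Both numbers come out as claimed: conditioning on the color of a single awakening gives the thirder 1/3, while the per-experiment frequency of Heads stays at 1/2.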
wei-dai on My Interview With Cade Metz on His Reporting About Slate Star Codex
Many comments pointed out that NYT does not in fact have a consistent policy of always revealing people's true names. There's even a news editorial about this which I point out in case you trust the fact-checking of NY Post more.
I think that leaves 3 possible explanations of what happened:
In my view, most rationalists seem to be operating under a reasonable probability distribution over these hypotheses, informed by evidence such as Metz's mention of Charles Murray, lack of a public written policy about revealing real names, and lack of evidence that a private written policy exists.
ape-in-the-coat on The Solution to Sleeping Beauty
The Two Coin version is about what happens on one day.
Let it be not two different days but two different half-hour intervals. Or even two milliseconds - this doesn't change the core of the issue that sequential events are not mutually exclusive.
observation of a state, when that observation bears no connection to any other, as independent of any other.
It very much bears a connection. If you are observing state TH, it necessarily means that either you've already observed or will observe state TT.
What law was broken?
The definition of a sample space - it's supposed to be constructed from mutually exclusive elementary outcomes.
Do you disagree that, on the morning of the observation, there were four equally likely states? Do you think the subject has some information about how the state was observed on another day?
Disagree on both counts. You can't treat HH, HT, TT, TH as individual outcomes, and the term "morning of observation" is underspecified. The subject knows that some of them happen sequentially.
what I am trying to do is eliminate any basis for doing that
I noticed, and I applaud your attempts. But you can't, because you still have sequential events anyway; the fact that you call them differently doesn't change much.
Yes, each outcome on the first day can be paired with exactly one on the second.
Exactly. And the Beauty knows it. Case closed.
But without any information passing to the subject between these two days, she cannot do anything with such pairings. To her, each day is its own, completely independent probability experiment.
She knows that they do not happen at random. This is enough to be sure that each day is not a completely independent probability experiment. See the Effects of Amnesia section.
No, it treats the current state of the coins as four mutually exclusive states.
Call them "states" if you want. It doesn't change anything.
How so? If you write down the state on the first day that the researchers look at the coins, you will find that {HH, TH, HT, TT} all occur with frequency 1/4. Same on the second day.
I've specifically explained how. We write down outcomes when the researcher sees the Beauty awake - when they have updated on the fact of Beauty's awakening. The frequency for the three outcomes is 1/3; moreover, they actually go in random order, because the observer witnesses only one random awakening per experiment.
If you write down the frequencies when the subject is awake, you find that {TH, HT, TT} all have frequency 1/3.
Yep, no one is arguing with that. The problem is that the order isn't random, as your model predicts it to be - TH and TT always go in pairs.
Here is what you are arguing: Say you repeat this many times and make two lists, one for each day.
No, I'm not complicating this with two lists for each day. There is only one list, which documents all the awakenings of the subject while she is going through the series of experiments. The theory that treats the two awakenings as "completely independent probability experiments" expects the order of the awakenings to be random, and it's proven wrong because there is an order between awakenings. Easy as that.
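The single-list observation can be simulated. The sketch below assumes my reconstruction of the Two-Coin protocol (two coins are tossed; Beauty is awakened unless the pair shows HH; then the second coin is turned over and the same criterion is applied again), which may differ in detail from the version in the post:

```python
import random

random.seed(0)

awakening_list = []   # one flat list of states across all experiments
per_experiment = []   # states observed within each single experiment

for _ in range(100_000):
    c1 = random.choice("HT")
    c2 = random.choice("HT")
    seen = []
    for day in range(2):
        state = c1 + c2
        if state != "HH":                 # Beauty is awakened
            seen.append(state)
        c2 = "T" if c2 == "H" else "H"    # turn the second coin over
    awakening_list.extend(seen)
    per_experiment.append(seen)

# Each of HT, TH, TT appears with frequency about 1/3 among awakenings...
for s in ("HT", "TH", "TT"):
    print(s, awakening_list.count(s) / len(awakening_list))

# ...but the order is not random: within any one experiment,
# TH is observed if and only if TT is observed.
assert all(("TH" in seen) == ("TT" in seen) for seen in per_experiment)
```

The per-awakening frequencies match the 1/3 figure both sides agree on, while the final assertion exhibits the pairing that the "independent experiments" model fails to predict.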
That is what the amnesia drug accomplishes.
You are mistaken about what the amnesia accomplishes. Once again I send you to reread the Effects of Amnesia section. It's equally applicable to the Two-Coin version of the problem as to the regular one.
And your arguments that this is wrong require associating the attempts, essentially removing the effect of amnesia.
According to the Beauty's knowledge, the attempts are already connected. Only if the amnesia removed the setting of the experiment from her mind, so that she forgot that TT and TH go in pairs, should she reason the way you want her to.
On the other hand, if we truly removed the effect of amnesia altogether, then the Beauty would be 100% confident in Tails when awakened the second time in the same experiment.
So no, I'm talking about the exact knowledge state of the Beauty with the exact level of amnesia that she gets, while you are talking about a more significant alteration of her mind.
signer on Beauty and the Bets
mathematically sound
*ethically
Utility Instability under Thirdism
Works against Thirdism in the Fissure experiment too.
Technicolor Sleeping Beauty
I mean, if you are going to precommit to the right strategy anyway, why do you even need probability theory? The whole question is how do you decide to ignore that P(Head|Blue) = 1/3, when you chose Red and see Blue. And how is it not "a probabilistic model produces incorrect betting odds", when you need to precommit to ignore it?
stephen-bennett on My PhD thesis: Algorithmic Bayesian Epistemology
Congratulations! I wish we could have collaborated while I was in school, but I don't think we were researching at the same time. I haven't read your actual papers, so feel free to answer "you should check out the paper" to my comments.
For chapter 4: From the high level summary here it sounds like you're offloading the task of aggregation to the forecasters themselves. It's odd to me that you're describing this as arbitrage. Also, I have frequently seen the scoring rule be used with some intermediary function to determine monetary rewards. For example, when I worked with IARPA on geopolitical forecasting, our forecasters would get financial rewards depending on what percentile they were in relative to other forecasters. One would imagine that this would eliminate the incentive to report the aggregate as your own answer, but there's a reason we (the researcher/platform/website) aggregate individual forecasts! It's actually just more accurate under typical conditions. In theory an individual forecaster could improve that aggregate by forming their own independent forecast before seeing the work of others, and then aggregating, but in practice the impact of an individual forecast is quite small. I'll have to read about QA pooling, it's surprising to me that you could disincentivize forecasters from reporting the aggregate as their individual forecast.
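The point that aggregation is "just more accurate under typical conditions" is easy to illustrate: with independent, unbiased noise, the average of k forecasts has roughly 1/k of the squared error of any single forecast. A hypothetical sketch (the noise model is mine, not from the thesis):

```python
import random

random.seed(0)

K = 10           # forecasters per question
SIGMA = 0.1      # per-forecaster noise
QUESTIONS = 20_000

indiv_sq_err = 0.0
agg_sq_err = 0.0

for _ in range(QUESTIONS):
    p = random.uniform(0.1, 0.9)   # true probability of the event
    # each forecaster reports the truth plus independent Gaussian noise
    # (noise may push a forecast slightly outside [0, 1]; ignored for simplicity)
    forecasts = [random.gauss(p, SIGMA) for _ in range(K)]
    indiv_sq_err += (forecasts[0] - p) ** 2          # one lone forecaster
    agg_sq_err += (sum(forecasts) / K - p) ** 2      # the simple average

print(indiv_sq_err / QUESTIONS)   # about SIGMA**2
print(agg_sq_err / QUESTIONS)     # about SIGMA**2 / K
```

This variance-reduction effect is exactly why reporting the aggregate as one's individual forecast is tempting under percentile-based rewards, and why disincentivizing it is nontrivial.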
For chapter 7: It seems to me that under sufficiently pessimistic conditions, there would be no good way to aggregate those two forecasts. For example, if Alice and Bob are forecasting "Will AI cause human extinction in the next 100 years?", they both might individually forecast ~0% for different reasons. Alice believes it is impossible for AI to get powerful enough to cause human extinction, but if it were capable of acting it would kill us all. Bob believes any agent smart enough to be that powerful would necessarily be morally upstanding and believes it's extremely likely that it will be built. Any reasonable aggregation strategy will put the aggregate at ~0% because each individual forecast is ~0%, but if they were to communicate with one another they would likely arrive at a much higher number. I suspect that you address this in the assumptions of the model in the actual paper.
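The Alice/Bob failure mode can be made concrete with hypothetical numbers (mine, not the thesis's), decomposing the forecast as P(extinction) = P(powerful AI) × P(extinction | powerful AI):

```python
# Hypothetical numbers for illustration only.
# Alice: powerful AI is near-impossible, but would be lethal if built.
alice_p_powerful, alice_p_lethal = 0.01, 0.99
# Bob: powerful AI is near-certain, but would be benign.
bob_p_powerful, bob_p_lethal = 0.99, 0.01

alice_forecast = alice_p_powerful * alice_p_lethal   # about 0.01
bob_forecast = bob_p_powerful * bob_p_lethal         # about 0.01

# Any average of the two headline numbers stays around 0.01...
naive_aggregate = (alice_forecast + bob_forecast) / 2

# ...but if they shared information and each deferred to the other's
# area of confidence, they might instead conclude something like:
pooled = bob_p_powerful * alice_p_lethal             # about 0.98

print(naive_aggregate, pooled)
```

Any aggregator that only sees the two ~0% headline numbers cannot recover the ~98% answer; the information needed lives in the conditional structure behind the forecasts, not in the forecasts themselves.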
Congrats again, I enjoyed your high level summary and might come back for a more detailed read of your papers.