## Posts

## Comments

**ike** on Poll: Which variables are most strategically relevant? · 2021-01-22T22:37:27.326Z · LW · GW

How important will scaling relatively simple algorithms be, compared to innovation on the algorithms?

**ike** on Why do stocks go up? · 2021-01-18T05:46:07.805Z · LW · GW

Did you see my initial reply at https://www.lesswrong.com/posts/4vcTYhA2X99aGaGHG/why-do-stocks-go-up?commentId=wBEnBKqqB7TRXya8N which was left before you replied to me at all? I thought that added sufficient caveats.

>"While it is expected that stocks will go up, and go up more than bonds, it is yet to be explained why they have gone up so much more than bonds."

Yeah, I'd emphasize *slightly* more in expectation.

**ike** on Why do stocks go up? · 2021-01-18T05:23:30.962Z · LW · GW

The vast majority of the equity premium is unexplained. When people say "just buy stocks and hold for a long period and you'll make 10% a year", they're asserting that the unexplained equity premium will persist, and I have a problem with that assumption.

I tried to clarify this in my first reply. You should interpret it as saying that stocks were massively undervalued and shouldn't have gone up significantly more than bonds. I was trying to explain and didn't want to include too many caveats, instead leaving them for the replies.

It's interesting to note that several other replies gave the simplistic risk response without the caveat that risk can only explain a small minority of the premium.

**ike** on Why do stocks go up? · 2021-01-18T03:30:54.731Z · LW · GW

Start with https://en.wikipedia.org/wiki/Equity_premium_puzzle. There's plenty of academic sources there.

People have grown accustomed to there being an equity premium to the extent that there's a default assumption that it'll just continue forever despite nobody knowing why it existed in the past.

>Isn't there more real wealth today than during the days of the East India Company? If a stock represents a piece of a businesses, and those businesses now have more real wealth today than 300 years ago, why shouldn't stock returns be quite positive?

I simplified a bit above. What's unexplained is the excess return of stocks over risk-free bonds. When there's more real wealth in the future, the risk-free rate is higher. Stock returns should end up slightly above the risk-free rate because they're riskier. The puzzle is that stock returns are way, way higher than the risk-free rate, and this isn't plausibly explained by their riskiness.
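To get a feel for the size of that gap, here's a toy compounding comparison. The 10% and 4% figures are illustrative assumptions of the rough order historically attributed to stocks and risk-free bonds, not data:

```python
# Illustrative only: assumed 10%/yr for stocks vs 4%/yr risk-free, compounded.
years = 100
stocks = 1.10 ** years   # growth factor of $1 in stocks
bonds = 1.04 ** years    # growth factor of $1 at the risk-free rate
print(f"$1 in stocks  -> ${stocks:,.0f}")
print(f"$1 risk-free  -> ${bonds:,.0f}")
```

A few percentage points per year compound into a gap of orders of magnitude over a century, which is why "riskiness" has to do an implausible amount of work to explain it.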

**ike** on Why do stocks go up? · 2021-01-17T22:15:43.555Z · LW · GW

Well there's some probability of it paying out before then.

If the magic value is a martingale, and the payout timing is given by a Poisson process, then the stock price should remain a constant discount off the magic value. You will gain on average by holding the stock until the payout, but won't gain in expectation by buying and selling the stock.
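A minimal discrete sketch of that claim. All numbers here are illustrative assumptions, and a geometric (memoryless) payout time stands in for the Poisson process:

```python
import random

random.seed(0)

DISCOUNT = 0.8     # assumed constant risk discount: price = DISCOUNT * magic value
P_PAYOUT = 0.05    # assumed per-step payout probability (memoryless timing)

def hold_to_payout():
    """Buy at the discounted price, hold until payout, return the profit."""
    value = 100.0
    price_paid = DISCOUNT * value
    while random.random() > P_PAYOUT:
        # martingale step: value rises 20% or falls 20% with equal probability
        value *= random.choice([1.2, 0.8])
    return value - price_paid  # the payout is the magic value itself

def buy_and_sell():
    """Buy at the discounted price, sell one step later at the same discount."""
    value = 100.0
    price_paid = DISCOUNT * value
    value *= random.choice([1.2, 0.8])
    return DISCOUNT * value - price_paid

n = 200_000
hold = sum(hold_to_payout() for _ in range(n)) / n
trade = sum(buy_and_sell() for _ in range(n)) / n
print(f"average profit holding to payout: {hold:+.2f}")   # positive, roughly (1-DISCOUNT)*100
print(f"average profit from a round trip: {trade:+.2f}")  # roughly zero
```

Because the price is a constant multiple of a martingale, it is itself a martingale, so round trips are zero expectation; only holding through the payout collects the discount.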

**ike** on Why do stocks go up? · 2021-01-17T22:00:57.347Z · LW · GW

>It seems obvious to me I shouldn't expect this company's price to go up faster than the risk free rate, yet the volatility argument seems to apply to it.

You should, because the company's current value will be lower than $10 million due to the risk. Your total return over time will be positive, while the return for a similar company that never varies will be 0 (or the interest rate if nonzero).

**ike** on Why do stocks go up? · 2021-01-17T21:53:57.940Z · LW · GW

The classic answer is risk. Stocks are riskier than bonds, so they should be underpriced relative to bonds (and therefore have higher returns).

But we *know* how risky stocks have been, historically. We can calculate how much higher a return that level of risk should lead to, under plausible risk tolerances. The equity premium puzzle is that the observed returns on stocks are significantly higher than this.

Read through the Wikipedia page on the equity premium puzzle. It's good.

**ike** on Why do stocks go up? · 2021-01-17T21:50:43.991Z · LW · GW

The equity premium puzzle is still unsolved. The answer to your question is that nobody knows the answer. Stocks *shouldn't* have gone up historically; none of our current theories are capable of explaining why stocks did go up. Equivalently, stocks were massively underpriced over the last century or so, and nobody knows why.

If you don't know why something was mispriced in the past, you should be very careful about asserting that it will or won't continue to be mispriced in the future.

**ike** on ike's Shortform · 2020-12-31T17:35:39.751Z · LW · GW

The other day a piece fell off of one side of my glasses (the part that touches the nose).

The glasses stay on, but I've noticed a weird feeling of imbalance at times. I could be imagining it; as far as I can tell, I'm able to function normally. But I was thinking that the obvious analogy is to cinematography: directors consciously adjust camera angles and framing in order to induce certain emotions or reactions to a scene. It's plausible that even a very slight asymmetry in your vision can affect you.

If this is true, might there be other low hanging fruit for adjusting your perception to increase focus?

**ike** on Cultural accumulation · 2020-12-06T13:48:07.352Z · LW · GW

If you had our entire society, you'd have enough people who know what they're trying to do that they should be able to figure out how to get there from 1200. It might take several decades to set up the mining, factories, etc., and it might take several decades or more to get the politics to a place where you'd be able to try.

**ike** on The Hard Problem of Magic · 2020-12-05T01:34:53.413Z · LW · GW

I thought it was extremely clear that magic is meant to mean consciousness, from the title alone as well as the examples, and that the post is criticizing / satirizing those that make the corresponding arguments about consciousness/qualia.

**ike** on Links for Nov 2020 · 2020-12-01T05:14:45.214Z · LW · GW

My favorite link this time around was the baseball antitrust one, although the qntm series has been really good.

**ike** on The Exploitability/Explainability Frontier · 2020-11-27T15:25:15.886Z · LW · GW

Assume you're at the frontier of being able to do research in that area and have similar abilities to others in that reference class. The total amount of effort most of those people will put in is the same, but it will be split across these two factors differently. The system being unexploitable corresponds to the sum here being constant.

There can be examples where both sides are difficult; those lie outside the frontier.

Re politics, there are some issues that are difficult, some issues that are value judgments, and some that are fairly simple in the sense that spending a week seriously researching is enough to be pretty confident of the direction policy should be moved in.

**ike** on The Exploitability/Explainability Frontier · 2020-11-27T14:48:45.608Z · LW · GW

My point is that it's rare and therefore difficult to discover.

The kinds that are less rare are easier to discover but harder to convince others of, or at least harder to convince people that they matter.

I was drawing off this example, by the way: https://econjwatch.org/articles/recalculating-gravity-a-correction-of-bergstrands-1985-frictionless-case

A 35-year-old model had a simple typo in it that got repeated in papers that built on it. It's very easy to convince people that this is the case, but very difficult to discover such errors - most papers don't have those errors, so you need to replicate a lot of correct papers to find the one that's wrong.

If it's difficult to show that the typo actually matters, that's part of the difficulty of discovering it. My point is you should expect the sum of the difficulty in explaining and the difficulty in discovery to be roughly constant.

**ike** on Why is there a "clogged drainpipe" effect in idea generation? · 2020-11-20T21:14:28.218Z · LW · GW

Your mind tracks the idea so as not to forget it. This reduces the effective working memory space, which makes it harder to think.

**ike** on The Presumptuous Philosopher, self-locating information, and Solomonoff induction · 2020-11-17T02:34:00.273Z · LW · GW

I've written a post that argues that Solomonoff Induction actually is a thirder, not a halfer, and sketches an explanation.

https://www.lesswrong.com/posts/Jqwb7vEqEFyC6sLLG/solomonoff-induction-and-sleeping-beauty

**ike** on Down with Solomonoff Induction, up with the Presumptuous Philosopher · 2020-11-17T02:33:38.561Z · LW · GW

I've written a post that argues that Solomonoff Induction actually is a thirder, not a halfer, and sketches an explanation.

https://www.lesswrong.com/posts/Jqwb7vEqEFyC6sLLG/solomonoff-induction-and-sleeping-beauty

**ike** on Reading/listening list for the US failing or other significant shifts? · 2020-11-13T19:59:38.570Z · LW · GW

https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1467-9477.2011.00265.x

I recommend this paper, which suggests that wealthy democracies never fall.

**ike** on Sleeping Beauty, Cloning Beauty, and Frequentist Arguments In Anthropic Paradoxes · 2020-11-13T14:37:58.340Z · LW · GW

I've been trying to understand, but your model appears underspecified and I haven't been able to get clarification. I'll try again.

>treat perspectives as fundamental axioms

Have you laid out the axioms anywhere? None of the posts I've seen go into enough detail for me to be able to independently apply your model.

>like saying I assumed Beauty knows she’s not the clone while I clearly stated the opposite

This is not clear at all. In this comment you wrote

>the first-person perspective is primitively given simply means you instinctively know which person you are, because you are experiencing everything from its viewpoint.

In the earlier comment:

>from the first-person perspective it is primevally clear the other copy is not me.

I don't know how these should be interpreted other than implying that you know you're not a clone (if you're not). If there's another interpretation, please clarify. It also seems obviously false, because "I don't know which person I am among several subjectively indistinguishable persons" is basically tautological.

>If MWI does not require perspective-independent reality. Then what is the universal wave function describing?

It's a model that's useful for prediction. As I said in that post, this is my formulation of MWI; I prefer formulations that don't postulate reality, because I find the concept incoherent.

>But when I followed-up your statement that some CI can be considered a special version of MWI and explained why I think that is not possible, I get no feedback from you...

That was a separate thread, where I was responding to someone who apparently had a broader conception of CI. They never explained what assumptions go into that version, I was merely responding to their point that CI doesn't say much. If you disagree with their conception of CI then my comment doesn't apply.

>Your position that SIA is the “natural choice” and paradox free is a very strong claim.

It seems natural to me, and none of the paradoxes I've seen are convincing.

>what is the framework

Start with a standard universal prior, plus the assumption that if an entity "exists" in both worlds A and B and world A "exists" with probability P(A) and P(B) for world B, then the relative probability of me "being" that entity inside world A, compared to world B, is P(A)/P(B). I can then condition on all facts I know about me, which collapses this to only entities that I "can" be given this knowledge.

Per my metaphysics, the words in quotes are not ontological claims but just a description of how the universal prior works - in the end, it spits out probabilities and that's what gets used.
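A toy version of the weighting described above. The world names, priors, and observers here are hypothetical, purely for illustration:

```python
# Hypothetical toy setup: each world has a prior probability from the universal
# prior and a list of observers whose observations match mine so far.
worlds = {
    "A": (0.5, ["anna"]),
    "B": (0.5, ["bob1", "bob2"]),  # world B contains two matching observers
}

# Per the assumption above, the relative probability of "being" an entity in
# world W is proportional to P(W); conditioning on my observations keeps only
# the matching (world, observer) pairs.
weights = {(w, o): p for w, (p, observers) in worlds.items() for o in observers}
total = sum(weights.values())
posterior = {pair: wt / total for pair, wt in weights.items()}

p_world_b = sum(p for (w, _), p in posterior.items() if w == "B")
print(f"P(I am in world B) = {p_world_b:.3f}")  # 2/3: B has two matching observers
```

Note how this recovers the familiar "thirder"-style answer when one of two equally likely worlds contains twice as many matching observers.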

>If you don’t know what my theory would predict, then give me some scenarios or thought experiments and make me answer them.

I would like to understand in what scenarios your theory refuses to assign probabilities. My framework will assign a probability to any observation, but you've acknowledged that there are some questions your theory will refuse to answer, even though there's a simple observation that can be done to answer the question. This is highly counter-intuitive to me.

**ike** on Sleeping Beauty, Cloning Beauty, and Frequentist Arguments In Anthropic Paradoxes · 2020-11-12T21:29:46.472Z · LW · GW

I'm trying to understand your critiques, but I haven't seen any that present an issue for my model of SIA, MWI, or anything else. Either you're critiquing something other than what I mean by SIA etc., or you're explaining them badly, or I'm not understanding the critiques correctly. I don't think it should take ten posts to explain your issues with them, but even so I've read through your posts and couldn't figure it out.

It might help if you explained what you take SIA and MWI to mean. When you gave a list of assumptions you believed to be entailed by MWI, I said I didn't agree with that. Something similar may be going on with SIA. A fully worked out example showing what SIA and what your proposed alternative say for various scenarios would also help. What statements does PBR say are meaningful? When is a probability meaningful?

**ike** on Sleeping Beauty, Cloning Beauty, and Frequentist Arguments In Anthropic Paradoxes · 2020-11-12T21:24:45.132Z · LW · GW

From my point of view, you keep making new posts building on your theory/critique of standard anthropic thinking without really responding to the issues. I've tried to get clarifications and failed.

>In the above post, I explained the problem of SSA and SIA: they assume a specific imaginary selection process, and then base their answers on that, whereas the first-person perspective is primitively given.

I have no idea what this means.

Re paradoxes, you appear to not understand how SIA would apply to those cases using the framework I laid out. I asked you why those paradoxes apply and you didn't answer. If there are particular SIA advocates that believe the paradoxes apply, you haven't pointed at any of them.

>In another post, I argued that the MWI requires the basic assumption of a perspective-independent objective reality. Your entire response is “I deny that MWI requires that. In fact, all three of your postulates are incoherent, and I believe in a form of MWI that doesn't require any of them.” No explanations.

You gave no explanation for why MWI would imply those statements, why am I expected to spend more time proving a negative than you spent arguing for the positive? You asserted MWI implies those postulates, I asserted otherwise. I've written two posts here arguing for a form of verificationism in which those postulates end up incoherent.

Instead of adding more and more posts to your theory, I think you should zero in on one or two points of disagreement and defend them. Your scenarios and your perspective-based theory are poorly defined, and I can't tell what the theory says in any given case.

**ike** on Sleeping Beauty, Cloning Beauty, and Frequentist Arguments In Anthropic Paradoxes · 2020-11-11T19:30:21.270Z · LW · GW

>I am arguing they are both wrong.

You keep saying that you're arguing, but as far as I can tell you just say that everyone's wrong and don't really argue for it. I've pointed out issues with all of your posts and you haven't been responding substantively.

Here, you're assuming that Beauty knows she's not the clone. In that scenario, even thirders would agree the probability of heads is 1/2. This assumption is core to your claims - if not, we don't get "there is no principle of indifference among copies", among other statements above.

**ike** on Ongoing free money at PredictIt · 2020-11-11T05:41:43.371Z · LW · GW

The Hunter Biden federal charges market is also bonkers. If there was enough evidence to charge, they'd have done so before the election.

**ike** on Sleeping Beauty, Cloning Beauty, and Frequentist Arguments In Anthropic Paradoxes · 2020-11-10T18:39:43.111Z · LW · GW

Sorry, yes, I missed that you clone on tails and not heads.

Twice as many copies for tails means that in the long run any given copy is likely to have experienced close to two-thirds tails. Where are you getting the opposite result?
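A quick tally makes the two-thirds figure concrete. This toy setup assumes one copy's experience per heads flip and two copies' experiences per tails flip, matching the clone-on-tails scenario:

```python
import random

random.seed(1)

# Each repetition: heads -> one copy experiences Heads;
# tails -> two copies each experience Tails (clone on tails).
heads_experiences = 0
tails_experiences = 0
for _ in range(100_000):
    if random.random() < 0.5:
        heads_experiences += 1
    else:
        tails_experiences += 2

frac_tails = tails_experiences / (heads_experiences + tails_experiences)
print(f"fraction of copy-experiences that are Tails: {frac_tails:.3f}")
```

The fraction converges to about 2/3, since tails repetitions contribute twice as many copy-experiences as heads repetitions.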

**ike** on Tweet markets for impersonal truth tracking? · 2020-11-10T18:37:37.124Z · LW · GW

Depends on the topic, but look at e.g. Figure 2 on page 6. 81% say never allow election related misinformation, 85% say never allow health misinformation.

**ike** on Tweet markets for impersonal truth tracking? · 2020-11-10T13:01:44.836Z · LW · GW

Something like 80% of Americans think social media is doing just right or should be doing more to address misinformation (exact percentage depends on the category of information.)

If Twitter stopped, they'd lose some market share to competitors that do more fact checking. There are already popular social media networks that don't do that much checking, like Reddit. Twitter itself does fact checking on fewer topics than Facebook. People can choose what level they're comfortable with, if that's important to them.

**ike** on Sleeping Beauty, Cloning Beauty, and Frequentist Arguments In Anthropic Paradoxes · 2020-11-10T03:02:49.675Z · LW · GW

>Through these repetitions I would experience Heads and Tails with roughly equal numbers.

This seems wrong. Over a large number of repetitions, most Beautys end up experiencing twice as many heads as tails.

**ike** on Why I Prefer the Copenhagen Interpretation(s) · 2020-11-02T22:32:11.494Z · LW · GW

I deny that MWI requires that. In fact, all three of your postulates are incoherent, and I believe in a form of MWI that doesn't require any of them.

**ike** on Why I Prefer the Copenhagen Interpretation(s) · 2020-11-02T18:26:16.739Z · LW · GW

Do you have any sources explaining the assumptions that go into this minimal version?

**ike** on Why I Prefer the Copenhagen Interpretation(s) · 2020-11-02T17:52:39.903Z · LW · GW

Can you explain how you end up with fewer assumptions?

Under your terminology, I believe I can formulate MWI epistemically with the same number of assumptions, and if formalized I think MWI comes out slightly simpler.

**ike** on Why I Prefer the Copenhagen Interpretation(s) · 2020-11-01T23:55:35.995Z · LW · GW

It sounds like your version of CI is broad enough that MWI is a special case of it, or vice versa, or something.

**ike** on Why I Prefer the Copenhagen Interpretation(s) · 2020-11-01T20:04:26.584Z · LW · GW

Can you be explicit about the set of assumptions you're using for MWI and Copenhagen? I can't make heads or tails of your comments here. Are you arguing that Copenhagen requires fewer assumptions?

**ike** on Why I Prefer the Copenhagen Interpretation(s) · 2020-11-01T17:35:58.919Z · LW · GW

So not at all different than MWI, using the exact same assumptions.

**ike** on Why I Prefer the Copenhagen Interpretation(s) · 2020-11-01T14:22:31.706Z · LW · GW

If collapse is not objective then in what sense is that theory different than MWI?

**ike** on Why I Prefer the Copenhagen Interpretation(s) · 2020-10-31T22:41:55.117Z · LW · GW

Copenhagen requires a collapse; you can't get there with fewer assumptions than MWI. At best you have agnosticism between MWI and alternate interpretations.

**ike** on PredictIt: Presidential Market is Increasingly Wrong · 2020-10-19T22:49:06.111Z · LW · GW

If the odds according to you drop, you can also just sell your position.

**ike** on This Territory Does Not Exist · 2020-10-17T11:49:38.573Z · LW · GW

The how is Solomonoff induction, the why is because it's historically been useful for prediction.

I don't believe programs used in Solomonoff are "models of an external world", they're just models.

Re simplicity, you're conflating a mathematical treatment of simplicity that justifies the prior and where ontological claims aren't simple, with a folk understanding of simplicity in which they are. Or at least you're promoting the folk understanding over the mathematical understanding.

If you understand how Solomonoff works, are you willing to defend the folk understanding over that?

**ike** on This Territory Does Not Exist · 2020-10-17T03:25:44.517Z · LW · GW

>In Many Worlds, there is no such thing as “what I will experience”, there are just future people descended from me who experience different things.

Anticipated experience is just my estimate for the percentage of future-mes with said experience. Whether any of those future-mes "actually exist" is meaningless, though, it's all just models.

>It’s to come up with the best model of reality that includes the experiences I’m having right now.

Why? You'll end up with many models which fit the data, some of which are simpler, but why is any one of those the "best"?

>Experiences are physical processes in the world.

Disagree that this statement is cognitively meaningful.

**ike** on Anthropic Reasoning and Perspective-Based Arguments · 2020-10-12T03:01:26.223Z · LW · GW

>I just quoted the paper. It stated that N is the expected number of civilizations in the Milky Way. If that is the case, we have to account for the fact that at least one civilization exists. Which wasn't done by the authors. Otherwise N is just the expected number of civilizations in the Milky Way under the assumption we didn't knew that we existed.

The update we need to do is not equivalent to assuming N is at least one, because as I said, N being less than one is consistent with our experiences.

>"before you learn any experience"? I.e. before you know you exist? Before you exist? Before the "my" refers to anything?

Yes, it gets awkward if you try to interpret the prior literally. Don't do that, just apply the updating rules.

>There are infinitely many possible priors. One would need a justification that the SIA prior is more rational than the alternatives.

SIA as a prior just says it's equally likely for you to be one of two observers that are themselves equally likely to exist. Any alternative will necessarily say that in at least one such case, you're more likely to be one observer than the other, which violates the indifference principle.

>You might be certain that 100 observers exist in the universe. You are not sure who might be you, but one of the observers you regard as twice as likely to be you as each of the other ones, so you weigh it twice as strong.

>But you may also be uncertain of how many observers exist. Say you are equally uncertain about the existence of each of 99 and twice as certain about the existence of a hundredth one. Then you weigh it twice as strong.

I'm not sure where my formulation is supposed to diverge here.

>"Infinity" then just means that for any real number there is another real number which is larger (or smaller).

Well, this is possible without even letting the reals be unbounded. For any real number under 2, there's another real number under 2 that's greater than it.

>We can perfectly well (and do all the time) make probabilistic statements about the present or the past.

And those statements are meaningless except insofar as they imply predictions about the future.

>Where is the supposed "incoherence" here?

The statement lacks informational content.

>It is verified by just a single non-mental object.

I don't know what this is supposed to mean. What experience does the statement imply?

>Low generalization error seems to be for many theories what truth is for ordinary statements.

Sure, I have no problem with calling your theory true once it's shown strong predictive ability. But don't confuse that with there being some territory out there that the theory somehow corresponds to.

>objective a priori probability distribution over hypotheses (i.e. all possible statements) based on information content

Yes, this is SIA + Solomonoff universal prior, as far as I'm concerned. And this prior doesn't require calling any of the hypotheses "true", the prior is only used for prediction. Solomonoff aggregates a large number of hypotheses, none of which are "true".

>Some barometer reading predicts a storm, but it doesn't explain it.

The reading isn't a model. You can turn it into a model, and then it would indeed explain the storm, while air pressure would explain it better, by virtue of explaining other things as well and being part of a larger model that explains many things simply (such as how barometers are constructed.)

>prediction is symmetric:

A model isn't an experience, and can't get conditioned on. There is no symmetry between models and experiences in my ontology.

The experience of rain doesn't explain the experience of the wet street - rather, a model of rain explains / predicts both experiences.

**ike** on Anthropic Reasoning and Perspective-Based Arguments · 2020-10-11T17:29:10.712Z · LW · GW

>If they interpret N in this way, then N is at least 1.

No, N is a prior. You can't draw conclusions about what a prior is like that. N could be tiny and there could be a bunch of civilizations anyway, that's just unlikely.

>It just occurred to me that you still need some prior probability for your sentence which is smaller than 1.

Sure, prior in the sense of an estimate before you learn any of your experiences. Which clearly you're not actually computing prior to having those experiences, but we're talking in theory.

>My personal goal would be to make SIA (or a similar principle) nothing more than a corollary of Bayesian updating, possibly together with a general theory of indexical beliefs.

SIA is just a prior over what observer one expects to end up with.

>Maybe it is not just the probability that the hypothetical observer had the same observations, it's the probability that the hypothetical observer exists and had the same observations. Not just what observations observers made is often a guess but also how many of them exist.

I'm not sure what distinction you're drawing here. Can you give a toy problem where your description differs from mine?

>So I think "has the same state of mind" is better to not exclude those freak observers to begin with, because we might be such a freak observer.

My usual definition is "subjectively indistinguishable from me", you can substitute that above.

>The sphere is of finite size and we take the probability of a cow being one-headed as the limit of the ratio as the size of the sphere goes towards infinity.

This is basically just downweighting things infinitely far away infinitely low. It's accepting unboundedness but not infinity. Unboundedness has its own problems, but it's more plausible than infinity.

>But we would need an epistemic reason, in contrast to an instrumental reason, to a priori exclude a possibility by assigning it probability 0.

I'm not assigning it probability 0 so much as I'm denying that it's meaningful. It doesn't satisfy my criterion for meaning.

>You seemed to specifically object to universes with finite information content on grounds that they are just (presumably periodic) "loops".

That's one objection among several, but the periodicity isn't the real issue - even without that it still must repeat at some point, even if not regularly. All you really have is an irrational set of ratios between various "states of the world", calling that "infinity" seems like a stretch.

>those hypotheses are more likely to be true

What do you mean by true here?

>Because lower information content means higher a priori probability.

Probability is just a means to predict the future. Probabilities attached to statements that aren't predictive in nature are incoherent.

>If you entertained the hypothesis that solipsism is true, this would not compress your evidence at all, which means the information content of that hypothesis would be very high, which means it is very improbable.

The same thing is true of the "hypothesis" that solipsism is false. It has no information content. It's not even meaningful to say that there's a probability that it's true or false. Neither is a valid hypothesis.

>If no external things exist, then all "y because x" statements would be false.

The problem with this line of reasoning is that we commonly use models we *know* are false to "explain" the world. "All models are wrong, some models are useful".

Also re causality, Hume already pointed out we can't know any causality claims.

Also, it's unclear how an incoherent hypothesis can serve to "explain" anything.

I think explanations are just fine without assuming a particular metaphysics. When we say "E because H", we just mean that our model H predicts E, which is a reason to apply H to other predictions in the future. We don't need to assert any metaphysical statements to do that.

**ike** on Anthropic Reasoning and Perspective-Based Arguments · 2020-10-11T02:35:25.200Z · LW · GW

>I don't even know how to interpret such fraction intervals, given that we can't have a non-integer number of civilizations per galaxy.

N is the average number of civilizations per galaxy.

>But the probability of us being alone in the galaxy, i.e. that no other civilizations besides us exist in the galaxy, is rather the probability that at most one civilization exists in the galaxy, given that at least one civilization (us) exists in the galaxy. To calculate this would amount to apply SIA.

I was going to agree with this, but I realize I need to retract my earlier agreement with this statement to account for the difference between galaxies and the observable universe. We don't, in fact, have evidence for the "fact that N is at least 1." We have evidence that the number of civilizations in the universe is at least one. But this is likely to be true even if the probability of a civilization arising on any given galaxy is very low.

I think I agree with you that SIA means higher values of N are more likely a priori. But I'm not sure this leads to the overwhelming evidence of a future filter that you need, or much evidence for a future filter at all.

I'll also note that some of the parameters are already adjusted for such effects:

>As noted by Carter and McCrea [10] the evidential power of the early emergence of life on Earth is weakened by observer selection effects, allowing for deep uncertainty about what the natural timescale of life formation is

You've succeeded in confusing me, though, so I'll have to revisit this question at a later point.

>But what is it from a gods-eye perspective?

It doesn't seem meaningful to ask this.

>It seems you are rather agnostic about who are you among the group of observers that, from your limited knowledge, might have had the same set of observations as you.

If some observer only has some probability of having had the same set of observations, then they get a corresponding weight in the distribution.

>As long as an infinite universe with infinite observers has a prior probability larger than 0 being in such an universe is infinitely more likely than being in a universe with finitely many observers.

This breaks all Bayesian updates as probabilities become impossible to calculate. Which is a great reason to exclude infinite universes a priori.

>You might then have exact doppelgängers but you are not them.

I don't see any meaningful sense in which this is true.

>Such a infinite universe with pseudo randomness might be nearly indistinguishable from one with infinite information content.

I don't know how this is relevant.

>It depends on what you mean with "verify".

I wrote two posts on this: https://www.lesswrong.com/posts/PSichw8wqmbood6fj/this-territory-does-not-exist and https://www.lesswrong.com/posts/zm3Wgqfyf6E4tTkcG/the-short-case-for-verificationism. I don't think ontological claims are meaningful except insofar as they mean a set of predictions, and infinite ontological claims are meaningless under this framework.

**ike** on Anthropic Reasoning and Perspective-Based Arguments · 2020-10-09T00:05:42.504Z · LW · GW

>It seems that SIA says that the parameters of the drake equation should be expected to be optimized for observers-which-could-be-us to appear, but exactly this consideration was not factored into the calculations of Sandberg, Drexler, and Ord. Which would mean their estimations for the expected number of civilizations per galaxy are way too low.

I don't think this is correct. Look at page 6 of https://arxiv.org/pdf/1806.02404.pdf

SIA is a reason to expect very low values of N to be unlikely, since we would be unlikely to exist if N were that low. But the lowest values of N aren't that likely - the probability of N<1 is around 33%, while the probability of N<10^-5 is around 15%. It seems there's at least a 10% chance that N is fairly close to 1, such that we wouldn't expect much of a filter. This should carry through to our posterior such that there's a 10% chance that there's no future filter.

**ike**on Anthropic Reasoning and Perspective-Based Arguments · 2020-10-08T23:23:03.754Z · LW · GW

You are talking about the calculations by Sandberg, Drexler, and Ord, right?

Yes. Will read that post and get back to you.

reference class whose effect then "cancels out" while you are, as you pointed out, trying to avoid reference classes to begin with.

I don't know that this is a meaningful distinction, as both produce the same probabilities. All we need is a reference class large enough to contain anything that I might be / don't currently know that I am not.

Perhaps you should try to give your version a precise definition.

SIA is a prior over observers, once you have a prior over universes. It says that for any two observers that are equally likely to exist, you are equally likely to "be" either one (and corresponding weighting for observers not equally likely to exist). We take this prior and condition on our observations to get posterior probabilities for being in any particular universe as any particular observer.

I'm not conditioning on "ike exists", and I'm not conditioning on "I exist". I'm conditioning on "My observations so far are ike-ish" or something like that. This rules out existing as anyone other than me, but leaves me agnostic as to who "I" am among the group of observers that also have had the same set of observations. And the SIA prior means that I'm equally likely to be any member of that set, if those members had an equal chance of existing.
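
As a toy illustration of that update (the numbers here are hypothetical, not from the discussion): suppose two universes are equally likely a priori, universe A contains one observer whose observations so far are ike-ish, and universe B contains three such observers. The SIA prior over observers, conditioned on "my observations are ike-ish", then works out as:

```python
from fractions import Fraction

# Two equally likely universes (hypothetical toy numbers).
# Universe A has 1 observer matching my observations; universe B has 3.
prior = {"A": Fraction(1, 2), "B": Fraction(1, 2)}
matching = {"A": 1, "B": 3}

# SIA prior over observers: weight each matching observer by the
# probability that it exists, then condition on "my observations are ike-ish".
weights = {u: prior[u] * matching[u] for u in prior}
total = sum(weights.values())
posterior = {u: w / total for u, w in weights.items()}

print(posterior)  # {'A': Fraction(1, 4), 'B': Fraction(3, 4)}
```

Within universe B, the 3/4 is then split equally among the three matching observers, since they were equally likely to exist.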

Why and how would we plausibly reject this widely held possibility?

If it's incoherent, it doesn't matter how many people believe it.

On the contrary, it seems that SIA presumptuously requires us to categorically reject the sphere and the torus possibilities on purely a priori grounds, because they imply a universe finite in size and thus with far too few observers.

You're smuggling in a particular measure over universes here. You absolutely need to do the math along with priors and justification for said priors, you can't just assert things like this.

An infinite universe may arise from quite simple laws and initial conditions, which would make its information content low, and its probability relatively high.

It's not clear to me this counts as an infinite universe. It should repeat after a finite amount of time or space or both, which makes it equivalent to a finite universe being run on a loop, which doesn't seem to count as infinite. That's assuming all of this talk is coherent, which it might not be - our bandwidth is finite and we could never verify an infinite statement.

Well, SIA seems to predict that we will encounter future evidence which would imply a finite size of the universe with probability 0.

You need to specify the measure, as above. I disagree that this is an implication of SIA.

**ike**on Babble challenge: 50 ways to escape a locked room · 2020-10-08T23:03:49.982Z · LW · GW

- Call the police
- Call a locksmith
- Wait ten years, someone will find you by then
- Log onto the router that’s serving the wifi network and hack the door to open
- Lots of energy - punch through the walls
- Dig a tunnel
- This sounds like an escape room puzzle, just google for answers to the riddle to get the door to open
- Rip apart your clothes to make a rope to climb over the walls
- Ok, that 10 year thing is just totally implausible. You’re probably in a simulation being tortured. Gotta crash the simulation - use your phone to divide by 0
- Do that thing from Rick and Morty where you go naked to embarrass the simulators and then somehow escape
- Doesn’t say where this room is - I’m assuming it’s in an urban area, just yell and someone will hear you and rescue you
- Wait, why am I trying to escape? My life is perfect, free wifi and unlimited energy. The true escape was the friends we made along the way
- Post on twitter “if I was stuck in a locked room and couldn’t tell anyone, what message should I post to clue my followers in?” someone will figure it out and rescue you
- pre-commit to spending the next ten years inventing a time machine and going back and rescuing yourself in 5 minutes once invented. Just gotta pre-commit hard enough
- Acausally trade with aliens to get them to simulate versions of you escaping
- “I define the space inside these walls as the outside of the room”
- Technically because everything is mostly empty space, there is no objective sense in which you’re “inside” the room anyways
- Sounds like your phone is a perpetual motion machine. Harvest the energy from it to blast through the door
- You’re in a dream. Meditate and imagine the room disappearing, it will
- Look for other doors and try opening them
- Google “how to make a bomb”, the FBI will break down the door very quickly
- Pre-commit to kill yourself if you haven’t been rescued within the next hour. Quantum immortality will rescue you
- If you’re being simulated, you’re probably locked up because you’re interesting in some way. Just go to sleep and be completely boring until they let you out
- Spit on the door hinges until they become loose
- Make a post on LW asking people to list 50 ways to escape the room, then try all of them till something works
- I’m in no hurry. Download and read all the books I’ve always wanted to read, by the end of which I’ll probably have some ideas how to escape
- Play loud obnoxious music until they let you out
- Use your unlimited energy on your phone to mine bitcoin, then use it to pay for someone to rescue you
- Call random numbers until one of them is helpful
- Kick the door until it caves in
- The room is made out of cheese. Bite your way out.
- Jump over the walls
- Did you try knocking on the door till someone answers?
- Use your unlimited energy to make endurance videos, become a youtube star, then ask your fanbase to rescue you
- Pinch yourself to wake up
- You may not need food or water, but without calories you’ll keep losing weight until you’re skinny enough to fit through the cracks in the door
- Take apart your phone to get wires to pick the lock
- Take apart your phone to build a bomb to blow open the door
- Play a dogwhistle sound on your phone at top volume, the dogs will rescue you
- Bargain with the kidnappers, you obviously have something they want
- Lay down and play dead until someone checks on you then jump them and escape
- 42
- Start a youtube livestream titled “I have information that will lead to the arrest of Hillary Clinton”
- Sign up on upwork, do some paid work, and don’t pay taxes - IRS will come get you
- Call in a bomb threat for the building you’re in, causing it to get evacuated
- Rub your clothes against the wall to start a fire, causing the building to collapse and allowing you to escape
- Building is close to collapse already, push against the walls until something gives
- Use your phone to break the window and climb through
- Door is locked with combination lock - plenty of time, try all combos
- The room is on a boat. Dig a small hole in the floor to flood the boat, then as you have the only phone on the boat they’ll be forced to open the door to call for help

Took me just about 50 minutes. Most are silly but I wanted to finish on time, and it was fun.

**ike**on Anthropic Reasoning and Perspective-Based Arguments · 2020-10-08T02:16:36.713Z · LW · GW

The standard argument for the great filter depends on a number of assumptions, and as I said, my current understanding is this standard argument doesn't work numerically once you set up ranges for all the variables.

The point of the SIA Doomsday argument is precisely that the filter, assuming it exists, is much, much more likely to be found in the future than in the past.

Yes, this is true in my model - conditioning on a filter in the first case yields 100 future filters vs 1 past filter, and in the second case yields 900 future filters vs 9 past filters. There's a difference between a prior before you know if humans exist and a posterior conditioning on humans existing.

Indeed, SIA's preference for more observers who could be us seems to be unbounded, to the point that it makes it certain that there are infinitely many observers in the universe.

This depends on your measure over the set of possible worlds, but one can plausibly reject infinities in any possible world or reject the coherency of such. As I've written elsewhere, I'm a verificationist and don't think statements about what *is* per se are verifiable or meaningful - my anthropic statements are meaningful insofar as they predict future experiences with various probabilities.

**ike**on Inaccessible finely tuned RNG in humans? · 2020-10-07T18:26:15.170Z · LW · GW

It seems fairly easy to get a sequence of random-seeming numbers by, e.g., applying some transform to the first ten digits of pi (if you remember them), your birth date, other important dates, etc. As long as you come up with a procedure to convert to the scale you need, it shouldn't be predictable for low sample sizes.

300 random numbers is pushing it, though. For that you'd need a hash function, which most people can't calculate mentally without specific training.
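
A minimal sketch of the kind of procedure described above. The particular transform and the date constant are illustrative assumptions, not a recommendation - the point is just that memorized digits plus a personal constant and simple arithmetic yield numbers that look random at small sample sizes:

```python
# Derive a few "random-seeming" numbers in a target range from memorized
# digits (first ten digits of pi) mixed with a personal constant (a date).
PI_DIGITS = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]  # first ten digits of pi
DATE = 19900715  # an important date, used as a mixing constant (illustrative)

def mental_rng(n, lo, hi):
    """Return n numbers in [lo, hi] by mixing pi digits with the date."""
    span = hi - lo + 1
    out = []
    for i, d in enumerate(PI_DIGITS[:n]):
        # A transform simple enough to do in one's head.
        out.append(lo + (d * 7 + DATE % (i + 2)) % span)
    return out

print(mental_rng(5, 1, 10))
```

The output is deterministic given the memorized inputs, which is exactly the point: no external randomness source is needed, yet an observer with a small sample can't easily predict the next value.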

**ike**on Anthropic Reasoning and Perspective-Based Arguments · 2020-10-07T00:46:53.354Z · LW · GW

If we don't have overwhelming reason to think that the filter is in the past, or to think that there is no filter at all, SIA suggests that the filter is very, very likely in the future.

My current understanding is that the parameters don't imply the need for a future great filter to explain the Fermi paradox.

I don't think you need overwhelming evidence for this. SIA is only overwhelming evidence for a future filter if you already have overwhelming evidence that a filter exists beyond what we know of, which we don't.

Toy model: if a priori a filter exists 10% of the time, and this filter would either prevent 99% of civilizations from evolving into humans or prevent 99% of human-level civilizations from becoming space civilizations (50% chance of each placement), then there are 1800 worlds with no filter for every 200 worlds with a filter; 101 of those 200 filter worlds contain humans, and only 2 of those 101 become space civilizations. So our probability of getting to space is 1802/1901.

If the probability a filter exists is 90%, then there's 200 worlds with no filter for every 1800 filter worlds. Out of the filter worlds, 909 contain humans. Out of those, 18 go to space. So the probability of us going to space is 218/1109.
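
The arithmetic in both cases can be checked directly, counting worlds per 2000 as in the text:

```python
from fractions import Fraction

def space_odds(p_filter, worlds=2000):
    """Count worlds per `worlds` total, as in the toy model above."""
    f = int(worlds * p_filter)     # worlds with a filter
    nf = worlds - f                # worlds with no filter
    early, late = f // 2, f // 2   # filter placed before vs. after humans
    # A filter stops 99% of civilizations at its stage.
    early_h = early // 100         # early-filter worlds that still get humans
    late_h = late                  # late-filter worlds all contain humans
    human_worlds = nf + early_h + late_h
    to_space = nf + early_h + late_h // 100
    return Fraction(to_space, human_worlds)

print(space_odds(0.1))  # 1802/1901
print(space_odds(0.9))  # 218/1109
```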

You really do need overwhelming evidence that a filter exists / the filter is very strong before it creates overwhelming odds of a future filter.

**ike**on Not Even Evidence · 2020-10-06T22:30:49.784Z · LW · GW

>If what you mean by SIA is something more along the lines of "Constantly update on all computable hypotheses ranked by Kolmogorov Complexity", then our definitions have desynced.

No, that's what I mean by Bayesianism - SIA is literally just one form of interpreting the universal prior. SSA is a different way of interpreting that prior.

>Also, remember: you need to select your priors based on inferences in real life. You're a neural network that developed from scatted particles- your priors need to have actually entered into your brain at some point.

The bootstrap problem doesn't mean you apply your priors as an inference. I explained which prior I selected. Yes, if I had never learned about Bayes or Solomonoff or Occam I wouldn't be using those priors, but that seems irrelevant here.

>SIA has you reason as if you were randomly selected from the set of all possible observers.

Yes, this is literally describing a prior - you have a certain, equal, prior probability of "being" any member of that set (up to weighting and other complications).

>If you think you're randomly drawn from the set of all possible observers, you can draw conclusions about what the set of all possible observers looks like

As I've repeatedly stated, this is a prior. The set of possible observers is fully specified by Solomonoff induction. This is how you reason regardless of whether you send off probes or not. It's still unclear what you think is impermissible in a prior - do you really think one can't have a prior over what the set of possible observers looks like? If so, some questions about the future will end up unanswerable, which seems problematic. If you specify your model, I can construct a scenario that's paradoxical for you, or Dutch-bookable if you're indeed rejecting Bayes as I think you are.

Once you confirm that my fully specified model captures what you're looking for, I'll go through the math and show how one applies SIA in detail, in my terms.

**ike**on Not Even Evidence · 2020-10-06T21:28:04.774Z · LW · GW

Here's my version of your scenario.

You send out one thousand probes that are all too far apart to have any effect on each other. Each probe flips a coin / selects a random quantum bit. If heads, it creates one billion simulations of you and tells each of them that it got heads. If tails, it creates one simulation of you and tells it that it got tails. And "you" as the person who sent the probes commits suicide right after launch, so you're not counted as part of this.

Would you agree that this version exhibits the same paradoxical structure as yours, so I can analyze it with priors etc? If not, what would you prefer I change? I want hard numbers so I can actually get numerical output.
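
For concreteness, here are the observer counts this setup produces (computed exactly from the numbers above, rather than simulated):

```python
from fractions import Fraction

PROBES = 1000
HEADS_SIMS = 10**9   # simulations created by a probe that flips heads
TAILS_SIMS = 1       # simulations created by a probe that flips tails

# Expected number of simulations per probe, and the SIA-style weight
# a simulated observer should put on "my probe got heads".
expected_sims = Fraction(1, 2) * HEADS_SIMS + Fraction(1, 2) * TAILS_SIMS
p_heads_given_observer = Fraction(HEADS_SIMS, 2) / expected_sims

print(expected_sims * PROBES)         # expected total simulations
print(float(p_heads_given_observer))  # almost all observers are told "heads"
```

The asymmetry is the whole point of the setup: almost every observer in this scenario is a simulation that was told "heads", so on SIA each simulated copy should be nearly certain its probe got heads.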

**ike**on Not Even Evidence · 2020-10-06T21:22:35.980Z · LW · GW

>The fact that you use some set of priors is a physical phenomenon.

Sure, but irrelevant. My prior is exactly the same in all scenarios - I am chosen randomly from the set of observers according to the Solomonoff universal prior. I condition based on my experiences, updating this prior to a posterior, which is Solomonoff induction. This process reproduces all the predictions of SIA. No part of this process requires information that I can't physically get access to, except the part that requires actually computing Solomonoff as it's uncomputable. In practice, we approximate the result of Solomonoff as best we can, just like we can never actually put pure Bayesianism into effect.

Just claiming that you've disproven some theory with an unnecessarily complex example that's not targeted towards the theory in question and refusing to elaborate isn't going to convince many.

You should also stop talking as if your paradoxes prove anything. At best, they present a bullet that various anthropic theories need to bite, and which some people may find counter-intuitive. I don't find it counter-intuitive, but I might not be understanding the core of your theory yet.

>SIA is asserting more than events A, B, and C are equal prior probability.

Like what?

I'm going to put together a simplified version of your scenario and model it out carefully with priors and posteriors to explain where you're going wrong.