Anthropic answers to logical uncertainties?

post by Roko · 2010-04-06T17:51:49.486Z · LW · GW · Legacy · 43 comments

Suppose that if the Riemann Hypothesis were true, then some complicated but relatively well-accepted corollary involving geometric superstring theory and cosmology would imply that the universe contains 10^500 times more observers. Suppose furthermore that the corollary (RH ==> 10^500× more observers) is accepted to be true with very high probability (say, 99.9%).

A presumptuous philosopher now has a "proof" of the Riemann Hypothesis. Just use the self-indication assumption: reason as if you are an observer chosen at random from the set of all possible observers (in your reference class). Since almost all possible observers arise in "possible worlds" where RH is true, you are almost certainly one of these.
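
To make the structure of this "proof" explicit, here is a minimal numerical sketch of the SIA update, using only the made-up figures from the setup above (a 50% mathematical prior on RH, 99.9% confidence in the corollary, and the 10^500 observer multiplier):

```python
# Minimal sketch of the presumptuous philosopher's SIA update.
# All numbers are the illustrative ones from the setup above.
from fractions import Fraction

p_rh_prior = Fraction(1, 2)          # mathematical intuition before anthropics
p_corollary = Fraction(999, 1000)    # P(corollary: RH => 10^500x observers)
multiplier = 10**500                 # observer multiplier if RH and corollary hold

# SIA: weight each hypothesis by its prior times its number of observers.
w_rh_true = p_rh_prior * (p_corollary * multiplier + (1 - p_corollary) * 1)
w_rh_false = (1 - p_rh_prior) * 1

p_rh_posterior = w_rh_true / (w_rh_true + w_rh_false)
print(float(p_rh_posterior))  # ~1.0: the "anthropic proof" of RH
```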

Do we believe this argument?

One argument against it is that, if RH is false, then the "possible worlds" where it is true are not possible. They are not merely non-actual; they are as ridiculous as worlds where 1+1=3.

Furthermore, the justification for reasoning anthropically is that the set Ω of observers in your reference class maximizes its combined winnings on bets if all members of Ω reason anthropically; otherwise, they act as a "collective sucker". Unless you have reason to believe you are a "special" member of Ω, you should assume that your best move is to reason as if you are a generic member of Ω, i.e. anthropically. When most of the members of Ω arise from merely non-actual possible worlds, this reasoning is defensible. When most of the members of Ω arise from non-actual impossible worlds, something seems to have gone wrong. Observers who would only exist in logically impossible worlds can't make bets, so the "collective sucker" arguments don't really work.
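
Here is a toy illustration of the "collective sucker" justification, with numbers I have made up for the example: if every member of the reference class bets the way anthropic reasoning recommends, the class as a whole gets far more of its bets right than a class of prior-only bettors.

```python
# Toy model of the "collective sucker" argument, with made-up numbers:
# two candidate worlds, prior 50/50; world A has 10 observers, world B has 1000.
# Every observer is asked to bet on which world is actual.
import random

N_A, N_B = 10, 1000
trials = 10_000
sia_correct = prior_correct = total = 0

for _ in range(trials):
    actual_is_B = random.random() < 0.5
    n = N_B if actual_is_B else N_A
    total += n
    # SIA strategy: every observer bets on the observer-rich world B.
    if actual_is_B:
        sia_correct += n
    # Prior-only strategy: each observer bets by flipping a fair coin,
    # so on average half of them are right.
    prior_correct += n / 2

print("SIA bettors correct:   %.1f%%" % (100 * sia_correct / total))    # ~99%
print("prior bettors correct: %.1f%%" % (100 * prior_correct / total))  # ~50%
```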

If you think that the above argument in favor of RH is a little bit fishy, then you might want to ponder Katja's ingenious SIA great filter argument. Most plausible explanations for a future great filter are logical facts, not empirical ones. The difficulty of surviving a transition through technological singularities, if it convergently causes non-colonization, is some logical fact, derivable by a sufficiently powerful mind. A tendency for advanced civilizations to "realize" that expansionism is pointless is a logical fact. I would argue that anthropic considerations should not move us on such logical facts.

Therefore, if you still buy Katja's argument, and you don't endorse anthropic reasoning as a valid method of mathematical proof, you need to search for an empirical fact that causes a massive great filter just after the stage of civilization that we're now at.

The supply of these is limited. Most explanations of the great filter/Fermi paradox postulate some convergent dynamic that occurs every time a civilization gets to a certain level of advancement; but since these are all things you could work out from first principles, e.g. by Monte Carlo simulation, they are logical facts. Some other explanations, in which our background assumptions are false, do survive, e.g. the Zoo Hypothesis and the Simulation Hypothesis.

Let us suppose that we're not in a zoo or a simulation. It seems that the only possible empirical great filter cause that fits the bill is something that was decided at the very beginning of the universe; some contingent fact about the standard model of physics (which, according to most physicists, was some symmetry breaking process, decided at random at the beginning of the universe). Steven0461 points out that particle accelerator disasters are ruled out, as we could in principle colonize the universe using Project Orion spaceships right now, without doing any more particle physics experiments. I am stumped as to just what kind of fact would fit the bill. Therefore the Simulation Hypothesis seems to be the biggest winner from Katja's SIA doomsday argument, unless anyone has a better idea. 

Update: Reader bogdanb points out that there are very simple logical "possibilities" that would result in there being lots of observers, such as the possibility that 1+1 equals some suitably huge number, say 10^^^^^^^^10. You know there is an observer, you, and that there is another observer, your friend, and therefore there are 10^^^^^^^^10 observers according to this "logical possibility". If you reason according to SIA, you might end up doubting elementary arithmetical truths.

 

43 comments

Comments sorted by top scores.

comment by bogdanb · 2010-04-06T21:22:12.727Z · LW(p) · GW(p)

You don’t need any complicated relationship between the RH and the number of observers. Just consider the very simple “possibility” that 1+0 = 1E500. Worlds in which this is true have lots more observers than those where 1+0 = 1 :-)

Replies from: andrew-jacob-sauer, Roko
comment by Andrew Jacob Sauer (andrew-jacob-sauer) · 2019-09-15T18:49:42.748Z · LW(p) · GW(p)

That's the funniest thing I've seen all day.

comment by Roko · 2010-04-07T08:58:44.605Z · LW(p) · GW(p)

I was considering mentioning this too. What if 1+0 = 3^^^^^^^^^^^3? Then there are really, definitely lots of observers.

Replies from: bogdanb
comment by bogdanb · 2010-04-08T07:35:06.557Z · LW(p) · GW(p)

Well, you still need cogito ergo sum to reach that conclusion. Though I’ve no idea how to make anthropic reasoning work without it :-)

comment by Wei Dai (Wei_Dai) · 2010-04-07T10:44:00.974Z · LW(p) · GW(p)

I would argue that anthropic considerations should not move us on such logical facts.

Unless you move fully to UDT (which you don’t seem willing to do, at least for the purposes of this post), such a rule will lead you astray. Consider this thought experiment:

Omega appears and says that a minute ago he generated a physically random number R between 10^9 and 10^10, and if the R-th bit of π is 1, made 100 copies of Earth and scattered them throughout the universe. (He’s appearing to you in all 100 copies if that’s the case.) A minute from now, he will reveal R to you, and then you’ll be given an opportunity to bet $100 on the R-th bit of π being 1 at 1:1 odds.

What do you want your future self to do? If you apply SSA/SIA now, you would conclude that you’re likely to live in a universe where Omega made 100 copies of Earth, in other words a universe where the number R that Omega generated is such that the R-th bit of π is 1. So you have a greater expected utility if your future self were to take the bet.

But once you learn R, and if you follow the proposed rule, you’d become indifferent between taking the bet and not taking it, because whatever R turns out to be, you’d think that the R-th bit of π being 1 has probability .5, since that’s a reasonable prior and you’re not willing to let anthropic considerations move you on that purely logical fact.
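
A rough sketch of the expected-dollar arithmetic in this thought experiment (treating each copy's winnings as equally valuable, which is my simplifying assumption):

```python
# Before R is revealed: SSA/SIA says you are probably one of the 100 copies,
# i.e. probably in the case where the R-th bit of pi is 1.
p_bit_is_1 = 0.5                      # prior over the random R's bit of pi
n_copies_if_1, n_copies_if_0 = 100, 1

# Anthropic posterior that "my" world is the 100-copy one:
p_anthropic = (p_bit_is_1 * n_copies_if_1) / (
    p_bit_is_1 * n_copies_if_1 + (1 - p_bit_is_1) * n_copies_if_0)
ev_bet_before = p_anthropic * 100 + (1 - p_anthropic) * (-100)
print(p_anthropic, ev_bet_before)     # ~0.99, ~+98: betting looks good now

# After R is revealed, the proposed rule forbids an anthropic update on the
# purely logical fact "the R-th bit of pi is 1", so the probability snaps
# back to the 0.5 prior and the bet looks exactly neutral.
ev_bet_after = 0.5 * 100 + 0.5 * (-100)
print(ev_bet_after)                   # 0: indifferent, as described above
```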

Replies from: Roko
comment by Roko · 2010-04-08T15:20:47.573Z · LW(p) · GW(p)

I agree that this example proves that the naive approach doesn't work in general. Thank you for providing it.

What would UDT do?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-04-16T10:44:12.397Z · LW(p) · GW(p)

What would UDT do?

Do you mean what UDT would do in the example I gave? A UDT agent would have a preference about how it wants the multiverse as a whole to turn out. Does it prefer:

A. In approximately half of possible universes/branches, 100 copies of itself gain $100; in the other half, 1 copy of itself loses $100. Or,

B. Nobody gains or loses any money.

Betting implies A, and not betting implies B, so if it prefers A to B, then it chooses to bet, otherwise it doesn't. (For simplicity, this analysis ignores more complex strategies such as only betting for some fraction of possible R's.)
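
Whether the agent prefers A to B depends on how its utility aggregates over copies; here is a small sketch with two illustrative aggregation rules (my choices for the example, not anything prescribed by UDT):

```python
# Outcome A: in ~half of branches 100 copies each gain $100; in the other
# half one copy loses $100. Outcome B: nobody gains or loses anything (value 0).
def value_A(aggregate):
    return 0.5 * aggregate(copies=100, delta=+100) + 0.5 * aggregate(copies=1, delta=-100)

# Total-sum aggregation: every copy's dollars count equally.
total = lambda copies, delta: copies * delta
# Per-branch aggregation: a branch's value doesn't grow with its copy count.
average = lambda copies, delta: delta

print(value_A(total))    # +4950 > 0: prefers A, so it bets
print(value_A(average))  # 0: indifferent between betting and not
```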

comment by Vladimir_Nesov · 2010-04-06T18:59:34.143Z · LW(p) · GW(p)

If you just use a utility function that favors a large number of observers getting personal utility, then you'll automatically act as if you are in the many-observers world when you are uncertain. This doesn't require revising your level of belief in the logical fact, and it allows you to switch to the minority option upon discovering the corresponding solution. The same setting allows you to estimate the value of new logical information about the fact. (This has to refer to some standard position on anthropics.)

comment by RobinHanson · 2010-04-06T22:43:41.997Z · LW(p) · GW(p)

You seem to be rejecting the "impossible possible worlds" framework, and count what happens in "possible" but not actual worlds more than in "impossible" worlds. Neither world is actual, so why treat them so differently?

Replies from: CarlShulman
comment by CarlShulman · 2010-04-06T23:47:49.586Z · LW(p) · GW(p)

Modal realism/Tegmark level IV likely.

Replies from: jimmy
comment by jimmy · 2010-04-07T22:51:48.900Z · LW(p) · GW(p)

I don't think that helps.

Consider the case (1) where Omega comes to you and says "If the Rth digit of pi is <5, then I created 1e500 copies of you." and then you have to make some decision. Compare that to the case (2) where Omega uses a quantum coin toss instead.

Can you think of any decision that you'd want to make differently in case 1 than in case 2?

In case 1 you're deciding to do something that might affect 1 person or 1e500 people (with equal probability), and in case 2 you're deciding to do something that affects 1 person AND 1e500 people (each with half the measure).

Since you decide over expected values anyway, it shouldn't matter. It looks like the difference between 50% chance of 2 utilons and 100% chance of one utilon.

Replies from: CarlShulman
comment by CarlShulman · 2010-04-08T00:25:16.994Z · LW(p) · GW(p)

If you have preferences about, e.g. the average level of well-being among actual entities, it could matter a lot.

Replies from: jimmy
comment by jimmy · 2010-04-08T16:50:00.141Z · LW(p) · GW(p)

How so?

In the first case you still don't know which ones are "actual" and which ones are "impossible" so you still have to decide based on expected actual entities.

Replies from: Roko
comment by Roko · 2010-04-08T17:32:21.009Z · LW(p) · GW(p)

One plausible reason is risk-averse optimization of welfare on {actual entities}. If you buy Tegmark level IV, then with the quantum coin, you are guaranteed that the upside of the bet will materialize (or be realized), whereas with Pi, you might lose in every part of the multiverse.

In the long run, the two will come out the same, i.e. given a series of logical bets L1, L2, ... and a long series of quantum bets Q1, Q2, ..., the average welfare increase over the whole multiverse from expected utility maximization on the L bets will be the same as that from expected utility maximization on the Q bets.

However, if you have just one bet to make, a "Q" bet is a guaranteed payoff somewhere, but the same cannot be said for an "L" bet.
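
A sketch of how this cashes out numerically, using an illustrative concave welfare function (the log function and the welfare figures are my assumptions, not anything from the thread):

```python
import math

# Value the measure-weighted welfare of {actual entities} with a concave
# function u(w) = log(w), modelling risk aversion.
u = math.log
baseline = 100.0
gain = 50.0

# Q bet (quantum coin): both outcomes are actual, each with half the measure,
# so the upside is guaranteed to be realized in half the multiverse.
value_Q = u(baseline + 0.5 * gain)

# L bet (digit of pi): one outcome holds everywhere; with probability 1/2
# the whole multiverse gets the gain, with probability 1/2 none of it does.
value_L = 0.5 * u(baseline + gain) + 0.5 * u(baseline)

print(value_Q > value_L)   # True: a risk-averse valuer prefers the Q bet,
                           # even though the expected welfare is identical
```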

Replies from: jimmy
comment by jimmy · 2010-04-08T20:39:02.469Z · LW(p) · GW(p)

Good point. I still have some hard to verbalize thoughts against this, but I'll have to think about it more to tease them out.

Since risk aversion is a result of a 'convex frown' utility function, and since we're talking about differences in number of entities, we'd have to have a utility function that is convex frown over number of entities. This means that the "shut up and multiply" rule for saving lives would be just a first order approximation that is valid near the margin. It's certainly possible to have this type of preference, but I have a hunch that this isn't the case.
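
Spelled out, the "first order approximation" point is just the usual Taylor expansion (my gloss on the claim above):

$$u(n + \Delta) \approx u(n) + u'(n)\,\Delta \quad \text{for } |\Delta| \ll n,$$

so for marginal changes the value of saving lives is effectively linear in Δ ("shut up and multiply"), while for the astronomically large Δ in anthropic scenarios the higher-order curvature terms of a concave u dominate.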

For there to be a difference, you'd also have to be indifferent between extra observers in a populated Everett branch and extra observers in an empty one, but that seems likely.

comment by steven0461 · 2010-04-06T18:59:53.012Z · LW(p) · GW(p)

Observers who would only exist in logically impossible worlds can't make bets, so the "collective sucker" arguments don't really work.

I'm confused as to how and whether this property of "logical possibility" leaks through to affect the rationality of our decisions. I mean, people in non-actual possible worlds don't actually make bets, either. So if we don't care about bets that can't possibly get made, why would we care about bets that don't actually get made?

It seems to me that I can rationally take favorable bets on far-out digits of pi, even though it's conceivable that it makes all logically possible versions of me worse off. If I bet that some far-out digit is 7, at 1-100 odds, then I should even expect that probably, I will make all possible versions of myself worse off; and yet it's still the rational thing to do. So how is it different when you maximize EU through anthropic reasoning?

You should probably cite Bostrom for the Presumptuous Philosopher thought experiment.

Replies from: Roko
comment by Roko · 2010-04-07T12:37:52.950Z · LW(p) · GW(p)

I must admit that I am also somewhat confused.

However, there is a model in which anthropics behaves differently with respect to logical and empirical uncertainty: the multiverse theory. Here, every logically possible world is actualized, so there really are more copies of you in places where the local values of empirical variables are such that more copies are produced.

It seems to me that I can rationally take favorable bets on far-out digits of pi, even though it's conceivable that it makes all logically possible versions of me worse off

Sure, but that's not quite the same structure of reasoning as in the example I gave. Suppose that someone asks you to bet on whether RH is true or not, and your mathematical intuition says 50/50. Then they present the argument that RH ==> many, many more observers. Do your odds now change?

Reasoning from one empirical thing to another via a logical question is different from directly speculating about the logical question.

comment by steven0461 · 2010-04-06T18:18:11.245Z · LW(p) · GW(p)

One ominous possibility is that the standard model is such that advanced civilizations always destroy themselves by doing some lethal physics experiment.

If this is true, it requires a lot of revising of our (logical rather than empirical) beliefs, because it sure seems as though we could in principle colonize space without doing further physics experiments, and some civilizations similar to us will.

Replies from: Roko
comment by Roko · 2010-04-06T18:29:14.967Z · LW(p) · GW(p)

That's a good point Steven. I'll have to correct the post.

Replies from: steven0461, Mitchell_Porter
comment by steven0461 · 2010-04-06T19:13:05.442Z · LW(p) · GW(p)

Hmm, I think if you have simulations as the main/only hypothesis for the filter, Katja's argument reduces to (and is already accounted for by) the simulation argument.

comment by Mitchell_Porter · 2010-04-07T05:03:35.532Z · LW(p) · GW(p)

Steven0461 points out that particle accelerator disasters are ruled out, as we could in principle colonize the universe using Project Orion spaceships right now, without doing any more particle physics experiments.

If we're talking about supercollider-induced vacuum decay (where the disaster actually renders the whole future light-cone uninhabitable), then every part of your Orion-type civilization must avoid conducting the dangerous experiment, forever, or at least for long enough to matter, astro-demographically. It might be argued that this is improbable enough to offset the existence of the rare civilization which does have such discipline.

Replies from: Roko
comment by Roko · 2010-04-07T12:26:03.397Z · LW(p) · GW(p)

Disasters that sterilize the whole universe don't solve the Fermi problem, they exacerbate it.

It has to be a disaster that kills off each civilization just at the point that we're at, or a little after, without preventing our existence.

Replies from: Strange7, Mitchell_Porter, DanielVarga
comment by Strange7 · 2010-04-08T00:22:33.281Z · LW(p) · GW(p)

What if the first civilization in any given universe carries out some universe-sterilizing experiment, which effectively results in an entirely new universe, whose first civilization then carries out an equally universe-sterilizing experiment?

comment by Mitchell_Porter · 2010-04-08T00:17:27.536Z · LW(p) · GW(p)

Elementary anthropic logic tells us that we cannot find ourselves in a universe where we never got to exist!

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-04-10T03:33:49.391Z · LW(p) · GW(p)

Good point, of course, but since life like us could (it seems) have arisen billions of years ago, if intelligent life were common and mostly destroyed universes, we would expect to see a younger universe. (I remember seeing a paper (by Milan Cirkovic, maybe?) about this, but am too tired to find it right now.)

This also applies to DanielVarga's suggestion of relativistically expanding civilizations, if they don't contain observers in our reference class (if they did, obviously we'd expect to be there).

comment by DanielVarga · 2010-04-09T00:04:54.929Z · LW(p) · GW(p)

Let me promote my idea that other civilizations are unobservable to us because they are expanding at the speed of light. See a bit more detail here and the small thread below it.

comment by Andrew Jacob Sauer (andrew-jacob-sauer) · 2019-09-15T19:03:58.886Z · LW(p) · GW(p)

The Riemann argument seems to differ from the Great Filter argument in this way: the Riemann argument depends only on the sheer number of observers, i.e. the only thing you're taking into account is the fact that you exist. Whereas in the great filter argument you're updating based on what kind of observer you are, i.e. you're intelligent but not a space-travelling, uploaded posthuman.


The first kind of argument doesn't work because somebody exists either way: if RH (or whatever) is false then you are one of a small number; if it's true then you are one of a large number. You are in a typical position either way, and the other situation simply isn't possible. But the second kind of argument seems to hold more merit: if the great filter is behind us then you are part of the extreme minority of normal humans, but if the great filter is ahead then you are rather typical of intelligent lifeforms. This might count as evidence, and it seems to be the same kind of evidence which suggests that a great filter exists in the first place: if it doesn't, then we are very exceptional, not only in being the very first humans but in being the very first intelligent life as well.

comment by Mallah · 2010-04-06T19:33:25.207Z · LW(p) · GW(p)

the justification for reasoning anthropically is that the set Ω of observers in your reference class maximizes its combined winnings on bets if all members of Ω reason anthropically

That is a justification for it, yes.

When most of the members of Ω arise from merely non-actual possible worlds, this reasoning is defensible.

Roko, on what do you base that statement? Non-actual observers do not participate in bets.

The SIA is not an example of anthropic reasoning; anthropic implies observers, not "non-actual observers".

See this post for an example of the difference, showing why the SIA is false.

comment by Stuart_Armstrong · 2010-04-07T12:16:02.779Z · LW(p) · GW(p)

Do we believe this argument?

I believe this argument. One way of thinking about it, that may seem less counter-intuitive, is to imagine that the RH is one of a large group of math propositions, each of which (for argument's sake) we give a subjective probability of 50% of being true.

Then suppose one of the propositions, X, is selected at random, and we know that X implies 10^500 times more observers. In this situation we are in something akin to the old-fashioned SIA situation: we expect that roughly half the propositions will be true, so standard probability is all we need to state SIA.

But now suppose we find out that X is the RH, and that we don't get any extra information about the likely truth of the RH. By Bayesian rules, if we wish to shift our probability away from standard SIA (towards, for example, making the large universe less likely), then there must be some mathematical proposition that, if we used it instead of the RH, would shift the probability the other way (towards making the large universe more likely). Since all we know about these propositions is that they each have a subjective probability of 50% of being true, making them interchangeable, this cannot be the case.

Take-home message: SIA for uncertainty over empirical facts works if and only if SIA for uncertainty over logical facts does as well.

comment by Stuart_Armstrong · 2010-04-07T12:00:52.875Z · LW(p) · GW(p)

Can I repeat it again: there are priors for which Katja's argument decreases the probability of a late filter.

You may feel these priors are unintuitive, but please don't base "logical (not empirical) facts" on things that rely on intuition.

comment by Mitchell_Porter · 2010-04-07T02:26:11.768Z · LW(p) · GW(p)

Do we believe this argument?

It's not a proof. But it might make RH more plausible.

Mathematics is full of situations where an "empirical logical fact" is held to affect the plausibility of some other mathematical proposition we cannot yet decide. I cannot offhand think of an example of a physical observation being used that way, but it may have occurred. E.g. some quantum system might have equations of motion we don't know how to solve, but empirically we can see that it's stable, and from this we can infer something about solutions to those equations.

Replies from: RobinZ, Douglas_Knight
comment by RobinZ · 2010-04-07T02:39:42.894Z · LW(p) · GW(p)

I'm not entirely sure this is what you're talking about...

...but the use of dynamic similitude in engineering can be justified by referring to the modeling equations and showing they are equivalent for the scale model and the full-size system. Because both sets of equations are identical, their solutions are identical, and parameters derived from those solutions - such as drag and lift coefficients - must also be identical. Effectively, you solve the equation experimentally.

Edit: I can write out the algebra for the Navier-Stokes equation if you would like an example.
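
For reference, a compressed version of that algebra, assuming incompressible flow (the standard non-dimensionalization): writing $\mathbf{u}^* = \mathbf{u}/U$, $\mathbf{x}^* = \mathbf{x}/L$, $t^* = tU/L$, $p^* = p/(\rho U^2)$, the incompressible Navier-Stokes equations become

$$\frac{\partial \mathbf{u}^*}{\partial t^*} + (\mathbf{u}^* \cdot \nabla^*)\,\mathbf{u}^* = -\nabla^* p^* + \frac{1}{Re}\,\nabla^{*2}\mathbf{u}^*, \qquad Re = \frac{\rho U L}{\mu}.$$

Two systems with the same geometry, boundary conditions, and Reynolds number therefore satisfy the identical dimensionless problem, so dimensionless outputs such as the drag coefficient must agree.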

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-04-07T05:25:30.435Z · LW(p) · GW(p)

Nice example. Actually, whenever you rely on a computer to perform a calculation, you are likewise assuming that the physical structure of the object and the logical structure of the math problem have a degree of isomorphism.

There is actually a quantum cosmological model that starts with chaos, and a quantum-chaotic encoding of the Riemann hypothesis, so I'm wondering if Roko's starting point was more than just a whimsical example. :-)

But let's try to get a little clearer regarding the validity of anthropic reasoning in mathematics... The anthropic principle can never be the reason why a mathematical statement is true, whereas it can be the reason why a particular contingent physical fact is true (true 'locally' or 'indexically'). However, faced with a mathematical uncertainty, it may be that certain possibilities are friendlier to the existence of observers. We might therefore regard our own existence as at least consistent with one of those possibilities being the mathematical truth, and even as favoring those possibilities. This appears to be a form of inductive reasoning.

So to sum up, you cannot anthropically prove a mathematical proposition; but in the absence of more decisive considerations, anthropic induction may provide a reason to favor the hypothesis.

comment by Douglas_Knight · 2010-04-07T05:26:52.088Z · LW(p) · GW(p)

"Listen to water flowing - the Navier-Stokes equations are well-posed."

comment by Stuart_Armstrong · 2010-04-07T12:05:26.647Z · LW(p) · GW(p)

Most plausible explanations for a future great filter are logical facts, not empirical ones. The difficulty of surviving a transition through technological singularities, if it convergently causes non-colonization, is some logical fact, derivable by a sufficiently powerful mind. A tendency for advanced civilizations to "realize" that expansionism is pointless is a logical fact. I would argue that anthropic considerations should not move us on such logical facts.

These may be logical facts for a "sufficiently powerful mind", but they are empirical facts to us. For that mind, there is no such thing as an empirical fact about our universe (except for recursive statements about itself), so any distinction between empirical and logical facts is moot via this kind of argument.

Replies from: Vladimir_Nesov, Roko
comment by Vladimir_Nesov · 2010-04-08T18:34:44.921Z · LW(p) · GW(p)

These may be logical facts for a "sufficiently powerful mind", but they are empirical facts to us. For that mind, there is no such thing as an empirical fact about our universe (except for recursive statements about itself), so any distinction between empirical and logical facts is moot via this kind of argument.

If the mind is initially implemented as a program in a computer, it will have to predict what it'll observe at each subsequent moment, including when it leaves the initial implementation, given only its mathematical construction and not taking into account the previous observations. By creating two copies of such a mind, I can introduce different observations to the copies, so that any constant expected observation won't be correct.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2010-04-09T05:50:29.453Z · LW(p) · GW(p)

Yes, I skated over that point (it reinforces my argument against logical facts, but since it wasn't that relevant, I omitted it).

This corresponds to my current intuition about deterministic empirical facts - that a mind running inside a program must treat as empirical certain facts, when they are actually logical for a mind "outside the universe".

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-04-09T08:25:33.071Z · LW(p) · GW(p)

a mind running inside a program must treat as empirical certain facts, when they are actually logical for a mind "outside the universe".

Where the mind is in the world is a very important factor. A control scheme is given by a mapping from the controlling domain (the agent's influence) to the controlled outcomes (where preference is defined, inducing preference on the controlling decisions). Taking the controlling domain out of the system makes control and preference meaningless.

comment by Roko · 2010-04-08T17:36:05.657Z · LW(p) · GW(p)

For that mind, there is no such thing as an empirical fact about our universe

What about the initial quantum fluctuations that caused asymmetries that allowed matter to clump in the first place? What about the physical constants?

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2010-04-09T06:10:44.593Z · LW(p) · GW(p)

Would you mind if I elided that question by taking a step back, and questioning all definitions? Not for the fun of it, but just because I think we are arriving at a crucial point:

  • What, for us, is the distinction between a logical and an empirical fact?

Take, for instance, your sufficiently powerful mind (located in some sense outside the universe, or at least outside the causal flow of the question it is considering). You said that this mind can deduce, from first principles, that there is a great filter.

Is that claim empirical, or logical? It cannot be purely logical, for it will not be true in certain universes (such as those whose next instant is derived by hyper-computational means, for example); so it has to have an empirical component (we see it to be true in our universe, somehow).

Hence we know, empirically, that a hypothetical mind will deduce, logically, the great filter. So, from our perspective, is the great filter logical or empirical?

It gets worse: the hypothetical mind has to know, empirically, that it can deduce these facts logically, so even the second step is not purely logical. Then we can add that empirical deduction depends on certain logical facts about how deduction works, etc...

Even to return to the original question:

What about the initial quantum fluctuations that caused asymmetries that allowed matter to clump in the first place? What about the physical constants?

What if many-worlds is true? What if we live in a level X multiverse? You could say your location is an empirical fact, even in these worlds; but your location - and the fact you think about your location - is also a logical fact of the many-worlds/level X multiverse.

So a solid distinction between empirical and logical seems nebulous to me; I think we should at least start clearly defining these terms, before saying that certain facts lie in one category or the other.

And I haven't even brought up the whole issue of our minds being (empirically known to be) imperfect and possibly mistaken about whether particular logical facts are true...

comment by taw · 2010-04-06T18:25:52.412Z · LW(p) · GW(p)

Why do we even need to talk about great filters? We know perfectly well that intelligent life is so extremely unlikely that we should expect far less than 1 such event per universe.

Replies from: jimrandomh, steven0461
comment by jimrandomh · 2010-04-06T18:44:55.233Z · LW(p) · GW(p)

Saying that "intelligent life is so extremely unlikely, that we should expect far less than 1 such event per universe" is equivalent to saying that there exists an extremely strong filter behind us, since we're using "great filter" as jargon for "reason why advancement past a stage is improbable". And, for what it's worth, I don't think a filter anywhere near that strength is at all likely.

comment by steven0461 · 2010-04-06T18:28:43.425Z · LW(p) · GW(p)

How?