Dach's Shortform

post by Dach · 2020-09-24T07:17:48.478Z · LW · GW · 4 comments



comment by Dach · 2020-09-24T07:17:48.785Z · LW(p) · GW(p)

(2020-10-03) EDIT: I have found the solution: the way I was thinking about identity turns out to be silly.

In general, if you update your probability estimates of non-local phenomena based on anthropic arguments, you're (probably? I'm sure someone has come up with smart counterexamples) doing something that carries the sneaky implication that you're conducting FTL communication. I consider this a reductio ad absurdum of the whole idea of updating your probability estimates of non-local phenomena based on anthropic arguments, regardless of the validity of the specific scenario in this post.

If you conduct some experiment which tries to determine which world you're in, and you observe outcome x, you haven't (at least in general) learned anything about what fraction of the exact copies of you that existed before the experiment observed what you observed.

If you do update, and you claim the update you're making corresponds to reality, you're claiming that non-local facts are having a justified influence on you. Put like that, it's very silly. By adjusting the non-local worlds, we can change this justified influence on you (otherwise your update does not correspond to reality), and we have FTL signaling.

The things you're experiencing aren't evidence about the sorts of things that most exact copies of your brain are experiencing, and if you claim they are, you're unknowingly claiming that FTL communication is possible, and that you're doing it right now.
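For concreteness, here is a minimal sketch of the kind of update I'm rejecting, with made-up numbers: treat "which copy am I?" as a uniform draw from your exact copies, and update between two hypotheses that differ only in what fraction of those (possibly very distant) copies observe a given outcome.

```python
# Two hypotheses about the rest of reality, differing only in what fraction
# of my exact copies observe outcome A. All numbers are made up.
prior_h1 = 0.5           # H1: "90% of my copies observe A"
prior_h2 = 0.5           # H2: "10% of my copies observe A"
frac_a_given_h1 = 0.9
frac_a_given_h2 = 0.1

# The anthropic move: treat "which copy am I?" as a uniform draw, so
# P(I observe A | H) equals the fraction of copies observing A under H.
evidence = frac_a_given_h1 * prior_h1 + frac_a_given_h2 * prior_h2
posterior_h1 = frac_a_given_h1 * prior_h1 / evidence

print(posterior_h1)  # 0.9 -- credence now tracks facts about copies that
                     # may be arbitrarily far away from this one.
```

If a distant agent could change what fraction of copies observe A, they would change the "justified" posterior computed here, which is the sense in which I'm saying this smuggles in FTL signaling.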

I'll need to write something more substantial about this problem.

(End Edit)

 

(This bit is edited to redact irrelevant/obviously-wrong-in-hindsight information)

 

So, let us imagine our universe is "big" in the sense of many worlds, and all realities compatible with the universal wavefunction are actualized, or at least something close to that [LW · GW]. This seems increasingly likely to me.

Aligned superintelligences might permute through all possible human minds, for various reasons. They might be especially interested in permuting through minds which are thinking about the AI alignment problem, also for various reasons.

As a result, it's not evident to me, for normal reasons, that most of the "measure" of me-right-now flows into "normal" things: it seems plausible (on a surface level; I have some arguments against this) that most of my "measure" should be falling into Weird Stuff. Future superintelligences control the vast majority of the measure of everything (more branches in the future, more future in general, etc.), and they're probably more interested in building minds from scratch and then doing weird things with them.

If, among all of the representations of my algorithm in reality, a fraction (1 - 10^-30) of my measure were "normal stuff", I'd still expect to be "diverted" basically instantly, if we assume there's one opportunity for a "diversion" every Planck time.
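As a back-of-envelope check (a sketch using the hypothetical numbers above: a 10^-30 diverted fraction per opportunity, and one independent opportunity per Planck time):

```python
# Rough check of "diverted basically instantly", using the hypothetical
# numbers from the paragraph above.
planck_time = 5.39e-44   # seconds per Planck time
p_divert = 1e-30         # assumed diverted measure per opportunity

opportunities_per_second = 1 / planck_time                  # ~1.9e43
expected_diversions_per_second = opportunities_per_second * p_divert
expected_seconds_until_diversion = planck_time / p_divert

print(f"{expected_diversions_per_second:.1e} per second")   # ~1.9e+13
print(f"{expected_seconds_until_diversion:.1e} seconds")    # ~5.4e-14
```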

However, this is, of course, not happening: we keep not waking up from the simulation.

Possible explanations:

  • The world is small. This seems unlikely, since it would require all of the following:
  1. Many worlds is wrong.
  2. The universe is finite and not arbitrarily large.
  3. There's no multiverse, or the multiverse is small, or other universes all have properties which mean they don't support the potential for diversion, e.g. their laws are sufficiently different that none of them contain human algorithms in great supply.
  4. There's no way to gain infinite energy, or truly arbitrarily large amounts of energy.
  • The sum of "Normal universe simulations" vastly outweighs the sum of "continuity of experience stealing simulations", for some reason. Maybe there are lots of different intelligent civilizations in the game, and lots of us spawn general universe simulators, and we also tend to cut simulations when other superintelligences arrive.
  • Superintelligences are taking deliberate action to prevent "diversion" tactics from being effective, or predict that other superintelligences are taking these actions. For example, if I don't like the idea of diverting people, I might snipe for recently diverted sentients and place them back in a simulation consistent with a "natural" environment.
  • "Diversion" as a whole isn't possible, and my understanding of how identity and experience work is sketchy.
  • Some other sort of misunderstanding hidden in my assumptions or premise.

(Apply the comments in the edit above to the original post. If I think that the fact that I'm not observing myself being shoved into a simulation is evidence that most copies of my algorithm throughout reality are not being shoved into simulations, I also need to think that the versions of me which don't get shoved into simulations are justifiably updating in correspondence with facts at arbitrary physical separation from themselves, thus FTL signaling. Or, even worse, inter-universe signaling.)

Replies from: TAG, avturchin
comment by TAG · 2020-09-24T11:49:45.181Z · LW(p) · GW(p)

You didn't list "superintelligence is unlikely" among the possible explanations.

Replies from: Dach
comment by Dach · 2020-09-24T18:29:14.016Z · LW(p) · GW(p)

Right, that isn't an exhaustive list. I included the candidates which seemed most likely.

So, I think superintelligence is unlikely in general, but so is current civilization. I think superintelligences have a high occurrence rate given current civilization (for lots of reasons), which also means that current civilization isn't that much more likely than superintelligence. It's more justified to say that "superintelligences which make human minds" have a very low occurrence rate relative to natural examples of me and my environment, but that still seems to be an unlikely explanation.

Based on the "standard" discussion on this topic, I get the distinct impression that the probability that our civilization will construct an aligned superintelligence is significantly greater than, for example, 10^-20%, and the large amount of leverage that a superintelligence would have (there's lots of matter out there) would produce the same effect.
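To illustrate the leverage point with explicitly made-up numbers (a sketch, not a claim about actual probabilities or mind-counts):

```python
# Even a small probability of an aligned superintelligence can dominate the
# expected count of instantiations of a given mind, because of leverage.
# All numbers are made up for illustration only.
p_aligned_si = 1e-3        # assumed chance our civilization builds one
copies_per_si = 1e30       # assumed copies of a given mind it might run
natural_copies = 1.0       # "natural" instantiations of that mind

expected_si_copies = p_aligned_si * copies_per_si
print(expected_si_copies / natural_copies)  # 1e27: SI-run copies dominate in expectation
```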

comment by avturchin · 2020-09-25T10:32:15.762Z · LW(p) · GW(p)

Future superintelligences could steal minds to cure "past sufferings", to prevent s-risks, and to resurrect all the dead. This is actually a good thing, but for the resurrection of the dead they would have to run the whole world simulation once again for the last few thousand years. In that case it would look almost like the normal world.