Unbounded Intelligence Lottery

post by kman · 2022-07-07T23:28:03.071Z · LW · GW · 11 comments

Suppose you're offered a free ticket for the following lottery: an $\epsilon$ chance of being uploaded onto a perfect platonic Turing machine (with the understanding that you'll have full control over the course of the computation and the ability to self-modify) and a $1 - \epsilon$ chance of dying immediately. Assume that if you do not participate in the lottery, you will never again have a chance to be uploaded onto a perfect platonic Turing machine. What is the smallest value of $\epsilon$, if any, at which you'll participate in the lottery?
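One way to make the question concrete (a minimal sketch of my own, not part of the original problem): under a bare expected-utility model with hypothetical utilities `u_upload` for being uploaded and `u_life` for declining and living out a normal life, with death pinned at zero utility, you accept iff $\epsilon \cdot u_{\text{upload}} > u_{\text{life}}$:

```python
def min_epsilon(u_upload: float, u_life: float) -> float:
    """Smallest upload probability at which accepting the lottery beats
    declining, under a bare expected-utility model (illustrative only;
    death is taken as the zero point of the utility scale)."""
    return u_life / u_upload

# Hypothetical numbers: if the upload is worth a million times a normal
# life, roughly a one-in-a-million chance already suffices.
print(min_epsilon(u_upload=1e6, u_life=1.0))  # 1e-06

# The threshold shrinks toward 0 as the upload's value grows without bound:
for u in (1e3, 1e6, 1e9):
    print(u, min_epsilon(u, 1.0))
```

On this toy model, the answer to the post's question hinges entirely on whether the upload's value is bounded: if it is "infinitely" valuable, any $\epsilon > 0$ suffices.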

Why is this interesting?

An intelligence embedded in a perfect platonic Turing machine would be able to expand and improve itself indefinitely (and arbitrarily quickly, subjectively), without ever running into physical limitations. It could think any computable thought in a subjective instant. It could spend as many steps and as much memory as it wants on simulating fun experiences. It could simulate our universe (?) in order to upload every other human who has ever lived or will ever live. It could do the same for any sentient aliens. Would this be infinitely better than living for a billion years?

11 comments

Comments sorted by top scores.

comment by Vladimir_Nesov · 2022-07-08T00:09:20.708Z · LW(p) · GW(p)

Something needs to give this mathematical object moral worth; otherwise we could just consider the machine in question without having to win the lottery (the machine exists as an idea regardless of any lotteries, which are not part of the machine and don't influence its content). Usually the source of moral worth is some sort of instantiation in the physical world, which won't work for details uncomputable in the physical world.

On the other hand, this is a matter of preference, so it's possible that your extrapolated volition cares about details of mathematical objects that can't be computed in the physical world, in which case such Turing machines might matter. But the situation where they matter conditionally on some lottery is more tenuous.

(Of course, the thought experiment itself implicitly requires exactly this: that you do end up caring about a Turing machine conditionally on the outcome of the lottery. So if we are already inside the hypothetical of the thought experiment, my comment is beside the point.)

Replies from: kman
comment by kman · 2022-07-08T00:33:11.080Z · LW(p) · GW(p)

Suppose I further specify the "win condition" to be that you are, through some strange sequence of events, able to be uploaded onto such a TM embedded in our physical universe at some point in the future (supposing such a thing is possible), and that if you do not accept the lottery then no such TM will ever come to be embedded in our universe. The point being that accepting the lottery increases the measure of the TM. What's your answer then?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2022-07-08T00:46:27.830Z · LW(p) · GW(p)

That wouldn't matter in general: physicality of the initial states of a TM doesn't make its states from the sufficiently distant future any more physically computed, so there is no "increasing the measure of the TM" by physical means. The general argument from being physically instantiated doesn't cover this situation; it has to be a separate fact about preference, caring about a TM in a way that necessarily goes beyond caring about the physical world. (This is under the assumption that the physical world can't actually do unbounded computation of undiluted moral weight, which in principle it might.)

Replies from: kman
comment by kman · 2022-07-08T01:03:45.036Z · LW(p) · GW(p)

physicality of the initial states of a TM doesn't make its states from the sufficiently distant future any more physically computed

I'm not sure what you mean by this.

Let's suppose the description length of our universe, plus the bits needed to specify the location of the TM within it, is shorter than any other way you might wish to describe such a TM. So with the lottery, you are in some sense choosing whether this TM gets a shorter or a longer description.
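Spelled out (my notation, not from the original exchange): writing $K(x)$ for the shortest description length of $x$, the supposition is

$$K(\text{universe}) + \ell_{\text{locate}} \;<\; K_{\text{direct}}(\text{TM}),$$

so accepting the lottery determines whether the TM's shortest description routes through our universe (the left-hand side) or must stand alone (the right-hand side), and hence how long that shortest description is.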

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2022-07-08T01:24:06.263Z · LW(p) · GW(p)

The argument for the moral worth of physically instantiated details says that details matter when they are physically instantiated. Theories about description lengths are not part of this argument; caring about such things is an example of caring about things other than the physical world.

I'm not sure what you mean by this.

What I mean is that sufficiently distant states of the TM won't be physically instantiated regardless of how many times its early states get to be physically instantiated. Therefore a preference that cares about things based on whether they get physically instantiated won't care about the distant states of the TM, regardless of how many times its early states get instantiated.

A preference that cares about things other than physical instantiation can of course care about them, including conditionally on how many times early states of a TM get to be physically instantiated. Which is sufficient to implement the thought experiment, but not necessary, since one shouldn't fight the hypothetical. If the thought experiment asks us to consider caring about unbounded TMs, that's the appropriate thing to do, whether that happens to hold about us in reality or not.

Replies from: kman
comment by kman · 2022-07-08T01:43:21.808Z · LW(p) · GW(p)

I see. When I wrote

such a TM embedded in our physical universe at some point in the future (supposing such a thing is possible)

I implicitly meant that the embedded TM was unbounded, because in the thought experiment our physics turned out to support such a thing.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2022-07-08T02:02:30.404Z · LW(p) · GW(p)

Ah, I see: the problem was ambiguity between the TM-defined-by-its-initial-state and the TM-with-full-computation-history. Since you said it was embedded in physics, I resolved the ambiguity in favor of the first option, also allowing a bit of the computation to take place, but not all of it. But if unbounded computation fits in physics, saying that something is physically instantiated can become meaningless once we allow the embedded unbounded computations to enumerate enough things. Some theory of measure of how much something is instantiated then becomes necessary (because everything is at least a little bit instantiated), hence the relevance of your point about description length to caring-about-physics.

Replies from: kman
comment by kman · 2022-07-08T03:27:05.307Z · LW(p) · GW(p)

Right. I think that if we assign measure $2^{-\ell}$ to something whose shortest description has length $\ell$, and assume that the $\epsilon$ probability increases the description length of the physically instantiated TM by $\log_2(1/\epsilon)$ bits (because the probability is implemented through reality branching, which means more bits are needed to specify the location of the TM, or something like that), then this actually has a numerical solution, depending on what the description lengths end up being and how much we value this TM compared to the rest of our life.

Say $U$ is the description length of our universe, $L$ is the length of the description of the TM's location in our universe when the lottery is accepted, $R$ is the description length of the location of "the rest of our life" from the point when the lottery is accepted, $T$ is the length of the next shortest description of the TM that doesn't rely on embedding in our universe, $v$ is how much we value the TM, and $w$ is how much we value the rest of our life. Then we should accept the lottery for any $\epsilon > 2^{U+L-T} + \frac{w}{v}\,2^{L-R}$, if I did that right.
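For the record, here is one way the threshold falls out of these assumptions (my reconstruction, taking value as measure-weighted and assuming $U + L + \log_2(1/\epsilon) < T$, i.e. winning actually shortens the TM's description):

$$\begin{aligned} \text{value(accept)} &= v \cdot \epsilon \, 2^{-(U+L)}, \\ \text{value(decline)} &= v \cdot 2^{-T} + w \cdot 2^{-(U+R)}, \\ \text{accept} &\iff v \, \epsilon \, 2^{-(U+L)} > v \, 2^{-T} + w \, 2^{-(U+R)} \\ &\iff \epsilon > 2^{\,U+L-T} + \tfrac{w}{v}\, 2^{\,L-R}. \end{aligned}$$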

Replies from: kman
comment by kman · 2022-07-08T03:43:55.341Z · LW(p) · GW(p)

If we consider the TM to be "infinitely more valuable" than the rest of our life, as I suggested might make sense in the post, then we would accept whenever $\epsilon > 2^{U+L-T}$. We will never accept if $T \le U + L$, i.e. if accepting does not decrease the description length of the TM.
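A quick numerical sanity check of the threshold (toy bit-lengths of my own choosing; nothing here is from the original comments):

```python
def accept_threshold(U: int, L: int, R: int, T: int, v: float, w: float) -> float:
    """Smallest epsilon at which accepting the lottery wins, under the
    2**-(shortest description length) measure model sketched above.

    U: description length of our universe (bits)
    L: bits to locate the TM in our universe if the lottery is accepted
    R: bits to locate "the rest of our life" from the decision point
    T: length of the shortest TM description not routed through our universe
    v, w: how much we value the TM and the rest of our life, respectively
    """
    return 2.0 ** (U + L - T) + (w / v) * 2.0 ** (L - R)

# Toy numbers where embedding beats the direct description by 20 bits:
U, L, R, T = 1000, 50, 60, 1070
print(accept_threshold(U, L, R, T, v=1.0, w=1.0))         # ~9.78e-04

# As v -> infinity the threshold tends to 2**(U + L - T):
print(accept_threshold(U, L, R, T, v=1e12, w=1.0))        # ~9.54e-07
print(2.0 ** (U + L - T))                                  # ~9.54e-07

# If T <= U + L, the threshold is >= 1, so no lottery is worth taking:
print(accept_threshold(U, L, R, T=1040, v=1e12, w=1.0) >= 1.0)  # True
```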

comment by Vladimir_Nesov · 2022-07-08T02:37:34.218Z · LW(p) · GW(p)

To create uploads we would have to first solve said mystery

Solving this mystery is not necessary in order to create uploads, but deconfusing this is relevant to knowing what it means to say that they are the same individuals as pre-upload originals, or that they hold the same moral worth as pre-upload originals, and whether saying that is correct.

Replies from: superads91
comment by superads91 · 2022-07-08T17:12:32.587Z · LW(p) · GW(p)

True, in a way. Without solving said mystery (of how an animal brain produces not only calculations but also experiences), you could theoretically create philosophical-zombie uploads. But what this post really wants is to save all conscious beings from death and disease by uploading them, and to that end (the most important one) it still looks impossible.

(I deleted my post because in hindsight it sounded a bit off topic.)