
Comments sorted by top scores.

comment by Vaughn Papenhausen (Ikaxas) · 2018-05-02T20:21:11.996Z · LW(p) · GW(p)

Let me see if I've got your argument right:

(1) It seems likely that the world is a simulation (simulation argument)

(2) If (1) is true, then it's most likely that I am the only conscious being in existence (presumably due to computational efficiency constraints on the simulation, and where "in existence" means "within this simulation")

(3) If I am the only conscious being in existence, then it would be unethical for me to waste resources improving the lives of anyone but myself, because they are not conscious anyway, and it is most ethical for me to maximize the good in my own life.

(4) Therefore, it's likely that the most ethical thing for me to do is to maximize the good in my own life (ethical egoism).

Is this right?

I had never considered this argument before; it's really interesting, and I think it has a lot of promise. I especially had never really thought about premise (2) or its implications for premise (3); I think that is a really forceful point.

I'm not yet fully convinced though; let me see if I can explain why.

First, I don't think premise (2) is true. The simulation argument, at least as I tend to hear it presented, is based on the premise that future humans would want to run "ancestor simulations" in order to see how different versions of history would play out. If this is the case, it seems like first-person simulations wouldn't really do them much good; they'd have to simulate everyone in order to get the value they'd want out of the simulations. To be clear, by "first-person simulation" I mean a simulation that renders only from the perspective of one person. It seems to me that, if you're a physicalist about consciousness and the simulation were rendering everywhere, then all the people in the simulation would have to be conscious, because consciousness just is the execution of certain computations, and it would be necessary to run those computations in order to get an accurate simulation. This also means that, even in a first-person simulation, the people you were interacting with would be conscious as long as they were within your frame of awareness (otherwise the simulation couldn't be accurate), it's just that they would blink out of existence once they left your frame of awareness.

Second, correct me if I'm wrong, but it seems to me that premise (3) is actually assuming Utilitarianism (or some other form of agent-neutral consequentialism) which simply reduces to egoism when there's only one conscious agent in the world. So at bottom, the disagreement between yourself and effective altruists isn't normative, it's empirical (i.e. it's not about your fundamental moral theory, but simply about which beings are conscious). This isn't really a counterargument, more of an observation that you seem to have more in common with effective altruists than it may seem just on the basis that you call your position "ethical egoism." If, counterfactually, there _were_ other conscious beings in the world, would you think that they also had moral worth?

Third, assuming that your answer to that question is "yes," I think that it's still often worth it to act altruistically, even on your theory, on the basis of expected utility maximization. Suppose you think that there's only a small chance that there are other conscious beings, say 1%. Even so, if there are _enough_ lives at stake, even if you think they aren't conscious it can be worth it to act as if they are, because it would be morally catastrophic if you did not and they turned out to be conscious after all. I think this turns out to be equivalent to valuing other lives (and others' happiness, etc.) at a fraction p of the value of your own, where p is the probability you assign to their being conscious. So, if you assign a 1% chance that you're wrong about this argument and other people are conscious after all, you should be willing to e.g. sacrifice your life to save 100 others. Or if you think it's 0.1%, you should be willing to sacrifice your life to save 1000 others.
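
To make the break-even arithmetic explicit, here is a minimal Python sketch; the credences and the one-life-for-N-lives trade-off are just the illustrative numbers from the paragraph above, not anything beyond it.

```python
# Minimal sketch: if you value another life at p times your own, where p is
# your credence that other people are conscious, this is the number of other
# lives at which saving them matches the expected value of saving yourself.

def breakeven_lives(p_conscious: float) -> float:
    return 1.0 / p_conscious

print(breakeven_lives(0.01))   # 1% credence   -> 100.0 other lives
print(breakeven_lives(0.001))  # 0.1% credence -> 1000.0 other lives
```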

Anyway, I'm curious to hear your thoughts on all of this. Thanks for the thought-provoking article!

Replies from: mtrazzi
comment by Michaël Trazzi (mtrazzi) · 2018-05-03T09:33:40.466Z · LW(p) · GW(p)

First, let me thank you for taking the time to write down the premises and arguments. I think you summarized the argument well enough to allow a precise discussion.

I - "Premise 2) is false"

Ancestor simulations are defined in the simulation argument as follows: "A single such computer could simulate the entire mental history of humankind (call this an ancestor-simulation) [...]". I agree with you that the argument deals with the probability of ancestor-simulations, not first-person simulations. Therefore, I cannot infer the probability of a first-person simulation from the simulation argument itself.

However, such simulations are mentioned here:

"In addition to ancestor-simulations, one may also consider the possibility of more selective simulations that include only a small group of humans or a single individual. The rest of humanity would then be zombies or “shadow-people” – humans simulated only at a level sufficient for the fully simulated people not to notice anything suspicious. It is not clear how much cheaper shadow-people would be to simulate than real people. It is not even obvious that it is possible for an entity to behave indistinguishably from a real human and yet lack conscious experience. Even if there are such selective simulations, you should not think that you are in one of them unless you think they are much more numerous than complete simulations. There would have to be about 100 billion times as many “me-simulations” (simulations of the life of only a single mind) as there are ancestor-simulations in order for most simulated persons to be in me-simulations." (https://www.simulation-argument.com/simulation.html)

In short, I should only believe that I live in a me-simulation if I think me-simulations are about 100 billion times more numerous than ancestor-simulations.

Let me try to estimate the probability of those me-simulations, and compare it with the probability of ancestor simulations.

First, I claim that one of these two assertions is true (assuming the existence of post-humans who can run ancestor-simulations):

i) Post-humans will run ancestor-simulations because they don't observe other forms of (conscious) intelligence in the universe (Fermi paradox), and are therefore trying to understand whether there was indeed a Great Filter before them.
ii) Post-humans will observe that other forms of conscious intelligence are abundant in the universe, and will have little interest in running ancestor-simulations.

Second, even granting physicalism about consciousness, I claim that me-simulations could be made at least 100 billion times less computationally expensive than full simulations. Here are my reasons to believe so:

1) Even though it would be necessary to generate consciousness to mimic human processes, it would only be necessary for the humans you directly interact with, so perhaps 10 hours of human consciousness other than your own every day.

2) The physical volume needed for a me-simulation would be at most the size of your room (about 20 square meters times the height of your room). If you are in a room this is trivially true, and if you are outdoors I believe you are less aware of the rest of the physical world, so the "complexity of reality" needed for you to believe the world is real is about the same as if you were in your room. However, Earth's surface is about 500 million square kilometers, so about 2.5 * 10^13 times greater. It follows that it would be at least 100 billion times less computationally intensive to run a me-simulation, assuming the ancestor-simulation would simulate at least the same height (see the sketch after this list).

3) You would only need to run one ancestor civilization to run an arbitrarily large number of me-simulations: if you know the environment and have in memory how the conscious humans behaved, you can easily run a me-simulation where most of the other characters are just replays of what they did in the past (when they are in your focus), and only one person (or a small number of people) is conscious. A bit like in Westworld, where some plots have really convincing robots, but in general they are not.
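
As referenced in point 2), here is a back-of-the-envelope sketch of the area ratio; the room size and Earth-surface figure are just the rough numbers used in that point.

```python
# Rough check of the area ratio in point 2).
room_area_m2 = 20.0                         # ~20 square meters for a room
earth_surface_km2 = 500e6                   # ~500 million square kilometers
earth_surface_m2 = earth_surface_km2 * 1e6  # 1 km^2 = 1e6 m^2

ratio = earth_surface_m2 / room_area_m2
print(f"{ratio:.1e}")  # 2.5e+13, i.e. on the order of 10^13
```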

I am only starting to answer your comment and have already written a lot, so I might just create a post about selective simulations later today. If so, you could reply to this part there.

II - "the disagreement between yourself and effective altruists isn't normative, it's empirical"

I think I understand your point. If I got it right, you are saying that I am not contradicting Effective Altruism, but only applying empirical reasoning within EA's principles? If so, I agree with your claim. I guess I tried to apply Effective Altruism's principles, and in particular the Utilitarian view (which might be controversial, even inside EA, I don't know), to the described world (a video-game-like life) to show that it resulted in what I called ethical egoism.

"If, counterfactually, there _were_ other conscious beings in the world, would you think that they also had moral worth?"

I don't deeply share the Utilitarian view. To describe my view briefly, I believe I am a solipsist who values the perception of complexity. So I value my own survival, because I may not have any proof of any kind of the complexity of the Universe if I cease to exist, and also the survival of humanity (because I believe humans are amazingly complex creatures), but I don't value the positive subjective perceptions of other conscious human beings. I value my own positive subjective perceptions because they serve my utility function of maximizing my perception of complexity.

III - "I think that it's still often worth it to act altruistically"

Let's suppose I had answered yes.

Let me come back to what we said about physicalism and the simulation of other conscious agents in a first-person simulation. You said:

"[...] even in a first-person simulation, the people you were interacting with would be conscious as long as they were within your frame of awareness (otherwise the simulation couldn't be accurate), it's just that they would blink out of existence once they left your frame of awareness."

I claim that even though they would be conscious within the "frame of awareness", they would not deserve any altruism, even considering expected value. The reason is that if they only have sporadic consciousness, it greatly lacks what I consider a conscious human's subjective experience. In particular, if the other simulated humans I interact with do not have any continuity in their consciousness, and the rest is just false memories (e.g. Westworld), I would give a much greater value to my own subjective experience (at least 1000 times greater, I would say).

So if I have $10 in my pocket, I would still use it to buy myself an ice cream rather than buying some for random people in the street, even if my $10 could buy them 100 ice creams (though I might hesitate at, say, 10,000). The issue here is the probability of consciousness. If I assume there is a 1/1,000,000 chance that someone is conscious, and that I value my subjective experiences 10,000 times more than theirs, then I would need to be able to buy something like 10,000 * 1,000,000 = 10,000,000,000 ice creams (more than there are people on Earth) for it to be better not to buy myself one.
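
Here is a minimal sketch of that break-even calculation, using exactly the numbers from the paragraph above (the 1/1,000,000 consciousness probability and the 10,000x valuation are the illustrative assumptions stated there):

```python
# Break-even number of ice creams for strangers, in units where one
# stranger's conscious ice-cream experience is worth 1.
p_conscious = 1 / 1_000_000   # assumed chance that a given stranger is conscious
my_multiplier = 10_000        # how much more I value my own experience than theirs

value_of_my_ice_cream = my_multiplier            # buying myself one ice cream

def value_for_strangers(n_ice_creams: int) -> float:
    return n_ice_creams * p_conscious            # expected value of giving n away

breakeven = my_multiplier / p_conscious
print(f"{breakeven:,.0f}")            # 10,000,000,000
print(value_for_strangers(100))       # ~0.0001 -> far below 10,000, so buy my own
```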

Anyway, I am very glad you clarified my hypothesis with your comment, asked for clarification, and objected courteously. My post was not explicit at all and lacked the rational, detailed arguments that you provided. Answering you made me think a lot. Thank you.

Feel free to let me know what you think. PS: I might do a full post on the probability of me-simulations within 10 hours, as I said above.

Replies from: Ikaxas, TAG
comment by Vaughn Papenhausen (Ikaxas) · 2018-05-04T21:35:31.931Z · LW(p) · GW(p)

I'll respond to your point about me-simulation in the comments of your other post, as you suggested.

Response to your Section II

I'm skeptical that your utility function is reducible to the perception of complexity [LW · GW]. In Fake Utility Functions [LW · GW] Eliezer writes:

Press [the one constructing the Amazingly Simple Utility Function] on some particular point, like the love a mother has for her children, and they reply "But if the superintelligence wants 'complexity', it will see how complicated the parent-child relationship is, and therefore encourage mothers to love their children."  Goodness, where do I start?
Begin with the motivated stopping [? · GW]:  A superintelligence actually searching for ways to maximize complexity wouldn't conveniently stop if it noticed that a parent-child relation was complex.  It would ask if anything else was more complex.  This is a fake justification [? · GW]; the one trying to argue the imaginary superintelligence into a policy selection, didn't really arrive at that policy proposal by carrying out a pure search [? · GW] for ways to maximize complexity.
The whole argument is a fake morality [? · GW].  If what you really valued was complexity, then you would be justifying the parental-love drive by pointing to how it increases complexity.  If you justify a complexity drive by alleging that it increases parental love, it means that what you really value is the parental love.  It's like giving a prosocial argument in favor of selfishness.

In "You Don't Get to Know What You're Fighting For," Nate Soares writes:

There are facts about what you care about, but you don't get to know them all. Not by default. Not yet. Humans don't have that sort of introspective capabilities yet. They don't have that sort of philosophical sophistication yet. But they do have a massive and well-documented incentive to convince themselves that they care about simple things — which is why it's a bit suspicious when people go around claiming they know their true preferences.
From here, it looks very unlikely to me that anyone has the ability to pin down exactly what they really care about. Why? Because of where human values came from. Remember that one time that Evolution tried to build a mind that wanted to eat healthy, and accidentally built a mind that enjoys salt and fat? I jest, of course, and it's dangerous to anthropomorphize natural selection, but the point stands: our values come from a complex and intricate process tied closely to innumerable coincidences of history.

Thou art Godshatter [LW · GW], whose utility function has a thousand terms, each of which is in itself indispensable. While the utility function of human beings is complex, that doesn't imply that it reduces to "complexity is valuable."

I don't claim that what I've just said should convince you that your utility function isn't "maximize the amount of complexity I perceive"; I'm not in your head, and for all I know it could be. All I intend to convey is my reasons for being skeptical.

Let me ask you this: Why do you value complexity? And how do you know?

Response to your Section III

Regarding why it's still worth it to act altruistically: I just want to clarify that the part of my comment that you quoted:

"[...] even in a first-person simulation, the people you were interacting with would be conscious as long as they were within your frame of awareness (otherwise the simulation couldn't be accurate), it's just that they would blink out of existence once they left your frame of awareness."

wasn't supposed to have anything to do with my argument that it's still worth it to act altruistically.

Suppose that premise (2) above is true, i.e. you really are the only conscious being in the me-simulation, and even the people in your frame of awareness aren't conscious, even while they're there. Even if that's the case, your credence that it's the case shouldn't be 100%; you should have at least some minuscule doubt. Suppose that you assign a .01 credence to the possibility that premise (2) is false (i.e. a 1% credence); that is, you think there's a 99% chance it's true and a 1% chance it's false. In that case, suppose you could perform some action that, if premise (2) were false, would produce 1,000,000 utils for some people who aren't you. Then the expected utility of that action, even if you think premise (2) has a 99% chance of being right, is 10,000 utils. So if you don't have any other actions available that produce more than 10,000 utils, you should do that action. And the same kind of conclusion follows even if the numbers are different.
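
For concreteness, here is the same expected-utility calculation as a tiny Python sketch, using only the numbers already given in the paragraph above:

```python
# Expected utility of the altruistic action described above.
credence_premise2_false = 0.01   # 1% credence that the others are conscious after all
utils_if_false = 1_000_000       # utils produced if premise (2) is false
utils_if_true = 0                # the action produces nothing if premise (2) is true

expected_utils = (credence_premise2_false * utils_if_false
                  + (1 - credence_premise2_false) * utils_if_true)
print(expected_utils)  # 10000.0 -> worth doing unless a better action is available
```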

A further argument

A further argument for why it might still be worth it to act altruistically occurred to me after I posted my original comment, namely the following: if there are enough me-simulations that you are likely to be in one of them, then it's also likely that you appear in at least some other me-simulations as a "shadow-person," to use Bostrom's term. And since you are simply an instantiation of an algorithm, and the algorithm of you-in-the-me-simulation is similar to the algorithm of you-as-a-shadow-person, due to Functional Decision Theory and Timeless Decision Theory considerations your actions in the me-simulation will affect the actions of you-as-a-shadow-person. And you-as-a-shadow-person's actions will affect the conscious person in that other me-simulation, which means that you-in-the-me-simulation's actions will, acausally, have effects on other conscious beings. So you should want to act in such a way that the outputs of your near-copies in other me-simulations have positive effects on the conscious inhabitants of those other me-simulations. If this argument isn't clear, let me know and I can try to rephrase it. It's also not my True Rejection [LW · GW], so I don't place too much weight on it.

comment by TAG · 2018-05-04T11:41:59.747Z · LW(p) · GW(p)

Second, even granting physicalism about consciousness, I claim that me-simulations could be made at least 100 billion times less computationally expensive than full simulations. Here are my reasons to believe so:

If there is no point to running one, the saving doesn't mean anything. If the simulators want to find out how agents interact to shape history, they are going to need multiple accurately simulated people. A me-simulation is a video game with "me" in control... they are only going to get information about one person. Why would they be interested?

Replies from: mtrazzi
comment by Michaël Trazzi (mtrazzi) · 2018-05-04T12:38:44.078Z · LW(p) · GW(p)

Yes, I agree that cost-effectiveness does not mean me-simulations would be useful.

What is needed are specific, empirical reasons why such a posthuman civilization would want to run those me-simulations, which I haven't given in this article. However, I tried to give some reasons in the next post (where you also commented): https://www.lesswrong.com/posts/8fSaiJX7toRixR6Ee/are-you-living-in-a-me-simulation [LW · GW]

comment by ryan_b · 2018-05-02T20:04:00.133Z · LW(p) · GW(p)

How did you resolve your obsession with your own mortality?

comment by Donald Hobson (donald-hobson) · 2018-05-05T23:57:54.526Z · LW(p) · GW(p)

The problem with such arguments is that they shoot out their own support. If I am in a simulation, then the external universe can be any size and shape it likes for all I know.

Replies from: TAG
comment by TAG · 2018-05-06T09:30:01.158Z · LW(p) · GW(p)

Yes, that is the general problem with simulation arguments: they are epistemologically self-undermining.

comment by Dacyn · 2018-05-05T13:46:19.069Z · LW(p) · GW(p)

I think according to UDT, it doesn't make a difference whether or not you're in a me-simulation, because a priori you don't know which person it is that has consciousness, and you're not allowed to update once you find out. Presumably the UDT algorithm's output will be logically correlated with your choices regardless of whether or not you are actually conscious.

In more detail: UDT has to either output "Michaël Trazzi [LW · GW] should be an EA" or "Michaël Trazzi [LW · GW] should be an ethical egoist". To compute the value of these different outputs it computes the expected value over all possible ways the world could be, which include not just Michaël Trazzi [LW · GW] being in a me-simulation but also each of the other 7 billion people being in such a simulation. In the latter case the EV is clearly greater if Michaël Trazzi [LW · GW] is an EA, so that is what UDT would output.

I guess this doesn't matter if your solipsism is just a cover for regular egoism (which it sounds like is the case from your other comments).