Posts

A simulation basilisk 2021-09-17T17:44:23.083Z
Torture vs Specks: Sadist version 2021-07-31T23:33:42.224Z

Comments

Comment by andrew sauer (andrew-sauer) on Outlawing Anthropics: An Updateless Dilemma · 2021-09-22T00:41:36.988Z · LW · GW

I think this is a confusion of two different types of thinking. One is the classical view that you are responsible only for the consequences of your own individual actions. If you think of yourself as an individual making independent decisions like this, then you are justified in assigning a 90% chance to heads upon seeing a green room: 90% of individuals in green rooms, in expectation, are there because the coin came up heads. (Note that if you modify the problem so that the outcomes of the bet apply only to the people making it, the bet becomes favorable even in advance, regardless of whether the agents are altruistic or selfish.)
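
To spell out where the 90% comes from (assuming the setup from the original post: 20 copies, of whom 18 wake up in green rooms if the coin lands heads and 2 if it lands tails):

$$P(\text{heads} \mid \text{green}) = \frac{0.5 \cdot \frac{18}{20}}{0.5 \cdot \frac{18}{20} + 0.5 \cdot \frac{2}{20}} = 0.9$$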

However, in this case you cannot claim that your "yes" decision has utility based on the result of the entire group's response to the question. If you did, then when the coin comes up heads and all 18 people say "yes", all 18 people will claim that the utility of their action was $18, but only $18 of utility was gained by the group in total as a consequence of all these decisions, so they cannot all be right. (Note that if you modify the problem so that you really are the only one responsible for the decision, say by stipulating that everyone in a green room except one person is mind-controlled to say "yes", then saying "yes" really is the right decision for that free-willed person, even in advance.)
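
To put the double-counting problem in my own notation: if the group's total gain when everyone says "yes" is G, and each of the N = 18 green-roomers credits their own decision with the full amount, then the claimed impacts sum to

$$N \cdot G \gg G$$

so on average only G/N of the gain can consistently be attributed to each individual decision.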

It is also possible to reason in a TDT sort of way, acting as though you are making decisions for all identical copies of yourself. This is effectively defining yourself, as an agent, as the set of all identical copies of yourself with identical input. In this case, it does make sense to take responsibility for the decision of the entire group, but it no longer makes sense to do an anthropic update: you as an agent, as a set of copies of yourself seeing green rooms, would exist whether or not the coin came up heads, and would make the same observations.

In conclusion: it makes sense to do an anthropic update if you think of yourself as an individual, and it makes sense to take utilitarian responsibility for the whole group's actions if you think of yourself as a group of identical copies, but in neither case does it make sense to do both, which is what you would need to justify saying "yes" in this situation.

Comment by andrew sauer (andrew-sauer) on Torture vs Specks: Sadist version · 2021-08-02T23:10:10.946Z · LW · GW

Okay, maybe the negative utilitarians wouldn't interpret the problem, as I phrased it, in this way, but the problem can be changed slightly to reach a similar conclusion: say that the sadists are mildly annoyed when the guy isn't being tortured, rather than wanting the torture for their pleasure.

Comment by andrew sauer (andrew-sauer) on Jews and Nazis: a version of dust specks vs torture · 2021-08-02T00:16:14.476Z · LW · GW

This situation is more like "they eat babies, but they don't eat that many, few enough that the practice produces net utility once you count their preference for continuing to do it."

Comment by andrew sauer (andrew-sauer) on Torture vs Specks: Sadist version · 2021-08-01T23:54:08.684Z · LW · GW

I'm referring to the fact that utility functions are equivalent under positive affine transformations (if you add a constant and multiply by a positive constant, the utility function remains the same in the sense that it expresses the same preference in every situation).

Assuming we are computing the utility of an outcome by assigning a utility to each person and then summing them, adding a constant value to any person's utility doesn't change the comparison between outcomes, because the net effect is just to add a constant to the utility of each outcome (as long as the person whose utility we are shifting exists in every outcome).

Therefore, we can convert the situation to a negative-utilitarian one without functionally changing it, by subtracting each person's maximum possible utility from their utility function, ensuring that everyone's utility is non-positive in every outcome. We can similarly convert it to a positive-utilitarian one by subtracting each person's minimum.

This analysis assumes that there is a maximum and a minimum utility, and that every outcome has the same set of people in it, so if these assumptions break there may be relevant differences.
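
Written out explicitly, in my own notation (with u_i person i's utility function and c_i an arbitrary per-person constant), the shift argument is:

$$U(o) = \sum_i u_i(o), \qquad U'(o) = \sum_i \big(u_i(o) - c_i\big) = U(o) - \sum_i c_i$$

Since \sum_i c_i does not depend on the outcome o, U' ranks outcomes exactly as U does. Taking c_i = \max_o u_i(o) gives the negative-utilitarian representation above, and c_i = \min_o u_i(o) gives the positive one.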

Comment by andrew sauer (andrew-sauer) on Torture vs Specks: Sadist version · 2021-08-01T15:08:41.256Z · LW · GW

I'm not sure how negative utilitarianism changes things. Positive and negative utilitarianism are equivalent whenever UFs are bounded and there are no births or deaths as a result of the decision.

Negative utilitarianism interprets this situation as the sadists suffering from boredom which can be slightly alleviated by knowing that the guy they hate is suffering.

Comment by andrew sauer (andrew-sauer) on Torture vs Specks: Sadist version · 2021-08-01T01:00:29.705Z · LW · GW

Although under strict preference utilitarianism, wouldn't a change in values (i.e. moral progress) be considered bad, for the same reason a paperclip maximizer would consider it bad?

Comment by andrew sauer (andrew-sauer) on Torture vs Specks: Sadist version · 2021-08-01T00:56:59.475Z · LW · GW

I should say we're assuming we're deciding which option a stable, incorruptible AI should choose. I'm pretty sure any moral system which chose torture in situations like this would not lead to good outcomes if applied in practice, but that's not what I'm wondering about; I'm just trying to figure out which outcome is better. In short, I'm asking an axiological question, not a moral one. https://slatestarcodex.com/2017/08/28/contra-askell-on-moral-offsets/

My intuition strongly says that the torture is worse here even though I choose torture in the original, but I don't have an argument for this because my normal axiological system, preference utilitarianism, seems to unavoidably say torture is better.

Comment by andrew sauer (andrew-sauer) on Working With Monsters · 2021-07-25T01:01:50.179Z · LW · GW

Just shows how effective the disagreement is at getting people to care deeply about it I guess

Comment by andrew sauer (andrew-sauer) on The Neglected Virtue of Scholarship · 2021-05-19T22:36:31.601Z · LW · GW

Well, if it's eternal and sufficiently powerful, a kind of omnibenevolence might follow, insofar as it exerts a selection pressure on the things it feels benevolent towards, which over time will cause them to predominate. 

Unless it decides that it wants to keep things it hates around to torture them

Comment by andrew sauer (andrew-sauer) on Less Realistic Tales of Doom · 2021-05-08T23:52:44.945Z · LW · GW

This is often overlooked here (perhaps with good reason, as many examples would be controversial). Scenarios of this kind can be very, very bad, much worse than a typical unaligned AI like Clippy.

For example, I would take Clippy over an AI whose goal was to spread biological life throughout the universe any day. I expect this may be controversial even here, but see https://longtermrisk.org/the-importance-of-wild-animal-suffering/#Inadvertently_Multiplying_Suffering for why I think this way.

Comment by andrew sauer (andrew-sauer) on Statistical Prediction Rules Out-Perform Expert Human Judgments · 2021-03-24T02:43:55.699Z · LW · GW

You might not even need to go to a different Tegmark universe lol, given that multiple people have independently come up with this idea

Comment by andrew sauer (andrew-sauer) on Acausal romance · 2021-03-24T02:41:25.806Z · LW · GW

I wonder if anyone has tried to argue for the existence of God in a similar way to this article?

Comment by andrew sauer (andrew-sauer) on Acausal romance · 2021-03-23T22:32:36.951Z · LW · GW

Oh man, I think I came up with something very similar to this whilst being extremely horny and extremely lonely

Comment by andrew-sauer on [deleted post] 2021-02-23T04:00:24.892Z

Username checks out

Comment by andrew sauer (andrew-sauer) on The Solomonoff Prior is Malign · 2020-10-14T04:40:20.107Z · LW · GW

In your section "complexity of conditioning", if I am understanding correctly, you compare the amount of information required to produce consequentialists with the amount of information in the observations we are conditioning on. This, however, is not an apples-to-apples comparison: the consequentialists are competing against the "true" explanation of the data, the one that specifies the universe and where to find the data within it; they are not competing against the raw data itself. In an ordered universe, the "true" explanation would be shorter than the raw observation data; that's the whole point of using Solomonoff induction, after all.
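
To make the comparison I have in mind explicit (my own rough notation, not the post's): write K(U) for the complexity of specifying our universe, K(locate) for the bits needed to pick our observations out of it, and K(C) for the complexity of specifying the consequentialists plus whatever they need in order to point at our data. Then the consequentialist hypothesis dominates roughly when

$$K(C) < K(U) + K(\mathrm{locate})$$

The two advantages below are ways of shrinking the left-hand side while the "true" explanation still has to pay the K(locate) term on the right.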

So, there are two advantages the consequentialists can exploit to "win" and be the shorter explanation, and this exploitation must be enough to overcome those 10-1000 bits. One is that, since the decision being made is very important, they can find the data within the universe without adding any further complexity. This, to me, seems quite malign, as the "true" explanation is being penalized simply because we cannot read data directly from the program which produces the universe, not because this universe is complicated.

The second possible advantage is that these consequentialists may value our universe for some intrinsic reason, such as the life in it, so that they prioritize it over other universes and it therefore takes fewer bits to specify their simulation of it. However, if you could argue that the consequentialists actually had an advantage here which outweighed their own complexity, that would just sound to me like an argument that we are living in a simulation, because it would essentially be saying that our universe is so finely tuned to be valuable to consequentialists that the existence of those consequentialists is less of a coincidence than the universe just happening to be that valuable.

Comment by andrew sauer (andrew-sauer) on Chapter 1: A Day of Very Low Probability · 2017-11-23T21:02:12.069Z · LW · GW

Gung unf gb or na rqvg... gur svany rknz fbyhgvba jnf sbhaq ol gur pbzzhavgl.