Is there a possibility of being subjected to eternal torture by aliens?

post by onevoyager · 2020-08-28T05:37:04.498Z · LW · GW · No comments

This is a question post.

Contents

  Scenario:
  The following consists of more of my ideas about this. Obviously, please feel free not to read it. 
  Possible motivations for why aliens may want to torture us include:
  Reasons why this scenario, although unlikely, may be possible:
  Reasons why this scenario is unlikely include:
  Answers
    13 Viliam
    4 ChristianKl
    3 avturchin

I recently thought of this scenario while reading about the Fermi paradox. It may sound highly unlikely, like the plot of a sci-fi horror movie, and there is no evidence that it will happen. Still, is there a possibility of it happening?

Scenario:

Highly technologically advanced aliens become aware of our existence, or already are, and decide to torture us (possible motivations are listed below).

They are able to make themselves invisible to us and travel to Earth, or to send AI to do so. Then, they enter our bodies (e.g. through nanobots). They edit our genomes to make us immortal and able to do only the things necessary to keep ourselves alive (e.g. drinking water). Or they upload our minds.

They can then cause us pain and even edit our genomes to cause us to experience more pain (e.g. by adding more pain receptors). It could be possible for them to torture us until the end of the universe, if they have the ability to generate enough energy to keep us conscious. Even after the end of the universe, it might be possible for them to torture us in a different universe (Zeeya Merali writes about the possibility of creating a new universe in A Big Bang in a Little Room).

The following consists of more of my ideas about this. Obviously, please feel free not to read it.

Possible motivations for why aliens may want to torture us include:

A possible reason for why aliens may torture us, without necessarily wanting to, is:

Reasons why this scenario, although unlikely, may be possible:

Reasons why this scenario is unlikely include:

Answers

answer by Viliam · 2020-08-28T11:31:45.794Z · LW(p) · GW(p)

Technically, everything is possible unless it violates the laws of physics (I am not sure about the part with making new universes), so yes, there is a possibility.

Is there a good reason to focus on this specific scenario, instead of the billion other possibilities?

comment by onevoyager · 2020-08-28T20:37:19.151Z · LW(p) · GW(p)

Thanks for your response.

I'm concerned about this scenario because, in my opinion, it would be the worst possible outcome for anyone. The other possibilities wouldn't cause as much suffering.

Replies from: Viliam
comment by Viliam · 2020-08-28T21:19:42.966Z · LW(p) · GW(p)

You might be interested in this article [LW · GW].

The general idea is that the probability of the outcome should be part of the equation. Otherwise, insurance agents will love you as a customer.

Replies from: onevoyager
comment by onevoyager · 2020-08-29T22:48:18.698Z · LW(p) · GW(p)

Thanks for sharing the article.

I didn't read anything about the probability of the outcome--are you referring to a comment?

Replies from: Viliam
comment by Viliam · 2020-08-30T10:06:25.484Z · LW(p) · GW(p)

Probabilities and frequencies are related concepts. The question "is it better if X happens to 1 person, or Y happens to 5 people" should have a similar answer to the question "is it better if X happens (to 1 person) with probability 10%, or Y happens (to 1 person) with probability 50%". If you imagine hypothetical futures, it's kinda the same thing: a 10% probability means it happens in 1 of 10 hypothetical futures; a 50% probability means it happens in 5 of 10.

If you want to prevent some hypothetical outcome, it usually comes with a cost. (It always comes with a cost, if we include your time spent thinking about the scenario.) Hence my analogy with insurance agents -- they typically want to focus your entire attention on "what will be the consequences of X, if I am not insured"... and away from "what will be the consequences of paying for insurance, if X does not happen". To make a good decision, you need to consider both scenarios and their relative probabilities. Not being insured can ruin your life if an unexpected event happens, but being insured too much also decreases your quality of life by taking away a part of your income; sometimes the latter cost (weighted by its probability) outweighs the former (weighted by its probability), and then not getting insured is the right choice.

Thinking about hypothetical futures and taking action to avoid them is analogous to insurance. You spend some resources (including the time you spent thinking) now, in order to mitigate a possible problem in the future. The same equation applies: if the probability of the outcome is too small, it is not worth worrying about. The time you spend worrying about unlikely things comes out of the same budget you have for solving problems that are actually quite likely to happen.
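A minimal sketch of this expected-cost comparison, with numbers that are purely illustrative assumptions (not taken from the discussion):

```python
# Toy expected-cost comparison for "insure vs. don't insure".
# All figures below are made-up assumptions for illustration only.

p_disaster = 0.01            # assumed probability the bad event happens
cost_if_uninsured = 50_000   # assumed loss if it happens and you are not insured
premium = 800                # assumed certain yearly cost of being insured

expected_cost_uninsured = p_disaster * cost_if_uninsured  # 0.01 * 50,000 = 500
expected_cost_insured = premium                           # paid regardless of outcome

# With these numbers the premium (800) exceeds the expected loss (500),
# so by this calculation alone not insuring is the better bet; a higher
# probability or a larger loss flips the comparison.
print(expected_cost_uninsured, expected_cost_insured)
```

The same comparison applies to worrying about a hypothetical scenario: the "premium" is the attention spent now, and the expected loss is the probability of the scenario multiplied by how bad it would be.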

answer by ChristianKl · 2020-08-28T15:59:49.084Z · LW(p) · GW(p)

The key problem with your question is that you ignore evolutionary pressures. To become an intergalactic civilisation, it's necessary to be very good at cooperation, because at that scale many actors have the capability to blow everything up.

Other evolutionary pressures are about using resources effectively. Torturing people on Earth wouldn't be an effective use of resources.

comment by onevoyager · 2020-08-28T20:45:37.030Z · LW(p) · GW(p)

Thanks for your response.

I do take evolutionary pressures into consideration in the list of reasons why this scenario is unlikely. I state that "highly intelligent aliens may be more likely to have evolved to be kind toward others." However, it's also possible that aliens "(or a group or an individual) have evolved to feel sadistic pleasure when they harm others who don't belong to their group."

Aliens who are excellent at cooperating with others may not necessarily be kind toward those who belong to out-groups. Lack of empathy or even sadism toward those who belong to other groups could possibly be selected for.

Additionally, there is the possibility that an individual alien, or a group of aliens, is sadistic.

Your point about resources is good. However, a highly technologically advanced civilization may have progressed to the point where it is no longer concerned about conserving resources. It may be able to generate energy with resources from many planets.

Replies from: ChristianKl
comment by ChristianKl · 2020-08-28T22:29:58.637Z · LW(p) · GW(p)

An intergalactic civilisation needs cultures that evolve separately for long periods of time to be peaceful with each other, as it's likely possible for one solar system to extinguish other solar systems, if it desires to do so, in a way that can't be traced back to the attacker.

It's a core economic principle that actors that are more effective at using resources outcompete actors that are less effective, no matter how many resources are available.

Replies from: onevoyager
comment by onevoyager · 2020-08-29T22:49:44.955Z · LW(p) · GW(p)

Are you referring to multiple cultures that evolve alongside each other as part of the same civilization?

Your point about the economic principle is good.

answer by avturchin · 2020-08-28T12:37:33.717Z · LW(p) · GW(p)

If aliens could torture us, another alien race could come and save us.

comment by onevoyager · 2020-08-28T20:48:24.624Z · LW(p) · GW(p)

This is a great point.

I considered this possibility when I was writing my post. However, it's possible that the sadistic aliens would be capable of making our planet invisible to others, or that the benevolent aliens never travel close enough to Earth to be aware of us.

Replies from: avturchin
comment by avturchin · 2020-08-28T22:22:18.938Z · LW(p) · GW(p)

If there is one other alien race in the observable universe, there should be at least several more, and they may not like the idea of torture: they will be "exo-humanists", that is, something like effective altruists but for other alien races.

A superintelligent AI on Earth with the goal of global torture is worse, as it looks like help will never arrive (actually, it can arrive, but from other universes via complex acausal trade and indexical uncertainty).

Replies from: onevoyager
comment by onevoyager · 2020-08-29T22:52:03.788Z · LW(p) · GW(p)

Thanks for your response.

I wonder if it's possible that intelligent life is so rare in the universe that there is only one other civilization, which could be sadistic.

Would it be possible for you to expand more on how aliens from other universes can help "via complex acausal trade and indexical uncertainty"? I tried searching for those terms, but couldn't find anything about other universes.

Replies from: avturchin
comment by avturchin · 2020-08-30T11:04:24.845Z · LW(p) · GW(p)

If an alien civilization is 1 billion light years from us in one direction (and that is the highest distance for contact), it implies that the median distance between civilizations is 1 billion ly, and that there are 5 others: in the opposite direction, as well as in the up, down, left and right directions. So, based on these symmetry considerations, there is either just 1 civilization or at least 7 including ours. Two civilizations seems unlikely.
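A toy way to see this count, using a cubic-lattice simplification of my own (the 1 Gly spacing is the figure assumed above):

```python
# Cubic-lattice sketch of the symmetry argument (my simplification, not the
# commenter's exact model): civilizations sit roughly one per cube of side
# `spacing`, so the nearest neighbours of "our" site are the six axial
# offsets at distance `spacing`; diagonal sites lie beyond contact range.
from itertools import product

spacing_gly = 1.0  # assumed median distance between civilizations, in Gly

neighbours = [
    (dx, dy, dz)
    for dx, dy, dz in product((-1, 0, 1), repeat=3)
    if (dx, dy, dz) != (0, 0, 0)
    and (dx**2 + dy**2 + dz**2) ** 0.5 * spacing_gly <= spacing_gly
]

# 6 nearest neighbours (up, down, left, right, forward, back) plus ourselves.
print(len(neighbours), "neighbours within contact range; total", len(neighbours) + 1)
```

This reproduces the "at least 7 including ours" count; the number would only grow if contact were possible beyond the median spacing.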


The idea of preventing s-risks via acausal trade is discussed by me here: https://forum.effectivealtruism.org/posts/3jgpAjRoP6FbeESxg/curing-past-sufferings-and-preventing-s-risks-via-indexical [EA · GW]
