The (short) case for predicting what Aliens value

post by Jim Buhler (jim-buhler) · 2023-07-20T15:25:39.197Z · LW · GW · 5 comments

Contents

  The case
  Acknowledgment
  Appendix: Relevant work

The case

Most of the future things we care about – i.e., (dis)value – come, in expectation, from futures where humanity develops artificial general intelligence (AGI) and colonizes many other stars (Bostrom 2003; MacAskill 2022; Althaus and Gloor 2016).

Hanson (2021) and Cook (2022 [EA · GW]) estimate that we should expect to eventually “meet” (grabby) alien AGIs/civilizations – just AGIs, from here on – if humanity expands, and that our corner of the universe will eventually be colonized by aliens if humanity doesn’t expand. 

This raises the following three crucial questions:

  1. What would happen once/if our respective AGIs meet? Values handshakes [? · GW] (i.e., cooperation) or conflict [? · GW]? Of what forms?
  2. Do we have good reasons to think the scenario where our corner of the universe is colonized by humanity is better than the one where it is colonized by aliens? Should we update on the importance of reducing existential risks [? · GW]?[1]
  3. Considering the fact that aliens might fill our corner of the universe with things we (dis)value, does humanity have an (inter-civilizational) comparative advantage in focusing on something the grabby aliens will neglect?

The answers to these three questions heavily depend on what values we expect the grabby aliens our AGI will meet to have. For instance, if we expect grabby alien AGIs to, say, care more about suffering than our AGI does, then AGI conflict [? · GW] that generates significant suffering is relatively unlikely, and the importance of reducing X-risks depends on whether you prefer the aliens’ degree of concern for suffering or that of our AGI.

Therefore, figuring out what aliens value (or Alien Values[2] Research) appears quite important,[3] although absolutely no one is working on it[4] as far as I know. 

Is it because it isn’t tractable? Although I see how it might seem so, I don’t think it is. First, thinking about the values of grabby aliens doesn’t strike me as harder than modeling their spread (see, e.g., Hanson 2021 and Cook 2022 [EA · GW] for work on the latter). My EA Forum sequence What values will control the Future? [? · GW] is an instance of how simple observations and reasoning can help us significantly narrow down the range of values we should expect grabby aliens to have. Second, there seems to be – outside of the Effective Altruism sphere – a whole field of research focused on thinking about the evolution of aliens (most of which I’m not familiar with, yet), and there are already quite interesting takeaways (see, e.g., Kershenbaum 2020; Todd and Miller 2017). Although the moral preferences of aliens are by no means the focus so far, this is evidence that figuring stuff out about aliens is feasible, and there might even be potential for making Alien Values Research part of people’s alien-related research agendas.
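
To give a concrete sense of what “modeling their spread” can look like, here is a minimal Monte Carlo sketch in Python. It is only an illustration: the parameters (`n_civs`, `box_size`, `time_window`, `hard_steps`, `speed`) are made up for this example, and the model ignores cosmic expansion and several constraints of the actual grabby-aliens framework, so it should not be read as reproducing Hanson’s or Cook’s estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy parameters -- assumptions for this sketch, NOT fitted values
# from Hanson (2021) or Cook (2022).
n_civs = 200            # grabby civilizations sampled in the toy volume
box_size = 4000.0       # side of a cubic volume, in millions of light-years (Mly)
time_window = 10_000.0  # origin times drawn over this many millions of years (Myr)
hard_steps = 6          # power-law-in-time appearance, a crude stand-in for "hard steps"
speed = 0.5             # expansion speed as a fraction of c (c = 1 Mly per Myr)

# Sample origin positions uniformly and origin times with CDF (t / time_window)^hard_steps
# (later origins are more likely, as in hard-steps models).
positions = rng.uniform(0.0, box_size, size=(n_civs, 3))
origin_times = time_window * rng.random(n_civs) ** (1.0 / hard_steps)

# Treat civ 0 as a stand-in for a human-descended expanding civilization.
ref = 0
dists = np.linalg.norm(positions - positions[ref], axis=1)

# Two fronts born at times t_i and t_j, a distance d apart, first touch at
# t = (d / speed + t_i + t_j) / 2. This ignores cosmic expansion and the rule
# that new civilizations cannot arise inside already-colonized volume --
# both simplifications relative to the full grabby-aliens model.
meet_times = (dists / speed + origin_times + origin_times[ref]) / 2.0
meet_times = np.delete(meet_times, ref)

print(f"Earliest first contact for civ 0: ~{meet_times.min():,.0f} Myr after t = 0")
print(f"Median first-contact time with the other civs: ~{np.median(meet_times):,.0f} Myr")
```

Even this crude setup makes clear which inputs drive encounter timing (civilization density, appearance times, expansion speed); the claim in the paragraph above is that comparably simple reasoning can be applied to what those civilizations value.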

Acknowledgment

Thanks to Elias Schmied for their helpful comments on a draft. All assumptions/claims/omissions are my own.

Appendix: Relevant work

(This list is not exhaustive.[5] It is ranked more or less in decreasing order of relevance.)

  1. ^

    Charlie Guttman (2022 [EA · GW]) and Michael Aird (2020 [EA · GW]) ask questions very similar to this second one.

  2. ^

    “Alien values” here literally means “the values of aliens”, not “values that look alien to us” as in this confusing LessWrong tag [? · GW].

  3. ^

    Besides helping us answer the two above questions, it might also give us useful insights regarding the future of human evolution and what our successors might value (see Buhler 2023 [EA · GW]). Robin Hanson makes a similar point around the beginning of this interview.

  4. ^

    The Appendix lists a few pieces that raised relevant considerations, however. 

  5. ^

    And this is more because of my limited knowledge than due to an intent to keep this list short, so please send me other potentially relevant resources!

5 comments

Comments sorted by top scores.

comment by Mitchell_Porter · 2023-07-21T01:24:10.225Z · LW(p) · GW(p)

figuring out what aliens value appears quite important

My instant answer to this question is that it is not of practical importance, except insofar as we may already be inside an alien sphere of influence. 

You're talking primarily about scenarios of alien encounter in which it's a meeting between a human-descended superintelligence and an alien-descended superintelligence. But by definition, the human-descended superintelligence is going to be better than you at inferring the likely distribution of alien life and alien values in the cosmos.

But since you're interested, I suggest you also look up "Xenology" by Robert Freitas, which is a big, obscure work from the 1970s by someone who went on to become one of the major theorists of mechanical nanotechnology. It has weird stuff like eleven metalaws of first contact, devised in 1970 by an Austrian space lawyer.

Apart from the fact that such works may contain valid observations that the current literature overlooks, they may also promote awareness of the extent to which current ideas about alien life are non-empirical guesswork and potentially quite wrong. 

Freitas opens his chapter 25 with the proposition that 

Many billions of intelligent races may exist in the Milky Way alone at the present time

which is a very Carl Sagan, birth-of-SETI perspective, and one which is still held by many, many people. On the other hand, our local avant-garde believe that intelligence in the universe is dominated by aggressively expansionist superintelligences that may be trading with other branches of the universal wavefunction. Maybe that's a very current-year outlook, but even Bing can point out [LW(p) · GW(p)] just how many assumptions it's making.

comment by [deleted] · 2023-07-20T22:55:05.804Z · LW(p) · GW(p)

I think questions like these are important, so thank you for thinking about and writing about this.

A hypothetical civilization which hasn't observed signs of other life might also be able to find and understand these arguments. This includes the first civilization to create an ASI, if it has no way to know whether it's the first.

If we accept this, then we may prefer to act as if we are the first, because we may think it best for (alien) civilizations in general to act as if they are the first, to ensure that the actual first one acts appropriately (i.e., creates an aligned ASI, when the alternative would be an unaligned ASI tiling the universe). You could frame this as a form of acausal trade [? · GW].

I apologize if this is confusing; I'm autistic and struggle with reducing meaning into language that others understand. Please let me know if you need clarification.

Replies from: jim-buhler
comment by Jim Buhler (jim-buhler) · 2023-07-21T13:01:29.707Z · LW(p) · GW(p)

Interesting, thanks! This is relevant to question #2 in the post! Not sure everyone should act as if they were the first, considering the downsides of interciv conflicts, but yeah, that's a good point.

comment by [deleted] · 2023-07-20T22:35:11.290Z · LW(p) · GW(p)

I have two things I want to say; I'm not sure if this one is important (it's a physics question, out of curiosity, and you don't have to answer), so I'll make two separate comments.

The question: Would an ASI in control of more matter have enough of an advantage to fully take over the smaller amount of matter controlled by another ASI, or would the second ASI have other options, e.g., things like "creating a black hole supercomputer that computes in ways it deems valuable"?

Replies from: jim-buhler
comment by Jim Buhler (jim-buhler) · 2023-07-21T13:05:53.632Z · LW(p) · GW(p)

I don't know, and this is outside the scope of this post, I guess. There are a few organizations, like the Center on Long-Term Risk, studying cooperation and conflict between ASIs, however.

comment by [deleted] · 2023-07-20T22:30:34.660Z · LW(p) · GW(p)