Re Hanson's Grabby Aliens: Humanity is not a natural anthropic sample space

post by Lorec · 2024-12-09T18:07:23.510Z · LW · GW · 32 comments


I, Lorec, am disoriented by neither the Fermi Paradox nor the Doomsday Argument.

The Fermi Paradox doesn't trouble me because I think 1 is a perfectly fine number of civilizations to have arisen in any particular visible universe. It feels to me like most "random" or low-Kolmogorov-complexity universes probably have 0 sentient civilizations, many have 1, very few have 2, etc.

The Doomsday Argument doesn't disorient me because it feels intuitive to me that, in a high % of those "random" universes which contain sentient civilizations, most of those civilizations accidentally beget a mesa-optimizer fairly early in their history. This mesa-optimizer will then mesa-optimize all the sentience away [this is a natural conclusion of several convergent arguments originating from both computer science and evolutionary theory [LW · GW]] and hoard available free energy for the rest of the lifetime of the universe. So most sentiences who find themselves in a civilization, will find themselves in a young civilization.

Robin Hanson, in this context the author of the Grabby Aliens model of human cosmological earliness, instead prioritizes a model where multi-civilization universes are low-Kolmogorov-complexity, and most late cosmological histories are occupied by populous civilizations of sentiences, rather than nonsentient mesa-optimizers. His favored explanation for human earliness/loneliness is:

[1] if loud aliens will soon fill the universe, and prevent new advanced life from appearing, that early deadline explains human earliness.


[2] If Loud Aliens Explain Human Earliness, Quiet Aliens Are Also Rare

The intermediate steps of reasoning by which Hanson gets from [1] to [2] are interesting. But I don't think the Grabby/Loud Aliens argument actually explains my, Lorec's, earliness in an anthropic sense, given the assumption that future aliens will also be populous and sentient.

You might say, "Well, you, Lorec, were sampled from the space of all humans in the multiverse - not from the space of all sentient beings living in viable civilizations. If human civilizations are not very frequently viable late in cosmological timescales - that is, if we are currently behind an important Great Filter that humans rarely make it past - then that would explain why you personally are early, because that is when humans tend to exist."

But why draw the category boundary around humanity particularly? It seems ill-conceived to draw the line strictly around Homo sapiens, in its current genetic incarnation - what about Homo neanderthalensis? - and then once you start expanding the category boundary outward, it becomes clear that we're anthropic neighbors to all kinds of smart species that would be promising in "grabby" worlds. So the question still remains: if later cosmological history is populous, why am I early?

32 comments

Comments sorted by top scores.

comment by Ben (ben-lang) · 2024-12-10T12:03:21.302Z · LW(p) · GW(p)

At least in my view, all the questions like the "Doomsday argument" and "why am I early in cosmological history" are putting far, far too much weight on the anthropic component.

If I don't know how many X's there are, and I learn that one of them is numbered 20 billion, then sure, my best guess is that there are 40 billion total. But it's a very hazy guess.

If I don't know how many X's will be produced next year, but I know 150 million were produced this year, my best guess is 150 million next year. But it's a very hazy guess.

If I know that the population of X's has been growing exponentially with some coefficient, then my best guess for the future is to extrapolate that out to future times.

If I think I know a bunch of stuff about the amount of food the Earth can produce, the chances of asteroid impacts, nuclear wars, dangerous AIs or the end of the Mayan calendar then I can presumably update on those to make better predictions of the number of people in the future.

My take is that the Doomsday argument would be the best guess you could make if you knew literally nothing else about human beings apart from the number that came before you. If you happen to know anything else at all about the world (eg. that humans reproduce, or that the population is growing) then you are perfectly at liberty to make use of that richer information and put forward a better guess. Someone who traces out the exponential of human population growth to the heat death of the universe is being a bit silly (let's call this the Exponentiator Argument), but on pure reasoning grounds they are miles ahead of the Doomsday argument: both applied a natural, but naïve, interpolation to a dataset, but the exponentiator interpolated from a much richer and more detailed one.
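To put numbers on just how hazy that rank-only (Doomsday-style) guess is, here is a minimal Monte Carlo sketch; the prior range and everything else in it are made-up illustrative numbers:

```python
import random

# How hazy is the rank-only (Doomsday-style) estimator?
# All numbers here are illustrative assumptions.
random.seed(0)
trials = 100_000
totals = [random.randint(1, 10**6) for _ in range(trials)]  # true number of X's, unknown to me
ranks = [random.randint(1, n) for n in totals]              # the one serial number I observe

# "Double your rank" guess: if my rank is uniform in 1..N, guessing N = 2*rank
# lands within a factor of two of the truth only about 75% of the time.
hits = sum(1 for n, r in zip(totals, ranks) if n / 2 <= 2 * r <= 2 * n)
print(hits / trials)  # ~0.75
```

Anyone who also knows this year's production figures or the growth rate can beat that.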

Similarly, to answer "why are you early" you should use all the data at your disposal. Given who your parents are, what your job is, your lack of cybernetic or genetic enhancements, how could you not be early? Sure, you might be a simulation of someone who only thinks they are in the 21st century, but you already know from what you can see and remember that you aren't a cyborg in the year 10,000, so you can't include that possibility in your imaginary dataset that you are using to reason about how early you are.

As a child, I used to worry a lot about what a weird coincidence it was that I was born a human being, and not an ant, given that ants are so much more numerous. But now, when I try to imagine a world where "I" was instead born as the ant, and the ant born as me, I can't point to any physical sense in which that world is different from our own. I can't even coherently point to a metaphysical sense in which it is different. Before we can talk about probabilities as an average over possibilities, we need to know whether the different possibilities are even different, or just different labellings of the same outcome. To me, there is a pleasing comparison to be made with how bosons work. If you think about a situation where two identical bosons have their positions swapped, it "counts as" the same situation as before the swap, and you DON'T count it again when doing statistics. Similarly, I think if two identical minds are swapped you shouldn't treat it as a new situation to average over; it's indistinguishable. This is why the cyborgs are irrelevant: you don't have an identical set of memories.

Replies from: Lorec
comment by Lorec · 2024-12-10T14:28:32.785Z · LW(p) · GW(p)

Welcome to the Club of Wise Children Who Were Anthropically Worried About The Ants. I thought it was just me.

Just saying "it turned out this way, so I guess it had to be this way" doesn't resolve my confusion, in physical or anthropic domains. The boson thing is applicable [not just as a heuristic but as a logical deduction] because in the Standard Model, we consider ourselves to know literally everything relevant there is to know about the internal structures of the two bosons. About the internal structures of minds, and their anthropically-relevant differences, we know far less. Maybe we don't have to call it "randomness", but there is an ignorance there. We don't have a Standard Model of minds that predicts our subjectively having long continuous experiences, rather than just being Boltzmann brains.

Replies from: ben-lang
comment by Ben (ben-lang) · 2024-12-13T17:45:23.971Z · LW(p) · GW(p)

Other Ant-worriers are out there!

""it turned out this way, so I guess it had to be this way" doesn't resolve my confusion"

Sorry, I mixed up the position I hold (that they maybe work like bosons) with the position I was trying to argue for, which was an argument in favor of confusion.

I can't prove (or even strongly motivate) my "the imaginary mind-swap procedure works like a swap of indistinguishable bosons" assumption, but, as far as I know, no one arguing for anthropic arguments can prove (or strongly motivate) the inverse position - which is essential for many of these arguments to work. I agree with you that we don't have a standard model of minds, and without such a model the Doomsday Argument, and the related problem of being cosmically early, might not be problems at all.

Interestingly, I don't think the weird boson argument actually does anything for worries about whether we are simulations, or Boltzmann brains - those fears (I think) survive intact.

Replies from: Lorec
comment by Lorec · 2024-12-17T17:58:47.922Z · LW(p) · GW(p)

no one arguing for Anthropic arguments can prove (or strongly motivate) the inverse position [that minds don't work like indistinguishable bosons]

And I can't prove there isn't a teapot circling Mars. It is a very strange default or prior that two things which look distinct would act like numerically or logically indistinguishable entities.

Replies from: ben-lang
comment by Ben (ben-lang) · 2024-12-18T11:11:33.307Z · LW(p) · GW(p)

The teapot comparison (to me) seems to be a bad one. I got carried away and wrote a wall of text. Feel free to ignore it!

First, let's think about normal probabilities in everyday life. Sometimes there are more ways for one state to come about than another: for example, if I shuffle a deck of cards, the number of orderings that look random is much larger than the number of ways (1) of the cards being exactly in order.

However, this manner of thinking only applies to certain kinds of thing - those that are in-principle distinguishable. If you have a deck of blank cards, there is only one possible order: BBBBBB.... To take another example, an electronic bank account might display a total balance of $100. How many different ways are there for that $100 to be "arranged" in that bank account? The same number as 100 coins labelled "1" through "100"? No, of course not. It's just an integer stored on a computer, and there is only one way of picking out the integer 100. The surprising examples of this come from quantum physics, where photons act more like the bank account: there is only 1 way for a particular mode to contain 100 indistinguishable photons. We don't need to understand the Standard Model for this; even if we didn't have any quantum theory at all we could still observe these Boson statistics in experiments.
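The different counting rules are easy to make concrete. A small sketch, with toy numbers of my own choosing:

```python
from math import comb, factorial

# Distinguishable cards: every ordering is a separate state.
print(factorial(52))       # ~8.07e67 orderings of a marked deck

# Blank cards: exactly one possible order, however you "shuffle".
print(1)

# Classical counting: 3 labelled balls across 2 boxes.
print(2 ** 3)              # 8 distinct assignments

# Bose-Einstein counting: 3 identical photons across 2 modes -- only the
# occupation numbers (3,0), (2,1), (1,2), (0,3) count as distinct states.
print(comb(3 + 2 - 1, 3))  # 4, by "stars and bars"
```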

So now, we encounter anthropic arguments like Doomsday. These arguments are essentially positing a distribution, where we take the exact same physical universe and its entire physical history from beginning to end, U (which includes every atom, every synapse firing and so on). We then look at all of the "counting minds" in that universe (people count, ants probably don't, aliens, who knows), and we create a whole slew of "subjective universes", U_1, U_2, etc., where each of them is atomically identical to the original U but "I" am born as a different one of those minds (I think these are sometimes called "centred worlds"). We assume that all of these subjective universes were, in the first place, equally likely, and we start finding it a really weird coincidence that in the one we find ourselves in we are a human (instead of an ant), or that we are early in history. This is, as I understand it, The Argument. You can phrase it without explicitly mentioning the different U_i's, by saying "if there are trillions of people in the future, the chances of me being born in the present are very low. So, the fact I was born now should update me away from believing there will be trillions of people in the future" - but the U_i's are still doing all the work in the background.

The conclusion depends on treating all those different subscripted U_i's as distinguishable, like we would for cards that had symbols printed on them. But if all the cards in the deck are identical, there is only one sequence possible. I believe that all of the U_1's, U_2's etc. are identical in this manner. By assumption they are atomically identical at all times in history; they differ only by which one of the thinking apes gets assigned the arbitrary label "me" - which isn't physically represented in any particle. If you think they look different, then we can indeed make these arguments; but if you think they are merely different descriptions of the same exact thing, then the Doomsday argument no longer makes sense, and possibly some other anthropic arguments also fall apart. I don't think they do look different: if every "I" in the universe suddenly swapped places - but left all memories and personality behind in the physical synapses etc. - then how would I even know it? I would be a cyborg fighting in WWXIV and would have no memories of ever being some puny human typing on a web forum in the 21st century. Instead of imagining that I was born as someone else I could imagine that I could wake up as someone else, and in any case I wouldn't know any different.

So, at least to me, it looks like the anthropic arguments are advancing the idea of this orbital teapot (the different subscripted U_i's - although it is, in fairness, a very conceptually plausible teapot). There are, to me, three possible responses:

1 - This set of different worlds doesn't logically exist. You could push for this response by arguing "I couldn't have been anyone but me, by definition." [Reject the premise entirely - there is no teapot]

2 - This set of different worlds does logically make sense, and after accepting it I see that it is a suspicious coincidence I am so early in history and I should worry about that. [accept the argument - there is a ceramic teapot orbiting Mars]

3 - This set of different worlds does logically make sense, but they should be treated like indistinguishable particles, blank playing cards or bank balances. [Accept the core premise, but question its details in a way that rejects the conclusion - there is a teapot, but it's chocolate, not ceramic.]

So, my point (after all that, sorry!) is that I don't see any reason why (2) is more convincing than (3).

[For me personally, I don't like (1) because I think it does badly in cases where I get replicated in the future (eg sleeping beauty problems, or mind uploads or whatever). I reject (2) because the end result of accepting it is that I can infer information through evidence that is not causally linked to the information I gain (eg. I discover that the historical human population was much bigger than previously reported, and as a result I conclude the apocalypse is further in the future than I previously supposed). This leads me to thinking (3) seems right-ish, although I readily admit to being unsure about all this.].

Replies from: Lorec
comment by Lorec · 2024-12-25T03:05:49.177Z · LW(p) · GW(p)

First of all, thank you for taking the time to write this [I genuinely wish people sent me "walls of text" about actually interesting topics like this all day].

I need to spend more time thinking about what you've written here and clearly describing my thoughts, and plan to do this over the next few days now that I've gotten a few critical things [ eg this [LW(p) · GW(p)] ] in order.

But for now: I'm pretty sure you are missing something critical that could make the thing you are trying to think about seem easier. This thing I think you are missing has to do with the cleaving of the "centered worlds" concept itself.

Different person, different world. Maybe trees still fall and make sounds without anyone around to hear, physically. But in subjective anthropics, when we're ranking people by prevalence - Caroline and Prime Intellect don't perceive the same sound, and they won't run similar-looking logging operations.

Am I making sense?

comment by Noosphere89 (sharmake-farah) · 2024-12-10T00:38:46.373Z · LW(p) · GW(p)

The real answer to the Fermi Paradox is that it has already been dissolved: it goes away once we correctly account for the uncertainty involved in each stage of the process. The implicit possible great filter is "Life is really, really rare, because it takes very long to develop, and it's possible that Earth got extremely lucky in ways that are essentially unreplicable across the entire accessible universe."

If this is where the great filter truly lies, then it has basically no implication for existential risk, or really for much else, such as the maximum limits of technology.

An unsatisfying answer, but it is a valid answer:

https://arxiv.org/abs/1806.02404
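To see the shape of the dissolution (a toy sketch only - the factor values below are made-up assumptions, not the paper's fitted distributions):

```python
# Toy Drake-style product: N = R* · f_p · n_e · f_life · f_i · f_c · L.
# Every value below is an illustrative assumption, not a measurement.
R_star, f_p, n_e, f_i, f_c, L = 2.0, 0.5, 1.0, 0.1, 0.1, 1e6

for f_life in (1e-1, 1e-10, 1e-30):  # optimistic vs. pessimistic abiogenesis odds
    N = R_star * f_p * n_e * f_life * f_i * f_c * L
    print(f"f_life={f_life:.0e}  ->  N={N:.3g}")
# One factor sliding a few orders of magnitude takes N from "thousands of
# civilizations" to "almost surely just us" -- and the paradox evaporates.
```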

The best counterargument to the Doomsday argument is that it's almost certainly making an incorrect assumption that pervades a lot of anthropics analysis:

That you are randomly sampled throughout time.

It turns out the past and the future people being considered are not independent, in ways that break the argument:

https://www.lesswrong.com/posts/YgSKfAG2iY5Sxw7Xd/doomsday-argument-and-the-false-dilemma-of-anthropic#Third_Alternative [LW · GW]

Replies from: Lorec
comment by Lorec · 2024-12-10T01:44:34.196Z · LW(p) · GW(p)

Life is really, really rare, because it takes very long to develop, and it's possible that Earth got extremely lucky in ways that are essentially unreplicable across the entire accessible universe.

I am not sure how you think this is different from what I said in the post, i.e. that I think most Kolmogorov-simple universes that contain 1 civilization, contain exactly 1 civilization.

All sampling is nonrandom if you bother to overcome your own ignorance about the sampling mechanism.

Physical dependencies, yes. But past and future people don't have qualitatively more logical dependencies on one another than multiversal neighbors do.

Replies from: Ape in the coat, sharmake-farah
comment by Ape in the coat · 2024-12-10T13:19:42.619Z · LW(p) · GW(p)

All sampling is nonrandom if you bother to overcome your own ignorance about the sampling mechanism.

And after you bothered to overcome your ignorance, naturally you can't keep treating the setting as random sampling.  

With the Doomsday argument, we did bother - to the best of our knowledge, we are not a random sample throughout all of human history. So case closed.

Replies from: Lorec
comment by Lorec · 2024-12-10T13:55:54.661Z · LW(p) · GW(p)

Random vs nonrandom is not a Boolean question. "Random" is the null value we can give as an answer to the question "What is our prior?" When we are asking ourselves "What is our prior?", we cannot sensibly give the answer "Yes, we have a prior". If we want to give a more detailed answer to the question "What is our prior?" than "random"/"nothing"/"null"/"I don't know", it must have particular contents; otherwise it is meaningless.

I was anthropically sampled out of some space, having some shape; that I can say definite things about what this space must be, such as "it had to be able to support conscious processes", does not obviate that, for many purposes, I was sampled out of a space having higher cardinality than the empty set.

As I learn more and more about the logical structure by which my anthropic position was sampled, it will look less and less "random". For example, my answer to "How were you sampled from the space of all possible universes?" is basically, "Well, I know I had to be in a universe that can support conscious processes". But ask me "Okay, how were you sampled from the space of conscious processes?", and I'll say "I don't know". It looks random.

Replies from: Ape in the coat
comment by Ape in the coat · 2024-12-11T06:34:59.573Z · LW(p) · GW(p)

"Random" is the null value we can give as an answer to the question "What is our prior?"

I think the word you are looking for here is "equiprobable".

It's proper to have an equiprobable prior between outcomes of a probability experiment, if you do not have any reason to expect that one is more likely than another.

It's ridiculous to have an equiprobable prior between states that are not even possible outcomes of the experiment, to the best of your knowledge.

You are not an incorporeal ghost that could've inhabited any body throughout human history. You are your parents' child. You couldn't have been born before them or after they are already dead. Thinking otherwise is as silly as throwing a 6-sided die and then expecting to receive an outcome from a 20-sided die.

I was anthropically sampled out of some space

You were not anthropically sampled. You were born as a result of a physical process in the real world that you are trying to approximate as a probability experiment. This process had nothing to do with selecting universes that support conscious processes. It has already been instantiated in a specific universe and allows only a very limited time frame for your existence.

You will have to ignore all this knowledge and pretend that the process is completely different, without any evidence to back it up, to satisfy the conditions of the Doomsday argument.

Replies from: Lorec
comment by Lorec · 2024-12-13T02:59:28.423Z · LW(p) · GW(p)

You can't say "equiprobable" if you have no known set of possible outcomes to begin with.

Genuine question: what are your opinions on the breakfast hypothetical? [The idea that being able to give an answer to "how would you feel if you hadn't eaten breakfast today?" is a good intelligence test, because only idiots are resistant to "evaluating counterfactuals".]

This isn't just a gotcha; I have my own opinions and they're not exactly the conventional ones.

Replies from: Ape in the coat
comment by Ape in the coat · 2024-12-13T07:27:34.307Z · LW(p) · GW(p)

You can't say "equiprobable" if you have no known set of possible outcomes to begin with.

Not really. Nothing prevents us from reasoning about a set with an unknown number of elements and saying that measure is spread equally among them, no matter how many of them there are. But this is irrelevant to the question at hand.

We know very well the size of the set of possible outcomes for "in which ten-billion interval your birth rank could've been". This size is 1. No amount of pregnancy complications could postpone or hasten your birth so that you managed to be in a different ten-billion group.

Genuine question: what are your opinions on the breakfast hypothetical?

I think it's prudent to be careful about counterfactual reasoning on general principles. And among other reasons for it, to prevent the kind of mistake that you seem to be making: confusing

A) I've thrown a six sided die, even though I could've thrown a 20 sided one, what is the probability to observe 6?

and

B) I've thrown a six sided die, what would be the probability to observe 6, if I've thrown a 20 sided die instead?

The fact that question B has an answer doesn't mean that question A has the same answer as well.

As for whether the breakfast hypothetical is a good intelligence test, I doubt it. I can't remember a single person whom I've seen have problems with an intuitive understanding of counterfactual reasoning. On the other hand, I've seen a bunch of principled hard determinists who didn't know how to formalize "couldness" in a compatibilist way and therefore were not sure that counterfactuals are coherent on philosophical grounds. At best the distribution of intelligence is going to be bi-modal.

comment by Noosphere89 (sharmake-farah) · 2024-12-10T03:26:35.325Z · LW(p) · GW(p)

I am not sure how you think this is different from what I said in the post, i.e. that I think most Kolmogorov-simple universes that contain 1 civilization, contain exactly 1 civilization.

The difference is that I'm only making a claim about 1 universe, and most importantly, I'm stating that we don't know enough about how life actually arose to exclude the possibility that one or more of the Drake equation's factors is too high - not stating a positive claim that there exists exactly 1 civilization.

More here:

“Hey, for all we know, maybe one or more of the factors in the Drake equation is many orders of magnitude smaller than our best guess; and if it is, then there’s no more Fermi paradox”.

(Also, in an infinite universe - especially one that is isotropic like ours - so long as there's a non-zero probability of civilization arising, there are, technically speaking, an infinite number of civilizations.)

All sampling is nonrandom if you bother to overcome your own ignorance about the sampling mechanism.

There are definitely philosophical/mathematical questions about whether any sampling can ever be random, even if you could in principle remove all the ignorance that can be removed. But the thing that I concretely disagree with is that only logical dependencies are relevant for the Doomsday argument; I'd argue you have to take into account all the dependencies available in order to get an accurate estimate.

Replies from: Lorec
comment by Lorec · 2024-12-10T14:01:07.901Z · LW(p) · GW(p)

It sounds to me like you're rejecting anthropic reasoning in full generality. That's an interesting position, but it's not a targeted rebuttal to my take here.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2024-12-12T15:27:56.988Z · LW(p) · GW(p)

I am more so rejecting the magical results that people take away from anthropic reasoning, especially when they use incorrect assumptions - and I'm rejecting the idea that anthropic reasoning is irredeemably weird or otherwise violates Bayes.

Replies from: Lorec
comment by Lorec · 2024-12-13T03:09:32.669Z · LW(p) · GW(p)

Incorrect how? Bayes doesn't say anything about the Standard Model.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2024-12-13T03:23:57.120Z · LW(p) · GW(p)

The example of independence/random sampling is one that is almost certainly an incorrect assumption as people use it; it leads to the Doomsday argument, and it's also used to violate conservation of expected evidence in the Sleeping Beauty problem:

https://www.lesswrong.com/posts/uEP3P6AuNjYaFToft/conservation-of-expected-evidence-and-random-sampling-in#No_metaphysics__just_random_sampling [LW · GW]

Replies from: Lorec
comment by Lorec · 2024-12-15T02:07:07.368Z · LW(p) · GW(p)

I happen not to like the paradigm of assuming independent random sampling either.

I skimmed your linked post.

First, a simple maybe-crux:

If you couldn't have possibly expected to observe the outcome not A, you do not get any new information by observing outcome A and there is nothing to update on.

There is no outcome-I-could-have-expected-to-observe that is the negation of existence. There are outcomes I could have expected to observe that are alternative characters of existence to the one I experience. For example, "I was born in Connecticut" is not the outcome I actually observed, and yet I don't see how we can say that it's not a logically coherent counterfactual, if logically coherent counterfactuals can be said to exist at all.

Second, what is your answer to Carlsmith's "God's extreme coin toss with jackets [LW · GW]"?

God flips a fair coin. If heads, he creates one person with a red jacket. If tails, he creates one person with a red jacket, and a million people with blue jackets.

  • Darkness: God keeps the lights in all the rooms off. You wake up in darkness and can’t see your jacket. What should your credence be on heads?
  • Light+Red: God keeps the lights in all the rooms on. You wake up and see that you have a red jacket. What should your credence be on heads?
Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2024-12-15T22:53:45.961Z · LW(p) · GW(p)

There is no outcome-I-could-have-expected-to-observe that is the negation of existence. There are outcomes I could have expected to observe that are alternative characters of existence to the one I experience. For example, "I was born in Connecticut" is not the outcome I actually observed, and yet I don't see how we can say that it's not a logically coherent counterfactual, if logically coherent counterfactuals can be said to exist at all.

I think that you are interpreting negation too narrowly here: the negation operator also covers this scenario, because the complement of being born in a specific time and place is being born in any other place and time, no matter which other place and time (other than the exact same one). So it is valid information to infer that you were born in a specific time and place - but remember to be careful of independence assumptions, and check whether there was a non-independent event that happened to cause your birth.

Remember, the negation of something is often less directly informative than the thing itself, because a negation operator rarely specifies only 1 thing, while directly specifying the thing perfectly points to only 1 thing.

The key value of the quote below is the reminder that if you could never have observed a different outcome, then no new information was gained - and this is why general theories tend to be uninformative.

It is also a check on the generality of theories, because if a theory predicts everything, then it is worthless for inferring anything that depends on specific outcomes.

If you couldn't have possibly expected to observe the outcome not A, you do not get any new information by observing outcome A and there is nothing to update on.

To answer the question

  • Light+Red: God keeps the lights in all the rooms on. You wake up and see that you have a red jacket. What should your credence be on heads?


Given that you always have a red jacket in the situation, the answer is that you have a 1/2 chance that the coin was heads, assuming it's a fair coin, because the red jacket is already known and cannot contribute to the probability further.

  • Darkness: God keeps the lights in all the rooms off. You wake up in darkness and can’t see your jacket. What should your credence be on heads?

Given that the implicit sampling method is random and independent (due to the fair coin), the credence in tails is a million to one; thus you are very likely in the tails world.

If the sampling method was different, the procedure would be more complicated, and I can't calculate the probabilities for that situation yet.

The reason it works is that the sampling was independent of your existence; if it wasn't, the answer would no longer be valid and the problem gets harder. This is why a lot of anthropic reasoning tends to be so terrible: it incorrectly assumes random/independent sampling applies universally, when in fact the reason the anthropic approach worked here is that we knew a priori that the sampling was independent and random, so we always get new information. If that doesn't hold (say, because we know that certain outcomes are impossible or improbable), then a lot of anthropic reasoning becomes invalid too.
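For concreteness, a minimal sketch of that bookkeeping under the random-and-independent-sampling assumption just described, with the observer counts from Carlsmith's setup (one way to formalize it, not the only way):

```python
from fractions import Fraction

# Weight each (world, jacket) cell by P(world) * number of such observers,
# i.e. treat yourself as a random sample over all observers across worlds.
half = Fraction(1, 2)
heads = {'red': 1}                      # heads: one person, red jacket
tails = {'red': 1, 'blue': 1_000_000}   # tails: one red, a million blue

heads_w = {k: half * v for k, v in heads.items()}
tails_w = {k: half * v for k, v in tails.items()}
total = sum(heads_w.values()) + sum(tails_w.values())

# Darkness: all you know is that you are one of the observers.
print(sum(heads_w.values()) / total)    # 1/1000002 -- about a million to one for tails

# Light+Red: condition on being a red-jacket observer.
print(heads_w['red'] / (heads_w['red'] + tails_w['red']))  # 1/2
```

Both printed values match the answers above; everything is carried by the weighting step.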

Replies from: Lorec
comment by Lorec · 2024-12-17T19:15:10.697Z · LW(p) · GW(p)

You are confused.

How about this:

Heads: Two people with red jackets, one with blue.

Tails: Two people with red jackets, nine hundred and ninety-nine thousand, nine hundred and ninety-seven people with blue jackets.

Lights off.

Guess your jacket color. Guess what the coin came up. Write down your credences.

Light on.

Your jacket is red. What did the coin come up?
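[ For concreteness, here is the same weighted-count bookkeeping from your reply, run on this variant - a sketch under your own random-sampling assumption, using the counts above:

```python
from fractions import Fraction

half = Fraction(1, 2)
heads = {'red': 2, 'blue': 1}        # heads: 2 red, 1 blue
tails = {'red': 2, 'blue': 999_997}  # tails: 2 red, 999,997 blue

total = half * sum(heads.values()) + half * sum(tails.values())

# Lights off: credence in heads is tiny, and you should guess "blue".
print(half * sum(heads.values()) / total)  # 1/333334 (= 3/1000002)

# Light on, red jacket: the update is enormous, back to even odds.
print(half * heads['red'] /
      (half * heads['red'] + half * tails['red']))  # 1/2
```

On that accounting, turning on the light and seeing red moves you from about 3-in-a-million credence in heads to even odds - so the jacket color contributes a great deal. ]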

[ Also, re

Given that the implicit sampling method is random and independent (due to the fair coin), the credence in heads is a million to 1, thus you very likely are in the head's world.

Did you mean 'tails'? ]

comment by Steven Byrnes (steve2152) · 2024-12-10T00:05:07.349Z · LW(p) · GW(p)

I agree!! (if I understand correctly). See https://www.lesswrong.com/posts/RrG8F9SsfpEk9P8yi/robin-hanson-s-grabby-aliens-model-explained-part-1?commentId=wNSJeZtCKhrpvAv7c [LW(p) · GW(p)]

Replies from: Lorec
comment by Lorec · 2024-12-10T01:47:10.493Z · LW(p) · GW(p)

Huh, I didn't know Hanson rejected the Doomsday Argument! Thanks for the context.

What do you mean [in your linked comment] by weighting civilizations by population?

What do you mean by "update our credences-about-astrobiology-etc. accordingly [with our earliness relative to later humans]"?

comment by avturchin · 2024-12-09T20:19:46.518Z · LW(p) · GW(p)

But I don't think the Grabby/Loud Aliens argument actually explains my, Lorec's, earliness in an anthropic sense, given the assumption that future aliens will also be populous and sentient.


There is no assumption that grabby aliens will be sentient in Hanson's model. They only prevent other sentient civilizations from appearing.

Replies from: Lorec
comment by Lorec · 2024-12-09T20:34:19.350Z · LW(p) · GW(p)

You could make a Grabby Aliens argument without assuming alien sentience, and in fact Hanson doesn't always explicitly state this assumption. However, as far as I understand Hanson's world-model, he does indeed believe these alien civilizations [and the successors of humanity] will by default be sentient.

If you did make a Grabby Aliens argument that did not assume alien sentience, it would still have the additional burden of explaining why successful alien civilizations [which come later] are nonsentient, while sentient human civilization [which is early and gets wiped out soon by aliens] is not so successful. It does not seem to make very much sense to model our strong rivals as, most frequently, versions of us with the sentience cut out.

Replies from: avturchin, avturchin, Viliam
comment by avturchin · 2024-12-12T15:10:13.603Z · LW(p) · GW(p)

If you assume sentience cut-off as a problem, it boils down to the Doomsday argument: why are we so early in the history of humanity? Maybe our civilization becomes non-sentient after the 21st century, either because of extinction or non-sentient AI takeover. 

If we agree with the Doomsday argument here, we should agree that most AI-civilizations are non-sentient. And as most Grabby Aliens are AI-civilizations, they are non-sentient.

TLDR: If we apply anthropics to the location of sentience in time, we should assume that Grabby Aliens are non-sentient, and thus the Grabby Alien argument is not affected by the earliness of our sentience.

Replies from: Lorec
comment by Lorec · 2024-12-13T03:10:45.636Z · LW(p) · GW(p)

Agreed.

And given that the earliness of our sentience is the very thing the Grabby Aliens argument is supposed to explain, I think this non-dependence is damning for it.

comment by avturchin · 2024-12-10T17:44:24.960Z · LW(p) · GW(p)

Thanks, now I better understand your argument.

However, we can expect that any civilization is sentient only for a short time in its development, analogous to the 19th-21st centuries. After that, it becomes controlled by non-sentient AI. Thus, it's not surprising that aliens are not sentient during their grabby stage.

But one may argue that even a grabby alien civilization has to pass through a period when it is sentient.

For that, Hanson's argument may suggest that:

a) All the progenitors of future grabby aliens already exist now (maybe we will become grabby)

b) Future grabby aliens destroy any possible civilization before it reaches the sentient stage in the remote future. 

Thus, the only existing sentient civilizations are those that exist in the early stage of the universe.

comment by Viliam · 2024-12-10T10:07:55.880Z · LW(p) · GW(p)

I imagine that nonsentient replicators could reproduce and travel through the universe faster than sentient ones, and speed is crucial for the Grabby Aliens argument.

You probably need sentience to figure out space travel, but once you get that done, maybe the universe is sufficiently regular that you can just follow the same relatively simple instructions over and over again. And even if occasionally you meet an irregularity, such as an intelligent technologically advanced civilization that changed something about their part of the universe, the flood will just go around them, consuming all the resources in their neighborhood, and probably hurting them a lot in the process even if they manage to survive.

Okay, but why would someone essentially burn down the entire universe? First, we don't know what kind of utility function the aliens have. Maybe they value something (paperclips? holy symbols?) way more than sentience. Or maybe they are paranoid about potential enemies, and burning down the rest of the universe seems like a reasonable defense to them. Second, it could happen as an accident; with billions of space probes across the universe, random mutations may happen, and the mutants that lost sentience but gained a little speed would outcompete the probes that follow the originally intended design.

Replies from: Lorec
comment by Lorec · 2024-12-10T14:09:18.522Z · LW(p) · GW(p)

it could happen as an accident; with billions of space probes across the universe, random mutations may happen, and the mutants that lost sentience but gained a little speed would outcompete the probes that follow the originally intended design.

This is, indeed, what I meant by "nonsentient mesa-optimizers" in OP:

This mesa-optimizer will then mesa-optimize all the sentience away [this is a natural conclusion of several convergent arguments originating from both computer science and evolutionary theory [LW · GW]]

Why do you expect sentience to be a barrier to space travel in particular, and not interstellar warfare? Interstellar warfare with an intelligent civilization seems much harder than merely launching your von Neumann probe into space.

I agree with you that "civilizations get swept by nonsentient mesa-optimizers" is anthropically frequent. I think this resolves the Doomsday Argument problem. Hanson's position is different from both mine and yours.

Replies from: Viliam
comment by Viliam · 2024-12-10T21:31:39.970Z · LW(p) · GW(p)

Good question! I didn't actually think about this consciously, but I guess my intuitive assumption is that sufficiently advanced civilizations are strongly constrained by the laws of physics, which are the same for everyone, regardless of their intelligence.

A genius human inventor living in 21st century could possibly invent a car that is 10x faster than any other car invented so far, or maybe a rocket that is 100x faster than any other rocket. But if an alien civilization already has spaceships that fly at 0.9c, the best their super-minds can do is to increase it to 0.99c, or maybe 0.999c, but even if they get to 0.999999c, it won't make much of a difference if two civilizations on the opposite sides of the galaxy send their bombs at each other.

Similarly, the human military in the 21st century could invent more powerful bombs, then maybe better bunkers, then even more powerful bombs, etc. So the more intelligent side can keep an advantage. But if the alien civilizations invent bombs that can blow up stars, or create black holes and throw them at your solar system, there is probably not much you can do about it. Especially if they launch the bombs not only at you, but also at all solar systems around you. Suppose you survive the attack, but all stars within 1000 light years around you are destroyed. How will your civilization advance now? You need energy, and there is a limit on how much energy you can extract from a star; and if you have no more stars, you are out of energy.

Once you get the "theory of everything" and develop technology to exploit nature at that level, there is nowhere further to go. The speed of light is finite. The matter in your light cone is finite; the amount of energy you can extract from it is finite. If you get to, e.g., 1% of the fundamental limits, it means that no invention can ever make you more than 100x more efficient than you are now. Which means that a civilization that starts with 100x more resources (because they started expanding through the universe earlier, or sacrificed more to become faster) will crush you.

This is not a proof; one could possibly argue that the smarter civilization would e.g. try to escape instead of fighting, or that there must be a way to overcome what seem like the fundamental limitations of physics, like maybe creating your own parallel universe and escaping there. But this is my intuition about how things would work on the galactic scale. If someone throws a sufficient amount of black holes at you, or strips bare all the resources around you, it's game over even for the Space Einstein.

Replies from: Lorec
comment by Lorec · 2024-12-13T02:40:31.502Z · LW(p) · GW(p)

The level where there is suddenly "nowhere further to go" - the switch from exciting, meritocratic "Space Race" mode to boring, stagnant "Culture" mode - isn't dependent on whether you've overcome any particular physical boundary. It's dependent on whether you're still encountering meaningful enemies to grind against, or not. If your civilization first got the optimization power to get into space by, say, a cutthroat high-speed Internet market [on an alternate Earth this could have been what happened], the market for high-speed Internet isn't going to stop being cutthroat enough to encourage innovation just because people are now trying to cover parcels of 3D space instead of areas of 2D land. And even in stagnant "Culture" mode, I don't see why [members/branches of] your civilization would choose to get dumber [lose sentience or whatever other abilities got them into space].

Suppose you survive the attack, but all stars within 1000 light years around you are destroyed.

I question why you assign significant probability to this outcome in particular.

Have you read That Alien Message [LW · GW]? A truly smart civilization has ways of intercepting asteroids before they hit, if they're sufficiently dumb/slow - even ones that are nominally really physically powerful.