Is Grabby Aliens built on good anthropic reasoning?
post by Steven Byrnes (steve2152) · 2022-03-17T14:12:05.909Z · LW · GW · 11 comments
This is a question post.
Contents
Answers: Tristan Cook (16), danielmartin0 (12), Stuart_Armstrong (10), dadadarren (7), Rafael Harth (5), Charlie Steiner (4), avturchin (2)
11 comments
An underlying assumption of the Grabby Aliens paper by Robin Hanson et al., if I understand it, is the following:
We should expect to find ourselves as a member of a uniformly-randomly-selected civilization out of all civilizations in the history of the universe.
In other words, if there’s a master list of every civilization in the universe’s past, present, and future, our prior should be that our human civilization should be uniformly-randomly selected from that list. If you accept that assumption, then you’re obligated to perform a Bayesian update towards hypotheses-about-the-universe that predict a master-list-of-all-civilizations with the property that human civilization looks like a “typical” civilization on the list. My impression is that this assumption (and corresponding Bayesian update) is the foundation upon which the whole Grabby Aliens paper is built. (Well, that plus the assumption that there are no other Bayesian updates that need to be taken into account, which I think is dubious [LW · GW], but let’s leave that aside.)
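To make that update concrete, here is a toy sketch of my own (not from the paper); the two hypotheses and every number in it are invented placeholders:

```python
# Toy Bayesian update under the "uniformly-random civilization" assumption.
# Both hypotheses and all numbers below are invented for illustration only.

def likelihood(our_arrival_gyr, arrival_times_gyr):
    """P(a uniformly chosen civilization from the master list arrives within
    0.5 Gyr of our date | hypothesis)."""
    hits = sum(1 for t in arrival_times_gyr if abs(t - our_arrival_gyr) < 0.5)
    return hits / len(arrival_times_gyr)

# Hypothesis A: no grabby aliens, and long-lived planets stay habitable, so
# civilizations keep appearing for a very long time (here: out to 10,000 Gyr).
master_list_a = [i + 0.5 for i in range(10_000)]

# Hypothesis B: grabby aliens fill the universe early, so no new civilizations
# appear after a ~50 Gyr deadline.
master_list_b = [i + 0.5 for i in range(50)]

our_arrival = 13.8  # Gyr
bayes_factor = likelihood(our_arrival, master_list_a) / likelihood(our_arrival, master_list_b)
print(bayes_factor)  # ~0.005: a strong update away from A and toward B
```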
If that’s right, I’m confused where this assumption comes from. When I skim discussions of anthropic reasoning (e.g. in Nick Bostrom's book, and on lesswrong [? · GW], and on Joe Carlsmith's blog, etc.), I see lots of discussion about “SIA” and “SSA” and “UDASSA” and so on. But the Grabby Aliens assumption above seems to be none of those things—in fact, it seems to require strongly rejecting all of them! (E.g., note how the Grabby Aliens assumption does not weight civilizations by their population.)
I feel like I’m missing something. I feel like there are a bunch of people who have spent a bunch of time thinking about anthropics (I’m not one of them), and who endorse some “standard” anthropic reasoning framework like SIA or SSA or UDASSA or whatever. Do all those people think that the Grabby Aliens paper is a bunch of baloney? If so, have they written about that anywhere? Or am I wrong that they’re contradictory? (Or conversely, has anyone tried to spell out in detail why the Grabby Aliens anthropic assumption above is a good assumption?)
Answers
answer by Tristan Cook
I've been studying & replicating the argument in the paper [& hope to share results in the next few weeks].
The argument implicitly uses the self-sampling assumption (SSA) with the reference class of observers in civilizations that are not yet grabby (and may or may not become grabby).
Their argument is similar in structure to the Doomsday argument:
1. If there are no grabby aliens (and longer-lived planets are habitable) then there will be many civilizations that appear far in the future, making us highly atypical (in particular, 'early' in the distribution of arrival times).
2. If there are sufficiently many grabby aliens (but not too many) they set a deadline (after the current time) by which all civilizations must appear if they appear at all. This makes civilizations/observers like us/ours that appear at ~13.8Gy more typical in the reference class of all civilizations/observers that are not yet grabby.
Throughout we're assuming the number of observers per pre-grabby civilization is roughly constant. This lets us be loose with the civilization-observer distinction.
I don't think the reference class is a great choice. A more natural choice would be the maximal reference class (which includes observers in grabby alien civilizations) or the minimal reference class (containing only observers subjectively indistinguishable from you).
↑ comment by Stuart_Armstrong · 2022-03-17T21:18:06.350Z · LW(p) · GW(p)
It's best, in my judgement, to not use reference classes at all when doing anthropics. Explained more in this sequence: https://www.lesswrong.com/s/HFyami76kSs4vEHqy [? · GW]
↑ comment by Steven Byrnes (steve2152) · 2022-03-17T17:09:07.859Z · LW(p) · GW(p)
Thanks!
Maybe I’m misunderstanding SSA, but wouldn’t “SSA with reference class of observers in civilizations that are not yet grabby” require that we weight by the relevant populations?
For example, if Civilization A has 10× more citizens (before becoming grabby or going extinct) than does Civilization B, wouldn’t our prior be that we’re 10× likelier to find ourselves in Civilization A than B?
↑ comment by Tristan Cook · 2022-03-17T17:22:35.834Z · LW(p) · GW(p)
Yep, you're exactly right.
We could further condition on something like "observing that computers were invented ~X years ago" (or something similar that distinguishes observers like us) such that the (eventual) population of civilizations doesn't matter. This conditioning means we don't have to consider that longer-lived planets will have greater populations.
↑ comment by Steven Byrnes (steve2152) · 2022-03-17T17:27:50.144Z · LW(p) · GW(p)
If we’re allowed to “observe” that computers were invented 80 years ago, why can’t we just “observe” that the universe is 13.8 billion years old, and thus throw the whole Grabby Aliens analysis in the garbage? :-P (Sorry if that sounds snarky, it’s an honest question and I’m open-minded to there being a good answer.)
↑ comment by Tristan Cook · 2022-03-17T17:43:33.898Z · LW(p) · GW(p)
Doesn't sound snarky at all :-)
Hanson et al. are conditioning on the observation that the universe is 13.8 billion years old. On page 18 they write:
> Note that by assuming a uniform distribution over our origin rank r (i.e., that we are equally likely to be any percentile rank in the GC origin time distribution), we can convert distributions over model times τ (e.g., an F(τ) over GC model origin times) into distributions over clock times t. This in effect uses our current date of 13.8Gyr to estimate a distribution over the model timescale constant k. If instead of the distribution F(τ) we use the distribution F0(τ), which considers only those GCs who do not see any aliens at their origin date, we can also apply the information that we humans do not now see aliens.
Formally (and I think spelling it out helps), with SSA and the above reference class, our likelihood for a given hypothesis is the ratio of [number of observers in pre-grabby civilizations that observe Y] to [number of observers in pre-grabby civilizations], where Y is our observation that the universe is 13.8 billion years old, we are on a planet that has been habitable for ~4.5Gy and has total habitability of ~5.5Gy, we don't observe any grabby civilizations, etc.
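Spelling that out in symbols (notation mine; R is the reference class above, H a hypothesis about the universe, Y our total observation):

```latex
% Sketch of the SSA likelihood and the resulting update, as described above.
\[
P(Y \mid H) \;=\;
\frac{\#\{\text{observers in } R \text{ under } H \text{ who observe } Y\}}
     {\#\{\text{observers in } R \text{ under } H\}},
\qquad
\frac{P(H_1 \mid Y)}{P(H_2 \mid Y)}
 \;=\; \frac{P(Y \mid H_1)}{P(Y \mid H_2)} \cdot \frac{P(H_1)}{P(H_2)}.
\]
```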
↑ comment by Steven Byrnes (steve2152) · 2022-03-17T17:58:48.538Z · LW(p) · GW(p)
Oh, I think I phrased my last comment poorly.
You originally wrote “We could further condition on something like "observing that computers were invented [80] years ago" … This conditioning means we don't have to consider that longer-lived planets will have greater populations.”
I interpreted this comment as you saying “We could restrict our SSA reference class to only include observers for whom computers were invented 80 years ago”. (Is that right?)
And then I was trying to respond to that by saying “Well if we can do that, why can’t we equally well restrict our SSA reference class to only include observers for whom the universe is 13.8 billion years old? And then “humanity is early” stops being true.”
↑ comment by Tristan Cook · 2022-03-17T19:08:36.360Z · LW(p) · GW(p)
Ah, I don't think I was very clear either.
> I interpreted this comment as you saying “We could restrict our SSA reference class to only include observers for whom computers were invented 80 years ago”. (Is that right?)
What I wanted to say was: keep the reference class the same, but restrict the types of observers we are saying we are among (the numerator in the SSA ratio) to only those who (amongst other things) observe the invention of the computer 80 years ago.
> And then I was trying to respond to that by saying “Well if we can do that, why can’t we equally well restrict our SSA reference class to only include observers for whom the universe is 13.8 billion years old? And then “humanity is early” stops being true.”
Yep, one can do this. We might still be atypical if we think longer-lived planets are habitable (since life has more time to appear there), but we could also restrict the reference class further. Eventually we end up at minimal reference class SSA [LW · GW].
↑ comment by Viliam · 2022-03-18T21:10:28.910Z · LW(p) · GW(p)
If there are no grabby aliens, then our civilization is highly atypical. But if there are grabby aliens, then we as individuals are highly atypical, living before the space expansion, which will control orders of magnitude more resources and therefore can support orders of magnitude more sentient observers.
A possible solution would be, if the grabby aliens have to sacrifice their sentience in return for greater expansion speed. A global race to the bottom, where those who do not reduce themselves to the most efficient replicators get outcompeted by those who do. If replicators without sentience are 1% more efficient at replication than replicators with sentience, in the long run this is all that matters.
(Actually, this also seems to get the math wrong. Even if grabby aliens gradually lose sentience and become pure replicators, as long as they don't lose the sentience immediately, there should still be orders of magnitude more sentient observers in the early phase of expansion than before the expansion. So our situation before the expansion remains highly atypical.)
answer by danielmartin0
Hi, coauthor of the Grabby Aliens paper here.
In my view, the correct way to calculate in many anthropic problems is along the lines of the well-explored case of Everett physics: by operationalising the problem in decision theoretic terms.
For the sleeping beauty problem, if one embeds the problem in a repeated series involving bets, and if each bet feeds into a single pot, you arrive at the Thirder position. There is then a consistency argument to make the single-shot problem match that.
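Here is a minimal sketch of that repeated-bets framing (the simulation setup is mine, purely for illustration): every awakening places a bet into one common pot, and the fraction of bets made in heads-worlds is what fair per-awakening odds must track.

```python
# Repeated Sleeping Beauty with pooled bets: heads -> one awakening/bet,
# tails -> two awakenings/bets. What fraction of all bets occur under heads?
import random

random.seed(0)
heads_bets = 0
total_bets = 0

for _ in range(100_000):
    heads = random.random() < 0.5
    bets_this_round = 1 if heads else 2
    total_bets += bets_this_round
    if heads:
        heads_bets += bets_this_round

print(heads_bets / total_bets)  # ~1/3: the Thirder betting odds
```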
Similarly, for the Grabby Aliens problem, consider that civilisations may lodge predictions about the distance to the nearest GC, which can be compared to other civilisations' guesses in the intergalactic council at a later date. Or choose a repeated game in which members of the council reset themselves to the spacetime origin point of a random other GC in the council, by simulation or some other method, and make bets from there. The single-shot case, i.e. humanity's predicament, should have a matching strategy.
It is statistical prediction in this sense that I had in mind when helping with calculations+concepts for the paper.
answer by Stuart_Armstrong
As I've argued [? · GW], anthropic reasoning, absent indistinguishable copies, is nothing special: "I observe X" gives the same update [? · GW] as "I exist and observe X".
So, what theory best explains the fact we exist and don't observe aliens? Apart from the various "zoo hypotheses" (there are aliens but they are hiding), the most likely theory is "evolving life is not that hard, but humans evolved unusually early". The first half makes our existence more likely, the second explains our non-observation of aliens (again, "we don't observe aliens" is the same as "we exist and don't observe aliens", which is the same as "early aliens didn't kill or conquer humanity, and we don't observe aliens").
answer by dadadarren
Grabby Aliens works on logic similar to well-known anthropic camps such as SSA and SIA: treat what we are as the result of an Observation Selection Effect. As you wrote, treat ourselves as random selections from a list containing everyone. The main difference is that regular anthropic camps typically apply this to individual observers, while Grabby Aliens applies it to civilizations.
Whether this reflects good anthropic reasoning is hard to answer. If one endorses regular anthropic camps then Grabby Aliens' logic is at least incomplete: it should incorporate how many observers different civilizations have. But it should be noted that applying the Observation Selection Effect at the observer level is not watertight either. Maybe it should be applied at the level of observer-moments: what I am experiencing now should be regarded as randomly selected from all observer-moments. Then the theory ought to be further updated to reflect the life spans of observers from different civilizations.
I personally firmly believe the typical OSE way of anthropic reasoning is plainly wrong. What "I" am, or more precisely what the first-person perspective is, cannot be arrived at by reasoning. It is a primitive, axiomatic fact, i.e. "I naturally know I am this person. But there is no reason behind it, nor an explanation for why it is so. I just am." Attempting to explain it as a random sample only leads to paradoxes. A starting point for my argument can be read here. [LW · GW]
answer by Rafael Harth
> I feel like I’m missing something. I feel like there are a bunch of people who have spent a bunch of time thinking about anthropics
I have. Hi! I think the reasoning is approximately correct. The caveat is that "civilization" is not an ontologically basic element in the calculation. What you should update on is your total set of observations, and then you should prefer universes where that set of observations is more likely to be instantiated. But (without reading the grabby aliens paper) it sounds to me like this approximates the update that the paper makes.
I thought that my model aligns with UDASSA, but I've derived it independently and I'm not sure.
↑ comment by Steven Byrnes (steve2152) · 2022-03-17T16:48:51.211Z · LW(p) · GW(p)
For example, IIUC, Grabby Aliens is claiming:
- We are a member of a uniformly-randomly-selected civilization out of all civilizations in the past, present, and future of the universe.
- We are not a uniformly-randomly-selected individual out of all individuals in the past, present, and future of the universe. For example, if Civilization A contains 10× more individuals than Civilization B, then our prior should be that we are 10× more likely to be any particular individual in Civilization B than any particular individual in Civilization A.
- We are not on a uniformly-randomly-selected civilized planet out of all civilized planets in the past, present, and future of the universe. Same idea as #2 above.
- We are not a uniformly-randomly-selected individual out of all individuals in the past, present, and future of Earth. For example, if there will eventually be 100 trillion intelligent individuals on Earth, we should not update on the fact that we are unusually early.
You agree with all four of these? The contrast between #1 vs #4 seems especially weird to me—if we’re going to update on human civilization being early with respect to all civilizations, shouldn’t we also update on me being early with respect to all intelligent Earthlings?? #4 is of course the doomsday argument, which incidentally Robin Hanson rejects, which seems inconsistent to me.
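As a toy illustration of why #1 and #4 look structurally parallel to me (the human totals are rough, the civilization numbers purely schematic):

```python
# Both updates compare "our rank so far" against "a hypothetical eventual total".
humans_born_so_far = 1.0e11      # roughly 100 billion humans to date
eventual_humans = 1.0e14         # the hypothetical 100 trillion from #4 above
print(humans_born_so_far / eventual_humans)   # ~0.001: "surprisingly early" at the individual level

civs_arrived_by_now = 1          # schematic: civilizations like ours by 13.8 Gyr
eventual_civs = 1_000            # schematic: civilizations over the universe's whole future
print(civs_arrived_by_now / eventual_civs)    # the same shape of update at the civilization level
```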
↑ comment by Rafael Harth (sil-ver) · 2022-03-17T17:37:42.100Z · LW(p) · GW(p)
> You agree with all four of these?
No, not quite; if this list is correct, I was wrong about what G/A claims.
You are a uniformly sampled observer-moment (according to my model). That means you should have a master list of all instances that could implement this moment and then assume you're sampled from those. This is in fact the beginning and end of my model. To make this more manageable, you can assume your memories from the last five minutes are accurate,[1] and then draw a slightly larger box, i.e., "I'm a randomly sampled 5-minute segment".
Applying this:
- I agree with #2 because you see that you live in a civilization with however many people it has, not one with a different number.
- Ditto #3.
- Ditto #4: you're not randomly sampled out of people who live early and late, because you see that you live early. The question for doomsday is whether a universe where lots of civilizations go extinct makes it more likely to see that you're early (plus everything else you see), and I don't see why it would.
So the way I disagree with #1 is similar; we can see that we're early in the history of the universe. If GA relies on ignorance on that point (right now I can't figure out from memory if it does), I probably disagree with it. I guess I'll come back to this when I reread the paper or at least the video.
1. This goes wrong iff you are a Boltzmann brain or something similar, which my model is perfectly happy to treat as a coherent possibility, but Boltzmann brains are extremely complex, so this should not give you a lot of moments. ↩︎
answer by Charlie Steiner
It's none of the three-letter acronyms because it actually uses our knowledge that human civilization exists, and that we have a certain distribution over physical law. I think it's basically fine, though I think the paper falls into some pitfalls in saying it "explains" certain things without showing that there are more "microstates" of the model where their "explanation" works versus where it doesn't.
EDIT: In response to Tristan's answer, I'd say that you can start with this distribution and recover different three-letter acronyms by ablating away different pieces of knowledge. Like Rafael says, the important thing is taking the knowledge we actually do have and thinking about different ways the rest of the universe could be.
answer by avturchin
In my view [LW · GW], SIA and SSA become the same in an infinite universe, so there is no difference.
↑ comment by Steven Byrnes (steve2152) · 2022-03-18T12:42:25.476Z · LW(p) · GW(p)
My question was about the Grabby Aliens paper's assumption. As far as I understand, that assumption is its own idiosyncratic thing, which is neither SIA nor SSA (e.g. because "civilizations" are weighted equally regardless of their populations).
11 comments
comment by Gurkenglas · 2022-03-17T14:20:09.842Z · LW(p) · GW(p)
You'd think that some grabby alien civilization would try to maximize the number of civilizations spawned under its purview.
↑ comment by MichaelStJules · 2022-03-17T16:14:05.746Z · LW(p) · GW(p)
Seems similar to the simulation hypothesis. Should we believe we were created by aliens?
EDIT: On the other hand, grabby aliens may be more motivated to have more efficient and aligned intelligent civilizations in our place, without the long and uncertain evolutionary trajectory. Why life like ours?
↑ comment by Steven Byrnes (steve2152) · 2022-03-17T14:28:38.186Z · LW(p) · GW(p)
If I understand correctly, if Civilization A “spawns” Civilization B, they are supposed to be lumped together as “one civilization” when doing the Grabby Aliens paper’s anthropic accounting. (I’m not endorsing that, just saying how I think the paper’s analysis works.)
↑ comment by Gurkenglas · 2022-03-18T08:35:13.839Z · LW(p) · GW(p)
To be fair, if I were to translate UDT into an anthropic assumption it'd be "You're more likely to find yourself in a position that matters.", which predicts both being Civ A among its descendants and being Steven Byrnes among humanity. Note how the instinct is apparently evolutionarily adaptive.
↑ comment by Ege Erdil (ege-erdil) · 2022-03-17T14:54:38.077Z · LW(p) · GW(p)
Yes, the paper is only counting independent origins. I don't think this is a significant problem: all else equal, we should assume that the arrival time of our civilization is close to the median of the distribution of possible arrival times of civilizations like us. Descendant civilizations don't affect the analysis because they can still only expand at some speed close to the speed of light from their location, so the future light cone occupation is not going to change as a result of that.
The really big assumption the paper makes is this "hard steps" model, which constrains the shape of the arrival time distribution quite strongly: the arrival time has to be a sum of independent exponentially distributed random variables. This is where all the quantitative results come from, and a different model of the evolution of life could greatly alter the conclusions. That said, this model does at least seem plausible as a model of how life evolves.
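For what it's worth, here is a quick numerical check (toy parameters, not the paper's) of the shape that model imposes:

```python
# A quick numerical check (setup and numbers mine) of the hard-steps claim:
# if reaching our stage requires n independent exponential steps whose typical
# durations are much longer than the available window, then the chance of
# finishing all n steps by time t scales roughly like t^n for small t.
import numpy as np

rng = np.random.default_rng(0)
n_steps, step_mean, trials = 3, 100.0, 2_000_000

# Total completion time = sum of n independent exponential "hard steps".
totals = rng.exponential(step_mean, size=(trials, n_steps)).sum(axis=1)

p_by_10 = np.mean(totals <= 10.0)
p_by_20 = np.mean(totals <= 20.0)
# Doubling t multiplies the success probability by roughly 2**n_steps = 8
# (here ~7.4, approaching 8 as t shrinks relative to step_mean).
print(p_by_20 / p_by_10)
```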
↑ comment by Steven Byrnes (steve2152) · 2022-03-17T15:05:17.935Z · LW(p) · GW(p)
> Descendant civilizations don't affect the analysis because they can still only expand at some speed close to the speed of light from their location, so the future light cone occupation is not going to change as a result of that.
Are you sure it wouldn’t affect the analysis? Suppose someone chooses to redefine the word “civilization” such that “one technological species occupying N planets” counts as “N civilizations” rather than “one civilization”. And thus, if one grabby species expands to occupy thousands of galaxies, under this new definition it would be “trillions of civilizations” instead of just one civilization. Wouldn’t that radically change the distribution of civilization arrival times, and thus radically change every result in the paper?
↑ comment by Ege Erdil (ege-erdil) · 2022-03-17T17:04:36.671Z · LW(p) · GW(p)
No, it wouldn't change anything in the paper because we're not such a civilization and so our arrival time wouldn't be sampled from the arrival time of descendant civilizations.
What would change it is if there is actually a chance that we're a descendant civilization ourselves; that is, we're currently in an area of space which has already been colonized by a grabby alien civilization. In this case it's indeed true that every result in the paper would radically change.
↑ comment by Steven Byrnes (steve2152) · 2022-03-17T17:23:23.687Z · LW(p) · GW(p)
I look around me and make two observations: (1) I observe that I am not a member of a “descendant civilization” (i.e., I am on a planet on which intelligent life arose endogenously, as opposed to being colonized by a grabby civilization from elsewhere, at least to all appearances, and let’s assume this is definitely the case for the sake of argument); (2) I observe that my civilization arose when the universe was 13.8 billion years old.
According to the Grabby Aliens paper,
- When I make observation #2, I’m supposed to feel surprised, and thus make updates towards theories-about-the-universe in which this observation would have been less surprising.
- When I make observation #1, I guess I’m supposed to shrug and say “whatever, descendant civilizations are not in my reference class, who cares about them.”
If that’s right, why the difference? What’s the basis for saying that “descendant civilizations” are not in my reference class and I shouldn’t consider them in my anthropic update, but “civilizations that start when the universe is 1 trillion years old” are in my reference class and I should consider them in my anthropic update?
(Sorry if I’m misunderstanding.)
↑ comment by Ege Erdil (ege-erdil) · 2022-03-17T18:18:33.054Z · LW(p) · GW(p)
> If that’s right, what’s the basis for saying that “descendant civilizations” are not in my reference class and I shouldn’t consider them in my anthropic update...
This is an assumption made by the paper: it assumes that the prior on us being a descendant civilization is low. If your point is that rejecting this leads to the central conclusions of the paper falling apart, that's a correct assessment.
> ...but “civilizations that start when the universe is 1 trillion years old” are in my reference class and I should consider them in my anthropic update?
Think of it as you having a prior over the time $T$ at which a grabby alien civilization would arrive on Earth for the first time from a different point of origin. Conditional on any such time $T$, our likelihood of having arrived when we have is

$$P(t_{\text{now}} \mid T) \propto \frac{t_{\text{now}}^{\,n-1}}{T^{n}}$$

with support $T \geq t_{\text{now}}$, where $n$ is the number of hard steps and $t_{\text{now}} \approx 13.8$ billion years is our arrival time. You can now use this likelihood for a Bayesian update over your prior for $T$.

Just to illustrate this, suppose you start with a scale-invariant improper prior $p(T) \propto 1/T$ - scale invariance is desirable when we're completely agnostic about the timescales involved. Bayesian update with the hard-steps likelihood takes us to a posterior $\propto T^{-(n+1)}$ supported on $[t_{\text{now}}, \infty)$, and computing the expected value of $T$ gives

$$\mathbb{E}[T] = \frac{n}{n-1}\, t_{\text{now}}, \qquad \mathbb{E}[T] - t_{\text{now}} = \frac{t_{\text{now}}}{n-1}.$$

In other words, with the number of hard steps that Hanson takes in his paper and $t_{\text{now}} = 13.8$ billion years, grabby aliens should arrive on Earth within roughly 1.36 billion years in expectation.
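A quick numerical cross-check of that posterior calculation (the hard-steps count below is an illustrative placeholder, not necessarily the value used above):

```python
# With likelihood proportional to 1/T^n on T >= t0 and a 1/T prior, the posterior
# is proportional to T^-(n+1) on [t0, inf), so E[T] - t0 should equal t0 / (n - 1).
import numpy as np

t0 = 13.8          # Gyr, our arrival time
n = 6              # illustrative hard-steps count (placeholder)
T = np.linspace(t0, 200 * t0, 500_000)
dT = T[1] - T[0]

posterior = T ** (-(n + 1))
posterior /= posterior.sum() * dT            # normalise numerically

expected_wait = (T * posterior).sum() * dT - t0
print(expected_wait, t0 / (n - 1))           # both ~2.76 Gyr for n = 6
```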
Hanson does something in the same spirit but different: he matches the median (he can do it for any percentile, but the central result is the one coming from the median) of the distribution of our arrival time directly with $T$ minus how long the grabby aliens would have to travel to get here. This seems reasonable but there's no formal justification for it as far as I can see. The Bayesian approach, however, doesn't raise any problems of anthropic reference classes and gives more or less the same answer.