Grabby Aliens could be Good, could be Bad

post by mako yass (MakoYass) · 2022-03-07T01:24:43.769Z · LW · GW · 10 comments

Contents

  This could be Very Good
  It's not as Bad as You Might Think
  Or Maybe it's a Lot Worse than You Might Expect

Robin Hanson's Grabby Aliens is a succinct model of how and when technological life will spread. It argues that we've simply arrived too early in the universe's lifespan for other civilizations to have grown to the point of being visible to us yet, but they are out there, and eventually enough civilizations will grow large enough that we'll start to run into each other.

The paper gave me the impression that this was kind of going to be a bad thing, for us, because it means there will be these rapacious colonizers penning us in on every side and forbidding us from rapaciously colonizing as much of the accessible universe as we otherwise would have liked to. I will argue that having neighbors might actually be really good, and then I will argue that it may also be extremely bad, in a way that I don't think the article touched on.

This could be Very Good

It's not as Bad as You Might Think

Or Maybe it's a Lot Worse than You Might Expect


Ultimately, though, it does not matter whether the abundance of technological life is "good" or "bad". It begins outside of our lightcone. There is absolutely nothing any of us could have done to prevent it. To experience an impulse to deem such a thing as "good" or "bad" and to feel regretful or elated about it might just be a sort of neurosis.

Regardless, I hope my musings here will be helpful to the en-fleshing of thy eschatology, good reader.

10 comments

Comments sorted by top scores.

comment by mako yass (MakoYass) · 2022-03-07T01:45:47.973Z · LW(p) · GW(p)

Okay, no, the Teilhardian laser-as-nanomanufacturer idea is probably not workable. I read an extremely basic article about laser attenuation and, bad news: lasers attenuate.

The best a laser could do to any of the planets around the nearest star seems to be making a pulse of somewhat bright light visible to all of them.
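For a sense of scale, here's the standard diffraction-limit estimate (spot diameter ≈ 2.44 λL/D). The aperture and wavelength below are illustrative assumptions, not figures from anything I read:

```python
# Diffraction-limited spot size of a laser aimed at the nearest star.
# Assumed numbers: 10 m emitter aperture, 1 micron (near-IR) light.
LY = 9.461e15            # metres per light-year
wavelength = 1e-6        # m (assumed)
aperture = 10.0          # m (assumed, telescope-class emitter)
distance = 4.24 * LY     # m, Proxima Centauri

spot = 2.44 * wavelength * distance / aperture  # Airy-disk diameter
print(f"Spot diameter at target: {spot:.2e} m (~{spot / 1.496e11:.2f} AU)")
# ~1e10 m, about 0.07 AU: a faint floodlight over a planetary-scale
# region, nowhere near the intensity needed for manufacturing.
```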

I still wonder about sending packets of resilient self-organizing material that could survive a landing, though.

Replies from: TLW, donald-hobson
comment by TLW · 2022-03-08T06:49:26.588Z · LW(p) · GW(p)

bad news: lasers attenuate.

Yep. There are hints that you might be able to alleviate this somewhat with a very powerful laser (vacuum self-focusing is arguably a thing[1], although I don't believe it has been observed thus far), but good luck getting the accuracy necessary to do anything with it beyond signaling.

(Ditto, a Bessel beam arguably doesn't attenuate... but requires infinite energy and beamwidth. Finite approximations do start attenuating eventually.)

Replies from: None
comment by [deleted] · 2022-03-08T18:44:41.813Z · LW(p) · GW(p)

Replies from: TLW
comment by TLW · 2022-03-09T02:51:12.634Z · LW(p) · GW(p)

I don't think there are enough stars in the universe for that.

Replies from: None
comment by [deleted] · 2022-03-09T03:59:49.673Z · LW(p) · GW(p)

Replies from: MakoYass
comment by mako yass (MakoYass) · 2022-03-10T00:14:57.656Z · LW(p) · GW(p)

It would be worth writing, yeah. It would be an update for me.

P(any civilization in its early computing stage will run any code that is sent to them) ≈ 1 for me; I'm not sure about the other terms. Transmission would also require that a civilization within the broadcast radius enters its computer age, and notices the message, before it matures and stops being vulnerable to being hacked, all before that region of space is colonized by a grabby civ. (Note, though: if this model of spread is practical, we might be able to assume that grabby civs can't otherwise expand at relativistic speeds. That buys some time before colonization blankets the region and stops these vulnerable ages from arising, though I'm not sure how much.)
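As a toy illustration of how those terms stack up, Drake-equation style; every value below is a placeholder assumption, not an estimate from this thread:

```python
# Toy product of the conditions listed above for a "radio hack" to land.
# All probabilities are placeholder assumptions for illustration only.
p_runs_code = 1.0      # P(young civ runs code beamed at it) -- the ~1 above
p_computer_age = 0.01  # P(target is currently in its vulnerable window)
p_notices = 0.1        # P(it scrutinizes the right patch of sky in time)
p_not_colonized = 0.5  # P(no grabby civ has blanketed the region yet)

p_hack = p_runs_code * p_computer_age * p_notices * p_not_colonized
print(f"P(hack succeeds against one target) = {p_hack:.4f}")  # 0.0005 here
```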

It's interesting that the attacker they end up noticing would be fairly random: less a matter of who is closest, more of which segment of the sky they happen to scrutinize first.

comment by Donald Hobson (donald-hobson) · 2022-03-09T23:38:11.518Z · LW(p) · GW(p)

There are two possible cheats I can think of around laser attenuation.

Firstly, attenuation depends on the radius of the emitter. If you have a 100 ly bubble of your tech, it should in principle be possible to do high-precision laser work 200 ly away: a whole bunch of lasers spread across your bubble, tuned to interfere in just the right way (in effect, a phased array the size of your civilization).
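Plugging those numbers into the same diffraction estimate as above (wavelength still an assumed 1 μm) shows why the emitter radius dominates:

```python
# Airy-disk estimate with the whole 100 ly bubble acting as one aperture.
LY = 9.461e15            # metres per light-year
wavelength = 1e-6        # m (assumed)
baseline = 100 * LY      # m, lasers spread across the civilization's bubble
distance = 200 * LY      # m, the target range from the comment

spot = 2.44 * wavelength * distance / baseline
print(f"Spot diameter at 200 ly: {spot * 1e6:.1f} micrometres")
# ~4.9 micrometres: diffraction stops being the obstacle entirely;
# the hard parts become phase-locking the emitters and aiming.
```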

Secondly, quantum entanglement. You can't target one photon precisely, but can you ensure two photons go in precisely the same direction as each other?

comment by Donald Hobson (donald-hobson) · 2022-03-09T23:45:25.404Z · LW(p) · GW(p)

A beamed mind is vulnerable. You send your mind into the grasp of unknown aliens and they can do whatever they like. Do you want to trust the aliens to be nice? 

Replies from: MakoYass
comment by mako yass (MakoYass) · 2022-03-10T00:36:52.820Z · LW(p) · GW(p)

For travel through neighboring grabby civs, mm, I guess you'd want to get to know them first. Are there ways they could prove that they're a certain kind of civ, with a certain trusted computing model, such that they provably won't leak you?

For travel through neighboring primitive civs in the vulnerable stage... Maybe you'd send a warrior emissary who doesn't attribute negative utility to any of its own states of mind. If it's successful... Hmm... it establishes an encryption protocol with home, and only then do you start sending softer minds.

But that would all take a long time. I wonder if there'd be a way of sending it with the encryption protocol already determined (so it could start accepting your minds without having to send you a public key first), in such a way that it would provably only be able to decrypt later messages if it conquered the target system successfully. Maybe the protocol would require it to spend more resources computing the keys than the extortion would be worth to the adversary: five years of multiple stars running hashers.
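One concrete way to cash that out is a time-lock puzzle: the decryption key is only derivable through a long, strictly sequential computation, so possessing it is evidence of having paid the compute cost. A minimal sketch with iterated SHA-256 (the iteration count here is tiny for demonstration; the real version would be sized at "five years of stars"):

```python
import hashlib

def derive_key(seed: bytes, iterations: int) -> bytes:
    """Iterated hash: each step needs the previous output, so the work
    is inherently sequential and must simply be paid for in compute."""
    digest = seed
    for _ in range(iterations):
        digest = hashlib.sha256(digest).digest()
    return digest

# Home civ, before launch: pays the sequential cost once, then encrypts
# all later mind-transmissions under the resulting key.
SEED = b"seed broadcast openly alongside the emissary"  # not a secret
WORK = 100_000  # stand-in for "five years of multiple stars running hashers"
key = derive_key(SEED, WORK)

# Emissary, after securing the target system's compute: re-derives the
# key by redoing the work, and only then can decrypt incoming minds.
assert derive_key(SEED, WORK) == key
```

Note that this gates decryption economically rather than cryptographically: an extortionist with comparable compute could also pay the cost, which is why the cost has to exceed what the extortion would be worth.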

Might not be the most profitable approach.

Maybe a mindpattern that elegantly mixes suffering-proof eudaimonia generation with the production of proofs of conquest.

comment by mako yass (MakoYass) · 2022-03-18T00:11:02.755Z · LW(p) · GW(p)

This post is relevant, and has more to say about the benefits of neighbors in approaching lightspeed travel https://www.lesswrong.com/posts/DWHkxqX4t79aThDkg/my-current-thoughts-on-the-risks-from-seti#Alien_expansion_and_contact [LW · GW]

Apparently there's an Armstrong–Sandberg paper that found that getting to 99% of lightspeed is totally feasible with coilguns. So the benefits are mild.
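For a sense of what 99% of c costs, plain special relativity, assuming nothing beyond the quoted speed:

```python
import math

c = 2.998e8                              # m/s, speed of light
v = 0.99 * c
gamma = 1 / math.sqrt(1 - (v / c) ** 2)  # Lorentz factor, ~7.09
ke_per_kg = (gamma - 1) * c ** 2         # relativistic kinetic energy, J/kg

print(f"gamma = {gamma:.2f}")
print(f"kinetic energy = {ke_per_kg:.2e} J per kg of payload")
# ~5.5e17 J/kg, roughly 1.4 nanoseconds of the Sun's total output:
# trivial for a star-scale launcher, hence neighbors add little here.
```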

comment by Flaglandbase · 2022-03-07T08:11:10.951Z · LW(p) · GW(p)

I suspect there are infinitely many copies of each of our minds spread throughout the Omniverse (or certainly more than a hundred).

These minds have identical experiences, but may live under different laws of physics without knowing it. A lucky minority must live in universes where vacuum decay is impossible, including almost all of our distant descendants. 

But it is worrying and unpleasant that we seem to live so close to the beginning of time rather than an endless utopia - almost as if that won't happen at all. The only solution may be that young universes are somehow constantly being generated within older universes.

Replies from: donald-hobson
comment by Donald Hobson (donald-hobson) · 2022-03-09T23:43:06.854Z · LW(p) · GW(p)

Vacuum decay isn't enough to explain us finding ourselves here this early. Even if aliens appear often, and all want vacuum decay, and we don't, we can still expect millions of years before it hits us. In a million years, a Dyson sphere can hold a huge number of humans (even more with mind uploading). Ergo us being this early is still a surprise.
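Rough numbers behind that claim; the per-person power budget is an assumed figure:

```python
# Energy-budget bound on a Dyson sphere's population.
solar_luminosity = 3.8e26  # W, total solar output captured
power_per_human = 1e4      # W per person, an assumed generous budget
                           # (rich-country use today is a few kW)

population = solar_luminosity / power_per_human
print(f"Supportable population: {population:.1e}")  # ~3.8e22 people
# ~10^22, trillions of times Earth's current population, so a random
# observer should overwhelmingly expect to find itself late, not early.
```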

 What you need to make our position fairly typical is either descendants who run lots of ancestor sims, or others who sim us. Or us being utterly doomed to destroy ourselves. The only serious candidate for something this doomy is UFAI.