So You Want to Colonize The Universe
post by Diffractor · 2019-02-27T10:17:50.427Z · LW · GW · 18 comments

Contents:
Part 1a: Gotta Go Fast, Astronomy is a-Wasting
Part 1b: Stars For the Black Hole God! Utils For the Util Throne!
Part 1c: The Triple Tradeoff
Part 1d: An Exploitable Fiction Opportunity
Epistemic Status: Mix of facts, far-future speculation with the inevitable biases from only considering techniques we know are physically possible, Fermi calculations, and an actual spacecraft design made during a one-week research bender.
(This is a sequence: 2 [LW · GW], 3 [LW · GW], 4 [LW · GW], 5 [LW · GW].)
Part 1a: Gotta Go Fast, Astronomy is a-Wasting
Once a civilization or agent grows up enough to set its sights on colonizing realms beyond its host planet as its first priority (instead of averting existential risks), a very strong convergent instrumental goal kicks in: namely, going as close to lightspeed (abbreviated as c) as possible.
This is because the universe is expanding, so there is a finite sphere of reachable galaxies, and more pass outside the horizon every year. IIRC (60% probability), about half of the galaxies in the Hubble Ultra-Deep Field are unreachable even if we traveled at lightspeed.
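To make the horizon claim concrete, here's a minimal sketch (my own check, not from the post) that computes, under assumed flat ΛCDM parameters, the redshift beyond which a probe launched today at lightspeed can never arrive; it lands a bit below z ≈ 2, consistent with a sizable fraction of the Ultra-Deep Field already being out of reach:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

H0 = 67.7               # Hubble constant, km/s/Mpc (assumed Planck-like value)
Om, OL = 0.31, 0.69     # matter and dark-energy fractions (assumed)
c = 299792.458          # speed of light, km/s

def H(a):
    # Hubble rate at scale factor a, km/s/Mpc, for flat Lambda-CDM
    return H0 * np.sqrt(Om / a**3 + OL)

# Comoving distance (Mpc) that light launched today (a = 1) can ever cover
d_reach = c * quad(lambda a: 1.0 / (a**2 * H(a)), 1.0, np.inf)[0]

def d_comoving(z):
    # Comoving distance (Mpc) to a galaxy seen today at redshift z
    return c * quad(lambda zp: 1.0 / H(1.0 / (1.0 + zp)), 0.0, z)[0]

# Redshift beyond which a lightspeed probe launched now can never arrive
z_max = brentq(lambda z: d_comoving(z) - d_reach, 0.01, 20.0)

print(f"reachable comoving radius: {d_reach * 3.26e-3:.1f} Gly")  # ~16-17 Gly
print(f"unreachable beyond z ~ {z_max:.2f}")                      # roughly 1.7-1.9
```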
Arriving at a galaxy even one year faster nets you a marginal gain of (one galaxy of stars) × (average stellar luminosity) × (1 year) of energy, which for our Milky Way comes out to about 10^44 joules. Assuming energy production on Earth stays constant, that's enough energy for a billion years of Earth civilization, 130 trillion times over. And I'd expect a transhuman civilization to be quite a few orders of magnitude better at getting value from a joule of energy than our current civilization. And that's just for a single galaxy. There are a lot of galaxies, so a one-year speedup in reaching them has tremendous value.
This is basically Bostrom's astronomical waste argument, except Bostrom's version then goes on to apply this loss in total form (which is far larger) instead of marginal form, to argue for the value of reducing existential risk.
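Before the corrections, a quick sanity check of the marginal-gain arithmetic above (a Fermi sketch with my own round inputs, which land on the same order of magnitude as the post's figure):

```python
# Rough Fermi check of the one-year marginal gain; input values are
# standard round numbers, not necessarily the post's exact inputs.
L_galaxy = 5e36            # total stellar luminosity of the Milky Way, W (rough)
year = 3.15e7              # seconds per year
E_gain = L_galaxy * year   # energy gained by arriving one year sooner
print(f"one-year head start: {E_gain:.1e} J")          # ~1.6e44 J

world_per_year = 6e20      # current world energy production, J/yr (rough)
billion_years = world_per_year * 1e9
print(f"= {E_gain / billion_years / 1e12:.0f} trillion x (a billion years "
      f"of present-day Earth usage)")   # same order as the post's 130 trillion
```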
Now, there are a few corrections to this to take into account. The first and most important is that, by the Landauer limit, the amount of (irreversible) computations that can be done is inversely proportional to temperature, so waiting until the universe cools down from its current 2.7 K temperature nets you several orders of magnitude more computational power from a given unit of energy than spending it now. Also, if you do reversible computation, this limit doesn't apply except to bit erasures, which nets you a whole lot more orders of magnitude of computation.
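A minimal sketch of the Landauer arithmetic; the far-future temperature below is purely an assumed illustration (the CMB temperature falls as the universe expands, so the gain grows the longer you wait):

```python
import numpy as np

k_B = 1.380649e-23         # Boltzmann constant, J/K

def erasures_per_joule(T):
    # Landauer limit: each irreversible bit erasure costs >= k_B * T * ln(2),
    # so erasures per joule scale as 1/T
    return 1.0 / (k_B * T * np.log(2))

T_now = 2.7                # current CMB temperature, K
T_later = 1e-3             # hypothetical far-future background temperature, K
print(f"now:   {erasures_per_joule(T_now):.2e} bits/J")
print(f"later: {erasures_per_joule(T_later):.2e} bits/J")
print(f"gain:  {T_now / T_later:,.0f}x from waiting")   # ~2,700x here; more if colder
```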
Another correction is that if there are aliens, they'll be rushing to colonize the universe, so the total volume a civilization can grab is going to be much smaller than the entire universe. There's still an incentive to go fast to capture more stars before they do, though.
There's also a correction where the total energy available from colonizing at all is more like the mass-energy of a galaxy than the fusion power of a galaxy, for reasons I'll get to in a bit. The marginal loss from tarrying for one year is about the same, though.
And finally, if we consider the case where there aren't aliens and we're going up to the cosmological horizon, the marginal loss is less than stated for very distant galaxies, because by the time you get to a distant galaxy, it will have burned down to red dwarfs only, which aren't very luminous.
Putting all this together, we get the conclusion that, for any agent whose utility function scales with the amount of computations done, the convergent strategy is "go really really fast, capture as many galaxies as possible, store as much mass-energy as possible in a stable form and ensure competitors don't arise, then wait until the universe is cold and dead to run ultra-low-temperature reversible computing nodes."
Part 1b: Stars For the Black Hole God! Utils For the Util Throne!
Now, banking mass-energy for sextillions of years in a way that doesn't decay is a key part of this, and fortunately, there's something in nature that does it! Kerr black holes spin rapidly, and warp space around them in such a way that it's possible to recover some energy from them, at the cost of spinning down the black hole slightly. For a maximally spinning black hole, 29% of the mass-energy can be recovered via either the Penrose Process (throwing something near the hole in a way that involves it coming back with more energy than it went in with) or the Blandford-Znajek Process (setting up a magnetic field around the hole that induces a current; this is a major process powering quasars). I'm more partial to the second because it produces a current. Most astrophysical black holes are Kerr black holes, and we've found quite a few (including supermassive ones) spinning at around 0.9x the maximum, so an awful lot of energy can be extracted from them. So, if we sacrificed the entire Milky Way galaxy to the black hole at its center by nudging stellar orbits until they all went in, we'd have 5x10^57 joules of extractable energy to play around with. Take a minute to appreciate how big this is. And remember, this is per-galaxy. Another order of magnitude could be gotten if there's some way for a far-future civilization to interface with dark matter.
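A quick check of that figure (a Fermi sketch; the Milky Way's stellar mass here is an assumed round number):

```python
# Rough check of the 5x10^57 J figure for feeding a galaxy's stars
# to a maximally spinning Kerr black hole.
M_sun = 2e30               # kg
M_stars = 1e11 * M_sun     # assumed stellar mass of the Milky Way, kg
c = 3e8                    # speed of light, m/s
eta = 0.29                 # extractable fraction for a maximally spinning Kerr hole
E = eta * M_stars * c**2
print(f"{E:.1e} J")        # ~5.2e57 J, matching the post's figure
```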
So, the dominant strategy is something like "get to as much of the universe as fast as possible, and sacrifice all the stars you encounter to the black hole gods, and then in the far far future you can get the party started, with an absolutely ridiculous amount of energy at your disposal, and also the ability to use a given unit of energy far far more efficiently than we can today, by reversible computation and the universe being really cold"
Due to the expansion of the universe, these mega-black-holes will be permanently isolated from each other. I think Eliezer's proposal was to throw as much mass back to the Milky Way as possible and set up shop there, instead of cutting far-future civilization into a bunch of absolutely disconnected islands. (It's a fuzzy memory, and Eliezer is welcome to correct me if I've misrepresented his views; I'll edit this section.) I don't think this is as good, because I'd prefer a larger civilization spread over the whole universe (from not having to throw mass back to the Milky Way, just to the nearest hole), cut into more disconnected islands, over a much smaller civilization that's all in one island.
Part 1c: The Triple Tradeoff
In unrelated news, there's also an unsettling argument I came up with: there's a convergent incentive to reduce the computational resources each computing node consumes. If you could switch a simulated world to a lower-fidelity version that's 80% as fun but takes only a fifth of the computational resources, 5x as much lifespan becomes available, for a 4x gain in total value (0.8 × 5 = 4) if fun aggregates linearly over time; I think I'd take that bargain. Taking this to the endpoint, I joked on a Dank EA Memes poll that the trillion trillion heavens of the far future all have shitty Minecraft graphics, but I'm actually quite uncertain how that ends up. There's also the argument that most of the computational power goes to running the minds themselves and not the environment, in which case there's an incentive to simplify one's own thought processes so they take fewer resources.
Generalizing this, there seems to be a three-way tradeoff between population, lifespan, and computational resources consumed per member. Picking the population extreme, you get a gigantic population of short-lived simple agents. Picking the lifespan extreme, you get a small population of simple agents living for a really, really long time. Picking the computational-resources extreme, you get a small population of short-lived, really really posthuman agents. (Note that "short-lived" can still be quite long relative to human lifespans.) I'm not sure what the best tradeoff point is here, and it may vary by person, so maybe the right policy is something like: "you get a finite but ridiculously large amount of computational resources, and if you want to be simpler and live longer, or go towards ever-greater heights of posthumanity with a shorter life, or have a bunch of babies and split your resources with them, you can do that."

However, that approach would lead to most of the population being descended from people who valued reproduction over long life or extreme posthumanity, and they'd get less resources apiece, which seems intuitively bad. Also, maybe there could be merging of people, with associated pooling of resources? I'm not quite sure how to navigate this tradeoff, except to say that the population extreme seems bad, and that it's a really important far-future issue. I should also point out that under this policy, in the long-time limit, most of those left will be those that favored lifespan over extreme posthumanity or reproduction, so the last thing left living before heat death might actually be a minimally-resource-intensive conscious agent in a world with low-quality graphics.
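A toy model makes the tradeoff explicit (assumptions are mine, not the post's: a fixed total compute budget, and value that adds linearly across minds and across time):

```python
# Toy model of the population / lifespan / per-mind-compute tradeoff.
def total_value(B, n, r, v):
    """n minds, each consuming compute at rate r, living for T = B/(n*r),
    each producing subjective value at rate v(r) while alive."""
    T = B / (n * r)                 # lifespan the budget allows
    return n * T * v(r)             # algebraically equal to B * v(r) / r

# The post's bargain: 80% of the fun at one fifth of the compute.
v = lambda r: 1.0 if r >= 1.0 else 0.8    # hypothetical two-point value curve
print(total_value(B=1.0, n=1, r=1.0, v=v))   # baseline: 1.0
print(total_value(B=1.0, n=1, r=0.2, v=v))   # low-fidelity: 4.0
```

Under those (strong) linearity assumptions the population term cancels and total value is B·v(r)/r, so everything hinges on the shape of v(r); the real disagreements are exactly about whether value is linear in population and lifespan in the first place.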
Part 1d: An Exploitable Fiction Opportunity
Also, in unrelated news, I think I see an exploitable gap in fiction-writing. The elephant in the room for all space-travel stories is that space is incompatible with mammals, and due to advances in electronics, it just makes more sense to send up robotic probes.
However, Burnside's Zeroth Law of Space Combat is:
Science fiction fans relate more to human beings than to silicon chips.
I'm not a writer, but this doesn't strike me as entirely true, due to the tendency of humans to anthropomorphize. When talking about the goal of space travel being to hit up as many stars and galaxies as possible as fast as possible, and throw them into black holes, the very first thing that came to mind was "aww, the civilization is acting just like an obsessive speedrunner!"
I like watching speedruns; it's absolutely fascinating watching that much optimization power being directed at the task of going as fast as possible in defiance of the local rules, and finding cool exploits. I'd totally read about the exploits of a civilization that's overjoyed to find a way to make their dust shields 5% more efficient because that means they can reach a few thousand more galaxies, or about Vinny the Von Neumann probe struggling to be as useful as it can given that it was sent to a low-quality asteroid, and stuff like that. The stakes are massive; you just need to put in some work to make the marginal gain from accelerated colonization vivid for the reader. It's the ultimate real-life tale of munchkinry for massive stakes, there's ample "I know you know I know..." reasoning introduced by light-speed communication delays, and everyone's on the same side vs. nature.
I think Burnside might have been referring to science fiction for a more conventional audience, given the gap between his advice and my own reaction. But hard-sci-fi fans are already a pretty self-selected group, and Less Wrong readers even more so; besides, with the advent of the internet, really niche fiction is a lot easier to pull off. It feels like there's a Hard-SciFi x Speedrunning niche out there available to be filled. A dash of anthropomorphization along with Sufficiently Intelligent Probes feels like it could go a long way towards making people relate to the silicon chips.
So I think there's an exploitable niche here.
Putting all this to the side, though, I'm interested in the "go fast" part. Really, how close to light speed is attainable?
18 comments
comment by avturchin · 2019-02-27T11:40:52.805Z · LW(p) · GW(p)
One thing is important: starting space colonisation at near-lightspeed is an irreversible act, and even a small probability of miscalculation should be taken seriously. The error could be in the end goals or in our model of the ultimate fate of the universe. I would spend a million years thinking locally about what to do, even paying the astronomical-waste cost of delaying space exploration.
↑ comment by Charlie Steiner · 2019-02-27T18:59:43.916Z · LW(p) · GW(p)
How about deliberately launching probes that you could catch up with by expending more resources, but are still adequate to reach our local group of galaxies? That sounds like a pretty interesting idea. Like "I'm pretty sure moral progress has more or less converged, but in case I really want to change my mind about the future of far-off galaxies, I'll leave myself the option to spend lots of resources to send a faster probe."
If we imagine that the ancient Greeks had the capability to launch a Von Neumann probe to a receding galaxy, I'd rather they do it (and end up with a galaxy full of ancient Greek morality) than they let it pass over the cosmic horizon.
↑ comment by Diffractor · 2019-02-27T19:28:57.684Z · LW(p) · GW(p)
Or maybe accepting messages from home (in rocket form or not) of "whoops, we were wrong about X, here's the convincing moral argument" and acting accordingly. Then the only thing to be worried about would be irreversible acts done in the process of colonizing a galaxy, instead of having a bad "living off resources" endstate.
↑ comment by TheMajor · 2019-02-28T15:14:53.910Z · LW(p) · GW(p)
I think this is an interesting idea, but doesn't really intersect with the main post. The marginal benefits of reaching a galaxy earlier are very very huge. This means that if we are ever in the situation where we have some probes flying away, and we have the option right now to build faster ones that can catch up, then this makes the old probes completely obsolete even if we give the new ones identical instructions. The (sunk) cost of the old probes/extra new probes is insignificant compared to the gain from earlier arrival. So I think your strategy is dominated by not sending probes that you feel you can catch up with later.
↑ comment by Donald Hobson (donald-hobson) · 2021-08-27T14:38:18.126Z · LW(p) · GW(p)
You can send probes programmed to just grab resources, build radio receivers and wait.
↑ comment by Diffractor · 2019-02-27T19:31:12.774Z · LW(p) · GW(p)
Agreed. Also, there's an incentive to keep thinking about how to go faster rather than launching immediately: launch only once one more day of design thought speeds the rocket up by less than one day, since otherwise you'll get overtaken. And there's an incentive to agree on a coordinated plan ahead of time (you get this galaxy, I get that galaxy, etc...) to avoid issues with lightspeed delays.
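As a toy illustration of that stopping rule (my sketch, not the commenter's math; the speed-vs-design-time curve is entirely hypothetical):

```python
import numpy as np

D = 3.0e6                   # distance to target in light-days (hypothetical)

def v(t):
    # achievable cruise speed (fraction of c) after t days of design work;
    # a made-up diminishing-returns curve purely for illustration
    return 0.99 * (1.0 - np.exp(-t / 2.0e4))

t = 1.0
while True:
    saved = D / v(t) - D / v(t + 1)   # travel days saved by one more design day
    if saved < 1.0:                   # marginal gain below marginal cost: launch
        break
    t += 1.0
print(f"launch after ~{t:,.0f} design days, cruising at {v(t):.4f} c")
```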
comment by Vanessa Kosoy (vanessa-kosoy) · 2019-03-01T17:06:42.890Z · LW(p) · GW(p)
I think that the Landauer limit argument was debunked.
↑ comment by Vakus Drake (vakus-drake) · 2019-03-11T02:54:53.124Z · LW(p) · GW(p)
You're misunderstanding the argument. The article you link is about the aestivation hypothesis, which is basically the opposite of the "expand as fast as possible" strategy put forth here. The article doesn't say that computation can't be done orders of magnitude more efficiently once the universe is in the degenerate era; it just says that there's lots of negentropy you'll never get the chance to exploit if you don't take advantage of it now.
comment by habryka (habryka4) · 2019-02-28T19:40:16.490Z · LW(p) · GW(p)
This was marked as "Don't promote this to the frontpage", but I would totally promote this to the frontpage, and also that feature is still a bit janky so I wasn't sure how much to trust that. Let me know whether you want us to promote the whole series to the frontpage.
↑ comment by habryka (habryka4) · 2019-02-28T20:18:57.755Z · LW(p) · GW(p)
Note: Promoted them to frontpage. Let me know if you want to reverse it.
↑ comment by Diffractor · 2019-02-28T21:10:48.895Z · LW(p) · GW(p)
Whoops, I guess I messed up on that setting. Yeah, it's ok.
↑ comment by habryka (habryka4) · 2019-02-28T21:18:56.438Z · LW(p) · GW(p)
Almost certainly our fault. We had some janky behavior on that checkbox that caused a bunch of false positives. Sorry for that. (Will be fixed in the next few hours)
comment by habryka (habryka4) · 2019-05-10T04:52:57.540Z · LW(p) · GW(p)
I added this post to a sequence and transferred ownership to you. This now makes it so that the sequence navigation always shows up when someone navigates to any of the posts in the sequence.
I was doing a bunch of research on space colonization and ran across this series of posts, so I figured I would clean it up and make it easier for people to find.
comment by DataPacRat · 2019-02-27T15:45:18.831Z · LW(p) · GW(p)
If you want to learn more about the interplanetary and interstellar scale of this sort of colony-ship design, you could do a lot worse than to pick up the 3rd edition of the boardgame "High Frontier" by Phil Eklund. Its reference guide (PDF here) includes a couple of dozen pages of descriptions of how the game's various reactors, radiators, drives, and other pieces work, with references to the original design papers. For a wider overview of related ideas, the indispensable resource is the Atomic Rockets site.
↑ comment by Diffractor · 2019-02-27T17:57:20.690Z · LW(p) · GW(p)
Yeah, Atomic Rockets was an incredibly helpful resource for me, I definitely endorse it for others.
comment by avturchin · 2019-03-05T08:35:53.576Z · LW(p) · GW(p)
One more thing popped into my mind: tiling the universe with happy minds may be the wrong goal. The really good goal is figuring out how to survive the end of the universe. I had a post [LW · GW] with many ideas on how to survive such an end depending on its type (Big Crunch, Big Rip, heat death), and some literature also exists, e.g. Tipler's Omega Point for the Big Crunch.
It is the same tradeoff any person faces now: try to live as happy a life as possible, or try to reach indefinite life extension.