To reduce astronomical waste: take your time, then go very fast

post by Stuart_Armstrong · 2013-07-13T16:41:05.623Z · LW · GW · Legacy · 50 comments

While we dither on the planet, are we losing resources in space? Nick Bostrom has an article on astronomical waste, talking about the vast amounts of potentially useful energy that we're simply not using for anything:

As I write these words, suns are illuminating and heating empty rooms, unused energy is being flushed down black holes, and our great common endowment of negentropy is being irreversibly degraded into entropy on a cosmic scale. These are resources that an advanced civilization could have used to create value-structures, such as sentient beings living worthwhile lives.

The rate of this loss boggles the mind. One recent paper speculates, using loose theoretical considerations based on the rate of increase of entropy, that the loss of potential human lives in our own galactic supercluster is at least ~10^46 per century of delayed colonization.

On top of that, galaxies are slipping away from us because of the accelerating expansion of the universe (x axis in years since the Big Bang, cosmic scale factor arbitrarily set to 1 at the current day):

At the rate things are going, we seem to be losing slightly more than one galaxy a year. One entire galaxy, with its hundreds of billions of stars, is slipping away from us each year, never to be interacted with again. This is many solar systems a second; poof! Before you've even had time to grasp that concept, we've lost millions of times more resources than humanity has even used.
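
A quick arithmetic check of that claim (taking a couple hundred billion stars per galaxy as a round, assumed figure):

```python
# One galaxy lost per year, expressed as solar systems lost per second.
stars_per_galaxy = 2e11     # assumed round figure ("hundreds of billions")
seconds_per_year = 3.156e7
print(f"{stars_per_galaxy / seconds_per_year:,.0f} solar systems per second")
# -> roughly 6,000 per second
```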

So it would seem that the answer to this desperate state of affairs is to rush things: start expanding as soon as possible, greedily grabbing every scrap of energy and negentropy before it vanishes forever.

Not so fast! Nick Bostrom's point was not that we should rush things, but that we should be very very careful:

However, the lesson for utilitarians is not that we ought to maximize the pace of technological development, but rather that we ought to maximize its safety, i.e. the probability that colonization will eventually occur.

If we rush things and lose the whole universe, then we certainly don't come out ahead in this game.

But let's ignore that; let's pretend that we've solved all risks, that we can expand safely, without fear of messing things up. Right. Full steam ahead to the starships, no?

No. Firstly, though the losses are large in absolute terms, they are small in relative terms. Most of the energy of a star is contained in its mass. The light streaming through windows in empty rooms? A few specks of negentropy that barely diminish the value of the huge hoard that is the stars' physical bodies (which can be harvested by, e.g., dropping them slowly into small black holes and feeding off the Hawking radiation). And we lose a galaxy a year - but there are still billions out there. So waiting a while isn't a major problem, if we can gain something by doing so. Gain what? Well, maybe just going a tiny bit faster.

In a paper published with Anders Sandberg, we looked at the ease and difficulty of intergalactic or universal expansion. It seems to be surprisingly easy (which has a lot of implications for the Fermi Paradox), given sufficient automation or AI. About six hours of the sun's energy would be enough to launch self-replicating probes to every reachable galaxy in the entire universe. We could get this energy by constructing a Dyson swarm around the sun - by, for instance, disassembling Mercury. This is the kind of task that would be well within the capacities of a decently automated manufacturing process. A video overview of the process can be found in this talk (and a longer exposition, with slightly older figures, can be found here).
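
As a rough sanity check on that energy budget, here is a back-of-the-envelope sketch (the 30 kg probe mass and 80%c launch speed are illustrative assumptions, not the paper's figures, and the reaction mass needed for deceleration is ignored):

```python
# How many relativistic probes could six hours of the Sun's output launch?
L_SUN = 3.8e26            # solar luminosity, W
C = 3.0e8                 # speed of light, m/s

budget_j = L_SUN * 6 * 3600          # energy collected in six hours, ~8e30 J

def kinetic_energy(mass_kg, beta):
    """Relativistic kinetic energy (gamma - 1) * m * c^2."""
    gamma = 1.0 / (1.0 - beta**2) ** 0.5
    return (gamma - 1.0) * mass_kg * C**2

probe_kg = 30.0                          # assumed payload per probe
e_probe = kinetic_energy(probe_kg, 0.8)  # ~1.8e18 J at 80%c
print(f"budget {budget_j:.1e} J, per probe {e_probe:.1e} J")
print(f"probes affordable: {budget_j / e_probe:.1e}")    # ~4.6e12
```

Even with generous margins for inefficiency and deceleration reaction mass, the six-hour budget buys on the order of a thousand launches per reachable galaxy in the table below.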

How fast will those probes travel? This depends not on the acceleration phase (which can be done fine with quench guns or rail guns, or lasers into solar sails), but on the deceleration. The relativistic rocket equation is vicious: it takes a lot of reaction mass to decelerate even a small payload. If fission power is used, deceleration from 50%c is about all that's reasonable. With fusion, we can push this to 80%c, while with matter-antimatter reactions, we can get to 99%c. The top speed of 99%c is also obtainable if we have more exotic ways of decelerating. This could mean somehow using resources from the target galaxy (cunning gravitational braking or Bussard ramjets or something), or using the continuing expansion of the universe to bleed speed away (this is most practical for the most distant galaxies).
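
To see just how vicious, here is a minimal sketch of that equation (the exhaust speeds are rough illustrative values, not the paper's figures):

```python
# Relativistic rocket equation, deceleration burn only (launch acceleration
# is handled externally, per the post). Mass ratio = initial/final mass
# needed to brake from cruise speed beta to rest:
#   R = ((1 + beta) / (1 - beta)) ** (1 / (2 * v_ex))
# with beta and the exhaust speed v_ex both given as fractions of c.

def mass_ratio(beta, v_ex):
    return ((1 + beta) / (1 - beta)) ** (1 / (2 * v_ex))

# Rough illustrative exhaust speeds (fractions of c).
drives = {"fission": 0.04, "fusion": 0.12, "antimatter photon": 1.0}
for name, v_ex in drives.items():
    ratios = ", ".join(f"{b:.0%}c: {mass_ratio(b, v_ex):.2g}" for b in (0.5, 0.8, 0.99))
    print(f"{name:>17} -> {ratios}")
```

With these assumptions, a photon rocket braking from 99%c needs a mass ratio of about 14, while a fission drive attempting the same would need roughly 10^29 - which is why each drive has a practical speed ceiling.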

At these three speeds (and at 100% c), we can reach a certain distance into the universe, in current comoving coordinates, as shown by this graph (the x axis is in years since the Big Bang, with the origin set at the current day):

The maximum reached at 99%c is about 4 gigaparsecs - not a unit often used in casual conversation! If we can reach these distances, we can claim approximately this many galaxies:

Speed    Distance (Parsecs)    # of Galaxies
50%c     1.24×10^9             1.16×10^8
80%c     2.33×10^9             7.62×10^8
99%c     4.09×10^9             4.13×10^9
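
Where do such figures come from? A minimal numerical sketch (standard rough cosmological parameters, and a coasting probe whose momentum redshifts away as 1/a; the paper's probes decelerate and use different inputs, so the table won't be reproduced exactly):

```python
# A coasting probe's momentum redshifts as 1/a, so expansion bleeds away its
# speed; integrate the flat-LambdaCDM Friedmann equation to estimate how far
# (in comoving distance) a probe launched today can ever reach.
import numpy as np
from scipy.integrate import solve_ivp, quad

H0 = 67.0 / 3.086e19        # 67 km/s/Mpc in 1/s
OM, OL = 0.31, 0.69         # matter and dark-energy fractions (flat universe)
C = 3.0e8                   # m/s
M_PER_MPC = 3.086e22
T_END = 150e9 * 3.156e7     # integrate 150 Gyr into the future, in seconds

def a_dot(t, a):
    # Friedmann equation, flat LambdaCDM: da/dt = H0 * sqrt(OM/a + OL*a^2)
    return H0 * np.sqrt(OM / a + OL * a**2)

sol = solve_ivp(a_dot, (0, T_END), [1.0], dense_output=True, rtol=1e-8)

def comoving_reach_mpc(beta0):
    u0 = beta0 / np.sqrt(1 - beta0**2)   # gamma*beta at launch, decays as 1/a
    def integrand(t):
        a = sol.sol(t)[0]
        u = u0 / a
        return (u / np.sqrt(1 + u**2)) * C / a   # comoving speed, m/s
    d, _ = quad(integrand, 0, T_END, limit=200)
    return d / M_PER_MPC

for beta in (0.5, 0.8, 0.99):
    print(f"{beta:.0%}c -> roughly {comoving_reach_mpc(beta):,.0f} Mpc comoving")
```

The key feature is the hard ceiling: pushing the speed toward c buys less and less comoving distance, because distant galaxies recede past reach while the probe is still en route.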

These numbers don't change much if we delay. Even wasting a million years won't show up in these figures: it's a rounding error. Why is this?

Well, a typical probe will be flying through space, at quasi-constant velocity, for several billion years. Gains in speed make an immense difference, as they compound over the whole duration of the trip; gains from an early launch, not so much. So if we have to wait a million years to squeeze out an extra 0.1% of speed, we still come out ahead. Waiting for extra research is therefore almost always sensible (except for the closest galaxies). If we can get more efficient engines, more exotic ways of shielding the probe, or new methods of deceleration, the benefits will be immense.
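
A toy version of the tradeoff (static space, illustrative numbers):

```python
# Is a million-year research delay worth an extra 0.1 percentage point of
# speed over a 4-billion-light-year trip? (No expansion; toy numbers.)
dist_ly = 4e9          # distance to a far galaxy, light-years (illustrative)
delay_yr = 1e6         # research delay before launch

t_now = dist_ly / 0.99                    # launch now at 99.0% c
t_later = delay_yr + dist_ly / 0.991      # launch later at 99.1% c
print(f"launch now:   arrives after {t_now:.4e} yr")
print(f"launch later: arrives after {t_later:.4e} yr")
print(f"later launch still wins by {t_now - t_later:.2e} yr")   # ~3e6 yr
```

Despite the million-year head start, the slower probe arrives about three million years later.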

So, in conclusion: To efficiently colonise the universe, take your time. Do research. Think things over. Go to the pub. Saunter like an Egyptian. Write long letters to mum. Complain about the immorality of the youth of today. Watch dry paint stay dry.

But when you do go... go very, very fast.

50 comments

Comments sorted by top scores.

comment by Pablo (Pablo_Stafforini) · 2013-07-11T16:47:58.618Z · LW(p) · GW(p)

In a paper published with Anders Sandberg, we looked at the ease and difficulty of intergalactic or universal expansion.

free pdf

comment by Will_Newsome · 2013-07-11T21:48:13.543Z · LW(p) · GW(p)

Similar considerations apply to possible singleton-ish AGIs that might be architecturally constrained to varying levels of efficiency in optimization, e.g. some decision theories might coordinate poorly and so waste the cosmic commons. Thus optimizing for an AGI's mere friendliness to existent humans could easily be setting much too low a bar, at least for perspectives and policies that include a total utilitarian bent—something much closer to "ideal" would instead have to be the target.

Replies from: Wei_Dai, Stuart_Armstrong
comment by Wei Dai (Wei_Dai) · 2013-07-11T22:14:49.707Z · LW(p) · GW(p)

This is why I don't like "AI safety". It's implicitly setting the bar too low. "Friendliness" in theory has the same problem, but Eliezer actually seems to be aiming at "ideal", while working under that name. When Luke asked for suggestions for how to rename their research program, I suggested "optimal AI", but he didn't seem to like that very much.

Replies from: ESRogs, ikrase
comment by ESRogs · 2013-07-13T07:56:32.489Z · LW(p) · GW(p)

How about BAI for Best AI?

comment by ikrase · 2013-07-17T22:01:47.420Z · LW(p) · GW(p)

This is actually more or less what I am getting at when I talk about distinctions between FAI and Obedient AI.

comment by Stuart_Armstrong · 2013-07-12T04:58:57.997Z · LW(p) · GW(p)

Life got so much simpler when I went anti-total utilitarianism :-)

But actually your point stands, in a somewhat weaker form, for any system that likes resources and dislikes waste.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2013-07-12T07:45:14.903Z · LW(p) · GW(p)

Life got so much simpler when I went anti-total utilitarianism

Mmh, life should be "hard" for proponents of any theory of population ethics. See Arrhenius (2000) and Blackorby, Bossert & Donaldson (2003).

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2013-07-12T07:55:05.238Z · LW(p) · GW(p)

Yes, but total utilitarianism has an unbounded utility function, with the present state being vanishingly low on that scale, so it creates pressure to expand like few other theories do.

comment by [deleted] · 2013-07-11T23:36:55.505Z · LW(p) · GW(p)

So, in conclusion: To efficiently colonise the universe, take your time. Do research. Think things over. Go to the pub. Saunter like an Egyptian. Write long letters to mum. Complain about the immorality of the youth of today. Watch dry paint stay dry. But when you do go... go very, very fast.

Your general argument that speed is more important than starting time seems plausible, but this paragraph (despite being well-written!) is a bit misleading. As you imply, colonising "the closest galaxies" gives us far more resources to put towards research.

Actually, it seems odd to limit it to just the closest galaxies. If spreading is pretty easy (as you say), and more resources means more research, we should still be trying to colonise pretty fast.

Presumably our colonisers should be programmed to use the resources they find to both replicate and research more efficient propulsion (before converting to running emulations/simulations at the heat death of the universe or doing the hedonic shockwave or whatever).

Replies from: bogdanb, Decius
comment by bogdanb · 2013-07-12T00:53:51.036Z · LW(p) · GW(p)

colonising "the closest galaxies" gives us far more resources to put towards research.

The closest galaxy is more than 11 million light-years away. If after 10 million years of research you still didn’t reach high enough speeds to get most of the universe, you’re not doing it right.

ETA: Sorry, that’s about 1.6 million; not sure where I got that 11. And of course there are dwarfs that are satellites of the Milky Way, which are a bit closer. I think my point stands.

comment by Decius · 2013-07-13T03:13:38.536Z · LW(p) · GW(p)

Interestingly enough, the universe tends to have clusters: Colonize the planet first, and study how to move fast enough to colonize the solar system, then the local group, then the galaxy, then the local galaxies...

comment by timtyler · 2013-07-13T00:35:59.674Z · LW(p) · GW(p)

So, in conclusion: To efficiently colonise the universe, take your time. Do research. Think things over. Go to the pub. Saunter like an Egyptian. Write long letters to mum. Complain about the immorality of the youth of today. Watch dry paint stay dry.

Unless you are in competition with others - which most modern agents currently are. Then the optimal strategy is quite different. Thus The Perils of Precaution.

Replies from: Decius
comment by Decius · 2013-07-13T03:11:50.394Z · LW(p) · GW(p)

Only if the modern agents have a chance to kill you first.

Getting a small boost in speed gets your colonizers to distant destinations first anyway. I'm reminded of any of several science-fiction stories where a slower colony ship is overtaken by a faster one.

Replies from: timtyler
comment by timtyler · 2013-07-13T10:52:38.774Z · LW(p) · GW(p)

Only if the modern agents have a chance to kill you first.

...or out-reproduce you. Both of which are commonplace occurrences in the real world.

Replies from: Decius
comment by Decius · 2013-07-13T21:27:33.473Z · LW(p) · GW(p)

Either way, the competition has to be local enough to stop you while you are still in the 'thinking' phase. It also doesn't do any good to launch a .95c shell of expansion if they can come by a million years later and launch a .99c shell. (With those exact numbers, we beat them to locations within about 23.5 million LY and they beat us to all other locations.)
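
A quick sketch of that break-even arithmetic (static space, ignoring cosmic expansion):

```python
# Break-even distance: a probe launched now at v_slow ties with one launched
# after a head start T at v_fast when  d/v_slow = T + d/v_fast,
# i.e.  d = T * v_slow * v_fast / (v_fast - v_slow).
# Speeds as fractions of c, distances in light-years, T in years.

def break_even_ly(head_start_yr, v_slow, v_fast):
    return head_start_yr * v_slow * v_fast / (v_fast - v_slow)

print(f"{break_even_ly(1e6, 0.95, 0.99):.3g} ly")   # ~2.35e7 ly
```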

However, the best of both worlds is theoretically possible; launching a set of colony ships now does not preclude launching a faster set later. The first colonists out should just be warned to anticipate that they might not be the first ones to arrive.

comment by DanArmak · 2013-07-12T22:22:48.042Z · LW(p) · GW(p)

I think there are a few potential problems with this argument, given below.

For the reasons given below, I think in practice everyone will keep dividing resources between research and launching probes with existing technology, as soon as we have technology good enough to launch colonizing probes at all - even extremely slow ones at (say) 0.01c to neighbouring stars. Of course there'll be different tradeoffs depending on the balance of costs: perhaps research is hard but probes are cheap. As a result, there will be some waste of resources as later, faster probes overtake earlier slow ones.

Competition:

If you were alone in the universe, or could speak for everyone even when they're spread out across a galaxy, then your analysis would work. But if, deciding for yourself, you wait and research, then others could launch first. Since others generally have different values, and because people want things for themselves and their own clans, this is undesirable. So people will launch first to get first-mover advantage.

Suppose your research will eventually develop fast probes that can overtake their earlier, slow probes (though you can't be certain ahead of time, when deciding not to launch). The sphere of space right around you will have been taken by others, which is not a good position to be in. They might be able to stop you from spreading your probes through their occupied space, if they choose. Or they could use their resources to interfere directly to stop you from launching probes.

Even others who are generally friendly may prefer to lose some time copying your fast probe technology, to having most of the universe settled by you instead of them.

In fact, it might be much easier to buy, copy or steal new technology than to develop it in the first place. Then if I know others are working on research, I may decide to launch early and try to free-ride on their research later. In particular, if the people who do research aren't guaranteed to share the results with all of humanity - if not everyone is invited to be part of the research project, or has resources to contribute to pay their way in, or lightspeed limits prevent cooperation across star systems - then those excluded will launch slow probes if they can, and hope to acquire the fast probe technology later.

Competition and value divergence may be prevented by a singleton AGI. Fear of such a singleton appearing will drive people to colonize other stars before it happens, and to keep expanding instead of slowing down to research. Some people believe that the only probable futures in which we colonize other stars at all are ones in which a Friendly AI singleton is created first. This sounds plausible to me, but it's not the only possibility.

Acquiring more resources:

(HaydnB already suggested the first part of this.)

Suppose it's expected to take a long time to develop faster probes, compared with the time it takes to settle and develop new nearby systems with existing, slow probes. Then it would be worthwhile to divert some resources to expanding with slow probes, and then use the acquired resources - new settled systems - to speed up the research. This might lead to the same balance of "always send out probes of existing technology, and divert some percentage of resources to research for faster probes."

Why would that be possible? If the research is expected to take a long time despite big investments, there is probably a limiting factor that can be raised by adding resources of some kind. Like matter/energy/space for building things and running experiments. Or like more sentient minds to do research.

Also, if new star systems can become highly developed quickly enough - compared with the time you need for your research - then faster probes are more likely to be first developed on one of the many worlds settled by slow probes than by those who stayed behind, simply because there are many new worlds and few old ones. For reasons of competition, people will want to settle new worlds to increase the chance that they or their descendants will be first to develop new, faster probes and colonize even more worlds.

Differences of distance:

Matter is clumped on different scales: as stars and planets, then star systems, then galaxies, then groups and clusters, superclusters, sheets and filaments. At each transition there is a jump of many orders of magnitude in the distances involved. So if expansion is going to stop temporarily, it will probably be at one of these points. (We've paused for a long while after colonizing the entire planet.)

Replies from: None
comment by [deleted] · 2013-07-17T22:41:32.196Z · LW(p) · GW(p)

I really want to read Stuart's response!

p.s. Thanks for the HT

comment by John_Maxwell (John_Maxwell_IV) · 2014-07-05T21:11:06.016Z · LW(p) · GW(p)

Interesting stuff. It might be worth noting that by colonizing a few galaxies early on, you would gain increased research capacity with which to improve the way you colonize the rest of them. So the problem might be a bit more complicated.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2014-07-06T10:57:17.562Z · LW(p) · GW(p)

If you're going really fast, there's only limited time for someone to research speed upgrades before you get too far away to ever get caught (cosmic expansion).

But it is slightly more complicated, as cosmic expansion robs you of speed as you hurtle. So there may be an argument to stop and re-accelerate at some point.

comment by tom_cr · 2013-08-28T22:41:01.446Z · LW(p) · GW(p)

As a thought experiment, this is interesting, and I’m sure informative, but there is one crucial thing that this post neglects to examine: whether the inscription under the hood actually reads “humanity maximizer.” The impression from the post is that this is already established.

But has anybody established, or even stopped to consider whether avoiding the loss of 10^46 potential lives per century is really what we value? If so, I see no evidence of it here. I see no reason to even suspect that enabling that many lives in the distant future has any remotely proportional value to us.

Does Nick Bostrom believe that sentient beings living worthwhile lives in the future are the ultimate value structures? If so, whose value is he thinking of, ours or theirs? If theirs, then he is chasing something non-existent, they can’t reach back in time to us (there can be no social contract with them).

No matter how clever we are at devising ways to maximize our colonization of the universe, if this is not actually what we desire, then it isn’t rational to do so. Surely, we must decide what we want, before arguing the best way to get it.

comment by Shmi (shminux) · 2013-07-11T17:02:55.098Z · LW(p) · GW(p)

it takes a lot of reaction mass to decelerate even a small payload.

Braking is not likely to be a problem, if you travel far enough: both the interstellar/intergalactic medium and the cosmic microwave background radiation will provide some.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-07-11T18:35:28.073Z · LW(p) · GW(p)

I suspect that this slowdown will make the faster speeds stack up less well than in the above table, due to the greater braking effects.

Replies from: shminux
comment by Shmi (shminux) · 2013-07-11T19:15:04.434Z · LW(p) · GW(p)

Let's make a rough estimate.

The critical density of the universe is about 10^-27 kg/m^3. The CMB is currently about 10^-5 of that, or 10^-32 kg/m^3, which corresponds to a comoving pressure of 10^-15 Pa, or, given an order-of-magnitude increase at 99% of light speed, 10^-14 Pa of directional braking.

The intergalactic medium contains a few protons per cubic meter, so 10^-27 kg/m^3 density, which is much higher than the CMB, meaning the latter can be ignored for large traveling bodies. The same calculation as above gives 10^-9 Pa directional braking. For a spaceship with a 1000 m^2 cross section this translates into 10^-6 N. If the ship weighs, say, 10^6 kg, you get 10^-12 m/s^2 deceleration effect, or some 10 μm/s per year velocity reduction in the ship's frame (1/10th of that in the comoving frame). If the travel time is, say, 1 billion years subjective time (to cover the observable universe), this adds up to only 10^4 m/s cumulative reduction in speed.
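
A rough cross-check of these figures (same assumed inputs as above; treating the deceleration as constant, which is fine at this level of precision):

```python
import math

C = 3.0e8                   # m/s
rho = 1e-27                 # IGM density from above, kg/m^3
beta = 0.99
gamma = 1 / math.sqrt(1 - beta**2)       # ~7.1 at 99%c

p_ram = gamma**2 * beta**2 * rho * C**2  # ram pressure of cold dust, Pa
force = p_ram * 1000.0                   # 1000 m^2 cross-section, N
decel = force / 1.0e6                    # 1e6 kg ship, m/s^2

dv = decel * 1e9 * 3.156e7               # over 1 billion years, m/s
print(f"p_ram {p_ram:.1e} Pa, decel {decel:.1e} m/s^2, delta-v {dv:.1e} m/s")
```

This lands within an order of magnitude of the numbers above, and the cumulative delta-v is negligible next to c ≈ 3×10^8 m/s, so the conclusion stands.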

OK, if this calculation is right, then my original comment is wrong, there is no significant braking due to intergalactic medium.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-07-11T19:29:35.717Z · LW(p) · GW(p)

I don't quite follow your transition from atmospheric pressure to braking pressure.

Replies from: shminux
comment by Shmi (shminux) · 2013-07-11T20:01:05.648Z · LW(p) · GW(p)

I used relativistic beaming (nearly all of the particles come from straight ahead) and multiplied by the beaming factor. Alternatively, the calculation can be done fully Newtonian in the comoving frame, with the spaceship's mass multiplied by gamma (about 7 when v = .99c).

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-07-11T21:08:49.788Z · LW(p) · GW(p)

So you multiplied the gas pressure by the beaming factor? But the gas pressure at rest is proportional to the temperature of the gas, and the forward-facing pressure of a relativistic ship couldn't possibly care less about the temperature of the gas.

Replies from: shminux, bogdanb
comment by Shmi (shminux) · 2013-07-12T02:25:22.120Z · LW(p) · GW(p)

Well, to be more precise, I took the comoving stress-energy tensor of a photon gas, rho*diag(1,1/3,1/3,1/3) in natural units, and Lorentz transformed it. Same for dust, only its comoving stress-energy tensor is rho*diag(1,0,0,0).
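
Spelled out, for readers following along (a sketch in units with c = 1, boosting along x):

```latex
\gamma = (1 - v^2)^{-1/2}, \qquad
T'^{xx} = \Lambda^{x}{}_{\mu}\,\Lambda^{x}{}_{\nu}\,T^{\mu\nu}

% dust (cold matter), comoving T^{\mu\nu} = \rho\,\mathrm{diag}(1,0,0,0):
T'^{xx}_{\mathrm{dust}} = (\gamma v)^2 \rho

% photon gas, comoving T^{\mu\nu} = \rho\,\mathrm{diag}(1,\tfrac{1}{3},\tfrac{1}{3},\tfrac{1}{3}):
T'^{xx}_{\mathrm{photons}} = \gamma^2 \left( v^2 + \tfrac{1}{3} \right) \rho
```

The cross terms vanish because the comoving T^{0x} components are zero; the forward pressure the ship feels is the boosted T'^{xx} component.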

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-07-12T13:45:15.101Z · LW(p) · GW(p)

Okay, so you weren't basing the braking pressure of the gas off of the atmospheric pressure of the gas, but simply off of its density?

I guess I shouldn't be too shocked that super-high vacuum is approximately frictionless.

comment by bogdanb · 2013-07-12T01:02:12.922Z · LW(p) · GW(p)

Gas pressure at rest is also proportional to the number of molecules. (PV=nRT) Which at constant volume and known composition basically means mass, i.e. how much gas you’re hitting, which does matter.

That said, I still don’t get the exact calculation, so I’m not sure that it’s correct reasoning.

Replies from: Luke_A_Somers, shminux
comment by Luke_A_Somers · 2013-07-12T17:10:31.824Z · LW(p) · GW(p)

My argument is typical physicist fare - note that the answer has a spurious dependence, therefore it's wrong. That it also has one of the right dependences wouldn't matter.

I was off on what the implied steps in the derivation were, so it didn't have the problem I described.

comment by Shmi (shminux) · 2013-07-12T17:59:57.623Z · LW(p) · GW(p)

At interstellar temperatures (2.7 K or so) the ideal gas pressure makes a negligible contribution to the kinetic friction at near-light speeds. The situation is somewhat different for a photon gas, where the pressure is always large: of the order of density × (speed of light)^2, not density × RT. But in the end it does not matter, since the CMB density is much, much less than the dust density even in intergalactic space.

Replies from: bogdanb
comment by bogdanb · 2013-07-13T19:40:38.772Z · LW(p) · GW(p)

OK, I got it, I think. I was confused both about the question and the answer :-)

comment by JoshuaZ · 2013-07-11T12:37:36.184Z · LW(p) · GW(p)

given sufficient automation or AI (which has a lot of implications for the Fermi Paradox)

No. See Katja Grace's article here. AI can act as a Filter for doomsday-type arguments, but expansion in a lightcone is not a likely Filter from a Fermi standpoint: at almost any expansion rate not extremely close to lightspeed, we'd still expect to have time to see it coming, and at speeds slower than that (like the 50% and 80% used in your article) this becomes even more severe.

Replies from: Stuart_Armstrong, Will_Newsome, Stuart_Armstrong
comment by Stuart_Armstrong · 2013-07-11T12:44:17.211Z · LW(p) · GW(p)

I said it has a lot of implications - I didn't say what they were! The implications are that it's easy to cross galactic distances, so this makes the Fermi paradox much worse (or may imply AI is harder to reach).

Sentence rephrased to:

It seems to be surprisingly easy (which has a lot of implications for the Fermi Paradox), given sufficient automation or AI.

Replies from: JoshuaZ, bogdanb
comment by JoshuaZ · 2013-07-11T12:50:28.578Z · LW(p) · GW(p)

Ah, that makes much more sense. Thanks for clarifying.

comment by bogdanb · 2013-07-12T01:09:46.186Z · LW(p) · GW(p)

I suggest reordering that as “It seems to be surprisingly easy, given sufficient automation or AI (which has a lot of implications for the Fermi Paradox)”, which makes your point a bit clearer.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2013-07-12T04:56:12.598Z · LW(p) · GW(p)

No, that's how I had it initially, and that's what caused confusion! The ease of expansion is relevant to Fermi P., not the AI or automation.

Replies from: bogdanb
comment by bogdanb · 2013-07-12T07:14:40.993Z · LW(p) · GW(p)

Oh, right, sorry, I misread Joshua’s comment. I thought he didn't notice that the ease of expansion is relevant given AI.

comment by Will_Newsome · 2013-07-11T19:57:11.337Z · LW(p) · GW(p)

we'd still expect to have time to likely see it coming

Unless it doesn't want you to see it coming, or has swept past Earth already and left a false-sky planetarium in its wake.

Replies from: JoshuaZ
comment by JoshuaZ · 2013-07-11T22:15:22.452Z · LW(p) · GW(p)

Unless it doesn't want you to see it coming

Which likely requires using substantially slower speeds, and also requires that every single AI coming in our direction has made that same decision.

or has swept past Earth already and left a false-sky planetarium in its wake.

This seems extremely unlikely. It first requires an AI that cares enough to deceive us at the cost of pretty high energy expenditure, and it requires this AI to use an extremely complex deception. The most obvious deception (and the most likely form if one had occurred at any time other than very recently) would be to simply make the sky look empty of stars. Not only that, but this apparent false sky has unnecessary details which would be extremely hard to fake, such as neutrino bursts from supernovae. Note also that if there is such a false-sky planetarium then all the data we are using to discuss the Great Filter becomes completely suspect anyhow (because the AI could have deliberately made cosmology look very different than it actually is), so this essentially falls into the same category as any highly deceptive, nearly omnipotent being.

Replies from: Will_Newsome
comment by Will_Newsome · 2013-07-11T22:45:06.789Z · LW(p) · GW(p)

(Penrose process, black holes as radiators.)

Replies from: JoshuaZ
comment by JoshuaZ · 2013-07-11T22:54:32.828Z · LW(p) · GW(p)

Can you explain the relevance? I'm not seeing it.

Replies from: bogdanb
comment by bogdanb · 2013-07-12T01:07:18.862Z · LW(p) · GW(p)

Based on the last link, I think he means that advanced civilizations will (almost always, almost completely) live very near black holes. It’s very unlikely we would notice that with current technology, if they make an effort not to be very obvious.

comment by Stuart_Armstrong · 2013-07-12T04:38:49.207Z · LW(p) · GW(p)

We would not expect to see it coming in my model - the probes would be very small, the deceleration signature virtually undetectable.

Replies from: JoshuaZ
comment by JoshuaZ · 2013-07-12T04:54:20.616Z · LW(p) · GW(p)

the probes would be very small, the deceleration signature virtually undetectable.

Sure, but if they start doing stuff on any large scale in another solar system, that should be noticeable. We don't see any evidence that any star's energy is being harvested.

comment by drnickbone · 2013-07-11T13:39:48.708Z · LW(p) · GW(p)

A thought strikes me, given the accelerating expansion of the universe. How do the numbers of galaxies reached vary when considering very big head starts or big delays?

Imagine that another civilization started expanding 1 billion years earlier than us, or 5 billion, or 10 billion years earlier: how many more galaxies than us could it reach?

Alternatively, imagine that we die out, and another civilization starts expanding 1 billion, 5 billion or 10 billion years later than us: how many fewer galaxies could they then reach?

Is there an easy formula to plug in different assumptions like this?

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2013-07-11T13:55:17.829Z · LW(p) · GW(p)

Is there an easy formula to plug in different assumptions like this?

No, unfortunately. I can calculate any given number, if you want (complicated Mathematica code, numerically solving the Friedmann equations).

Over short time frames (up to a million years), the changes in distance reached and number of galaxies claimed are pretty linear. Going back a few billion years makes a huge difference, though.

Replies from: drnickbone
comment by drnickbone · 2013-07-11T19:16:01.112Z · LW(p) · GW(p)

If it's too hard to calculate, can you do a quick Fermi estimate? I notice that - assuming there are of order 100 billion galaxies in the observable universe - only a few percent of them seem to still be reachable, even at 99% of c. Already too late to explore and settle most of them :-(

But if someone started 5 billion or 10 billion years ago, would it be more like 100%? (And could they get to more of them at 80% or 90% of light speed, rather than having to zoom up to 99%?)

This could be of some relevance to anthropic arguments. Suppose that early-expanding civilizations tend to end up with much more real estate than late-expanding ones. Then we might expect to be part of one of the early-appearing and early-expanding ones (true under both SSA and SIA). So even if we turned out to be in some sort of ancestor simulation or recreation (a zoo or prehistoric theme park, or something like that), we'd expect to resemble ancestors who lived very early in the history of the universe, rather than 14 billion years in. So maybe this counts as a bit of evidence against the simulation hypothesis?

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2013-07-11T20:50:08.292Z · LW(p) · GW(p)

Nick Bostrom's been considering that argument recently :-)

Replies from: drnickbone
comment by drnickbone · 2013-07-11T21:03:11.237Z · LW(p) · GW(p)

Wow! It would be nice to know how the relative numbers play out, and whether the argument goes anywhere.

As a piece of related evidence, it seems that Earth-like rocky planets were appearing much earlier in the universe than previously thought. This article suggests they started showing up 12-13 billion years ago, not long after the first stars.