# Which parts of the paper Eternity in Six Hours are iffy?

post by Ruby · 2019-05-06T23:59:16.777Z · score: 18 (5 votes) · LW · GW · No comments

This is a question post.

## Contents

  Answers
18 Grothor
10 Ruby
3 habryka
None


The FHI paper, Eternity in Six Hours, is very optimistic about what can be done:

In this paper, we extend the Fermi paradox to not only life in this galaxy, but to other galaxies as well. We do this by demonstrating that traveling between galaxies – indeed even launching a colonisation project for the entire reachable universe – is a relatively simple task for a star-spanning civilization, requiring modest amounts of energy and resources. We start by demonstrating that humanity itself could likely accomplish such a colonisation project in the foreseeable future, should we want to, and then demonstrate that there are millions of galaxies that could have reached us by now, using similar methods.

Is this paper reasonable? Which parts of its assertions are most likely to be mistaken?

This question was inspired by a conversation with Nick Beckstead.

answer by Richard Korzekwa (Grothor) · 2019-05-20T19:00:41.204Z · score: 18 (4 votes) · LW(p) · GW(p)

In the order that they appear in the paper, these are a few of the parts that seemed iffy to me. Some of them may be easily shown to be either definitely iffy, or definitely not-so-iffy, with a little more research:

As for nuclear fusion, the standard fusion reaction is ³H + ²H → ⁴He + n + 17.59 MeV. In MeV, the masses of deuterium and tritium are 1876 and 2809, giving an η of 17.59/(1876 + 2809) = 0.00375. We will take this η to be the correct value, because though no fusion reactor is likely to be perfectly efficient, there is also the possibility of getting extra energy from the further fusion of helium and possibly heavier elements.

I'm not sure what existed at the time the paper was written, but there are now proposals for fusion rockets, and using the expected exhaust velocities from those might be better than using the theoretical value from DT fusion.
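The paper's arithmetic for η checks out; a minimal sketch (rest masses in MeV/c², as quoted above):

```python
# Sanity check of the paper's eta for D-T fusion: 3H + 2H -> 4He + n + 17.59 MeV
# Rest masses (in MeV/c^2) as quoted in the paper.
m_deuterium = 1876.0
m_tritium = 2809.0
energy_released = 17.59  # MeV

# Fraction of the reactants' rest mass converted to energy.
eta = energy_released / (m_deuterium + m_tritium)
print(f"eta = {eta:.5f}")  # ~0.00375, matching the paper
```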

The overall efficiency of the solar captors is 1/3, by the time the solar energy is concentrated, transformed and beamed back to Mercury.

I feel like I'm the only one that thinks this Dyson sphere method is a little dubious. What system is going to be used to collect energy using the captors and send it to Mercury? How will it be received on Mercury? The total power collected toward the end is more than W. If whatever process is used to disassemble the planet is 90% efficient, the temperature required to radiate the waste heat over Mercury's surface area is about 7000K. This is hotter than the surface of the sun, and more than twice the boiling point of both iron and silica. In order to keep this temperature below the boiling point of silica, we would either need the process to be better than 99.98% efficient, to attach Mercury to a heat sink many times the size of Jupiter, or to limit power to about W. If melting the planet isn't our style, we need to limit power to about W.
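The temperature argument follows from the Stefan–Boltzmann law. The 10²² W waste-heat figure below is my own illustrative assumption (the comment's exact power figures did not survive formatting), but it reproduces the quoted ~7000 K:

```python
# Equilibrium temperature needed to radiate a given waste-heat load
# from Mercury's surface area, via the Stefan-Boltzmann law.
import math

SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
R_MERCURY = 2.44e6  # Mercury's radius, m
area = 4 * math.pi * R_MERCURY**2  # ~7.5e13 m^2

waste_heat = 1e22   # W -- illustrative assumption, not a figure from the comment

temperature = (waste_heat / (SIGMA * area)) ** 0.25
print(f"required radiating temperature ~ {temperature:.0f} K")  # ~7000 K
```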

I don't think this kills their overall picture. It "only" means the whole process takes a few orders of magnitude longer.

Of the energy available, 1/10 will be used to propel material into space (using mass-drivers for instance [37]), the rest going to breaking chemical bonds, reprocessing material, or just lost to inefficiency. Lifting a kilo of matter to escape velocity on Mercury requires about nine mega-joules, while chemical bonds have energy less than one mega-joule per mol. These numbers are comparable, considering that reprocessing the material will be more efficient than simply breaking all the bonds and discarding the energy.

The probes will need stored energy and reaction mass to get into the appropriate orbit, unless all the desired orbits intersect Mercury's orbit. Maybe this issue can be mitigated by gradually pushing Mercury into new orbits via reaction force from the probes. Or maybe it's just not much of a limitation. I'm not sure.

Because practical efficiency never reaches the theoretical limit, we’ll content ourselves with assuming that the launch system has an efficiency of at least 50%

This seems pretty optimistic. In particular, making a system that launches large objects at 0.5c. Doing this over the distance from the sun to Earth requires an average force of about N per kg. For 0.9c and 0.99c, it requires about 8 and about 35 times this force/mass, respectively. I don't know what the limiting factor will be on these things, but this seems pretty high, and suggests that the launcher would need to be a huge structure, and possibly a bigger project than the Dyson swarm.
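The force estimate can be roughly reproduced by dividing the relativistic kinetic energy gained by the launcher length, taken here as 1 AU (the sun–Earth distance mentioned above). This is a sketch under my own assumptions about how the comment computed its figures; the resulting ratios come out close to, though not exactly at, the quoted 8× and 35×:

```python
import math

C = 2.998e8    # speed of light, m/s
AU = 1.496e11  # sun-Earth distance, m

def avg_force_per_kg(beta, length=AU):
    """Average force per kg to reach speed beta*c over the given distance,
    taken as relativistic kinetic energy gained divided by distance."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (gamma - 1.0) * C**2 / length

f50 = avg_force_per_kg(0.5)
print(f"0.5c: ~{f50:.1e} N per kg")                       # ~1e5 N per kg
print(f"0.9c: ~{avg_force_per_kg(0.9) / f50:.1f}x that")  # ~8x
print(f"0.99c: ~{avg_force_per_kg(0.99) / f50:.0f}x that")
```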

I also have some complaints about the notation, which I will post later, and possibly other things, but this is what I have for now.

answer by Ruby · 2019-06-01T01:27:01.828Z · score: 10 (2 votes) · LW(p) · GW(p)

One of the things the paper does is make the overall determination that power available >> power needed. So as part of assessing that, I identified all the components which contributed to each and examined how realistic/sensitive to assumptions they are.

To begin with, here are the factors from the paper which impact the energy required for colonizing the galaxy.

• The mass of each payload / self-replicating probe to be sent.
• Mass of the payload is a linear factor in the energy required.
• The travel speed.
• The efficiency of the fuel mass.
• This is well-understood physics. From a shallow investigation, this seems to be a highly sensitive point in the paper, since 1) we have not yet constructed rocket/fuel-source combinations with the specific impulse values required by the paper, and 2) the specific impulse appears in an exponential factor in the energy required.
• The number of probes to be sent, as a function of the number of destinations and the number of probes sent per destination as redundancy against loss of probes due to collision or other issues.
• 1. I outsourced checking the paper’s work to calculate the number of reachable stars at different speeds. Physics contacts reported that the calculations made in the paper all appeared correct.
• 2. I did not examine the redundancy requirements in depth since it wasn't easy to reconstruct how they derived the result; my intuition is that this is not a crucial aspect of the paper, especially at lower speeds, which are my main interest.
• Launch system feasibility and efficiency.

Note that some of these variables directly affect the per-probe energy costs, while others (specifically redundancy due to expected collisions, and travel speed and hence stars reachable) affect the total energy required.

comment by Ruby · 2019-06-01T01:31:42.061Z · score: 4 (2 votes) · LW(p) · GW(p)

### Energy Required to Accelerate Mass

To travel between the stars it is necessary to accelerate an interstellar probe to some very fast speed, let it travel through space (without friction, it will keep going due to inertia), and then decelerate it once it arrives at the target destination. Space is insanely large, and to get most places you really want to be traveling at a significant fraction of the speed of light; however, this requires enormous amounts of energy, since by special relativity, the faster you are travelling, the more energy is required to accelerate further. (This enforces the limit that you cannot go faster than the speed of light, since doing so would require infinite energy.)

The relativistic kinetic energy of a rigid body is given by:

E_k = (γ − 1)mc², where γ = 1/√(1 − v²/c²)

This formula is linear in mass (m) and considerably superlinear in velocity (v) approaching infinity as velocity approaches the speed of light (c).

[Figure: relativistic kinetic energy vs. speed; y-axis in units of 100 million gigajoules (= 10^17 joules).] Even accelerating a 1 kg mass to 10% of the speed of light requires 4.5×10^14 joules; 50% of c requires 1.4×10^16 joules.
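These figures follow directly from the formula above; a minimal check:

```python
import math

C = 2.998e8  # speed of light, m/s

def kinetic_energy(mass_kg, beta):
    """Relativistic kinetic energy (gamma - 1) * m * c^2 for speed beta*c."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (gamma - 1.0) * mass_kg * C**2

print(f"1 kg to 0.1c: {kinetic_energy(1, 0.1):.1e} J")  # ~4.5e14 J
print(f"1 kg to 0.5c: {kinetic_energy(1, 0.5):.1e} J")  # ~1.4e16 J
```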

For comparison, world energy consumption in 2013 was estimated by the International Energy Association to be 5.67×10^20 joules. In other words, accelerating a single 5-tonne probe to 10% of c would require ~1% of Earth's entire annual energy consumption. Accelerating it to 80% of c would require on the order of 100% of 2013's consumption. Now consider that to colonize the universe, we need to send upwards of 100 million (10^8) probes.

Since we need to both accelerate and decelerate a probe, this energy is required twice over. Doubling the mass of the probe doubles the energy required, but doubling the target speed multiplies the energy required many times over even at very small fractions of the speed of light.
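Putting these pieces together for the 5-tonne probe (a sketch; counting both acceleration and deceleration, using the 2013 world-consumption estimate above):

```python
import math

C = 2.998e8             # speed of light, m/s
WORLD_2013_J = 5.67e20  # estimated world energy consumption in 2013, J

def kinetic_energy(mass_kg, beta):
    """Relativistic kinetic energy (gamma - 1) * m * c^2 for speed beta*c."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (gamma - 1.0) * mass_kg * C**2

probe_mass = 5000.0  # kg
# Accelerate to 0.1c, then shed the same kinetic energy to stop at the target.
total = 2 * kinetic_energy(probe_mass, 0.1)
print(f"fraction of 2013 world energy use: {total / WORLD_2013_J:.1%}")  # ~0.8%
```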

comment by Ruby · 2019-06-01T01:46:23.598Z · score: 4 (2 votes) · LW(p) · GW(p)

### Launch System Feasibility and Efficiency

To save on the mass which has to be launched (carrying your acceleration fuel on board means applying the rocket equation a second time, squaring the mass ratio), the authors of Eternity in Six Hours favor an external fixed launch system which accelerates the whole package: the self-replicating probe plus the fuel for its later deceleration.

In the paper, the authors briefly list coilguns, quenchguns, laser propulsion, and particle beam propulsion as potential means of accelerating the probes. They state that even though the theoretical energy efficiency of these systems could approach 100%, since one never obtains the theoretical efficiency, they assume 50% efficiency.

The questions here:

1. Is it possible to build a launch system which launches (possibly quite large) probes to significant fractions of the speed of light?
2. What efficiency is realistically achievable?

When contacted for comment, one of the authors, Anders Sandberg, stated [LW(p) · GW(p)]:

Looking back at our paper, I think the weakest points are (1) we handwave the accelerator a bit too much (I now think laser launching is the way to go) . . .

My shallow impression is that the proposed launch systems may represent large engineering challenges rather than requiring difficult physics or design breakthroughs. Coilguns have been constructed and laser propulsion has been demonstrated in the lab. What remains is a question of scale and efficiency. However, even the difference between 5% efficiency and 50% efficiency is only a single order of magnitude; not a large difference in the overall picture here.

comment by Ruby · 2019-06-01T01:45:22.376Z · score: 4 (2 votes) · LW(p) · GW(p)

### Number of Probes to be Sent

The mass of the replicator, the specific impulse of the fuel, and the travel speed determine the energy required to launch a single probe.

The number of probes to be sent is determined by the number of destinations and the redundancy factor, i.e., the number of probes sent per destination to ensure that at least one arrives. Due to collisions with interstellar dust or other failures, we can imagine that not every self-replicating probe will arrive at its destination.

The number of destinations is limited by one's travel speed, since increasingly large regions of space are moving beyond our reach due to the expansion of the universe. The faster one travels, the more distant stars one is able to reach before they get too far away.

The authors of Eternity in Six Hours (pg. 21) calculated that:

• Travelling at 50% c, one could reach 1.16 x 10^8 galaxies
• Travelling at 80%c, one could reach 7.62 x 10^8 galaxies
• Travelling at 90%c, one could reach 4.13 x 10^9 galaxies

For reference, an average galaxy might have 10^8 stars.

The authors calculated that travelling at 99% c, a redundancy factor of 40 is required, i.e. 40 probes for every destination. For 80% c and 50% c, the redundancy factor is less than 2. I did not prioritize looking into this calculation and am trusting the result that at lower speeds, the redundancy factor required is not very high.

comment by Ruby · 2019-06-01T01:44:16.726Z · score: 4 (2 votes) · LW(p) · GW(p)

### Specific Impulse and the Fuel Component of the Probe’s Mass

Importantly, part of what we need to accelerate is the fuel required to decelerate the probe once it arrives at its destination. The more fuel we send along with a probe, the greater the initial launch energy required. We want fuel which is highly efficient per unit mass, i.e., fuel with a high specific impulse.

The amount of fuel mass required to decelerate a probe is extremely sensitive to the specific impulse (Isp) of the fuel used. Oliver Habryka noticed [LW · GW] that Eternity in Six Hours may be making unreasonable assumptions about which specific impulses are achievable. Further investigation revealed that the attainable specific impulse may determine whether or not space colonization is affordable at all. To me, this is a major sensitivity in the paper.

Transformed to isolate initial mass, the relativistic rocket equation gives:

m0 = m1 · exp( artanh(Δv/c) · c / Isp )

m0: initial mass

m1: final mass

c: speed of light

Isp: specific impulse

Δv: change in velocity

This formula for the initial mass is linear in the final mass and exponential in the ratio of Δv to specific impulse. For a fixed final mass of 1 kg, the initial mass required to reach different fractions of the speed of light:

[Figure: initial mass vs. Isp. Isp can be measured in m/s; the x-axis gives Isp as a fraction of the speed of light. The dotted lines correspond to 4% of c (the Isp for fission given by the paper) and half that value, 2% of c.]
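The sensitivity can be reproduced from the relativistic rocket equation solved for initial mass (a sketch; my results agree with the figures quoted further down to within rounding):

```python
import math

def initial_mass(final_mass_kg, dv_over_c, isp_over_c):
    """Relativistic rocket equation solved for initial mass:
    m0 = m1 * exp(artanh(dv/c) * c / Isp)."""
    return final_mass_kg * math.exp(math.atanh(dv_over_c) / isp_over_c)

m_4pct = initial_mass(1.0, 0.5, 0.04)  # fission Isp figure from the paper
m_2pct = initial_mass(1.0, 0.5, 0.02)  # half that Isp

print(f"Isp = 4% of c: ~{m_4pct:.1e} kg initial mass")  # ~1e6 kg, i.e. ~10^3 tonnes
print(f"Isp = 2% of c: ~{m_2pct:.1e} kg initial mass")  # ~1e12 kg, i.e. ~10^9 tonnes
```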

On page 11, Armstrong and Sandberg provide the following values of specific impulse (measured as a fraction of c).

Of these, we have only actually attained nuclear fission, though not in efficient rocket form. As Habryka pointed out [LW · GW], the paper assumes that almost all of the energy released by nuclear fission is converted into kinetic energy, which may be unrealistic.

Habryka identified a nuclear rocket concept which may be capable of achieving an Isp of 3–5% of the speed of light. Fission fragment rockets, while not yet built, use only existing materials. Habryka, whose physics is stronger than mine, believes the rocket's simple design and mechanism should be feasible.

I’ll reiterate the importance of specific impulse by pointing out that to accelerate a mass of 1 kg to 50% of c requires 1.02 x 10^3 tonnes of fuel with an Isp of 4% of c (the value given for fission) and 1.04 x 10^9 tonnes of fuel at an Isp of 2% of c, i.e. half that value. A mere halving of the specific impulse of the fuel results in an increase of six orders of magnitude in the fuel mass required.

To give a sense of where we are at, the Falcon Heavy rocket has an Isp of only 0.001% of c (3110 m/s). Proven and prototyped propulsion systems are in this range too. Ion thrusters, an existing technology, may be able to achieve up to roughly 0.03% of c (100,000 m/s); however, these require both a source of electricity as well as propellant, likely meaning a nuclear reactor and nuclear fuel.
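Converting these exhaust velocities to fractions of c makes the gap concrete (note that 100,000 m/s works out to roughly 0.03% of c, still two orders of magnitude short of the paper's fission figure):

```python
C = 2.998e8  # speed of light, m/s

# Exhaust velocities in m/s; the fission entry is the paper's 4%-of-c figure.
systems = [("Falcon Heavy (Merlin engine)", 3110.0),
           ("ion thruster (upper end)", 100_000.0),
           ("fission (paper's figure)", 0.04 * C)]

for name, v_exhaust in systems:
    print(f"{name}: {v_exhaust / C:.5%} of c")
```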

It seems to me that only through innovations in rocket technology (fission, fusion, antimatter) or something more exotic (light sails, Bussard ramjets, etc.) could space colonization to distances much beyond our immediate neighbors be possible. What we have now would not be enough, apart from perhaps some experimental ion thrusters, and those require both a source of electricity, e.g. a nuclear reactor, as well as propellant.

answer by habryka · 2019-05-08T00:11:09.276Z · score: 3 (2 votes) · LW(p) · GW(p)

Analyzing the rocket-equation section, I found the following statement:

The relativistic rocket equation is

Δv = c · tanh( (Isp/c) · ln(m0/m1) )

where Δv is the difference in velocity, m0 is the initial mass of the probe, m1 the final mass of the replicator, and Isp denotes the specific impulse of the fuel burning process. Isp can be derived from η, the proportion of fuel transformed into energy during the burning process [24].

The fact that we can fully derive Isp from the fuel energy-transformation efficiency seemed weird to me, so I looked it up in the underlying reference and found the following quote (I slightly cleaned up the math typesetting and replaced it with equivalent symbols above, emphasis mine):

For the relativistic case, there is a maximum exhaust velocity for the reaction mass that is given by:

v_max = c · √(ε(2 − ε))

where ε is the fuel mass fraction converted into kinetic energy of the reaction mass. I was not able to improve on that derivation from a presentation point of view.

This is obviously a very similar equation, but importantly this equation specifies an upper bound on the exhaust velocity and does not say when that upper bound can be attained. Intuitively it seems to me not at all obvious that we should be able to attain that maximum exhaust velocity, since it would require the ability to perfectly direct the energy released in the fuel burning, which would naively be primarily released as heat.
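As a check on the bound in the quoted passage: assuming fission converts roughly 0.09% of the fuel's rest mass into energy (a figure I am supplying for illustration, not one from the paper), the bound lands close to the ~4% of c specific impulse the paper uses for fission:

```python
import math

def max_exhaust_velocity(epsilon):
    """Upper bound on exhaust velocity (as a fraction of c) when a fraction
    epsilon of the fuel's rest mass becomes kinetic energy of the exhaust."""
    return math.sqrt(epsilon * (2.0 - epsilon))

eps_fission = 0.0009  # assumed mass-to-energy conversion fraction for fission
print(f"v_max ~ {max_exhaust_velocity(eps_fission):.3f} c")  # ~0.042 c, i.e. ~4% of c
```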

The keyword that finally helped me understand the relevant rocket design is "Fission-Fragment Rocket", which at least according to Wikipedia could indeed reach specific impulses sufficient to support the conclusions in the paper.

comment by habryka (habryka4) · 2019-05-08T01:27:10.553Z · score: 3 (2 votes) · LW(p) · GW(p)

Ok, I think the original calculations here are still correct, if you design your rocket to directly emit the fission material at high speeds. This is a paper that proposes such a rocket design:

Dusty Plasma Based Fission Fragment Nuclear Reactor
We propose an innovative nuclear power generation system design using dusty radioactive (fissile or not) material plasma as a fuel. The fission fragments or decay products accelerated during the disintegration process to velocities of 3–5% of the speed of light are trapped and collected in a simple combination of electric and magnetic fields resulting in a highly efficient (90%), non-Carnot, DC power supply. In a conventional nuclear reactor this high kinetic energy of the fission fragments is dissipated by collisions to generate heat, which is converted to electrical power with efficiencies of no more than 50%. Alternatively, the fission fragments produced in our dusty plasma reactor can be used directly for providing thrust. The highly directional fission fragment exhaust can produce a specific impulse of one million seconds resulting in burnout velocities several thousand times those attainable today. Previous concepts suffered from impractical or inadequate methods to cool the fission fuel. In this work the heating problem is overcome by dividing the solid fuel into small dust particles and thereby increasing the surface to volume ratio of the fuel. The small size of the fuel particle allows adequate cooling to occur by the emission of thermal radiation.