Politicians stymie human colonization of space to save make-work jobs

post by Roko · 2010-07-18T12:57:47.388Z · LW · GW · Legacy · 98 comments

Contents

  How realistic is a risk-reducing colony?
  Space colonies versus underground colonies
  Rhetoric versus rational planning

An example of the collective action failures that happen when millions of not-so-bright humans try to cooperate. From the BBC:

US President Barack Obama had laid out his vision for the future of human spaceflight. He was certain that low-Earth orbit operations should be handed to the commercial sector - the likes of SpaceX and Orbital Sciences Corp. As for Nasa, he believed it should have a much stronger R&D focus. He wanted the agency to concentrate on difficult stuff, and take its time before deciding on how America should send astronauts to distant targets such as asteroids and Mars.

This vision invited fury from many in Congress and beyond because of its likely impact in those key States where the re-moulding of the agency would lead to many job losses - in Florida, Texas, Alabama and Utah. 

The continued provision of seed funding to the commercial sector to help it develop low-cost "space taxis" capable of taking astronauts to and from the ISS. The funding arrangements would change, however. Instead of the White House's original request for $3.3bn over three years, the Committee's approach would provide $1.3bn. (Obama had wanted some $6bn in total over five years; the Committee says the total may still be possible, but over a longer period)

Make-work bias and pork-barrel funding are not exactly news, but in this case they are exerting a direct negative influence on the human race's chances of survival. 

Opinion in singularitarian circles has gradually shifted toward de-emphasizing the importance of space colonization for the survival of the human race. The justification is that if a uFAI is built, we're all toast, and if an FAI is built, it can build spacecraft that make the Falcon 9 look like a paper aeroplane.

However, the development of any kind of AI may be preceded by a period where humanity has to survive nano- or bio-disasters, which space colonization definitely helps to mitigate. Before or soon after we develop cheap, advanced nanotechnology, we could already have a self-sustaining colony on the moon (though this would require NASA to get its ass in gear).

I leave you with an artist's impression of the physical embodiment of government inefficiency, a spacecraft optimized to make work rather than to advance the prospects of the future of the human race:

A shuttle-derived concept for a heavy-lift rocket

The Space Shuttle cost $1.5 billion per launch (including development costs), so with a payload of 25 tons to LEO, that makes a cost of $60,000 per kg to orbit. Falcon 9 gets 10 tons to orbit for $50 million, a cost of $5,000/kg, and Falcon 9 Heavy gets 32 tons for (apparently) $78 million, a price of about $2,500/kg. As the numbers clearly indicate, what we need is obviously another space shuttle. 
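The arithmetic behind these figures is trivial to check; a few lines of Python, using only the launch prices and payload masses quoted above:

```python
# Cost-to-orbit comparison using the prices and payloads quoted in the post.
vehicles = {
    "Space Shuttle": (1_500e6, 25_000),   # $ per launch (incl. development), kg to LEO
    "Falcon 9": (50e6, 10_000),
    "Falcon 9 Heavy": (78e6, 32_000),
}

for name, (price_usd, payload_kg) in vehicles.items():
    print(f"{name}: ${price_usd / payload_kg:,.0f}/kg")
# Shuttle comes out at $60,000/kg, Falcon 9 at $5,000/kg,
# Falcon 9 Heavy at roughly $2,400/kg.
```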

 

How realistic is a risk-reducing colony?

Robin Hanson points out that a self-sustaining space/lunar/Martian colony is a long way away; Vladimir Nesov and I point out that self-sustainment is unnecessary: a colony somewhere (the moon, underground on Earth, Antarctica, etc.) only needs to last a long time and be able to undo the disaster. So Vladimir suggests a quarantined underground colony that can do Friendly AI research in case of a nuclear/nanotech/biotech disaster.

 

Space colonies versus underground colonies

Space imposes an inherent cost disadvantage on building a long-life colony, roughly proportional to the cost per kg to orbit. Once the cost to orbit falls below, say, $200/kg, the cost of building a very reliably quarantined, nuke-proof shelter on Earth will catch up with the costs inherent in operating in vacuum. 
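To see roughly where a threshold like $200/kg comes from, here is a toy break-even calculation; the colony mass and shelter cost below are illustrative assumptions of mine, not figures from the post:

```python
# Break-even sketch: the launch price at which a space colony's launch bill
# alone matches the cost of a hardened underground shelter.
# Both numbers are illustrative assumptions, not estimates from the post.
colony_mass_kg = 5_000_000         # assume ~5,000 tonnes of colony hardware and supplies
shelter_cost_usd = 1_000_000_000   # assume ~$1bn for a quarantined, nuke-proof shelter

breakeven_usd_per_kg = shelter_cost_usd / colony_mass_kg
print(f"Break-even launch price: ${breakeven_usd_per_kg:,.0f}/kg")
```

Under these assumptions the break-even comes out at $200/kg; different assumed masses and shelter costs move the threshold proportionally.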

It was also noted that motivating people to become lunar or Martian colonists, with disaster resilience as a side benefit, seems a hell of a lot easier than motivating them to be underground colonists. An underground colony whose sole aim is to allow a few thousand lucky humans to survive a major disaster is almost universally perceived negatively by the public; it pattern-matches to "unfair" and "elitists surviving whilst the rest of us die". And it should be noted that in practice no one constructed such a colony even though the need was great during the Cold War, and no one has constructed one since, or even tried, to my knowledge (smaller underground shelters have been constructed, but they wouldn't make the difference between extinction and survival). 

On the other hand, most major nations have space programs, and it is relatively easy to convince people of the virtue of colonizing mars; "The human urge to explore", etc. Competitive, idealistic and patriotic pressures seem to reinforce each other for space travel. 

It is therefore not the dollar cost of a space colony versus an underground colony that matters, but the amount of advocacy required to get people to spend the requisite money. It may be that no realistic amount of advocacy will get people to build, or even permit the construction of, a risk-reducing underground colony. 

 

Rhetoric versus rational planning

The thoughts that you verbalize whilst planning risk-reduction are not necessarily the same as the words you emit in a policy debate. Suppose that there is some debate involving an existential risk-reducer (X), a space advocate (S), and a person who is moderately anti-space exploration (A) (for example, the public).

Perhaps S has convinced A not to block space exploration, in part because saving the human race seems virtuous, and then X comes along and points out that underground shelters do the same job more efficiently. X has weakened S's position more than she has increased the probability of an underground shelter being built. Why? First, in a debate about space exploration, people will decide on the fate of space exploration only, then forget the details. The only good outcome of the debate for X is that space exploration goes ahead. Whether underground shelters get built will be (if X is really lucky) another debate entirely; most likely there will simply never be a debate about underground shelters.

Second, space is a rhetorically strong position. It provides jobs (voters are insane: they are pro-government-funded-jobs and anti-tax), it fulfills our far-mode need to be positive and optimistic, symbolizing growth and freedom, and it fulfills our patriotic need to be part of a "great" country. Also, don't underestimate the rhetorical force of the subconscious association of "up" with "good" and "down" with "bad". Underground shelters have numerous points against them: they invoke pessimism (they're only useful in a disaster), selfishness (wanting to live whilst others die), "playing god" ("who decides who gets to go in the shelter? Therefore the most ethical option is that no one goes in," thinks the deontologist, "so don't bother building it") and injustice. 

So by pointing out that space is not the most efficient way to achieve a disaster shelter, X may in fact increase existential risk. If instead she had cheered for space exploration and kept quiet about underground options, or framed it as a false dichotomy, S's case would have been strengthened, and some branches of the future that would otherwise have died would survive. Furthermore, it may be that X doesn't want to spend her time advocating underground shelters, because she thinks they have worse returns than FAI research. So X's best policy is simply to mothball the underground shelter idea, praise space exploration whenever it comes up, and focus on FAI research. 

 

98 comments

Comments sorted by top scores.

comment by Roko · 2010-07-18T22:37:38.294Z · LW(p) · GW(p)

An interesting note on colonizing Mars: right now, we could send an unmanned mission to plant 20 kT nukes under dust flows near the frozen CO2 poles. Detonation would cover the CO2 with dark dust and cause it to start subliming, setting off a chain reaction of global warming on Mars. This process is simple and cheap to start, but also inherently slow (it takes decades), and it might not actually work. Once the planet has warmed up (this would take until 2020 if we started the process now, I think), algae would be able to live on the planet, converting CO2 into O2, leading to habitability.

Admittedly, the returns per dollar on this project are not as good as those of the best projects we do (to start with, the sheer cost of such a space mission would be a minimum of $100,000,000). But such projects are amenable to a much larger funding base, and have far more advocates, infrastructure, etc., so if the opportunity arises to provide positive publicity for such proposals, we should take it.

Also, compare a set of missions to warm Mars up and seed it with algae, at a cost of perhaps $5 billion, to the Iraq/Afghan wars at $3,000 billion.

See this article by Zubrin for more such ideas, including mirrors and super-greenhouse gases, which seem to be an order of magnitude more expensive but more reliable.

Note that developments in robotics and synthetic biology make everything more viable.

Replies from: CarlShulman
comment by CarlShulman · 2010-07-20T02:09:34.534Z · LW(p) · GW(p)

Citations needed.

Replies from: Roko, JoshuaZ
comment by Roko · 2010-07-20T10:30:44.442Z · LW(p) · GW(p)

Looking for citations makes me doubt whether the nuke idea actually works. JoshuaZ cites the place I found the idea. Zubrin's detailed paper (cited above) may partially explain why: such an intervention would only work if the feedback coefficient is optimistically high. Still, there are other methods that come with an order-of-magnitude higher price tag, such as super-greenhouse gases. Also, we don't yet know how favorable the feedback coefficient is.
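To unpack the role of the feedback coefficient: in the simplest toy model, the nukes cause some direct warming, and each degree gained sublimes enough extra CO2 to cause f further degrees. Summing the geometric series, total warming is direct/(1-f) for f < 1, and runs away only when f >= 1. This is a sketch of the generic feedback argument, not Zubrin's actual model:

```python
def total_warming(direct_dt_kelvin, f):
    """Total warming from a simple positive-feedback loop.

    direct_dt_kelvin: warming caused directly by the intervention.
    f: feedback coefficient (extra degrees of warming per degree already gained).
    The geometric series sums to direct / (1 - f) for f < 1;
    for f >= 1 the feedback runs away.
    """
    if f >= 1:
        return float("inf")
    return direct_dt_kelvin / (1 - f)

print(total_warming(1.0, 0.5))  # modest feedback: total 2.0 K
print(total_warming(1.0, 0.9))  # strong feedback: total ~10 K
print(total_warming(1.0, 1.1))  # runaway: inf
```

The intervention only pays off dramatically when f is near or above 1, which is why an "optimistically high" coefficient matters so much.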

However, Zubrin does propose a 125km-radius mirror to melt the ice caps, and dust would make such a project much more efficient. Building a 10,000 ton reflector in space is no mean feat, though.

I still claim that we could terraform Mars for less than a tenth of the cost of Iraq/Afghanistan, if the money were actually used sanely (which is itself doubtful).

comment by JoshuaZ · 2010-07-20T02:28:52.584Z · LW(p) · GW(p)

This article doesn't cite everything that Roko says, but seems like an OK citation for the general idea of using nukes to cover the poles with dust. I don't know how reliable a source that is. I am under the impression that Zubrin, in his book The Case for Mars, suggests various methods for covering the poles with dust but doesn't discuss using nukes. Given Zubrin's general approach and the extensive nature of the book, this suggests to me that Zubrin doesn't take the idea seriously (and he's clearly thought about Mars colonization more than almost anyone else). However, the book is old enough at this point that if this is a new idea he may simply not have been aware of it at the time.

Replies from: Roko
comment by Roko · 2010-07-20T10:19:24.773Z · LW(p) · GW(p)

If Zubrin didn't mention nukes, it may have been for PR reasons.

comment by RobinHanson · 2010-07-18T13:11:04.708Z · LW(p) · GW(p)

The mere ability to hurl things into space doesn't reduce existential risk at all. The only thing that would do that is the ability to create an independently self-sustaining economy in space. But we are so very far away from that, cheaper space-flight just isn't of much help now. Far better to just grow the world economy and tech-base faster, then make cheaper space flight when we are nearer the point where an independent space economy is feasible.

Replies from: Roko, CarlShulman, timtyler
comment by Roko · 2010-07-18T15:44:38.343Z · LW(p) · GW(p)

Note that a moon/Mars base wouldn't have to produce everything it consumed; some things could simply last a long time, like the TerraPower nuclear reactor, containment domes that naturally last a long time, or large stores of food or chemicals that just sit on the moon. Most importantly for Mars, the effort put into warming the planet and finding suitable synthetic life-forms to convert the atmosphere would be a one-off investment that would pay returns forever.

The moon/Mars base could ride out a nuclear winter, spend decades finding a cure for a bioengineered virus, and maybe even find a highly effective blue goo to fight grey goo (though this last is admittedly much harder; still, 2 out of 3 ain't bad).

Replies from: sketerpot
comment by sketerpot · 2010-07-19T20:13:00.603Z · LW(p) · GW(p)

I'm going to tech-nerd out and elaborate on some of the things you said. This is a joyous thing, so thanks for the opportunity. ;-)

like the terrapower nuclear reactor

You can get much the same effect with any breeder reactor; indeed, if you're sending it to the moon or Mars, an LFTR would probably be a better investment. But either one works.

or containment domes that naturally last a long time

These are a very reasonable thing to expect. For building on the moon or Mars with native materials, the easiest approach is to form the regolith into bricks and build masonry structures. Arches and domes are not only easy structures to make from bricks, but they are extraordinarily stable, capable of remaining in place even after taking considerable damage and wear.

Plus, on the moon you would probably build very thick domes (or half-cylinders) to get enough radiation shielding. Those things would naturally be very strong.

comment by CarlShulman · 2010-07-19T02:27:00.792Z · LW(p) · GW(p)

I agree with Robin, and underground refuges do compete with space, in our advocacy/attention if nothing else. Heck, if one is keen on exploiting the moon-landing-legacy NASA budget, push for more Biosphere 2-type projects, nominally in preparation for space travel.

Replies from: Roko, JoshuaZ
comment by Roko · 2010-07-19T14:18:00.887Z · LW(p) · GW(p)

I'm worried about being on the other side of the debate from both Robin and Carl.

I guess I was thinking of Nick Bostrom giving a speech praising the existing private space industry, and of that adding some legitimacy to the claim that private spaceflight is for the greater good. In fact exactly this mechanism (with Stephen Hawking advocating instead of Nick) is contributing to the resurgence of spaceflight that we do have.

This mechanism is cheap, and it diverts resources from places where they clearly do absolutely no good for existential risks, to somewhere where they do some small amount of good.

You could also advocate the construction of an underground shelter, but as others have commented, this has emotional connotations of selfishness, so although you get more risk reduction per unit money, you get less per unit advocacy (maybe).

comment by JoshuaZ · 2010-07-19T02:55:10.029Z · LW(p) · GW(p)

Heck if one is keen on exploiting the moon-landing legacy NASA budget, push for more Biosphere 2 type projects nominally in preparation for space travel.

Programs of that sort are generally not self-sufficient and isolated enough to substantially reduce existential risk. For example, a gray goo scenario will hit those about as hard as it hits anywhere else. And such programs are rarely long-term enough to be able to remain isolated for long if normal infrastructure gives out.

Replies from: Roko
comment by Roko · 2010-07-19T14:14:16.859Z · LW(p) · GW(p)

Yes, I agree.

comment by timtyler · 2010-07-18T15:10:44.230Z · LW(p) · GW(p)

We can't colonise other habitats just yet - but we could get into a better position to punch out incoming meteorites.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-18T15:14:32.545Z · LW(p) · GW(p)

This risk is relatively insignificant.

Replies from: timtyler, JoshuaZ
comment by timtyler · 2010-07-18T16:03:18.853Z · LW(p) · GW(p)

To argue that we shouldn't devote some resources to it, I think it would be necessary to argue that the disadvantages outweigh the advantages. Arguing that the advantages are relatively small doesn't really cut it when the future of civilisation is at stake.

Replies from: Vladimir_Nesov, Roko
comment by Vladimir_Nesov · 2010-07-18T16:19:21.233Z · LW(p) · GW(p)

Arguing that the advantages are relatively small doesn't really cut it when the future of civilisation is at stake.

Yes it does. That the advantages are relatively small (compared to other existential-risk-reduction plans) is meaningful, since it suggests reallocating resources. Saying that we can't compromise because "the future of civilization is at stake" invites stupidity.

Replies from: torekp, timtyler
comment by torekp · 2010-07-18T16:47:58.733Z · LW(p) · GW(p)

But the comparison to other existential risk reduction plans is not the right comparison. We should compare the other uses to which the resources will likely be put. Those usually won't be existential risk reduction projects.

Replies from: CarlShulman
comment by CarlShulman · 2010-07-19T01:49:55.059Z · LW(p) · GW(p)

Who is this argument supposed to be addressed to?

Replies from: khafra
comment by khafra · 2010-07-19T12:59:20.702Z · LW(p) · GW(p)

That's what always gets me about policy debates. If we're debating what an LW member who gets put in charge of the national budget should do, Nesov has it. If we're asking what every LW member should vote for if a referendum specifically on "allocate billions to asteroid defense" comes up, torekp is correct. I am annoyed by disagreements of this form between people who actually agree.

comment by timtyler · 2010-07-18T16:44:40.613Z · LW(p) · GW(p)

So, the case you are apparently attempting to make is that all resources that could be spent on asteroid deflection would be better spent on other things. Maybe - but that is far from obvious. Here is what is currently happening:

http://en.wikipedia.org/wiki/Asteroid_impact_avoidance

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-18T17:18:57.041Z · LW(p) · GW(p)

I'm not attempting to make that case - at some point (sufficiently low amount of resources) marginal worth of asteroid-avoidance might become competitive.

Replies from: timtyler
comment by timtyler · 2010-07-18T19:08:42.603Z · LW(p) · GW(p)

Right - OK - that's what I was saying. Some people are space cadets - and I figure some of them can probably make useful contributions.

Space has some other possibilities for reducing risks too. For example, communications satellites network the world, make everyone friends - and reduce the chances of war. Of course there's also star wars - but I don't think that space can be simply written off as not helping.

comment by Roko · 2010-07-18T16:09:42.930Z · LW(p) · GW(p)

Agreed

comment by JoshuaZ · 2010-07-18T17:12:45.160Z · LW(p) · GW(p)

Is it that insignificant?

Asteroids larger than 1 km hit the Earth about every 500,000 years (source). That's in the large-scale-devastation but not extinction range. Indeed, even asteroids a few tens or hundreds of meters across can cause major devastation. The object that caused the Tunguska event is estimated to have been 50-80 meters across, and such impacts occur every few hundred years or so. Historically such events have involved minimal loss of human life, but that's partially because much less of Earth was populated by humans than is now. So even without worrying about existential-level threats, asteroid impacts pose a substantial risk to human life. As the population grows, that risk will become more severe.

How frequent are extinction-level asteroid collisions? There's some disagreement, but the rate seems to be somewhere around 1 per 40-200 million years. That seems plausibly like a low-probability existential threat, but how does it compare to other existential risks? How does it compare to the chance of, say, global thermonuclear war, or the probability of a uFAI arising? If one puts a very low probability on a uFAI, or a low probability not on uFAI but on an AI going FOOM, then this becomes potentially more relevant.
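Treating impacts as a Poisson process makes it easy to turn these mean intervals into per-century odds (a back-of-envelope sketch of mine, not a calculation from the comment):

```python
import math

def prob_at_least_one(mean_interval_years, horizon_years):
    """P(at least one impact within the horizon), modeling impacts as a
    Poisson process with rate 1 / mean_interval_years."""
    return 1 - math.exp(-horizon_years / mean_interval_years)

# 1-km impactors, roughly 1 per 500,000 years, over the next century:
print(prob_at_least_one(500_000, 100))      # ~0.0002, i.e. about 1 in 5,000
# Extinction-level impactors, taking roughly 1 per 100 million years:
print(prob_at_least_one(100_000_000, 100))  # ~1e-6
```

For rates this low the Poisson correction is negligible; horizon divided by mean interval is already an excellent approximation.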

Note also that one doesn't really need an existential-level asteroid impact to permanently ruin human life. If we use up enough resources on Earth, especially fossil fuels, it may not be possible to bootstrap ourselves back up to modern tech levels after a substantial setback. As we use more limited resources, this risk becomes more serious. We're nowhere near using up the deuterium supply, but it is also finite, as is the supply of U-235, which is much closer to depletion (though again, not very close). This permanent resource crunch after a major civilizational setback is enough of a risk that Nick Bostrom takes it seriously (see the first link given above). An asteroid in the 3-5 km range, if it hit in a bad way, could cause this sort of scenario.

The main problem with asteroid-type events is that there's very little we can do about them. Breaking up an asteroid into little pieces won't actually do much if they still hit Earth, since the total kinetic energy delivered is about the same. There's more of a chance of redirecting an asteroid if one attaches a solar sail, or detonates a large nuke at just the right spot. But all such options require knowing about the threat well in advance.

There are two other points supporting a space program as an existential-risk reducer which timtyler didn't touch on but which are worth bringing up: 1) Even if we can't construct self-sustaining colonies yet, every bit of progress in that direction increases the chance that we will have such colonies before any event that wipes out or substantially reduces human life on Earth. 2) There are many space-based extinction threats other than rogue asteroids where advance warning, even by a few days or hours, could substantially reduce the risk. These include supernova risks, primarily from IK Pegasi A and Betelgeuse. Our current estimates put both of these as low-probability events: under current estimates Betelgeuse is too far away given the predicted supernova size, but there's a not-insignificant chance that our models are wrong, and even being a small bit off could substantially ruin our day. IK Pegasi A is close enough that if it went through a Type Ia supernova now (well, 150 years ago), it would easily be an extinction-level event. The star will likely do so at some point, but current estimates put that a few million years in the future. Again, modeling issues could make this drastically wrong (although the chance of a modeling error is much smaller than for Betelgeuse). Then there are other, more exotic and as yet hard-to-estimate threats such as gamma-ray bursts and rogue brown dwarfs.

Replies from: Roko, Vladimir_Nesov, Vladimir_Nesov, Roko
comment by Roko · 2010-07-18T17:57:32.955Z · LW(p) · GW(p)

Asteroids larger than 1 km hit the Earth about every 500,000 years.

Implying a 1/5000 chance this century. That's small potatoes compared to Bio, Nano, AI.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-18T18:30:02.454Z · LW(p) · GW(p)

That's small potatoes compared to Bio, Nano, AI.

Where are you getting your risk-probability estimates from? If by Nano you mean a nanotech gray goo scenario, then frankly that seems much less likely than 1/5000 in the next century. People who actually work with nanotech consider that sort of scenario extremely unlikely, for a variety of reasons, including that there's too much variation in common chemical compounds to make nanotech devices that act as universal assimilators, and that there's no clear way such entities would obtain efficient energy sources. Now, one might argue that very intelligent AI could solve those problems, but in that case you're talking about just the AI problem, and nanotech becomes incidental to it.

I'm not sure what you mean by "bio", but if you mean biological threats, then this seems unlikely to be an existential-level threat, for the simple reason that we can see it is very rare for a species to be wiped out by a pathogen. We might be able to make a deliberately dangerous pathogen, but that requires motivation and expertise. The set of people with both the desire and the capability to construct such entities is likely small, and will likely remain small for the indefinite future.

Replies from: orthonormal, Mitchell_Porter, Roko
comment by orthonormal · 2010-07-19T19:45:57.232Z · LW(p) · GW(p)

I assume "Bio, Nano, AI" to mean "any global existential threats brought on by human technology", which is a big disjunction with plenty of unknown unknowns, and we already have one example (nuclear weapons) that could not have plausibly been predicted 50 years beforehand. Even if you discount the probabilities of hard AI takeoff or nanotech development, you'd have to have a lot of evidence in order to put such a small probability on any technological development of the next hundred years threatening global extinction.

Replies from: homunq
comment by homunq · 2010-07-20T12:27:47.374Z · LW(p) · GW(p)

As someone who does largely discount the threats mentioned (I believe that the operationally-significant probability for foom/grey goo is order 10^-3/10^-5, and the best-guess probability is order 10^-7/10^-7), I still endorse the logic above.

Replies from: orthonormal
comment by orthonormal · 2010-07-20T18:26:58.484Z · LW(p) · GW(p)

Er, maybe I was being unclear. Even if you discount a few specific scenarios, where do you get the strong evidence that no other technological existential risk with probability bigger than .001 will arise in the next hundred years, given that forecasters a century ago would have completely missed the existential risk from nuclear weapons?

I agree that cataloging near-earth objects is obviously worth a much bigger current investment than it has at present, but I think that an even bigger need exists for a well-funded group of scientists from various fields to consider such technological existential risks.

comment by Mitchell_Porter · 2010-07-19T06:44:52.160Z · LW(p) · GW(p)

If I wanted to exterminate the human race using nanotechnology, there are two methods I would think about. First method, airborne replicators which use solar power for energy and atmospheric carbon dioxide for feedstock. Second method, nanofactories which produce large quantities of synthetic greenhouse gases. Under the first method, one should imagine a cloud of nanodust that just keeps growing until most of the CO2 is used up (at which point all plants die). Under the second method, the objective is to heat the earth until the oceans boil.

For the airborne replicator, the obvious path is "diamondoid mechanosynthesis", as described in papers by Drexler, Merkle, Freitas and others. This is the assembly of rigid nanostructures, composed mostly of carbon atoms, through precisely coordinated deposition of small reactive clusters of atoms. To assemble diamond in this way, one might want a supply of carbon chains, which remain sequestered in narrow-diameter buckytubes until they are wanted, with the buckytubes being positioned by rigid nanomechanisms, and the carbon chains being synthesized through the capture and "cracking" of CO2 much as in plants. The replicator would have a hard-vacuum interior in which the component assembly of its progeny would occur, and a sliding or telescoping mechanism allowing temporary expansion of this interior space. The replicator would therefore have at least two configurations: a contracted minimal one, and an expanded maximal one large enough to contain a new replicator assembled in the minimal configuration.

There are surely hundreds or thousands of challenging subproblems involved in the production of such a nanoscale doomsday device - power supply, environmental viability (you would want it to disperse but to remain adrift), what to do with contaminants, to say nothing of the mechanisms and their control systems - but it would be a miracle if it were literally thermodynamically impossible to make such a thing. Cells do it, and yes they are aqueous bags of floppy proteins rather than evacuated diamond mechanisms, but I would think that has more to do with the methods available to DNA-based evolution than with the physical impossibility of free-living rigid nanobots. The Royal Society report to which you link hardly examines this topic. It casually cites a few qualitative criticisms made by Smalley and others, and attaches some significance to a supposed change of heart by Drexler - but in fact, Drexler simply changed his emphasis, from accident to abuse. There is no reason to expect free-living rogue replicators to emerge by accident from nanofactories, because such industrial assemblers will be tailored to operate under conditions very different from the world outside the factory. But there has been no concession that free-living nanomechanical replicators are simply impossible, and people like Freitas and Merkle who continue to work on the details of mechanosynthesis have many times expressed the worry that it looks alarmingly easy (relatively speaking) to design such devices.

As for my second method, you don't even need free-living replicators, just mass production of the greenhouse-gas nanofactories, and a supply of appropriate ingredients.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-07-19T13:42:17.868Z · LW(p) · GW(p)

I'm not sure if this counts as an existential threat, but I'm more concerned about a biowar wrecking civilization: enough engineered human and food diseases that civilization is unsustainable.

I can't judge likelihood, but it's at least a combination of plausible human motivations and technology. Your tech is plausible, but it's hard to imagine anyone wanting not just to wipe out the human race, but also to do such damage to the biosphere.

There are a few people who'd like the human race to be gone (or at least who say they do), but as far as I know, they all want plants and animals to continue without being affected by people.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-07-21T08:05:17.870Z · LW(p) · GW(p)

There are definitely people who would destroy the whole world if they could. Berserkers, true nihilists, people who hate life, people who simply have no empathy, dictators having a bad day. Even a few dolorous "negative utilitarians" exist who might do it as an act of mercy. But the other types are surely more numerous.

comment by Roko · 2010-07-18T18:43:05.377Z · LW(p) · GW(p)

If by Nano you mean a nanotech gray goo scenario, then frankly that seems much less likely than 1/5000 in the next century.

Massive overconfidence. You need to go closer to 50/50.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-18T19:05:56.261Z · LW(p) · GW(p)

Massive overconfidence. You need to go closer to 50/50.

Where is your estimate coming from?

My estimate comes from the following: 1) Experts suggest that the possibility is very unlikely. For example, the Royal Society's official report on the dangers of nanotech concluded that this sort of scenario was extremely unlikely; see the report here (and good Bayesians should listen to subject-matter experts). 2) Every plausible form of nanotech yet investigated shows no capability of grey-gooing. For example, consider DNA nanotechnology, an area where we've had a fair bit of success with both computation and constructing machines. Yet these devices work only in a small range of pH values and temperatures, and often require specific specialized enzymes. Also, as with any organic nanotech, they would face competition and potentially predation from microorganisms. Inorganic nanotech faces other problems, such as less available energy and far fewer options for possible chemical constructions; not using carbon already reduces the grey-goo potential a lot.

Replies from: Roko
comment by Roko · 2010-07-18T19:33:33.044Z · LW(p) · GW(p)

1) experts suggest that the possibility is very unlikely.

But how did you translate "very unlikely" into "less than 1 in 5000"? Why not say 1%? Or 3%? Or 1 in 10^100?

I think that I need to do an article on why one shouldn't be so keen to assign very low probabilities to events where the only evidence is extrapolative.

Replies from: Vladimir_Nesov, FAWS
comment by Vladimir_Nesov · 2010-07-18T19:50:59.862Z · LW(p) · GW(p)

Still depends on the nature of the event (Russell's teapot). There is no default level of certainty, no magical 50/50.

Replies from: Roko
comment by Roko · 2010-07-18T20:12:03.861Z · LW(p) · GW(p)

Sure, for cases where arbitrary complexity has been added, the "default level of certainty" is 2^-(Complexity).

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-18T20:50:22.693Z · LW(p) · GW(p)

Unfortunately, you often have to rule intuitively. How does complexity figure in the estimation of probability of gray goo? Useful heuristic, but no silver bullet.

Replies from: Roko
comment by Roko · 2010-07-18T22:32:27.440Z · LW(p) · GW(p)

I think that one has to differentiate between the perfect unbiased individual rationalist who uses heuristics but ultimately makes the final decision from first principles if necessary, and the semi-rationalist community, where individual members vary in degree of motivated cognition.

The latter works better with more rigid rules and less leeway for people to believe what they want. It's a tradeoff: random errors induced by rough-and-ready estimates, versus systematic errors induced by wishful thinking of various forms.

comment by FAWS · 2010-07-18T22:51:47.327Z · LW(p) · GW(p)

Less than 1 in 5000 sounds about right to me. I'm much more worried about other nano-dangers (e.g. clandestine brainwashing) than grey goo.

Not only is there the problem of technological feasibility; even if it's possible, there is the still larger problem of economic feasibility. Molecular von Neumann Machines, if possible, should be vastly more difficult to develop than vastly more efficient static nano-assemblers operating in a controlled environment (probably vacuum?) and integrated into an economy with mixed nano- and macrotech taking advantage of specialization, economies of scale, etc. The static nano-assemblers should already be ubiquitous long before molecular von Neumann Machines start to become feasible. So why develop them in the first place? For medical applications, specialized medical nanobots running on glucose and cheaply mass-produced in the static nano-assemblers should also beat them. They'd be useful in space and for sending to other planets, but there wouldn't be all that much money in that, and sending a larger probe with nano-assemblers and assorted equipment would also do.

Since there would be no overwhelming incentive against outlawing the development of MvNMs, doing so would be feasible, and considering how easy it should be to scare people with the grey goo scenario in such a world, very likely.

That pretty much leaves secret development as some sort of weapon, which would make grey goo defense a military issue. Nano-assemblers should be much better at producing nano-hunters and nano-killers (or more assemblers, mining equipment, planes, rockets, bombs) than MvNMs would be at producing more of themselves, and nano-hunters and nano-killers much better at finding and destroying them; there'd also be the option of using macroscopic weapons against larger concentrations.

Replies from: Roko
comment by Roko · 2010-07-19T12:01:04.373Z · LW(p) · GW(p)

The original discussion was not concerned with the dangers of grey goo per se, but with any extinction risk associated with nanotech. Remember, the original question, the point of the discussion, was whether asteroids were irrelevant as an x-risk.

So whilst you make good points, it seems that we now have a lost-purpose debate rather than a purposeful collaborative discussion.

Replies from: FAWS
comment by FAWS · 2010-07-19T12:38:10.336Z · LW(p) · GW(p)

Other nano-risks aren't necessarily extinction risks, though. And while I'm sort of worried that someone might secretly use nano to rewire the brains of important people and later of everyone to absolute loyalty to them (an outcome that would be a lot better than extinction, but still pretty bad) or something along those lines it doesn't seem obvious that there is anything effective we could spend money on now that would help protect us, unlike asteroids. At least at the levels of spending asteroid danger prevention could usefully absorb.

Replies from: Roko, Vladimir_Nesov
comment by Roko · 2010-07-19T12:50:21.983Z · LW(p) · GW(p)

But now you have to catalogue all the possible risks of nanotech, and add a category for "risks I haven't thought of", and then claim that the total probability of all that is < 1/5000.

You have to consider military nanotech. You have to consider nano-terrorism and the balance of attack versus defence, you have to consider the effects of nanotech on nuclear proliferation (had you thought of that one?), etc etc etc.

I am sure that there are at least 3 nano-risk scenarios documented on the internet that you haven't even thought of, which instantly invalidates claiming a figure as low as, say, 1/5000 for the extinction risk before you have considered them.

This argument reminds me of the case of physicists claiming to have an argument showing that the probability of an LHC disaster was less than 1 in a million, and Toby Ord pointing out that the probability that there was a mistake in their argument was surely > 1 in 1,000, invalidating their conclusion that total probability of an LHC disaster was < 1 in 1 million.

Replies from: FAWS, JoshuaZ, whpearson
comment by FAWS · 2010-07-19T14:54:38.102Z · LW(p) · GW(p)

But now you have to catalogue all the possible risks of nanotech, and add a category for "risks I haven't thought of", and then claim that the total probability of all that is < 1/5000.

The question wasn't whether nanotech is potentially more dangerous than asteroids overall, though. It was whether all money available for existential risk prevention/mitigation would be better spent on nano than on space-based dangers.

There doesn't seem to be any good way to spend money so that all possible nano risks will be mitigated (other than lobbying to ban all nano research everywhere, and I'm far from convinced that the potential dangers of nano are greater than the benefits). I'm not even sure there is a good way to spend money on mitigation of any single nano risk.

The most obvious mitigation/prevention technology would be really good detectors for autonomous nanobots, whether self-reproducing or not. But until we know how they work and what energy source they use we can't do all that much useful research in that direction, and spending after we know what we need would probably be much more efficient. This also looks like an issue where the military will spend such enormous amounts once the possibilities are clear that money spent beforehand will not affect the result all that much.

you have to consider the effects of nanotech on nuclear proliferation (had you thought of that one?)

Yes, I did; that's one of the most obvious ones. It's not going to be possible to prevent a nation with access to uranium from building nuclear weapons, but I think that would be the case anyway, with or without nano. The risk of private persons building them might be somewhat increased. I'm not sure whether there is any need to separate isotopes in whatever machines pre-process materials in/for nano-assemblers, or whether they would lend themselves to being modified for that. Assuming they do, you'd need to look at anyone who processes large amounts of sea water, or any other material that contains uranium. Perhaps you could mandate that only designs that are vulnerable to radioactivity can be sold commercially, or make the machines refuse to work with uranium in a way that is hard to remove. I don't see how spending money now could help in any way.

This argument reminds me of the case of physicists claiming to have an argument showing that the probability of an LHC disaster was less than 1 in a million, and Toby Ord pointing out that the probability that there was a mistake in their argument was surely > 1 in 1,000, invalidating their conclusion that total probability of an LHC disaster was < 1 in 1 million.

I'm not sure the probability of a serious error in the best available argument against something can be considered a lower bound on the probability you should assign it in general. In the case of the LHC, if there is a 1 in 20 chance of a mistake that doesn't really change the conclusion much, a 1 in 100 chance of a mistake such that the real probability is 1 in 100,000, and a 1 in 10,000 chance of a mistake such that the real probability is 1 in 1,000, then 1 in a million could still be roughly the correct estimate.
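This mixture argument is straightforward to check numerically; here is a minimal sketch using exactly the illustrative mistake probabilities from the paragraph above:

```python
# Mixture estimate for the LHC-disaster probability, using the
# illustrative numbers from the comment above.

# Each entry: (probability the argument has this kind of mistake,
#              disaster probability if it does)
scenarios = [
    (1 / 100, 1 / 100_000),   # mistake -> real probability is 1e-5
    (1 / 10_000, 1 / 1_000),  # mistake -> real probability is 1e-3
]

p_no_mistake = 1 - sum(p for p, _ in scenarios)
base = 1 / 1_000_000  # claimed probability if the argument is sound

total = p_no_mistake * base + sum(p * q for p, q in scenarios)
print(f"{total:.2e}")  # prints 1.19e-06: still roughly 1 in a million
```

(The 1-in-20 "harmless mistake" case is omitted since, by assumption, it doesn't change the conclusion.)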

comment by JoshuaZ · 2010-07-19T13:46:58.099Z · LW(p) · GW(p)

But now you have to catalogue all the possible risks of nanotech, and add a category for "risks I haven't thought of", and then claim that the total probability of all that is < 1/5000

The 1/5000 number only works for the really large asteroids (> 1 km in diameter). Note that, as I pointed out earlier, much smaller asteroids can be locally devastating. The resources that go to finding the very large asteroids also help track the others, reducing the chance of human life lost even outside existential-risk scenarios. And as I pointed out, there are a lot of other potential space-based existential risks. That said, I think you've made a very good point above about the many non-gray-goo scenarios that make nanotech a severe potential existential risk. So I'll agree that if one compares the probability of a nanotech existential-risk scenario to the probability of a meteorite existential-risk scenario, the nanotech one is more likely.

Your point about the impact of nanotech on nuclear proliferation I find particularly disturbing. The potential for nanotech to greatly increase the efficiency of enriching uranium seems deeply worrisome and that's really the main practical limitation in building fission weapons.

Replies from: Roko
comment by Roko · 2010-07-19T16:01:12.052Z · LW(p) · GW(p)

Upvoted for updating. I agree that smaller asteroids are an important consideration for space; we expect about one Tunguska event per century, I believe, and each stands a ~5% chance of hitting a populated area as far as I know. Averting that 5% chance of the next Tunguska hitting a populated area is a good thing.

comment by whpearson · 2010-07-19T13:10:49.076Z · LW(p) · GW(p)

A lot of it seems to hinge on the probability you assign to those threats being developed in the next century.

comment by Vladimir_Nesov · 2010-07-19T15:21:25.389Z · LW(p) · GW(p)

Accidental grey goo doesn't seem plausible, and purposeful destructive use of nanotech doesn't necessarily fall in that category. We can have nanomachines that act as bioweapons, infecting people and killing them.

Replies from: FAWS
comment by FAWS · 2010-07-19T15:38:13.759Z · LW(p) · GW(p)

Are you disagreeing with something I said? I'm not sure nanotech would be better at killing that way than a designer virus, which should be a lot easier and cheaper (possibly even when accounting for the need to find a way to prevent it from spreading to your own side, if that's necessary). Nanotech might be able to do things that a virus can't, but that would be the sort of thing I mentioned. Anyway I don't see how we could effectively spend money now to prevent either.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-19T15:44:58.395Z · LW(p) · GW(p)

Anyway I don't see how we could effectively spend money now to prevent either.

I agree with this. I disagree that there are no clear non-goo extinction risks associated with nano, and gave an example of one.

comment by Vladimir_Nesov · 2010-07-18T17:47:29.826Z · LW(p) · GW(p)

Is it that insignificant?

It's relatively insignificant, compared to other sources of existential risk. Overall, it's a vastly better investment than lipstick.

comment by Vladimir_Nesov · 2010-07-18T17:24:03.279Z · LW(p) · GW(p)

1) Even if we can't construct self-sustaining colonies yet, every bit we go in that direction increases the chance that we will be able to have such colonies before any event occurs that wipes out or substantially reduces human life on Earth.

It's not generally valid, since this diverts resources from development of other potentially relevant tech that could help with establishing a colony once the time is right.

comment by Roko · 2010-07-18T17:55:53.914Z · LW(p) · GW(p)

Speed of light fail

There are many space-based extinction threats other than rogue asteroids where having advance warning even by a few days or hours could substantially reduce the risk. These include supernova risks, primarily from IK Pegasi A and Betelgeuse.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-18T18:16:04.325Z · LW(p) · GW(p)

Speed of light fail

No. We know that there are changes in a star before the supernova occurs. For example, in a Type II supernova, the radiation level initially increases linearly. For other supernova types the luminosity of the star does sometimes increase before the supernova event itself. Also, hours before a supernova, there may be a drastic increase in neutrino production.

It is also likely that more detailed observation of stars will give us a better idea what sort of more subtle signs show up prior to supernovae.

Replies from: Roko
comment by Roko · 2010-07-18T18:35:34.286Z · LW(p) · GW(p)

Think about it. You observe changes in a star 8 light-hours away from earth, and radio your observations back. What speed do the radio waves travel at? c. What speed does the light bearing the original observation travel at? c. What speed does the supernova blast travel at? also c. Neutrinos travel so close to c it makes no difference.

Replies from: Nick_Tarleton, JoshuaZ
comment by Nick_Tarleton · 2010-07-18T19:28:44.705Z · LW(p) · GW(p)

From the parent post:

We know that there are changes in a star before the supernova occurs.

The notification and the blast travel at c, but the blast is hours behind the notification.

comment by JoshuaZ · 2010-07-18T18:49:37.120Z · LW(p) · GW(p)

Think about it. You observe changes in a star 8 light-hours away from earth, and radio your observations back. What speed do the radio waves travel at? c. What speed does the light bearing the original observation travel at? c. What speed does the supernova blast travel at? also c. Neutrinos travel so close to c it makes no difference.

If observed changes to a star happen well before the supernova event itself then the fact that everything is happening at c doesn't matter. Say for example that the neutrino flux increase happens 24 hours before hand. That means we have a 24 hour warning before the supernova event. Similarly, if we see an increase in luminosity before the supernova we still get advance warning. What matters is that there is a delay between when stars show signs of supernovaing and when they actually supernova.
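The disagreement here is just timing arithmetic, and it is easy to make explicit: since both the precursor signal and the blast travel at c, the warning interval an observer gets is independent of distance. A minimal sketch (the 24-hour neutrino lead is the illustrative figure from the comment above):

```python
# Warning time from a supernova precursor is independent of distance,
# because both the precursor signal and the blast propagate at c.

def warning_hours(distance_light_hours: float, lead_hours: float) -> float:
    """Hours between receiving the precursor and the blast arriving,
    for an observer at the given distance."""
    precursor_arrival = 0 + distance_light_hours       # emitted at t = 0
    blast_arrival = lead_hours + distance_light_hours  # emitted lead_hours later
    return blast_arrival - precursor_arrival

# A 24-hour neutrino lead gives 24 hours of warning at any distance:
for d in (8, 1_000, 640 * 365 * 24):  # 8 light-hours ... ~640 light-years
    print(warning_hours(d, 24.0))  # prints 24.0 each time
```

Being closer to the star buys no extra lead time; better detectors anywhere work equally well.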

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-18T19:00:18.939Z · LW(p) · GW(p)

The point is that being closer to the star when that happens doesn't provide you with more forewarning than if you look at it from home.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-18T19:19:03.887Z · LW(p) · GW(p)

The point is that being closer to the star when that happens doesn't provide you with more forewarning than if you look at it from home.

I don't think anyone is advocating that we send actual probes to Betelgeuse or IK Pegasi; I'm confused why one would think that would even be on the table. Even if we sent a probe today at a tenth of the speed of light (well beyond our current capabilities), it would still take around 1500 years to get to IK Pegasi. I don't see why one would think that would be useful at all.

What is helpful is having more space-based observation equipment in our solar system. The more we put into space the less of a problem we have with atmospheric interference, artificial radio sources, and general light pollution. To use one specific example that would help a lot: if we had a series of optical telescopes spread out around the solar system, we could use parallax measurements to get a better idea of how far away Betelgeuse is. For a variety of reasons there's a lot of uncertainty about its distance, with 330 light years as a lower estimate and around 700 as an upper, though estimates seem to be settling at around 640. Given the inverse-square law for radiation, this matters for a supernova concern: a difference of 300 light years corresponds to about a factor of 4 in radiation strength. Overall, most of the interesting, practical investigation and reduction of astronomical existential risks can be done right here in our home system.
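The parallax and inverse-square arithmetic in that example is easy to sketch. The 2 AU baseline (roughly the width of Earth's orbit) is an assumption for illustration; wider solar-system baselines scale the angle linearly:

```python
# Parallax angle shrinks with distance; a longer baseline between
# telescopes gives a larger, easier-to-measure angular shift.

LY_PER_PARSEC = 3.2616  # light years per parsec (approx.)

def parallax_arcsec(distance_ly: float, baseline_au: float) -> float:
    """Apparent parallax (arcseconds): a 1 AU baseline at 1 parsec
    gives 1 arcsecond, by the definition of the parsec."""
    return baseline_au / (distance_ly / LY_PER_PARSEC)

# Betelgeuse at the two distance estimates, Earth-orbit-width baseline:
print(parallax_arcsec(330, 2.0))  # ~0.0198 arcsec
print(parallax_arcsec(700, 2.0))  # ~0.0093 arcsec

# Inverse-square law: flux ratio between the two distance estimates.
print(round((700 / 330) ** 2, 1))  # prints 4.5
```

The hundredth-of-an-arcsecond angles are why Betelgeuse's distance is so uncertain from the ground, and why longer baselines help.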

Replies from: Cyan
comment by Cyan · 2010-07-18T20:29:19.762Z · LW(p) · GW(p)

The more we put into space the less of a problem we have with atmospheric interference, artificial radio sources, and general light pollution.

So the benefit of space-based observation is signal amplification rather than signal speed.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-18T20:34:37.295Z · LW(p) · GW(p)

So the benefit of space-based observation is signal amplification rather than signal speed.

In a nutshell yes. And the more signal amplification we get the quicker we can detect problems before it is too late.

comment by Vladimir_Nesov · 2010-07-18T13:11:42.986Z · LW(p) · GW(p)

I believe it's vastly more efficient to focus on FAI (or WBE institutions) and protected Earth-based shelters than space colonization.

It's unrealistically difficult to create in the near future a self-sustaining (and quarantined!) colony that can seed civilization after a disaster that takes down even Earth-based shelters. Development of more efficient space transit doesn't even seem an important subproblem of that. By the time it's done, with however much effort one can realistically expect, only a small fraction of remaining non-uFAI risk will remain unrealized, and Earth-based shelters could be made more resilient in the meantime, faster and cheaper.

Replies from: Roko, xamdam
comment by Roko · 2010-07-18T14:11:05.526Z · LW(p) · GW(p)

By the time it's done, with however much effort one can realistically expect, only a small fraction of remaining non-uFAI risk will remain unrealized, and Earth-based shelters could be made more resilient in the meantime, faster and cheaper.

So, this claim depends upon two subclaims.

(1) The probability distribution of time-to-colony
(2) The probability distribution of time-to-non-AI-risk

Can you post your probability distributions for these two events in separate comments, so that I can agree or disagree with each separately?

comment by xamdam · 2010-07-18T14:02:30.353Z · LW(p) · GW(p)

protected Earth-based shelters than space colonization.

Additionally, I think that a rational person should have some survivalist skills in their arsenal, to improve their own, their family's, and their community's chances in a major local or "small" Earth-scale disruption.

I think building earth-based shelters is a good idea in general, but will run into huge psychological walls because there will not be enough shelter for everybody.

One advantage of the space strategy is skirting these issues, since space programs are not ostensibly survivalist-oriented.

comment by SilasBarta · 2010-07-18T18:58:38.209Z · LW(p) · GW(p)

I know you touched on this, but: since the beginning, the space program has existed due to make-work deals. To get the original legislation approved, they had to buy off legislators in various districts. (Why do you think the major centers are in Texas and Florida, two of the states mentioned?) To this day, the problem persists in that NASA can't switch to metric because of the numerous English-unit-oriented workshops scattered across the country that they've locked themselves into buying from.

But, to paraphrase a point EY made a while back: yes, it sucks that politicians control technologies they couldn't invent, but then, don't engineers get funding they couldn't secure on their own?

Also, I completely agree about the stupidity of complaining about an improvement in efficiency just because it requires people to take different jobs. But what makes it particularly irksome for me is how adamant the same people are about propping up those jobs rather than simply getting some kind of adjustment compensation.

In other words, I would have a lot more understanding if the traditional argument were, "Yeah, this will be an improvement in efficiency, but could we maybe also include some funding to help with readjustment for all the workers this would cut off?" Instead, the usual demand is not only that we should appease existing beneficiaries, but that we should do that specifically by persisting ad infinitum in paying them to do the same worthless jobs.

It's one thing to say, "yes, we'll support your mentally retarded brother". It's quite another to say "... and we'll do it by making customers endure his ineptness too!"

And I've never understood this mentality. I don't feel entitled to perpetual demand for the kind of labor my employer provides, and I'd feel completely rotten about encouraging such waste just so I can keep exactly the same job. Where do people come up with this worldview?

Replies from: James_K, Roko, Jonathan_Graehl
comment by James_K · 2010-07-19T10:40:23.327Z · LW(p) · GW(p)

And I've never understood this mentality. I don't feel entitled to perpetual demand for the kind of labor my employer provides, and I'd feel completely rotten about encouraging such waste just so I can keep exactly the same job. Where do people come up with this worldview?

Go back a generation and the concept of life-long careers was much more common. I think the social expectation for Boomers and earlier generations was that they would have a particular career for life, and many from those generations feel affronted at the thought of having to give up on their existing career. Effectively, they feel they've suffered a breach of the social contract.

Replies from: SilasBarta
comment by SilasBarta · 2010-07-19T16:36:47.177Z · LW(p) · GW(p)

But aren't the Boomers at the end of their careers now? It seems it would have to be a problem with a later cohort for this to be a major issue now.

Replies from: James_K
comment by James_K · 2010-07-20T04:51:43.204Z · LW(p) · GW(p)

The ones in politics aren't at the end of their careers, which means that legislatures as a body will be more likely than the average person to consider making people change jobs unthinkable.

You are right, though: this hypothesis predicts that demand for job security will fall over the next 10-20 years as the Boomers retire.

comment by Roko · 2010-07-19T22:50:37.983Z · LW(p) · GW(p)

You and I understand the principles of economic efficiency and the invisible hand. Ordinary people don't even not understand it. It comes from a different mental universe than their thoughts.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-19T23:24:19.615Z · LW(p) · GW(p)

Ordinary people don't even not understand it. It comes from a different mental universe than their thoughts.

The first time I read this I thought "right on!" but then rereading it I'm not actually sure what it means. Can you expand on what you mean?

Replies from: Roko
comment by Roko · 2010-07-19T23:39:25.817Z · LW(p) · GW(p)

Suppose you take someone who doesn't know math, never has. To them, "million", "billion" and "trillion" mean "many". GDP is as meaningless to them as RFETR is to you, and to them economics means when rich greedy corporate people lie to them and take their jobs away.

They are also heavily biased without realizing it, and without even realizing that people can be biased without realizing it (they think that all untruths are either lies or mistakes).

They don't know what falsification or the scientific method is. Science and engineering are indistinguishable from magic to them.

Then you take this person, and you try to explain "efficient allocation of capital" to them. You may as well try to explain what a frequent-flyer club is to a cave-man. The words simply wouldn't generate concepts for him to misunderstand.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-19T23:52:33.712Z · LW(p) · GW(p)

Ok. I thought something approximately like that but wasn't sure if this was due to an illusion of transparency. Spending time on LW may just be making me too paranoid about that.

comment by Jonathan_Graehl · 2010-07-19T02:33:45.113Z · LW(p) · GW(p)

I thought Florida was closer to the equator than most of the US, which decreases the energy needed to achieve orbit. I've often wondered if this is significant; if so, then why can't we launch from some friendly equatorial country?

Replies from: nerzhin, JoshuaZ
comment by nerzhin · 2010-07-19T16:59:57.454Z · LW(p) · GW(p)

why can't we launch from some friendly equatorial country?

The Europeans do.

Replies from: Alexandros, Roko
comment by Alexandros · 2010-07-20T09:45:14.482Z · LW(p) · GW(p)

French Guiana is not a friendly equatorial country from Europe's perspective; it's an overseas region of France, and therefore part of the EU.

comment by Roko · 2010-07-19T17:02:24.541Z · LW(p) · GW(p)

Because Europe is better than America. ;-0

comment by JoshuaZ · 2010-07-19T03:03:02.034Z · LW(p) · GW(p)

That's the correct reason for Florida. As I understand it, no equatorial country was considered stable and friendly enough to put the infrastructure there. And the US would have to then regularly transport a lot of equipment and personnel there. In contrast, putting mission control in Texas really was about politics. In particular, LBJ was from Texas. And while he was actually a fan of the space program (in some ways more so than Kennedy), he still wanted his home state to get something out of it.

comment by Vladimir_Nesov · 2010-07-18T13:24:44.941Z · LW(p) · GW(p)

Come to think of it, we don't really need sustainable civilization-seeding space colonies or shelters to protect against global non-uFAI disasters, we only need matter-quarantined FAI research institutions (in space or Earth-based shelters) that can last long enough to complete the development of FAI.

comment by Roko · 2010-07-20T13:15:57.500Z · LW(p) · GW(p)

And whilst we're on cheap but high-sanity ways to get stuff to orbit, Brian Wang's Nuclear Space Gun comes out on top.

Need 100,000 tons of aluminized mylar mirror or a CFC factory to go terraform mars? Easy. Just take one ageing 10MT nuke, a hole in the ground and a sprinkling of mad scientist. Total cost for the launch itself would be a small fraction of the NASA budget as far as I can see. The cost-to-orbit per kilogram would be rock-bottom.

comment by [deleted] · 2010-07-18T15:45:27.269Z · LW(p) · GW(p)

Doesn't this post violate the "no politics" rule?

Replies from: Nic_Smith, Roko, None, timtyler
comment by Nic_Smith · 2010-07-18T18:36:49.938Z · LW(p) · GW(p)

It has always been my impression that there is a "no politics" guideline around here, not a rule. Rightfully so, as it's easy to generate irrational talk about politics which would quickly overwhelm just about anything else and ruin everyone's day.

However, Roko brings up something that deserves serious discussion, since there's a lot of interest in existential risk here, but a good historical analog seems like it would be difficult to find and might obfuscate more than enlighten (historical colonies are more controversial than space programs).

comment by Roko · 2010-07-18T15:46:54.436Z · LW(p) · GW(p)

I didn't say it was a left-wing or a right wing rocket!

Replies from: SilasBarta
comment by SilasBarta · 2010-07-19T16:38:14.132Z · LW(p) · GW(p)

A rocket that even has a wing-configuration handedness is pretty much screwed anyway ...

comment by [deleted] · 2010-07-18T16:18:04.514Z · LW(p) · GW(p)

Look. I see that previous posts tagged "politics" have been basically anti-political, or a plea for rationality. It's all rather abstract.

This post is not like that. The business with "make-work" is a partisan poke. (One I agree with, but never mind.)

Replies from: Roko
comment by Roko · 2010-07-18T16:51:13.513Z · LW(p) · GW(p)

The business with "make-work" is a partisan poke.

Which political party or faction supports government waste, pork-barrel money and make-work jobs at companies that have entrenched special interests? Is it anti-right because the right wing likes big business, or anti-left because it's big government? Seems like both to me...

Replies from: Aurini
comment by Aurini · 2010-07-18T23:53:53.928Z · LW(p) · GW(p)

Seconded. This post is "anti-current-party-in-power", which happens to be Democrat, but even a cursory amount of research would provide examples of Bush-era policies benefiting local individuals at the cost of technology, and of the population in general.

This example just happens to be more relevant to our concerns - existential threats and all.

Replies from: LucasSloan
comment by LucasSloan · 2010-07-19T00:21:58.482Z · LW(p) · GW(p)

even a cursory amount of research would provide examples of Bush-era policies

Clinton-

Bush Sr.-

Reagan-

Carter-

Ford-

Nixon-

Johnson-

etc.

comment by Roko · 2010-07-20T13:44:25.406Z · LW(p) · GW(p)

Lastly, I should mention Asteroid Mining. Consider the asteroid Eros:

In the 2,900 cubic kms of Eros, there is more aluminium, gold, silver, zinc and other base and precious metals than have ever been excavated in history or indeed, could ever be excavated from the upper layers of the Earth's crust.

You suddenly begin to see that entrepreneurs like Elon Musk could be the force that pushes us into a space economy.

Brian Wang thinks that there is $100 trillion (10^14) worth of platinum and gold alone there. Of course the price would begin to fall once you had made your first few hundred billion.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-07-20T17:49:56.619Z · LW(p) · GW(p)

Are there actually any materials on Earth that are so rare and precious (and perhaps in danger of running out in the foreseeable future) that it would make sense to mine them from space?

By the way, the claim about aluminum sounds highly implausible to me. Aluminum accounts for about 8% of the Earth's crust by weight, and even if most of it is difficult to access, I would expect that more than the amount present on Eros would be extractable with methods much easier than any conceivable sort of asteroid mining.

Replies from: Roko, khafra, Soki
comment by Roko · 2010-07-20T18:46:47.442Z · LW(p) · GW(p)

Rhodium is currently worth $88 million per 1000kg.

I think that Platinum is an interesting possibility, as well as gold. 1000kg of platinum is currently worth $50 million.

See this table of elements from wikipedia

{Platinum, Rhodium, Gold, Iridium, Osmium, Palladium, Rhenium, Ruthenium} are in the $10,000+ per kg range, with {Platinum, Rhodium, Gold} being $30,000+ /kg

If you consider the basket of metals in that table as a whole, there's obviously a lot of money to be made, and I bet that at least one of them will hold its price relatively well as you mine more of it.

When Brian Wang says Eros is worth $100 trillion, he's probably not far wrong.
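As a rough sanity check on that figure (a sketch; the $30,000/kg average price and the ~2,670 kg/m^3 bulk density are assumptions for illustration, not numbers from the thread):

```python
# Back-of-the-envelope check on the "$100 trillion" figure for Eros.

total_value_usd = 1e14       # Brian Wang's estimate, quoted above
avg_price_per_kg = 30_000    # assumed $/kg, low end of the platinum/gold range

implied_mass_kg = total_value_usd / avg_price_per_kg
print(f"{implied_mass_kg / 1000:.2e} tonnes")  # prints 3.33e+06 tonnes

# Compare with Eros's bulk: ~2,900 km^3 at an assumed ~2,670 kg/m^3.
eros_mass_kg = 2_900e9 * 2_670
print(f"{implied_mass_kg / eros_mass_kg:.1e}")  # prints 4.3e-07
```

So the $100 trillion requires precious metals at well under one part per million of the asteroid's mass, which is a modest ore grade.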

comment by khafra · 2010-07-20T18:57:56.060Z · LW(p) · GW(p)

A cursory googling for "peak rare earth metals" yields a likely affirmative response. Hafnium, iridium, neodymium, lanthanum, cerium, and several others are necessary for modern electronics and/or EVs, and rapidly diminishing. Barring societal collapse or a new technological revolution on the scale of transistors, we'll probably want to go out and get more within the century, and that's not even counting the advantage of avoiding the deleterious effects of mining on Earth.

comment by Soki · 2010-07-20T18:22:48.256Z · LW(p) · GW(p)

Helium-3 could be mined from the Moon. It would be a good fusion fuel, and since it is rare on Earth, it makes sense to get it from space.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-07-20T19:32:23.040Z · LW(p) · GW(p)

Now that's interesting! I didn't know that the prospects for helium-3 fusion are allegedly that good. Still, given the previous history of controlled fusion research, I'm inclined to be skeptical. Do you know of any critical references on present 3He fusion research? All the references I've found from a casual googling appear to be pretty optimistic about it.

Replies from: Soki
comment by Soki · 2010-07-21T03:46:06.414Z · LW(p) · GW(p)

I have no reference, but as far as I understand, deuterium-tritium fusion is easier to achieve than deuterium-helium-3 fusion. But deuterium-helium-3 seems cleaner, and the energy produced is easier to harvest.
So I think the first energy-producing fusion reactor will be a deuterium-tritium one, with deuterium-helium-3 coming later.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-21T04:07:07.752Z · LW(p) · GW(p)

The primary reason D-T is considered more easily viable than the others is that it has the best numbers under the Lawson criterion. This is also true under the triple-product test. While Wikipedia gives a good summary, I can't find a better reference online (the Wikipedia article cites references, including Lawson's original paper, but I can't find any of them online).

The real advantage of He3-deuterium fusion is that it is aneutronic, that is, its primary reaction produces no neutrons. This means there's much less nasty radiation to harm the containment vessel and other parts, and much less of the energy is in difficult-to-capture forms. This is especially important for magnetic confinement, since the neutrons' lack of charge means they are not confined by electromagnetic fields. This is a non-technical article that discusses many of the basic issues, including the distinction between fusion types, although it doesn't go into the level of detail of actually using Lawson's equation.
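The aneutronic point can be made concrete with the standard textbook reaction energies (these numbers are not from the thread; note also that in a real D-He3 plasma, D-D side reactions still produce some neutrons, so "aneutronic" applies only to the primary reaction):

```python
# Fraction of fusion energy carried by charged products (which a
# magnetic-confinement reactor can in principle capture directly)
# versus by neutrons (which escape the field and damage the vessel).
# Energies are standard textbook values in MeV.
reactions = {
    # name: (charged-product energy, neutron energy)
    "D-T":   (3.5, 14.1),        # He-4 (3.5) + neutron (14.1)
    "D-He3": (3.6 + 14.7, 0.0),  # He-4 (3.6) + proton (14.7), both charged
}

for name, (charged, neutron) in reactions.items():
    total = charged + neutron
    print(f"{name}: {charged / total:.0%} of {total:.1f} MeV in charged particles")
# D-T: 20% of 17.6 MeV in charged particles
# D-He3: 100% of 18.3 MeV in charged particles
```

So in D-T fusion about 80% of the energy leaves as 14.1 MeV neutrons, which is exactly the containment and capture problem described above.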

comment by Roko · 2010-07-20T13:08:55.025Z · LW(p) · GW(p)

Note that the feasibility of all these proposals is relative to sanity: the NASA budget is $20bn, and Quicklaunch has a viable system to launch bulk materials from a space gun for $250/kg. So 1 kiloton costs just $250 million, or about 1% of the NASA budget. The space gun's cost is dominated by the fixed cost of the gun itself, and it scales up well (more volume per unit surface area of the projectile helps a space gun, because it reduces drag and drag heating, on top of the usual scale economies for a larger rocket), so if you really wanted to build a 10,000 square kilometre orbital mirror to terraform Mars, you could probably do it for less than 50% of one year's NASA budget.
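Taking the $250/kg and $20bn figures above at face value, the arithmetic checks out; the implied areal density of the mirror is a back-of-envelope addition of my own, not a figure from the comment:

```python
# Sanity check on the launch-cost arithmetic above, taking Quicklaunch's
# quoted $250/kg and the $20bn NASA budget as given.
COST_PER_KG = 250      # USD/kg, Quicklaunch figure quoted above
NASA_BUDGET = 20e9     # USD/year

kiloton_kg = 1_000 * 1_000  # 1 kiloton = 1e6 kg
kiloton_cost = kiloton_kg * COST_PER_KG
print(f"1 kiloton to orbit: ${kiloton_cost / 1e6:.0f}M "
      f"({kiloton_cost / NASA_BUDGET:.2%} of the NASA budget)")

# The 10,000 km^2 mirror claim: half a year's budget buys this much mass,
# which in turn implies the areal density the mirror material would need.
mirror_kg = 0.5 * NASA_BUDGET / COST_PER_KG
area_m2 = 10_000 * 1e6  # 10,000 km^2 in m^2
print(f"Mirror mass budget: {mirror_kg:.1e} kg "
      f"-> {mirror_kg / area_m2 * 1000:.0f} g/m^2")
```

The mass budget works out to 40 kilotons, or about 4 g/m^2 of mirror, which is in the territory of thin-film solar-sail material, so the claim is at least dimensionally sane.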

comment by MichaelVassar · 2010-07-21T05:46:11.443Z · LW(p) · GW(p)

Carl and Robin seconded.

Experiments like Biosphere 2 are orders of magnitude more efficient than space travel as a way to protect mankind.