Roadmap: Plan of Action to Prevent Human Extinction Risks

post by turchin · 2015-06-01T09:58:09.898Z · LW · GW · Legacy · 88 comments

Let's do an experiment in "reverse crowdfunding". I will pay 50 USD to anyone who suggests a new way of preventing X-risk that is not already mentioned in this roadmap. Post your ideas as comments on this post.

Should more than one person have the same idea, the award will be made to the person who posted it first.

The idea must be endorsed by me and included in the roadmap in order to qualify, and it must be new, rational and consistent with modern scientific data.

I may include you as a co-author in the roadmap (if you agree).

The roadmap is distributed under an open license GNU.

Payment will be made by PayPal. The total prize fund is 500 USD (10 prizes in total).

The competition is open until the end of 2015.

The roadmap can be downloaded as a pdf from:

UPDATE: I have uploaded a new version of the map, with changes marked in blue.

 http://immortality-roadmap.com/globriskeng.pdf

Email: alexei.turchin@gmail.com

 

88 comments

Comments sorted by top scores.

comment by John_Maxwell (John_Maxwell_IV) · 2015-06-02T08:55:08.628Z · LW(p) · GW(p)

What about taking steps to reduce the incidence of conflict, e.g. making meditation more pleasant/enjoyable/accessible/effective so people chill out more? Improved translation/global English fluency could help people understand one another. Fixing harmful online discussion dynamics could also do this, and prevent frivolous conflicts from brewing as often.

BTW, both Nick Beckstead and Brian Tomasik have research-wanted lists that might be relevant.

Replies from: turchin
comment by turchin · 2015-06-02T20:57:58.641Z · LW(p) · GW(p)

I like your ideas of "reducing the incidence of conflict" (maybe via some mild psychedelic or brain stimulation?) and "improved translation/global English fluency". I would be happy to give you two awards - how can I send them? The links are also useful, thanks.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2015-06-03T06:12:52.455Z · LW(p) · GW(p)

I'm not sure "Improved translation/global English fluency" is an unalloyed good... could lead to a harmful global monoculture (putting all our eggs in one basket culturally). Feel free to reduce my award count by one :)

Also helping Westerners chill out could leave them less prepared to deal with belligerent Middle Easterners.

Replies from: turchin
comment by turchin · 2015-06-03T10:23:54.600Z · LW(p) · GW(p)

OK, how can I send it to you?

comment by Gondolinian · 2015-06-03T12:34:30.240Z · LW(p) · GW(p)

(I know there are almost certainly problems with what I'm about to suggest, but I just thought I'd put it out there. I welcome corrections and constructive criticisms.)

You mention gene therapy to produce high-IQ people, but if that turns out not to be practical, or if we want to get started before we have the technology, couldn't we achieve the same thing through reproduction incentives? For example, paying and encouraging male geniuses to donate lots of sperm, and paying and encouraging lots of gifted-level or higher women to donate eggs (men can donate sperm more frequently than women can donate eggs, so the highest tier of women alone would not be enough to match the highest tier of men, and you'd have to bring in the next-highest tier). The children of the two groups would then be born from surrogates, whose IQ AFAIK should not have any effect on the child's, and who could therefore be selected based on how cheaply they can be hired.

Replies from: turchin
comment by turchin · 2015-06-03T12:42:09.165Z · LW(p) · GW(p)

If we have 200-300 years before a well-proven catastrophe, this technique may work. But on a 10-50 year timescale, it is better to find good, clever students and pay them to work on x-risks.

Replies from: Gram_Stone, Gondolinian
comment by Gram_Stone · 2015-06-03T22:24:30.103Z · LW(p) · GW(p)

Embryo selection is a third alternative, the progress of which is more susceptible to policy decisions than gene therapy, and the effects of which are more immediate than selective breeding. I recommend Shulman & Bostrom (2014) for further information.

comment by Gondolinian · 2015-06-05T16:47:04.445Z · LW(p) · GW(p)

If we have 200-300 years before a well-proven catastrophe, this technique may work.

If you're talking about significant population-level changes in IQ, then I agree, it would take a while to make that happen with only reproduction incentives. However, I was thinking more along the lines of just having a few thousand or tens of thousands more >145-IQ people than we would otherwise, and that could be achieved in as little as one or two generations (< 50 years) if the program were successful enough.

Now for a slightly crazier idea. (Again, I'm just thinking out loud.) You take the children and send them to be unschooled by middle-class foster families, both to save money, and to make sure they are not getting the intellectual stimulation they need from their environment alone, which they might if you sent them to upper-class private schools, for example. But, you make sure they have Internet access, and you gradually introduce them to appropriately challenging MOOCs on math and philosophy specially made for them, designed to teach them a) the ethics of why they should want to save the world (think some of Nate's posts) and b) the skills they would need to do it (e.g., they should be up to speed on what MIRI recommends for aspiring AI researchers before they graduate high school).

The point of separating them from other smart people is that smart people tend to be mostly interested in money, power, status, etc., and that could spread to them if they are immersed in it. If their focus growing up is simply to find intellectual stimulation, then they would be essentially blank slates and when they're introduced to problems that are very challenging and stimulating, have other smart people working on them, and are really, really* important, they might be more likely to take them seriously.

*Please see my clarification below.

Replies from: Lumifer
comment by Lumifer · 2015-06-05T17:06:19.148Z · LW(p) · GW(p)

they would be essentially blank slates

I don't think this is how it works with people. Especially smart ones with full 'net access.

Replies from: Gondolinian
comment by Gondolinian · 2015-06-05T17:22:09.477Z · LW(p) · GW(p)

I don't think this is how it works with people. Especially ones with full 'net access.

You're right; that was poorly phrased. I meant that they would have a lot less tying them down to the mainstream, like heavy schoolwork, expectations to get a good job, etc. Speaking from my own experience, not having those makes a huge difference in what ideas you're able to take seriously.

The Internet exposes one to many ideas, but 99% of them are nonsense, and smart people with the freedom to think about the things they want to think about eventually become pretty good at seeing that (again speaking from personal experience), so I think Internet access helps rather than hurts this "blank slate"-ness.

Replies from: Lumifer
comment by Lumifer · 2015-06-05T18:39:31.316Z · LW(p) · GW(p)

they would have a lot less tying them down to the mainstream, like heavy schoolwork, expectations to get a good job, etc

I am confused as to why you think this is a good thing.

You're basically trying to increase the variance of outcomes. I have no idea why you think this variance will go precisely in the direction you want. For all I know you'll grow a collection of very very smart sociopaths. Or maybe wireheads. Or prophets of a new religion. Or something else entirely.

comment by ChristianKl · 2015-06-01T16:31:32.235Z · LW(p) · GW(p)

The roadmap is distributed under an open license GNU.

I don't know what that sentence means. If you mean the GPL, it includes a provision requiring the work to be distributed along with a copy of the GPL, which you aren't doing.

Creative Commons licenses don't require you to distribute a copy of them, which makes them better for this kind of project.

Replies from: turchin
comment by turchin · 2015-06-01T18:40:15.549Z · LW(p) · GW(p)

I mean that you are free to copy and modify the roadmap, but you should track changes and not create proprietary commercial products based on it. I would also be happy to be informed if any derivatives are created. It is a little bit different from CC 3.0, but it is now clear that I have to elaborate.

comment by gjm · 2015-06-01T11:10:11.326Z · LW(p) · GW(p)

PDF not available without "joining" Scribd, which appears to require giving them information I do not wish to give them. Any chance of making it available in some other way?

Replies from: turchin
comment by turchin · 2015-06-01T12:37:29.243Z · LW(p) · GW(p)

http://immortality-roadmap.com/globriskeng.pdf Link fixed

Replies from: gjm
comment by gjm · 2015-06-01T15:46:09.010Z · LW(p) · GW(p)

Yup, that works. Thanks.

comment by Satoshi_Nakamoto · 2015-06-12T14:39:09.480Z · LW(p) · GW(p)

I would use the word resilient rather than robust.

  • Robust: A system is robust when it can continue functioning in the presence of internal and external challenges without fundamental changes to the original system.

  • Resilient: A system is resilient when it can adapt to internal and external challenges by changing its method of operations while continuing to function. While elements of the original system are present there is a fundamental shift in core activities that reflects adapting to the new environment.

I think it is a better idea to think about this from a systems perspective rather than from the specific X-risks or plans that we know about or think are cool. We want to avoid the availability bias. I would assume that there are more X-risks and plans that we are unaware of than ones we are aware of.

I recommend adding in the risks and relating them to the plans, as most of your plans, if they fail, would lead to other risks. I would do this in a generic way. An example to demonstrate what I am talking about: take the risk of a tragedy of the commons and a plan to create a more capable type of intelligent life form that will uphold, improve and maintain the interests of humanity. This could be done by using genetic engineering and AI to create new life forms, and nanotechnology and biotechnology to change existing humans. The potential risk of this plan is that it leads to the creation of other intelligent species that will inevitably compete with humans.

One more recommendation is to remove the timeline from the roadmap and just have the risks and plans. The timeline would be useful in the explanation text you are creating. I like this categorisation of X-risks:

  • Bangs (extinction) – Earth-originating intelligent life goes extinct in relatively sudden disaster resulting from either an accident or a deliberate act of destruction.

  • Crunches (permanent stagnation) – The potential of humankind to develop into posthumanity is permanently thwarted although human life continues in some form.

  • Shrieks (flawed realization) – Some form of posthumanity is attained but it is an extremely narrow band of what is possible and desirable.

  • Whimpers (subsequent ruination) – A posthuman civilization arises but evolves in a direction that leads gradually but irrevocably to either the complete disappearance of the things we value or to a state where those things are realized to only a minuscule degree of what could have been achieved.

I don’t want this post to be too long, so I have just listed the common systems problems below:

  • Policy Resistance – Fixes that Fail

  • Tragedy of the Commons

  • Drift to Low Performance

  • Escalation

  • Success to the Successful

  • Shifting the Burden to the Intervenor—Addiction

  • Rule Beating

  • Seeking the Wrong Goal

  • Limits to Growth

Four additional plans are:

  1. (in Controlled regression) voluntary or forced devolution

  2. uploading human consciousness into a super computer

  3. some movement or event that will cause a paradigmatic change so that humanity becomes more existentially-risk aware

  4. dramatic societal changes to avoid some existential risks, like the overuse of resources. An example of this is in the book The World Inside.

You talk about being saved by non-human intelligence, but it is also possible that SETI could actually cause hostile aliens to find us. A potential plan might be to stop SETI and try to hide. The opposite plan (seeking out aliens) seems as plausible though.

Replies from: turchin, None
comment by turchin · 2015-06-12T19:44:19.186Z · LW(p) · GW(p)

I accepted your idea about replacing the word “robust" and will award the prize for it.

The main idea of this roadmap is to escape availability bias by listing all known ideas for x-risk prevention. This map will be accompanied by a map of all known x-risks, which is ready and will be published soon. More than 100 x-risks have been identified and evaluated.

The idea that some of the plans create their own risks is represented in this map with red boxes below plan A1.

But it may be possible to create a completely different map of future risks and prevention using a systems approach, or something like a scenario tree.

Yes, each plan is better suited to containing specific risks: A1 is better for containing biotech and nanotech risks, A2 is better for UFAI, A3 for nuclear war and biotech, and so on. So another map that matches risks to prevention methods may be useful.

The timeline has already been partly replaced with "steps", as suggested by Elo, who was awarded for it.

Phil Torres shows that Bostrom's classification of x-risks is not as good as it seems: http://ieet.org/index.php/IEET/more/torres20150121 So I prefer the notion of "human extinction risks" as clearer.

I still don't know how we could fix all the world-system problems in your list without having control of most of the world, which returns us to plan A1.

Regarding your four additional plans:

  1. Isn't "voluntary or forced devolution" the same as "ludism" and "relinquishment of dangerous science", which are already in the plan?

  2. The idea of uploading was already suggested here in the form of "migrating into a simulation" and was awarded.

  3. I think that "some movement or event that will cause a paradigmatic change so that humanity becomes more existentially-risk aware" is basically the same idea as "a smaller catastrophe could help unite humanity (pandemic, small asteroid, local nuclear war)", but your wording is excellent.

  4. I think I should accept "dramatic societal changes", as it could cover many interesting but different topics: the demise of capitalism, a hipster revolution, internet connectivity, the global village, the dissolution of nation states. I have received many suggestions along this line and could unite them under this topic.

Do you mean METI - messaging to the stars? Yes, it is dangerous, and we should do it only if everything else fails. That is why I put it into plan C. But by the way, SETI is even more dangerous, as we could download and decrypt an alien AI. I have an article about it here: http://lesswrong.com/lw/gzv/risks_of_downloading_alien_ai_via_seti_search/

Thanks for your suggestions, which were the first to imply building the map on completely different principles.

So in total, for now, I offer you 2 awards plus one from Romashka, 150 USD in total. Your username suggests that you would prefer to keep your anonymity, so I could send the money to a charity of your choice.

Replies from: Satoshi_Nakamoto
comment by Satoshi_Nakamoto · 2015-06-13T05:18:02.866Z · LW(p) · GW(p)

Isn't "voluntary or forced devolution" the same as "ludism" and "relinquishment of dangerous science", which are already in the plan?

I was thinking more along the lines of restricting the chance of divergence in the human species. I guess I am not really sure what it is that you are trying to preserve. What do you take to be humanness? Technological advances may allow us to alter ourselves so substantially that we become post-human or no longer human, for example through cybernetics or genetic engineering. "Ludism" and "relinquishment of dangerous science" are ways to restrict which technologies we use, but note that we would still be capable of using and creating these technologies. Devolution - perhaps there is a better word for it - would be something like the dumbing down of all or most humans so that they are no longer capable of using or creating the technologies that could make them less purely human.

I think that "some movement or event that will cause a paradigmatic change so that humanity becomes more existentially-risk aware" is basically the same idea as "smaller catastrophe could help unite humanity (pandemic, small asteroid, local nuclear war)", but your wording is excellent.

Yes you are right. I guess I was more implying man-made catastrophes which are created in order to cause a paradigmatic change rather than natural ones.

I still don't know how we could fix all the world-system problems in your list without having control of most of the world, which returns us to plan A1.

I'm not sure either. I would think you could do it by changing the way that politics works, so that the policies implemented actually have empirical backing based on what we know about systems. Perhaps this is just AI and improved computational modelling. This idea of needing control of the world seems extremely dangerous to me, although I suppose a top-down approach could solve the problems. I think that you should also think about what a good bottom-up approach would be: how do we make local communities and societies more resilient, economical, and capable of facing potential X-risks?

In "Survive the catastrophe" I would add two extra boxes:

  • Limit the impact of a catastrophe by implementing measures to slow its growth and the areas it affects. For example, with pandemics you could improve the capacity for rapid production of vaccines in response to emerging threats, or create or grow stockpiles of important medical countermeasures.

  • Increase the time available for preparation by improving monitoring and early-detection technologies. For example, with pandemics you could support general research on the magnitude of biosecurity risks and opportunities to reduce them, and improve and connect disease surveillance systems so that novel threats can be detected and responded to more quickly.

I could send money to a charity of your choice.

Send it to one of the charities here.

Replies from: hairyfigment, turchin, turchin
comment by hairyfigment · 2015-06-13T20:27:33.919Z · LW(p) · GW(p)

What do you take to be humanness?

Technically, I wouldn't say we'd lost it if the price of sperm donation rose (from its current negative level) until it stopped being an efficient means of reproduction. But I think you underestimate the threat of regular evolution making a lot of similar changes, if you somehow froze some environment for a long time.

Not only does going back to our main ancestral environment seem unworkable - at least without a superhuman AI to manage it! - we should also consider the possibility that our moral urges are a mixed bag derived from many environments, not optimized for any.

comment by turchin · 2015-06-13T17:59:31.882Z · LW(p) · GW(p)

A question: is it possible to create a risk control system that is not based on centralized power, just as Bitcoin is not based on central banking?

For example: local police could handle local crime and terrorists; local health authorities could find and prevent the spread of disease. If we had many x-risk peers, they could monitor their neighborhoods in their professional domains.

Counterexample: how could it help in a situation like ISIS or another rogue state, which is (maybe) going to create a doomsday machine or a virus to be used to blackmail or exterminate other countries?

Replies from: Satoshi_Nakamoto
comment by Satoshi_Nakamoto · 2015-06-14T07:34:40.510Z · LW(p) · GW(p)

Bitcoin is an electronic payment system based on cryptographic proof instead of trust. I think the big difference between it and the risk control system is the need for enforcement, i.e. changing what other people can and can't do. There seem to be two components to the risk control system: predicting what should be researched, and enforcing this. The prediction component doesn't need to come from a centralised power; it could just come from the scientific community. I would think that the enforcement would need to come from a centralised power. I guess there does need to be a way to stop the centralized power from causing X-risks. Perhaps this could come from a localised and distributed effort - maybe something like a better version of Anonymous.

comment by turchin · 2015-06-13T10:03:08.929Z · LW(p) · GW(p)

Sent 150 USD to the Against Malaria Foundation.

The idea of dumbing people down is also present in the Bad Plans section, as "limitation of human or collective intelligence"... But the main idea of preventing human extinction is, by definition, to ensure that at least several specimens of Homo sapiens are still alive at any given point in time. It is not the best possible definition. It should also include posthumans, if they are based on humans and share a lot of their properties (and, as Bostrom said, could realise the full human potential). In fact, we can't say what is really good before we solve the Friendly AI problem. And if we knew what is good, we could also say what the worst outcome is, and thus what constitutes an existential catastrophe. But the real catastrophe which could happen in the 21st century is far from such sophisticated problems of determining the ultimate good, human nature, and full human potential. It is a clearly visible physical process of destruction.

There are some ideas for solving the control problem bottom-up, like David Brin's idea of a transparent society, where vigilantes scan the web and video sensors searching for terrorists. So control would not be hierarchical but net-based, or peer-to-peer.

I like the two extra boxes, but I have already spent my prize budget twice over, which unexpectedly puts me in a conflicted position: as the author of the map I want to make it the best and most inclusive map possible, but as the owner of the prize fund (which I pay out of personal money earned by selling art) I feel more and more like a miser :)

Replies from: Satoshi_Nakamoto
comment by Satoshi_Nakamoto · 2015-06-14T07:35:33.902Z · LW(p) · GW(p)

Don't worry about the money. Just like the comments if they are useful. In "Technological precognition", does this cover time travel in both directions? That is, looking into the future and taking actions to change it, and also sending messages into the past. Also, what about making people more compliant and less aggressive, either by dulling or eliminating human emotions or by making people more like a hive mind?

Replies from: turchin
comment by turchin · 2015-06-14T21:15:30.520Z · LW(p) · GW(p)

I uploaded new version of the map with changes marked in blue. http://immortality-roadmap.com/globriskeng.pdf

Technological precognition does not cover time travel, because it is too fantastical. We may include the scientific study of claims about precognitive dreams, as such study will soon become possible with live brain scans of sleeping people and dream recording. Time travel could have its own x-risks, like the well-known grandfather problem.

Lowering human intelligence is in the bad plans section.

I have been thinking about a hive mind... It may be a way to create safe AI, based on humans and using their brains as free and cheap supercomputers via some kind of neuro-interface. In fact, contemporary science as a whole is an example of such a distributed AI.

If a hive mind is enforced, it is like the worst totalitarian state... If it does not include all humans, the rest will fight against it, and may use very powerful weapons to save their identity. This is already happening in the fight between globalists and anti-globalists.

comment by [deleted] · 2015-06-12T16:36:43.163Z · LW(p) · GW(p)

This is useful. Mr. Turchin, please redirect my award to Satoshi.

Replies from: turchin
comment by turchin · 2015-06-13T18:52:02.309Z · LW(p) · GW(p)

Done

comment by [deleted] · 2015-06-07T13:50:32.188Z · LW(p) · GW(p)

(Thinking out loud)

Currently, about a third of all food produced in the world doesn't make it to being consumed (or something like that - we were told this in our phytopathology course). With increasing standardization of food processing, there should be more common causes of spoilage and more potential for resistant pathogens to evolve and spread rapidly. How much worse would food loss have to become before it initiated a cascade of x-threats to mankind?

Replies from: turchin
comment by turchin · 2015-06-07T18:23:31.149Z · LW(p) · GW(p)

As a lot of grain is now consumed by the meat industry, returning to a vegetable-based and limited diet could effectively increase the food supply 4-5 times. Many other options exist to do the same.

Replies from: None
comment by [deleted] · 2015-06-07T19:03:44.468Z · LW(p) · GW(p)

Legally enforced veganism? (But grain spoils. It is also often stored in buildings designed specifically for that purpose, and once they get infected...) All in all, I was just trying to think of a hard-to-contain, hard-to-notice, hard-to-prevent x-risk; those already discussed seem more... straightforward, perhaps. I am sure there are other examples of systemic failure harder to fight with international treaties than nuclear war.

Replies from: turchin
comment by turchin · 2015-06-07T19:14:49.210Z · LW(p) · GW(p)

If your suggestion were something like "invest in the biodiversity of the food supply chain" or "prevent crop loss due to bad transportation", it could be interesting. While the whole of humanity can't go extinct because of a food shortage, such a shortage could contribute to wars, terrorism and riots, as happened during the Arab Spring.

Replies from: None
comment by [deleted] · 2015-06-08T01:22:38.526Z · LW(p) · GW(p)

Those would be useful things to do, I think, resulting in 1) better quarantine law (the current one does not seem to be taken seriously enough, if Ambrosia's expansion is any indicator, and the timescales for a pathogen will not be decades), 2) portable equipment for instant identification of alien inclusions in medium bulks of foodstuffs, and 3) further development of non-chemical ways of sterilization.

Replies from: turchin
comment by turchin · 2015-06-08T09:07:31.156Z · LW(p) · GW(p)

Thank you for this interesting suggestion! I will include it in the map and I want to send you an award; PM me the details. But what is Ambrosia? Corn rust?

Replies from: None
comment by [deleted] · 2015-06-08T10:08:24.097Z · LW(p) · GW(p)

Thank you. (Ambrosia artemisiifolia is a quarantine plant species; I used it as an example because it's notorious for having allergenic pollen, contributing to desertification - its roots can reach about 4m down, maybe more - and is rather easy to recognize, but people just don't care to eradicate it or at least cut off the inflorescences. And yes, in many places its spread is already unstoppable. Some other plants of the same genus are quarantine weeds, too.)

I referred rather more to pathogens that could arise and benefit from totally manmade environments. (I remember from somewhere that in supermarkets all over the world, the same six species of Drosophila occur; I think that transportation and storage networks can be modeled as ecosystems, especially if more and more stuff gets produced.)

Replies from: turchin
comment by turchin · 2015-06-08T17:39:55.423Z · LW(p) · GW(p)

Yes, in fact fungal rusts can eliminate entire species, as happened with a previous variety of banana and is now happening with amphibians. And here the question arises: could some kind of fungus be dangerous to human existence?

Replies from: None
comment by [deleted] · 2015-06-08T18:05:55.421Z · LW(p) · GW(p)

I really cannot tell. The Irish Famine comes to mind, but surely such things are in the past?.. It's just not a question you should ask a non-expert, because the trivial answer is of course 'yes', but unpacking it takes expertise.

Replies from: turchin
comment by turchin · 2015-06-08T19:49:52.613Z · LW(p) · GW(p)

I meant the fungi which kill humans...

Replies from: None
comment by [deleted] · 2015-06-08T20:06:53.425Z · LW(p) · GW(p)

IANAD, but there's Pneumocystis pneumonia, a really ugly, treatment-resistant thing. I don't know if it's virulent enough to threaten mankind as a whole.

Edit: but considering that the fungus 'appears to be present in healthy individuals in general population' and causes pneumonia if the immune system is weakened, I would not disregard the possibility.

comment by [deleted] · 2015-06-02T14:38:01.542Z · LW(p) · GW(p)

I have an idea related to Plan B – Survive the Catastrophe.

The unfortunate reality is that we do not have enough resources to effectively prepare for all potential catastrophes. Therefore, we need to determine which catastrophes are more likely and adjust our preparation priorities accordingly.

I propose that we create/encourage/support prediction markets in catastrophes, so that we can harness the “wisdom of the crowds” to determine which catastrophes are more likely. Large prediction markets are good at determining relative probabilities.

Of course, the prediction market contracts could not be based on an actual extinction event because no one would be alive to collect the payoff! However, if the contracts are based on severe (but not existential) events, they would still help us infer more accurate estimates for extinction event probabilities.

Replies from: turchin, Lumifer
comment by turchin · 2015-06-02T20:16:06.256Z · LW(p) · GW(p)

I think that you have two ideas:

  1. Prediction market for x-risks
  2. Prepare for the most probable catastrophe.

I don't buy the first one... But in fact the prizes I suggested in the opening post are something like it. I mean, the idea of using money to extract the wisdom of the crowd is good. But a prediction market is not the best variant, because the majority of people have a lot of strange ideas about x-risks, and such ideas would dominate.

The idea of preparing for the most probable catastrophe is better. In fact, we could build bio and nuclear refuges, but not AI and nanotech refuges. And bio-hazard refuges are more important, as a pandemic now seems riskier than nuclear war. So we could concentrate on bio-hazard refuges. I would like to award you the prize for the idea; you can PM me payment details.

comment by Lumifer · 2015-06-02T17:17:50.315Z · LW(p) · GW(p)

if the contracts are based on severe (but not existential) events

You can treat insurance and reinsurance markets as prediction markets for severe events (earthquakes, hurricanes, etc.). I don't think they (or your proposed prediction markets) would be helpful in estimating the probabilities of extinction events.

Replies from: None
comment by [deleted] · 2015-06-02T19:10:52.899Z · LW(p) · GW(p)

Seems like the definition of "severe" is an issue here. Maybe I should have used "incredibly severe"?

Yes, reinsurance markets deal in large insured risks, but they do not target the incredibly large humanitarian risks that are more informative to us. See reinsurance deals here for reference: http://www.artemis.bm/deal_directory/

I don't think they (or your proposed prediction markets) would be helpful in estimating the probabilities of extinction events.

Care to explain your reasoning? For example, if the market indicated that the chance of a pandemic killing 50% of the population is 1,000x greater than the likelihood of a nuclear war of any kind, wouldn't a forecaster find this at least a little useful?

Replies from: Lumifer
comment by Lumifer · 2015-06-02T19:19:31.557Z · LW(p) · GW(p)

For the prediction markets to work they need to settle: a bet must be decided one way or another within reasonable time so that the winners could collect the money from the losers.

How are you going to settle the bets on a 50%-population pandemic or a nuclear war?

Replies from: None
comment by [deleted] · 2015-06-02T20:15:07.767Z · LW(p) · GW(p)

a bet must be decided one way or another within reasonable time

Each contract would have a maturity date - that is standard.

How are you going to settle the bets on a 50%-population pandemic or a nuclear war?

Your primary concern is that the market would not be functional after a 50%-population pandemic or a nuclear war? That is a possibility. The likelihood depends on the severity of the catastrophe, popularity of the market, its technology and infrastructure, geographic distribution, disaster recovery plan, etc.

With the proper funding and interest, I think a very robust market could be created. And if it works, the information it provides will be very valuable (in my opinion).

Replies from: Lumifer
comment by Lumifer · 2015-06-02T20:29:55.862Z · LW(p) · GW(p)

Each contract would have a maturity date - that is standard.

So, a bet would look like "There will or will not be a nuclear war during the year 2016"? I am not sure you will find enough serious bidders on the "will be" side to actually provide good predictions. You are likely to get some jokers and crackpots, but for prediction purposes you actually don't want them.

Is there any empirical data that prediction markets can correctly estimate the chances of very-low-probability events?

Replies from: gwern
comment by gwern · 2015-06-02T21:05:08.533Z · LW(p) · GW(p)

...With the proper funding and interest, I think a very robust market could be created. And if it works, the information it provides will be very valuable (in my opinion).

...Is there any empirical data that prediction markets can correctly estimate the chances of very-low-probability events?

Liquidity problems are an issue, but they may have been partially solved: first, by paying normal interest on deposits to avoid opportunity-cost issues, and second, by market makers like Hanson's LMSR. In particular, people can subsidize the market maker, paying to get trading activity and hence accuracy.
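
(For concreteness, here is a minimal sketch of how a Hanson-style LMSR market maker quotes prices. It is not from the thread; the function names and the b = 100 subsidy parameter are illustrative assumptions.)

```python
import math

def lmsr_cost(q, b):
    """Hanson's LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_prices(q, b):
    """Instantaneous prices; they sum to 1 and can be read as probabilities."""
    weights = [math.exp(qi / b) for qi in q]
    total = sum(weights)
    return [w / total for w in weights]

def cost_to_buy(q, b, outcome, shares):
    """A trader pays the change in the cost function; q holds shares sold so far."""
    q_after = list(q)
    q_after[outcome] += shares
    return lmsr_cost(q_after, b) - lmsr_cost(q, b)

# Hypothetical two-outcome market ("severe pandemic by date D": yes / no).
# The sponsor's worst-case subsidy is b * ln(2); a larger b means deeper liquidity.
q = [0.0, 0.0]
print(lmsr_prices(q, 100))         # [0.5, 0.5] before any trades
print(cost_to_buy(q, 100, 0, 50))  # cost of buying 50 "yes" shares (~28.1)
```

The subsidy is what pays for accuracy: whoever funds the market maker accepts a bounded worst-case loss in exchange for traders always having a counterparty to trade against.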

Replies from: Lumifer
comment by Lumifer · 2015-06-02T21:18:52.985Z · LW(p) · GW(p)

Liquidity problems are an issue

It's not the liquidity problems I'm worried about, but rather the signal-to-noise ratio.

Assume that the correct underlying probability is, say, 0.5% and so you should expect 0.5% of the participants to bet on the "the end is nigh!" side. However you also have a noise level -- say 3% of your participants are looking for teh lulz or believe that a correct bet will get them front-row Rapture seats (or vice versa). Given this noise floor you will be unable to extract the signal from the prediction market if the event has a sufficiently low probability.

Replies from: gwern
comment by gwern · 2015-06-02T21:36:01.275Z · LW(p) · GW(p)

Assume that the correct underlying probability is, say, 0.5% and so you should expect 0.5% of the participants to bet on the "the end is nigh!" side.

? That's not a prediction market. A PM is a continuous price interpretable as a probability, which traders trade based on its divergence from their estimated probability. You can buy or sell either way based on how it's diverged. I have 'bet for' and 'bet against' things I thought were high or low probability all the time on past prediction markets.

Replies from: Lumifer
comment by Lumifer · 2015-06-03T02:10:03.748Z · LW(p) · GW(p)

You're right, I got confused with the noise floor in polls.

However my concern with counterparties didn't disappear. People need incentives to participate in markets. Those who bet that there won't be a nuclear war next year will expect to win some money and if that money is half of a basis point they are not going to bother. Who will be the source of money that pays the winners?

At what probability would you take the side "yes, there will be a nuclear war in 2016"?

And if this particular market will be subsidized, well, then it becomes free money and your prediction ability goes out of the window.

I suspect prediction markets won't work well with very-low-probability bets. Especially bets with moral overtones ("You bet on a global pandemic happening? What are you, a monster??")

Replies from: gwern
comment by gwern · 2015-06-04T00:50:36.754Z · LW(p) · GW(p)

Those who bet that there won't be a nuclear war next year will expect to win some money and if that money is half of a basis point they are not going to bother.

Half a basis point is half a basis point. Bond traders would prostitute their grandmothers for a regular half a basis point boost to their returns.

Who will be the source of money that pays the winners?

The source is everyone who takes the other side of the contract? PMs are two-sided.

And if this particular market will be subsidized, well, then it becomes free money and your prediction ability goes out of the window.

I don't pretend to understand how the LMSR and other market-makers really work, but I think that you can't simply trade at random to extract money from them.

Especially bets with moral overtones ("You bet on a global pandemic happening? What are you, a monster??")

Seems to have worked so far with the IARPA bets. (Admittedly, my own trades on ISIS being overblown didn't work out too well but I think I did gain on the various flu contracts.)

comment by ChristianKl · 2015-06-01T16:25:21.471Z · LW(p) · GW(p)

I think the word "Trust" is lacking from your roadmap. We need to find ways to determine when to trust scientists that their findings are sound.

On a smaller level trust is also very important to get people to cooperate. Empathy alone doesn't make you cooperate when you don't trust the other person.

comment by Gondolinian · 2015-06-12T20:25:29.885Z · LW(p) · GW(p)

Improbable idea for surviving heat death: computers made from time crystals. (h/t Nostalgebraist and Scott)

Replies from: turchin, Lumifer
comment by turchin · 2015-06-12T20:49:44.636Z · LW(p) · GW(p)

I have another roadmap, "How to survive the end of the universe", and one of the ideas there is geometric computers. But thanks for the links. The map in the OP is about x-risks in the approximately near future, like the next 100 years.

comment by Lumifer · 2015-06-12T20:59:07.871Z · LW(p) · GW(p)

From that set of solutions I prefer the good old-fashioned elven magic X-D

comment by Elo · 2015-06-06T13:24:23.881Z · LW(p) · GW(p)

New idea: nuclear hoarding. Collect all nuclear particles to limit their ability to be used. (Not sure if this falls under a larger "worldwide risk prevention authority", but it doesn't have to be carried out willingly; it can be carried out via capitalism. Just purchase and contain the material.)

New idea to limit climate change: tree-planting. Plant massive numbers of green species in order to reduce the carbon in the atmosphere. Australia is a large land mass that is unused and could be utilised to grow the oxygen farm and carbon capture we need to keep the planet from overheating. (There are still problems with water and arable land, but that is a problem that slowly solves itself as more plants are grown.)

New idea: be first. Develop defensive biotechnology/nanotechnology systems before destructive ones exist, rather than responding to them as they appear.

New idea: solve the x-risk threat the same way we solve the tragedy of the commons - bind those involved to the consequences, but this time use personal consequences. I.e., the individuals responsible, their families and their genetic lines will be put to death by a global army if they cause x-damage, effectively disincentivising starting a war that leads to anything short of total planetary destruction, because there are personal repercussions for doing so.

New idea: ozone repair drones. Robots to improve the ozone layer so that it does not receive damage from CO2 as harshly, improving our chances of keeping the planet cool.

New idea: rain drones - weather control technologies to decrease the damage of adverse weather, improve our control over it, and reduce the damage caused by climate change.

New idea: create a second moon (or synthetic planet) and a second-moon colony, either by capturing an asteroid or by sending mass from Earth to become a floating body.

New idea: push Mars into a more habitable solar orbit and terraform it.

Replies from: turchin
comment by turchin · 2015-06-07T10:59:56.939Z · LW(p) · GW(p)
  1. If by "particles" you mean nuclear materials, it is practically impossible, because uranium is dissolved in sea water and can be mined. It would also require a world government with great power.
  2. I have already added CO2 capture. It could also be done with plankton.
  3. This is in practice Bostrom's idea of differential technological development.
  4. They will already be punished by the x-risk catastrophe: they will die, and their families too. If they don't want to think about that, they will not take the punishment seriously. But maybe we could punish people just for raising a risk or for not doing enough to prevent it. It would be like a law that punishes people for inadvertence or neglect. I will think about it. The R.B. idea is about this, in fact.
  5. Ozone is about UV, not cooling. But nanobots could become part of geoengineering later. I don't go into detail about all the possible ways of geoengineering in the map.
  6. Mostly the same.
  7. It is not clear why a new moon would be any better than the real Moon or the International Space Station.
  8. Terraforming planets in the map is the same as making Mars habitable; also, moving Mars is risky and requires dangerous technologies.

The most interesting idea I derived from this is to write an international law on x-risks which would punish people for raising risk (underestimating it, plotting it, risk neglect) and reward people for lowering x-risks, finding new risks, and for efforts toward their prevention. I would like to award you 1 prize for it; PM me.

Replies from: Elo
comment by Elo · 2015-06-08T09:56:37.844Z · LW(p) · GW(p)

n4. X-risk is not always risky to the developer: the creators of the atomic bomb did not feel the effects of their weapons personally. In this way an x-risk can be catastrophic but not of personal consequence. I was suggesting something to overcome this predicament, where one government might commission a scientist to design a bioweapon to be released on another country and offer to defend/immunise the creators.

It's a commons-binding contract that discourages the individual. It only takes one - which is the challenge with humans taking x-risky actions; the consequences are not always direct for the individual (or it may not feel that way to the individual).

n5. Radiation from space that we are not currently accustomed to or protected from would be an x-risk to the biological population.

n7. As a way to increase the number of colonies nearby, creating another moon close to Earth might be easier, cheaper and more viable than Mars. Although I might be completely wrong and Mars might be easier; it really depends on how much inter-colony travel there is likely to be.

n1. I meant: buy up the supply of nuclear material and push the price above viability for a long time, thereby discouraging the development of surrounding technology.

Replies from: turchin
comment by turchin · 2015-06-08T19:46:12.652Z · LW(p) · GW(p)

I get your idea: I think it should be covered by an x-risk law. Something like Article 1: anyone who consciously or unconsciously raises x-risks will go to jail.

Ozone depletion is not proven to be an extinction-level event, but it is a really nasty thing anyway. It could be solved without nanobots, by injecting the right chemicals.

NASA is planning to capture a small asteroid, so it may work, but it can't be the main solution; it may be a useful step. Market forces will raise the uranium supply.

You don't need a lot of uranium if you are going to enrich it in centrifuges.

If you are really going to limit supply, it's better to try to buy up all the best scientists in the field. The US in the 90s was hiring Russian scientists who had previously worked on secret biological weapons.

Replies from: Elo
comment by Elo · 2015-06-09T04:55:35.859Z · LW(p) · GW(p)

The other advantage of forcing people to use a limited supply of radioactive material in a reaction would be enhanced safety as well (in the case of a failure there would be less total material to account for).

comment by Elo · 2015-06-02T21:56:01.000Z · LW(p) · GW(p)

Meta: I honestly didn't read the plan in full the first two times I posted. Instead I went to Wikipedia and looked up global catastrophic risk. Then, once I had an understanding of the definition of global catastrophic risk, I thought up solutions ("How would I best solve X?") and checked if they were on the map.

The reason I share this is that the first several things I thought of were not on the map. And it seems like several other answers are limited to "what's just outside the box" ("think outside the box" is a silly concept, because it often involves people telling you exactly where the box is and where to think outside of it) and are anchored by things near the existing map. I am not sure that you are getting great improvements to the map from the way you have set up the problem.

New idea: If I were hosting the map there would be a selection of known x-risk problems.
Something like:

AI:

  • paperclippers
  • UFAI
  • Oppressive AI (modifies our quality of life)
  • trickster AI (AI built with limits, i.e. human happiness - redefines its own reference term for human and happiness and kills all old humans that are not happy)

Nanotechnology:

  • 2nd gen Molecular assembly that escapes containment (on purpose or by accident)
  • Race to profit includes a game of chicken to take the highest risk.

Biotechnology:

  • new disease with no known relationship to existing diseases and high virulence (difficult to cure)
  • new strain of old disease (known effect and a race to fight it off)

Nuclear:

  • catastrophic death of all life by nuclear war and ongoing radiation
  • reduction of lifespan due to radioactivity induced cancer. (possibly reducing us back to pre-colonial civilisation)
  • concerning speed of mutation due to nuclear particles (either in humans or in things that harm us or ensure our wellbeing, i.e. viruses, food supplies)

Global climate:

  • planet becomes uninhabitable to humans
  • planet becomes less habitable to humans - slows down growth of science/technology
  • humans are forced underground limiting the progress of scientific research or our ability to sustain food produce.
  • humans are cut off from each other and forced to live in small colonies

And what each of the solutions on the solution map might help solve.

Edits: formatting. I still can't get the hang of formatting after this long!

Edit: it looks like you are working on other maps at http://immortality-roadmap.com/.

Replies from: turchin
comment by turchin · 2015-06-02T22:19:52.550Z · LW(p) · GW(p)

Yes, the site is not finished, but the map "Typology of human extinction risks" is ready and will be published next week. Around 100 risks will be listed. Any roadmap has its limitations because of its size and its basically 2D structure. Of course we could and should cover all options for all risks, but it should be done in more detail. Maybe I should make a map where ways of prevention are suggested for each risk.

Replies from: Elo
comment by Elo · 2015-06-02T22:32:45.453Z · LW(p) · GW(p)

I didn't really know what x-risks you were talking about, which is why a map of x-risks would have helped me.

Replies from: turchin
comment by turchin · 2015-06-02T22:40:09.786Z · LW(p) · GW(p)

Basically the same risks you listed here. I can PM you the map.

comment by ChristianKl · 2015-06-01T16:28:23.550Z · LW(p) · GW(p)

I don't think "low rivalry" in science is desirable. Rivalry makes scientists criticize the work of their peers and that's very important.

Replies from: turchin
comment by turchin · 2015-06-01T18:49:17.842Z · LW(p) · GW(p)

By "law rivalry" I mean something like "productive cooperation", based on "trust" between scientists to each other and to the society to the scientist. Productive cooperation does not exclude competition if it based on honest laws. And it is really important topic. In the movie "2012" the most fantastic thing was that than the scientist found the high neutrino level, he was able to inform government about the risk and was heard. I really want to award the prize, you could email details me on alexei.turchin@gmail.com I am going to replace rivalry with "productive cooperation between scientists and society based on trust". Do you think it will be right phrase?

comment by Elo · 2015-06-01T13:42:20.825Z · LW(p) · GW(p)

Is A3 meant to have connecting links horizontally through its path?

Another bad idea: build a simulation-world to live in so that we don't actually have to worry about real-world risks. (disadvantage - is possibly an X-risk itself)

It kinda depends on which x-risk you are trying to cover...

For example, funding technologies that improve the safety or efficiency of nuclear use might mean that any use is a lot less harmful. Or develop ways to clean up nuclear messes, or mitigate the effects of nuclear radiation (i.e. a way to gather radioactive dust).

Encouraging people to start small bio-hack groups around the world could improve the biotechnology understanding of the public to the point where no one accidentally creates a bio-technology hazard. Developing better guidance on safe biotechnology processes and exactly why it's safe this way and not otherwise... effectively "raising the sanity waterline", but specific to the area of biotechnology risks.

(I suggest that maybe you want to offer to take free suggestions before you pay people - at least that might save you some dollars)

Replies from: MarsColony_in10years, turchin, turchin
comment by MarsColony_in10years · 2015-06-02T04:31:57.378Z · LW(p) · GW(p)

Encouraging people to start small bio-hack groups around the world could improve the biotechnology understanding of the public to the point where no one accidentally creates a bio-technology hazard.

I'm all for biohazard awareness groups, and even most forms of BioHacking at local HackerSpaces or wherever else. However, I never want to see potentially dangerous forms of BioTech become decentralized. Centralized sources are easy to monitor and control. If anyone can potentially make an engineered pandemic in their garage, then no amount of education will be enough for a sufficient safety margin. Think of how many people cut fingers off in home table saws or lawnmowers or whatever. DIY is a great way to learn through trial and error, but not so great where errors have serious consequences.

The "economic activation energy" for both malicious rogue groups and accidental catastrophes is just too low, and Murphy's law takes over. However if the economic activation energy is a million dollars of general purpose bio lab equipment, that's much safer, but would require heavy regulation on the national level. Currently it's something like a billion dollars of dedicated bio warfare effort, and has to be regulated on the international level. (by the Geneva Protocol and the Biological Warfare Convention)

(I suggest that maybe you want to offer to take free suggestions before you pay people - at least that might save you some dollars)

I'd agree with you here. Although money is a fantastic motivator for repetitive tasks, it has the opposite effect on coming up with insightful ideas.

Replies from: Elo
comment by Elo · 2015-06-02T05:14:48.766Z · LW(p) · GW(p)

(I suggest that maybe you want to offer to take free suggestions before you pay people - at least that might save you some dollars)

I was really saying - save your money till after people shoot off some low-hanging fruit ideas.

I would argue that the current barrier of "it costs lots of money to do bio-hacking right" is a terrible one to hide behind, because of how easy it is to overcome it - or to do biohacking less right and less safely, i.e. without safe containment areas.

Perhaps funding things like clean-rooms with negative pressure and leaving the rest up to whoever is using the lab-space.

comment by turchin · 2015-06-01T22:16:52.333Z · LW(p) · GW(p)

In A3 the blocks are not connected because they are not consecutive steps, but more like themes or ideas.

Replies from: Elo
comment by Elo · 2015-06-01T22:28:19.086Z · LW(p) · GW(p)

Okay. Maybe bolden up the outlines or change the colours so they appear more distinct, or make some lines into arrows?

comment by turchin · 2015-06-01T22:06:28.239Z · LW(p) · GW(p)

I like all 3 ideas - simulation, nuclear waste reduction and bio-hack awareness groups. I would like to include them in the map and award you 150 usd. How can I pay you?

Replies from: Elo, Elo
comment by Elo · 2015-06-01T23:55:34.277Z · LW(p) · GW(p)

Simulation is an X-risk in that we might stagnate our universal drive toward growth, live in a simulation for the rest of our lives, and extinguish ourselves from existence.

Bio-hacking is an X-risk because, if done wrong, you would encourage all these small biotech interests and end up with someone doing it unsafely.

The failure of mini biohack groups could probably be classified as controlled regression -> small catastrophe, similar to the small nuclear catastrophes of recent history and their ability to discourage any future risk-taking behaviour in the area.

The advantage of common bio-hack groups is less reliance on the existing big businesses to save us with vaccines etc.

Indeed the suggestion of "Invite the full population to contribute to solving the problem" might be a better description.

New suggestion: "lower the barriers of entry into the field of assistance in X-risk". Easy explanation of the X-risks; easier availability of resources to attempt solutions. Assuming your main x-risks are 1. biotech; 2. nanotech; 3. nuclear, 4. climate change and 5. UFAI)

  1. Provide biotech upskilling (education, some kind of OpenBio foundation) and bio-resources for anyone interested in the area (a starter kit, a smaller cheaper lab-on-a-chip, simple biotech "at-home" experiments like GFP insertion).
  2. Teach the risks of molecular manufacturing before teaching people how to do it (or restructure the education program to make sure this is included).
  3. Teach 4th-gen nuclear technologies to everyone. Implement small-scale nuclear models (i.e. tiny scale - not sure if it would work) to help people understand the possibility of a tiny nuclear failure scaled up to a large one. (Whether it is possible to make a tiny-scale nuclear reactor is beyond my knowledge.)
  4. Empower the public with the technology or understanding to reverse pollution, i.e. solar + batteries + electric cars, tree-planting initiatives (or oxygen bio-filters), carbon capture programs; educate people and make small-scale sustainability (or close to it) possible. Teach people 3D printing and a maker (fixer) mindset; reuse/upcycle; reward disposable packaging more than non-disposable.
  5. UFAI: free education in the area, starter packages in programming. (I have no ideas beyond the fact that it seems to be being worked on by smart people.)

New suggestion: teach x-risk from 5 years old upwards, so that the next generation of humans understands that when they play with these kinds of powerful forces, they risk a whole lot more than they realise (hopefully before an x-risky accident explicitly warns people about these things).

New idea - I don't think you covered: lock down all risk areas beneath piles of bureaucracy, paperwork, safety requirements and bullshit. No one gets to work on nuclear, no one gets to work on biotech without ridiculous safety standards, no one gets to create pollution without being arrested and charged, no one gets to code learning machines without strict supervision.

Replies from: turchin
comment by turchin · 2015-06-02T09:08:51.215Z · LW(p) · GW(p)

I like the ideas about risk education and about bureaucracy. I think I should include them in the map and award you 2 prizes. How can I transfer them?

Replies from: Elo
comment by Elo · 2015-06-02T10:58:33.748Z · LW(p) · GW(p)

Details by PM.

comment by Elo · 2015-06-01T22:30:09.768Z · LW(p) · GW(p)

Reply in a PM.

Replies from: turchin
comment by turchin · 2015-06-02T09:10:36.024Z · LW(p) · GW(p)

Replied

comment by plex (ete) · 2015-06-01T23:49:51.603Z · LW(p) · GW(p)

Comprehensive, I think it has the makings of a good resource, though it needs some polish. I'd imagine this would be much more useful to someone new to the ideas presented if it linked out to a bunch of papers/pages for expansion from most bulletpoints.

One thing I'd like to see added is spreading the memes of reason/evidence-based consequentialist decision making (particularly large-scale and future included) at all levels. It may be entirely accepted here, but the vast majority of humans don't actually think that way. It's kind of a pre-requisite for getting much momentum behind the other, more direct, goals you've laid out.

  • Make it less and less acceptable to be partisan/tribal in a moloch-fueling way in the public sphere (starting with our corner of it, spreading opportunistically).
  • Grow EA, so there's funding for high-impact causes like some of the projects listed, and caring about solving problems is normalized.
  • Pick up potentially high-impact people with training and give them a support network of people who have an explicit goal to fix the world, like CFAR does, to create the kind of people to staff the projects.

In a few places, particularly in A1, you drift into general "things that would be good/cool" rather than staying focused on things applicable to countering an extinction risk. Maybe there is a link I'm missing, but other than bringing in more resources I'm not sure what risk "Planetary mining", for example, helps counter.

I'd advise against giving dates. AI timelines in particular could plausibly be much quicker or much slower than your suggestions, and that would have massive knock-on effects. False confidence on specifics is not a good impression to give; maybe generalize them a bit?

"Negotiation with the simulators or prey for help"

pray?

Replies from: turchin, ete
comment by turchin · 2015-06-02T21:21:36.023Z · LW(p) · GW(p)

I am now working on a large explanation text, which will be 40-50 pages. It will include links. Maybe I will add the links inside the pdf.

I don't think I should go into all the details of decision theory and EA. I just put "rationality".

Picking potential world saviours, educating them and providing all our support seems to be a good idea, but we probably don't have time. I will think more about it.

Planetary mining was a recent addition, addressed to people who think that Peak Oil and Peak Everything is the main risk. Personally, I don't believe in the usefulness of space mining without nanotech.

The point about dates is really important. Maybe I should use vaguer dates, like the beginning, middle and second half of the 21st century? What other way is there to say it more vaguely?

I upvoted your post, and in general I think that downvoting without explanation is not a good thing on LW.

"Pray" corrected.

Replies from: ete
comment by plex (ete) · 2015-06-03T00:33:26.471Z · LW(p) · GW(p)

Once the explanation text exists, linking to the appropriate section of it (which in turn would link out to primary sources) would probably be better than linking to primary sources directly.

Compressing this to "rationality" is reasonable, though most readers would not understand it at a glance. If you're trying to keep the map very streamlined, having this as a set of pointers makes sense, though perhaps alongside rationality it'd be good to have a pointer more clearly directed at "make wanting to fix the future something that is widely accepted", rather than rationality's usual meaning of being effective. I'd also think it more appropriate for the A3 stream than A2, at least for what I have in mind.

I'd think creating world saviours from scratch would not be a viable option under some AI timelines, but getting good at picking up promising people in (or leaving) university who have the right ethical streak, and putting them in a network full of the memes of EA/X-risk reduction, could plausibly give a turnaround from "person who is smart and will probably get some good job in some mildly evil corporation" to "person dedicated to trying to fix major problems / person in an earning-to-give career to fund interventions / person working towards top jobs to gain leverage to fix things from the inside" on the order of months, with an acceptable rate of success (even a few % changing life trajectory would be more than enough to pay back the investment of running that network, in terms of x-risk reduction).

Perhaps classifying things in terms of what should be the focus right now versus things that need more steps before they become viable projects would be more useful than attempting to give dates in general? Vague dates are better, but thinking about it more, I'm not sure even giving wide ranges really solves the problem; our ability to forecast several very important things is highly limited. I'm not sure about a good set of labels for this, but perhaps something like the following (a rough sketch of how such tiers could be applied follows after the list):

  • Immediate (aka: things which we could/are just working on right now)
  • Near future (single digit years? things which need some foundations, but are within sight)
  • Mid-term (unsure when we'll get there, may vary significantly from topic to topic, can get a rough idea of what will likely need doing but we can't get into the details until previous layers of tech/organization are ready)
  • Distant (getting much harder to forecast, major goals and projects which need large unpredictable tech advances and/or significant social changes before they're accessible)
  • Outcomes (ways things could end up, when one or more of the previous projects goes through).

Again, I'm not sure about these labels, but using terms which point more to the number of steps and the difficulty of forecasting seems like a thing to explore.
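A minimal sketch of what replacing dates with such forecasting tiers could look like, assuming the roadmap items were kept as structured data (which they are not); the tier names follow the list above, while the example items and their tier assignments are purely illustrative assumptions rather than entries taken from the map:

```python
from enum import Enum

class Readiness(Enum):
    """Rough forecasting tiers, used instead of calendar dates."""
    IMMEDIATE = 1   # things we could work on right now
    NEAR = 2        # needs some foundations, but within sight
    MID_TERM = 3    # depends on earlier layers of tech/organization
    DISTANT = 4     # needs large, unpredictable advances
    OUTCOME = 5     # possible end states rather than projects

# Illustrative tagging of a few roadmap-style items; the item names and
# tier assignments below are assumptions for this example only.
roadmap = {
    "x-risk education programs": Readiness.IMMEDIATE,
    "stricter biosafety standards": Readiness.NEAR,
    "space colonization": Readiness.DISTANT,
}

def items_in_tier(tier: Readiness) -> list[str]:
    """Return all roadmap items tagged with the given readiness tier."""
    return [name for name, t in roadmap.items() if t is tier]

print(items_in_tier(Readiness.IMMEDIATE))  # -> ['x-risk education programs']
```

The point of the sketch is only that tiers defined by forecastability can be applied and filtered just as easily as dates, without implying false precision.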

And thank you. I tend to take downvotes as very strong negative reinforcement; it helps that you find my post somewhat useful.

Replies from: turchin
comment by turchin · 2015-06-03T12:19:33.100Z · LW(p) · GW(p)

Thank you for the inspiring comment. Yes, anonymous downvoting makes me feel as if I have a secret enemy in the woods (( The idea of creating "world saviours" from bright students is more realistic, and effective altruists and LW have already done a lot in this direction. Rationality should also be elaborated, and the suggestion about classifying dates is inspiring.

Replies from: Lumifer
comment by Lumifer · 2015-06-03T15:42:34.083Z · LW(p) · GW(p)

The idea of creating "world saviour"

I'm very, very suspicious of the idea of creating "world saviours". In the Abrahamic tradition, world saviours are expected to sweep the Earth clean of bad men with fire and sword. Yes, nice things are promised after that :-/

comment by plex (ete) · 2015-06-02T13:04:58.251Z · LW(p) · GW(p)

I'm curious about why this was downvoted?

Replies from: OrphanWilde
comment by OrphanWilde · 2015-06-02T13:23:02.405Z · LW(p) · GW(p)

One person downvoted it, which means it could be anything from "I don't like spelling corrections" to "I disagree about not giving dates".

In general, if only one person downvotes, it is best not to ask. I don't see anything worth downvoting in your post myself, although I wouldn't upvote it, because it reads to me more like an attempt at compressing many applause lights into one comment without paying attention to any one than an attempt at genuine suggestions for improvement. (It's a little -too- Less Wrongian.)

comment by Xerographica · 2015-06-02T04:11:13.845Z · LW(p) · GW(p)

Hedge, hedge, hedge! The most successful plant family on Earth is the Orchidaceae. There are around 30,000 different species! Each orchid seed pod can contain a million dust-like seeds that are dispersed by the wind. A million seeds is a huge hedge.

The opposite of hedging is to put all your eggs in one basket. Right now humans have all their eggs in one basket... aka "Earth". We also allow a small group of government planners to allocate all our taxes. Coincidence? Nope.

Centralization is always a function of conceit: people think they have enough facts to justify blocking/limiting heterogeneous activity. The more you appreciate fallibilism, the more you appreciate decentralization.

The answer is always tax choice.

Replies from: turchin, Jiro
comment by turchin · 2015-06-02T21:01:08.016Z · LW(p) · GW(p)

How could we do it without space colonization?

Replies from: Xerographica
comment by Xerographica · 2015-06-02T21:46:41.333Z · LW(p) · GW(p)

The lesson of the potato famine was that crops should be more, rather than less, diverse. The potatoes in cultivation didn't have enough genetic variation, which is why the disease had such a huge impact. But if that's true of crops... then it's also true of people. People should be more genetically diverse; that way a new pathogen couldn't kill all of us. Although I have no idea how you'd practically ensure greater human diversity!!?? History might refer to you as the opposite of Hitler.

Regarding the danger of AI... if greater diversity is better for crops and humans, then it's better for robots as well. We'll give more resources to the most beneficial robots. Evil robots won't have a leg to stand on.

And war can be eliminated by tax choice.

Replies from: turchin
comment by turchin · 2015-06-02T21:55:55.085Z · LW(p) · GW(p)

An interesting fact is that humans are one of the less diverse species, because our population recently passed through a bottleneck and afterwards experienced rapid growth. Any two chimps differ from each other more than any two humans do. This means we are vulnerable to a large pandemic; we are almost clones. So some genetic experiments on embryos may improve the situation, but it is better to invest in biosafety.

comment by Jiro · 2015-06-02T19:09:55.192Z · LW(p) · GW(p)

You've said this before, and it's wrong in the same way it was last time. Orchids produce lots of seeds, but producing lots of seeds doesn't let them survive in more varied environments.