How can I reduce existential risk from AI?

post by lukeprog · 2012-11-13T21:56:12.609Z · LW · GW · Legacy · 92 comments

Contents

  Meta-work, strategy work, and direct work
  Meta-work for AI x-risk reduction
  Strategy work for AI x-risk reduction
  Direct work for AI x-risk reduction
  Conclusion

Suppose you think that reducing the risk of human extinction is the highest-value thing you can do. Or maybe you want to reduce "x-risk" because you're already a comfortable First-Worlder like me and so you might as well do something epic and cool, or because you like the community of people who are doing it already, or whatever.

Suppose also that you think AI is the most pressing x-risk, because (1) mitigating AI risk could mitigate all other existential risks, but not vice-versa, and because (2) AI is plausibly the first existential risk that will occur.

In that case, what should you do? How can you reduce AI x-risk?

It's complicated, but I get this question a lot, so let me try to provide some kind of answer.

 

Meta-work, strategy work, and direct work

When you're facing a problem and you don't know what to do about it, there are two things you can do:

1. Meta-work: Amass wealth and other resources. Build your community. Make yourself stronger. Meta-work of this sort will be useful regardless of which "direct work" interventions turn out to be useful for tackling the problem you face. Meta-work also empowers you to do strategic work.

2. Strategy work: Purchase a better strategic understanding of the problem you're facing, so you can see more clearly what should be done. Usually, this will consist of getting smart and self-critical people to honestly assess the strategic situation, build models, make predictions about the effects of different possible interventions, and so on. If done well, these analyses can shed light on which kinds of "direct work" will help you deal with the problem you're trying to solve.

When you have enough strategic insight to have discovered some interventions that you're confident will help you tackle the problem you're facing, then you can also engage in:

3. Direct work: Directly attack the problem you're facing, whether this involves technical research, political action, particular kinds of technological development, or something else.

Thinking with these categories can be useful even though the lines between them are fuzzy. For example, you might have to do some basic awareness-raising in order to amass funds for your cause, and then once you've spent those funds on strategy work, your strategy work might tell you that a specific form of awareness-raising is useful for political action that counts as "direct work." Also, some forms of strategy work can feel like direct work, depending on the type of problem you're tackling.


Meta-work for AI x-risk reduction

Make money. Become stronger. Build a community, an audience, a movement. Store your accumulated resources in yourself, in your community, in a donor-advised fund, or in an organization that can advance your causes better than you can as an individual.

  1. Make money: In the past 10 years, many people have chosen to start businesses or careers that (1) will predictably generate significant wealth they can spend on AI x-risk reduction, (2) will be enjoyable enough to "stick with it," and (3) will not create large negative externalities. But certainly, the AI x-risk reduction community needs a lot more people to do this! If you want advice, the folks at 80,000 Hours are the experts on "ethical careers" of this sort.

  2. Become stronger: Sometimes it makes sense to focus on improving your productivity, your research skills, your writing skills, your social skills, etc. before you begin using those skills to achieve your goals. Example: Vladimir Nesov has done some original research, but mostly he has spent the last few years improving his math skills before diving into original research full-time.

  3. Build a community / a movement: Individuals can change the world, but communities and movements can do even better, if they're well-coordinated. Read What Psychology Can Teach Us About Spreading Social Change. Launch (or improve) a Less Wrong group. Join a THINK group. Help grow and improve the existing online communities that tend to have high rates of interest in x-risk reduction: LessWrong, Singularity Volunteers, and 80,000 Hours. Help write short primers on crucial topics. To reach a different (and perhaps wealthier, more influential) audience, maybe help with something like the Singularity Summit.

  4. Develop related skills in humanity. In other words, "make humanity stronger in ways that are almost certainly helpful for reducing AI x-risk" (though strategic research may reveal they are not nearly the most helpful ways to reduce AI x-risk). This might include, for example, getting better at risk analysis with regard to other catastrophic risks, or improving our generalized forecasting abilities by making wider use of prediction markets.

  5. Fund a person or organization doing (3) or (4) above. The Singularity Institute probably does more AI x-risk movement building than anyone, followed by the Future of Humanity Institute. There are lots of organizations doing things that plausibly fall under (4).

Note that if you mostly contribute to meta work, you want to also donate a small sum (say, $15/mo) to strategy work or direct work. If you only contribute to meta work for a while, an outside view (around SI, anyway) suggests there's a good chance you'll never manage to do anything non-meta. A perfect Bayesian agent might not optimize this way, but optimal philanthropy for human beings works differently.

 

Strategy work for AI x-risk reduction

How can we improve our ability to do long-term technological forecasting? Is AGI more likely to be safe if developed sooner (Goertzel & Pitt 2012) or later (Muehlhauser & Salamon 2012)? How likely is hard takeoff vs. soft takeoff? Could we use caged AGIs or WBEs to develop safe AGIs or WBEs? How might we reduce the chances of an AGI arms race (Shulman 2009)? Which interventions should we prioritize now, to reduce AI x-risk?

These questions and many others have received scant written analysis — unless you count the kind of written analysis that is (1) written with much vagueness and ambiguity, (2) written in the author's own idiosyncratic vocabulary, (3) written with few citations to related work, and (4) spread across a variety of non-linear blog articles, forum messages, and mailing list postings. (The trouble with that kind of written analysis is that it is mostly impenetrable or undiscoverable to most researchers, especially the ones who are very busy because they are highly productive and don't have time to comb through 1,000 messy blog posts.)

Here, then, is how you might help with strategy work for AI x-risk reduction:

  1. Consolidate and clarify the strategy work currently only available in a disorganized, idiosyncratic form. This makes it easier for researchers around the world to understand the current state of play, and build on it. Examples include Chalmers (2010), Muehlhauser & Helm (2012), Muehlhauser & Salamon (2012), Yampolskiy & Fox (2012), and (much of) Nick Bostrom's forthcoming scholarly monograph on machine superintelligence.

  2. Write new strategic analyses. Examples include Yudkowsky (2008), Sotala & Valpola (2012), Shulman & Sandberg (2010), Shulman (2010), Shulman & Armstrong (2009), Bostrom (2012), Bostrom (2003), Omohundro (2008), Goertzel & Pitt (2012), Yampolskiy (2012), and some Less Wrong posts: Muehlhauser (2012), Yudkowsky (2012), etc. See here for a list of desired strategic analyses (among other desired articles).

  3. Assist with (1) or (2), above. This is what SI's "remote researchers" tend to do, along with many SI volunteers. Often, there are "chunks" of research that can be broken off and handed to people who are not an article's core authors, e.g. "Please track down many examples of the 'wrong wish' trope so I can use a vivid example in my paper" or "Please review and summarize the part of the machine ethics literature that has to do with learning preferences from examples."

  4. Provide resources and platforms that make it easier for researchers to contribute to strategy work. Things like my AI risk bibliography and list of forthcoming and desired articles on AI risk make it easier for researchers to find relevant work, and to know what projects would be helpful to take on. SI's public BibTeX file and Mendeley group make it easier for researchers to find relevant papers. The AGI conference, and volumes like Singularity Hypotheses, provide publishing venues for researchers in this fledgling field. Recent improvements to the Less Wrong wiki will hopefully make it easier for researchers to understand the (relatively new) concepts relevant to AI x-risk strategy work. A scholarly AI risk wiki would be even better. It would also help to find editors of prestigious journals who are open to publishing well-written AGI risk papers, so that university researchers can publish on these topics without hurting their chances to get tenure.

  5. Fund a person or organization doing any of the above. Again, the most obvious choices are the Singularity Institute and the Future of Humanity Institute. Most of the articles and "resources" above were produced by either SI or FHI. SI offers more opportunities for (3). The AGI conference is organized by Ben Goertzel and others, who are of course always looking for sponsors.


Direct work for AI x-risk reduction

We are still at an early stage in doing strategy work on AI x-risk reduction. Because of this, most researchers in the field feel pretty uncertain about which interventions would be most helpful for reducing AI x-risk. Thus, they focus on strategic research, so they can purchase more confidence about which interventions would be helpful.

Despite this uncertainty, I'll list some interventions that at least some people have proposed for mitigating AI x-risk, focusing on the interventions that are actionable today.

Besides engaging in these interventions directly, one may of course help to fund them. I don't currently know of a group pushing for AGI development regulations, or for banning AGI development. You could accelerate AGI by investing in AGI-related companies, or you could accelerate AGI safety research (and AI boxing research) relative to AGI capabilities research by funding SI or FHI, who also probably do the most AI safety promotion work. You could fund research on moral enhancement or cognitive enhancement by offering grants for such research. Or, if you think "low-tech" cognitive enhancement is promising, you could fund organizations like Lumosity (brain training) or the Center for Applied Rationality (rationality training).

 

Conclusion

This is a brief guide to what you can do to reduce existential risk from AI. A longer guide could describe the available interventions in more detail, and present the arguments for and against each one. But that is "strategic work," and requires lots of time (and therefore money) to produce.

My thanks to Michael Curzi for inspiring this post.

92 comments

Comments sorted by top scores.

comment by Alex_Altair · 2012-11-11T23:32:28.916Z · LW(p) · GW(p)

Thanks for putting all this stuff in one place!

It makes me kind of sad that we still have more or less no answer to so many big, important questions. Does anyone else share this worry?

Replies from: lukeprog, None
comment by lukeprog · 2012-11-12T01:10:00.878Z · LW(p) · GW(p)

Notice also that until now, we didn't even have a summary of the kind in this post! So yeah, we're still at an early stage of strategic work, which is why SI and FHI are spending so much time on strategic work.

I'll note, however, that I expect significant strategic insights to come from the technical work (e.g. FAI math). Such work will give us insight into how hard the problems actually are, what architectures look most promising, to what degree the technical work can be outsourced to the mainstream academic community, and so on.

Replies from: Alex_Altair
comment by Alex_Altair · 2012-11-12T06:54:16.790Z · LW(p) · GW(p)

I expect significant strategic insights to come from the technical work (e.g. FAI math).

Interesting point. I'm worried that, while FAI math will help us understand what is dangerous or outsourceable from our particular path, many, many other paths to AGI are possible, and we won't learn from FAI math which of those other paths are dangerous or likely.

I feel like one clear winning strategy is safety promotion. It seems that almost no bad can come from promoting safety ideas among AI researchers and investors. It also seems relatively easy, in that it requires only regular human skills of networking, persuasion, et cetera.

Replies from: roystgnr, lukeprog, timtyler
comment by roystgnr · 2012-11-13T23:35:58.914Z · LW(p) · GW(p)

You're probably right about safety promotion, but calling it "clear" may be an overstatement. A possible counterargument:

Existing AI researchers are likely predisposed to think that their AGI is likely to naturally be both safe and powerful. If they are exposed to arguments that it will instead naturally be both dangerous and very powerful (the latter half of the argument can't be easily omitted; the potential danger is in part because of the high potential power), would it not be a natural result of confirmation bias for the preconception-contradicting "dangerous" half of the argument to be disbelieved and the preconception-confirming "very powerful" half of the argument to be believed?

Half of the AI researcher interviews posted to LessWrong appear to be with people who believe that "Garbage In, Garbage Out" only applies to arithmetic, not to morality. If the end result of persuasion is that as many as half of them have that mistake corrected while the remainder are merely convinced that they should work even harder, that may not be a net win.

Replies from: danieldewey, Alex_Altair, None, MugaSofer
comment by danieldewey · 2012-11-16T09:48:06.825Z · LW(p) · GW(p)

believe that "Garbage In, Garbage Out" only applies to arithmetic, not to morality

Catchy! Mind if I steal a derivative of this?

Replies from: roystgnr
comment by roystgnr · 2012-11-19T23:22:31.256Z · LW(p) · GW(p)

I've lost all disrespect for the "stealing" of generic ideas, and roughly 25% of the intended purpose of my personal quotes files is so that I can "rob everyone blind" if I ever try writing fiction again. Any aphorisms I come up with myself are free to be folded, spindled, and mutilated. I try to cite originators when format and poor memory permit, and receiving the same favor would be nice, but I certainly wouldn't mind seeing my ideas spread completely unattributed either.

Replies from: army1987, danieldewey
comment by A1987dM (army1987) · 2012-11-20T18:29:43.437Z · LW(p) · GW(p)

I've lost all disrespect for the "stealing" of generic ideas

Relevant TED talk

comment by danieldewey · 2012-11-20T11:58:01.879Z · LW(p) · GW(p)

Noted; thanks.

comment by Alex_Altair · 2012-11-14T00:01:38.369Z · LW(p) · GW(p)

Yeah, quite possibly. But I wouldn't want people to run into analysis paralysis; I still think safety promotion is very likely to be a great way to reduce x-risk.

comment by [deleted] · 2012-11-20T18:38:18.786Z · LW(p) · GW(p)

Half of the AI researcher interviews posted to LessWrong appear to be with people who believe that "Garbage In, Garbage Out" only applies to arithmetic, not to morality.

Does 'garbage in, garbage out' apply to morality, or not?

comment by MugaSofer · 2012-11-16T13:39:15.716Z · LW(p) · GW(p)

Upvoted for the "Garbage in, Garbage Out" line.

comment by lukeprog · 2012-11-12T11:52:37.261Z · LW(p) · GW(p)

Somehow I managed not to list AI safety promotion in the original draft! Added now.

comment by timtyler · 2012-11-17T18:13:32.396Z · LW(p) · GW(p)

Looking at many existing risky technologies, consumers and governments are the safety regulators, and manufacturers mostly cater to their demands. Consider the automobile industry, the aeronautical industry and the computer industry for examples.

Replies from: adamisom, Alex_Altair
comment by adamisom · 2012-11-17T21:40:49.153Z · LW(p) · GW(p)

Unfortunately, AGI isn't a "risky technology" where "mostly" is going to cut it in any sense, including adhering to expectations for safety regulation.

Replies from: timtyler
comment by timtyler · 2012-11-17T22:54:09.443Z · LW(p) · GW(p)

All the more reason to use resources effectively. Relatively few safety campaigns have attempted to influence manufacturers. What you tend to see instead are F.U.D. campaigns and negative marketing - where organisations attempt to smear their competitors by spreading negative rumours about their products. For example, here is Apple's negative marketing machine at work.

comment by Alex_Altair · 2012-11-17T21:36:57.056Z · LW(p) · GW(p)

Are you suggesting that we encourage consumers to have safety demands? I'm not sure this will work. It's possible that consumers are too reactionary for this to be helpful. Also, I think AI projects will be dangerous before reaching the consumer level. We want AGI researchers to think about safety before they even develop theory.

Replies from: timtyler
comment by timtyler · 2012-11-17T22:37:58.710Z · LW(p) · GW(p)

It isn't clear that influencing consumer awareness of safety issues would have much effect. However, it suggests that influencing the designers may not be very effective - they are often just giving users the safety level they are prepared to pay for.

comment by [deleted] · 2012-11-11T23:53:37.771Z · LW(p) · GW(p)

Yes! It's crucial to have those questions in one place, so that people can locate them and start finding answers.

comment by OnTheOtherHandle · 2012-11-14T04:08:29.756Z · LW(p) · GW(p)

I’m having trouble deciding where I should target my altruism. I’m basically just starting out in life, and I recognize that this is a really valuable opportunity that I won’t get in the future – I have no responsibilities and no sunk costs yet. For a long time, I’ve been looking into the idea of efficient charity, and I’ve been surfing places like 80K Hours and Give Well. But I get the feeling that I might be too emotionally invested in the idea of donating to, say, disease prevention or inoculation or de-worming (found to be among the most efficient conventional charities) over, say, Friendly AI.

I think that given my skills, personality, etc, there are some good and bad reasons to go into either existential risk mitigation or health interventions, but it’s not like the balance is going to be exactly even. I need some help figuring out what to do – I suppose I could work out some “combination,” but it’s usually not a good idea to hedge your bets in this way because you aren’t making maximum impact.

Direct reasons for health interventions:

  • Lots of good data, an actual dollar amount per life has been calculated; low risk of failure, whereas despite everything I’ve read, I’m really not sure about where we stand with x-risk or what to do about it or how to calculate if we’ve reduced it

  • Easy to switch from charity to charity and intervention to intervention as new evidence rolls in, whereas x-risk requires some long-term commitments to long-range projects, with only one institution there

  • I would most likely be best off giving money, which I can confidently say I’ll be able to generate; it’s hard to be unclear about whether my actions are making an impact, whereas with x-risk I don't know how much good my actions are creating or whether they're even helping at all

  • Doesn’t require me to have very many more skills than I currently do in order to estimate accurately the costs and benefits

  • It saves lives immediately and continuously, which will be good for motivation and the probability that I will stick it through and actually be altruistic; I also feel a stronger emotional connection to people who are here today rather than potential future humans, although I don’t know if that’s a mistake or a value

Selfish reasons for health interventions:

  • It would make me look virtuous and self-sacrificing when I’m really not hurt very much because there aren’t that many material goods I enjoy

  • Would make me look far less weird and embarrassing at dinner parties

Direct reasons for X-risk:

  • It is by far the highest-stakes problem we have; with health interventions I would be saving lives one by one, in the dozens or the hundreds, whereas if I actually make some sort of marginal impact on x-risk, that could translate to many, many more lives

  • Helping to reduce x-risk by helping to bring about Friendly AI would help us increase the probability of all sorts of good things, like the end to all disease and stabilization of the economy, whereas the positive externalities of health intervention are not as dramatic

Selfish reasons for X-risk:

  • I get to feel like I’m on an epic adventure to save the world like all my favorite SF/F heroes

  • It sounds like it would be a pretty awesomely fun challenge and I'd work with cool people

My personal situation: I’m a senior in high school; I’ve read up quite a bit on both developing world health interventions and x-risk. My programming is abysmal, but I can improve quickly because I like it; I’m hoping to go into CS. I’m not the most highly motivated of people – to be quite honest, I’m a procrastinator and would make less of a marginal impact working for SIAI than some other readers of LW would. (That’s where I stand now, but I want to improve my productivity.)

I would do well in a low-stress, medium high pay job through which I could donate money. I harbor the dream of having a really “cool” job, but it’s not necessarily a priority – I can trade awesomeness and challenge for security and ease, as long as my donated money goes to the most useful cause.

I don’t know what I should do. Low risk, low impact/High risk, high impact/Some specific combination? Any variables I'm missing here? I'd love some data that would make the choice more clear.

Replies from: lukeprog, Mitchell_Porter
comment by lukeprog · 2012-11-15T01:17:18.758Z · LW(p) · GW(p)

Sounds like you should ask for a call with the philanthropic career experts at 80,000 Hours if you haven't already.

comment by Mitchell_Porter · 2012-11-16T16:38:21.138Z · LW(p) · GW(p)

I suggest that you practice thinking of yourself as a future member of the posthuman ruling class. In this century, the planet faces the pressures of 9 or 10 billion people trying to raise their standards of living, amid ecological changes like global warming, and at the same time biotechnology, neurotechnology, and artificial intelligence will come tumbling out of Pandora's box. The challenges of the future are about posthuman people living in a postnatural world, and many of the categories which inform current thinking about "efficient charity" and "existential risk" are liable to become obsolete.

comment by RyanCarey · 2012-11-12T07:26:34.277Z · LW(p) · GW(p)

I was reading through these publications one by one, thinking that there must be a quick way to download all pdf links from a page at once, and it turns out there is
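(For anyone wanting to replicate this without hunting for a tool: a minimal Python sketch, assuming the requests and beautifulsoup4 packages and a placeholder publications-page URL, might look like the following.)

```python
# Minimal sketch (assumes `requests` and `beautifulsoup4` are installed, and uses a
# hypothetical publications page URL): download every PDF linked from one page.
import os
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def download_pdfs(page_url, out_dir="pdfs"):
    os.makedirs(out_dir, exist_ok=True)
    html = requests.get(page_url).text
    soup = BeautifulSoup(html, "html.parser")
    for a in soup.find_all("a", href=True):
        href = urljoin(page_url, a["href"])  # resolve relative links
        if href.lower().endswith(".pdf"):
            filename = os.path.join(out_dir, href.rsplit("/", 1)[-1])
            with open(filename, "wb") as f:
                f.write(requests.get(href).content)

download_pdfs("http://example.org/ai-risk-publications")  # placeholder URL
```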

comment by incariol · 2012-11-12T19:42:41.811Z · LW(p) · GW(p)

When someone proposes what we should do, where by "we" he implicitly refers to a large group of people he has no real influence over (as in the banning AGI & hardware development proposal), I wonder what the value of this kind of speculation is - other than amusing oneself with a picture of "what would this button do" on a simulation of Earth under one's hands.

As I see it, there's no point in thinking about these kinds of "large scale" interventions that are closely interwoven with politics. Better to focus on what relatively small groups of people can do (this includes, e.g. influencing a few other AGI development teams to work on FAI), and in this context, I think our best hope is in deeply understanding the mechanics of intelligence and thus having at least a chance at creating FAI before some team that doesn't care the least about safety dooms us all - and there will be such teams, regardless of what we do today; just take a look at some of the "risks from AI" interviews...

Replies from: ChristianKl, Bruno_Coelho
comment by ChristianKl · 2012-11-16T17:02:08.624Z · LW(p) · GW(p)

When someone proposes what we should do, where by we he implicitly refers to a large group of people he has no real influence over

I'm not sure whether that's true. Government officials who are tasked with researching future trends might read the article. Just because you yourself have no influence on politics doesn't mean that the same is true for everyone who reads the article.

Even if you think that at the moment nobody with political power reads LessWrong, it's valuable to signal status. If you want to convince a billionaire to fund your project, it might be beneficial to speak about options that require a high amount of resources to pull off.

comment by Bruno_Coelho · 2012-11-13T01:17:50.903Z · LW(p) · GW(p)

In the early stages it is not easy to focus directly on organization X or Y, mostly because a good number of researchers are working on projects that could end in an AGI expert in numerous specific domains. Furthermore, large-scale coordination is important too, even if not a top priority. Slowing down one project or funding another is a guided intervention that could buy some time while the technical problems remain unsolved.

comment by Pablo (Pablo_Stafforini) · 2012-11-12T14:25:59.330Z · LW(p) · GW(p)

Suppose you think that reducing the risk of human extinction is the highest-value thing you can do. Or maybe you want to reduce "x-risk" because you're already a comfortable First-Worlder like me and so you might as well do something epic and cool, or because you like the community of people who are doing it already, or whatever.

I think this post is great: important, informative, concise, and well-referenced. However, my impression is that the opening paragraph trivializes the topic. If you were listing the things we could do to reduce or eliminate global poverty, would you preface your article by saying that "reducing global poverty is cool"? You probably wouldn't. Then why write that kind of preface when the subject is existential risk reduction, which is even more important?

Replies from: ciphergoth, Dorikka, army1987
comment by Paul Crowley (ciphergoth) · 2012-11-15T14:59:07.455Z · LW(p) · GW(p)

I took that as anticipating a counter of "Hah, you think your donors really believe in your cause, when really loads of them are just trying to be cool!" - "That's fine, I've noticed their money works just as well."

comment by Dorikka · 2012-11-13T18:51:55.570Z · LW(p) · GW(p)

Hm. It's possible that I don't have a good model of people with things like this, but it seems likely that at least some of the people contributing to x-risk reduction might do it for one of these reasons, and this paragraph is making it abundantly clear that the author isn't going to be a jerk about people not supporting his cause for the right reasons. I liked it.

comment by A1987dM (army1987) · 2012-11-13T12:50:53.824Z · LW(p) · GW(p)

I took that to be slightly tongue-in-cheek.

comment by heath_rezabek · 2012-11-12T18:08:10.814Z · LW(p) · GW(p)

Greetings. New to LessWrong, but particularly compelled by the discussion of existential risk.

It seems like one of the priorities would be to ease the path for people, once they're aware of existential risk, to move swiftly from doing meta work to doing strategy work and direct work. For myself, once I'd become aware of existential risk as a whole, it became an attractor for a whole range of prior ideas and I had to find a way towards direct work as soon as possible. That's easier said than done.

Yet it seems like the shortest path would be to catalyse prosperous industry around addressing the topic. With Bostrom's newer classification scheme, and the inclusion of outcomes such as Permanent Stagnation and Flawed Realization, the problem space is opened wider than if we were forced to deal with a simple laundry list of extinction events.

So: What of accelerating startups, hiring, career paths, and industry around minimizing Permanent Stagnation and Flawed Realization as existential risk subtypes, always with existential risk in mind? I've started an IdeaScale (additions welcomed) along these lines. That is, what activities could accelerate the growth of options for those seeking ways to pour their energy into a livelihood spent mitigating existential risk?

http://vesselcodex.ideascale.com

(The title is after my own work regarding the topic, which has to do with long-term archival and preservation of human potential. I presented this proposal for what I call Vessel Archives at the 100 Year Starship Symposium in September 2012. http://goo.gl/X4Fr9 - Though this is quite secondary to the pressing question of accelerating and incubating ER-reducing livelihood as above.)

Replies from: beoShaffer
comment by beoShaffer · 2012-11-13T00:40:52.741Z · LW(p) · GW(p)

Hi heath_rezabek, welcome to less wrong.

comment by John_Maxwell (John_Maxwell_IV) · 2012-11-12T04:56:59.106Z · LW(p) · GW(p)

Your "caged AGIs" link goes to the SI volunteer site.

Replies from: Curiouskid, lukeprog
comment by Curiouskid · 2012-11-18T04:13:05.150Z · LW(p) · GW(p)

I wish I had an AGI volunteer.

comment by lukeprog · 2012-11-12T05:28:07.269Z · LW(p) · GW(p)

Fixed.

comment by John_Maxwell (John_Maxwell_IV) · 2012-11-12T04:55:18.709Z · LW(p) · GW(p)

Note that if you mostly contribute to meta work, you want to also donate a small sum (say, $15/mo) to strategy work or direct work.

I suspect the mechanism underlying this is a near/far glitch that's also responsible for procrastination. I've had the experience of putting important but deadline-free stuff off for months, sincerely believing that I would get to it at some point (no, I wasn't short on free time...)

It takes time to develop the skill of noticing that now is a good time to do that thing you've been telling yourself to do, and then actually biting the bullet and doing it.

comment by Benya (Benja) · 2012-11-11T23:48:37.632Z · LW(p) · GW(p)

I don't currently know of a group pushing (...) for banning AGI development. You could accelerate AGI by investing in AGI-related companies (...)

This is not meant as a criticism of the post, but it seems like we should be able to do better than having some of us give money to groups pushing for banning AGI development, and others invest in AGI-related companies to accelerate AGI, especially if both of these are altruists with a reasonably similar prior aiming to reduce existential risk...

(Both giving to strategic research instead seems like a reasonable alternative.)

Replies from: lukeprog
comment by lukeprog · 2012-11-12T01:18:51.554Z · LW(p) · GW(p)

Right... it's a bit like in 2004 when my friend insisted that we both waste many hours to go vote on the presidential election, even though we both knew we were voting for opposite candidates. It would have been wiser for us both to stay home and donate to something we both supported (e.g. campaign finance reform), in whatever amount reflected the value of the time we actually spent voting.

I should note that investing in an AGI company while also investing in AGI safety research need not be as contradictory as it sounds, if you can use your investment in the AGI company to bias its development work toward safety, as Legg once suggested. In fact, I know at least three individuals (that I shall not name) who appear to be doing exactly this.

comment by TrickBlack · 2012-11-20T09:00:28.587Z · LW(p) · GW(p)

So I read the title and thought you meant the risk of AI having existential crises... which is an interesting question, when you think about it.

comment by wedrifid · 2012-11-12T00:21:43.053Z · LW(p) · GW(p)

How can I reduce existential risk from AI?

First answer that sprung to my mind: You could work to increase existential risk from other sources. If you make it less likely that an AI will ever be built you reduce the risk of AI. Start work on self-replicating nano-tech or biological weapons. Or even just blow up an important building and make it look like an Arabic speaker did it.

That leads to the second solution: When working on AI (or with genies or problem solvers in general), take care to construct questions that are not lost purposes.

Replies from: army1987
comment by A1987dM (army1987) · 2012-11-12T11:29:37.529Z · LW(p) · GW(p)

First answer that sprung to my mind: You could work to increase existential risk from other sources. If you make it less likely that an AI will ever be built you reduce the risk of AI. Start work on self-replicating nano-tech or biological weapons. Or even just blow up an important building and make it look like an Arabic speaker did it.

The first two paragraphs in the article prevented that answer from springing to my mind.

comment by ialdabaoth · 2012-11-11T23:41:20.091Z · LW(p) · GW(p)

Regarding "making money" / "accumulating wealth": Why is wealth in my hands preferable to wealth in someone else's hands?

Replies from: Pablo_Stafforini, Benja, roystgnr
comment by Pablo (Pablo_Stafforini) · 2012-11-11T23:54:48.295Z · LW(p) · GW(p)

Because it's extremely unlikely that a random person will be at least as concerned with existential risk as you are.

Replies from: ialdabaoth
comment by ialdabaoth · 2012-11-12T01:07:21.114Z · LW(p) · GW(p)

But why is it likely that I'll be better at doing anything about it? Just because I try to be rational, doesn't mean I'm any good at it - especially at something where we have no idea what the correct actions are. How do I know that my efforts will even have a positive dot-product with the "do the right thing" vector?

Replies from: Kaj_Sotala, Giles
comment by Kaj_Sotala · 2012-11-12T10:31:05.525Z · LW(p) · GW(p)

The average person has zero interest in fighting existential risk. It's very easy to do better than average, if the average is zero. Even if you've only spent fifty hours (say) familiarizing yourself with the topic, that's already much better than most.

Replies from: army1987
comment by A1987dM (army1987) · 2012-11-12T11:32:02.096Z · LW(p) · GW(p)

The average person has zero interest in fighting existential risk.

This strikes me as, ahem, an inflationary use of the term zero. Try negligible instead. :-)

EDIT: Turns out it was an inflationary use of the term average instead. :-) Retracted.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2012-11-12T12:16:27.891Z · LW(p) · GW(p)

Well, if we measure interest by what they actually do, then I hold by the "zero".

Replies from: army1987
comment by A1987dM (army1987) · 2012-11-12T12:57:30.060Z · LW(p) · GW(p)

EY's interest in fighting existential risks is strictly greater than 0 as far as I can tell; is someone else cancelling that out in the average? (Or by average did you actually mean 'median'?) The number of arms the average human has is strictly less than 2.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2012-11-12T14:18:26.878Z · LW(p) · GW(p)

I meant median.

comment by Giles · 2012-11-12T01:57:10.070Z · LW(p) · GW(p)

Do you feel the odds improve if you choose "become stronger" first?

Replies from: ialdabaoth
comment by ialdabaoth · 2012-11-12T02:31:51.597Z · LW(p) · GW(p)

No, because I am perpetually trying to "become stronger", and I still have yet to solve the problem that I'm just "trying" (due to an infinite regress of "trying to try to try to try...")

Replies from: Giles
comment by Giles · 2012-11-12T02:52:38.134Z · LW(p) · GW(p)

What kinds of approach have you tried, and which have resulted in any progress? (at least according to their own metric - e.g. you make progress in "becoming better at math" if you become better at math, even if you're not sure this has improved your general rationality)

Replies from: ialdabaoth
comment by ialdabaoth · 2012-11-12T04:26:44.713Z · LW(p) · GW(p)

Well, I've managed to rid myself of most of the cognitive biases clustered around the self-serving bias, by deliberately training myself to second-guess anything that would make me feel comfortable or feel good about myself. This one worked pretty well; I tend to be much less susceptible to 'delusional optimism' than most people, and I'm generally the voice of rational dissent in any group I belong to.

I've tried to make myself more 'programmable' - developed techniques for quickly adjusting my "gut feelings" / subconscious heuristics towards whatever ends I anticipated would be useful. This worked out really well in my 20s, but after a while I wound up introducing a lot of bugs into it. As it turns out, when you start using meditation and occult techniques to hack your brain, it's really easy to lock yourself into a stable attractor where you no longer have the capacity to perform further useful programming. Oops.

Replies from: Giles
comment by Giles · 2012-11-12T05:24:29.343Z · LW(p) · GW(p)

I'm not sure if this advice will be useful to you, but I think what I'd do in this situation would be to stick to standard techniques and avoid the brain hacking, at least for now. "Standard techniques" might be things like learning particular skills, or asking your friends to be on the lookout for weird behavior from you.

One other thing - though I may have misunderstood you here - you say you've trained yourself not to feel good about yourself in certain circumstances. To compensate, have you trained yourself to feel better about yourself in other circumstances? I'd guess there's an optimal overall level of feeling good about yourself, and our natural overall level is probably about right (but not necessarily right about the specifics of what to feel good about).

Replies from: ialdabaoth
comment by ialdabaoth · 2012-11-12T05:48:05.217Z · LW(p) · GW(p)

No, in general I've trained myself to operate, as much as possible, with an incredibly lean dopamine mixture. To hell with feeling good; I want to be able to push on no matter how bad I feel.

(As it turns out, I have limits, but I've mostly trained to push through those limits through shame and willpower rather than through reward mechanisms, to the point that reward mechanisms generally don't even really work on me anymore - at least, not to the level other people expect them to).

A lot of this was a direct decision, at a very young age, to never exploit primate dominance rituals or competitive zero-sum exchanges to get ahead. It's been horrific, but... the best metaphor I can give is from a story called "The Ones Who Walk Away from Omelas".

Essentially, you have a utopia that is powered by the horrific torture and suffering of a single innocent child. At a certain age, everyone in the culture is told how the utopia works, and given two choices: commit fully to making the utopia worth the cost of that kid's suffering, or walk away from utopia and brave the harsh world outside.

I tried to take a third path, and say "fuck it. Let the kid go and strap me in."

So in a sense, I suppose I tried to replace normal feel-good routines for a sort of smug moral superiority, but then I trained myself to see my own behavior as smug moral superiority so I wouldn't feel good about it. So, yeah.

Replies from: Giles, Strange7
comment by Giles · 2012-11-12T06:13:38.227Z · LW(p) · GW(p)

Are you sure this is optimal? You seem to have goals but have thrown away three potentially useful tools: reward mechanisms, primate dominance rituals and zero-sum competitions. Obviously you've gained grit.

Replies from: ialdabaoth
comment by ialdabaoth · 2012-11-12T06:20:26.855Z · LW(p) · GW(p)

Optimal by what criteria? And what right do I have to assign criteria for 'optimal'? I have neither power nor charisma; criteria are chosen by those with the power to enforce an agenda.

Replies from: Kaj_Sotala, Giles
comment by Kaj_Sotala · 2012-11-12T11:20:34.997Z · LW(p) · GW(p)

By the same right that you assign criteria according to which primate dominance rituals or competitive zero-sum exchanges are bad.

comment by Giles · 2012-11-12T14:12:53.623Z · LW(p) · GW(p)

Some people might value occupying a particular mental state for its own sake, but that wasn't what I was talking about here. I was talking purely instrumentally - your interest in existential risk suggests you have goals or long term preferences about the world (although I understand that I may have got this wrong), and I was contemplating what might help you achieve those and what might stand in your way.

Just to clarify - is it my assessment of you as an aspiring utility maximizer that I'm wrong about, or am I right about that but wrong about something at the strategic level? (Or fundamentally misunderstanding your preferences)

comment by Strange7 · 2012-11-13T22:10:36.092Z · LW(p) · GW(p)

Problem being that Omelas doesn't just require that /somebody/ be suffering; if it did, they'd probably take turns or something. It's some quality of that one kid.

Replies from: ialdabaoth
comment by ialdabaoth · 2012-11-13T22:25:54.426Z · LW(p) · GW(p)

Which is part of where the metaphor breaks down. In our world, our relative prosperity and status doesn't require that some specific, dehumanized 'Other' be exploited to maintain our own privilege - it merely requires that someone be identified as 'Other', that some kind of class distinction be created, and then natural human instincts take over and ensure that marginal power differentials are amplified into a horrific and hypocritical class structure. (Sure, it's a lot better than it was before, but that doesn't make it "good" by any stretch of the imagination).

I have no interest in earning money by exploiting the emotional tendencies of those less intelligent than me, so Ialdabaoth-sub-1990 drew a hard line around jobs (or tasks or alliances or friendships) that aid people who do such things.

More generally, Brent-sub-1981 came up with a devastating heuristic: "any time I experience a social situation where humans are cruel to me, I will perform a detailed analysis of the thought processes and behaviors that led to that social situation, and I will exclude myself from performing those processes and behaviors, even if they are advantageous to me."

It's the kernel to my "code of honor", and at this point it's virtually non-negotiable.

It is not, however, particularly good at "winning".

comment by Benya (Benja) · 2012-11-11T23:50:10.570Z · LW(p) · GW(p)

It isn't... if those other hands would give at least as much of it to x-risk reduction as you will.

comment by roystgnr · 2012-11-13T23:16:23.102Z · LW(p) · GW(p)

Even if benthamite's excellent answer wasn't true, and every random person was at least as concerned with existential risk as you are, it would still be useful for you to accumulate wealth. The economy is not a zero-sum game; producing wealth for yourself does not necessitate reducing the wealth in the hands of others.

comment by A1987dM (army1987) · 2012-11-12T11:27:26.836Z · LW(p) · GW(p)

Upvoted for the large number of interesting links.

comment by negamuhia · 2012-11-13T08:59:54.454Z · LW(p) · GW(p)

I realize (and I'm probably not alone in this) that I've been implicitly using this {meta-work, strategy-work, direct-work} process to try and figure out where/how to contribute. Thanks for this guide/analysis.

comment by gjm · 2012-11-12T22:37:58.885Z · LW(p) · GW(p)

I conjecture that one of the "community of people" links was meant to go somewhere other than where it currently does. (SIAI?)

Replies from: lukeprog
comment by lukeprog · 2012-11-14T01:37:27.668Z · LW(p) · GW(p)

Fixed.

comment by janusdaniels · 2012-11-18T21:29:01.618Z · LW(p) · GW(p)

The investment control software of large financial companies seems the most likely source of rogue AI.

Charlie Stross, in his 2005 novel Accelerando, implicitly suggested "financial device" AI as the most likely seed for rogue AI. Last year, David Brin independently and explicitly promoted the possibility. The idea seems more likely today.

With a multi-million dollar salary from an evil bank, you can contribute to species survival.
You're welcome. And I'm dead serious.

Apparently, Brin first wrote a full outline of the idea here: http://ieet.org/index.php/IEET/more/brin20111217 More recently: http://www.newscientist.com/article/mg21528735.700-artificial-intelligence-to-sniff-out-bankers-scams.html The New Scientist link is from Brin's (excellent) blog, http://davidbrin.blogspot.com/2012/07/bulletins-from-transparency-front.html?showComment=1342734425933#c5447006304917530539

comment by John_Maxwell (John_Maxwell_IV) · 2012-11-12T05:00:12.835Z · LW(p) · GW(p)

Lumosity panned in The Guardian: http://www.guardian.co.uk/science/2009/feb/26/brain-training-games-which

Replies from: lukeprog
comment by lukeprog · 2012-11-12T05:27:13.318Z · LW(p) · GW(p)

Yeah, I don't think much of Lumosity either. Can somebody suggest an alternative link?

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2012-11-12T08:06:19.141Z · LW(p) · GW(p)

Maybe this? I haven't looked in to it any.

comment by G0W51 · 2015-05-19T22:39:39.411Z · LW(p) · GW(p)

I know discussing politics on LW is discouraged, but is voting in elections a viable method of decreasing existential risk by making it more likely that those who are elected will take more action to decrease it? If so, what parties should be voted for? If this isn't something that should be discussed on LW, just say so and I can make a reddit post on it.

Replies from: ChristianKl, Lumifer
comment by ChristianKl · 2015-05-19T22:56:27.525Z · LW(p) · GW(p)

There are already a few good posts about voting on LW; http://lesswrong.com/lw/fao/voting_is_like_donating_thousands_of_dollars_to/ comes to mind, but there are more you can find when you search.

As far as existential risk goes, I however don't know whether we have good information about which mainstream (=electable) candidate will decrease risk.

If this isn't something that should be discussed on LW, just say so and I can make a reddit post on it.

You could also try http://www.omnilibrium.com/index.php which is sourced with people from LW.

Replies from: G0W51
comment by G0W51 · 2015-05-24T01:44:56.083Z · LW(p) · GW(p)

Remember that there may still be some value in voting for candidates who aren't mainstream.

Replies from: ChristianKl
comment by ChristianKl · 2015-05-24T13:10:22.027Z · LW(p) · GW(p)

If your goal is to raise awareness of an issue in most cases writing a well argued article is going to do much more than giving your vote to a candidate who isn't mainstream.

Replies from: G0W51
comment by G0W51 · 2015-05-25T02:29:21.082Z · LW(p) · GW(p)

There are already are well-argued articles, I'm not sure how useful more articles would be. Perhaps a more accessible version of Existential Risk as a Global Priority would be useful, though.

comment by Lumifer · 2015-05-19T23:52:34.260Z · LW(p) · GW(p)

is voting in elections a viable method of decreasing existential risk

No.

Replies from: G0W51
comment by G0W51 · 2015-05-24T01:45:45.727Z · LW(p) · GW(p)

Why not? I imagine that different political parties have different views on what the government should do about existential risk and voting for the ones that are potentially more willing to decrease it would be beneficial. Currently, it seems like most parties don't concern themselves at all with existential risk, but perhaps this will change once strong AI becomes less far off.

Replies from: Lumifer, ChristianKl
comment by Lumifer · 2015-05-27T16:47:35.960Z · LW(p) · GW(p)

I imagine that different political parties have different views on what the government should do about existential risk

Actually, no, I don't think it is true. I suspect that at the moment the views of all political parties on existential risk are somewhere between "WTF is that?" and "Can I use it to influence my voters?"

That may (or may not) eventually change, but at the moment the answer is a clear "No".

Replies from: G0W51, G0W51
comment by G0W51 · 2015-10-09T05:59:31.290Z · LW(p) · GW(p)

Some parties may be more likely to accelerate scientific progress than others, and those who do could decrease existential risk by decreasing the time spent in high-risk states, for example the period when there are dangerous nano-technological weapons but other astronomical objects have not been colonized. This probably is not enough to justify voting, but I thought I would just let you know.

comment by G0W51 · 2015-05-30T22:48:52.785Z · LW(p) · GW(p)

Noted. I'll invest my efforts on x-risk reduction into something other than voting.

comment by ChristianKl · 2015-05-24T13:17:49.863Z · LW(p) · GW(p)

I imagine that different political parties have different views on what the government should do about existential risk

Do you? I think most politicians would ask "What do you mean by 'existential risk'?" if you asked them about it.

Replies from: G0W51
comment by G0W51 · 2015-05-25T02:22:39.583Z · LW(p) · GW(p)

Yeah, I suppose you're right. Still, once something that could pose a large existential risk comes into existence or looks like it will soon come into existence, wouldn't politicians then consider existential risk reduction? For example, once a group is on the verge of developing AGI, wouldn't the government think about what to do about it? Or would they still ignore it? Would the responses of different parties vary?

You could definitely be correct, though; I'm not knowledgeable about politics.

Replies from: ChristianKl
comment by ChristianKl · 2015-05-25T12:51:21.640Z · LW(p) · GW(p)

Politics is a people sport. Depending on who creates the policy of the party in the time the topic comes up, the results can come out very differently.

comment by 3p1cd3m0n · 2014-12-01T00:14:01.574Z · LW(p) · GW(p)

How important is trying to personally live longer for decreasing existential risk? IMO, it seems that most risk of existential catastrophes occurs sooner rather than later, so I doubt living much longer is extremely important. For example, Wikipedia says that a study at the Singularity Summit found that the median date for the singularity occurring is 2040, and one person gave an 80% confidence interval of 5-100 years. Nanotechnology seems to be predicted to come sooner rather than later as well. What does everyone else think?

Replies from: hawkice
comment by hawkice · 2014-12-01T01:56:33.852Z · LW(p) · GW(p)

I'm having trouble imagining how risk would ever go down, sans entering a machine-run totalitarian state, so I clearly don't have the same assessment of bad things happening "sooner rather than later". I can't imagine a single dangerous activity that is harder or less dangerous now than it was in the past, and I suspect this will continue. The only things that will happen sooner rather than later are establishing stable and safe equilibria (like post-Cold War nuclear politics). If me personally being alive meaningfully affects an equilibrium (implicit or explicit), then humanity is quite completely screwed.

Replies from: 3p1cd3m0n
comment by 3p1cd3m0n · 2014-12-04T01:45:22.146Z · LW(p) · GW(p)

For one, Yudkowsky in Artificial Intelligence as a Positive and Negative Factor in Global Risk says that artificial general intelligence could potentially use its super-intelligence to decrease existential risk in ways we haven't thought of. Additionally, I suspect (though I am rather uninformed on the topic) that Earth-originating life will be much less vulnerable once it spreads away from Earth, as I think many catastrophes would be local to a single planet. I suspect catastrophes from nanotechnology would be one such case.

comment by 3p1cd3m0n · 2014-11-20T03:31:22.241Z · LW(p) · GW(p)

Are there any decent arguments saying that working on trying to develop safe AGI would increase existential risk? I've found none, but I'd like to know because I'm considering developing AGI as a career.

Edit: What about AI that's not AGI?

Replies from: ike
comment by ike · 2014-11-20T14:39:48.113Z · LW(p) · GW(p)

http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/

Replies from: 3p1cd3m0n
comment by 3p1cd3m0n · 2014-11-21T02:30:03.914Z · LW(p) · GW(p)

Thanks. That really helps. Do you know of any decent arguments suggesting that working on trying to develop safe tool AI (or some other non-AGI AI) would increase existential risk?

comment by [deleted] · 2012-11-12T01:10:25.590Z · LW(p) · GW(p)

If you only contribute to meta work for a while, the outside view (around SI, anyway) suggests there's a good chance you'll forget to ever do anything non-meta.

Keeping a to-do list may be a cheaper way of keeping yourself from forgetting.

Replies from: lukeprog
comment by lukeprog · 2012-11-12T02:10:58.411Z · LW(p) · GW(p)

I don't think it's actually a problem of "forgetting"; I should probably clarify that language. It's more about habit formation. If one takes up the habit of doing no direct work day after day, it may be difficult to break that habit later.

Replies from: None
comment by [deleted] · 2012-11-12T03:37:56.843Z · LW(p) · GW(p)

Yes, good point.