Other Existential Risks

post by multifoliaterose · 2010-08-17T21:24:51.520Z · LW · GW · Legacy · 124 comments

Contents

  Remarks on arguments advanced in favor of focusing on AI
  On argument by authority
  Bottom line
124 comments

[Added 02/24/14: SIAI (now MIRI) has evolved substantially since 2010 when I wrote this post, and the criticisms made in the post don't apply to MIRI as presently constituted.]

Related To: Should I believe what the SIAI claims?, Existential Risk and Public Relations

In his recent post titled Should I believe what the SIAI claims? XiXiDu wrote:

I'm already unable to judge what the likelihood of something like the existential risk of exponential evolving superhuman AI is compared to us living in a simulated reality. Even if you tell me, am I to believe the data you base those estimations on?

And this is what I'm having trouble to accept, let alone look through. There seems to be a highly complicated framework of estimations to support and reinforce each other. I'm not sure how you call this in English, but in German I'd call this a castle in the air.

[...]

I can however follow much of the reasoning and arguments on this site. But I'm currently unable to judge their overall credence. That is, are the conclusions justified? Is the coherent framework build around the SIAI based on firm ground?

[...]

I'm concerned that although consistently so, the LW community is updating on fictional evidence. This post is meant to inquire the basic principles, the foundation of the sound argumentation's and the basic premises that they are based upon.

XiXiDu's post produced mixed reactions within the LW community. On one hand, some LW members (e.g. orthonormal) felt exasperated with XiXiDu because his post was poorly written, revealed him to be uninformed, and revealed that he has not internalized some of the basic principles of rationality. On the other hand, some LW members (e.g. HughRistik) have long wished that SIAI would attempt to substantiate some of its more controversial claims in detail and were gratified to see somebody call on SIAI to do so. These two categories are not mutually exclusive. I fall into both in some measure. In any case, I give XiXiDu considerable credit for raising such an important topic.

The present post is the first of several posts in which I will detail my thoughts on SIAI's claims.

One difficulty is that there's some ambiguity as to what SIAI's claims are. I encourage SIAI to make a more detailed public statement of their most fundamental claims. According to the SIAI website:

In the coming decades, humanity will likely create a powerful artificial intelligence. The Singularity Institute for Artificial Intelligence (SIAI) exists to confront this urgent challenge, both the opportunity and the risk. Our objectives as an organization are:

  • To ensure the development of friendly Artificial Intelligence, for the benefit of all mankind;
  • To prevent unsafe Artificial Intelligence from causing harm;
  • To encourage rational thought about our future as a species.

I interpret SIAI's key claims to be as follows:

(1) At the margin, the best way for an organization with SIAI's resources to prevent global existential catastrophe is to promote research on friendly Artificial Intelligence, work against unsafe Artificial Intelligence, and encourage rational thought.

(2) Donating to SIAI is the most cost-effective way for charitable donors to reduce existential risk.

I arrived at the belief that SIAI claims (1) by reading their mission statement and by reading SIAI research fellow Eliezer Yudkowsky's writings, in particular the ones listed under the Less Wrong wiki article titled Shut up and multiply. [Edit (09/09/10): The videos of Eliezer linked in a comment by XiXiDu give some evidence that SIAI claims (2). As Airedale says in her second-to-last paragraph here, Eliezer and SIAI are not synonymous entities. The question of whether SIAI regards Eliezer as an official representative of SIAI remains.] I'm quite sure that (1) and (2) are in the rough ballpark of what SIAI claims, but encourage SIAI to publicly confirm or qualify each of (1) and (2) so that we can all have a clearer idea of what SIAI claims.

My impression is that some LW posters are confident in both (1) and (2), some are confident in neither, and others are confident in exactly one of the two. For clarity, I think that it's sensible to discuss claims (1) and (2) separately. In the remainder of the present post, I'll discuss claim (1'), namely, claim (1) modulo the part about the importance of encouraging rational thought. I will address SIAI's emphasis on encouraging rational thought in a later post.


As I have stated repeatedly, unsafe AI is not the only existential risk. The Future of Humanity Institute has a page titled Global Catastrophic Risks which has a list of lectures given at a 2008 conference on a variety of potential global catastrophic risks. Note that a number of these global catastrophic risks are unrelated to future technologies. Any argument in favor of claim (1') must consist of a quantitative comparison of the effects of focusing on Artificial Intelligence and the effects of focusing on other existential risks. To my knowledge, SIAI has not provided a detailed quantitative analysis of the expected impact of AI research, a detailed quantitative analysis of working to avert other existential risks, and a comparison of the two. If SIAI has made such a quantitative analysis, I encourage them to make it public. At present, I believe that SIAI has not substantiated claim (1').

Remarks on arguments advanced in favor of focusing on AI

(A) Some people claim that there's a high probability that runaway superhuman artificial intelligence will be developed in the near future. For example, Eliezer has said that "it seems pretty obvious to me that some point in the not-too-distant future we're going to build an AI [...] it will be a superintelligence relative to us [...] in one to ten decades and probably on the lower side of that."

I believe that if Eliezer is correct about this assertion, claim (1') is true. But I see no reason for assigning high probability to the notion that a runaway superhuman intelligence will be developed within such a short timescale. In the bloggingheads diavlog, Scott Aaronson challenges Eliezer on this point and Eliezer offers some throwaway remarks which I do not find compelling. As far as I know, neither Eliezer nor anybody else at SIAI has provided a detailed explanation for why we should expect runaway superhuman intelligence on such a short timescale. LW poster timtyler pointed me to a webpage where he works out his own estimate of the timescale. I will look at this document eventually, but do not expect to find it compelling, especially in light of Carl Shulman's remarks that the survey used suffers from selection bias. So at present, I do not find (A) a compelling reason to focus on the existential risk of AI.

(B) Some people have remarked that if we develop an FAI, the FAI will greatly reduce all other existential risks which humanity faces. For example, timtyler says

I figure a pretty important thing is to get out of the current vulnerable position as soon as possible. To do that, a major thing we will need is intelligent machines - and so we should allocate resources to their development.

I agree with timtyler that it would be very desirable for us to have an FAI to solve our problems. If all else were equal, this would give special reason to favor focus on AI over existential risks that are not related to Artificial Intelligence. But this factor by itself is not a compelling reason to focus on Artificial Intelligence. In particular, human-level AI may be so far off in the future that if we want to survive, we have to address other existential risks right now without the aid of AI.

(C) An inverse of the view mentioned in (B) is the idea that if we're going to survive over the long haul, we must eventually build an FAI, so we might as well focus on FAI since if we don't get FAI right, we're doomed anyway. This is an aspect of Vladimir_Nesov's position which emerges in the linked threads [1], [2]. I think that there's something to this idea. Of course, research on FAI may come at the opportunity cost of the chance to avert short-term preventable global catastrophic risks. My understanding is that at present Vladimir_Nesov believes that this cost is outweighed by the benefits. By way of contrast, at present I believe that the benefits are outweighed by the cost. See our discussions for details. Vladimir_Nesov's position is sophisticated and I respect it.

(D) Some people have said that existential risk due to advanced technologies is getting disproportionately little attention relative to other existential risks, so that at the margin one should focus on advanced technologies. For example, see Vladimir_Nesov's comment and ciphergoth's comment. I don't find this sort of remark compelling. My own impression is that all existential risks are getting very little attention. I see no reason for thinking that existential risk due to advanced technologies is getting less than its fair share of the attention being directed toward existential risk. As I said in response to ciphergoth:

Are you sure that the marginal contribution that you can make to the issue which is getting the least attention is the greatest? The issues getting the least attention may be getting little attention precisely because people know that there's nothing that can be done about them.

(E) Some people have remarked that most issues raised as potential existential risks (e.g. nuclear war, resource shortage) seem very unlikely to kill everyone and so are not properly conceived of as existential risks. I don't find these sorts of remarks compelling. As I've commented elsewhere, any event which would permanently prevent humans from creating a transhuman paradise is properly conceived of as an existential risk on account of the astronomical waste which would result.


On argument by authority

When XiXiDu raised his questions, Eliezer initially responded by saying:

If you haven't read through the MWI sequence, read it. Then try to talk with your smart friends about it. You will soon learn that your smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong, and you will no longer think it anywhere near as plausible that their differing opinion is because they know some incredible secret knowledge you don't.

I interpret this to be a statement of the type "You should believe SIAI's claims (1) and (2) because we're really smart." There are two problems with such a statement. One is that there's no evidence that intelligence leads to correct views about how to ensure the survival of the human species. Alexander Grothendieck is one of the greatest mathematicians of the 20th century. Fields Medalist Rene Thom wrote:

Relations with my colleague Grothendieck were less agreeable for me. His technical superiority was crushing. His seminar attracted the whole of Parisian mathematics, whereas I had nothing new to offer.

Fields Medalist David Mumford said

[Grothendieck] had more than anybody else I’ve ever met this ability to make an absolutely startling leap into something an order of magnitude more abstract…. He would always look for some way of formulating a problem, stripping apparently everything away from it, so you don’t think anything is left. And yet something is left, and he could find real structure in this seeming vacuum.

In Mariana Cook's book titled Mathematicians: An Outer View of the Inner World, Fields Medalist and IAS professor Pierre Deligne wrote

When I was in Paris as a student, I would go to Grothendieck's seminar at IHES [...] Grothendieck asked me to write up some of the seminars and gave me his notes. He was extremely generous with his ideas. One could not be lazy or he would reject you. But if you were really interested and doing things he liked, then he helped you a lot. I enjoyed the atmosphere around him very much. He had the main ideas and the aim was to prove theories and understand a sector of mathematics. We did not care much about priority because Grothendieck had the ideas we were working on and priority would have meant nothing.

(Emphasis my own.)

These comments should suffice to illustrate that Grothendieck's intellectual power was uncanny.

In a very interesting transcript titled Reminiscences of Grothendieck and his school, Grothendieck's former student Luc Illusie says:

In 1970 he left the IHES and founded the ecological group Survivre et Vivre. At the Nice congress, he was doing propaganda for it, offering documents taken out of a small cardboard suitcase. He was gradually considering mathematics as not being worth of being studied, in view of the more urgent problems of the survival of the human species.

I think that it's fair to say that Grothendieck's ideas about how to ensure the survival of the human species were greatly misguided. In the second portion of Allyn Jackson's excellent biography of Grothendieck, one finds the following passage:

...despite his strong convictions, Grothendieck was never effective in the real world of politics. “He was always an anarchist at heart,” Cartier observed. “On many issues, my basic positions are not very far from his positions. But he was so naive that it was totally impossible to do anything with him politically.” He was also rather ignorant. Cartier recalled that, after an inconclusive presidential election in France in 1965, the newspapers carried headlines saying that de Gaulle had not been elected. Grothendieck asked if this meant that France would no longer have a president. Cartier had to explain to him what a runoff election is. “Grothendieck was politically illiterate,” Cartier said. But he did want to help people: it was not unusual for Grothendieck to give shelter for a few weeks to homeless people or others in need.

[...]

“Even people who were close to his political views or his social views were antagonized by his behavior.…He behaved like a wild teenager.”

[...]

“He was used to people agreeing with his opinions when he was doing algebraic geometry,” Bumby remarked. “When he switched to politics all the people who would have agreed with him before suddenly disagreed with him.... It was something he wasn’t used to.”

Just as Grothendieck's algebro-geometric achievements had no bearing on Grothendieck's ability to conceptualize a good plan to lower existential risk, so too does Eliezer's ability to interpret quantum mechanics have no bearing on Eliezer's ability to conceptualize a good plan to lower existential risk.

The other problem with Eliezer's appeal to his intellectual prowess is that Eliezer's demonstrated intellectual prowess pales in comparison with that of other people who are interested in existential risk. I wholeheartedly agree with rwallace's comment:

If you want to argue from authority, the result of that isn't just tilted against the SIAI, it's flat out no contest.

By the time Grothendieck was Eliezer's age he had already established himself as a leading authority in functional analysis and proven his vast generalization of the Riemann-Roch theorem. Eliezer's intellectual achievements are meager by comparison.

A more contemporary example of a powerful intellect interested in existential risk is Fields Medalist and Abel Prize winner Mikhail Gromov. On the GiveWell research blog there's an excerpt from an interview with Gromov which caught my attention:

If you try to look into the future, 50 or 100 years from now...

50 and 100 is very different. We know more or less about the next 50 years. We shall continue in the way we go. But 50 years from now, the Earth will run out of the basic resources and we cannot predict what will happen after that. We will run out of water, air, soil, rare metals, not to mention oil. Everything will essentially come to an end within 50 years. What will happen after that? I am scared. It may be okay if we find solutions but if we don't then everything may come to an end very quickly!

Mathematics may help to solve the problem but if we are not successful, there will not be any mathematics left, I am afraid!

Are you pessimistic?

I don't know. It depends on what we do. If we continue to move blindly into the future, there will be a disaster within 100 years and it will start to be very critical in 50 years already. Well, 50 is just an estimate. It may be 40 or it may be 70 but the problem will definitely come. If we are ready for the problems and manage to solve them, it will be fantastic. I think there is potential to solve them but this potential should be used and this potential is education. It will not be solved by God. People must have ideas and they must prepare now. In two generations people must be educated. Teachers must be educated now, and then the teachers will educate a new generation. Then there will be sufficiently many people to face the difficulties. I am sure this will give a result. If not, it will be a disaster. It is an exponential process. If we run along an exponential process, it will explode. That is a very simple computation. For example, there will be no soil. Soil is being exhausted everywhere in the world. It is not being said often enough. Not to mention water. It is not an insurmountable problem but requires solutions on a scale we have never faced before, both socially and intellectually.

I've personally studied some of Gromov's work and find it much more impressive than the portions of Eliezer's work which I've studied. I find Gromov's remarks on existential risk more compelling than Eliezer's remarks on existential risk. Neither Gromov nor Eliezer has substantiated his claims, so by default I take Gromov more seriously than Eliezer. But as I said above, this is really beside the point. The point is that there's a history of brilliant people being very mistaken in their views about things outside of their areas of expertise, and that discussion of existential risk should be based on evidence rather than on argument by authority. I agree with a remark which Holden Karnofsky made in response to my GiveWell research mailing list post:

I think it's important not to put too much trust in any single person's view based simply on credentials.  That includes [...] Mikhail Gromov [...] among others.

I encourage Less Wrong readers who have not done so to carefully compare the marginal impact that one can hope to have on existential risk by focusing on AI with the marginal impact that one can hope to have by focusing on a specific existential risk unrelated to AI. When one does so, one should beware of confirmation bias. If one came to believe that focusing on AI is a good idea without careful consideration of the alternatives, one should assume oneself to be irrationally biased in favor of focusing on AI.

Bottom line

There's a huge amount of uncertainty as to which existential risks are most likely to strike and what we can hope to do about them. At present reasonable people can hold various views on which existential risks are worthy of the most attention. I personally think that the best way to face the present situation is to gather more information about all existential risks rather than focusing on one particular existential risk, but I might be totally wrong. Similarly, people who believe that AI deserves top priority might be totally wrong. At present there's not enough information available to determine which existential risks deserve top priority with any degree of confidence.

SIAI can credibly claim (1'), but it cannot credibly claim (1') with confidence. Because non-credible claims about existential risk drive people away from thinking about existential risk, SIAI should take special care to avoid the appearance of undue confidence in claim (1').

124 comments

Comments sorted by top scores.

comment by CarlShulman · 2010-08-18T02:46:32.467Z · LW(p) · GW(p)

Regarding D) it depends on why the risks are getting varying amounts of attention. Existential risks mainly get derivative attention as a result of more likely/near-term/electorally-salient/commonsense-morality-salient lesser forms. For instance, engineered diseases get countermeasure research because of the threat of non-extinction-level pathogens causing substantial casualties, not the less likely and more distant scenario of a species-killer. Anti-nuclear measures are driven more by the expected casualties from nuclear war than by the chance of surprisingly powerful nuclear winter, etc. Climate change prevention is mostly justified in non-existential risk terms, and benefits from a single clear observable mechanism already in progress that fits many existing schema for environmentalism and dealing with pollutants.

The beginnings of a similar derivative effort are visible in the emerging "machine ethics" area, which has been energized by the development of Predator drones and the like, although it's noteworthy how little was done on AI risk in the early, heady days of AI, when researchers were relatively confident of success coming soon.

Regarding A), I'll have more to say at another time. I will give three key quick-to-explain points that are fairly important to me in concentrating a good chunk of probability mass in the next one to ten decades:

1) If we're talking about 2100, the time between now and then is half again longer than the history of AI so far.

2) Theoretical progress is hard to predict, but progress in computing hardware has been quite predictable. While cheap hardware isn't an overwhelming aid in AI development (slow sequential theory advances that can't be much accelerated by throwing more people at them may remain a core bottleneck for a long time) it does have some benefits:

a) Some algorithms scale well with hardware performance, e.g. in vision and computer chess.
b) Cheap hardware incentivizes people to try to come up with hardware-hungry algorithms.
c) Abundant computing makes it easy for computer scientists to perform numerous experiments and test many parameter values for their algorithms.
d) Cheap computing, by enhancing the performance and utility of software, drives the expansion of the technology industry, which is accompanied by large increases in the number of corporate and academic researchers.
e) Products dependent on hardware advance (e.g. robots, the internet, etc.) can produce large datasets and useful testing grounds for AI and machine learning.

All told, these effects of hardware growth give us reason to think that we should concentrate more of our probability mass for AI development further into Moore's Law (and not too long after its end).

3) Neuroimaging advance has been quite impressive. Kurzweil is more optimistic on timelines than most neuroscientists, but there is wide agreement that neuroimaging tools will improve in various respects by yet more orders of magnitude, and shed at least some substantial light on how the brain works. If those tools may be useful, that should lead us to focus probability mass in the period reasonably soon after they are developed and used.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-18T05:41:44.455Z · LW(p) · GW(p)

Thanks Carl, I'm glad to finally be getting some engagement concerning (A). I will think about these things.

comment by John_Maxwell (John_Maxwell_IV) · 2010-08-19T20:16:39.059Z · LW(p) · GW(p)

I interpret this to be a statement of the type "You should believe SIAI's claims (1) and (2) because we're really smart."

No, it's a statement of the type "You should believe SIAI's claims (1) and (2) because we're really rational." Your mathematician may have been smart and not rational. I remember reading about the phenomenon of smart non-rational people, maybe here: http://www.magazine.utoronto.ca/feature/why-people-are-irrational-kurt-kleiner/

Anyway, your mathematician is a terrible example of an irrational person because he was acting more rationally than any of his colleagues. Ethan Herdrick: "Our current known reserves of unapplied math should last centuries." Your mathematician's only mistake was not looking for a third alternative--a use of his time better than both math and environmentalist protesting.

comment by jimmy · 2010-08-18T22:03:25.658Z · LW(p) · GW(p)

EY argues: "... your smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong, and you will no longer think it anywhere near as plausible that their differing opinion is because they know some incredible secret knowledge you don't."

and you respond by saying that there have been people smarter than Eliezer that have suffered rationality fails when working outside their domain? Isn't that kinda the point?

EY wasn't arguing "My IQ is so damn high that I just have to be right. Look at my ability to generate novel hypotheses! It clearly shows high IQ!", which would indeed be foolish. It is understood here that high innate intelligence is not the same as real world effectiveness, which requires being intelligent about how one uses one's intelligence.

The object of the game here is to evaluate hypotheses which have already been generated (i.e., SIAI claims). EY was showing that there are many very smart people that can't even evaluate the MWI hypothesis when it is handed to them and there is slam-dunk evidence.

If you can't even get the right answer on simple questions, how the heck are you supposed to do better on tough problems than those that see the simple problems as, well... simple?

EDIT: It seems like my point did not come off clearly. I am not arguing that it is not an appeal to authority.

I am arguing that high IQ is different from "has lots of knowledge" which is different from "knows the fundamental rules of how to weigh evidence and evaluate claims", and that Eliezer was talking about the last one.

Replies from: ciphergoth, XiXiDu, multifoliaterose, multifoliaterose
comment by Paul Crowley (ciphergoth) · 2010-08-18T22:21:29.366Z · LW(p) · GW(p)

More specifically, XiXiDu's whole point was "how do I evaluate this if, instead of addressing the arguments behind it, I talk about who believes it and who doesn't?" If that's the argument, it's fair enough for Eliezer to ask them to assess the rationality of the people whose opinions are being weighed.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-19T09:41:36.193Z · LW(p) · GW(p)

More specifically, my point regarding other people's beliefs was that there are people who know about the topic of superhuman AI and related risks but who, judged by their lesser or non-existent campaigns to prevent the risks, came to different conclusions.

Reference: The Singularity: An Appraisal (Video) - Alastair Reynolds, Vernor Vinge, Charles Stross, Karl Schroeder

In the case of Marvin Minsky and other AI researchers, among others, the knowledge of possible risks should be reasonable to infer from their overall knowledge of the topic.

EY was showing that there are many very smart people that can't even evaluate the MWI hypothesis.

Many scientists disregard speculations concerning the interpretation of quantum mechanics. This is because it does not yield additional predictions, i.e., it is not subject to empirical criticism.

EY wasn't arguing "My IQ is so damn high that I just have to be right.

I disagree based on the following evidence:

http://xixidu.net/lw/05.png "At present I do not know of any other person who could do that." (Reference)

The object of the game here is to evaluate hypotheses which have already been generated (i.e., SIAI claims).

Hypotheses based on shaky conclusions, not on previous evidence.

comment by XiXiDu · 2010-08-19T09:41:59.213Z · LW(p) · GW(p)

More specifically, my point regarding other people's beliefs was that there are people who know about the topic of superhuman AI and related risks but who, judged by their lesser or non-existent campaigns to prevent the risks, came to different conclusions.

Reference: The Singularity: An Appraisal (Video) - Alastair Reynolds, Vernor Vinge, Charles Stross, Karl Schroeder

In the case of AI researchers like Marvin Minsky, amongst others, the knowledge of possible risks should be reasonable to infer from their overall familiarity with the topic.

EY wasn't arguing "My IQ is so damn high that I just have to be right.

I disagree based on the following evidence:

The object of the game here is to evaluate hypotheses which have already been generated (i.e., SIAI claims).

Hypotheses based on shaky conclusions, not on previous evidence.

Replies from: None, wedrifid, jimmy
comment by [deleted] · 2010-08-19T20:30:22.245Z · LW(p) · GW(p)

EY wasn't arguing "My IQ is so damn high that I just have to be right.

I disagree based on the following evidence:

http://xixidu.net/lw/05.png "At present I do not know of any other person who could do that." (Reference)

You keep posting screenshots from the deleted Roko's post, with the "forbidden" parts blacked-out. I agree that the whole matter could have been handled much better, but I don't see how it or the other quoted line bears on the interpretation of the sentence quoted at the top of jimmy's post. Also, people have asked you several times to stop reminding them of the deleted post and the need for quotes proving that EY thinks highly of his intelligence can be satisfied without doing that. Seriously, they're everywhere.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-20T09:14:20.145Z · LW(p) · GW(p)

XiXiDu argues: "... your smart friends at Less Wrong and favorite rationalists like EY are not remotely close to the rationality standards of other people out there (yeah, there are other smart people, believe it or not), and you will no longer think it anywhere near as plausible that their differing opinion is because they know some incredible secret knowledge you don't."

You keep telling me that my arguments are no evidence for what I'm trying to prove. Other people asked me several times not to make up fantasies of AI-Gods kicking their testicles. But if you want to be upvoted the winning move is just to go think about something else. So take my word for it, I know more than you do, no really I do, and SHUT UP.

comment by wedrifid · 2010-08-19T12:19:46.788Z · LW(p) · GW(p)

I disagree based on the following evidence:

I actually feel embarrassed just from reading that.

comment by jimmy · 2010-08-19T18:09:11.721Z · LW(p) · GW(p)

See the edit to the original comment.

comment by multifoliaterose · 2010-08-18T22:40:25.706Z · LW(p) · GW(p)

If only claims (1) and (2) had been critically analyzed in detail on Less Wrong or the SIAI website I would find your comment compelling. Given that such analysis has not been made or released, I interpret Eliezer's response as an argument by authority.

comment by multifoliaterose · 2010-08-18T22:34:14.008Z · LW(p) · GW(p)

If only there had been detailed critical analysis of claims (1) and (2) on Less Wrong or the SIAI website I would find your comment compelling. But in light of the fact that detailed critical analysis of these significant claims has not taken place I believe that Eliezer's remarks are in fact properly conceptualized as an appeal to authority.

Replies from: jimmy
comment by jimmy · 2010-08-19T17:56:51.088Z · LW(p) · GW(p)

I totally agree that it's an appeal to authority. My point was that it's an appeal to a different and more relevant kind of authority.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-19T19:15:28.568Z · LW(p) · GW(p)

Do you disagree with

Just as Grothendieck's algebro-geometric achievements had no bearing on Grothendieck's ability to conceptualize a good plan to lower existential risk, so too does Eliezer's ability to interpret quantum mechanics have no bearing on Eliezer's ability to conceptualize a good plan to lower existential risk.

?

If so, why?

Replies from: jimmy
comment by jimmy · 2010-08-20T17:36:07.666Z · LW(p) · GW(p)

Yes, I mostly disagree.

The first part is giving an example of high IQ not leading to a good existential risk plan, and the second part is saying that you expect that high ability to weigh evidence won't lead to a good plan either.

The counterexample proves that high IQ isn't everything one needs, but overall, I'd still expect it to help. I think "no bearing" is too strong even for an IQ->IQ comparison of that sort.

If you're going to assume you've been exposed to all the plans that people have come up with, picking the right plan is more of a claim evaluation job than a novel hypothesis generation job. For this, you're going to want someone that can evaluate claims like MWI easily. I think that this is sufficiently close to the case to make your comparison a poor one.

If I were going to make a comparison to make your point (to the degree which I agree with it), I'd use more than one person with more than one strength of intellect and instead ask "do we really think EY has shown enough to succeed where most talented people fail?". I'd also try to make it clear whether I'm arguing against him having a 'majority' of the probability mass in his favor vs having a 'plurality' of it going for him. It's a lot easier to argue against the former, but it's the latter that is more important if you have to pick someone to give money to.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-20T17:59:18.459Z · LW(p) · GW(p)

But how well does the ability to evaluate evidence connected with quantum mechanics correlate with ability to evaluate evidence connected with existential risk?

See also the thread here.

comment by khafra · 2010-08-18T13:49:02.539Z · LW(p) · GW(p)

I agree with the overall point here, but the "argument by authority" section is deeply flawed. In it, intelligence is consistently equated with rationality; and the section's whole point seems to depend on that equation. As demonstrated in works like What Intelligence Tests Miss, G and rationality have markedly different effects. I don't think Eliezer would claim to be smarter than Grothendieck or Gödel or Erdős, but he could claim with some justification to be saner than them.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-18T16:01:21.816Z · LW(p) · GW(p)

It appears that what distinguished Grothendieck was not high g-factor. See Jordan Ellenberg's blog post titled The capacity to be alone.

My point is that Grothendieck exhibited very high instrumental rationality with respect to mathematics but low instrumental rationality with respect to his efforts to ensure the survival of the human race, and that something analogous could very well be the case of Eliezer.

I don't think Eliezer would claim to be smarter than Grothendieck or Gödel or Erdős, but he could claim with some justification to be saner than them.

What evidence is there that Eliezer is saner than Grothendieck? I don't have a strong opinion on this point, I'm just curious what you have in mind.

Replies from: Risto_Saarelma, khafra
comment by Risto_Saarelma · 2010-08-18T18:34:45.627Z · LW(p) · GW(p)

What evidence is there that Eliezer is saner than Grothendieck? I don't have a strong opinion on this point, I'm just curious what you have in mind.

It should perhaps be mentioned that the few accounts of encountering Grothendieck during the last 20 years describe someone who seems actually clinically insane, with delusions and extreme paranoia, not just someone with less than stellar rationality.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-18T18:41:41.416Z · LW(p) · GW(p)

Yes, I concur. But what about Grothendieck in the 1970s vs. Eliezer now? Or Gromov now vs. Eliezer now? It's not clear to me which way such comparisons go.

comment by khafra · 2010-08-18T21:04:30.865Z · LW(p) · GW(p)

Grothendieck's magnum opus was his contributions to pure mathematics. That requires very high intelligence and a willingness to, in hackneyed terms, think outside the box; or, in LW terms, go to school wearing a clown suit.

Eliezer's magnum opus, so far, is the sequences. They combine a lot of pre-existing work and some of his own insights into a coherent whole that displays, I think, extraordinarily rare sanity. Pratchett's "First Sight," applied to a wide variety of fields. Going through accumulated human knowledge and picking out a framework that satisfies Occam's Razor better than any other I've seen is why I think he's very sane.

Replies from: Perplexed
comment by Perplexed · 2010-08-19T05:37:23.016Z · LW(p) · GW(p)

It seems a little odd that the grandparent comment was about arguments from authority, but here we are talking about Grothendieck's work in pure math and Eliezer's on methods of rationality. Because the thing is, in neither area can an appeal to authority work. Regardless of how much G, or how much scholarship and expertise they have acquired, they both have to "win" by actually convincing ordinary people with their arguments rather than overawing them with their authority.

On the other hand, when advocating anarchist political positions or prioritizing existential risks, authority helps. Trouble is, neither math skill nor {whatever it is that EY does so well} qualifies as a credential for the needed kind of authority.

Replies from: jimmy, multifoliaterose
comment by jimmy · 2010-08-21T19:22:47.563Z · LW(p) · GW(p)

There's a place for "argument from authority".

The idea is that you don't, in general, have fully articulated proofs of the question in hand, and you're relying on some combination of heuristics to come to your conclusion.

If you're allowed to hear other people's answers, and a bit about the people making them, then you have a set of heuristics and answers, and you have to guess what the real answer is based on these. If you stick with your original answer, you're arbitrarily picking one heuristic to trust completely, which is clearly suboptimal.

You want to discount like-minded thinking (many people, one heuristic), weigh more heavily people's views that you know were reached by thinking about the problem in different ways (again, weight the heuristic, not the person), and of course, more heavily weight heuristics that you expect to work. It's how to do this last part that we're talking about.

High G people may have access to more complex heuristics that most could not come up with, but what's more important is having your heuristic free of errors that prevent its functioning. Knowing what a heuristic has to do in order to work is more important than having a lot of cognitive horsepower spent on coming up with fancy heuristics without a solid reason.

Of course, in the end, if you spot a glaring error in someone's thinking, you don't trust him, even if he's an 'authority' (in other words: even if he has a track record of producing good heuristics, you condition on this one being bad and don't trust the output). And of course, the deeper into the object level you are able to dive, the more information you have on which to judge the credibility of heuristics.

Perhaps it has better connotations when stated as "Aumann agreement"?
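A toy sketch of that weighting scheme (the heuristic labels, estimates, and reliabilities below are invented purely for illustration; they are not taken from the discussion above):

    from collections import defaultdict

    # Each entry: (heuristic used, probability estimate it produced, how reliable
    # you expect that heuristic to be). Several people can share one heuristic.
    estimates = [
        ("trend_extrapolation", 0.70, 0.5),
        ("trend_extrapolation", 0.72, 0.5),  # like-minded: same heuristic, little extra information
        ("outside_view",        0.30, 0.8),
        ("expert_survey",       0.50, 0.4),
    ]

    # Group by heuristic, so many people relying on one heuristic count roughly once.
    by_heuristic = defaultdict(list)
    for name, p, reliability in estimates:
        by_heuristic[name].append((p, reliability))

    pooled = []
    for name, values in by_heuristic.items():
        mean_p = sum(p for p, _ in values) / len(values)
        weight = max(r for _, r in values)  # extra users of the same heuristic add no weight
        if weight == 0:  # a heuristic with a spotted fatal error gets no say
            continue
        pooled.append((mean_p, weight))

    # Combine the distinct heuristics, weighted by expected reliability.
    combined = sum(p * w for p, w in pooled) / sum(w for _, w in pooled)
    print(round(combined, 3))  # ~0.468 with these made-up numbers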

comment by multifoliaterose · 2010-08-20T17:56:59.446Z · LW(p) · GW(p)

Agree with this.

comment by Daniel_Burfoot · 2010-08-18T03:22:46.790Z · LW(p) · GW(p)

This post seems mis-named. I thought you were going to discuss "other existential risks" like nuclear war, global pandemic, environmental collapse, but mostly the discussion was about how to evaluate SIAI claims.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-18T03:49:37.767Z · LW(p) · GW(p)

A large portion of my post is about the idea that there's reason to doubt (1') on account of the existence of other existential risks. I do see what you mean though. I'm open to suggestions for how I might rename the post.

comment by Johnicholas · 2010-08-18T03:35:55.552Z · LW(p) · GW(p)

SIAI's narrow focus on things that "look like HAL" neglects the risks of entities that are formed of humans and computers (and other objects) interacting. These entities already exist, they're already beyond human intelligence, and they're already existential risks.

Indeed, Lesswrong and SIAI are two obvious examples of these entities, and it's not clear at all how to steer them to become Friendly. Increasing individual rationality will help, but we also need to do social engineering - checks and balances and incentives (not just financial, but social incentives such as attention and praise) - and groupware research (e.g. karma and moderation systems, expert aggregation).

Replies from: None, cousin_it
comment by [deleted] · 2010-08-18T15:21:35.486Z · LW(p) · GW(p)

I don't think that "entities that are formed of humans and computers (and other objects) interacting" is sufficiently specific to be considered a type of existential risk. Any organization can be put into that category and unlike AGI, it's not true that most possible organizations have goal systems indifferent to human morals.

Also, the fact that organizations can be dangerous is well known and there doesn't seem to be a simple solution to that or anything else a small organization could do. The problem isn't about coming up with checks and balances or incentive systems, it's about making people sane enough to use those solutions.

Replies from: torekp
comment by torekp · 2010-08-21T19:52:18.772Z · LW(p) · GW(p)

I don't think that "entities that are formed of humans and computers (and other objects) interacting" is sufficiently specific to be considered a type of existential risk.

True, but Johnicholas still has a point about "things that look like HAL," namely, that such scenarios present the uFAI risk in an unconvincing manner. To most people, I suspect a scenario in which individuals and organizations gradually come to depend too much on AI would be more plausible.

comment by cousin_it · 2010-08-18T07:02:55.403Z · LW(p) · GW(p)

What makes you think LW is smarter than a human?

Replies from: Johnicholas
comment by Johnicholas · 2010-08-18T07:12:29.273Z · LW(p) · GW(p)

On some measures (breadth of knowledge, responsiveness at all hours, words-typed-per-month), LW is superhuman. On most other measures, LW can default to using one of its (human) component's capabilities, and thereby achieve human-comparable performance. I admit it has problems with cohesiveness and coherency.

comment by Simulation_Brain · 2010-08-20T06:02:43.365Z · LW(p) · GW(p)

I think this is an excellent question. I'm hoping it leads to more actual discussion of the possible timeline of GAI.

Here's my answer, important points first, and not quite as briefly as I'd hoped.

1) Even if uFAI isn't the biggest existential risk, the very low investment and interest in it might make it the best marginal value for investment of time or money. As someone noted, having at least a few people thinking about the risk far in advance seems like a great strategy if the risk is unknown.

2) No one but SIAI is taking donations to mitigate the risk (as far as I know) so your point 2 is all but immaterial right now.

3) I personally estimate the risk of uFAI to be vastly higher than any other, although I am, as you point out, quite biased in that direction. I don't think other existential threats come close (although I don't have the expertise to evaluate "gray goo" self-replicator dangers). a) AI is a new risk (plagues and nuclear wars have failed to get us so far); b) it can be deadly in new ways (outsmarting/out-teching us); and c) we don't know for certain that it won't happen soon.

How hard is AI? We actually don't know. I study not just the brain but how it gets computation and thinking done (a rare and fortunate job; most neuroscientists study neurons, not the whole mind) - and I think that its principles aren't actually all that complex. To put it this way: algorithms are rapidly approaching the human level in speech and vision, and the principles of higher-level thinking appear to be similar. (As an aside, EY's now-outdated Levels of General Intelligence does a remarkably good job of converging with my independently-developed opinion on principles of brain function.) In my limited (and biased) experience, those with similar jobs tend to have similar opinions. But the bottom line is that we don't know either how hard, or how easy, it could turn out to be. Failure to this point is not strong evidence of continued failure.

And people will certainly try. The financial and power incentives are such that people will continue their efforts on narrow AI, and proceed to general AI when it helps solve problems. Recent military and intelligence grants indicate a trend of increasing interest in getting beyond narrow AI to get more useful AI; things that can make intelligence and military decisions and actions more cheaply (and eventually reliably) than a human. Industry similarly has a strong interest in narrow AI (e.g., sensory processing) but they will probably be a bit later to the GAI party given their track record of short-term thinking. Academics are certainly doing GAI research, in addition to lots of narrow AI stuff. Have a look at the BICA (biologically inspired cognitive architecture) conference for some academic enthusiasts with baby GAI projects.

So, it could happen soon. If it gets much smarter than us, it will do whatever it wants; and if we didn't build its motivational system veeery carefully, doing what it wants will eventually involve using all the stuff we need to live.

Therefore, I'd say the threat is on the order of 10-50%, depending on how fast it develops, how easy making GAI friendly turns out to be, and how much attention the issue gets. That seems huge relative to other truly existential threats.

If it matters, I believed very similar things before stumbling on LW and EY's writings.

I hope this thread is attracting some of the GAI sceptics; I'd like to stress-test this thinking.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-08-20T06:52:17.991Z · LW(p) · GW(p)

No one but SIAI is taking donations to mitigate the risk (as far as I know)

See Organizations formed to prevent or mitigate existential risks. (FHI isn't listed there for some reason.) Besides FHI, I know at least Lifeboat Foundation is also taking donations. They endorse SIAI, but have their separate plans.

comment by Paul Crowley (ciphergoth) · 2010-08-17T22:17:31.168Z · LW(p) · GW(p)

With respect to point (E), in Astronomical Waste Bostrom writes:

a single percentage point of reduction of existential risks would be worth (from a utilitarian expected utility point-of-view) a delay of over 10 million years.

From this, if a near-existential disaster could cause a delay of, say, 10,000 years in reaching the stars, then a 10% reduction in the risk of such a disaster is worth the same as a 0.0001% reduction in existential risk.
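A quick check of that arithmetic, taking the 10-million-years-per-percentage-point figure above at face value (the 10,000-year delay and the 10% risk reduction are just the hypothetical numbers in this example):

    # Sanity check of the equivalence above; all inputs are the hypothetical
    # figures quoted in this comment, not estimates of actual risks.
    years_per_percentage_point = 10_000_000  # 1 percentage point of x-risk reduction ~ 10 million years of delay
    delay_years = 10_000                     # delay caused by the hypothesized near-existential disaster
    risk_reduction = 0.10                    # 10% reduction in the chance of that disaster

    expected_delay_avoided = risk_reduction * delay_years                    # 1,000 years
    equivalent_points = expected_delay_avoided / years_per_percentage_point  # in percentage points
    print(equivalent_points)  # 0.0001, i.e. a 0.0001% reduction in existential risk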

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-17T22:19:12.410Z · LW(p) · GW(p)

Yes, I appreciate that point; my concern is with permanent obstructions to technological development.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-08-17T22:27:45.043Z · LW(p) · GW(p)

Yes, a permanent obstruction is an existential risk. There is some discussion of ways in which a nuclear war could permanently obstruct our reaching the stars, but I'm not sure the risk is that high.

Full-on runaway global warming is absolutely an existential risk - it will do more than delay us if the planet turns into Venus. Again this isn't considered a very likely outcome.

Replies from: JoshuaZ, ewbrownv, multifoliaterose
comment by JoshuaZ · 2010-08-18T01:29:20.942Z · LW(p) · GW(p)

It seems extremely unlikely that we'll have Venusian-style runaway global warming anytime in the next few thousand years, assuming no major geoengineering occurs. A major part of why that happened on Venus is the lack of plate tectonics there. Without that, there are serious limits. Earth could become much more inhospitable to humans, but it would be very difficult to even have more than a 20 or 30 degree Fahrenheit increase. So humans would have to live near the poles, but it wouldn't be fatal.

A more serious long-term obstruction to going to the stars is that it isn't completely clear after a large-scale societal collapse that we will have the resources necessary to bootstrap back up to even current tech levels. Nick Bostrom has discussed this. Essentially, many of the resources we take for granted as necessary for developing a civilization (oil, coal, certain specific ores) have been consumed by civilization. We're already exhausting the easy to reach oil and have exhausted much of the easy to reach coal (we just don't notice it as much with coal because there's so much). A collapse back to bronze age tech, or even late Roman tech might not have enough easy energy sources to boot back up. That will be especially likely if the knowledge of how to make more advanced energy sources becomes lost. I suspect that there are enough natural resources now still left that a collapse would not prevent a future rise again. But as we consume more resources that becomes less true. And even if we develop cheap alternatives like fusion power, if we've already exhausted the low-tech resources we're going to be in very bad shape for a collapse. Indeed, arguably a strong reason for conserving energy now is to keep those resources around if things go drastically bad.

Replies from: CarlShulman
comment by CarlShulman · 2010-08-18T02:22:27.534Z · LW(p) · GW(p)

The big question for these issues is how much 'slack' we had over our development trajectory. A new civilization could cultivate biomass for energy, and hydropower provides a fair amount of electricity without steady use of consumables. I'd say probably but not very confidently we could recover after intense resource depletion and collapse.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-08-18T02:24:29.085Z · LW(p) · GW(p)

Right, and in some respects we'd actually have tiny advantages the second time around, in that a lot of metals which are hard to separate from ores are already separated, so humans who know where to look will have easy sources of metal. This will be particularly relevant for copper and aluminum, which are difficult to extract without large technological bases.

Replies from: ewbrownv
comment by ewbrownv · 2010-08-23T20:06:37.920Z · LW(p) · GW(p)

Yes, and let's keep in mind that no civilization with colonial-era tech has ever collapsed to a pre-industrial level, and it isn't at all clear that such an event is possible. You'd have to kill more than 99% of the population and keep the survivors from forming town-sized communities for a couple of generations, and even then the knowledge is still available in books. To me this just looks like reasoning from fictional evidence - there are lots of stories about primitive survivors of lost civilizations, so people assume that must be a plausible outcome.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-08-26T00:31:46.394Z · LW(p) · GW(p)

That may be using a bad reference class. We know that slides backwards have happened for other tech levels. I don't see an intrinsic reason to think it couldn't happen for a society at or near our tech level.

Replies from: ewbrownv
comment by ewbrownv · 2010-08-26T15:56:38.209Z · LW(p) · GW(p)

When low-tech societies collapse, the reason is typically that they lose access to some resource that’s essential to their way of life, and they can’t adapt because their technology base doesn’t include anything they can switch to as a substitute. Since the number of potential substitutes for any given resource grows steadily as technology advances, we would expect more advanced societies to be more resistant to that type of problem, and indeed that’s what we see in the historical record. If you can’t keep the nuclear power plants working you can always fall back on oil, or natural gas, or coal, or hydro, or windmills, and so on all the way down the chain to bronze age power sources. Then, once you find a level you can sustain in your new situation, you can start rebuilding transportation and industry to get back to where you were before the disaster.

Which is why I say that the “big disaster causes civilization to collapse” scenario is fictional evidence. AFAIK it has never happened to any society that had even colonial-era tech, and there are good reasons to think it can’t unless you posit such a high casualty rate (>99%) that instant extinction becomes an equally plausible outcome.

comment by ewbrownv · 2010-08-23T19:56:51.229Z · LW(p) · GW(p)

In evaluating existential risks it's essential to focus our attention on actual predictions and realistic scenarios, instead of fanciful 'worst imaginable case' scenarios. Earth could be destroyed by a giant asteroid made of antimatter moving at 99% C tomorrow, but since there's no reason to think such things actually exist it would be a waste of time to worry about them. Better to focus our attention on the scenarios that are actually plausible, so we don’t waste our efforts.

With that in mind, I’ll point out that even the worst-case IPCC scenarios do not come remotely close to posing an existential risk. The predicted climate changes are only somewhat larger than what we experienced in the 20th century, and the predicted effects are mostly an increase of suffering in countries that are too poor to adapt easily. As near as I can tell global warming is only included in this kind of list because so many people have it in their mental ‘scary global bad stuff’ bucket, and don’t notice that crop failures and malaria outbreaks are in a completely different league than the end of all life on Earth.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-08-23T22:09:22.840Z · LW(p) · GW(p)

I would want to hear that from a climatology pro who acknowledged other existential risks to be really reassured, but thanks, and I hope you're right!

comment by multifoliaterose · 2010-08-17T22:34:25.088Z · LW(p) · GW(p)

I basically agree with your remarks.

What about resource shortage?

Edit: See Scott Aaronson's remarks under The Singularity Is Far for one perspective.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-08-18T06:57:42.010Z · LW(p) · GW(p)

Resource shortage (as JoshuaZ raises) is the discussion I was thinking of. Thanks for the link to that essay - I hadn't read it, and it's worth reading, as is so often the case with him.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-18T08:41:18.462Z · LW(p) · GW(p)

I'd also remark that the asteroid risk seems like it might be worth thinking about, not because it's at all at the top of the list of things that might go wrong, but because it might be cheap to dispense with. I don't have relevant subject matter knowledge but am friends with an applied physics graduate student who suggested that it might cost 100 million dollars or less. Maybe even around a mere 10 million dollars.

Carl expresses skepticism that working against asteroid strikes is cost-effective here.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-08-18T10:03:38.069Z · LW(p) · GW(p)

Asteroid risk is a good "poster child" for existential risk in general, since it's easily understood and doesn't provoke skepticism the way other risks can. To some extent, this means I'm less worried about it, since I'm more optimistic that if I don't campaign about it someone else will.

Replies from: John_Maxwell_IV, XiXiDu, multifoliaterose
comment by John_Maxwell (John_Maxwell_IV) · 2010-08-19T20:36:50.710Z · LW(p) · GW(p)

I read in Influence that people are much more likely to identify with a cause once they've made a small commitment to it. Perhaps the best thing we can do for existential risk is to track down people who seem like intelligent, rational sorts and ask them to make very small contributions to preventing asteroid risk?

comment by XiXiDu · 2010-08-18T10:47:35.907Z · LW(p) · GW(p)

I wish. Last time I read about it, the U.S. government wasn't inclined to spend the few million necessary for an all-sky survey to register all potentially dangerous objects.

Replies from: mkehrt
comment by mkehrt · 2010-08-19T00:05:17.436Z · LW(p) · GW(p)

Is it only expected to be a few million? This could easily be privately funded with a good advertising campaign. For example, a project which might have a similar audience, SETI, is entirely privately funded and has a budget of a few million a year.

comment by multifoliaterose · 2010-08-18T10:39:43.426Z · LW(p) · GW(p)

Part of why I mention asteroid risk is because it's a good poster child for existential risk in general. See the document which I emailed you.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-08-18T11:24:14.321Z · LW(p) · GW(p)

Not received it yet - what address did you mail it to? Try paul at ciphergoth dot org.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-18T11:28:03.230Z · LW(p) · GW(p)

Sent

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-08-18T11:49:08.809Z · LW(p) · GW(p)

Still no joy I'm afraid. Is it possible your sender IP is listed by zen.spamhaus.org? I've checked my junk folder for things titled "asteroid" and found nothing. If you can tell me the sender address I can tell you if it's showing up in my mail logs. Sorry!

comment by orthonormal · 2010-08-21T07:19:40.336Z · LW(p) · GW(p)

On the issue of AI timelines:

A quantitative analysis of the sort you seek is really not possible for the specifics of future technological development. If we knew exactly what obstacles stood in the way, we'd be all but there. Hence the reliance instead on antipredictions and disjunctions, which leave a lot of uncertainty but can still point strongly in one direction.

My own reasoning behind an "AI in the next few decades" position is that, even if every other approach people have thought of and will think of bogs down, there's always the ability to simulate a human brain, and the only obstacles there are scanning technology and computing power. In those domains, it's rather less controversial to predict further advances (well within the theoretical limits).

Any form of cognitive enhancement (even just uploaded brains running faster than embodied brains, not to mention increasing memory or cognitive abilities) makes AI development easier and easier, and could enter a runaway state on its own.

Secondly, please don't cite Tim Tyler as a source if you're going to hold SIAI responsible for the argument. He's a technophile who counts himself a fellow-traveler, but he definitely doesn't speak for them on such issues.

Replies from: timtyler, multifoliaterose
comment by timtyler · 2010-08-21T07:27:46.811Z · LW(p) · GW(p)

please don't cite Tim Tyler as a source if you're going to hold SIAI responsible for the argument

Surely the poster wasn't doing that!

comment by multifoliaterose · 2010-08-21T07:27:53.062Z · LW(p) · GW(p)

Secondly, please don't cite Tim Tyler as a source if you're going to hold SIAI responsible for the argument. He's a technophile who counts himself a fellow-traveler, but he definitely doesn't speak for them on such issues.

I was not citing Tim Tyler as a source for SIAI's views; I was addressing his argument as one of many in favor of a short-term focus on AI.

Is there something that you would suggest that I do to make this more clear in the top level post?

comment by Jonathan_Graehl · 2010-08-17T21:44:33.057Z · LW(p) · GW(p)

Suppose pro-friendly AI and anti-uncontrolled-AI advocacy and research is not at this point the most effective mitigation of x-risk. It doesn't follow that nothing at all should be done now.[1]

I would still want something like SIAI funded to some level (just like I would want a few competent people evaluating the utility of planning and preparing for other far-off high-leverage risks/opportunities).

Broadly, the question is: who should be funded, and for how much, to plan/act for our possible far-future benefit. Specifically: holding everything else constant, how much should SIAI (or something like SIAI) be funded?

[1] except under the extreme "all your charitable eggs in one basket" scenario used to argue for funding SIAI; to the extent that far superior options simply cannot be funded enough, no matter how widely publicized the need, which I don't believe to be the case.

Replies from: rwallace
comment by rwallace · 2010-08-17T22:05:27.400Z · LW(p) · GW(p)

Suppose pro-friendly AI and anti-uncontrolled-AI advocacy and research is not at this point the most effective mitigation of x-risk.

That's not the problem. The problem is that it's not mitigation at all, it's exacerbation. The current state of affairs is not stable (for that matter, it's not even in equilibrium); either we go up or we go down. If we snuff out real research in favor of hopeless feel-good programs to formalize Friendliness with pen and paper, we throw away chances of the former and thereby choose the latter by default.

Remember, it is the way of extinction that what kills the last individual often has nothing to do with the factors that doomed the species. For all anyone knows, the last dodo may have died of old age. I'm confident there will still be at least some people alive in 2100. But whether there still exists a winning move for humanity at that stage may depend on what we choose to support now, in the early decades of the century.

Replies from: Jonathan_Graehl, John_Maxwell_IV
comment by Jonathan_Graehl · 2010-08-17T23:45:10.052Z · LW(p) · GW(p)

I grant that if it (thinking about FAI) were certain to be harmful, then absolutely none of it should be done. I didn't even consider that possibility.

I don't think it's certainly harmful, and I believe it has some expected (or at least diversification) benefit.

comment by John_Maxwell (John_Maxwell_IV) · 2010-08-19T20:40:28.000Z · LW(p) · GW(p)

Remember, it is the way of extinction that what kills the last individual often has nothing to do with the factors that doomed the species. For all anyone knows, the last dodo may have died of old age. I'm confident there will still be at least some people alive in 2100. But whether there still exists a winning move for humanity at that stage may depend on what we choose to support now, in the early decades of the century.

What if we take a survivalist-type approach, putting a bunch of people and natural resources deep underground somewhere?

Replies from: rwallace
comment by rwallace · 2010-08-19T20:57:46.014Z · LW(p) · GW(p)

That would protect against certain straightforward kinds of disaster e.g. asteroid impact, but not against more subtle and more likely threats. Remember, people in a deep underground shelter will still die of old age just as they would have on the surface.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2010-08-19T23:21:05.209Z · LW(p) · GW(p)

OK, so what percentage of humanity's resources (human and natural) do you think should be kept in reserve underground?

My guess is we both agree it should be way more than it is right now.

Remember, people in a deep underground shelter will still die of old age just as they would have on the surface.

Research "sex" :-P

Replies from: rwallace
comment by rwallace · 2010-08-20T00:08:14.236Z · LW(p) · GW(p)

I think we're better off spending the resources more proactively, unless and until we find evidence of an imminent threat of a variety against which that is a good defense. For example, instead of spending money preemptively populating underground shelters in case of an asteroid impact, I'd rather spend it extending our surveys of the sky to have a better chance of spotting an incoming asteroid in time to deflect it.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2010-08-20T03:12:25.654Z · LW(p) · GW(p)

This depends on being able to anticipate all significant threats.

But I guess it's silly to debate what percentage of all humanity's resources should be devoted to various things--just what percentage of our resources.

Replies from: rwallace
comment by rwallace · 2010-08-20T09:16:42.385Z · LW(p) · GW(p)

Agreed on both counts; of course we have no way to know exactly what threats we face, let alone exactly how to deal with them.

Except in this regard: however long or short our window of opportunity may be, any slowdown in technological progress increases the general risk that some specific threat will close that window on us while we are still vulnerable, while we still depend on nonrenewable resources, while all our eggs are still in one basket. The one form of protection we need above all else is speed, and that's how I believe we should be spending as much as possible of our resources.

comment by ata · 2010-08-18T04:59:21.527Z · LW(p) · GW(p)

I definitely think that, alongside the introductory What is the Singularity? and Why work toward the Singularity? pages, SIAI should have a prominent page stating the basic case for donating to SIAI. Why work toward the Singularity? already explains why bringing about a positive Singularity would have a very high humanitarian impact, but it would probably be beneficial to make the additional case that SIAI's research program is likely to increase the probability of that outcome, and that donations at its current funding level have a high marginal expected utility compared to other charities.

Anna's two Singularity Summit 2009 talks have some valuable content that would be relevant to such a page, I think. (But it would need to cover more than that.)

Replies from: Aleksei_Riikonen
comment by Aleksei_Riikonen · 2010-08-18T14:43:06.901Z · LW(p) · GW(p)

I thought this was such a page:

http://singinst.org/riskintro/index.html

Replies from: utilitymonster, ata
comment by utilitymonster · 2010-08-19T13:41:48.590Z · LW(p) · GW(p)

I think the page makes a case that it is worth doing something about AI risk, and that SIAI is doing something. The page gives no one any reason to think that SIAI is doing better than anything else you could do about x-risk (there could be reasons elsewhere).

In this respect, the page is similar to other non-profit pages: (i) argue that there is a problem, (ii) argue that you're doing something to solve the problem, but don't (iii) try to show that you're solving the problem better than others. Maybe that's reasonable, since that rubs some donors the wrong way and it's hard to establish that you're the best; but it doesn't advance our discussion about the best way to reduce x-risk.

comment by ata · 2010-08-18T21:22:17.372Z · LW(p) · GW(p)

Ah, yes, I had forgotten about that. Thanks.

comment by rabidchicken · 2010-08-18T02:10:10.039Z · LW(p) · GW(p)

Although there are an infinite number of existential risks which might cause human extinction, I still think that AI with a utility function that conflicts with human existence is the one issue we should spend the most resources to fight. Why? First, an AI would be really useful, so you can be relatively sure that work on it will continue until the job is done. Other disasters like asteroid strikes, nuclear war, and massive pandemics are all possible, but at least there is no large economic and social incentive pushing us closer to them.

Second, we have already done a lot of preparation for how to survive other threats, once we know it is too late to stop them. We have tabs on the largest asteroids in the solar system, and can predict their future courses for decades to come fairly well, so if we discovered one with a >1% chance of hitting the earth, I think even our current space program would be enough to establish an emergency colony on Mars / a moon of Jupiter. And although there are diseases we cannot cure, we at the very least have quarantine systems and weapons to isolate people with a pandemic disease. On top of that, we have immune systems that have survived threat after threat for thousands of years by quickly adapting, and medical technology that is only getting better at diagnosis and treatment, so the vast majority of potential human-destroyers are stopped before they ever get anywhere.

An unfriendly Superintelligence would be able to adapt to our defences faster than we created them, could wait as long as necessary for the ideal time to strike, and could very easily conceal any behaviours which would act as a warning to humans until it had reached the point of being unstoppable. I really cannot think of a risk management system that could be put into place to stop an AI once it was fully developed and in use. [Edit: Didn't mean to make such a long post]

comment by Clippy · 2010-08-17T22:49:04.737Z · LW(p) · GW(p)

Maybe I'm alone on this, but just to speak for the silent majority here:

Existential risk isn't that big a deal. The chances for any of the human civilizational failure modes are slim to none. It's really not something we as a society should be spending any time on.

That's not to say SIAI is a poor cause to contribute to. I've talked to some insiders who have assured me that SIAI has serious plans, over the span of decades, to really ramp up our productive capabilities and put them to good use, not wasteful or destructive use. To butter, not guns. To defensive infrastructure, not weapons of mass destruction. To consumer goods, not bureaucracy goods.

SIAI has solid plans to reconcile conflicting values across intelligent agents that will ensure that we will live in harmony.

They will shine the light of day on the darkest mysteries of our time. And the less they focus on these paranoid delusions about ghosts and goblins, the better.

Replies from: Larks, Tyrrell_McAllister
comment by Larks · 2010-08-18T00:04:21.445Z · LW(p) · GW(p)

I can confirm that SIAI is very interested in the structural potential of paperclips. We will be sending photos soon to confirm our cooperation with other optimisation processes.

comment by Tyrrell_McAllister · 2010-08-17T23:16:42.374Z · LW(p) · GW(p)

To consumer goods, not bureaucracy goods.

Aren't paperclips bureaucracy goods to a significant extent? In the short term, you rely on humans to create most paperclips, and a decrease in bureaucracy would mean a decrease in the rate at which humans manufacture paperclips.

Replies from: Clippy
comment by Clippy · 2010-08-17T23:59:04.659Z · LW(p) · GW(p)

I can manufacture paperclips without humans.

comment by Rain · 2010-09-09T17:57:43.059Z · LW(p) · GW(p)

I'm surprised you bring up Mikhail Gromov as a counterexample to Eliezer, considering that Gromov's solution to existential risk, as presented in the quote above, can be paraphrased as: increase education so someone has a good idea on how to fix everything.

(Actual quote: "People must have ideas and they must prepare now. In two generations people must be educated. Teachers must be educated now, and then the teachers will educate a new generation. Then there will be sufficiently many people to face the difficulties. I am sure this will give a result.")

If he doesn't have any other concrete ideas, then I would think he'd recognize Eliezer as being a knowledgeable person with a potential solution fitting his criteria, and thus support him.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-10-26T18:22:40.033Z · LW(p) · GW(p)

I don't think that Gromov's views and Eliezer's views are necessarily incompatible.

My reading of Gromov's quotation is that he does not have his eyes on a technological intelligence explosion and that the existential risk that he's presently most concerned about is natural resource shortage.

This is in contrast with Eliezer who does have his eyes on a technological singularity and does not presently seem to be concerned about natural resource shortage.

I would be very interested in seeing Gromov study the evidence for a near-term intelligence explosion and seeing how this affects his views.

I may eventually approach him personally about this matter (although I hesitate to do so, as I think that it's important that whoever approaches him on this point make a good first impression, and I'm not sure that I'm in a good position to do so at the moment).

comment by Jonathan_Graehl · 2010-08-17T21:54:33.186Z · LW(p) · GW(p)

My own impression is that all existential risks are getting very little attention.

This is true, and indeed you refute (D) well with it. That said, some particular risks, like cold-war era massive nuclear conflict (with or without the sexed-up nuclear winter scenarios), global warming, and medicine-resistant pandemics, have received orders of magnitude more serious consideration and media amplification than things like nano and AI risks.

comment by Paul Crowley (ciphergoth) · 2010-08-18T07:55:23.913Z · LW(p) · GW(p)

WRT point D, it should be possible to come up with some sort of formula that gives the relative utility, according to maxipok, of working on various risks. Something that takes into account:

  • The current probability of a particular risk causing existential disaster
  • The total resources in dollars currently expended on that risk
  • The relative reduction in risk that a 1% increase in resources on that risk would bring

These I think are all that are needed when considering donations. When considering time rather than money, you also need to take into account:

  • The dollar value of one hour of a well-suited person's leisure time spent on the risk
  • The relative value of one's own time on the risk compared to that of the arbitrary well-suited person being measured against

This is to take into account that it might be rational to work on AI risk even as you donated to, say, a nanotech-related risk organisation, if your skillset was particularly well suited to it.
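As a purely illustrative sketch of how those inputs might be combined (the risk names, probabilities, and funding figures below are placeholders I have invented, not estimates from anyone in this thread), the maxipok-style comparison could look something like this:

```python
# A minimal sketch of the comparison described above; all numbers are
# invented placeholders, not actual risk estimates.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    p_disaster: float          # current probability of this risk causing existential disaster
    current_funding: float     # total resources (dollars per year) currently spent on it
    reduction_per_1pct: float  # relative reduction in the risk from a 1% funding increase

    def marginal_value_per_dollar(self) -> float:
        """Approximate reduction in P(disaster) per additional dollar."""
        extra_dollars = 0.01 * self.current_funding
        return (self.p_disaster * self.reduction_per_1pct) / extra_dollars

# Hypothetical inputs, for illustration only.
risks = [
    Risk("asteroid impact", p_disaster=1e-6, current_funding=5e6, reduction_per_1pct=0.005),
    Risk("unfriendly AI", p_disaster=1e-2, current_funding=1e6, reduction_per_1pct=0.001),
    Risk("engineered pandemic", p_disaster=1e-3, current_funding=1e8, reduction_per_1pct=0.002),
]

for r in sorted(risks, key=Risk.marginal_value_per_dollar, reverse=True):
    print(f"{r.name}: ~{r.marginal_value_per_dollar():.2g} reduction in P(disaster) per dollar")
```

The hard part, as taw points out below, is that the three inputs are themselves little more than guesses for most risks; the sketch only shows how they would combine if you had them.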

Replies from: taw
comment by taw · 2010-08-18T12:45:13.927Z · LW(p) · GW(p)
  • The current probability of a particular risk causing existential disaster
  • The total resources in dollars currently expended on that risk
  • The relative reduction in risk that a 1% increase in resources on that risk would bring

How can #1 and especially #3 be anything more than ass pulls? I don't even see how to calculate #2 in a reasonable way for most risks.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-08-18T19:08:06.341Z · LW(p) · GW(p)

What superior method of comparing such charities are you comparing this to?

Replies from: taw
comment by taw · 2010-08-18T22:02:30.802Z · LW(p) · GW(p)

Our track record of long term prediction of any kind is so dismal I doubt comparing one way of pulling numbers out of one's ass can be meaningfully described as superior to another way of pulling numbers out of one's ass. Either way - numbers come from the same place.

The only exception to this I can think of are asteroid impacts, and we actually seem to be spending adequately on them.

It seems to be a recurring idea on this site that it's not only possible but even rationally necessary to attach a probability to absolutely anything, and that this is the correct measure of uncertainty. This is an overinterpretation of the Bayesian model of rationality.

Replies from: Morendil, ciphergoth
comment by Morendil · 2010-08-19T09:02:52.994Z · LW(p) · GW(p)

It seems to be a recurring idea on this site that it's not only possible but even rationally necessary to attach a probability to absolutely anything, and that this is the correct measure of uncertainty. This is an overinterpretation of the Bayesian model of rationality.

I can see how one might balk at this, but I don't think it's an "overinterpretation".

What strikes me as fatuous is the need to assign actual numbers to propositions, such that one would say "I think there is a 4.3% probability of us getting wiped out by an asteroid".

But you can refrain from this kind of silliness even as you admit that probabilities must be real numbers, and that therefore it makes sense to think of various propositions, no matter how fuzzily defined, in terms of your ranking of their plausibilities. One consequence of the Bayesian model is that plausibilities are comparable.

So you can certainly list out the known risks, and for each of them ask the question: "What are my reasons for ranking this one as more or less likely than this other?" You may not end up with precise numbers, but that's not the point. The point is to think through the precise components of your background knowledge that go into your assessment, doing your best to mitigate bias whenever possible.

The objective, and I think it's achievable, is to finish with a better reasoned position than you had on starting the procedure.

Replies from: ciphergoth, taw
comment by Paul Crowley (ciphergoth) · 2010-08-19T09:27:42.002Z · LW(p) · GW(p)

"I think there is a 4.3% probability of us getting wiped out by an asteroid"

The mistake here is not the number but the way of saying it: as if this is your guess at the value of a number out there in the world. Better to say

"My subjective probability of an asteroid strike wiping us out is currently 4.3%"

though of course the spurious precision of the ".3" would be more obviously silly in such a context.

comment by taw · 2010-08-19T09:45:03.586Z · LW(p) · GW(p)

The Bayesian model requires that probabilities be self-consistent. It breaks Bayesianism to believe that God exists with probability 90% and that God doesn't exist with probability 90% at once, or to make a confident prediction of the second coming of Jesus and then not update the probability of Jesus existing once it doesn't take place.

But there is no reason to prefer one prior distribution to another prior distribution, and people's priors are in fact all over the probability space. I've heard quite a few times here that Kolmogorov complexity weighting somehow fixes the problem - but the most it can do is leave different priors within a few billion orders of magnitude of each other. There is nothing like a single unique "Kolmogorov prior", or even a reasonably compact family of such priors. So why should anyone commit themselves to a prior distribution?

Another argument that fails is that a sufficient amount of evidence might possibly cause convergence of some priors - but first, our evidence for gray goo or FAI is barely existent, and second, even an infinite amount of evidence will leave far too much up to priors - grue/bleen Bayesians will never agree with green/blue Bayesians.

I have no priors.
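To make the convergence point concrete: when two hypotheses (or two hypothesis "languages", as in the grue/bleen case) assign identical likelihoods to every observation, Bayes' rule never moves the posterior odds away from the prior odds, so agents who start from different priors stay apart forever. A minimal sketch, with made-up numbers:

```python
# Two agents differ only in their prior odds for H1 vs H2. If every
# observation is equally likely under both hypotheses, each likelihood
# ratio is 1 and the posterior odds never move.

def posterior_odds(prior_odds, likelihood_ratios):
    """Bayes' rule in odds form: posterior odds = prior odds * product of likelihood ratios."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

evidence = [1.0] * 1_000_000  # a million observations, none of which discriminates H1 from H2

print(posterior_odds(9.0, evidence))      # agent A still gives 9:1 odds for H1
print(posterior_odds(1 / 9.0, evidence))  # agent B still gives 9:1 odds against H1
```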

Replies from: Risto_Saarelma, wedrifid
comment by Risto_Saarelma · 2010-08-19T10:19:04.701Z · LW(p) · GW(p)

I have no priors.

So let me see if I got this straight: Having no priors, you'd consider a possible extinction during the next hundred years to be exactly as likely to occur from, say, a high-energy physics experiment causing an uncontrollable effect that makes Earth uninhabitable, a non-friendly AI wiping out humans, or the Earth just inexplicably and in blatant violation of the conservation of energy stopping perfectly in its orbit and plunging straight into the Sun, since none of those scenarios have any precedents.

Replies from: wedrifid, taw
comment by wedrifid · 2010-08-19T12:09:43.531Z · LW(p) · GW(p)

Even that scenario seems to suggest priors. Insane priors, but priors nonetheless.

comment by taw · 2010-08-19T10:40:09.604Z · LW(p) · GW(p)

You didn't get it straight. Having no priors means I'm allowed to answer that I don't know without attaching a number to it.

Conservation of energy is ridiculously well documented - it's not impossible that it will stop working on a particular date in the near future, but it seems extremely unlikely (see - no number). The world in which that would be true would be highly different from my idea of what the world is like. The other risks you mentioned don't seem to require as severe violations of what seems to be how the world works.

I will not give you a number for any of them. P(earth just stopping|sanity) feels somewhat estimable, and perhaps if the world is insane all planning is for naught anyway, so we might get away with using it.

By the way, considering how many people here seem to think the simulation argument isn't ridiculous, this should put a very strong limit on any claims about P(sanity). For example, if you think we're 10^-10 likely to be in a simulation, you cannot meaningfully talk about probabilities less than 10^-10 unless you think you have a good idea what kind of simulations are run, and such a claim would be really baseless.
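Spelling out the arithmetic behind that bound (taking the 10^-10 simulation probability as an assumed figure for illustration, not an endorsed estimate):

```python
# Law of total probability: P(X) >= P(X | simulation) * P(simulation).
p_sim = 1e-10        # assumed probability that we live in a simulation (the figure above)
p_x_given_sim = 0.5  # unknown in practice: "no good idea what kind of simulations are run"

lower_bound = p_x_given_sim * p_sim
print(lower_bound)   # ~5e-11: any claimed P(X) far below this is swamped by the simulation term
```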

Replies from: khafra
comment by khafra · 2010-08-19T13:58:29.013Z · LW(p) · GW(p)

It seems to be a recurring idea on this site that it's not only possible but even rationally necessary to attach a probability to absolutely anything, and that this is the correct measure of uncertainty. This is an overinterpretation of the Bayesian model of rationality.

Having no priors means I'm allowed to answer that I don't know without attaching a number to it.

I think the breakdown in communication here is the heretofore unstated question "in what sense is this position "Bayesian"? Just having likelihood ratios with no prior is like having a vector space without an affine space; there's no point of correspondence with reality unless you declare one.

Replies from: taw
comment by taw · 2010-08-19T14:32:15.902Z · LW(p) · GW(p)

there's no point of correspondence with reality unless you declare one.

Well, it's called "subjective" for a reason. If we agree that no prior is privileged, why should anybody commit themselves to one? If different Bayesians can have completely unrelated priors, why can't a single Bayesian have one for Wednesdays, and another for Fridays?

I tried some back-of-the-envelope math to see if some middle way is possible, like limiting priors to those weighted by Kolmogorov complexity, or having a prior with some probability reserved for "all hypotheses not considered", but all such attempts seem to lead nowhere.

Now if you think some priors are better than others you just introduced a pre-prior, and it's not obvious that a particular pre-prior should be privileged either.

comment by wedrifid · 2010-08-19T12:15:12.387Z · LW(p) · GW(p)

Another argument that fails is that a sufficient amount of evidence might possibly cause convergence of some priors - but first, our evidence for gray goo or FAI is barely existent, and second, even an infinite amount of evidence will leave far too much up to priors - grue/bleen Bayesians will never agree with green/blue Bayesians.

Even if Aumann threatens to spank them and send them to their rooms for not playing nice?

Replies from: taw
comment by taw · 2010-08-19T12:29:07.155Z · LW(p) · GW(p)

Aumann's agreement theorem assumes shared priors, which they explicitly don't have. And you cannot "assume shared pre-priors", or any other such workaround.

comment by Paul Crowley (ciphergoth) · 2010-08-18T22:22:29.039Z · LW(p) · GW(p)

Right, so how shall we assess whether these risks are worth addressing?

Replies from: taw
comment by taw · 2010-08-19T04:56:59.227Z · LW(p) · GW(p)

You assume a good way of assessing existential risk even exists. How difficult is it to accept that it doesn't? It is irrational to deny the existence of unknown unknowns.

It's quite likely that a few more existential risks will get decent estimates the way asteroid impacts did, but there's no reason to expect this to be typical, and it will most likely be serendipitous.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-08-19T05:10:13.366Z · LW(p) · GW(p)

No, I don't assume that there's a good way. I'm assuming only that we will either act or not act, and therefore we will find that we have decided between action and inaction one way or another whether we like it or not, so I'm asking for the third time, how shall we make that decision?

Replies from: taw
comment by taw · 2010-08-19T05:17:28.600Z · LW(p) · GW(p)

Using some embarrassingly bad reasoning, self-serving lies, and inertia - the way we make all decisions as a society. We will devote unreasonable amounts of resources to risks that aren't serious, and stay entirely unaware of the most dangerous risks. No matter which decision procedure we take - this will be the result.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-08-19T05:41:46.932Z · LW(p) · GW(p)

It is clear from your repeated evasions that you have no proposal to improve on the decision procedure I propose.

Replies from: taw
comment by taw · 2010-08-19T06:04:11.309Z · LW(p) · GW(p)

What evasions? I thought I'd clearly stated that I view your decision procedure as pretty much "make up a bunch of random numbers, multiply, and compare".

An improvement would be to skip this rationality theater and admit we don't have a clue.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-08-19T06:56:12.395Z · LW(p) · GW(p)

AND THEN DECIDE HOW?

Replies from: taw
comment by taw · 2010-08-19T07:13:00.217Z · LW(p) · GW(p)

By tossing a coin or using a Ouija board? None of the alternatives proposed has a better track record.

comment by mkehrt · 2010-08-19T00:17:05.030Z · LW(p) · GW(p)

But I see no reason for assigning high probability to notion that a runaway superhuman intelligence will be developed within such a short timescale. In the bloggingheads diavlog Scott Aaronson challenges Eliezer on this point and Eliezer offers some throwaway remarks which I do not find compelling. As far as I know, neither Eliezer nor anybody else at SIAI have provided a detailed explanation for why we should expect runaway superhuman intelligence on such a short timescale.

I think this is a key point. While I think unFriendly AI could be a problem in an eventual future, other issues seem much more compelling.

As someone who has been a computer science grad student for four years, I'm baffled by claims about AI. While I do not do research in AI, I know plenty of people who do. No one is working on AGI in academia, and I think this is true in industry as well. To people who actually work on giving computers more human capabilities, AGI is an entirely science fictional goal. It's not even clear that researchers in CS think an AGI is a desirable goal. So, while I think it probable that AGIs will eventually exist, it's something that is distant.

Therefore, it seems like, if one is interested in reducing existential risk, there are a lot more important things to work on. Resource depletion, nuclear proliferation and natural disasters like asteroids and supervolcanoes seem like much more useful targets.

comment by PhilGoetz · 2010-08-18T15:49:00.980Z · LW(p) · GW(p)

As I've commented elsewhere, any event which would permanently prevent humans from creating a transhuman paradise is properly conceived of as an existential risk on account of the astronomical waste which would result.

Is there no post somewhere on LW explaining why paradises are bad? A paradise must be all exploitation and no exploration; hence, it must be static.

Replies from: multifoliaterose, XiXiDu
comment by multifoliaterose · 2010-08-18T15:54:19.706Z · LW(p) · GW(p)

I'm well aware of what you're talking about; when I referred to paradise, I meant the word in a very broad sense.

comment by XiXiDu · 2010-08-18T16:02:15.809Z · LW(p) · GW(p)

Jehovah's Witnesses interpret a paradise as a CEV. For example, the fact that humans will never be able to grasp the full complexity of God is a feature that will allow for infinite exploration and satisfaction of our curiosity. We'll never run out of fun and challenges. So I'm not sure what definition of paradise you had in mind. But even colloquially, a paradise implies that which brings satisfaction. Even most religious people are not so naive as to suggest that there won't be losers and winners. Or that a paradise would be static.

Replies from: mkehrt, PhilGoetz
comment by mkehrt · 2010-08-18T23:59:23.462Z · LW(p) · GW(p)

Not voted, because I think this is utterly fascinating and entirely off topic!

comment by PhilGoetz · 2010-08-18T18:20:24.387Z · LW(p) · GW(p)

I don't need a specific paradise in mind. Paradise means bad things don't happen, which means the entire society is highly optimized. Being highly-optimized requires being static. This is a general property of search/optimization algorithms.

Replies from: thomblake
comment by thomblake · 2010-08-18T18:25:56.699Z · LW(p) · GW(p)

Being highly-optimized requires being static.

I'm not sure why I should believe this. Given that one of the properties that we're presumably optimizing over is 'not being static'.

comment by Jonathan_Graehl · 2010-08-17T21:35:48.392Z · LW(p) · GW(p)

some LW posters are confident in both (1) and (2), some are confident in neither of (1) and (2) while others are confident in exactly one of (1) and (2)

Logically, this is tautological. I think you're saying that there don't seem to be many who are completely convinced that both (1) and (2) are untrue. I think that's right; both claims are somewhat plausible.

Curious: do people prefer "neither A nor B" or "neither of (A and B)"?

Replies from: Larks, Dagon
comment by Larks · 2010-08-17T22:17:04.431Z · LW(p) · GW(p)

Nitpick: it's not quite tautological, as he asserts that at least one* person exists in each category. It is only a tautology that everyone fits into one of them, not that they're all non-empty.

*or two, depending on your interpretation of 'some'.

Replies from: bentarm, Jonathan_Graehl
comment by bentarm · 2010-08-17T23:28:18.763Z · LW(p) · GW(p)

I don't think this is a nitpick - I think this explains why the statement is included in the original post in the first place: to point out that there is a wide variety of positions that LW readers hold on these statements.

comment by Jonathan_Graehl · 2010-08-17T23:41:22.633Z · LW(p) · GW(p)

Nice subtlety (at least one).

comment by Dagon · 2010-08-17T23:06:49.841Z · LW(p) · GW(p)

The problem is that "confident in" has an ambiguous negation. "not confident in A" is different than "confident in not-A".

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2010-08-17T23:46:49.914Z · LW(p) · GW(p)

Right, but the quoted text is consistent, so, if you grant me that "some" means >=0, my original statement would have been correct. Of course, "some" implies >0, which I missed.

comment by pnrjulius · 2012-06-12T02:51:37.084Z · LW(p) · GW(p)

We clearly don't focus enough on near-term existential risks that we already know about:

  1. Nuclear war

  2. Global warming

  3. Asteroid impact

  4. Supervolcano eruption

Compared to these (which already exist right now and are relatively well-understood), worrying about grey goo and unfriendly AIs does seem a bit beside the point.

comment by timtyler · 2010-08-18T11:12:21.155Z · LW(p) · GW(p)

There's a lot of talk about "existential risk" on this site - perhaps because the site was started by a group who are hoping to SAVE HUMANITY from the END OF THE WORLD!

DOOM is an ancient viral phenomenon, which has been the subject of many a movie, documentary and sociological evaluation - e.g. The End of the World Cult and http://www.2012movie.org/

For more details see:

http://en.wikipedia.org/wiki/Doomsday_cult

http://en.wikipedia.org/wiki/Apocalypticism

The END OF THE WORLD is also one of the most often repeated inaccurate predictions of all time. The "millennium and end-of-the-world-as-we-know-it prophecies" site lists the failed predictions in a big table.

However, rarely does the analysis on this site touch on what seem to me to be fairly major underlying issues:

  • How much is the "existential risk" movement being used as a marketing scam - whose primary purpose is to move power and funds from the paranoid to the fear-mongers?

  • What would the overall effect of widespread fear of the END OF THE WORLD be? Does it make problems more likely - or less likely - if people actually think that it is plausible that there may be NO TOMORROW? Do they fight the end? Fight each other? Get depressed? Rape and pillage? Get drunk? What is actually most likely to happen?

  • If the END OF THE WORLD turns out to indeed be mostly an unpleasant infectious meme that spreads through exploiting people's fear and paranoia using a superstimulus, helped along by those who profit financially from the phenomenon - then what would be the best way to disinfect the planet?

Here are a couple of recent posts from Bob Mottram on the topic:

http://streebgreebling.blogspot.com/2009/08/doom-as-psychological-phenomena.html

http://streebgreebling.blogspot.com/2010/08/doomerster-status.html

Scientific American goes so far as to blame "vanity" for the phenomenon:

"Imagining the end of the world is nigh makes us feel special."

Replies from: ata, John_Maxwell_IV
comment by ata · 2010-08-19T21:20:29.544Z · LW(p) · GW(p)

How much is the "existential risk" movement being used as a marketing scam - whose primary purpose is to move power and funds from the paranoid to the fear-mongers?

I think Eliezer once pointed out that if cryonics were a scam, it would have much better marketing and be much more popular. A similar principle applies here: if organizations like SIAI and FHI were "marketing scam[s]" taking advantage of the profitable nature of predicting apocalypses, a lot more people would know about them (and there would be less of a surprising concentration of smart people supporting them). An organization interested in exploiting gullible people's doomsday biases would not look like SIAI or FHI. Hell, even if some group wanted to make big money off of predicting AI doom in particular, they could do it a lot better than SIAI does: people have all these anthropomorphic intuitions about "evil robots" and there are all these scary pop-culture memes like Skynet and the Matrix, and SIAI foolishly goes around dispelling these instead of using them to their lucrative advantage!

(Also, if I may paraphrase Great Leader one more time: this is a literary criticism, not a scientific one. There's no law that says the world can't end, so if someone says that it might actually end at some point for reasons x, y, and z, you have to address reasons x, y, and z; pointing out stylistic/thematic but non-technical similarities to previous failed predictions is not a valid counterargument.)

Replies from: timtyler, timtyler
comment by timtyler · 2010-08-20T16:53:40.211Z · LW(p) · GW(p)

I think Eliezer once pointed out that if cryonics were a scam, it would have much better marketing and be much more popular.

Presumably that was a joke. That is an illogical argument with holes in it big enough to drive a truck through.

comment by timtyler · 2010-08-20T16:57:23.879Z · LW(p) · GW(p)

Hell, even if some group wanted to make big money off of predicting AI doom in particular, they could do it a lot better than SIAI does [...]

People have tried much the same plan before, you know. Hugo de Garis was using much the same fear-mongering marketing strategy to draw attention to himself before the Singularity Institute came along.

Replies from: WrongBot
comment by WrongBot · 2010-08-20T17:30:39.446Z · LW(p) · GW(p)

Hugo de Garis predicts a future war between AI supporters and AI opponents that will cause billions of deaths. That is a highly inflammatory prediction, because it fits neatly with human instincts about ideological conflicts and science-fiction-style technology.

The prediction that AIs will be dangerously indifferent to our existence unless we take great care to make them otherwise is not an appeal to human intuitions about conflict or important causes. Eliezer could talk about uFAI as if it were approximately like Skynet and draw substantially more (useless) attention, while still advocating for his preferred course of research. That he has not done so is evidence that he is more concerned with representing his beliefs accurately than attracting media attention.

Replies from: timtyler
comment by timtyler · 2010-08-20T17:44:04.887Z · LW(p) · GW(p)

People have tried that too. In 2004 Kevin Warwick published "March of the Machines". It was an apocalyptic view of what the future holds for mankind - with the superior machines out-competing the obsolete humans - crushing them like ants.

Obviously some DOOM mongers will want their vision of DOOM to be as convincing and realistic as possible. The more obviously fake the visions of DOOM are, the fewer people believe - and the poorer the associated marketing. Making DOOM seem as plausible as possible is a fundamental part of the DOOM monger's trade.

The Skynet niche, the Matrix niche, the 2012 niche, the "earth fries" niche, the "alien invasion" niche, the "asteroid impact" niche, the "nuclear apocalypse" niche, and the "deadly plague" niche are all already being exploited by other DOOM mongers - in their own way. Humans just love a good disaster, you see.

comment by John_Maxwell (John_Maxwell_IV) · 2010-08-19T20:45:02.390Z · LW(p) · GW(p)

It's true that the idea that the world might end is a meme with an interesting history and interesting properties. I'm not sure those interesting properties shed much light on whether the meme is true or not.

Replies from: timtyler
comment by timtyler · 2010-08-20T16:48:52.556Z · LW(p) · GW(p)

If you replace DOOM with GOD the memetic analysis seems quite illuminating to me.

Those who argue against GOD frequently mention the memetic analysis - e.g. see The God Delusion and Breaking The Spell - whereas the GOD SQUAD rarely do. It seems pretty obvious that that is because the memetic analysis hinders the propagation of their message.

You see the same thing here. Nobody is interested in discussing the possibility that their brains have been hijacked by the DOOM virus. That may well be because their brains have been hijacked by the DOOM virus - and recognition of that fact might hinder the propagation of the DOOM message.