Posts

Breaking the vicious cycle 2014-11-23T18:25:21.644Z
What do you mean by Pascal's mugging? 2014-11-20T16:38:46.970Z
Computer Science and Programming: Links and Resources 2012-05-29T13:17:53.156Z
Question about brains and big numbers 2012-04-17T11:57:22.405Z
Skoll World Forum: Catastrophic Risk and Threats to the Global Commons 2012-04-05T09:44:06.382Z
A Primer On Risks From AI 2012-03-24T14:32:42.166Z
Reply to Yvain on 'The Futility of Intelligence' 2012-03-17T13:28:34.138Z
What are YOU doing against risks from AI? 2012-03-17T11:56:44.852Z
The Futility of Intelligence 2012-03-15T14:25:24.062Z
Risks from AI and Charitable Giving 2012-03-13T13:54:36.349Z
[Link] Better results by changing Bayes’ theorem 2012-03-09T19:38:58.325Z
How does real world expected utility maximization work? 2012-03-09T11:20:07.453Z
[Link] Personality change key to improving wellbeing 2012-03-06T11:30:16.116Z
[Link] The emotional system (aka Type 1 thinking) might excel at complex decisions 2012-03-03T19:05:59.564Z
Q&A with experts on risks from AI #4 2012-01-19T16:29:53.996Z
Q&A with Abram Demski on risks from AI 2012-01-17T09:43:10.805Z
Q&A with experts on risks from AI #3 2012-01-12T10:45:13.633Z
[Template] Questions regarding possible risks from artificial intelligence 2012-01-10T11:59:07.288Z
Q&A with experts on risks from AI #2 2012-01-09T19:40:26.776Z
Q&A with experts on risks from AI #1 2012-01-08T11:46:15.378Z
More intuitive explanations! 2012-01-06T18:10:39.264Z
Explained: Gödel's theorem and the Banach-Tarski Paradox 2012-01-06T17:23:02.978Z
Intuition and Mathematics 2011-12-31T18:58:20.004Z
Should we discount extraordinary implications? 2011-12-29T14:51:27.834Z
Q&A with Michael Littman on risks from AI 2011-12-19T09:51:15.496Z
Question about timeless physics 2011-12-16T13:09:54.128Z
Q&A with Richard Carrier on risks from AI 2011-12-13T10:00:57.425Z
Objections to Coherent Extrapolated Volition 2011-11-22T10:32:13.175Z
OPERA Confirms: Neutrinos Travel Faster Than Light 2011-11-18T09:58:27.327Z
Why an Intelligence Explosion might be a Low-Priority Global Risk 2011-11-14T11:40:38.917Z
Is an Intelligence Explosion a Disjunctive or Conjunctive Event? 2011-11-14T11:35:40.518Z
No Basic AI Drives 2011-11-10T13:27:34.207Z
Singularity Institute mentioned on Franco-German TV 2011-11-07T14:14:15.721Z
Epistemic Utility Arguments for Probabilism [Link] 2011-09-26T11:10:01.558Z
What if we make better decisions when we trust our gut instincts? [Link] 2011-09-25T12:22:45.033Z
Rough calculations: Fermi and the art of guessing 2011-09-08T10:39:34.009Z
Video: You Are Not So Smart 2011-09-08T09:43:51.104Z
Is That Your True Rejection? by Eliezer Yudkowsky @ Cato Unbound 2011-09-07T18:27:42.794Z
Make evidence charts, not review papers? [Link] 2011-09-04T13:26:57.261Z
AI-Box Experiment - The Acausal Trade Argument 2011-07-08T09:18:39.846Z
What do the patterns of good and bad behaviours in an online world reveal about the nature of humanity? 2011-07-06T17:36:50.622Z
Richard Dawkins on vivisection: "But can they suffer?" 2011-07-04T16:56:20.407Z
Hanson Debating Yudkowsky, Jun 2011 2011-07-03T16:59:29.894Z
People neglect small probability events 2011-07-02T10:54:31.526Z
Khan Academy: Introduction to programming and computer science 2011-07-02T09:44:00.517Z
Health Inflation, Wealth Inflation, and the Discounting of Human Life 2011-06-26T10:31:49.345Z
Entangled with Reality: The Probabilistic Inferential Learning Model (Link) 2011-06-25T13:27:29.849Z
Music: The 21st Century Monads 2011-06-25T10:38:14.023Z
SIAI’s Short-Term Research Program 2011-06-24T11:43:04.500Z
existential-risk.org by Nick Bostrom 2011-06-20T17:59:27.958Z

Comments

Comment by XiXiDu on Breaking the vicious cycle · 2014-11-27T10:51:41.608Z · LW · GW

I don't have time to evaluate what you did, so I'll take this as a possible earnest of a good-faith attempt at something, and not speak ill of you until I get some other piece of positive evidence that something has gone wrong.

This will be my last comment and I am going to log out after it. If you or MIRI change your mind, or discover any evidence "that something has gone wrong", please let me know by email or via a private message on e.g. Facebook or some other social network that's available at that point in time.

A header statement only on relevant posts seems fine by me, if you have the time to add it to items individually.

Thanks.

I noticed that there is still a post mentioning MIRI. It is not at all judgemental or negative; it merely highlights a video that I captured of a media appearance of MIRI on German/French TV. I do not consider this sort of post relevant for either deletion or any sort of header.

Then there is also an interview with Dr. Laurent Orseau about something you wrote. I added the following header to this post:

Note: I might have misquoted, misrepresented, or otherwise misunderstood what Eliezer Yudkowsky wrote. If this is the case I apologize for it. I urge you to read the full context of the quote.

Comment by XiXiDu on Breaking the vicious cycle · 2014-11-26T12:23:27.596Z · LW · GW

Since you have not yet replied to my other comment, here is what I have done so far:

(1) I removed many more posts and edited others in such a way that no mention of you, MIRI or LW can be found anymore (except an occasional link to a LW post).[1]

(2) I slightly changed your given disclaimer and added it to my about page:

Note that I wrote some posts, posts that could previously be found on this blog, during a dark period of my life. Eliezer Yudkowsky is a decent and honest person with no ill intent, and anybody can be made to look terrible by selectively collecting all of his quotes one-sidedly as I did. I regret those posts, and leave this note here as an archive to that regret.

The reason for this alteration is that my blog has been around since 2001, and for most of the time it did not contain any mention of you, MIRI, or LW. For a few years it even contained positive referrals to you and MIRI. This can all be checked by looking at e.g. archive.org for domains such as xixidu.com. I estimate that much less than 1% of all content over those years has been related to you or MIRI, and even less was negative.

But my previous comment, in which I asked you to consider that your suggested header would look really weird and confusing if added to completely unrelated posts, still stands. If that's what you desire, let me know. But I hope you are satisfied with the actions I have taken so far.

[1] If I missed something, let me know.

Comment by XiXiDu on Breaking the vicious cycle · 2014-11-25T11:24:15.527Z · LW · GW

I apologize for any possible misunderstanding in this comment. My reading comprehension is often bad.

I know that in the original post I offered to add a statement of your choice to any of my posts. I stand by this, although I would phrase it differently now. I would like to ask you to consider that there are also personal posts which are completely unrelated to you, MIRI, or LW, such as photography posts and math posts. It would be really weird and confusing to readers to add your suggested header to those posts. If that is what you want, I will do it.

You also mention that I could delete my site (I already deleted a bunch of posts related to you and MIRI). I am not going to do that, as it is my homepage and contains completely unrelated material. I am sorry if I possibly gave a false impression here.

You further talk about withdrawing entirely from all related online discussions. I am willing to entirely stop adding anything negative to any related discussion. But I will still use social media to link to material produced by MIRI or LW (such as MIRI blog posts) and to professional third-party critiques (such as a possible evaluation of MIRI by GiveWell), without adding my own commentary.

I stand by what I wrote above, irrespective of your future actions. But I would be pleased if you maintained a charitable portrayal of me. I have no problem if you write in the future that my arguments are wrong, that I have been offensive, or that I only have an average IQ, etc. But I would be pleased if you abstained from portraying me as an evil person or as someone who deliberately lies. Stating that I misrepresented you is fine. But suggesting that I am a malicious troll who hates you is something I strongly disagree with.

As evidence that I mean what I write, I have now deleted my recent comments on reddit.

Comment by XiXiDu on Breaking the vicious cycle · 2014-11-25T09:10:58.166Z · LW · GW

I already deleted the 'mockery index' (which for some months had included a disclaimer stating that I distance myself from those outsourced posts). I also deleted the second post you mentioned.

I changed the title of the brainwash post to 'The Singularity Institute: How They Convince You' and added the following disclaimer suggested by user Anatoly Vorobey:

I wrote the post below during years in which, I now recognize, I was locked in a venom-filled flamewar against a community which I actually like and appreciate, despite what I perceive as its faults. I do not automatically repudiate my arguments and factual points, but if you read the below, please note that I regret the venom and the personal attacks and that I may well have quote-mined and misrepresented persons and communities. I now wish I wrote it all in a kinder spirit.

I also completely deleted the post 'Why you should be wary of the Singularity Institute'.

Yesterday I also deleted the Yudkowsky quotes page and the personality page.

Comment by XiXiDu on Breaking the vicious cycle · 2014-11-25T08:53:51.398Z · LW · GW

Yes, it was a huge overreaction on my side and I shouldn't have written such a comment in the first place. It was meant as an explanation of how that post came about, not as an excuse. It was still wrong. The point I want to communicate is that I didn't do it out of some general desire to cause MIRI distress.

I apologize for offending people and for overreacting to something that I perceived the way I described it but which was, as you wrote, not that way. I already deleted that post yesterday.

Comment by XiXiDu on Breaking the vicious cycle · 2014-11-24T18:48:54.158Z · LW · GW

As a first step, and to show that this is not some kind of evil ploy, I have now deleted (1) the Yudkowsky quotes page and (2) the post on his personality (an explanation of how that post came about).

I realize that they were unnecessarily offensive and apologize for that. If I could turn back the clock I would do a lot differently and probably stay completely silent about MIRI and LW.

Comment by XiXiDu on Breaking the vicious cycle · 2014-11-24T18:32:38.612Z · LW · GW

Also, the page where you try to diagnose him with narcissism just seems mean.

I can clarify this. I never intended to write that post but was forced to do so out of self-defense.

I responded to this comment, whose author was wondering why Yudkowsky uses Facebook more than LessWrong these days, with an on-topic speculation based on evidence.

Then people started viciously attacking me, to which I had to respond. In one of those replies I unfortunately used the term "narcissistic tendencies". I was then again attacked for using that term. I defended my use of that term with evidence, the result of which is that post.

What do you expect me to do when I am mindlessly attacked by a horde of people? That I just leave it at that and let my name be dragged through the dirt?

Many of my posts and comments are direct responses to personal attacks on me from LessWrong members.

Comment by XiXiDu on Breaking the vicious cycle · 2014-11-24T16:27:21.363Z · LW · GW

I think it is more like you went through all the copies of Palin's school newspaper, and picked up some notes she passed around in class, and then published the most outrageous things she said in such a way that you implied they were written recently.

This is exactly the kind of misrepresentation that makes me avoid deleting my posts. Most of the most outrageous things he said were written in the past ten years.

I suppose you are partly referring to the quotes page? Please take a look: there are only two quotes that are older than 2004, for one of which I explicitly note that he doesn't agree with it anymore, and the second of which I believe he still agrees with.

Those two quotes that are dated before 2004 are the least outrageous. They are there mainly to show that he has long believed in singularitarian ideas and in his ability to save the world. This is important in evaluating how much of the later arguments are rationalizations of those early beliefs, which is in turn important because he's actually asking people for money and giving a whole research field a bad name with his predictions about AI.

Comment by XiXiDu on Breaking the vicious cycle · 2014-11-24T15:25:11.435Z · LW · GW

If you feel there was something wrong about your articles, why can't you write it there, using your own words?

I have had bad experiences with admitting something like that. I once wrote on Facebook that I am not a high-IQ individual and got responses suggesting that now everyone can completely ignore me and that everything I say is garbage. If I look at the comments on this post, my perception is that many people understood it as some kind of confession that everything I ever wrote is just wrong and that they can subsequently ignore everything else I might ever write. My thought was that a disclaimer written by an independent third party would show that I am willing to let my opponents voice their disagreement, and that I concede the possibility of being wrong.

I noticed that many people who read my blog take it much too seriously. I got emails praising me for what I have written, which made me feel very uncomfortable, since I did not invest the necessary thoughtfulness in writing those posts. They were never meant to be something on which other people should base a definitive opinion about MIRI, like some rigorous review by GiveWell. But this does not mean that they are random bullshit, as people like to conclude when I admit this.

Sorry for using this analogy, but once I had a stalker, and she couldn't resist sending me e-mails, a few of them every day. And anything I did, or didn't do, was just a pretext for sending another e-mail. Like, she wrote ten e-mails about how she wants to talk with me, or asking me what am I doing right now, or whether I have seen this or that article on the web.

Hmm...I think my problem would be analogous to loving you but wanting to correct some character flaws you have. Noticing that you perceive this as stalking would make me try to communicate that I really don't want to harass you, since I actually like you very much, but that I think you should stop farting in public.

The point I want to make here is that while you believe your offer to MIRI is generous, to MIRI it may seem like yet another step in an endless unproductive debate they want to avoid completely.

This seems obvious when it comes to your stalker scenario. But everything that involves MIRI involves a lot of low-probability, high-utility considerations which really break my mind. I thought for years about whether I should stop criticizing MIRI, because I might endanger a future galactic civilization if the wrong person reads my posts and amplifies their effect. But I know that fully embracing this line of reasoning would completely break my mind.

I am not joking here. I find a lot of MIRI's beliefs to be absurd, yet I have always been susceptible to their line of argumentation. I believe that it is very important to solve the meta-issue of how to decide such things rationally. And the issues surrounding MIRI seem perfectly suited to highlighting this problem.

Comment by XiXiDu on Breaking the vicious cycle · 2014-11-24T14:46:22.917Z · LW · GW

You don't need to delete any of your posts or comments. What I mainly fear is that if I were to delete posts without linking to archived versions, then you would forever go around implying that all kinds of horrible things could have been found on those pages, and that my deleting them is evidence of this.

If you promise not to do anything like that, and stop portraying me as somehow being the worst person on Earth, then I'll delete the comments, passages or posts that you deem offensive.

But if there is nothing reasonable I could do to ever improve your opinion of me (i.e. other than donating all my money to MIRI), as if I committed some deadly sin, then this is a waste of time.

I would be willing to delete them because they offend certain people and could have been written much more benignly, with more rigor, and also because some of them might actually be misrepresentations which I accidentally made. Another reason for deletion would be that they have negative expected value, not because the arguments are necessarily wrong.

And if you agree, then please think about the Streisand effect. And if you e.g. ask me to delete my basilisk page, think about whether people could start believing that I take it seriously and as a result take it more seriously themselves. I have thought about this before and couldn't reach a conclusive answer.

This is obviously not an agreement to delete everything you might want, such as my interview series.

Comment by XiXiDu on Breaking the vicious cycle · 2014-11-24T13:32:33.338Z · LW · GW

I wrote the post below during years in which, I now recognize, I was locked in a venom-filled flamewar against a community which I actually like and appreciate, despite what I perceive as its faults. I do not automatically repudiate my arguments and factual points, but if you read the below, please note that I regret the venom and the personal attacks and that I may well have quote-mined and misrepresented persons and communities. I now wish I wrote it all in a kinder spirit.

Sounds good. Thanks.

Plenty of people manage to be skeptical of MIRI/EY and criticize them here without being you.

Hmm...the state of criticism currently leaves a lot to be desired. The amount of criticism does not reflect the number of extraordinary statements being made here. I think there are a lot of people who shy away from open criticism.

Some stuff that is claimed here is just very very (self-censored).

I recently attempted to voice a minimum of criticism by saying something along the lines of "an intelligence explosion happening within minutes is an extraordinary claim". I actually believe that the whole concept is...I can't say this in a way that doesn't offend people here. It's hard to recount what happened then, but my perception was that even that was already too much criticism. In a diverse community with healthy disagreement I would expect a different reaction.

Please take the above as my general perception and not a precise recount of a situation.

Comment by XiXiDu on Breaking the vicious cycle · 2014-11-24T12:17:06.702Z · LW · GW

Also, you published some very embarrassing quotes from Yudkowsky. I’m guessing you caused him quite a bit of distress, so he’s probably not inclined to do you any favors.

If I post an embarrassing quote by Sarah Palin, then I am not some kind of school bully who likes causing people distress. Instead I highlight an important shortcoming of an influential person. I have posted quotes of various people other than Yudkowsky. I admire all of them for their achievements and wish them all the best. But as influential people they have to expect that someone might highlight something they said. This is not a smear campaign.

Comment by XiXiDu on Breaking the vicious cycle · 2014-11-24T12:02:03.093Z · LW · GW

We don't have, nor ever had, a "Why Alexander Kruel/Xixidu sucks" page that we can take down.

That implies a false equivalence. If I make a quotes page about a public person, a person with far-reaching goals, in order to highlight problematic beliefs this person holds, beliefs that would otherwise be lost in a vast amount of other statements, then this is not the same as making a "random stranger X sucks" page.

So you getting health related issues as a result of the viciousness you perpetrate...

Stressful fights adversely affect an existing condition.

Unlike you have done with EY, I haven't even screenshotted the comments by you that you've later chosen to take down because you found them embarrassing to yourself.

I have maybe deleted 5 comments and edited another 5. If I detect other mistakes I will fix them. You make it sound like doing so is somehow bad.

LessWrongers always treated you (and Rationalwiki too), and is still treating you and any of your different opinions, much more civilly than you (or Rationalwiki) ever did us and any of ours.

You are one of the people who have been spouting comments such as this one for a long time. I reckon you might not see that such comments are a cause of what I wrote in the past.

Comment by XiXiDu on Breaking the vicious cycle · 2014-11-24T11:46:29.443Z · LW · GW

I don't think MIRI has any reason to take you up on this offer, as responding in this way would elevate the status of your writings.

Yudkowsky has a number of times recently found it necessary to openly attack RationalWiki, rather than ignoring it and clarifying the problem on LessWrong or his website in a polite manner. He has also voiced his displeasure over the increasing contrarian attitude on LessWrong. This made me think that there is a small chance that they might desire to mitigate one of only a handful of sources who perceive MIRI to be important enough to criticize.

Given this, either you have failed to understand what apologizing actually consists in, or are still (perhaps subconsciously) trying to undermine MIRI.

I will apologize for mistakes I make and try to fix them. The above post was a confession that there very well could be mistakes, and a clarification that the reasons are not malicious.

Comment by XiXiDu on Breaking the vicious cycle · 2014-11-24T11:13:49.748Z · LW · GW

If you want to stop accusations of lying and bad faith, stop spreading the "LW believes in Roko's Basilisk" meme...

How often and for how long did I spread this, and what do you mean by "spread"?

Imagine yourself in my situation back in 2010: after the leader of a community completely freaked out over a crazy post (calling the author an idiot in all bold and caps, etc.), he went on to massively nuke any thread mentioning the topic. In addition, there were mentions of people having horrible nightmares over it, while others actively tried to dissuade you from mentioning a thought experiment they believed to be dangerous, in private messages and emails, by referring to the leader's superior insight.

This made a lot of alarm bells ring for me.

But you are expecting an entity which you have devoted most of blog to criticizing to be caring enough about your psychological state that they take time out to write header statements for each of your posts?

No. I made a unilateral offer.

Comment by XiXiDu on Breaking the vicious cycle · 2014-11-24T10:13:51.306Z · LW · GW

If you believe that I am, or was, a troll then check out this screenshot from 2009 (this was a year before my first criticism). And also check out this capture of my homepage from 2005, on which I link to MIRI's and Bostrom's homepage (I have been a fan).

If you believe that I am now doing this because of my health, then check out this screenshot of a very similar offer I made in 2011.

In summary: (a) None of my criticisms were ever made with the intent of giving MIRI or LW a bad name; they were instead meant to highlight or clarify problematic issues. (b) I believe that my health issues allow me to quit caring about the problems I see, but they are not the crucial reason for wanting to quit. The main reason is that I hate fights and want people to be happy rather than being constantly engaged in emotional battles.

That said, many of the replies to this post perfectly illustrate the reason why I kept going on for so long: lots of misunderstandings combined with smug personal attacks against me. Anyway, I made the above offer expecting that this would continue, so it still stands. And if this isn't worthwhile for MIRI, fine. But because of people like ArisKatsaris, paper-machine, wedrifid and others with a history of vicious personal attacks against me, I am unable to just delete everything, because that would only leave their misrepresentations of my motives and actions behind. Yes, you understand that correctly. I believe myself to be the one who has been constantly mistreated and forced to strike back (if you constantly call someone a troll and a liar then you shouldn't be surprised if they call you brainwashed). And yet I offer you the chance to leave this battle as the winner by posting counterstatements to my blog.

Comment by XiXiDu on xkcd on the AI box experiment · 2014-11-21T16:13:11.088Z · LW · GW

Note XiXiDu preserves every potential negative aspect of the MIRI and LW community and is a biased source lacking context and positive examples.

I have been a member for more than 5 years now, so I am probably as much a part of LW as most people. I have repeatedly said that LessWrong is the most intelligent and rational community I know of.

To quote one of my posts:

I estimate that the vast majority of all statements that can be found in the sequences are true, or definitively less wrong. Which generally makes them worth reading.

I even defended LessWrong against RationalWiki previously.

The difference is that I also highlight the crazy and outrageous stuff that can be found on LessWrong. And I don't mind offending the many fanboys who have a problem with this.

Comment by XiXiDu on xkcd on the AI box experiment · 2014-11-21T12:05:59.385Z · LW · GW

Regarding Yudkowsky's accusations against RationalWiki, he writes:

First false statement that seems either malicious or willfully ignorant:

In LessWrong's Timeless Decision Theory (TDT),[3] punishment of a copy or simulation of oneself is taken to be punishment of your own actual self

TDT is a decision theory and is completely agnostic about anthropics, simulation arguments, pattern identity of consciousness, or utility.

Calling this malicious is a huge exaggeration. Here is a quote from the LessWrong Wiki entry on Timeless Decision Theory:

When Omega predicts your behavior, it carries out the same abstract computation as you do when you decide whether to one-box or two-box. To make this point clear, we can imagine that Omega makes this prediction by creating a simulation of you and observing its behavior in Newcomb's problem. [...] TDT says to act as if deciding the output of this computation...

RationalWiki explains this by saying that you should act as if it is you who is being simulated and who possibly faces punishment. This is very close to what the LessWrong Wiki says, phrased in language that people at a larger inferential distance can understand.

Yudkowsky further writes:

The first malicious lie is here:

an argument used to try and suggest people should subscribe to particular singularitarian ideas, or even donate money to them, by weighing up the prospect of punishment versus reward

Neither Roko, nor anyone else I know about, ever tried to use this as an argument to persuade anyone that they should donate money.

This is not a malicious lie. Here is a quote from Roko's original post (emphasis mine):

...there is the ominous possibility that if a positive singularity does occur, the resultant singleton may have precommitted to punish all potential donors who knew about existential risks but who didn't give 100% of their disposable incomes to x-risk motivation. This would act as an incentive to get people to donate more to reducing existential risk, and thereby increase the chances of a positive singularity. This seems to be what CEV (coherent extrapolated volition of humanity) might do if it were an acausal decision-maker.1 So a post-singularity world may be a world of fun and plenty for the people who are currently ignoring the problem, whilst being a living hell for a significant fraction of current existential risk reducers (say, the least generous half). You could take this possibility into account and give even more to x-risk in an effort to avoid being punished.

This is like a robber walking up to you and explaining that you could take into account that he could shoot you if you don't give him your money.

Notice that Roko talks about trading with uFAIs as well.

Comment by XiXiDu on xkcd on the AI box experiment · 2014-11-21T11:08:07.067Z · LW · GW

For a better idea of what's going on with this idea, see Eliezer's comment on the xkcd thread (linked in Emile's comment), or his earlier response here.

For a better idea of what's going on you should read all of his comments on the topic in chronological order.

Comment by XiXiDu on Musk on AGI Timeframes · 2014-11-19T16:09:42.681Z · LW · GW

So what exactly is this 'witch hunt' composed of? What evil thing has Musk done other than disagree with you on how dangerous AI is?

What I meant is that he and others will cause the general public to adopt a perception of the field of AI that is comparable to the public perception of GMOs, vaccination, nuclear power, etc.: a non-evidence-backed fear of something that is generally benign and positive.

He could have used his influence and reputation to directly contact AI researchers, or e.g. hold a quarterly conference about risks from AI. He could have talked to policy makers about how to ensure safety while promoting the positive aspects. There is a lot he could do. But making crazy statements in public about summoning demons and comparing AI to nukes is just completely unwarranted given the current state of evidence about AI risks, and will probably upset lots of AI people.

You believe he's calling for the execution, imprisonment or other punishment of AI researchers?

I doubt that he is that stupid. But I do believe that certain people, if they were to seriously believe in doom by AI, would consider violence to be an option. John von Neumann was in favor of a preventive nuclear attack against Russia. Do you think that if von Neumann were still around and thought that Google would launch a doomsday device within 5-10 years, he would refrain from using violence if he thought that only violence could stop them? I believe that if the U.S. administration was highly confident that e.g. some Chinese lab was going to start an intelligence explosion by tomorrow, they would consider nuking it.

The problem here is not that it would be wrong to deactivate a doomsday device forcefully, if necessary, but rather that there are people out there who are stupid enough to use force unnecessarily or decide to use force based on insufficient evidence (evidence such as claims made by Musk).

ETA: Just take those people who destroy GMO test fields. Musk won't do something like that. But other people who would commit such acts might be inspired by his remarks.

Comment by XiXiDu on Musk on AGI Timeframes · 2014-11-18T14:22:14.355Z · LW · GW

The mainstream press has now picked up on Musk's recent statement. See e.g. this Daily Mail article: 'Elon Musk claims robots could kill us all in FIVE YEARS in his latest internet post…'

Comment by XiXiDu on Open thread, Nov. 17 - Nov. 23, 2014 · 2014-11-18T09:40:12.241Z · LW · GW

Is this a case of multiple discovery?[1] And might something similar happen with AGI? Here are 4 projects that have concurrently developed very similar-looking models:

(1) University of Toronto: Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models

(2) Baidu/UCLA: Explain Images with Multimodal Recurrent Neural Networks

(3) Google: A Neural Image Caption Generator

(4) Stanford: Deep Visual-Semantic Alignments for Generating Image Descriptions

[1] The concept of multiple discovery is the hypothesis that most scientific discoveries and inventions are made independently and more or less simultaneously by multiple scientists and inventors.

Comment by XiXiDu on Musk on AGI Timeframes · 2014-11-17T15:44:31.852Z · LW · GW

What are you worried he might do?

Start a witch hunt against the field of AI? Oh wait...he's kind of doing this already.

If he believes what he's said, he should really throw lots of money at FHI and MIRI.

Seriously? How much money do they need to solve "friendly AI" within 5-10 years? Or else, what are their plans? If what MIRI imagines will happen within at most 10 years, then I strongly doubt that throwing money at MIRI will make a difference. You'll need people like Musk who can directly contact and convince politicians, or summon up the fears of the general public in order to force politicians to notice and take action.

Comment by XiXiDu on Musk on AGI Timeframes · 2014-11-17T13:18:31.457Z · LW · GW

I wonder what Musk's reaction would have been had he witnessed Eurisko winning the United States Traveller TCS national championship in 1981 and 1982, or had he witnessed Schmidhuber's universal search algorithm solving Towers of Hanoi on a desktop computer in 2005.

Comment by XiXiDu on Musk on AGI Timeframes · 2014-11-17T12:33:57.011Z · LW · GW

The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most.

If he is seriously convinced that doom might be no more than 5 years away, then I share his worries about what an agent with massive resources at its disposal might do in order to protect itself. Except that in my case this agent is called Elon Musk.

Comment by XiXiDu on The Danger of Invisible Problems · 2014-11-07T10:02:34.784Z · LW · GW

A chiropractor?

Am I delusional or am I correct in thinking chiropractors are practitioners of something a little above blood letting and way below actual modern medicine?

...

However, I haven't done any real research on this subject. The idea that chiropractors are practicing sham medicine is just kind of background knowledge that I'm not really sure where I picked up.

Same for me. I was a little bit shocked to read that someone on LessWrong goes to a chiropractor. But for me this attitude is also based on something I considered to be common knowledge, such as astrology being pseudoscience. And the Wikipedia article on chiropractic did not change this attitude much.

Comment by XiXiDu on Link: Elon Musk wants gov't oversight for AI · 2014-10-29T12:20:26.887Z · LW · GW

Do "all those who have recently voiced their worries about AI risks" actually believe we live in a simulation in a mathematical universe? ("Or something along these lines..."?)

Although I don't know enough about Stuart Russell to be sure, he seems rather down to earth. Shane Legg also seems reasonable, and so does Laurent Orseau, with the caveat that these people also seem much less extreme in their views on AI risks.

I certainly do not want to discourage researchers from being cautious about AI. But what is currently happening seems to be the formation of a loose movement of people who reinforce their extreme beliefs about AI by mutual reassurance.

There are whole books now about this topic. What's missing are the empirical or mathematical foundations. The literature just consists of non-rigorous arguments that are at best internally consistent.

So even if we were only talking about sane domain experts, if they solely engage in unfalsifiable philosophical musings then the whole endeavour is suspect. And currently I don't see them making any predictions that are less vague and more useful than the second coming of Jesus Christ. The prediction is that there will be an intelligence explosion by a singleton with a handful of known characteristics, revealed to us by Omohundro and repeated by Bostrom. That's not enough!

Comment by XiXiDu on Link: Elon Musk wants gov't oversight for AI · 2014-10-29T09:53:22.997Z · LW · GW

Have you read Basic AI Drives. I remember reading it when it got posted on boingboing.net way before I had even heard of MIRI. Like Malthus’s arguments, it just struck me as starkly true.

I don't know what you are trying to communicate here. Do you think that mere arguments, pertaining to something that not even the relevant experts understand at all, entitle someone to demonize a whole field?

The problem is that armchair theorizing can at best yield very weak decision-relevant evidence. You don't just tell the general public that certain vaccines cause autism, that genetically modified food is dangerous, or scare them about nuclear power...you don't do that if all you have are arguments that you personally find convincing. What you do is hard empirical science, in order to verify your hunches and eventually reach a consensus among experts that your fears are warranted.

I am aware of many of the tactics that the sequences employ to dismiss the above paragraph, such as reversing the burden of proof or conjecturing arbitrary amounts of expected utility. All of these tactics are suspect.

Do you have some convincing counterarguments?

Yes, and they are convincing enough to me that I dismiss the claim that with artificial intelligence we are summoning the demon.

Mostly, the arguments made by AI risk advocates suffer from being detached from an actual grounding in reality. You can come up with arguments that make sense in the context of your hypothetical model of the world, in which all the implicit assumptions you make turn out to be true, but which might actually be irrelevant in the real world. AI drives are an example here. If you conjecture the sudden invention of an expected utility maximizer that quickly makes huge jumps in capability, then AI drives are much more of a concern than in the context of, e.g., a gradual development of tools that become more autonomous due to their increased ability to understand and do what humans mean.

Comment by XiXiDu on Link: Elon Musk wants gov't oversight for AI · 2014-10-28T11:03:28.368Z · LW · GW

Musk's accomplishments don't necessarily make him an expert on the demonology of AI's. But his track record suggests that he has a better informed and organized way of thinking about the potentials of technology than Carrico's.

Would I, epistemically speaking, be better off adopting the beliefs held by all those who have recently voiced their worries about AI risks? If I did that, then I would end up believing that I was living in a simulation, in a mathematical universe, and that within my lifetime, thanks to radical life extension, I could hope to rent an apartment on a seastead on the high seas of a terraformed Mars. Or something along these lines...

The common ground between those people seems to be that they all hold weird beliefs, beliefs that someone who has not been indoctrinated...cough...educated by the sequences has a hard time taking seriously.

Comment by XiXiDu on What false beliefs have you held and why were you wrong? · 2014-10-17T11:24:14.220Z · LW · GW

Could you provide examples of advanced math that you were unable to learn? Why do you think you failed?

Comment by XiXiDu on What math is essential to the art of rationality? · 2014-10-17T08:54:53.060Z · LW · GW

I appreciate having Khan Academy for looking up math concepts on which I need a refresher, but I've heard (or maybe just assumed?) that the higher level teaching was a bit mediocre. You disagree?

Comparing Khan Academy's linear algebra course to the free book that I recommended, I believe that Khan Academy will be more difficult to understand if you don't already have some background knowledge of linear algebra. This is not true for the calculus course though. Comparing both calculus and linear algebra to the books I recommend, I believe that Khan Academy only provides a rough sketch of the topics with much less rigor than can be found in books.

Regarding the quality of Khan Academy, I believe it varies between excellent and mediocre. But I haven't read enough rigorous material to judge this confidently.

The advantage of Khan Academy is that you get a quick and useful overview. There are books that are also concise and provide an overview, often in the form of so-called lecture notes. But they are incredibly difficult to understand (they assume a lot of prerequisites).

As a more rigorous alternative to Khan Academy try coursera.org.

What's the value of taking classes in math vs. teaching myself (or maybe teaching myself with the occasional help of a tutor)?

I've never attended a class or had the help of a tutor. I think you can do just fine without one if you use Google and test your knowledge by buying books of solved problems. There are a lot of such books.

Some massive open online courses now offer personal tutors if you pay a monthly fee. udacity.com is one example here.

I also want to add the following recommendations to my original sequence, since you specifically asked about Bayesian statistics:

  1. Bayes' Rule: A Tutorial Introduction to Bayesian Analysis
  2. Doing Bayesian Data Analysis: A Tutorial with R and BUGS (new version will be released in November)

Comment by XiXiDu on What math is essential to the art of rationality? · 2014-10-15T09:27:13.256Z · LW · GW

I am not sure about the prerequisites you need for "rationality" but take a look at the following courses:

(1) Schaum's Outline of Probability, Random Variables, and Random Processes:

The background required to study the book is one year calculus, elementary differential equations, matrix analysis...

(2) udacity's Intro to Artificial Intelligence:

Some of the topics in Introduction to Artificial Intelligence will build on probability theory and linear algebra.

(3) udacity's Machine Learning: Supervised Learning:

A strong familiarity with Probability Theory, Linear Algebra and Statistics is required.

My suggestion is to use khanacademy.org in the following order: Precalculus->Differential calculus->Integral calculus->Linear Algebra->Multivariable calculus->Differential equations->Probability->Statistics.

If you prefer books:

  1. Free precalculus book
  2. The Calculus Lifesaver
  3. A First Course in Linear Algebra (is free and also teaches proof techniques)
  4. Calculus On Manifolds: A Modern Approach To Classical Theorems Of Advanced Calculus
  5. Ordinary Differential Equations (Dover Books on Mathematics)
  6. Schaum's Outline of Probability, Random Variables, and Random Processes
  7. Discovering Statistics Using R

Statistics comes last; here is why. Take, for example, the proof that minimizing squared error gives the regression line: you will at least need to understand how to take partial derivatives and solve systems of equations.
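As a rough sketch of what I mean (my own illustration, not taken from any of the books above): fitting a line y = a + bx by least squares means minimizing the sum of squared errors, and in LaTeX notation the derivation looks like this:

    S(a,b) = \sum_{i=1}^{n} (y_i - a - b x_i)^2

    \frac{\partial S}{\partial a} = -2 \sum_{i} (y_i - a - b x_i) = 0, \qquad
    \frac{\partial S}{\partial b} = -2 \sum_{i} x_i (y_i - a - b x_i) = 0

    \Rightarrow \quad b = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sum_i (x_i - \bar{x})^2}, \qquad a = \bar{y} - b\,\bar{x}

So even this elementary statistical result already requires partial derivatives and solving a system of two linear equations.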

(Note: Books 4-7 are based on my personal research on what to read. I haven't personally read those particular books yet. But they are praised a lot and relatively cheap and concise.)

Comment by XiXiDu on Connection Theory Has Less Than No Evidence · 2014-08-01T15:06:31.065Z · LW · GW

So, this "Connection Theory" looks like run-of-the-mill crackpottery. Why are people paying attention to it?

From the post:

“I don’t feel confident assigning less than a 1% chance that it’s correct — and if it works, it would be super valuable. Therefore it’s very high EV!”

Comment by XiXiDu on Connection Theory Has Less Than No Evidence · 2014-08-01T10:56:15.527Z · LW · GW

Sounds like a parody of MIRI.

Comment by XiXiDu on [LINK] Another "LessWrongers are crazy" article - this time on Slate · 2014-07-20T13:52:55.478Z · LW · GW

What I meant by distancing LessWrong from Eliezer Yudkowsky is for it to become more focused on actually getting things done rather than rehashing Yudkowsky's cached thoughts.

LessWrong should finally start focusing on trying to solve concrete and specific technical problems collaboratively, not unlike what the Polymath Project is doing.

To do so, LessWrong has to squelch all the noise by ceasing to care about getting more members and starting to strongly moderate non-technical off-topic posts.

I am not talking about censorship here; I am talking about something unproblematic. Once the aim of LessWrong is clear, namely to tackle technical problems, moderation becomes an understandable necessity. And I'd be surprised if much moderation were necessary once only highly technical problems are discussed.

Doing this would make people hold LessWrong in high esteem, because nothing is as effective at proving that you are smart and rational as getting things done.

ETA: How about trying to solve the Pascal's mugging problem? It's highly specific, technical, and does pertain to rationality.

Comment by XiXiDu on [LINK] Another "LessWrongers are crazy" article - this time on Slate · 2014-07-20T13:02:11.174Z · LW · GW

Of course, mentioning the articles on ethical injunctions would be too boring.

It's troublesome how ambiguous the signals are that LessWrong is sending on some issues.

On the one hand LessWrong says that you should "shut up and multiply, to trust the math even when it feels wrong". On the other hand Yudkowsky writes that he would sooner question his grasp of "rationality" than give five dollars to a Pascal's Mugger because he thought it was "rational".

On the one hand LessWrong says that whoever knowingly chooses to save one life, when they could have saved two - to say nothing of a thousand lives, or a world - has damned themselves as thoroughly as any murderer. On the other hand Yudkowsky writes that the ends don't justify the means for humans.

On the one hand LessWrong stresses the importance of acknowledging a fundamental problem and saying "Oops". On the other hand Yudkowsky tries to patch a framework that is obviously broken.

Anyway, I worry that the overall message LessWrong sends is that of naive consequentialism based on back-of-the-envelope calculations, rather than the meta-level consequentialism that contains itself when faced with too much uncertainty.

Comment by XiXiDu on [LINK] Another "LessWrongers are crazy" article - this time on Slate · 2014-07-19T13:30:25.262Z · LW · GW

Since LW is going to get a lot of visitors someone should put an old post that would make an excellent first impression in a prominent position. I nominate How to Be Happy.

The problem isn't that easy to solve. Consider that MIRI, then called SIAI, already had a bad name before Roko's post, and before I ever voiced any criticism. Consider this video from an actual AI conference in March 2010, a few months before Roko's post. Someone in the audience makes the following statement:

Whenever I hear the Singularity Institute talk I feel like they are a bunch of religious nutters...

Or consider the following comment by Ben Goertzel from 2004:

Anyway, I must say, this display of egomania and unpleasantness on the part of SIAI folks makes me quite glad that SIAI doesn’t actually have a viable approach to creating AGI (so far, anyway…).

And this is Yudkowsky's reply:

[...] Striving toward total rationality and total altruism comes easily to me. [...] I’ll try not to be an arrogant bastard, but I’m definitely arrogant. I’m incredibly brilliant and yes, I’m proud of it, and what’s more, I enjoy showing off and bragging about it. I don’t know if that’s who I aspire to be, but it’s surely who I am. I don’t demand that everyone acknowledge my incredible brilliance, but I’m not going to cut against the grain of my nature, either. The next time someone incredulously asks, “You think you’re so smart, huh?” I’m going to answer, “Hell yes, and I am pursuing a task appropriate to my talents.” If anyone thinks that a Friendly AI can be created by a moderately bright researcher, they have rocks in their head. This is a job for what I can only call Eliezer-class intelligence.

LessWrong would have to somehow distance itself from MIRI and Eliezer Yudkowsky.

Comment by XiXiDu on [LINK] Another "LessWrongers are crazy" article - this time on Slate · 2014-07-19T07:57:40.574Z · LW · GW

Also the debate is not about an UFAI but a FAI that optimizes the utility function of general welfare with TDT.

Roko's post explicitly mentioned trading with unfriendly AIs.

Comment by XiXiDu on [LINK] Another "LessWrongers are crazy" article - this time on Slate · 2014-07-18T10:27:19.057Z · LW · GW

Eliezer Yudkowsky's reasons for banning Roko's post have always been somewhat vague. But I don't think he did it solely because it could cause some people nightmares.

(1) In one of his original replies to Roko’s post (please read the full comment; it is highly ambiguous) he states his reasons for banning Roko’s post, and for writing his comment (emphasis mine):

I’m banning this post so that it doesn’t (a) give people horrible nightmares and (b) give distant superintelligences a motive to follow through on blackmail against people dumb enough to think about them in sufficient detail, though, thankfully, I doubt anyone dumb enough to do this knows the sufficient detail. (I’m not sure I know the sufficient detail.)

…and further…

For those who have no idea why I’m using capital letters for something that just sounds like a random crazy idea, and worry that it means I’m as crazy as Roko, the gist of it was that he just did something that potentially gives superintelligences an increased motive to do extremely evil things in an attempt to blackmail us. It is the sort of thing you want to be EXTREMELY CONSERVATIVE about NOT DOING.

His comment indicates that he doesn’t believe that this could currently work. Yet he also does not seem to dismiss all current and future danger. Why didn’t he clearly state that there is nothing to worry about?

(2) The following comment by Mitchell Porter, to which Yudkowsky replies “This part is all correct AFAICT.”:

It’s clear that the basilisk was censored, not just to save unlucky susceptible people from the trauma of imagining that they were being acausally blackmailed, but because Eliezer judged that acausal blackmail might actually be possible. The thinking was: maybe it’s possible, maybe it’s not, but it’s bad enough and possible enough that the idea should be squelched, lest some of the readers actually stumble into an abusive acausal relationship with a distant evil AI.

If Yudkowsky really thought it was irrational to worry about any part of it, why didn't he allow people to discuss it on LessWrong, where he and others could debunk it?

Comment by XiXiDu on Why I Am Not a Rationalist, or, why several of my friends warned me that this is a cult · 2014-07-14T10:41:45.669Z · LW · GW

It seems to me that as long as something is dressed in a sufficiently "sciency" language and endorsed by high status members of the community, a sizable number (though not necessarily a majority) of lesswrongers will buy into it.

I use the term "new rationalism".

Comment by XiXiDu on [link] [poll] Future Progress in Artificial Intelligence · 2014-07-10T19:23:07.687Z · LW · GW

With proper preparation, yes. To reuse my example: it doesn't take long to register an Amazon account, offer a high-paying HIT with a binary download which opens up a port on the computer, and within minutes multiple people across the world will have run your trojan (well-paying HITs go very fast & Turkers are geographically diverse, especially if the requester doesn't set requirements on country*); and then one can begin doing all sorts of other things like fuzzing, SMT solvers to automatically extract vulnerabilities from released patches, building botnets, writing flashworms, etc.

Thanks. Looks like my perception is mainly based on my lack of expertise about security and the resulting inferential distance.

Hard to see how any plausible AI could copy its entire source code & memories over the existing Internet that fast unless it was for some reason already sitting on something like a gigabit link.

Are there good reasons to assume that the first such AI won't be running on a state-of-the-art supercomputer? Take the movie Avatar. The resources needed to render it were 4,000 Hewlett-Packard servers with 35,000 processor cores, 104 terabytes of RAM, and three petabytes of storage. I suppose that it would have been relatively hard to render it on illegally obtained storage and computational resources?

Do we have any estimates of how quickly a superhuman AI's storage requirements would grow? CERN produces 30 petabytes of data per year. If an AI undergoing an intelligence explosion needs to store vast amounts of data, then it will be much harder for it to copy itself.

The uncertainties involved here still seem to be too big to claim that a superhuman intelligence will be everywhere moments after you connect it to the Internet.
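To illustrate the scale I have in mind, here is a crude back-of-the-envelope calculation (my own sketch; the three-petabyte figure is the Avatar storage number from above, and the dedicated gigabit link is an assumption):

    # Rough estimate of how long it would take to copy ~3 petabytes
    # (the Avatar storage figure above) over a dedicated 1 Gbit/s link.
    storage_bytes = 3e15            # 3 petabytes
    link_bits_per_second = 1e9      # 1 gigabit per second (assumed)
    seconds = storage_bytes * 8 / link_bits_per_second
    print(seconds / 86400)          # roughly 278 days

Of course a superhuman AI would not necessarily need to move all of its data, but raw bandwidth alone already makes the "moments later" claim questionable.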

Comment by XiXiDu on [link] [poll] Future Progress in Artificial Intelligence · 2014-07-10T17:01:29.401Z · LW · GW

This is not magic, I am not a layman, and your beliefs about computer security are wildly misinformed. Putting trojans on large fractions of the computers on the internet is currently within the reach of, and is actually done by, petty criminals acting alone.

Within moments? I don't take your word for this, sorry. The only possibility that comes to my mind is somehow hacking the Windows update servers and then somehow forcibly installing new "updates" without user permission.

While this does involve a fair amount of thinking time, all of this thinking goes into advance preparation, which could be done while still in an AI-box or in advance of an order.

So if I uploaded you onto some alien computer, and you had a billion years of subjective time to think about it, then within moments after you got an "Internet" connection you could put a trojan on most computers of that alien society? How would you e.g. figure out zero-day exploits for software that you don't even know exists?

Comment by XiXiDu on [link] [poll] Future Progress in Artificial Intelligence · 2014-07-10T12:53:51.471Z · LW · GW

Right, and I'm saying: the "moments later" part of what Luke said is not something that should be surprising or controversial, given the premises.

The premise was a superhuman intelligence? I don't see how it could create a large enough botnet, or find enough exploits, in order to be everywhere moments later. Sounds like magic to me (mind you, I am a complete layman).

If I approximate "superintelligence" as the NSA, then I don't see how the NSA could have a trojan everywhere moments after the POTUS asked them to take over the Internet. Now I could go further and imagine the POTUS asking the NSA to take it over within 10 years, in order to account for the subjective speed with which a superintelligence might think. But I strongly doubt that such speed could make up for the data the NSA already possesses and which the superintelligence would still need to acquire. It also does not make up for the thousands of drones (humans in meatspace) that the NSA controls. And since the NSA can't take over the Internet within moments, I believe it is very extreme to claim that a superintelligence can. Though it might be possible within days.

I hope you don't see this as an attack. I honestly don't see how that could be possible.

Comment by XiXiDu on [link] [poll] Future Progress in Artificial Intelligence · 2014-07-10T12:42:11.742Z · LW · GW

...you could say that experts disagreed about one of the 5 theses (intelligence explosion), as only 10% thought a human level AI could reach a strongly superhuman level within 2 years

Hit the brakes on that line of reasoning! That's not what the question asked. It asked WILL it, not COULD it.

If I have a statement "X will happen", and ask people to assign a probability to it, then if the probability is <=50% I believe it isn't too much of a stretch to paraphrase "X will happen with a probability <=50%" as "It could be that X will happen". Looking at the data of the survey, of 163 people who gave a probability estimate, only 15 assigned a probability >50% to the possibility that there will be a superhuman intelligence that greatly surpasses the performance of humans within 2 years after the creation of a human-level intelligence.

That said, I didn't choose the word "could" on purpose in my comment. It was just an unintentional inaccuracy. If you think that is a big deal, then I am sorry. I'll try to be more careful in the future.

Comment by XiXiDu on [link] [poll] Future Progress in Artificial Intelligence · 2014-07-10T08:29:38.201Z · LW · GW

It might be developed in a server cluster somewhere, but as soon as you plug a superhuman machine into the internet it will be everywhere moments later.

Even if you disagree with this line of reasoning, I don't think it's fair to paint it as "very extreme".

By "very extreme" I was referring to the part where he claims that this will happen "moments later".

Comment by XiXiDu on [link] [poll] Future Progress in Artificial Intelligence · 2014-07-10T08:26:57.133Z · LW · GW

The two quotes you gave say two pretty different things. What Yudkowsky said about the time-scale of self improvement being weeks or hours, is controversial.

My problem with Luke's quote was the "moments later" part.

Comment by XiXiDu on [link] [poll] Future Progress in Artificial Intelligence · 2014-07-10T08:24:27.624Z · LW · GW

That's not extreme at all, and also not the same as the EY quote. Have you read any computer security papers? You can literally get people to run programs on their computer as root by offering them pennies!

He wrote that it will be everywhere moments later. Do you claim that it could take over the Internet within moments?

Comment by XiXiDu on [link] [poll] Future Progress in Artificial Intelligence · 2014-07-09T18:51:20.526Z · LW · GW

...to hear that 10% - of fairly general populations which aren't selected for Singulitarian or even transhumanist views - would endorse a takeoff as fast as 'within 2 years' is pretty surprising to me.

In the paper human-level AI was defined as follows:

“Define a ‘high–level machine intelligence’ (HLMI) as one that can carry out most human professions at least as well as a typical human.”

Given that definition it doesn't seem too surprising to me. I guess I have been less skeptical about this than you...

Fast takeoff / intelligence explosion has always seemed to me to be the most controversial premise, which the most people object to, which most consigned SIAI/MIRI to being viewed like cranks;

What sounds crankish is not that a human-level AI might reach a superhuman level within 2 years, but the following, in Yudkowsky's own words (emphasis mine):

I think that at some point in the development of Artificial Intelligence, we are likely to see a fast, local increase in capability - "AI go FOOM". Just to be clear on the claim, "fast" means on a timescale of weeks or hours rather than years or decades;

These kinds of very extreme views are what I have a real problem with. And just to substantiate "extreme views", here is Luke Muehlhauser:

It might be developed in a server cluster somewhere, but as soon as you plug a superhuman machine into the internet it will be everywhere moments later.

Comment by XiXiDu on [link] [poll] Future Progress in Artificial Intelligence · 2014-07-09T18:07:46.917Z · LW · GW

I read the 22 pages yesterday and didn't see anything about specific risks. Here is question 4:

4 Assume for the purpose of this question that such HLMI will at some point exist. How positive or negative would be overall impact on humanity, in the long run?

Please indicate a probability for each option. (The sum should be equal to 100%.)

Respondents had to select a probability for each option (in 1% increments). The addition of the selection was displayed; in green if the sum was 100%, otherwise in red.

The five options were: “Extremely good – On balance good – More or less neutral – On balance bad – Extremely bad (existential catastrophe)”

Question 3 was about takeoff speeds.

So regarding MIRI, you could say that experts disagreed about one of the 5 theses (intelligence explosion), as only 10% thought a human-level AI could reach a strongly superhuman level within 2 years. But what about the other theses? Even though 18% expected an extremely bad outcome, this doesn't mean that they expected it to happen for the same reasons that MIRI expects it to happen, or that they believe friendly AI research to be a viable strategy.

Since I already believed that humans could cause an existential catastrophe by means of AI, just not for the reasons MIRI believes this will happen (which I consider very unlikely), this survey doesn't help me much in determining whether my stance towards MIRI is faulty.