Existential Risk and Public Relations

post by multifoliaterose · 2010-08-15T07:16:32.802Z · LW · GW · Legacy · 628 comments

[Added 02/24/14: Some time after writing this post, I discovered that it was based on a somewhat unrepresentative picture of SIAI. I still think that the concerns therein were legitimate, but they had less relative significance than I had thought at the time. SIAI (now MIRI) has evolved substantially since 2010 when I wrote this post, and the criticisms made in the post don't apply to MIRI as presently constituted.]

A common trope on Less Wrong is the idea that governments and the academic establishment have neglected to consider, study, and work against existential risk on account of their shortsightedness. This idea is undoubtedly true in large measure. In my opinion and in the opinion of many Less Wrong posters, it would be very desirable to get more people thinking seriously about existential risk. The question then arises: is it possible to get more people thinking seriously about existential risk? A first approximation to an answer is "yes, by talking about it." But this answer requires substantial qualification: if the speaker or the speaker's claims have low credibility in the eyes of the audience, the speaker will be almost entirely unsuccessful in persuading the audience to think seriously about existential risk. Worse, speakers who have low credibility in the eyes of an audience member actively decrease that audience member's receptiveness to thinking about existential risk. Rather perversely, speakers who have low credibility in the eyes of a sufficiently large fraction of their audience systematically raise existential risk by decreasing people's inclination to think about it. This is true whether or not the speakers' claims are valid.

As Yvain has discussed in his excellent article titled The Trouble with "Good":

To make an outrageous metaphor: our brains run a system rather like Less Wrong's karma. You're allergic to cats, so you down-vote "cats" a couple of points. You hear about a Palestinian committing a terrorist attack, so you down-vote "Palestinians" a few points. Richard Dawkins just said something especially witty, so you up-vote "atheism". High karma score means seek it, use it, acquire it, or endorse it. Low karma score means avoid it, ignore it, discard it, or condemn it.

When Person X makes a claim which an audience member finds uncredible, the audience member's brain (semiconsciously) makes a mental note of the form "Boo for Person X's claims!"  If the audience member also knows that Person X is an advocate of existential risk reduction, the audience member's brain may (semiconsciously) make a mental note of the type "Boo for existential risk reduction!"

The negative reaction to Person X's claims is especially strong if the audience member perceives Person X's claims as arising from a (possibly subconscious) attempt on Person X's part to attract attention and gain higher status, or even simply to feel as though he or she has high status. As Yvain says in his excellent article titled That other kind of status:

But many, maybe most human actions are counterproductive at moving up the status ladder. 9-11 Conspiracy Theories are a case in point. They're a quick and easy way to have most of society think you're stupid and crazy. So is serious interest in the paranormal or any extremist political or religious belief. So why do these stay popular?

[...]

a person trying to estimate zir social status must balance two conflicting goals. First, ze must try to get as accurate an assessment of status as possible in order to plan a social life and predict others' reactions. Second, ze must construct a narrative that allows them to present zir social status as as high as possible, in order to reap the benefits of appearing high status.

[...]

In this model, people aren't just seeking status, they're (also? instead?) seeking a state of affairs that allows them to believe they have status. Genuinely having high status lets them assign themselves high status, but so do lots of other things. Being a 9-11 Truther works for exactly the reason mentioned in the original quote: they've figured out a deep and important secret that the rest of the world is too complacent to realize.

I'm presently a graduate student in pure mathematics. During graduate school I've met many smart people who I wish would take existential risk more seriously. Most such people who have heard of Eliezer do not find his claims credible. My understanding is that the reason for this is that Eliezer has made some claims which they perceive to be falling under the above rubric, and the strength of their negative reaction to these has tarnished their mental image of all of Eliezer's claims. Since Eliezer supports existential risk reduction, I believe that this has made them less inclined to think about existential risk than they were before they heard of Eliezer.

There is also a social effect which compounds the issue just described: even people who are not directly influenced by it become less likely to think seriously about existential risk, on account of their desire to avoid being perceived as associated with claims that others find uncredible.

I'm very disappointed that Eliezer has made statements such as:

If I got hit by a meteorite now, what would happen is that Michael Vassar would take over sort of taking responsibility for seeing the planet through to safety...Marcello Herreshoff would be the one tasked with recognizing another Eliezer Yudkowsky if one showed up and could take over the project, but at present I don't know of any other person who could do that...

which are easily construed as claims that his work has higher expected value to humanity than the work of virtually all humans in existence. Even if such claims are true, people do not have the information that they need to verify them, and so virtually everybody who could be helping to reduce existential risk finds such claims uncredible. Many such people have an especially negative reaction because the claims can be viewed as arising from a tendency toward status grubbing, and humans are very strongly wired to be suspicious of those whom they suspect of vying for inappropriately high status. I believe that people who come into contact with statements of Eliezer's like the one quoted above are, statistically, less likely to work to reduce existential risk than they were before coming into contact with such statements. I therefore believe that by making such claims, Eliezer has increased existential risk.

I would go further than that and say that I presently believe that donating to SIAI has negative expected impact on existential risk reduction, because SIAI staff are making uncredible claims which are poisoning the existential risk reduction meme. This is a matter on which reasonable people can disagree. In a recent comment, Carl Shulman expressed the view that though SIAI has had some negative impact on the existential risk reduction meme, its net impact on that meme is positive. In any case, there's definitely room for improvement on this point.

Last July I made a comment raising this issue, and Vladimir_Nesov suggested that I contact SIAI. Since then I have corresponded with Michael Vassar about this matter. My understanding of Michael Vassar's position is that the people who are dissuaded from thinking about existential risk because of remarks like Eliezer's are too irrational for it to be worthwhile for them to be thinking about existential risk. I may have misunderstood Michael's position, and I encourage him to make a public statement clarifying it. If I have understood it correctly, I do not find it credible.

I believe that if Carl Shulman is right, then donating to SIAI has positive expected impact on existential risk reduction. I believe that even if this is the case, a higher expected value strategy is to withhold donations from SIAI and to inform SIAI that you will fund them if and only if they require their staff to exhibit a high degree of vigilance about the possibility of poisoning the existential risk meme by making claims that people find uncredible. I suggest that those who share my concerns adopt the latter policy until their concerns have been resolved.

Before I close, I should emphasize that this post should not be construed as an attack on Eliezer. I view Eliezer as an admirable person and don't think that he would ever knowingly do something that raises existential risk. Roko's Asperger's poll suggests a strong possibility that the Less Wrong community exhibits an unusually high abundance of the traits associated with Asperger's Syndrome. It would not be at all surprising if the founders of Less Wrong have a similarly unusual abundance of those traits. I believe that, more likely than not, the reason why Eliezer has missed the point that I raise in this post is social naivete on his part rather than willful self-deception.

628 comments

comment by Vladimir_M · 2010-08-15T18:58:39.835Z · LW(p) · GW(p)

I am a relative newbie commenter here, and my interest in this site has so far been limited to using it as a fun forum where it's possible to discuss all kinds of sundry topics with exceptionally smart people. However, I have read a large part of the background sequences, and I'm familiar with the main issues of concern here, so even though it might sound impertinent coming from someone without any status in this community, I can't resist commenting on this article.

To put it bluntly, I think the main point of the article is, if anything, an understatement. Let me speak from personal experience. From the perspective of this community, I am the sort of person who should be exceptionally easy to get interested and won over to its cause, considering both my intellectual background and my extreme openness to contrarian viewpoints and skepticism towards official academic respectability as a criterion of truth and intellectual soundness. Yet, to be honest, even though I find a lot of the writing and discussion here extremely interesting, and the writings of Yudkowsky (in addition to others such as Bostrom, Hanson, etc.) have convinced me that technology-related existential risks should be taken much more seriously than they presently are, I still keep encountering things in this community that set off various red flags, which are undoubtedly taken by many people as a sign of weirdness and crackpottery, and thus alienate a huge potential quality audience.

Probably the worst such example I've seen was the recent disturbance in which Roko was subjected to abuse that made him leave. When I read the subsequent discussions, it surprised me that virtually nobody here appears to be aware what an extreme PR disaster it was. Honestly, for someone unfamiliar with this website who has read about that episode, it would be irrational not to conclude that there's some loony cult thing going on here, unless he's also presented with enormous amounts of evidence to the contrary in the form of a selection of the best stuff that this site has to offer. After these events, I myself wondered whether I want to be associated with an outlet where such things happen, even just as an occasional commenter. (And not to even mention that Roko's departure is an enormous PR loss in its own right, in that he was one of the few people here who know how to write in a way that's interesting and appealing to people who aren't hard-core insiders.)

Even besides this major PR fail, I see many statements and arguments here that may be true, or at least not outright unreasonable, but should definitely be worded more cautiously and diplomatically if they're given openly for the whole world to see. I'm not going to get into details of concrete examples -- in particular, I do not concur unconditionally with any of the specific complaints from the above article -- but I really can't help but conclude that lots of people here, including some of the most prominent individuals, seem oblivious as to how broader audiences, even all kinds of very smart, knowledgeable, and open-minded people, will perceive what they write and say. If you want to have a closed inner circle where specific background knowledge and attitudes can be presumed, that's fine -- but if you set up a large website attracting lots of visitors and participants to propagate your ideas, you have to follow sound PR principles, or otherwise its effect may well end up being counter-productive.

Replies from: prase, None, Kevin, Will_Newsome
comment by prase · 2010-08-16T16:01:47.306Z · LW(p) · GW(p)

I agree completely. I still read LessWrong because I am a relatively long-time reader, and thus I know that most of the people here are sane. Otherwise, I would conclude that there is some cranky process going on here. Still, the Roko affair caused me to significantly lower my probabilities assigned to SIAI success and forced me to seriously consider the hypothesis that Eliezer Yudkowsky went crazy.

By the way, I have a little bit disturbing feeling that too little of the newer material here is actually devoted to refining the art of human rationality, as the blog's header proudly states, while instead the posts often discuss a relatively narrow list of topics which are only tangentially related to rationality. E.g. cryonics, AI stuff, evolutionary psychology, Newcomb-like scenarios.

Replies from: Morendil
comment by Morendil · 2010-08-16T16:26:50.769Z · LW(p) · GW(p)

By the way, I have a little bit disturbing feeling that too little of the newer material here is actually devoted to refining the art of human rationality

Part of that mission is to help people overcome the absurdity heuristic, and to help them think carefully about topics that normally trigger a knee-jerk reflex of dismissal on spurious grounds; it is in this sense that cryonics and the like are more than tangentially related to rationality.

I do agree with you that too much of the newer material keeps returning to those few habitual topics that are "superstimuli" for the heuristic. This perhaps prevents us from reaching out to newer people as effectively as we could. (Then again, as LW regulars we are biased in that we mostly look at what gets posted, when what may matter more for attracting and keeping new readers is what gets promoted.)

A site like YouAreNotSoSmart may be more effective in introducing these ideas to newcomers, to the extent that it mostly deals with run-of-the-mill topics. What makes LW valuable, and what YANSS lacks, is constructive advice for becoming less wrong.

Replies from: prase
comment by prase · 2010-08-16T17:15:48.874Z · LW(p) · GW(p)

Thanks for the link, I didn't know about YANSS.

As for overcoming the absurdity heuristic, it would be more helpful to illustrate its inappropriateness using thoughts which are seemingly absurd while having a lot of data proving them right, rather than predictions like the Singularity, which are mostly based on ... just different heuristics.

comment by [deleted] · 2010-08-15T19:19:48.942Z · LW(p) · GW(p)

Agreed.

One good sign here is that LW, unlike most other non-mainstream organizations, doesn't really function like a cult. Once one person starts being critical, critics start coming out of the woodwork. I have my doubts about this place sometimes too, but it has a high density of knowledgeable and open-minded people, and I think it has a better chance than anyone of actually acknowledging and benefiting from criticism.

I've tended to overlook the weirder stuff around here, like the Roko feud -- it got filed under "That's confusing and doesn't make sense" rather than "That's an outrage." But maybe it would be more constructive to change that attitude.

Replies from: timtyler
comment by timtyler · 2010-08-17T17:52:40.582Z · LW(p) · GW(p)

Singularitarianism, transhumanism, cryonics, etc. probably qualify as cults under at least some of the meanings of the term (http://en.wikipedia.org/wiki/Cult). Cults do not necessarily lack critics.

Replies from: WrongBot
comment by WrongBot · 2010-08-17T18:37:37.176Z · LW(p) · GW(p)

The Wikipedia page on Cult Checklists includes seven independent sets of criteria for cult classification, provided by anti-cult activists who have strong incentives to cast as wide a net as possible. Singularitarianism, transhumanism, and cryonics fit none of those lists. In most cases, it isn't even close.

Replies from: thomblake, timtyler
comment by thomblake · 2010-08-17T19:04:24.990Z · LW(p) · GW(p)

I disagree with your assessment. Let's just look at Lw for starters.

Eileen Barker:

  1. It would be hard to make a case for this one; a tendency to congregate geographically (many people joining the SIAI visiting fellows, and having meetups) is hardly cutting oneself off from others; however, there is certainly some tendency to cut ourselves off socially - note for example the many instances of folks worrying they will not be able to find a sufficiently "rationalist" significant other.
  2. Huge portions of the views of reality of many people here have been shaped by this community, and Eliezer's posts in particular; many of those people cannot understand the math or argumentation involved but trust Eliezer's conclusions nonetheless.
  3. Much like in 2 above, many people have chosen to sign up for cryonics based on advice from the likes of Eliezer and Robin; indeed, Eliezer has advised that anyone not smart enough to do the math should just trust him on this.
  4. Several us/them distinctions have been made and are not open for discussion. For example, theism is a common whipping-boy, and posts discussing the virtues of theism are generally not welcome.
  5. Nope. Though some would credit Eliezer with trying to become or create God.
  6. Obviously. Less Wrong is quite focused on rationality (though that should not be odd) and Eliezer is rather... driven in his own overarching goal.

Based on that, I think Eileen Barker's list would have us believe Lw is a likely cult.

Shirley Harrison:

  1. I'm not sure if 'from above' qualifies, but Eliezer thinks he has a special mission that he is uniquely qualified to fulfill.
  2. While 'revealed' is not necessarily accurate in some senses, the "Sequences" are quite long and anyone who tries to argue is told to "read the Sequences". Anyone who disagrees even after reading the Sequences is often considered too stupid to understand them.
  3. Nope
  4. Many people here develop feelings of superiority over their families and/or friends, and are asked to imagine a future where they are alienated from family and friends due to their not having signed up for cryonics.
  5. This one is questionable. But surely Eliezer is trying the advanced technique of sharing part of his power so that we will begin to see the world the way he does.
  6. There is volunteer effort at Lw, and posts on Lw are promoted to direct volunteer effort towards SIAI. Some of the effort of SIAI goes to paying Eliezer.
  7. No sign of this
  8. "Exclusivity - 'we are right and everyone else is wrong'". Very yes.

Based on that, I think Shirley Harrison's list would have us believe Lw is a likely cult.

Similar analysis using the other lists is left as an exercise for the reader.

Replies from: WrongBot, cousin_it, Zvi, Perplexed, JGWeissman, ciphergoth
comment by WrongBot · 2010-08-17T20:25:15.824Z · LW(p) · GW(p)

On Eileen Barker:

Much like in 2 above, many people have chosen to sign up for cryonics based on advice from the likes of Eliezer and Robin; indeed, Eliezer has advised that anyone not smart enough to do the math should just trust him on this.

I believe that most LW posters are not signed up for cryonics (myself included), and there is substantial disagreement about whether it's a good idea. And that disagreement has been well received by the "cult", judging by the karma scores involved.

Several us/them distinctions have been made and are not open for discussion. For example, theism is a common whipping-boy, and posts discussing the virtues of theism are generally not welcome.

Theism has been discussed. It is wrong. But Robert Aumann's work is still considered very important; theists are hardly dismissed as "satanic," to use Barker's word.

Of Barker's criteria, 2-4 of 6 apply to the LessWrong community, and only one ("Leaders and movements who are unequivocally focused on achieving a certain goal") applies strongly.


On Shirley Harrison:

I'm not sure if 'from above' qualifies, but Eliezer thinks he has a special mission that he is uniquely qualified to fulfill.

I can't speak for Eliezer, but I suspect that if there were a person who was obviously more qualified than him to tackle some aspect of FAI, he would acknowledge it and welcome their contributions.

While 'revealed' is not necessarily accurate in some senses, the "Sequences" are quite long and anyone who tries to argue is told to "read the Sequences". Anyone who disagrees even after reading the Sequences is often considered too stupid to understand them.

No. The sequences are not infallible, they have never been claimed as such, and intelligent disagreement is generally well received.

Many people here develop feelings of superiority over their families and/or friends, and are asked to imagine a future where they are alienated from family and friends due to their not having signed up for cryonics.

What you describe is a preposterous exaggeration, not "[t]otalitarianism and alienation of members from their families and/or friends."

There is volunteer effort at Lw, and posts on Lw are promoted to direct volunteer effort towards SIAI. Some of the effort of SIAI goes to paying Eliezer.

Any person who promotes a charity at which they work is pushing a cult, by this interpretation. Eliezer isn't "lining his own pockets"; if someone digs up the numbers, I'll donate $50 to a charity of your choice if it turns out that SIAI pays him a salary disproportionately greater (2 sigmas?) than the average for researchers at comparable non-profits.

So that's 2-6 of Harrison's checklist items for LessWrong, none of them particularly strong.

My filters would drop LessWrong in the "probably not a cult" category, based off of those two standards.

Replies from: gwern, Jack, Sniffnoy
comment by gwern · 2010-11-18T18:29:41.501Z · LW(p) · GW(p)

Eliezer was compensated $88,610 in 2008 according to the Form 990 filed with the IRS and which I downloaded from GuideStar.

Wikipedia tells me that the median 2009 income in Redwood where Eliezer lives is $69,000.

(If you are curious, Tyler Emerson in Sunnyvale (median income 88.2k) makes 60k; Susan Fonseca-Klein also in Redwood was paid 37k. Total employee expenses is 200k, but the three salaries are 185k; I don't know what accounts for the difference. The form doesn't seem to say.)

comment by Jack · 2010-11-18T20:23:06.845Z · LW(p) · GW(p)

I can't speak for Eliezer, but I suspect that if there were a person who was obviously more qualified than him to tackle some aspect of FAI, he would acknowledge it and welcome their contributions.

What exactly are Eliezer's qualifications supposed to be?

Replies from: jimrandomh
comment by jimrandomh · 2010-11-18T20:38:20.627Z · LW(p) · GW(p)

What exactly are Eliezer's qualifications supposed to be?

You mean, "What are Eliezer's qualifications?" Phrasing it that way makes it sound like a rhetorical attack rather than a question.

To answer the question itself: lots of time spent thinking and writing about it, and some influential publications on the subject.

Replies from: Jack, XiXiDu
comment by Jack · 2010-11-18T21:44:05.697Z · LW(p) · GW(p)

I'm definitely not trying to attack anyone (and you're right my comment could be read that way). But I'm also not just curious. I figured this was the answer. Lots of time spent thinking, writing and producing influential publications on FAI is about all the qualifications one can reasonably expect (producing a provable mathematical formalization of friendliness is the kind of thing no one is qualified to do before they do it and the AI field in general is relatively new and small). And Eliezer is obviously a really smart guy. He's probably even the most likely person to solve it. But the effort to address the friendliness issue seems way too focused on him and the people around him. You shouldn't expect any one person to solve a Hard problem. Insight isn't that predictable especially when no one in the field has solved comparable problems before. Maybe Einstein was the best bet to formulate a unified field theory but a) he never did and b) he had actually had comparable insights in the past. Part of the focus on Eliezer is just an institutional and financial thing, but he and a lot of people here seem to encourage this state of affairs.

No one looks at open problems in other fields this way.

Replies from: Vladimir_Nesov, XiXiDu, multifoliaterose
comment by Vladimir_Nesov · 2010-11-18T22:09:41.806Z · LW(p) · GW(p)

No one looks at open problems in other fields this way.

Yes, the situation isn't normal or good. But this isn't a balanced comparison, since we don't currently have a field; too few people understand the problem and have seriously thought about it. This gradually changes, and I expect it will be visibly less of a problem in another 10 years.

Replies from: Jack
comment by Jack · 2010-11-18T22:15:17.887Z · LW(p) · GW(p)

I may have an incorrect impression, but SIAI or at least Eliezer's department seems to have a self-image comparable to the Manhattan project rather than early pioneers of a scientific field.

Replies from: multifoliaterose, JGWeissman, ata
comment by multifoliaterose · 2010-11-18T23:26:50.019Z · LW(p) · GW(p)

Eliezer's past remarks seem to have pointed to a self-image comparable to the Manhattan project. However, according to the new SIAI Overview:

We aim to seed the above research programs. We are too small to carry out all the needed research ourselves, but we can get the ball rolling.

comment by JGWeissman · 2010-11-18T22:33:45.094Z · LW(p) · GW(p)

They want to become comparable to the Manhattan project, in part by recruiting additional FAI researchers. They do not claim to be at that stage now.

comment by ata · 2010-11-18T22:43:58.138Z · LW(p) · GW(p)

I may have an incorrect impression, but SIAI or at least Eliezer's department seems to have a self-image comparable to the Manhattan project

Eliezer has said: "I have a policy of keeping my thoughts on Friendly AI to the object level, and not worrying about how important or unimportant that makes me." Your call as to whether you believe that. (The rest of that post, and some of his other posts in that discussion, address some points similar to those that you raised.)

That said, "self-image comparable to the Manhattan project" is an unusually generous ascription of humility to SIAI and Eliezer. :P

comment by XiXiDu · 2010-11-19T12:57:25.773Z · LW(p) · GW(p)

...producing a provable mathematical formalization of friendliness [...] And Eliezer is obviously a really smart guy. He's probably even the most likely person to solve it.

I haven't seen any proof of his math skills that would justify this statement. By what evidence have you arrived at the conclusion that he can do it at all, or even approach it? The sequences and the SIAI publications certainly show that he was able to compile a bunch of existing ideas into a coherent framework of rationality, yet there is not much novelty to be found anywhere.

Replies from: Jack
comment by Jack · 2010-11-19T13:04:59.734Z · LW(p) · GW(p)

Which statement are you talking about? Saying someone is the most likely person to do something is not the same as saying they are likely to do it. You haven't said anything in this comment that I disagree with, so I don't understand what we're disputing.

comment by multifoliaterose · 2010-11-18T23:27:15.629Z · LW(p) · GW(p)

Great comment.

comment by XiXiDu · 2010-11-18T21:03:27.078Z · LW(p) · GW(p)

To answer the question itself: lots of time spent thinking and writing about it, and some influential publications on the subject.

How influential are his publications if they could not convince Ben Goertzel (SIAI/AGI researcher), someone who has read Yudkowsky's publications and all of the LW sequences? You could argue that he and other people don't have the smarts to grasp Yudkowsky's arguments, but who does? Either Yudkowsky is so smart that some academics are unable to appreciate his work or there is another problem. How are we, we who are far below his level, supposed to evaluate if we should believe what Yudkowsky says if we are neither smart enough to do so nor able to subject his work to empirical criticism?

The problem here is that telling someone that Yudkowsky spent a lot of time thinking and writing about something is not a qualification. Further it does not guarantee that he would acknowledge and welcome the contributions of others who disagree.

Replies from: jimrandomh, WrongBot
comment by jimrandomh · 2010-11-18T21:36:41.513Z · LW(p) · GW(p)

The motivated cognition here is pretty thick. Writing is influential when many people are influenced by it. It doesn't have to be free of people who disagree with it to be influential, and it doesn't even have to be correct.

How are we, we who are far below his level, supposed to evaluate if we should believe what Yudkowsky says if we are neither smart enough to do so nor able to subject his work to empirical criticism?

Level up first. I can't evaluate physics research, so I just accept that I can't tell which of it is correct; I don't try to figure it out from the politics of physicists arguing with each other, because that doesn't work.

Replies from: XiXiDu, multifoliaterose
comment by XiXiDu · 2010-11-19T09:56:32.537Z · LW(p) · GW(p)

Level up first. I can't evaluate physics research, so I just accept that I can't tell which of it is correct; I don't try to figure it out from the politics of physicists arguing with each other, because that doesn't work.

But what does this mean regarding my support of the SIAI? Imagine I were a politician who had no time to level up first but who had to decide whether some particle accelerator or AGI project should be financed at all, or should go ahead with full support and without further safety measures.

Would you tell such a politician to go and read the sequences, and if, after reading the publications, they don't see why AGI research is as dangerous as the SIAI portrays it to be, they should just forget about it and stop trying to figure out what to do? Or do you simply tell them to trust a fringe group which predicts that a given particle accelerator might destroy the world when all experts claim there is no risk?

Writing is influential when many people are influenced by it.

You talked about Yudkowsky's influential publications. I thought you meant some academic papers, not the LW sequences. They indeed influenced some people, yet I don't think they influenced the right people.

comment by multifoliaterose · 2010-11-18T23:36:41.546Z · LW(p) · GW(p)

Downvoted for this:

The motivated cognition here is pretty thick

Your interpretation seems uncharitable. I find it unlikely that you have enough information to make a confident judgment that XiXiDu's comment is born of motivated cognition to a greater extent than your own comments.

Moreover, I believe that even when such statements are true, one should avoid making them when possible, as they're easily construed as personal attacks which tend to spawn an emotional reaction in one's conversation partners, pushing them into an "arguments as soldiers" mode which is detrimental to rational discourse.

Replies from: shokwave
comment by shokwave · 2010-11-23T08:13:06.212Z · LW(p) · GW(p)

Moreover, I believe that even when such statements are true, one should avoid making them when possible

Strongly disagree. To improve, you need to know where to improve, and if people avoid telling you when and where you're going wrong, you won't improve.

as they're easily construed as personal attacks which tend to spawn an emotional reaction in one's conversation partners

On this blog, any conversational partners should definitely not be construing anything as personal attacks.

pushing them into an arguments as soldiers mode which is detrimental to rational discourse.

On this blog, any person should definitely be resisting this push.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-11-23T08:28:08.801Z · LW(p) · GW(p)

Strongly disagree. To improve, you need to know where to improve, and if people avoid telling you when and where you're going wrong, you won't improve.

I did not say that one should avoid telling people when and where they're going wrong. I was objecting to the practice of questioning people's motivations. For the most part I don't think that questioning somebody's motivations is helpful to him or her.

On this blog, any conversational partners should definitely not be construing anything as personal attacks.

I disagree. Sometimes commentators make statements which are pretty clearly intended to be personal attacks and it would be epistemically irrational to believe otherwise. Just because the blog is labeled as being devoted to the art of refining rationality doesn't mean that the commentators are always above this sort of thing.

I agree with you insofar as I think that one should work to interpret comments charitably.

On this blog, any person should definitely be resisting this push.

I agree, but this is not relevant to the question of whether one should be avoiding exerting such a push in the first place.

Replies from: shokwave
comment by shokwave · 2010-11-23T09:16:55.272Z · LW(p) · GW(p)

I was objecting to the practice of questioning people's motivations.

Not questioning their motivations; you objected to the practice of pointing out motivated cognition:

I find it unlikely that you have enough information to make a confident judgment that XiXiDu's comment is born of motivated cognition ... Moreover, I believe that even when such statements are true, one should avoid making them when possible

Pointing out that someone hasn't thought through the issue because they are motivated not to - this is not an attack on their motivations; it is an attack on their not having thought through the issue. Allowing people to keep their motivated cognitions out of respect for their motivations is wrong, because it doesn't let them know that they have something wrong, and they miss a chance to improve it.

Sometimes commentators make statements which are pretty clearly intended to be personal attacks and it would be epistemically irrational to believe otherwise.

To paraphrase steven, if you're interested in winning disputes you should dismiss personal attacks, but if you're interested in the truth you should dig through their personal attacks for any possible actual arguments. Whether or not it's a personal attack, you ought to construe it as if it is not, in order to maximise your chances of finding truth.

this is not relevant to the question of whether one should be avoiding exerting such a push in the first place.

Agreed. I think the first two parts of our comments address whether one should exert such a push. I think you're right, and this whole third part of our discussion is irrelevant.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-11-23T15:17:08.505Z · LW(p) · GW(p)

It's quite possible to be inaccurate about other people's motivations, and if you are, then they will have another reason to dismiss your argument.

How do you identify motivated cognition in other people?

Not thinking something through could be habitual sloppiness, repeating what one has heard many times, or not thinking that a question is worthy of much mental energy rather than a strong desire for a particular conclusion. (Not intended as a complete list.)

Making a highly specific deduction from an absence rather than a presence strikes me as especially likely to go wrong.

Replies from: shokwave
comment by shokwave · 2010-11-23T16:08:12.452Z · LW(p) · GW(p)

How do you identify motivated cognition in other people?

Some of the same ways I see it in myself. Specifically, when dealing with others:

  • Opposed to easy (especially quick or instant) tests: strong evidence of motivated stopping.
  • All for difficult (especially currently-impossible) tests: moderate evidence of motivated continuing.
  • Waiting on results of specific test to reconsider or take a position: moderate evidence of motivated continuing.
  • Seemingly-obvious third alternative: very strong evidence of motivated stopping. Caveat! This one is problematic. It is very possible to miss third alternatives.
  • Opposed to plausible third alternatives: weak evidence of motivated stopping - strong evidence with a caveat and split, as "arguments as soldiers" can also produce this effect. Mild caveat on plausibility being somewhat subjective.

In the case of XiXiDu's comment, focusing on Ben Goertzel's rejection is an example of waiting on results from a specific test. That is enough evidence to locate the motivated continuing hypothesis¹ - i.e., that XiXiDu does not want to accept the current best-or-accepted-by-the-community answer.

The questions XiXiDu posed afterwards seem to have obvious alternative answers, which suggests motivated stopping. He seems to be stopping on "Something's fishy about Eliezer's setup".

¹: As well as "Goertzel is significantly ahead of the AI development curve" and "AGI research and development is a field with rigid formal rules on what does and doesn't convince people" - the first is easily tested by looking at Ben's other views, the second is refuted by many researchers in that field.

Replies from: NancyLebovitz, multifoliaterose
comment by NancyLebovitz · 2010-11-23T16:31:54.313Z · LW(p) · GW(p)

I recommend explaining that sort of thing when you say someone is engaging in motivated cognition.

I think it seems more like a discussable matter then and less like an insult.

comment by multifoliaterose · 2010-11-23T17:04:01.644Z · LW(p) · GW(p)

Thanks for engaging with me; I now better understand where jimrandomh might have been coming from. I fully agree with Nancy Lebovitz here.

comment by WrongBot · 2010-11-18T22:58:01.375Z · LW(p) · GW(p)

Ben Goertzel believes in psychic phenomena (see here for details), so his failure to be convinced by Eliezer is not strong evidence against the correctness of Eliezer's stance.

For what it's worth, Eliezer has been influential/persuasive enough to get the SIAI created and funded despite having absolutely no academic qualifications. He's also responsible for coining "Seed AI".

Replies from: XiXiDu, XiXiDu, Jack
comment by XiXiDu · 2010-11-19T10:04:01.201Z · LW(p) · GW(p)

Ben Goertzel believes in psychic phenomena (see here for details), so his failure to be convinced by Eliezer is not strong evidence against the correctness of Eliezer's stance.

Indeed, I was just trying to figure out what someone with money or power, who wants to know what the right thing to do is but who does not have the smarts, should do. Someone like a politician or billionaire who would like to support either some AGI research or the SIAI. How are they going to decide what to do if all AGI experts tell them that there is no risk from AGI research and that the SIAI is a cult, while at the same time the SIAI tells them the AGI experts are intellectually impotent and the SIAI is the only hope for humanity to survive the AI revolution? What should someone who does not have the expertise or smarts to evaluate those claims, but who nevertheless has to decide how to use their power, do? I believe this is not an unrealistic scenario, as many rich or powerful people want to do the right thing, yet do not have the smarts to see why they should trust Yudkowsky instead of hundreds of experts.

comment by XiXiDu · 2010-11-19T10:15:20.515Z · LW(p) · GW(p)

For what it's worth, Eliezer has been influential/persuasive enough to get the SIAI created and funded despite having absolutely no academic qualifications. He's also responsible for coining "Seed AI".

Interesting; when did he come up with the concept of "Seed AI"? Because it is mentioned in Karl Schroeder's Ventus (Tor Books, 2000), ISBN 978-0312871970.

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2010-11-19T12:11:31.778Z · LW(p) · GW(p)

Didn't find the phrase "Seed AI" there. One plot element is a "resurrection seed", which is created by an existing, mature evil AI to grow itself back together in case its main manifestation is destroyed. A Seed AI is a different concept: it's something the pre-AI engineers put together that grows into a superhuman AI by rewriting itself to be more and more powerful. A Seed AI is specifically a method to get to AGI from not having one, not just an AI that grows from a seed-like thing. I don't remember recursive self-improvement being mentioned with the seed in Ventus.

A precursor concept, where the initial AI bootstraps itself by merely learning things, not necessarily by rewriting its own architecture, goes all the way back to Alan Turing's 1950 paper on machine intelligence.

Replies from: XiXiDu, XiXiDu, XiXiDu
comment by XiXiDu · 2010-11-19T12:36:58.326Z · LW(p) · GW(p)

Here is a quote from Ventus:

Look at it this way. Once long ago two kinds of work converged. We'd figured out how to make machines that could make more machines. And we'd figured out how to get machines to... not exactly think, but do something very much like it. So one day some people built a machine which knew how to build a machine smarter than itself. That built another, and that another, and soon they were building stuff the men who made the first machine didn't even recognize.

[...]

And, some of the mechal things kept developing, with tremendous speed, and became more subtle than life. Smarter than humans. Conscious of more. And, sometimes, more ambitious. We had little choice but to label them gods after we saw what they could do--namely, anything.

Replies from: timtyler
comment by timtyler · 2010-11-20T12:58:29.975Z · LW(p) · GW(p)

...and here's a quote from I.J. Good, from 1965:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

He didn't coin the term "Seed AI" either.

Replies from: XiXiDu
comment by XiXiDu · 2010-11-20T13:01:37.946Z · LW(p) · GW(p)

Yes, but I believe it is a bit weird for a Wikipedia article to state that someone is the originator of the Seed AI theory when he just coined the term. I wasn't disputing anything, just trying to figure out if it is actually the case that Yudkowsky came up with the concept in the first place.

Replies from: timtyler
comment by timtyler · 2010-11-20T13:15:56.050Z · LW(p) · GW(p)

Not the concept - the term.

"Seed AI theory" probably refers to something or another in here - which did indeed originate with Yu'El.

Presumably http://en.wikipedia.org/wiki/Seed_AI should be considered to be largely SIAI marketing material.

comment by XiXiDu · 2010-11-19T12:34:30.347Z · LW(p) · GW(p)

They did not command the wealth of nations, these researchers. Although their grants amounted to millions of Euros, they could never have funded a deep-space mission on their own, nor could they have built the giant machineries they conceived of. In order to achieve their dream, they built their prototypes only in computer simulation, and paid to have a commercial power satellite boost the Wind seeds to a fraction of light speed. [...] no one expected the Winds to bloom and grow the way they ultimately did.

It is further explained that the Winds were designed to evolve on their own so they are not mere puppets of human intentions but possess their own intrinsic architecture.

In other places in the book it is explained how humans did not create their AI Gods but that they evolved themselves from seeds designed by humans.

comment by XiXiDu · 2010-11-19T12:25:23.267Z · LW(p) · GW(p)

The Winds are seed AI, in the sense provided by Yudkowsky.

ETA

Well, of course I just tried to figure out if Yudkowsky invented cheesecake and not just some special recipe of cheesecake.

comment by Jack · 2010-11-19T12:18:50.163Z · LW(p) · GW(p)

Ben Goertzel believes in psychic phenomena (see here for details), so his failure to be convinced by Eliezer is not strong evidence against the correctness of Eliezer's stance.

I don't think the failure of someone to be convinced of some position is ever strong evidence against that position. But this argument here is genuinely terrible. I disagree with person x about y, therefore person x is wrong about z? Do we even have to go into why this is fallacious?

Replies from: WrongBot, wedrifid, komponisto
comment by WrongBot · 2010-11-19T17:56:48.138Z · LW(p) · GW(p)

If someone is unable to examine the available evidence and come to a sane conclusion on a particular topic, this makes it less likely that they are able to examine the available evidence and come to sane conclusions on other topics.

I don't take Goertzel seriously for the same reason I don't take young earth creationists seriously. It's not that I disagree with him, it's that his beliefs have almost no connection to reality.

(If it makes you feel better, I have read some of Goertzel's writing on AGI, and it's stuffed full of magical thinking.)

Replies from: ata, Jack
comment by ata · 2010-11-19T18:28:08.269Z · LW(p) · GW(p)

(If it makes you feel better, I have read some of Goertzel's writing on AGI, and it's stuffed full of magical thinking.)

I'd be interested to hear more about that.

Replies from: WrongBot
comment by WrongBot · 2010-11-20T02:29:33.172Z · LW(p) · GW(p)

From Ten Years to a Positive Singularity:

And computer scientists haven’t understood the self – because it isn’t about computer science. It’s about the emergent dynamics that happen when you put a whole bunch of general and specialized pattern recognition agents together – a bunch of agents created in a way that they can really cooperate – and when you include in the mix agents oriented toward recognizing patterns in the society as a whole.

and

The goal systems of humans are pretty unpredictable, but a software mind like Novamente is different – the goal system is better-defined. So one reasonable approach is to make the first Novamente a kind of Oracle. Give it a goal system with one top-level goal: To answer peoples’ questions, in a way that’s designed to give them maximum understanding.

From The Singularity Institute's Scary Idea (And Why I Don't Buy It):

It's possible that with sufficient real-world intelligence tends to come a sense of connectedness with the universe that militates against squashing other sentiences.

From Chance and Consciousness:

At the core of this theory are two very simple ideas:

1) that consciousness is absolute freedom, pure spontaneity and lawlessness; and

2) that pure spontaneity, when considered in terms of its effects on structured systems, manifests itself as randomness. (Emphasis his.)

And pretty much all of On the Algebraic Structure of Consciousness and Evolutionary Quantum Computation.

This is all just from fifteen minutes of looking around his website. I'm amazed anyone takes him seriously.

Replies from: ata, David_Gerard
comment by ata · 2010-11-20T03:32:42.825Z · LW(p) · GW(p)

From Chance and Consciousness

Oh...
wow.

I think that paper alone proves your point quite nicely.

Replies from: jimrandomh
comment by jimrandomh · 2010-11-20T04:10:30.210Z · LW(p) · GW(p)

I mostly disagree with Ben, but I don't think judging him based on that paper is fair. It's pretty bad, but it was also written in 1996. Fourteen years is a lot of time to improve as a thinker.

Replies from: ata
comment by ata · 2010-11-20T04:25:49.121Z · LW(p) · GW(p)

I had that thought too, and I was thinking of retracting or amending my comment to that effect, but looking at some of his later publications in the same journal(?) suggests that he hasn't leveled up much since then.

comment by David_Gerard · 2010-11-20T02:36:10.121Z · LW(p) · GW(p)

"The Futility Of Emergence" really annoys me. It's a perfectly useful word. It's a statement about the map rather than about the territory, but it's a useful one. Whereas magic means "unknowable unknowns", emergent means "known unknowns" - the stuff that we know follows, we just don't know how.

e.g. Chemistry is an emergent property of the Schrodinger equation, but calculating anything useful from that is barely in our grasp. So we just go with the abstraction we know, and they're separate sciences. But we do know we have that work to do.

Just linking to that essay every time someone you're disagreeing with says "emergent" is difficult to distinguish from applause lights.

Replies from: WrongBot, timtyler
comment by WrongBot · 2010-11-20T03:00:06.020Z · LW(p) · GW(p)

Saying the word "emergent" adds nothing. You're right that it's not as bad as calling something magic and declaring that it's inherently unknowable, but it also offers zero explanatory power. To reword your example:

Chemistry is a property of the Schrodinger equation, but calculating anything useful from that is barely in our grasp. So we just go with the abstraction we know, and they're separate sciences. But we do know we have that work to do.

There is absolutely no difference in meaning when you take the word "emergent" out. That's why it isn't useful, which Eliezer was pointing out.

Replies from: Sniffnoy, Vaniver
comment by Sniffnoy · 2010-11-20T03:57:49.189Z · LW(p) · GW(p)

Nitpick: I don't think that is exactly what EY was pointing out. Take a look at the comments and the general response of "Huh? Who makes that mistake?" It seems EY was complaining about the tendency of AGI researchers to use "emergence" as if it were an explanation, not ordinary use of the word that doesn't pretend it is one but just, say, points out that the behavior is surprising given what it's composed of, or that your current methods aren't powerful enough to predict the consequences. He didn't seem to have realized that particular mistake was mostly localized to AGI people.

Replies from: timtyler, WrongBot
comment by timtyler · 2010-11-20T12:50:18.096Z · LW(p) · GW(p)

It seems more likely that when the cited people said "intelligence is an emergent phenomenon", they were misunderstood as proposing that as a satisfactory explanation of the phenomenon.

comment by WrongBot · 2010-11-20T04:05:21.821Z · LW(p) · GW(p)

Nitpick accepted.

comment by Vaniver · 2010-11-20T03:15:01.825Z · LW(p) · GW(p)

There is absolutely no difference in meaning when you take the word "emergent" out. That's why it isn't useful, which Eliezer was pointing out.

I'm not entirely sure this is correct. I wouldn't call the trajectories of planets and galaxies "properties" of Relativity, but I would call them emergent behavior due to Relativity. It's a stylistic and grammatical choice, like when to use "which" and when to use "that." They may seem the same to the uninitiated, but there's a difference and the initiated can tell when you're doing it wrong.

So, I agree with David Gerard that trying to eradicate the use of the word is misplaced. It'd be like saying "the word 'which' is obsolete, we're only going to use 'that' and look down on anyone still using 'which'." You lose far more by such a policy than you gain.

comment by timtyler · 2010-11-20T12:47:06.140Z · LW(p) · GW(p)

IIRC, that post was adequately dismantled in its comments.

comment by Jack · 2010-11-19T18:28:38.018Z · LW(p) · GW(p)

I don't take Goertzel seriously for the same reason I don't take young earth creationists seriously. It's not that I disagree with him, it's that his beliefs have almost no connection to reality.

From what I've seen, the people who comment here who have read Broderick's book have come away, if not convinced psi describes some real physical phenomena, convinced that the case isn't at all open and shut the way young earth creationism is. When an issue is such that smart, sane people can disagree, then you have to actually resolve the object-level disagreement before you can use someone's beliefs on the issue in a general argument about their rationality. You can't just assume it as you do here.

Replies from: wedrifid
comment by wedrifid · 2010-11-19T18:41:59.200Z · LW(p) · GW(p)

You can't just assume it as you do here.

Yes, here WrongBot is safe to assume basic physics.

Edit for the sake of technical completeness: And biology.

Replies from: Jack
comment by Jack · 2010-11-19T18:49:34.065Z · LW(p) · GW(p)

Goertzel's paper on the subject is about extending the de Broglie-Bohm pilot wave theory in a way that accounts for psi while being totally consistent with all known physics. Maybe it is nonsense, I haven't read it. But you can't assume it is.

Replies from: wedrifid
comment by wedrifid · 2010-11-19T18:59:58.667Z · LW(p) · GW(p)

Maybe it is nonsense, I haven't read it. But you can't assume it is.

I disagree. I do not need to (and should not) discard my priors when evaluating claims.

It would be an error in reasoning on my part if I did not account for the low prior probability (prior to reading it) of a psionics theory being sane when evaluating the proponent's other claims. For emphasis: not lowering my confidence in Goertzel's other beliefs because he is a proponent of psi, without my having read his paper, would be an outright mistake.

I also note that you defending Goertzel on the psi point is evidence against Goertzel's beliefs regarding AI. Extremely weak evidence.

Replies from: Jack
comment by Jack · 2010-11-19T19:12:56.978Z · LW(p) · GW(p)

I also note that you defending Goertzel on the psi point is evidence against Goertzel's beliefs regarding AI. Extremely weak evidence.

Huh?

Replies from: wedrifid
comment by wedrifid · 2010-11-19T19:30:33.495Z · LW(p) · GW(p)

I mean what is written in the straightforward English sense. I mention it to emphasize that all evidence counts.

Replies from: FAWS
comment by FAWS · 2010-11-20T00:57:48.068Z · LW(p) · GW(p)

Could you unpack your reasoning? Do you mean that Jack defending Goertzel on psi discredits defense of Goertzel on AI because it shows such defense to be less correlated to the validity of the opinion than previously thought? Or did you drop a negation or something and mean the opposite of what you wrote, because Jack defending Goertzel on psi is very slight evidence of Goertzel's opinion on psi not being as crazy as you previously thought?

comment by wedrifid · 2010-11-19T18:06:36.696Z · LW(p) · GW(p)

I don't think the failure of someone to be convinced of some position is ever strong evidence against that position.

Ever is a strong word. If a competent expert in a field, who has a known tendency to err slightly on the side of too much openness to the cutting edge, fails to be convinced by a new finding within his field, that says an awful lot.

I disagree with person x about y, therefore person x is wrong about z? Do we even have to go into why this is fallacious?

That is simply not the form of the argument you quote. "Ben Goertzel believes in psychic phenomenon" can not be represented as "I disagree with person x ".

Replies from: Jack
comment by Jack · 2010-11-19T18:16:05.252Z · LW(p) · GW(p)

That is simply not the form of the argument you quote. "Ben Goertzel believes in psychic phenomenon" can not be represented as "I disagree with person x ".

I'm being generous and giving the original comment credit for an implicit premise. As stated, the argument is "Person x believes y, therefore person x is wrong about z." This is so obviously wrong it makes my head hurt. WrongBot's point is that someone has to have a poor reasoning capacity to believe in psi. But since he didn't provide any evidence to that effect, it reduces to 'I disagree with Goertzel about psi'.

Fair point re: "ever".

Replies from: WrongBot, wedrifid
comment by WrongBot · 2010-11-19T18:32:37.001Z · LW(p) · GW(p)

I generally don't try to provide evidence for every single thing I say, and I am especially lax about things that I consider to be incredibly obvious.

But I'm annoyed enough to lay out a very brief summary of why belief in PSI is ludicrous:

  • It isn't permitted by known physics.
  • There are no suggested mechanisms (so far as I'm aware) for PSI which do not contradict proven physical laws.
  • The most credible studies which claim to demonstrate PSI have tiny effect sizes, and those haven't been replicated with larger sample sizes.
  • Publication bias.
  • PSI researchers often seem to possess motivated cognition.
  • We've analyzed the functioning of individual neurons pretty closely. If there are quantum microtubules or other pseudoscientific nonsense in them, they don't seem to affect how those individual neurons behave.
  • Etc.
Replies from: Jack
comment by Jack · 2010-11-19T19:08:45.327Z · LW(p) · GW(p)

No one has to give evidence for everything they say, but when things that you thought were obviously wrong begin to get defended by physics-literate reductionist materialists, that seems like a good time to lower your confidence.

There are no suggested mechanisms (so far as I'm aware) for PSI which do not contradict proven physical laws.

Well, to begin with, Goertzel's paper claims to be such a mechanism. Have you read it? I don't know if it works or not. It seems unwise to assume it doesn't, though.

Publication bias, motivated cognition and effect size are all concerns, and they were my previous explanation. But I found that this meta-analysis upset that view for me.

Replies from: WrongBot
comment by WrongBot · 2010-11-19T21:20:57.251Z · LW(p) · GW(p)

Oh man! I left out the most important objection!

If PSI exploits weird physics in a complicated manner and produces such tiny effects, where the hell did the mechanism come from? PSI would obviously be a very useful adaptation, so why don't we see it in other species? Why aren't the effects stronger, since there's such a strong evolutionary pressure in favor of them?

Goertzel's paper also includes psychokinesis as a PSI phenomenon supported by strong evidence. I would love to see the study he's talking about for that one. Or a video.

Replies from: Jack
comment by Jack · 2010-11-20T00:32:22.196Z · LW(p) · GW(p)

If PSI exploits weird physics in a complicated manner and produces such tiny effects, where the hell did the mechanism come from? PSI would obviously be a very useful adaptation, so why don't we see it in other species? Why aren't the effects stronger, since there's such a strong evolutionary pressure in favor of them?

All of this is also discussed in Outside the Gates. I can try to dig up what he said this weekend.

Goertzel's paper also includes psychokinesis as a PSI phenomenon supported by strong evidence. I would love to see the study he's talking about for that one. Or a video.

The experiments aren't macroscopic. The results involve statistical deviations from expected normal distributions of, say, white noise generators when participants try to will the results in different directions. I don't think these results are nearly as compelling as other things; see Jahn and Dunne 2005, for example. They had some methodological issues, and the one attempt that was made at replication, while positive, wasn't significant at anywhere near the level of the original.

If you're actually interested you should consider checking out the book. It is a quick, inexpensive read. Put it this way: I'm not some troll who showed up here to argue about parapsychology. Six months ago I was arguing your position here with someone else and they convinced me to check out the book. I then updated significantly in the direction favoring psi (not enough to say it exists more likely than not, though). Everything you've said is exactly what I was saying before. It turns out that there are sound responses to a lot of the obvious objections, making the issue not nearly as clear cut as I thought.

comment by wedrifid · 2010-11-19T18:25:44.513Z · LW(p) · GW(p)

As stated, the argument is "Person x believes y, therefore person x is wrong about z." This is so obviously wrong it makes my head hurt.

It would be wrong if it were a logical deduction instead of an inference. That is, if WrongBot had actually written 'therefore' or otherwise signaled absolute deductive certainty, then he would be mistaken. As it is, he presents it as evidence, which it in fact is.

WrongBot's point is that someone has to have a poor reasoning capacity to believe in psi. But since he didn't provide any evidence to that effect, it reduces to 'I disagree with Goertzel about psi'.

There is a clear implied premise: 'psychic phenomena are well known to be bullshit'. Not all baseline premises must be supported in an argument. Instead, the argument should be considered stronger or weaker depending on how reliable the premises are. I don't think WrongBot loses too much credibility in this case by dismissing psychic phenomena.

Replies from: Jack
comment by Jack · 2010-11-19T18:38:59.872Z · LW(p) · GW(p)

It would be wrong if it were a logical deduction instead of an inference. That is, if WrongBot had actually written 'therefore' or otherwise signaled absolute deductive certainty, then he would be mistaken. As it is, he presents it as evidence, which it in fact is.

It isn't even evidence until you include a premise about the likelihood of y, which we agree is the implied premise.

There is a clear implied premise: 'psychic phenomena are well known to be bullshit'. Not all baseline premises must be supported in an argument. Instead, the argument should be considered stronger or weaker depending on how reliable the premises are. I don't think WrongBot loses too much credibility in this case by dismissing psychic phenomena.

I think I'm just restating the exchange I had with komponisto on this point. Goertzel's position isn't that of someone who doesn't know any physics or Enlightenment-style rationality. It is clearly a contrarian position which should be treated rather differently, since we can assume he is familiar with the reasons why psychic phenomena are 'well known to be bullshit'. It is a fully generalizable tactic which can be used against any and all contrarian thinkers. Try "Robin Hanson thinks we should cut health care spending 50%, therefore he is less likely to be right about fertility rate."

Replies from: wedrifid
comment by wedrifid · 2010-11-19T18:52:19.672Z · LW(p) · GW(p)

It isn't even evidence until you include a premise about the likelihood of y, which we agree is the implied premise.

This is obviously going to be the case when trying to convince an individual of something. The beliefs (crackpot or otherwise) of the target audience are always going to be relevant to persuasiveness. For a comment directed in part at the wider Less Wrong audience, the assumed premises will be different.

Try "Robin Hanson thinks we should cut health care spending 50%, therefore he is less likely to be right about fertility rate."

If I were a reader who thought Robin's position on health care was as implausible as belief in magic, and thought that making claims about the fertility rate was similar to AI strategy, then I would take this seriously. As it stands, the analogy is completely irrelevant.

comment by komponisto · 2010-11-19T13:05:22.577Z · LW(p) · GW(p)

I disagree with person x about y, therefore person x is wrong about z? Do we even have to go into why this is fallacious?

The extent to which it is fallacious depends rather strongly on what y and z (and even x) are, it seems to me.

Replies from: Jack
comment by Jack · 2010-11-19T13:11:13.341Z · LW(p) · GW(p)

Any argument of this nature needs to include some explanation of why someone's ability to think about y is linked to their ability to think about z. But even with that (which wasn't included in the comment) you can only conclude that y and z imply each other. You can't just conclude z.

In other words, you have to show Goertzel is wrong about psychic phenomenon before you can show that his belief in it is indicative of reasoning flaws elsewhere.

Replies from: komponisto
comment by komponisto · 2010-11-19T13:30:48.930Z · LW(p) · GW(p)

I don't disagree in principle, but psychic phenomena are pretty much fundamentally ruled out by current physics. So a person's belief in them raises serious doubts about that person's understanding of science at the very least, if not their general rationality level.

Replies from: Risto_Saarelma, Jack
comment by Risto_Saarelma · 2010-11-19T14:42:16.868Z · LW(p) · GW(p)

I got the impression from Damien Broderick's book that a lot of PSI researchers do understand physics and aren't postulating that PSI phenomena use the sort of physical interactions gravity or radio waves use. There's a story that Einstein was interested in PSI research, but declared it nonsense when the claimed results showed PSI effects that weren't subject to the inverse square law, so this isn't a new idea.

Damien Broderick's attitude in his book is basically that there's a bunch of anomalous observations and that neither a satisfactory explanation nor, in his opinion, a refutation for them exists. Goertzel's attitude is to come up with a highly speculative physical theory that could explain that kind of phenomena, and which would take a bit more than "would need extra particles" to show as nonsense.

"Not understanding basic physics" doesn't really seem to cut it in either case. "It's been looked into by lots of people, a few of them very smart, for 80 years, and nothing conclusive has come out of it, so most likely there isn't anything in it, and if you still want to have a go, you better start with something the smart people in 1970s didn't have" is basically the one I've got.

I'm not holding my breath over the recent Bem results, since he seems to be doing pretty much the same stuff that was done in the 70s and always ended up failing one way or the other, but I'm still waiting for someone more physics-literate to have a go at Goertzel's pilot wave paper.

Replies from: komponisto
comment by komponisto · 2010-11-19T15:24:26.450Z · LW(p) · GW(p)

I got the impression from Damien Broderick's book that a lot of PSI researchers do understand physics and aren't postulating that PSI phenomena use the sort of physical interactions gravity or radio waves use...

"Not understanding basic physics" doesn't really seem to cut it in either case

"Not understanding basic physics" sounds like a harsh quasi-social criticism, like "failing at high-school material". But that's not exactly what's meant here. Rather, what's meant is more like "not being aware of how strong the evidence against psi from 20th-century physics research is".

The Bayesian point here is that if a model M assigns a low probability to hypothesis H, then evidence in favor of M is evidence against H [EDIT: technically, this is not necessarily true, but it usually is in practice, and becomes more likely as P(H|M) approaches 0]. Hence each high-precision experiment that confirms quantum field theory counts the same as zillions of negative psi studies.
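
A minimal numerical sketch of this point, with all probabilities invented purely for illustration (and assuming, per the caveat above, that the observation bears on the hypothesis only through which model is true):

```python
# Toy illustration (all numbers invented): evidence that favors a model M which
# assigns psi a low probability ends up lowering the probability of psi, assuming
# the observation e bears on psi only through which model is true.

p_M = 0.5                  # prior on the model (e.g. QFT-as-we-understand-it)
p_H_given_M = 0.001        # P(psi | M): the model says psi is very unlikely
p_H_given_notM = 0.3       # P(psi | ~M): psi is more plausible if the model is wrong

p_e_given_M = 0.99         # a high-precision experimental outcome the model predicts
p_e_given_notM = 0.20      # the same outcome is less likely if the model is wrong

# Probability of psi before seeing e, marginalizing over models
p_H_prior = p_M * p_H_given_M + (1 - p_M) * p_H_given_notM

# Posterior on the model after seeing e
p_M_post = (p_e_given_M * p_M) / (p_e_given_M * p_M + p_e_given_notM * (1 - p_M))

# Probability of psi after seeing e
p_H_post = p_M_post * p_H_given_M + (1 - p_M_post) * p_H_given_notM

print(round(p_H_prior, 3), round(p_H_post, 3))   # ~0.15 before, ~0.05 after
```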

Replies from: Jack
comment by Jack · 2010-11-19T16:03:15.937Z · LW(p) · GW(p)

The Bayesian point here is that if a model M assigns a low probability to hypothesis H, then evidence in favor of M is evidence against H. Hence each high-precision experiment that confirms quantum field theory counts the same as zillions of negative psi studies.

Evidence distinguishes between, not for, individual models. There may be models that are consistent with the experiments that confirm quantum field theory but also give rise to explanations for anomalous cognition.

Replies from: komponisto
comment by komponisto · 2010-11-19T23:36:29.645Z · LW(p) · GW(p)

Evidence distinguishes between, not for, individual models.

By the Bayesian definition of evidence, "evidence for" a hypothesis (including a "model", which is just a name for a complex conjunction of hypotheses) simply means an observation more likely to occur if the hypothesis is true than if it is false.

There may be models that are consistent with the experiments that confirm quantum field theory but also give rise to explanations for anomalous cognition.

Carroll claims that current data implies the probability of such models being correct is near zero. So I'd like to invoke Aumann here and ask what your explanation for the disagreement is. Where is Carroll's (and others') mistake?

Replies from: Jack, wnoise
comment by Jack · 2010-11-22T16:59:50.949Z · LW(p) · GW(p)

including a "model", which is just a name for a complex conjunction of hypotheses

If models are just complex conjunctions of hypotheses, then the evidence that confirms models will often confirm some parts of the model more than others. Thus the evidence does little to distinguish the model from a different model which incorporates slightly different hypotheses.

That is all I meant.
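
To make that concrete with invented numbers: if two conjunctions share one part and the evidence bears only on that shared part, the evidence leaves their ratio untouched.

```python
# Two models that share hypothesis A but differ in a second conjunct (B vs B').
# Evidence that bears only on A confirms both conjunctions equally, so it leaves
# their relative standing untouched. All numbers are invented.

p_e_given = {"A&B": 0.9, "A&B'": 0.9, "~A": 0.1}   # e depends only on whether A holds
priors    = {"A&B": 0.4, "A&B'": 0.1, "~A": 0.5}

z = sum(p_e_given[h] * priors[h] for h in priors)
posterior = {h: p_e_given[h] * priors[h] / z for h in priors}

print(priors["A&B"] / priors["A&B'"])          # 4.0 before the evidence
print(posterior["A&B"] / posterior["A&B'"])    # still 4.0 after the evidence
```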

comment by wnoise · 2010-11-20T01:27:48.695Z · LW(p) · GW(p)

By the Bayesian definition of evidence, "evidence for" a hypothesis (including a "model", which is just a name for a complex conjunction of hypotheses) simply means an observation more likely to occur if the hypothesis is true than if it is false.

Yes, but this depends on what other hypotheses are considered in the "false" case.

Replies from: komponisto
comment by komponisto · 2010-11-20T02:09:17.288Z · LW(p) · GW(p)

The "false" case is the disjunction of all other possible hypotheses besides the one you're considering.

Replies from: wnoise
comment by wnoise · 2010-11-20T02:58:35.891Z · LW(p) · GW(p)

That's not computable. (EDIT: or even well defined). One typically works with some limited ensemble of possible hypotheses.

Replies from: komponisto
comment by komponisto · 2010-11-20T03:53:37.220Z · LW(p) · GW(p)

One typically works with some limited ensemble of possible hypotheses

Explicitly, that may be the case; but at least implicitly, there is always (or at least there had better be) an additional "something not on this list" hypothesis that covers everything else.

You appear to be thinking in terms of ad-hoc statistical techniques ("computable", "one typically works..."), rather than fundamental laws governing belief. But the latter is what we're interested in in this context: we want to know what's true and how to think, not what we can publish and how to write it up.

Replies from: wnoise
comment by wnoise · 2010-11-20T08:23:19.124Z · LW(p) · GW(p)

Let me put it this way: excluding a hypothesis from the model space is merely the special case of setting its prior to zero. Whether a given piece of evidence counts for or against a hypothesis is in fact dependent on the priors of all other hypotheses, even if no hypothesis goes from possible to not or vice-versa.

As this is prior dependent, there is no objective measure of whether a hypothesis is supported or rejected by evidence.

(This is obviously true when we look at P(H_i|e). It's a bit less so when we look at P(e|H) vs P(e|~H). This seems objective. It is objective in the case that H and ~H are atomic hypotheses with a well-defined rule for getting P(e|~H). But if ~H is an "or" of all the other theories, then P(e|~H) is dependent on the prior probabilities of each of the H_i that are the subcomponents of ~H. It's also utterly useless by itself for judging H. We want to know P(H|e) for that. P(e|H) is of course why we want P(H), so we can make useful predictions.)
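
A small sketch of that prior-dependence, with invented numbers: the same observation can count for or against H depending purely on how prior mass is spread over the sub-hypotheses making up ~H.

```python
# P(e|~H), when ~H is the disjunction of H1 and H2, is a prior-weighted mixture
# of the sub-hypotheses' likelihoods -- so reweighting the priors changes whether
# the same observation counts for or against H. All numbers are invented.

def p_e_given_not_h(likelihoods, priors):
    """Likelihood of e under ~H, with the priors renormalized over its sub-hypotheses."""
    total = sum(priors)
    return sum(l * p / total for l, p in zip(likelihoods, priors))

p_e_given_h = 0.5               # likelihood of the observation under H itself
sub_likelihoods = [0.9, 0.1]    # P(e|H1), P(e|H2) for the alternatives making up ~H

for priors in ([0.8, 0.2], [0.2, 0.8]):
    ratio = p_e_given_h / p_e_given_not_h(sub_likelihoods, priors)
    print(priors, round(ratio, 2))   # the likelihood ratio H:~H flips from <1 to >1
```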

It is true that in the long run much evidence will eventually dominate any prior. But summarizing this as "log odds", for instance, is only useful for comparing two specific hypotheses, not "this hypothesis" and "everything else".

But I still have objections to most of what you say.

You've given an essentially operational definition of "evidence for" in terms of operations that can't be done.

Explicitly, that may be the case; but at least implicitly, there is always (or at least there had better be) an additional "something not on this list" hypothesis that covers everything else.

Yes. The standard way to express that is that you can't actually work with P(Hypothesis), only P(Hypothesis | Model Space).

You can then, of course, expand your model space if you find it is inadequate.

You appear to be thinking in terms of ad-hoc statistical techniques ("computable",

"Computable" is hardly ad-hoc. It's a fundamental restriction on how it is possible to reason.

we want to know what's true and how to think,

If you want to know how to think, you had better pick a method that's actually possible.

This really is just another facet of "all Bayesian probabilities are conditional."

Replies from: komponisto
comment by komponisto · 2010-11-20T16:39:43.160Z · LW(p) · GW(p)

Let me put it this way: excluding a hypothesis from the model space is merely the special case of setting its prior to zero.

And you shouldn't do that.

Whether a given piece of evidence counts for or against a hypothesis is in fact dependent on the priors of all other hypotheses

Yes, of course. The point is that if you're using probability theory to actually reason, and not merely to set up a toy statistical model such as might appear in a scientific paper, you will in fact already be "considering" all possible hypotheses, not merely a small important-looking subset. Now it's true that what you won't be doing is enumerating every possible hypothesis on the most fine-grained level of description, and then computing the information-theoretic complexity of each one to determine its prior -- since, as you point out, that's computationally intractable. Instead, you'll take your important-looking subset just as you would in the science paper, let's say H1, H2, and H3, but then add to that another hypothesis H4, which represents the whole rest of hypothesis-space, or in other words "something I didn't think of"/"my paradigm is wrong"/etc. And you have to assign a nonzero probability to H4.
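
A rough sketch of that bookkeeping, with invented numbers (in particular, the likelihood assigned to the catch-all H4 can only ever be a crude guess):

```python
# Bayesian update over an explicit subset {H1, H2, H3} plus a catch-all H4
# ("something I didn't think of"). All numbers are invented; in particular
# P(e|H4) can only ever be a rough guess.

priors = {"H1": 0.50, "H2": 0.30, "H3": 0.15, "H4": 0.05}        # H4 must be nonzero
likelihoods = {"H1": 0.40, "H2": 0.10, "H3": 0.05, "H4": 0.20}   # P(e|Hi)

def update(priors, likelihoods):
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    z = sum(joint.values())
    return {h: j / z for h, j in joint.items()}

posterior = update(priors, likelihoods)
print(posterior)
# If surprising observations keep favoring H4, its posterior grows -- and that is
# the point at which you start carving the H4 region into finer sub-hypotheses.
```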

Yes. The standard way to express that is that you can't actually work with P(Hypothesis), only P(Hypothesis | Model Space).

No, see above. In science papers, "paradigm shifts" happen, and you "change your model space". Not in abstract Bayesianism. In abstract Bayesianism, low-probability events happen, and you update accordingly. The result will look similar to "changing your model space", because what happens is that when H4 turns out to be true (i.e. its probability is raised to something high), you then start to carve up the H4 region of hypothesis-space more finely and incorporate these "new" sub-hypotheses into your "important-looking subset".

To return to the issue at hand in this thread, here's what's going on as I see it: physicists, acting as Bayesians, have assigned very low probability to psi being true given QFT, and they have assigned a very high probability to QFT. In so doing, they've already considered the possibility that psi may be consistent with QFT, and judged this possibility to be of near-negligible probability. That was done in the first step, where they said "P(psi|QFT) is small". It doesn't do to reply "well, their paradigm may be wrong"; yes, it may, but if you think the probability of that is higher than they do, then you have to confront their analysis. Sean Carroll's post is a defense of the proposition that "P(psi|QFT) is small"; Jack's comment is an assertion that "psi&QFT may be true", which sounds like an assertion that "P(psi|QFT) is higher than Sean Carroll thinks it is" -- in which case Jack would need to account somehow for Carroll being mistaken in his analysis.

Replies from: Jack
comment by Jack · 2010-11-22T16:39:55.305Z · LW(p) · GW(p)

"P(psi|QFT) is higher than Sean Carroll thinks it is" -- in which case Jack would need to account somehow for Carroll being mistaken in his analysis.

This is basically my position. ETA: I may assign a high probability to "not all of the hypotheses that make up QFT are true", a position I believe I can hold while not disputing the experimental evidence supporting QFT (though such evidence does decrease the probability of any part of QFT being wrong).

I don't think Carroll's analysis comes close to showing that P(psi|QFT) is 1 in a billion. He took one case, a psychokinesis claim that no one in parapsychology endorses, and showed how it was impossible given one interpretation of what the claim might mean. We can't look at his analysis and take it as convincing evidence that the claims of parapsychologists aren't consistent with QFT, since Carroll doesn't once mention any of the claims made by parapsychologists!

Now there are some studies purporting to show psychokinesis (though they are less convincing than the precognition studies and actually might just be a kind of precognition). Even in these cases no one in parapsychology thinks the perturbations are the result of EM or gravitational fields; Carroll pointing out that they can't be shouldn't result in us updating on anything.

I actually think a physicist might be able to write a convincing case for why the claims of parapsychologists can't be right. I think there is a good chance I don't grasp just how inconsistent these claims are with known physics-- and that is one of the reasons why fraud/methodology problems/publication bias still dominate my probability space regarding parapsychology. But Carroll hasn't come close to writing such a case. I think the reason you think he has is that you're not familiar with a) the actual claims of parapsychologists or b) the various but inconclusive attempts to explain parapsychology results without contradicting the experimental evidence confirming QFT.

Replies from: David_Gerard
comment by David_Gerard · 2010-11-22T17:00:18.863Z · LW(p) · GW(p)

The worked example he provides shows what physics would require to exist for telekinesis to exist at all: a new force that is somehow of at least comparable strength to electromagnetism, but that has somehow never been detected by experiments so sensitive that they would detect any new force more than a billionth the strength of gravity. And there are indeed parapsychologists who claim telekinesis is worth investigating.

It is not unreasonable for Carroll, having given a worked example of applying extremely well-understood physics to the question, to then expect parapsychologists to apply extremely well-understood physics to their other questions. His point (as he states in the article) is that they keep starting from an assumption that science knows nothing relevant to the questions parapsychologists are asking, rather than starting from an assumption that known science could be used to make testable, falsifiable predictions.

He doesn't have to do the worked example for every phenomenon that parapsychology claims is worth serious investigation to make his point valid. Ignoring the existence of relevant known science is one reason parapsychology is a pseudoscience (a partial imitation) rather than science.

Replies from: Jack, Jack
comment by Jack · 2010-11-22T23:33:04.527Z · LW(p) · GW(p)

I could be wrong, but I think you added to this comment since I replied. Since all of my comments on the topic are getting downvoted without explanation, I'll be short.

And there are indeed parapsychologists who claim telekinesis is worth investigating.

But not spoon bending so much. In any case, being concerned about force fields is only worthwhile if you assume what is going on is cause and effect, which many, maybe most, of the attempts at explanation don't.

This is really getting away from what Komponisto and I were talking about. I'm not really disputing the claim that parapsychology is a pseudo-science. I'm disputing the claim that Carroll's analysis shows that the claims of parapsychology are fundamentally ruled out by current physics. I haven't really thought about delineation issues regarding parapsychology.

comment by Jack · 2010-11-22T17:04:58.735Z · LW(p) · GW(p)

His point is that they keep starting from an assumption that science knows nothing relevant to the questions parapsychologists are asking, rather than starting from an assumption that known science could be used to make testable, falsifiable predictions.

But he gives no evidence that parapsychologists start from this assumption. Plenty of parapsychologists know that no force fields produced by the brain could be responsible for the effects they think they've found. That's sort of their point, actually.

There are lots of silly people in the field who think the results imply dualism, of course -- but that's precisely why it would be nice to have materialists tackle the questions.

Replies from: David_Gerard
comment by David_Gerard · 2010-11-22T22:27:29.232Z · LW(p) · GW(p)

There are no significant results from parapsychologists who are aware of physics. Instead, we have results claiming statistical significance from parapsychologists whose studies have obviously defective experimental design and/or (usually and) turn out to be unreplicable.

That is, you describe sophisticated parapsychologists but the prominent results are from unsophisticated ones.

Replies from: Jack
comment by Jack · 2010-11-22T22:50:38.306Z · LW(p) · GW(p)

Cite?

ETA: Bem, for example, whose study initiated this discussion, has a BA and did graduate work in physics.

comment by Jack · 2010-11-19T14:02:42.504Z · LW(p) · GW(p)

This isn't someone with tarot cards talking about using crystal energy to talk to your dead grandparent. To condemn someone for holding a similar position to the uneducated is to rule out contrarian thought before any debate occurs. Humans are still confused enough about the world that there is room for change in our current understanding of physics. There are some pretty compelling results in parapsychology, much or all of which may be due to publication bias, methodological issues or fraud. But that isn't obviously the case; waving our hands and throwing out these words isn't an explanation of the results. I'm going to try to make a post on this subject a priority now.

Replies from: komponisto
comment by komponisto · 2010-11-19T14:19:46.339Z · LW(p) · GW(p)

This isn't someone with tarot cards talking about using crystal energy to talk to your dead grandparent. To condemn someone for holding a similar position to the uneducated is to rule out contrarian thought before any debate occurs.

Did you read the linked post by Sean Carroll? Parapsychologists aren't condemned for holding a similar position to the uneducated; they're condemned for holding a position blatantly inconsistent with quantum field theory on the strength of evidence much, much weaker than the evidence for quantum field theory. Citing a century's worth of experimentally confirmed physical knowledge is far from hand-waving.

Humans are still confused enough about the world that there is room for change in our current understanding of physics

Again, this is explicitly addressed by Carroll. Physicists are not confused in the relevant regimes here. Strong evidence that certain highly precise models are correct has been obtained, and this constrains where we can reasonably expect future changes in our current understanding of physics.

Now, I'm not a physicist, so if I'm actually wrong about any of this, I'm willing to be corrected. But, as the saying goes, there is a time to confess ignorance, and a time to relinquish ignorance.

Replies from: Jack
comment by Jack · 2010-11-19T15:48:57.318Z · LW(p) · GW(p)

Physicists are not confused in the relevant regimes here.

We don't know what the relevant regimes are here. Obviously human brains aren't producing force fields that are bending spoons.

We have some experimental results. No one has any idea what they mean, except that it looks like something weird is happening. People are reacting to images they haven't seen yet, and we don't have any good explanation for these results. Maybe it is fraud (with what motivation?), maybe there are methodological problems (but often no one can find any), maybe there is just publication bias (but it would have to be really extensive to explain the results in the precognition meta-analysis).

On the other hand, maybe our physics isn't complete enough to explain what is going on. Maybe a complete understanding of consciousness would explain it. Maybe we're in a simulation and our creators have added ad hoc rules that violate the laws of physics. Physics certainly rules out some explanations, but Carroll hasn't shown that everything but error/fraud/bias has been ruled out.

Btw, using spoon bending as the example and invoking Uri Geller is either ignorant or disingenuous of him (and I almost always love Sean Carroll). Parapsychologists more or less all recognize Geller as a fraud and an embarrassment, and only the kookiest would claim that humans can bend spoons with their minds. Real parapsychological experiments are nothing like that.

I suspect it will be difficult to communicate why fraud, method error and publication bias are difficult explanations for me to accept if you aren't familiar with the results of the field. I recommend Outside the Gates of Science if you haven't read it yet.

Replies from: shokwave
comment by shokwave · 2010-11-19T16:20:01.336Z · LW(p) · GW(p)

It will actually be easy to communicate exactly what explanation there is for the events. Bem has effectively been getting a group of students to flip a bunch of coins for the last eight years. He has had them do it perfectly methodologically soundly. Only now has he had a group that - through pure, random chance - happened to flip 53% heads and 47% tails. The number of students, the number of coins, the number of flips, all are large enough that this is an unlikely event - but he's spent eight years trying to make it happen, and so happen it eventually has. Good for him!

The only problem with all of this is that the journals that we take to be sources of knowledge have this rule: anything more unlikely than x must have some explanation other than pure chance. This is true at first blush, but when somebody spends years trying to make pure chance spit out the result he wants, this rule fails badly. That is all that's going on here.
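
A quick simulation of the selection effect being described, with made-up sample sizes:

```python
# If you run enough fair-coin experiments and only the "interesting" one gets
# written up, pure chance will eventually hand you a publishable-looking result.
# Sample sizes and the number of attempts are invented for illustration.
import random

random.seed(0)
n_flips = 1000         # flips per experiment
n_experiments = 50     # independent attempts over the years

best = max(sum(random.random() < 0.5 for _ in range(n_flips)) / n_flips
           for _ in range(n_experiments))
print(best)   # the best of 50 fair experiments typically lands around 53-54% heads
```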

Replies from: Jack
comment by Jack · 2010-11-19T16:36:41.580Z · LW(p) · GW(p)

Right, like I said, publication bias is a possibility. But in Honorton's precognition meta-analysis the results were strong enough that, for them not to be significant, the ratio of unpublished studies averaging null results to published studies would have to be 46:1. That seems too high for me to be comfortable attributing everything to publication bias. It is this history of results, rather than Bem's lone study, that troubles me.
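
This is not Honorton's actual computation, but a sketch of the file-drawer logic behind figures like 46:1, using Stouffer's method with invented study counts and Z-scores:

```python
# File-drawer sketch: combine k published studies' Z-scores (Stouffer's method,
# Z_combined = sum(Z_i) / sqrt(k)), then ask how many unpublished null studies
# (averaging Z = 0) would be needed to drag the combined Z below the one-tailed
# 0.05 threshold of 1.645. The numbers are invented, not Honorton's.
import math

k = 30           # hypothetical number of published studies
mean_z = 1.2     # hypothetical average Z-score across them

print(mean_z * k / math.sqrt(k))   # combined Z ~ 6.6, far beyond 1.645

n = 0
while mean_z * k / math.sqrt(k + n) >= 1.645:
    n += 1
print(n, round(n / k, 1))   # hidden null studies needed, and their ratio to published ones
```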

Bem has effectively been getting a group of students to flip a bunch of coins for the last eight years.

What evidence is there for this?

Replies from: shokwave
comment by shokwave · 2010-11-20T06:27:52.501Z · LW(p) · GW(p)

Bem has effectively been getting a group of students to flip a bunch of coins for the last eight years.

What evidence is there for this?

From here,

The paper ... is the culmination of eight years' work by Daryl Bem of Cornell University in Ithaca, New York.

Volunteers were told that an erotic image was going to appear on a computer screen in one of two positions, and asked to guess in advance which position that would be. The image's eventual position was selected at random, but volunteers guessed correctly 53.1 per cent of the time.
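
For a rough sense of how much (or how little) a 53.1 per cent hit rate means on its own, here is a back-of-the-envelope check; the trial counts are hypothetical, since the excerpt doesn't give Bem's actual n:

```python
# How surprising is a 53.1 per cent hit rate on a 50/50 guess? A normal
# approximation to the binomial, with hypothetical numbers of trials.
import math

p_hat = 0.531
for n in (500, 5000):                        # hypothetical trial counts
    se = math.sqrt(0.5 * 0.5 / n)            # standard error under pure chance
    print(n, round((p_hat - 0.5) / se, 1))   # z ~ 1.4 at n=500, ~ 4.4 at n=5000
```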

Replies from: Jack
comment by Jack · 2010-11-22T16:13:01.467Z · LW(p) · GW(p)

Why do we think this means early test groups weren't included in the study? It just sounds like it took eight years to get the large sample size he wanted.

Replies from: shokwave
comment by shokwave · 2010-11-23T00:42:35.514Z · LW(p) · GW(p)

I think that it means that early test groups weren't included because that is the easiest way to produce the results we're seeing.

It just sounds like it took eight years to get the large sample size he wanted.

Why eight years? Did he decide that eight years ago, before beginning to collect data? Or did he run tests until he got the data he wanted, then check how long it had taken? I am reasonably certain that if he got p-value significant results 4 years into this study, he would have stopped the tests and published a paper, saying "I took 4 years to make sure the sample size was large enough."
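
A small simulation of that optional-stopping worry, with invented batch sizes and peek counts:

```python
# Optional stopping: peek at the running total after every batch and stop as soon
# as |z| crosses 1.96. Even with a fair coin the false-positive rate climbs well
# above the nominal 5%. Batch size and number of peeks are invented.
import math
import random

random.seed(1)

def run_with_peeking(batches=20, batch_size=100):
    heads = trials = 0
    for _ in range(batches):
        heads += sum(random.random() < 0.5 for _ in range(batch_size))
        trials += batch_size
        z = (heads - 0.5 * trials) / math.sqrt(0.25 * trials)
        if abs(z) > 1.96:
            return True            # "significant" -- stop and write it up
    return False

rate = sum(run_with_peeking() for _ in range(2000)) / 2000
print(rate)   # typically in the 0.2-0.25 range with 20 peeks, not 0.05
```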

Replies from: Jack
comment by Jack · 2010-11-23T01:15:57.881Z · LW(p) · GW(p)

Looking at the actual study, it seems to include the results of quite a few different experiments. If he either excluded early tests or continued testing until he got the results he wanted, that would obviously make the study useless, but we can't just assume that is what happened. Yes, it is likely relative to the likelihood of psi, but since finding out what happened isn't that hard, it seems silly just to assume.

comment by Sniffnoy · 2010-08-18T00:04:10.715Z · LW(p) · GW(p)

No. The sequences are not infallible, they have never been claimed as such, and intelligent disagreement is generally well received.

In particular, there seems to be a lot of disagreement about the metaethics sequence, and to a lesser extent about timeless physics.

comment by cousin_it · 2010-08-17T19:55:41.477Z · LW(p) · GW(p)

That was... surprisingly surprising. Thank you.

For reasons like those you listed, and also out of some unverbalized frustration, in the last week I've been thinking pretty seriously about whether I should leave LW and start hanging out somewhere else online. I'm not really interested in the Singularity, existential risks, cognitive biases, cryonics, un/Friendly AI, quantum physics or even decision theory. But I do like the quality of discussions here sometimes, and the mathematical interests of LW overlap a little with mine: people around here enjoy game theory and computability theory, though sadly not nearly as much as I do.

What other places on the Net are there for someone like me? Hacker News and Reddit look like dumbed-down versions of LW, so let's not talk about those. I solved a good bit of Project Euler once; the place is tremendously enjoyable but quite narrowly focused. The n-Category Cafe is, sadly, coming to a halt. Math Overflow looks wonderful and this question by Scott Aaronson nearly convinced me to drop everything and move there permanently. The Polymath blog is another fascinating place that is so high above LW that I feel completely underqualified to join. Unfortunately, none of these are really conducive to posting new results, and moving into academia IRL is not something I'd like to do (I've been there, thanks).

Any other links? Any advice? And please, please, nobody take this comment as a denigration of LW or a foot-stomping threat. I love you all.

Replies from: John_Baez, Kevin, David_Gerard, DanielVarga, None, Sniffnoy, JoshuaZ
comment by John_Baez · 2010-08-19T07:58:44.784Z · LW(p) · GW(p)

My new blog "Azimuth" may not be mathy enough for you, but if you like the n-Category Cafe, it's possible you may like this one too. It's more focused on technology, environmental issues, and the future. Someday soon you'll see an interview with Eliezer! And at some point we'll probably get into decision theory as applied to real-world problems. We haven't yet.

(I don't think the n-Category Cafe is "coming to a halt", just slowing down - my change in interests means I'm posting a lot less there, and Urs Schreiber is spending most of his time developing the nLab.)

Replies from: cousin_it, Vladimir_Nesov, XiXiDu, ciphergoth
comment by cousin_it · 2010-08-19T08:38:29.849Z · LW(p) · GW(p)

Wow.

Hello.

I didn't expect that. It feels like summoning Gauss, or something.

Thank you a lot for twf!

comment by XiXiDu · 2010-08-19T08:45:47.216Z · LW(p) · GW(p)

It's new? I've already been following it for some time. Can't remember how I came across it in the first place though... very cool but over my head, thanks.

comment by Paul Crowley (ciphergoth) · 2010-08-19T08:02:22.723Z · LW(p) · GW(p)

The markup syntax here is a bit unusual and annoying - click the "Help" button at the bottom right of the edit window to get guidance on how to include hyperlinks. Unlike every other hyperlinking system, the text goes first and the URL second!

comment by Kevin · 2010-08-19T08:08:18.494Z · LW(p) · GW(p)

Make a top level post about the kind of thing you want to talk about. It doesn't have to be an essay, it could just be a question ("Ask Less Wrong") or a suggested topic of conversation.

comment by David_Gerard · 2010-11-18T21:20:45.084Z · LW(p) · GW(p)

I love your posts, so having seen this comment I'm going to try to write up my nascent sequence on memetic colds, aka sucker shoots, just for you. (And everyone.)

Replies from: cousin_it
comment by cousin_it · 2010-11-18T23:24:41.971Z · LW(p) · GW(p)

Thanks!

comment by DanielVarga · 2010-08-21T20:22:55.548Z · LW(p) · GW(p)

I'm not really interested in the Singularity, existential risks, cognitive biases, cryonics, un/Friendly AI, quantum physics or even decision theory. But I do like the quality of discussions here sometimes, and the mathematical interests of LW overlap a little with mine: people around here enjoy game theory and computability theory, though sadly not nearly as much as I do.

Same for me. My interests are more similar to your interests than to classic LW themes. There are probably many others here in the same situation. But I hope that the list of classic LW themes is not set in stone. I think people like us should try to broaden the spectrum of LW. If this attempt fails, please send me the address of the new place where you hang out online. :) But I am optimistic.

comment by [deleted] · 2010-08-21T18:59:24.980Z · LW(p) · GW(p)

"Leaving" LW is rather strong. Would that mean not posting? Not reading the posts, or the comments? Or just reading at a low enough frequency that you decouple your sense of identity from LW?

I've been trying to decide how best to pump new life into The Octagon section of the webcomic collective forum Koala Wallop. The Octagon started off when Dresden Codak was there, and became the place for intellectual discussion and debate. The density of math and computer theoretic enthusiasts is an order of magnitude lower than here or at the other places you mentioned, and those who know such stuff well are LW lurkers or posters too. There was an overkill of politics on The Octagon, the levels of expertise on subjects are all over the spectrum, and it's been slowing down for a while, but I think a good push will revive it. The main thing is that it lives inside of a larger forum, which is a silly, fun sort of community. The subforum simply has a life of its own.

Not that I claim any ownership over it, but:

I'm going to try to more clearly brand it as "A friendly place to analytically discuss fantastic, strange or bizarre ideas."

comment by Sniffnoy · 2010-08-18T00:05:26.928Z · LW(p) · GW(p)

Of course, MathOverflow isn't really a place for discussion...

comment by JoshuaZ · 2010-08-17T20:05:19.852Z · LW(p) · GW(p)

At least as far as math is concerned, people not in academia can publish papers. As for the Polymath blog, I'd actually estimate that you are at about the level of most Polymath contributors, although most of the impressive work there seems to be done by a small fraction of the people there.

Replies from: cousin_it
comment by cousin_it · 2010-08-17T20:14:36.238Z · LW(p) · GW(p)

About Polymath: thanks! (blushes)

I have no fetish for publishing papers or having an impressive CV or whatever. The important things, for me, are these: I want to have meaningful discussions about my areas of interest, and I want my results to be useful to somebody. I have received more than a fair share of "thank yous" here on LW for clearing up mathy stuff, but it feels like I could be more useful... somewhere.

comment by Zvi · 2010-08-31T21:01:09.223Z · LW(p) · GW(p)

I found this amusing because by those standards, cults are everywhere. For example, I run a professional Magic: The Gathering team and am pretty sure I'm not a cult leader. Although that does sound kind of neat. Observe:

Eileen Barker:

  1. When events are close we spend a lot of time socially separate from others so as to develop and protect our research. On occasion 'Magic colonies' form for a few weeks. It's not substantially less isolating than what SIAI does. Check.
  2. I have imparted huge amounts of belief about a large subset of our world, albeit a smaller one than Eliezer is working on. Partial Check.
  3. I make reasonably important decisions for my teammates (on the level of the Cryonics decision, if Cryonics isn't worthwhile) and do what I need to do to make sure they follow them far more than they would without me. Check.
  4. We identify other teams as 'them' reasonably often, and certain other groups are certainly viewed as the enemy. Check.
  5. Nope, even fainter argument than Eliezer.
  6. Again, yes, obviously.

Shirley Harrison:

  1. I claim a special mission that I am uniquely qualified to fulfill. Not as important a one, but still. Check.
  2. My writings count at least as much as the sequences. Check.
  3. Not intentionally, but often new recruits have little idea what to expect. Check plus.
  4. Totalitarian rules structure, and those who game too much often alienate friends and family. I've seen it many times, and it's far less of a cheat than saying that you'll be alienated from them when they are all dead and you're not because you got frozen. Check.
  5. I make people believe what I want with the exact same techniques we use here. If anything, I'm willing to use slightly darker arts. Check.
  6. We make the lower level people do the grunt work, sure. Check.
  7. Based on some of the deals I've made, one looking to demonize could make a weak claim. Check plus.
  8. Exclusivity. In spades. Check.

I'd also note that the exercise left to the reader is much harder, because the other checklists are far harder to fudge.

comment by Perplexed · 2010-11-18T19:10:36.729Z · LW(p) · GW(p)

the "Sequences" are quite long and anyone who tries to argue is told to "read the Sequences". Anyone who disagrees even after reading the Sequences is often considered too stupid to understand them.

I have to disagree that this "smugness" even remotely reaches the level that is characteristic of a cult.

As someone who has frequently expressed disagreement with the "doctrine" here, I have occasionally encountered both reactions that you mention. But those sporadic reactions are not much of a barrier to criticism - any critic who persists here will eventually be engaged intelligently and respectfully, assuming that the critic tries to achieve a modicum of respect and intelligence on his own part. Furthermore, if the critic really engages with what his interlocutors here are saying, he will receive enough upvotes to more than repair the initial damage to his karma.

Replies from: David_Gerard
comment by David_Gerard · 2010-11-18T21:16:46.346Z · LW(p) · GW(p)

Yes. LessWrong is not in fact hidebound by groupthink. I have lots of disagreement with the standard LessWrong belief cluster, but I get upvotes if I bother to write well, explain my objections clearly and show with my reference links that I have some understanding of what I'm objecting to. So the moderation system - "vote up things you want more of" - works really well, and I like the comments here.

This has also helped me control my unfortunate case of asshole personality disorder elsewhere when I see someone being wrong on the Internet. It's amazing what you can get away with if you show your references.

comment by JGWeissman · 2010-08-17T19:34:06.009Z · LW(p) · GW(p)

This would be easier to parse if you quoted the individual criteria you are evaluating right before the evaluation, eg:

1.

A movement that separates itself from society, either geographically or socially;

It would be hard to make a case for this one; a tendency to congregate geographically (many people joining the SIAI visiting fellows, and having meetups) is hardly cutting oneself off from others; however, there is certainly some tendency to cut ourselves off socially - note for example the many instances of folks worrying they will not be able to find a sufficiently "rationalist" significant other.

comment by Paul Crowley (ciphergoth) · 2010-08-17T19:34:39.030Z · LW(p) · GW(p)

Anyone who disagrees even after reading the Sequences is often considered too stupid to understand them.

I've not seen this happening - examples?

Replies from: JGWeissman
comment by JGWeissman · 2010-08-17T19:43:08.362Z · LW(p) · GW(p)

I think it would be more accurate to say that anyone who after reading the sequences still disagrees, but is unable to explain where they believe the sequences have gone wrong, is not worth arguing with.

With this qualification, it no longer seems like evidence of being a cult.

comment by timtyler · 2010-08-17T18:52:49.412Z · LW(p) · GW(p)

That's the pejorative usage. There is also:

"Cult also commonly refers to highly devoted groups, as in:

  • Cult, a cohesive group of people devoted to beliefs or practices that the surrounding culture or society considers to be outside the mainstream

    • Cult of personality, a political leader and his following, voluntary or otherwise
    • Destructive cult, a group which exploits and destroys its members or even non-members
    • Suicide cult, a group which practices mass self-destruction, as occurred at Jonestown
    • Political cult, a political group which shows cult-like features"

http://en.wikipedia.org/wiki/Cults_of_personality

http://en.wikipedia.org/wiki/Cult_following

http://en.wikipedia.org/wiki/Cult_%28religious_practice%29

comment by Kevin · 2010-08-16T10:01:47.739Z · LW(p) · GW(p)

What are the scenarios where someone unfamiliar with this website would hear about Roko's deleted post?

I suppose it could be written about dramatically (because it was dramatic!) but I don't think anyone is going to publish such an account. It was bad from the perspective of most LWers -- a heuristic against censorship is a good heuristic.

This whole thing is ultimately a meta discussion about moderation policy. Why should this discussion about banned topics be that much more interesting than a post on Hacker News that is marked as dead? Hacker News generally doesn't allow discussion of why stories were marked dead. The moderators are anonymous and have unquestioned authority.

If Less Wrong had a mark as dead function (on HN unregistered users don't see dead stories, but registered users can opt-in to see them), I suspect Eliezer would have killed Roko's post instead of deleting it to avoid the concerns of censorship, but no one has written that LW feature yet.

As a solid example of what a non-disaster this was for PR, I doubt that anyone at the Singularity Summit who isn't a regular Less Wrong reader (the majority of attendees) has heard that Eliezer deleted a post. It's just not the kind of thing that actually makes a PR disaster... honestly, if this were a PR issue it might be a net positive, because it would lead some people to hear of LW who otherwise would never have heard of Less Wrong. Please don't take that as a reason to make this a PR issue.

Eliezer succeeded in the sense that it is very unlikely that people in the future on Less Wrong are going to make stupid emotionally abhorrent posts about weird decision theory torture scenarios. He failed in that he could have handled the situation better.

If anyone would like to continue talking about Less Wrong moderation policy, the place to talk about it is the Meta Thread (though you'd probably want to make a new one (good for +[20,50] karma!) instead of discussing it in an out-of-season thread).

Replies from: homunq
comment by homunq · 2010-08-31T15:37:26.052Z · LW(p) · GW(p)

As someone who had over 20 points of karma obliterated for reasons I don't fully understand, for having posted something which apparently strayed too close to a Roko post which I never read in its full version, I can attest that further and broader discussion of the moderation policy would be beneficial. I still don't really know what happened. Of course I have vague theories, and I've received a terse and unhelpful response from EY (a link to a horror story about a "riddle" which kills - a good story which I simply don't accept as a useful parable of reality), but nothing clear. I do not think that I have anything of outstanding value to offer this community, but I suspect that driving away Roko, little I, and the half-dozen others like us who probably exist is a net loss to the community, especially if not being seen as cultlike is valuable.

Replies from: Airedale
comment by Airedale · 2010-08-31T17:49:37.585Z · LW(p) · GW(p)

As someone who had over 20 points of karma obliterated for reasons I don't fully understand, for having posted something which apparently strayed too close to a Roko post which I never read in its full version, I can attest that further and broader discussion of the moderation policy would be beneficial.

I believe you lost 20 karma because you had 2 net downvotes on your post at the time it was deleted (and those votes still affect your total karma, although the post cannot be further upvoted or downvoted). The loss of karma did not result directly from the deletion of the post, except for the fact that the deletion froze the post’s karma at the level it was at when it was deleted.

I only looked briefly at your post, don’t remember very much about it, and am only one reader here, but from what I recall, your post did not seem so obviously good that it would have recovered from those two downvotes. Indeed, my impression is that it’s more probable that if the post had been left up longer, it would have been even more severely downvoted than it was at the time of deletion, as is the case with many people’s first posts. I’m not very confident about that, but there certainly would have been that risk.

All that being said, I can understand if you would rather have taken the risk of an even greater hit to karma if it would have meant that people were able to read and comment on your post. I can also sympathize with your desire for a clearer moderation policy, although unless EY chose to participate in the discussion, I don’t think clearer standards would emerge, because it’s ultimately EY’s call whether to delete a post or comment. (I think there are a couple others with moderation powers, but it’s my understanding that they would not independently delete a non-troll/spam post).

Replies from: homunq
comment by homunq · 2010-09-01T12:58:19.515Z · LW(p) · GW(p)

I think it was 30 karma points (3 net downvotes), though I'm not sure. And I believe that it is entirely possible that some of those downvotes (more than 3, because I had at least 3 upvotes) were for alleged danger, not for lack of quality. Most importantly, if the post hadn't been deleted, I could have read the comments, which presumably would have given me some indication of the reason for those downvotes.

comment by Will_Newsome · 2010-08-16T09:48:34.455Z · LW(p) · GW(p)

Looking at my own posts I see a lot of this problem; that is, the problem of addressing far too small an audience. Thank you for pointing it out.

comment by Eneasz · 2010-08-24T17:46:22.210Z · LW(p) · GW(p)

informing SIAI that you will fund them if and only if they require their staff to exhibit a high degree of vigilance about the possibility of poisoning the existential risk meme by making claims that people find uncredible

I believe you are completely ignoring the status-demolishing effects of hypocrisy and insincerity.

When I first started watching Bloggingheads discussions featuring Eliezer, I would often have moments where I held my breath thinking "Oh god, he can't address that directly without sounding nuts; here comes the abhorrent backpedaling and waffling". Instead he met it head on with complete honesty and did so in a way I've never seen other people able to pull off - without sounding nuts at all. In fact, sounding very reasonable. I've since updated enough that I no longer wince and hold my breath; I smile and await the triumph.

If, as most people (and nearly all politicians) do, he had waffled and presented an argument that he doesn't honestly hold but that is more publicly acceptable, I'd have felt disappointed and a bit sickened, and I'd have tuned out the rest of what he has to say.

Hypocrisy is transparent. People (including neurotypical people) very easily see when others are making claims they don't personally believe, and they universally despise such actions. Politicians and lawyers are among the most hated groups in modern societies, in large part because of this hypocrisy. They are only tolerated because they are seen as a necessary evil.

Right now, People Working To Reduce Existential Risk are not seen as necessary. So it's highly unlikely that hypocrisy among them would be tolerated. They would repel anyone currently inclined to help, and their hypocrisy wouldn't draw in any new support. The answer isn't to try to deceive others about your true beliefs; it is to help make those beliefs more credible among the incredulous.

I feel that anyone advocating for public hypocrisy among the SIAI staff is working to disintegrate the organization (even if unintentionally).

Replies from: Eliezer_Yudkowsky, pnrjulius, Carinthium
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-24T18:30:57.349Z · LW(p) · GW(p)

When I first started watching Bloggingheads discussions featuring Eliezer, I would often have moments where I held my breath thinking "Oh god, he can't address that directly without sounding nuts; here comes the abhorrent backpedaling and waffling". Instead he met it head on with complete honesty

I am so glad that someone notices and appreciates this.

I feel that anyone advocating for public hypocrisy among the SIAI staff is working to disintegrate the organization (even if unintentionally).

Agreed.

comment by pnrjulius · 2012-06-12T02:59:28.080Z · LW(p) · GW(p)

On the other hand... people say they hate politicians and then vote for them anyway.

So hypocrisy does have upsides, and maybe we shouldn't dismiss it so easily.

Replies from: CuSithBell
comment by CuSithBell · 2012-06-12T03:35:00.031Z · LW(p) · GW(p)

On the other hand... people say they hate politicians and then vote for them anyway.

Who are they going to vote for instead?

Replies from: pnrjulius
comment by pnrjulius · 2012-06-12T03:40:03.282Z · LW(p) · GW(p)

Well yes, exactly. If it takes a certain degree of hypocrisy to get campaign contributions, advertising, etc., and it takes these things to get elected... then you're going to have to have a little hypocrisy in order to win.

And we do want to win, right? We want to actually reduce existential risk, and not just feel like we are?

If you can find a way to persuade people (and win elections, never forget that making policy in a democracy means winning elections) that doesn't involve hypocrisy, I'm all ears.

comment by Carinthium · 2010-11-23T09:19:06.671Z · LW(p) · GW(p)

The above is a good comment, but 26 karma? How did it deserve that?

Replies from: wnoise
comment by wnoise · 2010-11-24T02:06:58.963Z · LW(p) · GW(p)

Karma (despite the name) has very little to do with "deserve". All it really means is that 26 (now 25) more people desire more content like this than desire less content like this.

Replies from: Carinthium
comment by Carinthium · 2010-11-24T02:26:46.756Z · LW(p) · GW(p)

On the other hand, it is a good thing to shift the Karma system to better resemble a system based on merit -- i.e., they should vote down the comment up to a point, because although it is a good one it doesn't deserve its very high score.

Replies from: wnoise
comment by wnoise · 2010-11-24T17:19:53.764Z · LW(p) · GW(p)

Why should something that is mildly liked by many not have a higher score than something that is highly liked by fewer?

In any case, it's rather hard to do. How do you propose to make your standards for a good comment the ones other people use? Each individual sets their own level at which they will up- or down-vote a comment or post. They can indeed take into account the current score of a post, but that does rather poorly as others come by and change it. Should the first guy who up-voted that check back and see if it is now too highly rated? That seems hardly worth his time. And pretty much by definition, the guy who voted it from 25 to 26 was happier with the score at 26 than at 25, so at least one person does think it was worth 26.

And what happens as norms change as to what a "good score" is as more comments have more eyeballs and voters looking at them?

Or we could all just take karma beyond "net positive" and "net negative" a whole lot less seriously.

Complaining about a given score and the choices of others certainly isn't likely to go much of anywhere.
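
(A minimal sketch of the aggregation rule at issue, with hypothetical vote counts: karma records only the sign of each vote, not its intensity, so broad mild approval necessarily outscores narrow enthusiasm.)

    # Hypothetical vote counts, purely illustrative.
    mild_approval = sum([+1] * 26)    # 26 readers each think "slightly good"
    strong_approval = sum([+1] * 5)   # 5 readers think "excellent" -- still +1 apiece
    print(mild_approval, strong_approval)  # 26 vs 5: intensity is never recorded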

comment by komponisto · 2010-08-16T00:38:38.646Z · LW(p) · GW(p)

I'll state my own experience and perception, since it seems to be different from that of others, as evidenced in both the post and the comments. Take it for what it's worth; maybe it's rare enough to be disregarded.

The first time I heard about SIAI -- which was possibly the first time I had heard the word "singularity" in the technological sense -- was whenever I first looked at the "About" page on Overcoming Bias, sometime in late 2006 or early 2007, where it was listed as Eliezer Yudkowsky's employer. To make this story short, the whole reason I became interested in this topic in the first place was because I was impressed by EY -- specifically his writings on rationality on OB (now known as the Sequences here on LW). Now of course most of those ideas were hardly original with him (indeed many times I had the feeling he was stating the obvious, albeit in a refreshing, enjoyable way) but the fact that he was able to write them down in such a clear, systematic, and readable fashion showed that he understood them thoroughly. This was clearly somebody who knew how to think.

Now, when someone has made that kind of demonstration of rationality, I just don't have much problem listening to whatever they have to say, regardless of how "outlandish" it may seem in the context of most human discourse. Maybe I'm exceptional in this respect, but I've never been under the impression that only "normal-sounding" things can be true or important. At any rate, I've certainly never been under that impression to such an extent that I would be willing to dismiss claims made by the author of The Simple Truth and A Technical Explanation of a Technical Explanation, someone who understands things like the gene-centered view of evolution and why MWI exemplifies rather than violates Occam's Razor, in the context of his own professional vocation!

I really don't understand what the difference is between me and the "smart people" that you (and XiXiDu) know. In fact maybe they should be more inclined to listen to EY and SIAI; after all, they probably grew up reading science fiction, in households where mild existential risks like global warming were taken seriously. Are they just not as smart as me? Am I unusually susceptible to following leaders and joining cults? (Don't think so.) Do I simply have an unusual personality that makes me willing to listen to strange-sounding claims? (But why wouldn't they as well, if they're "smart"?)

Why can't they just read the darn sequences and pick up on the fact that these people are worth listening to?

Replies from: None, multifoliaterose
comment by [deleted] · 2010-08-17T20:34:31.269Z · LW(p) · GW(p)

I STRONGLY suspect that there is an enormous gulf between finding out things on your own and being directed to them by a peer.

When you find something on your own (existential risk, cryonics, whatever), you get to bask in your own fortuitousness, and congratulate yourself on being smart enough to understand its value. You get a boost in (perceived) status, because not only do you know more than you did before, you know things other people don't know.

But when someone else has to direct you to it, it's much less positive. When you tell someone about existential risk or cryonics or whatever, the subtext is "look, you weren't able to figure this out by yourself, let me help you". No matter how nicely you phrase it, there's going to be resistance because it comes with a drop in status - which they can avoid by not accepting whatever you're selling. It actually might be WORSE with smart people who believe that they have most things "figured out".

comment by multifoliaterose · 2010-08-16T09:18:21.236Z · LW(p) · GW(p)

Thanks for your thoughtful comment.

To make this story short, the whole reason I became interested in this topic in the first place was because I was impressed by EY -- specifically his writings on rationality on OB (now known as the Sequences here on LW). Now of course most of those ideas were hardly original with him (indeed many times I had the feeling he was stating the obvious, albeit in a refreshing, enjoyable way) but the fact that he was able to write them down in such a clear, systematic, and readable fashion showed that he understood them thoroughly. This was clearly somebody who knew how to think.

I know some people who have had this sort of experience. My claim is not that Eliezer has uniformly repelled people from thinking about existential risk. My claim is that on average Eliezer's outlandish claims repel people from thinking about existential risk.

Do I simply have an unusual personality that makes me willing to listen to strange-sounding claims?

My guess would be that this is it. I'm the same way.

(But why wouldn't they as well, if they're "smart"?)

It's not clear that willingness to listen to strange-sounding claims exhibits correlation with instrumental rationality, or what the sign of that correlation is. People who are willing to listen to strange-sounding claims statistically end up hanging out with UFO conspiracy theorists, New Age people, etc. more often than usual. Statistically, people who make strange-sounding claims are not worth listening to. Too much willingness to listen to strange-sounding claims can easily result in one wasting large portions of one's life.

Why can't they just read the darn sequences and pick up on the fact that these people are worth listening to?

See my remarks above.

Replies from: ciphergoth, katydee, komponisto
comment by Paul Crowley (ciphergoth) · 2010-08-16T10:48:59.289Z · LW(p) · GW(p)

For my part, I keep wondering how long it's going to be before someone throws his "If you don't sign up your kids for cryonics then you are a lousy parent" remark at me, to which I will only be able to say that even he says stupid things sometimes.

(Yes, I'd encourage anyone to sign their kids up for cryonics; but not doing so is an extremely poor predictor of whether or not you treat your kids well in other ways, which is what the term should mean by any reasonable standard).

Replies from: James_Miller, multifoliaterose
comment by James_Miller · 2010-08-18T14:55:30.203Z · LW(p) · GW(p)

Given Eliezer's belief about the probability of cryonics working and belief that others should understand that cryonics has a high probability of working, his statement that "If you don't sign up your kids for cryonics then you are a lousy parent" is not just correct but trivial.

One of the reasons I so enjoy reading Less Wrong is Eliezer's willingness to accept and announce the logical consequences of his beliefs.

Replies from: ciphergoth, pcm
comment by Paul Crowley (ciphergoth) · 2010-08-18T15:00:15.761Z · LW(p) · GW(p)

There is a huge gap between "you are doing your kids a great disservice" and "you are a lousy parent": "X is an act of a lousy parent" to me implies that it is a good predictor of other lousy parent acts.

EDIT: BTW I should make clear that I plan to try to persuade some of my friends to sign up themselves and both their kids for cryonics, so I do have skin in the game...

Replies from: FAWS
comment by FAWS · 2010-08-18T15:04:41.020Z · LW(p) · GW(p)

I'm not completely sure I disagree with that, but do you have the same attitude towards parents who try to heal treatable cancer with prayer and nothing else, but are otherwise great parents?

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-08-18T15:31:11.450Z · LW(p) · GW(p)

I think that would be a more effective predictor of other forms of lousiness: it means you're happy to ignore the advice of scientific authority in favour of what your preacher or your own mad beliefs tell you, which can get you into trouble in lots of other ways.

That said, this is a good counter, and it does make me wonder if I'm drawing the right line. For one thing, what do you count as a single act? If you don't get cryonics for your first child, it's a good predictor that you won't for your second either, so does that count? So I think another aspect of it is that to count, something has to be unusually bad. If you don't get your kids vaccinated in the UK in 2010, that's lousy parenting, but if absolutely everyone you ever meet thinks that vaccines are the work of the devil, then "lousy" seems too strong a term for going along with it.

Replies from: shokwave
comment by shokwave · 2010-11-23T00:56:56.728Z · LW(p) · GW(p)

If you don't get your kids vaccinated in the UK in 2010, that's lousy parenting, but if absolutely everyone you ever meet thinks that vaccines are the work of the devil, then "lousy" seems too strong a term for going along with it.

True. However, if absolutely everyone you ever meet thinks vaccines are evil except for one doctor and that doctor has science on his side, and you choose not to get your kids vaccinated because of "going along with" social pressures, then "lousy parent" is exactly the right strength of term. And that's really the case here. Not absolutely everyone thinks cryonics is wrong or misguided. And if you can't sort the bullshit and wishful thinking from the science, then you're doing your child a disservice.

comment by pcm · 2010-08-21T20:09:32.226Z · LW(p) · GW(p)

If "you" refers to a typical parent in the US, then it's sensible (but hardly trivial). But it could easily be interpreted as referring to parents who are poor enough that they should give higher priority to buying a safer car, moving to a neighborhood with a lower crime rate, etc.

Eliezer's writings about cryonics may help him attract more highly rational people to work with him, but will probably reduce his effectiveness at warning people working on other AGI projects of the risks. I think he has more potential to reduce existential risk via the latter approach.

comment by multifoliaterose · 2010-08-16T11:47:35.244Z · LW(p) · GW(p)

Yes, this is the sort of thing that I had in mind in making my cryonics post - as I said in the revised version of my post, I have a sense that a portion of the Less Wrong community has the attitude that cryonics is "moral" in some sort of comprehensive sense.

Replies from: James_Miller
comment by James_Miller · 2010-08-18T15:00:47.883Z · LW(p) · GW(p)

If you believe that thousands of people die unnecessarily every single day then of course you think cryonics is a moral issue.

If people in the future come to believe that we should have known that cryonics would probably work then they might well conclude that our failure to at least offer cryonics to terminally ill children was (and yes I know what I'm about to write sounds extreme and will be off-putting to many) Nazi-level evil.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-18T15:42:52.379Z · LW(p) · GW(p)

I've thought carefully about this matter and believe that there's good reason to doubt your prediction. I will detail my thoughts on this matter in a later top level post.

Replies from: James_Miller
comment by James_Miller · 2010-08-18T15:50:20.043Z · LW(p) · GW(p)

I would like the opportunity to make timely comments on such a post, but I will be traveling until Aug 27th and so request you don't post before then.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-18T15:51:08.156Z · LW(p) · GW(p)

Sure, sounds good.

comment by katydee · 2010-08-16T10:31:46.610Z · LW(p) · GW(p)

Also, keep in mind that reading the sequences requires nontrivial effort-- effort which even moderately skeptical people might be unwilling to expend. Hopefully Eliezer's upcoming rationality book will solve some of that problem, though. After all, even if it contains largely the same content, people are generally much more willing to read one book rather than hundreds of articles.

comment by komponisto · 2010-08-16T11:10:39.861Z · LW(p) · GW(p)

Thank you for your thoughtful reply; although, as will be evident, I'm not quite sure I actually got the point across.

(But why wouldn't they as well, if they're "smart"?)

It's not clear that willingness to listen to strange-sounding claims exhibits correlation with instrumental rationality,

I didn't realize at all that by "smart" you meant "instrumentally rational"; I was thinking rather more literally in terms of IQ. And I would indeed expect IQ to correlate positively with what you might call openness. More precisely, although I would expect openness to be only weak evidence of high IQ, I would expect high IQ to be more significant evidence of openness.

People who are willing to listen to strange-sounding claims statistically end up hanging out with UFO conspiracy theorists, New Age people, etc...

Why can't they just read the darn sequences and pick up on the fact that these people are worth listening to?

See my remarks above.

The point of my comment was that reading his writings reveals a huge difference between Eliezer and UFO conspiracy theorists, a difference that should be more than noticeable to anyone with an IQ high enough to be in graduate school in mathematics. Yes, of course, if all you know about a person is that they make strange claims, then you should by default assume they're a UFO/New Age type. But I submit that the fact that Eliezer has written things like these decisively entitles him to a pass on that particular inference, and anyone who doesn't grant it to him just isn't very discriminating.

Replies from: multifoliaterose, multifoliaterose
comment by multifoliaterose · 2010-08-16T12:04:13.540Z · LW(p) · GW(p)

And I would indeed expect IQ to correlate positively with what you might call openness.

My own experience is that the correlation is not very high. Most of the people who I've met who are as smart as me (e.g. in the sense of having high IQ) are not nearly as open as I am.

I didn't realize at all that by "smart" you meant "instrumentally rational";

I did not intend to equate intelligence with instrumental rationality. The reason why I mentioned instrumental rationality is that ultimately what matters is to get people with high instrumental rationality (whether they're open minded or not) interested in existential risk.

My point is that people who are closed minded should not be barred from consideration as potentially useful existential risk researchers, and that although people are being irrational to dismiss Eliezer as fast as they do, that doesn't mean that they're holistically irrational. My own experience has been that my openness has both benefits and drawbacks.

The point of my comment was that reading his writings reveals a huge difference between Eliezer and UFO conspiracy theorists, a difference that should be more than noticeable to anyone with an IQ high enough to be in graduate school in mathematics.

Math grad students can see a huge difference between Eliezer and UFO conspiracy theorists - they recognize that Eliezer's intellectually sophisticated. They're still biased to dismiss him out of hand. See bentram's comment.

Edit: You might wonder where the bias to dismiss Eliezer comes from. I think it comes mostly from conformity, which is, sadly, very high even among very smart people.

Replies from: komponisto, wedrifid
comment by komponisto · 2010-08-16T12:33:25.787Z · LW(p) · GW(p)

My point is that people who are closed minded should not be barred from consideration as potentially useful existential risk researchers

You may be right about this; perhaps Eliezer should in fact work on his PR skills. At the same time, we shouldn't underestimate the difficulty of "recruiting" folks who are inclined to be conformists; unless there's a major change in the general sanity level of the population, x-risk talk is inevitably going to sound "weird".

Math grad students can see a huge difference between Eliezer and UFO conspiracy theorists - they recognize that Eliezer's intellectually sophisticated. They're still biased to dismiss him out of hand

This is a problem; no question about it.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-16T12:39:14.603Z · LW(p) · GW(p)

At the same time we shouldn't underestimate the difficulty of "recruiting" folks who are inclined to be conformists; unless there's a major change in the general sanity level of the population, x-risk talk is inevitably going to sound "weird".

I agree with this. It's all a matter of degree. Maybe at present one has to be in the top 1% of the population in nonconformity to be interested in existential risk and with better PR one could reduce the level of nonconformity required to the top 5% level.

(I don't know whether these numbers are right, but this is the sort of thing that I have in mind - I find it very likely that there are people who are nonconformist enough to potentially be interested in existential risk but too conformist to take it seriously unless the people who are involved seem highly credible.)

comment by wedrifid · 2010-08-16T12:52:27.535Z · LW(p) · GW(p)

Edit: You might wonder where the bias to dismiss Eliezer comes from. I think it comes mostly from conformity, which is, sadly, very high even among very smart people.

I would perhaps expand 'conformity' to include neighbouring social factors - in-group/outgroup, personal affiliation/alliances, territorialism, etc.

comment by multifoliaterose · 2010-08-16T12:23:40.144Z · LW(p) · GW(p)

One more point - though I could immediately recognize that there's something important to some of what Eliezer says, the fact that he makes outlandish claims did make me take longer to get around to thinking seriously about existential risk. This is because of a factor that I mention in my post which I quote below.

There is also a social effect which compounds the issue just mentioned: it makes even people who are not directly influenced by it less likely to think seriously about existential risk, on account of their desire to avoid being perceived as associated with claims that people find uncredible.

I'm not proud that I'm so influenced, but I'm only human. I find it very plausible that there are others like me.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-18T14:28:13.728Z · LW(p) · GW(p)

I don't mean to dismiss the points of this post, but all of those points do need to be reinterpreted in light of the fact that I'd rather have a few really good rationalists as allies than a lot of mediocre rationalists who think "oh, cool" and don't do anything about it. Consider me as being systematically concerned with the top 5% rather than the average case. However, I do still care about things like propagation velocities because that affects what population size the top 5% is 5% of, for example.

Replies from: XiXiDu, multifoliaterose, ChristianKl, pnrjulius
comment by XiXiDu · 2010-08-18T14:58:32.192Z · LW(p) · GW(p)

Somewhere you said that you are really happy to be finally able to concentrate directly on the matters you deem important and don't have to raise money anymore. This obviously worked, so you won't have to change anything. But if you ever need to raise more money for a certain project, my question is how much of the money you already get comes from people you would consider mediocre rationalists?

I'm not sure if you expect to ever need a lot of money for a SIAI project, but if you solely rely on those few really good rationalists then you might have a hard time in that case.

People like me will probably always stay on your side, whether or not you tell them they are idiots. But I'm not sure that will be enough in a scenario where donations are important.

comment by multifoliaterose · 2010-08-18T16:17:02.077Z · LW(p) · GW(p)

Agree with the points of both ChristianKl and XiXiDu.

As for really good rationalists, I have the impression that even when it comes to them you inadvertently alienate them with higher than usual frequency on account of saying things that sound quite strange.

I think (but am not sure) that you would benefit from spending more time understanding what goes on in neurotypical people's minds. This would carry not only social benefits (which you may no longer need very much at this point) but also epistemological benefits.

However, I do still care about things like propagation velocities because that affects what population size the top 5% is 5% of, for example.

I'm encouraged by this remark.

comment by ChristianKl · 2010-08-18T15:09:18.777Z · LW(p) · GW(p)

If we think existential risk reduction is important then we should care about whether politicians think that existential risk reduction is a good idea. I don't think that a substantial number of US congressmen are what you consider to be good rationalists.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-18T16:28:20.314Z · LW(p) · GW(p)

For Congress to implement good policy in this area would be performance vastly exceeding what we've previously seen from them. They called prediction markets terror markets. I expect more of the same, and expect to have little effect on them.

Replies from: Psy-Kosh
comment by Psy-Kosh · 2010-08-18T20:58:00.698Z · LW(p) · GW(p)

The flipside though is if we can frame the issue in a way that there's no obvious Democrat or Republican position, then we can, as Robin Hanson puts it, "pull the rope sideways".

The very fact that much of the existential risk stuff is "strange sounding" relative to what most people are used to really thinking about in the context of political arguments might thus act as a positive.

comment by pnrjulius · 2012-06-12T03:05:19.472Z · LW(p) · GW(p)

We live in a democracy! How can you not be concerned with 95% of the population? They rule you.

If we lived in some sort of meritocratic aristocracy, perhaps then we could focus our efforts on only the smartest 5%.

As it is, it's the 95% who decide what happens in our elections, and it's our elections that decide what rules get made and what projects get funded. The President of the United States could unleash nuclear war at any time. He's not likely to -- but he could. And if he did push that button, it's over, for all of us. So we need to be very concerned about who is in charge of that button, and that means we need to be very concerned about the people who elect him.

Right now, 46% of them think the Earth is 6000 years old. This worldview comes with a lot of other anti-rationalist baggage like faith and the Rapture. And it runs our country. Is it just me, or does this seem like a serious problem, one that we should probably be working to fix?

comment by Larks · 2010-08-17T06:40:47.075Z · LW(p) · GW(p)

It must be said that the reason no-one from SingInst has commented here is that they're all busy running the Singularity Summit, a well-run conference full of AGI researchers, the one group that SingInst cares about impressing more than any other. Furthermore, Eliezer's speech was well received by those present.

I'm not sure whether attacking SingInst for poor public relations during the one week when everyone is busy with a massive public relations effort is very ironic or very Machiavellian.

comment by Oligopsony · 2010-08-15T11:20:09.251Z · LW(p) · GW(p)

I'm new to all this singularity stuff - and as an anecdotal data point, I'll say a lot of it does make my kook bells go off - but with an existential threat like uFAI, what does the awareness of the layperson count for? With global warming, even if most of any real solution involves the redesign of cities and development of more efficient energy sources, individuals can take some responsibility for their personal energy consumption or how they vote. uFAI is a problem to be solved by a clique of computer and cognitive scientists. Who needs to put thought into the possibility of misbuilding an AI other than people who will themselves engage in AI research? (This is not a rhetorical question - again, I'm new to this.)

There is, of course, the question of fundraising. ("This problem is too complicated for you to help with directly, but you can give us money..." sets off further alarm bells.) But from that perspective someone who thinks you're nuts is no worse than someone who hasn't heard of you. You can ramp up the variance of people's opinions and come out better financially.

Replies from: CarlShulman, wedrifid, jacob_cannell
comment by CarlShulman · 2010-08-15T11:26:44.999Z · LW(p) · GW(p)

Awareness on the part of government funding agencies (and the legislators and executive branch people with influence over them), technology companies and investors, and political and military decisionmakers (eventually) could all matter quite a lot. Not to mention bright young people deciding on their careers and research foci.

comment by wedrifid · 2010-08-15T11:43:40.076Z · LW(p) · GW(p)

Who needs to put thought into the possibility of misbuilding an AI other than people who will themselves engage in AI research? (This is not a rhetorical question - again, I'm new to this.)

The people who do the real work. Ultimately it doesn't matter if the people who do the AI research care about existential risk or not (if we make some rather absolute economic assumptions). But you've noticed this already and you are right about the 'further alarm bells'.

Ultimately, the awareness of the layperson matters for the same reason that it matters for any other political issue. While with AI people can't get their idealistic warm fuzzies out of barely relevant things like 'turning off a light bulb', things like 'how they vote' do matter - even if it is at a lower level of 'voting', along the lines of 'which institutions do you consider more prestigious?'

You can ramp up the variance of people's opinions and come out better financially.

Good point!

comment by jacob_cannell · 2010-08-25T03:57:50.053Z · LW(p) · GW(p)

Don't you realize the default scenario?

The default scenario is some startup or big company or mix therein develops strong AGI for commercialization, attempts to 'control it', fails, and inadvertently unleashes a god upon the earth. To first approximation the type of AGI we are discussing here could just be called a god. Nanotechnology is based on science, but it will seem like magic.

The question then is what kind of god do we want to unleash.

Replies from: ata, timtyler
comment by ata · 2010-08-25T04:10:39.809Z · LW(p) · GW(p)

While we're in a thread with "Public Relations" in its title, I'd like to point out that calling an AGI a "god", even metaphorically or by (some) definition, is probably a very bad idea. Calling anything a god will (obviously) tend to evoke religious feelings (an acute mind-killer), not to mention that sort of writing isn't going to help much in combating the singularity-as-religion pattern completion.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-08-25T07:00:38.940Z · LW(p) · GW(p)

Religions are worldviews. The Singularity is also a worldview, and one whose prediction of the future is quite different from the older, more standard linear atheist scientific worldview, where the future is unknown but probably like the past, AI has no role, etc.

I read the "by (some) definition" and I find it actually supports the cluster mapping utility of the god term as it applies to AI's. "Scary powerful optimization process" just doesn't instantly convey the proper power relation.

But nonetheless, I do consider your public relations image point to be important. But I'm not convinced that one needs to hide fully behind the accepted confines of the scientific magisterium and avoid the unspoken words.

Science tells us how the world was, is, and can become. Religion/Mythology/Science Fiction tells us what people want the world to be.

Understanding the latter domain is important for creating good AI and CEV and all that.

Replies from: Pavitra
comment by Pavitra · 2010-08-25T07:23:23.256Z · LW(p) · GW(p)

Calling an AGI a god too easily conjures up visions of a benevolent force. Even those who consider that it might not have our best interests at heart tend to think of dystopian science fiction.

I use the phrase "robot Cthulhu", because the Singularity will probably eat the world without particularly noticing or caring that there's someone living on it.

Replies from: kodos96
comment by kodos96 · 2010-08-25T08:56:03.409Z · LW(p) · GW(p)

Calling an AGI a god too easily conjures up visions of a benevolent force

That really depends on how you feel about religion/god in the first place. To a guy like me, who is, as Hitchens is fond of describing himself, "not just an atheist, but an anti-theist", the uFAI/god connection makes me want to donate everything I have to SIAI to make sure it doesn't happen.

Maybe that's just me.

comment by timtyler · 2010-08-25T05:58:35.665Z · LW(p) · GW(p)

The default scenario is some startup or big company or mix therein develops strong AGI for commercialization, attempts to 'control it', fails,

You assume incompetent engineers?!? What's the best case for engineers predictably failing at safety-critical tasks?

Replies from: khafra
comment by khafra · 2010-08-25T14:08:04.421Z · LW(p) · GW(p)

Incompetence is not a necessary condition for failure. Building something new is pretty near a sufficient condition for it, though. For instance, bridge design has been well-understood by engineers for millennia, but a slight variation on it brought catastrophic failure.

Replies from: timtyler
comment by timtyler · 2010-08-25T16:19:22.272Z · LW(p) · GW(p)

Moon landings? Man in space?

http://en.wikipedia.org/wiki/Transatlantic_flight#Early_notable_transatlantic_flights

...shows that after the first success there were some failures - but nobody died up until The White Bird in 1927.

Engineers are pretty good at not killing people. In fact their efforts have created lives on a large scale.

Major sources of lives lost to engineering are automobile accidents and weapons of war. Automobile accidents are due to machines being too stupid - and intelligent machines should help fix that.

The bug that destroyed the world scenario seems pretty incredible to me - and I don't see a case for describing it as the "default scenario".

It seems, if anything - based on what we have seen so far - that it is slightly more likely that a virus might destroy the world - not that the chances of that happening are very high either.

Replies from: thomblake, thomblake
comment by thomblake · 2010-08-25T16:23:26.335Z · LW(p) · GW(p)

...shows that after the first success there were some failures - but nobody died.

"Notable attempt (3)" - "lost" likely means "died".

Replies from: timtyler
comment by timtyler · 2010-08-25T16:39:10.633Z · LW(p) · GW(p)

Thanks. I had edited my post before seeing your reply.

Powered flight had a few associated early deaths: Otto Lilienthal died in a glider in 1896. Percy Pilcher in another hang gliding crash in 1899. Wilbur Wright almost came to a sticky end himself.

comment by thomblake · 2010-08-25T16:20:42.228Z · LW(p) · GW(p)

It seems, if anything, slightly more likely that a virus might destroy the world - not that the chances of that happening are very high either.

I'd never compared the likelihood of those two events before; is this comparison discussed anywhere prominent?

Replies from: timtyler
comment by timtyler · 2010-08-25T16:43:59.218Z · LW(p) · GW(p)

I don't know. Looking at the current IT scene, viruses, trojans and malware are probably the most prominent source of damage.

Bugs which are harmful are often the ones that allow viruses and malware to be produced.

We kind-of know how to avoid most harmful bugs. But either nobody cares enough to bother - or else the NSA likes people to be using insecure computers.

comment by [deleted] · 2010-08-15T14:46:09.362Z · LW(p) · GW(p)

I am one of those who haven't been convinced by the SIAI line. I have two main objections.

First, EY is concerned about risks due to technologies that have not yet been developed; as far as I know, there is no reliable way to make predictions about the likelihood of the development of new technologies. (This is also the basis of my skepticism about cryonics.) If you're going to say "Technology X is likely to be developed" then I'd like to see your prediction mechanism and whether it's worked in the past.

Second, shouldn't an organization worried about the dangers of AI be very closely in touch with AI researchers in computer science departments? Sure, there's room for pure philosophy and mathematics, but you'd need some grounding in actual AI to understand what future AIs are likely to do.

I think multifoliaterose is right that there's a PR problem, but it's not just a PR problem. It seems, unfortunately, to be a problem with having enough justification for claims, and a problem with connecting to the world of professional science. I think the PR problems arise from being too disconnected from the demands placed on other scientific or science policy organizations. People who study other risks, say epidemic disease, have to get peer-reviewed and they have to get government funding -- their ideas need to pass a round of rigorous criticism. Their PR is better by necessity.

Replies from: orthonormal, ciphergoth, John_Maxwell_IV, nhamann, NancyLebovitz, multifoliaterose, xamdam, Jonathan_Graehl
comment by orthonormal · 2010-08-15T15:46:57.311Z · LW(p) · GW(p)

First, EY is concerned about risks due to technologies that have not yet been developed; as far as I know, there is no reliable way to make predictions about the likelihood of the development of new technologies.

As was mentioned in other threads, SIAI's main arguments rely on disjunctions and antipredictions more than conjunctions and predictions. That is, if several technology scenarios lead to the same broad outcome, that's a much stronger claim than one very detailed scenario.

For instance, the claim that AI presents a special category of existential risk is supported by such a disjunction. There are several technologies today which we know would be very dangerous with the right clever 'recipe'– we can make simple molecular nanotech machines, we can engineer custom viruses, we can hack into some very sensitive or essential computer systems, etc. What these all imply is that a much smarter agent with a lot of computing power is a severe existential threat if it chooses to be.
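
(As a rough numerical illustration of why a disjunction of scenarios is a stronger claim than a detailed conjunction - the 30% figure and the independence assumption are made up purely for the sketch.)

    # Three independent routes to the same broad outcome, each at 30%,
    # versus one detailed story that requires all three steps to happen.
    p = 0.3
    p_any_route = 1 - (1 - p) ** 3   # ~0.66: at least one route suffices
    p_full_story = p ** 3            # ~0.03: every step of the detailed story must hold
    print(round(p_any_route, 3), round(p_full_story, 3))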

comment by Paul Crowley (ciphergoth) · 2010-08-16T18:14:01.058Z · LW(p) · GW(p)

There needs to be an article on this point. In the absence of a really good way of deciding what technologies are likely to be developed, you are still making a decision. You haven't signed up yet; whether you like it or not, that is a decision. And it's a decision that only makes sense if you think technology X is unlikely to be developed, so I'd like to see your prediction mechanism and whether it's worked in the past. In the absence of really good information, we sometimes have to decide on the information we have.

EDIT: I was thinking about cryonics when I wrote this, though the argument generalizes.

Replies from: None, timtyler
comment by [deleted] · 2010-08-16T23:29:49.954Z · LW(p) · GW(p)

My point, with this, is that everybody is risk-averse and everybody has a time preference. The less is known about the prospects of a future technology, the less willing people are to invest resources into ventures that depend on the future development of that technology. (Whether to take advantage of the technology -- as in cryonics -- or to mitigate its dangers -- as in FAI.) Also, the farther in the future the technology is, the less people care about it; we're not willing to spend much to achieve benefits or forestall risks in the far future.

I don't think it's reasonable to expect people to change these ordinary features of economic preference. If you're going to ask people to chip in to your cause, and the time horizon is too far, or the uncertainty too high, they're not going to want to spend their resources that way. And they'll be justified.

Note: yes, there ought to be some magnitude of benefit or cost that overcomes both risk aversion and time preference. Maybe you're going to argue that existential risk and cryonics are issues of such great magnitude that they outweigh both risk aversion and time preference.

But: first of all, the importance of the benefit or cost is also an unknown (and indeed subjective.) How much do you value being alive? And, second of all, nobody says our risk and time preferences are well-behaved. There may be a date so far in the future that I don't care about anything that happens then, no matter how good or how bad. There may be loss aversion -- an amount of money that I'm not willing to risk losing, no matter how good the upside. I've seen some experimental evidence that this is common.

Replies from: wedrifid
comment by wedrifid · 2010-08-17T05:06:43.658Z · LW(p) · GW(p)

My point, with this, is that everybody is risk-averse and everybody has a time preference.

From what I understand this applies to most people but not everyone, especially outside of contrived laboratory circumstances. Overconfidence and ambition essentially amount to risk-loving choices for some major life choices.

comment by timtyler · 2010-08-16T18:22:36.231Z · LW(p) · GW(p)

you haven't signed up yet; whether you like it or not, that is a decision. And it's a decision that only makes sense if you think technology X is unlikely to be developed

What is it that is making you think that whatever SarahC hasn't "signed up" to is having a positive effect - and that she can't do something better with her resources?

comment by John_Maxwell (John_Maxwell_IV) · 2010-08-16T00:24:34.668Z · LW(p) · GW(p)

First, EY is concerned about risks due to technologies that have not yet been developed; as far as I know, there is no reliable way to make predictions about the likelihood of the development of new technologies. (This is also the basis of my skepticism about cryonics.) If you're going to say "Technology X is likely to be developed" then I'd like to see your prediction mechanism and whether it's worked in the past.

Let's keep in mind that your estimated probabilities of various technological advancements occurring and your level of confidence in those estimates are completely distinct... In particular, here you seem to express low estimated probabilities of various advancements occurring, and you justify this by saying "we really have no idea". This seems like a complete non sequitur. Maybe you have a correct argument in your mind, but you're not giving us all the pieces.

Replies from: None
comment by [deleted] · 2010-08-16T00:37:09.742Z · LW(p) · GW(p)
  1. Technology X is likely to be developed in a few decades.
  2. Technology X is risky.
  3. We must take steps to mitigate the risk.

If you haven't demonstrated 1 -- if it's still unknown -- you can't expect me to believe 3. The burden of proof is on whoever's asking for money for a new risk-mitigating venture, to give strong evidence that the risk is real.

Replies from: Aleksei_Riikonen, John_Maxwell_IV
comment by Aleksei_Riikonen · 2010-08-16T01:35:58.509Z · LW(p) · GW(p)

So you think a danger needs to likely arrive in a few decades for it to merit attention?

I think that is quite irresponsible. No law of physics states that all problems can certainly be solved very well in a few decades (the solutions for some problems might even necessarily involve political components, btw), so starting preparations earlier can be necessary.

comment by John_Maxwell (John_Maxwell_IV) · 2010-08-16T01:03:42.942Z · LW(p) · GW(p)

I see "burden of proof" as a misconcept in the same way that someone "deserving" something is. A better way of thinking about this: "You seem to be making a strong claim. Mind sharing the evidence for your claim for me? ...I disagree that the evidence you present justifies your claim."

For what it's worth, I also see "must _" as a misconcept--although "must _ to _" is not. It's an understandable usage if the "to _" clause is implicit, but that doesn't seem true in this case. So to fix up SIAI's argument, you could say that these are the statements whose probabilities are being contested:

  1. If SarahC takes action Y before the development of Technology X and Technology X is developed, the expected value of her action will exceed its cost.
  2. Technology X will be developed.

And depending on their probabilities, the following may or may not be true:

  • SarahC wants to take action Y.

Pretty much anything you say that's not relevant to one of statements 1 or 2 (including statements that certain people haven't been "responsible" enough in supporting their claims) is completely irrelevant to the question of whether you want to take action Y. You already have (or ought to be able to construct) probability estimates for each of 1 and 2.

Replies from: Perplexed
comment by Perplexed · 2010-08-16T01:53:42.494Z · LW(p) · GW(p)

Your grasp of decision theory is rather weak if you are suggesting that when Technology X is developed is irrelevant to SarahC's decision. Similarly, you seem to suggest that the ratio of value to cost is irrelevant and that all that matters is which is bigger. Wrong again.

But your real point was not to set up a correct decision problem, but rather to suggest that her questions about whether "certain people" have been "responsible" are irrelevant. Well, I have to disagree. If action Y is giving money to "certain people", then their level of "responsibility" is very relevant.

I did enjoy your observations regarding "burden of proof" and "must", though probably not as much as you did.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2010-08-16T02:33:08.147Z · LW(p) · GW(p)

Your grasp of decision theory is rather weak if you are suggesting that when Technology X is developed is irrelevant to SarahC's decision.

Of course that is important. I didn't want to include a lot of qualifiers.

I'm not trying to make a bulletproof argument so much as concisely give you an idea of why I think SarahC's argument is malformed. My thinking is that that should be enough for intellectually honest readers, as I don't have important insights to offer beyond the concise summary. If you think I ought to write longer posts with more qualifications for readers who aren't good at taking ideas seriously, feel free to say that.

Similarly, you seem to suggest that the ratio of value to cost is irrelevant and that all that matters is which is bigger. Wrong again.

Really? So in some circumstances it is rational to take an action for which the expected cost is greater than the expected value? Or is it irrational to take an action for which the expected value exceeds the expected cost? (I'm using "rational" to mean "expected utility maximizing", "cost" to refer to negative utility, and "value" to refer to positive utility--hopefully at this point my thought process is transparent.)

If action Y is giving money to "certain people", then their level of "responsibility" is very relevant.

It would be a well-formed argument to say that because SIAI folks make strong claims without justifying them, they won't use money SarahC donates well. As far as I can tell, SarahC has not explicitly made that argument. (Recall I said that she might have a correct argument in her mind but she isn't giving us all the pieces.)

I did enjoy your observations regarding "burden of proof" and "must", though probably not as much as you did.

Please, no insults - this isn't you versus me, is it?

Replies from: Perplexed
comment by Perplexed · 2010-08-16T02:53:27.700Z · LW(p) · GW(p)

Similarly, you seem to suggest that the ratio of value to cost is irrelevant and that all that matters is which is bigger. Wrong again.

Really? So in some circumstances it is rational to take an action for which the expected cost is greater than the expected value?

No, your error was in the other direction. If you look back carefully, you will notice that the ratio is being calculated conditionally on Technology X being developed. Given that the cost is sunk regardless of whether the technology appears, it is possible that SarahC should not act even though the (conditionally) expected return exceeds the cost.
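
(A minimal numeric sketch of this point, with made-up figures: the return conditional on Technology X exceeds the cost, yet the unconditional expectation is negative because the cost is paid whether or not X ever arrives.)

    # Hypothetical figures, purely illustrative.
    p_x = 0.2               # probability Technology X is ever developed
    benefit_given_x = 50    # value of action Y in the worlds where X arrives
    cost = 15               # cost of Y, sunk whether or not X arrives

    ev_given_x = benefit_given_x - cost          # 35 > 0: worthwhile conditional on X
    ev_overall = p_x * benefit_given_x - cost    # -5 < 0: not worthwhile unconditionally
    print(ev_given_x, ev_overall)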

Please, no insults - this isn't you versus me, is it?

Shouldn't be. Nor you against her. I was catty only because I imagined that you were being catty. If you were not, then I surely apologize.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2010-08-16T03:04:24.165Z · LW(p) · GW(p)

I edited my post before I saw your response :-P

Replies from: Perplexed
comment by Perplexed · 2010-08-16T03:13:02.924Z · LW(p) · GW(p)

I'm sorry, I don't see any edits that matter for the logic of the thread. What am I missing?

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2010-08-16T03:24:26.992Z · LW(p) · GW(p)

OK, my mistake.

I didn't say what SarahC should do with the probabilities once she had them. All I said was that they were pretty much all that was relevant to the question of whether she should donate. Unless I didn't, in which case I meant to.

comment by nhamann · 2010-08-15T18:22:22.540Z · LW(p) · GW(p)

Second, shouldn't an organization worried about the dangers of AI be very closely in touch with AI researchers in computer science departments? Sure, there's room for pure philosophy and mathematics, but you'd need some grounding in actual AI to understand what future AIs are likely to do.

I'm not sure what you refer to by "actual AI." There is a sub-field of academic computer science which calls itself "Artificial Intelligence," but it's not clear that this is anything more than a label, or that this field does anything more than use clever machine learning techniques to make computer programs accomplish things that once seemed to require intelligence (like playing chess, driving a car, etc.)

I'm not sure why it is a requirement that an organization concerned with the behavior of hypothetical future engineered minds would need to be in contact with these researchers.

Replies from: Eliezer_Yudkowsky, None
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-18T14:31:02.804Z · LW(p) · GW(p)

I'm not sure why it is a requirement that an organization concerned with the behavior of hypothetical future engineered minds would need to be in contact with these researchers.

You have to know some of their math (some of it is interesting, some not) but this does not require getting on the phone with them and asking them to explain their math, to which of course they would tell you to RTFM instead of calling them.

comment by [deleted] · 2010-08-15T18:59:28.923Z · LW(p) · GW(p)

Yes, the subfield of computer science is what I'm referring to.

I'm not sure that the difference between "clever machine learning techniques" and "minds" is as hard and fast as you make it. A machine that drives a car is doing one of the things a human mind does; it may, in some cases, do it through a process that's structurally similar to the way the human mind does it. It seems to me that machines that can do these simple cognitive tasks are the best source of evidence we have today about hypothetical future thinking machines.

Replies from: nhamann
comment by nhamann · 2010-08-15T20:10:18.446Z · LW(p) · GW(p)

I'm not sure that the difference between "clever machine learning techniques" and "minds" is as hard and fast as you make it.

I gave the wrong impression here. I actually think that machine learning might be a good framework for thinking about how parts of the brain work, and I am very interested in studying machine learning. But I am skeptical that more than a small minority of projects where machine learning techniques have been applied to solve some concrete problem have shed any light on how (human) intelligence works.

In other words, I largely agree with Ben Goertzel's assertion that there is a fundamental difference between "narrow AI" and AI research that might eventually lead to machines capable of cognition, but I'm not sure I have good evidence for this argument.

Replies from: komponisto, JoshuaZ
comment by komponisto · 2010-08-15T22:19:50.575Z · LW(p) · GW(p)

In other words, I largely agree with Ben Goertzel's assertion that there is a fundamental difference between "narrow AI" and AI research that might eventually lead to machines capable of cognition, but I'm not sure I have good evidence for this argument.

Although one should be very, very careful not to confuse the opinions of someone like Goertzel with those of the people (currently) at SIAI, I think it's fair to say that most of them (including, in particular, Eliezer) hold a view similar to this. And this is the location -- pretty much the only important one -- of my disagreement with those folks. (Or, rather, I should say my differing impression from those folks -- to make an important distinction brought to my attention by one of the folks in question, Anna Salamon.) Most of Eliezer's claims about the importance of FAI research seem obviously true to me (to the point where I marvel at the fuss that is regularly made about them), but the one that I have not quite been able to swallow is the notion that AGI is only decades away, as opposed to a century or two. And the reason is essentially disagreement on the above point.

At first glance this may seem puzzling, since, given how much more attention is given to narrow AI by researchers, you might think that someone who believes AGI is "fundamentally different" from narrow AI might be more pessimistic about the prospect of AGI coming soon than someone (like me) who is inclined to suspect that the difference is essentially quantitative. The explanation, however, is that (from what I can tell) the former belief leads Eliezer and others at SIAI to assign (relatively) large amounts of probability mass to the scenario of a small set of people having some "insight" which allows them to suddenly invent AGI in a basement. In other words, they tend to view AGI as something like an unsolved math problem, like those on the Clay Millennium list, whereas it seems to me like a daunting engineering task analogous to colonizing Mars (or maybe Pluto).

This -- much more than all the business about fragility of value and recursive self-improvement leading to hard takeoff, which frankly always struck me as pretty obvious, though maybe there is hindsight involved here -- is the area of Eliezer's belief map that, in my opinion, could really use more public, explicit justification.

Replies from: Daniel_Burfoot, nhamann, Vladimir_Nesov, jacob_cannell
comment by Daniel_Burfoot · 2010-08-15T23:32:57.832Z · LW(p) · GW(p)

whereas it seems to me like a daunting engineering task analogous to colonizing Mars

I don't think this is a good analogy. The problem of colonizing Mars is concrete. You can make a TODO list; you can carve the larger problem up into subproblems like rockets, fuel supply, life support, and so on. Nobody knows how to do that for AI.

Replies from: John_Maxwell_IV, komponisto
comment by John_Maxwell (John_Maxwell_IV) · 2010-08-16T00:31:40.719Z · LW(p) · GW(p)

OK, but it could still end up being like colonizing Mars if at some point someone realizes how to do that. Maybe komponisto thinks that someone will probably carve AGI in to subproblems before it is solved.

comment by komponisto · 2010-08-16T02:02:16.559Z · LW(p) · GW(p)

Well, it seems we disagree. Honestly, I see the problem of AGI as the fairly concrete one of assembling an appropriate collection of thousands-to-millions of "narrow AI" subcomponents.

Perhaps another way to put it would be that I suspect the Kolmogorov complexity of any AGI is so high that it's unlikely that the source code could be stored in a small number of human brains (at least the way the latter currently work).

EDIT: When I say "I suspect" here, of course I mean "my impression is". I don't mean to imply that I don't think this thought has occurred to the people at SIAI (though it might be nice if they could explain why they disagree).

Replies from: CarlShulman, Eliezer_Yudkowsky, Jonathan_Graehl, whpearson
comment by CarlShulman · 2010-08-16T11:35:39.286Z · LW(p) · GW(p)

The portion of the genome coding for brain architecture is a lot smaller than Windows 7, bit-wise.
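
(A back-of-the-envelope check on the size comparison, using rough public figures rather than anything from the comment above: roughly 3.2e9 base pairs at 2 bits each puts the whole genome near 0.8 GB uncompressed, while a Windows 7 installation occupies on the order of 15-20 GB of disk; the brain-architecture portion is only a fraction of the genome, so the comparison is if anything conservative.)

    # Rough arithmetic only; figures are approximate.
    base_pairs = 3.2e9
    bits_per_base = 2
    genome_gb = base_pairs * bits_per_base / 8 / 1e9
    print(genome_gb)  # ~0.8 GB for the entire genome, before any compression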

Replies from: whpearson, Jonathan_Graehl, komponisto
comment by whpearson · 2010-08-17T14:29:22.098Z · LW(p) · GW(p)

An odd but somewhat relevant article on the information needed for specifying the brain. It is a biologist tearing a strip off Kurzweil for suggesting that we'll be able to reverse-engineer the human brain in a decade by looking at the genome.

Replies from: CarlShulman
comment by CarlShulman · 2010-08-17T14:54:39.164Z · LW(p) · GW(p)

P.Z. is misreading a quote from a secondhand report. Kurzweil is not talking about reading out the genome and simulating the brain from that, but about using improvements in neuroimaging to inform input-output models of brain regions. The genome point is just an indicator of the limited number of component types involved, which helps to constrain estimates of difficulty.

Edit: Kurzweil has now replied, more or less along the lines above.

Replies from: timtyler, whpearson, Perplexed
comment by timtyler · 2010-08-17T16:46:30.693Z · LW(p) · GW(p)

Kurzweil's analysis is simply wrong. Here's the gist of my refutation of it:

"So, who is right? Does the brain's design fit into the genome? - or not?

The detailed form of proteins arises from a combination of the nucleotide sequence that specifies them, the cytoplasmic environment in which gene expression takes place, and the laws of physics.

We can safely ignore the contribution of cytoplasmic inheritance - however, the contribution of the laws of physics is harder to discount. At first sight, it may seem simply absurd to argue that the laws of physics contain design information relating to the construction of the human brain. However there is a well-established mechanism by which physical law may do just that - an idea known as the anthropic principle. This argues that the universe we observe must necessarily permit the emergence of intelligent agents. If that involves encoding the design of the brains of intelligent agents into the laws of physics, then so be it. There are plenty of apparently-arbitrary constants in physics where such information could conceivably be encoded: the fine structure constant, the cosmological constant, Planck's constant - and so on.

At the moment, it is not even possible to bound the quantity of brain-design information so encoded. When we get machine intelligence, we will have an independent estimate of the complexity of the design required to produce an intelligent agent. Alternatively, when we know what the laws of physics are, we may be able to bound the quantity of information encoded by them. However, today neither option is available to us."

comment by whpearson · 2010-08-17T22:44:28.222Z · LW(p) · GW(p)

Wired really messed up the flow of the talk in that case. Is it based on a Singularity Summit talk?

comment by Perplexed · 2010-08-17T15:31:02.674Z · LW(p) · GW(p)

I agree with your analysis, but I also understand where PZ is coming from. You write above that the portion of the genome coding for the brain is small. PZ replies that the small part of the genome you are referring to does not by itself explain the brain; you also need to understand the decoding algorithm - itself scattered through the whole genome and perhaps also the zygotic "epigenome". You might perhaps clarify that what you were talking about with "small portion of the genome" was the Kolmogorov complexity, so you were already including the decoding algorithm in your estimate.

The problem is, how do you get the point through to PZ and other biologists who come at the question from an evo-devo PoV? I think that someone ought to write a comment correcting PZ, but in order to do so, the commenter would have to speak the languages of three fields - neuroscience, evo-devo, and information theory - and understand all three well enough to unpack the jargon for laymen without thereby losing credibility with people who do know one or more of the three fields.

Replies from: timtyler
comment by timtyler · 2010-08-17T16:52:51.945Z · LW(p) · GW(p)

The problem is, how do you get the point through to PZ and other biologists who come at the question from an evo-devo PoV?

Why bother? PZ's rather misguided rant isn't doing very much damage. Just ignore him, I figure.

Maybe it is a slow news day. PZ's rant got Slashdotted:

http://science.slashdot.org/story/10/08/17/1536233/Ray-Kurzweil-Does-Not-Understand-the-Brain

PZ has stooped pretty low with the publicity recently:

http://scienceblogs.com/pharyngula/2010/08/the_eva_mendes_sex_tape.php

Maybe he was trolling with his Kurzweil rant. He does have a history with this subject matter, though:

http://scienceblogs.com/pharyngula/2009/02/singularly_silly_singularity.php

comment by Jonathan_Graehl · 2010-08-16T21:58:56.987Z · LW(p) · GW(p)

Obviously the genome alone doesn't build a brain. I wonder how many "bits" I should add on for the normal environment that's also required (in terms of how much additional complexity is needed to get the first artificial mind that can learn about the world given additional sensory-like inputs). Probably not too many.

comment by komponisto · 2010-08-16T12:14:45.274Z · LW(p) · GW(p)

Thanks, this is useful to know. Will revise beliefs accordingly.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-18T14:32:54.383Z · LW(p) · GW(p)

Honestly, I see the problem of AGI as the fairly concrete one of assembling an appropriate collection of thousands-to-millions of "narrow AI" subcomponents.

What do you think you know and how do you think you know it? Let's say you have a thousand narrow AI subcomponents. (Millions = implausible due to genome size, as Carl Shulman points out.) Then what happens, besides "then a miracle occurs"?

Replies from: komponisto
comment by komponisto · 2010-08-19T00:13:15.853Z · LW(p) · GW(p)

What happens is that the machine has so many different abilities (playing chess and walking and making airline reservations and...) that its cumulative effect on its environment is comparable to a human's or greater; in contrast to the previous version with 900 components, which was only capable of responding to the environment on the level of a chess-playing, web-searching squirrel.

This view arises from what I understand about the "modular" nature of the human brain: we think we're a single entity that is "flexible enough" to think about lots of different things, but in reality our brains consist of a whole bunch of highly specialized "modules", each able to do some single specific thing.

Now, to head off the "Fly Q" objection, Iet me point out that I'm not at all suggesting that an AGI has to be designed like a human brain. Instead, I'm "arguing" (expressing my perception) that the human brain's general intelligence isn't a miracle: intelligence really is what inevitably happens when you string zillions of neurons together in response to some optimization pressure. And the "zillions" part is crucial.

(Whoever downvoted the grandparent was being needlessly harsh. Why in the world should I self-censor here? I'm just expressing my epistemic state, and I've even made it clear that I don't believe I have information that SIAI folks don't, or am being more rational than they are.)

Replies from: Eliezer_Yudkowsky, jacob_cannell
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-19T00:38:13.987Z · LW(p) · GW(p)

If a thousand species in nature with a thousand different abilities were to cooperate, would they equal the capabilities of a human? If not, what else is missing?

Replies from: thomblake, komponisto
comment by thomblake · 2010-08-19T00:48:18.528Z · LW(p) · GW(p)

Tough problem. My first reaction is 'yes', but I think that might be because we're assuming cooperation, which might be letting more in the door than you want.

Replies from: wedrifid
comment by wedrifid · 2010-08-19T04:54:20.763Z · LW(p) · GW(p)

Exactly the thought I had. Cooperation is kind of a big deal.

comment by komponisto · 2010-08-19T00:47:29.770Z · LW(p) · GW(p)

Yes, if there were a sufficiently powerful optimization process controlling the form of their cooperation.

Replies from: komponisto, whpearson, WrongBot
comment by komponisto · 2010-08-19T02:04:07.740Z · LW(p) · GW(p)

I am highly confused about the parent having been voted down, to the point where I am in a state of genuine curiosity about what went through the voter's mind as he or she saw it.

Eliezer asked whether a thousand different animals cooperating could have the power of a human. I answered:

Yes, if there were a sufficiently powerful optimization process controlling the form of their cooperation

And then someone came along, read this, and thought....what? Was it:

  • "No, you idiot, obviously no optimization process could be that powerful." ?

  • "There you go: 'sufficiently powerful optimization process' is equivalent to 'magic happens'. That's so obvious that I'm not going to waste my time pointing it out; instead, I'm just going to lower your status with a downvote." ?

  • "Clearly you didn't understand what Eliezer was asking. You're in over your head, and shouldn't be discussing this topic." ?

  • Something else?

comment by whpearson · 2010-08-19T09:01:32.712Z · LW(p) · GW(p)

Do you expect the conglomerate entity to be able to read, or to be able to learn how to? Considering Eliezer can quite happily pick many things like archer fish (the ability to shoot water to take out flying insects) and chameleons (the ability to control their eyes independently), I'm not sure how they all add up to reading.

comment by WrongBot · 2010-08-19T02:01:32.850Z · LW(p) · GW(p)

The optimization process is the part where the intelligence lives.

Replies from: komponisto
comment by komponisto · 2010-08-19T02:08:24.167Z · LW(p) · GW(p)

Natural selection is an optimization process, but it isn't intelligent.

Also, the point here is AI -- one is allowed to assume the use of intelligence in shaping the cooperation. That's not the same as using intelligence as a black box in describing the nature of it.

If you were the downvoter, might I suggest giving me the benefit of the doubt that I'm up to speed on these kinds of subtleties? (I.e. if I make a comment that sounds dumb to you, think about it a little more before downvoting?)

Replies from: WrongBot
comment by WrongBot · 2010-08-19T02:23:32.543Z · LW(p) · GW(p)

You were at +1 when I downvoted, so I'm not alone.

Natural selection is a very bad optimization process, and so it's quite unintelligent relative to any standards we might have as humans.

Replies from: komponisto
comment by komponisto · 2010-08-19T02:28:39.968Z · LW(p) · GW(p)

Now it's my turn to downvote, on the grounds that you didn't understand my comment. I agree that natural selection is unintelligent -- that was my whole point! It was intended as a counterexample to your implied assertion that an appeal to an optimization process is an appeal to intelligence.

EDIT: I suppose this confirms on a small scale what had become apparent in the larger discussion here about SIAI's public relations: people really do have more trouble noticing intellectual competence than I tend to realize.

Replies from: WrongBot, Eliezer_Yudkowsky
comment by WrongBot · 2010-08-19T17:23:08.371Z · LW(p) · GW(p)

(N.B. I just discovered that I had not, in fact, downvoted the comment that began this discussion. I must have had it confused with another.)

Like Eliezer, I generally think of intelligence and optimization as describing the same phenomenon. So when I saw this exchange:

If a thousand species in nature with a thousand different abilities were to cooperate, would they equal the capabilities of a human? If not, what else is missing?

Yes, if there were a sufficiently powerful optimization process controlling the form of their cooperation.

I read your reply as meaning approximately "1000 small cognitive modules are a really powerful optimization process if and only if their cooperation is controlled by a sufficiently powerful optimization process."

To answer the question you asked here, I thought the comment was worthy of a downvote (though apparently I did not actually follow through) because it was circular in a non-obvious way that contributed only confusion.

I am probably a much more ruthless downvoter than many other LessWrong posters; my downvotes indicate a desire to see "fewer things like this" with a very low threshold.

Replies from: komponisto
comment by komponisto · 2010-08-20T07:56:53.553Z · LW(p) · GW(p)

I read your reply as meaning approximately "1000 small cognitive modules are a really powerful optimization process if and only if their cooperation is controlled by a sufficiently powerful optimization process."

Thank you for explaining this, and showing that I was operating under the illusion of transparency.

My intended meaning was nothing so circular. The optimization process I was talking about was the one that would have built the machine, not something that would be "controlling" it from inside. I thought (mistakenly, it appears) that this would be clear from the fact that I said "controlling the form of their cooperation" rather than "controlling their cooperation". My comment was really nothing different from thomblake's or wedrifid's. I was saying, in effect, "yes, on the assumption that the individual components can be made to cooperate, I do believe that it is possible to assemble them in so clever a manner that their cooperation would produce effective intelligence."

The "cleverness" referred to in the previous sentence is that of the whatever created the machine (which could be actual human programmers, or, theoretically, something else like natural selection) and not the "effective intelligence" of the machine itself. (Think of a programmer, not a homunculus.) Note that I easily envision the process of implementing such "cleverness" itself not looking particularly clever -- perhaps the design would be arrived at after many iterations of trial-and-error, with simpler devices of similar form. (Natural selection being the extreme case of this kind of process.) So I'm definitely not thinking magically here, and least not in any obvious way (such as would warrant a downvote, for example).

I can now see how my words weren't as transparent as I thought, and thank you for drawing this to my attention; at the same time, I hope you've updated your prior that a randomly selected comment of mine results from a lack of understanding of basic concepts.

Replies from: WrongBot
comment by WrongBot · 2010-08-20T14:33:10.751Z · LW(p) · GW(p)

Consider me updated. Thank you for taking my brief and relatively unhelpful comments seriously, and for explaining your intended point. While I disagree that the swiftest route to AGI will involve lots of small modules, it's a complicated topic with many areas of high uncertainty; I suspect you are at least as informed about the topic as I am, and will be assigning your opinions more credence in the future.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-08-20T14:41:38.498Z · LW(p) · GW(p)

Hooray for polite, respectful, informative disagreements on LW!

Replies from: komponisto
comment by komponisto · 2010-08-20T15:01:26.845Z · LW(p) · GW(p)

It's why I keep coming back even after getting mad at the place.

(That, and the fact that this is one of very few places I know where people reliably get easy questions right.)

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-19T04:10:51.662Z · LW(p) · GW(p)

Downvoted for retaliatory downvoting; voted everything else up toward 0.

Replies from: wedrifid
comment by wedrifid · 2010-08-19T04:51:51.718Z · LW(p) · GW(p)

Downvoted for retaliatory downvoting; voted everything else up toward 0.

Downvoted the parent and upvoted the grandparent. "On the grounds that you didn't understand my comment" is a valid reason for downvoting and based on a clearly correct observation.

I do agree that komponisto would have been better served by leaving off mention of voting altogether. Just "You didn't understand my comment. ..." would have conveyed an appropriate level of assertiveness to make the point. That would have avoided sending a signal of insecurity and denied others the invitation to judge.

Replies from: jimrandomh
comment by jimrandomh · 2010-08-19T17:08:18.570Z · LW(p) · GW(p)

Voted down all comments that talk about voting, for being too much about status rather than substance.

Vote my comment towards -1 for consistency.

Replies from: komponisto
comment by komponisto · 2010-08-19T17:48:03.292Z · LW(p) · GW(p)
  • Status matters; it's a basic human desideratum, like food and sex (in addition to being instrumentally useful in various ways). There seems to be a notion among some around here that concern with status is itself inherently irrational or bad in some way. But this is as wrong as saying that concern with money or good-tasting food is inherently irrational or bad. Yes, we don't want the pursuit of status to interfere with our truth-detecting abilities; but the same goes for the pursuit of food, money, or sex, and no one thinks it's wrong for aspiring rationalists to pursue those things. Still less is it considered bad to discuss them.

  • Comments like the parent are disingenuous. If we didn't want users to think about status, we wouldn't have adopted a karma system in the first place. A norm of forbidding the discussion of voting creates the wrong incentives: it encourages people to make aggressive status moves against others (downvoting) without explaining themselves. If a downvote is discussed, the person being targeted at least has better opportunity to gain information, rather than simply feeling attacked. They may learn whether their comment was actually stupid, or if instead the downvoter was being stupid. When I vote comments down I usually make a comment explaining why -- certainly if I'm voting from 0 to -1. (Exceptions for obvious cases.)

  • I really don't appreciate what you've done here. A little while ago I considered removing the edit from my original comment that questioned the downvote, but decided against it to preserve the context of the thread. Had I done so I wouldn't now be suffering the stigma of a comment at -1.

Replies from: thomblake, Morendil, Oligopsony
comment by thomblake · 2010-08-19T18:01:47.964Z · LW(p) · GW(p)

When I vote comments down I usually make a comment explaining why -- certainly if I'm voting from 0 to -1. (Exceptions for obvious cases.)

Then you must be making a lot of exceptions, or you don't downvote very much. I find that "I want to see fewer comments like this one" is true of about 1/3 of the comments or so, though I don't downvote quite that much anymore since there is a cap now. Could you imagine if every 4th comment in 'recent comments' was taken up by my explanations of why I downvoted a comment? And then what if people didn't like my explanations and were following the same norm - we'd quickly become a site where most comments are explaining voting behavior.

A bit of a slippery slope argument, but I think it is justified - I can make it more rigorous if need be.

Replies from: komponisto
comment by komponisto · 2010-08-19T18:22:53.015Z · LW(p) · GW(p)

Then you must be making a lot of exceptions, or you don't downvote very much

Indeed I don't downvote very much; although probably more than you're thinking, since on reflection I don't typically explain my votes if they don't affect the sign of the comment's score.

Could you imagine if every 4th comment in 'recent comments' was taken up by my explanations of why I downvoted a comment?

I think you downvote too much. My perception is that, other than the rapid downvoting of trolls and inane comments, the quality of this site is the result mainly of the incentives created by upvoting, rather than downvoting.

Yes, too much explanation would also be bad; but jimrandomh apparently wants none, and I vigorously oppose that. The right to inquire about a downvote should not be trampled upon!

Replies from: thomblake
comment by thomblake · 2010-08-19T18:34:37.952Z · LW(p) · GW(p)

I have no problem with your right to inquire about a downvote; I will continue to exercise my right to downvote such requests without explanation.

Replies from: komponisto
comment by komponisto · 2010-08-19T18:49:57.126Z · LW(p) · GW(p)

I consider that a contradiction.

From the recent welcome post (emphasis added):

However, it can feel really irritating to get downvoted, especially if one doesn't know why. It happens to all of us sometimes, and it's perfectly acceptable to ask for an explanation.

Replies from: thomblake, lsparrish
comment by thomblake · 2010-08-19T19:11:58.078Z · LW(p) · GW(p)

Perhaps we have different ideas of what 'rights' and 'trampling upon' rights entail.

You have the right to comment about reasons for downvoting - no one will stop you and armed guards will not show up and beat you for it. I think it is a good thing that you have this right.

If I think we would be better off with fewer comments like that, I'm fully within my rights to downvote the comment; similarly, no one will stop me and armed guards will not show up and beat me for it. I think it is a good thing that I have this right.

I'm not sure in what sense you think there is a contradiction between those two things, or if we are just talking past each other.

Replies from: Alicorn, komponisto
comment by Alicorn · 2010-08-19T19:15:49.580Z · LW(p) · GW(p)

I think you should be permitted to downvote as you please, but do note that literal armed guards are not necessary for there to be real problems with the protection of rights.

Replies from: thomblake
comment by thomblake · 2010-08-19T19:20:42.843Z · LW(p) · GW(p)

My implicit premise was that either 1) violent people or 2) a person actually preventing your action is necessary for there to be a real problem with the protection of rights. Is there a problem with that version?

comment by komponisto · 2010-08-19T19:22:18.373Z · LW(p) · GW(p)

In such a context, when someone speaks of the "right" to do X, that means the ability to do X without being punished (in whatever way is being discussed). Here, downvoting is the analogue of armed guards beating one up.

Responding by pointing out that a yet harsher form of punishment is not being imposed is not a legitimate move, IMHO.

Replies from: Clippy, thomblake, wedrifid
comment by Clippy · 2010-08-19T19:29:49.751Z · LW(p) · GW(p)

*reads through subthread*

You are all talking about this topic, and yet you regard me as weird??? That's like the extrusion die asserting that the metal wire has a grey spectral component!

(if it could communicate, I mean)

Replies from: wedrifid
comment by wedrifid · 2010-08-19T22:59:04.357Z · LW(p) · GW(p)

It is unfortunate that I can only vote you up here once.

comment by thomblake · 2010-08-19T19:47:32.560Z · LW(p) · GW(p)

In such a context, when someone speaks of the "right" to do X, that means the ability to do X without being punished (in whatever way is being discussed). Here, downvoting is the analogue of armed guards beating one up.

Ah, I could see how you would see that as a contradiction, then.

In that case, for purposes of this discussion, I withdraw my support for your right to do that.

And since I intend to downvote any comment or post for any reason I see fit at the time, it follows that no one has the right to post any comment or post of any sort, by your definition, since they can reasonably expect to be 'punished' for it.

For the purposes of other discussions, I do not accept your definition of 'right', nor do I accept your framing of a downvote as a 'punishment' in the relevant sense. I will continue to do my very best to ensure that only the highest-quality content is shown to new users, and if you consider that 'punishment', that is irrelevant to me.

Replies from: komponisto
comment by komponisto · 2010-08-19T20:00:07.751Z · LW(p) · GW(p)

I won't bother trying any further to convince you here; but in general I will continue to ask that people behave in a less hostile manner.

comment by wedrifid · 2010-08-19T22:55:51.737Z · LW(p) · GW(p)

Here, downvoting is the analogue of armed guards beating one up.

Wouldn't that analogue better apply to publicly and personally insulting the poster, targeting your verbal abuse at the very attributes that this community holds dear, deleting posts and threatening banning? Although I suppose your analogous scale could be extended in scope to include 'imprisonment and torture without trial'.

On the topic of the immediate context, I do hope that you consider thomblake's position and make an exception to your usual policy in his case. I imagine it would be extremely frustrating for you to treat others with what you consider to be respect and courtesy when you know that the recipient does not grant you the same right. It would jar with my preference for symmetry if I thought you didn't feel free to implement a downvote-friendly voting policy, at least on a case-by-case basis. I wouldn't consider you to be inconsistent, and definitely not hypocritical. I would consider you sane.

comment by lsparrish · 2010-08-20T14:59:40.111Z · LW(p) · GW(p)

The proper reason to request clarification is in order to not make the mistake again -- NOT as a defensive measure against some kind of imagined slight on your social status. Yes social status is a part of the reason for the karma system -- but it is not something you have an inherent right to. Otherwise there would be no point to it!

Some good reasons to be downvoted: badly formed assertions, ambiguous statements, being confidently wrong, being belligerent, derailing the topic.

In this case your statement was a vague disagreement with the intuitively correct answer, with no supporting argument provided. That is just bad writing, and I would downvote it for being so. It does not imply that I think you have no real idea (something I have no grounds to take a position on), just that the specific comment did not communicate your idea effectively. You should value such feedback, as it will help you improve your writing skills.

Replies from: wedrifid, komponisto
comment by wedrifid · 2010-08-20T15:29:26.054Z · LW(p) · GW(p)

The proper reason to request clarification is in order to not make the mistake again

I reject out of hand any proposed rule of propriety that stipulates people must pretend to be naive supplicants.

When people ask me for an explanation of a downvote I most certainly do not take it for granted that by so doing they are entering into my moral reality and willing to accept my interpretation of what is right and what is a 'mistake'. If I choose to explain reasons for a downvote I also don't expect them to henceforth conform to my will. They can choose to keep doing whatever annoying thing they were doing (there are plenty more downvotes where that one came from.)

There is more than one reason to ask for clarification for a downvote - even "I'm just kinda curious" is a valid reason. Sometimes votes just seem bizarre and not even Machiavellian reasoning helps explain the pattern. I don't feel obliged to answer any such request but I do so if convenient. I certainly never begrudge others the opportunity to ask if they do so politely.

Yes social status is a part of the reason for the karma system -- but it is not something you have an inherent right to. Otherwise there would be no point to it!

Not what Kompo was saying.

Replies from: lsparrish
comment by lsparrish · 2010-08-20T18:02:02.038Z · LW(p) · GW(p)

I reject out of hand any proposed rule of propriety that stipulates people must pretend to be naive supplicants.

I never said anything about pretending anything. I said if you request clarification, and don't actually need clarification, you're just making noise. Ideally you will be downvoted for that.

There is more than one reason to ask for clarification for a downvote - even "I'm just kinda curious" is a valid reason. Sometimes votes just seem bizarre and not even Machiavellian reasoning helps explain the pattern. I don't feel obliged to answer any such request but I do so if convenient. I certainly never begrudge others the opportunity to ask if they do so politely.

Sure, but I still maintain that a request for clarification itself can be annoying and hence downvote worthy. I don't think any comment is inherently protected or should be exempt from being downvoted.

Replies from: wedrifid
comment by wedrifid · 2010-08-21T01:14:28.203Z · LW(p) · GW(p)

Sure, but I still maintain that a request for clarification itself can be annoying and hence downvote worthy. I don't think any comment is inherently protected or should be exempt from being downvoted.

I agree with you on these points. I downvote requests for clarification sometimes - particularly if, say, the reason for the downvote is transparent or the flow conveys an attitude that jars with me. I certainly agree that people should be free to downvote whenever they please and for whatever reason they please - again, for me to presume otherwise would be a demand for naivety or dishonesty (typically both).

comment by komponisto · 2010-08-20T15:13:18.697Z · LW(p) · GW(p)

Feedback is valuable when it is informative, as the exchange with WrongBot turned out to be in the end.

Unfortunately, a downvote by itself will not typically be that informative. Sometimes it's obvious why a comment was downvoted (in which case it doesn't provide much information anyway); but in this case, I had no real idea, and it seemed plausible that it resulted from a misinterpretation of the comment. (As turned out to be the case.)

(Also, the slight to one's social status represented by a downvote isn't "imagined"; it's tangible and numerical.)

In this case your statement was a vague disagreement with the intuitively correct answer, with no supporting argument provided. That is just bad writing, and I would downvote it for so being

The comment was a quick answer to a yes-no question posed to me by Eliezer. Would you have been more or less inclined to downvote it if I had written only "Yes"?

Replies from: lsparrish
comment by lsparrish · 2010-08-20T17:13:30.332Z · LW(p) · GW(p)

Unfortunately, a downvote by itself will not typically be that informative. Sometimes it's obvious why a comment was downvoted (in which case it doesn't provide much information anyway); but in this case, I had no real idea, and it seemed plausible that it resulted from a misinterpretation of the comment. (As turned out to be the case.)

Providing information isn't the point of downvoting; it is a means of expressing social disapproval. (Perhaps that is information in a sense, but it is more complicated than just that.) The fact that they are being contrary to a social norm may or may not be obvious to the commenter; if not, then it is new information. Regardless, the downvote is a signal to reexamine the comment and think about why it was not approved by over 50% of the readers who felt strongly enough to vote on it.

(Also, the slight to one's social status represented by a downvote isn't "imagined"; it's tangible and numerical.)

Tangibility and significance are completely different matters. A penny might appear more solid than a dollar, but is far less worthy of consideration. You could ignore a minus-1 comment quite safely without people deciding (even momentarily) that you are a loser or some such. That you chose not to makes it look like you have an inflated view of how significant it is.

The comment was a quick answer to a yes-no question posed to me by Eliezer. Would you have been more or less inclined to downvote it if I had written only "Yes"?

Probably less, as I would then have simply felt like requesting clarification, or perhaps even thinking of a reason on my own. A bad argument (or one that sounds bad) is worse than no argument.

comment by Morendil · 2010-08-19T17:57:23.799Z · LW(p) · GW(p)

Status matters; it's a basic human desideratum, like food and sex

You can live without sex, but you can't live without food. So the latter two are "desiderata" in rather different senses.

comment by Oligopsony · 2010-08-19T17:55:43.162Z · LW(p) · GW(p)

Status matters; it's a basic human desideratum, like food and sex (in addition to being instrumentally useful in various ways). There seems to be a notion among some around here that concern with status is itself inherently irrational or bad in some way. But this is as wrong as saying that concern with money or good-tasting food is inherently irrational or bad. Yes, we don't want the pursuit of status to interfere with our truth-detecting abilities; but the same goes for the pursuit of food, money, or sex, and no one thinks it's wrong for aspiring rationalists to pursue those things.

Status is an inherently zero-sum good, so while it is rational for any given individual to pursue it, we'd all be better off, ceteris paribus, if nobody pursued it. Everyone has a small incentive for other people not to pursue status, just as they have an incentive for them not to be violent or to smell funny; hence the existence of popular anti-status-seeking norms.

Replies from: komponisto
comment by komponisto · 2010-08-19T18:10:40.242Z · LW(p) · GW(p)

Status is an inherently zero-sum good

I don't think I agree, at least in the present context. I think of status as being like money -- or, in fact, the karma score on LW, since that is effectively what we're talking about here anyway. It controls the granting of important privileges, such as what we might call "being listened to" -- having folks read your words carefully, interpret them charitably, and perhaps even act on them or otherwise be influenced by them.

(To tie this to the larger context, this is why I started paying attention to SIAI: because Eliezer had won "status" in my mind.)

Replies from: JGWeissman
comment by JGWeissman · 2010-08-19T18:20:00.799Z · LW(p) · GW(p)

I agree with this.

While status may appear zero-sum amongst those who are competing for influence in a community, for the community as a whole, status is positive-sum when it accurately reflects the value of people to the community.

comment by jacob_cannell · 2010-08-25T03:46:49.441Z · LW(p) · GW(p)

This view arises from what I understand about the "modular" nature of the human brain: we think we're a single entity that is "flexible enough" to think about lots of different things, but in reality our brains consist of a whole bunch of highly specialized "modules", each able to do some single specific thing.

The brain has many different components with specializations, but the largest and, in humans, dominant portion - the cortex - is not really specialized at all in the way you outline.

The cortex is no more specialized than your hard drive.

It's composed of a single repeating structure and associated learning algorithm that appears to be universal. The functional specializations that appear in the adult brain arise due to topological wiring proximity to the relevant sensory and motor connections. The V1 region is not hard-wired to perform mathematically optimal Gabor-like edge filters. It automatically evolves into this configuration because it is the optimal configuration for modelling the input data at that layer, and it does so solely based on exposure to said input data from retinal ganglion cells.
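
For a concrete picture of what "Gabor-like edge filters" means, here is a minimal NumPy/SciPy sketch of an orientation filter bank. It is a hand-built illustration only - the parameter values are arbitrary, and the point of the paragraph above is that V1 arrives at similar filters by self-organization rather than by being hand-coded:

    import numpy as np
    from scipy.signal import convolve2d

    def gabor_kernel(size=15, wavelength=6.0, sigma=3.0, theta=0.0):
        # Real part of a Gabor filter: a sinusoid windowed by a Gaussian envelope.
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        x_t = x * np.cos(theta) + y * np.sin(theta)    # project onto the filter's preferred axis
        envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        carrier = np.cos(2 * np.pi * x_t / wavelength)
        return envelope * carrier

    # One kernel per preferred orientation, like a tiny "orientation column".
    orientations = [k * np.pi / 8 for k in range(8)]
    bank = [gabor_kernel(theta=t) for t in orientations]

    image = np.random.rand(64, 64)                      # stand-in for a retinal patch
    responses = [np.abs(convolve2d(image, k, mode="same")) for k in bank]
    preferred = np.argmax(np.stack(responses), axis=0)  # per-pixel best-responding orientation
    print(preferred.shape)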

You can think of cortical tissue as a biological 'neuronium'. It has a semi-magical emergent capacity to self-organize into an appropriate set of feature detectors based on what it's wired to. more on this

All that being said, the inter-regional wiring itself is currently less understood and is probably more genetically predetermined.

comment by Jonathan_Graehl · 2010-08-16T22:01:23.601Z · LW(p) · GW(p)

Well, it seems we disagree. Honestly, I see the problem of AGI as the fairly concrete one of assembling an appropriate collection of thousands-to-millions of "narrow AI" subcomponents.

There may be other approaches that are significantly simpler (that we haven't yet found, obviously). Assuming AGI happens, it will have been a race between the specific (type of) path you imagine, and every other alternative you didn't think of. In other words, you think you have an upper bound on how much time/expense it will take.

comment by whpearson · 2010-08-16T11:00:24.741Z · LW(p) · GW(p)

I'm not a member of SIAI but my reason for thinking that AGI is not just going to be like lots of narrow bits of AI stuck together is that I can see interesting systems that haven't been fully explored (due to difficulty of exploration). These types of systems might solve some of the open problems not addressed by narrow AI.

These are problems such as

  • How can a system become good at so many different things when it starts off the same? Especially puzzling is how people build complex (unconscious) machinery for dealing with problems that we are not adapted for, like Chess.
  • How can a system look after/upgrade itself without getting completely pwned by malware? (We do get partially pwned by hostile memes, but that is not a complete takeover of the same kind as getting rooted.)

Now I also doubt that these systems will develop quickly when people get around to investigating them. And they will have elements of traditional narrow AI in them as well, but those will be changeable/adaptable parts of the system, not fixed sub-components. What I think needs exploring is primarily changes in software life-cycles rather than a change in the nature of the software itself.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-08-25T03:52:35.301Z · LW(p) · GW(p)

Learning is the capacity to build complex unconscious machinery for dealing with novel problems. That's the whole point of AGI.

And Learning is equivalent to absorbing memes. The two are one and the same.

Replies from: wedrifid
comment by wedrifid · 2010-08-25T06:50:51.974Z · LW(p) · GW(p)

And Learning is equivalent to absorbing memes. The two are one and the same.

I don't agree. Meme absorption is just one element of learning.

To learn how to play darts well you absorb a couple of dozen memes and then spend hours upon hours rewiring your brain to implement a complex coordination process.

To learn how to behave appropriately in a given culture you learn a huge swath of existing memes and continue to learn a stream of new ones, but you also dedicate huge amounts of background processing to reconfiguring the weightings of existing memes relative to each other and to external inputs. You also learn all sorts of implicit information about how memes work for you specifically (due to, for example, physical characteristics); much of this information will never be represented in meme form.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-08-25T07:05:57.328Z · LW(p) · GW(p)

Fine, if you take memes to be just symbolic level transferable knowledge (which, thinking it over, I agree with), then at a more detailed level learning involves several sub-processes, one of which is the rapid transfer of memes into short term memory.

comment by nhamann · 2010-08-16T03:48:56.825Z · LW(p) · GW(p)

I don't think AGI in a few decades is very farfetched at all. There's a heckuvalot of neuroscience being done right now (the Society for Neuroscience has 40,000 members), and while it's probably true that much of that research is concerned most directly with mere biological "implementation details" and not with "underlying algorithms" of intelligence, it is difficult for me to imagine that there will still be no significant insights into the AGI problem after 3 or 4 more decades of this amount of neuroscience research.

Replies from: komponisto
comment by komponisto · 2010-08-16T04:53:11.205Z · LW(p) · GW(p)

Of course there will be significant insights into the AGI problem over the coming decades -- probably many of them. My point was that I don't see AGI as hard because of a lack of insights; I see it as hard because it will require vast amounts of "ordinary" intellectual labor.

Replies from: nhamann, timtyler
comment by nhamann · 2010-08-16T06:10:36.324Z · LW(p) · GW(p)

I'm having trouble understanding how exactly you think the AGI problem is different from any really hard math problem. Take P != NP, for instance - the attempted proof that's been making the rounds on various blogs. If you've skimmed any of the discussion you can see that even this attempted proof piggybacks on "vast amounts of 'ordinary' intellectual labor," largely consisting of mapping out various complexity classes and their properties and relations. There's probably been at least 30 years of complexity theory research required to make that proof attempt even possible.

I think you might be able to argue that even if we had an excellent theoretical model of an AGI, the engineering effort required to actually implement it might be substantial and require several decades of work (e.g. Von Neumann architecture isn't suitable for AGI implementation, so a great deal of computer engineering has to be done).

If this is your position, I think you might have a point, but I still don't see how the effort is going to take 1 or 2 centuries. A century is a loooong time. A century ago humans barely had powered flight.

Replies from: komponisto, Daniel_Burfoot
comment by komponisto · 2010-08-16T07:35:04.292Z · LW(p) · GW(p)

Take P != NP, for instance - the attempted proof that's been making the rounds on various blogs. If you've skimmed any of the discussion you can see that even this attempted proof piggybacks on "vast amounts of 'ordinary' intellectual labor,

By no means do I want to downplay the difficulty of P vs NP; all the same, I think we have different meanings of "vast" in mind.

The way I think about it is: think of all the intermediate levels of technological development that exist between what we have now and outright Singularity. I would only be half-joking if I said that we ought to have flying cars before we have AGI. There are of course more important examples of technologies that seem easier than AGI, but which themselves seem decades away. Repair of spinal cord injuries; artificial vision; useful quantum computers (or an understanding of their impossibility); cures for the numerous cancers; revival of cryonics patients; weather control. (Some of these, such as vision, are arguably sub-problems of AGI: problems that would have to be solved in the course of solving AGI.)

Actually, think of math problems if you like. Surely there are conjectures in existence now -- probably some of them already famous -- that will take mathematicians more than a century from now to prove (assuming no Singularity or intelligence enhancement before then). Is AGI significantly easier than the hardest math problems around now? This isn't my impression -- indeed, it looks to me more analogous to problems that are considered "hopeless", like the "problem" of classifying all groups, say.

Replies from: Eliezer_Yudkowsky, jacob_cannell
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-18T14:36:25.201Z · LW(p) · GW(p)

By no means do I want to downplay the difficulty of P vs NP; all the same, I think we have different meanings of "vast" in mind.

I hate to go all existence proofy on you, but we have an existence proof of a general intelligence - accidentally sneezed out by natural selection, no less, which has severe trouble building freely rotating wheels - and no existence proof of a proof of P != NP. I don't know much about the field, but from what I've heard, I wouldn't be too surprised if proving P != NP is harder than building FAI for the unaided human mind. I wonder if Scott Aaronson would agree with me on that, even though neither of us understand the other's field? (I just wrote him an email and asked, actually; and this time remembered not to say my opinion before asking for his.)

Replies from: Eliezer_Yudkowsky, FAWS, JoshuaZ, komponisto, XiXiDu
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-18T16:33:21.690Z · LW(p) · GW(p)

Scott says that he thinks P != NP is easier / likely to come first.

Replies from: XiXiDu, bcoburn
comment by XiXiDu · 2010-08-18T17:56:50.585Z · LW(p) · GW(p)

Here is an interview with Scott Aaronson:

After glancing over a 100-page proof that claimed to solve the biggest problem in computer science, Scott Aaronson bet his house that it was wrong. Why?

comment by bcoburn · 2010-08-21T16:11:01.164Z · LW(p) · GW(p)

It's interesting that you both seem to think that your problem is easier, I wonder if there's a general pattern there.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-08-21T16:36:27.470Z · LW(p) · GW(p)

What I find interesting is that the pattern nearly always goes the other way: you're more likely to think that a celebrated problem you understand well is harder than one you don't know much about. It says a lot about both Eliezer's and Scott's rationality that they think of the other guy's hard problems as even harder than their own.

comment by FAWS · 2010-08-18T15:29:16.495Z · LW(p) · GW(p)

and no existence proof of a proof of P != NP

Obviously not. That would be a proof of P != NP.

As for the existence proof of a general intelligence, that doesn't prove anything about how difficult it is, for anthropic reasons. For all we know, 10^20 evolutions in each of 10^50 universes that would in principle allow intelligent life might on average result in 1 general intelligence actually evolving.

Replies from: CarlShulman, Emile
comment by CarlShulman · 2010-08-18T15:43:36.022Z · LW(p) · GW(p)

Of course, if you buy the self-indication assumption (which I do not) or various other related principles you'll get an update that compels belief in quite frequent life (constrained by the Fermi paradox and a few other things).

More relevantly, approaches like Robin's Hard Step analysis and convergent evolution (e.g. octopus/bird intelligence) can rule out substantial portions of "crazy-hard evolution of intelligence" hypothesis-space. And we know that human intelligence isn't so unstable as to see it being regularly lost in isolated populations, as we might expect given ludicrous anthropic selection effects.

Replies from: timtyler
comment by timtyler · 2010-08-22T12:43:56.484Z · LW(p) · GW(p)

I looked at Nick's:

http://www.anthropic-principle.com/preprints/olum/sia.pdf

I don't get it. Anyone know what is supposed to be wrong with the SIA?

comment by Emile · 2010-08-18T15:36:09.613Z · LW(p) · GW(p)

We can make better guesses than that: evolution coughed up quite a few things that would be considered pretty damn intelligent for a computer program, like ravens, octopuses, rats or dolphins.

Replies from: FAWS
comment by FAWS · 2010-08-18T15:44:38.704Z · LW(p) · GW(p)

Not independently (not even cephalopods, at least not completely). And we have no way of estimating the difference in difficulty between that level of intelligence and general intelligence other than evolutionary history (which for anthropic reasons could be highly atypical) and similarity in makeup; we already know that our type of nervous system is capable of supporting general intelligence, but most rat-level intelligences might hit fundamental architectural problems first.

Replies from: Emile
comment by Emile · 2010-08-18T16:36:31.542Z · LW(p) · GW(p)

We can always estimate, even with very little knowledge - we'll just have huge error margins. I agree it is possible that "For all we know 10^20 evolutions each in 10^50 universes that would in principle allow intelligent life might on average result in 1 general intelligence actually evolving", I would just bet on a much higher probability than that, though I agree with the principle.

The fact that pretty smart animals exist in distant branches of the tree of life, and in different environments, is weak evidence that intelligence is "pretty accessible" in evolution's search space. It's stronger evidence than the mere fact that we, intelligent beings, exist.

Replies from: FAWS
comment by FAWS · 2010-08-18T17:27:17.876Z · LW(p) · GW(p)

Intelligence, sure. The original point was that our existence doesn't put a meaningful upper bound on the difficulty of general intelligence. Cephalopods are good evidence that, given whatever rudimentary precursors of a nervous system our common ancestor had (I know it had differentiated cells, but I'm not sure what else; I think it didn't really have organs like higher animals, let alone anything that really qualified as a nervous system), cephalopod-level intelligence is comparatively easy, having evolved independently twice. It doesn't say anything about how much more difficult general intelligence is compared to cephalopod intelligence, nor whether whatever precursors to a nervous system our common ancestor had were unusually conducive to intelligence compared to the average of similarly complex evolved beings.

If I had to guess I would assume cephalopod level intelligence within our galaxy and a number of general intelligences somewhere outside our past light cone. But that's because I already think of general intelligence as not fantastically difficult independently of the relevance of the existence proof.

Replies from: whpearson
comment by whpearson · 2010-08-18T18:03:34.074Z · LW(p) · GW(p)

This page on the history of invertebrates suggests that our common ancestors had bilateral symmetry, were triploblastic, and had Hox genes.

Hox genes suggest that they both had a modular body plan of some sort. Triploblasty implies some complexity (the least complex triploblastic organism today is a flatworm).

I'd be very surprised if the most recent common ancestor didn't have neurons similar to most neurons today, as I've had a hard time finding out the differences between the two. A basic introduction to nervous systems suggests they are very similar.

comment by JoshuaZ · 2010-08-18T15:19:30.296Z · LW(p) · GW(p)

Well, I for one strongly hope that we resolve whether P = NP before we have AI, since a large part of my estimate for the probability of AI being able to go FOOM is based on how much of the complexity hierarchy collapses. If there's heavy collapse, AI going FOOM is much more plausible.

comment by komponisto · 2010-08-18T23:32:03.365Z · LW(p) · GW(p)

I don't know much about the field, but from what I've heard, I wouldn't be too surprised if proving P != NP is harder than building FAI for the unaided human mind

Well actually, after thinking about it, I'm not sure I would either. There is something special about P vs NP, from what I understand, and I didn't even mean to imply otherwise above; I was only disputing the idea that "vast amounts" of work had already gone into the problem, for my definition of "vast".

Scott Aaronson's view on this doesn't move my opinion much (despite his large contribution to my beliefs about P vs NP), since I think he overestimates the difficulty of AGI (see your Bloggingheads diavlog with him).

comment by XiXiDu · 2010-08-18T15:04:10.357Z · LW(p) · GW(p)

I don't know much about the field, but from what I've heard, I wouldn't be too surprised if proving P != NP is harder than building FAI for the unaided human mind.

Awesome! Be sure to let us know what he thinks. Sounds unbelievable to me though, but what do I know.

comment by jacob_cannell · 2010-08-25T03:33:31.255Z · LW(p) · GW(p)

Why is AGI a math problem? What is abstract about it?

We don't need math proofs to know if AGI is possible. It is; the brain is living proof.

We don't need math proofs to know how to build AGI - we can reverse engineer the brain.

Replies from: timtyler, timtyler
comment by timtyler · 2010-08-25T06:11:48.744Z · LW(p) · GW(p)

We don't need math proofs to know how to build AGI - we can reverse engineer the brain.

There may be a few clues in there - but engineers are likely to get to the goal looong before the emulators arrive - and engineers are math-friendly.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-08-25T07:08:26.823Z · LW(p) · GW(p)

A 'few clues' sounds like a gross underestimation. It is the only working example, so it certainly contains all the clues, not just a few. The question of course is how much of a shortcut is possible. The answer to date seems to be: none to slim.

I agree engineers reverse engineering will succeed way ahead of full emulation, that wasn't my point.

Replies from: timtyler
comment by timtyler · 2010-08-25T07:38:41.035Z · LW(p) · GW(p)

If information is not extracted and used, it doesn't qualify as being a "clue".

The question of course is how much of a shortcut is possible. The answer to date seems to be: none to slim.

The search oracles and stockmarketbot makers have paid precious little attention to the brain. They are based on engineering principles instead.

I agree engineers reverse engineering will succeed way ahead of full emulation,

Most engineers spend very little time on reverse-engineering nature. There is a little "bioinspiration" - but inspiration is a bit different from wholesale copying.

comment by timtyler · 2010-08-25T06:01:54.601Z · LW(p) · GW(p)

Why is AGI a math problem? What is abstract about it?

This is a good part of the guts of it. That bit of it is a math problem:

http://timtyler.org/sequence_prediction/
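
To give a flavor of what the math-problem framing looks like in miniature, here is a toy next-symbol predictor using add-one-smoothed context counts. It is only an illustrative sketch - not the scheme described at the link above - and the class and parameter names are made up for the example:

    from collections import defaultdict

    class ContextPredictor:
        # Toy next-symbol predictor: add-one-smoothed counts over fixed-length contexts.
        def __init__(self, order=2, alphabet=("0", "1")):
            self.order = order
            self.alphabet = alphabet
            self.counts = defaultdict(lambda: defaultdict(int))

        def update(self, history, symbol):
            self.counts[history[-self.order:]][symbol] += 1

        def predict(self, history):
            ctx = self.counts[history[-self.order:]]
            total = sum(ctx.values()) + len(self.alphabet)   # Laplace smoothing
            return {s: (ctx[s] + 1) / total for s in self.alphabet}

    model, history = ContextPredictor(), ""
    for symbol in "0101010101":
        print(model.predict(history))    # predictive distribution before seeing the symbol
        model.update(history, symbol)
        history += symbol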

comment by Daniel_Burfoot · 2010-08-18T18:03:12.024Z · LW(p) · GW(p)

but I still don't see how the effort is going to take 1 or 2 centuries. A century is a loooong time.

I think the following quote is illustrative of the problems facing the field:

After [David Marr] joined us, our team became the most famous vision group in the world, but the one with the fewest results. His idea was a disaster. The edge finders they have now using his theories, as far as I can see, are slightly worse than the ones we had just before taking him on. We've lost twenty years.

-Marvin Minsky, quoted in "AI" by Daniel Crevier.

Some notes and interpretation of this comment:

  • Most vision researchers, if asked who is the most important contributor to their field, would probably answer "David Marr". He set the direction for subsequent research in the field; students in introductory vision classes read his papers first.
  • Edge detection is a tiny part of vision, and vision is a tiny part of intelligence, but at least in Minsky's view, no progress (or reverse progress) was achieved in twenty years of research by the leading lights of the field.
  • There is no standard method for evaluating edge detector algorithms, so it is essentially impossible to measure progress in any rigorous way.

I think this kind of observation justifies AI-timeframes on the order of centuries.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-08-25T03:27:22.468Z · LW(p) · GW(p)

Edge detection is rather trivial. Visual recognition, however, is not, and there certainly are benchmarks and comparable results in that field. Have you browsed the recent publications of Poggio et al at the MIT vision lab? There is a lot of recent progress, with results matching human levels for quick recognition tasks.

Also, vision is not a tiny part of intelligence. It's the single largest functional component of the cortex, by far. The cortex uses the same essential low-level optimization algorithm everywhere, so understanding vision at the detailed level is a good step towards understanding the whole thing.

And finally and most relevant for AGI, the higher visual regions also give us the capacity for visualization and are critical for higher creative intelligence. Literally all scientific discovery and progress depends on this system.

"visualization is the key to enlightenment" and all that

the visual system

Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2010-08-26T03:55:45.770Z · LW(p) · GW(p)

Edge detection is rather trivial.

It's only trivial if you define an "edge" in a trivial way, e.g. as a set of points where the intensity gradient is greater than a certain threshold. This kind of definition has little use: given a picture of a tree trunk, this definition will indicate many edges corresponding to the ridges and corrugations of the bark, and will not highlight the meaningful edge between the trunk and the background.

I don't believe that there has been much real progress in vision recently. I think the state of the art is well illustrated by the "racist" HP web camera that detects white faces but not black faces.

Also, vision is not a tiny part of intelligence [...] The cortex uses the same essential low-level optimization algorithm everywhere,

I actually agree with you about this, but I think most people on LW would disagree.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-08-26T04:25:23.271Z · LW(p) · GW(p)

Whether you are talking about Canny edge filters or Gabor-like edge detection more similar to what V1 self-organizes into, they are all still relatively simple - trivial compared to AGI. Trivial as in something you could code in a few hours for the screen-filter system in a modern game render engine.

The particular problem you point out with the tree trunk is a scale problem and is easily handled in any good vision system.

An edge detection filter is just a building block; it's not the complete system.

In HVS, initial edge preprocessing is done in the retina itself, which essentially applies on-center/off-surround Gaussian filters (similar to low-pass filters in Photoshop). The output of the retina is thus essentially a multi-resolution image set, similar to a wavelet decomposition. The image output at this stage becomes a series of edge differences (local gradients), but at numerous spatial scales.

The high frequency edges such as the ridges and corrugations of the bark are cleanly separated from the more important low frequency edges separating the tree trunk from the background. V1 then detects edge orientations at these various scales, and higher layers start recognizing increasingly complex statistical patterns of edges across larger fields of view.
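
A minimal sketch of that multi-scale, center-surround idea, with differences of Gaussians standing in for the retinal preprocessing (the sigmas, the number of scales, and the surround ratio are illustrative choices, not physiological measurements):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def dog_pyramid(image, sigmas=(1, 2, 4, 8)):
        # Difference-of-Gaussians at several scales: a crude stand-in for
        # on-center/off-surround retinal output (local gradients, per scale).
        levels = []
        for sigma in sigmas:
            center = gaussian_filter(image, sigma)
            surround = gaussian_filter(image, sigma * 1.6)   # surround slightly broader than center
            levels.append(center - surround)
        return levels

    image = np.random.rand(128, 128)   # stand-in for a retinal image
    for sigma, level in zip((1, 2, 4, 8), dog_pyramid(image)):
        # Fine scales respond to bark-like texture; coarse scales keep only the big, low-frequency edges.
        print(sigma, float(np.abs(level).mean()))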

Whether there is much real progress recently in computer vision is relative to one's expectations, but the current state of the art in research systems at least is far beyond your simplistic assessment. I have a layman's overview of HVS here. If you really want to know about the current state of the art in research, read some recent papers from a place like Poggio's lab at MIT.

In the product space, the HP web camera example is also very far from the state of the art; I'm surprised that you posted that.

There is free eye tracking software you can get (running on your PC) that can use your web cam to track where your eyes are currently focused in real time. That's still not even the state of the art in the product space - that would probably be the systems used in the more expensive robots, and of course that lags the research state of the art.

comment by timtyler · 2010-08-16T06:28:37.482Z · LW(p) · GW(p)

...but you don't really know - right?

You can't say with much confidence that there's no AIXI-shaped magic bullet.

Replies from: komponisto, jacob_cannell
comment by komponisto · 2010-08-16T07:38:22.533Z · LW(p) · GW(p)

That's right; I'm not an expert in AI. Hence I am describing my impressions, not my fully Aumannized Bayesian beliefs.

comment by jacob_cannell · 2010-08-25T03:14:50.243Z · LW(p) · GW(p)

AIXI-shaped magic bullet?

AIXI's contribution is more philosophical than practical. I find a depressing over-emphasis on Bayesian probability theory here as the 'math' of choice, versus computational complexity theory, which is the proper domain.

The most likely outcome of a math breakthrough will be some rough lower and/or upper bounds on the shape of the intelligence-over-space/time-complexity function. And right now the most likely bet seems to be that the brain is pretty well optimized at the circuit level, and that the best we can do is reverse engineer it.

EY and the math folk here reach a very different conclusion, but I have yet to find his well-considered justification. I suspect that the major reason the mainstream AI community doesn't subscribe to SIAI's math-magic-bullet theory is that they hold the same position outlined above: i.e. that when we get the math theorems, all they will show is what we already suspect: human-level intelligence requires X memory bits and Y bit ops/second, where X and Y are roughly close to brain levels.

This, if true, kills the entirety of the software recursive self-improvement theory. The best that software can do is approach the theoretical optimum complexity class for the problem, and then after that point all one can do is fix it into hardware for a further large constant gain.

I explore this a little more here.

Replies from: timtyler, timtyler, timtyler
comment by timtyler · 2010-08-25T05:51:55.182Z · LW(p) · GW(p)

right now the most likely bet seems to be that the brain is pretty well optimized at the circuit level, and that the best we can do is reverse engineer it.

That seems like crazy talk to me. The brain is not optimal - not its hardware or software - and not by a looooong way! Computers have already steam-rollered its memory and arithmetic units - and that happened before we even had nanotechnology computing components. The rest of the brain seems likely to follow.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-08-25T07:31:43.427Z · LW(p) · GW(p)

Edit: removed a faulty argument at the end pointed out by wedrifid.

I am talking about optimality for AGI in particular with respect to circuit complexity, with the typical assumptions that a synapse is vaguely equivalent to a transistor, maybe ten transistors at most. If you compare on that level, the brain looks extremely efficient given how slow the neurons are. Does this make sense?

The brain's circuits have around 10^15 transistor equivalents running at about 10^3 cycles per second: roughly 10^18 transistor-cycles/second.

A typical modern CPU has about 10^9 transistors running at 10^9 cycles per second: also roughly 10^18 transistor-cycles/second.

Our CPUs' strength is not their circuit architecture or software - it's the raw speed of CMOS, a million-X substrate advantage. The learning algorithm - the way in which the cortex rewires in response to input data - appears to be a pretty effective universal learning algorithm.
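The arithmetic behind those two figures, spelled out (the counts themselves are the rough assumptions stated above, not measurements):

```python
# Back-of-the-envelope throughput comparison using the rough figures above.
brain_transistor_equivalents = 1e15  # ~synapse count, assuming ~1-10 transistors per synapse
brain_cycle_rate_hz = 1e3            # neurons update on roughly millisecond timescales

cpu_transistors = 1e9                # typical 2010-era CPU
cpu_cycle_rate_hz = 1e9              # ~1 GHz clock, order of magnitude

print(brain_transistor_equivalents * brain_cycle_rate_hz)  # 1e18 transistor-cycles/second
print(cpu_transistors * cpu_cycle_rate_hz)                 # 1e18 transistor-cycles/second
```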

Replies from: timtyler, wedrifid
comment by timtyler · 2010-08-25T07:44:46.544Z · LW(p) · GW(p)

The brain's architecture is a joke. It is as though a telecoms engineer decided to connect a whole city's worth of people together by running cables directly between any two people who wanted to have a chat. It hasn't even gone fully digital yet - so things can't easily be copied or backed up. The brain is just awful - no wonder human cognition is such a mess.

comment by wedrifid · 2010-08-25T10:24:48.826Z · LW(p) · GW(p)

Therefore, the brain's circuit level architecture is very well optimized for AGI.

Nothing you wrote led me to this conclusion.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-08-25T18:05:57.954Z · LW(p) · GW(p)

Then some questions: How long would Moore's law have to continue into the future with no success in AGI for that to show that the brain is well optimized for AGI at the circuit level?

I've made some attempts to show rough bounds on the brain's efficiency; are you aware of some other approach or estimate?

Replies from: timtyler, wedrifid
comment by timtyler · 2010-08-25T20:05:10.657Z · LW(p) · GW(p)

Then some questions: How long would Moore's law have to continue into the future with no success in AGI for that to show that the brain is well optimized for AGI at the circuit level?

Most seem to think the problem is mostly down to software - and that supercomputer hardware is enough today - in which case more hardware would not necessarily help very much. The success or failure of adding more hardware might give an indication of how hard it is to find the target of intelligence in the search space. It would not throw much light on the issue of how optimally "designed" the brain is. So: your question is a curious one.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-08-25T22:02:37.953Z · LW(p) · GW(p)

The success or failure of adding more hardware might give an indication of how hard it is to find the target of intelligence in the search space

For every computational system and algorithm, there is a minimum level of space-time complexity in which this system can be encoded. As of yet we don't know how close the brain is to the minimum space-time complexity design for an intelligence of similar capability.

Let's make the question more specific: what's the minimum bit representation of a human-equivalent mind? If you think the brain is far off that, how do you justify that?

Of course more hardware helps: it allows you to search through the phase space faster. Keep in mind the enormity of the training time.

I happen to believe the problem is 'mostly down to software', but I don't see that as a majority view - the Moravec/Kurzweil view that we need brain-level hardware (within an order of magnitude or so) seems to be the majority view at this point.

Replies from: timtyler, wedrifid
comment by timtyler · 2010-08-26T06:57:41.352Z · LW(p) · GW(p)

We need brain-level hardware (within an order of magnitude or so) if machines are going to be cost-competitive with humans. If you just want a supercomputer mind, then no problem.

I don't think Moravec or Kurzweil ever claimed it was mostly down to hardware. Moravec's charts are of hardware capability - but that was mainly because you can easily measure that.

Replies from: Pavitra
comment by Pavitra · 2010-08-26T07:12:43.928Z · LW(p) · GW(p)

We need brain-level hardware (within an order of magnitude or so) if machines are going to be cost-competitive with humans.

I don't see why that is. If you were talking about ems, then the threshold should be 1:1 realtime. Otherwise, for most problems that we know how to program a computer to do, the computer is much faster than humans even at existing speeds. Why do you expect that a computer that's, say, 3x slower than a human (well within an order of magnitude) would be cost-competitive with humans while one that's 10^4 times slower wouldn't?

Replies from: timtyler
comment by timtyler · 2010-08-26T17:37:45.731Z · LW(p) · GW(p)

Evidently there are domains where computers beat humans today - but if you look at what has to happen for machines to take the jobs of most human workers, they will need bigger and cheaper brains to do that. "Within an order of magnitude or so" seems like a reasonable ballpark figure to me. If you are looking for more details about why I think that, they are not available at this time.

Replies from: Pavitra
comment by Pavitra · 2010-08-26T21:12:58.916Z · LW(p) · GW(p)

I suspect that the controlling reason why you think that is that you assume it takes human-like hardware to accomplish human-like tasks, and greatly underestimate the advantages of a mind being designed rather than evolved.

comment by wedrifid · 2010-08-26T10:37:34.115Z · LW(p) · GW(p)

Let's make the question more specific: what's the minimum bit representation of a human-equivalent mind?

Way off. Let's see... I would bet at even odds that it is 4 or more orders of magnitude off optimal.

If you think the brain is far off that, how do you justify that?

We have approximately one hundred billion neurons each and roughly the same number of glial cells (more of the latter if we are smart!). Each of those includes a full copy of our DNA, which is itself not exactly optimally compressed.

Replies from: jacob_cannell, PaulAlmond
comment by jacob_cannell · 2010-08-26T18:34:35.219Z · LW(p) · GW(p)

Way off. Let's see... I would bet at even odds that it is 4 or more orders of magnitude off optimal.

  1. you didn't answer my question: what is your guess at the minimum bit representation of a human-equivalent mind?

  2. you didn't use the typical methodology of measuring the brain's storage, nor did you provide another.

I wasn't talking about molecular-level optimization. I started with the typical assumptions that synapses represent a few bits each and that the human brain therefore holds around 100TB to 1PB of data/circuitry, etc. - see The Singularity Is Near.
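Spelling out where that range comes from (the synapse count and bits per synapse are the usual rough assumptions, not measurements):

```python
# Rough storage estimate behind the ~100TB-1PB figure, under the stated assumptions.
synapses_low, synapses_high = 1e14, 1e15             # commonly cited range of synapse counts
bits_per_synapse_low, bits_per_synapse_high = 4, 8   # "a few bits" each - an assumption

low_bytes = synapses_low * bits_per_synapse_low / 8
high_bytes = synapses_high * bits_per_synapse_high / 8
print(low_bytes / 1e12, "TB")        # ~50 TB, the low end of the ballpark
print(high_bytes / 1e15, "PB")       # ~1 PB at the high end
print(high_bytes / 1e4 / 1e9, "GB")  # 4 orders of magnitude below the high end ~ 100 GB
```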

So you say the human brain's algorithmic representation is off by 4 orders of magnitude or more - you are saying that you think a human-equivalent mind can be represented in 10 to 100GB of data/circuitry?

If so, why did evolution not find that by now? It has had plenty of time to compress at the circuit level. In fact, we actually know that the brain performs provably optimal compression on its input data in a couple of domains - see V1 and its evolution of Gabor-like edge feature detection.

Evolution has had plenty of time to find a well-optimized cellular machinery based on DNA, plenty of time to find a well-optimized electro-chemical computing machinery based on top of that, and plenty of time to find well-optimized circuits within that space.

Even insects are extremely well-optimized at the circuit level - given their neuron/synapse counts, we have no evidence whatsoever to believe that vastly simpler circuits exist that can perform the same functionality.

When we have used evolutionary exploration algorithms to design circuits directly, given enough time we see similarly complex, messy, but near-optimal designs - and this seems to be a general trend.

comment by PaulAlmond · 2010-08-26T12:58:33.301Z · LW(p) · GW(p)

Are you saying that you are counting every copy of the DNA as information that contributes to the total amount? If so, I say that's invalid. What if each cell were remotely controlled from a central server containing the DNA information? I can't see that we'd count the DNA for each cell then - yet it is no different really.

I agree that the number of cells is relevant, because there will be a lot of information in the structure of an adult brain that has come from the environment, rather than just from the DNA, and more cells would seem to imply more machinery in which to put it.

Replies from: wedrifid
comment by wedrifid · 2010-08-26T13:21:17.256Z · LW(p) · GW(p)

Are you saying that you are counting every copy of the DNA as information that contributes to the total amount? If so, I say that's invalid. What if each cell were remotely controlled from a central server containing the DNA information? I can't see that we'd count the DNA for each cell then - yet it is no different really.

I thought we were talking about the efficiency of the human brain. Wasn't that the whole point? If every cell is remotely controlled from a central server then well, that'd be whole different algorithm. In fact, we could probably scrap the brain and just run the central server.

Genes actually do matter in the functioning of neurons. Chemical additions (e.g. ethanol) and changes in the environment (e.g. hypoxia) can influence gene expression in cells in the brain, affecting their function.

I suggest the brain is a ridiculously inefficient contraption thrown together from the building blocks that were practical to produce from DNA representations and suitable for the kinds of environments animals tended to be exposed to. We should be shocked to find that it also manages to be anywhere near optimal for general intelligence. Among other things it would suggest that evolution packed the wrong lunch.

Replies from: PaulAlmond
comment by PaulAlmond · 2010-08-26T14:47:26.035Z · LW(p) · GW(p)

Okay, I may have misunderstood you. It looks like there is some common ground between us on the issue of inefficiency. I think the brain would probably be inefficient as well, since it had to be thrown together by the very specific process of evolution - which is optimized for building things without needing look-ahead intelligence, rather than for achieving the most efficient results.

comment by wedrifid · 2010-08-26T10:20:54.211Z · LW(p) · GW(p)

Then some questions: How long would Moore's law have to continue into the future with no success in AGI for that to show that the brain is well optimized for AGI at the circuit level?

A Sperm Whale and a bowl of Petunias.

My first impulse was to answer that Moore's law could go on forever and never produce success in AGI, since 'AGI' isn't just what you get when you put enough computronium together for it to reach critical mass. But even given no improvements in understanding, we could very well arrive at AGI just through ridiculous amounts of brute force. In fact, given enough space and time, randomised initial positions and possibly a steady introduction of negentropy, we could produce an AGI in Conway's Life.

I've made some attempts to show rough bounds on the brain's efficiency; are you aware of some other approach or estimate?

You could find some rough bounds by seeing how many parts of a human brain you can cut out without changing IQ. Trivial little things like, you know, the pre-frontal cortex.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-08-26T18:25:02.201Z · LW(p) · GW(p)

You are just talking around my questions, so let me make it more concrete. An important task for any AGI is higher-level sensor data interpretation - i.e. seeing. We have an example system in the human brain - the human visual system - which is currently leaps and bounds beyond the state of the art in machine vision (although the latter is making progress towards the former through reverse engineering).

So machine vision is a subtask of AGI. What is the minimal computational complexity of human-level vision? This is a concrete computer science problem. It has a concrete answer - not "sperm whale and petunia" nonsense.

Until someone makes a system better than HVS, or proves some complexity bounds, we don't know how optimal HVS is for this problem, but we also have no reason to believe that it is orders of magnitude off from the theoretical optimum.

comment by timtyler · 2010-08-25T05:47:37.291Z · LW(p) · GW(p)

AIXI-shaped magic bullet?

Good quality general-purpose data-compression would "break the back" of the task of building synthetic intelligent agents - and that's a "simple" math problem - as I explain at: http://timtyler.org/sequence_prediction/

At least it can be stated very concisely. Solutions so far haven't been very simple - but the brain's architecture offers considerable hope for a relatively simple solution.
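As a toy illustration of the prediction/compression link (a fixed-order character model over a placeholder string - nothing like a serious compressor):

```python
# "Prediction = compression": the average surprise (in bits) of an online order-k
# character model is the code length an arithmetic coder built on it would achieve.
# Better predictors give shorter codes.
import math
from collections import defaultdict, Counter

def ideal_code_length_bits(text, k):
    """Average bits per character under an online order-k model with add-one smoothing."""
    counts = defaultdict(Counter)
    alphabet = set(text)
    total_bits, n = 0.0, 0
    for i in range(k, len(text)):
        context, ch = text[i - k:i], text[i]
        c = counts[context]
        p = (c[ch] + 1) / (sum(c.values()) + len(alphabet))  # smoothed prediction
        total_bits += -math.log2(p)                          # surprise = ideal code length
        c[ch] += 1                                           # update the model online
        n += 1
    return total_bits / n

text = "the cat sat on the mat and the cat sat on the hat " * 20  # placeholder corpus
for k in (0, 1, 2, 3):
    print(k, round(ideal_code_length_bits(text, k), 3))  # bits/char drops as k grows
```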

comment by Vladimir_Nesov · 2010-08-15T22:38:55.895Z · LW(p) · GW(p)

Note that allowing for the possibility of a sudden breakthrough is also an antiprediction, not a claim about a particular way things are. You can't know that no such thing is possible without already understanding the solution, hence you must accept the risk. It's also possible that it'll take a long time.

comment by jacob_cannell · 2010-08-25T02:56:41.152Z · LW(p) · GW(p)

I'm reading through and catching up on this thread, and rather strongly agree with your statement:

Eliezer and others at SIAI to assign (relatively) large amounts of probability mass to the scenario of a small set of people having some "insight" which allows them to suddenly invent AGI in a basement. In other words, they tend to view AGI as something like an unsolved math problem, like those on the Clay Millennium list, whereas it seems to me like a daunting engineering task analogous to colonizing Mars (or maybe Pluto).

However, pondering it again, I realize there is an epistemological spectrum ranging from math on the one side to engineering on the other. Key insights into new algorithms can undoubtedly speed up progress, and such new insights often can be expressed as pure math, but at the end of the day it is a grand engineering (or reverse engineering) challenge.

However, I'm somewhat taken aback when you say, "the notion that AGI is only decades away, as opposed to a century or two."

A century or two?

comment by JoshuaZ · 2010-08-15T22:31:10.111Z · LW(p) · GW(p)

In other words, I largely agree with Ben Goertzel's assertion that there is a fundamental difference between "narrow AI" and AI research that might eventually lead to machines capable of cognition, but I'm not sure I have good evidence for this argument.

One obvious piece of evidence is that many forms of narrow learning are mathematically incapable of doing much. There are, for example, a whole host of theorems about what different classes of neural networks can actually recognize, and the results aren't very impressive. Similarly, support vector machines have a lot of trouble learning anything that isn't a very simple statistical model, and even then humans need to decide which statistics are relevant. Other linear classifiers run into similar problems.
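As a toy illustration of that kind of limit (not one of the Perceptrons theorems themselves, just the standard linear-separability example): a single linear-threshold unit can never learn XOR, however long it trains.

```python
# A single linear-threshold unit cannot represent XOR (it is not linearly separable),
# so perceptron training never reaches 100% accuracy on it, no matter how long it runs.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 0]  # XOR

w, b, lr = [0.0, 0.0], 0.0, 0.1
for epoch in range(1000):
    for (x1, x2), target in zip(X, y):
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - pred
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

accuracy = sum(
    (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == target
    for (x1, x2), target in zip(X, y)
) / len(X)
print("final accuracy:", accuracy)  # stays well below 1.0 (0.5 on this run)
```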

Replies from: Simulation_Brain
comment by Simulation_Brain · 2010-08-18T06:20:49.545Z · LW(p) · GW(p)

I work in this field, and was under approximately the opposite impression; that voice and visual recognition are rapidly approaching human levels. If I'm wrong and there are sharp limits, I'd like to know. Thanks!

Replies from: timtyler, JoshuaZ
comment by timtyler · 2010-08-18T06:31:35.391Z · LW(p) · GW(p)

Machine intelligence has surpassed "human level" in a number of narrow domains. Already, humans can't manipulate enough data to do anything remotely like what a search engine or a stockbot can do.

The claim seems to be that in narrow domains there are often domain-specific "tricks" - that wind up not having much to do with general intelligence - e.g. see chess and go. This seems true - but narrow projects often broaden out. Search engines and stockbots really need to read and understand the web. The pressure to develop general intelligence in those domains seems pretty strong.

Those who make a big deal about the distinction between their projects and "mere" expert systems are probably mostly trying to market their projects before they are really experts at anything.

One of my videos discusses the issue of whether the path to superintelligent machines will be "broad" or "narrow":

http://alife.co.uk/essays/on_general_machine_intelligence_strategies/

comment by JoshuaZ · 2010-08-18T15:28:59.863Z · LW(p) · GW(p)

Thanks, it always is good to actually have input from people who work in a given field. So please correct me if I'm wrong but I'm under the impression that

1) neural networks cannot in general detect connected components unless the network has some form of recursion, and 2) no one knows how to make a neural network with recursion learn in any effective, marginally predictable fashion.

This is the sort of thing I was thinking of. Am I wrong about 1 or 2?

Replies from: Simulation_Brain
comment by Simulation_Brain · 2010-08-20T20:58:47.922Z · LW(p) · GW(p)

Not sure what you mean by 1), but certainly, recurrent neural nets are more powerful. 2) is no longer true; see for example the GeneRec algorithm. It does something much like backpropagation, but since no derivatives are explicitly calculated, there's no problem with recurrent loops.

On the whole, neural net research has slowed dramatically based on the common view you've expressed; but progress continues apace, and neural nets are not far behind cutting-edge vision and speech processing algorithms, while working much more like the brain does.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-08-21T14:47:12.364Z · LW(p) · GW(p)

Thanks. GeneRec sounds very interesting; I will take a look. Regarding 1, I was thinking of something like the theorems in chapter 9 of Perceptrons, which show that there are strong limits on what topological features of the input a non-recursive neural net can recognize.

comment by NancyLebovitz · 2010-08-15T23:17:33.423Z · LW(p) · GW(p)

Prediction is hard, especially about the future.

One thing that intrigues me is snags. Did anyone predict how hard it would be to improve batteries, especially batteries big enough for cars?

comment by multifoliaterose · 2010-08-15T16:16:54.737Z · LW(p) · GW(p)

I think multifoliaterose is right that there's a PR problem, but it's not just a PR problem. It seems, unfortunately, to be a problem with having enough justification for claims, and a problem with connecting to the world of professional science. I think the PR problems arise from being too disconnected from the demands placed on other scientific or science policy organizations. People who study other risks, say epidemic disease, have to get peer-reviewed, they have to get government funding -- their ideas need to pass a round of rigorous criticism. Their PR is better by necessity.

I agree completely. The reason why I framed my top-level post in the way that I did was so that it would be relevant to readers with a variety of levels of confidence in SIAI's claims.

As I indicate here, I personally wouldn't be interested in funding SIAI as presently constituted even if there was no PR problem.

comment by xamdam · 2010-08-15T16:36:33.190Z · LW(p) · GW(p)

First, EY is concerned about risks due to technologies that have not yet been developed; as far as I know, there is no reliable way to make predictions about the likelihood of the development of new technologies. (This is also the basis of my skepticism about cryonics.) If you're going to say "Technology X is likely to be developed" then I'd like to see your prediction mechanism and whether it's worked in the past.

I think there are ways to make these predictions. On the most layman level I would point out that IBM built a system that beats people at Jeopardy. Yes, I am aware that this is a complete machine-learning hack (this is what I could gather from the NYT coverage) and is not true cognition, but it surprised even me (I do know something about ML). I think this is useful to defeat the intuition of "machines cannot do that". If you are truly interested I think you can (I know you're capable) read Norvig's AI book, and then follow up on the parts of it that most resemble human cognition; I think serious progress is being made in those areas. BTW, Norvig does take FAI issues seriously, including a reference to EY's paper in the book.

Second, shouldn't an organization worried about the dangers of AI be very closely in touch with AI researchers in computer science departments? Sure, there's room for pure philosophy and mathematics, but you'd need some grounding in actual AI to understand what future AIs are likely to do.

I think they should, but I have no idea whether this is being done; and if I were doing it, I would not do it publicly, as it could have very counterproductive consequences. So until you or I become SIAI fellows we will not know, and I cannot hold such lack of knowledge against them.

Replies from: None
comment by [deleted] · 2010-08-15T19:07:04.335Z · LW(p) · GW(p)

First, I'm not really claiming "machines cannot do that." I can see advances in machine learning and I can imagine the next round of advances being pretty exciting. But I'm thinking in terms of maybe someday a machine being able to distinguish foreground from background, or understand a sentence in English, not being a superintelligence that controls Earth's destiny. The scales are completely different. One scale is reasonable; one strains credibility, I'm afraid.

Thanks for the book recommendation; I'll be sure to check it out.

Replies from: Apprentice
comment by Apprentice · 2010-08-16T23:01:58.519Z · LW(p) · GW(p)

I think controlling Earth's destiny is only modestly harder than understanding a sentence in English - in the same sense that I think Einstein was only modestly smarter than George W. Bush. EY makes a similar point.

You sound to me like someone saying, sixty years ago: "Maybe some day a computer will be able to play a legal game of chess - but simultaneously defeating multiple grandmasters, that strains credibility, I'm afraid." But it only took a few decades to get from point A to point B. I doubt that going from "understanding English" to "controlling the Earth" will take that long.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-18T14:43:56.798Z · LW(p) · GW(p)

I think controlling Earth's destiny is only modestly harder than understanding a sentence in English.

Well said. I shall have to try to remember that tagline.

Replies from: cousin_it, Will_Newsome, Will_Newsome
comment by cousin_it · 2010-09-21T23:28:34.868Z · LW(p) · GW(p)

There's a problem with it, though. Some decades ago you'd have just as eagerly subscribed to this statement: "Controlling Earth's destiny is only modestly harder than playing a good game of chess", which we now know to be almost certainly false.

Replies from: SilasBarta, Rain
comment by SilasBarta · 2010-09-22T19:17:28.809Z · LW(p) · GW(p)

I agree with Rain. Understanding implies a much deeper model than playing. To make the comparison to chess, you would have to change it to something like, "Controlling Earth's destiny is only modestly harder than making something that can learn chess, or any other board game, without that game's mechanics (or any mapping from the computer's output to game moves) being hard-coded, and then play it at an expert level."

Not obviously false, I think.

comment by Rain · 2010-09-22T18:49:14.889Z · LW(p) · GW(p)

It's the word "understanding" in the quote which makes it presume general intelligence and/or consciousness without directly stating it. The word "playing" does not have such a connotation, at least to me. I don't know if I would have thought differently back when chess required intelligence.

comment by Will_Newsome · 2011-07-17T23:10:42.790Z · LW(p) · GW(p)

(Again:) Hey, remember this tagline: "I think controlling Earth's destiny is only modestly harder than understanding a sentence in English."

comment by Will_Newsome · 2010-09-21T23:08:20.433Z · LW(p) · GW(p)

Hey, remember this tagline: "I think controlling Earth's destiny is only modestly harder than understanding a sentence in English."

comment by Jonathan_Graehl · 2010-08-16T21:51:13.083Z · LW(p) · GW(p)

shouldn't an organization worried about the dangers of AI be very closely in touch with AI researchers in computer science departments? Sure, there's room for pure philosophy and mathematics, but you'd need some grounding in actual AI to understand what future AIs are likely to do.

Yes. It's hardly urgent, since AI researchers are nowhere near a runaway intelligence. But on the other hand, control of AI is going to be crucial and difficult eventually, and it would be good for researchers to be aware of it, if they aren't.

Replies from: LucasSloan
comment by LucasSloan · 2010-08-19T05:09:59.101Z · LW(p) · GW(p)

It's hardly urgent, since AI researchers are nowhere near a runaway intelligence.

Sadly, there's no guarantee of that.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2010-08-19T08:21:44.773Z · LW(p) · GW(p)

Right, it's just (in my and most other AI researchers'[*] opinion) overwhelmingly likely that we are in fact nowhere near (the capability of) it. Although it's interesting to me that I don't feel there's much difference in the probability of "(good enough to) run away improving itself quickly past human level AI" in the next year versus in the next 10 years - "both extremely close to 0" is the most specific I can be at this point. That suggests I haven't really quantified my beliefs exactly yet.

[*] I actually only work on natural language processing using really dumb machine learning, i.e. not general AI.

comment by Rain · 2010-08-15T14:28:19.228Z · LW(p) · GW(p)

Have we seen any results (or even progress) come from the SIAI Challenge Grants, which included a Comprehensive Singularity FAQ and many academic papers dealing directly with the topics of concern? These should hopefully be less easy to ridicule and provide an authoritative foundation after the peer review process.

Edit: And if they fail to come to fruition, then we have some strong evidence to doubt SIAI's effectiveness.

comment by orthonormal · 2010-08-15T15:21:51.350Z · LW(p) · GW(p)

whpearson mentioned this already, but if you think that the most important thing we can be doing right now is publicizing an academically respectable account of existential risk, then you should be funding the Future of Humanity Institute.

Funding SIAI is optimal only if you think that the pursuit of Friendly AI is by far the most important component of existential risk reduction, and indeed they're focusing on persuading more people of this particular claim. As you say, by focusing on something specific, radical and absurd, they run more of a risk of being dismissed entirely than does FHI, but their strategy is still correct given the premise.

Replies from: wedrifid, Eliezer_Yudkowsky, Vladimir_Nesov, homunq, timtyler
comment by wedrifid · 2010-08-15T17:22:43.226Z · LW(p) · GW(p)

Funding SIAI is optimal only if you think that the pursuit of Friendly AI is by far the most important component of existential risk reduction

This seems to assume that existential risk reduction is the only thing people care about. I doubt I am the only person who wants more from the universe than eliminating risk of humans going extinct. I would trade increased chance of extinction for a commensurate change in the probable outcomes if we survive. Frankly I would consider it insane not to be willing to make such a trade.

Replies from: orthonormal, komponisto, ciphergoth, timtyler
comment by orthonormal · 2010-08-15T17:31:51.374Z · LW(p) · GW(p)

I meant "optimal within the category of X-risk reduction", and I see your point.

comment by komponisto · 2010-08-15T22:34:50.781Z · LW(p) · GW(p)

Upvoted.

We've had agreements and disagreements here. This is one of the agreements.

comment by Paul Crowley (ciphergoth) · 2010-08-16T18:20:36.316Z · LW(p) · GW(p)

I disagree. If we can avoid being wiped out, or otherwise having our potential permanently limited, our eventual outcome is very likely to be good beyond our potential to imagine. I really think the "maxipok" term of our efforts toward the greater good can't fail to absolutely dominate all other terms.

Replies from: wedrifid, timtyler
comment by wedrifid · 2010-08-16T18:47:27.386Z · LW(p) · GW(p)

I disagree. If we can avoid being wiped out, or otherwise having our potential permanently limited, our eventual outcome is very likely to be good beyond our potential to imagine.

That sounds very optimistic. I just don't see any reason to expect the future to be so bright if human genetic, cultural and technological evolution goes on under the usual influence of competition. Unless we do something rather drastic (e.g. FAI or some other kind of positive singleton) in the short term, it seems inevitable that we end up in Malthusian hell.

Most of what I consider 'good' is, for the purposes of competition, a complete waste of time.

comment by timtyler · 2010-08-16T18:28:59.367Z · LW(p) · GW(p)

Lack of interest in existential risk reduction makes perfect sense from an evolutionary perspective. As I have previously explained:

"Organisms can be expected to concentrate on producing offspring - not indulging paranoid fantasies about their whole species being wiped out!"

Most people are far more concerned about other things - for perfectly sensible and comprehensible reasons.

Replies from: orthonormal, Jonathan_Graehl
comment by orthonormal · 2010-08-17T23:08:03.746Z · LW(p) · GW(p)

This is a bizarre digression from the parent comment. You're already having this exact conversation elsewhere in the thread!

Replies from: timtyler
comment by timtyler · 2010-08-17T23:35:16.378Z · LW(p) · GW(p)

It follows from - "This seems to assume that existential risk reduction is the only thing people care about." - and - "I disagree." - People do care about other things. They mostly care about other things.

comment by Jonathan_Graehl · 2010-08-16T21:44:18.976Z · LW(p) · GW(p)

Your last sentence seems true.

I think I also buy the evolved-intelligence-should-be-myopic argument, even though we have only one data point, and don't need the evolutionary argument to lend support to what direct observation already shows in our case.

So, I can't see why this is downvoted except that it's somewhat of a tangent.

Replies from: timtyler
comment by timtyler · 2010-08-17T06:10:18.080Z · LW(p) · GW(p)

Well, I wasn't really claiming that "evolved-intelligence-should-be-myopic".

Evolved-intelligence is what we have, and it can predict the future - at least a little:

Even if the "paranoid fantasies" have considerable substance, it would still usually be better (for your genes) to concentrate on producing offspring. Averting disaster is a "tragedy of the commons" situation. Free riding - letting someone else do that - may well reap the benefits without paying the costs.

comment by timtyler · 2010-08-15T17:29:09.657Z · LW(p) · GW(p)

It seems pretty clear that very few care much about existential risk reduction.

That makes perfect sense from an evolutionary perspective. Organisms can be expected to concentrate on producing offspring - not indulging paranoid fantasies about their whole species being wiped out!

The bigger puzzle is why anyone seems to care about it at all. The most obvious answer is signalling. For example, if you care for the fate of everyone in the whole world, that SHOWS YOU CARE - a lot! Also, the END OF THE WORLD acts as a superstimulus to people's warning systems. So - they rush and warn their friends - and that gives them warm fuzzy feelings. They get credit for raising the alarm about the TERRIBLE DANGER - and so on.

Disaster movies - like 2012 - trade on people's fears in this area - stimulating and fuelling their paranoia further - by providing them with fake memories of it happening. One can't help wondering whether FEAR OF THE END is a healthy phenomenon - overall - and, if not, whether it is really sensible to stimulate those fears.

Does the average human - on being convinced the world is about to end - behave better - or worse? Do they try and hold back the end - or do they rape and pillage? If their behaviour is likely to be worse, then responsible adults should think very carefully before promoting the idea that THE END IS NIGH on the basis of sketchy evidence.

Replies from: Eneasz, Jonathan_Graehl
comment by Eneasz · 2010-08-24T19:27:33.742Z · LW(p) · GW(p)

Does the average human - on being convinced the world is about to end - behave better - or worse? Do they try and hold back the end - or do they rape and pillage?

Given the current level of technology the end IS nigh, the world WILL end, for every person individually, in less than a century. On average it'll happen around the 77-year mark for males in the US. This has been the case through all of history (for most of it at a much younger age) and yet people generally do not rape and pillage. Nor are they more likely to do so as the end of their world approaches.

Thus, I do not think there is much reason for concern.

Replies from: ata, timtyler
comment by ata · 2010-08-24T20:20:32.360Z · LW(p) · GW(p)

People care (to varying degrees) about how the world will be after they die. People even care about their own post-mortem reputations. I think it's reasonable to ask whether people will behave differently if they anticipate that the world will die along with them.

comment by timtyler · 2010-08-24T19:58:08.396Z · LW(p) · GW(p)

The elderly are not known for their looting and rabble-rousing tendencies - partly due to frailty and sickness.

Those who believe the world is going to end do sometimes cause problems - e.g. see The People's Temple and The Movement for the Restoration of the Ten Commandments of God.

comment by Jonathan_Graehl · 2010-08-16T21:46:42.044Z · LW(p) · GW(p)

This seems correct. Do people object on style? Is it a repost? Off topic?

Replies from: cata
comment by cata · 2010-08-16T22:12:59.318Z · LW(p) · GW(p)

I think it's bad form to accuse other people of being insincere without clearly defending your remarks. By claiming that the only reason anyone cares about existential risk is signalling, Tim is saying that a lot of people who appear very serious about X-risk reduction are either lying or fooling themselves. I know many altruists who have acted in a way consistent with being genuinely concerned about the future, and I don't see why I should take Tim's word over theirs. It certainly isn't the "most obvious answer."

I also don't like this claim that people are likely to behave worse when they think they're in impending danger, because again, I don't agree that it's intuitive, and no evidence is provided. It also isn't sufficient; maybe some risks are important enough that they ought to be addressed even if addressing them has bad cultural side effects. I know that the SIAI people, at least, would definitely put uFAI in this category without a second thought.

Replies from: SilasBarta, timtyler, Perplexed
comment by SilasBarta · 2010-08-16T22:20:41.551Z · LW(p) · GW(p)

Hm, I didn't get that out of timtyler's post (just voted up). He didn't seem to be saying, "Each and every person interested in this topic is doing it to signal status", but rather, "Hey, our minds aren't wired up to care about this stuff unless maybe it signals" -- which doesn't seem all that objectionable.

Replies from: whpearson
comment by whpearson · 2010-08-16T22:39:32.575Z · LW(p) · GW(p)

DNDV (did not downvote). Sure, signalling has a lot to do with it, but the type of signalling he suggests doesn't ring true with what I have seen of most people's behaviour. We do not seem to be great proselytisers most of the time.

The ancient circuits that x-risk triggers in me are those of feeling important, of being a player in the tribe's future, with the benefits that that entails. Of course I won't get the women if I eventually help save humanity, but my circuits that trigger on "important issues" don't seem to know that. In short, by trying to deal with important issues I am trying to signal raised status.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2010-08-16T22:55:35.252Z · LW(p) · GW(p)

Ok, so people don't like the implication of either the evo-psych argument, or the signaling argument. They both seem plausible, if speculative.

comment by timtyler · 2010-08-17T05:49:06.458Z · LW(p) · GW(p)

I didn't say that "the only reason anyone cares about existential risk is signalling". I was mostly trying to offer an explanation for the observed fact that relatively few give the matter much thought.

I was raising the issue of whether typical humans behave better or worse - if they become convinced that THE END IS NIGH. I don't know the answer to that. I don't know of much evidence on the topic. Is there any evidence that proclaiming that the END OF THE WORLD is at hand has a net positive effect? If not, then why are some so keen to do it - if not for signalling and marketing purposes?

comment by Perplexed · 2010-08-16T22:37:16.313Z · LW(p) · GW(p)

I thought people here were compatibilists. Saying that someone does something of their own free will is compatible with saying that their actions are determined. Similarly, saying that they are genuinely concerned is compatible with saying that their expressions of concern arise (causally) from "signaling".

Replies from: wedrifid, timtyler
comment by wedrifid · 2010-08-17T05:25:38.830Z · LW(p) · GW(p)

That's what Tim could have said. His post might have got a better reception if he had left off:

It seems pretty clear that very few care much about existential risk reduction. The bigger puzzle is why anyone seems to care about it at all.

I mean, I most certainly do care and the reasons are obvious. p(wedrifid survives | no human survives) = 0

Replies from: timtyler
comment by timtyler · 2010-08-17T05:52:32.674Z · LW(p) · GW(p)

What I mean is things like:

"Citation Index suggests that virtually nothing has been written about the cost effectiveness of reducing human extinction risks," and Nick Bostrom and Anders Sandberg noted, in a personal communication, that there are orders of magnitude more papers on coleoptera—the study of beetles—than "human extinction." Anyone can confirm this for themselves with a Google Scholar search: coleoptera gets 245,000 hits, and "human extinction" gets fewer than 1,200."

I am not saying that nobody cares. The issue was raised because you said:

This seems to assume that existential risk reduction is the only thing people care about. I doubt I am the only person who wants more from the universe than eliminating risk of humans going extinct.

...and someone disagreed!!!

People do care about other things. They mostly care about other things. And the reason for that is pretty obvious - if you think about it.

Replies from: wedrifid
comment by wedrifid · 2010-08-17T06:28:43.069Z · LW(p) · GW(p)

The issue was raised because you said:

Wow... this was my tangent? Then "WOO! Whatever point I was initially making!", or something.

comment by timtyler · 2010-08-17T06:00:06.021Z · LW(p) · GW(p)

The common complaint here is that the signalled motive is usually wonderful and altruistic - in this case SAVING THE WORLD for everyone - whereas the signalling motive is usually selfish (SHOWING YOU CARE, being a hero, selflessly warning others of the danger, etc.).

So - if the signalling theory is accepted - people are less likely to believe there is altruism underlying the signal any more (because there isn't any). It will seem fake - the mere appearance of altruism.

The signalling theory is unlikely to appeal to those sending the signals. It wakes up their audience, and reduces the impact of the signal.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-18T14:46:17.615Z · LW(p) · GW(p)

if you think that the most important thing we can be doing right now is publicizing an academically respectable account of existential risk, then you should be funding the Future of Humanity Institute. Funding SIAI is optimal only if you think that the pursuit of Friendly AI is by far the most important component of existential risk reduction

Agreed. (Modulo a caveat about marginal ROI eventually balancing if FHI got large enough or SIAI got small enough.)

comment by Vladimir_Nesov · 2010-08-15T15:40:32.393Z · LW(p) · GW(p)

Funding SIAI is optimal only if you think that the pursuit of Friendly AI is by far the most important component of existential risk reduction

But who does the evaluation? It seems that it's better to let specialists think about whether a given cause is important, and they need funding just to get that running. This argues for ensuring minimum funding of organizations that research important uncertainties, even the ones your intuitive judgment says will probably lead nowhere. Just as most people shouldn't themselves research FAI, and should instead fund its research, most people likewise shouldn't research the feasibility of FAI research, and should instead fund research into that feasibility.

Replies from: orthonormal
comment by orthonormal · 2010-08-15T16:01:03.463Z · LW(p) · GW(p)

I think you claim too much. If I decided I couldn't follow the relevant arguments, and wanted to trust a group to research the important uncertainties of existential risk, I'd trust FHI. (They could always decide to fund or partner with SIAI themselves if its optimality became clear.)

Replies from: whpearson, Vladimir_Nesov, timtyler
comment by whpearson · 2010-08-15T20:32:50.914Z · LW(p) · GW(p)

My only worry about funding FHI exclusively is that they are primarily philosophical and academic. I'd worry that the default thing they would do with more money would be to produce more philosophical papers, rather than, say, doing or funding biological research or programming, if that were what was needed.

But as the incentive structures for x-risk reduction organisations go, those of an academic philosophy department aren't too bad at this stage.

comment by Vladimir_Nesov · 2010-08-15T16:04:47.807Z · LW(p) · GW(p)

This seems to work as an argument for a much greater marginal worth of ensuring minimal funding, so that there is at least someone who researches the uncertainties professionally (to improve on what people off the street can estimate in their spare time), before we ask about the value of researching the stuff these uncertainties are about. (Of course, being the same organization that directly benefits from a given answer is generally a bad idea, so FHI might work in this case.)

comment by homunq · 2010-08-31T16:11:20.108Z · LW(p) · GW(p)

I do "think that the pursuit of Friendly AI [and the avoidance of unfriendly AI] is by far the most important component of existential risk reduction". I also think that SIAI is not addressing the most important problem in that regard. I suspect there are a lot of people who would agree, for various reasons.

In my case, the logic is that I think:

1) That corporations, though not truly intelligent, are already superhuman and unFriendly.

2) That coordinated action (that is, strategic politics, in well-chosen solidarity with others with whom I have important differences) has the potential to reduce their power and/or increase their Friendliness

3) That this would, in turn, reduce the risk of them developing a first-mover unFriendly AI ...

3a) ... while also increasing the status of your ideas in a coalition which may be able to develop a Friendly one.

I recognize that points 2 and 3a are partially tribal and/or hope-seeking beliefs of mine, but think 1 and 3 are well-founded rationally.

Anyway, this is only one possible reason for parting ways with the SIAI and the FHI, without in any sense discounting the risks they are made to confront.

Replies from: orthonormal, pjeby, timtyler, timtyler
comment by orthonormal · 2010-08-31T21:07:57.661Z · LW(p) · GW(p)

From your analysis, it seems that FHI would be very well aligned with your goals: it's a high-profile, academic (rather than corporate) entity which can publicize existential risks (and takes the corporate creation of such risks seriously, IIRC).

Would this not be desirable, or is there any organization within the broader anticorporate movement you speak of that would even think to do the same with comparable competency?

Replies from: homunq
comment by homunq · 2010-09-01T14:31:41.020Z · LW(p) · GW(p)

I believe that explicitly political movements, not academic ones, are the only ones which are other-optimizing enough to fight the mal-optimization of corporations. And I think that at our current level of corporate power versus AI-relevant technological understanding, my energy is best spent fighting the former rather than advancing the latter (and I majored in cognitive science and work as a programmer, so I hold that same conclusion for most people.)

I realize that these beliefs are partly tribal (something which allows me to get along with my wife and friends) and partly hope-seeking (something which allows me to get up in the morning). I think that these are valid reasons to give a belief the benefit of the doubt. I would not, however, use these excuses to justify a belief with no rational basis, or to avoid considering an argument for the lack of rational basis. Anyway, even if one tried to rid oneself of tribal and hope-seeking biases, beyond the caveats in the previous sentence, I don't think it would help one be appreciably more rational.

comment by pjeby · 2010-08-31T17:27:30.366Z · LW(p) · GW(p)

In my case, the reason is that I think that corporations, though not truly intelligent,

They get to use borrowed intelligence from their human symbiotes, though. ;-) (Or would they be symbionts? Hm...)

comment by timtyler · 2010-08-31T21:07:56.590Z · LW(p) · GW(p)

Re: coordinated action to tame corporations

One thing we need is corporation reputation systems. We have product reviews, and so forth - but the whole area is poorly organised.

comment by timtyler · 2010-08-31T21:05:11.171Z · LW(p) · GW(p)

Why are corporations "not truly intelligent"? They contain humans, surely. Would you say that humans are "not truly intelligent" either?

Replies from: homunq
comment by homunq · 2010-09-01T14:21:16.193Z · LW(p) · GW(p)

They contain humans. However, while corporations themselves are psychopathic, most are not controlled and staffed by psychopaths. This gives corporations (thank Darwin) cognitive biases which systematically reduce their intelligence when pursuing obviously unFriendly goals.

In the end, it depends on your definition of intelligence. The intelligence of a corporation in choosing strategies to fit its goals is sometimes of the level of natural selection (weak), sometimes human intelligence (true), and sometimes effective crowd intelligence (mildly superhuman). I'd guess that on the whole, they average somewhat below human intelligence (but much higher power) when pursuing explicitly unFriendly subgoals; and somewhat above human intelligence when pursuing subgoals that happen to be neutral or Friendly. But that does not necessarily mean they are on balance Friendly, because their root goals are not.

Replies from: timtyler
comment by timtyler · 2010-09-01T15:36:45.921Z · LW(p) · GW(p)

The basic idea with corporations is that they are kept in check by an even more powerful organisation: the government. If any corporation gets too big, the Monopolies and Mergers Commission intervenes and splits it up. As far as I know, no corporation has ever overthrown its "parent" government.

Replies from: wnoise
comment by wnoise · 2010-09-01T15:43:07.524Z · LW(p) · GW(p)

Other governments however...

comment by timtyler · 2010-08-18T15:09:27.796Z · LW(p) · GW(p)

You approve of their plan to build a machine to take over the world and impose its own preferences on everyone? You talk about "optimality" - how confident are you that that is really going to help? What reasoning supports such a claim?

comment by JamesAndrix · 2010-08-16T03:02:28.595Z · LW(p) · GW(p)

Warning: Shameless Self Promotion ahead

Perhaps part of the difficulty here is the attempt to spur a wide rationalist community on the same site frequented by rationalists with strong, obscure positions on obscure topics.

Early in Less Wrong's history, discussion of FAI was discouraged so that it didn't just become a site about FAI and the singularity, but rather a forum about human rationality more generally.

I can't track down an article (or articles) from EY about how thinking about AI can be too absorbing, and how, in order to properly create a community, you have to truly put aside the ulterior motive of advancing FAI research.

It might be wise for us to again deliberately shift our focus away from FAI and onto human rationality and how it can be applied more widely (say to science in general.)

Enter the SSP: For months now I've been brainstorming a community to educate people on the creation and use of 3D printers, with the eventual goal of making much better 3D printers. So this is a different big complicated problem with a potential high payoff, and it ties into many fields, provides tangible previews of the singularity, can benefit from the involvement of people with almost any skill set, and seems to be much safer than advancing AI, nanotech, or genetic engineering.

I had already intended to introduce rationality concepts where applicable and link a lot to Less Wrong, but if a few LWers were willing to help, it could become a standalone community of people committed to thinking clearly about complex technical and social problems, with a latent obsession with 3D printers.

comment by wedrifid · 2010-08-15T09:05:06.898Z · LW(p) · GW(p)

I like your post. I wouldn't go quite so far as to ascribe outright negative utility to SIAI donations - I believe you underestimate just how much potential social influence money provides. I suspect my conclusion there would approximately mirror Vassar's.

It would not be at all surprising if the founders of Less Wrong have a similar unusual abundance of the associated with Aspergers Syndrome. I believe that more likely than not, the reason why Eliezer has missed the point that I raise in this post is social naivete on his part rather than willful self-deception.

(Typo: I think you meant to include 'traits' or similar in there.)

While Eliezer occasionally takes actions that seem clearly detrimental to his cause, I do suggest that he is at least in principle aware of the dynamics you discuss. His alter ego "Harry Potter" has had similar discussions with his Draco in his fanfiction.

Also note that appearing too sophisticated would be extremely dangerous. If Eliezer or SIAI gains the sort of status and credibility you would like them to seek, they open themselves to threats from governments and paramilitary organisations. If you are trying to take over the world it is far better to be seen as an idealistic do-gooder who writes fanfic than as a political power player. You don't want the <TLA of choice> to raid your basement, kill you and run your near-complete FAI with the values of the TLA. Obviously there is some sort of balance to be reached here...

Replies from: XiXiDu, timtyler, multifoliaterose, CarlShulman
comment by XiXiDu · 2010-08-15T13:09:37.665Z · LW(p) · GW(p)

I raised a similar point on the IEET existential risk mailing list in a reply to James J. Hughes:

Michael

For the record, I have no problem with probability estimates. I am less and less willing to offer them myself however since we have the collapse of the Soviet Union etc. as evidence of a chaotic and unpredictable nature of history, Ray’s charts not-with-standing.

What I find a continuing source of amazement is that there is a subculture of people half of whom believe that AI will lead to the solving of all mankind’s problems (which me might call Kurzweilian S^) and the other half of which is more or less certain (75% certain) that it will lead to annihilation. Lets call the latter the SIAI S^.

Yet you SIAI S^ invite these proponents of global suicide by AI, K-type S^, to your conferences and give them standing ovations.

And instead of waging desperate politico-military struggle to stop all this suicidal AI research you cheerlead for it, and focus your efforts on risk mitigation on discussions of how a friendly god-like AI could save us from annihilation.

You are a deeply schizophrenic little culture, which for a sociologist like me is just fascinating.

But as someone deeply concerned about these issues I find the irrationality of the S^ approach to a-life and AI threats deeply troubling.

James J. Hughes (existential.ieet.org mailing list, 2010-07-11)

I replied:

Keep your friends close...maybe they just want to keep the AI crowd as close together as possible. Making enemies wouldn't be a smart idea either, as the 'K-type S^' subgroup would likely retreat from further information disclosure. Making friends with them might be the best idea.

An explanation of the rather calm stance regarding a potential giga-death or living hell event would be to keep a low profile until acquiring more power.

Replies from: wedrifid, Rain
comment by wedrifid · 2010-08-15T17:09:50.625Z · LW(p) · GW(p)

Yet you SIAI S^ invite these proponents of global suicide by AI, K-type S^, to your conferences and give them standing ovations. ... You are a deeply schizophrenic little culture, which for a sociologist like me is just fascinating.

This is a perfect example of where the 'outside view' can go wrong. Even the most basic 'inside view' of the topic would make it overwhelmingly obvious why the "75% certain of death by AI" folks could be allied with (or be the same people as!) the "solve all problems through AI" group. Splitting the two positions prematurely and trying to make a simple model of political adversity like that is just naive.

I personally guess >= 75% for AI death and also advocate FAI research. Preventing AI development indefinitely via desperate politico-military struggle would just not work in the long term. Trying would be utter folly. Never mind the even longer term, which would probably result in undesirable outcomes even if humanity did manage to artificially stunt its own progress in such a manner.

(The guy also uses 'schizophrenic' incorrectly.)

Replies from: Aleksei_Riikonen, Jonathan_Graehl, timtyler, Jordan
comment by Aleksei_Riikonen · 2010-08-15T19:35:32.947Z · LW(p) · GW(p)

I don't think James Hughes would present or believe in that particular low-quality analysis himself either, if he didn't feel that SIAI is an organization competing with his IEET for popularity within the transhumanist subculture.

So mostly that statement is probably just about using "divide and conquer" towards transhumanists/singularitarians who are currently more popular within the transhumanist subculture than he is.

Replies from: timtyler
comment by timtyler · 2010-08-15T19:49:05.750Z · LW(p) · GW(p)

James Hughes seems like a fine fellow to me - and his SIAI disagreements seem fairly genuine. It is much of the rest of IEET that is the problem.

comment by Jonathan_Graehl · 2010-08-16T22:21:16.817Z · LW(p) · GW(p)

Preventing AI development indefinitely via desperate politico-military struggle would just not work in the long term. Trying would be utter folly.

This reminds me of Charles Stross' "why even try - AI can't be stopped" (my paraphrase).

Surely if it buys a little extra time for a FAI singleton to succeed, desperate struggle to suppress other lines of dangerously-near-culmination AI research would seem incumbent. I guess this might be one of those scary (and sufficiently distant) things nobody wants to advertise.

comment by timtyler · 2010-08-15T18:41:42.798Z · LW(p) · GW(p)

What does "75% certain of death by AI" mean?

comment by Jordan · 2010-08-15T18:25:02.027Z · LW(p) · GW(p)

Preventing AI development indefinitely via desperate politico-military struggle would just not work in the long term. Trying would be utter folly.

Greater folly than letting something happen that has a greater than 75% chance of destroying the human race?

Replies from: JoshuaZ
comment by JoshuaZ · 2010-08-15T18:37:37.804Z · LW(p) · GW(p)

If you estimate a high chance of AI development destroying humanity, then trying to get through that bottleneck with the remaining (roughly 25%) chance of surviving is almost certainly better than trying to stamp out such research and buying a few years in exchange for replacing 75% with a near certainty. The only argument against that I can see, if one accepts the 75% number, is that forced delay until we have uploads might help matters: uploads would have moral systems close to those of their original humans, and they would have a better chance at solving the FAI problem or, failing that, at counteracting any unFriendly (or merely unfriendly) AI.

Replies from: Jordan
comment by Jordan · 2010-08-15T20:18:16.360Z · LW(p) · GW(p)

AI research is hard. It's not clear to me that a serious, global ban on AI would only delay the arrival of AGI by a few years. Communication, collaboration, recruitment, funding... all of these would be much more difficult. What's more, since current AI researchers are open about their work, they would be the easiest to track after a ban; any new AI research would have to come from green researchers.

That aside, I agree that a ban whose goal is simply indefinite postponement of AGI is unlikely to work (and I'm dubious of any ban in general). Still, it isn't hard for me to imagine that a ban could buy us 10 years, and that a similar amount of political might could also greatly accelerate an upload project.

The biggest argument against, in my opinion, is that the only way the political will could be formed is if the threat of AGI was already so imminent that a ban really would be worse than worthless.

Replies from: wedrifid, NancyLebovitz, timtyler
comment by wedrifid · 2010-08-16T03:31:59.017Z · LW(p) · GW(p)

AI research is hard. It's not clear to me that a serious, global ban on AI would only delay the arrival of AGI by a few years.

The other thing to consider is just what the ban would achieve. I would expect it to lower the 75% chance by giving us the opportunity to go extinct in another way before making a mistake with AI. When I say 'extinct' I include (d)evolving to an equilibrium (such as those described by Robin Hanson from time to time).

comment by NancyLebovitz · 2010-08-20T04:04:19.959Z · LW(p) · GW(p)

How well defined is AI research? My assumption is that if AI is reasonably possible for humans to create, then it's going to become much easier as computers become more powerful and human minds and brains become better understood.

comment by timtyler · 2010-08-15T20:24:25.123Z · LW(p) · GW(p)

A ban seems highly implausible to me. What is the case for considering it? Do you really think that enough people will become convinced that there is a significant danger?

Replies from: Jordan
comment by Jordan · 2010-08-15T21:46:01.466Z · LW(p) · GW(p)

I agree, it seems highly implausible to me as well. However, the subject at hand (AI, AGI, FAI, uploads, etc) is riddled with extremes, so I'm hesitant to throw out any possibility simply because it would be incredibly difficult.

Do you really think that enough people will become convinced that there is a significant danger?

See the last line of the comment you responded to.

comment by Rain · 2010-08-15T16:50:24.568Z · LW(p) · GW(p)

Seeing you quote James Hughes makes me wonder if I didn't realize where you were getting your ideas when I said the anti-Summit should be technically minded and avoid IEET-style politics.

comment by timtyler · 2010-08-15T09:28:13.273Z · LW(p) · GW(p)

If you are trying to take over the world, it is far better to be seen as an idealistic do-gooder who writes fanfic than as a political power player.

They took down the "SIAI will not enter any partnership that compromises our values" "commitment" from their web site. Maybe they are more up for partnerships these days.

comment by multifoliaterose · 2010-08-15T10:14:27.781Z · LW(p) · GW(p)

(Typo: I think you meant to include 'traits' or similar in there.)

Thanks for pointing this out :-)

While Eliezer occasionally takes actions that seem clearly detrimental to his cause, I do suggest that Eliezer is at least in principle aware of the dynamics you discuss. His alter ego "Harry Potter" has had similar discussions with his Draco in his fanfiction.

Interesting, I'll check this out at some point.

comment by CarlShulman · 2010-08-15T09:14:06.849Z · LW(p) · GW(p)

This is pretty silly.

Replies from: wedrifid
comment by wedrifid · 2010-08-15T09:45:39.060Z · LW(p) · GW(p)

This is pretty silly.

I do not find your comment in any way helpful. It does not even inform me of which of my statements you disagree with, much less why. It is mere rudeness.

I further observe that the very subject is about silliness. That is, it doesn't matter how correct Eliezer is, if he looks silly people will find reasons to disrespect him and his cause. Eliezer describes the phenomenon in terms of clown suits vs lab coats.

comment by Mitchell_Porter · 2010-08-15T11:51:14.066Z · LW(p) · GW(p)

But what if you're increasing existential risk, because encouraging SIAI staff to censor themselves will make them neurotic and therefore less effective thinkers? We must all withhold karma from multifoliaterose until this undermining stops! :-)

comment by Jonathan_Graehl · 2010-08-16T20:58:49.425Z · LW(p) · GW(p)

When I'm talking to someone I respect (and want to admire me), I definitely feel an urge to distance myself from EY. I feel like I'm biting a social bullet in order to advocate for SIAI-like beliefs or action.

What's more, this casts a shadow over my actual beliefs.

This is in spite of the fact that I love EY's writing, and actually enjoy his fearless geeky humor ("hit by a meteorite" is indeed more fun than the conventional "hit by a bus").

The fear of being represented by EY is mostly due to what he's saying, not how he's saying it. That is, even if he were always dignified and measured, he'd catch nearly as much flak. If he'd avoided certain topics entirely, that would have made a significant difference, but on the other hand, he's effectively counter-signaled that he's fully honest and uncensored in public (of course he is probably not, exactly), which I think is also valuable.

I think EY can win by saying enough true things, convincingly, that smart people will be persuaded that he's credible. It's perhaps true that better PR will speed the process - by enough for it to be worth it? That's up to him.

The comments in this diavlog with Scott Aaronson - while some are by obvious axe-grinders - are critical of EY's manner. People appear to hate nothing more than (what they see as) undeserved confidence. Who knows how prevalent this adverse reaction to EY is, since the set of commenters is self-selecting.

People who are floundering in a debate with EY (e.g. Jaron Lanier) seem to think they can bank on a "you crazy low-status sci-fi nerd" rebuttal to EY. This can score huge with lazy or unintellectual people if it's allowed to succeed.

Replies from: Eneasz
comment by Eneasz · 2010-08-24T17:52:00.517Z · LW(p) · GW(p)

This can score huge with lazy or unintellectual people if it's allowed to succeed.

What is the likelihood that lazy or unintellectual people would have ever done anything to reduce existential risk regardless of any particular advocate for/against?

Replies from: JoshuaZ
comment by JoshuaZ · 2010-08-24T18:00:52.708Z · LW(p) · GW(p)

They might give money to the people who will actually do the work of reducing existential risk. I'd also note that even for people who are genuinely intellectuals, or at least think of themselves as intellectuals, this sort of argument can, if phrased in the right way, still have an impact; sci-fi is still a very low-status association for many of those people.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2010-08-24T18:51:39.631Z · LW(p) · GW(p)

I think Eneasz is right, but I agree with you that we should care about the support of ordinary people and those who choose to specialize elsewhere.

I was thinking also of the motivational effect of average people's (dis)approval on the gifted. Sure, many intellectual milestones were first reached by those who either needed less to be accepted, or drew their in-group/out-group boundary more tightly around themselves, but social pressure matters.

comment by Perplexed · 2010-08-15T18:22:57.178Z · LW(p) · GW(p)

I believe that more likely than not, the reason why Eliezer has missed the point that I raise in this post is social naivete on his part rather than willful self-deception.

I find it impossible to believe that the author of Harry Potter and the Methods of Rationality is oblivious to the first impression he creates. However, I can well believe that he imagines it to be a minor handicap which will fade in importance with continued exposure to his brilliance (as was the fictional case with HP). The unacknowledged problem in the non-fictional case, of course, is in maintaining that continued exposure.

I am personally currently skeptical that the singularity represents existential risk. But having watched Eliezer completely confuse and irritate Robert Wright, and having read half of the "debate" with Hanson, I am quite willing to hypothesize that the explanation of what the singularity is (and why we should be nervous about it) ought to come from anybody but Eliezer. He speaks and writes clearly on many subjects, but not that one.

Perhaps he would communicate more successfully on this topic if he tried a dialog format. But it would have to be one in which his constructed interlocutors are convincing opponents, rather than straw men.

Replies from: timtyler
comment by timtyler · 2010-08-15T18:31:09.700Z · LW(p) · GW(p)

It depends on exactly what you mean by "existential risk". Development will likely - IMO - create genetic and phenotypic takeovers in due course - as the bioverse becomes engineered. That will mean no more "wild" humans.

That is something which some people seem to wail and wave their hands about - talking about the end of the human race.

The end of earth-originating civilisation seems highly unlikely to me too - which is not to say that the small chance of it is not significant enough to discuss.

Eliezer's main case for that appears to be on http://lesswrong.com/lw/y3/value_is_fragile/

I think that document is incoherent.

comment by [deleted] · 2010-08-18T15:46:37.153Z · LW(p) · GW(p)

I think a largish fraction of the population has worries about human extinction / the end of the world. Very few associate this with the phrase "existential risk" -- I for one had never heard the term until after I had started reading about the technological singularity and related ideas. Perhaps rebranding of a sort would help you further the cause. Ditto for FAI - I think 'Ethical Artificial Intelligence' would get the idea across well enough and might sound less flakey to certain audiences.

Replies from: zemaj, josh0
comment by zemaj · 2010-08-19T10:09:06.561Z · LW(p) · GW(p)

"Ethical Artificial Intelligence" sounds great and makes sense without having to know the background of the technological singularity as "Friendly Artificial Intelligence" does. Every time I try to mention FAI to someone without any background on the topic I always have to take two steps back in the conversation and it becomes quickly confusing. I think I could mention Ethical AI and then continue on with whatever point I was making without any kind of background and it would still make the right connections.

I also expect it would appeal to a demographic likely to support the concept as well. People who worry about ethical food, business, healthcare etc... would be likely to worry about existential risk on many levels.

In fact I think I'll just go ahead and start using Ethical AI from now on. I'm sure people in the FAI community would understand what I'm talking about.

comment by josh0 · 2010-08-21T03:53:41.096Z · LW(p) · GW(p)

It may be true that many are worried about 'the end of the world'; however, consider how many of them think that it was predicted by the Mayan calendar to occur on Dec. 21, 2012, and how many actively want it to happen because they believe it will herald the coming of God's Kingdom on Earth, Olam Haba, or whatever.

We could rebrand 'existential risk' as 'end time' and gain vast numbers of followers. But I doubt that would actually be desirable.

I do think that Ethical Artificial Intelligence would strike a better chord with most than Friendly, though. 'Friendly' does sound a bit unserious.

comment by James_Miller · 2010-08-18T15:30:54.714Z · LW(p) · GW(p)

Given how superficially insane Eliezer's beliefs seem he has done a fantastic job of attracting support for his views.

Eliezer is popularizing his beliefs, not directly through his own writings, but by attracting people (such as conference speakers and this comment writer who is currently writing a general-audience book) who promote understanding of issues such as intelligence explosion, unfriendly AI and cryonics.

Eliezer is obviously not neurotypical. The non-neurotypical have a tough time making arguments that emotionally connect. Given that Eliezer has a massive non-comparative advantage in making such arguments we shouldn't expect him to spend his time trying to become slightly better at doing so.

Eliezer might not have won the backing of people such as super-rationalist self-made tech billionaire Peter Thiel had Eliezer devoted less effort to rational arguments.

Replies from: FAWS
comment by FAWS · 2010-08-18T15:37:27.694Z · LW(p) · GW(p)

Given that Eliezer has a massive non-comparative advantage in making such arguments we shouldn't expect him to spend his time trying to become slightly better at doing so.

Do you mean comparative disadvantage? Otherwise I can't make sense of what you are trying to say. Not that I'd agree with that anyway, Eliezer is very good rhetorically, and I'm suspicious of psychological diagnoses performed over the internet.

Replies from: James_Miller
comment by James_Miller · 2010-08-18T15:44:55.862Z · LW(p) · GW(p)

By "massive non-comparative advantage" I meant he doesn't have a comparative advantage.

I have twice talked with Eliezer in person, seen in person a few of his talks, watched several videos of him talking and for family reasons I have read a huge amount about the non-neurotypical.

Replies from: FAWS
comment by FAWS · 2010-08-18T16:07:54.580Z · LW(p) · GW(p)

By "massive non-comparative advantage" I meant he doesn't have a comparative advantage.

??? So you mean he has a massive absolute advantage, but is also so hugely better at other things compared to normal people it's still not worth his time??? Or does that actually mean that he has an absolute advantage of unspecified size, that happens to be very much non-comparative? What someone only vaguely familiar with economic terminology like me might call a "massively non-comparative advantage"?

comment by JoshuaZ · 2010-08-15T16:28:32.509Z · LW(p) · GW(p)

I disagree strongly with this post. In general, it is a bad idea to refrain from making claims that one believes are true simply because those claims will make people less likely to listen to other claims. That way lies a downward spiral of emotional manipulation, rhetoric, and other things not conducive to rational discourse.

Would one under this logic encourage the SIAI to make statements that are commonly accepted but wrong in order to make people more likely to listen to the SIAI? If not, what is the difference?

Replies from: multifoliaterose, timtyler, Jonathan_Graehl, multifoliaterose
comment by multifoliaterose · 2010-08-15T17:26:31.231Z · LW(p) · GW(p)

I believe that there are contexts in which the right thing to do is to speak what one believes to be true even if doing so damages public relations.

These things need to be decided on a case-by-case basis. There's no royal road to instrumental rationality.

As I say here, in the present context, a very relevant issue in my mind is that Eliezer & co. have not substantiated their most controversial claims with detailed evidence.

It's clichéd to say so, but extraordinary claims require extraordinary evidence. A claim of the type "I'm the most important person alive" is statistically many orders of magnitude more likely to be made by a poser than by somebody for whom the claim is true. Casual observers are rational to believe that Eliezer is a poser. The halo effect problem is irrational, yes, but human irrationality must be acknowledged, it's not the sort of thing that goes away if you pretend that it's not there.

I don't believe that Eliezer's outlandish and unjustified claims contribute to rational discourse. I believe that Eliezer's outlandish and unjustified claims lower the sanity waterline.

To summarize, I believe that in this particular case the costs that you allude to are outweighed by the benefits.

Replies from: timtyler
comment by timtyler · 2010-08-15T18:03:07.305Z · LW(p) · GW(p)

Come on - he never actually claimed that.

Besides, many people have inflated views of their own importance. Humans are built that way. For one thing, it helps them get hired, if they claim that they can do the job. It is sometimes funny - but surely not a big deal.

comment by timtyler · 2010-08-15T16:32:54.818Z · LW(p) · GW(p)

It seems as though the latter strategy could backfire - if the false statements were exposed. Keeping your mouth shut about controversial issues seems safer.

comment by Jonathan_Graehl · 2010-08-16T21:37:42.009Z · LW(p) · GW(p)

To the extent that people really want what you argue against, perhaps they should pursue an organization other than SIAI, one that promotes only the more palatable subset. I agree with you that somebody should be making all the claims, popular or not, that bear on x-risk.

comment by multifoliaterose · 2010-08-15T17:13:59.620Z · LW(p) · GW(p)

I believe that there are contexts in which the right thing to do is to speak what one believes to be true even if doing so damages public relations.

These things need to be decided on a case-by-case basis. There's no royal road to instrumental rationality.

As I say here, in the present context, a very relevant issue in my mind is that Eliezer & co. have not substantiated their most controversial claims with detailed evidence.

It's clichéd to say so, but extraordinary claims require extraordinary evidence. A claim of the type "I'm the most important person alive" is statistically many orders of magnitude more likely to be made by a poser than by somebody for whom the claim is true. Casual observers are rational to believe that Eliezer is a poser. The halo effect problem is irrational, yes, but human irrationality must be acknowledged, it's not the sort of thing that goes away if you pretend that it's not there.

I don't believe that Eliezer's outlandish and unjustified claims contribute to rational discourse. I believe that Eliezer's outlandish and unjustified claims lower the sanity waterline. I feel no squeamishness about placing pressure on Eliezer to cease to make such claims in public even if he sincerely believes them.

comment by Larks · 2010-08-17T22:11:48.771Z · LW(p) · GW(p)

This page is now the 8th result for a Google search for 'existential risk' and the 4th result for 'singularity existential risk'.

Regardless of the effect SIAI may have had on the public image of existential risk reduction, it seems this is unlikely to be helpful.

Edit: it is now 7th and first, respectively. This is plusungood.

Replies from: jimrandomh, multifoliaterose
comment by jimrandomh · 2010-08-18T03:32:02.375Z · LW(p) · GW(p)

This is partially because Google gives a ranking boost to things it sees as recent, so it may not stay that well ranked.

Replies from: Larks, multifoliaterose
comment by Larks · 2010-08-18T05:00:28.299Z · LW(p) · GW(p)

Right & upvoted.

comment by multifoliaterose · 2010-08-18T04:09:40.293Z · LW(p) · GW(p)

Yes, good point.

comment by multifoliaterose · 2010-08-17T22:15:46.402Z · LW(p) · GW(p)

I disagree. I think that my post does a good job of highlighting the fact that public aversion to thinking about existential risk reduction is irrational.

Replies from: Larks
comment by Larks · 2010-08-17T22:25:23.642Z · LW(p) · GW(p)

The post (as I parse it) has two points:

  • The public are irrational with respect to existential risk
  • Donating to SIAI has negative expected impact on existential risk reduction

The former is fine, but the latter seems more likely to damage SIAI and existential risk reduction. It's not desirable that when someone does their initial Google search, one of the first things they find is infighting and attacks on SIAI as essentially untrustworthy. Rather, they should find actual articles about the singularity, the dangers it poses, and the work being done.

As you so accurately quote Yvain, for the average reader this is not an intelligent critique of the public relations of SingInst. This is 'boo Eliezer!'

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-17T22:36:21.455Z · LW(p) · GW(p)

The former is fine, but the latter seems more likely to damage SIAI and existential risk reduction. It's not desirable that when someone does their initial Google search, one of the first things they find is infighting and attacks on SIAI as essentially untrustworthy. Rather, they should find actual articles about the singularity, the dangers it poses, and the work being done.

I agree that this article is not one of the first that should appear when people Google the singularity or existential risk. I'm somewhat perplexed as to how this happened?

Despite this issue, I think that the benefits of my posting on this topic outweigh the costs. I believe that ultimately whether or not humans avoid global catastrophic risk depends much more on people's willingness to think about the topic than it does on SIAI's reputation. I don't believe that my post will lower readers' interest in thinking about existential risk.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-18T11:52:16.296Z · LW(p) · GW(p)

Interesting, this post does indeed appear on the first page of results when searching for "existential risk":

http://www.google.com/search?q=existential+risk

I don't think it is inappropriate though, as public relations are critical to tackling such problems.

I submitted the following article to the existential risk mailing list run by the IEET: MIT political scientists demonstrate how much candidate appearances affect election outcomes, globally.

James J. Hughes wrote me an e-mail saying:

I thought this one was a little off-topic.

My reply:

I thought this does hint at possible dangers of empowering large numbers of uneducated people, e.g. in the case of a global democracy. It also shows that people concerned with the reduction of existential risks should partly focus on educating people about biases if they plan to tackle such problems by means of democracy. Further, it means that organisations concerned with policy-making regarding global catastrophic risks should employ highly charismatic individuals for public relations; it also underscores the importance of the former point.

I'd also like to post another submission I wrote in response to some tense debate between members of the IEET and the SIAI regarding public and academic relations within the x-risk fraction:

I'm seriously trying to be friendly. I do not side with anyone right now. But have some of you ever read any of EY' writings, i.e. over at lesswrong.com / yudkowsky.net?

What have I missed? This discussion seems emotionally overblown. Makes me wonder if Yudkowsky actually uttered 'really stupid things', or if it was just an overreaction between people who are not neurotypical.

Whatever they are, EY and Anissimov are not dumb. If EY farts during lunch or commits other disgusting acts, it does not belittle what good he does. If a murderer proclaims that murder is wrong, the fact of his being a murderer does not negate the conclusion.

What I'm trying to say is that too many personal issues seem to play too much of a role in your circles.

I'm a complete outsider from Germany, with no formal education. And you know how your movements (SIAI, IEET...) appear to me? Here's a quote from a friend who pretty much summarized it by saying:

"I rather think that it is their rationality that is at risk here...somehow in most of their discussions it is the large questions that seem to disappear into the infinite details of risk analysis, besides the kind of paranoia they emanate is a kind of chronic fatigue symptoms of over-saturated minds..."

You need to get back to the basics. Step back and concentrate on the important issues. Much of the Transhumanist-AI-Singularity movement seems to be drowning in internal conflict over minor issues. All of the subgroups have something in common, while none is strong enough on its own to achieve sufficient impact. First and foremost you have to gather pace together. Deciding upon the details is something to be deferred in favor of cooperation on the central issues of public awareness of existential risks and the importance of responsible scientific research and ethical progress.

Step back and look at what you are doing. You indulge in tearful debates over semantics. It's insignificant unless you have a particular goal in mind, such as convincing the public of a certain idea by means of rhetoric.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-18T12:08:08.223Z · LW(p) · GW(p)

Without wishing to be harsh, I'll say that I don't see what this comment adds to the discussion.

comment by JRMayne · 2010-08-15T16:25:58.915Z · LW(p) · GW(p)

Solid, bold post.

Eliezer's comments on his personal importance to humanity remind me of the Total Perspective Vortex from Hitchhiker's. Everyone who gets perspective from the Vortex goes mad; Zaphod Beeblebrox goes in and finds out he's the most important person in human history.

Eliezer's saying he's Zaphod Beeblebrox. Maybe he is, but I'm betting heavily against that for the reasons outlined in the post. I expect AI progress of all sorts to come from people who are able to dedicate long, high-productivity hours to the cause, and who don't believe that they and only they can accomplish the task.

I also don't care if the statements are social naivete or not; I think the statements that indicate that he is the most important person in human history - and that seems to me to be what he's saying - are so seriously mistaken, and made with such a high confidence level, as to massively reduce my estimated likelihood that SIAI is going to be productive at all.

And that's a good thing. Throwing money into a seriously suboptimal project is a bad idea. SIAI may be good at getting out the word on existential risk (and I do think existential risk is serious, under-discussed business), but the indicators are that it's not going to solve it. I won't give to SIAI even if Eliezer stops saying these things, because it appears he'll still be thinking those things.

I expect AI progress to come incrementally, BTW - I don't expect the Foomination. And I expect it to come from Google or someone similar; a large group of really smart, really hard-working people.

I could be wrong.

--JRM

Replies from: nhamann, Eliezer_Yudkowsky
comment by nhamann · 2010-08-15T17:33:46.402Z · LW(p) · GW(p)

I expect AI progress to come incrementally, BTW - I don't expect the Foomination. And I expect it to come from Google or someone similar; a large group of really smart, really hard-working people.

I'd like to point out that it's not either/or: it's possible (likely?) that it will take decades of hard work and incremental progress by lots of really smart people to advance AI science to a point where an AI could FOOM.

Replies from: CarlShulman
comment by CarlShulman · 2010-08-15T18:05:22.977Z · LW(p) · GW(p)

I would say likely, conditional on eventual FOOM. The alternative means both a concentration of probability mass in the next ten years and that the relevant theory and tools are almost wholly complete.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-18T15:03:07.513Z · LW(p) · GW(p)

And saddened once again at how people seem unable to distinguish "multi claims that something Eliezer said could be construed as claim X" and "Eliezer claimed X!"

Please note that for the next time you're worried about damaging an important cause's PR, multi.

Replies from: JRMayne, multifoliaterose, XiXiDu
comment by JRMayne · 2010-08-18T16:52:19.005Z · LW(p) · GW(p)

Um, I wasn't basing my conclusion on multifoliaterose's statements. I had made the Zaphod Beeblebrox analogy due to the statements you personally have made. I had considered doing an open thread comment on this very thing.

Which of these statements do you reject?:

  1. FAI is the most important project on earth, right now, and probably ever.

  2. FAI may be the difference between a doomed multiverse and a saved one, for [very large number] of sentient beings. No project in human history is of greater importance.

  3. You are the most likely person - and SIAI the most likely agency, because of you - to accomplish saving the multiverse.

Number 4 is unnecessary for your being the most important person on earth, but:

  4. People who disagree with you are either stupid or ignorant. If only they had read the sequences, then they would agree with you. Unless they were stupid.

And then you've blamed multi for this. He is trying to help an important cause; both multifoliaterose and XiXiDu are, in my opinion, acting in a manner they believe will help the existential risk cause.

And your final statement, that multifoliaterose is damaging an important cause's PR appears entirely deaf to multi's post. He's trying to help the cause - he and XiXiDu are orders of magnitude more sympathetic to the cause of non-war existential risk than just about anyone. You appear to have conflated "Eliezer Yudkowsky," with "AI existential risk."

Again.

I might be wrong about my interpretation - but I don't think I am. If I am wrong, other very smart people who want to view you favorably have done similar things. Maybe the flaw isn't in the collective ignorance and stupidity in other people. Just a thought.

--JRM

Replies from: JGWeissman
comment by JGWeissman · 2010-08-18T18:39:40.769Z · LW(p) · GW(p)

Which of those statements do you reject?

comment by multifoliaterose · 2010-08-18T16:08:02.802Z · LW(p) · GW(p)

My understanding of JRMayne's remark is that he himself construes your statements in the way that I mentioned in my post.

If JRMayne has misunderstood you, you can effectively deal with the situation by making a public statement about what you meant to convey.

Note that you have not made a disclaimer which rules out the possibility that you claim that you're the most important person in human history. I encourage you to make such a disclaimer if JRMayne has misunderstood you.

comment by XiXiDu · 2010-08-18T15:23:21.053Z · LW(p) · GW(p)

I have to disagree based on the following evidence:

Q: The only two legitimate occupations for an intelligent person in our current world? (Answer)

and

"At present I do not know of any other person who could do that." (Reference)

This makes it reasonable to state that you think you might be the most important person in the world.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-18T15:26:54.084Z · LW(p) · GW(p)

I love that "makes it reasonable" part. Especially in a discussion on what you shouldn't say in public.

Now we're to avoid stating any premises from which any absurd conclusions seem reasonable to infer?

This would be a reductio of the original post if the average audience member consistently applied this sort of reasoning; but of course it is motivated on XiXiDu's part, not necessarily something the average audience member would do.

Note that saying "But you must therefore argue X..." where the said person has not actually uttered X, but it would be a soldier against them if they did say X, is a sign of political argument gone wrong.

Replies from: JRMayne, XiXiDu, XiXiDu
comment by JRMayne · 2010-08-18T16:59:12.343Z · LW(p) · GW(p)

Gosh, I find this all quite cryptic.

Suppose I, as Lord Chief Prosecutor of the Heathens say:

  1. All heathens should be jailed.

  2. Mentally handicapped Joe is a heathen; he barely understands that there are people, much less the One True God.

One of my opponents says I want Joe jailed. I have not actually uttered that I want Joe jailed, and it would be a soldier against me if I had, because that's an unpopular position. This is a mark of a political argument gone wrong?

I'm trying to find another logical conclusion to XiXiDu's cited statements (or a raft of others in the same vein.) Is there one I don't see? Is it just that you're probably the most important entity in history, but, you know, maybe not? Is it that there's only a 5% chance that you're the most important person in human history?

I have not argued that you should not say these things, BTW. I have argued that you probably should not think them, because they are very unlikely to be true.

Replies from: JGWeissman
comment by JGWeissman · 2010-08-18T18:45:20.550Z · LW(p) · GW(p)

In this case I would ask you if you really want Joe jailed, or if when you said that "All heathens should be jailed", you were using the word "heathen" in a stronger sense of explicitly rejecting the "One True God" than the weak sense that Joe is a "heathen" for not understanding the concept.

And if you answer that you meant only that strong heathens should be jailed, I would still condemn you for that policy.

comment by XiXiDu · 2010-08-18T15:32:24.616Z · LW(p) · GW(p)

I'm too dumb to grasp what you just said in its full complexity. But I believe you are indeed one of the most important people in the world. Further, (1) I don't see what is wrong with that (2) It is positive for public relations as it attracts people to donate money (Evidence: Jesus) (3) It won't hurt academic relations as you are always able to claim that you were misunderstood.

comment by XiXiDu · 2010-08-18T15:41:57.530Z · LW(p) · GW(p)

I'm sorry for the other comment. I was just trying to take it lightly, i.e. joking. You are right of course.

But someone like me would infer from the given evidence that you think you are important. And I don't think it is wise to downplay your importance, as far as public relations are concerned.

Note that saying "But you must therefore argue X..." where the said person has not actually uttered X, but it would be a soldier against them if they did say X, is a sign of political argument gone wrong.

Yeah, but that is part of public relations and has to be taken into account.

comment by Jordan · 2010-08-15T18:33:45.535Z · LW(p) · GW(p)

Damnit! My smug self assurance that I could postpone thinking about these issues seriously because I'm an SIAI donor .... GONE! How am I supposed to get any work done now?

Seriously though, I do wish the SIAI toned down its self importance and incredible claims, however true they are. I realize, of course, that dulling some claims to appear more credible is approaching a Dark Side type strategy, but... well, no buts. I'm just confused.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-16T09:08:17.355Z · LW(p) · GW(p)

Edit: I misunderstood what Jordan was trying to say - the previous version of this comment is irrelevant to the present discussion and so I've deleted it.

Replies from: Jordan, khafra
comment by Jordan · 2010-08-16T13:47:32.017Z · LW(p) · GW(p)

Deciding that the truth unconditionally deserves top priority seems to me to be an overly convenient, easy way out of confronting the challenges demanded by instrumental rationality.

No one is claiming that honesty deserves top priority. I would lie to save someone's life, or to make a few million dollars, etc. In the context of SIAI though, or any organization, being manipulative can severely discredit you.

I believe that when one takes into account unintended consequences, when Eliezer makes his most incredible claims he lowers overall levels of epistemic rationality rather than raising overall levels of epistemic rationality.

If he were to go back on his incredible claims, or even only make more credible claims in the future, how would he reconcile the two when confronted? If someone new to Eliezer read his tame claims, then went back and read his older, more extreme claims, what would they think? To many people this would reinforce the idea that SIAI is a cult, and that they are refining their image to be more attractive.

All of that said, I do understand where you're coming from intuitively, and I'm not convinced that scaling back some of the SIAI claims would ever have a negative effect. Certainly, though, a public policy conversation about it would cast a pretty manipulative shade over SIAI. Hell, even this conversation could cast a nasty shade to some onlookers (to many people trying to judge SIAI, the two of us might be a sufficiently close proxy, even though we have no direct connections).

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-16T14:44:08.853Z · LW(p) · GW(p)

Okay, I misunderstood where you were coming from earlier, I thought you were making a general statement about the importance of stating one's beliefs. Sorry about that.

In response to your present comments, I would say that though the phenomenon that you have in mind may be a PR issue, I think it would be less of a PR issue than what's going on right now.

One thing that I would say is that I think that Eliezer would come across as much more credible simply by accompanying his weird sounding statements with disclaimers of the type "I know that what I'm saying probably sounds pretty 'out there' and understand if you don't believe me, but I've thought about this hard, and I think..." See my remark here.

Replies from: Jordan
comment by Jordan · 2010-08-16T17:57:54.960Z · LW(p) · GW(p)

I mostly agree, although I'm still mulling it and think the issue is more complicated than it appears. One nitpick:

"I know that what I'm saying probably sounds pretty 'out there' and understand if you don't believe me, but I've thought about this hard, and I think..."

Personally, these kinds of qualifiers rarely do anything to allay my doubts, and can easily increase them. I prefer to see incredulity. For instance, when a scientist has an amazing result, rather than seeing that they fully believe it while recognizing it's difficult for me to believe, I'd rather see them doubtful of their own conclusion but standing by it nonetheless because of the strength of the evidence.

"I know it's hard to believe, but it's likely an AI will kill us all in the future."

could become

"It's hard for me to come to terms with, but there doesn't seem to be any natural safeguards preventing an AI from doing serious damage."

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-16T18:43:01.263Z · LW(p) · GW(p)

Personally, these kinds of qualifiers rarely do anything to allay my doubts, and can easily increase them. I prefer to see incredulity. For instance, when a scientist has an amazing result, rather than seeing that they fully believe it while recognizing it's difficult for me to believe, I'd rather see them doubtful of their own conclusion but standing by it nonetheless because of the strength of the evidence.

Sure, I totally agree with this - I prefer your formulation to my own. My point was just that there ought to be some disclaimer - the one that I suggested is a weak example.

Edit: Well, okay, actually I prefer:

"It took me a long time to come to terms with, but there don't seem to be any natural safeguards preventing an AI from doing serious damage."

If one has actually become convinced of a position, it sounds disingenuous to say that it's hard for one to come to terms with at present, but any apparently absurd position should at some point have been hard to come to terms with.

Adding such a qualifier is a good caution against appearing to be placing oneself above the listener. It carries the message "I know how you must be feeling about these things, I've been there too."

comment by khafra · 2010-08-16T15:06:13.223Z · LW(p) · GW(p)

Mblume posted one critical analysis of honesty, quoting Steven0461:

Promoting less than maximally accurate beliefs is an act of sabotage. Don’t do it to anyone unless you’d also slash their tires, because they’re Nazis or whatever.

set against Eliezer's more stringent, TDT/mutually source code aware strategy of speaking the truth, even when the fate of the world is at stake.

Mblume merely presented the question without a recommended solution, but Alicorn came closest to your position in the comments.

comment by michaelkeenan · 2010-08-15T09:30:50.689Z · LW(p) · GW(p)

During graduate school I've met many smart people who I wish would take existential risk more seriously. Most such people who have heard of Eliezer do not find his claims credible. My understanding is that the reason for this is that Eliezer has made some claims which they perceive to be falling under the above rubric, and the strength of their negative reaction to these has tarnished their mental image of all of Eliezer's claims.

Can you tell us more about how you've seen people react to Yudkowsky? That these negative reactions are significant is crucial to your proposal, but I have rarely seen negative reactions to Yudkowsky (and never in person) so my first availability-heuristic-naive reaction is to think it isn't a problem. But I realize my experience may be atypical and there could be an abundance of avoidable Yudkowsky-hatred where I'm not looking, so would like to know more about that.

Did that objectionable Yudkowsky-meteorite comment get widely disseminated? YouTube says the video has only 500 views, and I imagine most of those are from Yudkowsky-sympathizing Less Wrong readers.

Replies from: XiXiDu, wedrifid, multifoliaterose
comment by XiXiDu · 2010-08-15T13:32:06.872Z · LW(p) · GW(p)

Negative reactions to Yudkowsky from various people (academics concerned with x-risk), just within the past few weeks:

I also have an extreme distaste for Eliezer Yudkowsky, and so I have a hard time forcing myself to cooperate with any organization that he is included in, but that is a personal matter.

You know, maybe I'm not all that interested in any sort of relationship with SIAI after all if this, and Yudkowsky, are the best you have to offer.

...

There are certainly many reasons to doubt the belief system of a cult based around the haphazard musings of a high school dropout, who has never written a single computer program but professes to be an expert on AI. As you point out none of the real AI experts are crying chicken little, and only a handful of AI researchers, cognitive scientists or philosophers take the FAI idea seriously.

...

Wow, that's an incredibly arrogant put-down by Eliezer..SIAI won't win many friends if he puts things like that...

...

...he seems to have lost his mind and written out of strong feelings. I disagree with him on most of these matters.

...

Questions of priority - and the relative intensity of suffering between members of different species - need to be distinguished from the question of whether other sentient beings have moral status at all. I guess that was what shocked me about Eliezer's bald assertion that frogs have no moral status. After all, humans may be less sentient than frogs compared to our posthuman successors. So it's unsettling to think that posthumans might give simple-minded humans the same level of moral consideration that Elizeer accords frogs.

I was told that the quotes above state some ad hominem falsehoods regarding Eliezer. I think it is appropriate to edit the message to show that indeed some person might not have been honest, or clueful. Otherwise I'll unnecessarily end up perpetuating possible ad hominem attacks.

Replies from: Eliezer_Yudkowsky, NancyLebovitz, Vladimir_Nesov, Jonathan_Graehl
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-18T15:06:41.559Z · LW(p) · GW(p)

who has never written a single computer program

utterly false, wrote my first one at age 5 or 6, in BASIC on a ZX-81 with 4K of RAM

The fact that a lot of these reactions are based on false info is worth noting. It doesn't defeat any arguments directly, but it does show that the naive model - in which everything happens because of the direct perception of actions I directly control - is false.

Replies from: timtyler, XiXiDu
comment by timtyler · 2010-08-18T15:13:06.937Z · LW(p) · GW(p)

That sounds like a pretty rare device! Most ZX81 models had either 1K or 16K of RAM. 32 KB and 64 KB expansion packs were eventually released too.

comment by XiXiDu · 2010-08-18T15:12:38.704Z · LW(p) · GW(p)

Sent you a PM on who said that.

comment by NancyLebovitz · 2010-08-15T22:27:03.283Z · LW(p) · GW(p)

Is it likely that someone who's doing interesting work that's publicly available wouldn't attract some hostility?

comment by Vladimir_Nesov · 2010-08-15T13:43:23.937Z · LW(p) · GW(p)

That N negative reactions about issue S exist only means that issue S is sufficiently popular.

Replies from: CarlShulman
comment by CarlShulman · 2010-08-15T13:53:58.294Z · LW(p) · GW(p)

Not if the polling is of folk in a position to have had contact with S, or is representative.

Replies from: XiXiDu, Vladimir_Nesov
comment by XiXiDu · 2010-08-15T13:55:44.233Z · LW(p) · GW(p)

I don't like to, but if necessary I can provide the identity of the people who stated the above. They all directly work to reduce x-risks. I won't do so in public, however.

Replies from: Vladimir_Nesov, timtyler
comment by Vladimir_Nesov · 2010-08-15T14:05:02.182Z · LW(p) · GW(p)

Identity of these people is not the issue. The percentage of people in given category that have negative reactions for given reason, negative reactions for other reason, and positive reactions would be useful, but not a bunch of filtered (in unknown way) soldier-arguments.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-15T14:12:21.785Z · LW(p) · GW(p)

I know. I however just wanted to highlight that there are negative reactions, including some not-so-negative critique. If you look further, you'll probably find more. I haven't saved all I saw over the years; I just wanted to show that it's not like nobody has a problem with EY. And on all occasions I actually defended him, by the way.

The context is also difficult to provide, as some of it is from private e-mails. The first one is from here, though, and after thinking about it I can also provide the name, since he was telling this to Michael Anissimov anyway. It is from Sean Hays:

Sean A Hays PhD Post Doctoral Fellow, Center for Nanotechnology in Society at ASU Research Associate, ASU-NAF-Slate Magazine "Future Tense" Initiative Program Director, IEET Securing the Future Program

Replies from: Rain
comment by Rain · 2010-08-18T16:48:56.308Z · LW(p) · GW(p)

You have a 'nasty things people say about Eliezer' quotes file?

comment by timtyler · 2010-08-15T14:04:55.636Z · LW(p) · GW(p)

The last one was from David Pearce.

comment by Vladimir_Nesov · 2010-08-15T14:03:34.010Z · LW(p) · GW(p)

Sure, but XiXiDu's quotes bear no such framing.

comment by Jonathan_Graehl · 2010-08-16T22:10:25.560Z · LW(p) · GW(p)

I guess that was what shocked me about Eliezer's bald assertion that frogs have no moral status.

This seems a rather minor objection.

Replies from: Emile
comment by Emile · 2010-08-18T15:26:20.080Z · LW(p) · GW(p)

But frogs are CUTE!

And existential risks are boring, and only interest Sci-Fi nerds.

comment by wedrifid · 2010-08-15T10:50:18.729Z · LW(p) · GW(p)

But I realize my experience may be atypical and there could be an abundance of avoidable Yudkowsky-hatred where I'm not looking, so would like to know more about that.

Yudkowsky-hatred isn't the risk, Yudkowsky-mild-contempt is. People engage with things they hate; sometimes it brings respect and attention to both parties (by polarizing a crowd that would otherwise be indifferent). But you never want to be exposed to mild contempt.

I can think of some examples of conversations about Eliezer that would fit the category but it is hard to translate them to text. The important part of the reaction was non-verbal. Cryonics was one topic and the problem there wasn't that it was uncredible but that it was uncool. Another topic is the old "thinks he can know something about Friendly AIs when he hasn't even made an AI yet" theme. Again, I've seen that reaction evident through mannerisms that in no way translate to text. You can convey that people aren't socially relevant without anything so crude as saying stuff.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-08-15T11:19:33.790Z · LW(p) · GW(p)

Cryonics was one topic and the problem there wasn't that it was uncredible but that it was uncool.

[insert the obvious bad pun here]

Replies from: wedrifid
comment by wedrifid · 2010-08-15T11:33:39.439Z · LW(p) · GW(p)

I know, I couldn't think of a worthy witticism to lampshade it, so I let it slide. :P

comment by multifoliaterose · 2010-08-15T10:53:19.249Z · LW(p) · GW(p)

Can you tell us more about how you've seen people react to Yudkowsky? That these negative reactions are significant is crucial to your proposal, but I have rarely seen negative reactions to Yudkowsky (and never in person) so my first availability-heuristic-naive reaction is to think it isn't a problem. But I realize my experience may be atypical and there could be an abundance of avoidable Yudkowsky-hatred where I'm not looking, so would like to know more about that.

I haven't seen examples of Yudkowsky-hatred. But I have regularly seen people ridicule him. Recalling Hanson's view that a lot of human behavior is really signaling and vying for status, I interpret this ridicule as functioning to lower Eliezer's status, to compensate for what people perceive as inappropriate status grubbing on his part.

Most of the smart people who I know (including myself) perceive him as exhibiting a high degree of overconfidence in the validity of his views about the world.

This leads some of them to conceptualize him as a laughingstock, as somebody who's totally oblivious, and to feel that the idea that we should be thinking about artificial intelligence is equally worthy of ridicule. I personally am quite uncomfortable with these attitudes, agreeing with Holden Karnofsky's comment:

"I believe that there are enormous risks and upsides associated with artificial intelligence. Managing these deserves serious discussion, and it’s a shame that many laugh off such discussion."

I'm somewhat surprised that you appear not to have noticed this sort of thing independently. Maybe we hang out in rather different crowds.

Did that objectionable Yudkowsky-meteorite comment get widely disseminated? YouTube says the video has only 500 views, and I imagine most of those are from Yudkowsky-sympathizing Less Wrong readers.

Yes, I think that you're right. I just picked it out as a very concrete example of a statement that could provoke a substantial negative reaction. There are other qualitatively similar (but more mild) things that Eliezer has said that have been more widely disseminated.

Replies from: bentarm, michaelkeenan, ciphergoth, timtyler
comment by bentarm · 2010-08-15T11:45:24.116Z · LW(p) · GW(p)

I haven't seen examples of Yudkowsky-hatred. But I have regularly seen people ridicule him

Ditto.

I know of a lot of very smart people (ok, less than 10, but still, more than 1) who essentially read Eliezer's AI writings as a form of entertainment, and don't take them even slightly seriously. This is partly because of the Absurdity Heuristic, but I think it's also because of Eliezer's writing style, and statements like the one in the initial post.

I personally fall somewhere between these people and, say, someone who has spent a summer at the SIAI on the 'taking Eliezer seriously' scale - I think he (and the others) probably have a point, and I at least know that they intend to be taken seriously, but I've never gotten round to doing anything about it.

Replies from: CarlShulman
comment by CarlShulman · 2010-08-15T12:02:39.748Z · LW(p) · GW(p)

who essentially read Eliezer's AI writings as a form of entertainment, and don't take them even slightly seriously.

Why do they find them entertaining?

Replies from: bentarm, XiXiDu
comment by bentarm · 2010-08-15T12:44:34.631Z · LW(p) · GW(p)

As XiXiDu says - pretty much the same reason they find Isaac Asimov entertaining.

comment by XiXiDu · 2010-08-15T12:38:13.463Z · LW(p) · GW(p)

I said the same before. It's mainly good science fiction. I'm trying to find out if there's more to it though.

Just saying this as evidence that there is a lot of doubt even within the LW community.

comment by michaelkeenan · 2010-08-15T18:23:24.573Z · LW(p) · GW(p)

I'm somewhat surprised that you appear not to have noticed this sort of thing independently. Maybe we hang out in rather different crowds.

Oh, definitely. I have no real-life friends who are interested enough in these topics to know who Yudkowsky is (except, possibly, for what little they hear from me, and I try to keep the proselytizing to acceptable levels). So it's just me and the internet.

I haven't seen examples of Yudkowsky-hatred. But I have regularly seen people ridicule him. Recalling Hanson's view that a lot of human behavior is really signaling and vying for status, I interpret this ridicule as functioning to lower Eliezer's status, to compensate for what people perceive as inappropriate status grubbing on his part.

I have seen some ridicule of Yudkowsky (on the internet) but my impression had been that it wasn't a reaction to his tone, but rather that people were using the absurdity heuristic (cryonics and AGI are crazy talk) or reacting to surface-level status markers (Yudkowsky doesn't have a PhD). That is to say, it didn't seem the kind of ridicule that was avoidable by managing one's tone. I don't usually read ridicule in detail so it makes sense I'd be mistaken about that.

comment by Paul Crowley (ciphergoth) · 2010-08-16T18:27:55.430Z · LW(p) · GW(p)

I just picked it out as a very concrete example of a statement that could provoke a substantial negative reaction.

If it hasn't happened yet, that's at least some evidence it won't happen. Do you have reason to imagine a scenario which makes things very much worse than they already are based on such an effect, which means we must take care to tiptoe around these possibilities without allowing even one to happen? Because if not, we should probably worry about the things that already go wrong more than the things that might go wrong but haven't yet.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-16T18:55:02.727Z · LW(p) · GW(p)

I'm confused - I feel like I already addressed most of your remarks in the comment that you're responding to?

comment by timtyler · 2010-08-15T11:25:51.780Z · LW(p) · GW(p)

Recalling Hanson's view that a lot of human behavior is really signaling and vying for status

Existential risk reduction too! Charities are mostly used for signalling purposes - and to display affiliations and interests. Those caught up in causes use them for social networking with like-minded individuals - to signal how much they care, to signal how much spare time and energy they have - and so on. The actual cause is usually not irrelevant - but it is not particularly central either. It doesn't make much sense to expect individuals to be actually attempting to SAVE THE WORLD! This is much more likely to be a signalling phenomenon, making use of a superstimulus for viral purposes.

comment by [deleted] · 2010-08-19T02:35:43.643Z · LW(p) · GW(p)

It sounds to me like half of the perceived public image problem comes from apparently blurred lines between the SIAI and LessWrong, and between the SIAI and Eliezer himself. These could be real problems - I generally have difficulty explaining one of the three without mentioning the other two - but I'm not sure how significant they are.

The ideal situation would be that people would evaluate SIAI based on its publications, the justification of the research areas, and whether the current and proposed projects satisfy those goals best, are reasonably costed, and are making progress.

Whoever actually holds these as the points to be evaluated will find the list of achievements. Individual projects all have detailed proposals and a budget breakdown, since donors can choose to donate directly to one research project or another.

Finally, a large number of those projects are academic papers. If you dig a bit, you'll find that many of these papers are submitted to academic and industry conferences. Hosting the Singularity Summit doesn't hurt either.

It doesn't make sense to downplay a researcher's strange viewpoints if those viewpoints seem valid. Eliezer believes his viewpoint to be valid. LessWrong, a project of his, has a lot of people who agree with his ideas. There are also people who disagree with some of his ideas, but the point is that it shouldn't matter. LessWrong is a project of SIAI, not the organization itself. Support for his ideas on this website should have little to do with SIAI's support of his ideas.

Your points seem to be that claims made by Eliezer and upheld by the SIAI don't appear credible due to insufficient argument, and due to one person's personality. You can argue all you want about how he is viewed. You can debate the published papers' worth. But the two shouldn't be equated. This despite the fact that he's written half of the publications.

Here are the questions (that tie to your post) which I think are worth discussing on public relations, if not the contents of the publications:

  • Do people equate "The views of Eliezer Yudkowsky" with "The views of SIAI"? Do people view the research program or organization as "his" project?
  • Which people, and to what extent?
  • Is this good or bad, and how important is it?

The optimal answer to those questions is the one that leads the most AI researchers to evaluate the most publications with serious scrutiny and consideration.

I'll repeat that other people have published papers with the SIAI, that their proposals are spelled out, that some papers are presented at academic and industry conferences, and that the SIAI's Singularity Summit hosts speakers who do not agree with all of Eliezer's opinions, who nonetheless associate with the organization by attendance.

Replies from: None, nonhuman
comment by [deleted] · 2010-08-19T02:46:19.497Z · LW(p) · GW(p)

To top it off, the SIAI is responsible for getting James Randi's seal of approval on the Singularity being probable. That's not poisoning the meme, not one bit.

comment by nonhuman · 2010-08-21T03:37:13.968Z · LW(p) · GW(p)

I feel it's worth pointing out that just because something should be, doesn't mean it is. You state:

Your points seem to be that claims made by Eliezer and upheld by the SIAI don't appear credible due to insufficient argument, and due to one person's personality. You can argue all you want about how he is viewed. You can debate the published papers' worth. But the two shouldn't be equated.

I agree with the sentiment, but how practical is it? Just because it would be incorrect to equate Eliezer and the SIAI doesn't mean that people won't do it. Perhaps it would be reasonable to say that the people who fail to make the distinction are also the people on whom it's not worth expending the effort trying to explicate the situation, but I suspect that the majority of people are going to have a hard time not making that equation, if they even try at all.

The point of this article, I would presume to say, is that public relations actually does serve a valid and useful purpose. It is not a wasted effort to ensure that the ideas that one considers true, or at least worthwhile, are presented in the sort of light that encourages people to take them seriously. This is something that I think many people of a more intellectual bent often fail to consider: though some of us might actually invest time and effort into determining for ourselves whether an idea is good or not, I would say the majority do not, and instead rely on trusted sources to guide them (with often disastrous results).

Again, it may just be that we don't care about those people (and it's certainly tempting to go that way), but there may be times when quantity of supporters, in addition to quality, could be useful.

Replies from: None
comment by [deleted] · 2010-08-21T18:32:48.933Z · LW(p) · GW(p)

We don't disagree on any point that I can see. I was contrasting an ideal way of looking at things (part of what you quoted) from how people might actually see things (my three bullet-point questions).

As much as I enjoy Eliezer's thoughts and respect his work, I'm also of the opinion that one of the tasks the SIAI must work on (and almost certainly is working on) is keeping his research going while making the distinction between the two entities more obvious. But to whom? The research community should be the first and primary target.

Coming back from the Summit, I feel that they're taking decent measures toward this. The most important thing to do is for the other SIAI names to be known. Michael Vassar's name is the easiest to get people to hold, because of his title, and he was acting as the SIAI's face more than Eliezer was. At this point, a dispute would make the SIAI look unstable - they need positive promotion of leadership and idea diversity, more public awareness of their interactions with academia, and that's about it.

Housing a clearly promoted second research program would solve this problem. If only there was enough money, and a second goal which didn't obviously conflict with the first, and the program still fit under the mission statement. I don't know if that is possible. Money aside, I think that it is possible. Decision theoretic research with respect to FAI is just one area of FAI research. Utterly essential, but probably not all there is to do.

comment by Vladimir_Nesov · 2010-08-15T10:07:24.366Z · LW(p) · GW(p)

I don't find persuasive your arguments that the following policy suggestion has high impact (or indeed is something to worry about, in comparison with other factors):

"requiring [SIAI] staff to exhibit a high degree of vigilance about the possibility of poisoning the existential risk meme by making claims that people find uncredible"

(Note that both times I qualified the suggestion to contact SIAI (instead of starting a war) with "if you have a reasonable complaint/usable suggestion for improvement".)

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-15T10:12:17.082Z · LW(p) · GW(p)

Understood. Here too, we may have legitimate grounds for difference of opinion rooted in the fact that our impressions have emerged from thousands of data points which we don't have conscious access to.

(Note that both times I qualified the suggestion to contact SIAI with "if you are serious and have a reasonable complaint".)

Do you feel that I've misrepresented you? I'd be happy to qualify my reference to you however you'd like if you deem my doing so suitable.

comment by timtyler · 2010-08-15T07:35:45.802Z · LW(p) · GW(p)

The one "uncredible" claim mentioned - about Eliezer being "hit by a meteorite" - sounds as though it is the kind of thing he might plausibly think. Not too much of a big deal, IMO.

As with many charities, it is easy to think the SIAI might be having a negative effect - simply because it occupies the niche of another organisation that could be doing a better job - but what to do? Things could be worse as well - probably much worse.

Replies from: multifoliaterose, multifoliaterose
comment by multifoliaterose · 2010-08-15T08:06:45.071Z · LW(p) · GW(p)

I suggested what to do about this problem in my post: withhold funding from SIAI, and make it clear to them why you're withholding funding from them, and promise to fund them if the issue is satisfactorily resolved to incentivize them to improve.

Replies from: timtyler, CarlShulman
comment by timtyler · 2010-08-15T11:36:55.408Z · LW(p) · GW(p)

I suggested what to do about this problem in my post: withhold funding from SIAI.

Right - but that's only advice for those who are already donating. Others would presumably seek reform or replacement. The decision there seems non-trivial.

comment by CarlShulman · 2010-08-15T10:25:10.808Z · LW(p) · GW(p)

Will you do this?

Replies from: multifoliaterose, whpearson
comment by multifoliaterose · 2010-08-15T10:35:36.555Z · LW(p) · GW(p)

I'm definitely interested in funding an existential risk organization. SIAI would have to be a lot more transparent than it is right now for me to be interested in funding SIAI. For me personally, it wouldn't be enough for SIAI to just take measures to avoid poisoning the meme; I would need to see a lot more evidence that SIAI is systematically working to maximize its impact on existential risk reduction.

As things stand I prefer to hold out for a better organization. But if SIAI exhibited transparency and accountability at levels similar to those of GiveWell (welcoming and publicly responding to criticism regularly, regularly posting detailed plans of action, seeking out feedback from subject matter specialists and making this public when possible, etc.) I would definitely fund SIAI and advocate that others do so as well.

Replies from: Wei_Dai, timtyler
comment by Wei Dai (Wei_Dai) · 2010-08-15T10:48:30.723Z · LW(p) · GW(p)

"transparency"? I thought the point of your post was that SIAI members should refrain from making some of their beliefs easily available to the public?

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-15T11:00:18.134Z · LW(p) · GW(p)

I see, maybe I should have been more clear. The point of my post is that SIAI members should not express controversial views without substantiating them with abundant evidence. If SIAI provided compelling evidence that Eliezer's work has higher expected value to humanity than what virtually everybody else is doing, then I would think Eliezer's comment appropriate.

As things stand SIAI has not provided such evidence. Eliezer himself may have such evidence, but if so he's either unwilling or unable to share it.

Replies from: Wei_Dai, CarlShulman, rhollerith_dot_com, timtyler
comment by Wei Dai (Wei_Dai) · 2010-08-17T00:57:54.138Z · LW(p) · GW(p)

There are a lot of second and higher order effects in PR. You can always shape your public statements for one audience and end up driving away (or failing to convince) another one that's more important. If Eliezer had shied away from stating some of the more "uncredible" ideas because there wasn't enough evidence to convince a typical smart person, it would surely prompt questions of "what do you really think about this?" or fail to attract people who are currently interested in SIAI because of those ideas.

If SIAI provided compelling evidence that Eliezer's work has higher expected value to humanity than what virtually everybody else is doing, then I would think Eliezer's comment appropriate.

Suppose Eliezer hadn't made that claim, and somebody asks him, "do you think the work SIAI is doing has higher expected value to humanity than what everybody else is doing?", which somebody is bound to, given that Eliezer is asking for donations from rationalists. What is he supposed to say? "I can't give you the answer because I don't have enough evidence to convince a typical smart person?"

I think you make a good point that it's important to think about PR, but I'm not at all convinced that the specific pieces of advice you give are the right ones.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-17T05:27:28.170Z · LW(p) · GW(p)

Thanks for your feedback. Several remarks:

You can always shape your public statements for one audience and end up driving away (or failing to convince) another one that's more important.

This is of course true. I myself am fairly certain that SIAI's public statements are driving away the people who it's most important to interest in existential risk.

Suppose Eliezer hadn't made that claim, and somebody asks him, "do you think the work SIAI is doing has higher expected value to humanity than what everybody else is doing?", which somebody is bound to, given that Eliezer is asking for donations from rationalists. What is he supposed to say? "I can't give you the answer because I don't have enough evidence to convince a typical smart person?"

•It's standard public relations practice to reveal certain information only if asked.

•An organization that has the strongest case for room for more funding need not be an organization that's doing something of higher expected value to humanity than what everybody else is doing. In particular, I simultaneously believe that there are politicians who have higher expected value to humanity than all existential risk researchers alive and that the cause of existential risk has the greatest room for more funding.

•One need not be confident in one's belief that funding one's organization has the highest expected value to humanity in order to believe that funding one's organization has the highest expected value to humanity. A major issue that I have with Eliezer's rhetoric is that he projects what I perceive to be an unreasonably high degree of confidence in his beliefs.

•Another major issue with Eliezer's rhetoric that I have is that even putting issues of PR aside, I personally believe that funding SIAI does not have anywhere near the highest expected value to humanity out of all possible uses of money. So from my point of view, I see no upside to Eliezer making extreme claims of the sort that he has - it looks to me as though Eliezer is making false claims and damaging public relations for existential risk as a result.

I will be detailing my reasons for thinking that SIAI's research does not have high expected value in a future post.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-17T15:09:03.034Z · LW(p) · GW(p)

One need not be confident in one's belief [...]

The level of certainty is not up for grabs. You are as confident as you happen to be; this can't be changed. You can change the appearance, but not your actual level of confidence. And changing the apparent level of confidence is equivalent to lying.

Replies from: Emile
comment by Emile · 2010-08-17T15:26:37.251Z · LW(p) · GW(p)

But it isn't perceived that way by the general public - it seems to me that the usual perception of "confidence" has more to do with status than with probability estimates.

The non-technical people I work with often say that I use "maybe" and "probably" too much (I'm a programmer - "it'll probably work" is a good description of how often it does work in practice) - as if having confidence in one's statements was a sign of moral fibre, and not a sign of miscalibration.

Actually, making statements with high confidence is a positive trait, but most people address this by increasing the confidence they express, not by increasing their knowledge until they can honestly make high-confidence statements. And our culture doesn't correct for that, because errors of calibration are not immediately obvious (as they would be if, say, we had a widespread habit of betting on various things).
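
(As a rough illustration of that last point - a minimal sketch with made-up numbers, not anything from the comment itself: a speaker who claims 90% confidence but is right only 70% of the time loses money steadily at the odds their own stated confidence implies.)

```python
import random

random.seed(0)

def average_profit(stated_confidence, true_accuracy, n_claims=10000, stake=1.0):
    """Bet on each claim at the odds implied by the stated confidence.
    A claim made at 90% confidence risks the stake to win stake/9; the claim
    is actually right with probability true_accuracy."""
    total = 0.0
    for _ in range(n_claims):
        if random.random() < true_accuracy:
            total += stake * (1 - stated_confidence) / stated_confidence
        else:
            total -= stake
    return total / n_claims

print(average_profit(0.9, 0.9))  # well-calibrated: roughly breaks even
print(average_profit(0.9, 0.7))  # overconfident: loses about 0.22 per claim
```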

Replies from: Vladimir_Nesov, Perplexed
comment by Vladimir_Nesov · 2010-08-17T15:51:12.709Z · LW(p) · GW(p)

That a lie is likely to be misinterpreted or not noticed doesn't make it not a lie, and conversely.

Replies from: Emile
comment by Emile · 2010-08-17T16:08:27.754Z · LW(p) · GW(p)

Oh, I fully agree with your point; it's a pity that high confidence on unusual topics is interpreted as arrogance.

comment by Perplexed · 2010-08-17T16:11:05.224Z · LW(p) · GW(p)

Try this: I prefer my leaders to be confident. I prefer my subordinates to be truthful.

comment by CarlShulman · 2010-08-15T12:41:20.057Z · LW(p) · GW(p)

higher expected value to humanity than what virtually everybody else is doing,

For what definitions of "value to humanity" and "virtually everybody else"?

If "value to humanity" is assessed as in Bostrom's Astronomical Waste paper, that hugely favors effects on existential risk vs alleviating current suffering or increasing present welfare (as such, those also have existential risk effects). Most people don't agree with that view, so asserting that as a privileged frame can be seen as a hostile move (attacking the value systems of others in favor of a value system according to which one's area of focus is especially important). Think of the anger directed at vegetarians, or those who guilt-trip others about not saving African lives. And of course, it's easier to do well on a metric that others are mostly not focused on optimizing.

Dispute about what best reduces existential risk, and annoyance at overly confident statements there, are further issues, but I think that asserting uncommon moral principles (which happen to rank one's activities as much more valuable than most people would rank them) is a big factor on its own.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-15T16:20:52.213Z · LW(p) · GW(p)

In case my previous comment was ambiguous, I should say that I agree with you completely on this point. I've been wanting to make a top level post about this general topic for a while. Not sure when I'll get a chance to do so.

comment by RHollerith (rhollerith_dot_com) · 2010-08-15T12:16:41.178Z · LW(p) · GW(p)

Eliezer himself may have such evidence [that Eliezer's work has higher expected value to humanity than what virtually everybody else is doing], but if so he's either unwilling or unable to share it.

Now that is unfair.

Since 1997, Eliezer has published (mostly on mailing lists and blogs, but also in monographs) an enormous amount (at least ten novels' worth, unless I am very mistaken) of writing supporting exactly that point. Of course most of this material is technical, but unlike the vast majority of technical prose, it is accessible to non-specialists and non-initiates with enough intelligence, a solid undergraduate education as a "scientific generalist", and a lot of free time on their hands, because in his writings Eliezer is constantly "watching out for" the reader who does not yet know what he knows. (In other words, it is uncommonly good technical exposition.)

Replies from: multifoliaterose, XiXiDu
comment by multifoliaterose · 2010-08-15T16:29:02.315Z · LW(p) · GW(p)

So my impression has been that the situation is that

(i) Eliezer's writings contain a great deal of insightful material.

(ii) These writings do not substantiate the idea that [that Eliezer's work has higher expected value to humanity than what virtually everybody else is doing].

I say this having read perhaps around a thousand pages of what Eliezer has written. I consider the amount of reading that I've done to be a good "probabilistic proof" that the points (i) and (ii) apply to the portion of his writings that I haven't read.

That being said, if there are any particular documents that you would point me to which you feel do provide satisfactory evidence for the idea [that Eliezer's work has higher expected value to humanity than what virtually everybody else is doing], I would be happy to examine them.

I'm unwilling to read the whole of his opus given how much of it I've already read without being convinced. I feel that the time that I put into reducing existential risk can be used to better effect in other ways.

Replies from: JamesAndrix
comment by JamesAndrix · 2010-08-15T23:48:23.163Z · LW(p) · GW(p)

It would help to know what steps in the probabilistic proof don't have high probability for you.

For example, you might think that the singularity has a good probability of being relatively smooth and friendly in some sense, even without FAI. Or you might think that other existential risks may still be a bigger threat, or you may think that Eliezer isn't putting a dent in the FAI problem.

Or some combination of these and others.

Replies from: multifoliaterose, Perplexed
comment by multifoliaterose · 2010-08-16T03:25:14.213Z · LW(p) · GW(p)

Yes, I agree with you. I plan on making my detailed thoughts on these points explicit. I expect to be able to do so within a month.

But for a short answer, I would say that the situation is mostly that I think that:

Eliezer isn't putting a dent in the FAI problem.

comment by Perplexed · 2010-08-16T04:21:55.354Z · LW(p) · GW(p)

This might be a convenient place to collect a variety of reasons why people are FOOM denialists. From my POV:

  1. I am skeptical of claims that safeguards against UFAI (unfriendly AI) will not work. In part because:
  2. I doubt that the "takeoff" will be "hard". Because:
  3. I am pretty sure the takeoff will require repeatedly doubling and quadrupling hardware, not just autorewriting software.
  4. And hence an effective safeguard would be to simply not give the machine its own credit card!
  5. And in any case, the Moore's law curve for electronics does not arise from delays in thinking up clever ideas; it arises from delays in building machines to incredibly high tolerances.
  6. Furthermore, even after the machine has more hardware, it doesn't yet have higher intelligence until it reads lots more encyclopedias and proves for itself many more theorems. These things take time.
  7. And finally, I have yet to see the argument that an FAI protects us from a future UFAI. That is, how does the SIAI help us?
  8. Oh, and I do think that the other existential risks, particularly war and economic collapse, put the UFAI risk pretty far down the priority list. Sure, those other risks may not be quite so existential, but if they don't kill us, they will at least prevent an early singularity.

Edit added two days later: Since writing this, I thought about it some more, shut up for a moment, and did the math. I still think that it is unlikely that the first takeoff will be a hard one; so hard that it gets out of control. But I now estimate something like a 10% chance that the first takeoff will be hard, and I estimate something like a 30% chance that at least one of the first couple dozen takeoffs will be hard. Multiply that by an estimated 10% chance that a hard takeoff will take place without adequate safeguards in place, and another 10% chance that a safeguardless hard takeoff will go rogue, and you get something like a 0.3% chance of a disaster of Forbin Project magnitude. Completely unacceptable.

Originally, I had discounted the chance that a simple software change could cause the takeoff; I assumed you would need to double and redouble the hardware capability. What I failed to notice was that a simple "tuning" change to the (soft) network connectivity parameters - changing the maximum number of inputs per "neuron" from 8 to 7, say - could have an (unexpected) effect on performance of several orders of magnitude, simply by suppressing wasteful thrashing or some such thing.
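
(For concreteness, the chained arithmetic behind the 0.3% figure in the edit above - just the author's stated guesses multiplied together, nothing more:)

```python
# The three estimates stated in the edit above (rough personal guesses, not data).
p_some_early_takeoff_is_hard = 0.30   # at least one of the first couple dozen takeoffs is hard
p_without_adequate_safeguards = 0.10  # a hard takeoff happens with no adequate safeguards
p_safeguardless_goes_rogue = 0.10     # such a takeoff actually goes rogue

p_disaster = (p_some_early_takeoff_is_hard
              * p_without_adequate_safeguards
              * p_safeguardless_goes_rogue)
print(f"{p_disaster:.3%}")  # 0.300%
```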

Replies from: CarlShulman, red75, Jonathan_Graehl, timtyler
comment by CarlShulman · 2010-08-18T05:20:01.892Z · LW(p) · GW(p)
  1. I am pretty sure the takeoff will require repeatedly doubling and quadrupling hardware, not just autorewriting software.

Do you think that progress in AI is limited primarily by hardware? If hardware is the limiting factor, then you should think AI arriving soon is relatively plausible. If software is the limiting factor (the majority view, and the reason most AI folk reject claims such as those of Moravec), such that we won't get AI until well beyond the minimum computational requirements, then either early AIs should be able to run fast or in numerous copies cheaply, or there will be a lot of room to reduce bloated hardware demands through software improvements.

Thinking that AI will take a long time (during which hardware will advance mightily towards physical limits) but also be sharply and stably hardware-limited when created is a hard view to defend.

Replies from: Perplexed
comment by Perplexed · 2010-08-19T01:10:56.542Z · LW(p) · GW(p)

I am imagining that it will work something like the human brain (but not by 'scan and emulate'). We need to create hardware modules comparable to neurons, we need to have some kind of geometric organization which permits individual hardware modules to establish physical connections to a handful of nearby modules, and we need a 'program' (corresponding to human embryonic development) which establishes a few starting connections, and finally we need a training period (like training a neural net, and comparable to what the human brain experiences from the first neural activity in the womb through graduate school) which adds many more physical connections. I'm not sure whether to call these connections hardware or software. Actually, they are a hybrid of both - like PLAs (yeah, I'm way out of date on technology).

So I'm imagining a lot of theoretical work needed to come up with a good 'neuron' design (probably several dozen different kinds of neurons), more theoretical work to come up with a good 'program' to correspond to the embryonic interconnect, and someone willing to pay for lots and lots of neurons.

So, yeah, I'm thinking that the program will be relatively simple (equivalent to a few million lines of code), but it will take us a long time to find it. Not the 500 million years that it took evolution to come up with that program - apparently 500 million years after it had already invented the neuron. But for human designers, at least a few decades to find and write the program. I hope this explanation helps to make my position seem less weird.

comment by red75 · 2010-08-16T08:07:52.932Z · LW(p) · GW(p)

4. And hence an effective safeguard would be to simply not give the machine its own credit card!

(Powerful) optimization processes can find ways of solving problems that exploit every possible shortcut, which makes those ways hard to predict in advance. Recently there was an example of that here: a genetic algorithm found an unexpected solution to a problem by exploiting the analog properties of a particular FPGA chip.

comment by Jonathan_Graehl · 2010-08-16T23:15:51.495Z · LW(p) · GW(p)

7-8 aren't hard-takeoff-denialist ideas; they're SIAI noncontribution arguments. Good summary, though.

comment by timtyler · 2010-08-16T05:52:47.162Z · LW(p) · GW(p)

Phew! First, my material on the topic:

http://alife.co.uk/essays/the_singularity_is_nonsense/

http://alife.co.uk/essays/the_intelligence_explosion_is_happening_now/

Then a few points - which I may add to later.

3 and 4: hardware, sure - that is improving too - just not as fast, sometimes. A machine may find a way to obtain a credit card - or it will get a human to buy whatever it needs - as happens in companies today.

6: how much time? Surely a better example would be: "perform experiments" - and experiments that can't be miniaturised and executed at high speeds - such as those done in the LHC.

7: AltaVista didn't protect us from Google - nor did Friendster protect against MySpace. However, so far Google has mostly successfully crushed its rivals.

8: no way, IMO - e.g. see Matt Ridley. That is probably good advice for all DOOMsters, actually.

Some of the most obvious safeguards are likely to be self-imposed ones:

http://alife.co.uk/essays/stopping_superintelligence/

...though a resilient infrastructure would help too. We see rogue agents (botnets) "eating" the internet today - and it is not very much fun!

Incidentally, a much better place for this kind of comment on this site would be:

http://lesswrong.com/lw/wf/hard_takeoff/

comment by XiXiDu · 2010-08-15T12:31:35.827Z · LW(p) · GW(p)

Can you be more specific than "it's somewhere beneath an enormous amount of material from 13 years' worth of writing by the very same person whose arguments are being scrutinized for evidence"?

This is not sufficient to scare people to the point of having nightmares, nor to ask them for most of their money.

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2010-08-15T13:16:59.485Z · LW(p) · GW(p)

Can you be more specific . . . ?

Do you want me to repeat the links people gave you 24 hours ago?

The person who was scared to the point of having nightmares was almost certainly on a weeks-long or months-long visit to the big house in California where people come to discuss extremely powerful technologies and the far future and to learn from experts on these subjects. That environment would tend to cause a person to take certain ideas more seriously than a person usually would.

Replies from: Jonathan_Graehl, XiXiDu
comment by Jonathan_Graehl · 2010-08-16T23:09:29.597Z · LW(p) · GW(p)

Also, are we really discrediting people because they were foolish enough to talk about their deranged sleep-thoughts? I'd sound pretty stupid too if I remembered and advertised every bit of nonsense I experienced while sleeping.

comment by XiXiDu · 2010-08-15T13:51:55.170Z · LW(p) · GW(p)

It was more than one person. Anyway, I haven't read all of the comments yet so I might have missed some specific links. If you are talking about links to articles written by EY himself where he argues about AI going FOOM, I commented on one of them.

Here is an example of the kind of transparency in the form of strict calculations, references and evidence I expect.

As I said, I'm not sure what other links you are talking about. But if you mean the kind of LW posts dealing with antipredictions, I'm not impressed. Predicting superhuman AI to be a possible outcome of AI research is not sufficient. How is that different from claiming the LHC will go FOOM? I'm sure someone like EY would be able to write a thousand posts around such a scenario, telling me that the high risk associated with the LHC going FOOM outweighs its low probability. There might be sound arguments to support this conclusion. But it is a conclusion, and a framework of arguments, based on an assumption that is itself of unknown credibility. So is it too much to ask for some transparent evidence to fortify this basic premise? Evidence that is not buried somewhere within hundreds of posts that are not directly concerned with the evidence in question, but rather argue based on the very assumption they are trying to justify?

Replies from: CarlShulman, FAWS, timtyler
comment by CarlShulman · 2010-08-15T14:48:55.906Z · LW(p) · GW(p)

Asteroids really are an easier problem: celestial mechanics in vacuum are pretty stable, we have the Moon providing a record of past cratering to calibrate on, etc. There's still uncertainty about the technology of asteroid deflection (e.g. its potential for military use, or to incite conflict), but overall it's perhaps the most tractable risk for analysis since the asteroids themselves don't depend on recent events (save for some smallish anthropic shadow effects).

An analysis for engineered pathogens is harder: we have a lot of uncertainty about the difficulty of engineering various diseases for maximum damage, and about how the technology for detection, treatment and prevention will keep pace. We can make generalizations based on existing diseases and their evolutionary dynamics (selection for lower virulence over time with person-to-person transmission, etc.), current public health measures, the rarity of the relevant motivations, and so on, but you're still left with many more places where you can't just plug in well-established numbers and crank forward.

You can still give probability estimates, and plug in well-understood past data where you can, but you can't get asteroid-level exactitude.

comment by FAWS · 2010-08-15T14:32:06.483Z · LW(p) · GW(p)

The difference is that we understand both asteroids and particle physics far better than we do intelligence, and there is precedent for both asteroid impacts and high energy particle collisions (natural ones at far higher energy than in the LHC), while there is none for an engineered human-level intelligence with access to its own source code.

So calculations of the kind you seem to be asking for just aren't possible at this point (and calculations with exactly that level of evidence won't be possible right up until it's too late), while refutations of the kind LHC panic gets aren't possible either. You should also note that Eliezer takes LHC panic more seriously than most non-innumerate people.

But if you want some calculation anyway: Let's assume there is a 1% chance of extinction by uFAI within the next 100 years. Let's also assume that spending $10 million per year (in 2010 dollars, adjusting for inflation) allows us to reduce that risk by 10%, just by the dangers of uFAI being in the public eye and people being somewhat more cautious, and taking the right sort of caution instead of worrying about Skynet or homicidal robots. So $1 billion saves about an expected 1 million lives, a cost of $1000 per life, which is about the level of the most efficient conventional charities. And that's with Robin's low-balling estimate (which was for a more specific case, not uFAI extinction in general, so even Robin would likely estimate a higher chance in the case considered) and assuming that FAI research won't succeed.
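
(A sketch of that arithmetic with the inputs made explicit. The comment doesn't state how many lives are at stake; the quoted "1 million expected lives / $1000 per life" corresponds to roughly a billion lives, and plugging in the ~6.8 billion people alive in 2010 would give a lower cost per life.)

```python
def cost_per_life_saved(p_extinction, relative_risk_reduction,
                        annual_spend, years, lives_at_stake):
    """Expected cost per life saved under the stated assumptions."""
    total_spend = annual_spend * years
    expected_lives_saved = p_extinction * relative_risk_reduction * lives_at_stake
    return total_spend / expected_lives_saved

# 1% extinction risk, 10% relative reduction, $10M/year over 100 years.
print(cost_per_life_saved(0.01, 0.10, 10e6, 100, 1e9))    # $1000 per life (the quoted figure)
print(cost_per_life_saved(0.01, 0.10, 10e6, 100, 6.8e9))  # about $147 per life
```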

Replies from: XiXiDu, ciphergoth
comment by XiXiDu · 2010-08-15T14:55:11.770Z · LW(p) · GW(p)

So calculations of the kind you seem to be asking for just aren't possible at this point ...

I'm asking for whatever calculations should lead people to donate most of their money to the SIAI or get nightmares from stories of distant FAIs. Surely there must be something to outweigh the lack of evidence, or on what basis has anyone decided to take these things seriously?

I really don't want to anger you, but the "let's assume X" attitude is what I have my problems with here. A 1% chance of extinction by uFAI? I just don't see this, sorry. I can't just pull this out of my hat and make myself believe it, either. I'm not saying this is wrong, but I ask why there isn't a detailed synopsis of this kind of estimation available? I think this is crucial.

Replies from: FAWS, Aleksei_Riikonen
comment by FAWS · 2010-08-15T15:29:15.652Z · LW(p) · GW(p)

So what's the alternative?

You became aware of a possible danger. You didn't think it up at random, so you can't use the heuristic that most complex hypotheses generated at random are wrong. There is no observational evidence, but the hypothesis doesn't predict any observational evidence yet, so lack of evidence is not evidence against it (in the way that, e.g., the lack of observations is evidence against the danger of vampires). The best arguments for and against are about equally good (at least no order-of-magnitude differences). There seems to be a way to do something against the danger, but only before it manifests, that is, before there can be any observational evidence either way. What do you do? Just assume that the danger is zero because that's the default? Even though there is no particular reason to assume that's a good heuristic in this particular case? (Or do you think there are good reasons in this case? You mentioned the thought that it might be a scam, but it's not like Eliezer invented the concept of hostile AIs.)

The Bayesian way to deal with it would be to just use your prior (+ whatever evidence the arguments encountered provide, but the result probably mostly depends on your priors in this case). So this is a case where it's OK to "just make numbers up". It's just that you should make them up yourself, or rather base them on what you actually believe (if you can't have experts you trust assess the issue and supply you with their priors). No one else can tell you what your priors are. The alternative to "just assuming" is "just assuming" zero, or one, or similar (or arbitrarily decide that everything that predicts observations that would be only 5% likely if it was false is true and everything without such observations is false, regardless of how many observations were actually made), purely based on context and how the questions are posed.

Replies from: XiXiDu, Jonathan_Graehl
comment by XiXiDu · 2010-08-15T16:08:11.090Z · LW(p) · GW(p)

This is the kind of summary of a decision procedure that I have been complaining is missing, or hidden within enormous amounts of content. I wish someone with enough skill would write a top-level post demanding that the SIAI create an introductory paper exemplifying how to reach the conclusions that (1) the risks are to be taken seriously and (2) you should donate to the SIAI to reduce the risks. There could either be a few papers for different people with different backgrounds or one with different levels of detail. It should feature detailed references to the knowledge necessary to understand the paper itself. Further, it should feature the formulas, variables and decision procedures you have to follow to estimate the risks posed by, and the incentive to alleviate, unfriendly AI. It should also include references to further information from people not associated with the SIAI.

This would allow for the transparency that is required by claims of this magnitude and calls for action, including donations.

I wonder why it took so long until you came along posting this comment.

Replies from: FAWS, NancyLebovitz, CarlShulman, timtyler
comment by FAWS · 2010-08-15T16:44:27.834Z · LW(p) · GW(p)

You didn't succeed in communicating your problem; otherwise someone else would have explained earlier. I had been reading your posts on the issue and didn't have even the tiniest hint of an idea that the piece you were missing was an explanation of Bayesian reasoning until just before writing that comment, and even then I was less optimistic about the comment doing anything for you than I had been for earlier comments. I'm still puzzled and unsure whether it actually was Bayesian reasoning or something else in the comment that apparently helped you. If it was, you should read http://yudkowsky.net/rational/bayes and some of the posts here tagged "bayesian".

comment by NancyLebovitz · 2010-08-15T22:23:13.823Z · LW(p) · GW(p)

I wonder why it took so long until you came along posting this comment.

Because thinking is work, and it's not always obvious what question needs to be answered.

More generally (and this is something I'm still working on grasping fully), what's obvious to you is not necessarily obvious to other people, even if you think you have enough in common with them that it's hard to believe that they could have missed it.

I wouldn't have said so even a week ago, but I'm now inclined to think that your short attention span is an asset to LW.

Just as Eliezer has said (can someone remember the link?) that science as conventionally set up is too leisurely (not enough thought put into coming up with good hypotheses), LW is set up on the assumption that people have a lot of time to put into the sequences and the ability to remember what's in them.

comment by CarlShulman · 2010-08-15T18:09:40.890Z · LW(p) · GW(p)

This isn't quite what you're talking about, but a relatively accessible intro doc:

http://singinst.org/riskintro/index.html

comment by timtyler · 2010-08-15T16:13:56.933Z · LW(p) · GW(p)

This seems like a summary of the idea of there being significant risk:

Anna Salamon at Singularity Summit 2009 - "Shaping the Intelligence Explosion"

comment by Jonathan_Graehl · 2010-08-16T23:23:28.689Z · LW(p) · GW(p)

Good comment.

However,

arbitrarily decide that everything that predicts observations that would be only 5% likely if it was false is true and everything without such observations is false, regardless of how many observations were actually made

This was hard to parse. I would have named "p-value" directly. My understanding is that a stated "p-value" will indeed depend on the number of observations, and that in practice meta-analyses pool the observations from many experiments. I agree that we should not use a hard p-value cutoff for publishing experimental results.

Replies from: FAWS
comment by FAWS · 2010-08-16T23:56:56.865Z · LW(p) · GW(p)

I should have said "a set of observations" and "sets of observations". I meant things like: if you and other groups test lots of slightly different bogus hypotheses, about 5% of them will be "confirmed" with statistically significant relations.
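
(A minimal simulation of that effect, using a crude z-test on fair-coin data - every hypothesis is bogus by construction, yet roughly the alpha fraction comes out "significant":)

```python
import random

random.seed(0)

def fraction_falsely_confirmed(n_hypotheses=2000, n_samples=100, z_crit=1.96):
    """Each 'hypothesis' claims a fair coin is biased; all coins are fair,
    so every 'significant' result is a false confirmation."""
    false_positives = 0
    for _ in range(n_hypotheses):
        heads = sum(random.random() < 0.5 for _ in range(n_samples))
        z = (heads - n_samples * 0.5) / (0.25 * n_samples) ** 0.5
        if abs(z) > z_crit:  # two-sided test at roughly the 5% level
            false_positives += 1
    return false_positives / n_hypotheses

print(fraction_falsely_confirmed())  # close to 0.05
```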

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2010-08-17T00:12:03.685Z · LW(p) · GW(p)

Got it, and agreed. This is one of the most pernicious forms of dishonesty by professional researchers (lying about how many hypotheses were generated), and is far more common than merely faking everything.

comment by Aleksei_Riikonen · 2010-08-15T15:35:29.068Z · LW(p) · GW(p)
1% chance of extinction by uFAI? I just don't see this, sorry. I can't pull this out of my hat to make me believe either. I'm not saying this is wrong but I ask why there isn't a detailed synopsis of this kind of estimations available? I think this is crucial.

Have you yet bothered to read e.g. this synopsis of SIAI's position:

http://singinst.org/riskintro/index.html

I'd also strongly recommend this from Bostrom:

http://www.nickbostrom.com/fut/evolution.html

(Then of course there are longer and more comprehensive texts, which I won't recommend because you would just continue to ignore them.)

Replies from: timtyler
comment by timtyler · 2010-08-15T19:59:53.494Z · LW(p) · GW(p)

The core of:

http://singinst.org/riskintro/

...that talks about risk appears to be:

"Many AIs will converge toward being optimizing systems, in the sense that, after self-modification, they will act to maximize some goal. For instance, AIs developed under evolutionary pressures would be selected for values that maximized reproductive fitness, and would prefer to allocate resources to reproduction rather than supporting humans. Such unsafe AIs might actively mimic safe benevolence until they became powerful, since being destroyed would prevent them from working toward their goals. Thus, a broad range of AI designs may initially appear safe, but if developed to the point of a Singularity could cause human extinction in the course of optimizing the Earth for their goals."

Personally, I think that presents a very weak case for there being risk. It argues that there could be risk if we built these machines wrong, and the bad machines became powerful somehow. That is true - but the reader is inclined to respond "so what". A dam can be dangerous if you build it wrong too. Such observations don't say very much about the actual risk.

comment by Paul Crowley (ciphergoth) · 2010-08-16T18:56:01.472Z · LW(p) · GW(p)

This calculation places no value on the future generations whose birth depends on averting existential risk. That's not how I see things.

comment by timtyler · 2010-08-15T14:23:40.667Z · LW(p) · GW(p)

That claims that "the lifetime risk of dying from an asteroid strike is about the same as the risk of dying in a commercial airplane crash".

It cites:

Impacts on the Earth by asteroids and comets: assessing the hazard:

http://www.nature.com/nature/journal/v367/n6458/abs/367033a0.html

I am very sceptical about that being true for those alive now:

We have been looking for things that might hit us for a long while now - and we can see much more clearly what the chances are for that period than by looking at the historical record. Also, that is apparently assuming no mitigation attempts - which also seems totally unrealistic.

Looking further:

http://users.tpg.com.au/users/tps-seti/spacegd7.html

...gives 700 deaths/year for aircraft - and 1,400 deaths/year for 2km impacts - based on the assumption that one quarter of the human population would perish in such an impact.
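
(A back-of-the-envelope check of that 1,400/year figure, under assumptions supplied here purely for illustration - a 2km impact roughly once per million years and a population of about 6 billion:)

```python
# Illustrative assumptions (not taken from the linked page).
impact_interval_years = 1_000_000  # rough order of magnitude for 2km impacts
population = 6e9
fraction_killed = 0.25             # "one quarter of the human population"

expected_deaths_per_year = population * fraction_killed / impact_interval_years
print(expected_deaths_per_year)  # 1500.0 - the same order as the quoted 1,400/year
```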

Replies from: XiXiDu
comment by XiXiDu · 2010-08-15T14:45:29.718Z · LW(p) · GW(p)

Yet, does the SIAI provide evidence on par with the paper I linked to?

Replies from: timtyler
comment by timtyler · 2010-08-15T14:49:47.271Z · LW(p) · GW(p)

What - about the chances of superintelligence causing THE END OF THE WORLD?!?

Of course not! How could they be expected to do that?

comment by timtyler · 2010-08-15T11:21:13.261Z · LW(p) · GW(p)

If there really was "abundant evidence" there probably wouldn't be much of a controversy.

comment by timtyler · 2010-08-15T14:16:29.050Z · LW(p) · GW(p)

With machine intelligence, you probably want to be on the winning side - if that is possible.

Until it is clearer who that is going to be, many will want to hedge.

comment by whpearson · 2010-08-15T13:05:49.288Z · LW(p) · GW(p)

I'm planning to fund FHI rather than SIAI, when I have a stable income (although my preference is for a different organisation that doesn't exist).

My position is roughly this.

  • The nature of intelligence (and its capability for FOOMing) is poorly understood

  • The correct actions to take depend upon the nature of intelligence.

As such I would prefer to fund an institute that questioned the nature of intelligence, rather than one that has made up its mind that a singularity is the way forward. And it is not just the name that makes me think that SIAI has settled upon this view.

And because the nature of intelligence is the largest wild card in the future of humanity, I would prefer FHI to concentrate on that. Rather than longevity etc.

Replies from: NancyLebovitz, NancyLebovitz
comment by NancyLebovitz · 2010-08-15T22:24:42.296Z · LW(p) · GW(p)

What would the charity you'd like to contribute to look like?

Replies from: whpearson
comment by whpearson · 2010-08-15T23:02:19.333Z · LW(p) · GW(p)

When I read good popular science books, the people in them tend to come up with some idea. Then they test the idea to destruction, poking and prodding at it until it really can't be anything but what they say it is.

I want to get the same feeling off the group studying intelligence as I do from that type of research. They don't need to be running foomable AIs, but truth is entangled so they should be able to figure out the nature of intelligence from other facets of the world, including physics and the biological examples.

Questions I hope they would be asking:

Is the g factor related to the ability to absorb cultural information? I.e., is people's increased ability to solve problems when they have high g due to their being able to get more information about solving problems from cultural information sources?

If it wasn't, then it would be further evidence for something special in one intelligence over another, and it might make sense to call one more intelligent, rather than just saying the two have different initial skill sets.

If SIAI had the ethos I'd like, we'd be going over and kicking every one of the supporting arguments for the likelihood of fooming and the nature of intelligence to make sure they were sound. Performing experiments where necessary. However, people have forgotten them and moved on to decision theory and the like.

Replies from: NancyLebovitz, Perplexed, JamesAndrix
comment by NancyLebovitz · 2010-08-15T23:13:30.408Z · LW(p) · GW(p)

Interesting points. Speaking only for myself, it doesn't feel as though most of my problem solving or idea generating approaches were picked up from the culture, but I could be kidding myself.

For a different angle, here's an old theory of Michael Vassar's-- I don't know whether he still holds it. Talent consists of happening to have a reward system which happens to make doing the right thing feel good.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2010-08-16T23:02:10.275Z · LW(p) · GW(p)

Talent consists of happening to have a reward system which happens to make doing the right thing feel good.

Definitely not just that. Knowing what the right thing is, and being able to do it before it's too late, are also required. And talent implies a greater innate capacity for learning to do so. (I'm sure he meant in prospect, not retrospect).

It's fair to say that some of what we identify as "talent" in people is actually in their motivations as well as their talent-requisite abilities.

comment by Perplexed · 2010-08-15T23:35:48.487Z · LW(p) · GW(p)

If SIAI had the ethos I'd like, we'd be going over and kicking every one of the supporting arguments for the likelihood of fooming and the nature of intelligence to make sure they were sound.

And then, hypothetically, if they found that fooming is not likely at all, and that dangerous fooming can be rendered nearly impossible by some easily enforced precautions/regulations, what then? If they found that the SIAI has no particular unique expertise to contribute to the development of FAI? An organization with an ethos you would like: what would it do then? To make it a bit more interesting, suppose they find themselves sitting on a substantial endowment when they reason their way to their own obsolescence?

How often in human history have organizations announced, "Mission accomplished - now we will release our employees to go out and do something else"?

Replies from: timtyler
comment by timtyler · 2010-08-16T06:09:37.666Z · LW(p) · GW(p)

It doesn't seem likely. The paranoid can usually find something scary to worry about. If something turns out to be not really frightening, fear mongers can just go on to the next most frightening thing in line. People have been concerned about losing their jobs to machines for over a century now. Machines are a big and scary enough domain to keep generating fear for a long time.

Replies from: ciphergoth, NancyLebovitz
comment by Paul Crowley (ciphergoth) · 2010-08-16T08:59:27.564Z · LW(p) · GW(p)

I think that what SIAI works on is real and urgent, but if I'm wrong and what you describe here does come to pass, the world gets yet another organisation campaigning about something no-one sane should care about. It doesn't seem like a disastrous outcome.

comment by NancyLebovitz · 2010-08-16T08:06:36.927Z · LW(p) · GW(p)

From a less cynical angle, building organizations is hard. If an organization has fulfilled its purpose, or that purpose turns out to be a mistake, it isn't awful to look for something useful for the organization to do rather than dissolving it.

Replies from: Perplexed
comment by Perplexed · 2010-08-17T03:19:13.296Z · LW(p) · GW(p)

The American charity organization The March of Dimes was originally created to combat polio. Now they are involved with birth defects and other infant health issues.

Since they are the one case I know of (other than ad hoc disaster relief efforts) in which an organized charity accomplished its mission, I don't begrudge them a few additional decades of corporate existence.

comment by JamesAndrix · 2010-08-15T23:53:58.525Z · LW(p) · GW(p)

Then they will test the idea to destruction.

I like this concept.

Assume your theory will fail in some places, and keep pressing it until it does, or you run out of ways to test it.

comment by NancyLebovitz · 2010-08-15T14:33:44.600Z · LW(p) · GW(p)

FHI?

Replies from: whpearson
comment by whpearson · 2010-08-15T14:53:17.063Z · LW(p) · GW(p)

The Future of Humanity Institute.

Nick Bostrom's personal website probably gives you the best idea of what they produce.

A little too philosophical for my liking, but still interesting.

comment by multifoliaterose · 2010-08-15T08:27:35.509Z · LW(p) · GW(p)

The point of my post is not that there's a problem of SIAI staff making claims that you find uncredible; the point is that there's a problem of SIAI making claims that people who are not already sold on taking existential risk seriously find uncredible.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-08-15T09:23:35.104Z · LW(p) · GW(p)

Can you give a few more examples of claims made by SIAI staff that people find uncredible? Because it's probably not entirely clear to them (or to others interested in existential risk advocacy) what kind of things a typical smart person would find uncredible.

Looking at your previous comments, I see that another example you gave was that AGI will be developed within the next century. Any other examples?

Replies from: JanetK, whpearson, multifoliaterose, wedrifid, XiXiDu
comment by JanetK · 2010-08-15T10:48:01.744Z · LW(p) · GW(p)

Is accepting multi-universes important to the SIAI argument? There are a very, very large number of smart people who know very little about physics. They give lip service to quantum theory and relativity because of authority - but they do not understand them. Mentioning multi-universes just slams a door in their minds. If it is important then you will have to continue referring to it but if it is not then it would be better not to sound like you have science fiction type ideas.

Replies from: wedrifid
comment by wedrifid · 2010-08-15T11:04:58.921Z · LW(p) · GW(p)

Is accepting multi-universes important to the SIAI argument?

Definitely not, for the purposes of public relations at least. It may make some difference when actually doing AI work.

If it is important then you will have to continue referring to it but if it is not then it would be better not to sound like you have science fiction type ideas.

Good point. Cryonics probably comes with a worse Sci. Fi. vibe but is unfortunately less avoidable.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-15T11:13:21.982Z · LW(p) · GW(p)

Cryonics probably comes with a worse Sci. Fi. vibe

This is a large part of what I implicitly had in mind making my cryonics post (which I guess really rubbed you the wrong way). You might be interested in taking a look at the updated version if you haven't already done so - I hope it's more clear than it was before.

comment by whpearson · 2010-08-15T12:34:18.657Z · LW(p) · GW(p)

Things that stretch my credibility.

  • AI will be developed by a small team (at this time) in secret
  • That formal theory involving infinite/near infinite computing power has anything to do with AI and computing in the real world. It might be vaguely useful for looking at computing in the limit (e.g. Galaxy sized computers), but otherwise it is credibility stretching.
Replies from: Wei_Dai, JoshuaZ
comment by Wei Dai (Wei_Dai) · 2010-08-16T23:52:18.674Z · LW(p) · GW(p)

AI will be developed by a small team (at this time) in secret

I find this very unlikely as well, but Anna Salamon once put it as something like "9 Fields-Medalist types plus (an eventual) methodological revolution", which made me raise my probability estimate from "negligible" to "very small", which, given the potential payoffs, I think is enough for someone to be exploring the possibility seriously.

I have a suspicion that Eliezer isn't privately as confident about this as he appears, and his apparent confidence is itself a PR strategy.

That formal theory involving infinite/near infinite computing power has anything to do with AI and computing in the real world.

Turing's theories involving infinite computing power contributed to building actual computers, right? I don't see why such theories wouldn't be useful stepping stones for building AIs as well. There's a lot of work on making AIXI practical, for example (which may be disastrous if they succeeded since AIXI wasn't designed to be Friendly).

If this is really something that a typical smart person finds hard to believe at first, it seems like it would be relatively easy to convince them otherwise.

Replies from: whpearson
comment by whpearson · 2010-08-17T00:56:22.907Z · LW(p) · GW(p)

I have a suspicion that Eliezer isn't privately as confident about this as he appears, and his apparent confidence is itself a PR strategy.

The impression I have lingering from SL4 days is that he thinks it's the only way to do AI safely.

Turing's theories involving infinite computing power contributed to building actual computers, right? I don't see why such theories wouldn't be useful stepping stones for building AIs as well.

They generally only had infinite memory, rather than infinite processing power. The trouble with infinite processing power is that it doesn't encourage you to ask which hypotheses should be processed. You just sweep that issue under the carpet and do them all.

Replies from: PaulAlmond
comment by PaulAlmond · 2010-08-17T03:05:05.033Z · LW(p) · GW(p)

I don't see this as being much of an issue for getting usable AI working: it may be an issue if we demand perfect modeling of reality from a system, but there is no reason to suppose we have that.

As I see it, we can set up a probabilistic model of reality and extend this model in an exploratory way. We would continually measure the relevance of features of the model - how much effect they have on predicted values that are of interest - and we would tend to keep those parts of the model that have high relevance. If we "grow" the model out from the existing model that is known to have high relevance, we should expect it to be more likely that we will encounter further, high-relevance "regions".
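
(One crude way to make that idea concrete - a toy sketch, not anything specified in the comment above: grow a model feature by feature and keep a feature only if it noticeably changes the predictions we care about.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy reality: the quantity of interest depends only on features 0 and 2.
X = rng.normal(size=(500, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + rng.normal(scale=0.1, size=500)

def prediction_error(features):
    """Least-squares fit on the chosen features; mean absolute prediction error."""
    design = np.column_stack([X[:, features], np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return float(np.mean(np.abs(design @ coef - y)))

def grow_model(candidates, threshold=0.05):
    """Exploratory growth: try adding each candidate feature and keep it only if
    its 'relevance' (how much it improves the predictions of interest) is high."""
    kept, error = [], prediction_error([])
    for f in candidates:
        new_error = prediction_error(kept + [f])
        if error - new_error > threshold:
            kept, error = kept + [f], new_error
    return kept

print(grow_model(range(5)))  # expect [0, 2]: only the high-relevance features survive
```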

Replies from: whpearson
comment by whpearson · 2010-08-17T12:01:27.678Z · LW(p) · GW(p)

I feel we are going to get stuck in an AI bog. However... This seems to neglect linguistic information.

Let us say that you were interested in getting somewhere. You know you have a bike and a map and have cycled there many times.

What relevance does the fact that the word "car" refers to cars have to this model? None, directly.

Now if I were to tell you that "there is a car leaving at 2pm", it would become relevant, assuming you trusted what I said.

A lot of real-world AI is not about collecting examples of basic input-output pairings.

AIXI deals with this by simulating humans and hoping that that is the smallest world.

comment by JoshuaZ · 2010-08-17T01:02:02.405Z · LW(p) · GW(p)

That formal theory involving infinite/near infinite computing power has anything to do with AI and computing in the real world.

I'm not sure why that stretches your credibility. Note, for example, that computability results often tell us not to try something: the Turing Halting Theorem and related results mean that we know we can't make a program that will, in general, tell whether an arbitrary program will crash.

Similarly, theorems about the asymptotic ability of certain algorithms matter. A strong version of P != NP would have direct implications for AIs trying to go FOOM. Similarly, if trapdoor functions or one-way functions exist, they give us possible security procedures for handling young general AIs.
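
(For readers who haven't seen the argument, here is a minimal Python sketch of the standard diagonalization behind the halting result mentioned above. `halts` and `troublemaker` are hypothetical names used only for illustration - the whole point is that no correct `halts` can actually be written.)

```python
def halts(program, argument):
    """Hypothetical oracle: True iff program(argument) eventually halts.
    The halting theorem says no total, always-correct version of this exists."""
    raise NotImplementedError("no such oracle can be written")

def troublemaker(program):
    """Does the opposite of whatever the oracle predicts about program(program)."""
    if halts(program, program):
        while True:   # loop forever if the oracle says "halts"
            pass
    else:
        return        # halt immediately if the oracle says "loops"

# Now consider troublemaker(troublemaker): if the oracle answers True, it loops
# forever; if the oracle answers False, it halts at once. Either answer makes the
# oracle wrong, so no such oracle can exist.
```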

Replies from: whpearson
comment by whpearson · 2010-08-17T01:53:19.296Z · LW(p) · GW(p)

I'm mainly talking about Solomonoff induction here, especially when Eliezer uses it as part of his argument about what we can expect from superintelligences - or searching through 3^^^3 proofs without blinking an eye.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-08-17T02:08:36.016Z · LW(p) · GW(p)

The point in the linked post doesn't deal substantially with the limits of arbitrarily large computers. It is just an intuition pump for the idea that a fast moderately bright intelligence could be dangerous.

Replies from: whpearson
comment by whpearson · 2010-08-17T10:42:05.170Z · LW(p) · GW(p)

Is it a good intuition pump? To me it is like using a Turing machine (TM) as an intuition pump for how much memory we might have in the future.

We will never have anywhere near infinite memory. We will have a lot more than what we have at the moment, but the concept of the TM is not useful in gauging the scope and magnitude.

I'm trying to find the other post that annoyed me in this fashion. Something to do with simulating universes.

comment by multifoliaterose · 2010-08-15T10:27:00.048Z · LW(p) · GW(p)

Good question. I'll get back to you on this when I get a chance; I should do a little bit of research on the topic first. The two examples that you've seen are the main ones that I have in mind that have been stated in public, but there may be others that I'm forgetting.

There are some other examples that I have in mind from my private correspondence with Michael Vassar. He's made some claims which I personally do not find at all credible. (I don't want to repeat these without his explicit permission.) I'm sold on the cause of existential risk reduction, so the issue in my top level post does not apply here. But in the course of the correspondence I got the impression that he may say similar things in private to other people who are not sold on the cause of existential risk.

comment by wedrifid · 2010-08-15T10:34:51.311Z · LW(p) · GW(p)

I second that question. I am sure there probably are other examples, but for the most part they wouldn't occur to me. The main examples that spring to mind are cases where Robin has disagreed with Eliezer... but that is hardly a huge step away from the SIAI mainline!

comment by XiXiDu · 2010-08-15T14:35:51.858Z · LW(p) · GW(p)

Ok, I will provide a claim even if I get banned for it:

And if I were to spread the full context of the above and tell anyone outside of the hard core about it, do you seriously think they would find this kind of reaction credible?

Replies from: ciphergoth, jimrandomh, timtyler, Jonathan_Graehl, JoshuaZ
comment by Paul Crowley (ciphergoth) · 2010-08-16T18:30:03.651Z · LW(p) · GW(p)

The form of blanking out you use isn't secure. Better to use pure black rectangles.

Replies from: RobinZ, timtyler
comment by RobinZ · 2010-08-16T18:48:13.011Z · LW(p) · GW(p)

Pure black rectangles are not necessarily secure, either.

Replies from: SilasBarta, wedrifid
comment by SilasBarta · 2010-08-16T18:55:01.895Z · LW(p) · GW(p)

Amusing anecdote: There was a story about this issue on Slashdot one time, where someone possessing kiddy porn had obscured the faces by doing a swirl distortion, but investigators were able to sufficiently reverse this by doing an opposite swirl and so were able to identify the victims.

Then someone posted a comment to say that if you ever want to avoid this problem, you need to do something like a Gaussian blur, which deletes the information contained in that portion of the image.

Somebody replied to that comment and said, "Yeah. Or, you know, you could just not molest children."

Replies from: wedrifid
comment by wedrifid · 2010-08-16T19:20:35.726Z · LW(p) · GW(p)

Somebody replied to that comment and said, "Yeah. Or, you know, you could just not molest children."

Brilliant.

comment by wedrifid · 2010-08-16T19:22:37.599Z · LW(p) · GW(p)

Nice link. (It's always good to read articles where 'NLP' doesn't refer, approximately, to Jedi mind tricks.)

comment by timtyler · 2010-08-16T18:38:41.197Z · LW(p) · GW(p)

That document was knocking around on a public website for several days.

Using much security at this point would probably be pretty pointless.

comment by jimrandomh · 2010-08-15T14:55:20.194Z · LW(p) · GW(p)

Please stop doing this. You are adding spaced repetition to something that I, and others, positively do not want to think about. That is a real harm and you do not appear to have taken it seriously.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-15T15:03:14.532Z · LW(p) · GW(p)

I'm sorry, but people like Wei force me to do this, as they make this whole movement look completely down-to-earth, when in fact most people, if they knew about the full complexity of beliefs within this community, would laugh out loud.

Replies from: wedrifid, katydee
comment by wedrifid · 2010-08-16T03:56:47.052Z · LW(p) · GW(p)

You have a good point. It would be completely unreasonable to ban topics in such a manner while simultaneously expecting to maintain an image of being down to earth or particularly credible to intelligent external observers. It also doesn't reflect well on the SIAI if their authorities claim they cannot consider relevant risks due to psychological or psychiatric difficulties. That is incredibly bad PR. It is exactly the kind of problem this post discusses.

Replies from: HughRistik, timtyler
comment by HughRistik · 2010-08-16T23:57:41.328Z · LW(p) · GW(p)

That is incredibly bad PR.

Since the success of an organization is partly dependent on its PR, a rational donor should be skeptical of donating to an organization with bad PR. Any organization soliciting donations should keep this principle in mind.

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2010-08-17T15:52:51.543Z · LW(p) · GW(p)

Since the success of an organization is partly dependent on its PR, a rational donor should be skeptical of donating to an organization with bad PR.

So let me see if I understand: if an organization uses its income to make a major scientific breakthrough or to prevent a million people from starving, but does not pay enough attention to avoiding bad PR, with the result that the organization ends (though the productive employees take the skills they have accumulated there to other organizations), that is a bad organization; but if an organization, in the manner of most non-profits, focuses on staying in existence as long as possible to provide a secure personal income for its leaders, which entails paying close attention to PR, that is a good organization?

Well, let us take a concrete example: Doug Engelbart's lab at SRI International. Doug wasted too much time mentoring the young researchers in his lab with the result that he did not pay enough attention to PR and his lab was forced to close. Most of the young researchers got jobs at Xerox PARC and continued to develop Engelbart's vision of networked personal computers with graphical user interfaces, work that directly and incontrovertibly inspired the Macintosh computer. But let's not focus on that. Let's focus on the fact that Engelbart is a failure because he no longer runs an organization because the organization failed because Engelbart did not pay enough attention to PR and to the other factors needed to ensure the perpetuation of the organization.

Replies from: HughRistik
comment by HughRistik · 2010-08-17T17:04:01.591Z · LW(p) · GW(p)

Yes, that would be an example. In general, organizations tend to need some level of PR to convince people to align with their goals.

comment by timtyler · 2010-08-16T16:58:08.362Z · LW(p) · GW(p)

I still have a hard time believing it actually happened. I have heard that there's no such thing as bad publicity - but surely nobody would pull this kind of stunt deliberately. It just seems to be such an obviously bad thing to do.

comment by katydee · 2010-08-16T01:02:34.679Z · LW(p) · GW(p)

The "laugh test" is not rational. I think that, if the majority of people fully understood the context of such statements, they would not consider them funny.

Replies from: wedrifid
comment by wedrifid · 2010-08-16T03:45:28.778Z · LW(p) · GW(p)

The context asked 'what kind of things a typical smart person would find uncredible'. This is a perfect example of such a thing.

Replies from: katydee
comment by katydee · 2010-08-16T10:24:26.525Z · LW(p) · GW(p)

A typical smart person would find the laugh test credible? We must have different definitions of "smart."

Replies from: timtyler, wedrifid
comment by timtyler · 2010-08-16T17:01:26.947Z · LW(p) · GW(p)

The topic was the banned topic and the deleted posts - not the laugh test. If you explained what happened to an outsider, they would have a hard time believing the story, since the explanation sounds so totally crazy and ridiculous.

Replies from: katydee
comment by katydee · 2010-08-16T19:11:06.996Z · LW(p) · GW(p)

I'll try to test that, but keep in mind that my standards for "fully understanding" something are pretty high. I would have to explain FAI theory, AI-FOOM, CEV, what SIAI was, etc.

comment by wedrifid · 2010-08-16T12:36:19.835Z · LW(p) · GW(p)

(Voted you back up to 0 here.)

I think you are right about the laugh test itself.

comment by timtyler · 2010-08-15T14:44:35.116Z · LW(p) · GW(p)

Perhaps that was a marketing effort.

After all, everyone likes to tell the tale of the forbidden topic and the apprentice being insulted. You are spreading the story around now - increasing the mystery and intrigue of these mythical events about which (almost!) all records have been deleted. The material was left in public for a long time - creating plenty of opportunities for it to "accidentally" leak out.

By allowing partly obfuscated forbidden materials to emerge, you may be contributing to the community folklore, spreading and perpetuating the intrigue.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2010-08-16T22:58:00.221Z · LW(p) · GW(p)

Sure, but it was fair of him to give evidence when challenged, whether or not he baited that challenge.

comment by Jonathan_Graehl · 2010-08-16T22:29:49.502Z · LW(p) · GW(p)

The trauma caused by imagining torture blackmail is hard for most people (including me) to relate to, because it's so easy not to take an idea like infinite torture blackmail seriously, on the grounds that the likelihood of ever actually encountering such a scenario seems vanishingly small.

I guess those who are disturbed by the idea have excellent imaginations, or, more likely, emotional systems that can be fooled into trying to evaluate the idea of infinite torture ("hell").

Therefore, I agree that it's possible to make fun of people on this basis. I myself lean more toward accommodation. Sure, I think those hurt by it should have just avoided the discussion, but perhaps having EY speak for them and officially ban something gave them some catharsis. I feel like I'm beginning to make fun now, so I'll stop.

comment by JoshuaZ · 2010-08-16T01:07:42.901Z · LW(p) · GW(p)

You don't seem to realize that claims like the ones in the post in question are the sort of claim that can cause people vulnerable to neuroses to develop further problems. Regardless of whether or not the claims are at all reasonable, repeatedly referencing them this way is likely to cause further psychological harm. Please stop.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-08-16T04:27:11.680Z · LW(p) · GW(p)

JoshuaZ:

You don't seem to realize that claims like the ones in the post in question are the sort of claim that can cause people vulnerable to neuroses to develop further problems. Regardless of whether or not the claims are at all reasonable, repeatedly referencing them this way is likely to cause further psychological harm.

However, it seems that in general, the mere fact that certain statements may cause psychological harm to some people is not considered a sufficient ground for banning or even just discouraging such statements here. For example, I am sure that many religious people would find certain views often expressed here shocking and deeply disturbing, and I have no doubt that many of them could be driven into serious psychological crises by exposure to such arguments, especially if they're stated so clearly and poignantly that they're difficult to brush off or rationalize away. Or, to take another example, it's very hard to scare me with hypotheticals, but the post "The Strangest Thing An AI Could Tell You" and the subsequent thread came pretty close; I'm sure that at least a few readers of this blog didn't sleep well if they happened to read that right before bedtime.

So, what exact sorts of potential psychological harm constitute sufficient grounds for proclaiming a topic undesirable? Is there some official policy about this that I've failed to acquaint myself with?

Replies from: JoshuaZ, MatthewBaker
comment by JoshuaZ · 2010-08-16T15:10:15.531Z · LW(p) · GW(p)

That's a very valid set of points and I don't have a satisfactory response.

comment by MatthewBaker · 2011-07-05T18:20:53.041Z · LW(p) · GW(p)

Neither do I, and I've thought a lot about religious extremism and other scary views that turn into reality when given to someone in a sufficiently horrible mental state.

comment by thomblake · 2010-08-16T14:17:13.963Z · LW(p) · GW(p)

This post reminds me of the talk at this year's H+ Summit by Robert Tercek. Amongst other things, he pointed out that the PR battle over transhumanist issues had already been lost in popular culture, and that transhumanists were not helping matters by putting people with very freaky ideas in the spotlight.

I wonder if there are analogous concerns here.

comment by mranissimov · 2010-08-16T07:24:11.679Z · LW(p) · GW(p)

Just to check... have I said any "naughty" things analogous to the Eliezer quote above?

Replies from: wedrifid
comment by wedrifid · 2010-08-16T07:57:52.419Z · LW(p) · GW(p)

Not to my knowledge... but Eliezer makes his words far more prominent than you do.

Replies from: mranissimov
comment by mranissimov · 2010-09-04T03:24:10.212Z · LW(p) · GW(p)

Only on LessWrong. In the wider world, more people actually read my words!

comment by MaoShan · 2010-08-15T20:26:59.509Z · LW(p) · GW(p)

Aside from the body of the article, which is just "common" sense given the author's stance against the current policies of SIAI, I found the final paragraph interesting because I also exhibit "an unusually high abundance of the traits associated with Aspergers Syndrome." Perhaps possessing that group of traits gives one a predilection to seriously consider existential risk reduction, by making one socially detached enough to see the bigger picture. Perhaps LW is somewhat homogeneously populated with this "certain kind" of people. So, how do we gain credibility with normal people?

comment by pnrjulius · 2012-06-12T02:58:57.019Z · LW(p) · GW(p)

Basically, we need a PR campaign. It needs to be tightly focused: Just existential risk, don't try to sell the whole worldview at once (keep inferential distance in mind). Maybe it shouldn't even be through SIAI; maybe we should create a separate foundation called The Foundation to Reduce Existential Risk (or something). ("What do you do?" "We try to make sure the human race is still here in 1000 years. Can we interest you in our monthly donation plan?")

And if our PR campaign even slightly reduces the chances of a nuclear war or an unfriendly AI, it could be one of the most important things anyone has ever done.

Who do we know who has the resources to make such a campaign?

comment by complexmeme · 2010-08-19T02:43:21.026Z · LW(p) · GW(p)

Huh, interesting. I wrote something very similar on my blog a while ago. (That was on cryonics, not existential risk reduction, and it goes on about cryonics specifically. But the point about rhetoric is much the same.)

Anyways, I agree. At the very least, some statements made by smart people (including Yudkowsky) have had the effect of increasing my blanket skepticism in some areas. On the other hand, such statements have me thinking more about the topics in question than I might have otherwise, so maybe that balances out. Then again, I'm more willing to wrestle with my skepticism than most, and I'm still probably a "mediocre rationalist" (to put it in Eliezer's terms).

comment by rabidchicken · 2010-08-17T06:17:31.025Z · LW(p) · GW(p)

Come on... Who does not love being a social outcast? I made a decision when I was about 12 that, rather than trying to conform to other people's expectations of me, I was going to do / express support for exactly what I thought made sense, even if something I supported was related to something I could not, and then get to know people who seemed to be making similar decisions. It's arrogant and has numerous flaws, but it has generally worked for me. Social status and popularity are overrated, compared to the benefits of meeting a large number of people you can interact with freely.

Replies from: KrisC
comment by KrisC · 2010-08-17T06:30:07.752Z · LW(p) · GW(p)

This works fine as long as you don't find yourself operating within a hierarchy.

comment by Jonathan_Graehl · 2010-08-16T23:12:05.712Z · LW(p) · GW(p)

The discussion reassures me that EY is not, for anyone here, a cult leader.

I haven't evaluated SIAI carefully yet, but they do open themselves up to this sort of attack when they advocate concentrating charitable giving on the marginally most efficient utility generator (up to $1M).

Replies from: wedrifid, rabidchicken
comment by wedrifid · 2010-08-17T05:11:22.993Z · LW(p) · GW(p)

I haven't evaluated SIAI carefully yet, but they do open themselves up to this sort of attack when they advocate concentrating charitable giving on the marginally most efficient utility generator (up to $1M).

To not advocate that would seem to set them up for attacks on their understanding of economics.

I suggest "and the SIAI is the marginally most efficient utility generator" is the one that opens them up to attacks. (I'm not saying that they shouldn't make that claim.)

Replies from: ciphergoth, Larks, Jonathan_Graehl
comment by Paul Crowley (ciphergoth) · 2010-08-17T07:28:17.318Z · LW(p) · GW(p)

In a saner world every charity would claim this. Running a charity that you think generates utility less efficiently than some existing charity would be madness.

Replies from: Vladimir_Nesov, Eliezer_Yudkowsky
comment by Vladimir_Nesov · 2010-08-17T19:59:32.034Z · LW(p) · GW(p)

Running a charity that you think generates utility less efficiently than some existing charity would be madness.

Many charities could have close marginal worth, and rational allocation of resources would keep them that way. A charity that is less efficient could still perform a useful function, merely needing a decrease in funding, and not disbanding.

And you can't have statically super-efficient charities either, because marginal worth decreases with more funding. For example, a baseline SIAI yearly budget of a hundred million dollars might drive the marginal efficiency of a dollar donated there lower than that of other causes.
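
(A toy Python sketch of the equalize-marginal-returns point being made here: with diminishing returns, a fixed budget allocated greedily - each dollar going to whichever cause currently has the highest marginal value - ends up spread across causes rather than concentrated in one. The curves and numbers are made up purely for illustration.)

```python
def allocate(budget, marginal_value_fns, step=1.0):
    """Greedily send each dollar to the cause with the highest current marginal value."""
    funding = [0.0] * len(marginal_value_fns)
    for _ in range(int(budget / step)):
        best = max(range(len(funding)),
                   key=lambda i: marginal_value_fns[i](funding[i]))
        funding[best] += step
    return funding

# Two hypothetical causes with diminishing marginal returns:
curves = [lambda x: 10 / (1 + x),        # high initial value, saturates quickly
          lambda x: 6 / (1 + 0.5 * x)]   # lower initial value, saturates slowly
print(allocate(100, curves))             # the budget ends up split across both
```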

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-18T15:16:39.071Z · LW(p) · GW(p)

In a sane world where everyone had the same altruistic component of their values, the marginal EU of all utilities would roughly balance up to the cost of discriminating them more closely. I'd have to think about what would happen if everyone had different altruistic components of their values; but if large groups of people had the same values, then there would exist some class of charities that was marginally balanced with respect to those values, and people from that group would expend the cost to pick out a member of that class but then not look too much harder. If everyone who works for a charity is optimistic and claims that their charity alone is the most marginally efficient in the group, that raises the cost of discriminating among them and they will become more marginally unbalanced.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-08-18T15:40:41.635Z · LW(p) · GW(p)

This more detailed analysis doesn't, I think, detract from my main point: in broad terms, it's not weird that SIAI claims to be the most efficient way to spend altruistically; it's weird that all charities don't claim this.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-18T16:26:34.063Z · LW(p) · GW(p)

I agree with your main point and was refining it.

comment by Larks · 2010-08-17T06:22:19.504Z · LW(p) · GW(p)

If they/we didn't think SIAI was the most efficient utility generator, and didn't disband & work for GiveWell or whatever, they'd be guilty of failing to act as utility maximisers.

The belief that SIAI is the best utility generator may be incorrect, but you can't criticise someone from SIAI for making that claim beyond criticising them for being at SIAI in the first place - a criticism that no-one seems to make.

Replies from: wedrifid, multifoliaterose
comment by wedrifid · 2010-08-17T06:32:06.793Z · LW(p) · GW(p)

If they/we didn't think SIAI was the most efficient utility generator, and didn't disband & work for GiveWell or whatever, they'd be guilty of failing to act as utility maximisers.

Technically not true. SIAI could actually be the optimal way for them specifically to generate utility while at the same time not being the optimal place for people to donate. For example, they could use their efforts to divert charitable donations from even worse sources to themselves and then pass it on to GiveWell.

Replies from: Larks
comment by Larks · 2010-08-17T06:43:40.793Z · LW(p) · GW(p)

I think that would be illegal, though I'm not as familiar with the US rules on this as the UK ones. More importantly, that argument seems to rely on an unfairly expansive interpretation of what it is to work for SIAI: diverting money away from SIAI doesn't count.

comment by Jonathan_Graehl · 2010-08-17T05:33:52.250Z · LW(p) · GW(p)

Sure; that's more or less what I meant. Even calling these bids by SIAI competitors to in fact offer better marginal-utility efficiency "attacks" was a little over-dramatic on my part.

I have only one objection to the economic argument: "assume there is already sufficient diversification in improving or maintaining human progress; then you should only give to SIAI" is a simplification that only works if the majority aren't convinced by that argument. I guess there's practically speaking no danger of that happening.

In other words, SIAI's claim can only be plausible if they promise to adjust their allocation of effort to ensure some diversity, in the unlikely event that they end up receiving humongous amounts of money (and I'm sure they'll say that they will).

By the way, I don't mean to say that an individual diversifying their charitable spending, or there being global diversity in charitable spending, is an end in itself. I just feel comforted that some of it is the kind that reduces overall risk (in case the perceived-most-efficient group turns out, in retrospect, to have a blind spot due to politics, group-think, laziness, or any number of human weaknesses).

comment by rabidchicken · 2010-08-17T06:22:31.262Z · LW(p) · GW(p)

EY is not a cult leader, he is a Lolcat herder.

Replies from: wedrifid
comment by wedrifid · 2010-08-17T06:37:50.809Z · LW(p) · GW(p)

You have not behaved like a troll thus far, some of your contributions have been useful. Please don't go down that path now.

Replies from: jsalvatier, rabidchicken
comment by jsalvatier · 2010-08-20T20:15:26.247Z · LW(p) · GW(p)

I am confused: his comment reads like a joke, how is that trollish? I smiled.

comment by rabidchicken · 2010-08-17T21:41:28.305Z · LW(p) · GW(p)

That was a useless and stupid thing to say even if I am a troll, my apologies.

comment by Thomas · 2010-08-15T09:49:00.159Z · LW(p) · GW(p)

Just take the best of anybody and discard the rest. Yudkowsky has some very good points (about 80% of his writings, in my view) - take them and say thank you.

When he or the SIAI misses the point, to put it mildly, you know better anyway, don't you?

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-15T10:36:49.365Z · LW(p) · GW(p)

I agree that Yudkowsky has some very good points.

My purpose in making the top level post is as stated: to work against poisoning the meme.

Replies from: Thomas
comment by Thomas · 2010-08-15T10:48:21.831Z · LW(p) · GW(p)

My point is that you can "poison the meme" only for the fools. A wise individual can see for zerself what to pick and what not to pick.

Replies from: multifoliaterose, Kaj_Sotala, Jonathan_Graehl
comment by multifoliaterose · 2010-08-15T11:07:18.575Z · LW(p) · GW(p)

But there are (somewhat) wise individuals who have not yet thought carefully about existential risk. They're forced to heuristically decide whether or not thinking about it more makes sense. Given what they know at present, it may be rational for them to dismiss Eliezer as being like a very smart version of the UFO conspiracy theorists or something like that. Because of the halo effect issue that Yvain talks about, this may lower their willingness to consider existential risk at all.

comment by Kaj_Sotala · 2010-08-15T11:24:10.512Z · LW(p) · GW(p)

Most people do not systematically go through every statement that some particular person has made. If someone has heard primarily negative things about somebody else, then that reduces the chance of them even bothering to look at the person's other writings. This is quite rational behavior, since there are a lot of people out there and one's time is limited.

Replies from: Thomas
comment by Thomas · 2010-08-15T11:40:45.063Z · LW(p) · GW(p)

You don't have to go through the entire "memesphere" person by person - you can go through it meme by meme! The current internet makes that quite easy to do.

Replies from: NihilCredo
comment by NihilCredo · 2010-08-16T03:15:09.805Z · LW(p) · GW(p)

You must mean "quite easy" in the sense that it would only require a few million man-hours, rather than a few quadrillion.

comment by Jonathan_Graehl · 2010-08-16T22:06:00.881Z · LW(p) · GW(p)

Admirable pluck.

But expressing strength doesn't make people strong, or strength free.

comment by Emile · 2010-08-15T16:52:22.871Z · LW(p) · GW(p)

My understanding of Michael Vassar's position is that the people who are dissuaded from thinking about existential risk because of remarks like Eliezer's are too irrational for it to be worthwhile for them to be thinking about existential risk.

Seems like a reasonable position to me.

An important part of existential risk reduction is making sure that people who are likely to work on AI, or fund it, have read the sequences, and are at least aware of how most possible minds are not minds we would want, and of how dangerous recursive self-improvement could be.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-08-15T16:56:00.943Z · LW(p) · GW(p)

My understanding of Michael Vassar's position is that the people who are dissuaded from thinking about existential risk because of remarks like Eliezer's are too irrational for it to be worthwhile for them to be thinking about existential risk.

Seems like a reasonable position to me.

Really? I don't understand this position at all. The vast majority of the planet isn't very rational, and the people with lots of resources are often not rational either. If one can get some of those people to direct their resources in the right directions, then that's still a net win for preventing existential risk, even if they aren't very rational. If, say, a hundred million dollars more gets directed at existential risk, then even if much of that goes to the less likely existential risks, that's still an overall reduction in existential risk and a general increase to the sanity waterline.