How to improve the public perception of the SIAI and LW?
post by XiXiDu · 2011-03-08T14:48:54.904Z · 35 comments
I was recently thinking about the possibility that someone with a lot of influence might at some point try to damage LessWrong and the SIAI, and about what preemptive measures one could take to counter such an attack.
If you believe that the SIAI does the most important work in the universe, and that LessWrong serves the purpose of educating people to become more rational and subsequently understand the importance of trying to mitigate risks from AI, then you should care about public relations: you should try to communicate your honesty and well-intentioned motives as effectively as possible.
Public relations are very important because a good reputation is necessary to do the following:
- Making people read the Sequences.
- Raising money for the SIAI.
- Convincing people to take risks from AI seriously.
- Allowing the SIAI to influence other AGI researchers.
- Mitigating future opposition by politicians and other interest groups.
- Presenting no easy target for criticism.
An attack scenario
First one has to identify characteristics that could potentially be used to cast a damaging light on this community. Here the most obvious possibility seems to be to portray the SIAI, together with LessWrong, as a cult.
After some superficial examination an outsider might conclude the following about this community:
- Believing in heaven and hell in the form of a positive or negative Singularity.
- Discouraging skepticism while portraying their own standpoint as clear-cut.
- Encouraging people to take ideas seriously.
- Encouraging and signaling strong cooperation and conformity.
- Evangelizing by scaring people and telling them to donate money.
- Exerting social pressure by employing a reputation system with positive and negative incentives.
- Removing themselves from empirical criticism by framing everything as a prediction.
- Discrediting mainstream experts while placing themselves a level above them.
- Discouraging transparency and openness by referring to the dangers of AI research.
- Using scope insensitivity and high risk to justify action, outweigh low probabilities, and disregard opposing evidence.
Most of this might sound wrong to the well-read LessWrong reader. But how would those points be received by mediocre rationalists who don't know what you know, especially if eloquently summarized by a famous and respected person?
Preemptive measures
How one might counter such conclusions:
- Create an introductory guide to LessWrong.
- Explain why the context of the Sequences is important.
- Explain why LessWrong differs from mainstream skepticism.
- Enable and encourage outsiders to challenge and question the community before turning against it.
- Discourage the downvoting of people who have not yet read the Sequences.
- Don't expect people to read hundreds of posts without first giving them evidence that it is worth it.
- Avoid jargon when talking to outsiders.
- Detach LessWrong from the SIAI by creating an additional platform to talk about related issues.
- Ask or pay independent experts to peer-review.
- Make the finances of the SIAI easily accessible.
- Openly explain why and for what the SIAI currently needs more money.
So what do you think needs improvement and what would you do about it?
35 comments, sorted by top scores.
comment by lukeprog · 2011-03-08T20:14:44.711Z · LW(p) · GW(p)
(1) Write a short, introductory, thoroughly cited guide on each major concept employed by SIAI / LW.
As an example, this is what I'm currently doing for the point about why standard, simple designs for machine ethics will result in disaster if implemented in a superintelligent machine. Right now, you have to read hundreds of pages of dense material that references unusual terms described in hundreds of other pages all across Less Wrong and SIAI's website. That is unnecessary, and doesn't help public perception of SIAI / LW. It looks like we're being purposely obscurantist and cult-like.
Why an intelligence explosion is probable is another good example of this.
(2) Engage the professional community. Somebody goes to SIAI's page and looks for accomplishments and they see not a single article in a peer-reviewed journal. Compare this to, um... the accomplishments page of every other 10-year research institute or university research program on the planet.
EDIT: I should note that in the course of not publishing papers in journals and engaging the mainstream community, SIAI has managed to be almost a decade ahead of everyone else. Having just read quite nearly the entirety of extant literature in the field of machine ethics, I can say with some confidence that the machine ethics field still isn't caught up to where Eliezer was circa 2001.
So of course SIAI can work much more quickly if it doesn't bother to absorb the entirety of the (mostly useless) machine ethics literature and then write papers that use the same language and style as the mainstream community and cite all the same papers.
The problem is that if you don't write all those papers, then people keep asking you dumb questions about "Why can't we just tell it to maximize human happiness?" You have to keep answering that question because there is no readable, thoroughly-cited, mainstream-language guide that answers those types of questions. (Except, the one I'm writing now.)
Also, not publishing those papers in mainstream journals leaves you with less credibility in the eyes of those who are savvy enough to know there is a difference between conference papers and those accepted to mainstream journals.
So I think it's worth all that effort, though probably not for somebody like Yudkowsky. He should be working on TDT and CEV, I imagine. Not reading papers about Kantian solutions to machine ethics.
comment by TheOtherDave · 2011-03-08T15:16:18.145Z · LW(p) · GW(p)
Perhaps it would be useful to change the framing?
For example... if I join a book discussion group:
I understand that much of the discussion will not make any sense to me if I haven't read the book, and that there's a rapidly reached limit to how usefully I can participate in the discussion without having read the book.
I don't expect anyone to expend a lot of effort justifying the reading of the book to people skeptical about the benefits of doing so.
I don't expect anyone to summarize just the interesting parts of the book for me so I can participate in the discussion without actually reading the book.
All of this remains true even if the group welcomes new members who haven't read the book yet, but who hang around because the community's discussions seem interesting.
So, perhaps encouraging a similar attitude with respect to the Sequences would help manage some of the PR issues you identify surrounding them.
Of course, none of that would address the SIAI-related issues. Then again, from my perspective LW is already fairly separate from SIAI... at least, I participate in the former and not in the latter and nobody seems to mind... so I don't see a problem that needs solving there.
But I would not object to further separation, if a consensus emerged in favor of that.
↑ comment by Armok_GoB · 2011-03-08T17:57:30.147Z · LW(p) · GW(p)
A useful device here might be the word "about". LW is framed as being about rationality, so everyone who thinks they know anything about rationality thinks they can participate. However, in practice it is about a specific type of rationality (that it happens to be the type that can be considered the only one is, for the moment, irrelevant) that requires having read the sequences. From an outside view one might even argue that LW is "about" the sequences "rather than" rationality.
↑ comment by David_Gerard · 2011-03-08T19:41:56.929Z · LW(p) · GW(p)
ciphergoth considers LW "a fan site for the sequences" (quote from Sunday). But this is only clear from people talking about them.
↑ comment by TheOtherDave · 2011-03-08T19:51:33.321Z · LW(p) · GW(p)
That's not unreasonable... certainly it's what got me to stick around.
And like any fan site, it's as much about enjoying the company of the sorts of people who find this sort of thing engaging as it is about the thing itself.
comment by timtyler · 2011-03-09T12:10:55.430Z · LW(p) · GW(p)
First one has to identify characteristics that could potentially be used to cast a damaging light on this community. Here the most obvious possibility seems to be to portray the SIAI, together with LessWrong, as a cult.
Probably the other main possibilities that spring to my mind are:
That it is a luddite organisation;
That it is an unscrupulous machine intelligence outfit masquerading as a luddite organisation for marketing reasons;
That it has fallen too far behind to have much chance of meeting its goals;
That it is too perfectionist to have much chance of meeting its goals;
That its lack of experience and secretive culture are a bad combination.
comment by Scott Alexander (Yvain) · 2011-03-08T18:52:03.061Z · LW(p) · GW(p)
Less Wrong has a FAQ that anyone can edit. I think your first four Measures could be best addressed by getting an account on the Wiki and writing what you think we need.
Your worries about the structure of SIAI sound like the sort of thing worth talking about, but posting them on Less Wrong might not be the best way to go about it, due to the bystander effect and lots of important SIAI folk not being readers here. If you are really interested in this side of things, consider emailing someone on the organizational side of SIAI (Eliezer is not primarily on the organizational side and is usually busy; Michael Vassar might be good, or at least know who to forward it to) and seeing what they have to say. Justin Shovelain also has a history of being good at explaining this side of things; he has a sequence somewhere in the pipeline that I think will get some of this across.
comment by [deleted] · 2011-03-08T14:59:33.409Z · LW(p) · GW(p)
Using scope insensitivity and high risk to justify action, outweigh low probabilities, and disregard opposing evidence.
This is true, but I think the problem goes even further than this: many people are unwilling to make what Less Wrong readers would consider the "obvious" utilitarian choice, e.g. in a scenario like Torture vs. Dust Specks. Outsiders probably consider these unintuitive moral decisions weird at best and scary at worst.
comment by [deleted] · 2011-03-23T19:02:49.008Z · LW(p) · GW(p)
.
↑ comment by drethelin · 2011-03-23T19:10:21.347Z · LW(p) · GW(p)
I really love the idea of a commercial going "And now I'm showing you a picture of a man in glasses with a labcoat to prey on your learned respect for authority! Quick! flash to happy family to associate the product with happy families. nowthedisclaimerisreallyfastandintinytextbecauseiknowthiswillleavelessofanimpressiononyou"
comment by David_Gerard · 2011-03-08T19:38:35.685Z · LW(p) · GW(p)
Advertising for LessWrong is plausible. I just got a £75 Google AdWords voucher in the post ...
(£75 is approximately nothing - a decent taster campaign is at least £300. But I have no use for it and LW is welcome to it ...)
You talk about "making people read the sequences". I suggest that "making" people do anything doesn't work. You have to pull them. This means you need them to think there's something good there they want to have.
(You want to herd cats, you need to work out the local value of tuna.)
How about some advertising taglines? The current tagline is excellent, for example. But why would people want that? What can they get here they don't have now?
People want to WIN.
Most people don't feel like winners.
- "Win in the world with clear, rational thinking"
- "If you know why you do things, you can WIN in the world."
etc. Any others? Other ideas of things that will pull people towards LW?
Edit: And why has HP:MoR lured people in? What kept them here? How many came here from HP:MoR and did not stay? Why not? Etc.
↑ comment by Alicorn · 2011-03-08T19:44:59.072Z · LW(p) · GW(p)
"Win in the world with clear, rational thinking"
"If you know why you do things, you can WIN in the world."
These both sound like "The Secret"-esque crank taglines, which will drive off the intended audience.
↑ comment by David_Gerard · 2011-03-08T21:28:11.100Z · LW(p) · GW(p)
It is unfortunate that this method has been applied to things with no substance at all, like The Secret. However, it certainly would not on any level be deceptive or promise anything it couldn't deliver.
People want to win. That's what LW rationality is for. That is, in point of fact, what we promise. You seem to be objecting to saying so upfront.
Is it worth introducing simple rationality tools to people who would otherwise think The Secret was a good idea? Or is that something you think should be avoided in general?
Using it as the only hook possibly wouldn't be good and might lead to the effect you describe. However, brainstorming is cheap. What I'm saying is "ideas, ideas, please come up with lots."
↑ comment by Mitchell_Porter · 2011-03-09T01:37:28.665Z · LW(p) · GW(p)
The public mind now associates "WINNING" with Sheentology.
↑ comment by David_Gerard · 2011-03-09T08:22:55.772Z · LW(p) · GW(p)
(splutter) It'll pass :-)
↑ comment by Desrtopa · 2011-03-10T23:45:32.651Z · LW(p) · GW(p)
I think part of what attracted people about HPMoR is that it showed Harry being successful for distinct, comprehensible, imitable reasons, which people wanted to learn more about, but more of it was a feeling that "this Eliezer guy writes some funny, interesting stuff, I want to check out more of what he's written."
↑ comment by David_Gerard · 2011-03-10T23:47:36.523Z · LW(p) · GW(p)
Which works :-) But I'm quite interested to know about the experience of those who read MoR, looked at LessWrong and went away never to return. I don't know if they can even be estimated, let alone counted, surveyed and analysed, but I suspect they're important - look at the evidence that would refute your hypothesis (in this case, that MoR is good for LW), not just that which confirms it.
↑ comment by Desrtopa · 2011-03-10T23:51:07.672Z · LW(p) · GW(p)
I haven't hypothesized that MoR is good for LW. I haven't bothered to track the contributions of the people who arrived from MoR, so I don't have much of a sense of what they're bringing to the community. I'm just aware that there seem to be a considerable number of members who've come here through MoR.
I would be very surprised though, if more karma-positive members are leaving Less Wrong due to MoR than are arriving because of it.
↑ comment by David_Gerard · 2011-03-10T23:56:43.705Z · LW(p) · GW(p)
I didn't say you did, but many others have.
Edit: I did say "you". I meant a general "you" (one's hypothesis), not anything you in particular said. Sorry!
comment by wedrifid · 2011-03-08T15:52:45.437Z · LW(p) · GW(p)
Discourage the downvoting of people who have not yet read the Sequences.
I don't downvote based on whether people have read the sequences. I vote based on merit and obnoxiousness.
↑ comment by XiXiDu · 2011-03-08T16:22:28.998Z · LW(p) · GW(p)
I don't downvote based on whether people have read the sequences
It doesn't matter why you do it; what matters is what newbies and outsiders, who are not aware of your superior and rational use of the reputation system, think. This post is about public relations, so you have to take an outside view.
↑ comment by TheOtherDave · 2011-03-08T16:47:53.426Z · LW(p) · GW(p)
Sure.
But it's not unreasonable for me to treat the beliefs of people who actually pay attention to what I do as a different set from the beliefs of people who don't, and to devote different levels of effort to attempting to manipulate the former and the latter.
For example, I might decide that the beliefs of people who won't pay attention to what I actually do before deciding that I'm behaving badly simply aren't worth considering at all.
This might not be wise -- that is, I might not like the consequences of that decision -- but it's perfectly coherent, and entirely on-topic.
↑ comment by childofbaud · 2011-03-10T04:49:26.156Z · LW(p) · GW(p)
That would undermine whatever value the whole karma system may have at this point. Not punishing, or perhaps even rewarding, mediocre posts seems likely to encourage complacency on the part of users.
A race to the bottom would likely ensue as well, since new negative achievements would become possible: who can get away with the most trolling? Who can get the most karma with the least effort?
In fact, I think the system, and most people, are far too lenient already, on the whole.
I wonder if posts shouldn't start out with a slight negative value from the outset, to reflect their high potential for introducing arbitrary complexity (noise) into the established information pool (mostly signal... though that may be up for debate) of the site.
Another idea: the more posts a user makes, the greater that initial negative value should be, to reflect the higher standard that is expected of them as time goes by. :-)
Yeah, that would require pretty complex algorithms.
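To illustrate, a rule like that needn't actually be complex. The following is a purely illustrative Python sketch of such a scoring rule; the function name, parameter values, and the cap are made up for the example, not anything LessWrong implements:

```python
# Hypothetical sketch of the proposed rule, not an actual LessWrong feature:
# a new post starts with slightly negative karma, and the penalty grows with
# the author's post count, up to a cap.

def initial_karma(author_post_count: int,
                  base_penalty: float = 0.5,
                  per_post_penalty: float = 0.05,
                  max_penalty: float = 5.0) -> float:
    """Return the starting karma for a new post (a negative number)."""
    penalty = base_penalty + per_post_penalty * author_post_count
    return -min(penalty, max_penalty)

# Example: a newcomer's first post starts at -0.5; a 1000-post veteran's at -5.0.
print(initial_karma(0))     # -0.5
print(initial_karma(1000))  # -5.0
```

The mechanism itself is trivial; picking the penalty constants (and whether to cap them) would be the hard part in practice.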
↑ comment by wedrifid · 2011-03-08T16:43:55.812Z · LW(p) · GW(p)
It doesn't matter what you do, what matters is what newbies and outsiders think who are not aware of your superior and rational use of the reputation system.
Your tone doesn't fit well.
This post is about public relations so you have to take an outside view.
In my observation this is about you venting your same old issues yet again.
↑ comment by XiXiDu · 2011-03-08T17:11:15.443Z · LW(p) · GW(p)
Your tone doesn't fit well.
I meant to say "why you do it" not "what you do", my fault.
In my observation this is about you venting your same old issues yet again.
My first post here, in August 2010, was prompted by a public relations disaster. I don't see how you could conclude that this topic is already old or has been dissolved. And to accuse me of "venting" while at the same time complaining about my tone, and being upvoted for it, just supports my current perception.
comment by David_Gerard · 2011-03-10T23:25:08.512Z · LW(p) · GW(p)
Occasionally people bother taking LW seriously enough to criticise, e.g. this FWIW. I suggest this is a useful sort of notice and that LW could do with a lot more of it.
comment by Emile · 2011-03-08T15:56:32.752Z · LW(p) · GW(p)
Detach LessWrong from the SIAI by creating an additional platform to talk about related issues.
I think LessWrong is sufficiently separated from SIAI - most LessWrong members are not involved at all with SIAI, and many SIAI members/advisors/whatchamacallits don't post on LessWrong (I think).
I think of LessWrong more as "a place for people who read the Sequences to discuss related topics", whereas SIAI is much more focused on a specific goal. SIAI people may make announcements on LessWrong because a lot of members are interested, but I don't expect SIAI people to pay that much attention to LessWrong in general.
So in that light, these:
- Ask or pay independent experts to peer-review.
- Make the finances of the SIAI easily accessible.
- Openly explain why and for what the SIAI currently needs more money.
... are for SIAI, and have little to do with LessWrong.
↑ comment by David_Gerard · 2011-03-08T19:45:41.497Z · LW(p) · GW(p)
I think LessWrong is sufficiently separated from SIAI - most LessWrong members are not involved at all with SIAI, and many SIAI members/advisors/whatchamacallits don't post on LessWrong (I think).
It's not detached yet - it's still to a large degree about SIAI-related interests that are not related to the (excellent) tagline "a community blog devoted to the art of refining human rationality".
There are plenty of front-page promoted posts that are essentially advertising for SIAI, reasons why you should donate all you can spare to SIAI or how-tos on ways for readers to make money to donate to SIAI. Which I can live with - it's no more annoying than the banners on Wikipedia at the end of each year, and it takes money to keep the lights on - but it's not obviously on-mission (taking the tagline at face value).
I think that one day LW should be more independent of SIAI, but it's not a problem that it isn't yet and it can happen at its own pace.
↑ comment by XiXiDu · 2011-03-08T16:19:37.059Z · LW(p) · GW(p)
I think LessWrong is sufficiently separated from SIAI...
Why I think this is not the case:
- The Sequences have been written with the goal in mind of convincing people of the importance of taking risks from AI seriously and therefore donating to the SIAI (Reference: An interview with Eliezer Yudkowsky).
- LessWrong is used to ask for donations.
- You can find a logo with a link to the SIAI in the header and a logo and a link to LessWrong on the SIAI's frontpage.
- LessWrong is mentioned as an achievement of the SIAI (Quote: "Less Wrong is important to the Singularity Institute's work towards a beneficial Singularity").
- A quote from the official SIAI homepage: "Less Wrong is [...] a key venue for SIAI recruitment".
LessWrong is the mouthpiece of the SIAI and its main advertisement platform. I don't think one can reasonably disagree about that.
↑ comment by Emile · 2011-03-08T17:48:26.220Z · LW(p) · GW(p)
I do disagree. LessWrong isn't the mouthpiece of SIAI, that would be the SIAI blog. I don't think it's reasonable to expect top-level posts on LessWrong to represent the SIAI's views, and even less to expect that of discussion posts, comments and voting patterns.
There may be a fair amount of SIAI-oriented posts by Eliezer or others on LessWrong, but I don't see that as using LessWrong as a platform, but rather "the SIAI talking to LessWrong people".
LessWrong may be The SIAI's most popular advertisement platform, but that's because the quality of Eliezer's writings and the community attract more audience than the SIAI website does.
Eliezer needs nerds for the SIAI; instead of going through the effort of hunting nerds in the wild, he created LessWrong in the hope of having a self-sustained place where nerds like to hang out and are already familiar with his ideas. But LessWrong isn't supposed to represent the SIAI, apart from the fact that it was shaped with the features that make it a good hunting ground for the kind of nerds Eliezer needs. A lot of features required for having a functional internet community (moderation, karma, openness) have nothing to do with the SIAI's goals themselves.
I'm rambling a bit, but I still think that LessWrong is the wrong place to come to complain about things you don't like about SIAI. Information flow is mostly SIAI -> LessWrong. And the issue of "what the SIAI should do to reach its goals" is very different from "what features LessWrong should have to be a valuable community".
↑ comment by XiXiDu · 2011-03-09T09:33:48.455Z · LW(p) · GW(p)
I still think that LessWrong is the wrong place to come to complain about things you don't like about SIAI.
I don't necessarily agree, but I will do you all a favor and from now on send any criticism directly to the SIAI, via email or otherwise - unless someone else starts a discussion about the SIAI here, in which case I might post a comment.
↑ comment by orthonormal · 2011-03-09T07:39:12.445Z · LW(p) · GW(p)
You're correct.