How To Lose 100 Karma In 6 Hours -- What Just Happened
post by waitingforgodel · 2010-12-10T08:27:28.781Z · LW · GW · Legacy · 212 comments
- 7 weeks ago, I precommitted that censoring a post or comment on LessWrong would cause a 0.0001% increase in existential risk.
- Earlier today, Yudkowsky censored a post on LessWrong.
- 20 minutes later, existential risk increased 0.0001% (to the best of my estimation).
212 comments
Comments sorted by top scores.
comment by Jack · 2010-12-10T16:56:50.874Z · LW(p) · GW(p)
Hear that sound beneath your feet? It's the high ground falling out from under you.
I'm offended by the censorship as well, and was previously voting a number of your comments up. But as long as discussions of the censorship itself aren't being censored, peaceful advocacy for a policy change and skirting the censors are the best strategies. And when the discussions of censorship start being censored, the best strategy is for everyone to leave the site. This increasing-existential-risk nonsense is insanely disproportionate. Traditionally, the way to get back at censors is to spread the censored material, not blow up 2 1/2 World Trade Centers.
comment by katydee · 2010-12-10T11:10:11.038Z · LW(p) · GW(p)
At this point I must conclude either that you have no grasp whatsoever of the math involved here or that you're completely insane. Assuming your claim is correct (which I sincerely doubt), you just killed ~6,790 people (on average) because someone deleted a blog post. If you believe that this is a commensurate and appropriate response, I'm not sure what to say to you.
Honestly, if you believe that attempting to increase the chance that mankind is destroyed is a good response to anything and are willing to brag about it in public, I think something is very clearly wrong.
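For reference, the ~6,790 figure follows from straightforward expected-value arithmetic; a minimal sketch, assuming a 2010 world population of roughly 6.79 billion and counting only currently-living people:

```python
# Expected deaths implied by a 0.0001% increase in extinction risk
# (the world population figure is an approximation for 2010).
world_population = 6.79e9     # ~6.79 billion people alive in 2010
risk_increase = 0.0001 / 100  # 0.0001% expressed as a probability (1e-6)

expected_deaths = world_population * risk_increase
print(f"Expected deaths: {expected_deaths:,.0f}")  # -> 6,790
```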
Replies from: Oscar_Cunningham, Will_Sawin, atucker, Aleksei_Riikonen↑ comment by Oscar_Cunningham · 2010-12-10T11:32:47.645Z · LW(p) · GW(p)
Maybe they are of the belief that censorship on LessWrong is severely detrimental to the singularity. Then such a response might be justified.
Replies from: Lightwave↑ comment by Lightwave · 2010-12-10T14:46:18.764Z · LW(p) · GW(p)
In that case they should present their evidence and/or a strong argument for this, not attempt to blackmail moderators.
Replies from: waitingforgodel↑ comment by waitingforgodel · 2010-12-10T18:18:12.852Z · LW(p) · GW(p)
I actually explicitly said what oscar said in the discussion of the precommitment.
I also posted my reasoning for it.
Those are both from the "precommitted" link in my article.
Replies from: Lightwave↑ comment by Lightwave · 2010-12-10T20:57:49.995Z · LW(p) · GW(p)
Not quite sure how to respond...
Do you really think you're completely out of options and you need to start acting in a way that increases existential risk with the purpose of reducing it, by attempting to blackmail a person who will very likely not respond to blackmail?
Replies from: waitingforgodel↑ comment by waitingforgodel · 2010-12-12T08:54:53.589Z · LW(p) · GW(p)
Yes. If I didn't, none of this would make any sense...
↑ comment by Will_Sawin · 2010-12-10T12:15:20.849Z · LW(p) · GW(p)
Specifically, the argument against excessive punishment is this:
When dealing with humans, promising excessive punishment will not automatically move you to the "people do what you want" equilibrium. You need to prove you're serious. People will make mistakes. You will make mistakes.
This all requires punishing people.
This doesn't require murdering 6,790 people.
It seems like the sanest response would be to find some way of preventing waitingforgodel from viewing this site.
Replies from: David_Gerard↑ comment by David_Gerard · 2010-12-10T12:43:49.292Z · LW(p) · GW(p)
It seems like the sanest response would be to find some way of preventing waitingforgodel from viewing this site.
No, because then you have to think of what a troll would do, i.e. whatever would upset people for great lulz. The correct answer is to ignore future silly persons, and hence the present silly person.
(Note that this does not require waitingforgodel to be trolling - I see no reason not to assume complete sincerity. This is about the example set by the reaction/response.)
↑ comment by atucker · 2010-12-11T23:30:29.577Z · LW(p) · GW(p)
At the risk of sounding silly, I have a really minor question.
The 6,790 people figure comes from multiplying the world's population by 0.0001%, right? I feel like causing an existential catastrophe to occur is worse than that: not only does everyone alive die, but every human who could have lived in this part of the universe in the future is kept out of existence. Thus, intentionally trying to increase existential risk is much more serious.
Is there some particular reason that everyone is only multiplying by the world's population that I'm missing?
Replies from: ata, katydee↑ comment by ata · 2010-12-11T23:34:23.568Z · LW(p) · GW(p)
No, you're right — talking about currently-living people is just a very conservative lower bound, since we don't have a good way of calculating how many people could exist in the future if existential risks are averted.
Replies from: Vladimir_Nesov, atucker↑ comment by Vladimir_Nesov · 2010-12-12T01:39:32.527Z · LW(p) · GW(p)
If existential risks are averted, you shouldn't count people, you should count goodness (that won't necessarily take the form of people or be commensurately influenced by different people). So the number of people (ems) we can fill the future with is also a conservative lower bound for that goodness, which knowably underestimates it.
↑ comment by Aleksei_Riikonen · 2010-12-11T23:44:57.874Z · LW(p) · GW(p)
At this point I must conclude either that you have no grasp whatsoever of the math involved here or that you're completely insane.
The good news is, the insanity that some LW posters have sunk to makes me think of this very entertaining Cthulhu fan video, which I will now share for the entertainment of all:
comment by Leonhart · 2010-12-10T13:56:47.077Z · LW(p) · GW(p)
I'm curious.
I am in the following epistemic situation: a) I missed, and thus don't know, BANNED TOPIC; b) I do, however, understand enough of the context to grasp why it was banned (basing this confidence on the upvotes to my old comment here).
Out of the members here who share roughly this position, am I the only one who - having strong evidence that EY is a better decision theorist than me, and understanding enough of previous LW discussions to realise that yes, information can hurt you in certain circumstances - is PLEASED that the topic was censored?
I mean, seriously. I never want to know what it was and I significantly resent the OP for continuing to stir the shit and (no matter how marginally) increasing the likelihood of the information being reposted and me accidentally seeing it.
Of course, maybe I'm miscalibrated. It would be interesting to know how many people are playing along to keep the peace, while actually laughing at the whole thing because of course no mere argument could possibly hurt them in their invincible mind fortresses.
(David Gerard, I'd be grateful if you could let me know if the above trips any cultishness flags.)
Replies from: Alicorn, David_Gerard, TheOtherDave, PlaidX, JoshuaZ, Larks, Vaniver, Bongo, Vladimir_Nesov, Oscar_Cunningham, Jonii, benelliott, Grognor, Eliezer_Yudkowsky↑ comment by Alicorn · 2010-12-10T16:21:52.442Z · LW(p) · GW(p)
I mean, seriously. I never want to know what it was and I significantly resent the OP for continuing to stir the shit and (no matter how marginally) increasing the likelihood of the information being reposted and me accidentally seeing it.
I award you +1 sanity point.
(I note that the Langford Basilisk in question is the only information that I know and wish I did not know. People acquainted with me and my attitude towards secrecy and not-knowing-things in general may make all appropriate inferences about how unpleasant I must find it to know the information, for me to state that I would prefer not to.)
Replies from: Normal_Anomaly, Tesseract, TraderJoe, Roko, Strange7↑ comment by Normal_Anomaly · 2010-12-12T16:40:46.003Z · LW(p) · GW(p)
Upvoted both the parent and the grandparent because I was nervous having no clue what was going on, looked at the basilisk, and would rather I hadn't. I'm not clever/imaginative enough to be sure why I shouldn't have done it, but it was still a dumb move. I'm glad the thing was censored and I applaud Leonhart for being sensible.
Replies from: Broggly↑ comment by Broggly · 2010-12-14T19:18:21.281Z · LW(p) · GW(p)
I'm not clever/imaginative enough that I shouldn't have done it, if people really shouldn't do it. On the other hand, if I somehow found out that people who have done it were taking drastic actions, that would worry me enough to make further investigations; but as far as I can tell I'm probably better off knowing if that's the case (I think, depending on how altruistic those people are, what EY and the SIAI can actually do, how many-worlds/"quantum immortality" works, etc.). Quite honestly it's far less of a worry to me than more mundane Friendliness failures.
↑ comment by Tesseract · 2010-12-11T05:03:51.822Z · LW(p) · GW(p)
Though reading this comment and others like it has managed to convince me not to seek out the deleted post, I can't help but think that they would be aided by a reminder of what it means to be Schmuck Bait.
↑ comment by Roko · 2010-12-12T12:51:34.092Z · LW(p) · GW(p)
the only information that I know and wish I did not know.
I don't think it's quite that extreme. For example, I wish I wasn't as intelligent as I am, wish I was more normal mentally and had more innate ability at socializing and less at math, wish I didn't suffer from smart sincere syndrome. I think these are all in roughly the same league as the banned material.
Replies from: Davorak↑ comment by David_Gerard · 2010-12-10T14:11:30.947Z · LW(p) · GW(p)
Not really :-) If you keep awareness of the cult attractor and can think of how thinking these things about an idea might trip you up, that's not a flawless defence but will help your defences against the dark arts.
What inspired you to the phrase "invincible mind fortresses"? I like it. Everyone thinks they live in one, that they're far too intelligent/knowledgeable/rational/Bayesian/aware of their biases/expert on cults/etc to fall into cultishness. They are of course wrong, but try telling them that. (It's like being smart enough to be quite aware of at least some of your own blithering stupidities.)
(I read the forbidden idea. It appears I'm dumb and ignorant enough to have thought it was just really silly, and this reaction appears common. This is why some people find the entire incident ridiculous. I admit my opinion could be wrong, and I don't actually find it interesting enough to have remembered the details.)
Replies from: None, Leonhart, Vladimir_Nesov↑ comment by [deleted] · 2010-12-10T15:19:52.556Z · LW(p) · GW(p)
(I read the forbidden idea. It appears I'm dumb and ignorant enough to have thought it was just really silly, and this reaction appears common. This is why some people find the entire incident ridiculous. I admit my opinion could be wrong, and I don't actually find it interesting enough to have remembered the details.)
Same here. I think (though no one has given a definitive answer) that there is concern about the general case of the specific hypothetical incident discussed therein, not the specific incident itself.
Replies from: Broggly↑ comment by Broggly · 2010-12-14T19:24:01.589Z · LW(p) · GW(p)
Hmm. I only read it recently, so maybe I haven't thought through the general case enough, but I think my solution (assuming it's not totally absurd) of treating it as though it is really silly, with the caveat that if it becomes non-silly I'm not exactly powerless, would work for all such cases.
↑ comment by Leonhart · 2010-12-10T17:26:45.634Z · LW(p) · GW(p)
Thank you. I've found your comments very useful, not least because when younger I came uncomfortably close to being parted from a reasonable sum of money by a group who understood the Dark Arts rather well. That was before I read Cialdini, but I'm not sure how well it would have sunk in without the object lesson.
I'm not good at thinking things are silly. That's great for getting suspension of disbelief and fun out of certain things (for example, I can enjoy JRPG plots :) but it's also a spot where one can be hit for massive damage.
As for the happy phrasing, I might have been thinking of this. (Warning: 4chan, albeit one of its nicer suburbs.)
↑ comment by Vladimir_Nesov · 2010-12-10T15:25:19.978Z · LW(p) · GW(p)
Everyone thinks they live in [invincible mind fortresses], that they're far too intelligent/knowledgeable/rational/Bayesian/aware of their biases/expert on cults/etc to fall into cultishness. They are of course wrong, but try telling them that.
Again you tell us. Some people who think that are right. They are NOT "of course" wrong. A random person isn't guaranteed to be vulnerable, and there are people of whom you can say that they are most certainly invincible. That any person is "of course vulnerable" is of course wrong as a point of simple fact.
Replies from: TheOtherDave, David_Gerard↑ comment by TheOtherDave · 2010-12-10T16:15:38.152Z · LW(p) · GW(p)
I would be interested in hearing about your evidence for the existence of people who are "most certainly invincible" to cultishness, as I'm not sure how I would go about testing that.
↑ comment by David_Gerard · 2010-12-10T16:10:35.046Z · LW(p) · GW(p)
I think a lot more people are vulnerable than consider themselves vulnerable. You can substitute "most" for "all" if you like.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-12-10T16:19:06.671Z · LW(p) · GW(p)
I think a lot more people are vulnerable than consider themselves vulnerable.
I mainly object to "of course", and your argument cited here (irrespective of its correctness) doesn't even try to support it. Please be more careful in what you use: you can't just throw in an arbitrarily picked affective soldier; it has to actually argue for the conclusion it's supposed to support (i.e., be (inferential) evidence in its favor to an extent that warrants changing the conclusion).
Replies from: David_Gerard↑ comment by David_Gerard · 2010-12-10T16:24:14.370Z · LW(p) · GW(p)
I think a lot more people are vulnerable than consider themselves vulnerable.
I mainly object to "of course", and your argument cited here (irrespective of its correctness) doesn't even try to support it.
I wasn't making an argument (a series of propositions intended to support a conclusion), I was talking about the subject in passing. These are different modes of communication, and I would have thought it reasonably clear which one was being used.
The "of course" is because it's a cognitive error: people are sure it could never happen to them. I observe them being really quickly, really certain of that when they hear of someone else falling for cultishness - that's the "of course". In some cases this will be true, but it's far from universally true. I don't know which particular error or combination of errors it is, but it does seem to be a cognitive error. It is true that I do need to work out which ones it is so that I can talk about it without those people who reply "aha, but you haven't proven right here it's every single one, aha" and think they've added something useful to discussion of the topic.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-12-10T16:49:46.063Z · LW(p) · GW(p)
I see. So they can sometimes be accidentally correct in expecting that they are not vulnerable, as in fact they will not be vulnerable, but their level of certainty in that fact will almost certainly ("of course") be off in a systematic predictable way. This interpretation works.
I wasn't making an argument (a series of propositions intended to support a conclusion), I was talking about the subject in passing. These are different modes of communication, and I would have thought it reasonably clear which one was being used.
I think of the "talking about the subject in passing" mode as "making errors, because it's easier that way", which looks to me as a good argument for making errors, but they are still errors.
↑ comment by TheOtherDave · 2010-12-10T14:43:30.777Z · LW(p) · GW(p)
It would be interesting to know how many people are playing along to keep the peace, while actually laughing at the whole thing because of course no mere argument could possibly hurt them in their invincible mind fortresses
In general, I treat attempts to focus my attention on any particular highly-unlikely-but-really-bad scenario as an invitation to inappropriately privilege the hypothesis, probably a motivated one, and I discount accordingly. So on balance, yeah, you can count me as "playing along" the way you mean it here.
I don't think my mind-fortress is invincible, and I am perfectly capable of being hurt by stuff on the Internet. I'm also perfectly capable of being hurt by a moving car, and yet I drive to work every morning.
And yes, if the dangerousness of the Dangerous Idea seems more relevant to you in this case than the politics of the community, I think you're miscalibrated. The odds of a power struggle in a community in which you have transient membership affecting your life negatively are very small, but I'd be astonished if they were anything short of astronomically higher than the odds of the Dangerous Idea itself affecting your life at all.
↑ comment by PlaidX · 2010-12-10T22:09:09.972Z · LW(p) · GW(p)
I also regret contact with the basilisk, but would not say it's the only information I wish I didn't know, nor am I entirely sure it was a good idea to censor it.
When it was originally posted I did not take it seriously; it only triggered "severe mental trauma", as others are saying, when I later read someone referring to it being censored, felt some curiosity regarding it, and updated on the fact that it was being taken that seriously by others here.
I do not think the idea holds water, and I feel I owe much of my severe mental trauma to an ongoing anxiety and depression stemming from a host of ordinary factors, isolation chief among them. I would STRONGLY advise everyone in this community to take their mental health more seriously, not so much in terms of basilisks as in terms of being human beings.
This community is, as it stands, ill-equipped to charge forth valiantly into the unknown. It is neurotic at best.
I would also like to apologize for whatever extent I was a player in the early formation of the cauldron of ideas which spawned the basilisk and I'm sure will spawn other basilisks in due time. I participated with a fairly callous abandon in the SL4 threads which prefigure these ideas.
Even at the time it was apparent to anyone paying attention that the general gist of these things was walking a worrisome path, and I basically thought "well, I can see my way clear through these brambles, if other people can't, that's their problem."
We have responsibilities, to ourselves as much as to each other, beyond simply being logical. I have lately been reexamining much of my life, and have taken to practicing meditation. I find it to be a significant aid in combating general anxiety.
Also helpful: clonazepam.
Replies from: XiXiDu↑ comment by XiXiDu · 2010-12-11T10:29:33.510Z · LW(p) · GW(p)
...when I later read someone referring to it being censored, felt some curiosity regarding it, and updated on the fact that it was being taken that seriously by others here.
If you join a community concerned with decision theory, are you surprised by the fact that they take problems in decision theory seriously?
There is no expected payoff in harming me just because decision theory implies it is rational, because I do not follow such procedures. If something wants to waste its resources on it, I win, because I weaken it. It has to waste resources on me that it could use in the dark ages of the universe to support a protégé, and it never receives any payoff for this, because I do not play along in any branch in which I exist. You see, any decision theory is useless if you deal with agents that don't care about such things. Utility is completely subjective too; as Hume said, "'Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger." The whole problem in question is just due to the fact that people think that if decision theory implies a strategy is favorable then you have to follow through on it. Well, no. You can always say: fuck you! The might of God and terrorists is in the mind of their victims.
Replies from: Broggly↑ comment by Broggly · 2010-12-14T19:32:36.501Z · LW(p) · GW(p)
If you join a community concerned with decision theory, are you surprised by the fact that they take problems in decision theory seriously?
Are they? Are they really? What actual, concrete actions have been taken, or are planned, regarding the basilisk? If people actually make material sacrifices based on having seen the Basilisk then I'm willing to take it seriously, if only for its effects on the human mind. Then again in the most worrying (or third most worrying, I guess) case, they would likely hide said activities to prevent anything damaging their plans. They could also hide it out of altruism to keep from disturbing halfway smart basilisk seers like us, I guess.
Replies from: drethelin↑ comment by JoshuaZ · 2010-12-10T20:49:29.208Z · LW(p) · GW(p)
I saw the original post. I had trouble taking the problem that seriously in the general case. In particular, there seemed to be two obvious problems that arose from the post in question. One was a direct decision-theoretic basilisk; the other was a closely associated problem that was empirically causing basilisk-like results in some people who knew about the problem in question. I consider the first problem (the obvious decision-theoretic basilisk) to be extremely unlikely. But since then I've talked to at least one person (not Eliezer) who knows a lot more about the idea, who has asserted that there are more subtle aspects of the basilisk which could make it or related basilisks more likely. I don't know if that person has a better understanding of decision theory than I do, but he's certainly thought about these issues a lot more than I have, so it did move my estimate that there was a real threat here upwards. But even given that, I still consider the problems to be unlikely. I'm much more concerned about the pseudo-basilisk which has empirically struck some people. The pseudo-basilisk itself might justify the censorship. Overall, I'm unconvinced.
↑ comment by Vaniver · 2010-12-10T17:51:17.128Z · LW(p) · GW(p)
I have read the idea. I am unscathed. It is not difficult to find, if you look.
There is some chance my mind fortress is better defended than other people's -- I am known to be level-headed in situations with and without the presence of imminent physical harm -- but I don't think that applies to this particular circumstance. It felt to me like something you would have to convince yourself to care about -- and so for some people that may be easier than it is for others (or automatic).
Replies from: Hul-Gil, prase↑ comment by Hul-Gil · 2011-07-26T01:19:09.437Z · LW(p) · GW(p)
Hi there, Vaniver. I figured I'd ask you about this, because others seem too disturbed by the idea for me to want to bring it up again. Anyway, I've been reading through old threads, and encountered mention of this "basilisk"... and now I'm extremely curious. What was this idea that made so many people uncomfortable?
Edit Update: On the advice of several people, I am leaving this alone for now. If I do go ahead and read it, I'll edit this post again with my thoughts.
Replies from: Alicorn↑ comment by Alicorn · 2011-07-26T01:37:36.157Z · LW(p) · GW(p)
Please abandon this project, for your safety and comfort, that of people you might tell, and that of others who your "benefactor" might be disposed to tell if you succeed in weakening someone's resolve to keep it safely secret.
Replies from: Hul-Gil↑ comment by Hul-Gil · 2011-07-26T01:41:29.259Z · LW(p) · GW(p)
Since several posters reported that they were not affected by the basilisk, I am thinking my mental safety and comfort might not be affected. (I'm assuming you're referring to the possibility of anxiety, etc? I do suffer from anxiety, but I've had to learn to deal with fairly horrific things, so I am not easily disturbed any more.) I certainly won't tell anyone, even if I had someone to tell, and if someone has resolved to keep it secret I doubt they will tell me in the first place.
I'm not too worried about finding out, though; if no one wants to say, I won't pressure anyone to. That's why I have asked someone who wasn't affected: they will surely be able to judge without fear making them irrational. If they still don't want to say, I'll just live with being curious.
Replies from: wedrifid↑ comment by wedrifid · 2011-07-26T17:26:35.990Z · LW(p) · GW(p)
Since several posters reported that they were not affected by the basilisk, I am thinking my mental safety and comfort might not be affected.
I encourage you to accede to the tribal wishes and not tell anyone about the idea, at least within the tribe and the scope of where LessWrong can claim any influence whatsoever (as you've already agreed). As you say, you don't sound like the sort of person who could be harmed by reading it personally, so you need not be concerned for your own sake.
Replies from: lessdazed↑ comment by lessdazed · 2011-08-16T05:31:56.762Z · LW(p) · GW(p)
It seems like it would be easy to predict an individual's reaction to the thing by looking for correlated reactions between that and some other things from people who have seen it all, and then seeing how a given innocent reacts to those other things.
I bet some pretty strong patterns would emerge, and we could predict reactions to the thing. I do not think that protecting people from harm now is a true objection, for it could be dealt with by identifying vulnerable people and not making the whole topic such forbidden fruit.
↑ comment by prase · 2010-12-11T00:33:09.707Z · LW(p) · GW(p)
It depends on how strongly you believe in singularity. It is easy to ignore the whole thing as silly (which is essentially what I do), but if you have slightly different priors (or reasoning), it may be harmful.
Replies from: Vaniver↑ comment by Vaniver · 2010-12-11T01:01:08.934Z · LW(p) · GW(p)
It depends on how strongly you believe in singularity.
While part of it, that doesn't appear to be all of it. It seems like it only applies for a narrow range of possible singularities. I keep coming back to visibility bias when I think about this.
↑ comment by Vladimir_Nesov · 2010-12-10T15:21:31.767Z · LW(p) · GW(p)
It would be interesting to know how many people are playing along to keep the peace, while actually laughing at the whole thing because of course no mere argument could possibly hurt them in their invincible mind fortresses
I'm certain that the forbidden topic couldn't possibly hurt me (probability of that is zilch). Still, I agree that from what we know, considering it should be discouraged, based on an expected utility argument (it either changes nothing or hurts tremendously with tiny probability, but can't correspondingly help tremendously because human value is a narrow target). Don't confuse these two arguments.
(I think this is my best summary of the shape of the argument so far.)
Replies from: Psy-Kosh↑ comment by Psy-Kosh · 2010-12-10T17:01:44.452Z · LW(p) · GW(p)
(EDIT2: Looking at the discussion here, I am now reminded that it is not just potentially toxic due to decision theoretic oddities, but actually already known to be severely psychologically toxic to at least some people. This, of course, changes things significantly, and I am retracting my "being bugged" by the removal.)
The thing that's been bugging me about this whole issue is: even given that a certain piece of information MAY (with really tiny probability) be highly (for lack of a better word) toxic... should we as humans really be in the habit of "this seems like a dangerous idea, don't think about it"?
I can't help but think this must violate something analogous (though not identical) to an ethical injunction. I.e., the chances of a human encountering an inherently toxic idea are so small vs. the cost of smothering one's own curiosity/allowing censorship not due to trollishness or even revelation of technical details that could be used to do a really dangerous thing, but simply because it is judged dangerous to even think about...
I get why this was perhaps a very particular special circumstance, but am still of several minds about this one. "Don't think about the deliciously forbidden dangerous idea, just don't", even if it perhaps actually is indicated in certain very unusual special cases, seems like the sort of thing that one would, as a human, want injunctions against.
Again, I'm of several minds on this however.
(EDIT: Just to clarify, that does not mean that I in any way approve of "existential threat blackmail" or that I'm even of two minds about that. That's just epically stupid)
Replies from: David_Gerard↑ comment by David_Gerard · 2010-12-11T15:19:11.163Z · LW(p) · GW(p)
(EDIT2: Looking at the discussion here, I am now reminded that it is not just potentially toxic due to decision theoretic oddities, but actually already known to be severely psychologically toxic to at least some people. This, of course, changes things significantly, and I am retracting my "being bugged" by the removal.)
Yeah, that was the reason that convinced me its removal from here was a good enough idea to bother enacting. I wouldn't try removing it from the net, but due warning is appropriate. Such things attract curious monkeys to test the wet paint - but! I still haven't seen 2 Girls 1 Cup and have no plans to! So it's not assured.
Replies from: Strange7↑ comment by Oscar_Cunningham · 2010-12-10T14:04:49.417Z · LW(p) · GW(p)
I feel the same as you, even though I know what the banned topic was. I haven't thought about it too deeply, because, well, duh.
↑ comment by Jonii · 2010-12-11T21:41:10.569Z · LW(p) · GW(p)
I sought out the dangerous idea right after I heard about the commotion, and I was disappointed. I discussed the idea and thought about it hard. I'm still a bit unsure if I figured out why people think of the idea as dangerous, but to me it seems to be just plain silly.
I don't regret knowing it. I figured right from the start that the probability of it actually being dangerous was low enough that I didn't need to care about it, and it seems that my initial guess was right on the spot. And I really do dislike not knowing about things that everybody says are really dangerous and can cause me and my loved ones much agony, for reasons no one is allowed to tell.
Replies from: Jonii↑ comment by Jonii · 2010-12-12T02:10:08.790Z · LW(p) · GW(p)
Oh, thanks to more discussion today, I figured out why the dangerous idea is dangerous, and now I understand why people shouldn't seek it. More like, the actual idea is not dangerous, but it can potentially lead to dangerous ones. At least, if I understood the entire thing correctly. So, I understand that it is harmful for us to seek that idea, and if possible, it shouldn't be discussed.
↑ comment by benelliott · 2010-12-11T18:07:30.677Z · LW(p) · GW(p)
I've never seen the basilisk (and I have just about resisted the very powerful urge to seek it out), but if one of us came up with a dangerous idea, is it not likely that an AI would do the same? Taking into account the vastly greater capacity of an AI to cause harm if 'infected', might we not gain more from looking at the problem now, in case we can find a resolution (perhaps a better decision theory) and use that to avert a genuinely catastrophic outcome? Even if our hopes of solving the problem are not high, the probabilities and utilities may still advise it.
Of course, since I haven't seen it, I might be totally misunderstanding the situation, or maybe there is an excellent reason why the above is wrong that I can't understand without exposing myself to the basilisk. Even if this isn't the case, it might still be best for a few people who have already seen it to work on the problem, rather than informing someone like me who probably wouldn't be much help anyway.
If it's not too much trouble, could you at least sate my burning curiosity by telling me which of the three options above, if any, is correct?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-12-11T20:24:58.111Z · LW(p) · GW(p)
You're totally misunderstanding the situation.
Replies from: benelliott↑ comment by benelliott · 2010-12-11T21:57:09.206Z · LW(p) · GW(p)
Thanks.
↑ comment by Grognor · 2012-03-16T10:43:53.952Z · LW(p) · GW(p)
If you're still curious, after all these years, and if another data point is still helpful-
I know the information in question, and I anticipate a non-negligible probability of being tortured horribly for knowing it (though presumably an FAI would figure out a way to make everyone think this happened rather than actually doing it), but oddly I am not sure whether I regret knowing it.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-12-10T20:42:07.591Z · LW(p) · GW(p)
Aw, look, it's someone sane.
Replies from: cousin_it↑ comment by cousin_it · 2010-12-13T16:31:14.756Z · LW(p) · GW(p)
Hi Eliezer. It took me way too long to figure out the right question to ask about this mess, but here it is: do you regret knowing about the basilisk?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-12-13T22:13:33.287Z · LW(p) · GW(p)
I regret that I work in a job which will, at some future point, require me to be one of maybe 2 or 3 people who have to think about this matter in order to confirm whether any damage has probably been done and maximize the chances of repairing the damage after the fact. No one who is not directly working on the exact code of a foomgoing AI has any legitimate reason to think about this, and from my perspective the thoughts involved are not even that interesting or complicated.
The existence of this class of basilisks was obvious to me in 2004-2005 or thereabouts. At the time I did not believe that anyone could possibly be smart enough to see the possibility of such a basilisk and stupid enough to talk about it publicly, or at all for that matter. As a result of this affair I have updated in the direction of "people are genuinely that stupid and that incapable of shutting up".
This is not a difficult research problem on which I require assistance. This is other people being stupid and me getting stuck cleaning up the mess, in what will be a fairly straightforward fashion if it can be done at all.
comment by nazgulnarsil · 2010-12-11T03:24:28.247Z · LW(p) · GW(p)
oh goody, lesswrong finally has its own super villain. is any community really complete without one?
Replies from: Will_Newsome, wedrifid, Manfred↑ comment by Will_Newsome · 2010-12-12T00:30:42.564Z · LW(p) · GW(p)
WHAT?! We need a much better supervillain. Ideally a sympathetic well-intentioned one so we can have some black and grey morality flying back and forth. Someone like.... Yvain.
Replies from: Kevin↑ comment by Manfred · 2010-12-11T03:58:14.378Z · LW(p) · GW(p)
This is an unprofitable way to think about the problem. If it becomes a Moral Imperative not to come to any sort of resolution, well then, we'll never see any sort of resolution.
Replies from: nazgulnarsil↑ comment by nazgulnarsil · 2010-12-11T07:31:25.209Z · LW(p) · GW(p)
next time gadget! next time!
I can't really imagine a resolution at this point that doesn't signal vulnerability to trolls in the future.
edit: How about a script that prefaces waitingforgodel's posts with "meanwhile, at the hall of doom:"
comment by Snowyowl · 2010-12-10T08:20:12.891Z · LW(p) · GW(p)
You're participating in a flamewar here, though it's a credit to you, EY, and LessWrong that nobody has yet posted in all caps. Tempers are running high all around; I recommend that one or all parties involved stop fighting before someone gets hurt. (read: is banned, has their reputation irrevocably damaged, or otherwise has their ability to argue compromised).
0.0001% is a huge amount of risk, enough that if one person in six thousand did what you just did, humanity should be doomed to certain extinction. Even murder doesn't have such a huge effect. I think you overestimate the impact of your actions. Sending a few emails to a blogger has an impact I would estimate to be 10^(-15) or less.
Certainly making this post has little purpose beyond inciting an argument. All you'll do is polarise LessWrong and turn us against each other.
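A minimal sketch of the arithmetic behind the "one person in six thousand" claim, again assuming a 2010 world population of roughly 6.79 billion: the naive additive reading pushes the summed risk past 100%, while treating each act as an independent event gives a large but sub-certain figure.

```python
# Cumulative risk if one person in six thousand each added 0.0001% (1e-6) to extinction risk.
# The world population figure is an approximation for 2010.
world_population = 6.79e9
per_person_risk = 1e-6
people = world_population / 6000                    # "one person in six thousand"

additive = people * per_person_risk                 # naive sum of the individual risks
independent = 1 - (1 - per_person_risk) ** people   # if each contribution were an independent event

print(f"People acting: {people:,.0f}")                  # ~1,131,667
print(f"Additive total: {additive:.0%}")                # ~113% -- past "certain extinction" on this reading
print(f"Independent-events total: {independent:.0%}")   # ~68%
```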
Replies from: Will_Sawin, Eliezer_Yudkowsky↑ comment by Will_Sawin · 2010-12-10T17:20:37.000Z · LW(p) · GW(p)
Mildly interesting fact: I would have used capital letters when I said "This doesn't require murdering 6,790 people," if not for this comment.
Is this type of praise, overall, effective in keeping the tone civil? Is it more effective than other methods?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2010-12-10T19:59:58.723Z · LW(p) · GW(p)
Well, a lot depends on what we mean by "effective" and "overall."
For example, it's a common observation by animal trainers that positive reinforcement training -- that is, rewarding the behavior that you want and ignoring the behavior that you don't want -- is a more effective form of behavior modification than many alternatives... in particular, than punishing the behavior you don't want.
That said, punishment is the fastest way of getting that behavior to stop in the environment you punish it in. And punishing it severely and consistently enough can also be very effective in getting it to stop in all environments.
The problem is knock-on effects. For example, if I beat my dog every time she barks around me, she'll quickly stop barking around me. She will also most likely stop choosing to be around me at all. Whether that was an effective form of behavioral modification depends a lot on my goals.
(There are other problems as well... for example, dispensing punishment can be rewarding for some people in some situations, which creates potential for escalations.)
The same principles apply to modifying human behavior, though it's generally counterproductive to call attention to it.
So, yes, praising civil behavior is an effective way of getting more of it, especially if you aren't seen as praising it in order to get more of it. More generally, rewarding civil behavior (e.g., by differentially attending to it, by awarding it karma, and so forth) is a way of getting more of it.
All of that said... there are more effective methods. Modeling the desired behavior can be way more cost-effective than rewarding it, for example, depending on the scale of the group and the number of modelers.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-12-10T08:33:37.097Z · LW(p) · GW(p)
I invite anyone who still sides with WaitingForGodel at this point to leave and find a site more suited to their intellects. I am sure it will only frustrate them and us to have them stick around.
Replies from: komponisto, Aharon↑ comment by komponisto · 2010-12-10T14:58:17.542Z · LW(p) · GW(p)
Reversed stupidity not being intelligence, I'll point out that I "side with" waitingforgodel to the extent of disapproving of the censorship that occurred yesterday (though I haven't complained about the original censorship from July).
Needless to say, of course, I also think this post is silly.
↑ comment by Aharon · 2010-12-10T13:05:12.226Z · LW(p) · GW(p)
I haven't followed the whole thing, because I couldn't. How can I decide whether he is right or not? I don't know what was censored, and why. The thread on academic careers just had some big holes where, presumably, things were deleted, and I couldn't reconstruct why.
Other forums have some kind of policy, where they explicitly say what kind of post will be censored. I'm not against censoring stuff, but knowing what is worthy of being censored and what isn't would be nice.
With the knowledge I currently have about this whole thing, I still feel slightly sympathetic to WaitingForGodel's cause. The "Free Speech is important" heuristic that Vladimir Nesov mentioned in the other thread is pretty useful, in my opinion, and without knowing the reason for the posts being deleted, I can't decide for myself whether it made sense or not.
I intend to stick around, anyway, because I don't feel very strongly about this issue, so I won't frustrate anybody, I hope. But an answer would still be nice.
Replies from: rwallace↑ comment by rwallace · 2010-12-10T16:48:20.060Z · LW(p) · GW(p)
I do know what was censored and why, and I think Eliezer was wrong to delete the material in question.
That's a separate issue from whether waitingforgodel's method of expressing his (correct) disagreement with the censorship is sane or reasonable -- of course it isn't.
Replies from: Vaniver↑ comment by Vaniver · 2010-12-10T18:06:40.713Z · LW(p) · GW(p)
That's a separate issue from whether waitingforgodel's method of expressing his (correct) disagreement with the censorship is sane or reasonable -- of course it isn't.
Though, I can see a strong argument for "blow up whenever your rights are threatened," especially if you expect that you will only be able to raise awareness, not effect change. It also means those of us who internalized the sequences have our evaporative cooling alarms triggering. Is disagreeing with the existence of Langford basilisks, and caring enough to make a stink about it instead of just scoff, really enough to show someone the door?
Replies from: rwallace, WrongBot↑ comment by rwallace · 2010-12-10T18:43:17.935Z · LW(p) · GW(p)
It's true that the basilisk in question is a wild fantasy even by Singularitarian standards, and the fact that people took it seriously enough to get upset about it could well be considered cause for alarm.
But that's not why people are telling waitingforgodel they'd rather he left. People are telling him that because he took action he sincerely (perhaps wrongly, but sincerely) believed would reduce humanity's chances of survival. That's a lot crazier than believing in basilisks!
And the pity is, it's not true he couldn't effect change. The right thing to do in a scenario like this is propose reasonable compromises (like the idea of rot13'ing posts on topics people find upsetting) and if those fail then, with the moral high ground under your feet, find or create an alternative site for discussion of the banned topics. Not only would that be morally better than this nutty blackmail scheme, it would also be more effective.
This is a great example of the general rule that if you think you need to do something crazy or evil for the greater good, you are probably wrong -- keep looking for a better solution instead.
Replies from: Vaniver, David_Gerard↑ comment by Vaniver · 2010-12-10T19:03:47.773Z · LW(p) · GW(p)
But that's not why people are telling waitingforgodel they'd rather he left. People are telling him that because he took action he sincerely (perhaps wrongly, but sincerely) believed would reduce humanity's chances of survival. That's a lot crazier than believing in basilisks!
I am not entirely clear on the timeline -- I haven't researched his precommitment and whether or not EY saw it -- but at some point EY commented in his Mod Voice that undeleting comments was subject to banning, and so that is the part where most people seem to agree that wfg went crazy.
So it's not "wow, you're murdering people to make a point?" that started people saying "maybe you ought not be here," but it certainly is what made that idea catch on.
And the pity is, it's not true he couldn't effect change. The right thing to do in a scenario like this is propose reasonable compromises (like the idea of rot13'ing posts on topics people find upsetting) and if those fail then, with the moral high ground under your feet, find or create an alternative site for discussion of the banned topics. Not only would that be morally better than this nutty blackmail scheme, it would also be more effective.
I agree with the desirability of this hypothetical. I have no data on the probability of this hypothetical.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-12-10T20:38:00.213Z · LW(p) · GW(p)
No, WFG committed to that before I said anything in Mod Voice.
Replies from: Vaniver↑ comment by David_Gerard · 2010-12-10T19:45:26.005Z · LW(p) · GW(p)
But that's not why people are telling waitingforgodel they'd rather he left. People are telling him that because he took action he sincerely (perhaps wrongly, but sincerely) believed would reduce humanity's chances of survival. That's a lot crazier than believing in basilisks!
My main problem is just that he's being a bit of a dick, and this is bad in social spaces.
↑ comment by WrongBot · 2010-12-10T18:22:20.920Z · LW(p) · GW(p)
Is disagreeing with the existence of Langford basilisks, and caring enough to make a stink about it instead of just scoff, really enough to show someone the door?
No. Threatening to kill ~6,790 people and then claiming to have actually gone through with it, however, is.
Replies from: waitingforgodel↑ comment by waitingforgodel · 2010-12-10T18:31:35.373Z · LW(p) · GW(p)
By my math it's an existential risk reduction. Your point was talked about already in the "precommitment" post linked to from this article.
Replies from: WrongBot, Jack↑ comment by Jack · 2010-12-10T18:40:19.405Z · LW(p) · GW(p)
Why not share 'the Basilisk' with more people every time EY censors a post instead of raising existential risk?
Replies from: Oscar_Cunningham↑ comment by Oscar_Cunningham · 2010-12-10T18:59:08.951Z · LW(p) · GW(p)
Is this comment the forum's first meta-basilisk?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-12-10T21:30:21.225Z · LW(p) · GW(p)
Reminder: Exposure to the basilisk can cause and has caused immediate severe mental torment to people with OCD or strong OCD tendencies. Again, this has already happened (at least two reports that I know of). So that's like posting a video that gives vulnerable people epileptic fits, like that infamous Pokemon episode.
"Please remember", he said in a dryly sarcastic voice, "that not everyone's mind is an invincible fortress like yours."
Replies from: Jack, Oscar_Cunningham↑ comment by Oscar_Cunningham · 2010-12-10T22:38:27.644Z · LW(p) · GW(p)
Sorry, I think this is lost on me, why did you post this in reply to my comment?
Replies from: wedrifid↑ comment by wedrifid · 2010-12-11T06:37:08.314Z · LW(p) · GW(p)
What I want to know is why Eliezer is still advertising the fact that members of SIAI are psychologically incapable of even considering the kinds of issues that come along with thinking about singularities. I can kind of understand him letting that slip in the heat of the moment, while in the throes of his emotional outburst, but why is he still saying it now? Why wouldn't he be trying to do whatever he can to convey that the important thing in his mind is the deep game-theoretical issue that nobody else is sophisticated enough to understand?
Sure, even if someone at the SIAI has a disability in one area they could well make valuable contributions in another. But that doesn't make it something to boast about publicly without taking care to emphasise that not everyone else is so crippled.
If you are vulnerable to epileptic fits don't work in a pokemon factory - even if your factory only creates 'good' pokemon!
Replies from: major↑ comment by major · 2010-12-12T10:31:37.434Z · LW(p) · GW(p)
wedrifid
My interpretation (which Eliezer's above comment seems to have confirmed) was, Eliezer deleted Roko's comment for the exact same reason he would have deleted an epileptic-fit-inducing animation. Simply to protect some of the readers, many of whom might not even be aware of their own vulnerability, for this is not exactly a commonly triggered or recognized weakness.
I felt all the rest with 'existential risk' and 'supressed ideas' was just added by people in the absence of real information. Like, someone saw 'existential risk' near (in?) Roko's comment and heard that Eliezer is worried about 'existential risks' so they concluded that must have been the reason the post was deleted. This sort of thing tends to happen, especially when they were already critical, such as timtyler, who was taking potshots at Eliezer and the SIAI even before Roko's post was deleted (top 2 comments). (Yes, I mention timtyler because I know his opinion could have affected yours)
My big problem with this theory is that it requires you to have been making a basic mistake, which is always suspect, since you've shown yourself to be a smart and competent poster. (That some other posters, such as WFG, were foolish is a given, I'm afraid.) So the simplest way to resolve my confusion is to ask you directly, hence this comment.
Why do you dismiss the above interpretation? What do you see that I don't?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-12-12T11:11:54.199Z · LW(p) · GW(p)
Yes, whole rafts of stuff are being made-up here.
Replies from: wedrifid↑ comment by wedrifid · 2010-12-12T11:18:33.636Z · LW(p) · GW(p)
Since you have already replied to the grandparent with a partial affirmation could you please confirm or (I hope) deny the primary contention of said comment?
My interpretation (which Eliezer's above comment seems to have confirmed) was, Eliezer deleted Roko's comment for the exact same reason he would have deleted an epileptic-fit-inducing animation.
That is another idiot ball which I have assumed you are not guilty of bearing. But if you are giving support to a comment which presents such an interpretation it warrants clarification.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-12-12T11:52:44.419Z · LW(p) · GW(p)
Depends what you mean by "exact same". I deleted the basilisk strictly to protect readers, yes. I didn't realize at the time that there was also an immediate damage mode for unusually vulnerable readers.
comment by cousin_it · 2010-12-10T09:41:29.769Z · LW(p) · GW(p)
I agree with Eliezer's comment asking you to leave. Even if LW had heavy censorship, I'd still read it (and hopefully participate) because of the great signal to noise ratio, which is what you're hurting with all your posts and comments - they don't add anything to my understanding.
comment by Unnamed · 2010-12-10T20:03:13.544Z · LW(p) · GW(p)
All four examples involve threats - one party threatening to punish another unless the other party obeys some rule - but the last threat (threatening to increase existential risk contingent on acts of forum moderation) sticks out as different from the others in several ways:
- Proportionality. The punishment in the other examples seems roughly proportional to the offense ($500 may seem a bit high for one album, but is in the ballpark given the low chance of being caught), but over 6,000 deaths (in expectation) plus preventing who-knows-how-many people from ever living is disproportionate to the offense of deleting forum comments.
- Narrow targeting. Most of the punishments are narrowly targeted at the offender - the offender is the one who suffers the negative consequences of the punishment, as much as possible (although there are some broader consequences - e.g., the rest of the forum is deprived of a banned poster's comments). But the existential risk threat is not targeted at all - it's aimed at the whole world. Threats to third parties are usually frowned upon - think of hostage taking, or threats to harm someone's family.
- Legitimate authority. There are laws & conventions regarding who has authority over what, and these limit what threats are seen as acceptable. Threats can be dangerous and destructive, because of the possibility that they will actually be carried out and because of the risk of escalating threats and counter-threats as people try to influence each other's behavior, and these conventions about domains of authority help limit the damage. It's widely accepted that the government is allowed to regulate driving and intellectual property, and to use fines as punishment. The law grants IP-holders rights to sue for money. Forum moderators are understood to have control over what gets posted on their forum, and who posts. But a single forum user does not have the authority to dictate what gets posted on a forum.
- Accountability. Those with legitimate authority are usually accountable to a broader public. If citizens oppose a law they can replace the legislators with ones who will change the law, and since legislators know this and want to keep their jobs they pay attention to the citizens' views when passing laws. Members of an online forum can leave en masse to another forum if they disagree strongly with the moderation policy, and forums take this into account when they set their moderation policy. But one person who threatens to increase existential risk if his preferred forum policy isn't put into place is not accountable to anyone - it doesn't matter how many people disagree with his preferred forum policy, or with his proposed punishment.
I'm not entirely in agreement with the first three threats, but they're at least within the bounds of the kinds of threats that are commonly acceptable. The fourth is not.
Replies from: David_Gerard, Manfred, waitingforgodel↑ comment by David_Gerard · 2010-12-10T21:19:51.223Z · LW(p) · GW(p)
And 5. Ridiculousness. "He threatened what? ... And they took it seriously?"
(Posted as an example of a way this is notably different to the typical example. Note that this is also my reaction, but I might well be wrong.)
↑ comment by Manfred · 2010-12-11T04:07:41.284Z · LW(p) · GW(p)
My bet would be that he believes that it is proportional. From where I'm standing, this looks like assigning too much impact to LW and to censorship of posts. Note that 2 and 4 are particularly good arguments why something of this nature was dumb regardless of importance.
↑ comment by waitingforgodel · 2010-12-11T05:15:14.144Z · LW(p) · GW(p)
Re #1: EY claimed his censorship caused something like 0.0001% risk reduction at the time, hence the amount chosen -- it is there to balance his motivation out.
Re #2: Letting Christians/Republicans know that they should be interested in passing a law is not the same as hostage taking or harming someone's family. I agree that narrow targeting is preferable.
Re #3 and #4: I have a right to tell Christians/Republicans about a law they're likely to feel should be passed -- it's a right granted to me by the country I live in. I can tell them about that law for whatever reason I want. That's also a right granted to me by the country I live in. By definition this is legitimate authority, because a legitimate authority granted me these rights.
Replies from: wedrifid↑ comment by wedrifid · 2010-12-11T05:16:07.552Z · LW(p) · GW(p)
Re #1: EY claimed his censorship caused something like 0.0001% risk reduction at the time, hence the amount chosen -- it is there to balance his motivation out.
Citation? That sounds like an insane thing for Eliezer to have said.
Replies from: waitingforgodel↑ comment by waitingforgodel · 2010-12-11T06:11:56.073Z · LW(p) · GW(p)
After reviewing my copies of the deleted post, I can say that he doesn't say this explicitly. I was remembering another commenter who was trying to work out the implications on x-risk of having viewed the basilisk.
EY does say things that directly imply he thinks the post is a basilisk because of an x-risk increase, but he does not say what he thinks that increase is.
Edit: can't reply, no karma. It means I don't know if it's proportional.
Replies from: wedrifid, TheOtherDave, Nick_Tarleton, Nick_Tarleton↑ comment by wedrifid · 2010-12-11T06:20:22.040Z · LW(p) · GW(p)
Nod. That makes more sense.
One thing that Eliezer takes care to avoid doing is giving his actual numbers regarding the existential possibilities, and that is an extremely wise decision. Not everyone has fully internalised the idea behind Shut Up and Do The Impossible! Even if Eliezer believed that all of the work he and the SIAI may do will only improve our existential expectation by the kind of tiny amount you mention, it would most likely still be the right choice to go ahead and do exactly what he is trying to do. But not everyone is that good at multiplication.
↑ comment by TheOtherDave · 2010-12-11T06:19:55.657Z · LW(p) · GW(p)
Does that mean you're backing away from your assertion of proportionality?
Or just that you're using a different argument to support it?
↑ comment by Nick_Tarleton · 2010-12-11T12:37:09.861Z · LW(p) · GW(p)
EY does say things that directly imply he thinks the post is a basilisk because of an x-risk increase
I'm pretty sure that this is false.
↑ comment by Nick_Tarleton · 2010-12-11T12:36:42.564Z · LW(p) · GW(p)
EY does say things that directly imply he thinks the post is a basilisk because of an x-risk increase
I'm fairly certain this is false.
comment by ronnoch · 2010-12-10T22:39:44.447Z · LW(p) · GW(p)
This is an excellent cautionary tale about being careful what you precommit to.
Replies from: waitingforgodel↑ comment by waitingforgodel · 2010-12-11T04:57:46.427Z · LW(p) · GW(p)
Yes, hopefully for EY as well
comment by NihilCredo · 2010-12-10T19:37:13.138Z · LW(p) · GW(p)
As someone who only now found out about this whole nonsense, and who believes that the maximum existential risk increase you can cause on a whim has a lot more decimal zeros in front of it, I'd like to thank you for providing a quarter-hour of genuine entertainment in the form of quality Internet drama.
With regards to Eliezer deleting what he regards as Langford Basilisks, I don't think he should do it *, but I also think their regular deletion does not cause perceptible harm to the LessWrong site as I care about it. Now, if he were to censor people who oppose his positions on various pet issues, even only if they brought particularly stupid reasons, that would be different (I could see him eventually degenerating into "a post saying that uFAI isn't dangerous increases existential risk"), but as far as I know that's currently not the case and he has stated so outright.
* (I read Roko's banned post, and while I wouldn't confidently state that I suffered zero damage, I am confident I suffered less damage than I did half an hour ago by eating some store-bought salmon without previously doing extensive research on its provenance.)
comment by Emile · 2010-12-10T09:40:23.310Z · LW(p) · GW(p)
Enough with the hypothetical, this one's real: The moderator of one of your favorite online forums declares that if you post things he feels are dangerous to read, he will censor them. He may or may not tell you when he does this. If you post such things repeatedly, you will be banned.
Does this count as blackmail? Does this count as terrorism? Should we not comply with him to prevent similar future abuses of power?
Have you considered that not everyone feels as strongly as you do about moderators deleting posts in online communities?
To those of us who think that moderators deleting stupid or dangerous content can be an essential ingredient to maintaining the level of quality, your post comes off as silly as threatening to kill a kitten unless LessWrong.com is made W3C compliant by 2011.
(That isn't to say moderation can't have problems - after all, lesswrong's voting system is a mechanism to improve on it. But it's a far cry from "can be improved" to "must be punished".)
comment by JoshuaZ · 2010-12-10T18:48:35.269Z · LW(p) · GW(p)
I'm curious, would you object if similar censorship occurred of instructions on how to make a nuclear weapon? What if someone posted code that they thought would likely lead to a very unFriendly AI if it were run? What if there were some close to nonsense phrase in English that causes permanent mental damage to people who read it?
I'm incidentally curious if you are familiar with the notion that there's a distinction between censorship by governments and censorship by private organizations. In general, most people who are against censorship agree that private organizations can decide what content they do and do not allow. Thus for example, you probably don't object to Less Wrong moderators removing spam. And we've had a few people posting who simply damaged the signal to noise ratio (like the fellow who claimed that he had ancient Egyptian nanotechnology that had been stolen by the rapper Jay-Z). Is there any difference between those cases and the case you are objecting to? As far as I can tell, the primary difference is that the probability of very bad things happening if the comments are around is much higher in the case you object to. It seems that that's causing some sort of cognitive bias where you regard everything related to those remarks (including censorship of those remarks) as more serious issues than you might otherwise claim.
Incidentally, as a matter of instrumental rationality, using a title that complains about the karma system is likely making people less likely to take your remarks seriously.
Replies from: NihilCredo↑ comment by NihilCredo · 2010-12-10T22:24:44.518Z · LW(p) · GW(p)
the fellow who claimed that he had ancient Egyptian nanotechnology that had been stolen by the rapper Jay-Z
Can you link me to this? Please? S/N ratio be damned, I need to read it.
Replies from: jaimeastorga2000↑ comment by jaimeastorga2000 · 2010-12-10T23:15:14.318Z · LW(p) · GW(p)
Replies from: NihilCredo↑ comment by NihilCredo · 2010-12-11T07:00:19.384Z · LW(p) · GW(p)
Thank you. It's fantastic.
I went to school at my family's Kingdom of Oyotunji Royal Academy where we learn about the ancient science of astral physics.
This was even more hilarious after I found out that Oyotunji is in North Carolina.
Replies from: Sniffnoy
comment by Alicorn · 2010-12-10T16:17:20.363Z · LW(p) · GW(p)
Is there anything I, as an individual you have chosen to hold hostage to Eliezer's compliance via your attempts at increasing existential risk, can do to placate you? Or are you simply notifying us that resistance is futile, we will be put at risk until you get the moderation policy you want?
Replies from: waitingforgodel↑ comment by waitingforgodel · 2010-12-10T18:38:14.916Z · LW(p) · GW(p)
Yes: talk some sense into Eliezer.
comment by lsparrish · 2010-12-10T15:40:26.811Z · LW(p) · GW(p)
Would the comment have been deleted if the author had ROT13'd it?
Would the anti-censors have been incensed by the moderator ROT13-ing the content instead of deleting it?
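For anyone unfamiliar with it, ROT13 is a trivial, self-inverse letter-substitution cipher; the point is friction rather than secrecy, since the text stays available but nobody reads it by accident. A minimal sketch of what ROT13-ing a comment amounts to (Python's standard library already ships the codec; the sample string is just for illustration):

```python
import codecs

def rot13(text: str) -> str:
    # Rotate each ASCII letter 13 places; applying it twice restores the original.
    return codecs.encode(text, "rot_13")

spoiler = rot13("This sentence is hidden from casual readers.")
print(spoiler)           # Guvf fragrapr vf uvqqra sebz pnfhny ernqref.
print(rot13(spoiler))    # This sentence is hidden from casual readers.
```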
Replies from: rwallace, wedrifid↑ comment by rwallace · 2010-12-10T16:10:31.508Z · LW(p) · GW(p)
Upvoted - this is an eminently sensible suggestion on how to deal with comments that some people would rather not view because they find the topic upsetting.
waitingforgodel: see, there usually are at least somewhat reasonable ways to deal with this sort of conflict. If you'd reacted to "I can't think of a reasonable way yet" with "I'll keep thinking about it" instead of "I'm going to go off and do something completely loony like pretending to destroy the world" you might have been the one to make this suggestion, or something even better, and you wouldn't be shooting for a record number of (deserved) downvotes.
Replies from: Lightwave↑ comment by Lightwave · 2010-12-11T01:26:16.416Z · LW(p) · GW(p)
I think I'd prefer that discussions of 'toxic' topics happen off-site. If it's all mixed-in with the rest of the comments/discussions it might be too hard to resist reading them. Temptation and curiosity would be too strong when you face threads/comments with a label "Warning. Dangerous ideas ahead. Read at your own risk."
comment by Mass_Driver · 2010-12-10T08:12:41.358Z · LW(p) · GW(p)
Don't you have better things to do than fight a turf war over a blog? Start your own if you think your rules make more sense -- the code is mostly open source.
comment by David_Gerard · 2010-12-10T12:33:59.691Z · LW(p) · GW(p)
Please, please fix "loose" in the title.
Replies from: Oscar_Cunningham↑ comment by Oscar_Cunningham · 2010-12-10T14:06:07.282Z · LW(p) · GW(p)
This will cause it to end up in the RSS readers twice, thus being twice as annoying as before.
Replies from: DSimon
comment by [deleted] · 2010-12-10T15:13:05.466Z · LW(p) · GW(p)
You actually lost me before you even got to the main point, since record companies have good reasons to try to protect their intellectual property and governments have good reasons to institute seat belt laws. By the time I read the angry part I was already in disagreement; everything after that only made it worse.
comment by Psychohistorian · 2010-12-10T13:28:14.980Z · LW(p) · GW(p)
Laws are not comparable to blackmail because they have process behind them. If one lone individual told me that if I didn't wear my seatbelt, he'd bust my kneecaps, then that would be blackmail. Might even qualify as terrorism, since he is trying to constrain my actions by threat of illegitimate force.
A lone individual making a threat against the main moderator of a site over how he uses his discretion is indeed blackmail/terrorism, particularly when the threat concerns something substantially outside the purview of the site and the act threatened is on its own clearly immoral (by contrast, it would be legitimate to threaten to leave the site, or to repost censored material on a separate site). As it stands, it's an attempt to force another's will without any semblance of legitimate authority, which seems to qualify as "clearly wrong."
Replies from: waitingforgodel↑ comment by waitingforgodel · 2010-12-10T18:02:37.723Z · LW(p) · GW(p)
If one lone individual told me that if I didn't wear my seatbelt, he'd bust my kneecaps, then that would be blackmail.
I think this is closer to: if one lone individual said that every time he saw you not wearing a seatbelt (which for some reason a law couldn't be passed against), he'd nudge gun control legislation closer to being enacted (assuming he knew you'd hate gun control legislation).
Replies from: Psychohistorian↑ comment by Psychohistorian · 2010-12-10T21:25:17.916Z · LW(p) · GW(p)
No, it's not. You can't just pretend that the threat is trivial when it's not. "You'd hate gun control legislation" is not an appropriate comparison. The utility hit of nudging up the odds of something I'd hate happening is not directly comparable. Given the circumstances and EY's obvious beliefs, the negative utility value of a uFAI is vastly worse.
Comparable would be this: every time he sees me not wear a seatbelt, he rolls 8 dice. If they all come up sixes, he'd hunt down, torture, and murder everyone I know and love. The odds are actually slightly lower, and the negative payoff is vastly smaller in this example, so if anything it's an understatement (though failing to wear a seatbelt is a much less bad thing to do than censoring someone, so perhaps it balances). I think this is pretty clearly improper.
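For concreteness, the dice arithmetic does line up with the threatened 0.0001% (i.e. a probability of $10^{-6}$):

$$\left(\tfrac{1}{6}\right)^{8} = \tfrac{1}{6^{8}} = \tfrac{1}{1679616} \approx 5.95 \times 10^{-7} < 10^{-6},$$

so eight simultaneous sixes are indeed slightly less likely than the stated risk increment.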
↑ comment by TheOtherDave · 2010-12-11T05:33:20.255Z · LW(p) · GW(p)
Of course actual religious believers who accept that doctrine don't usually bite the bullet
I know a number of believers in various "homegrown" faiths who conclude essentially this, actually. That is, they assert that being aware of the spiritual organizing principle of existence without acting on it leaves one worse off than being ignorant of it, and they assert that consequently they refuse to share their knowledge of that principle.
comment by taw · 2010-12-10T11:55:22.988Z · LW(p) · GW(p)
Outside view question for anyone with relevant expertise:
It seems to me that lesswrong has some features of an early cult (belief that the rest of the world is totally wrong about a wide range of subjects, a messiah figure, a secretive inner circle, a mission to save the world, etc.). Are ridiculous challenges to a group's leadership, met with a similarly ridiculous response from it, a typical feature of a group's gradual transformation into a fully developed cult?
My intuitive guess is yes, but I'm no expert in cults. Does anyone have relevant knowledge?
This is an outside view question about similar groups, not an inside view question about lesswrong itself and why it is/isn't a cult.
In my estimate lesswrong isn't close to the point where such questions would get deleted, but as I said, I'm no expert.
Replies from: David_Gerard↑ comment by David_Gerard · 2010-12-10T12:25:56.997Z · LW(p) · GW(p)
I know a thing or two (expert on Scientology, knowledgeable about lesser nasty memetic infections). In my opinion as someone who knows a thing or two about the subject, LW really isn't in danger or the source of danger. It has plenty of weird bits, which set off people's "this person appears to be suffering a damaging memetic infection" alarms ("has Bob joined a cult?"), but it's really not off on crack.
SIAI, I can't comment on. I'd hope enough people there (preferably every single one) are expressly mindful of Every Cause Wants To Be A Cult and of the dangers of small closed groups with confidential knowledge and the aim to achieve something big pulling members toward the cult attractor.
I was chatting with ciphergoth about this last night, while he worked at chipping away my disinterest in signing up for cryonics. I'm actually excessively cautious about new ideas and extremely conservative about changing my mind. I think I've turned myself into Mad Eye Moody when it comes to infectious memes. (At least in paranoia; I'm not bragging about my defences.) On the other hand, this doesn't feel like it's actually hampered my life. On the other other hand, I would not of course know.
Replies from: ata, taw↑ comment by ata · 2010-12-10T19:46:15.564Z · LW(p) · GW(p)
SIAI, I can't comment on. I'd hope enough people there (preferably every single one) are expressly mindful of Every Cause Wants To Be A Cult and of the dangers of small closed groups with confidential knowledge and the aim to achieve something big pulling members toward the cult attractor.
I don't have extensive personal experience with SIAI (spent two weekends at their Visiting Fellows house, attended two meetups there, and talked to plenty of SIAI-affiliated people), but the following have been my impressions:
People there are generally expected to have read most of the Sequences... which could be a point for cultishness in some sense, but at least they've all read the Death Spirals & Cult Attractor sequence. :P
There's a whole lot of disagreement there. They don't consider that a good thing, of course, but any attempts to resolve disagreement are done by debating, looking at evidence, etc., not by adjusting toward any kind of "party line". I don't know of any beliefs that people there are required or expected to profess (other than basic things like taking seriously the ideas of technological singularity, existential risk, FAI, etc., not because it's an official dogma, but just because if someone doesn't take those seriously it just raises the question of why they're interested in SIAI in the first place).
On one occasion, there were some notes on a whiteboard comparing and contrasting Singularitarians and Marxists. Similarities included "[expectation/goal of] big future happy event", "Jews", "atheists", "smart folks". Differences included "popularly popular vs. popularly unpopular". (I'm not sure which was supposed to be the more popular one.) And there was a bit noting that both groups are at risk of fully general counterarguments — Marxists dismissed arguments they didn't like by calling their advocates "counterrevolutionary", and LW-type Singularitarians could do the same with categorical dismissals such as "irrational", "hasn't overcome their biases", etc. Note that I haven't actually observed SIAI people doing that, so I just read that as a precaution.
(And I don't know who wrote that, or what the context was, so take that as you will; but I don't think it's anything that was supposed to be a secret, because (IIRC) it was still up during one of the meetups, and even if I'm mistaken about that, people come and go pretty freely.)
People are pretty critical of Eliezer. Of course, most people there have a great deal of respect and admiration for him, and to some degree, the criticism (which is usually on relatively minor things) is probably partly because people there are making a conscious effort to keep in mind that he's not automatically right, and to keep themselves in "evaluate arguments individually" mode rather than "agree with everything" mode. (See also this comment.)
So yeah, my overall impression is that people there are very mindful that they're near the cult attractor, and intentionally and successfully act so as to resist that.
Replies from: David_Gerard↑ comment by David_Gerard · 2010-12-10T20:17:06.749Z · LW(p) · GW(p)
So yeah, my overall impression is that people there are very mindful that they're near the cult attractor, and intentionally and successfully act so as to resist that.
Sounds like it more so than any other small group I know of!
↑ comment by taw · 2010-12-10T12:56:08.437Z · LW(p) · GW(p)
I would be surprised if less wrong itself ever developed fully into a cult. I'm not so sure about SIAI, but I guess it will probably just collapse at some point. LW doesn't look like a cult now. But what was Scientology like in its earliest stages?
Is there mostly a single way in which groups gradually turn into cults, or does it vary a lot?
My intuition was more about Ayn Rand and the objectivists than Scientology, but I don't really know much here. Does anybody know what the early objectivists were like?
I didn't put much thought into this, it's just some impressions.
Replies from: David_Gerard, jimrandomh↑ comment by David_Gerard · 2010-12-10T13:18:25.050Z · LW(p) · GW(p)
I don't have a quick comment-length intro to how cults work. Every Cause Wants To Be A Cult will give you some idea.
Humans have a natural tendency to form close-knit ingroups. This can turn into the cult attractor. If the group starts going a bit weird, evaporative cooling makes it weirder. edit: jimrandomh nailed it: it's isolation from outside social calibration that lets a group go weird.
Predatory infectious memes are mostly not constructed, they evolve. Hence the cult attractor.
Scientology was actually constructed - Hubbard had a keen understanding of human psychology (and no moral compass and no concern as to the difference between truth and falsity, but anyway) and stitched it together entirely from existing components. He started with Dianetics and then he bolted more stuff onto it as he went.
But talking about Scientology is actually not helpful for the question you're asking, because Scientology is the Godwin example of bad infectious memes - it's so bad (one of the most damaging, in terms of how long it takes ex-members to recover - I couldn't quickly find the cite) that it makes lesser nasty cults look really quite benign by comparison. It is literally as if your only example of authoritarianism was Hitler or Pol Pot and casual authoritarianism didn't look that damaging at all compared to that.
Ayn Rand's group turned cultish by evaporative cooling. These days, it's in practice more a case of individual sufferers of memetic infection - someone reads Atlas Shrugged and turns into an annoying crank. It's an example of how impossible it is to talk someone out of a memetic infection that turns them into a crank - they have to get themselves out of it.
Is this helpful?
↑ comment by jimrandomh · 2010-12-10T13:24:06.363Z · LW(p) · GW(p)
Is there mostly a single way in which groups gradually turn into cults, or does it vary a lot?
Yes, there is. One of the key features of cults is that they make their members sever all social ties to people outside the cult, so that they lose the safeguard of friends and family who can see what's happening and pull them out if necessary. Sci*****ogy was doing that from the very beginning, and Less Wrong has never done anything like that.
Replies from: David_Gerard, taw↑ comment by David_Gerard · 2010-12-10T14:15:12.865Z · LW(p) · GW(p)
Not all, just enough. Weakening their mental ties so they get their social calibration from the small group is the key point. But that's just detail, you've nailed the biggie. Good one.
and Less Wrong has never done anything like that.
SIAI staff will have learnt to think in ways that are hard to calibrate against the outside world (singularitarian ideas, home-brewed decision theories). Also, they're working on a project they think is really important. Also, they have information they can't tell everyone (e.g. things they consider decision-theoretic basilisks). So there's a few untoward forces there. As I said, hope they all have their wits about them.
/me makes mental note to reread piles of stuff on Scientology. I wonder who would be a good consulting expert, i.e. more than me.
Replies from: Anonymous6004↑ comment by Anonymous6004 · 2010-12-10T15:25:21.804Z · LW(p) · GW(p)
Not all, just enough. Weakening their mental ties so they get their social calibration from the small group is the key point.
No, it's much more than that. Scientology makes its members cut off communication with their former friends and families entirely. They also have a ritualized training procedure in which an examiner repeatedly tries to provoke them, and they have to avoid producing a detectable response on an "e-meter" (which measures stress response). After doing this for a while, they learn to remain calm under the most extreme circumstances and not react. And so when Scientology's leaders abuse them in terrible ways and commit horrible crimes, they continue to remain calm and not react.
Cults tear down members' defenses and smash their moral compasses. Less Wrong does the exact opposite.
Replies from: David_Gerard, Vaniver, taw↑ comment by David_Gerard · 2010-12-10T15:58:26.195Z · LW(p) · GW(p)
I was talking generally, not about Scientology in particular.
As I noted, Scientology is such a toweringly bad idea that it makes other bad ideas seem relatively benign. There are lots of cultish groups that are nowhere near as bad as Scientology, but that doesn't make them just fine. Beware of this error. (Useful way to avoid it: don't use Scientology as a comparison in your reasoning.)
Replies from: TheOtherDave↑ comment by TheOtherDave · 2010-12-10T16:07:42.289Z · LW(p) · GW(p)
But that error isn't nearly as bad as accidentally violating containment procedures when handling virulent pathogens, so really, what is there to worry about?
(ducks)
Replies from: David_Gerard↑ comment by David_Gerard · 2010-12-10T16:08:57.104Z · LW(p) · GW(p)
The forbidden topic, obviously.
↑ comment by Vaniver · 2010-12-10T17:57:14.466Z · LW(p) · GW(p)
Cults tear down members' defenses and smash their moral compasses. Less Wrong does the exact opposite.
What defense against EY does EY strengthen? Because I'm somewhat surprised by the amount I hear Aumann's Agreement Theorem bandied around with regards to what is clearly a mistake on EY's part.
↑ comment by taw · 2010-12-10T16:44:53.983Z · LW(p) · GW(p)
No, it's much more than that. Scientology makes its members cut off communication with their former friends and families entirely.
I'd like to see some solid evidence for or against the claim that typical developing cults make their members cut off communication with their former friends and families entirely.
If the claim is merely about weakening these ties, then this is definitely happening. I especially mean commitment by signing up for cryonics. It will definitely increase the mental distance between the affected person and their formerly close friends and family - I'd guess about as much as signing up for a weird but mostly benign-seeming religion would. I doubt anyone has much evidence about this demographic?
Replies from: David_Gerard↑ comment by David_Gerard · 2010-12-10T17:06:35.052Z · LW(p) · GW(p)
I'd like to see some solid evidence for or against the claim that typical developing cults make their members cut off communication with their former friends and families entirely.
I don't think they necessarily make them - all that's needed is for the person to loosen the ties in their head, and strengthen them to the group.
An example is terrorist cells, which are small groups with a goal who have gone weird together. They may not cut themselves off from their families, but their bad idea has hold of them enough that their social calibrator goes group-focused. I suspect this is part of why people who decompartmentalise toxic waste go funny. (I haven't worked out precisely how to get from the first to the second.)
There are small Christian churches that also go cultish in the same way. Note that in this case, the religious ideas are apparently mainstream - but there's enough weird stuff in the Bible to justify all manner of strangeness.
At some stage cohesion of the group becomes very important, possibly more important than the supposed point of the group. (I'm not sure how to measure that.)
I need to ask some people about this. Unfortunately, the real experts on cult thinking include several of the people currently going wildly idiotic about cryonics on the Rick Ross boards ... an example of overtraining on a bad experience and seeing a pattern where it isn't.
Replies from: taw↑ comment by taw · 2010-12-10T19:42:57.843Z · LW(p) · GW(p)
Regardless of the actual chances of either actually working, and considering the issue from a purely sociological perspective - signing up for cryonics seems to me to be a lot like "accepting Jesus" / being born again / joining some far-more-religious-than-average subgroup of a mainstream religion.
In both situations there's some underlying reasonably mainstream meme soup that is more or less accepted (Christianity / strict mind-brain correspondence) but which most people who accept it compartmentalize away. Then some groups decide not to compartmentalize it but to accept the consequences of their beliefs. It really doesn't take much more than that.
Disclaimers:
I'm probably in some top 25 posters by karma, but I tend to feel like an outsider here a lot.
The only "rationalist" idea from LW canon I take more or less seriously is the outside view, and the outside view says taking ideas too seriously tends to have horrible consequences most of the time. So I cannot even take outside view too seriously, by outside view - and indeed I have totally violated outside view's conclusions on several occasions, after careful consideration and fully aware of what I'm doing. Maybe I should write about it someday.
In my estimate all FAI / AI foom / nonstandard decision theories stuff is nothing but severe compartmentalization failure.
In my estimate cryonics will probably be feasible in some remote future, but right now the costs of cryonics (very rarely stated honestly by proponents, or backed by serious economic modeling rather than wishful thinking) are far too high and the chances of it working now are far too slim to bother. I wouldn't even take it for free, as it would interfere with me being an organ donor, and that has non-negligible value for me. And even without that, the personal cost of added weirdness would probably be too high relative to my estimate of it working.
I can imagine alternative universes where cryonics makes sense, and I don't think people who take cryonics seriously are insane - I just think wishful thinking biases them. In the non-zero but, as far as I can tell, very very tiny portion of possible future universes where cryonics turns out to work: well, enjoy your second life.
By the way, is there any reason for me to write articles expanding my points, or not really?
Replies from: multifoliaterose, None↑ comment by multifoliaterose · 2010-12-10T21:03:30.028Z · LW(p) · GW(p)
I'm probably in some top 25 posters by karma, but I tend to feel like an outsider here a lot.
My own situation is not so different although
(a) I have lower karma than you and
(b) There are some LW posters with whom I feel strong affinity
By the way, is there any reason for me to write articles expanding my points, or not really?
I myself am curious and would read what you had to say with interest, and this is a weak indication that others would too - but of course it's for you to say whether it would be worth the opportunity cost. Probably the community would be more receptive to such pieces if they were cautious & carefully argued than if not; but this takes still more time and effort.
Replies from: taw↑ comment by taw · 2010-12-11T00:09:28.083Z · LW(p) · GW(p)
(a) I have lower karma than you
You get karma mostly for contributing more, not by higher quality. Posts and comments both have positive expected karma.
Also you get more karma for more alignment with groupthink. I even recall how, in the early days of lesswrong, I stated - based on very solid outside view evidence (from every single subreddit I've been to) - that karma and reception would come to correlate not only with quality but also with alignment with groupthink: that on a reddit-style karma system, downvoting-as-disagreement / upvoting-as-agreement becomes very significant at some point. People disagreed, but the outside view prevailed.
This unfortunately means that one needs to put a lot more effort into writing something that disagrees with groupthink than something that agrees with it - and such trivial inconveniences matter.
(b) There are some LW posters with whom I feel strong affinity
I don't think I feel particular "affinity" with anyone here, but I find many posters highly enjoyable to read and/or having a lot of insightful ideas.
I mostly write when I disagree with someone, so for a change (I don't hate everyone all the time, honestly :-p) here are two among the best writings by lesswrong posters I've ever read:
- Twilight fanfiction by Alicorn - it is ridiculously good, I guess a lot of people will avoid it because it's Twilight, but it would be a horrible mistake.
- Contrarian excuses by Robin Hanson - are you able to admit this about your own views?
↑ comment by David_Gerard · 2010-12-11T01:51:15.353Z · LW(p) · GW(p)
I think it's a plus point that a contrarian comment will get upvotes for effort and showing its work (links, etc) - that is, the moderation method still seems to be "More like this please" rather than "Like". Being right and obnoxious gets downvotes.
(I think "Vote up" and "Vote down" might profitably be replaced with "More like this" and "Less like this", but I don't think that's needed now and I doubt it'd work if it was needed.)
Replies from: taw↑ comment by taw · 2010-12-11T07:56:00.440Z · LW(p) · GW(p)
More like this/Less like this makes sense for top posts, but is it helpful for comments?
It's ok to keep an imperfect system - LW is nowhere near groupthink levels of subreddits or slashdot.
However - it seems to me that stealing the HackerNews / Stackoverflow model of removing the normal downvote and keeping only the upvote for comments (plus a report for spam/abuse, or possibly some highly restricted downvote for special situations only, or one which would count for a lot less than an upvote) would reduce groupthink a lot, while keeping all the major benefits of the current system.
Other than "not fixing what ain't broken", are there any good reasons to keep the downvote for comments? Low quality non-abusive comments will sink to the bottom just by not getting upvotes, later reinforced by most people reading from the highest rated first.
Disclaimers:
I'm obviously biased as a contrarian, and as someone who really likes reading a variety of contrarian opinions. I rarely bother posting comments saying that I totally agree with something. I occasionally send a private message with thanks when I read something particularly great, but I don't recall ever doing it here yet, even though a lot of posts were that kind of great.
And I fully admit that on several occasions I downvoted a good comment just because I thought one below it was far better and deserved a lot of extra promotion. I always felt like I was abusing the system this way. Is this common?
↑ comment by multifoliaterose · 2010-12-11T01:19:12.445Z · LW(p) · GW(p)
You get karma mostly for contributing more, not by higher quality. Posts and comments both have positive expected karma.
Yes, I've noticed this; it seems like there's a danger of an illusion that one is actually getting something done by posting or commenting on LW, on account of collecting karma by default.
On the upside I think that the net value of LW is positive so that (taking the outside view; ignoring the quality of particular posts/comments which is highly variable), the expected value of posts and comments is positive though probably less than one subjectively feels.
Also you get more karma for more alignment with groupthink [...]
Yes; I've noticed this too. A few months ago I came across Robin Hanson's Most Rationalists Are Elsewhere which is in similar spirit.
This unfortunately means that one needs to put a lot more effort into writing something that disagrees with groupthink than something that agrees with it - and such trivial inconveniences matter.
Agree here. In defense of LW I would say that this seems like a pretty generic feature across groups in general. I myself try to be careful about interpreting statements made by those with views that clash with my own charitably but don't know how well I succeed.
I mostly write when I disagree with someone, so for a change (I don't hate everyone all the time, honestly :-p)
Good to know :-)
Twilight fanfiction by Alicorn - it is ridiculously good, I guess a lot of people will avoid it because it's Twilight, but it would be a horrible mistake.
Fascinating; I had avoided it for this very reason but will plan on checking it out.
Contrarian excuses by Robin Hanson
Great article! I hadn't seen it before.
Replies from: taw↑ comment by taw · 2010-12-11T07:18:17.283Z · LW(p) · GW(p)
Agree here. In defense of LW I would say that this seems like a pretty generic feature across groups in general. I myself try to be careful about interpreting statements made by those with views that clash with my own charitably but don't know how well I succeed.
I don't consider LW particularly bad - it seems considerably saner than a typical internet forum of similar size. The level of drama seems a lot lower than is typical. Is my impression right that most of the drama we get centers on obscure FAI stuff? I tend to ignore these posts unless I feel really bored. I've seen some drama about gender and politics, but honestly a lot less than these subjects normally attract in other similar places.
Replies from: multifoliaterose↑ comment by multifoliaterose · 2010-12-11T09:24:28.726Z · LW(p) · GW(p)
I don't consider LW particularly bad - it seems considerably saner than a typical internet forum of similar size.
I have a similar impression.
LW was the first internet forum that I had serious exposure to. I initially thought that I had stumbled onto a very bizarre cult. I complained about this to various friends and they said "no, no, the whole internet is like this!" After hearing this from enough people and perusing the internet some more I realized that they were right. Further contemplation and experience made me realize that it wasn't only people on the internet who exhibit high levels of group think & strong ideological agendas; rather this is very common among humans in general! Real life interactions mask over the effects of group think & ideological agendas. I was then amazed at how oblivious I had been up until I learned about these things. All of this has been cathartic and life-changing.
Is my impression right that most of the drama we get centers on obscure FAI stuff?
Not sure, I don't really pay enough attention. As a rule, I avoid drama in general on account of lack of interest in the arguments being made on either side. The things that I've noticed most are those connected with gender wars and with Roko's post being banned. Then of course there were my own controversial posts back in August.
I've seen some drama about gender and politics, but honestly a lot less than these subjects normally attract in other similar places.
Sounds about right.
Replies from: komponisto↑ comment by komponisto · 2010-12-11T18:58:20.318Z · LW(p) · GW(p)
The things that I've noticed most are those connected with gender wars and with Roko being banned
In the interest of avoiding the spread of false ideas, it should be pointed out that Roko was not banned; rather his post was "banned" (jargon for actually deleted, as opposed to "deleted", which merely means removed from the various "feeds" ("New", the user's overview, etc)). Roko himself then proceeded to delete (in the ordinary way) all his other posts and comments.
Replies from: multifoliaterose↑ comment by multifoliaterose · 2010-12-11T19:16:51.069Z · LW(p) · GW(p)
Good point; taw and I both know this but others may not; grandparent corrected accordingly.
↑ comment by [deleted] · 2010-12-10T22:21:40.126Z · LW(p) · GW(p)
By the way, is there any reason for me to write articles expanding my points, or not really?
I'm just some random lurker, but I'd be very interested in these articles. I share your view on cryonics and would like to read some more clarification on what you mean by "compartmentalization failure" and some examples of a rejection of the outside view.
Replies from: taw↑ comment by taw · 2010-12-10T23:41:09.722Z · LW(p) · GW(p)
Here's my view of current lesswrong situation.
On compartmentalization failure and related issues there are two schools present on less wrong:
- Pro-compartmentalization view - expressed in reason as memetic immune disorder - seems to correlate with outside view, and reasoning from experience. Typical example: me
- Anti-compartmentalization view - expressed in taking Ideas seriously - seems to correlate with "weak inside view" and reasoning from theory. Typical example: Eliezer
Right now there doesn't seem to be any hope of reaching Aumann agreement between these points of view, and at least some members of both camps view many of the other camp's ideas with contempt. The primary reason seems to be that the kind of arguments that people on one end of the spectrum find convincing, people on the other end see as total nonsense, and with full reciprocity.
Of course there's plenty of issues on which both views agree as well - like religion, evolution, akrasia, the proper approach to statistics, and various biases (I think outside viewers seem to demand more evidence that these are also a problem outside the laboratory than inside viewers do, but it's not a huge disagreement). And many other disagreements seem to be unrelated to this.
Is this outside-viewers/pro-compartmentalization/firm-rooting-in-experience/caution vs weak-inside-viewers/anti-compartmentalization/pure-reason/taking-ideas-seriously spectrum only my impression, or do other people see it this way as well?
I might be very well biased, as I feel very strongly about this issue, and the most prominent poster, Eliezer, seems to feel very strongly about this in exactly the opposite way. It seems to me that most people here have a reasonably well defined position on this issue - but I know better than to trust my impressions of people on an internet forum.
And second question - can you think of any good way for people holding these two positions to reach Aumann agreement?
As for cryonics it's a lot of number crunching, textbook economics, outside view arguments etc. - all leading to very very low numbers. I might do that someday if I'm really bored.
Replies from: Will_Newsome, None, multifoliaterose, David_Gerard, lsparrish↑ comment by Will_Newsome · 2010-12-12T01:04:50.917Z · LW(p) · GW(p)
Phrasing it as pro-compartmentalization might cause unnecessary negative affect for a lot of aspiring rationalists here at LW, though I'm too exhausted to imagine a good alternative. (Just in case you were planning on writing a post about this or the like. Also, Anna Salamon's posts on compartmentalization were significantly better than my own.)
Replies from: David_Gerard, taw↑ comment by David_Gerard · 2010-12-12T01:08:23.815Z · LW(p) · GW(p)
I'm trying to write up something on this without actually giving readers fear of ideas. I think I could actually scare the crap out of people pretty effectively, but, ah. (This is why it's been cooking for two months and is still a Google doc of inchoate scribbles.)
↑ comment by taw · 2010-12-13T06:22:06.870Z · LW(p) · GW(p)
A quick observation: a perfect Bayesian mind is impossible to actually build - that much we all know, and nobody cares.
But it's a lot worse - it is impossible even mathematically. Even if we expected as little from it as consistently following the rule that P(b|a)=P(c|b)=100% implies P(c|a)=100% (without getting into choice of prior, infinite precision, transfinite induction, uncountable domains etc. - merely the merest minimum still recognizable as Bayesian inference) over unbounded but finite chains of inference over a countable set of statements, it could trivially solve the halting problem.
Yes, it will always tell you which theorem is true and which is false, Gödel's theorem be damned. It cannot say anything like P(Riemann hypothesis|basic math axioms)=50%, as this automatically implies a violation of Bayes' rule somewhere in the network (and there are no compartments to limit the damage once that happens - the whole network becomes invalid).
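Spelled out in symbols (this just restates the claim above, as one way to read it, not a proof of it): the minimal requirement is transitivity of certainty over the countable set of statements,

$$P(b \mid a) = 1 \;\wedge\; P(c \mid b) = 1 \;\Longrightarrow\; P(c \mid a) = 1,$$

chained over arbitrarily long but finite derivations; the claim is that no complete, consistent assignment over all arithmetical statements can satisfy this without thereby deciding questions (such as halting) that are known to be undecidable.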
Perfect Bayesian minds people here so willingly accepted as the gold standard of rationality are mathematically impossible, and there's no workaround, and no approximation that is of much use.
Ironically, perfect Bayesian inference systems work really well inside finite or highly regular compartments, with something else limiting their interactions with the rest of the universe.
If you want an outside view argument that this is a serious problem: if Bayesian minds were so awesome, how is it that, even in the very limited machine learning world, Bayesian-inspired systems are only one of many competing paradigms, better suited to some compartments and not working well in others?
I realize that I just explicitly rejected one of the most basic premises accepted by pretty much everyone here, including me until recently. It surprised me that we were all falling for something so obvious in retrospect.
Robin Hanson's post on contrarians being wrong most of the time was amazingly accurate again. I'm still not sure which of the ideas I've come to believe that relied on perfect Bayesian minds being the gold standard of rationality I'll need to reevaluate, but it doesn't bother me as much now that I've fully accepted that compartmentalization is unavoidable, and a pretty good thing in practice.
I think there's a nice correspondence between the outside view with its set of preferred reference classes and Bayesian inference with its set of preferred priors. Except the outside view can be very easily extended to say "I don't know", to estimate its own accuracy as applied to different compartments, to give more complex answers, to evolve in time as reference classes formerly too small to be of any use come to have enough data to return useful answers, and so on.
For very simple systems, these two should correspond to each other in a straightforward way. For complex systems, we have a choice of sometimes answering "I don't know" or being inconsistent.
I wanted to write this as a top level post, but a "one of your most cherished beliefs is totally wrong, here's a sketch of a mathematical proof" post would take a lot more effort to write well.
I tried a few extensions of Bayesian inference that I hoped would be able to deal with it, but this is really fundamental.
You can still use a subjective Bayesian worldview - that P(Riemann hypothesis|basic math axioms)=50% is just your intuition. But you must accept that your probabilities can change with no new data, just by thinking more. This sort of Bayesian inference is just another tool of limited use, with biases, inconsistencies, and compartments protecting it from the rest of the universe.
There is no gold standard of rationality. There simply isn't. I have a fall back position of outside view, otherwise it would be about as difficult to accept this as a Christian finally figuring out there is no God, but still wanting to keep the good parts of his or her faith.
Would anyone be willing to write a top level post out of my comment? You'll either be richly rewarded by a lot of karma, or we'll both be banned.
Replies from: wedrifid↑ comment by wedrifid · 2010-12-13T07:49:54.926Z · LW(p) · GW(p)
Perfect Bayesian minds people here so willingly accepted as the gold standard of rationality are mathematically impossible, and there's no workaround, and no approximation that is of much use.
A perfect Bayesian is logically omniscient (and logically omniscient agents are perfect Bayesians) and comes with the same problem (of being impossible). I don't see why this fact should be particularly troubling.
If you want an outside view argument that this is a serious problem: if Bayesian minds were so awesome, how is it that, even in the very limited machine learning world, Bayesian-inspired systems are only one of many competing paradigms, better suited to some compartments and not working well in others?
An outside view is only as good as the reference class you use. Your reference class does not appear to have many infinitely long levers, infinitely fast processors or a Maxwell's Demon. I don't have any reason to expect your hunch to be accurate.
"Outside View" doesn't mean go with your gut instinct and pick a few superficial similarities.
I have a fall back position of outside view, otherwise it would be about as difficult to accept this as a Christian finally figuring out there is no God, but still wanting to keep the good parts of his or her faith.
There is more to that analogy than you'd like to admit.
Replies from: taw↑ comment by taw · 2010-12-13T08:16:19.855Z · LW(p) · GW(p)
I'm quite troubled by this downvote.
A perfect Bayesian is logically omniscient (and logically omniscient agents are perfect Bayesians) and comes with the same problem (of being impossible). I don't see why this fact should be particularly troubling.
The only way to be "omniscient" over even very simple countable universe is to be inconsistent. There is no way to assign probabilities to every node which obeys Bayes theorem. It's a lot like Kolmogorov complexity - they can be useful philosophical tools, but neither is really part of mathematics, they're just logically impossible.
Finite perfect Bayesian systems are complete and consistent. We're so used to every example of a Bayesian system ever used being finite, that we totally forgot that they are not logically possible to expand to simplest countable systems. We just accepted handwaving results in finite systems into countable domain.
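For contrast, the finite case really is unproblematic; a minimal sketch of exact Bayesian updating over a finite hypothesis space (the coin hypotheses and numbers are hypothetical, just for illustration):

```python
def bayes_update(prior, likelihood, observation):
    """Exact Bayes over a finite hypothesis space.
    prior: dict hypothesis -> P(h); likelihood: dict (h, obs) -> P(obs | h)."""
    unnormalized = {h: prior[h] * likelihood[(h, observation)] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

prior = {"fair coin": 0.5, "biased coin": 0.5}
likelihood = {
    ("fair coin", "heads"): 0.5, ("fair coin", "tails"): 0.5,
    ("biased coin", "heads"): 0.9, ("biased coin", "tails"): 0.1,
}
print(bayes_update(prior, likelihood, "heads"))
# {'fair coin': 0.357..., 'biased coin': 0.642...}
```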
An outside view is only as good as the reference class you use.
This is a feature, not a bug.
No outside view systems you can build will be omniscient. But this is precisely what lets them be consistent.
Different outside view systems will give you different results. It's not so different from Bayesian priors, except you can have some outside view systems for countable domains, and there are no Bayesian priors like it at all.
You can easily have nested outside view system judging outside view systems on which ones work and which don't. Or some other interesting kind of nesting. Or use different reference classes for different compartments.
Or you could use something else. What we have here are in a way all computer programs anyway - and representing them as outside view systems is just human convenience.
But every single description of reality must either be allowed to say "I don't know" or blatantly violate rules of logic. Either way, you will need some kind of compartmentalization to describe reality.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2010-12-13T09:35:30.526Z · LW(p) · GW(p)
Just to check, is this an expansion of "Nature never tells you how many slots there are on the roulette wheel"?
I thought I'd gotten the idea about Nature and roulette wheels from Taleb, but a fast googling doesn't confirm that.
Replies from: taw↑ comment by taw · 2010-12-13T13:56:31.368Z · LW(p) · GW(p)
It's not in any way related. Taleb's point is purely practical - that we rely on very simple models that work reasonably well most of the time, but very rare cases where they fail often also have huge huge impact. You wouldn't guess that life or human-level intelligence might happen looking at the universe up until that point. Their reference class was empty. And then they happened just once and had massive impact.
Taleb would be more convincing if he didn't act as if nobody knew even the power law. Everything he writes is about how actual humans currently model things, and that can easily be improved (well, there are some people who don't know even the power law...; or with prediction markets to overcome pundit groupthink).
You could easily imagine that while humans really suck at this, and there's only so much improvement we can make, perhaps there's a certain gold standard of rationality - something telling us how to do it right at least in theory, even if we cannot actually implement it ever due to physical constraints of the universe. Like perfect Bayesians.
My point is that perfect Bayesians can only deal with finite domains. As for a gold standard of rationality - basically something that would assign probabilities to every outcome within some fairly regular countable domain, merely being self-consistent and following the basic rules of probability - it turns out that even the simplest such assignment of probabilities is not possible, even in theory.
You can be self-consistent by sacrificing completeness - for some questions you'd answer "no idea"; or you can be complete by sacrificing self-consistency (subjective Bayesianism is exactly like that, your probabilities will change if you just think more about something, even without observing any new data).
And not only perfect Bayesianism, nothing else can work the way people wish. Without some gold standard of rationality, without some one true way of describing reality, a lot of other common beliefs just fail.
Compartmentalization, biases, heuristics, and so on - they are not possible to avoid even in theory, in fact they're necessary in nearly any useful model of reasoning. Extreme reductionism is out, emergence comes back as an important concept, it'd be a very different less wrong.
More down to earth subjects like akrasia, common human biases, prediction markets, religion, evopsy, cryonics, luminosity, winning, science, scepticism, techniques, self-deception, overconfidence, signaling etc. would be mostly unaffected.
On the other hand, so much of the theoretical side of less wrong is based on the flawed assumption that perfect Bayesians are at least theoretically possible on infinite domains - so that a true answer always exists even if we don't know it - that it would need something between a very serious update and simply being thrown away.
Some parts of the theory don't rely on this at all - like the outside view. But these are not terribly popular here.
I don't think you'd see even much of sequences surviving without a major update.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2010-12-13T16:55:54.695Z · LW(p) · GW(p)
My point is that perfect Bayesians can only deal with finite domains. As for a gold standard of rationality - basically something that would assign probabilities to every outcome within some fairly regular countable domain, merely being self-consistent and following the basic rules of probability - it turns out that even the simplest such assignment of probabilities is not possible, even in theory.
What are the smallest and/or simplest domains which aren't amenable to Bayesian analysis?
I'm not sure you're doing either me or Taleb justice (though he may well be having too much fun going on about how much smarter he is than just about everyone else) -- I don't think he's just talking about completely unknown unknowns, or implying that people could get things completely right -- just that people could do a great deal better than they generally do.
For example, Taleb talks about a casino which had the probability and gaming part of its business completely nailed down. The biggest threats to the casino turned out to be a strike, embezzlement (I think), and one of its performers being mauled by his tiger. None of these are singularity-level game changers.
In any case, I would be quite interested in more about the limits of Bayesian analysis and how that affects the more theoretical side of LW, and I doubt you'd be downvoted into oblivion for posting about it.
Replies from: taw↑ comment by taw · 2010-12-13T19:57:34.215Z · LW(p) · GW(p)
What are the smallest and/or simplest domains which aren't amenable to Bayesian analysis?
Notice that you're talking domains already, you've accepted it, more or less.
I'd like to ask the opposite question - are there any non-finite domains where perfect Bayesian analysis makes sense?
On any domain where you can have even extremely limited local rules you can specify as conditions, and an unbounded size of the world, you can use perfect Bayesian analysis to say whether any Turing machine halts, or to prove any statement of natural-number arithmetic.
The only difficulty is bridging the language of Bayesian analysis and the language of computational incompleteness. Because nobody seems to be really using Bayes like that, I cannot even give a convincing example of how it fails. Nobody has tried, other than in handwaves.
Check the lists of things tied to the Gödel incompleteness theorems and to Turing completeness.
It seems that mainstream philosophy figured this out a long time ago. Contrarians turn out to be wrong once again. It's not new stuff; we just never bothered checking.
↑ comment by [deleted] · 2010-12-11T05:16:31.731Z · LW(p) · GW(p)
Thanks, I now understand what you mean. I'll have to think further about this.
Personally, I find myself strongly drawn to the anti-compartmentalization position. However, I had bad enough problems with it (let's just say I'm exactly the kind of person that becomes a fundamentalist, given the right environment) that I appreciate an outside view and want to adopt it a lot more. Making my underlying assumptions and motivations explicit and demanding the same level of proof and consistency of them that I demand from some belief has served me well - so far anyway.
Also, I'd have to admit that I enjoy reading disagreements most, even if just for disagreement's sake, so I'm not sure I actually want to see Aumann agreement. "Someone is wrong on the internet" syndrome has, on average, motivated me more than reasonable arguments, I'm afraid.
Replies from: taw↑ comment by taw · 2010-12-11T08:00:25.826Z · LW(p) · GW(p)
I enjoy reading disagreements most
Does it seem to you as well that removing downvote for comments (keeping report for spam and other total garbage etc.) would result in more of this? Hacker News seems to be doing a lot better than subreddits of similar size, and this seems like the main structural difference between them.
Replies from: None↑ comment by [deleted] · 2010-12-11T08:20:40.798Z · LW(p) · GW(p)
Probably yes. I don't read HN much (reddit provides enough mind crack already), but I never block any comments based on score, only downvote spam and still kinda prefer ye olde days of linear, barely moderated forums. I particularly disagree with "don't feed the trolls" because I learned tons about algebra, evolution and economics from reading huge flame wars. I thank the cranks for their extreme stubbornness and the ensuing noob-friendly explanations by hundreds of experts.
Replies from: taw↑ comment by taw · 2010-12-11T14:12:28.332Z · LW(p) · GW(p)
I particularly disagree with "don't feed the trolls"
And indeed, a very interesting discussion grew out of this otherwise rather unfortunate post.
I'm quite well acquainted with irc, mailing lists, wikis, a wide variety of chans, somethingawful, slashdot, reddit, hn, twitter, and more such forums I just haven't used in a while.
There are upsides and downsides to all communication formats and karma/moderation systems, but as far as I can tell the HN karma system seems to strictly dominate the reddit karma system.
If you feel adventurous and don't mind trolls, I highly recommend giving chans a try (something sane, not /b/ on 4chan) - anonymity (on chans where it's widely practised; in many, namefagging is rampant) makes people drastically reduce the effort they normally put into signalling and status games.
What you can see there is human thought far less filtered than usual, and there are very few other opportunities to observe that anywhere. When you come back from such environment to normal life, you will be able to see a lot more clearly how much monkey tribe politics is present in everyday human communication.
(For some strange reasons online pseudonyms don't work like full anonymity.)
Replies from: TheOtherDave, multifoliaterose↑ comment by TheOtherDave · 2010-12-11T20:47:29.175Z · LW(p) · GW(p)
When you come back from such environment to normal life, you will be able to see a lot more clearly how much monkey tribe politics is present in everyday human communication.
I find that working with animals is good for this, too. Though it's rarely politic to say so.
↑ comment by multifoliaterose · 2010-12-11T18:48:15.338Z · LW(p) · GW(p)
What you can see there is human thought far less filtered than usual, and there are very few other opportunities to observe that anywhere. When you come back from such an environment to normal life, you will be able to see a lot more clearly how much monkey tribe politics is present in everyday human communication.
This is the sort of thing that I was referring to here. Very educational experience.
↑ comment by multifoliaterose · 2010-12-11T04:09:27.785Z · LW(p) · GW(p)
I know what you're talking about here.
↑ comment by David_Gerard · 2010-12-11T01:20:49.753Z · LW(p) · GW(p)
And second question - can you think of any good way for people holding these two positions to reach Aumann agreement?
Sure: compartmentalisation is clearly an intellectual sin - reality is all one piece - but we're running on corrupt hardware so due caution applies.
That's my view after a couple of months' thought. Does that work for you?
(And that sums up about 2000 semi-readable words of inchoate notes on the subject. (ctrl-C ctrl-V))
In the present Headless Chicken Mode, by the way, Eliezer is specifically suggesting compartmentalising the very bad idea, having seen people burnt by it. There's nothing quite like experience to help one appreciate the plus points of compartmentalisation. It's still an intellectual sin, though.
Replies from: taw↑ comment by taw · 2010-12-11T07:09:03.814Z · LW(p) · GW(p)
Sure: compartmentalisation is clearly an intellectual sin
Compartmentalisation is an "intellectual sin" in certain idealized models of reasoning. The outside view says that not only 100% of human-level intelligences in the universe, but 100% of things even remotely intelligent-ish, were messy systems that used compartmentalisation as one of their basic building blocks, and 0% were implementations of these idealized models - and that in spite of many decades of hard effort, and a lot of ridiculous optimism.
So by the outside view, the only conclusion I see is that models condemning compartmentalisation are all conclusively proven wrong, and nothing they say about actual intelligent beings is relevant.
reality is all one piece
And yet we organize our knowledge about reality into an extremely complicated system of compartments.
Attempts at abandoning that and creating one theory of everything, like Objectivism (Ayn Rand famously had an opinion about absolutely everything, no disagreements allowed), are disastrous.
but we're running on corrupt hardware so due caution applies.
I don't think our hardware is meaningfully "corrupt". All thinking hardware ever made, or likely to be made, must take appropriate trade-offs and use appropriate heuristics. Ours seems to be pretty good most of the time when it matters. Shockingly good. An ideal reasoner with no constraints is not only physically impossible, it's not even mathematically possible (Rice's theorem, etc.).
Compartmentalisation is one of the most basic techniques for efficient reasoning with limited resources - otherwise complexity explodes far more than linearly, and plenty of ideas that made a lot of sense in their old context get transplanted to another context where they're harmful.
The hardware stays what it was, and it was already pretty much fully utilized, so to deal with this extra complexity the model needs to be either pruned of a lot of detail the mind could otherwise manage just fine, and/or other heuristics and shortcuts, possibly with far worse consequences, need to be employed a lot more aggressively.
I like this pro-compartmentalization theory, but it is primarily experience which convinces me that abandoning compartmentalization is dangerous and rarely leads to anything good.
Replies from: David_Gerard↑ comment by David_Gerard · 2010-12-11T14:53:34.027Z · LW(p) · GW(p)
it is primarily experience which convinces me that abandoning compartmentalization is dangerous and rarely leads to anything good.
Do you mean abandoning it completely, or abandoning it at all?
The practical reason for decompartmentalisation, despite its dangers, is that science works and is effective. It's not a natural way for savannah apes to think; it's incredibly difficult for most. But the payoff is ridiculously huge.
So we get quite excellent results if we decompartmentalise right. Reality does not appear to come in completely separate magisteria. If you want to form a map, that makes compartmentalisation an intellectual sin (which is what I meant).
By "appears to", I mean that if we assume that reality - the territory - is all of a piece, and we then try to form a map that matches that territory, we get things like Facebook and enough food and long lifespans. That we have separate maps called physics, chemistry and biology is a description of our ignorance; if the maps contradict (e.g. when physics and chemistry said the sun couldn't be more than 20 million years old and geology said the earth was at least 300 million years old [1]), everyone understands something is wrong and in need of fixing. And the maps keep leaking into each other.
This is keeping in mind the dangers of decompartmentalisation. The reason for bothering with it is an expected payoff in usefully superior understanding. People who know science works like this realise that a useful map is one that matches the territory, so they decompartmentalise with wild abandon, frequently without considering the dangers. And if you tell a group of people not to do something, at least a few will promptly do it. This does help explain engineer terrorists who've inadvertently decompartmentalised toxic waste and logically determined that the infidel must be killed. And why, if you have a forbidden thread, it's an obvious curiosity object.
The problem, if you want the results of science, is then not whether to decompartmentalise, but how and when to decompartmentalise. And that there may be dragons there.
[1] Though Kelvin thought he could stretch the sun's age to 500MY at a push.
↑ comment by taw · 2010-12-11T18:57:04.712Z · LW(p) · GW(p)
The practical reason for decompartmentalisation, despite its dangers, is that science works and is effective.
But science itself is extremely compartmentalized! Try getting economists and psychologists to agree on anything - and yet both have pretty good results, most of the time.
Even microeconomics and macroeconomics make far better predictions when they're separate, and repeated attempts at bringing them together consistently end in disaster.
Don't imagine that compartmentalization sets up impenetrable barriers once and for all - there's a lot of cautious exchange between nearby compartments, and their boundaries keep changing all the time. I quite like the "compartments as scientific disciplines" image. You have a lot of highly fuzzy boundaries - like for example computer science to math to theoretical physics to quantum chemistry to biochemistry to medicine. But when you're sick you don't ask on programming reddit for advice.
The best way to describe a territory is to use multiple kinds of maps.
Replies from: David_Gerard↑ comment by David_Gerard · 2010-12-11T19:03:48.881Z · LW(p) · GW(p)
I don't think anything you've said and anything I've said actually contradict each other.
Try getting economists and psychologists to agree on anything - and yet both have pretty good results, most of the time.
What are the examples you're thinking of, where both are right and said answers contradict, and said contradiction is not resolvable even in principle?
↑ comment by lsparrish · 2010-12-11T18:15:59.610Z · LW(p) · GW(p)
Upvoted. I think this is a useful way to think about things like this. Neither compartmentalizing nor decompartmentalizing is completely wrong; each just gets applied in the wrong contexts. So part of the challenge is to convince the person you're talking to that it's safe to decompartmentalize in the realm needed to see what you are talking about.
For example, it took me quite some time to decompartmentalize on evolution versus biology, because I had a distrust of evolution. It looked like toxic waste to me, and indeed it has arguably generated some (social Darwinism, for example). People who mocked creationists actually contributed to my sense of distrust in the early stages, given that my subjective experience with (young-earth) creationists was not of particularly unintelligent or gullible people. However, this got easier when I learned more biology and could see the reference points, and the vacuum of solid evidence (as opposed to reasonable-sounding speculation) for creationism. Later the creationist speculation started sounding less reasonable and the advocates a bit more gullible -- but until I started making the connections from evolution to the rest of science, there wasn't any reason for these things to be on my map yet.
I'm starting to think arguments for cryonics should be presented in the form of "what are the rational reasons to decompartmentalize (or not) on this?" instead of "just shut up and decompartmentalize!" It takes time to build trust, and folks are generally justifiably skeptical when someone says "just trust me". Also it is a quite valid point that topics like death and immortality (not to mention futurism, etc.) are notorious for toxic waste to begin with.
Replies from: David_Gerard↑ comment by David_Gerard · 2010-12-11T20:18:15.944Z · LW(p) · GW(p)
ciphergoth and I talked about cryonics a fair bit a couple of nights ago. He posits that I will not sign up for cryonics until it is socially normal. I checked my internal readout, it came back "survey says you're right", and I nodded my head. I surmise this is what it will take in general.
(The above is the sort of result my general memetic defence gives. Possibly-excessive conservatism in actually buying an idea.)
So that's your whole goal. How do you make cryonics normal without employing the dark arts?
Replies from: topynate, lsparrish↑ comment by lsparrish · 2010-12-11T23:06:11.749Z · LW(p) · GW(p)
Mike Darwin had a funny idea for that. :)
I think some additional training in DADA would do me a lot of good here. That is, I don't want to be using the dark arts, but I don't want to be vulnerable to them either. And the dark arts are extremely common, especially when people are looking for excuses to keep on compartmentalizing something.
Replies from: David_Gerard↑ comment by David_Gerard · 2010-12-12T00:19:29.615Z · LW(p) · GW(p)
A contest for bored advertising people springs to mind: "How would you sell cryonics to the public?" Then filter out the results that use dark arts. This will produce better ideas than you ever dreamed of.
The hard part of this plan is making it sound like fun for the copywriters. Ad magazine competition? That's the sort of thing that gets them working on stuff for fun and kudos.
(My psychic powers predict approximately 0 LessWrong regulars in the advertising industry. I hope I'm wrong.)
(And no, I don't think b3ta is quite what we're after here.)
↑ comment by lsparrish · 2010-12-11T00:37:02.355Z · LW(p) · GW(p)
And second question - can you think of any good way for people holding these two positions to reach Aumann agreement?
I've been thinking about this a lot lately. It may be that there is a tendency to jump to solutions too quickly on this topic. If more time were spent talking about which questions need to be answered for a resolution, perhaps it would have more success in triggering updates.
↑ comment by taw · 2010-12-10T16:36:57.840Z · LW(p) · GW(p)
Scientology was doing that from the very beginning
Quick reading suggests that Hubbard first founded "Dianetics" in late 1949/early 1950, and it became "Scientology" only in late 1953/early 1954. As far as I can tell it took them many years after that until they became the Scientology we know. There's some evidence of evaporative cooling at that stage.
And just as David Gerard says, modern Scientology is an extreme case. By "cult" I meant something more like the Objectivists.
Replies from: David_Gerard↑ comment by David_Gerard · 2010-12-10T16:55:27.544Z · LW(p) · GW(p)
The Wikipedia articles on Scientology are pretty good, by the way. (If I say so myself - I started WikiProject Scientology. :-) They were mostly started by critics but with lots of input from Scientologists, and the Neutral Point Of View turns out to be a fantastically effective way of writing about the stuff - before Wikipedia, there were CoS sites which were friendly and pleasant but rather glaringly incomplete in important ways, and critics' sites which were highly informative but frequently so bitter as to be all but unreadable.
(Despite the key rule of NPOV - write for your opponent - I doubt the CoS is a fan of WP's Scientology articles. Ah well!)
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-12-10T08:29:08.485Z · LW(p) · GW(p)
Moved post to Discussion section. Note that user's karma has now dropped below what's necessary to submit to the main site.
Replies from: waitingforgodel↑ comment by waitingforgodel · 2010-12-10T08:33:05.689Z · LW(p) · GW(p)
Also note that it wasn't when I submitted to the main site...
Replies from: Snowyowl
comment by waitingforgodel · 2010-12-11T05:57:34.102Z · LW(p) · GW(p)
At karma 0 I can't reply to each of you one at a time (rate limited - 10 min per post), so here are my replies in a single large comment:
I would feel differently about nuke designs. As I said in the "why" links, I believe that EY has a bug when it comes to tail risks. This is an attempt to fix that bug.
Basically non-nuke censorship isn't necessary when you use a reddit engine... and Roko's post isn't a nuke.
Yes, though you'd have to say more.
Incredible, thanks for the link
Incredible. Where were you two days ago!
After Roko's post on the question of enduring torture to reduce existential risks, I was sure there must be a SIAI/LWer who was willing to kill for the cause, but no one spoke up. Thanks :p
In this case my estimate is a 5% chance that EY wants to spread the censored material, and used censoring for publicity. Therefore spreading the censored material is questionable as a tactic.
Great! Get EY to rot13 posts instead of censoring them.
You can't just pretend that the threat is trivial when it's not.
Fair enough. But you can't pretend that it's illegal when it's not (i.e. the torture/murder example you gave).
Actually, I just sent an email. Christians/Republicans are killing ??? people for the same reason they blocked stem cell research: stupidity. Also, why you're not including EY in that causal chain is beyond me.
I think his blackmail declarations either don't cover my precommitment, or they also require him to not obey US laws (which are also threats).
Replies from: Manfred, shokwave↑ comment by Manfred · 2010-12-11T06:05:08.010Z · LW(p) · GW(p)
In this case my estimate is a 5% chance that EY wants to spread the censored material, and used censoring for publicity. Therefore spreading the censored material is questionable as a tactic.
Be careful to keep your eye on the ball. This isn't some zero-sum contest of wills, where if EY gets what he wants that's bad. The ball is human welfare, or should be.
↑ comment by JoshuaZ · 2010-12-15T05:58:11.591Z · LW(p) · GW(p)
Wow, that's even more impressive than the claim made by some Christian theologians that part of the enjoyment in heaven is getting to watch the damned be tormented. If any AI thinks anything even close to this then we have failed Friendliness even more than if we made a simple object maximizer.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2010-12-15T06:28:25.264Z · LW(p) · GW(p)
Next thing you're going to tell me is that an FAI shouldn't push fat people in front of trolleys.
Note: A sufficiently powerful FAI shouldn't need to, but that is different from saying it wouldn't.
↑ comment by Hul-Gil · 2011-07-26T03:03:07.173Z · LW(p) · GW(p)
Thanks! I had been wishing for a PM system... and here we had one all along.
Replies from: wedrifid↑ comment by wedrifid · 2011-07-26T17:21:43.084Z · LW(p) · GW(p)
Thanks! I had been wishing for a PM system... and here we had one all along.
I know, it took me months to realize that the 'someone replied to me' envelope was actually a re-purposed indicator for a feature I had no idea existed.
Replies from: drethelin↑ comment by drethelin · 2012-01-26T05:41:32.742Z · LW(p) · GW(p)
I actually prefer conversations to be public if possible. It doesn't really harm anyone, and it helps understanding of long-dead threads to see more discussion of them.
Replies from: wedrifid↑ comment by wedrifid · 2012-01-26T06:31:53.427Z · LW(p) · GW(p)
I actually prefer conversations to be public if possible. It doesn't really harm anyone, and it helps understanding of long-dead threads to see more discussion of them.
Some conversations are more personal and wouldn't be appropriate if public.
Replies from: drethelin
comment by Will_Sawin · 2010-12-12T03:24:16.855Z · LW(p) · GW(p)
Instead of trying to convince right-wingers to ban FAI, how about trying to convince Peter Thiel to defund SIAI in proportion to the number of comments in a certain period of time?
Advantages:
Better [incentive to Eliezer]/[increase in existential risk as estimated by waitingforgodel] ratio
Reversible if an equitable agreement is reached.
Smaller risk increase, as the problem warrants.
↑ comment by waitingforgodel · 2010-12-12T08:40:39.390Z · LW(p) · GW(p)
It's interesting, but I don't see any similarly high-effectiveness ways to influence Peter Thiel... Republicans already want to do high x-risk things; Thiel doesn't already want to decrease funding.
comment by waitingforgodel · 2010-12-10T18:36:50.082Z · LW(p) · GW(p)
The common misunderstanding in these comments comes from not clicking on the "precommitment" link and reading the reasons why the precommitment reduced existential risk.
If I ever do this again, I'll make the reasoning more explicit. In the meantime I'm not sure what to do except add this comment, and the edit at the bottom of the article, for new readers.
Replies from: rwallace, HughRistik↑ comment by rwallace · 2010-12-10T21:22:42.851Z · LW(p) · GW(p)
If I observe that I did read the thread to which you refer, and I still think your current course of action is stupid and crazy (and that's coming from someone who agrees with you about the censorship in question being wrong!), will that change your opinion even slightly?
↑ comment by HughRistik · 2010-12-11T00:01:57.032Z · LW(p) · GW(p)
I did read the original precommitment discussions. I thought your original threat was non-serious, and presented as an interesting thought experiment. I was with you on the subject of anti-censorship. When I discovered that your precommitment was serious, you lost the moral high-ground in my eyes, and entered territory where I will not follow.
comment by shokwave · 2010-12-10T18:19:24.494Z · LW(p) · GW(p)
You throw some scary ideas around. Try this one on for size. This post of yours has caused me to revise my probability of the proposition "the best solution to some irrational precommitments is murder" from Pascal's-wager levels (indescribably improbable) to 0.01%.
Replies from: waitingforgodel↑ comment by waitingforgodel · 2010-12-10T18:29:00.901Z · LW(p) · GW(p)
You throw some scary ideas around. Try this one on for size. This post of yours has caused me to revise my probability of the proposition "the best solution to some irrational precommitments is murder" from Pascal's-wager levels (indescribably improbable) to 0.01%.
There are some people who agree with you (the best way to block legislation is to kill the people who come up with it).
I'd say that, since I've only been talking about doing things well within my legal rights (using the legal system), talking about murdering me is a bit "cultish"...
Replies from: shokwave↑ comment by shokwave · 2010-12-10T18:55:45.703Z · LW(p) · GW(p)
The expected value of murder in any case only comes out positive if there are more than 7,000 people on average at risk from the action - which will happen when there are 7 billion people on the planet and I am 100% convinced the actor is going to perform the action once, OR there are 6.5 billion people on the planet and I am 54% convinced the actor is going to perform the action twice, OR 6.5 billion, 36% sure of three actions... etc.
I'd say that, since I've only been talking about doing things well within my legal rights (using the legal system), talking about murdering me is a bit "cultish"...
I can't speak for the legal system, but "one death for 6570 lives" vs. "6570 deaths for one blog post" speaks for itself.
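(For anyone who wants to check that arithmetic, here is a minimal sketch - it simply restates the numbers given above, assuming the precommitment's 0.0001% risk increase per action and reading "people at risk" as expected deaths; the function name and constants are illustrative, not from the original thread:)

```python
def expected_deaths(population, risk_per_action, p_follow_through, num_actions):
    # Expected deaths if each carried-out action raises extinction risk
    # by `risk_per_action`, and the actor follows through with
    # probability `p_follow_through` on `num_actions` actions.
    return population * risk_per_action * p_follow_through * num_actions

RISK = 0.000001  # the precommitment's 0.0001%, written as a fraction

# The three scenarios from the comment above, each landing near 7,000:
print(expected_deaths(7.0e9, RISK, 1.00, 1))  # 7000.0
print(expected_deaths(6.5e9, RISK, 0.54, 2))  # 7020.0
print(expected_deaths(6.5e9, RISK, 0.36, 3))  # 7020.0
```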