More Irrationality Game

post by Fill_Cluesome · 2012-07-03T16:16:55.670Z · LW · GW · Legacy · 66 comments

I thought it would be good to play the irrationality game again. Let's do it!

Entire text of "Will Newsome's" original post:

Please read the post before voting on the comments, as this is a game where voting works differently.

Warning: the comments section of this post will look odd. The most reasonable comments will have lots of negative karma. Do not be alarmed, it's all part of the plan. In order to participate in this game you should disable any viewing threshold for negatively voted comments.

Here's an irrationalist game meant to quickly collect a pool of controversial ideas for people to debate and assess. It kinda relies on people being honest and not being nitpickers, but it might be fun.

Write a comment reply to this post describing a belief you think has a reasonable chance of being true relative to the beliefs of other Less Wrong folk. Jot down a proposition and a rough probability estimate or qualitative description, like 'fairly confident'.

Example (not my true belief): "The U.S. government was directly responsible for financing the September 11th terrorist attacks. Very confident. (~95%)."

If you post a belief, you have to vote on the beliefs of all other comments. Voting works like this: if you basically agree with the comment, vote the comment down. If you basically disagree with the comment, vote the comment up. What 'basically' means here is intuitive; instead of using a precise mathy scoring system, just make a guess. In my view, if their stated probability is 99.9% and your degree of belief is 90%, that merits an upvote: it's a pretty big difference of opinion. If they're at 99.9% and you're at 99.5%, it could go either way. If you're genuinely unsure whether or not you basically agree with them, you can pass on voting (but try not to). Vote up if you think they are either overconfident or underconfident in their belief: any disagreement is valid disagreement.
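
A minimal sketch of this decision rule, purely as illustration: the post explicitly says to guess intuitively rather than use a precise mathy scoring system, so the vote helper and the 0.05 gap threshold below are arbitrary assumptions of this sketch, not part of the game.

```python
# Illustrative only: the game asks for intuition, not a formula.
# The 0.05 threshold is an arbitrary assumption for this sketch.

def vote(their_p: float, my_p: float, threshold: float = 0.05) -> str:
    """Upvote on basic disagreement, downvote on basic agreement, else pass."""
    gap = abs(their_p - my_p)
    if gap >= threshold:
        return "upvote"    # a pretty big difference of opinion
    if gap < threshold / 2:
        return "downvote"  # we basically agree
    return "pass"          # genuinely unsure whether we basically agree

# The post's example: they state 99.9%, I'm at 90% -> big gap -> upvote.
print(vote(0.999, 0.90))   # upvote
print(vote(0.999, 0.995))  # downvote under this toy threshold; the post says
                           # this one "could go either way"
```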

That's the spirit of the game, but some more qualifications and rules follow.

If the proposition in a comment isn't incredibly precise, use your best interpretation. If you really have to pick nits for whatever reason, say so in a comment reply.

The more upvotes you get, the more irrational Less Wrong perceives your belief to be. Which means that if you have a large amount of Less Wrong karma and can still get lots of upvotes on your crazy beliefs then you will get lots of smart people to take your weird ideas a little more seriously.

Some poor soul is going to come along and post "I believe in God". Don't pick nits and say "Well in a Tegmark multiverse there is definitely a universe exactly like ours where some sort of god rules over us..." and downvote it. That's cheating. You better upvote the guy. For just this post, get over your desire to upvote rationality. For this game, we reward perceived irrationality.

Try to be precise in your propositions. Saying "I believe in God. 99% sure." isn't informative because we don't quite know which God you're talking about. A deist god? The Christian God? Jewish?

Y'all know this already, but just a reminder: preferences ain't beliefs. Downvote preferences disguised as beliefs. Beliefs that include the word "should" are almost always imprecise: avoid them.

That means our local theists are probably gonna get a lot of upvotes. Can you beat them with your confident but perceived-by-LW-as-irrational beliefs? It's a challenge!

Additional rules:

  • Generally, no repeating an altered version of a proposition already in the comments unless it's different in an interesting and important way. Use your judgement.
  • If you have comments about the game, please reply to my comment below about meta discussion, not to the post itself. Only propositions to be judged for the game should be direct comments to this post. 
  • Don't post propositions as comment replies to other comments. That'll make it disorganized.
  • You have to actually think your degree of belief is rational.  You should already have taken the fact that most people would disagree with you into account and updated on that information. That means that  any proposition you make is a proposition that you think you are personally more rational about than the Less Wrong average.  This could be good or bad. Lots of upvotes means lots of people disagree with you. That's generally bad. Lots of downvotes means you're probably right. That's good, but this is a game where perceived irrationality wins you karma. The game is only fun if you're trying to be completely honest in your stated beliefs. Don't post something crazy and expect to get karma. Don't exaggerate your beliefs. Play fair.
  • Debate and discussion is great, but keep it civil.  Linking to the Sequences is barely civil -- summarize arguments from specific LW posts and maybe link, but don't tell someone to go read something. If someone says they believe in God with 100% probability and you don't want to take the time to give a brief but substantive counterargument, don't comment at all. We're inviting people to share beliefs we think are irrational; don't be mean about their responses.
  • No propositions that people are unlikely to have an opinion about, like "Yesterday I wore black socks. ~80%" or "Antipope Christopher would have been a good leader in his latter days had he not been dethroned by Pope Sergius III. ~30%." The goal is to be controversial and interesting.
  • Multiple propositions are fine, so long as they're moderately interesting.
  • You are encouraged to reply to comments with your own probability estimates, but  comment voting works normally for comment replies to other comments.  That is, upvote for good discussion, not agreement or disagreement.
  • In general, just keep within the spirit of the game: we're celebrating LW-contrarian beliefs for a change!

 

66 comments

Comments sorted by top scores.

comment by TimS · 2012-07-03T16:35:24.057Z · LW(p) · GW(p)

Irrationality Game

For reasons related to Gödel's incompleteness theorems and mathematically proven minimum difficulties for certain algorithms, I believe there is an upper limit on how intelligent an agent can be. (90%)

I believe that human hardware can - in principle - be as intelligent as it is possible to be. (60%) To be clear, this doesn't actually occur in the real world we currently live in.

Edit: In deference to social norms in the community, Retracted.

Replies from: TheOtherDave, DanielLC
comment by TheOtherDave · 2012-07-03T17:53:17.804Z · LW(p) · GW(p)

Upvoted for significant overconfidence on your second claim, assuming some plausible understanding of the phrase "human hardware". I'm also interested in your reasoning.

comment by DanielLC · 2012-07-03T18:57:54.431Z · LW(p) · GW(p)

I'm not sure what you mean.

For reasons related to Gödel's incompleteness theorems and mathematically proven minimum difficulties for certain algorithms, I believe there is an upper limit on how intelligent an agent can be.

We already know the upper limit on intelligence. It takes one bit of evidence to narrow down the possibilities by a factor of two.

I believe that human hardware can - in principle - be as intelligent as it is possible to be.

By "human hardware" do you mean an actual human brain, in the shape it normally forms, or just anything made out of neurons? If you mean the former, this is obviously false. We have a limited memory and thus a limited intelligence. If you mean the latter, we already know neurons are Turing complete, though you could still build a more efficient computer that does it faster and with less energy.

Do you mean that a human brain could, in principle, come very close to the upper limit of effective intelligence? That is, you might not be able to memorize 10^50 digits, but you could still answer any question you'd reasonably come across just as well?

Also, are you talking about just training a normal human, or something where their neurons have to just happen to be wired exactly right?

Also, there's the question of how to measure intelligence. Is it just how likely we are to set off a utilitronium shockwave, and how accurately it follows our goals?

comment by [deleted] · 2012-07-03T16:28:20.952Z · LW(p) · GW(p)

This is an Irrationality Game comment.

We are living in a time of relative technological stagnation outside of computers as argued by Peter Thiel and others. 70%

If we are, he is right about the reasons for this. 60%

Replies from: DanArmak, Jack
comment by DanArmak · 2012-07-03T18:55:17.735Z · LW(p) · GW(p)

It's mostly a definitional matter. I think we are progressing quickly in many fields, but we're mainly doing so by using computers, not by inventing new unrelated tech.

Retracted. Do not feed the trolls.

comment by Jack · 2012-07-03T16:41:50.239Z · LW(p) · GW(p)

Is this like how the industrial revolution was a time of relative technological stagnation outside of steam-powered tools?

Replies from: None
comment by [deleted] · 2012-07-03T16:42:28.351Z · LW(p) · GW(p)

No. 1850 to 1950 for example was a period of fast technological progress in many different fields.

comment by wedrifid · 2012-07-03T16:37:57.746Z · LW(p) · GW(p)

This is an Irrationality Game comment.

No less than 15% of the population could gain expected net benefits to overall wellbeing through carefully planned and executed anabolic steroid use. 80%.

comment by [deleted] · 2012-07-03T16:26:57.282Z · LW(p) · GW(p)

This is an Irrationality Game comment.

We have not been experiencing moral progress in the past 250 years. Moral change? Sure. I'd also be ok with calling it value drift. 90%

Edit: I talked about this previously in some detail here.

Edit: Apparently the OP was a troll account, retracting all contributions to the thread.

Replies from: prase, TimS, Grognor
comment by prase · 2012-07-03T16:43:20.830Z · LW(p) · GW(p)

Do you believe that there is no non-arbitrary way to define "moral progress", or do you think that "moral progress" is a coherent concept, just that we haven't experienced it?

(Retracted for the same reasons as other comments in this thread.)

Replies from: None
comment by [deleted] · 2012-07-03T16:47:15.271Z · LW(p) · GW(p)

I think moral progress is a coherent concept, but I'm inclined to argue that no human society so far has experienced it, though obviously I can't rule out some outliers that did so in certain time periods, since this is such a huge set. We have so little data, and there seems to be great variance in the kinds of values we've seen in them.

Replies from: Jack, TimS
comment by Jack · 2012-07-03T17:04:55.475Z · LW(p) · GW(p)

This is an Irrationality Game comment. (Though I'm actually not sure how it will score).

"Moral progress" simply describes moral change or value drift in the speaker's preferred direction. Very confident (~95%).

Replies from: None
comment by [deleted] · 2012-07-03T17:25:41.446Z · LW(p) · GW(p)

I don't use it that way. I like lots of the moral changes of the past 250 years, but feel the process behind them isn't something I want to outsource morality to. Just like I like having opposable thumbs but feel uncomfortable letting evolution shape humans any further. We should do that ourselves so it doesn't grind down our complex values.

There are lots of people running around who think society in 1990 is somehow morally superior to society in 1890 on some metric of rightness beyond the similarity of their values to our own. This is the difference between being on the "wrong side of history" being merely a mistake in reasoning one should get over as soon as possible, and it being a tragedy for them. A tragedy that perhaps kept repeating for every human society and individual in existence for nearly all of history.

This also suggests different strategies are appropriate for dealing with future moral change. I think we should be very cautious, since I'm sure we don't understand the process. Modern Western civilization's narrative isn't "over time values became more and more like our own", but "over time morality got better and better, and this gives our society meaning!". It's the difference between seeing "God guiding evolution" and confronting the full horror of Azathoth.

comment by TimS · 2012-07-03T17:03:06.945Z · LW(p) · GW(p)

If you can't produce evidence that moral progress ever happened and believe that it definitely hasn't happened in the recent past, why do you think that moral progress is a coherent concept?

Replies from: None
comment by [deleted] · 2012-07-03T17:18:57.369Z · LW(p) · GW(p)

I didn't say I had great confidence in moral progress being a coherent concept. But it seems plausible to me that acquiring more true beliefs and thinking about them clearly might lead to discovering that some values are incoherent or unreachable, and thus to ceasing to pursue them.

comment by TimS · 2012-07-03T16:29:42.870Z · LW(p) · GW(p)

Do you think any human society ever experienced moral progress?

Replies from: None
comment by [deleted] · 2012-07-03T16:36:29.351Z · LW(p) · GW(p)

Hard to say; history is blurry. We do know the past 300 years well enough that I'm ok with this level of certainty.

I'm far from comfortable saying that there was no moral progress in, say, some Medieval European societies. Not perhaps from our perspective, but from the perspective of a sort of CEV of 700 AD values looking at 1100 AD ones, who knows? I don't know enough to have a reasonable estimate.

There was also useful progress in philosophy made before the "Enlightenment" that sometimes captured previous values and preferences and fixed them up. But again, in nearly any society for which that is true there was also lots of harmful philosophy that mutated values in response to various pressures.

comment by Grognor · 2012-07-03T17:11:38.597Z · LW(p) · GW(p)

Upvoted in disagreement. The trend of moral progress has been one of less acceptance of violence, less acceptance of nonconsensual interaction, less victim blaming, and less standing by while terrible things happen to others (or at least looking indignant at past instances of this).

This leads to a falsifiable prediction. In the next one to four centuries, vegetarianism will increase to a majority, jails will be seen as unnecessarily, brutally, unjustifiably harsh, "the poor" will be less of an Acceptable Target (cf. delusions that they are "just lazy" and so on), and the present generation will be condemned for being so terrible at donating in general and at donating to the right causes. If all of those things happen, moral progress will have been flat-out confirmed.

Replies from: None
comment by [deleted] · 2012-07-03T17:40:29.660Z · LW(p) · GW(p)

I don't think I should be a vegetarian. Thus at best I feel uneasy about people in four centuries thinking vegetarianism should be compulsory, and at worst I'll be dismayed at them spending time on activities related to that instead of things I value. If I thought that was great I'd already be vegetarian, duh.

Also, I think I like some violence being ok. Completely non-violent minds would be rather inhuman, and violence has some neat properties if viewed from the perspective of fun theory. In any case, I strongly suspect the general non-violence trend (documented by Pinker) over the past few thousand years was due to biological changes in humans because of our self-domestication. Your point on consent is questionable. Victim blaming as well, since especially in the 20th century I would think all we saw was one set of scapegoats being swapped for another.

This leads me to suspect Homer's FAI is probably different from my own FAI, which is different from the FAI of 2400 AD values. If FAI2400 gets to play with the universe forever instead of FAI2012, I'd be rather pissed. Just because you see a trend line in moral change doesn't mean there is any reason to outsource your future value edits to it. Isn't this the classic mistake of confusing is for should?

But if it were as you say, then all our worries about CEV and FAI would be silly, since our society apparently is already automagically something very similar to what we want; we just need to figure out how to design it so that we can include emulated human minds while it continues working its thing.

Yay positive singularity problem solved!

Replies from: endoself
comment by endoself · 2012-07-04T04:48:08.689Z · LW(p) · GW(p)

In any case, I strongly suspect the general non-violence trend (documented by Pinker) over the past few thousand years was due to biological changes in humans because of our self-domestication.

They cite evidence of "moderate to strong heritability" of male aggressiveness. Shouldn't strong selection pressures use up variance and thus lower heritability?

Replies from: None
comment by [deleted] · 2012-07-04T04:56:44.527Z · LW(p) · GW(p)

Not in this case. At least not if Gregory Cochran and Henry Harpending are right and, thanks to a large population, we have more new mutations being tested than we would otherwise.

comment by Fill_Cluesome · 2012-07-03T16:17:29.041Z · LW(p) · GW(p)

metadiscussion

Replies from: gwern, None, wedrifid
comment by gwern · 2012-07-03T17:04:11.184Z · LW(p) · GW(p)

I didn't find the last one very useful; why would this one be any better?

Replies from: wedrifid, Will_Newsome, RobertLumley
comment by wedrifid · 2012-07-03T17:12:15.706Z · LW(p) · GW(p)

Wait. I just looked at who you were replying to and noticed that the guy who started this is a sockpuppet. Deleted my contributions. Downvoted the post and everything the sockpuppet has written. I will downvote all non-meta replies to this thread and probably replies to Fill_Cluesome in accordance with "Do Not Feed". If a real user is interested in the activity I would not object to them starting a thread themselves.

Someone ban him already. Sockpuppets bad!

Replies from: None, Jack, Thrill_Shoesome
comment by [deleted] · 2012-07-03T18:06:02.591Z · LW(p) · GW(p)

Yeah I'll delete my comments too.

Edit: facepalm. Fill_Cluesome copying a thread by Will Newsome: how did I not notice that?

Edit: Clarification: I don't think this was Will Newsome's sockpuppet.

Edit: Wait, their short history didn't seem particularly trollish. I'm a bit confused now.

Replies from: wedrifid
comment by wedrifid · 2012-07-03T18:12:26.423Z · LW(p) · GW(p)

Should we retract our comments?

Perhaps. My downvotes of the ones here don't matter too much given that voting is all backwards anyhow!

Perhaps we could repost them in the original irrationality thread?

That sounds like a good idea.

I do think your 'pure moral drift' idea is a good example of a controversial belief. Moral change does seem to be in a clear direction---adapting in part to new circumstances due to other forms of progress. I'd call that different to just 'drift'.

Replies from: None
comment by [deleted] · 2012-07-03T18:17:03.982Z · LW(p) · GW(p)

Not quite. While I'm a bit agnostic on "adaptive" (especially since that has a technical meaning in genetics that I'd likely dispute), my core argument is that we have surprisingly little reason to consider the process generating moral change so far to be normative.

Replies from: wedrifid
comment by wedrifid · 2012-07-03T18:54:03.790Z · LW(p) · GW(p)

my core argument is that we have surprisingly little reason to consider the process generating moral change so far to be normative.

I'd agree with that.

comment by Jack · 2012-07-03T18:51:47.320Z · LW(p) · GW(p)

I noticed it was probably Will right away, but I don't really see the problem. It's a reasonable post, and he could make (and probably has made) non-obvious sockpuppets if he wanted to. You can't credibly threaten to ban non-obvious sockpuppets, so why ban the ones that at least let us know who it really is? So long as the sockpuppets behave themselves, why does it matter?

If people think the thread is a good idea upvote/participate. If they don't, don't. Whether or not the sockpuppet Will uses is obvious or non-obvious shouldn't make a difference.

The use of sockpuppets doesn't seem problematic to me unless they're being used to bolster support for something the true user has a vested interest in. I.e. if someone with some responsibility did something wrong and they invent a sockpuppet to defend themselves. But they seem really unproblematic when they're obviously sockpuppets.

Replies from: wedrifid, RobertLumley, prase, Will_Newsome
comment by wedrifid · 2012-07-03T19:17:04.663Z · LW(p) · GW(p)

You can't credibly threaten to ban non-obvious sockpuppets so why ban the ones that at least let us know who it really is? So long as the sock-puppets behave themselves why does it matter?

Will initially threatened to create his sockpuppets for the purpose of attempting to do damage to lesswrong if Alicorn did not submit to his will when she considered intervening in a different thread. He has now created these sockpuppets and they are all mild nuisances. The appropriate response to that sort of overt anti-social behavior is banning---a ban that could be removed as soon as he agreed to stop violating what is either a clear norm of the community or an outright violation of the terms of use of the site (I'm not sure if lesswrong has one of those, and if it does I haven't read it). Using multiple accounts is (with few exceptions) not OK, particularly given how easy that makes it to abuse the karma system.

Banning Will (for as long as he blatantly defies the rules) cannot prevent him from posting anonymously or vandalizing the site, but it does change him from an accepted member of the community to outsider/vandal/troll/spammer. His posts can then be treated the same way the accounts with names like v234lkhj2lhksdfsdflh334 that come to post about pandora necklaces get treated. They barely get noticed and cause no disruption.

If Will manages to create a sockpuppet that is not recognized and doesn't cause any disruptions or cause frequent universal downvoting based on the perceived (lack of) merit of the comments by that account then fantastic. We've been "tricked" into accepting drastically improved contributions.

Replies from: MarkusRamikin, Jack, TheOtherDave, Will_Newsome
comment by Jack · 2012-07-03T19:39:12.323Z · LW(p) · GW(p)

Edit: I'm just going to drop it since a new thread has already been created.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-07-03T20:01:25.209Z · LW(p) · GW(p)

I've asked Will about a sockpuppet of his before and he said he used it to get feedback for an idea without having to deal with the negative reaction to it being his idea.

(I do this only very rarely, and even then I didn't aim for complete anonymity, just a buffer against knee-jerk reactions.)

I suppose there is the chance Will is building an armada of sockpuppets for abusing the karma system,

Agh that would be so lame. Also clearly immoral.

he's being rather obvious about this one

Someone is either making it seem as if I am being obvious about it, in which case they are clever, or did not anticipate the consequences of choosing a name that rhymes with mine after I threatened to do that if I were banned, in which case they aren't clever and have a very strange way of thinking. Given what I've seen of Thrill_Shoesome &c. I've found it suspiciously difficult to discern which hypothesis is more likely.

Replies from: wedrifid, Jack
comment by wedrifid · 2012-07-03T20:07:21.600Z · LW(p) · GW(p)

Someone is either making it seem as if I am being obvious about it, in which case they are clever, or did not anticipate the consequences of choosing a name that rhymes with mine after I threatened to do that if I were banned, in which case they aren't clever and have a very strange way of thinking. Given what I've seen of Thrill_Shoesome &c. I've found it suspiciously difficult to discern which hypothesis is more likely.

Incidentally, sockpuppets that by strong implication are impersonations of another user are at a whole different level of "BAN! BAN NOW!!!"

comment by Jack · 2012-07-03T20:10:34.197Z · LW(p) · GW(p)

(I do this only very rarely, and even then I didn't aim for complete anonymity, just a buffer against knee-jerk reactions.)

Which I'm fine with.

Agh that would be so lame. Also clearly immoral.

Lame, yes. Possibly immoral; I don't have a good sense for how much you would care about undermining the Less Wrong karma system. People here tend to be too worried about the integrity of the karma system. It is much more robust than people realize, because it is really just a stand-in approximation for actual reputation, which is why actual cases of karmassassination have been quickly remedied when the user was in good standing. I'm more worried about the site overreacting to threats than about any of the suggested threats so far.

Someone is either making it seem as if I am being obvious about it, in which case they are clever, or did not anticipate the consequences of choosing a name that rhymes with mine after I threatened to do that if I were banned, in which case they aren't clever and have a very strange way of thinking.

Either way it sounds reasonable to ban the account.

comment by TheOtherDave · 2012-07-03T19:53:58.061Z · LW(p) · GW(p)

---a ban that could be removed as soon as he agreed to stop violating what is either a clear norm of the community or an outright violation of the terms of use of the site [..] Banning Will (for as long as he blatantly defies the rules

Emphasis mine.
The way you phrase this suggests that the Will_Newsome account is admitting to running the similarly named accounts.
Do you mean to suggest this, or merely that it should nevertheless be obvious that he's doing so?

As far as I can recall, he has made no explicit claim either way, though he has made various statements that seem clearly intended to imply that he's not responsible for them.

That's not to say that he can't be banned for them anyway, if enough people feel there's enough evidence. I just want to be clear about what's being claimed.

Replies from: wedrifid
comment by wedrifid · 2012-07-03T20:03:26.464Z · LW(p) · GW(p)

see.

comment by Will_Newsome · 2012-07-03T19:43:49.917Z · LW(p) · GW(p)

He has now created these sockpuppets and they are all mild nuisances.

How could you possibly think jumping to that conclusion is justified? I can't tell if you're really that bad at epistemic rationality (maybe just in this domain? or maybe you're in a weird mood?) or if you're really that intent on getting me banned even if it requires underhanded tactics.

Replies from: wedrifid
comment by wedrifid · 2012-07-03T19:59:23.619Z · LW(p) · GW(p)

How could you possibly think jumping to that conclusion is justified? I can't tell if you're really that bad at epistemic rationality (maybe just in this domain? or maybe you're in a weird mood?)

They are definitely mild nuisances. I mean, have you seen them? They jump in and say stupid things. Ban them all.

Your denial counts for something. Not enough that I would assign less than 0.5 probability to them being you but enough that 'benefit of the doubt' applies. At the very least you have distanced yourself from the behavior of the others. Of course if credible evidence (including the testimony of sufficient others) indicated that you were lying about them not being you I'd endorse an unconditional permanent ban.

or if you're really that intent on getting me banned even if it requires underhanded tactics.

The frequent inclusion of conditionals and caveats would indicate otherwise and I wouldn't consider my approach here particularly underhanded even if I did have that as a goal. No, this isn't personal---it really is about a preference for enforcement of a sock-puppet abuse policy. "Clippy" is actually on my "Do Not Feed" list due to sockpuppet abuse considerations---in particular, dishonesty regarding the use and being consistently not funny in the role. I now get hatemail from him. Literally, it says "i hate you" in the body and the subject.

Replies from: prase, Will_Newsome
comment by prase · 2012-07-03T21:17:03.314Z · LW(p) · GW(p)

I now get hatemail from him. Literally, it says "i hate you" in the body and the subject.

This is hilarious.

(How did you manage to reach this state by not feeding him, by the way?)

Replies from: wedrifid
comment by wedrifid · 2012-07-03T21:23:59.492Z · LW(p) · GW(p)

How did you manage to reach this state by not feeding him, by the way?

He wasn't always on said list. Even if he was, "non-feeding" does not always mean not taking actions against him. For example, if I wrote "Do Not Feed" as a response to all other users who replied to a given user, then that wouldn't be feeding, but it would be extremely hostile. (This is hypothetical only.)

comment by Will_Newsome · 2012-07-03T20:10:19.990Z · LW(p) · GW(p)

Not enough that I would assign less than 0.5 probability to them being you but enough that 'benefit of the doubt' applies.

Still think this is way too high, but whatever, too hard to consider the counterfactual.

They are definitely mild nuisances. I mean, have you seen them? They jump in and say stupid things. Ban them all.

I haven't been annoyed so much as puzzled. I feel like Harry right before he realizes that Snape doesn't make sense.

The frequent inclusion of conditionals and caveats

I dunno dude. I don't see a caveat in "Ban the sockpuppets. And Will." But whatever, as long as you're chill now.

Literally, it says "i hate you" in the body and the subject.

Lol. Strange. (I've been accused of being Clippy before, by the way. Also, as you might remember, AspiringKnitter. It's sorta getting old by now.)

Replies from: TheOtherDave, wedrifid
comment by TheOtherDave · 2012-07-03T22:22:53.829Z · LW(p) · GW(p)

We could try to compensate by starting the rumor that you aren't really Will Newsome, I suppose.

Replies from: CaveJohnson
comment by wedrifid · 2012-07-03T20:14:04.989Z · LW(p) · GW(p)

I've been accused of being Clippy before, by the way.

Ridiculous. You are too creative and intelligent to be Clippy but not quite creative and intelligent enough that you could pull off being that mediocre for that long. You'd have made a far better Clippy if that was a game you had felt like playing.

comment by RobertLumley · 2012-07-03T19:11:51.497Z · LW(p) · GW(p)

Because, among other things, giving karma to sockpuppets allows them to create networks that can karmassassinate people.

comment by prase · 2012-07-03T21:10:05.336Z · LW(p) · GW(p)

We should punish even obvious sockpuppets for slippery-slope reasons. Any sockpuppet, obvious or not, accepted by the community will be interpreted by someone as a signal that there's nothing wrong with sockpuppetry.

Also, I have a moderate to strong distaste for bizarre behaviour, such as creating obvious sockpuppets when there is no obvious reason. Absurdist fiction is not my favourite genre.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-07-06T03:58:09.298Z · LW(p) · GW(p)

In fact from a slippery-slope/community norm point of view obvious sockpuppets are worse than non-obvious ones.

comment by Will_Newsome · 2012-07-03T19:49:30.323Z · LW(p) · GW(p)

But they seem really unproblematic when they're obviously sockpuppets.

Fill_Cluesome &c. aren't obviously sockpuppets. Could be a user who happened to be inspired to make an account after reading one of my posts. In that case I don't think "sockpuppet" would be the right term... no?

Also I think you and wedrifid are absurd to think that I'm responsible for the accounts—that said, this sort of prediction problem seems like it'd be really difficult to formalize, so I don't have much basis for that intuition.

Replies from: Jack
comment by Jack · 2012-07-03T19:55:40.792Z · LW(p) · GW(p)

If you deny that they're you, that seems like an excellent reason to ban them.

Edit: Regarding my belief that you're responsible for the accounts: I 1) know you've used other account names and 2) didn't care that they could be linked to you. I assumed you were just using it to avoid the instant downvoting that your main account has dealt with in the past. I was not attributing their use to any kind of malice or conspiracy. My prior for someone else creating the account to follow you around with lame agreement and repost your old ideas in order to further sour opinion toward you (or do a really poor job of the opposite) is very low.

Replies from: Will_Newsome, Will_Newsome
comment by Will_Newsome · 2012-07-03T20:27:05.062Z · LW(p) · GW(p)

If you deny that they're you, that seems like an excellent reason to ban them.

wedrifid feels similarly. I'm not so sure. This strikes me as parody, or maybe imitation, but not impersonation.

comment by Will_Newsome · 2012-07-03T20:23:09.163Z · LW(p) · GW(p)

My prior for someone else creating the account to follow you around with lame agreement and repost your old ideas in order to further sour opinion toward you (or do a really poor job of the opposite) is very low.

Once you've seen the kinds of things I have the tails of your distributions start to get pretty thick.

comment by Thrill_Shoesome · 2012-07-03T17:45:47.275Z · LW(p) · GW(p)

What do you mean, "real user"? I'm just as real as anyone else!

Replies from: wedrifid
comment by wedrifid · 2012-07-03T17:54:14.915Z · LW(p) · GW(p)

Ban the sockpuppets. And Will.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-07-03T18:31:18.646Z · LW(p) · GW(p)

(Thrill_Shoesome: I don't understand your goals, but for maximum drama now would be a good time to make a few "bedrifid", "redrifid", &c. accounts then lobby for wedrifid's banning.)

Replies from: wedrifid
comment by wedrifid · 2012-07-03T18:51:57.157Z · LW(p) · GW(p)

Thrill_Shoesome: I don't understand your goals

I actually believe you.

, but for maximum drama now would be a good time to make a few "bedrifid", "redrifid", &c. accounts then lobby for wedrifid's banning.

It would be a tad amusing I must admit. (Although I do note that the Will_Newsome account declared that it would create sockpuppets, giving examples along the lines of the ones you have used here.)

Replies from: Will_Newsome
comment by Will_Newsome · 2012-07-03T19:35:06.073Z · LW(p) · GW(p)

(Although I do note that the Will_Newsome account declared that it would create sockpuppets, giving examples along the lines of the ones you have used here.)

(But I also only threatened to do that if I were banned, which I wasn't. Again, making sockpuppets just so I can get indignant when people accuse me of having sockpuppets would not be funny, nor interesting, nor insightful, nor didactic. It'd just be a waste of people's time. I am not that lame.

I'm not sure the "Twosome" constellation are technically sockpuppets, and if they're trolling they're being exceedingly subtle about it. I find the phenomenon mysterious.)

Replies from: Hill_Twosome
comment by Hill_Twosome · 2012-07-03T22:42:46.064Z · LW(p) · GW(p)

I don't consider myself to be trolling.

Edit: In fact, to prove it, I'll stop posting forever.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-07-04T00:27:24.960Z · LW(p) · GW(p)

FWIW I don't consider you to be trolling either.

comment by Will_Newsome · 2012-07-03T20:14:08.350Z · LW(p) · GW(p)

Some people found the last one useful, as evidenced by its upvotes. (Though it's been downvoted at least three times today.) Why are you expressing your dislike? Don't people normally just silently downvote?

Replies from: gwern
comment by gwern · 2012-07-03T21:22:11.183Z · LW(p) · GW(p)

Dislike of something is different from pointing out the repetition. Even if the first one was useful, one would be entitled to ask why a second would be useful ('Doctor, doctor, I want my appendix out!' 'But why, we just removed it!' 'And it did me a world of good!'); how much more so if one didn't find the first one useful?

Replies from: Will_Newsome
comment by Will_Newsome · 2012-07-03T21:36:25.723Z · LW(p) · GW(p)

/nods, makes sense, thanks for explaining.

comment by RobertLumley · 2012-07-03T17:21:18.824Z · LW(p) · GW(p)

I wasn't around for the first one - what is the point of this exercise?

Replies from: shokwave
comment by shokwave · 2012-07-03T17:34:08.428Z · LW(p) · GW(p)

Introduce temperature into LW discussions.

comment by [deleted] · 2012-07-03T17:52:28.304Z · LW(p) · GW(p)

Do people understand that you are supposed to vote normally on the descendant comments of Irrationality Game statements?

comment by wedrifid · 2012-07-03T16:31:09.748Z · LW(p) · GW(p)

This game is frustrating. Most of my beliefs are too damn normal in this particular location. If I were anywhere else...

Let me see... how about this? That's probably my most controversial belief (and the only one I can actually think of at the moment).