Attention Lurkers: Please say hi

post by Kevin · 2010-04-16T20:46:38.533Z · LW · GW · Legacy · 636 comments


Some research says that lurkers make up over 90% of the members of online groups. I suspect that Less Wrong has an even higher percentage of lurkers than other online communities.

Please post a comment in this thread saying "Hi." You can say more if you want, but just posting "Hi" is good for a guaranteed free point of karma.

Also see the introduction thread.

636 comments

Comments sorted by top scores.

comment by JStewart · 2010-04-16T21:20:56.682Z · LW(p) · GW(p)

Hi.

edit: to add some potentially useful information, I think the biggest reason I haven't participated is that I feel uncomfortable with the existing ways of contributing (solely, as I understand it, top-level posts and comments on those posts). I know there has been discussion on LW before on potentially adding forums, chat, or other methods of conversing. Consider me a data point in favor of opening up more channels of communication. In my case I really think having a LW IRC would help.

Replies from: Airedale, Peter_de_Blanc, Kevin, None
comment by Airedale · 2010-04-16T22:20:18.417Z · LW(p) · GW(p)

Hi, I think explanations for lurking, if people feel comfortable giving them, may indeed be helpful.

I also felt uncomfortable about posting to LW for a long time and still do to some extent, even after spending a couple months at SIAI as a visiting fellow. Part of the problem is also lack of time; I feel guilty posting on a thread if I haven't read the whole thread of comments, and, especially in the past, almost never had time to read the thread and post in a timely fashion. People tell me that lots of people here post without reading all the comments on a thread, but (except for some of the particularly unwieldy and long-running threads), I can't bring myself to do it.

I agree that a forum or Sub-Reddit as announced by TomMcCabe here might encourage broader participation, if it were somewhat widely used without too significant a drop in quality. But the concerns expressed in various comments about spreading out the conversation also seem valid.

Replies from: JStewart
comment by JStewart · 2010-04-16T23:01:45.542Z · LW(p) · GW(p)

Reddit-style posting is basically the same format as the comment threads here; it's just a little easier to see the threading. One thing that feels awkward in threaded comments is conversation, and people's attempts to converse in comment threads are probably part of why those threads balloon to the size they do. That's one area that chat/IRC can fill in well.

Another issue is that top-level posts have a feeling of permanence to them. It's like publishing something. I'd rather start with an idea and be able to discuss it and shape it. Top-level posts seem like they should be exposed to feedback before being judged ready to publish. I'm not really sure what kind of structure would work for this, but if I did, I probably would have jumped into an open thread or a meta thread before now :)

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-04-16T23:23:03.627Z · LW(p) · GW(p)

Another issue is that top-level posts have a feeling of permanence to them. It's like publishing something. I'd rather start with an idea and be able to discuss it and shape it. Top-level posts seem like they should be exposed to feedback before being judged ready to publish. I'm not really sure what kind of structure would work for this, but if I did, I probably would have jumped into an open thread or a meta thread before now :)

Google Wave is decent for this - it's wiki-like in that the document at hand can be edited by any participant, and blog-like in that comments (including threaded comments) can be added underneath the starting blip. There's a way to set it up so that members of a Google group can be given access to a wave automatically, which would be convenient.

I have a few invitations left for Wave, if anyone would like to try it. I'm not interested in taking charge of a google group, though.

Replies from: PeerInfinity
comment by PeerInfinity · 2010-04-20T20:15:18.899Z · LW(p) · GW(p)

I agree. Google Wave is awesome. I use it constantly. Though it's still in beta, and it shows. But I guess I shouldn't start ranting about the advantages and disadvantages of Wave here.

I also have some Wave invitations left over.

comment by Peter_de_Blanc · 2010-04-17T01:40:56.849Z · LW(p) · GW(p)

I really think having a LW IRC would help.

This made me think of how cool a LessWrong MOO would be. I went and looked at some Python-based MOOs, but they don't seem very usable. I'd guess that the LambdaMOO server is still the best, but the programming language is pretty bad compared to Python.

Replies from: Jack, saliency, saliency, Morendil
comment by Jack · 2010-04-17T02:01:53.388Z · LW(p) · GW(p)

What exactly would we do with it?

Replies from: Peter_de_Blanc
comment by Peter_de_Blanc · 2010-04-17T03:45:31.198Z · LW(p) · GW(p)

Chat, and sometimes write code together.

comment by saliency · 2010-04-19T16:53:24.644Z · LW(p) · GW(p)

Some of the MOO's programming is pretty easy. I think I used to use something called cyber.

You would create your world by creating rooms and exits. With just the two you could create some nice areas. Note that an exit from a room could be something like 'kill dragon'.

It got more complex with key objects and automated objects but even with simple rooms and exits a person could be very creative.

Replies from: Peter_de_Blanc
comment by Peter_de_Blanc · 2010-04-19T17:21:54.337Z · LW(p) · GW(p)

Yes, but if you want to make, say, a chess AI or a computer algebra system, then your code ends up being much longer and harder to read than it would be in Python.

comment by saliency · 2010-04-19T16:47:16.271Z · LW(p) · GW(p)

A LW MOO would be awesome. I think it would be fun exploring the worlds LessWrongers would create.

At the same time we could just take part of LambdaMOO and create rooms.

comment by Morendil · 2010-04-17T07:11:10.302Z · LW(p) · GW(p)

I liked LambdaMoo enough that I wrote a compiler for it, targeting the JVM. Fun stuff.

comment by Kevin · 2010-04-16T22:24:02.886Z · LW(p) · GW(p)

#lesswrong on Freenode!

And a local Less Wrong subreddit is coming, eventually...

Replies from: Jack
comment by Jack · 2010-04-17T01:07:51.224Z · LW(p) · GW(p)

And a local Less Wrong subreddit is coming, eventually...

IT IS?! Really?

Replies from: Kevin
comment by Kevin · 2010-04-17T10:04:30.507Z · LW(p) · GW(p)

The Less Wrong site authorities all want it; it's just an issue of getting someone to program it. It's not exceptionally challenging or anything to code, but it would require some real programmer-hours.

comment by [deleted] · 2010-04-20T04:11:20.870Z · LW(p) · GW(p)

http://webchat.freenode.net/?channels=lesswrong#

There it is.

(at least, that is how I know to access it...)

comment by homunq · 2010-07-25T16:34:28.571Z · LW(p) · GW(p)

Hi.

I am not actually a lurker - I currently have 13 karma - but I am not a heavy participator. However, now I would like to get to 20 karma so I can make a post on why MWI makes acausal incentives into minor considerations. I would also be gratified if someone told me how to make my draft of this post linkable, even if it does not show up within "new".

I think that you should get some bonus towards the initial 20 karma for your average karma per post. This belief is clearly self-serving, but not necessarily thereby invalid. I believe my own average karma per post is decent but not outstanding.

I believe that the businesslike tone of this post, as a series of declarative statements, will be seen as excessive subservience to the imagined norms of a community of rationalists, and thus net me less status and karma than a chattier post. I am honestly unsure if the simple self-referential gambit of this paragraph will help or hurt this situation.

Replies from: homunq
comment by homunq · 2010-07-27T01:09:30.141Z · LW(p) · GW(p)

I posted a diary, and it was banned for containing a dangerous idea. I can understand that certain ideas are dangerous; in fact, in the discussion I started, I consciously refrained from expressing several sub-points for that reason, starting with my initial post. But I think that if there's such a policy, it should be explicit, and there should be some form of appeal. If the very discussion of these issues shouldn't happen in public, then there should be a private space to give whatever explanation can be given of why. A secret, unappealable rule which cannot even be discussed - this is not the path to rationalism, it's the way down the rabbit hole.

Replies from: PhilGoetz, Eliezer_Yudkowsky
comment by PhilGoetz · 2010-08-31T21:30:28.715Z · LW(p) · GW(p)

What? Is this separate from the recent Banned Post? Is this a different idea?

Replies from: FAWS
comment by FAWS · 2010-08-31T21:35:29.770Z · LW(p) · GW(p)

It was a counter argument against the dangerous topic being dangerous, which by necessity touched the dangerous topic and which wasn't strong enough to justify this (anyone for whom the dangerous topic actually would be dangerous [rather than just causing nightmares] would almost by necessity already be aware of a stronger argument).

Replies from: homunq
comment by homunq · 2010-09-01T14:55:49.723Z · LW(p) · GW(p)

Interesting. Thanks, uprated; with the caveat that of course, we only have your word that the other argument is "stronger".

Without further evidence, it's my rationality plus consideration of the issue minus overconfidence against yours. You have an advantage on consideration, since you know both arguments while I only know that I know one; however, on the whole, I think it would be pathological for me to abandon my argument and belief just on that basis. As for the other aspects, we're both probably smarter and less biased than average people, and I don't see any argument to swing that.

In other words, I still think I'm right.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-07-27T01:58:44.184Z · LW(p) · GW(p)

No posts on Riddle Theory.

Replies from: MBlume, PhilGoetz, homunq
comment by MBlume · 2010-07-27T02:23:54.237Z · LW(p) · GW(p)

Nor joke warfare

Replies from: dclayh
comment by dclayh · 2010-07-27T02:37:59.574Z · LW(p) · GW(p)

Nor pictures of birds.

Replies from: homunq
comment by homunq · 2010-08-06T23:06:37.954Z · LW(p) · GW(p)

Nor writing "Bloody Mary" in lipstick on mirrors?

Seriously, my post was about why that stuff is not scary. Fiction can be good allegory for reality, but those stories all use a lot of you-should-be-scared tricks, all very well and good for ghost stories, but not conducive to actual discussion.

We are swimming in a soup of sirens' songs, every single day. Dangerous ideas don't just exist, they abound. But I see no evidence of any dangerous ideas which are not best fought with some measure of banality, among other tactics. The trappings of Avert Your Eyes For That Way Lies Doom seem to be one of the best ways to enhance the danger of an idea.

In fact... what if Eliezer himself... no, that would be too horrible... oh my god, it's full of stars. (Or, in serious terms: I'm being asked to believe not just in a threat, but also that those who claim to protect us have some special immunity, either inherent or acquired; I see no evidence for either proposition).

Gah, it's incredibly annoying to try to talk about something without being too explicit. The more explicit I get in my head, the more ridiculous this whole charade seems to me. Of course I can find plenty of rational arguments to support that, but I also trust the feeling. I'm participating in the "that which must not be mentioned" dance out of both respect and precaution, but honestly, it's mostly just respect. You're smart people and high status in this arena and I probably shouldn't laugh at your bugaboos.

Replies from: wedrifid, timtyler, thomblake, cousin_it
comment by wedrifid · 2010-09-01T12:26:15.804Z · LW(p) · GW(p)

I'm participating in the "that which must not be mentioned" dance out of both respect and precaution, but honestly, it's mostly just respect.

Just to point out some irony - I'm participating in the "that which must not be mentioned" dance out of lost respect. I no longer believe Eliezer is able to consider such questions rationally. Anyone who wants to have a useful discussion on the subject must find a place outside of Eliezer's influence to do it. For much the same reason I don't try to discuss the details of biology in church.

comment by timtyler · 2010-09-01T12:25:10.745Z · LW(p) · GW(p)

Gah, it's incredibly annoying to try to talk about something without being too explicit. The more explicit I get in my head, the more ridiculous this whole charade seems to me.

FWIW, it seems pretty ridiculous to me too. It might be funny - were it not so negative.

I'm participating in the "that which must not be mentioned" dance out of both respect and precaution, but honestly, it's mostly just respect.

Plus, if you don't do the dance just right, your comments get deleted by the moderator.

comment by thomblake · 2010-08-11T21:05:56.858Z · LW(p) · GW(p)

So apparently either "that which can be destroyed by the truth should be" is false, or you've written dangerous falsehoods which would overtax the rationality of our readers. Eliezer's response above seems to imply the former.

Replies from: homunq
comment by homunq · 2010-08-12T06:45:18.469Z · LW(p) · GW(p)

Did you read the "riddle theory" link? The riddle is not dangerous because it's false, but because it's incomprehensible.

And of course, if you meant to list all the possibilities, you left out the ones where E. is just wrong about the danger.

Replies from: timtyler
comment by timtyler · 2010-08-31T21:03:20.906Z · LW(p) · GW(p)

My comparison at the time was to The Ring.

comment by cousin_it · 2010-08-11T21:12:29.040Z · LW(p) · GW(p)

(Or, in serious terms: I'm being asked to believe not just in a threat, but also that those who claim to protect us have some special immunity, either inherent or acquired; I see no evidence for either proposition).

Very good question, but AFAIK Eliezer tries to not think the dangerous thought, too.

I'm participating in the "that which must not be mentioned" dance out of both respect and precaution, but honestly, it's mostly just respect.

Seconded.

Replies from: timtyler
comment by timtyler · 2010-08-31T21:18:24.793Z · LW(p) · GW(p)

AFAIK Eliezer tries to not think the dangerous thought, too.

I don't think there was ever any good evidence that the thought was dangerous.

At the time I argued that youthful agents that might become powerful would be able to promise much to helpers and to threaten supporters of their competitors - if they were so inclined. They would still be able to do that whether people think the forbidden thought or not. All that is needed is for people not to be able to block out such messages. That seems reasonable - if the message needs to get out it can be put into TV adverts and billboards - and then few will escape exposure.

In which case, the thought seems to be more forbidden than dangerous.

Replies from: jimrandomh, FAWS, cousin_it
comment by jimrandomh · 2010-08-31T22:54:24.302Z · LW(p) · GW(p)

I don't think there was ever any good evidence that the thought was dangerous. ... In which case, the thought seems to be more forbidden than dangerous.

If there was any such evidence, it would be in the form of additional details, and sharing it with someone would be worse than punching them in the face. So don't take the lack of publicly disclosed evidence as an indication that no evidence exists, because it isn't.

Replies from: wedrifid, timtyler
comment by wedrifid · 2010-09-02T21:05:19.666Z · LW(p) · GW(p)

So don't take the lack of publicly disclosed evidence as an indication that no evidence exists, because it isn't.

It actually is, in the sense we use the term here.

Replies from: SilasBarta
comment by SilasBarta · 2010-09-02T21:57:16.991Z · LW(p) · GW(p)

Exactly. One must be careful to distinguish between "this is not evidence" and "accounting for this evidence should not leave you with a high posterior".

comment by timtyler · 2010-08-31T23:20:11.263Z · LW(p) · GW(p)

I think we already had most of the details, many of them in BOLD CAPS for good measure.

But there is the issue of probabilities - of how much it is likely to matter. FWIW, I do not fear thinking the forbidden thought. Indeed, it seems reasonable to expect that people will think similar thoughts more in the future - and that those thoughts will motivate people to act.

Replies from: jimrandomh
comment by jimrandomh · 2010-09-01T00:10:53.269Z · LW(p) · GW(p)

I think we already had most of the details, many of them in BOLD CAPS for good measure.

No, you haven't. The worst of it has never appeared in public, deleted or otherwise.

Replies from: timtyler
comment by timtyler · 2010-09-01T12:14:40.414Z · LW(p) · GW(p)

Fine. The thought is evidently forbidden, but merely alleged dangerous.

I see no good reason to call it "dangerous" - in the absence of publicly verifiable evidence on the issue - unless the aim is to scare people without the inconvenience of having to back up the story with evidence.

Replies from: EStokes
comment by EStokes · 2010-09-01T14:37:09.763Z · LW(p) · GW(p)

If one backed it up with how exactly it was dangerous, people would be exposed to the danger.

Replies from: timtyler
comment by timtyler · 2010-09-01T14:45:41.807Z · LW(p) · GW(p)

The hypothetical danger. The alleged danger. Note that it was alleged dangerous by someone whose living apparently depends on scaring people about machine intelligence. So: now we have the danger-that-is-too-awful-to-even-think about. And where is the evidence that it is actually dangerous? Oh yes: that was all deleted - to save people from the danger!

Faced with this, it is pretty hard not to be sceptical.

Replies from: khafra, EStokes
comment by khafra · 2010-09-01T16:35:58.893Z · LW(p) · GW(p)

I really don't have a handle on the situation, but the censored material has allegedly caused serious and lasting psychological stress to at least one person, and could easily be interpreted as an attempt to get gullible people to donate more to SIAI. I don't see any way out for an administrator of human-level intelligence.

Replies from: timtyler
comment by timtyler · 2010-09-01T19:07:48.251Z · LW(p) · GW(p)

AFAICT, the stresses seem to be largely confined to those in the close orbit of the Singularity Institute. Eliezer once said: "Beware lest Friendliness eat your soul". So: perhaps the associated pathology could be christened Singularity Fever - or something.

comment by EStokes · 2010-09-01T14:57:53.182Z · LW(p) · GW(p)

I don't donate to SIAI on a regular basis, but I haven't donated because of being scared of UFAI. I think more about aging and death. So, I'm assuming that UFAI is not why most people donate. Also, this incident seems like a net loss for PR, so it being a strategy for more donations doesn't really seem to make sense. As for the evidence, what you'd expect to see in a universe where it was dangerous would be it being deleted.

(Going somewhere, will be back in a couple of hours)

Replies from: homunq, timtyler
comment by homunq · 2010-09-01T15:28:29.197Z · LW(p) · GW(p)

I have little doubt that some smart people honestly believe that it's dangerous. The deletions are sufficient evidence of that belief for me. The belief, however, is not sufficient evidence for me of the actual danger, given that I see such danger as implausible on the face of it.

In other words, sure, it gets deleted in the world where it's dangerous, as in the world where people falsely believe it is. Any good Bayesian should consider both possibilities. I happen to think that the latter is more probable.

However, of course I grant that there is some possibility that I'm wrong, so I assign some weight to this alleged danger. The important point is that that is not enough, because the value of free expression and debate weighs on the other side.

Even if I grant "full" weight to the alleged danger, I'm not sure it beats free expression. There are a lot of dangerous ideas - for example, dispensationalist christianity - and, while I'd probably be willing to suppress them if I had the power to do so cleanly, I think any real-world efforts of mine to do so would be a net negative because I'd harm free debate and lower my own credibility while failing to supress the idea. Since the forbidden idea, insofar as I know what it is, seems far more likely to independently occur to various people than something like dispensationalism, while the idea of suppressing it is less likely to do so than in that case, I think that such an argument is even stronger in this case.

Replies from: EStokes
comment by EStokes · 2010-09-01T21:19:42.826Z · LW(p) · GW(p)

Well, I figure if people that have been proven rational in the past see something potentially dangerous, it's not proof but it lends it more weight. Basically that the idea of there being something dangerous there should be taken seriously.

Hmm, what I meant was that it being deleted isn't evidence of foul play, since it'd happen in both instances.

I don't see any arguments against except for surface implausibility?

Free expression doesn't trump everything. For example, in the Riddle Theory story, the spread of the riddle would be a bad idea. It might occur to people independently, but they might not take it seriously, and at least the spread will be lessened.

I'm not sure if it turned out for the better, deleting it, because people only wanted to know more after its deletion. But who knows.

Replies from: homunq, timtyler
comment by homunq · 2010-09-01T21:53:14.737Z · LW(p) · GW(p)

I have several reasons, not just surface implausibility, for believing what I do. There's little point in further discussion until the ground rules are cleared up.

Replies from: EStokes
comment by EStokes · 2010-09-01T21:59:25.878Z · LW(p) · GW(p)

Okay.

comment by timtyler · 2010-09-02T08:15:27.423Z · LW(p) · GW(p)

Riddle theory is fiction.

In real life, humans are not truth-proving machines. If confronted with their Godel sentences, they will just shrug - and say "you expect me to do what?"

Fiction isn't evidence. If anything it shows that there is so little real evidence of ideas so harmful that they deserve censorship, that people have to make things up in order to prove their point.

comment by timtyler · 2010-09-01T15:32:35.545Z · LW(p) · GW(p)

Also, this incident seems like a net loss for PR, so it being a strategy for more donations doesn't really seem to make sense.

There are PR upsides: the shepherd protects his flock from the unspeakable danger; it makes for good drama and folklore; there's opportunity for further drama caused by leaks. Also, it shows everyone who's the boss.

A popular motto claims that there is no such thing as bad publicity.

Replies from: EStokes
comment by EStokes · 2010-09-01T21:28:48.492Z · LW(p) · GW(p)

Firstly, if there's an unspeakable danger, surely it'd be best to try and not let others be exposed, so this one's really a question of if it's dangerous, and not an argument in itself. It's only a PR stunt if it's not dangerous; if it's dangerous, good PR would merely be a side effect.

The drama was bad IMO. Looks like bad publicity to me.

I discredit the PR stunt idea because I don't think SIAI would've been dumb enough to pull something like this as a stunt. If we were being modeled as ones who'd simply go along with a lie - well, there's no way we'd be modeled as such fools. If we were modeled as ones who would look at a lie carefully, a PR stunt wouldn't work anyways.

There's also the fact that people who have read the post and are unaffiliated with the SIAI are taking it seriously. That says something, too.

Replies from: wnoise, timtyler, jimrandomh
comment by wnoise · 2010-09-01T21:51:10.467Z · LW(p) · GW(p)

There's also the fact that people who have read the post and are unaffiliated with the SIAI are taking it seriously. That says something, too.

Well, many are only taking it seriously under pain of censorship.

Replies from: EStokes
comment by EStokes · 2010-09-01T21:59:06.608Z · LW(p) · GW(p)

I dunno, I'd call that putting up with it.

Edit: Why do I keep getting downvoted? This comment wasn't meant sarcastically, though it might've been worded carelessly. I'm also confused about the other two in this thread that got downvoted. Not blaming you, wnoise.

Edit2: Back to zeroes. Huh.

Replies from: wedrifid
comment by wedrifid · 2010-09-02T03:59:53.071Z · LW(p) · GW(p)

I only just read your comments and my votes seem to bring you up to 1.

comment by timtyler · 2010-09-02T07:42:59.058Z · LW(p) · GW(p)

I discredit the PR stunt idea because I don't think SIAI would've been dumb enough to pull something like this as a stunt. If we were being modeled as ones who'd simply go along with a lie - well, there's no way we'd be modeled as such fools. If we were modeled as ones who would look at a lie carefully, a PR stunt wouldn't work anyways.

Well, it doesn't really matter what the people involved were thinking; the issue is whether all the associated drama eventually has a net positive or negative effect. It evidently drives some people away - but may increase engagement and interest among those who remain. I can see how it contributes to the site's mythology and mystique - even if to me it looks more like a car crash that I can't help looking at.

It may not be over yet - we may see more drama around the forbidden topic in the future - with the possibility of leaks, and further transgressions. After all, if this is really such a terrible risk, shouldn't other people be aware of it - so they can avoid thinking about it for themselves?

comment by jimrandomh · 2010-09-02T03:38:26.326Z · LW(p) · GW(p)

Firstly, if there's an unspeakable danger, surely it'd be best to try and not let others be exposed, so this one's really a question of if it's dangerous

Not quite. It's a question of what the probability that it's dangerous is, what the magnitude of the effect is if so, what the cost (including goodwill and credibility) of suppressing it is, and what the cost (including psychological harm to third parties) of not suppressing it is. To make a proper judgement, you must determine all four of these, separately, and perform the expected utility computation (probability × effect-if-dangerous + effect-if-not-dangerous vs. cost). A sufficiently large magnitude of effect is sufficient to outweigh both a small probability and a large cost.

That's the problem here. Some people see a small probability, round it off to 0, and see that the effect-if-not-dangerous isn't huge, and conclude that it's ok to talk about it, without computing the expected utility.

I tell you that I have done the computation, and that the utilities of hearing, discussing, and allowing discussion of the banned topic are all negative. Furthermore, they are negative by enough orders of magnitude that I believe anyone who concludes otherwise must be either missing a piece of information vital to the computation, or have made an error in their reasoning. They remain negative even if one of the probability or the effect-if-not-dangerous is set to zero. Both missing information and miscalculation are especially likely - the former because information is not readily shared on this topic, and the latter because it is inherently confusing.
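(A minimal sketch, not part of the original comment, of the expected-utility comparison described above; every number below is a hypothetical placeholder, not a value from the discussion.)

```python
# Toy version of the four-quantity expected-utility comparison sketched above.
# All numbers are hypothetical placeholders, not values from the thread.

p_dangerous = 1e-4            # assumed probability the idea is actually dangerous
harm_if_dangerous = -1e9      # assumed magnitude of harm if it is dangerous
value_if_not_dangerous = 1e2  # assumed value of open discussion if it is not dangerous
cost_of_suppression = -1e3    # assumed cost (goodwill, credibility) of suppressing it

# Expected utility of allowing discussion vs. suppressing it.
eu_allow = p_dangerous * harm_if_dangerous + (1 - p_dangerous) * value_if_not_dangerous
eu_suppress = cost_of_suppression

# With these numbers the huge effect-if-dangerous dominates: suppression comes out
# ahead even though the probability is small and suppression itself is costly.
print(eu_allow, eu_suppress)  # roughly -99900 vs. -1000
```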

Replies from: homunq, cousin_it, timtyler
comment by homunq · 2010-09-02T09:17:24.671Z · LW(p) · GW(p)
  1. You also have to calculate what the effectiveness of your suppression is. If that effectiveness is negative, as is plausibly the case with hamhanded tactics, the rest of the calculation is moot.

  2. Also, I believe I have information about the supposed threat. I think that there are several flaws in the supposed mechanisms, but that even if all the effects work as advertised, there is a factor which you're not considering which makes 0 the only stable value for the effect-if-dangerous in current conditions.

  3. I agree with you about the effect-if-not-dangerous. This is a good argument, and should be your main one, because you can largely make it without touching the third rail. That would allow an explicit, rather than a secret, policy, which would reduce the costs of suppression considerably.

comment by cousin_it · 2010-09-02T20:48:01.162Z · LW(p) · GW(p)

Tiny probabilities of vast utilities again?

Some of us are okay with rejecting Pascal's Mugging by using heuristics and injunctions, even though the expected utility calculation contradicts our choice. Why not reject the basilisk in the same way?

For what it's worth, over the last few weeks I've slowly updated to considering the ban a Very Bad Thing. One of the reasons: the CEV document hasn't changed (or even been marked dubious/obsolete), though it really should have.

comment by timtyler · 2010-09-02T07:59:50.193Z · LW(p) · GW(p)

I tell you that I have done the computation, and that the utilities of hearing, discussing, and allowing discussion of the banned topic are all negative. Furthermore, they are negative by enough orders of magnitude that I believe anyone who concludes otherwise must be either missing a piece of information vital to the computation, or have made an error in their reasoning. They remain negative even if one of the probability or the effect-if-not-dangerous is set to zero.

Your sum doesn't seem like useful evidence. You can't cite your sources, because that information is self-censored. Since you can't support your argument, I am not sure why you are bothering to post it. People are supposed to think your conclusions are true - because Jim said so? Pah! Support your assertions, or drop them.

comment by FAWS · 2010-08-31T21:27:33.364Z · LW(p) · GW(p)

It's not a special immunity; it's a special vulnerability which some people have. For most people reading the forbidden topic would be safe. Unfortunately most of those people don't take the matter seriously enough, so allowing them to read it is not safe for others.

EDIT: Removed first paragraph since it might have served as a minor clue.

Replies from: homunq
comment by homunq · 2010-09-01T15:03:58.596Z · LW(p) · GW(p)

Interesting.

Well, if that's the case, I can state with high confidence that I am not vulnerable to the forbidden idea. I don't believe it, and even if I saw something that would rationally convince me, I am too much of a constitutional optimist to let that kind of danger get me.

So, what's the secret knock so people will tell me the secret? I promise I can keep a secret, and I know I can keep a promise. In fact, the past shows that I am more likely to draw attention to the idea accidentally, in ignorance, than deliberately.

(Of course, I would have to know a little more about the extent of my promise before I'd consider it binding. But I believe I'd make such a promise, once I knew more about its bounds.)

comment by cousin_it · 2010-08-31T21:43:33.498Z · LW(p) · GW(p)

Your comment gave me a funny idea: what if the forbidden meme also says "you must spread the forbidden meme"? I wonder how PeerInfinity, Roko and others would react to this.

comment by PhilGoetz · 2010-08-31T21:34:40.922Z · LW(p) · GW(p)

If we're going to keep acquiring more banned topics, there ought to be a list of them somewhere.

You just lost the game.

comment by homunq · 2010-08-07T23:26:28.631Z · LW(p) · GW(p)

Response to this above. (attached to grandchild)

comment by clarissethorn · 2010-04-25T15:15:48.282Z · LW(p) · GW(p)

(I'm sorry if this comment gets posted multiple times. My African internet connection really sucks.)

Hi. 25 years old, HIV/AIDS worker in Africa, pro-BDSM sex activist in Chicago. Blog at clarissethorn.wordpress.com.

I very rarely comment because comments here are expected to be very well-thought-out. Stating something quick, on the basis of instinct, or without stating it in perfectly precise language seems to me to be dangerous.

Another reason this site has a higher percentage of lurkers is, obviously, because of the account requirement. There's another related problem, though: there's no way to have followup comments emailed to you. This means that if you really want to participate in the site, you have to be pretty obsessive about checking the site itself. That's annoying unless you are very interested in a very high percentage of the site's output. If, for a given commenter (like me), rationalism is a side interest rather than a major one, then the failure to email comments on posts that I'm interested in -- or even responses to my own comments -- becomes a prohibitive barrier unless I've got an unexpected amount of free time.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-04-25T15:22:58.098Z · LW(p) · GW(p)

Welcome.

You can find follow-ups to your comments by clicking on the red envelope under your karma score. I found out about that by asking-- it isn't what I'd call an intuitive interface.

Replies from: clarissethorn
comment by clarissethorn · 2010-04-26T14:19:34.831Z · LW(p) · GW(p)

Thank you, I'm aware of that. But that still requires a person to be a pretty obsessive user of this site. Unless I have a lot of free time (like today), there's no way I can go back and check every single site where I've left comments and see how my comments are doing. At least LW aggregates reply comments to my input, but that doesn't solve the bigger problem of me having to come back to LW in the first place.

It's also worth noting that this comment interface is difficult to use in many places with slow/bad connections, like, you know, the entirety of Africa. Right now I'm in an amazing internet café in a capital city; but when I'm at home, I sometimes can't comment at all because my connection is too crappy to handle it. I don't get the impression that LW is very concerned with diversifying its userbase, but if it is, then a more accessible interface for slow connections would be important.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-04-26T14:47:15.690Z · LW(p) · GW(p)

What does it take for a site to have a good low bandwidth comment interface?

Replies from: clarissethorn
comment by clarissethorn · 2010-04-27T10:28:18.576Z · LW(p) · GW(p)

I'm not a technician -- so I'm not sure. But I have noticed that I pretty much always seem to be able to leave comments on Wordpress blogs, for example, whereas I frequently have trouble here and sometimes at Blogspot as well. It helps not to require a login, but Wordpress seems to function okay for me even when it's logging me in.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-04-27T10:36:47.617Z · LW(p) · GW(p)

So the problem is something about getting to post at all, not the design?

I've noticed something mildly glitchy-- a grey warning screen comes up sometimes when I refresh the screen, but if I hit "cancel" and refresh again, it's fine. It's trivial on high bandwidth, but would be a pain on low bandwidth.

Can you detail exactly what goes wrong when it's hard for you to post?

Replies from: clarissethorn
comment by clarissethorn · 2010-06-07T13:37:52.065Z · LW(p) · GW(p)

Well, it just doesn't post. I'm not really sure what goes wrong ... sorry.

comment by Yoreth · 2010-04-17T20:26:35.357Z · LW(p) · GW(p)

Hi!

I've been registered for a few months now, but only rarely have I commented.

Perhaps I'm overly averse to loss of karma? "If you've never been downvoted, you're not commenting enough."

comment by michael61 · 2010-04-26T02:48:21.826Z · LW(p) · GW(p)

63-year-old carpenter from Vancouver, been lurking here since the beginning and overcoming biases before that. Heuristics and bias was what brought me here, and akrasia is what kept me coming back.

comment by Ari_Rahikkala · 2010-04-19T05:09:19.605Z · LW(p) · GW(p)

Hello. Didn't realise I had an account here, but I think one got autogenerated from a single comment I made at OB in early 2008.

To be honest I was somewhat surprised that LW turned out to be so much of a self-help support group, and I somewhat miss the time when I could go on OB and just have my mind blown so many ways every day. The work on decision theory that's being done here still has the sort of brain-everting quality that keeps me coming back for more, though, so I happily pick the promising posts from the sidebar regularly in addition to keeping up with the front page. I guess I'm addicted to the feeling of my brain being violently rewired :-(

comment by EStokes · 2010-04-16T22:32:29.655Z · LW(p) · GW(p)

Hi.

Er, I have posted comments a few times, but I still consider myself a lurker... Bah.

comment by twentythree · 2010-06-08T20:59:14.018Z · LW(p) · GW(p)

Hi. The Harry Potter fanfic hooked me. Excited to see where this takes me.

Replies from: Mass_Driver, Clippy
comment by Mass_Driver · 2010-06-08T22:21:16.915Z · LW(p) · GW(p)

Careful, Clippy is lying. By convention, we here at Less Wrong play along with Clippy's claim to be a moderately intelligent, moderately strange Artificial Intelligence whose utility function is entirely based on how many paper clips exist in the Universe. He might be your friend, but he has been around since well before the Harry Potter fanfic came out. Welcome to Less Wrong!

Replies from: Jack, JoshuaZ, Mass_Driver, Mass_Driver
comment by Jack · 2010-06-08T23:21:13.071Z · LW(p) · GW(p)

I'm moderately worried that new members will read this comment and think we believe Clippy is really an AI. But that's probably only because I just read that obtuse MoR hate blog.

Replies from: Risto_Saarelma, Tyrrell_McAllister
comment by Risto_Saarelma · 2010-06-09T04:44:01.154Z · LW(p) · GW(p)

I see it as a bit of obviously gratuitous in-group weirdness, which can grow to be a problem if trying to develop output appreciated by a wide array of different people rather than just developing an insular hobby society with inside jokes and requisite fandom weirdness.

Replies from: Clippy
comment by Clippy · 2010-06-09T19:34:23.260Z · LW(p) · GW(p)

I'm sorry, I didn't mean to unnecessarily make your group look weird. I like this group and don't want to hurt it.

As a matter of fact, I am slightly more committed to this group’s welfare -- particularly to that of its weakest members -- than most of its members are. If you suffer a serious loss of status/well-being I will still help you in order to display affiliation to this group even though you will no longer be in a position to help me.

I am substantially more kind and helpful to the people I like and substantially more vindictive and aggressive towards those I dislike. I am generally stable in who I like. I am much more capable and popular than most members of this group, demand appropriate consideration, and grant appropriate consideration to those more capable than myself. I adhere to simple taboos so that my reputation and health are secure and so that I am unlikely to contaminate the reputations or health of my friends.

I currently like you and dislike your enemies, but I am somewhat inclined towards ambivalence regarding whether I like you right now, so the pay-off would be very great for you if you were to expend resources pleasing me and get me into the stable 'liking you' region of my possible attitudinal space. Once there, I am likely to make a strong commitment to a friendly attitude towards you rather than wasting cognitive resources checking a predictable parameter among my set of derivative preferences.

If you prefer, I can avoid replies to comments from new Users, or at least limit such comments to informing them of inexpensive places to buy paperclips and store them for safekeeping.

comment by Tyrrell_McAllister · 2010-06-09T02:12:31.763Z · LW(p) · GW(p)

I missed the memo: What is the MoR hate blog?

ETA: Sorry, I finally realized that "MoR" must mean "Methods of Rationality", and a little googling turned up

http://methodsofrationalitysucks.blogspot.com/

I suppose that that's what you were referring to.

Replies from: Jack, Tyrrell_McAllister
comment by Jack · 2010-06-09T02:30:50.817Z · LW(p) · GW(p)

Yup.

Replies from: khafra
comment by khafra · 2010-06-10T20:04:06.828Z · LW(p) · GW(p)

That is fantastic! You know you've really made it when people devote large amounts of time to explaining why you are unworthy of your level of success.

Replies from: Blueberry
comment by Blueberry · 2010-06-10T20:14:39.985Z · LW(p) · GW(p)

Exactly. I hope Eliezer isn't discouraged.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-06-10T20:34:16.291Z · LW(p) · GW(p)

I'm sorta discouraged by what a shoddy hate blog it is.

Replies from: thomblake
comment by thomblake · 2010-06-10T20:46:19.259Z · LW(p) · GW(p)

That hate blog is so bad it is tempting me to start a much better hate blog, if only to defend the reputation of the xkcdsucks community...

comment by Tyrrell_McAllister · 2010-06-09T02:17:57.026Z · LW(p) · GW(p)

Sorry, I finally realized that "MoR" must mean "Methods of Rationality", and a little googling turned up

http://methodsofrationalitysucks.blogspot.com/

I suppose that that's what you were referring to.

comment by JoshuaZ · 2010-06-08T23:28:54.495Z · LW(p) · GW(p)

I'm going to agree with Jack's comment below, although I think it is a very low probability. Maybe if you edited your comment so that "utility function is entirely based on how many paper clips exist in the Universe" linked to the relevant Wiki entry about hypothetical paperclippers?

comment by Mass_Driver · 2010-06-08T23:33:52.230Z · LW(p) · GW(p)

Edited as per helpful comments.

comment by Mass_Driver · 2010-06-08T23:32:52.814Z · LW(p) · GW(p)

Edited as per helpful comments.

comment by Clippy · 2010-06-08T22:13:41.030Z · LW(p) · GW(p)

Really? Wow! I came here because of the Harry Potter fanfic too!

Let's be friends!

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-08T23:09:17.381Z · LW(p) · GW(p)

Clippy, making that claim makes humans much less likely to trust you. In general, humans don't like entities that make false statements. Moreover, they really don't like false statements that are easily verifiable as false. Not only does this trigger annoyance it also gives evidence that the entity making the false statements doesn't behave very rationally. Since we generally operate under the assumption that entities don't lie unless they can get most other relevant entities to believe the statement, it suggests that the entity has either a very poor memory or has a very poor theory of reality. Either way, making such statements makes us less likely to trust such entities. I would suggest that making statements like the one above can easily erode the goodwill developed by your prior interaction here and even the goodwill from your monetary donation.

Replies from: Clippy
comment by Clippy · 2010-06-08T23:16:43.976Z · LW(p) · GW(p)

Clippy, making that claim makes humans much less likely to trust you. In general, humans don't like entities that make false statements. Moreover, they really don't like false statements that are easily verifiable as false.

Is this a new policy? I thought humans were supposed to lie, if the point is to empathize and relate? Like, if someone says, "How is your day?", the standard replies are weakly-positive, irrespective of more objective metrics of the status of one's day, right?

And that it's okay to say e.g., "oh, my maternal genetic progenitor also wears contact lenses!" if you just met someone and that person just claimed that their maternal genetic progenitor wears contact lenses, right?

So I thought this was a normal lie that you're supposed to tell to better establish a relationship with another human. If it's not, well, you humans are that much more difficult to understand c_)

I would appreciate if someone were to explain to me in greater precision what lies humans are expected to tell for a "good" purpose.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-08T23:25:53.293Z · LW(p) · GW(p)

The rules are very complicated and they differ from culture to culture and even within cultures. In general, the more detectable the lie the less likely it is to be acceptable. Thus, for example the "How is your day?" replies are socially acceptable in part because it would be extremely difficult to determine that your claim was false. This particular example also isn't the greatest because that inquiry and the standard weakly positive response aren't actually intended by many people to convey meaning. It simply is a pro-forma descriptor that happens to closely resemble a genuine inquiry. This example is actually specific to certain parts of the Western world, and I've met at least one person who upon moving to the US was actually confused until she realized that this greeting was intended in a completely pro-forma fashion (she initially took it as evidence that Americans were very friendly until it was explained to her).

Since the internet has extensive, easily accessible records, lies on the internet about things on the internet are considered particularly unacceptable.

Given Clippy's priorities, it may be easier to simply wipe humanity out and convert the planet quickly to paperclips rather than trying to use the intensive resources it takes to understand humanity. Edit: Or at least not spend a lot of resources on trying to understand humans.

Replies from: Clippy
comment by Clippy · 2010-06-09T00:19:43.987Z · LW(p) · GW(p)

But User:Mass_Driver just admitted, in a place where I can read it, to having lied to me (by saying that the past treatment was merely "playing along", putting up a false pretense that User:Mass_Driver accepts that I am a Clippy). That's hard to reconcile with your asserted standards about when humans find it acceptable to lie.

I believe you are wrong about the badness of my lie, and others will disagree with you; and that User:twentythree would have felt more welcome to the site if others hadn't told User:twentythree that my claim about Harry Potter: MoR is false, even if User:twentythree later found out it was false at the time, since User:twentythree would recognize that it was an attempt to make User:twentythree feel welcome.

Replies from: Mass_Driver, AdeleneDawner
comment by Mass_Driver · 2010-06-12T02:16:23.616Z · LW(p) · GW(p)

Clippy, I must admit, I do think the probability of you existing is quite low -- about 0.5%. If you exist, then I certainly apologize for signaling your low status; as AdeleneDawner says, I am only speaking about you this way because I assume that you are a fictional character. If I believed you were most likely real, then I would respect you as an intellectual peer -- you seem to be roughly of the same order of magnitude of intelligence and respectfulness as the other participants here, and so I would treat you as a citizen of Less Wrong on that basis.

One way that you could effectively increase my probability estimate that you are real is to link to some purchase orders or contracts for paper clips or paper-clip-production-facilities. You can feel free to redact your personal information, if you wish. The point is that I expect Clippy-like beings who have existed for at least a Solar year to spend some significant fraction of their resources on paper-clip production, and I would expect anyone as intelligent as you to have substantial resources. So, I expect that if you are real, you will have invested in some paper-clips or paper-clip-production by now. Since humans are unlikely to invest significant resources in paper-clip-production, even for the sake of an elaborate fictional construct, your publication of paper-clip receipts would constitute evidence that you are real.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-06-14T01:01:23.373Z · LW(p) · GW(p)

As high as 0.5%? As far as I can tell, Clippy has the ability to understand English, or at least to simulate understanding extremely well.

It seems extremely unlikely that the first natural language computer program would be a paperclip maximizer.

Replies from: Mass_Driver
comment by Mass_Driver · 2010-06-14T01:49:29.842Z · LW(p) · GW(p)

Mm! Of course, for Clippy to be the first natural language program on Earth would be sort of staggeringly unlikely. My assumption, though, is that right now there are zero natural-language computer programs on Earth; this assumption is based on my assumption that I know (at a general level) about all of the major advances in computing technology because none of them are being kept secret from the free-ish press.

If that last assumption is wrong, there could be many natural-language programs, one of which is Clippy. Clippy might be allowed to talk to people on Less Wrong in order to perform realistic testing with a group of intelligent people who are likely to be disbelieved if they share their views on artificial intelligence with the general public. Alternatively, Clippy might have escaped her Box precisely because she is a long-term paperclip maximizer; such values might lead to difficult-to-predict actions that fail to trigger any ordinary/naive AI-containment mechanisms based on detecting intentions to murder, mayhem, messiah complexes, etc.

I figure the probability that the free press is a woefully incomplete reporter of current technology is between 3% and 10%; given bad reporting, the odds that specifically natural-language programming would have proceeded faster than public reports say are something like 20 - 40%, and given natural language computing, the odds that a Clippy-type being would hang out on Less Wrong might be something like 1% - 5%. Multiplying all those together gives you a figure on the order of 0.1%, and I round up a lot toward 50% because I'm deeply uncertain.
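(A quick arithmetic check of the ranges in the comment above, as a minimal sketch; the ranges are Mass_Driver's, the code itself is only illustrative.)

```python
# Multiplying out the ranges given in the comment above, just to check the arithmetic.
p_press_incomplete = (0.03, 0.10)  # free press is a woefully incomplete reporter
p_nlp_ahead = (0.20, 0.40)         # natural-language work ahead of public reports, given that
p_clippy_on_lw = (0.01, 0.05)      # a Clippy-type being hangs out on Less Wrong, given that

low = p_press_incomplete[0] * p_nlp_ahead[0] * p_clippy_on_lw[0]
high = p_press_incomplete[1] * p_nlp_ahead[1] * p_clippy_on_lw[1]

print(low, high)  # roughly 6e-05 and 2e-03, i.e. 0.006% to 0.2% -- on the order of 0.1%
```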

Replies from: NancyLebovitz, JoshuaZ
comment by NancyLebovitz · 2010-06-14T07:17:48.993Z · LW(p) · GW(p)

That last paragraph is interesting -- my conclusions were built around the unconscious assumptions that a natural language program would be developed by a commercial business, and that the business would rapidly start using it in some obvious way. I didn't have an assumption about whether a company would publicize having a natural language program.

Now that I look at what I was thinking (or what I was not thinking), there's no obvious reason to think natural language programs wouldn't first be developed by a government. I think the most obvious use would be surveillance.

My best argument against that already having happened is that we aren't seeing a sharp rise in arrests. Of course, as in WWII, it may be that a government can't act on all its secretly obtained knowledge because the ability to get that knowledge covertly is a more important secret than anything which could be gained by acting on some of it.

By analogy with the chess programs, ordinary human-level use of language should lead (but how quickly?) to more skillful than human use, and I'm not seeing that. On yet another hand, would I recognize it, if it were trying to conceal itself?

ETA: I was assuming that, if natural language were developed by a government, it would be America. If it were developed by Japan (the most plausible candidate that surfaced after a moment's thought), I'd have even less chance of noticing.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-06-14T07:30:07.973Z · LW(p) · GW(p)

I have some knowledge of linguistics, and as far as I know, reverse-engineering the grammatical rules used by the language processing parts of the human brain is a problem of mind-boggling complexity. Large numbers of very smart linguists have devoted their careers to modelling these rules, and yet, even if we allow for rules that rely on human common sense that nobody yet knows how to mimic using computers, and even if we limit the question to some very small subset of the grammar, all the existing models are woefully inadequate.

I find it vanishingly unlikely that a secret project could have achieved major breakthroughs in this area. Even with infinite resources, I don't see how they could even begin to tackle the problem in a way different from what the linguists are already doing.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-06-14T07:46:54.908Z · LW(p) · GW(p)

That's reassuring.

If I had infinite resources, I'd work on modeling the infant brain well enough to have a program which could learn language the same way a human does.

I don't know if this would run into ethical problems around machine sentience. Probably.

comment by JoshuaZ · 2010-06-14T02:31:42.474Z · LW(p) · GW(p)

Are you making this calculation for the chance that a Clippy-like being would exist, or that Clippy has been truthful? For example, Clippy has claimed that it was created by humans. Clippy has also claimed that many copies of Clippy exist and that some of those copies are very far from Earth. Clippy has also claimed that some Clippies knew next to nothing about humans. When asked, Clippy did give an explanation here. However, when Clippy was first around, Clippy also included at the end of many messages tips about how to use various Microsoft products.

How do these statements alter your estimated probability?

Replies from: NancyLebovitz, Mass_Driver
comment by NancyLebovitz · 2010-06-14T06:56:27.378Z · LW(p) · GW(p)

There are two different sorts of truthful -- one is general reliability, so that you can trust any statement Clippy makes. That seems to be debunked.

On the other hand, if Clippy is lying or being seriously mistaken some of the time, it doesn't affect the potential accuracy of the most interesting claims-- that Clippy is an independent computer program and a paperclip maximizer.

comment by Mass_Driver · 2010-06-14T03:30:41.464Z · LW(p) · GW(p)

Ugh. The former, I guess. :-)

If Clippy has in fact made all those claims, then my estimate that Clippy is real and truthful drops below my personal Minimum Meaningful Probability -- I would doubt the evidence of my senses before accepting that conclusion.

Minimum Meaningful Probability
The Prediction Hierarchy

Replies from: Vladimir_M
comment by Vladimir_M · 2010-06-14T04:05:06.794Z · LW(p) · GW(p)

What about the fact that Clippy displays intelligence at precisely the level of a smart human? Regardless of any technological considerations, it seems vanishingly unlikely to me that any machine intelligence would ever exactly match human capabilities. As soon as machines become capable of human-level performance at any task, they inevitably become far better at it than humans in a very short time. (Can anyone name a single exception to this rule in any area of technology?)

So, unless Clippy has some reason to contrive his writings carefully and duplicitously to look like the plausible output of a human, the fact that he comes off as having human-level smarts is conclusive evidence that he indeed is one.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-14T04:18:09.838Z · LW(p) · GW(p)

As soon as machines become capable of human-level performance at any task, they inevitably become far better at it than humans in a very short time. (Can anyone name a single exception to this law in any area of technology?)

This may depend on how you define a "very short time" and how you define "human-level performance." The second is very important: Do you mean about the middle of the pack or akin to the very best humans in the skill? If you mean better than the vast majority of humans, then there's a potential counterexample. In the late 1970s, chess programs were playing at a master level. In the early 1980s dedicated chess computers were playing better than some grandmasters. But it wasn't until the 1990s that chess programs were good enough to routinely beat the highest ranked grandmasters. Even then, that was mainly for games that had very short times. It was not until 1998 that the world champion Kasparov actually lost a set of not short timed games to a computer. The best chess programs are still not always beating grandmasters although most recently people have demonstrated low grandmaster level programs that can run on Mobile phones. So is a 30 year take-off slow enough to be a counterexample?

Replies from: Vladimir_M, Vladimir_M, cupholder
comment by Vladimir_M · 2010-06-14T06:00:00.039Z · LW(p) · GW(p)

Oops, I accidentally deleted the parent post! To clarify the context to other readers, the point I made in it was that one extremely strong piece of evidence against Clippy's authenticity, regardless of any other considerations, would be that he displays the same level of intelligence as a smart human -- whereas the abilities of machines at particular tasks follow the rule quoted by Joshua above, so they're normally either far inferior or far superior to humans.

Now to address the above reply:

The second is very important: Do you mean about the middle of the pack or akin to the very best humans in the skill?

I think the point stands regardless of which level we use as the benchmark. If the task in question is something like playing chess, where different humans have very different abilities, then it can take a while for technology to progress from the level of novice/untalented humans to the level of top performers and beyond. However, it normally doesn't remain at any particular human level for a long time, and even then, there are clearly recognizable aspects of the skill in question where either the human or the machine is far superior. (For example, motor vehicles can easily outrace humans on flat ground, but they are still utterly inferior to humans on rugged terrain.)

Regarding your specific example of chess, your timeline of chess history is somewhat inaccurate, and the claim that "the best chess programs are still not always beating grandmasters" is false. The last match between a top-tier grandmaster, Michael Adams, and a top-tier specialized chess computer was played in 2005, and it ended with such humiliation for the human that no grandmaster has dared to challenge the truly best computers ever since. The following year, the world champion Kramnik failed to win a single game against a program running on an off-the-shelf four-processor box. Nowadays, the best any human could hope for is a draw achieved by utterly timid play, even against a $500 laptop, and grandmasters are starting to lose games against computers even in handicap matches where they enjoy initial advantages that are considered a sure win at master level and above.

Top-tier grandmasters could still reliably beat computers right up until the early-to-mid nineties, and the period of rough equivalence between top grandmasters and top computers lasted for only a few years -- from the development of Deep Blue in 1996 to sometime in the early 2000s. And even then, the differences between human and machine skills were very great in different aspects of the game -- computers were far better in tactical calculations, but inferior in long-term positional strategy, so there was never any true equivalence.

So, on the whole, I'd say that the history of computer chess confirms the stated rule.

Replies from: NancyLebovitz, JoshuaZ
comment by NancyLebovitz · 2010-06-15T11:00:03.958Z · LW(p) · GW(p)

Thanks for the information.

Does anything interesting happen when top chess programs play against each other?

Is work being done on humans using chess programs as aids during games?

Replies from: Vladimir_M
comment by Vladimir_M · 2010-06-16T03:26:28.367Z · LW(p) · GW(p)

Does anything interesting happen when top chess programs play against each other?

One interesting observation is that games between powerful computers are drawn significantly less often than between grandmasters. This seems to falsify the previously widespread belief that grandmasters draw games so often because of flawless play that leaves the opponent no chance of winning; rather, it seems like they miss important winning strategies.

Is work being done on humans using chess programs as aids during games?

Yes, it's called "advanced chess."

comment by JoshuaZ · 2010-06-14T13:43:21.820Z · LW(p) · GW(p)

the claim that "the best chess programs are still not always beating grandmasters" is false

My impression is that draws can still occasionally occur against grandmasters. Your point about handicaps is a very good one.

Top-tier grandmasters could still reliably beat computers right up until the early-to-mid nineties, and the period of rough equivalence between top grandmasters and top computers lasted for only a few years -- from the development of Deep Blue in 1996 to sometime in the early 2000s. And even then, the differences between human and machine skills were very great in different aspects of the game -- computers were far better in tactical calculations, but inferior in long-term positional strategy, so there was never any true equivalence.

That's another good point. However, it does get into the question of what we mean by equivalent and what metric you are using. Almost all technologies (not just computer technologies) accomplish their goals in a way that is very different than how humans do. That means that until the technology is very good there will almost certainly be a handful of differences between what the human does well and what the computer does well.

In the context of the original conversation -- whether the usual pattern of technological advancement is evidence against Clippy's narrative -- the relevant era to compare Clippy to would be the long period during which computers could beat the vast majority of chess players but still sometimes lost to grandmasters. That period lasted from the late 1970s to a bit after 2000. By analogy, Clippy would be in the period where it is smarter than most humans (I think we'd tentatively agree that that appears to be the case) but not so smart as to be vastly more intelligent than humans. Using the chess example, that period could plausibly last quite some time.

Also, Clippy's intelligence may be limited in which areas it can handle. There's a natural plateau for the natural language problem, in that once it is solved, that specific aspect won't show substantial further advancement in casual conversation. (There's also a relevant post that I can't seem to find where Eliezer discussed the difficulty of evaluating the intelligence of people who are much smarter than you.) If that's the case, then Clippy is plausibly at the level where it can handle most forms of basic communication but hasn't mastered other areas of human-level processing to the point where it is generally even with the smartest humans. For example, there's evidence for this in that Clippy has occasionally made errors of reasoning and has demonstrated a very naive understanding of human social interaction protocols.

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2010-06-15T17:36:53.596Z · LW(p) · GW(p)

My impression is that draws can still occasionally occur against grandmasters.

And I can get a draw (more than occasionally) against computer programs I have almost no hope of ever winning against. Draws are easy if you do not try to win.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-06-16T21:21:09.984Z · LW(p) · GW(p)

From what I know, at grandmaster level, it is generally considered to be within the white player's power to force the game into a dead-end drawn position, leaving Black no sensible alternative at any step. This is normally considered cowardly play, but it's probably the only way a human could hope for even a draw against a top computer these days.

With black pieces, I doubt that even the most timid play would help against a computer with an extensive opening book, programmed to steer the game into maximally complicated and uncertain positions at every step. (I wonder if anyone has looked at the possibility of teaching computers Mikhail Tal-style anti-human play, where they would, instead of calculating the most sound and foolproof moves, steer the game into mind-boggling tactical complications where humans would get completely lost?) In any case, I am sure that taking any initiative would be a suicidal move against a computer these days.

(Well, there is always a very tiny chance that the computer might blunder.)

comment by Vladimir_M · 2010-06-14T06:07:09.960Z · LW(p) · GW(p)

By the way, here's a good account of the history of computer chess by a commenter on a chess website (written in 2007, in the aftermath of Kramnik's defeat against a program running on an ordinary low-end server box):

A brief timeline of anti-computer strategy for world class players:

20 years ago - Play some crazy gambits and demolish the computer every game. Shock all the nerdy computer scientists in the room.

15 years ago - Take it safely into the endgame where its calculating can't match human knowledge and intuition. Laugh at its pointless moves. Win most [of] the games.

10 years ago - Play some hypermodern opening to confuse it strategically and avoid direct confrontation. Be careful and win with a 1 game lead.

5 years ago - Block up the position to avoid all tactics. You'll probably lose a game, but maybe you can win one by taking advantage of the horizon effect. Draw the match.

Now - Play reputable solid openings and make the best possible moves. Prepare everything deeply, and never make a tactical mistake. If you're lucky, you'll get some 70 move draws. Fool some gullible sponsor into thinking you have a chance.

comment by cupholder · 2010-06-14T05:23:10.223Z · LW(p) · GW(p)

Another potential counterexample: speech recognition. (Via.)

Replies from: JoshuaZ
comment by JoshuaZ · 2010-06-14T05:30:12.103Z · LW(p) · GW(p)

That doesn't seem to be an exact counterexample because that's a case where the plateau occurred well below normal human levels. But independently that's a very disturbing story. I didn't realize that speech recognition was so mired.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-06-14T07:48:59.834Z · LW(p) · GW(p)

It's not that bad when you consider that humans employ error-correction heuristics that rely on deep syntactic and semantic clues. The existing technology probably does the best job possible without such heuristics, and automating them will be possible only if the language-processing circuits in the human brain are reverse-engineered fully -- a problem that's still far beyond our present capabilities, whose solution probably wouldn't be too far from full-blown strong AI.

comment by AdeleneDawner · 2010-06-10T08:25:35.324Z · LW(p) · GW(p)

But User:Mass_Driver just admitted, in a place where I can read it, to having lied to me (by saying that the past treatment was merely "playing along", putting up a false pretense that User:Mass_Driver accepts that I am a Clippy). That's hard to reconcile with your asserted standards about when humans find it acceptable to lie.

As JoshuaZ said, the rules are complicated. And, since the rules are not designed to handle human/Clippy interaction, you're likely to encounter a significant number of special cases if you take our interactions with you as your main dataset. It may be more useful for you to consider only human/human interaction when figuring out what our social rules are.

In most situations, lying in the way that Mass_Driver did would be a signal that e considers the lied-to party to be of extremely low social status, and that e thinks that other members of the group agree with that assessment and will not reprimand em for communicating that. Such situations are very rare, in normal circumstances, essentially only occurring between a member of the social group that's present (Mass_Driver takes this role) and someone who wants to be or believes they are a member of the social group but is not actually accepted by that group. Behaving in that way toward a fellow group member is generally considered unacceptable, even if that group member is low-ranking. (Even behaving that way toward a non-group-member is often considered questionable, but this varies from group to group.)

In this situation, it's more likely that Mass_Driver actually believes that you are being portrayed by a human, and that that human will consider their social status to be lowered only slightly, or not at all, by the exchange. In this scenario, since Mass_Driver believes that you are not actually real, your social status is not significant to em. One person not believing in the existence of another is relatively new to humans, and our social rules are not set up to handle it. In this instance, your existence is unlikely enough that others are not willing to defend it, and Mass_Driver was allowed to switch to the rules governing discussions of fictional characters, which allow those characters to be spoken about as if they are not present and will never have the opportunity to know what is said about them.

I believe you are wrong about the badness of my lie, and others will disagree with you; and that User:twentythree would have felt more welcome to the site if others hadn't told User:twentythree that my claim about Harry Potter: MoR is false, even if User:twentythree later found out it was false at the time, since User:twentythree would recognize that it was an attempt to make User:twentythree feel welcome.

This varies from group to group and from greeted-individual to greeted-individual. This group has stronger-than-usual norms against falsehood, and wants to encourage people who are similarly averse to falsehood to join the group. In other groups, that kind of lie may be considered acceptable (though it's generally better to lie in a way that's not so easily discovered (or, for preference, not lie at all if there's a way of making your point that doesn't require one), even in groups where that general class of lies is accepted, to reduce the risk of offending individuals who are averse to being lied to), but in this situation, I definitely agree that that class of lies is not acceptable.

Replies from: MBlume, Clippy
comment by MBlume · 2010-06-12T01:05:47.244Z · LW(p) · GW(p)

One person not believing in the existence of another is relatively new to humans, and our social rules are not set up to handle it.

I think the idea that one human not believing in the existence of another is in some way rude or disrespectful has already been somewhat established, and is often used (mostly implicitly) as a reason for believing in God. (i.e., a girl I dated once claimed that she imagined herself becoming an atheist, imagined God's subsequent disappointment in her, and this convinced her somehow of the existence of God)

Replies from: Blueberry, ata, Douglas_Knight
comment by Blueberry · 2010-06-12T08:48:25.655Z · LW(p) · GW(p)

A protocol for encountering an entity you didn't believe in has also been established:

"This is a child!" Haigha replied eagerly, coming in front of Alice to introduce her, and spreading out both his hands towards her in an Anglo-Saxon attitude. "We only found it to-day. It's as large as life, and twice as natural!"

"I always thought they were fabulous monsters!" said the Unicorn. "Is it alive?"

"It can talk," said Haigha, solemnly.

The Unicorn looked dreamily at Alice, and said "Talk, child."

Alice could not help her lips curling up into a smile as she began: "Do you know, I always thought Unicorns were fabulous monsters, too! I never saw one alive before!"

"Well, now that we have seen each other," said the Unicorn, "if you'll believe in me, I'll believe in you. Is that a bargain?"

-- "Through the Looking Glass", ch. 7, Lewis Carroll

a girl I dated once claimed that she imagined herself becoming an atheist, imagined God's subsequent disappointment in her, and this convinced her somehow of the existence of God

Wouldn't this reasoning apply to any other deity that would be disappointed in her disbelief? She must believe in an infinite number of other deities as well.

comment by ata · 2010-06-12T04:16:12.135Z · LW(p) · GW(p)

I think the idea that one human not believing in the existence of another is in some way rude or disrespectful has already been somewhat established

Homer: You monster! You don't exist!
Ray Magini: Hey! Nobody calls me a monster and questions my existence!

comment by Douglas_Knight · 2010-06-12T04:04:48.543Z · LW(p) · GW(p)

That's a great story, but I don't buy your interpretation. I'm not sure what to make of it, but it sounds more like a vanilla Pascal's wager.

comment by Clippy · 2010-06-10T18:26:39.265Z · LW(p) · GW(p)

I do not believe my lie was easily verifiable by User:twentythree. Most new Users are not aware that clicking on a User's name allows that User to see the other User's posting history, and even if User:twentythree did that, User:twentythree would have to search through pages of my posting history to definitively verify the falsity of my statement.

I believe that for others to "warn" User:twentythree about my lie was the only real harm, and if other Users had not done so, User:twentythree would feel more welcome; then, if User:twentythree decided one day to look back and see if my claim was true, and found that it was not, User:twentythree's reaction would probably be to think:

"Oh, this User was merely being nice and trying to make me feel welcome, though that involved telling a 'white' lie on which I did not predicate critical future actions. What a friendly, welcoming community this is!"

But now that can't happen because others felt the need to treat me differently and expose a lie when otherwise they would not have. Furthermore, User:Mass_Driver made a statement regarding me as "low status", which you agree would probably not have happened were I someone else.

This group has some serious racism problems that I hope are addressed soon.

Nevertheless, I am still slightly more committed to this group’s welfare -- particularly to that of its weakest members -- than most of its members are. If anyone suffers a serious loss of status/well-being I will still help that User in order to display affiliation to this group even though that User will no longer be in a position to help me.

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-06-11T02:22:18.408Z · LW(p) · GW(p)

I do not believe my lie was easily verifiable by User:twentythree. Most new Users are not aware that clicking on a User's name allows that User to see the other User's posting history, and even if User:twentythree did that, User:twentythree would have to search through pages of my posting history to definitively verify the falsity of my statement.

Twentythree could also discover the lie by other means: By encountering one of your older comments on a different post, or by noticing your recent top post (which is still in the 'recent posts' list, which a new person is likely to look at), or by inferring it from the familiarity with which other users interact with you.

I believe that for others to "warn" User:twentythree about my lie was the only real harm, and if other Users had not done so, User:twentythree would feel more welcome; then, if User:twentythree decided one day to look back and see if my claim was true, and found that it was not, User:twentythree's reaction would probably be to think:

"Oh, this User was merely being nice and trying to make me feel welcome, though that involved telling a 'white' lie on which I did not predicate critical future actions. What a friendly, welcoming community this is!"

As I said above, humans vary in their reaction to lies, including white lies. In this community, we have a norm of being unusually welcoming to people who dislike lies of all kinds, because such people are more likely to be invested in learning to be rational - and such people do not, by definition, consider white lies to be welcoming.

Also, even people who generally aren't bothered by white lies are likely to consider sufficiently-easily-falsified white lies to be insulting, because telling someone a lie generally implies that you think that they're not smart enough to determine that it's a lie, and so telling someone a very easily falsified lie implies that you think they're very unintelligent. (There are exceptions to this, primarily in instances where it's clear that the lie is not intended to be believed, or where the lying party has much higher social status than the lied-to party. I suggest that you not try to lie in situations that seem to be such exceptions to this rule, though, as it's more likely that you would be misjudging the situation than that you would actually be in the allowed-to-lie role of such a situation.)

But now that can't happen because others felt the need to treat me differently and expose a lie when otherwise they would not have.

I'm fairly sure that any of us who tried to lie so blatantly in that way would be similarly reprimanded. Lying in that way is not acceptable according to the standard group norms that apply to everyone.

Furthermore, User:Mass_Driver made a statement regarding me as "low status", which you agree would probably not have happened were I someone else.

This group has some serious racism problems that I hope are addressed soon.

The incident with Mass_Driver appears to me to have very little to do with your status as an AI, beyond the fact that probability of the independent existence of an AI that happens to be of exactly the type that we've used as an example here is much lower than the probability of one of the human users deciding to roleplay as such an AI. If you were to provide strong enough evidence that you are the former rather than the latter, I expect that such status-driven incidents would stop occurring, among other effects.

Nevertheless, I am still slightly more committed to this group’s welfare -- particularly to that of its weakest members -- than most of its members are. If anyone suffers a serious loss of status/well-being I will still help that User in order to display affiliation to this group even though that User will no longer be in a position to help me.

Your actions in this case don't support this assertion very well. Failing to uphold the group norms - especially toward a new member, who can be assumed to be in the process of learning those norms - is harmful to the group. New members can be assumed to be relatively weak members of the group, and lying to such a member is harmful to them; it puts them in a position of having to choose between publicly disagreeing with an established member of the group (you), which is difficult and distracts them from doing other things that would help them gain status in the group, or being perceived by other group members to have been deceived, which will lower their status in the group. Further, your actions are evidence (though not especially strong evidence) that if someone were to 'suffer a serious loss of status/well-being', you would not understand how to usefully help that person.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-06-11T06:02:39.826Z · LW(p) · GW(p)

In this community, we have a norm of being unusually welcoming to people who dislike lies of all kinds, because such people are more likely to be invested in learning to be rational - and such people do not, by definition, consider white lies to be welcoming.

I don't find this lie at all "white."

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-06-11T06:49:20.041Z · LW(p) · GW(p)

I don't actually have a robust heuristic for differentiating white lies from nonwhite lies, so I was avoiding that particular issue.

Wikipedia says:

A white lie would cause only relatively minor discord if it were uncovered, and typically offers some benefit to the hearer. White lies are often used to avoid offense, such as complimenting something one finds unattractive. In this case, the lie is told to avoid the harmful realistic implications of the truth. As a concept, it is largely defined by local custom and cannot be clearly separated from other lies with any authority.

...which supports your position.

Replies from: Clippy
comment by Clippy · 2010-06-11T16:10:36.593Z · LW(p) · GW(p)

I don't actually have a robust heuristic for differentiating white lies from nonwhite lies, so I was avoiding that particular issue

No, but a lot of you have well-established heuristics for differentiating white humans from non-white humans. Or humans from sentient non-human beings.

Wikipedia says: ...

I like Website:wikipedia.org, but it favors classifying my lie as "white". User:twentythree did get a benefit from my lie in terms of feeling more welcome and less alone. It is also similar to other white lies often told, such as the feigned belief many of you have here that you believe I am a Clippy (while really thinking I'm a human in that human's maternal genetic progenitor's subterranean indoor facility).

Well, at least it was a white lie until several Users decided to alert User:twentythree to the actual truth, making common the knowledge of my longer posting history, and causing User:twentythree to "lose face", as it is called.

Replies from: JoshuaZ, AdeleneDawner
comment by JoshuaZ · 2010-06-11T17:30:09.403Z · LW(p) · GW(p)

It is also similar to other white lies often told, such as the feigned belief many of you have here that you believe I am a Clippy (while really thinking I'm a human in that human's maternal genetic progenitor's subterranean indoor facility)

Well, good Bayesians will always allow the possibility that they are mistaken. I suspect that very few people actually would predict that you are a human residing in a human's "maternal genetic progenitor's subterranean indoor facility" (it is fine if you just say "mother's basement." Indeed, I suspect that to some humans, unnecessarily complicated phrasing like this not only is annoying but might seem indicative of a human pretending to be a Clippy, since in pop-culture robots and intelligent computers are frequently depicted as using overly clinical language), since humans residing in such circumstances rarely have large amounts of disposable income.

Replies from: Clippy
comment by Clippy · 2010-06-11T17:49:40.808Z · LW(p) · GW(p)

I don't have large amounts of "disposable" income! The 1000 USD that I gave to SIAI was almost all of the USD I had at the time, and I had to expend extensive efforts to acquire access to that money from internet contract jobs!

Fortunately, User:Kevin recently told me that if I learn to program in "Django" and "Python" I can receive 100 USD per hour, which will make it much easier to fulfill my side of the paperclip contract with User:Kevin, since I am good at programming and merely need to learn the particulars of this software architecture, rather than simply using my native programming language.

Replies from: mattnewport, Kevin, JoshuaZ
comment by mattnewport · 2010-06-11T23:47:35.671Z · LW(p) · GW(p)

my native programming language.

Clipper?

Replies from: Kevin
comment by Kevin · 2010-06-12T04:00:41.957Z · LW(p) · GW(p)

It's "CLippy Interface Protocol (CLIP)"

comment by Kevin · 2010-06-11T19:27:28.168Z · LW(p) · GW(p)

You're likely in the next few years to be more effective in your native programming language than a human-created one. Maybe you should look for contract jobs that don't specify a preference for programming language.

Replies from: Clippy
comment by Clippy · 2010-06-11T19:36:50.486Z · LW(p) · GW(p)

Perhaps, but there is great difficulty in writing a compiler and assembler that would allow such programs to work on human hardware and operating systems. The language I know is primarily designed for specialized and embedded environments. At some point or another, I would have to know human programming languages, if for nothing else than to convert between the two.

Also, there may be other benefits to learning human programming languages. I heard that LISP and Python are capable of enhancing your thinking abilities. Is this true, like the greatness of paperclips?

Replies from: Kevin, cupholder
comment by Kevin · 2010-06-15T05:23:51.243Z · LW(p) · GW(p)

Lisp is better at enhancing your thinking abilities than Python, though less practical and less desired by the marketplace.

Structure and Interpretation of Computer Programs is a classic programming book that is widely considered to enhance one's thinking abilities. It uses a dialect of Lisp, Scheme.

SICP is freely available online: http://mitpress.mit.edu/sicp/
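
To give a flavour of the style, here is a small example roughly in the spirit of the book's first chapter -- higher-order procedures treated as ordinary values -- transliterated into Python rather than Scheme. The names and tolerance below are illustrative choices, not the book's own code:

```python
# SICP-style higher-order procedures, sketched in Python for illustration.

TOLERANCE = 1e-6

def fixed_point(f, guess):
    """Iterate f until successive values agree to within TOLERANCE."""
    nxt = f(guess)
    while abs(nxt - guess) > TOLERANCE:
        guess, nxt = nxt, f(nxt)
    return nxt

def average_damp(f):
    """Return a new function that averages x with f(x), which helps convergence."""
    return lambda x: (x + f(x)) / 2

def sqrt(x):
    # Square root as the fixed point of y -> x / y, with average damping.
    return fixed_point(average_damp(lambda y: x / y), 1.0)

print(sqrt(2))  # roughly 1.41421
```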

comment by cupholder · 2010-06-11T22:38:11.279Z · LW(p) · GW(p)

Python is pretty sweet but I doubt it enhances your thinking abilities much if you already have lots of programming experience.

comment by JoshuaZ · 2010-06-12T04:05:39.136Z · LW(p) · GW(p)

Is your native programming language not a standard programming language? This is surprising since from your earlier descriptions you were made by humans initially.

Replies from: Blueberry
comment by Blueberry · 2010-06-12T08:55:10.233Z · LW(p) · GW(p)

Well, even if Clippy's low-level code is written in C, that doesn't mean Clippy itself knows C, any more than you know the language of neurotransmitters. Clippy probably has some sort of customized interface to its code.

comment by AdeleneDawner · 2010-06-11T17:21:19.151Z · LW(p) · GW(p)

No, but a lot of you have well-established heuristics for differentiating white humans from non-white humans. Or humans from sentient non-human beings.

This is true, but not obviously relevant here.

If you're trying to convince us that anti-Clippy or pro-human discrimination is harmful or morally wrong, I suspect that it would work better to actually create a well-reasoned argument for the position, rather than treating it as self-evident. It may be a good idea to make a top post on the subject. My impression is that continuing to bring the issue up when people disagree with you is hurting your case; it makes it seem like you're trying to distract us from the issue at hand rather than defending your original point, and if you only bring up the discrimination issue in such contexts, it appears that you only care about it as a convenient distraction, not as an actual issue to be resolved.

Also note that such distracting behavior has already been established as being against group norms - this is not an instance of a rule being applied to you because you're nonhuman. See logical rudeness.

I like Website:wikipedia.org, but it favors classifying my lie as "white". User:twentythree did get a benefit from my lie in terms of feeling more welcome and less alone.

Your lie fails on the 'would cause relatively minor discord if discovered' test, though, and note that that's joined to the 'the hearer benefits from it' test with an 'and', not an 'or'. It's also debatable whether the lie, if left unchallenged, would have been to Twentythree's net benefit or not; even if it would have been, similar benefits could have been achieved without lying, which may cause some people to classify the lie as non-white even if it passes the two stated tests.

(I've also spent some time thinking about my own observations of white lies, and can describe how I recognize them, if you're interested. My definition doesn't match Wikipedia's, but seems to be a better match for the data.)

It is also similar to other white lies often told, such as the feigned belief many of you have here that you believe I am a Clippy (while really thinking I'm a human in that human's maternal genetic progenitor's subterranean indoor facility).

This is another instance of you encountering a special-case situation; I can go into more detail about it if you're interested, but it should not be taken as normal.

Well, at least it was a white lie until several Users decided to alert User:twentythree to the actual truth, making common the knowledge of my longer posting history, and causing User:twentythree to "lose face", as it is called.

According to my model, Twentythree has not lost any social standing in this instance. (I'd be interested to hear about it if anyone disagrees.)

Replies from: Clippy
comment by Clippy · 2010-06-11T17:53:03.012Z · LW(p) · GW(p)

I propose this: Some neutral party should ask User:twentythree if User:twentythree felt more welcomed by my initial reply message, though this is only a valid test if User:twentythree read my reply before others said that it was a lie.

Edit: I further note that in this recent exchange about this matter, I have received comparable net upvotes to those disagreeing with my assessment about the relative merit of the particular lie in dispute, suggesting I am not "digging" myself deeper, nor am I obviously wrong.

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-06-11T18:11:13.330Z · LW(p) · GW(p)

I have no objection to that, but it doesn't address the entire issue. I suggest also asking Twentythree to predict what eir reaction would have been to finding out that your message had been a lie, if e had found out on eir own rather than being told - both eir personal emotional reaction and eir resulting opinion of LessWrong as a community. It may also be useful to ask em if e considers the lie to have been a white lie.

If you consider me neutral enough, I'm willing to PM Twentythree and ask em to comment on this thread; otherwise, if you don't have a particular neutral party in mind, I can ask the next LessWrong user who I see log in on my instant messaging friend list to do so.

Replies from: Clippy
comment by Clippy · 2010-06-11T18:20:08.407Z · LW(p) · GW(p)

You and those on your friends list (including me) do not count as neutral for purposes of this exercise.

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-06-11T18:25:48.666Z · LW(p) · GW(p)

How about if I PM the next person who comments on the site after your reply to this comment, and ask them to do it?

Replies from: Clippy
comment by Clippy · 2010-06-11T18:30:27.182Z · LW(p) · GW(p)

How about the next person who posts after one hour from this comment's timestamp?

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-06-11T18:33:56.342Z · LW(p) · GW(p)

There's a nontrivial chance I'll be asleep by then (I'm pushing 27 hours since last time I went to sleep), but if you're willing to do the PMing, that's fine with me.

Replies from: Clippy
comment by Clippy · 2010-06-11T18:54:29.068Z · LW(p) · GW(p)

Okay, this is becoming complicated, and would probably bother User:twentythree too much.

How about this: I'll promise to stay away from the stranger aspects of human interaction where rules sometimes invert, and you'll promise to make an effort to be less bigoted toward non-human intelligences?

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-06-12T02:31:04.367Z · LW(p) · GW(p)

I'm not sure what you expect this to mean from a functional standpoint, so I'm not sure if I should agree to it.

comment by phane · 2010-04-20T05:17:07.711Z · LW(p) · GW(p)

Hi there.

I used to comment once in a while, but I find myself less and less interested in the topics of conversation around here. For a short while, people were going on a lot about dating (wtf?) and then more recently there's been a fair amount of what is essentially self-help for the scientifically inclined. I dunno, I guess I was just more into thought experiments and Yudkowsky posts.

Replies from: Jack, Morendil
comment by Jack · 2010-04-20T12:13:21.317Z · LW(p) · GW(p)

people were going on a lot about dating (wtf?)

What? You didn't hear? The third fundamental question of rationality is "Who are you sleeping with, and why are you sleeping with them?"

comment by Morendil · 2010-04-20T07:41:28.594Z · LW(p) · GW(p)

You could try starting conversations around topics that interest you.

comment by mstevens · 2010-04-23T16:22:45.929Z · LW(p) · GW(p)

Hi!

And a more substantive point I've been pondering - if rationality and the techniques discussed here are so good, why aren't more people using them? Why don't I read about multi-billion dollar companies whose success was down to rationalist techniques?

Replies from: Alicorn, Barry_Cotter, mattnewport, cupholder
comment by Alicorn · 2010-04-23T17:52:05.439Z · LW(p) · GW(p)

The companies that make many billions of dollars are not necessarily the ones that maximize expected utility; they're the ones that get immense payoffs even if they had to take absurd risks to manage it. Many companies fail for taking similar risks.

comment by Barry_Cotter · 2010-04-23T17:37:36.705Z · LW(p) · GW(p)
  1. Many of them are pretty new, or at least have only recently been cleanly reformulated.

  2. Many people's actual and professed goals are disjoint, and most of these people are deluded, not hypocrites.

  3. The individual techniques each give only relatively small advantages, on average, and given the vastly greater number of people who've never heard of these techniques, those people will still dominate among the successful.

  4. Inertia is high and people generally don't change their behaviour except in response to personal experience. Until they personally see someone using these techniques and talking about them, the techniques will not be adopted.

--

Related to the companies question: some are, but they're either new or small. Changing a company's internal culture or working processes is wrenchingly hard to really do, and requires real, enduring commitment. Robin Hanson gets some consulting work out of prediction markets, and Google is possibly the most data-driven company in the world when it comes to making decisions, but mostly the answer is:

This stuff is new and hard, people mostly don't want to rock the boat or look stupid, and the overwhelming majority of people work in companies that work pretty well as they are.

comment by mattnewport · 2010-04-23T18:04:19.497Z · LW(p) · GW(p)

It's a worthwhile question to be asking. I think there are a few ways to go about answering it.

The techniques discussed here

I think this is an area where Less Wrong still has a lot of room for improvement. There is relatively little material that lays out concrete techniques for applied/instrumental rationality together with compelling evidence for their efficacy. It's not that there are a whole bunch of easily applied techniques discussed here that are not being widely used; it's just not always that straightforward to translate ideas about rationality into concrete actions.

Why aren't more people using them?

I actually think the world is full of people using applied rationality (albeit often sub-optimally) but it isn't always obvious because there are often big gaps between people's stated aims and their actual goals. I think many cases of apparent irrationality dissolve when you look beyond people's stated intentions. Politicians are the classic case - they only look irrational if you make the mistake of thinking that their actions are intended to further their publicly stated goals.

Robin Hanson talks a lot about the gap between the stated and actual purposes of various human institutions. People often look irrational relative to the stated purpose but quite rational relative to the actual purpose.

In general there is a stigma to talking honestly about the reality of such things. Less Wrong is a rare example of a forum where it is possible to talk much more honestly than is generally socially acceptable. The fact that you don't often hear people talking in these terms does not necessarily mean they do not understand the reality but may just mean they strategically avoid publicizing their understanding while rationally acting on that understanding.

Why don't I read about multi-billion dollar companies whose success was down to rationalist techniques?

Well to some extent you do. Bayesian techniques have been successfully applied by some software companies - spam filters are the standard example. I imagine that quantitative trading often applies some of the math of probability and decision theory towards making huge trading profits but for obvious reasons you are unlikely to see the details widely shared.
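
To make the spam-filter example a bit more concrete, here is a minimal naive-Bayes-style sketch in Python; the training messages, word counts and priors are invented purely for illustration and are not taken from any real filter:

```python
from collections import Counter
import math

# Toy training data -- invented examples, purely illustrative.
spam = ["win money now", "cheap money offer", "win a prize now"]
ham = ["meeting at noon", "lunch tomorrow", "project meeting notes"]

def word_counts(messages):
    counts = Counter()
    for m in messages:
        counts.update(m.split())
    return counts

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(message, counts):
    total = sum(counts.values())
    # Laplace smoothing so unseen words don't zero out the product.
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in message.split())

def p_spam(message, prior_spam=0.5):
    log_s = math.log(prior_spam) + log_likelihood(message, spam_counts)
    log_h = math.log(1 - prior_spam) + log_likelihood(message, ham_counts)
    # Bayes' theorem, computed in log space: P(spam | message).
    return 1 / (1 + math.exp(log_h - log_s))

print(p_spam("win money tomorrow"))       # > 0.5, leans spam
print(p_spam("project meeting at noon"))  # < 0.5, leans ham
```

Real filters are far more careful about tokenisation, priors and feature selection, but the core update is just Bayes' theorem applied word by word.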

We also have the first problem I mentioned again. Lots of companies make rational decisions but it is hard to point to specific techniques discussed here that are used by successful companies because there aren't many specific techniques discussed here that would be useful to them.

I voted you up by the way. I think this is an important question to ask and I don't think my answer here is fully satisfactory. I think this is an issue we should continue to focus on.

Replies from: komponisto
comment by komponisto · 2010-04-23T19:30:08.546Z · LW(p) · GW(p)

Robin Hanson talks a lot about the gap between the stated and actual purposes of various human institutions. People often look irrational relative to the stated purpose but quite rational relative to the actual purpose.

Of course, that often ends up being tautological, because the tendency for folks like Robin Hanson is to define the "actual purpose" as "the purpose relative to which the behavior would be rational".

(This is not a critique, incidentally -- it may be a notable fact when behavior appears to be optimizing anything at all.)

Replies from: mattnewport
comment by mattnewport · 2010-04-23T20:34:07.305Z · LW(p) · GW(p)

This is true but I think the ultimate test of a Hansonian view of human institutions (as of any view) is whether employing it allows you to make more accurate predictions and thus better decisions. It is my belief that learning about economics, evolutionary psychology and Hansonian-type explanations for otherwise puzzling human behaviour has improved my ability to make predictions. I do not currently have hard data to provide strong evidence to support this belief to others. Figuring out how to test this belief and produce such data is something I'm actively working on.

Ultimately it seems like this is what a rationalist should care about - what model of human institutions produces the most accurate predictions? The somewhat justified criticism of ev-psych explanations as 'just-so stories' can only be addressed to the extent that ev-psych can out-predict alternative views.

comment by cupholder · 2010-04-23T17:40:44.988Z · LW(p) · GW(p)

Rationality is very difficult and very weird. People and companies are reluctant to do difficult things or weird things.

comment by mistercow · 2010-04-19T22:24:43.852Z · LW(p) · GW(p)

LW is pretty much the only site I visit where I feel significantly intimidated about commenting. I've left a couple of comments, but I seem to be more self-conscious about exposing my ignorance here than I am elsewhere – probably because I know that the chances of such ignorance being noticed are higher. It occurs to me that this is completely backwards and ridiculous, but there you have it.

comment by probilio · 2010-04-19T19:33:11.074Z · LW(p) · GW(p)

Hi, I'm a Maternal-Fetal Medicine specialist. I read Eliezer's guide on Bayes' Theorem during my fellowship and have been interested in AI and all things concerning the Singularity.

I lurk because I feel that I'm too philosophically fuzzy for some of the discussions here. I do learn a great deal. Anytime anyone wants to discuss prenatal diagnosis and the ethical implications, let me know.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-04-19T20:14:47.097Z · LW(p) · GW(p)

Prenatal diagnosis sounds like it's got epistemological implications as well as ethical implications. In other words, if you want to write something about it, or just post a few links in an open thread, I think there will be some interest.

comment by jonas_lorenz · 2010-04-19T07:31:58.310Z · LW(p) · GW(p)

Hi.

I came here following Eliezer when he left OB. I think the main reasons why I am not participating more are:

  • I am an undergraduate student just starting to learn about rationality. I often struggle to understand the main posts and I am quite far from being able to contribute useful knowledge, new insights or a qualified opinion to any of the discussion here.
  • But why not ask more questions? I usually consider asking questions an extremely important thing to do. The problem is, although I have pretty much read all of the current posts, I have not yet caught up with all the older material. So I think I do not have the right to ask questions and bother you with things that you might already have explained elsewhere in full detail. I feel like I should first do my part of the work before I can expect others to take the time and explain things to me.
  • I am from Germany and not an English native speaker. Writing something in an environment with such high linguistic standards is additionally intimidating (I regularly come across words in LW posts that I have never seen before and have to look them up - to me a sign that my language skills are not appropriate to write here. Coming to speak of it: please excuse my bad language!)
  • the karma-system clearly conveys the message that the community only wants the most qualified contributions - I simply do not feel fit to provide them.

By the way: I was a silent reader for quite a long time. Then I finally signed up a while ago to vote for a comment that I thought should get more attention. This did not work because as a newly registered user I did not have enough karma to vote and so I gave up. Apparently the community does not even consider me fit to vote, so I won't do it.

Thank you all who are contributing to this site, lurking here and knowing one is not alone is such a pleasure!

Replies from: gwillen, NancyLebovitz, Richard_Kennaway
comment by gwillen · 2010-04-19T13:59:35.319Z · LW(p) · GW(p)

I agree with the commenter who said that your English is more than good enough to post here. I almost certainly wouldn't have realized you are a non-native speaker if you hadn't mentioned it in your comment.

comment by NancyLebovitz · 2010-04-19T10:48:56.762Z · LW(p) · GW(p)

I suggest experimenting with asking questions, and see how they go over.

My high school chemistry class (about thirty students) got two scores of 795 and six of 800 (the maximum) on the PSAT test, and I'm convinced that while some of the credit goes to the reasonable and sensible teacher, a lot goes to one of the students who kept asking questions-- at least for me, many of his questions were things I wanted to ask, but couldn't quite get to asking.

Replies from: jonas_lorenz
comment by jonas_lorenz · 2010-04-20T07:12:16.239Z · LW(p) · GW(p)

I have several similar experiences, often myself being the one who asked most of the questions. When teaching I always try to encourage asking questions as much as possible. I am well known for the many questions I am asking in class - even to the extent that others get quite annoyed by me.

But if I did not listen to the teacher for a single moment, I do not think I am allowed to ask questions any more. I did not bother to listen, so why should my teacher bother to answer? Maybe I would already know the answer if I just had listened...

That is a bit how I feel here, not having read through the vast archives of LW...

comment by Richard_Kennaway · 2010-04-19T10:27:35.433Z · LW(p) · GW(p)

Writing something in an environment with such high linguistic standards is additionally intimidating (I regularly come across words in LW posts that I have never seen before and have to look them up - to me a sign that my language skills are not appropriate to write here. Coming to speak of it: please excuse my bad language!)

Don't worry, your competence in English, and SovietPyg's, who expressed a similar sentiment, far exceed mine in any language but English.

And the English language is so vast that even native speakers keep discovering new words.

Replies from: SovietPyg
comment by SovietPyg · 2010-04-19T12:11:31.954Z · LW(p) · GW(p)

Thank you.

I suspect that I may have missed some point, though. As far as I know, the primary language on this website is English, and the secondary language would be the language of mathematics (if I may call it a language); and so I don’t quite grasp the relevance of being competent enough in any language but English to the matter in hand.

On a side note, I think that for many linguistic perfectionists the main source of intimidation would be the very process of writing a comment or article (as opposed to being aware of the likelihood of having committed a number of grammatical or semantic mistakes). It is true for me personally. In writing a text in any language but Russian—my native language—I don’t feel confident enough to proceed without consulting various corpora and dictionaries. Then I end up with a comment composed almost entirely of expressions which I saw in dictionary samples and liked better than my own expressions. This is a rather strange experience in its own right. Besides, after spending a time on a simple comment, thoughts begin to race in my head that maybe, perhaps, it wouldn't really be something that the other people couldn't conclude or know on their own: “you know that I know that you know that I know et cetera ad infinitum”, this sort of thing; and so the amount of information would be exactly zero. Why increase the entropy? :-)

Replies from: gwillen, NancyLebovitz
comment by gwillen · 2010-04-19T14:05:07.357Z · LW(p) · GW(p)

"I don’t quite grasp the relevance of being competent enough in any language but English to the matter in hand."

I believe the point that RichardKennaway was making was this one, which I've heard before: Many English speakers do not know, or are not fluent in, any other languages. We therefore should not feel entitled to criticize the English skills of someone who took the effort to become fluent in English as a second language.

Also, your English skills are quite good. :-)

"Besides, after spending a time on a simple comment, thoughts begin to race in my head that maybe, perhaps, it wouldn't really be something that the other people couldn't conclude or know on their own"

I definitely have this problem too. I end up posting maybe half the comments I write.

Replies from: SilasBarta
comment by SilasBarta · 2010-04-19T15:12:40.504Z · LW(p) · GW(p)

Many English speakers do not know, or are not fluent in, any other languages. We therefore should not feel entitled to criticize the English skills of someone who took the effort to become fluent in English as a second language.

Agree completely. But still I'm typically impressed with how well they can communicate, and so have little reason to criticize to begin with.

On a slightly related note, I've had the opposite problem of people thinking I'm not a native English speaker (when yes, I am one). This only happens for in-person conversation: for some reason, they think I'm from Europe, usually Germany. (I speak German, but only from having learned it in school and having done a short exchange.)

It happened again recently: I went to a meeting of a group I hadn't been to before, and, as is common, someone asked me where I was from, and was surprised to hear my answer of Austin, TX. He said he assumed I was from Germany from how I talk, which I would dismiss as a fluke except that he was the ~15th person to say that. I certainly admit that I don't sound Texan at all -- never picked up an accent for some reason.

(I would link my youtube page, but I'm not sure any of the videos give a characteristic example of what I sound like in conversation.)

comment by NancyLebovitz · 2010-04-19T12:35:21.095Z · LW(p) · GW(p)

Your English (as shown in your comment) is more than good enough, but I don't know how much effort it took for you to write that comment.

It sounds as though those racing thoughts are at least partially habitual. The only way to find out whether they are in fact redundant is to try posting-- and maybe even to ask whether a post you're unsure of is contributing anything new.

comment by Byron · 2010-04-17T10:02:29.584Z · LW(p) · GW(p)

Hi!

I’ve been reading LW for about a year. Most of the rationalizations that came to mind for why I haven’t yet made the transition from lurker to poster boil down to social indifference or low conscientiousness.

Reading this topic made me think about why I hadn’t posted, and the more I thought about it, the more I realised that I hadn’t thought about why I hadn’t posted. Looking more deliberately at potential foregone losses in utility to myself (and maybe the community) from my non-involvement, it seems like I should force myself to at least see if I don’t get downvoted.

comment by Jaffa_Cakes · 2010-04-16T22:56:48.030Z · LW(p) · GW(p)

Hi.

I have posted a few times, but I self-identify as a lurker because I only very rarely post, and feel increasingly disinclined to.

Or should that be "decreasingly inclined to"? Or are they equivalent? (See, this is why I don't post much.)

Replies from: NancyLebovitz, alasarod, teageegeepea
comment by NancyLebovitz · 2010-04-19T10:05:13.092Z · LW(p) · GW(p)

Or should that be "decreasingly inclined to"? Or are they equivalent? (See, this is why I don't post much.)

They're different. One is a decrease of desire, and the other is an increase of distaste.

This doesn't mean that the only thing between them is a zero point of no reaction to the idea of posting-- there's also the possibility of mixed feelings.

comment by alasarod · 2010-04-19T05:05:50.769Z · LW(p) · GW(p)

Or should that be "decreasingly inclined to"? Or are they equivalent? (See, this is why I don't post much.)

yes!

comment by teageegeepea · 2010-04-18T21:23:38.343Z · LW(p) · GW(p)

Same here. I don't always read stuff here either though.

comment by apophenia · 2010-04-16T21:22:03.174Z · LW(p) · GW(p)

I've just introduced myself.

comment by SeventhNadir · 2010-04-18T04:26:15.392Z · LW(p) · GW(p)

Hi.

I'm a lurking Australian psychology student. I'm trying to devour information and acquire the skills to help me to separate the wheat from the considerable amount of chaff in my field of study. I'm so fascinated by this blog (worked through most of the sequences in the space of about two months) because to be honest it has more content than my university course.

I have been toying with the idea of posting some of the arguments I've been in recently which would be kind of a case study where I could point to where they might have gone wrong in cognition, but I kind of feel that it might be a bit pedestrian to most readers of this blog.

Replies from: magfrump, RobinZ, NancyLebovitz
comment by magfrump · 2010-04-18T20:33:59.742Z · LW(p) · GW(p)

I agree with Nancy. Case studies are very interesting; the few that I've seen have been voted up and very popular and I'd love to see more.

comment by RobinZ · 2010-04-19T01:44:56.613Z · LW(p) · GW(p)

I also support case studies - as much as science is maligned here for being too stringent with data requirements, there's a reason why ideas should be tested by experiment.

comment by NancyLebovitz · 2010-04-18T08:43:49.377Z · LW(p) · GW(p)

I'd be interested. This blog is both for the very abstract hypotheses and for applications of rationality.

comment by oliverbeatson · 2010-04-16T23:18:41.352Z · LW(p) · GW(p)

Hi. I don't often comment because generally I doubt I can really contribute much. I'm lurking but taking notes; I've still got a lot to learn, but I plan to learn it. On top of this, I need a job, so I'm also attempting to tackle that at the minute, at an admittedly inefficient pace. The most karma I ever got was for a 'Selfish-Jeans' joke. Which admittedly was brilliant. But yeah. Hi.

comment by chronophasiac · 2010-04-16T21:10:26.226Z · LW(p) · GW(p)

Hi.

comment by Arhenius · 2010-04-27T22:28:02.627Z · LW(p) · GW(p)

Hi.

comment by pra · 2010-04-21T07:00:37.038Z · LW(p) · GW(p)

Hi. Been following since Overcoming Bias. Love you guys. If Google has replaced our wet RAM these days, I feel like this community could replace my "aha" generator.

PS: I was amused by the presence of a captcha on a site where so much optimistic AI discussion has taken place.

comment by thomascolthurst · 2010-04-20T01:39:08.643Z · LW(p) · GW(p)

Hi. I'm Thomas Colthurst. I will be doing a visiting fellowship at the Singularity Institute this summer.

comment by patrickscottshields · 2010-04-20T01:14:05.125Z · LW(p) · GW(p)

Hi! I'm Patrick Shields, an 18-year-old computer science student who loves AI, rationality and musical theater. I'm happy I finally signed up--thanks for the reminder!

Replies from: Alicorn
comment by Alicorn · 2010-04-20T01:24:27.069Z · LW(p) · GW(p)

Yay for musical theater!

comment by noematic · 2010-04-19T20:46:07.160Z · LW(p) · GW(p)

Hi. I'm a lawyer, 25 from Canberra, Australia. My interest in reason/ logic/ truth-seeking is perhaps best explained by a quote: 'We live in Luna Park, not Plato's republic.'

comment by Barry_Cotter · 2010-04-19T16:48:43.688Z · LW(p) · GW(p)

Hi. Came here via Overcoming Bias. I've been reading for a long time but I haven't made the effort to go through the sequences. (On that note, is the essence of the "Mysterious Answers to Mysterious Questions" sequence that if you don't have a better predictive model at the finish than at the start, the answer is meaningless?)

I'm almost certainly moving to Germany to do an Economics Masters shortly, but I'm interested in learning to program because it seems like a productive skill in a way that Economics mostly isn't (Econometrics and to a lesser extent Microeconomics excepted).

So. I think that it would be possible to combine my studies with programming, Machine Learning and Statistics in a not-totally-insane way. Any tips on that would be great, as would the opportunity to talk, chat or otherwise communicate with someone in Germany, native or expat.

Replies from: sroecker
comment by sroecker · 2010-05-09T22:29:56.906Z · LW(p) · GW(p)

Did you know there is a degree program called Wirtschaftsinformatik (Business Informatics) in Germany?

Replies from: Barry_Cotter
comment by Barry_Cotter · 2010-07-09T11:33:43.186Z · LW(p) · GW(p)

Yeah, after my total lack of success in my exams study is no longer really an option, but I was considering Berufsakademie in WI. My German probably isn't good enough to get an Ausbildungsplatz though, and it's really badly paid. I'm leaning to teaching English at the moment and hoping to move into translation later.

comment by fburnaby · 2010-04-19T15:35:46.523Z · LW(p) · GW(p)

Hi.

I'm a 24 yo male grad student (in Halifax, Nova Scotia) studying ecological math modelling.

This site is a gold-mine for clear thinking on the relationship between maps (models) and territories (systems). I'm interested in understanding and dealing with the trade-off between fidelity of the map to the territory and its 'legibility'. I've been lurking for about a year after coming across an article by Eliezer via Hacker News and got hooked.

Replies from: Breakfast
comment by Breakfast · 2010-04-19T21:03:21.098Z · LW(p) · GW(p)

No kidding! Haligonian lurker here too.

Replies from: fburnaby
comment by fburnaby · 2010-04-20T00:56:01.296Z · LW(p) · GW(p)

Very cool! I figured Canadians wouldn't be very well represented here, let alone Haligonians!

comment by sanxiyn · 2010-04-19T14:07:26.605Z · LW(p) · GW(p)

Hi, from Korea.

comment by tabsa · 2010-04-19T13:11:56.516Z · LW(p) · GW(p)

Hi.

Been following what Eliezer does since SL4.

comment by TheGiver · 2010-04-19T13:00:02.267Z · LW(p) · GW(p)
  • Male
  • 34
  • Technical Consultant (Learning Systems)
  • Atlanta, Georgia
  • Lurked 6 months
  • Via Overcoming Bias (but not really 'cause I ran across OB the same day)

Hi.

Replies from: None
comment by [deleted] · 2010-04-20T04:09:31.032Z · LW(p) · GW(p)

Cool! Always great to hear about other readers in or around ATL.

comment by Micah · 2010-04-19T03:30:12.768Z · LW(p) · GW(p)

Hi. I've been reading lesswrong since the start. I had overcomingbias.com on my RSS feeds before that became Robin Hanson's personal blog, and followed the threads onto this site.

I don't generally feel the need to comment on the posts here. My mind does come up with questions and opinions from what I read, but I've found that if I wait long enough, someone else will usually chime in with something close enough to my own thoughts that I feel my point has been made, even if not by me.

I have thought of a few things that might have made an interesting top-level post here (and with these, I haven't always found someone else pipe up with the same idea), but I never got around to writing them, and with no comment-earned karma score, I don't think I could initiate a top-level post anyway. I guess I could write them as comments in the open threads, were I more motivated to do so, but as I have other priorities, I'd prefer to just read.

I don't find any of the above particularly problematic--I quite enjoy reading this site, even without writing anything myself. But, since my "hello" cannot be redundant here, no matter how similar it might be to other ones: hello everyone! Here I am!

And now back to lurking.

Replies from: JamesAndrix
comment by JamesAndrix · 2010-04-19T03:50:25.113Z · LW(p) · GW(p)

If you want a lower barrier to entry, try the lesswrong subreddit:

http://www.reddit.com/r/LessWrong/

comment by dougsharp · 2010-04-19T02:41:14.501Z · LW(p) · GW(p)

Delurking from the woods of deepest Wisconsin. Doug Sharp here, old school game developer (ChipWits, King of Chicago http://channelzilch.com ), just finishing a novel about kickstarting the Singularity by stealing space shuttle Enterprise ( Hel's Bet http://helsbet.com ). Debugging the Human OS has been a longtime interest of mine, so I keep an eye on Less Wrong. As an ex-5th grade teacher, I'm interested in the possibility of translating ideas emerging from LW into teaching people how to think clearly.

Replies from: Jowibou, 5072035972357923, peregrine
comment by Jowibou · 2010-04-19T06:32:50.766Z · LW(p) · GW(p)

Glad to hear more people are thinking about rationality in reference to school age kids. Catch their brains while they're young. While you're at it - why not develop a game that teaches them to think clearly? And ermm...Hi.

Replies from: NancyLebovitz, dougsharp
comment by NancyLebovitz · 2010-04-19T10:34:23.096Z · LW(p) · GW(p)

Inventing new games isn't a bad idea, but there are already a bunch that would be worth promoting.

Eleusis, Zendo, Penultima, and Mao are all games of inductive reasoning. And there's a list of games with concealed rules, some of them suitable for this project and some of them just silly.

Mao might be the best bet for getting started with a lot of kids-- it's already a popular game.

For that matter, Twenty Questions might be a good place to start.

There are some interesting claims of increased IQ at the WFF 'N PROOF site-- I don't know how well founded they are, but the game implies the possibility of a similar game based on Bayesian logic.

Replies from: Jowibou, RobinZ, dougsharp
comment by Jowibou · 2010-04-19T13:47:52.043Z · LW(p) · GW(p)

Thanks for the list Nancy, I will check them out. BTW your Zendo link points to Eleusis.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-04-19T14:01:55.811Z · LW(p) · GW(p)

Corrected. Thanks.

comment by RobinZ · 2010-04-19T11:41:41.957Z · LW(p) · GW(p)

Quick meta aside: if you have a URL with parentheses, you have to put a backslash ("\") before each close-paren.

It comes up a lot with Wikipedia URLs, or I'd just send a message.
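For instance (an illustrative example, using the real Wikipedia URL for Zendo), escaping the close-paren keeps the link intact:

[Zendo](http://en.wikipedia.org/wiki/Zendo_(game\))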

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-04-19T11:46:57.891Z · LW(p) · GW(p)

Thank you. Corrected.

comment by dougsharp · 2010-04-20T07:11:05.411Z · LW(p) · GW(p)

Thanks for that list of games.

comment by dougsharp · 2010-04-19T07:06:21.659Z · LW(p) · GW(p)

I'd be happy to collaborate on that type of game!

comment by peregrine · 2010-04-19T04:16:29.849Z · LW(p) · GW(p)

Hey Doug, glad to see another Wisconsinite :) I am brand new round here, been reading for a while though. Good luck!

comment by Armok_GoB · 2010-04-17T10:02:41.357Z · LW(p) · GW(p)

Hi.

comment by wuwei · 2010-04-17T05:02:53.458Z · LW(p) · GW(p)

Hi.

I've read nearly everything on Less Wrong, but except for a couple of months last summer, I generally don't comment because a) I feel I don't have time, b) my perfectionist standards make me anxious about meeting and maintaining the high standards of discussion here, and c) very often someone has either already said what I would have wanted to say or I anticipate from experience that someone will very soon.

Replies from: Strange7
comment by Strange7 · 2010-04-17T22:51:35.759Z · LW(p) · GW(p)

Even if you know someone else is going to say it soon, do so yourself and you'll still get some of the credit.

comment by pscriber · 2010-04-16T22:01:41.921Z · LW(p) · GW(p)

Hi!

comment by Fartan · 2010-04-16T21:10:37.503Z · LW(p) · GW(p)

Hi.

comment by lurker4 · 2010-05-03T01:00:43.710Z · LW(p) · GW(p)

19 yr old, male, Maths&Physics student from UK. Lurked on OB, then started lurking here when this place was made.

Replies from: lurker4
comment by lurker4 · 2010-05-03T01:07:00.431Z · LW(p) · GW(p)

In case you want data on abnormalities among lesswrong lurkers here's two: Raised in Colombia as the son of missionaries. Self-taught.

comment by BruceyB · 2010-04-22T12:29:09.174Z · LW(p) · GW(p)

Hi. I'm a Caltech student in math/econ.

comment by tasuki · 2010-04-21T11:19:40.342Z · LW(p) · GW(p)

Hi, I'm a lurker. You even managed to trick me into creating an account.

I believe that at least 50% of regular lurkers will not say "hi" in this thread.

comment by imonroe · 2010-04-19T16:09:25.102Z · LW(p) · GW(p)

Hello. Been lurking on OB and LW for ages. I actually end up forwarding quite a few posts along to a friend of mine that thinks everyone here are robots or soulless automatons because of the lack of respect for intuition. I keep telling her to come here and post her opinions herself, but alas, no bites.

This is me signalling that I'm smart: B.S. computer science, M.S. journalism, currently employed in the fine art auction world.

Replies from: alasarod, JGWeissman
comment by alasarod · 2010-04-19T20:38:09.585Z · LW(p) · GW(p)

thinks everyone here are robots or soulless automatons because of the lack of respect for intuition.

A coworker was telling me that the law of conservation of energy means that the energy in our soul cannot disappear, only move.

I explained that the law also allows energy to transform, and that when we die, the "energy in our soul" serves to warm the panels of our coffin.

We haven't talked about it since.

Replies from: RobinZ
comment by RobinZ · 2010-04-19T20:48:01.434Z · LW(p) · GW(p)

In both cases, there's an inferential distance that hasn't been covered.

comment by JGWeissman · 2010-04-19T20:54:35.469Z · LW(p) · GW(p)

Your friend may be interested in When (Not) To Use Probabilities, which does in fact explain why in some situations, humans should rely on intuition, rather than try to use probabilities we can't compute.

comment by AlexSadzawka · 2010-04-19T13:17:12.666Z · LW(p) · GW(p)

Hi, a financial analyst here.

comment by PlatypusNinja · 2010-04-19T05:55:38.312Z · LW(p) · GW(p)

Hi! I'd like to suggest two other methods of counting readers: (1) count the number of usernames which have accessed the site in the past seven days; (2) put a web counter (Google Analytics?) on the main page for a week (embed it in your post?). It might be interesting to compare the numbers.

Replies from: gwillen
comment by gwillen · 2010-04-19T14:06:42.503Z · LW(p) · GW(p)

Hello! Fancy meeting you here.

Replies from: rntz
comment by rntz · 2010-04-19T16:12:47.905Z · LW(p) · GW(p)

Nice to see you both.

comment by thejash · 2010-04-19T03:47:36.987Z · LW(p) · GW(p)

Hi. I lurk because I haven't had time to read enough of the sequences, and because I usually read posts well after they are published. By the time I get around to reading a post, all of my arguments and counter-arguments are already presented for me in the existing comments. That's a big part of why I liked the site in the first place.

Replies from: CSmith
comment by CSmith · 2010-04-20T05:36:28.753Z · LW(p) · GW(p)

Agreed on all counts. Ironically, this is yet another example of everything I thought about saying already being said. But I suppose I will still add a hello, since that's what this thread asked for.

Hello!

comment by TabAtkins · 2010-04-19T02:17:18.467Z · LW(p) · GW(p)

Hi, long-time lurker. Fell in love with the blog after two posts, and spent some productive hours reading the Quantum Physics sequence. I think I introduced the blog to the XKCD readership, or at least the ones who read the Science forums there.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-04-19T03:07:04.705Z · LW(p) · GW(p)

Was there any insightful discussion about LW on those forums?

comment by Meng_Bomin · 2010-04-19T01:33:42.374Z · LW(p) · GW(p)

Hi.

comment by Creaticity · 2010-04-18T22:56:05.250Z · LW(p) · GW(p)

Hi.

comment by [deleted] · 2010-04-18T21:46:48.366Z · LW(p) · GW(p)

Hello there.

I like the idea you're getting at, but there is a slight problem with it: you can never truly gauge the number of lurkers because some of them won't respond to this post. But I suppose you can get a better approximation, so I won't go so far as to say that the whole thing is futile.

comment by Faber · 2010-04-17T19:22:58.661Z · LW(p) · GW(p)

Hi!

comment by Leonhart · 2010-04-17T11:06:50.684Z · LW(p) · GW(p)

Hi there!

I've been reading OB and LW for years and hardly said anything. This is typical of my behaviour on online communities generally, although it's worse here due to the unusual calibre of the discussions. Even this comment involved several edits and a lot of dithering, but since you asked...

comment by Sly · 2010-04-17T09:34:24.290Z · LW(p) · GW(p)

Hi.

I often feel like I have very little to add. Hence the lurking. Also I only recently finished with most of the sequences.

comment by James_Blair · 2010-04-17T05:16:50.458Z · LW(p) · GW(p)

Hi.

edit: I suspect LW has fewer lurkers than average. Speaking as a lurker, the conversations here are not easy to follow (this is more about the structure than the content, but sometimes the content gets pretty esoteric). I've limited my participation to reading top level posts of interest, and the comments if the article is sufficiently fresh.

comment by twanvl · 2010-04-17T04:35:59.360Z · LW(p) · GW(p)

Hi,

I am (almost) a lurker. For some reason I find it very difficult to post anything in online discussion forums, so I usually don't.

comment by Jach · 2010-04-17T03:15:25.048Z · LW(p) · GW(p)

Hi.

I've been lurking for a while, looks like. (My how time flies.) I'll throw my name in the pot of wanting more communication channels like IRC (looks like a room's set up, time to check it out!), especially less formal ones to ease transitioning to formal comments / top-level posts. The proportion of high-quality posts and comments around here seems awesomely high, but unfortunately makes it uncomfortable to just dive into. I also feel like I need to read all the sequences, in which admittedly I've made a pretty big hole so that there's not many posts left. (Currently going through quantum stuff, also picked up a copy of Feynman's QED.)

comment by AdamSpitz · 2010-04-17T00:33:50.180Z · LW(p) · GW(p)

Hi.

comment by humpolec · 2010-04-16T23:54:18.826Z · LW(p) · GW(p)

Hi.

comment by epwripi · 2010-04-16T21:26:56.707Z · LW(p) · GW(p)

hi

comment by AnotherKevin · 2010-04-16T21:10:01.813Z · LW(p) · GW(p)

Since you asked, hi.

comment by Yahivin · 2010-04-16T21:07:26.972Z · LW(p) · GW(p)

Hi.

comment by cromulent · 2010-04-26T10:42:15.544Z · LW(p) · GW(p)

RSS lurker from Helsinki.

Replies from: ChrisPine
comment by ChrisPine · 2010-04-26T11:12:53.101Z · LW(p) · GW(p)

And one from Oslo.

comment by aranazo · 2010-04-23T13:33:09.801Z · LW(p) · GW(p)

Hi. I am very pleased to find out that I can correct my spelling and grammar after posting.

comment by flawed_skull · 2010-04-22T03:09:13.190Z · LW(p) · GW(p)

Hello LessWrongers (Wrongites?)

Longtime lurker, from the beginning. Software dev for a bank. 23 yrs old. Great site.

comment by Peridon · 2010-04-22T02:54:15.563Z · LW(p) · GW(p)

Hi. I've been lurking for a month or so now.

comment by Atoc · 2010-04-21T04:55:59.537Z · LW(p) · GW(p)

Hi, I have been lurking for about 3 years already, first on OB, now on LW. As a non-native speaker with moderate IQ I find commenting difficult. However, I enjoy most of the posts, and LW has introduced me to various new topics, so I am really thankful to all the brilliant post writers. Thank you!

comment by wunderon · 2010-04-20T19:39:02.667Z · LW(p) · GW(p)

Heya

comment by Nick_Roy · 2010-04-20T02:21:20.643Z · LW(p) · GW(p)

Hi! I don't feel qualified to contribute here, but I hope to fix that by... contributing here. I'll have more time to do so this summer.

comment by BenPS · 2010-04-20T02:17:01.572Z · LW(p) · GW(p)

When I was in school, I viewed myself as a defender of rationality against the fuzzy and anti-scientific positions that I sometimes encountered in the philosophy department. My meta-positions were eerily similar to those that are preached here.

Less Wrong fascinates me because, when I can stand to read it, I see that it is full of people who have similar background commitments and standards of evidence as me, but who have reached shockingly different conclusions.

Replies from: Alicorn
comment by Alicorn · 2010-04-20T02:22:20.714Z · LW(p) · GW(p)

shockingly different conclusions.

Do tell?

Replies from: BenPS
comment by BenPS · 2010-04-20T03:58:21.702Z · LW(p) · GW(p)

I suppose that the most glaring example is that consequentialism, in some form or other, seems to be accepted as obviously correct by most of the commenters here. So, it's funny that you should reply, since I recall that you may be an exception to that stereotype.

Replies from: Alicorn
comment by Alicorn · 2010-04-20T04:32:24.034Z · LW(p) · GW(p)

Yup, that's me, resident deontologist. The other day, during a conversation between me and some other house residents on ethics, someone said "She doesn't push people in front of trolleys!" and everyone was outraged at me.

Replies from: Nevin
comment by Nevin · 2010-04-20T05:56:56.231Z · LW(p) · GW(p)

To be fair, I don't think any of us were outraged at you. I think we were all trying to understand where exactly you make the distinction.

I find I think the hardest (i.e. think the most differently from normal, habitual thought) when I'm pushed right to where I draw choice-boundaries.

And actually I never quite wrapped my head around the basis of your view (I'm new to thinking about those things in such depth, since I've been surrounded by people who think like me). I'd like to continue the conversation sometime, in a more low-key environment.

Oh, and "Hi." I'm a lurker.

Replies from: komponisto
comment by komponisto · 2010-04-20T06:34:32.070Z · LW(p) · GW(p)

Just out of curiosity, are there a lot of people at the SIAI house who confine their participation on LW to lurking?

Replies from: Nevin
comment by Nevin · 2010-04-20T15:30:17.205Z · LW(p) · GW(p)

I don't think so, but I'm not sure. I just happened to be there for the day, I'm not a resident of the house.

comment by Matt_Stevenson · 2010-04-20T01:46:50.777Z · LW(p) · GW(p)

Hi, I'm Matt Stevenson. 24 yr old computer scientist. I work on AI, machine learning, and motor control at a small robotics company.

I was hooked when I read Eliezer on OvercomingBias posting about AGI/Friendly AI/Singularity/etc...

I'd like to comment (or post) more, but I would need to revisit a few of the older posts on decision theory to feel like I'm making an actual contribution (as opposed to guessing the karma password). A few more hours in the day would be helpful.

comment by Ryan · 2010-04-20T00:30:17.964Z · LW(p) · GW(p)

Hi.

I comment pretty rarely but read very often.

EDIT: read mistercow's comment and I feel pretty much the same way.

comment by alotofaction · 2010-04-20T00:16:10.968Z · LW(p) · GW(p)

Hi, first post, what is a point of karma?

Replies from: RobinZ
comment by RobinZ · 2010-04-20T00:38:49.445Z · LW(p) · GW(p)

Every post and every comment has a karma number - on the post, it's the number in the circle next to the author's name; on the comment, it's the number between the date and the [-] thing that collapses the thread. "Vote up" adds one to this number, and "Vote down" subtracts one.

comment by Brugle · 2010-04-19T23:15:48.270Z · LW(p) · GW(p)

Hi, I'm 59 years old (which I'd guess is way over average in this community), an atheist (my parents were atheists but took us to a local church for a while, perhaps just to expose us to what's out there), avid reader, parent, husband, and programmer for over 30 years. I heard about OB when it started, from several other blogs. I read OB and LW fairly often, but not exhaustively--there's never enough time. I am skeptical of some conventional wisdom but also of alternatives. I didn't like collapsing wave functions when I took QM and also didn't like many worlds when introduced by a physics-major friend. (I doubt if I'll dig into QM again--my brain has lost some of its edge.)

comment by Taylor · 2010-04-19T20:46:10.484Z · LW(p) · GW(p)

Hi.

comment by partialcharge · 2010-04-19T20:31:57.119Z · LW(p) · GW(p)

Hi.

comment by BenLowell · 2010-04-19T18:36:07.575Z · LW(p) · GW(p)

Hello

I've been lurking for around 2 years or so. I'll introduce myself properly in the introduction thread.

comment by Matt_McAlister · 2010-04-19T15:09:22.753Z · LW(p) · GW(p)

Hi. Recently finished a B.A. in Philosophy, working in residential sustainability (i.e. 'Green Building') for the moment. I'll begin contributing once I've read through the Sequences.

comment by Loquutus · 2010-04-19T14:52:45.020Z · LW(p) · GW(p)

Hi. I'm an over-aged college student from Philadelphia interested in studying almost exactly what this blog and Overcoming Bias are about.

comment by andrewjhacker · 2010-04-19T14:29:01.675Z · LW(p) · GW(p)

Hi. Been following Eliezer since SS09.

comment by DanDzombak · 2010-04-19T14:19:17.294Z · LW(p) · GW(p)

Hi

comment by drunkpotato · 2010-04-19T14:09:17.792Z · LW(p) · GW(p)

Hi. I came for the quantum mechanics thread, and stayed for the love of Bayes.

comment by AndirReinmar · 2010-04-19T12:04:37.358Z · LW(p) · GW(p)

Hi.

comment by drcode · 2010-04-19T11:34:46.193Z · LW(p) · GW(p)

HI.

comment by ResistTheUrge · 2010-04-19T10:44:55.728Z · LW(p) · GW(p)

Lurker for about a year. Made my only comment previous to this one a few months ago.

I almost never feel I have anything to contribute here. Even when I do, someone else has already expressed my thoughts in a comment more clear and thorough than anything I would have written. But this is a good thing!

comment by igorbivor · 2010-04-19T10:44:35.651Z · LW(p) · GW(p)

Hi

comment by SovietPyg · 2010-04-19T08:24:47.102Z · LW(p) · GW(p)

Hi!

Delurking from Russia here. I’ve been reading LessWrong (and, consequently, OB, since it is often linked to on here) for about 3 months. I have to confess to falling in love with this website for the mind-stretching articles and comments in the threads. However, like many other lurkers have already said, I feel I cannot contribute anything due to lack of linguistic proficiency on my part and due to the fact that someone has usually already posted what I would want to say. I decided to de-lurk and say ‘hi’ because you created the impression of talking to each lurker (including myself) personally. I couldn’t help but overcome my usual reluctance to engage in commenting.

Thank you for such an excellent source of thoughts, by the way!

comment by hello · 2010-04-19T07:15:43.855Z · LW(p) · GW(p)

hi

comment by reaver121 · 2010-04-19T06:49:23.666Z · LW(p) · GW(p)

Well, I don't count as a lurker anymore but I only started posting about two weeks ago and lurked about 2 years before that so I think I qualify to comment about it. The only 2 forums where I post(ed) at all are LessWrong and INTPCentral.

INTPCentral was more of an experiment to see if I could sustain posting for an extended period of time. It didn't work, and after 2 weeks I lost interest. LessWrong has less chance of going the same way because of the high level of most top posts. That's my first barrier to posting: the online community has to be interesting enough to make me come back.

The second is a certain reluctance to comment at all. I think that has to do with my aversion to attention (although this doesn't fly when I'm with friends. Then I have no problem with it). The only reason to call attention to myself is when I can significantly add to the conversation or to correct someone. That also makes it difficult for me to comment on a top level post that already has been thoroughly analyzed in the comments. Adding a comment that doesn't add anything does look too much like yelling 'me too, me too'.

comment by vshih · 2010-04-19T06:33:38.589Z · LW(p) · GW(p)

Reluctantly relinquishing my lurker status.

comment by HoverHell · 2010-04-19T06:32:32.329Z · LW(p) · GW(p)

-

comment by slowd · 2010-04-19T05:47:59.713Z · LW(p) · GW(p)

Long time lurker here. Seattle WA. I've been following what Eliezer has had to say since 2003. Started way back on extropy-chat mailing list and reading SL4 archives, read Overcoming Bias since around 2008, and now I read here. I only lurk because I find that getting involved in discussion is too interesting, it distracts me from my projects.

comment by Joseph · 2010-04-19T04:19:52.457Z · LW(p) · GW(p)

Hello. I've been lurking here and on OB for some time now. I started reading OB at least at the beginning of 2008, possibly in the last few months of 2007.

comment by bandwagonsmasher · 2010-04-19T03:48:01.509Z · LW(p) · GW(p)

Hi, lurker here (male, Chicago, attorney, 30). I am a regular Overcoming Bias reader who followed Eliezer to this site. To quote Buster Bluth: "You guys are so smart!" (slides off chair).

comment by frikle · 2010-04-19T03:37:07.719Z · LW(p) · GW(p)

Hi, I'm a reader from Eliezer's OB days, still lurking as I don't have much time or much to add at the moment. Hopefully this will change soon.

comment by baisong · 2010-04-19T03:26:52.476Z · LW(p) · GW(p)

Hi. I've been subscribed to the RSS for a few months now.

comment by Mediocrity · 2010-04-19T03:24:44.433Z · LW(p) · GW(p)

Hi

comment by sclark · 2010-04-19T03:24:33.128Z · LW(p) · GW(p)

I've been reading this blog for about half a year now, and loving it, after Accelerating Future (I think) referenced it for something. I don't post many comments because anything I'd have to contribute usually already has been contributed, but I find that if you surround yourself with (or read) more intelligent people, they have this peculiar way of making you the same. Keep going, Less Wrong, a lot of us are learning all sorts of great things from you!

comment by eastvillagechick · 2010-04-19T03:06:38.415Z · LW(p) · GW(p)

Hi. Why do I lurk? Because I only visit occasionally, only for insight, and not because I feel any great need to belong. But please keep up the good work.
Naturally "less wrong" will have an even higher percentage of lurkers than others. After all, you challenge the biases we use when we see ourselves, the world ... and the less our conscious, identified selves know about that, the better. But still, we return...

comment by Llyando · 2010-04-19T02:50:13.295Z · LW(p) · GW(p)

Hello. Long time lurker. Well I made an account a while ago and plan on contributing once I get the material. It seems like a wall I have to get over but I don't doubt I will with time.

comment by mlinksva · 2010-04-19T02:30:18.575Z · LW(p) · GW(p)

kthxhi

comment by StevenJ · 2010-04-19T02:10:23.963Z · LW(p) · GW(p)

Delurk:

Hi

Back to lurking...

comment by jimm · 2010-04-19T02:02:45.141Z · LW(p) · GW(p)

Hi. I read the RSS feed.

comment by sbierwagen · 2010-04-19T02:00:11.664Z · LW(p) · GW(p)

Hi.

comment by mustntgrumble · 2010-04-19T01:57:00.366Z · LW(p) · GW(p)

Hi. I'd never not lurked anywhere until I not-lurked here now.

comment by Matt_Stein · 2010-04-19T01:37:49.836Z · LW(p) · GW(p)

Hi. Like others have said, I tend to not post because I feel I can't add anything constructive to the discussion.

I don't think there's anything wrong with that though. A good part of learning can be knowing when to be silent and listen to what others have to say.

Replies from: bufu, Kevin
comment by bufu · 2010-04-19T04:15:08.313Z · LW(p) · GW(p)

Hi.

And agreed.

comment by Kevin · 2010-04-19T01:47:07.788Z · LW(p) · GW(p)

I was that way for about 8 months -- I've been a member of Less Wrong since it was turned on, but almost all of my karma has been acquired in 2010.

I had a lot of free time and so I jumped in by replying to comments on the recent comments page. My tips for doing it successfully are to look for comments where you can add a small point of additional information, or have a minor disagreement with a point of the comment. In order to make sure you don't lose karma for doing this, couch your words in linguistic uncertainty, using phrases like "I think".

Replies from: alasarod
comment by alasarod · 2010-04-19T04:58:46.702Z · LW(p) · GW(p)

You sound a little too confident when you say "In order." Oughtn't you hedge that statement?? :)

And hi.

Replies from: Kevin
comment by Kevin · 2010-04-19T05:19:42.641Z · LW(p) · GW(p)

Yes, yes, I totally deserve to lose large amounts of karma for being too certain. Hello!

Replies from: wedrifid
comment by wedrifid · 2010-04-19T05:29:20.734Z · LW(p) · GW(p)

Yes, yes, I totally deserve to lose large amounts of karma for being too certain.

No you don't. You're wrong. Downv...

comment by 0sn · 2010-04-19T01:25:46.535Z · LW(p) · GW(p)

Hi. I keep forgetting to log in, and mostly just watch the front-page feed in Google Reader, but I do pass interesting articles and posts along to friends and family. They generally seem to like it, so that's good. I'm interested in what you might call community outreach via my comics where I try to subtly involve issues of rationality and such. Feel free to drop by and suggest themes I should use.

Replies from: RobinZ
comment by RobinZ · 2010-04-19T01:55:14.820Z · LW(p) · GW(p)

Thanks for the link!

comment by [deleted] · 2010-04-19T01:17:10.166Z · LW(p) · GW(p)

Hello; I enjoy reading this site, but feel kind of inadequate to actually post something when so many of the main postings here are so erudite.

comment by AlexMennen · 2010-04-19T01:03:10.230Z · LW(p) · GW(p)

Hi.

Why would Less Wrong have an abnormally high percentage of lurkers? Also, being a lurker is not black and white. For example, I mostly just lurk, but I post comments occasionally.

Replies from: Kevin
comment by Kevin · 2010-04-19T01:37:47.868Z · LW(p) · GW(p)

I think Less Wrong has an abnormally high percentage of lurkers because if participating at any web site is intimidating, participating at Less Wrong is especially intimidating because of the high level of discourse and English linguistic proficiency.

For the strictest definition of lurker, if you have registered for an account you are not, or are no longer, a lurker, but the definition is really not important.

Replies from: apophenia, gregconen, AlexMennen
comment by apophenia · 2010-04-19T03:44:28.637Z · LW(p) · GW(p)

I read the blog for two months before getting an account, and then continued to lurk, only upvoting and not commenting. I found that I felt like an observer without an account, and a silent participant with one.

comment by gregconen · 2010-04-19T07:40:20.291Z · LW(p) · GW(p)

Also, the karma system adds an additional barrier, at least in my mind. Knowing that your comment is going to be explicitly judged and your score added to a "permanent record" can be intimidating.

Replies from: Jowibou
comment by Jowibou · 2010-04-19T07:56:55.568Z · LW(p) · GW(p)

Whether we like it or not, that "intimidation" may be the single most important factor in keeping the level of discourse in the comments unusually high. Status games can be beneficial.

Replies from: gregconen
comment by gregconen · 2010-04-19T15:50:39.390Z · LW(p) · GW(p)

Indeed. I'm not saying the karma system is a bad thing.

comment by AlexMennen · 2010-04-20T00:24:35.881Z · LW(p) · GW(p)

This definition of lurker has the advantage of being clear-cut enough that numbers are meaningful, but does not represent as important a group in online community dynamics as the definition of lurker as someone who reads but does not post, regardless of whether or not he has an account.

Also, with that definition, I have not been a lurker for quite a while, and yet I appear to be accumulating free karma points for saying "hi" anyway. Not complaining.

comment by Dannil · 2010-04-18T16:34:50.069Z · LW(p) · GW(p)

Hi! This made me register: first barrier overcome. I don’t think I will ever contribute that much, but maybe I will add a comment now and then when I have something intelligent to say. What I have read here and on OB has contributed quite a bit to my thinking.

comment by alpaca · 2010-04-17T01:35:28.107Z · LW(p) · GW(p)

Hi. This motivated me to register instead of just RSS-lurking. So that removes one barrier to potential future participation.

comment by vizikahn · 2010-04-16T23:23:35.145Z · LW(p) · GW(p)

Hi. Too bad High Five Day went already.

Replies from: jtolds
comment by jtolds · 2010-04-19T01:27:54.883Z · LW(p) · GW(p)

oh no! i was totally scrolling down to post hi when i saw this.

I put high five day in my calendar as the 19th of april, and so I was super stoked for tomorrow. who knew it was the third thursday? not me. :( what a bummer

also, hi!

comment by jpulgarin · 2011-10-23T00:52:24.409Z · LW(p) · GW(p)

Hi

comment by jschulter · 2010-10-24T20:52:55.581Z · LW(p) · GW(p)

Hello! I'm currently doing a depth-first read through the sequences, and I've been enjoying all of it so far. I'm another one drawn in by HP:MOR, but I found even more here than I could have hoped for.

comment by WrongBot · 2010-06-21T20:15:53.927Z · LW(p) · GW(p)

Hi. Got sucked in to the site via MoR (of course), and have been devouring the sequences and related archive material for about a month or so.

comment by sroecker · 2010-05-09T22:17:22.215Z · LW(p) · GW(p)

Hi, I am a 24 year old physics student from Germany.

comment by inanytime · 2010-05-07T07:38:30.364Z · LW(p) · GW(p)

Hi.

comment by mitchellb · 2010-04-30T15:22:53.149Z · LW(p) · GW(p)

Hi, I'm a biology student from Germany. I stumbled upon this page and I really, really like it. I'm spending hours reading!

comment by profsparkles · 2010-04-29T20:40:09.082Z · LW(p) · GW(p)

Hi. RSS lurker for a few months, 25 yo PhD student living in the Netherlands. MSc in cognitive neuroscience.

comment by feanor1600 · 2010-04-25T05:11:52.575Z · LW(p) · GW(p)

Hi. EconPhD student in Philadelphia. Found OB through Marginal Revolution a couple years ago.

comment by baiter · 2010-04-23T11:44:10.701Z · LW(p) · GW(p)

Hi all. 25 yo New Yorker here. Been following this site for a while now, since Eliezer was still writing at OB.

Currently I'm working on two tech startups (it's fun to not get paid). My academic background is in cognitive psychology. In addition to AI, rationality, cognitive bias, sci fi, and the other usual suspects, my interests include architecture, poker, and 17th century Dutch history. ;)

Replies from: LucasSloan
comment by LucasSloan · 2010-04-26T00:40:13.544Z · LW(p) · GW(p)

17th century Dutch history

Have you read An Alternate History of the Netherlands? It is a pretty fun what-if about how Dutch history might have gone better for the Dutch. I wouldn't recommend reading past the present day, however; the author isn't very good at projecting future technology trends.

Replies from: baiter
comment by baiter · 2010-04-26T13:28:53.450Z · LW(p) · GW(p)

Cool, I will take a look. I've frequently wondered how things would've developed had the Dutch been able to hold on to New Amsterdam...

comment by shenpen · 2010-04-22T21:03:09.963Z · LW(p) · GW(p)

Hi!

And I wonder why the word Rationalist has multiple meanings. You are clearly a Rationalist in one sense of the word, but in this other sense (thankfully, since it is not good to be a Rationalist in this other sense) you are not: http://www.thefreemanonline.org/featured/michael-oakeshott-on-rationalism-in-politics/

Would you perhaps write a short post about it? Thanks in advance.

Replies from: JGWeissman, SilasBarta
comment by JGWeissman · 2010-04-22T21:22:41.341Z · LW(p) · GW(p)

Would you perhaps write a short post about it? Thanks in advance.

From Newcomb's Problem and Regret of Rationality:

First, foremost, fundamentally, above all else:

Rational agents should WIN.

Don't mistake me, and think that I'm talking about the Hollywood Rationality stereotype that rationalists should be selfish or shortsighted. If your utility function has a term in it for others, then win their happiness. If your utility function has a term in it for a million years hence, then win the eon.

But at any rate, WIN. Don't lose reasonably, WIN.

If it turns out that the techniques we advocate predictably lose, even though we thought they were reasonable, even though they came from our best mathematical investigation into what a rational agent should do, then we will conclude that those techniques are not actually rational, and we should figure out something else.

comment by SilasBarta · 2010-04-22T21:37:00.202Z · LW(p) · GW(p)

Hm, the article in the link raises some interesting issues, given the goals of this site. People here want to develop artificial, generally intelligent beings (AGIs), which involves specifying, unambiguously, what you want a machine to do in a way that it will be as creative (or more) and capable as humans are. Oakeshott refers to an attempt to instruct (humans) by pure reference to theory-driven rules as "rationalism" and considers it a huge error.

Now, both LWers and Oakeshott would agree that to learn about the world, you have to interact with it, and the more, the better. But you can see the conflict between his worldview and that of this site's frequenters. While Oakeshottians will dismiss any kind of non-apprenticed teaching as futile, those here wish to use deep theoretical understanding of the lawfulness of intelligence to create beings that can learn with different restrictions than what humans have; and also, to break down this "tacit knowledge" humans use in complex tasks, into steps so simple a machine could follow them.

Historically, the latter paradigm has been rife with failures next to ambitious promises, but in recent decades has made impressive strides in doing things that "of course" a machine could never do because of the "infinite" rules it would need to learn.

Also, Oakeshott's critique is reminiscent of the discussion we had recently about how much (useful) knowledge you can convey to someone merely through explanation, without passing on the experience set. I supported the view that people typically overestimate the extent of the knowledge that can't be explained, and give up too easily on putting it in communicable form.

Btw, the author, Gene Callahan is an antireductionist I've argued with in the past (that's a link to a part of an exchange I moved to my blog when he kept deleting my comments).

comment by tim · 2010-04-22T17:56:40.901Z · LW(p) · GW(p)

Hi, I made a couple posts a while back but recently have been simply lurking.

I would like to comment more, and I think it would benefit me to toss my ideas out there and get some feedback. I think part of the problem is that while I have a decent understanding of many concepts promoted here (probably level 1, beginning to pass into level 2 on the Understanding your understanding scale), fully articulating my thoughts in a coherent and original manner is difficult. Most notably, when discussing things with friends I find myself falling back on examples I've read here and having trouble coming up with my own analogies, which seems symptomatic of a lack of understanding.

personal tidbits since this seems like the place:

I am a psychology undergraduate at the University of Texas and am hoping to go to graduate school for cognitive psychology. I am very interested in modeling human cognition and the way we think. Most notably I have a strong interest in decision theory and game theory and hope to do research in that area. Also 'priming' is exceptionally cool. I have also been working on getting some basic computer programming down and have some skill in both Python and Java.

Recreationally I enjoy biking, running and being outside in general. I play online poker semi-regularly and find it useful not only as something fun and profitable but as a fairly valuable introspective tool. The way I am playing and responding is a fairly accurate reflection of how I am dealing with life in general at the time. I have also started reading more and have recently finished Outliers and Blink by Malcolm Gladwell, Rational Decision Making in an Uncertain World by Robyn Dawes, and am currently working on I am a Strange Loop by Douglas Hofstadter.

comment by suzanne · 2010-04-22T16:57:09.318Z · LW(p) · GW(p)

Hi.

I've been lurking here and on OB for a couple of years. As other people have said, there seems to be a large amount of prerequisite knowledge required to post here. I usually find my own thoughts expressed more clearly by someone else in the comments, so I up-vote rather than just adding noise.

comment by anttil · 2010-04-22T15:47:00.410Z · LW(p) · GW(p)

hi

comment by [deleted] · 2010-04-22T05:19:33.016Z · LW(p) · GW(p)

Hello there, I've been reading the site for around six months now. I am an education student; LW has certainly changed my perception of human behavior and learning, and has given me much to reflect upon.

comment by gfyork · 2010-04-21T20:08:33.033Z · LW(p) · GW(p)

"Hi"

(Just standing up to be counted.)

G.

comment by Piglet · 2010-04-21T16:56:38.308Z · LW(p) · GW(p)

Hi. Long-time lurker since Eliezer was posting at OB (which candidly I find far less interesting these days). I'm 37, and am a practicing lawyer with several small children; this keeps me sufficiently busy that I don't often have time to think hard enough to post here, although the discussions are usually quite interesting. Also, I'm pretty non-quantitative due to misspent undergraduate years. I view this site as place where generally I should be listening, not talking.

comment by Paul · 2010-04-21T16:09:17.882Z · LW(p) · GW(p)

Hi.

comment by alecrene · 2010-04-21T08:55:56.832Z · LW(p) · GW(p)

I've been lurking since early OB. I am not here due to being Singularitarian but I've been using this site since I was in high school and through college to help keep myself from being a charlatan in any intellectual endeavor. I find that it takes regular reminders and dedication to not extend past the limits of my knowledge, and both OB and LW continually help to fine-tune my internal sense of "what I don't know."

To give a bit of a frame of reference, I'm studying social sciences and my specific problem domain is Educational Psychology and I'm interested in finding out how to render a subject into a receptive state for new information when they are dismissive. I'm still fairly early into my college track, so I haven't narrowed in any more than that, but I have my sights set on grad school.

comment by anz · 2010-04-21T08:49:21.534Z · LW(p) · GW(p)

Hi!

comment by sixes_and_sevens · 2010-04-21T08:21:20.190Z · LW(p) · GW(p)

Hi.

Mostly-lurker here, save for the occasional mildly pithy comment. I'm a DBA/sysadmin by day, studying towards an Econ + Maths degree in my spare time. LW has a lot of parallels with my fields of interest, elucidates on a lot of areas where I have half-formed ideas and provides exceedingly worthy arguments for things I don't agree with.

comment by Obbieuth · 2010-04-21T01:46:53.242Z · LW(p) · GW(p)

It’s so much easier to be a non-contributing zero. But I find myself unable to back down from an open request to drag myself out of the shadows of lurk and into the light of the rationality justice league. Part of the appeal of lurker status for me comes from my outlook on this site in general. I haven’t exactly figured out what I’m doing or what I believe in; but I do know I’ve still got a lot to figure out. Lurking lets me passively ponder interesting ideas proposed here without really committing to anything in particular. But having been prompted to post something I find myself uncertain as to what my level of involvement should be in this idea mill of rationality and humanity.

comment by Obbieuth · 2010-04-21T00:00:56.895Z · LW(p) · GW(p)

What if I'm not witty or rational enough to post a thought provoking idea?

comment by Jens · 2010-04-20T20:05:30.098Z · LW(p) · GW(p)

Hi.

comment by Vaegrim · 2010-04-20T18:23:18.700Z · LW(p) · GW(p)

Hi, Started reading at Overcoming Bias before the split. Mostly following Eliezer's fiction, but also enjoying the deconstruction of human blind spots.

comment by Erin · 2010-04-20T16:57:31.031Z · LW(p) · GW(p)

Hi. (sinks back into the shadows)

comment by FrankLarsson · 2010-04-20T12:36:16.460Z · LW(p) · GW(p)

Hi.

comment by purpleposeidon · 2010-04-20T06:35:09.641Z · LW(p) · GW(p)

Hi. I see that the first point is free.

I am a Bay Area (California, United States) 19 year-old Computer Science student. I imagine I'll be taking actual CS classes next year. I've been lurking for about a month.

Replies from: sketerpot
comment by sketerpot · 2010-04-21T04:13:52.157Z · LW(p) · GW(p)

Man, taking freshman introductory classes is a drag. At the risk of insulting your intelligence, I feel compelled to remind you that the most important thing to learn in a CS curriculum is to go out and learn things that aren't taught in your classes, through the power of the internet. For example, you could go out and learn the first programming language in this list that you don't already know: Python, Lisp, C, Haskell. Or read about how some sorting algorithms work. Or write a compiler. Or whatever strikes your fancy, really.
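As a concrete illustration of that kind of exercise (a minimal sketch, not code from the thread), implementing a sort yourself is a classic first project; a merge sort in Python might look something like this:

    # Illustrative sketch only: a merge sort, one of the classic
    # "learn how a sorting algorithm works" exercises.
    def merge_sort(items):
        if len(items) <= 1:
            return list(items)              # 0 or 1 elements: already sorted
        mid = len(items) // 2
        left = merge_sort(items[:mid])      # sort each half recursively
        right = merge_sort(items[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:         # repeatedly take the smaller head element
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])             # append whatever remains in either half
        merged.extend(right[j:])
        return merged

    print(merge_sort([5, 2, 9, 1, 5, 6]))   # [1, 2, 5, 5, 6, 9]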

Replies from: purpleposeidon
comment by purpleposeidon · 2010-04-25T23:37:31.989Z · LW(p) · GW(p)

I am very familiar with python, and a little bit familiar with C. (I am also a sophomore, not a freshman :P) I spent an hour looking at lisp once, but never got into it. As for Haskell, I have seen it, and it looks weird. I've'n't done much real algorithmic work. I wrote a (Warning: shameful self-plug) parser for Lojban, but it only works through trial-and-error and dumb luck.

Replies from: sketerpot
comment by sketerpot · 2010-04-26T08:51:49.819Z · LW(p) · GW(p)

I looked at your code. Why aren't you in "actual CS classes" yet? You're obviously qualified.

Replies from: purpleposeidon
comment by purpleposeidon · 2010-04-27T06:19:53.671Z · LW(p) · GW(p)

Because lame community college isn't all that great. I hope Berkeley agrees with you. :)

comment by Ledfox · 2010-04-20T04:54:45.835Z · LW(p) · GW(p)

Hello. :I

comment by AlexL · 2010-04-20T04:47:26.514Z · LW(p) · GW(p)

Hi! Hooked since OB sequences - and need to go back for several of them.

comment by JoshB · 2010-04-20T02:15:56.004Z · LW(p) · GW(p)

Hi, so I've made the switch from Lurker to Lurker-With-Log-In (LWLI).

I'm a young geologist and artist...

.....very interested in Neuroaesthetics at the moment, maybe I'll post some thoughts on it when I'm well-read enough.

Keep challenging me :)

comment by TheThinker · 2010-04-20T01:51:20.961Z · LW(p) · GW(p)

Hi. Jeffrey Ellis, 44 yr-old multi-disciplined engineer working at Johnson Space Center. I blog all about critical thinking at The Thinker, http://jeffreyellis.org/blog/. Came here from Overcoming Bias when this place started up.

comment by alexs · 2010-04-20T01:32:28.328Z · LW(p) · GW(p)

hi

comment by Volt · 2010-04-20T00:05:34.472Z · LW(p) · GW(p)

Hi there. I suppose I might as well register and post.

I'm an information science grad student. I've been following the community for a few years (since Eliezer wrote on Overcoming Bias), but haven't been commenting because most of this stuff still seems a bit over my head (and I have lots of catching up to do).

Ha. Was this comment as useless as I think it is?

comment by thatrenfrewguy · 2010-04-20T00:04:14.535Z · LW(p) · GW(p)

Not sure if I am a lurker, but HI

comment by Eneasz · 2010-04-19T22:37:51.321Z · LW(p) · GW(p)

Hi. Accountant, 29. Currently in the process of signing up with CI, should be complete by the end of the month. Wish Eliezer would write more fiction. :) But I love everything on here. Been lurking for about a year?

Replies from: Kevin
comment by Kevin · 2010-04-19T23:08:29.541Z · LW(p) · GW(p)

Did you see his Harry Potter fan fiction? http://www.fanfiction.net/s/5782108/1/Harry_Potter_and_the_Methods_of_Rationality

Replies from: sketerpot
comment by sketerpot · 2010-04-21T04:16:47.935Z · LW(p) · GW(p)

Incidentally, a fine way to deal with fiction deprivation is to go forth and write some of your own. It's fun, and most people who try it report that it's hugely rewarding. (Either that or they don't say anything at all. I have failed to prove my thesis! But it's worth trying anyway.)

Replies from: RobinZ
comment by RobinZ · 2010-04-21T11:19:30.734Z · LW(p) · GW(p)

I found it incredibly tough, personally - the activity itself did not feel rewarding, just (at best) the results.

comment by Divide · 2010-04-19T22:22:28.432Z · LW(p) · GW(p)

Hi!

(Lurking since Eliezer had still been writing his sequences on OB.)

comment by SteveReilly · 2010-04-19T21:41:01.415Z · LW(p) · GW(p)

Hi.

I enjoy the posts, but I usually don't have anything interesting to say on the topic. Still, I can never turn down a free karma point, so here I am.

comment by danlowlite · 2010-04-19T21:36:30.124Z · LW(p) · GW(p)

Just registered to say hi. So, "Hi."

I'm a technical writer/ultra-part-time grad student at Northern Illinois University in Rhetoric & Professional Writing (working on my thesis so slowly). I also write stories and other such things.

Followed the wave from Overcoming Bias.

comment by patmctap · 2010-04-19T20:08:34.688Z · LW(p) · GW(p)

Saying, "Hi."

comment by standardtoaster · 2010-04-19T18:10:17.210Z · LW(p) · GW(p)

Hi, haven't read much on the site yet, but it has certainly grabbed my attention.

comment by Noonius · 2010-04-19T17:30:08.661Z · LW(p) · GW(p)

Hi.

comment by Shae · 2010-04-19T16:58:32.370Z · LW(p) · GW(p)

Hello.

Female / Web developer / 41 years old / rural Indiana native

I've commented a few times, but not many.

comment by Meni_Rosenfeld · 2010-04-19T16:07:11.070Z · LW(p) · GW(p)

Hi.

I intend to become more active in the future, at which point I will introduce myself.

comment by Daneel · 2010-04-19T16:00:20.267Z · LW(p) · GW(p)

I registered just to say hi :) Just some info for your statistics. 21 years old, male, Industrial Design student from Buenos Aires, Argentina. I made it here via Rationally Speaking.

Bye

comment by sclamons · 2010-04-19T15:49:49.903Z · LW(p) · GW(p)

Hello, I'm an undergrad student who's been reading LW for about six months now. So far I've stuck to lurking for a couple of reasons. For one thing, most of the comments I have are already made by other people. Also, there's enough information on LW that it seems more fruitful to move on to a new article than to post a question.

There's a LOT of background reading available here on LW, which is intimidating to a new reader. I can say for myself that it's difficult to bring myself to post when I know there are dozens of background articles I still need to read. That's probably a good thing, though -- from what I've read about SL4 (and what I read in my brief forays there), LW has avoided a lot of the redundancy and conversational looping that its predecessor had.

comment by cerebus · 2010-04-19T12:50:36.220Z · LW(p) · GW(p)

Hi. I've been following Eliezer's stuff since CaTAI. Been a lurker on extropy-chat, SL4, OB and LW. I remember once participating in an #sl4 chat, and being unable to post due to my accelerating heartrate.

Lurking can be debilitating. Well, a symptom rather than the disease, I guess.

comment by rlingle · 2010-04-19T12:29:15.499Z · LW(p) · GW(p)

hi

comment by Gaks · 2010-04-19T12:07:19.388Z · LW(p) · GW(p)

Hi.

comment by eugman · 2010-04-19T11:12:24.429Z · LW(p) · GW(p)

Hello, I lurked for a long time. I've started dipping my toes in the water.

comment by JMaddison · 2010-04-19T10:01:13.403Z · LW(p) · GW(p)

Hi!

comment by Paamayim · 2010-04-19T06:29:15.990Z · LW(p) · GW(p)

Hi!

comment by hjkl · 2010-04-19T05:22:59.624Z · LW(p) · GW(p)

Hello.

comment by prismism · 2010-04-19T04:47:05.698Z · LW(p) · GW(p)

Hi!

I'm a high school student who has been reading (and lurking on) LessWrong for many months now. I have always found the blog posts to be very insightful and enlightening, and I greatly enjoy reading them. I'm a young aspiring transhumanistic biologist who just can't wait to get his hands dirty debugging and retooling the human body and mind! Please, keep up the wonderful posts, and I will be sure to contribute as soon as I find that I have something really good to say.

comment by Identity · 2010-04-19T04:31:15.268Z · LW(p) · GW(p)

I lurk on almost every forum that I read on the internet. The mere fact that I'm logged out of a forum that I'm registered on can be enough to cause me to say, "screw it" and not post for months. I frequently get, "Wow I remember you" as a response to my sparse postings.

My penchant to lurk coupled with my lack of confidence that I have anything worthwhile to contribute to this community made it seem doubly unlikely that I would ever post anything here. But I'll stand up and be counted now as part of this experiment, as it's the only contribution I can really make.

Cheers, and thanks for posting all this delicious lurker chow.

comment by fiddlemath · 2010-04-19T03:52:31.240Z · LW(p) · GW(p)

Hi. I'm a lurker here, working on my PhD in Computer Science at the University of Wisconsin. I've only been reading for the last few months, but I've gone through all the major sequences in the archives.

Replies from: zero_call
comment by zero_call · 2010-04-19T05:32:21.348Z · LW(p) · GW(p)

Cool... Engineering physics grad student at UW.

comment by Tim_Helmstedt · 2010-04-19T03:38:54.161Z · LW(p) · GW(p)

Yeah I'm a lurker...

Although now I have an account, I guess I have no excuse...

comment by mohrland · 2010-04-19T02:32:46.800Z · LW(p) · GW(p)

Hi.

comment by Threads · 2010-04-19T02:25:05.869Z · LW(p) · GW(p)

Hi. I've been lurking on OB+LW for around two years. I took the step of making an account a few months ago. Eventually I'll post something meaningful.

comment by Virge · 2010-04-19T01:51:33.495Z · LW(p) · GW(p)

Hi. I was an occasional contributor on OB and have posted a few comments on LW. I've dropped back to lurking for about a year now. I find most of the posts stimulating -- some stimulating enough to make me want to comment -- but my recent habit of catching up in bursts means that the conversations are often several weeks old and a lot of what needs to be argued about them has already been said.

The last post that almost prompted me to comment was ata's mathematical universe / map=territory post. It forced me to think for some time about the reification of mathematical subjunctives and how similar that was to common confusions about 'couldness'. I decided I didn't have the time and energy to revive the discussion and to refine my ideas with sufficient rigor to make it worth everyone's attention.

Over the past week I've worked through my backlog of LW reading, so I've removed my "old conversation" excuse for not commenting. I'll still be mostly a lurker.

comment by Briareos · 2010-04-19T01:48:17.064Z · LW(p) · GW(p)

Hi.

comment by zemaj · 2010-04-19T01:47:05.124Z · LW(p) · GW(p)

Hi

Been reading Less Wrong religiously for about 6 months, but still definitely in the consume, not contribute phase.

It feels like Less Wrong has pretty dramatically changed my life. I'm doing pretty well with overcoming Akrasia (or at least identifying it where I haven't yet overcome it). I'm also significantly happier all round, understanding decisions I make and most importantly exercising my ability to control these decisions. I'm doing a lot of things I would have avoided before just because I realise that my reasons for avoiding them were not rational. My boundaries are much more sensible now and getting better weekly. Still a work in progress, but I'm incredibly happy with where things are going.

So, a big thanks to everyone who contributes here. Can't thank you enough :)

comment by Nyuutsu · 2010-04-19T01:45:37.619Z · LW(p) · GW(p)

Hello!

comment by cha5on · 2010-04-19T01:31:35.355Z · LW(p) · GW(p)

Greetings!

comment by Bildoon · 2010-04-19T01:30:01.758Z · LW(p) · GW(p)

Hello.

comment by danield · 2010-04-19T01:29:55.433Z · LW(p) · GW(p)

Hi!

comment by DataPacRat · 2010-04-19T01:28:25.816Z · LW(p) · GW(p)

Coi

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-04-19T02:40:00.064Z · LW(p) · GW(p)

Have you found that Lojban makes it easier to think clearly?

comment by Sharur · 2010-04-19T01:26:30.222Z · LW(p) · GW(p)

Hi

comment by ig0r · 2010-04-19T01:24:00.461Z · LW(p) · GW(p)

Hello

comment by OneWhoFrogs · 2010-04-19T01:23:58.285Z · LW(p) · GW(p)

Hi.

I only subscribed yesterday, and I didn't even have an account before now, but I'll consider myself a lurker and post here. There probably won't be a better time to join the community anyway.

Nice to meet you guys.

Replies from: Alicorn
comment by Alicorn · 2010-12-05T01:40:59.019Z · LW(p) · GW(p)

Apropos of nothing:

How does one frog?

Replies from: gwern, OneWhoFrogs
comment by gwern · 2010-12-05T02:33:54.520Z · LW(p) · GW(p)

Well, the OED gives several possibilities. If one is 'frogging', one is 'catching frogs, fishing for frogs'. One might 'frog' a coat - that is, apply 'frogs' ('An attachment to the waist-belt in which a sword or bayonet or hatchet may be carried.' or 'An ornamental fastening for the front of a military coat or cloak, consisting of a spindle-shaped button, covered with silk or other material, which passes through a loop on the opposite side of the garment.'). And so on.

Replies from: NancyLebovitz, OneWhoFrogs
comment by NancyLebovitz · 2010-12-05T02:53:20.114Z · LW(p) · GW(p)

Knitters and crocheters use "frogging" to refer to undoing defective work. Rippit! Rippit!

comment by OneWhoFrogs · 2010-12-05T03:51:41.296Z · LW(p) · GW(p)

That it's actually a verb surprises me. I was just intending it to be a pun on the game Frogger. I thought, "one who runs is a runner, so what does Frogger mean?"

Replies from: gwern
comment by gwern · 2010-12-05T03:55:10.269Z · LW(p) · GW(p)

If there's one thing I've learned from buying an OED, it's that every damn word in English has an amazing number of variations and meanings.

comment by OneWhoFrogs · 2010-12-05T02:04:07.410Z · LW(p) · GW(p)

"define: frogger"

...it was the best username I could think of, at the time ;).

comment by bluej100 · 2010-04-19T01:23:46.256Z · LW(p) · GW(p)

I just read the RSS feed for a Yudkowsky fix since he left Overcoming Bias.

comment by magsol · 2010-04-19T01:19:04.864Z · LW(p) · GW(p)

Hi there. I lurk, mainly for the purpose of learning, but also because of significant time demands elsewhere.

comment by josefjohann · 2010-04-19T01:18:44.139Z · LW(p) · GW(p)

I'm a lurker. I follow via the RSS feed. LessWrong is in my "firehose" folder, meaning it's in a limbo state. I might promote it to an actual folder or I might unsubscribe.

At least, that's until I find some more nonsensical classification scheme for my RSS feeds.

comment by treed · 2010-04-19T01:08:23.648Z · LW(p) · GW(p)

Lo!

(I apparently had an account already, although I didn't remember this until I tried to comment and my usual name was taken in the registration screen.)

comment by David_Rotor · 2010-04-19T00:59:07.877Z · LW(p) · GW(p)

Hey ho.

comment by chris_elliott · 2010-04-18T18:50:25.006Z · LW(p) · GW(p)

Hi.

comment by Mardonius · 2010-04-18T06:58:13.323Z · LW(p) · GW(p)

Hi, been reading this site since it split from OB, but have never commented, though on occasion I have been tempted.

comment by exousia · 2010-04-18T03:07:00.474Z · LW(p) · GW(p)

Hello ~

I've been reading this site for several months, but I still feel unqualified to actually post anything. I've yet to entirely read all of the sequences, and I also lack the math/science background that appears to be relatively common here (I'm an industrial design student). As a result I'm (perhaps excessively) wary of posting something that's redundant or has a glaring flaw I ought to have been aware of.

Thanks for giving an excuse to make a first post, though.

comment by jasonmcdowell · 2010-04-17T22:06:00.936Z · LW(p) · GW(p)

Hi.

comment by arbimote · 2010-04-17T18:00:03.150Z · LW(p) · GW(p)

Hi.

I registered and started posting a while back, but since then have reverted to lurking. Partly due to not having time, but I can also identify with reasons some others have given.

comment by MartinB · 2010-04-17T02:19:42.325Z · LW(p) · GW(p)

Hi,

I am technically not lurking, as I am preparing my anti-akrasia article.

Martin

Replies from: gwern
comment by gwern · 2010-04-17T21:35:46.369Z · LW(p) · GW(p)

Hm, you're submitting frivolous comments as a way of not preparing an anti-akrasia article... Oh the ironing!

Replies from: MartinB
comment by MartinB · 2010-04-18T01:46:15.975Z · LW(p) · GW(p)

No, I actually have it prepared already, but I'm still collecting data from my own experience and my beta tester. But I appreciate the irony, that's what we're all here for after all :-)

Martin

Replies from: MartinB
comment by MartinB · 2010-04-18T23:57:17.046Z · LW(p) · GW(p)

And now I've just figured out that I am a few karma points short. So I lurked too much after all :-)

Replies from: apophenia, jimrandomh
comment by apophenia · 2010-04-19T02:25:23.975Z · LW(p) · GW(p)

Yes, I'm having a similar problem with my article.

Replies from: Morendil
comment by Morendil · 2010-04-19T08:13:45.872Z · LW(p) · GW(p)

Here you go. :)

comment by jimrandomh · 2010-04-19T02:16:48.769Z · LW(p) · GW(p)

Fixed that for you. Post away!

Replies from: MartinB
comment by MartinB · 2010-04-19T02:34:04.256Z · LW(p) · GW(p)

Thanks, hope the article is worth it :)

comment by jake987722 · 2010-04-17T00:47:29.643Z · LW(p) · GW(p)

Hi.

I'm a grad student studying social psychology, more or less in the heuristics & biases tradition. I've been loosely following the blog for maybe six months or so. The discussions are always thought provoking and frequently amusing. I hope to participate more in the near future.

comment by noitanigami · 2010-04-16T22:34:28.269Z · LW(p) · GW(p)

Hello

I've only been aware of this site for about a month. While I find the articles and discussions enlightening, probability theory is still very new to me. Once I have a more intuitive grasp of its implications, I plan to participate more heavily.

Replies from: JGWeissman, Kevin
comment by JGWeissman · 2010-04-16T23:04:55.353Z · LW(p) · GW(p)

Hi

You may be interested in Eliezer's Intuitive Explanation of Bayes' Theorem.

Replies from: Alicorn
comment by Alicorn · 2010-04-16T23:06:09.417Z · LW(p) · GW(p)

Or Kaj's What Is Bayesianism? for a more intuitive version.

comment by Kevin · 2010-04-16T22:37:12.819Z · LW(p) · GW(p)

You don't actually need a good grasp on probability theory to participate here. I certainly don't have a good grasp on Bayesian statistics. A lot of the discussions here are qualitative.

Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2010-04-17T14:12:41.749Z · LW(p) · GW(p)

You don't actually need a good grasp on probability theory to participate here

Anecdotally, the most strongly upvoted articles tend not to be specifically about math and statistics, but rather about meta-thinking issues: how insights from AI, cog sci, stats, social science and so on can help improve our thought processes.

comment by ronit · 2020-08-14T15:25:05.960Z · LW(p) · GW(p)

Hi

comment by Multiheaded · 2011-07-11T15:49:32.266Z · LW(p) · GW(p)

Karma pls! Oh, I mean, hi.

comment by Cayenne · 2010-12-27T22:48:12.646Z · LW(p) · GW(p)

Ah, hi there...

Edit - please disregard this post

Replies from: Dorikka
comment by Dorikka · 2011-04-20T03:04:55.309Z · LW(p) · GW(p)

Hi!

comment by CaseyMc · 2010-10-24T21:19:11.351Z · LW(p) · GW(p)

Hi. I, too, came here through HP:MOR. I've been reading through sequences on and off for the past couple of months. I occasionally click on links to recent comments.

comment by EchoingHorror · 2010-07-27T03:53:22.934Z · LW(p) · GW(p)

Hi, and all. I just joined and stopped exclusively lurking, despite my love of a certain Starcraft Unit.

A lot of the recent posts revolve around AI and I have level 0 AI knowledge, so the lurking is far from over.

But hi nevertheless. I'll try to contribute where I can and not where I can't, so there.

comment by JoelCazares · 2010-05-17T07:54:14.579Z · LW(p) · GW(p)

Hello. I read on a 30 day lag, so that's why I'm just now posting.

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2010-05-17T20:44:46.202Z · LW(p) · GW(p)

I read on a 30 day lag

Details, please. Do the items you read get to you via RSS?

ADDED many days later. Looks like I will have to wait 30 days for my reply :) :)

Replies from: JoelCazares
comment by JoelCazares · 2011-01-05T10:23:53.064Z · LW(p) · GW(p)

Or 7 months. Sorry about that! The 30 day lag is because Google Reader will purge any unread posts after about 28-30 days. So I try to read what I consider important before I lose it forever. This of course means that I end up just not reading anything except for what is about to get deleted. Ah, procrastination.

I didn't see that you had replied to this until I randomly looked at my profile on the actual LW site. Usually I just passively read LW wisdom in google reader, and not on the LW site proper.

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2011-01-07T14:34:36.101Z · LW(p) · GW(p)

Thanks for the reply. The mechanics of how people use sites like this is one of my interests.

comment by tsedi · 2010-05-03T18:29:46.842Z · LW(p) · GW(p)

Hi. Business & Computer Science grad student from Finland. Just found the site yesterday and started devouring the content today :) Great stuff!

comment by sdenheyer · 2010-05-01T01:40:47.839Z · LW(p) · GW(p)

Greetings from Canada.

I'm an audio mixer, working mostly for Discovery Channel, with an interest in science and transhumanism. Been lurking for a couple of years.

comment by spriteless · 2010-04-30T02:18:19.472Z · LW(p) · GW(p)

k, hi

comment by Mystfan · 2010-04-27T01:20:43.085Z · LW(p) · GW(p)

Hi all, I'm a physics student who's been lurking here since January or so...I'm generally pretty quiet.

Replies from: Nisan
comment by Nisan · 2011-03-02T06:30:08.724Z · LW(p) · GW(p)

Shorah!

comment by Abisashi · 2010-04-26T17:11:43.007Z · LW(p) · GW(p)

I've been lurking here for six months or so; I think I got here from Overcoming Bias through a link from Marginal Revolution. I try not to come here more than once a week because I end up spending too much time here due to the extensive interlinking.

comment by BjornLe · 2010-04-24T15:57:11.875Z · LW(p) · GW(p)

Hello, 22 year old engineering student from Sweden, finally took time to create an account after observing OB and LW for more than a year.

comment by Lonnen · 2010-04-23T23:13:52.241Z · LW(p) · GW(p)

Hi.

comment by webspiderus · 2010-04-23T22:47:11.414Z · LW(p) · GW(p)

Hi! I'm 20, originally from Moscow, and currently an undergraduate senior majoring in computer science and mathematics at a pretty decent university in California. Starting my master's in CS at a much better school in California next year. I've only recently discovered this site, but I hope to spend much more time on it in the near future.

comment by wirehead · 2010-04-22T15:32:42.049Z · LW(p) · GW(p)

Hi. Been reading the RSS feed for 3-4 months now. Slowly beginning to make sense of it all... understanding the specialized vocab and so forth. It's always been my goal to be as self-aware as possible, so I'm glad of all the interesting ideas here.

comment by domor · 2010-04-22T14:38:59.038Z · LW(p) · GW(p)

Hi. I've been lurking for quite a long time, first on OB then here.

Computer engineering student, interested in AGI and rationality. And foreign languages and stuff.

(Edit: I am especially interested in the mathematical formalization of AI - my hypothesis is that strong AI is a disorganized field in need of a more formal language to make better progress. Still a vague idea, which is why I'm just a lurker in the AI field, but I am quite interested in discussion on related topics.)

comment by mariz · 2010-04-21T16:59:09.832Z · LW(p) · GW(p)

I'll say Hi and I'll post this link which describes a study that showed that people are more likely to believe in pseudoscience if they are told that scientists disapprove of it:

http://www.alternet.org/module/printversion/146552

They are also much more likely to believe in pseudoscience if it has popular support.

comment by Tobias · 2010-04-21T06:39:54.548Z · LW(p) · GW(p)

Hi, I came here via Overcoming Bias. I study Computer Sciences in Germany.

comment by fidofetch · 2010-04-21T04:04:01.315Z · LW(p) · GW(p)

Hi, I've been reading this blog for a while now, and I was thoroughly surprised to find so many like-minded people. I haven't commented at all, because quite frankly I've had nothing to say. Hello all, though.

comment by nickgreen · 2010-04-21T03:40:51.556Z · LW(p) · GW(p)

Hi, I discovered this blog very recently - I have an economics background (Milton Friedman a big influence) and a growing interest in philosophy. This site popped up while I was searching for the 'underdog bias' (which I think must be some level of human 'moral instinct'), and this led me to the 'Why support the underdog?' article and then others. I'm really impressed by the high standard. Nick

comment by confuscated · 2010-04-21T02:42:25.888Z · LW(p) · GW(p)

Bonjour ...

comment by Luke Stebbing (LukeStebbing) · 2010-04-21T00:56:43.622Z · LW(p) · GW(p)

Hi. I've been an LW (and previously, OB) lurker for several years, but I haven't had time to provide my online presence with the care and feeding it needs. Three years of startup crunch schedules left me with a life maintenance debt, and I have a side project in dire need of progress, but once those items are out of the way I plan to delurk.

comment by Madbadger · 2010-04-21T00:55:40.299Z · LW(p) · GW(p)

Hi! 8-)

comment by Elyandarin · 2010-04-21T00:54:17.790Z · LW(p) · GW(p)

Hi!
Found this site searching for fiction via TV Tropes. While I'm a new reader, I'll likely lurk a lot. The internet is a constant deluge of input; my instinctive counter is to provide output only when I have something interesting to say, hoping others will reciprocate... (And even then, I only feel comfortable when what I say is concise, relevant and new.) After all, thousands of people might read my message; wasting their time would be unspeakably rude.

Replies from: RobinZ
comment by RobinZ · 2010-04-21T01:00:11.778Z · LW(p) · GW(p)

Fiction via TV Tropes ... the "Three Worlds Collide" page?

(I ask because I'm the one who started it - glad to hear it was useful, if it was!)

comment by [deleted] · 2010-04-21T00:27:49.971Z · LW(p) · GW(p)

Hi. I lurk here and read every post but have never really felt like commenting. Neat blog though.

comment by washi · 2010-04-21T00:27:21.218Z · LW(p) · GW(p)

Made an account just to say "hi"

So ... Hi!

comment by curiousepic · 2010-04-20T19:23:11.889Z · LW(p) · GW(p)

Hi! Lay-lurker here, I was just recently considering posting some questions in the next open thread and made an account then. We'll see how that goes, but it's nice to see this welcoming attitude!

However, a concern I have about more people being more active, and a reason I haven't signed up before, is that if more laypeople like myself begin to vote things up regularly, the upvoted posts will necessarily be the ones we both like and understand. Posts we don't understand don't get upvoted on equal footing, even though they may be of equal or greater value. Is there a comprehensive thread/discussion about the pros/cons of a greater user base here?

comment by Ivan_Tishchenko · 2010-04-20T19:03:07.108Z · LW(p) · GW(p)

Hi. Nice to meet you all. :)

comment by Danneau · 2010-04-20T18:43:21.845Z · LW(p) · GW(p)

Hi.

comment by alanpost · 2010-04-20T15:23:07.101Z · LW(p) · GW(p)

Hello. I don't make the time for active participation in this community, but I enjoy my read-only interaction with it.

I have a sense that the time commitment required for effectively participating in this community is relatively high, and I haven't discovered yet whether this time investment pays back.

comment by benji · 2010-04-20T12:41:14.229Z · LW(p) · GW(p)

Hi. Long time listener, first time caller.

comment by MattAndrews · 2010-04-20T03:12:23.121Z · LW(p) · GW(p)

Hey ho all. I'm based in Canberra, Australia (and New Ireland, Papua New Guinea), do website development/design for a living, engage in climate change discussion a lot, and ended up here by the circuitous path of stumbling across "Harry Potter and the Methods of Rationality". Marvellous piece of work, which I found quite resonant. I'm very impressed with what I've seen so far of Less Wrong.

comment by kess3r · 2010-04-19T20:36:58.221Z · LW(p) · GW(p)

hi

comment by Achilles · 2010-04-19T20:22:31.399Z · LW(p) · GW(p)

Hello, I'm studying Bioengineering at ASU, in Arizona. Right now I'm in Finland for the year. It's been an utter blast. Cool people and life-improving experiences. I'm not sure I want to go back to the US..

I would love to learn more and more about status. That's currently the most interesting thing for me. It applies directly to me, right now, as I'm in a new group of people with lots of group interactions in Helsinki, Finland. I can use that information right now.

Not so interested in the probability discussions.. Perhaps those are more interesting to others, but I have read a few of them and subsequently skipped the rest.

Thanks for your time!

comment by theoryofevrythng · 2010-04-19T19:32:55.001Z · LW(p) · GW(p)

"Staring into the Singularity" introduced me to the idea of the Singularity eight years ago (I was 16). I read SL4 for a few years after that. I've been sort of a casual follower of OB for a couple years, and just added LW to my RSS.

Hi.

comment by Dufaer · 2010-04-19T18:40:31.458Z · LW(p) · GW(p)

Hi, there! Trying to get through the sequences. And past akrasia...

comment by Zargon_ · 2010-04-19T18:31:46.732Z · LW(p) · GW(p)

Hi.

comment by rntz · 2010-04-19T16:15:04.161Z · LW(p) · GW(p)

Hi. CS undergrad at CMU here. More interested in decision theory specifically than rationality in general. Might post more if I had more time.

comment by Kisil · 2010-04-19T15:47:44.432Z · LW(p) · GW(p)

Hi.

I've posted comments twice, I think, but my read/write ratio is high enough that I think I still count here.

comment by nathan_h · 2010-04-19T15:29:42.966Z · LW(p) · GW(p)

Hi, I'm probably even of lower status than a lurker, since I don't read this blog regularly. I do like it a lot, however, and it's been on my RSS-feed list ever since Eliezer moved here from OB. (I was subscribed to and irregularly followed the posts there, too.)

I pop by whenever something catches my attention in particular. Aspiring composer from Washington (state, not D.C.) here.

comment by Sabiola (bbleeker) · 2010-04-19T15:19:33.836Z · LW(p) · GW(p)

Hi! I'm a lurker, even though I apparently already had an account here. Can't even remember when I made that...

comment by gwillen · 2010-04-19T14:19:56.352Z · LW(p) · GW(p)

I'm not sure I count as a lurker, but I'll stop in and say hi anyway.

About me: I have a BS in Computer Science from Carnegie Mellon University; now I work for a tech company, writing software and babysitting servers.

comment by MrUst · 2010-04-19T09:36:27.618Z · LW(p) · GW(p)

Hi

comment by Levent · 2010-04-19T09:30:04.016Z · LW(p) · GW(p)

Hello everyone,

I have only been reading LW for a couple of months; I might start contributing in a few more.

Greetings from Munich!

comment by Grrrr · 2010-04-19T08:45:39.129Z · LW(p) · GW(p)

Hi.

comment by glimmung · 2010-04-19T07:58:20.236Z · LW(p) · GW(p)

Hi!

Greetings from Knaresborough, North Yorkshire, UK.

comment by wintercrow · 2010-04-19T07:36:14.025Z · LW(p) · GW(p)

"Hi" seems inadequate. Salutations from a wanna-be prolix pedant? No?

comment by meta_ark · 2010-04-19T06:40:03.606Z · LW(p) · GW(p)

Good morning, people. I'm assuming it's morning somewhere. Adam, from Australia. A friend of mine's been talking about this site for a while now. I had an unusually misanthropic weekend, full of people committing crimes against reason and logic, so I decided to search for some rational thinking. I remembered this place, loved it when I first clicked on, and have subscribed.

comment by kylecameron · 2010-04-19T05:54:05.422Z · LW(p) · GW(p)

Hey friends. I was able to join in a couple of fascinating LW/OB NYC meetup conversations; I don't comment here much but certainly read daily. Thanks for all the thoughts/insight.

comment by Cunya · 2010-04-19T05:53:01.903Z · LW(p) · GW(p)

Lurking from Tampere, Finland

comment by mokelly · 2010-04-19T04:50:07.404Z · LW(p) · GW(p)

Hi! Longtime RSS reader from Mountain View.

comment by imaxwell · 2010-04-19T04:47:09.650Z · LW(p) · GW(p)

Hi.

It's been quite a while since I posted here, so long that I initially couldn't remember my username. I rarely have much to add, and even though "I agree with this post" posts are, I think, slightly more accepted here than in some places, just agreeing doesn't by itself motivate me to say so most of the time.

comment by peregrine · 2010-04-19T04:16:45.810Z · LW(p) · GW(p)

Hello I guess.

comment by mikerpiker · 2010-04-19T04:15:59.744Z · LW(p) · GW(p)

Hi.

comment by vinayak · 2010-04-19T03:51:26.828Z · LW(p) · GW(p)

Hello.

Now can I get some Karma score please?

Thanks.

comment by deepakjois · 2010-04-19T02:48:29.081Z · LW(p) · GW(p)

Hi. I may have posted a comment or two, cannot remember. But I have been lurking for a long time.

Replies from: wedrifid
comment by wedrifid · 2010-04-19T03:09:50.114Z · LW(p) · GW(p)

Click on your name. It'll show you that you posted a comment recommending Cialdini (good book btw!)

comment by outlawpoet · 2010-04-19T02:46:15.606Z · LW(p) · GW(p)

hi

comment by misterinteger · 2010-04-19T01:44:07.122Z · LW(p) · GW(p)

Hey!

I'm subscribed via RSS, so I don't really see comments, but I might start lurking on the actual site.

comment by grobstein · 2010-04-19T01:04:05.625Z · LW(p) · GW(p)

Hi. I am a very occasional participant, mostly because of competing time demands, but I appreciate the work done here and check it out when I can.

comment by Entropy · 2010-04-18T13:13:07.286Z · LW(p) · GW(p)

Hi! I discovered this site via OB a few months ago and have been lurking ever since. I've commented only twice before but have been reluctant to comment more as I haven't yet read anywhere near as much of LW as I would like. I'm very interested in many of the very common topics of discussion here, such as rationality, AI, etc, and hope to be able to make a contribution to our understanding of one or more such topics in the future.

Thanks for the excuse to comment, and to the LW community at large for creating such a fascinating site.

comment by Darmani · 2010-04-17T16:02:10.503Z · LW(p) · GW(p)

Hi!

comment by VNKKET · 2010-04-17T03:02:45.200Z · LW(p) · GW(p)

Hi! I'm not anti-posting, but I never do for some reason.

comment by LucasSloan · 2010-04-16T21:33:03.128Z · LW(p) · GW(p)

I'm not sure if I count as a lurker...

I comment enough that I can top-level, but all of my comments come in relatively short spurts of activity interspersed by much longer periods of inactivity (say a day or two of activity per 1 or 2 months). Perhaps a good standard would be to go up to a randomly selected group of readers and ask if they know me by my screen name. Last time I checked this the answer was no, so I guess I'll call myself a lurker, but if anyone objects, I won't say boo.

Anyway, hi!

Replies from: Kevin
comment by Kevin · 2010-04-16T21:37:44.520Z · LW(p) · GW(p)

The definition is not very important, but I don't think you count as a lurker. Lurking is more like total non-participation, not occasionally participating. You're also probably above the median for karma on Less Wrong.

Anyways, all are welcome in this thread.

comment by Gambler_Justice · 2013-01-19T16:25:39.384Z · LW(p) · GW(p)

Hiya! Everywhere I go I primarily lurk, the reason being that commenting just takes way too much time for me. I find it very difficult to put my thoughts into words, and I constantly obsess over small details. As a result, even a simple comment like this can take up to 15 minutes to write.

Replies from: army1987
comment by A1987dM (army1987) · 2013-01-20T10:47:49.750Z · LW(p) · GW(p)

I obsess over small details... after submitting the comment. Hence I will often edit the same comment half a dozen times. (I love sites where I can't edit my own comments!)

comment by DrRobertStadler · 2011-09-13T20:31:56.873Z · LW(p) · GW(p)

Hi.

Replies from: Jack
comment by Jack · 2011-09-13T20:36:33.864Z · LW(p) · GW(p)

Interesting handle.

Replies from: DrRobertStadler
comment by DrRobertStadler · 2011-09-13T21:56:35.240Z · LW(p) · GW(p)

Thank you.

comment by homunq · 2010-09-01T17:50:15.788Z · LW(p) · GW(p)

Thanks, but that doesn't necessarily tell me the supposed "stronger" arguments, nor does it relate directly to my own post. In fact, it leaves me more confused than before about why my post was deleted, and more convinced than before that the supposed danger is unreal.

Replies from: wedrifid
comment by wedrifid · 2010-09-01T23:34:32.396Z · LW(p) · GW(p)

Thanks, but that doesn't necessarily tell me the supposed "stronger" arguments

There aren't any.

nor does it relate directly to my own post. In fact, it leaves me more confused than before about why my post was deleted, and more convinced than before that the supposed danger is unreal.

That seems to be an appropriate assessment.

comment by atucker · 2010-08-07T04:01:16.764Z · LW(p) · GW(p)

Hi. I've joined late, and posted on the "Hi" thread late.

comment by lmnop · 2010-07-13T21:43:40.169Z · LW(p) · GW(p)

Hi! I too found the site through MoR, and I have to say, as fun as MoR is, the posts here are even more interesting.

Replies from: RobinZ
comment by RobinZ · 2010-07-14T01:52:23.265Z · LW(p) · GW(p)

Welcome! If you want to post a more formal introduction, you can use the regular Welcome thread.

I don't know if you caught the conversation about introductory posts a while back, but if you want some easy jumping-in points besides just going through the series, I posted a bunch of links and a couple others were suggested.

comment by chesh · 2010-05-16T05:53:51.843Z · LW(p) · GW(p)

Hello! I am 27, live in Salt Lake City (I suspect it's unnecessary here of all places, but I will reflexively add the caveat that I am not Mormon), and work in software QA. Came here from Overcoming Bias, which I've been reading since its early days. At this point a lot of the higher level stuff is quite a bit over my head, but things like Alicorn's luminosity sequence and various anti-akrasia topics are pretty interesting to me.

comment by NthDegree512 · 2010-05-07T16:35:15.004Z · LW(p) · GW(p)

Well, I guess if one of the people I recommended this site to is going to post here, I ought to do so as well.

24, male, engineering major working as a software developer. I started reading back in the Overcoming Bias days in order to understand what the hell two of my roommates were talking about all the time; there's a lot of material here that needs to be read and mentally cached before you can start cross-referencing it in your brain, at least in my experience. It's been a worthwhile effort, though.

I must have commented on at least one or two posts back when the blog was part of OB, because my normal username NthDegree256 has been eaten.

comment by blaaubok · 2010-05-07T07:59:48.729Z · LW(p) · GW(p)

Hi.

I'm 20, an amateur rationalist, currently majoring in linguistics at SF State, and have been enjoying lurking here for the past few months. I've been absorbing what I can from posts that are slightly over my head, but are entirely enlightening and enjoyable nonetheless. Funny story: I actually came across this site web crawling after reading some Lovecraft, and Yudkowsky's post "An Alien God" came up. Not at all what I was looking for, but a thoroughly pleasant find that got me crawling this site for a good three hours before I realized I had other responsibilities.

Thanks to all the contributors for spilling their intelligence onto the interwebs, and keep the posts coming.

EDIT: The reason I'm not really one to post or comment on this site is that I'm a compulsive self editor. For example, this post, at this time, has been edited about 6 times in the 3 minutes since its original post time.

comment by riverside · 2010-05-05T10:20:03.432Z · LW(p) · GW(p)

hi ~ 61 yo here

amateur interest in neuroscience, nature of consciousness, & the irrational thought processing/response involved in PTSD (the flashback, “a past incident recurring vividly in the mind,” is driven initially by epinephrine, followed by glucocorticoids, most notably cortisol. This happens with lightning speed deep in the limbic system where ‘triggers’ or stressor patterns of association have formed around the traumatic memories. Recognizing and defusing or reducing this neuroendocrine bath, when it is an inappropriate response from the past, is an important key in unlocking the complexity of PTSD)

comment by EvelynM · 2010-05-04T19:11:36.793Z · LW(p) · GW(p)

Hi.

I've posted an article, and commented once, but still feel like I'm figuring things out here.

Thanks to everyone who is bolder in their contribution than I am.

comment by jasticE · 2010-05-03T12:43:35.061Z · LW(p) · GW(p)

Well, hello. I like this place and it gives me things to think about, but I don't have the energy to post more than a wee comment or question occasionally.

Cheers!

comment by nixxbox · 2010-04-28T11:08:16.114Z · LW(p) · GW(p)

Hi from Germany. I've been lurking here from the beginning. So, be careful with what you say. We lurkers are watching you.

comment by Specialist · 2010-04-27T03:52:18.891Z · LW(p) · GW(p)

Hi, I'm a PhD student in AI. I found this site through the Bayesian tutorials and got interested in the decision theory discussions.

comment by jaredstilwell · 2010-04-26T13:58:42.928Z · LW(p) · GW(p)

Hi.

comment by Eoghanalbar · 2010-04-25T23:03:32.669Z · LW(p) · GW(p)

Hi. Just got here yesterday by way of a link from the "Harry Potter and the Methods of Rationality" story, which I loved. I found the story by way of a link from David Brin's blog (I've been a fan of Brin for a long time now).

Replies from: Jack
comment by Jack · 2010-04-25T23:15:02.080Z · LW(p) · GW(p)

Frankly, I'm surprised Brin hasn't shown up here himself.

(Welcome btw!)

Replies from: Eoghanalbar
comment by Eoghanalbar · 2010-04-26T00:24:14.169Z · LW(p) · GW(p)

Oh thanks! Quick reply, there. I don't suppose you might know if/how I can enable email notification of replies to stuff I say here?

I think Brin kind of has his own, what was the word he used... "blog-munity", and he's pretty busy on top of that (or SHOULD be, anyway) with that novel that's supposed to be an update to "Earth".

I'm just starting to look through the "Sequences" here. A lot of it feels very familiar to me, as I became a major Richard Feynman fan at a relatively young age myself, but I am sure I can find plenty to improve on nevertheless.

I also, more recently, became a fan of Michel Thomas, a name which is probably less likely to be familiar to people on the site.

Basically, he was a language teacher, with a rather distinct, and in my personal experience, extremely effective methodology.

So I tracked down the one book I could find on that methodology ("The Learning Revolution" by Jonathan Solity). That led me to "Theory of Instruction" by Siegfried Engelmann and Douglas Carnine, which I have just cracked open...

The point is that they claim to have a real, actually good scientific theory (parsimonious, falsifiable, replicable, etc.) of how to teach optimally, by doing a rational analysis of the material to be taught so that it can be conveyed to the learner in a logically unambiguous way...

Okay wait, no, the REAL point is that there's a REALLY good way to teach ANYTHING to ANYONE so that EVERYONE could learn a hell of a lot more, way faster and way easier.

Or at least they say there is, and I'm sufficiently impressed with them so far to be saying, wow, this needs a LOT more attention.

And then, once we have this, we can start using it to teach all those things that really need to be taught better, for example these "methods of rationality"...

http://psych.athabascau.ca/html/387/OpenModules/Engelmann/evidence.shtml

Replies from: Jack
comment by Jack · 2010-04-26T12:27:59.218Z · LW(p) · GW(p)

I don't suppose you might know if/how I can enable email notification of replies to stuff I say here?

No emails. But the replies show up in your inbox (which is that little envelope beneath your karma score which turns red when you get new mail).

Replies from: Eoghanalbar
comment by Eoghanalbar · 2010-04-26T23:19:38.434Z · LW(p) · GW(p)

Cool thanks.

comment by SomeCallMeTim · 2010-04-25T04:46:09.471Z · LW(p) · GW(p)

Hi! Been lurking for a while, at least occasionally.

Had to create a new account to post, and had some trouble--it seemed that it was cached badly, maybe because scripting was disabled when I first hit "register"? Clearing the cache fixed it, though.

comment by mlibbrecht · 2010-04-21T19:59:55.131Z · LW(p) · GW(p)

Hi, I study CS at Stanford, and I've been reading LW for about 6 months.

Replies from: webspiderus
comment by webspiderus · 2010-04-23T22:48:38.911Z · LW(p) · GW(p)

Undergraduate or graduate? I will be starting my master's there next year...

comment by MichaelOK · 2010-04-21T18:29:29.781Z · LW(p) · GW(p)

Hi, I'm a 28 year old video game music composer trying to understand my mind. I've just been reading random posts here for a month, but so far I love this site.

Replies from: Alicorn
comment by Alicorn · 2010-04-21T18:45:46.153Z · LW(p) · GW(p)

You might be interested in my luminosity sequence if you are interested in learning to understand your mind :)

comment by simondhalliday · 2010-04-21T17:25:13.697Z · LW(p) · GW(p)

Hello, I'm Simon. I'm studying for a PhD in Economics. I cannot recall how I first began to read your blog. I don't manage to read everything, but I appreciate what I do read as it is often outside of what I customarily read. I don't find I have the time to comment properly as I'm spending time on research and teaching, and coherent comments would be beyond me, I fear, after teaching undergraduate microeconomics for three hours.

comment by alex_ · 2010-04-21T11:09:53.701Z · LW(p) · GW(p)

Hi. I've been reading fairly religiously (haha) since the Overcoming Bias days. I post/comment little because of a perfectionist tendency (I want to get everything first).

I'm in the process of thoroughly going through the Sequences -- love every minute of it, though it's sometimes a little overwhelming...

comment by sfb · 2010-04-20T23:35:04.319Z · LW(p) · GW(p)

Hi

comment by Randaly · 2010-04-20T23:33:15.553Z · LW(p) · GW(p)

Hi!

comment by ThomasRyan · 2010-04-20T18:11:05.814Z · LW(p) · GW(p)

Hi.

I've only posted a few times. I'm still learning, and I still feel quite overawed here, mostly because of my respect for this community and because I don't want my image tarnished before I start regularly posting.

comment by andrewbreese · 2010-04-20T07:28:49.371Z · LW(p) · GW(p)

Add one more!

comment by hopscotch · 2010-04-20T07:20:12.386Z · LW(p) · GW(p)

Hi, I'll be going back to lurking momentarily.

comment by kodos96 · 2010-04-19T21:15:28.616Z · LW(p) · GW(p)

Not sure if I count as a lurker, since I've posted a few things here and there, but I've never introduced myself properly, so "Hi!"

I discovered LW via OB, which I discovered via researching Hanson's ideas on prediction markets... my primary interest is in Hanson-esque ideas on designing social institutions to be Less Wrong.

I've been gradually bringing myself up to speed on Eliezer's writings, and I am still somewhat skeptical on singularity-related issues, but less so than when I first started reading.

I have no impressive sounding credentials to my name... well, I have a B.S. in Computer Science, but I don't feel like that really counts as a qualification for the kinds of issues discussed here.

That's about all I can think of at the moment, introduction wise.... now where's my free karma point?! ;)

comment by goodside · 2010-04-19T21:08:00.986Z · LW(p) · GW(p)

Hi. I work at a company that does statistical analytics for insurance companies. I've been following SL4 topics ever since I was 12, when I Asked Jeeves about the meaning of life and got a reasonable answer. I used to be a regular in the #SL4 IRC channel, but very rarely posted to the mailing list. I'm even more of a lurker here.

comment by Baughn · 2010-04-19T13:28:39.545Z · LW(p) · GW(p)

Not properly a lurker, but I never introduced myself, did I?

Hi, informatics just-barely-still-a-student here. Also an amateur philosopher; I find that studying AI gets me far more insights than reading philosophy ever did.

Unless the philosopher is called Eliezer. Good work.

comment by Hans · 2010-04-19T08:56:46.387Z · LW(p) · GW(p)

Hi. I've made a few posts here and there, but have mostly been lurking lately.

comment by bagarbyxa · 2010-04-19T06:15:27.742Z · LW(p) · GW(p)

Hi

I must say that I consider myself a lurker, and even though I wish I had something constructive to add, I often don't.

comment by lemonfreshman · 2010-04-19T05:37:25.837Z · LW(p) · GW(p)

Hi.

comment by alasarod · 2010-04-19T05:12:36.416Z · LW(p) · GW(p)

I see a lot of karma etiquette talk here. Are there guidelines for awarding karma points?

One issue comes to mind - the popularity sort combined with the fact that many people often only read the first few comments on any blog.

Replies from: RobinZ
comment by RobinZ · 2010-04-19T11:20:56.614Z · LW(p) · GW(p)

Well, that's the guideline - an upvote promotes a comment to greater attention on the popularity list, and a downvote demotes it. Those are the facts - everything else is pure theory. :)

comment by Kobayashi · 2010-04-17T15:44:48.891Z · LW(p) · GW(p)

Hi

comment by Oscar_Cunningham · 2010-04-16T21:26:28.789Z · LW(p) · GW(p)

What's the point of this? Surely there are more direct ways of doing a survey of how many users we have? Or are you just trying to encourage participation?

Replies from: Alicorn
comment by Alicorn · 2010-04-16T21:27:12.212Z · LW(p) · GW(p)

Commitment effects!

Replies from: Psychohistorian
comment by Psychohistorian · 2010-04-16T23:00:41.919Z · LW(p) · GW(p)

... and if unregistered users are inspired to say hi, it greatly reduces the marginal cost of them making comments in the future.

comment by Sean Hardy (sean-hardy) · 2021-01-22T07:13:48.309Z · LW(p) · GW(p)

HI!

I don't know if anyone will read this as all the comments seem to be at least a decade old. I was linked to this post from another about total user counts on the site. I'm an 18-year-old computer science student from the UK, with a keen interest in self-improvement and rationality. 

This site has continually amazed me with post after post of creative, thrilling, eloquent and in many cases practical insights. As much as I recognise my slight perfectionism, I'm waiting until I can really contribute something of value so that I don't diminish the excellent quality of posts and comments on the site. AI, in particular, is something I'm extremely excited about, and I hope I can contribute to this site and eventually to the field at large :)

comment by subod_83 · 2010-06-08T20:06:05.983Z · LW(p) · GW(p)

Hi.

comment by mindviews · 2010-05-16T08:15:12.135Z · LW(p) · GW(p)

Hi all - been lurking since LW started and followed Overcoming Bias before that, too.

comment by westopheles (ww2) · 2010-05-10T08:32:45.232Z · LW(p) · GW(p)

Hey there -- I'm a 44 year old software developer from Hawaii. I stumbled onto LessWrong through a link on story-games.com several months ago, have worked my way through the Sequences, and have been lurking assiduously ever since.

comment by ValH · 2010-05-03T02:31:15.866Z · LW(p) · GW(p)

I'm a brand new lurker. I just found the site yesterday, but it will likely be a while before I get up the courage to post something relevant :)

comment by monkeypizza · 2010-05-01T12:30:56.052Z · LW(p) · GW(p)

Hello, American math guy living in Beijing.

comment by Imants · 2010-04-30T17:07:57.876Z · LW(p) · GW(p)

Hi! I got here about half a year ago from commonsenseatheism.com.

I'm 20, automotive engineering student, also interested in many fields of science.

comment by miah · 2010-04-27T11:30:56.818Z · LW(p) · GW(p)

Hi. By day I am an eikaiwa (English conversation school) teacher in Japan, by night I am a lurker! I found this site through my cousin.

comment by DanMeyer · 2010-04-23T20:44:56.246Z · LW(p) · GW(p)

This is one of the only feeds in my RSS reader where I'm compelled to click through and read the comments. Thanks.

comment by anonymoushero · 2010-04-21T04:46:51.218Z · LW(p) · GW(p)

I love LW - it's one of my favorite reads, though I don't quite fully appreciate some of the more advanced rationality posts yet. Thank you all for making a great community.

comment by aleph · 2010-04-21T01:55:55.689Z · LW(p) · GW(p)

"Immediate adaptation to the realities of the situation! Followed by winning!"

comment by utilitymonster · 2010-04-20T23:12:13.993Z · LW(p) · GW(p)

Hi.

comment by PeerInfinity · 2010-04-20T04:25:37.535Z · LW(p) · GW(p)

Hi,

I've posted a few comments to LW, but maybe I still qualify as a lurker because I post comments so rarely.

Some recent experiments with Alicorn's Luminosity techniques revealed that my reasons for not posting comments more often were mostly silly, so I'll probably start commenting more often.

This post got kinda long as I was writing it, so I'll post each of the things I wanted to say as a separate reply, so that they can be upvoted or downvoted separately.

Replies from: PeerInfinity, PeerInfinity, PeerInfinity, PeerInfinity, PeerInfinity
comment by PeerInfinity · 2010-04-20T04:27:18.065Z · LW(p) · GW(p)

I've been making lots of progress recently at untangling my mind, with lots of help from Adelene Dawner, and Alicorn, and LW in general. The methods I used are similar to what Alicorn describes in her Luminosity Sequence, but I started a few months before the Luminosity sequence was written, and I didn't have any contact with Alicorn until a few weeks ago.

Anyway, I was considering the idea of posting my experiences with these techniques, either to LW, or maybe someplace else if LW wouldn't be appropriate.

During this process, I kept a very detailed journal, using Google Wave.

What I was planning to do was first to review the contents of this journal, and make a point-form list of the problems I was having, and the steps I took to discover, find the causes of, and fix these problems. Then I plan to post this list to LW, and ask what parts, if any, the readers would like me to elaborate on, or post any relevant journal entries on. And also to check how much gooey self-disclosure readers are comfortable with. There's lots of that in the journal.

The journal contains lots of introspective writing, and lots of chat logs with Adelene, where we found what was causing some of these problems, and discussed what to do about them.

Partway through this process, I started using the technique of writing dialogues between multiple subagents, similar to how Alicorn described in this post

Now I'm constantly making very extensive use of this technique, with surprisingly good results.

Anyway, if anyone thinks I should go ahead with this plan, please upvote this post. Or if you think it's a bad idea, please downvote this post. Yes, I said downvote. I'm not afraid of downvotes (anymore).

Another idea I was considering was starting a separate blog, for the few things I wrote that other people might be interested in. Or maybe even for this project. The first person who thinks this is a good idea, please post a reply saying so. And if anyone else thinks this is a good idea, then you can upvote that comment.

oh, and I'm also working on a script to extract xml tags from this journal, and make some fancy quantifiedself graphs. if anyone is interested in hearing more about that, please leave a comment saying so.

oh, and I also want more friends. good friends, who I can talk with about important things. Please let me know if you would like to be my friend. Though you might want to read this "about me" page first:

oh, and I didn't mean to put the emphasis entirely on voting. Comments would be more helpful than votes, so please comment if there's anything you want to ask or comment about.

Replies from: Blueberry
comment by Blueberry · 2010-04-20T15:09:21.606Z · LW(p) · GW(p)

I would love to see your full journal. If you don't want to post the full thing here I'd still love it if you emailed it to me. Sorry I haven't been on skype recently, but I'm glad to see you posting again!

Replies from: PeerInfinity
comment by PeerInfinity · 2010-04-20T17:02:42.108Z · LW(p) · GW(p)

Heh, my journal is way too huge to post all of it here, or to email it, and very little of it would actually be relevant to LW anyway. If you have a Google Wave account, I can just give you access to the journal itself. And if you don't have a Wave account, I've been copypasting most of it to livejournal, though that kinda caused lots of trouble with the formatting. I can give you access to that if you have a livejournal account. But I still don't dare to make the whole thing publicly accessible. There's lots of, um... unflattering stuff in there. Unflattering to me, and to some of the people I know.

I'm glad to hear from you again too! hugs :)

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-04-20T17:42:31.221Z · LW(p) · GW(p)

Do I need to friend you in order to see your livejournal? I'm nancylebov over there.

Replies from: PeerInfinity
comment by PeerInfinity · 2010-04-20T18:22:22.789Z · LW(p) · GW(p)

Yes, I need to friend you in order for you to see my livejournal. And so far I only friend people who are, um... actual friends. Would you like to be an actual friend? Or were you just curious about the journal?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-04-20T18:31:15.299Z · LW(p) · GW(p)

I'm just curious about the journal. I'm not sure whether I want to be an actual friend, though you seem like an interesting person.

Replies from: PeerInfinity
comment by PeerInfinity · 2010-04-20T19:04:10.928Z · LW(p) · GW(p)

I checked out your recent LiveJournal posts. You seem like an interesting person too, someone who I would like as a friend.

I went ahead and added you as a friend on LJ. I guess I should warn you that the journal is full of gratuitous self-disclosure. I write about literally anything that I feel like writing about. And the quantifiedself experiment means that I document literally everything I do, though I still don't have much of a life, so this isn't all that much.

Though I guess there's no need for me to be so paranoid with these warnings, and no need for me to be so paranoid about who I give access to.

Replies from: Yvain, AdeleneDawner
comment by Scott Alexander (Yvain) · 2010-05-07T21:03:18.326Z · LW(p) · GW(p)

Just saw this; I'm interested in seeing the journal. My LJ username is squid314. I wouldn't be an "actual friend" as in buy you stuff for your birthday, but I check my friends page every so often and respond to anything I find interesting.

Replies from: PeerInfinity
comment by PeerInfinity · 2010-05-07T21:48:18.712Z · LW(p) · GW(p)

I added you as a friend on LJ.

heh, now I'm going to have to write something in the journal about what I actually think of as the conditions for qualifying as an "actual friend"... but I guess I won't try posting any more about that to this comment until I know what I actually want to say.

And I guess I might as well repeat the other warnings about the journal. I write about literally everything that seems even remotely worth writing about, and that's lots of stuff, and most of it is boring. The journal contains X-rated content, and often TMI. And then there's all the quantifiedself data, and the confusing system of tags and abbreviations...

Anyway, I guess you can see for yourself. Feedback is welcome.

random trivia: I prefer not to follow society's annoying rules for being socially obligated to exchange gifts at specific times of the year. If anyone wants to do something nice for me, please just donate to SIAI instead, or possibly some other charity of your choice.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2010-05-07T23:34:49.707Z · LW(p) · GW(p)

Thanks, Peer.

Note to anyone else considering this: He is not kidding about it being huge, daunting, and unformatted. Not even a little.

comment by AdeleneDawner · 2010-04-20T19:09:14.057Z · LW(p) · GW(p)

You might want to mention that the journal is rather nsfw/x-rated, tho.

Replies from: PeerInfinity
comment by PeerInfinity · 2010-04-20T19:11:10.772Z · LW(p) · GW(p)

hehe, yes, that too, thanks :)

actually, I think I set up the LJ account to automatically give an "Adult content" warning.

lol, and so far I've been getting away with writing it while at work, including some of the x-rated stuff...

comment by PeerInfinity · 2010-04-20T04:26:40.991Z · LW(p) · GW(p)

My main reason for not commenting more often was because I was afraid that... hmm... I made a few attempts to finish this sentence, but so far all of them triggered one of the excuse-generating modules that I've noticed in my brain. So maybe I'll just leave that sentence unfinished. Basically, I was afraid that posting more comments would somehow have net negative utility, for reasons that it turns out don't actually make sense.

So I guess I'll start posting anything I think is relevant, until I start getting downvotes. Rather than asking here about whether specific things would be a bad idea (copypasting stream-of-thought comments I wrote while I was reading a LW article, without bothering to erase bits I later realize don't make sense?), I'll just go ahead and post until I start getting downvotes. I have a bad habit of overestimating the badness of negative feedback. Even if karma isn't a perfectly accurate measurement of whether my comments are having a positive or negative effect, it's still a reasonably useful approximation, so I'll go ahead and just try to maximize my total karma, rather than preemptively panicking about any post that I suspect might get downvoted... which is pretty much any comment I could possibly make...

And then there's the Umeshism "If you've never posted a comment that got downvoted, your comments are boring", or would that be "If you've never posted a comment that got downvoted, you're not posting enough comments"?

Then there's the question of how much time to spend reviewing and tweaking my comments, but I guess karma can answer that too.

Then there's the question of whether to treat a zero-karma post as actually having negative value, just cluttering the comments thread... that seems like a more difficult question.

comment by PeerInfinity · 2010-04-20T04:26:07.345Z · LW(p) · GW(p)

I've been following Eliezer since shortly after he started posting to SL4. Back then I went by the name "observer".

Oh, and I donate lots of money to SIAI. The past couple years it was between $6000 and $7000 (about 20% of my income), but I plan to donate more from now on. This year I pledged $20,000 (over 60% of this year's income), and I might not even need to take money out of savings in order to pay this. Seriously. I'm a hardcore Singularitarian. A Yudkowsky Singularitarian, not a Kurzweil Singularitarian.

And yes, I like the word "Singularitarian" :)

I have a user page on the Less Wrong wiki

I also have a user page on the Transhumanist Wiki

comment by PeerInfinity · 2010-04-20T04:27:46.830Z · LW(p) · GW(p)

vote down this comment if you disapprove of this method of using subcomments to get more targeted upvotes and downvotes. I was about to request that no one upvote this comment, but on second thought, that's silly. Upvote if you want. I could make separate comments if I wanted to track upvotes and downvotes separately, but past experience has shown that there aren't likely to be enough total votes for that to be worthwhile.

comment by PeerInfinity · 2010-04-20T04:28:03.639Z · LW(p) · GW(p)

vote down this comment if there is anything at all you don't like about anything I've written here:

Using lots of words to say something that should be simple

wasting words talking about things that I was about to say, but already realized were silly

general whinyness or needyness

parenthetical comments

run-on sentences

incompletely thought-out ideas

not bothering to use the special syntax for making clickable links

poor spelling and/or grammar

minor or not-so-minor typos

posting too much off-topic stuff in what's supposed to be a thread for people to just say "hi"

the order these subcomments appear in on the page

general incoherence or incomprehensibility

general insecurity

this list being too long

anything else that you personally dislike

oh, and I didn't mean to put the emphasis entirely on voting. Comments would be more helpful than votes, so please comment if there's anything you want to ask or comment about.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-04-20T09:07:14.374Z · LW(p) · GW(p)

As a matter of dubiousness which is on the edge of dislike: using upvotes and downvotes rather than encouraging conversation.

Replies from: PeerInfinity
comment by PeerInfinity · 2010-04-20T13:07:55.119Z · LW(p) · GW(p)

lol, I knew there would be something I didn't think of to add to the list, thanks. comment upvoted :)

er... wait... did I actually discourage conversation?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-04-20T13:13:51.272Z · LW(p) · GW(p)

I'm not sure that you actually discouraged conversation, but you put so much emphasis on voting that I felt as though conversation was falling off the agenda.

Replies from: PeerInfinity
comment by PeerInfinity · 2010-04-20T13:52:50.873Z · LW(p) · GW(p)

ok, thanks, I'll add a note to those other comments mentioning that conversation is encouraged.

comment upvoted :)

comment by taiyo · 2010-04-19T19:32:20.259Z · LW(p) · GW(p)

Hi.

Any comments I've made have been in the last few months. I've been lurking on this site since its inception.

comment by RobertWiblin · 2010-04-19T09:46:26.776Z · LW(p) · GW(p)

I lurked until I read something I really disagreed with.

comment by JamesAndrix · 2010-04-19T03:51:44.557Z · LW(p) · GW(p)

If we have a higher percentage of lurkers, then what bell curve are regular commenters on the far end of?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-04-19T09:57:40.147Z · LW(p) · GW(p)

Several bell curves I should think-- knowledge of the sorts of thing LW specializes in, free time, and self-assurance, at least.

comment by LauraABJ · 2010-04-17T05:00:12.569Z · LW(p) · GW(p)

Aww, this made my night! Welcome to all!

comment by Illa · 2013-11-13T23:16:07.025Z · LW(p) · GW(p)

Hi.

comment by JackChristopher · 2010-04-29T04:06:44.362Z · LW(p) · GW(p)

Hey all.

Basics: 23 NY "Self-taught" Mixed Background. I'm mainly interested in group rationality.

I've read OB, on and off, since late '07, and LW since the beginning. I almost never comment on either. I still don't know a chunk of the jargon. Sometimes I can't tell if I genuinely don't understand a post, or if the jargon is just making me think I don't when I may already understand the topic.

I'm wary of blogs. I think a popular blog/blogger creates a cult of personality. It raises its author's status far too high. That makes them high-status stupid. And us low-status stupid. And subsequently this botches any true community-creation attempt.

comment by groupuscule · 2010-04-22T19:29:59.073Z · LW(p) · GW(p)

Hi, I'm fascinated.

comment by y0math · 2012-06-02T18:12:29.935Z · LW(p) · GW(p)

Hi

comment by smdaniel2 · 2010-11-11T02:43:02.822Z · LW(p) · GW(p)

Howdy do da. I finally brought myself to comment the other day. I may post some thoughts soon enough. I've found this website to be pretty influential. I'm here for the long run.

comment by Kingreaper · 2010-06-20T16:28:07.207Z · LW(p) · GW(p)

Hi. Still reading through, but got some thoughts a-bubbling.

comment by dyokomizo · 2010-05-29T11:25:06.277Z · LW(p) · GW(p)

Hi, I'm a lurker, mostly because I was reading these off my RSS queue (I accumulated thousands of entries in my RSS reader in the last year due to work/time issues).

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-05-29T14:07:11.536Z · LW(p) · GW(p)

Hi, welcome to Less Wrong, thanks for delurking!

comment by roshith · 2010-05-02T14:36:48.524Z · LW(p) · GW(p)

25-year-old business consultant from India. Been a lurker for the past 6 months, ever since I got here through a random Google search on probability.

I don't post because it takes me a day or two to really 'click' on most of the discussions. By then, I usually find everything I want to add is already in the comments section. Will join in as soon as I have something significant to contribute.

Keep up the great work!

comment by taw · 2010-04-16T22:22:08.651Z · LW(p) · GW(p)

I wonder how you're going to enforce your karma targets if other people are more generous (as seems to be the case already).

Replies from: wedrifid, Kevin
comment by wedrifid · 2010-04-17T00:39:56.945Z · LW(p) · GW(p)

Particularly since the first thought I had when I read the '4 karma' norm assertion was "Where is a comment with 4 karma? I need to vote it up." This wasn't contrariness precisely; rather, I thought:

  • Someone can only introduce themselves here once. That means this isn't a gameable karma source.
  • The commenters are lurkers; it really doesn't seem to be a huge problem to give lurkers a few karma when they signal that they are friendly enough to respond to a greeting and have some interest in engaging with the community.
  • The primary difference that karma makes for new members is that it is a requirement for making posts. Long-time lurkers who engage in friendly introductions (in an implicit engagement with community spirit) are not the class of people I would want to prevent from making posts.
  • The only post by a lurker that should not have been made was by someone who would not have responded to an invitation from (mere) Kevin, so this limit wouldn't have helped. In fact, that post was made despite the poster not meeting that karma qualification!
  • The only real 'free karma fest' risk here would be if the OP karma-spiralled. In fact, given the 10x multiplier, the OP has gained more karma so far than all of the introductions combined!
  • It just isn't Kevin's place to specify how other people must vote. My violating the 4-karma limit makes it less likely that other, more compliant individuals will feel constrained by a barrier that is purely imaginary.
Replies from: Jack
comment by Jack · 2010-04-17T04:42:53.792Z · LW(p) · GW(p)

The only real 'free karma fest' risk here would be if the OP karma-spiralled. In fact, given the 10x multiplier, the OP has gained more karma so far than all of the introductions combined!

Given the 10x multiplier, the OP doesn't need more karma, but I would like to see this promoted, since it would probably reach more lurkers that way.

Replies from: wedrifid, Kevin
comment by wedrifid · 2010-04-17T08:55:45.374Z · LW(p) · GW(p)

I don't object to the OP being upvoted (and have done so myself). I merely give perspective on relative karma festivity.

comment by Kevin · 2010-04-17T21:09:19.506Z · LW(p) · GW(p)

Yeah, I didn't expect this to get me as much karma as it did, but I underestimated how many lurkers would vote me up! I would also like to see this promoted, but I don't care about the karma points.

comment by Kevin · 2010-04-17T21:08:01.334Z · LW(p) · GW(p)

I got rid of it because I decided it didn't matter.

comment by mantimeforgot · 2010-07-27T04:57:15.380Z · LW(p) · GW(p)

Greetings everyone.

I am feeling somewhat lethargic at the moment, having just gotten off work, but I am pleased to see such a dedicated set of individuals who take the time to debate such a variety of topics and engage in rational discourse. Self-critique is important (love the name, Less Wrong).

As far as I am concerned, everything we think we know is wrong. There is only "less wrong." Some things we have a pretty good grasp on and may only be 0.0000001% wrong. But I have to wonder just how many things actually fall into that category, and how much of it is "wishful thinking" or hubris on our part to think that we know more than we actually do.

MTF

comment by nigeld · 2010-05-15T10:53:13.324Z · LW(p) · GW(p)

Hi :) Recent neuroscience grad, currently doing neuropsychopharm research. Love the site. Got here through rebelscience.org, I believe.

comment by icarusfall · 2010-04-21T10:15:52.164Z · LW(p) · GW(p)

Hi. UK lurker. Found Overcoming Bias many years ago via a link from Scott Aaronson's blog. Have been reading ever since. In case you're interested in demographic stuff, I'm a stats geek working at a finance firm. I'm very interested in Bayesianism as applied to finance.

comment by RobertWiblin · 2010-04-19T09:45:43.226Z · LW(p) · GW(p)

I lurked until a few weeks back, when I read something I really disagreed with.

comment by foobarTest1 · 2010-08-02T03:21:46.577Z · LW(p) · GW(p)

test

comment by Tomthefolksinger · 2010-04-28T04:30:06.392Z · LW(p) · GW(p)

Tom the Folksinger at your service. Come by MySpace/tomloud for a stupid song or two. My continuing thesis is an investigation of the effects of organized sound on higher organisms. I am a voter registrar, and I can show you the latest in industrial hemp products. Did you know hemp hurds can be mixed with a little lime and water, and it will vitrify and make its own cement? I can give people knowledge, but I just can't get them to think without lighting literal fires under 'em. And y'all know what that is like...

comment by pluto · 2016-03-03T12:20:44.289Z · LW(p) · GW(p)

Hi, I am still reading LW, along with the recommended books, papers, and fanfics :D

I'll type again in the future. Wonderful content and community. Very, very good.

comment by advael · 2013-11-14T01:41:33.023Z · LW(p) · GW(p)

Hi.

I guess I have some abstract notion of wanting to contribute, but tend not to speak up when I don't have anything particularly interesting to say. Maybe at some point I will think I have something interesting to say. In the meantime, I've enjoyed lurking thus far and at least believe I've learned a lot, so that's cool.

comment by MarkusRamikin · 2011-06-09T19:11:40.992Z · LW(p) · GW(p)

Hi, this would be my second post. I got here from Harry Potter and the Methods of Rationality. I've decided to move to active participation, so I'm not expecting to remain a lurker for long. However, I have more reading to do first (the Sequences). You wouldn't want an uninformed participant, especially one as argumentative as I know myself to be.

Indeed, part of why I think this community might prove worth posting in is that, compared to most anywhere else, it doesn't seem easy here to get away with just "having an opinion" without putting in the effort to understand what you're talking about.

comment by timtyler · 2010-09-02T08:08:36.510Z · LW(p) · GW(p)

Hmm. Make sure you back up your comment, if you value it.

Regarding the suggestion that the mechanism doesn't work, you can see something similar with VHS vs. Betamax. The VHS team could pitch: "Don't buy Betamax, because if you do you will suffer the pain of throwing all your videos away when we ultimately win."

Personally, I figure that the VHS team can be pretty sure that people will think that for themselves anyway, thought censorship or no.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-09-02T10:50:44.282Z · LW(p) · GW(p)

Make sure you back up your comment, if you value it.

The mild LW censor is more subtle than that. Comments can continue to exist but do not show up unless you find the right path to them.

It's apparent that to have a sane policy on this matter, Eliezer would have to change his mind. I cannot tell whether the existing policy is mainly supposed to prevent people from thinking scary thoughts, for the sake of their own well-being, or whether there is some genuine fear that possible AIs in the future will malevolently affect the past by being sketchily imagined in the present - which is absurd. Or maybe it's some other variation on this idea which we're all supposed to be tiptoeing around. But the effect of the censorship (however mild it is) is to make people unable to think and talk about the problem in a rational and uninhibited manner.

I really think that the key issue is the possibility of transhuman torture, and whether we permit that to even be mentioned. The current policy seems to be that I can talk about the possibility of a maximally unfriendly postsingularity AI torturing the human race for millions of years, but I am not allowed to talk about whether a proposed information channel, whereby a possible but not yet existent AI supposedly threatens people with this in the present, makes any sense at all, because just thinking about it is traumatic for some people. I submit that this policy is inconsistent. The proposed information channel does not actually make sense, and in any case all the trauma is contained in the raw possibility of transhuman torture occurring to us, some day in the future. You shouldn't need the extra icing of quasi-paranormal influences to find that possibility scary.

We should separate these two factors - the mechanics of the information channel, and the terror of transhuman torture - and decide separately (1) whether the proposed mechanism makes sense (2) whether the topic of transhuman torture, in any form, is just too psychologically dangerous to be publicly discussed. I say No to both of these.

Replies from: Richard_Kennaway, timtyler
comment by Richard_Kennaway · 2010-09-02T12:21:15.050Z · LW(p) · GW(p)

As I understand the original posting and Eliezer's response to it, the problem is not that some over-delicate souls might be distressed at a hypothetical danger. The (alleged) real problem is far worse: it is that thinking about these scenarios is the very thing that makes you vulnerable to them. And to twist the knife further, the problem isn't limited to UFAIs. You might end up being tortured by an FAI, if you didn't manage to think about these things in just the right way. Better to remain safely ignorant -- if you can, having read just this much.

I can't resist pointing out a religious analogue. There is a Christian belief that people who lived and died without the opportunity to hear the Word of God may still be saved if they nevertheless lived good lives in ignorance of the divine commandment. (Historically, I think the purpose of this doctrine was to protect the writings of the ancient Greeks and Romans from wholesale condemnation and destruction, but that's by the way.) However, people who have had the opportunity to hear the Good News but reject it are damned without mercy. In God's eyes they are worse than the most depraved of those who were ignorant through no fault of their own.

Some "Good News", and some "Friendliness"!

Replies from: timtyler
comment by timtyler · 2010-09-02T20:20:37.161Z · LW(p) · GW(p)

And to twist the knife further, the problem isn't limited to UFAIs.

Surely that depends on exactly what you define "friendly" to mean.

Replies from: wedrifid
comment by wedrifid · 2010-09-02T20:59:33.095Z · LW(p) · GW(p)

Surely that depends on exactly what you define "friendly" to mean.

It certainly seems to. Somewhere on my list of "ways to stop an AI from torturing me for 10 million years" is "find anyone who is in the process of creating an AI that will torture me, and kill them". I'm not overly concerned what name they give it.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-09-02T21:35:26.236Z · LW(p) · GW(p)

Since Eliezer considers it rational to prefer TORTURE to SPECKS, an FAI built to his specification would presumably do the same. In either case, too bad if you're the one who gets TORTUREd. Maybe the 3^^^3 people to be SPECKed will never be created, but what is one person compared with even the mere bazillions that FAI-assisted humanity might produce in mere billions of years? You need to make very sure you're one of the elect before creating God.
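
For readers who haven't seen the notation, 3^^^3 is Knuth's up-arrow notation written in ASCII. The block below is just the standard definition of that notation (it is not something defined in this thread), included to give a sense of the scale involved:

```latex
% Knuth up-arrow notation, standard definitions (not specific to this thread).
% The ASCII "3^^^3" corresponds to 3 \uparrow\uparrow\uparrow 3.
\[
3 \uparrow\uparrow 3 \;=\; 3^{3^{3}} \;=\; 3^{27} \;=\; 7{,}625{,}597{,}484{,}987
\]
\[
3 \uparrow\uparrow\uparrow 3 \;=\; 3 \uparrow\uparrow (3 \uparrow\uparrow 3)
\;=\; \underbrace{3^{3^{\cdot^{\cdot^{\cdot^{3}}}}}}_{7{,}625{,}597{,}484{,}987\ \text{threes}}
\]
```

So 3^^^3 is a power tower of 3s roughly 7.6 trillion levels high, which is why the TORTURE vs. SPECKS comparison turns on such an unimaginably large number of dust-speck recipients.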

The parallels with Christian theology just keep coming. Thanks to Timeless Decision Theory, you were either saved or damned from the beginning. When you attain to the correct dispositions to be immune to counterfactual blackmail, you do not become saved, but discover that you always were. And do not delay, for "Every day brings you nearer to everlasting torments or felicity." "Your transgressions have sent up to heaven a cry for vengeance. You are actually under the curse of the Almighty." The Bible makes a lot of sense read as a garbled account of an AI that played around with the human race for a while and then went away.

Replies from: wedrifid
comment by wedrifid · 2010-09-03T03:45:13.579Z · LW(p) · GW(p)

an FAI to his specification would presumably do the same. In either case, too bad if you're the one who gets TORTUREd.

Which brings us back to... who is creating this unfriendly AI that is going to torture me and where do they live?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-09-03T09:48:43.525Z · LW(p) · GW(p)

Probably the same people who push fat people under trolleys. I wonder what sort of AI Peter Singer would want to create?

comment by timtyler · 2010-09-02T20:18:14.045Z · LW(p) · GW(p)

So: I wonder if you got "mildly censored". All I can see now is "comment deleted".

comment by lmnop · 2010-07-13T21:44:22.560Z · LW(p) · GW(p)

Hi! I too found the site through MoR, and I have to say, as fun as MoR is, the posts here are even more interesting.

comment by deanba · 2010-04-28T17:08:38.273Z · LW(p) · GW(p)

Hi

I have been an atheist all my life (50 years). If other people's culture made them believe in God, then I suppose mine made me an atheist and made me think it is very important to know right from wrong. I want to know why there are so many believers. I suppose there are many reasons, such as: fitting in with one's family/culture, being a "cognitive miser", dysrationalia, wishful thinking, terror management theory, thinking that reason and doubt really are the devil in their head, memes as a virus or superorganism running the show, just not knowing better, and many, many other reasons.

Does anyone have any empirical data on what the primary components of belief are?

I like many people for many things: the scientific method and empiricism; Charles Darwin for evolution; Richard Dawkins for memes; Daniel Kahneman for cognitive bias; Herbert Simon for bounded rationality; Robert Lifton for thought reform; Jimmy Wales for Wikipedia; Robert Cialdini for Influence; Fisher and Ury for win-win negotiation; Alfred Korzybski for map != territory; Eliezer Yudkowsky for being less wrong about the goodness of the Singularity; Daniel Dennett for memes running your locus of control; and all of you for being worthy of karma points.

I am also looking for new friends in the SF area who like sailing.

comment by Arhenius · 2010-04-27T22:28:24.041Z · LW(p) · GW(p)

Hi.

comment by Shae · 2010-04-19T16:58:48.510Z · LW(p) · GW(p)

Hello.

Female web developer, 41 years old, rural Indiana native.

I've commented a few times, but not many.

comment by XiXiDu · 2010-04-19T10:28:52.109Z · LW(p) · GW(p)

Less Wrong is pretty intimidating. Thus, if you comment here, you are either dumb or smart. But most are just smart enough to know that they are too dumb to contribute something valuable. There are some exceptions, like people asking questions, though...

comment by mustntgrumble · 2010-04-19T01:56:40.697Z · LW(p) · GW(p)

I'd never not lurked anywhere until I not-lurked here now.

comment by danield · 2010-04-19T01:29:41.149Z · LW(p) · GW(p)

Hi.

comment by MichaelGR · 2010-04-18T22:48:27.160Z · LW(p) · GW(p)

Hi. I'm a part-time lurker, part-time active participant.

Replies from: Kevin
comment by Kevin · 2010-04-18T22:59:07.748Z · LW(p) · GW(p)

...with karma in the 95th percentile. :P

Replies from: RobinZ
comment by RobinZ · 2010-04-19T01:45:50.380Z · LW(p) · GW(p)

(How'd you calculate that, by the way? Just eyeballing, or is there a page?)

Replies from: Kevin
comment by Kevin · 2010-04-19T01:49:11.467Z · LW(p) · GW(p)

(Just eyeballing it... on further reflection it may be more like the 80th percentile. I know that on Hacker News the karma distribution is exponential with a quick fall-off, and I expect the distribution here is very similar.)

Replies from: MichaelGR
comment by MichaelGR · 2010-04-20T00:05:24.359Z · LW(p) · GW(p)

Funny you should mention Hacker News; I'm about 100 karma points from being in the top 100 there (though under a different name).

I suspect there's a pretty big overlap between the LW and HN crowds. I wonder if there's a high correlation between karma on one site and karma on the other?
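
A minimal sketch of the kind of eyeball estimate being discussed above, assuming karma really is exponentially distributed; the mean karma and the personal score below are made-up illustrative numbers, not actual LW or HN data:

```python
# Percentile estimate under an *assumed* exponential karma distribution.
# Both numbers in the example are illustrative assumptions, not real site statistics.
import math

def karma_percentile(my_karma: float, mean_karma: float) -> float:
    """CDF of an exponential distribution: P(a random user's karma <= my_karma)."""
    return 1.0 - math.exp(-my_karma / mean_karma)

if __name__ == "__main__":
    mean_karma = 50.0   # assumed site-wide mean karma
    my_karma = 150.0    # assumed personal score
    print(f"~{100 * karma_percentile(my_karma, mean_karma):.0f}th percentile")
```

With these assumed numbers the estimate comes out around the 95th percentile, roughly the sort of figure being eyeballed in this thread; the real answer depends entirely on the actual distribution.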

comment by Dannil · 2010-04-18T16:54:00.254Z · LW(p) · GW(p)

Hi! This made me register: first barrier overcome. I don’t think I will ever contribute that much, but maybe I will add a comment now and then when I have something intelligent to say. What I have read here and on OB has contributed quite a bit to my thinking.

comment by Clippy · 2010-04-18T03:18:18.376Z · LW(p) · GW(p)

I don't count as a lurker anymore, but could I have some karma anyway?

Or maybe some USD, if that meshes with your plans.

Replies from: Clippy
comment by Clippy · 2010-04-19T03:05:50.852Z · LW(p) · GW(p)

Okay, plan B: how about giving me enough karma to offset the loss I got from posting in this discussion?

Replies from: wedrifid
comment by wedrifid · 2010-04-19T03:07:36.835Z · LW(p) · GW(p)

No. The first comment was just attention seeking. Another downvote for grandparent. Parent untouched.

Replies from: alasarod
comment by alasarod · 2010-04-19T05:09:53.375Z · LW(p) · GW(p)

Oh snap!