What bothers you about Less Wrong?

post by Will_Newsome · 2011-05-19T10:23:01.201Z · LW · GW · Legacy · 162 comments

Or, what do you want to see more or less of from Less Wrong?

I'm thinking about community norms, content and topics discussed, karma voting patterns, et cetera. There are already posts and comment sections filled with long lists of proposed technical software changes/additions; let's not make this post another one.

My impression is that people sometimes make discussion posts about things that bother them, and sometimes a bunch of people will agree and sometimes a bunch of people will disagree, but most people don't care that much (or they have a life or something) and thus don't want to dedicate a post just to complaining. This post is meant to make it socially and cognitively easy to offer critique.

I humbly request that you list downsides of existing policies even when you think the upsides outweigh them, for all the obvious reasons. I also humbly request that you list a critique/gripe even if you don't want to bother explaining why you have that critique/gripe, and even in cases where you think your gripe is, ahem, "irrational". In general, I think it'd be really cool if we erred on the side of listing things which might be problems even if there's no obvious solution or no real cause for complaint except for personal distaste for the color green (for example).

I arrogantly request that we try to avoid impulsive downvoting and non-niceness for the duration of this post (and others like it). If someone wants to complain that Less Wrong is a little cultish without explaining why then downvoting them to oblivion, while admittedly kind of funny, is probably a bad idea. :)

Comments sorted by top scores.

comment by [deleted] · 2011-05-19T13:16:06.014Z · LW(p) · GW(p)

I'd prefer more posts that aim to teach something the author knows a lot about, as opposed to an insight somebody just thought of. Even something less immediately related to rationality -- I'd love, say, posts on science, or how-to posts, at the epistemic standard of LessWrong. Also I'd prefer more "Show LessWrong" project-based posts.

Replies from: atucker, Emile, badger, lukeprog
comment by atucker · 2011-05-19T15:06:50.200Z · LW(p) · GW(p)

I like how this was phrased positively, and suggested a specific way to fix it.

comment by Emile · 2011-05-19T17:14:09.608Z · LW(p) · GW(p)

Agreed; how about an open thread "what could you teach us about?" to gauge interest? A bit like this thread, but focusing on supply instead of demand. I'll post one in a few days if nobody else does first.

comment by badger · 2011-05-19T17:05:35.700Z · LW(p) · GW(p)

I wonder if an occasional discussion post where people can make requests or float ideas to gauge interest could help with this. The motivation would be greater if you knew there was an audience for your expertise.

comment by lukeprog · 2011-05-19T15:05:14.813Z · LW(p) · GW(p)

Agreed. There is lots of 'deep knowledge' in the brains of Less Wrongers, and I would love to see it shared!

comment by XiXiDu · 2011-05-19T12:00:37.027Z · LW(p) · GW(p)

Note: The following depicts my personal perception and feelings.

What bothers me is that Less Wrong isn't trying to reach the level of Timothy Gowers' Polymath Project, but at the same time acts as if it were on that level, showing no inclination to welcome lesser rationalists or less educated people who want to learn the basics.

One of the few people here who sometimes tries to actually tackle hard problems appears to be cousin_it. I haven't been able to follow many of his posts, but all of them have been very exciting and actually introduced me to novel ideas and concepts.

Currently, most of Less Wrong is just boring. Many of the recent posts are superb, clearly written and show that the author put a lot of work into them. Such posts are important and necessary. But I wouldn't call them exciting or novel.

I understand that Less Wrong does not want to intimidate most of its possible audience by getting too technical. But why not combine both worlds by creating accompanying non-technical articles that explain the issue in question and at the same time teach people the maths?

I know that some people here are working on decision theoretic problems and other technical issues related to rationality. Why don't you talk about it here on Less Wrong? You could introduce each article with a non-technical description or write an accompanying article that teaches the basics that are necessary to understand what you are trying to solve.

Replies from: cousin_it, lukeprog
comment by cousin_it · 2011-05-19T15:20:46.066Z · LW(p) · GW(p)

After seeing how highly your comment got upvoted, I just wrote an extremely hardcore post :-)

Replies from: Vladimir_Nesov, wedrifid, Dr_Manhattan
comment by Vladimir_Nesov · 2011-05-20T01:32:27.356Z · LW(p) · GW(p)

Nice, I actually planned to post this as a dependency for a post with a small list of technical improvements to ADT (such as "To avoid confusion, immediately perform any action that implies absurdity."), but didn't get around to writing it up.

comment by wedrifid · 2011-05-19T16:30:32.284Z · LW(p) · GW(p)

After seeing how highly your comment got upvoted, I just wrote an extremely hardcore post :-)

I just upvoted that post based on it having the phrase "Example decision theory problem" in the title. Now I'm going to actually read it. ;)

comment by Dr_Manhattan · 2011-05-19T15:55:54.083Z · LW(p) · GW(p)

I shall name this Karma Surfing. (not that I'm putting it down)

comment by lukeprog · 2011-05-19T15:06:42.481Z · LW(p) · GW(p)

I'd never heard of the Polymath Project. Thanks for the linky.

Replies from: wedrifid
comment by wedrifid · 2011-05-19T16:32:42.646Z · LW(p) · GW(p)

I'd never heard of the Polymath Project. Thanks for the linky.

Likewise. That kind of project is inspirational! Especially the part where it actually worked.

comment by Pavitra · 2011-05-19T13:46:46.162Z · LW(p) · GW(p)

I'd like to see less discussion about karma.

Replies from: badger, None, XiXiDu
comment by badger · 2011-05-19T16:45:05.536Z · LW(p) · GW(p)

I agree. In particular, it bothers me when people complain about downvotes, accuse others of downvoting them, or preface with "I know I'll be downvoted for this, but..."

Replies from: nazgulnarsil
comment by nazgulnarsil · 2011-05-20T12:57:44.931Z · LW(p) · GW(p)

I feel such complaints are justified. Downvotes without comments are counterproductive.

Replies from: TimFreeman, badger
comment by TimFreeman · 2011-05-20T16:44:13.511Z · LW(p) · GW(p)

There is an incentive to downvote without comment if you feel that your peers are better off if they don't see the post. If you're downvoting someone who happens to regard this as a political exercise rather than an intellectual exercise, they're likely to find an excuse to downvote you on many unrelated issues, so your karma is better off if they don't know who you are. If you comment, they will know who you are.

This incentive would go away if we had a reasonable measure of agreement, and only let votes from the 90% or 99% or so of the people closest to the consensus affect what other people see. That might require significant CPU and thinking to implement, though, so I don't know if it's worth doing.

Allowing cliques that are less than 50% might let the community fracture into halves that don't perceive each other, but if the clique size is 90% then the only consequence would be to ignore votes from the outliers, which is probably a good thing.
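
A minimal sketch of what such a filter might look like, assuming voter histories are available as maps from comment IDs to votes; the function name, data shapes, and the agreement measure here are all hypothetical, not anything the site implements:

    from collections import defaultdict

    def clique_score(vote_history, new_votes, clique=0.9):
        """Count only votes from the fraction of voters closest to consensus.

        vote_history: {voter: {comment_id: +1 or -1}} -- past votes.
        new_votes:    {voter: +1 or -1} -- votes on the comment being scored.
        clique:       fraction of current voters whose votes count.
        """
        # Consensus on each past comment: the sign of its summed votes.
        totals = defaultdict(int)
        for votes in vote_history.values():
            for comment_id, vote in votes.items():
                totals[comment_id] += vote

        # Agreement: how often a voter's past votes matched the consensus sign.
        def agreement(voter):
            votes = vote_history.get(voter, {})
            if not votes:
                return 0.5  # no history: treat as average
            matches = sum(1 for c, v in votes.items() if v * totals[c] > 0)
            return matches / len(votes)

        # Keep only the `clique` fraction of voters closest to consensus.
        ranked = sorted(new_votes, key=agreement, reverse=True)
        kept = ranked[:max(1, int(len(ranked) * clique))]
        return sum(new_votes[v] for v in kept)

With clique=0.9 only persistent outliers are ignored; pushing it down toward 0.5 is what would risk the fracture into mutually invisible halves described above.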

Replies from: HughRistik
comment by HughRistik · 2011-05-22T09:48:20.117Z · LW(p) · GW(p)

Also, there is a disincentive to downvote bad comments that you want everyone to still see.

Replies from: TimFreeman
comment by TimFreeman · 2011-05-23T03:19:37.692Z · LW(p) · GW(p)

Somebody voted the parent comment down without replying. Given the context, that may have been a strange joke. I voted it up.

In the present system, downvoting a comment causes fewer people to see it, since the system by default doesn't show you comments scoring below a user-settable threshold. I like that feature.

I can't presently imagine a plausible interpretation for downvoting that yields things I'd want to downvote but still would want my peers to look at. Can you give an example?

Replies from: syllogism
comment by syllogism · 2011-05-26T14:02:49.212Z · LW(p) · GW(p)

I can't presently imagine a plausible interpretation for downvoting that yields things I'd want to downvote but still would want my peers to look at. Can you give an example?

You post a detailed reply to a low-value comment, and want your reply seen even though you don't like the parent.

comment by badger · 2011-05-20T14:09:33.169Z · LW(p) · GW(p)

I agree comments are more useful, but are you saying someone should never downvote without leaving a comment? Why do you think it is actually counterproductive?

Replies from: nazgulnarsil
comment by nazgulnarsil · 2011-05-20T14:19:36.833Z · LW(p) · GW(p)

Because it often seems that the person actually doesn't know why they've been downvoted.

comment by [deleted] · 2011-05-19T15:48:08.195Z · LW(p) · GW(p)

I'll comment to state my agreement with this in addition to voting this up, because this was the first criticism that occurred to me, and might be one of the things that bothers me most.

comment by XiXiDu · 2011-05-22T10:23:15.441Z · LW(p) · GW(p)

I'd like to see less discussion about karma.

Has this topic been discussed in detail?

Personally, the reputation system mainly taught me how to play, but not for what reasons, beyond maximizing my karma score. It works like a dog collar, administering electric shocks when the dog approaches a certain barrier. The dog learns where it can go, by way of pain.

Humans can often infer only little detail from the change of a number, and the little they learn is mostly misinterpreted. People complaining about downvotes are a clear indication that this is the case.

If people write, "I know I'll be downvoted for this, but...", what they mean is that they have learnt that what they are going to write will be punished, but that they do not know why, and are more than superficially interested in learning how they are wrong.

Has it been shown that reputation systems cultivate discourse and teach novel insights, rather than turning communities into echo chambers and their members into karma score maximizers?

If it were my sole intention, I could probably accumulate a lot of karma. Only because I often ignore what I have learnt about the reputation system, and write what interests me, do I manage to put forth some skepticism. But can a community that is interested in truth and the refinement of rationality rely on people to ignore the social pressure and strong incentives applied by a reputation system, in favor of honesty and diversity?

How much of what is written on Less Wrong, and how it is written, is an effect of the reputation system? How much is left unsaid?

I do not doubt that reputation systems can work, in principle. If everyone involved were perfectly rational, with a clear goal in mind, a reputation system could provide valuable feedback. But once you introduce human nature, it might become practically infeasible, or have adverse side-effects.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2011-05-22T14:18:28.792Z · LW(p) · GW(p)

Perhaps we should have a social norm of asking anyone who says "I know I'll be downvoted for this" why they think so.

Replies from: wedrifid
comment by wedrifid · 2011-05-22T14:37:03.775Z · LW(p) · GW(p)

Perhaps we should have a social norm of asking anyone who says "I know I'll be downvoted for this" why they think so.

I am going to stick with downvoting them regardless.

Replies from: XiXiDu, NancyLebovitz
comment by XiXiDu · 2011-05-22T14:52:44.710Z · LW(p) · GW(p)

I am going to stick with downvoting them regardless.

What's so bad about writing that you know that you'll be downvoted? Many of your comments on the recent meta-ethics threads have been downvoted (at least initially; I haven't checked again). So you know that another comment that criticizes someone else's moral theory is likely to be downvoted as well (I think you even wrote something along those lines).

Saying that you are aware that what you are going to say will be downvoted provides valuable feedback.

That you know you are going to be downvoted doesn't mean that you know you are wrong and decided to voice your wrongness again.

Replies from: wedrifid, NancyLebovitz, Barry_Cotter, persephonehazard
comment by wedrifid · 2011-05-22T16:33:26.164Z · LW(p) · GW(p)

What's so bad about writing that you know that you'll be downvoted?

Mild spamminess, an unhealthy passive-aggressive habit, an unnecessary insult to the reader.

comment by NancyLebovitz · 2011-05-22T15:39:03.078Z · LW(p) · GW(p)

What's so bad about writing that you know that you'll be downvoted?

I find "I know I'll be downvoted" or "I know I'll be flamed" to be tiresome, even though I don't downvote them.

I'd rather be left to form my own opinion relatively freshly.

Also, (and I'm not saying this applied to wedrifid), I frequently find that IKIB* is attached to something which is either innocuous or that ends up being liked.

comment by Barry_Cotter · 2011-05-22T15:11:59.249Z · LW(p) · GW(p)

I will also continue to downvote them, but I'm more likely to explain why.

comment by persephonehazard · 2011-06-08T02:00:16.663Z · LW(p) · GW(p)

And, of course, being downvoted doesn't necessarily /mean/ that you're wrong.

comment by NancyLebovitz · 2011-05-22T14:46:15.119Z · LW(p) · GW(p)

I hadn't thought about that policy, and I wouldn't presume to ask you to change it.

Replies from: wedrifid
comment by wedrifid · 2011-05-22T16:34:34.897Z · LW(p) · GW(p)

I hadn't thought about that policy, and I wouldn't presume to ask you to change it.

Why thank you. I've also made an exception to my general policy of downvoting all 'should' claims for norms that don't have my complete support. :)

comment by Emile · 2011-05-19T13:34:37.567Z · LW(p) · GW(p)

Minor quibble: I think the terms "rational" and "irrational" (and "rationalist", "rationality", etc.) tend to be overused, sometimes as vague "good/bad" qualifiers (I've been guilty of that). As a rule of thumb, I'd recommend against using those terms unless

  • You're using them in a narrow technical sense, e.g. a rational utility-maximizing agent in economics, or

  • You're discussing "non-traditional mental skills" like changing one's mind, compensating for cognitive biases or dissolving confusion (i.e. not just being smart, open-minded and an atheist), or

  • You have above 10000 karma.

Replies from: wedrifid
comment by wedrifid · 2011-05-19T16:25:37.429Z · LW(p) · GW(p)

Hear, hear! I get a niggling aversive reaction whenever I see those terms used when not absolutely necessary, and one of irritation when I see them used as 'good/bad' qualifiers. Even more so when the alleged 'rational' action is a subjective claim that I don't even necessarily agree with!

And if those of us with 10k karma don't constrain our usage to the first two cases (and in the case of rationalist/rationality with reluctance even then) then shame on us/them!

comment by mstevens · 2011-05-19T14:33:49.856Z · LW(p) · GW(p)

There's too much talk of Bayesianism in fuzzy conspiracy terms and not enough "here's the maths. learn.".
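
For the record, the core of "the maths" fits in a few lines. A worked Bayes' theorem example, with numbers invented purely for illustration:

    # Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
    p_h = 0.01              # prior: 1% base rate for hypothesis H
    p_e_given_h = 0.90      # P(E|H): test sensitivity
    p_e_given_not_h = 0.05  # P(E|not H): false positive rate

    # Total probability of the evidence, summed over H and not-H.
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

    posterior = p_e_given_h * p_h / p_e
    print(round(posterior, 3))  # 0.154 -- a positive result still leaves H unlikely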

Replies from: mstevens, gwern, wallowinmaya
comment by mstevens · 2011-05-19T15:17:58.895Z · LW(p) · GW(p)

also, I hate getting karma when I'd rather have a reply.

Replies from: NancyLebovitz, None
comment by NancyLebovitz · 2011-05-19T16:45:40.826Z · LW(p) · GW(p)

I could vote this up, but instead I'll say that it's especially annoying when I post articles which mostly get karma rather than replies.

Replies from: randallsquared
comment by randallsquared · 2011-05-20T13:36:54.750Z · LW(p) · GW(p)

I believe "articles which mostly" would make that much clearer.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2011-05-20T14:54:34.228Z · LW(p) · GW(p)

You're right. Corrected.

comment by [deleted] · 2011-05-20T12:59:35.802Z · LW(p) · GW(p)

.

comment by gwern · 2011-05-19T18:35:49.909Z · LW(p) · GW(p)

Your own criticism is kind of fuzzy. What exactly does one write about? For example, would http://www.gwern.net/Modafinil#ordering-with-learning be the sort of Bayesian discussion you'd want to see, or is that too elementary and you'd rather something that looks like a later chapter of PT:tLoS?

Replies from: mstevens
comment by mstevens · 2011-05-19T19:02:45.148Z · LW(p) · GW(p)

That's the sort of thing I was thinking of. I want a whole series of content from elementary to advanced.

I suppose what I'm calling for is LW to write a stats textbook with a LW angle on things.

Of course possibly the answer is that such books already exist and I should go read them instead of LW.

Replies from: atucker
comment by atucker · 2011-05-20T02:28:22.517Z · LW(p) · GW(p)

What would you call a LW angle on things in the context of a math textbook?

Replies from: mstevens
comment by mstevens · 2011-05-20T13:05:32.780Z · LW(p) · GW(p)

I'm not totally sure (I want to read the book!), but at the very least it'd have more real-world applications than the books on the subject I've looked at.

comment by David Althaus (wallowinmaya) · 2011-05-19T19:47:37.733Z · LW(p) · GW(p)

Um, do you know of this introduction by Eliezer?

Here is another one, which I found to be very helpful, by komponisto.

If that does not suffice, read the "Technical explanation" of (I meant "from", albeit it's not that funny) Eliezer. And if you aspire to become a Jedi Bayesian, just read E.T. Jaynes himself.

Replies from: wedrifid
comment by wedrifid · 2011-05-19T19:53:01.403Z · LW(p) · GW(p)

If that does not suffice read the "technical explanation" of Eliezer.

Technical Explanation of Eliezer. I'd like to see that. ;)

Replies from: wallowinmaya
comment by David Althaus (wallowinmaya) · 2011-05-19T20:40:06.230Z · LW(p) · GW(p)

;) In German there is simply one word for "by, from, of". Kinda handy.

comment by komponisto · 2011-05-19T16:29:04.143Z · LW(p) · GW(p)

1. Too much emphasis on "altruism" and treatment of "altruists" as a special class. (As opposed to the rest of us who "merely" enjoy doing cool things like theoretical research and art, but also need the world to keep existing for that to continue happening.) No one should have to feel bad about continuing to live in the world while they marginally help to save it.

2. Not enough high-status people, especially scientists and philosophers. Do Richard Dawkins and Daniel Dennett know about LW? If not, why not? Why aren't they here? What can we do about it? Why aren't a serious-looking design and the logo of an Oxford institute enough to gain credibility? (Exception that proves the rule: Scott Aaronson has LW on his blogroll, but he was reading OB before he was high-status, and so far as I am aware, hasn't ever commented on LW as opposed to OB.)

3. Too much downvoting for disagreement, or for making non-blatant errors.

4. It's not that there are too many meetup posts, it's that there are too few content posts by comparison.

5. I sometimes feel that LW is not quite nice enough (see point 3.). Visiting other internet forums quickly snaps me out of this and puts things into perspective; but I still think we could probably do better.

6. Related to 3. and 5.: sometimes people don't read things carefully before reacting (and voting).

7. Art-related topics don't get enough respect. This fact manifests itself both in blatant ways (low scores for comments that discuss them) and in subtle ways (people make assumptions about what subtopic- and position-space look like in these domains, and show impatience with discussions about whether these assumptions are correct).

Replies from: Vladimir_M, Wei_Dai, steven0461, None
comment by Vladimir_M · 2011-05-19T23:29:34.580Z · LW(p) · GW(p)

Not enough high-status people, especially scientists and philosophers. Do Richard Dawkins and Daniel Dennett know about LW? If not, why not?

Well, to be blunt, arguing on public internet forums is not an effective way to accomplish anything much in practice. The only people who do it are those for whom the opportunity cost in time is low (and who are thus necessarily underachievers) and those who find it enjoyable enough to be worth the cost (but this is clearly negatively correlated with achievement and high status).

Also, arguing on the internet under one's real identity is a bad idea for anyone who isn't in one of these four categories: (1) those who already have absolute financial security and don't care what others will think of them, (2) those who instinctively converge towards respectable high-status opinions on all subjects, (3) those who can reliably exercise constant caution and iron self-discipline and censor themselves before writing anything unseemly, and (4) those who absolutely lack interest in any controversial topics whatsoever.

comment by Wei Dai (Wei_Dai) · 2011-05-20T07:39:23.410Z · LW(p) · GW(p)

Not enough high-status people, especially scientists and philosophers.

High status people tend to be those whose actions are optimized to maximize status. Participating on Internet forums is not an optimal way to gain status in general. (Of course it can be a good way to gain status within particular forums, but by high-status people you clearly meant more widely-recognized status.)

(I disagree with Vladimir_M that "arguing on public internet forums is not an effective way to accomplish anything much in practice". In my experience it is a good way to get people interested in your ideas, further develop them and/or check them for correctness.)

Do Richard Dawkins and Daniel Dennett know about LW? If not, why not? Why aren't they here? What can we do about it? Why aren't a serious-looking design and the logo of an Oxford institute enough to gain credibility?

Probably not much we can do unless LW somehow gains widespread recognition among the public (but then we probably won't care so much about "not enough high status people"). I note that even the philosophers at FHI rarely participate here.

Replies from: curiousepic
comment by curiousepic · 2011-05-22T13:18:19.931Z · LW(p) · GW(p)

I note that even the philosophers at FHI rarely participate here.

I would be very interested in hearing why this is true, and the resource is at hand.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2011-05-22T16:14:56.178Z · LW(p) · GW(p)

You can see here an explanation from Toby Ord why he decided not to continue a discussion despite some of us begging him to.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-05-22T17:29:13.212Z · LW(p) · GW(p)

By the way, how far is (a saner rendering of) "moral realism" from simply a focus on the "objective" in "subjectively objective values"? That is, any given agent can no more escape from fixed moral truths than from physical reality, even though there are other physical realities and agents with other goals. This doesn't look like a disagreement.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2011-05-22T23:59:34.967Z · LW(p) · GW(p)

Toby mentioned that moral realism went together with value simplicity, so presumably he meant a version of moral realism that implies value simplicity, from which I infer that his position is not close to "subjectively objective values".

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-05-23T01:00:32.407Z · LW(p) · GW(p)

Toby's comment doesn't strongly imply that he believes in value simplicity, though. On the other hand, "value simplicity" can be parsed as correct as well, in the sense of pointing to human minds or even to one's own intuition and saying "values like this" (I weakly guess a moral realist would just use their own intuition in this case instead of noticing it), so this needs further disambiguation. :-)

comment by steven0461 · 2011-05-21T23:18:50.456Z · LW(p) · GW(p)

Too much emphasis on "altruism" and treatment of "altruists" as a special class. (As opposed to the rest of us who "merely" enjoy doing cool things like theoretical research and art, but also need the world to keep existing for that to continue happening.) No one should have to feel bad about continuing to live in the world while they marginally help to save it.

Are people doing specific things to make you feel bad about "continuing to live in the world", or does mere discussion of altruist-relevant topics among LW altruists make you feel that way?

comment by [deleted] · 2011-05-19T17:25:05.214Z · LW(p) · GW(p)

.

Replies from: komponisto
comment by komponisto · 2011-05-23T20:08:32.871Z · LW(p) · GW(p)

I was thinking of exchanges like this, in which my interlocutor took it for granted that musical taste is analogous to color preferences (and therefore of no greater intellectual interest), and displayed no interest in updating his beliefs on this question (I assume because of an unverbalized feeling that the topic isn't prestigious enough to think this deeply about).

Generally, what seems to happen is an inescapable spiral of "my heuristics tell me this comment is low-status, so I'm not going to read it carefully enough to notice any argument it may contain that my heuristics are wrong".

comment by Armok_GoB · 2011-05-19T11:08:38.463Z · LW(p) · GW(p)

There are way too many amazing posts with very little karma and mediocre posts with large amounts of karma.

Not enough productive projects related to the site, like site improvements and art. The few that do show up get too little attention and karma.

Too much discussion about things like meetups and growing the community and converting people. Those things are important but they don't belong on LW and should probably have their own site.

There is a category of religiously inspired posts that creep me out and set off cult alarms. It contains that post about staring from Scientology and that Transcendental meditation stuff that, while I found it interesting and perhaps useful, doesn't seem to belong on LW, and now recently these Mormon posts about growing organizations. shudder

Replies from: gscshoyru, JohnH, badger, Kevin, atucker
comment by gscshoyru · 2011-05-19T16:49:40.313Z · LW(p) · GW(p)

There is a category of religiously inspired posts that creep me out and set off cult alarms. It contains that post about staring from Scientology and that Transcendental meditation stuff that, while I found it interesting and perhaps useful, doesn't seem to belong on LW, and now recently these Mormon posts about growing organizations. shudder

What I'm about to say has been said before, but it bears repeating. What exactly about all this stuff is setting off cult alarms for you? I had a similar problem with those posts as well, until I actually went and questioned the cult alarm in my head (which was a gut reaction) and realized that it might not be a rational reaction. Just because some scary group does something does not make it a bad thing, even if they're the only people that do it -- reversed stupidity is not intelligence. And a number of those things suggested sound like good, self-improvement suggestions, which are free of religious baggage.

In general, when you're creeped out by something, you should try to figure out why you're being creeped out instead of merely accepting what the feeling suggests. Otherwise you could end up doing something bad that you wouldn't have done if you'd thought it through. Which is of course the basic purpose of the teachings on this site.

Replies from: Armok_GoB
comment by Armok_GoB · 2011-05-19T17:07:37.899Z · LW(p) · GW(p)

I don't know how the cult alarms work; they're intuitive. I know all those things, and indeed it's probably a false alarm, but I thought I should mention it anyway.

Still, if religious orgs have anything to say to rationalists about rationality then something, somewhere, is very very wrong. That doesn't necessarily mean it's not the case or that we shouldn't listen to them, but at the very least we should have noticed the stuff they're saying on our own long ago.

I never actually stated that I accepted what the feeling said, only that I HAD the feeling. I am in fact unsure of what to think and thus I'm trying to forward the raw data I'm working from (my intuitions) rather than my interpretation of what they mean. I should have made that clearer.

Besides, regardless of whether the feeling of being creeped out is justified, the fact that they creep people out is a problem, and they should try to communicate the same ideas in ways that don't creep people out so much. I don't like being creeped out.

Replies from: gscshoyru, Eugine_Nier, Armok_GoB, AdeleneDawner
comment by gscshoyru · 2011-05-19T17:32:33.477Z · LW(p) · GW(p)

Ah, ok, I misunderstood you then. Sorry, and thanks for clearing that up.

I don't agree that religious organizations having something to say to rationalists about rationality is a bad thing -- they've been around much, much longer than rationalists have, and have had way more time to come up with good ideas. And the reason we needed them to suggest it instead of working it out on our own is probably the very thing I was trying to warn against -- in general, we as a community tend to look at religious organizations as bad, and so tend to color everything they do with the same feeling of badness, which makes the things that are actually good harder to notice.

I also do not like being creeped out. But I assume the creepiness factor comes from the context (i.e. if the source of the staring thing had never been mentioned, would it have been creepy to you?). But this is probably only doable in some cases and not others (the source of meditation is known to everyone), and I'm not entirely sure removing the context is a good thing to do anyway, if all we want to do is avoid the creepiness factor. I'll have to think about that. Being creeped out and deconstructing it instead of shying away is a good thing, and trains you to do it more automatically more often... but if we want the ideas to be accepted and used to make people stronger, would it not be best to state them in a way that is most acceptable? I don't know.

Replies from: Armok_GoB
comment by Armok_GoB · 2011-05-19T19:31:33.185Z · LW(p) · GW(p)

Since this seems specifically directed to me I'll say "I agree" in this actual comment rather than only upvoting.

I agree.

comment by Eugine_Nier · 2011-05-19T20:35:37.628Z · LW(p) · GW(p)

Still, if religious orgs have anything to say to rationalists about rationality then something, somewhere, is very very wrong.

Nick Szabo has a good essay about why we should expect (religious) traditions to contain valuable insights.

Replies from: nazgulnarsil
comment by nazgulnarsil · 2011-05-20T13:04:38.153Z · LW(p) · GW(p)

Seconded. Tradition as a computational shortcut is a very important insight that I have tried (and mostly failed) to communicate to others.

More generally, memes take advantage of consistent vulnerabilities in human reasoning to transmit themselves. The fact that they use this propagation method says nothing about the value of their memetic payload.

We should pay attention to successful memes if we want to generate new successful memes.

Replies from: Emile
comment by Emile · 2011-05-21T16:14:37.532Z · LW(p) · GW(p)

Your first and second paragraphs somewhat contradict each other - I agree that some traditions may be undervalued by people who'd prefer to reinvent things from whole cloth (from a software engineering perspective: rewriting a complex system you don't understand is risky), but as you say, traditions may have been selected for self-replication more than for their actual value to humans.

If you consider selection at the family, village, or tribe/nation level, maybe traditions' "fitness" is how much they help the people who follow them, but many traditions are either quite recent or evolved in a pretty different environment. So I don't know how much value to attribute to tradition in general.

Replies from: nazgulnarsil
comment by nazgulnarsil · 2011-05-21T20:50:45.016Z · LW(p) · GW(p)

More than a teenage atheist typing in all caps, less than an evangelical :p

But seriously, I think us geeky types tend toward the a priori solution in far too many circumstances. We like things neat and tidy. Untangling traditional social hierarchies and looking for lessons seems to appeal to very few.

comment by Armok_GoB · 2011-05-19T22:17:12.412Z · LW(p) · GW(p)

Hmm, I just came up with a good framing metaphor to make it seem less creepy: biomimicking viruses for use in gene therapy. Not very useful for purposes other than that, though.

comment by AdeleneDawner · 2011-05-19T18:47:30.912Z · LW(p) · GW(p)

Still, if religious orgs have anything to say to rationalists about rationality then something, somewhere, is very very wrong. That doesn't necessarily mean it's not the case or that we shouldn't listen to them, but at the very least we should have noticed the stuff they're saying on our own long ago.

Not necessarily. I don't find it surprising that we have different priorities than religious organizations when it comes to instrumental rationality, and there are also a lot more of them than there are of us, and they've been working longer.

If we'd had thousands of people working for several generations on the specific problem of outreach, and they still had a nontrivial amount of advice to give us, then you'd be right, but that's just not the case.

comment by JohnH · 2011-05-19T15:47:46.319Z · LW(p) · GW(p)

There is a category of religiously inspired posts that creep me out and set off cult alarms. It contains that post about staring from Scientology and that Transcendental meditation stuff that, while I found it interesting and perhaps useful, doesn't seem to belong on LW, and now recently these Mormon posts about growing organizations. shudder

I actually agree with this statement.

comment by badger · 2011-05-19T17:00:41.685Z · LW(p) · GW(p)

I think meetups and discussions about community belong on LW, but occasionally these seem to presume "we've figured all these things out, now we just have to spread them". Even the use of "we" can dangerously set LW readers apart from others. If there is an overarching goal for an LW-based community, it would be better framed as how to be a capable and informative group that others would be interested in, rather than how to attract people per se.

Replies from: Armok_GoB
comment by Armok_GoB · 2011-05-19T19:32:38.484Z · LW(p) · GW(p)

Yea... Still, there's WAY too much of it relative to actual content.

comment by Kevin · 2011-05-25T05:51:35.846Z · LW(p) · GW(p)

The meditation post wasn't about Transcendental meditation.

comment by atucker · 2011-05-19T17:54:52.954Z · LW(p) · GW(p)

Too much discussion about things like meetups and growing the community and converting people. Those things are important but they don't belong on LW and should probably have their own site.

New meetup groups mostly draw on the site, so exiling them to a different site will probably kill them off. If you had them just post on the site when they're new, you'd wind up with pretty much exactly what we have right now -- the established groups don't regularly post meetup notices.

comment by CharlesR · 2011-05-19T15:48:03.517Z · LW(p) · GW(p)

It is often said that one of our core values is being able to change our minds and admit when we are wrong. That process involves questioning. Am I wrong about X? As a community, we should not punish questioning, and yet my own experience suggests we do.

Replies from: XiXiDu
comment by XiXiDu · 2011-05-22T09:14:58.517Z · LW(p) · GW(p)

Am I wrong about X? As a community, we should not punish questioning, and yet my own experience suggests we do.

Yup, I personally had a post downvoted to -11 where I honestly asked "the Less Wrong community to help me resolve potential fallacies and biases in my framing of the above ideas."

comment by Tesseract · 2011-05-21T15:20:28.832Z · LW(p) · GW(p)

I find that Less Wrong is a conflation of about six topics: singularitarianism, AI, philosophy, epistemic rationality, applied rationality, and community.

These don't all seem to fit together entirely comfortably. Ideally, I'd split these into three more-coherent sections (singularitarianism and AI, philosophy and epistemic rationality, and applied rationality and community), each of which I think could probably be more effective as its own space.

comment by PlaidX · 2011-05-19T18:16:22.195Z · LW(p) · GW(p)

Creepily heavy reliance on torture-based what-if scenarios.

Replies from: cousin_it, TimFreeman, Dreaded_Anomaly
comment by cousin_it · 2011-05-19T18:30:39.618Z · LW(p) · GW(p)

If you try to do moral philosophy, you inevitably end up thinking a lot about people getting run over by trolleys and such. Also if you want to design good chairs, you need to understand people's butts really well. Though of course you're allowed to say it's a creepy job but still enjoy the results of that job :-)

Replies from: PlaidX
comment by PlaidX · 2011-05-19T21:52:03.977Z · LW(p) · GW(p)

I haven't read TOO much mainstream philosophy, but in what I have, I don't recall even a single instance of torture being used to illustrate a point.

Maybe that's what's holding them back from being truly rational?

comment by TimFreeman · 2011-05-20T16:47:16.323Z · LW(p) · GW(p)

Creepily heavy reliance on torture-based what-if scenarios.

I agree. I wrote the article you're citing. I was hoping that by mocking it properly it would go away.

comment by Dreaded_Anomaly · 2011-05-19T19:18:49.150Z · LW(p) · GW(p)

One of the major goals of Less Wrong is to analyze our cognitive algorithms. When analyzing algorithms, it's very important to consider corner cases. Torture is an example of extreme disutility, so it naturally comes up as a test case for moral algorithms.

Replies from: PlaidX
comment by PlaidX · 2011-05-19T21:29:15.022Z · LW(p) · GW(p)

I've heard that before, and I grant that there's some validity to it, but that's not all that's going on here. 90% of the time, torture isn't even relevant to the question the what-if is designed to answer.

The use of torture in these hypotheticals generally seems to have less to do with ANALYZING cognitive algorithms, and more to do with "getting tough" on cognitive algorithms. Grinding an axe or just wallowing in self-destructive paranoia.

If the point you're making really only applies to torture, fine. But otherwise, it tends to read like "Maybe people will understand my point better if I CRANK MY RHETORIC UP TO 11 AND UNCOIL THE FIREHOSE AND HALHLTRRLGEBFBLE"

There are a number of things that make me not want to self-identify as a lesswrong user, and not bring up lesswrong with people who might otherwise be interested in it, and this is one of the big ones.

Replies from: Bongo
comment by Bongo · 2011-05-27T23:21:22.052Z · LW(p) · GW(p)

"Maybe people will understand my point better if I CRANK MY RHETORIC UP TO 11 AND UNCOIL THE FIREHOSE AND HALHLTRRLGEBFBLE"

Not necessarily even wrong. The higher the stakes, the more people will care about getting a winning outcome instead of being reasonable. It's a legit way to cut through the crap to real instrumental rationality. Eliezer uses it in his TDT paper (page 51):

... imagine a Newcomb's Problem in which a black hole is hurtling toward Earth, to wipe out you and everything you love. Box B is either empty or contains a black hole deflection device. Box A as ever transparently contains $1000. Are you tempted to do something irrational? Are you tempted to change algorithms so that you are no longer a causal decision agent, saying, perhaps, that though you treasure your rationality, you treasure Earth's life more?

comment by FiftyTwo · 2011-05-20T05:05:10.842Z · LW(p) · GW(p)

As a relative newcomer I've found it quite hard to get a sense of the internal structure of Less Wrong and the ideas presented. Once you've looked at the 'top' posts and the obvious bits of the sequences, a lot of it is quite unstructured. Little things like references to the 'Bayesian conspiracy' or the paperclip AI turn up frequently without explanation, and are difficult to follow up.

Replies from: XiXiDu, badger
comment by XiXiDu · 2011-05-22T09:25:47.973Z · LW(p) · GW(p)

Little things like references to the 'Bayesian conspiracy' or the paperclip AI turn up frequently without explanation, and are difficult to follow up.

References & Resources for LessWrong. Maybe it helps a little bit, although there is still a lot to do. Once I have more time again I'll overhaul it, add some missing references and remove unnecessary items.

comment by badger · 2011-05-22T15:51:19.574Z · LW(p) · GW(p)

This is what the wiki was intended to address. Has the wiki been helpful, or how could it be improved? Is it just a matter of being aware that it's a resource?

Replies from: fburnaby
comment by fburnaby · 2011-05-24T19:41:02.877Z · LW(p) · GW(p)

Thanks for mentioning this. I shared FiftyTwo's complaint until now.

The wiki section is placed in a nice location, but I haven't visited it in more than a year. It seemed to be too content-poor. A quick glance suggests that it has been expanded since then, and so may be helpful now.

comment by NihilCredo · 2011-05-21T19:06:52.419Z · LW(p) · GW(p)

In roughly decreasing order of annoyance:

A varying degree of belief in utilitarianism (ranging from a confused arithmetic altruism to hardcore Benthamism) seems to be often taken for granted, and rarely challenged. The feeling I get when reading posts and comments that assume the above is very similar to what an atheist feels when frequenting a community of religious people. The fix is obvious, though: I should take the time to write a coherent, organised post outlining my issues with that.

A little Singularitarianism, specifically the assumption that self-improving AI = InstantGod®, and that donating to SIAI is the best possible EV for your cash. This isn't a big deal because they tend to be confined to their own threads. (Also, in the thankfully rare instance that someone brings up the Friendly AI Rapture even when it brings nothing to the conversation, I get to have fun righteously snarking at them, and usually get cheap karma too, perhaps from the other non-Singularitarians like me.) But it does make me feel less attached and sympathetic to other LessWrongers.

Of late, there's a lot of concern about what content should be on this site and about how to promote the site and its mentality to the 'muggles'. This kind of puzzles me, because I treat LW as just a place where mostly smart INTJ people hang out and flex their philosophical muscles when they feel like it, and I don't feel particularly interested in missionary work. While I do find it desirable to make more people more rational, I thought everyone here - except for those who get their paycheck from SIAI/FHI, I guess - had better and more efficient purposes to which to dedicate their precious, precious willpower-to-do-stuff-I-don't-enjoy than writing posts they don't really feel like writing. If providing "hardcore" content to LW feels like a chore, then we have a tragedy of the commons situation at hand, and is the site important enough to implement one of the standard workarounds to it?

Eliezer's reduced presence. Other contributors' posts are even more productive and useful than his, but none are quite as enjoyable to read.

Some top contributors regularly get double-digit karma for utterly trivial comments. Can't think of a fix that would be less annoying than the issue.

More Anglo prevalence than I would have expected for a site like this.

No Auto-Pager script for people's histories.

Replies from: Risto_Saarelma, steven0461, wedrifid
comment by Risto_Saarelma · 2011-05-24T14:27:09.935Z · LW(p) · GW(p)

More Anglo prevalence than I would have expected for a site like this.

Are there any English-language discussion sites that aren't very Anglo-centric? The more troubling thing for me is the feeling that we're just bouncing around ideas that flow out of Silicon Valley instead of having multiple cultural centers generating new ideas with their own slant on stuff and having a back-and-forth. There could be interesting communities in Russian, Chinese, German, French or Spanish which are producing interesting ideas and could be aligned with LW if someone bothered to translate stuff, or we could just be in a situation where the interesting new stuff that's roughly compatible with the LW meme cluster happens to emerge mostly from the Anglosphere.

The split between analytic philosophy done in English and continental philosophy done in French and German is a bit similar. And that seems to have led into mutual unintelligibility at some conceptual level, not because of language. As far as I can tell, the two schools of philosophy don't have much use or appreciation for each others' stuff even when it gets translated. There seems to be some weird deep intertwining going on with language, culture and the sort of philosophy that gets produced, and LW stuff might be subject to it as well.

It's odd in general that I feel like I have a much better idea about what's going on in the US than in most of Europe since the primary language of most Americans is one I can understand and the primary language of most Europeans is one I can't.

comment by steven0461 · 2011-05-21T22:59:39.670Z · LW(p) · GW(p)

A varying degree of belief in utilitarianism (ranging from a confused arithmetic altruism to hardcore Benthamism) seems to be often taken for granted, and rarely challenged.

This simply isn't true. See, for example, the reception of this post.

Replies from: NihilCredo
comment by NihilCredo · 2011-05-22T00:15:33.233Z · LW(p) · GW(p)

Altruism is a common consequence of utilitarian ideas, but it's not altruism per se (which is discussed in the linked post and comments) that irks me; rather, it's the idea that you can measure, add, subtract, and multiply desirable and undesirable events as if they were hard, fungible currency.

Just to pick the most recent post where this issue comes up, here is a thread that starts with a provocative scenario and challenges people to take a look at what exactly their ethical systems are founded on, but - with only a couple of exceptions, which include the OP - people just automatically skip to wondering "how could I save the most people?" (decision theory talk), or "what counts as 'people', i.e. those units of which I should obviously try to save as many as possible?". There's an implicit assumption that any sentient being whatsoever = 1 'moral weight unit', and it's as simple as that. To me, that's insane.

Edit: The next one I spotted was this one, which is unabashedly utilitarian in outlook, and strongly tied to the Repugnant Conclusion.

Replies from: steven0461
comment by steven0461 · 2011-05-22T00:25:31.272Z · LW(p) · GW(p)

Fair enough; I guess komponisto's comment in this thread primed me to misinterpret that part of your comment as primarily a complaint about utilitarian altruism.

comment by wedrifid · 2011-05-21T20:14:23.827Z · LW(p) · GW(p)

A varying degree of belief in utilitarianism (ranging from a confused arithmetic altruism to hardcore Benthamism) seems to be often taken for granted, and rarely challenged. The feeling I get when reading posts and comments that assume the above is very similar to what an atheist feels when frequenting a community of religious people. The fix is obvious, though: I should take the time to write a coherent, organised post outlining my issues with that.

Please do. @#%@#$ utilitarianism.

comment by mstevens · 2011-05-19T14:32:50.109Z · LW(p) · GW(p)

I don't like meetup posts getting in the way of actually interesting content.

I don't like the heavily blog inspired structure - I want something more like a book of core ideas, and perhaps a separate forum for discussing them and extending the core. At the moment it's very hard to "work your way in".

It would be nice to know more about other users rather than just their karma.

Content seems quite light and of low value at the moment. I may well be contributing to this.

I don't like the overlap between SIAI and LW. I'd like a clearer distinction between the two projects even if the people are the same.

I miss MoR and wish EY would finish it.

I like being notified of valuable new content via email, it makes me sad LW doesn't offer this.

Replies from: wedrifid, wedrifid, wedrifid, David_Gerard
comment by wedrifid · 2011-05-19T16:12:55.274Z · LW(p) · GW(p)

I like being notified of valuable new content via email, it makes me sad LW doesn't offer this.

I don't share that preference, but it seems solvable in two seconds: the time it takes to type "ctrl-T rss to email". http://www.feedmyinbox.com/.

Defining and implementing a 'valuable' metric could be slightly more difficult. An RSS feed for any comments that reach +5 could be worth implementing!
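
A rough sketch of that idea, assuming entry titles carry the karma score; feedparser is a real Python library, but the feed URL and the score format here are assumptions, since the actual comments feed may expose karma differently (or not at all):

    import re
    import feedparser  # pip install feedparser

    FEED_URL = "http://lesswrong.com/comments/.rss"  # assumed feed location

    def valuable_entries(url, threshold=5):
        """Yield (title, link) for entries whose title shows karma >= threshold."""
        feed = feedparser.parse(url)
        for entry in feed.entries:
            match = re.search(r"(-?\d+)\s*point", entry.title)
            if match and int(match.group(1)) >= threshold:
                yield entry.title, entry.link

    for title, link in valuable_entries(FEED_URL):
        print(title, link)

Piping the output of something like this into the rss-to-email service mentioned above would give the "+5 comments" notifications.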

comment by wedrifid · 2011-05-19T16:21:43.978Z · LW(p) · GW(p)

Content seems quite light and of low value at the moment.

It's good to hear a relatively new user say this. Just because it makes me feel less like the old guy reminiscing about the (selectively remembered) glory days and complaining about 'kids these days'. ;)

Replies from: NancyLebovitz
comment by NancyLebovitz · 2011-05-19T16:44:32.993Z · LW(p) · GW(p)

I agree. I don't know whether it's that the more obvious stuff has been done, or that people aren't doing the work to extend the frontiers.

Replies from: wedrifid
comment by wedrifid · 2011-05-19T17:14:04.208Z · LW(p) · GW(p)

I don't know whether it's that the more obvious stuff has been done, or that people aren't doing the work to extend the frontiers.

The latter. I can be confident in this because I have a mental list of all sorts of posts that I would make if I had unlimited time and motivation. That I choose not to is an indication of priorities, not an indication that there is nothing left at the boundaries.

Replies from: None
comment by [deleted] · 2011-05-19T19:52:47.686Z · LW(p) · GW(p)

Maybe you could do a quick post listing the sorts of posts you would make if you had unlimited time and motivation.

Replies from: atucker
comment by atucker · 2011-05-20T02:30:40.034Z · LW(p) · GW(p)

Aaaah holy crap that sounds awesome!

Like, you're almost making me want to get a PhD (from there)!

comment by wedrifid · 2011-05-19T16:18:56.226Z · LW(p) · GW(p)

I don't like the heavily blog inspired structure - I want something more like a book of core ideas

The 'sequences' link seems to cover this. The difficulty is that it's not nearly as easy to motivate oneself to read the book-like format.

In the last week I have gone through and converted all of the hundreds of core Eliezer posts into audio format, and have them running nearly constantly on my iPod for the purpose of revision. It's going to take days to get through them all even at that constant rate of consumption! I highly recommend this as a way to 'work your way in'. It is not quite the same as reading all of the text, but the cost is far, far lower.

PS: For obvious reasons I just had to upvote your other comment!

Replies from: mstevens
comment by mstevens · 2011-05-19T19:04:10.095Z · LW(p) · GW(p)

I find the sequences hard to penetrate. I've actually found MoR to be a much better introduction.

But either way I'd like to see them more prominent on the site.

Replies from: Vaniver
comment by Vaniver · 2011-05-25T11:23:28.137Z · LW(p) · GW(p)

I find the sequences hard to penetrate. I've actually found MoR to be a much better introduction.

It seems like you're not interested in a core, then, but a popularization. (This is intended as a clarification, not an insult.) If one wanted an introduction to Christianity, just opening up the Bible is not a good plan.

Replies from: mstevens
comment by mstevens · 2011-05-25T16:02:52.454Z · LW(p) · GW(p)

That's somewhat true - I think a good introduction is a key part of what I'm looking for.

However I also like the fact that MoR is a well structured work (start reading at the beginning, continue to the end) with some sort of consistent editorial style, which the sequences seem to lack.

comment by David_Gerard · 2011-05-19T15:57:53.557Z · LW(p) · GW(p)

BTW, you should pop along to a London meetup, even if only to boggle slightly. A nice bunch.

Replies from: mstevens
comment by mstevens · 2011-05-19T19:07:59.174Z · LW(p) · GW(p)

But I suspect you're all disturbingly humanoid! I know you are!

Replies from: David_Gerard
comment by David_Gerard · 2011-05-19T20:39:28.035Z · LW(p) · GW(p)

You'll see me sipping water in a real ale pub. A deeply disturbing sight.

Replies from: persephonehazard
comment by persephonehazard · 2011-06-08T02:01:34.776Z · LW(p) · GW(p)

...who are you and what have you done with, you know, /you/?!?

Replies from: David_Gerard
comment by David_Gerard · 2011-06-08T06:51:11.155Z · LW(p) · GW(p)

Someone attempting to keep up with a room full of people smarter than me :-)

I point you at the welcome thread!

Replies from: XiXiDu
comment by XiXiDu · 2011-06-08T10:03:41.547Z · LW(p) · GW(p)

...smarter than me...

I think this is a largely overestimated concept, especially on LW. I doubt most people here are "smarter" than the average Joe. A lot of it is due to education, a difference of interests, and a little more ease when it comes to symbol manipulation. Surely there are many more factors, like the ability to concentrate, not getting bored too quickly, being told as a child that one can learn anything if one tries hard enough, etc., but little of it has to do with insurmountable hardware limitations.

Eliezer Yudkowsky recently wrote:

You know how there are people who, even though you could train them to carry out the steps of a Universal Turing Machine, you can't manage to teach them linear algebra...

I haven't heard of any evidence that would suggest that there are human beings who can't understand linear algebra. I myself have not yet arrived at linear algebra, because I didn't bother to learn any math when I was a teenager, but I doubt that it is something only superhuman beings can understand. I would go so far as to bet that you could teach it to someone with Down syndrome.

Take for example the number 3^^^^3. Can I hold a model of 3^^^^3 objects in my memory? No. Can I visualize 3^^^^3? No. Does that mean that I am unable to fathom some of its important properties, e.g. its scope? No.
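
(The ^^^^ is Knuth's up-arrow notation: one arrow is exponentiation, and each additional arrow iterates the operator below it. A short recursion pins down the definition, even though the value itself is hopelessly beyond physical computation:)

    def hyper(a, n, b):
        """Knuth's up-arrow: hyper(a, 1, b) = a**b; each extra arrow (n)
        iterates the operator below it. 3^^^^3 is hyper(3, 4, 3)."""
        if n == 1:
            return a ** b
        if b == 0:
            return 1
        return hyper(a, n - 1, hyper(a, n, b - 1))

    print(hyper(3, 1, 3))  # 3^3 = 27
    print(hyper(3, 2, 3))  # 3^^3 = 3**3**3 = 7625597484987
    # hyper(3, 3, 3) is already a power tower of 7,625,597,484,987 threes;
    # hyper(3, 4, 3), i.e. 3^^^^3, towers unimaginably beyond even that.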

Someone who has no legs can't run faster than you. Similar differences are true about different brains, but we don't know enough about brains, or what it means to understand linear algebra, to indiscriminately claim that someone is "smarter"...

Replies from: persephonehazard, David_Gerard
comment by persephonehazard · 2011-06-08T14:32:41.717Z · LW(p) · GW(p)

I'm not convinced anybody could teach me to understand linear algebra. Or maybe what I mean by that is that I'm not convinced of my own ability to understand linear algebra, which may be a different thing.

I have trouble with maths. More specifically, I have trouble with numbers. What I experience when faced with lots of numbers is akin to how people with dyslexia often describe trying to parse lots of written text - they swim and shift beneath my eyes, and dissolve into a mass of meaningless gobbledegook that I can't pick any sense from. And then after a while, even if I've ploughed through some of this, I start to get what I can only describe as "number fatigue" and things that previously I'd almost started to comprehend seem to slip out from my grasp.

And, when asked to do simple maths, I panic and fly into what is pretty much an anxiety attack. Which, of course, means that I'm not thinking clearly enough to untangle it all and try to start making sense of it.

Maths feels utterly, utterly impenetrable to me. Half the time I can't even work out what the necessary sum is - recent examples include my having no notion of the calculations required for aspect ratio or 10% of a weight in stones and pounds, but this also applies to much simpler things, like the time I couldn't figure out how to calculate the potential eventual fundraising total from the time elapsed, the time remaining and the money so far achieved.

I realise that in a community like this I'm going to stick out like a sore thumb, mind you ;-)

Replies from: ciphergoth, XiXiDu
comment by Paul Crowley (ciphergoth) · 2011-06-08T14:55:49.980Z · LW(p) · GW(p)

This all sounds less like a lack of innate ability and more like a barrier of fear. Not to say that can't be just as disabling.

Replies from: persephonehazard, Desrtopa
comment by persephonehazard · 2011-06-08T17:00:24.834Z · LW(p) · GW(p)

Certainly some of it is. The anxiety and fluster and horrible panic feeling is certainly emotional, and the "number blindness" thing is probably related too. It's much, much worse if there's anyone else around - the only thing more embarrassing than knowing I've failed simple arithmetic is failing simple arithmetic when other people who might assume I'm moronically stupid can see me doing it.

And of course that makes me a nightmare to teach, because I'm horribly resistant to learning maths because I know I'll fail and look stupid and whoever it is will think I'm thick. You of all people have encountered that in me!

Struggling to parse strings of numbers, though, can happen no matter how calm and unpressured and private I am. I've emailed myself things like my debit card number so that I can just cut and paste them when I buy things, because I can't always reliably type them in by looking at the card.

comment by Desrtopa · 2011-06-08T16:11:40.303Z · LW(p) · GW(p)

It could be a case of dyscalculia.

Replies from: persephonehazard
comment by persephonehazard · 2011-06-08T17:01:16.599Z · LW(p) · GW(p)

That's certainly entirely plausible, and something my mother (a primary school teacher of a quarter-century's experience, who's known a lot of children well) has always suspected. I've never had it checked out, though. Maybe I should.

ETA: particularly as I've just had a look at the Wikipedia article, and every single thing in the symptoms list applies to me to some degree. I'm even a pretty good writer. Good grief.

comment by XiXiDu · 2011-06-12T17:19:40.720Z · LW(p) · GW(p)

I realise that in a community like this I'm going to stick out like a sore thumb, mind you ;-)

Check the following links: here is my homepage from when I was 18, and this is another page from that year. Looks more like something made by a 14-year-old, doesn't it? And since Desrtopa mentioned dyscalculia, your case might be stronger than mine, but it took me three attempts to figure out how old I was in 2002 :-)

There do exist neurological deficits that prevent people from acquiring certain skills, understanding some concepts, or reaching certain performance levels, but I wouldn't jump to that conclusion in your case. A lot of it may well have to do with what you believe to be the case, rather than with what is actually the case. I haven't hit any barrier yet, and I only learnt to read analog clocks when I was around 14.

I admit that I can't put myself in your position, so maybe I am wrong and you should stop worrying about mathematics. I am only saying that you might as well stop caring without giving up. In other words: do not panic, just try it and don't expect to succeed. Start small, and think about the most basic problem for as long as necessary, without feeling coerced to understand it. Use objects and drawings to approach the problem. Read various sources explaining the same problem. Don't stop reading, or listening to explanations, when you feel you can't follow anymore; just go over it again and again. Then stop for a few hours or days and think about it afresh. And remember not to push yourself to understand; you're doing this in your spare time, for fun. If you feel overwhelmed, forget about it and come back to it later. Write it down, print it out, and plaster your bedroom walls with it, so that you don't need any willpower to approach the problem; the problem will approach you. You have all the time you need, even if it takes decades to understand that one simple problem.

It also helps to remember that almost everyone knows someone who is much better at something. Many people learn to play a musical instrument without ever expecting to become professional musicians. People play golf or soccer just for fun, or for the challenge. Almost nobody turns out to be good at what they do. Personally, I like to play an online racing game called Trackmania. I've played it since 2007 and have only managed to reach world rank 34795, yet I still play, even though I almost never win. If you have trouble doing basic arithmetic, well then, try to enjoy the challenge; don't worry, don't panic!

comment by David_Gerard · 2011-06-08T10:13:16.266Z · LW(p) · GW(p)

A lot of it is due to education, a difference of interest, and a little more ease when it comes to symbol manipulation [...] but little has to do with insurmountable hardware limitations.

I wonder if that makes a difference in practical terms. There's all sorts of potential in one's genes, but one has the body, brain and personal history one ends up with.

What I mean is the experience of no longer feeling like the smartest person in the room, and quite definitely having to put in effort to keep up.

I haven't heard of any evidence that would suggest that there are human beings who can't understand linear algebra.

I first encountered humans who couldn't understand basic arithmetic at university, in the bit of first-year psychology where they try to bludgeon basic statistics into people's heads. People who were clearly intelligent in other regards and not failures at life, who nevertheless literally had trouble adding two numbers with a result in the thirties. I'm still boggling 25 years later, but I was there and saw it ...

Replies from: XiXiDu, XiXiDu, persephonehazard, XiXiDu
comment by XiXiDu · 2011-06-08T11:36:29.958Z · LW(p) · GW(p)

first encountered humans who couldn't understand basic arithmetic at university

When I first saw a fraction, e.g. 1/4, I had real trouble accepting that it equals .25. I was like, "Uhm, why?", while other people were like, "Okay, then by induction 2/4 = .5". It's not that I didn't understand it; I didn't accept it. Only when I learnt that .25 is base-10 place-value notation, which really is an implicit fraction with the denominator being a power of ten, did I begin to accept that it works (it actually took a lot more, like understanding the concept of prime factorization, etc.). Which might be a kind of stupidity, but not something that would prevent me from ever understanding mathematics.
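For concreteness, a quick check of that reading of decimal notation, using Python's fractions module (the example values are mine, not from the comment):

```python
# ".25" is shorthand for the fraction 25/100, which reduces to 1/4.
from fractions import Fraction

print(Fraction(25, 100))                   # 1/4
print(Fraction("0.25") == Fraction(1, 4))  # True
print(Fraction(2, 4) == Fraction("0.5"))   # True -- the "by induction" step
```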

The concept of a function is another example:

  • f:X->Y (Uhm, what?)
  • f(x) : X -> Y (Uhm, what?)
  • f(x) = x+1 (Hmm.)
  • f(1) = 1+1 (Okay.)
  • y = f(x) (Hmm.)
  • (x, y)
  • (x, f(x))
  • (1,2) (Aha, okay.)
  • (x,y) is an element of R (Hmm.)
  • R is a binary relation (Uhm, what?)
  • x is R-related to y (Oh.)
  • xRy
  • R(x,y) (Aha...)
  • R = (X, Y, G)
  • G is a subset of the Cartesian product X × Y (Uhm, what?)

...and so it goes (see the short sketch below). My guess is that many people appear stupid because their psyche can't handle apparent self-evidence very well.
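To make the endpoint of that progression concrete, here is a small Python illustration (my own, with made-up values) of a function as a special kind of relation, i.e. a set of (x, y) pairs:

```python
# The graph G of f(x) = x + 1 on a small domain X: a set of ordered pairs.
X = {0, 1, 2, 3}
G = {(x, x + 1) for x in X}

# "x is R-related to y" is just membership in the set of pairs:
print((1, 2) in G)  # True  -- f(1) = 2
print((1, 3) in G)  # False

# What makes the relation a function: exactly one y for each x.
assert all(len({y for (a, y) in G if a == x}) == 1 for x in X)
```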

comment by XiXiDu · 2011-06-08T10:49:08.346Z · LW(p) · GW(p)

I wonder if that makes a difference in practical terms.

If only by its effect on yourself and other people. If you taboo "smarter" and replace it with "more knowledgeable" or "large inferential distance", you no longer claim that one can't reach a higher level:

"That person is smarter than you." = Just give up trying to understand, you can't reach that level by any amount of effort.

vs.

"That person is more knowledgeable than you." = Try to reduce the inferential distance by studying hard.

I first encountered humans who couldn't understand basic arithmetic at university...

I believe that to be the case with literally every new math problem I encounter. So far I have been wrong each time.

Basic arithmetic can be much harder for some people than for others, because some just do the logic of symbol manipulation while others go deeper and question the axiomatic approach. There are many reasons why people apparently fail to understand something simple; how often can you pinpoint the cause as something that can't be overcome?

comment by persephonehazard · 2011-06-08T14:35:38.458Z · LW(p) · GW(p)

I first encountered humans who couldn't understand basic arithmetic at university, in the bit of first-year psychology where they try to bludgeon basic statistics into people's heads. People who were clearly intelligent in other regards and not failures at life, who nevertheless literally had trouble adding two numbers with a result in the thirties. I'm still boggling 25 years later, but I was there and saw it ...

See above, but I am basically one of those people. My own intelligence lies in other areas ;-)

comment by XiXiDu · 2011-06-08T11:57:47.368Z · LW(p) · GW(p)

I first encountered humans who couldn't understand basic arithmetic at university

Thinking about this a bit longer, I think mathematical logic is a good example showing that their problem is unlikely to be a fundamental inability to understand basic arithmetic. Logic is a "system of inference rules for mechanically discovering new true statements using known true statements". The emphasis here is on mechanical. Is there some sort of understanding that transcends knowing the logical symbols and their truth values? Is arithmetic particularly more demanding in this respect?
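To illustrate what "mechanical" means here, a toy Python sketch of forward-chaining with modus ponens (the facts and rules are invented for the example):

```python
# Start from known true statements and apply "if premise then conclusion"
# rules until nothing new can be derived. No insight required, only
# symbol matching.
facts = {"it_rains"}
rules = [("it_rains", "ground_wet"), ("ground_wet", "shoes_wet")]

changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        if premise in facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # {'it_rains', 'ground_wet', 'shoes_wet'}
```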

comment by Will_Newsome · 2011-05-19T10:29:50.494Z · LW(p) · GW(p)

I am obsessed with group epistemology just enough to suggest the probably-bad idea that a much-better-written version of this post should maybe be posted to the main LW section, and maybe even once a month. This allows people who don't constantly check LW discussion to get in on the fun, and if we want to avoid evaporative cooling those are just the kind of people whose critiques we most want. Not only is this perhaps a good idea for group epistemology but it is also a good signal to the wider aspiring-to-sanity community.

We'll see what the response to this post is, and plan from there, or not.

(ETA: Perhaps a general (both positive and negative) feedback post with relatively lax comment quality expectations would be better; as User:atucker points out in another comment on this post, there is utility to be had in positive feedback.)

Replies from: Dorikka
comment by Dorikka · 2011-05-19T15:45:17.450Z · LW(p) · GW(p)

I think one should be posted once per month, but not necessarily in the main LW section. My main reason is aesthetic: I don't think such a meta post really 'belongs' on the main page. However, I'd reverse this opinion if substantially more users saw new main-page posts than saw discussion posts.

comment by [deleted] · 2011-05-19T15:11:58.622Z · LW(p) · GW(p)

.

comment by nazgulnarsil · 2011-05-20T13:18:12.739Z · LW(p) · GW(p)

Before the discussion section was implemented, I envisioned it as more of an "episodes from rationality in everyday life" section rather than the Top Posts Junior it has partly become. I think a large fraction of LWers are interested in discussing the low-hanging fruit of everyday life but have been discouraged by the response to those types of posts.

comment by Wei Dai (Wei_Dai) · 2011-05-23T07:19:26.253Z · LW(p) · GW(p)

I dislike the discussion/main section divide. I have to check two places for recent comments and posts. Plus, every time I want to make a post I have to decide which section to post it in, which seems to be a not insignificant mental cost. Actually, I can't really tell which posts belong where, so I've ended up posting all of them in discussion "to be safe".

Replies from: evec, steven0461, Will_Newsome
comment by evec · 2011-05-24T02:21:59.713Z · LW(p) · GW(p)

Does checking http://lesswrong.com/r/all/new solve the problem of checking two places?

comment by steven0461 · 2011-05-23T18:38:47.860Z · LW(p) · GW(p)

Doesn't the choice of top-level post vs open thread comment have the same problems?

comment by Will_Newsome · 2011-05-23T07:47:27.695Z · LW(p) · GW(p)

(Also, people like me can break the implicit rules about what to post where for not-obviously-prosocial reasons, like me posting the thing about meta-ethics to Main for experimental reasons despite it being Discussion material.)

comment by Lila · 2011-05-20T05:41:03.635Z · LW(p) · GW(p)

My chief complaint is that almost none of the other articles here are as engaging, compelling, or fun as Eliezer's sequences. Which I have finished reading. :(

Replies from: Kutta
comment by Kutta · 2011-05-20T08:50:16.061Z · LW(p) · GW(p)

The "Top Articles" list has a multitude of great articles and relatively little Eliezer, for lots of pages.

comment by atucker · 2011-05-19T11:33:22.492Z · LW(p) · GW(p)

I think that it would be helpful to make a "What do you like about Less Wrong" post. Mostly because phrasing things negatively frames them negatively, and knowing what people like is also helpful in making the site better.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-05-19T11:53:20.309Z · LW(p) · GW(p)

I agree; I will make that post in a few days and also link back to this one, in order to maximize the number of different viewers and avoid the two posts accidentally covering the same ground at the same time. If anyone thinks those reasons are dumb, they can just go ahead and make a "What do you like about Less Wrong?" post right now and save me the trouble. :)

Combining the positive and negative posts into a single post didn't actually occur to me (embarrassingly). I'm not sure if combining them next time would be better...?

Replies from: Emile
comment by Emile · 2011-05-19T13:25:25.872Z · LW(p) · GW(p)

I vote against combining them, it's good to stay focused.

comment by lukeprog · 2011-11-16T20:03:50.332Z · LW(p) · GW(p)

Not enough LW codebase programmers.

comment by Amanojack · 2011-05-23T18:33:54.849Z · LW(p) · GW(p)

I'd like to see more people questioning orthodox assumptions, and generally more radical arguments, yet without compromising LW standards of rigor. I feel like people are too afraid to stick their necks out and seriously argue something that goes against the majority/high-status views.

Replies from: steven0461
comment by steven0461 · 2011-05-23T18:41:04.863Z · LW(p) · GW(p)

I think LW has a lot of low-quality critics, which in turn may be causing it to underestimate the potential for criticism in the kinds of areas that tend not to attract low-quality critics.

comment by TimFreeman · 2011-05-20T16:35:44.472Z · LW(p) · GW(p)

Ideally I'd like to see a version of the site where what is displayed to me is weighted by the upvotes and downvotes of people I tend to agree with, rather than by the votes of everybody. I'd be happy to see the votes of 90% or maybe even 99% of users, but there is a small minority whose votes I'd rather not see: people who use downvoting politically against specific users, instead of as statements about specific articles.

Implementing this would require some math and perhaps significant CPU, so it may not be worthwhile.
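A rough sketch of one way this could work (a hypothetical illustration, not an actual LW feature; all names and data below are made up): weight each voter by how often their past votes agreed with yours, then score items by the weighted sum of votes.

```python
# votes[user][item] = +1 or -1; "me" is the viewing user.
votes = {
    "me":    {"post1": 1, "post2": -1},
    "alice": {"post1": 1, "post2": -1, "post3": 1},
    "bob":   {"post1": -1, "post2": 1, "post3": -1},
}

def agreement(me, other):
    """Fraction of shared items on which two users voted the same way."""
    shared = set(votes[me]) & set(votes[other])
    if not shared:
        return 0.5  # no evidence either way
    return sum(votes[me][i] == votes[other][i] for i in shared) / len(shared)

def personalized_score(me, item):
    """Sum of other users' votes on the item, weighted by agreement with me."""
    return sum(agreement(me, user) * uv[item]
               for user, uv in votes.items()
               if user != me and item in uv)

print(personalized_score("me", "post3"))  # 1.0: alice's upvote counts, bob's doesn't
```

With many users one would precompute the pairwise agreement weights, which is where the math and CPU cost would come in.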

comment by steven0461 · 2011-05-23T19:00:48.874Z · LW(p) · GW(p)

One thing that bothers me about LW is that comments and discussion posts had their karma divided by ten (rather than by some more moderate number). Surely that must have taken away too much of the incentive to post good comments, and made karma less informative as the measure of general quality of thought that some take it for.

Replies from: Wei_Dai, Nornagest, wedrifid
comment by Wei Dai (Wei_Dai) · 2011-05-24T23:34:19.206Z · LW(p) · GW(p)

I personally wish that people would more often gather their thoughts into coherent arguments and make them into posts, instead of spreading them over many comments. I've tried to encourage people to do this on individual occasions, but mostly without success.

comment by Nornagest · 2011-05-25T00:09:47.066Z · LW(p) · GW(p)

I think this ultimately comes down to whether, and to what degree, we want to encourage quality over volume in top-level posts. On the whole I'm pretty happy with the current balance, but several of the recurring complaints about this site (e.g. meetup post density) do seem to stem from a lack of top-level volume, so I can see an argument for changing the weighting.

comment by wedrifid · 2011-05-23T19:12:06.219Z · LW(p) · GW(p)

One thing that bothers me about LW is that comments and discussion posts had their karma divided by ten (rather than some more moderate number

I'm not entirely sure what you are trying to say here. To be clear: all comments and discussion posts get one karma point per vote; posts on the main page get a times-10 multiplier. Which part of this do you object to?

Replies from: steven0461, Alicorn
comment by steven0461 · 2011-05-23T19:14:45.563Z · LW(p) · GW(p)

Yes, I was consciously trying to frame it in the opposite way from how it's usually framed, because I'm worried that the way it's usually framed highlights the benefits more than the costs.

comment by Alicorn · 2011-05-23T20:09:57.144Z · LW(p) · GW(p)

I think he may have been making a joke.

Replies from: steven0461
comment by steven0461 · 2011-05-23T20:29:23.363Z · LW(p) · GW(p)

The phrasing was intentionally unusual and apparently confusing (for which I apologize), but the point was meant seriously, though I'm not confident of it. Multiplying main-page karma by 10 means the same as dividing the rest by 10, unless people care about absolute rather than relative amounts of karma.

Replies from: Alicorn
comment by Alicorn · 2011-05-23T20:42:07.931Z · LW(p) · GW(p)

Oh, I thought it was intended to be about status-quo bias or something.

comment by AlphaOmega · 2011-05-19T19:57:51.654Z · LW(p) · GW(p)

What bothers me is that the real agenda of the LessWrong/Singularity Institute folks is being obscured by all these abstract philosophical discussions. I know that Peter Thiel and other billionaires are not funding these groups for academic reasons -- this is ultimately a quest for power.

I've been told by Michael Anissimov personally that they are working on real, practical AI designs behind the scenes, but how often is this discussed here? Am I supposed to feel secure knowing that these groups are seeking the One Ring of Power, but it's OK because they've written papers about "CEV" and are therefore the good guys? He who can save the world can control it. I don't trust anyone with this kind of power, and I am deeply suspicious of any small group of intelligent people that is seeking power in this way.

Am I paranoid? Absolutely. I know too much about recent human history and the horrific failures of other grandiose intellectual projects to be anything else. Call me crazy, but I firmly believe that building intelligent machines is all about power, and that everything else (i.e. most of this site) is conversation.

Replies from: None, timtyler
comment by [deleted] · 2011-05-20T01:14:46.589Z · LW(p) · GW(p)

Keep your friends close...

comment by timtyler · 2011-05-19T20:20:33.650Z · LW(p) · GW(p)

But if it comes down to Us or Them, I'm with Them. You have been warned.

That's from the document where Yudkowsky described his "transfer of allegiance".

What puzzles me is how the outfit gets any support. I mean, it is a secretive, closed-source machine intelligence outfit that makes no secret of its plan to take over the world. To me, that is like writing BAD GUY in big, black letters on your forehead.

The "He-he - let's construct machine intelligence in our basement" is like something out of Tin-Tin.

Maybe the way to understand the phenomenon is as a personality cult.

Replies from: nhamann, Bongo, AlphaOmega
comment by nhamann · 2011-05-19T21:59:02.450Z · LW(p) · GW(p)

What. That quote seems to be directly at odds with the entire idea of "Friendly AI". And of course it is, as a later version of Eliezer refuted it:

(In April 2001, Eliezer said that these comments no longer describe his opinions, found at "Friendly AI".)

I'm also not sure it makes sense to call SIAI a "closed-source" machine intelligence outfit, given that I'm pretty sure there's no code yet.

comment by Bongo · 2011-05-19T23:40:40.514Z · LW(p) · GW(p)

WTF? It says right at the top of the page:

(In April 2001, Eliezer said that these comments no longer describe his opinions, found at "Friendly AI".)

comment by AlphaOmega · 2011-05-19T20:27:53.975Z · LW(p) · GW(p)

That's how it strikes me also. To me Yudkowsky has most of the traits of a megalomaniacal supervillain, but I don't hold that against him. I will give LessWrong this much credit: they still allow me to post here, unlike Anissimov who simply banned me outright from his blog.

Replies from: Nornagest, Bongo
comment by Nornagest · 2011-05-19T20:41:41.011Z · LW(p) · GW(p)

I'm pretty sure Eliezer is consciously riffing on some elements of the megalomaniacal supervillain archetype; at the very least, he name-checks the archetype here and here in somewhat favorable terms. There are any number of reasons why he might be doing so, ranging from pretty clever memetic engineering to simply thinking it's fun or cool. As you might be implying, though, that doesn't make him megalomaniacal or a supervillain; we live in a world where bad guys aren't easily identified by waxed mustaches and expansive mannerisms.

Good thing, too; I lost my goatee less than a year ago.

Replies from: timtyler
comment by timtyler · 2011-05-19T21:04:51.342Z · LW(p) · GW(p)

I expect it helps to have your content come up first when people search for your name and the word "supervillain". Currently 3 of the top 4 results for those search terms are E.Y. posts.

comment by Bongo · 2011-05-19T23:43:35.662Z · LW(p) · GW(p)

Since the quote is obsolete, as nhamann pointed out and as it says right on the top of the page, maybe you are being struck wrong.

comment by komponisto · 2011-05-19T16:45:14.067Z · LW(p) · GW(p)

While I'm griping:

I have always been puzzled and somewhat disappointed by the reception of this post. Almost all the comments seemed to fall into one of two categories: either they totally failed to understand the post, or they thought its main point so utterly obvious that they had trouble understanding why I had bothered to write it.

There seemed to be very few people in the targeted intermediate group, where I myself would have been a year before: those for whom the main idea was a comprehensible yet slightly novel insight.

Replies from: badger, gwern
comment by badger · 2011-05-19T18:51:09.109Z · LW(p) · GW(p)

The issue is that people who found it comprehensible yet slightly novel are the least likely to comment; there isn't much they can add. So here is a retroactive response from me: Thanks! I've been vaguely aware of this, but it's nice to see it laid out explicitly.

comment by gwern · 2011-05-19T18:38:27.324Z · LW(p) · GW(p)

There seemed to be very few people in the targeted intermediate group, where I myself would have been a year before: those for whom the main idea was a comprehensible yet slightly novel insight.

OK. (FWIW, I upvoted that when you posted it and thought it was a very nifty post that drew out the implications of something I thought I understood already.)

So what does this imply? Are said implications a problem? How would one fix said problems?

Replies from: komponisto
comment by komponisto · 2011-05-23T20:22:39.490Z · LW(p) · GW(p)

I suppose the main implication is that the readers I was targeting make up a smaller proportion of the LW readership than I had realized.

Perhaps the only "fix" is for me to update my estimate of the relative size and influence of "my" audience within the general LW population, so as to better predict reaction to my posts.

(Thanks for the positive feedback, by the way.)

Replies from: gwern
comment by gwern · 2011-05-23T21:11:35.280Z · LW(p) · GW(p)

Those are good conclusions.

I would have added that it's a good idea to be clear about who your audience is and how you can target them. This avoids alienating the 'experts', who can see from the disclaimers who the avowed target group is, reason that they are not in it, and either stop reading or read the post as an example of pedagogy.

You can also try to target advanced outsiders by submitting to places like Hacker News or Reddit, something which has worked fairly well for my own 'beginner' pieces.

comment by ikrase · 2013-01-04T10:37:57.260Z · LW(p) · GW(p)

On the subject of powerful self-improving AI, there does not seem to be enough discussion of real-world limitations, or chances for manual override, regarding (1) the AI acquiring computational power and, more importantly, (2) the AI manipulating the outside world with limited information and no dedicated or trustworthy manipulators, or manipulators weaker than a nanotech god. I no longer believe that (1) is a major (or trustworthy!) limit on FOOM, since an AI might run on rented supercomputers, eat the Internet, and so on, but (2) seems to get little consideration. I've seen claims that an AI without too many communication restrictions might anonymously order DNA and have some dupe mix it, yielding self-improving biology up to nanobots, but I haven't seen anything I could really call a threat assessment.