Do people think Less Wrong rationality is parochial?

post by lukeprog · 2012-04-28T04:18:12.916Z · LW · GW · Legacy · 197 comments

I've spent so much time in the cogsci literature that I know the LW approach to rationality is basically the mainstream cogsci approach to rationality (plus some extra stuff about, e.g., language), but... do other people not know this? Do people one step removed from LessWrong — say, in the 'atheist' and 'skeptic' communities — not know this? If this is causing credibility problems in our broader community, it'd be relatively easy to show people that Less Wrong is not, in fact, a "fringe" approach to rationality.

For example, here's Oaksford & Chater in the second chapter of the (excellent) new Oxford Handbook of Thinking and Reasoning, the one on normative systems of rationality:

Is it meaningful to attempt to develop a general theory of rationality at all? We might tentatively suggest that it is a prima facie sign of irrationality to believe in alien abduction, or to will a sports team to win in order to increase their chance of victory. But these views or actions might be entirely rational, given suitably nonstandard background beliefs about other alien activity and the general efficacy of psychic powers. Irrationality may, though, be ascribed if there is a clash between a particular belief or behavior and such background assumptions. Thus, a thorough-going physicalist may, perhaps, be accused of irrationality if she simultaneously believes in psychic powers. A theory of rationality cannot, therefore, be viewed as clarifying either what people should believe or how people should act—but it can determine whether beliefs and behaviors are compatible. Similarly, a theory of rational choice cannot determine whether it is rational to smoke or to exercise daily; but it might clarify whether a particular choice is compatible with other beliefs and choices.

From this viewpoint, normative theories can be viewed as clarifying conditions of consistency… Logic can be viewed as studying the notion of consistency over beliefs. Probability… studies consistency over degrees of belief. Rational choice theory studies the consistency of beliefs and values with choices.

They go on to clarify that by probability they mean Bayesian probability theory, and by rational choice theory they mean Bayesian decision theory. You'll get the same account in the textbooks on the cogsci of rationality, e.g. Thinking and Deciding or Rational Choice in an Uncertain World.
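(For concreteness, here is a sketch of the two consistency conditions being referred to - a standard rendering, not a quote from the handbook: coherent degrees of belief obey the probability axioms and update by conditionalization, and choices are consistent with beliefs and values when they maximize expected utility.)

```latex
% Bayesian probability: degrees of belief are consistent iff they satisfy the
% probability axioms and are updated by conditionalization (Bayes' theorem):
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}

% Bayesian decision theory: choices are consistent with beliefs and values iff
% the chosen act maximizes expected utility:
a^{*} = \arg\max_{a} \sum_{s} P(s \mid a)\, U(s, a)
```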

197 comments

Comments sorted by top scores.

comment by JScott · 2012-04-28T20:02:07.830Z · LW(p) · GW(p)

Long time lurker, first time poster here.

My general impression of the general impression of LW is that it's an Eliezer Yudkowsky fanclub. Essentially, if you ask anyone what they think of Eliezer Yudkowsky, you'll know what they think of LW - which is unfortunate, because lots of people seem to think EY is "full of hot air" or "full of himself" or "a self-important wanker", and this maps onto their attitude about LW.

Replies from: IlyaShpitser, handoflixue, taw, Aharon
comment by IlyaShpitser · 2012-04-29T15:22:24.952Z · LW(p) · GW(p)

I am a counterexample. I think Eliezer is a self-important wanker, but I have a favorable view of LW as a whole. I agree that I might be rare. I also wouldn't describe myself as a "part of the LW community." I think I attended a total of 1 meetup.

Replies from: shrink
comment by shrink · 2012-04-29T19:44:30.348Z · LW(p) · GW(p)

Well, the issue is that LW is heavily biased towards agreement with the rationalizations of the self-important wankery in question (the whole FAI/uFAI thing)...

With the AI, basically, you can see folks who have no understanding whatsoever of how to build practical software, and whose idea of AI is 'predict outcomes of actions, choose the action with the best outcome' (an entirely impractical model, given the enormous number of possible actions when innovating), accusing the folks in the industry who do have that understanding of anthropomorphizing the AI - and taking it as an operating assumption that they somehow know better, on the basis of thinking about some impractical abstract mathematical model. It is like futurists in 1900 accusing engineers of bird-morphizing the future modes of transportation when the engineers speak of wings. Then you see widespread agreement with various irrational nonsense, mostly cases of reverse stupidity, like 'not anthropomorphizing' the AI far past the point of actually not anthropomorphizing, into negative-anthropomorphizing land, whereby if a human uses some sort of efficient but imperfect trick, the AI necessarily computes the terribly inefficient perfect solution, to the point of utter ridiculousness where the inefficiency may be too big for a galaxy of Dyson spheres to handle even given quantum computing.

Then there's this association with a bunch of folk who basically talk other people into giving them money. That puts up a very sharp divide - either you agree they are geniuses saving the world, or they are sociopaths; not a lot of middle road here, as honest people can't stay self-deluded for very long.

Replies from: asr
comment by asr · 2012-04-29T21:56:05.745Z · LW(p) · GW(p)

honest people can't stay self-deluded for very long.

This is surely not true. Lots of wrong ideas last a long time beyond when they are, in theory, recognizably wrong. Humans have tremendous inertia to stick with familiar delusions, rather than replace them with new notions.

Consider any long-lived superstition, pseudoscience, etc. To pick an uncontroversial example, astrology. There were very powerful arguments against it going back to antiquity, and there are believers down to the present. There are certainly also conscious con artists propping up these belief structures -- but they are necessarily the minority of purported believers. You need more victims than con artists for the system to be stable.

People like Newton and Kepler -- and many eminent scientists since -- were serious, sincere believers in all sorts of mystical nonsense -- alchemy, numerology, and so forth. It's possible for smart, careful people to persistently delude themselves -- even when the same people, in other contexts, are able to evaluate evidence accurately and form correct conclusions.

Replies from: shrink
comment by shrink · 2012-04-30T07:34:21.697Z · LW(p) · GW(p)

That's why I said 'self-deluded', rather than just 'deluded'. There is a big difference between believing something incorrect that's believed by default, and coming up, yourself, with a very convenient incorrect belief that makes you feel good and pays the bills, and then actively working to avoid any challenges to this belief. Honest people are those who put such beliefs to good scrutiny (not just talk about putting such beliefs to scrutiny).

Honesty is an elusive matter when the belief works like that dragon in the garage. When you are lying, you have to deceive computational processes that are roughly your equals. That excludes all straightforward approaches to lying, such as waking up in the morning and thinking 'how can I be really bad and evil today?'. Lying is a complicated process, with many shortcuts when it comes to the truth. I define lying as the successful generation of convincing untruths - a black-box definition that doesn't get into details about what parts of the cortex are processing the truth and what parts are processing the falsehoods. (I exclude the inconsistent, accidental generation of such untruths by mistake, unless the mistakes are being chosen.)

Replies from: asr
comment by asr · 2012-05-02T01:37:29.571Z · LW(p) · GW(p)

Hrm? If Newton and Kepler were deluded by mysticism, they were self-deluded. They weren't toeing a party line and they weren't echoing conventional views. They sat down and thought hard and came up with beliefs that seem pretty nuts to us.

I see that you want to label it as "not honest" if they don't put those beliefs to good scrutiny. I think you are using "honest" in a non-standard (and possibly circular) way here. We can't easily tell from the outside how much care they invested in forming those beliefs, or how self-deluded they are. All we can tell is whether the belief, in retrospect, seems to have been plausible given the evidence available at the time. If you want to label it as "not honest" when it seems wacky to us, then yes, tautologically honest people don't come to have wacky beliefs.

The impression I have is that N and K (and many scientists since) weren't into numerology or mysticism to impress their peers or to receive external benefits: they really did believe, based on internal factors.

Replies from: shrink
comment by shrink · 2012-05-02T16:11:52.996Z · LW(p) · GW(p)

Did they make a living out of those beliefs?

See, what we have here is a belief cluster that makes the belief-generator feel very good (saving the world, the other smart people are less smart, etc. etc.) and pays his bills. That is awfully convenient for a reasoning error. Not saying that it is entirely impossible to have a serendipitously useful reasoning error, but it doesn't seem likely.

edit: note, I'm not speaking about some inconsequential honesty in idle thought, or anything likewise philosophical. I'm speaking of not exploiting others for money. There's nothing circular about the notion that an honest person would not talk a friend into paying him upfront to fix the car when that honest person does not have any discernible objective reason whatsoever to think that he could fix the car, while a dishonest person would talk the friend into paying. Now, if we were speaking of a very secretive person who doesn't like to talk about himself, there would've been some probability of a big list of impressive accomplishments we haven't heard of...

Replies from: asr
comment by asr · 2012-05-03T00:43:20.609Z · LW(p) · GW(p)

It's possible we are just using terms differently. I agree that people are biased by their self-interest. I just don't think that bias is a form of dishonesty. It's a very easy mistake to make, and nearly impossible to prevent.

I don't think SIAI is unusually guilty of this or unusually dishonest.

In science, everybody understands that researchers are biased toward believing their own results and toward believing new results that make their existing work more important. Most professional scientists are routinely in the position of explaining to funding agencies why their work is extremely important and needs lots of government grant dollars. Everybody, not just SIAI, has to talk donors into funding their Very Important Work.

For government grants and prestigious publications, we try to mitigate the bias by having expert reviewers. We also tolerate a certain amount of slop. SIAI is cutting out the government and trying to convince the public, directly, to fund their work. It's an unusual strategy, but I don't see that it's dishonest or immoral or even necessarily unwise.

Replies from: shrink
comment by shrink · 2012-05-04T05:57:51.978Z · LW(p) · GW(p)

You are declaring everything gray here, so that verbally everything comes out equal.

There are people with no knowledge of physics and no inventions to their name, whose first 'invention' is a perpetual motion device. You really don't see anything dishonest about holding an unfounded belief that you're this smart? You really see nothing dishonest about accepting money under this premise without doing due diligence, such as trying yourself at something testable, even if you think you're this smart?

There are scientists who are trying very hard to follow processes that are not prone to error, people trying to come up with ways to test their beliefs - do you really see them all as equal in the level of dishonesty?

There are people who are honestly trying to make a perpetual motion device, who sink their money into it, and never produce anything that they can show to investors, because they are honest and don't use hidden wires etc. (The equivalent would have Eliezer moving out to a country with a very cheap cost of living, canceling his cryonics subscription, and so on, to maximize the money available for doing the very important work in question.)

You can talk all day in qualitative terms about how it is the same, state an unimportant difference as the only one, and assert that you 'don't see the moral difference', but this 'counter-argument' you're making is entirely generic and equally applicable to any form of immoral or criminal conduct. A court wouldn't be the least bit impressed.

Also, I don't go philosophical. I don't care what's going on inside the head unless I'm interested in neurology. I know that the conduct is dishonest, and the beliefs under which an honest agent would have such conduct lack foundation; there isn't some honest error here that resulted in a belief that leads an honest agent to adopt such conduct. The convincing liars don't seem to work by thinking 'how could I lie'; they just adopt the convenient falsehood as a high-priority axiom for talk and a low-priority axiom for walk, so as to resolve contradictions in the most useful way, and that makes it very, very murky as to what they actually believe.

You can say that it is honest to act on a belief, but that's an old idea, and nowadays things are more sophisticated and it is a get-out-of-jail-free card for almost all liars, who first make up a very convenient, self-serving false belief with not a trace of honesty to the belief-making process, and then act on it.

Replies from: asr
comment by asr · 2012-05-04T06:56:49.409Z · LW(p) · GW(p)

Your hypothetical is a good one. And you are correct: I don't think you are dishonest if you are sincerely trying to build or sell a perpetual motion machine. You're still wrong, and even silly, but not dishonest. I need a word to refer to conscious knowing deception, and "dishonest" is the most useful word for the purpose. I can't let you use it for some other purpose; I need it where it is.

The argument is not applicable to all criminal conduct. In American criminal law, we pay a lot of attention to the criminal's state of mind. Having the appropriate criminal state of mind is an essential element of many crimes. It's not premeditated murder if you didn't expect the victim to die. It's not burglary if you thought you lived there. It's utterly routine -- and I think morally necessary -- to ask juries "what was the defendant's intention or state of mind". There is a huge moral and practical difference between a conscious and an unconscious criminal. Education much more easily cures the latter, while punishment is comparatively ineffective. For the conscious criminal, the two are reversed: punishment is often appropriate, whereas education has limited benefits.

I don't believe I am giving liars a get-out-of-jail-free card. Ignorance isn't an unlimited defense, and I don't think it is so easy to convince an outside observer (or a jury) that you're ignorant in cases where knowledge would be expected. If you really truly are in a state of pathological ignorance and it's a danger to others, we might lock you up as a precaution, but you wouldn't be criminally liable.

As to scientific ethics: All human processes have a non-zero chance of errors. The scientists I know are pretty cynical about the process. They are fighting to get papers published and they know it. But they do play by the rules -- they won't falsify data or mislead the reader. And they don't want to publish something if they'll be caught-out having gotten something badly wrong. As a result, the process works pretty much OK. It moves forward on average.

I think SIAI is playing by similar rules. I've never seen them caught lying about some fact that can be reliably measured. I've never seen evidence they are consciously deceiving their audience. If they submitted their stuff to a scientific publication and I were the reviewer, I might try to reject the paper, but I wouldn't think of trying to have them disciplined for submitting it. In science, we don't accuse people of misconduct for being wrong, or pigheaded, or even for being overly biased by their self-interest. Is there some more serious charge you can bring against SIAI? How are they worse than any scientist fighting for a grant based on shaky evidence?

Replies from: shrink
comment by shrink · 2012-05-04T07:37:11.496Z · LW(p) · GW(p)

I think you have a somewhat simplistic idea of justice... there is "voluntary manslaughter", there's "gross negligence", and so on. I think SIAI falls under the latter category.

How are they worse than any scientist fighting for a grant based on shaky evidence?

Quantitatively, and by a huge amount. edit: Also, the beliefs that they claim to hold, when held honestly, result in a massive loss of resources, such as moving to a cheaper country to save money, etc. etc. I dread to imagine what would happen to me if I honestly were this mistaken about AI. The erroneous beliefs damage you.

The lying is about having two sets of incompatible beliefs and picking between them based on convenience.

edit: To clarify, justice is not about the beliefs held by the person. It is more about the process that the person is using to arrive at the actions (see the whole 'reasonable person' stuff). If A wants to kill B, and A edits A's beliefs to be "B is going to kill me", and then acts in self-defense and kills B, then if the justice system had a log of A's processing, A would go down for premeditated murder - even though at the time of the murder A is honestly acting in self-defense. (Furthermore, barring some neurophysiological anomaly, it is a fact of reality that justice can only act based on the inputs and outputs of agents.)

comment by handoflixue · 2012-04-30T20:21:10.192Z · LW(p) · GW(p)

For what it's worth, I think Eliezer is a very bright person who has built a serious fanclub that reinforces all of his existing views, and has thus cemented a worldview that can casually brush off all negative feedback because "my fanclub says I'm right / I'm smarter than them."

This maps quite well to my view of LessWrong as a whole - there's a strong bias to accept affirmations of belief and reject contrary viewpoints. My gut reaction is that the standards of evidence are significantly different for the two categories.

Replies from: private_messaging
comment by private_messaging · 2012-05-01T11:17:09.522Z · LW(p) · GW(p)

That, plus I will only think someone is particularly bright if they successfully did something that smart people tried to do and failed at. I apply that to myself, and I sure won't give a break to someone with a fanclub beyond 'bright enough to get a fanclub'. (I would say he thinks he is very bright, but as to whether he actually is, that has not been demonstrated as far as I can see.)

Replies from: handoflixue
comment by handoflixue · 2012-05-01T19:36:39.182Z · LW(p) · GW(p)

The Sequences strike me as a reasonably large accomplishment, which is why I consider Eliezer bright. I haven't seen another example of someone successfully cultivating a large body of easily accessed rationality information. It's written in such a way that you don't need a ton of grounding to get involved in it, it's available for free online, and it covers a wide variety of topics.

If I'm missing something, please point me towards it, of course!! :)

Replies from: private_messaging
comment by private_messaging · 2012-05-02T06:17:31.940Z · LW(p) · GW(p)

Did other smart people put in as much time and fail? It's not about size... find the post that you think required the most intelligence to make; that's what you estimate intelligence from - from size you estimate persistence. With regards to topics, it also covers his opinions, many of which have a low independent probability of being correct. That's not very good - think what reactions very smart people would have - it may be that the community is smarter than average but has an intelligence cut-off point. Picture a much narrower bell curve centred at 115.

My first reaction to "Bayesian" this and that was, "too many words about too trivial topic". We put the coolest presidents on the lowest-denomination coins, and we put the names of many of the coolest mathematicians on things that many 5th graders routinely reinvent at a math olympiad.

Replies from: handoflixue
comment by handoflixue · 2012-05-02T18:34:58.926Z · LW(p) · GW(p)

Did other smart people put in as much time and fail?

Well, we have the entirety of academia. Harvard can't afford academic journals, so it seems fair to say that academic journals fail entirely at this goal, and one assumes that the people publishing there are, on average, at least one standard deviation above the norm (IQ 115+).

It's not about size...

I think this idea sabotages more intelligent people than anything else. Yes, it is about size. Intelligence is useless if you don't use it. Call it "applied intelligence" or some such if you want, but it's what actually matters in the world - not simply the ability to come up with an idea, but to actually put the work into implementing it. "Genius is one percent inspiration, 99% perspiration."

I don't care about someone who has had a single idea that happens to be smarter than Eliezer's best - it's easy to have a single outlier, it's much harder to have consistently good ideas. And without those other, consistently good ideas, I have no real reason to pay attention to that one idea.

My first reaction to "Bayesian" this and that was, "too many words about too trivial topic".

laughs Okay, here we agree! Except... the sequences aren't just about high-level concepts. They're about raising the sanity line of society. They're about teaching people who didn't come up with this one on their own in 5th grade.

I'm not saying Eliezer is the messiah, or the smartest man on Earth. I'm just saying, he's done some clearly fairly bright things with his life. I think he's under-educated in some areas, and flat-out misguided in others, but I can say that about an incredible number of intelligent people.

Replies from: vi21maobk9vp, private_messaging
comment by vi21maobk9vp · 2012-05-03T06:18:22.352Z · LW(p) · GW(p)

I don't care about someone who has had a single idea that happens to be smarter than Eliezer's best - it's easy to have a single outlier, it's much harder to have consistently good ideas

You are answering someone who thinks that the FOOM description is misguided, for example. And there is not so much evidence for FOOM - the inferences are quite shaky there. There are many ideas Eliezer has promoted that dilute the "consistently good" definition unless you agree with his priors.

They're about teaching people who didn't come up with this one on their own in 5th grade.

And it doesn't look like it succeeds at this...

There is a range of intelligence+knowledge where you generally understand the underlying concepts and were quite close but couldn't put it into shape. Those people would like the Sequences, unless a prior clash (or value clash...) makes them too uncomfortable with shaky topics. These people are noticeably above the waterline, by the way.

For raising the sanity waterline, the Freakonomics books do more than the Sequences.

Replies from: JoshuaZ, handoflixue
comment by JoshuaZ · 2012-05-03T06:29:02.592Z · LW(p) · GW(p)

Minor note - the intelligence explosion/FOOM idea isn't due to Eliezer. The idea originally seems to be due to I.J. Good. I don't know if Eliezer came up with it independently of Good or not, but I suspect that Eliezer didn't come up with it on his own.

For raising the sanity waterline, the Freakonomics books do more than the Sequences.

This seems dubious to me. The original book might suggest some interesting patterns and teach one how to do Fermi calculations but not much else. The sequel book has quite a few problems. Can you expand on why you think this is the case?

Replies from: vi21maobk9vp
comment by vi21maobk9vp · 2012-05-03T07:22:01.284Z · LW(p) · GW(p)

The slow-takeoff idea (of morality, not of intelligence) can be traced back even to Plato. I guess in Eliezer's arguments about FOOM there is still some fresh content.

OK, I cannot remember how much of the Freakonomics volumes I have read, as they are trivial enough. My point is that Freakonomics is about seeing incentives and seeing the difference between "forward" and "backward" conditional probabilities. It chooses examples that can be backed by data and where entire mechanisms can be exposed. It doesn't require much effort or any background to read, and it shows you examples that clearly can affect you, even if indirectly.

Replies from: JoshuaZ
comment by JoshuaZ · 2012-05-03T14:12:35.791Z · LW(p) · GW(p)

I guess in Eliezer's arguments about FOOM there is still some fresh content.

Is there anything significant? I haven't looked that hard but I haven't really noticed anything substantial in that bit other than his potential solution of CEV and that seems to be the most dubious bit of the claims.

My point is that Freakonomics is about seeing incentives and seeing the difference between "forward" and "backward" conditional probabilities.

Sure, and this is nice if one is trying to model reality on, say, a policy basis. But this is on the order of, say, a sub-sequence of a general technique. This won't do much for most people's daily decision making in the way that, say, the danger of confirmation bias or the planning fallacy would. For this sort of work to be useful, it often requires accurate data, and sometimes models that only appear obvious in hindsight or are not easily testable. That doesn't impact the sanity waterline that much.

Replies from: IlyaShpitser, vi21maobk9vp
comment by IlyaShpitser · 2012-05-05T18:13:42.108Z · LW(p) · GW(p)

The main value I see in Freakonomics is communicating "the heart of science" to a general audience, namely that science is about reaching conclusions that are uncomfortable but true.

Replies from: JoshuaZ
comment by JoshuaZ · 2012-05-05T20:15:35.570Z · LW(p) · GW(p)

namely that science is about reaching conclusions that are uncomfortable but true.

This seems confused to me: science should reach conclusions that are true whether or not they are uncomfortable. Moreover, I'm not at all sure how Freakonomics would have shown your point. I also think that the general audience knows something sort of like this already - it is a major reason people don't like science so much.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2012-05-05T21:01:36.940Z · LW(p) · GW(p)

I agree! But it's often easy to arrive at conclusions that are comfortable (and happen to be true). It's harder when conclusions are uncomfortable (and happen to be true). All other things being equal, folks probably favor the comfortable over the uncomfortable. Lots of folks that care about truth, including LW, worry about cognitive biases for this reason. My favorite Freakonomics example is the relationship between abortion and the crime rate. If their claim were true, it would be an extremely uncomfortable kind of truth.

You may be right that the general audience already knows this about science. I am not sure -- I often have a hard time popularizing what I do, for instance, because I can never quite tell what the intended audience knows and what it does not know. A lot of "popular science" seems pretty obvious to me, but apparently it is not obvious to people buying the books (or perhaps it is obvious, and they buy books for some other reason than learning something).

It is certainly the case that mainstream science does not touch certain kinds of questions with a ten foot pole (which I think is rather not in the scientific spirit).

comment by vi21maobk9vp · 2012-05-04T05:39:00.398Z · LW(p) · GW(p)

Is there anything significant? I haven't looked that hard but I haven't really noticed anything substantial in that bit other than his potential solution of CEV and that seems to be the most dubious bit of the claims.

For me, FOOM as advertised is dubious, so it's hard to tell. That doesn't change my point: it requires intelligence to prepare the CEV arguments, but his support for the FOOM scenario, and his arguments for it, break the consistency of high-quality ideas for people like me. So, yes, there is a lot to respect him for, but nothing truly unique, and "consistency of good ideas" is only there if you already agree with his ideas.

This won't do much for most people's daily decision making in the way that, say, the danger of confirmation bias or the planning fallacy would.

Well... It is way easier to concede that you don't understand other people than that you don't understand yourself. Freakonomics gives you a chance to understand why people do these strange things (spoiler: because it is their best move in a complex world with no overarching sanity enforcement). Seeing incentives is the easiest first step to make, and one which many people haven't made yet. After you learn to see that others' actions are not what they seem, it becomes much easier to admit that your own decisions are also not what they seem.

As for the planning fallacy... What do you want, when there are often incentives to commit it?

comment by handoflixue · 2012-05-03T19:36:00.238Z · LW(p) · GW(p)

For raising the sanity waterline, the Freakonomics books do more than the Sequences.

Hmmm, if I'm going to talk about "applied intelligence" and "practical results", I really have to concede this point to you, even though I really don't want to.

The Sequences feel like they demonstrate more intelligence, because they appeal to my level of thinking, whereas Freakonomics feels like it is written for a more average-intelligence audience. But, of course, there's plenty of stuff written above my level, so unless I privilege myself rather dramatically, I have to concede that Eliezer hasn't really done anything special - especially since a lot of his rationalist ideas are available from other sources, if not outright FROM other sources (Bayes, etc.).

I'd still argue that the Sequences are a clear sign that Eliezer is intelligent ("bright") because clearly a stupid person could not have done this. But I mean that in the sense that probably most post-graduates are also smart - a stupid person couldn't make it through college.

Um... thank you for breaking me out of a really stupid thought pattern :)

Replies from: vi21maobk9vp
comment by vi21maobk9vp · 2012-05-04T05:18:36.310Z · LW(p) · GW(p)

He is obviously PhD-level bright, and probably quite a bit above the average PhD-holder level. He writes well, he has learned quite a lot of cognitive science, and I think that writing a thesis would be an expenditure of diligence and time, more than of effort, for him.

From the other point of view, some of his writings make me think that he doesn't have a feel for, for example, what is possible and what is not with programming, due to relatively limited practice. This also makes me heavily discount his position on FOOM when it clashes with the predictions of people from the field, with the predictions of, say, Jeff Hawkins, who studied both the AI sciences and neurology, and with Hanson's economic arguments.

Replies from: private_messaging
comment by private_messaging · 2012-05-05T16:09:17.929Z · LW(p) · GW(p)

It feels to me that he skipped all the fundamentals and everything not immediately rewarding when he taught himself.

The AI position is kind of bizarre. I know that people who themselves have some sort of ability gap when it comes to innovation - similar to a lack of mental visualization capability, but for innovation - assume that all innovation is done by a straightforward serial process (the kind that can be very much sped up on a computer), similar to how people who can't mentally visualize assume that tasks done using mental imagery are done without mental imagery. If you are like this and you come across something like Vinge's "A Fire Upon the Deep", then I can see how you may freak out about foom, 'Novamente is going to kill us all' style. There are people who think AI would eventually obsolete us, but very few of them would believe in the same sort of foom.

Replies from: vi21maobk9vp
comment by vi21maobk9vp · 2012-05-05T17:20:50.568Z · LW(p) · GW(p)

As for computation theory, he didn't skip all the fundamentals, only some parts of some of them. There are some red flags, though.

By the way, I wonder where the "So you want to become a Seed AI programmer" article from http://acceleratingfuture.com/wiki (long broken) can be found. It would be useful to have it around, or to have it publicly disclaimed by Eliezer Yudkowsky: it did help me decide whether I see any value in SIAI's plans or not.

Replies from: private_messaging
comment by private_messaging · 2012-05-05T19:36:08.142Z · LW(p) · GW(p)

There's an awful lot of fundamentals, though... I've replied to a comment of his very recently. It's not a question of what he skipped, it's a question of what few things he didn't skip. If you've got 100 outputs with 10 possible values each, you get 10^100 actions here (and that's not even big for innovation). There's nothing mysterious about being unable to implement something that deals with that in the naive way. Then, if you are to use better methods than brute-force maximizing, well, some functions are easier to find maxima of analytically; nothing mysterious about that either. Ultimately, you don't find successful autodidacts among people who had the opportunity to obtain an education the normal way at a good university.
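A rough sketch of the arithmetic behind that point, in Python; the output and value counts are the hypothetical numbers from the comment above, while the machine speed is an assumed figure, purely for illustration:

```python
# Back-of-the-envelope size of a hypothetical action space with 100 discrete
# outputs, each taking 10 possible values, and the time to enumerate it.
outputs = 100
values_per_output = 10
candidate_actions = values_per_output ** outputs   # 10**100 joint settings

evals_per_second = 1e18      # assumed: an exascale machine scoring one action per op
seconds_per_year = 3.156e7
years_to_enumerate = candidate_actions / evals_per_second / seconds_per_year

print(f"candidate actions: {candidate_actions:.1e}")
print(f"years to brute-force them all: {years_to_enumerate:.1e}")
```

Even under that generous assumed speed the exhaustive search never finishes, which is the point about needing something smarter than brute-force maximization for spaces like this.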

Replies from: vi21maobk9vp
comment by vi21maobk9vp · 2012-05-06T04:37:24.343Z · LW(p) · GW(p)

At this point you are being somewhat mean. It does look like honest sloppy writing on his part. With a minimum of goodwill, I can accept that he meant "effectively maximizing the expectation of". Also, it would still be somewhat interesting if only precisely one function could be maximized - at least some local value manipulations could be possible, after all. So it is not that obvious.

About autodidacts - the problem here is that even getting an education in some reputed place can still leave you with a lot of skipped fundamentals.

Replies from: private_messaging
comment by private_messaging · 2012-05-06T06:29:33.242Z · LW(p) · GW(p)

If he means "effectively maximizing the expectation of", then there is nothing mysterious about different levels of 'effectively' being available for different functions, and his rhetorical point with 'mysteriously' falls apart.

I agree that education also allows for skipped fundamentals. Self-education can be good if one has good external critique, such as learning to program and having the computer tell you when you're wrong. Blogging, not so much. Internal critique is possible but rarely works, and doesn't work for things that are in the slightest bit non-rigorous.

comment by private_messaging · 2012-05-02T19:47:22.314Z · LW(p) · GW(p)

I don't see what exactly you think academia failed at.

As for the sanity and consistently good ideas, you've got to redefine sanity as belief in stuff like foomism, and consider it sane to do some sort of theology with God replaced by 'superintelligence' - a clearly useless pastime if you ask me.

edit: a note on the superintelligence stuff: one could make some educated guesses about what a computational process that did N operations could do, but that would involve a lot of difficult mathematics. As an example of low-hanging fruit - one can show that even scarily many operations (think of a Jupiter brain thinking for hours), given perfect knowledge, won't let you predict the weather very far; the length of the prediction is ~log(operations) or worse. The powers of prediction, though, are the easiest to fantasise about.
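A minimal sketch of where a ~log(operations) horizon can come from, assuming the textbook chaotic-error model in which forecast error grows exponentially at some rate λ (the Lyapunov exponent); the symbols are illustrative, not taken from the comment:

```latex
% an initial uncertainty eps_0 grows until it reaches the tolerance Delta:
\varepsilon(t) \approx \varepsilon_0 \, e^{\lambda t}
\quad\Longrightarrow\quad
T_{\mathrm{useful}} \approx \frac{1}{\lambda} \ln\frac{\Delta}{\varepsilon_0}

% if extra operations mainly buy a finer initial state or grid, so that eps_0
% shrinks roughly in proportion to the work spent, then
T_{\mathrm{useful}} \sim \frac{1}{\lambda} \ln(\mathrm{operations}) + \mathrm{const}
```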

Replies from: handoflixue
comment by handoflixue · 2012-05-02T20:33:11.227Z · LW(p) · GW(p)

I don't see what exactly you think academia failed at.

Accessibility, both in the sense that much of the published information is NOT freely available to everyone, and in the sense that it tends to be very difficult to approach without a solid grounding in the field (the Sequences are aimed at smart people with maybe a year or two of college under their belt; academia doesn't have any such obligation to write clearly, and thus tends to collapse into jargon, etc.).

As for the sanity and consistently good ideas, you've got to redefine sanity as belief in stuff like foomism

A few bad ideas do not necessarily spoil the effort. In my opinion, the 'cult' ideas (such as FOOMism) are fairly easy to notice, and you can still gain quite a lot from the site while avoiding those. More importantly, I think anyone who does buy into the 'cult' of LessWrong is still probably a few steps ahead of where they started (if nothing else, they'd probably be prone to finding some other, potentially more dangerous cult belief if they didn't have something benign like FOOMism to focus on).

Replies from: private_messaging
comment by private_messaging · 2012-05-02T21:35:20.043Z · LW(p) · GW(p)

Well, before you can proclaim greater success, you've got to have some results that you can measure without being biased. I see a counterexample to the teachings actually working in the very statement that they are working, without solid evidence.

Apparently, knowing of confirmation bias doesn't make people actually try to follow some sort of process that's not affected by bias; instead it is just assumed that because you know of the bias, it disappears. What I can see here is people learning how to rationalize to a greater extent than they learn to be rational (if one can actually learn such a thing anyway). I should stop posting; I was only meaning to message some people in private.

edit: also, see, foom (and other such stuff) is a good counterexample to the claim that there's some raising of the sanity waterline going on, or some great success at thinking better. TBH the whole AI issue looks like EY never quite won the struggle with the theist instinct, and is doing theology. Are there even any talks about AI where computational complexity etc. is used to guess at what an AI won't be good at? Did anyone here even arrive at the understanding that a computer, whatever it computes, however it computes, even with scarily many operations per second, will be a bad weather forecaster (and probably a bad forecaster of many other things)? You can be to a human as a human is to a roundworm and only double the capability on things that are logarithmic in the operations. That's a very trivial thing that I just don't see understood here.

Replies from: Zack_M_Davis, handoflixue
comment by Zack_M_Davis · 2012-05-02T23:28:45.407Z · LW(p) · GW(p)

I should stop posting; I was only meaning to message some people in private.

I understand that you may not reply, given this statement, but ...

Are you sure you're actually disagreeing with Yudkowsky et al.? I agree that it's plausible that many systems, including the weather, are chaotic in such a way that no agent can precisely predict them, but I don't think that this disproves the "Foom thesis" (that a self-improving AI is likely to quickly overpower humanity and therefore that such an AI's goals should be designed very carefully). Even if some problems (like predicting the weather) are intractable to all possible agents, all the Foom thesis requires is that some subset of relevant problems is tractable to AIs but not to humans.

I agree that insights from computational complexity theory are relevant: if solving a particular problem of size n provably requires a number of operations that is exponential in n, then clearly just throwing more computing power at the problem won't help solve much larger problem instances. But (competent) Foom-theorists surely don't disagree with this.
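A one-line sketch of that point, with generic symbols (not anyone's specific claim): if solving an instance of size n takes on the order of c·2^n operations, a compute budget B only reaches instances up to about log2(B/c), so multiplying the budget by k buys just log2(k) more.

```latex
c \, 2^{\,n_{\max}} = B
\;\Longrightarrow\;
n_{\max}(B) = \log_2\!\frac{B}{c},
\qquad
n_{\max}(kB) = n_{\max}(B) + \log_2 k
```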

As to the claim that Yudkowsky et al. are merely doing theology, I agree that there are some similarities between the idea of a God and the idea of a very powerful artificial intelligence, but I don't think this observation is very relevant to the issue at hand. "Idea X shares some features with the popular Idea Y, but Idea Y is clearly false, therefore the proponents of Idea X are probably mistaken" is not a compelling argument. (I'm aware that this paraphrasing of the "Belief in powerful AI is like religion" argument takes an uncharitable tone, but it doesn't seem like an inaccurate paraphrase, either.) [EDIT: I shouldn't have written the previous two sentences the way I did; see Eugine Nier's criticism in the child comment and my reply in the grandchild.]

Replies from: Eugine_Nier, private_messaging
comment by Eugine_Nier · 2012-05-02T23:59:59.988Z · LW(p) · GW(p)

As to the claim that Yudkowsky et al. are merely doing theology, I agree that there are some similarities between the idea of a God and the idea of a very powerful artificial intelligence, but I don't think this observation is very relevant to the issue at hand. "Idea X shares some features with the popular Idea Y, but Idea Y is clearly false, therefore the proponents of Idea X are probably mistaken" is not a compelling argument. (I'm aware that this paraphrasing of the "Belief in powerful AI is like religion" argument takes an uncharitable tone, but it doesn't seem like an inaccurate paraphrase, either.)

The correct phrasing of that argument is:

Idea Y is popular and false.

Therefore, humans have a bias that makes them overestimate ideas like Y.

Idea X shares many features with idea Y.

Therefore, proponents of idea X are probably suffering from the bias above.

Replies from: private_messaging, Zack_M_Davis
comment by private_messaging · 2012-05-03T06:19:44.944Z · LW(p) · GW(p)

It's even worse than that. I am using theology more as an empirical example of what you get when the specific features are part of the thought process. Ultimately, what matters is the features in question. If the feature were 'wearing the same type of hat', then that wouldn't mean a lot; if the feature is a lack of any attempt to reason in the least sloppy manner (for example, reasoning about the computational complexity issues using math), then that's the shared cause, not just pattern matching.

Ultimately, what an intelligence would do, under the rule that you can just postulate it smart enough to do anything, is entirely irrelevant to anything. I do see implicit disagreement with that in doing this sort of thinking.

comment by Zack_M_Davis · 2012-05-03T00:35:59.163Z · LW(p) · GW(p)

I accept the correction. I should also take this occasion as a reminder to think twice the next time I'm inclined to claim that I'm paraphrasing something fairly and yet in such a way that it still sounds silly; I'm much better than I used to be at resisting the atavistic temptation (conscious or not) to use such rhetorical ploys, but I still do it sometimes.

My response to the revised argument is, of course, that the mental state of proponents of an Idea X is distinct from the actual truth or falsity of Idea X. (As the local slogan goes, "Reversed Stupidity Is Not Intelligence.") There certainly are people who believe in the Singularity for much the same reason many people are attracted to religion, but I maintain (as I said in the grandparent) that this isn't very relevant to the object-level issue: the fact that most of the proponents of Idea X are biased in this-and-such a manner doesn't tell us very much about Idea X, because we expect there to be biased proponents in favor of any idea, true or false.

Replies from: Eugine_Nier, private_messaging
comment by Eugine_Nier · 2012-05-03T02:15:52.245Z · LW(p) · GW(p)

I agree that this kind of outside view argument doesn't provide absolute certainty. However, it does provide evidence that part of your reasons for believing X are irrational reasons that you're rationalizing. Reduce your probability estimate of X accordingly.

I should also take this occasion as a reminder to think twice the next time I'm inclined to claim that I'm paraphrasing something fairly and yet in such a way that it still sounds silly;

Note that the formulation presented here is one I came up with on my own while searching for the Bayes-structure behind arguments based on the outside view.

comment by private_messaging · 2012-05-03T06:24:37.040Z · LW(p) · GW(p)

I wasn't talking about idea X itself; I was talking about the process of thought about idea X. We were discussing how smart EY is, and I used the specific type of thinking about X as a counterexample to the sanity waterline being raised in any way.

One can think about plumbing wrong, e.g. imagining that pipes grow as part of a pipe plant that must be ripe or the pipes will burst, even though pipes and valves and so on exist and can be thought of correctly, and plumbing is not an invalid idea. It doesn't matter to the argument I'm making whether AIs would foom (whether pipes would burst at N bars). It only matters that the reasons for the belief aren't valid, and aren't even close to being valid (especially for the post-foom state).

edit: Maybe the issue is that the people in the West seem not to have enough proofs in math homework early enough. You get bad grades for bad proofs, regardless of whether the things you proved were true or false! Some years of school make you internalize that well enough. Now, the people who didn't internalize this are very annoying to argue with. They keep asking that you prove the opposite; they do vague reasoning that's wrong everywhere and ask you to pinpoint a specific error; they ask you to tell them the better way to reason if you don't like how they reasoned about it (imagine this for Fermat's last theorem a couple of decades ago, or now for P!=NP); they use every excuse they can think of to disregard what you say on the basis of some fallacy.

edit2: or rather, they disregard the critique as 'not good enough', akin to disregarding a critique of a flawed mathematical proof if the critique doesn't prove the theorem true or false. Anyway, I just realized that if I think that Eliezer is a quite successful sociopath who's scamming people for money, that results in higher expected utility for me in reading his writings (more curiosity) than if I think he is a self-deluded person and the profitability of the belief is an accident.

Replies from: handoflixue
comment by handoflixue · 2012-05-03T19:43:28.692Z · LW(p) · GW(p)

edit: Maybe the issue is that the people in the West seem not to have enough proofs in math homework early enough.

From personal experience, we got introduced to those in our 10th year (might have been 9th?), so I would have been 15 or 16 when I got introduced to the idea of formal proofs. The idea is fairly intuitive to me, but I also have a decent respect for people who seem to routinely produce correct answers via faulty reasoning.

Replies from: private_messaging
comment by private_messaging · 2012-05-04T07:05:58.060Z · LW(p) · GW(p)

so you consider those answers correct?

Replies from: handoflixue
comment by handoflixue · 2012-05-08T23:31:27.181Z · LW(p) · GW(p)

so you consider those answers correct?

I also have a decent respect for people who seem to routinely produce correct answers via faulty reasoning.

I assume you're referring to that?

A correct ANSWER is different from a correct METHOD. I treat an answer as correct if I can verify it.

Problem: X^2 = 9. Solution: X = 3.

It doesn't matter how they arrived at "X = 3"; it's still correct, and I can verify that (3^2 = 9, yep!).

comment by private_messaging · 2012-05-03T06:11:15.361Z · LW(p) · GW(p)

But (competent) Foom-theorists surely don't disagree with this.

It's not about whether they disagree, it's about whether they actually did it themselves; that would make them competent. Re: Nier, writing a reply.

comment by handoflixue · 2012-05-03T19:39:11.598Z · LW(p) · GW(p)

Well, before you can proclaim greater success, you've got to have some results that you can measure without being biased. I see a counterexample to the teachings actually working in the very statement that they are working, without solid evidence.

Hmmmm, you're right, actually. I was using the evidence of "this has helped me, and a few of my friends" - I have decent anecdotal evidence that it's useful, but I was definitely overplaying its value simply because it happens to land in the "sweet spot" of my social circle. A book like Freakonomics is aimed at a less intelligent audience, and I'm sure there are plenty of resources aimed at a more intelligent audience. The Sequences just happen to be (thus far) ideal for my own social circle.

Thank you for taking the time to respond - I was caught up exploring an idea and hadn't taken the time to step back and realize that it was a stupid one.

I do still feel the Sequences are evidence of intelligence - a stupid person could not have written these! But they're not any particular evidence of an extraordinary level of intelligence. It's like a post-graduate degree; you have to be smart to get one, but there are a lot of similarly smart people out there.

Replies from: private_messaging
comment by private_messaging · 2012-05-04T07:21:19.911Z · LW(p) · GW(p)

Well, that would depend on how you define intelligence. What set us apart from other animals is that we could invent the stone axe (the one with the stone actually attached to the stick; that's hard to do). If I see someone who invented something, I know they are intelligent in this sense. But writings without significant innovation do not let me conclude much. Since the IQ tests, we started mixing up different dimensions of intelligence. The IQ tests have very little loading on data-heavy or choices-heavy (with very many possible actions) processing - some types of work, too.

Replies from: None
comment by [deleted] · 2012-05-04T07:34:27.011Z · LW(p) · GW(p)

Well, that would depend on how you define intelligence.

Cross-domain optimization. Unless there's some special reason to focus on a more narrow notion.

Replies from: private_messaging
comment by private_messaging · 2012-05-04T09:51:08.788Z · LW(p) · GW(p)

What did he optimize? Beyond being able to make some income in a dubious way, that is. Ultimately, such definitions are pretty useless for computationally bounded processes. Some tasks nowadays involve choosing between very few alternatives - thanks to the "ready-to-eat" alternatives premade by other people - but by and large the interesting ones are those where you generate an action given an enormous number of alternatives.

edit: actually, I commented on a related topic. It's btw why I don't think EY is particularly intelligent. Maybe he's optimizing what he's posting for appearance instead of predictive power, though, in which case, okay, he's quite smart. Ultimately, in my eyes, he's either a not-very-bright philosopher or a quite bright sociopath; I'm not sure which.

Replies from: handoflixue, None
comment by handoflixue · 2012-05-08T23:36:30.726Z · LW(p) · GW(p)

Just to be sure I understand you:

You agree that Eliezer often does well at optimizing for problems with a small answer space (say 10 options), but what you are measuring is instead the ability to perform in situations with a very large answer space (say, 10^100 options), and you don't see any evidence of that latter ability?

Could you point to some examples that DO demonstrate that latter ability? I'm genuinely curious what sort of resources are available for handling that sort of "large answer space", and what it looks like when someone demonstrates that sort of intelligence, because it's exactly the sort of intelligence I tend to be interested in.

I'd definitely agree that a big obstacle a lot of smart people run into is being able to quickly and accurately evaluate a large answer space. I'm not convinced either way on where Eliezer falls on that, though, since I can't really think of any examples of what it looks like to succeed there.

I can only recall examples where I thought someone clearly had problems, or examples where someone solved it by consolidating the problem to a much smaller answer space (e.g. solving "how to meet women" by memorizing a dozen pickup routines).

comment by [deleted] · 2012-05-04T12:12:18.423Z · LW(p) · GW(p)

What did he optimize?

Presenting a complex argument requires a whole host of sub-skills.

Beyond being able to make some income in a dubious way, that is. Ultimately, such definitions are pretty useless for computationally bounded processes. Some tasks nowadays involve choosing between very few alternatives - thanks to the "ready-to-eat" alternatives premade by other people - but by and large the interesting ones are those where you generate an action given an enormous number of alternatives.

I understand by this and the rest of your comment that you have motivated yourself (for some reason) into marginalizing EY and his work. I've no particular stake in defending EY -- whether or not he is intelligent (and it's highly probable he's at least baseline human, all things (reasonably) considered), his work has been useful to myself and others, and that's all that really matters.

On the other hand, you're uncharitable and unnecessarily derogatory.

Replies from: private_messaging
comment by private_messaging · 2012-05-06T06:37:32.000Z · LW(p) · GW(p)

Presenting a complex argument requires a whole host of sub-skills.

Nowadays, with the internet, you can reach a billion people; there's a lot of self-selection in the audience.

On the other hand, you're uncharitable and unnecessarily derogatory.

He's spreading utter nonsense, similar in nature to anti-vaccination campaigning. Computational technology is important to medicine, and the belief cluster of "AI etc. is going to kill us all" has already resulted in bombs being sent to people. No, I am not going to be charitable to a person who has a real talent for presenting (not as fiction, but as 'science') completely misinformed BS that - if he ever gains traction - will be an inspiration for more of this. I'm not charitable to any imams, any popes, any priests, or any cranks. Suppose he were an 'autodidact' biochemist (with no accomplishments in biochemistry) telling people about some chemical dangers picked from science fiction (and living off donations to support his 'research'). CS is not any simpler than biochemistry. I'm afraid we have a necessity not to have a politeness bias about such issues.

Replies from: drethelin
comment by drethelin · 2012-05-06T17:49:51.965Z · LW(p) · GW(p)

There is actual world-dangerous work going on in biochemistry. Every single day, people work with Ebola, Marburg, bird/swine flus, and hosts of other deadly diseases that have the potential to wipe out huge portions of humanity. All of this is treated EXTREMELY seriously, with quarantines, regulations, laws, and massively redundant safety procedures. This is to protect us from things like Ebola outbreaks in New York that have never happened outside of science fiction. If CS is not any simpler than biochemistry, and yet NO ONE is taking the dangers as seriously as those of biochemistry, then maybe there SHOULD be someone talking about "science fiction" risks.

Replies from: private_messaging, asr, JoshuaZ
comment by private_messaging · 2012-05-06T18:51:14.063Z · LW(p) · GW(p)

Perhaps you should instead update on the fact that the experts in the field clearly are not reckless morons who could be corrected by ignorant outsiders - in the case of biochemistry, and probably in the case of CS as well.

comment by asr · 2012-05-06T19:44:28.359Z · LW(p) · GW(p)

I think we are justified, as a society, in taking biological risks much more seriously than computational risks.

My sense is that in practice, programming is much simpler than biochemistry. With software, we typically are working within a completely designed environment, and one designed to be easy for humans to reason about. We can do correctness proofs for software; we can't do anything like that for biology.

Programs basically stay put the way they are created; organisms don't. For practical purposes, software never evolves; we don't have a measurable rate of bit-flip errors or the like resulting in working-but-strange programs. (And we have good theoretical reasons to believe this will remain true.)

If a virulent disease does break loose, we have a hard time countering it, because we can't re-engineer our bodies. But we routinely patch deployed computer systems to make them resistant to particular instances of malware. The cost of a piece of experimental malware getting loose is very much smaller than with a disease.

Replies from: drethelin
comment by drethelin · 2012-05-06T20:21:30.321Z · LW(p) · GW(p)

The entire point of researching self-improving AI is to move programs from the world of software that stays put the way it's created, never evolving, into a world we don't directly control.

Replies from: asr
comment by asr · 2012-05-07T00:25:25.838Z · LW(p) · GW(p)

Yes. I think the skeptics don't take self-improving AI very seriously. Self-modifying programs in general are too hard to engineer, except in very narrow, specialized ways. A self-modifying program that rapidly achieves across-the-board superhuman ability seems like a fairy tale, not a serious engineering concern.

If there were an example of a program that self-improves in any nontrivial way at all, people might take this concern more seriously.

comment by JoshuaZ · 2012-05-06T18:05:26.850Z · LW(p) · GW(p)

While Ebola outbreaks in New York haven't happened, Ebola is a real disease, and we know exactly what it would do if there were an outbreak in New York. In all these cases we have a pretty good handle on what the diseases would do, and we've seen extreme examples of diseases in history, such as the Black Death wiping out much of Europe. That seems like a distinct issue from AI, where no one has seen any form of serious danger in the historical or present-day world.

Replies from: drethelin
comment by drethelin · 2012-05-06T18:17:59.257Z · LW(p) · GW(p)

http://en.wikipedia.org/wiki/Stuxnet

Replies from: JoshuaZ
comment by JoshuaZ · 2012-05-10T00:41:46.653Z · LW(p) · GW(p)

If anything, that underlines the point even more: in the small sample we do have, things haven't done much damage beyond the narrow bit of damage they were programmed to do. So the essential point that we haven't seen any serious danger from AI seems valid. (Although there's been some work on making automated exploit searchers which, conceivably attached to something like Stuxnet with a more malevolent goal set, could be quite nasty.)

comment by taw · 2012-05-01T12:19:23.414Z · LW(p) · GW(p)

I think EY is a self-important wanker, and SIAI is a society for self-important wanking, but I enjoy LW quite a lot. (and if you see LW as anything more than entertainment, you're doing it wrong)

And lukeprog is entirely wrong about LW being mainstream - there are parts of LWian beliefs that are "mainstream", but a lot of them are anything but. Cryonics, AI foomism, weird decision theories etc. - that's all extremely far from the mainstream, and also very wrong.

Bayesian fundamentalism is also non-mainstream. Yes, Bayes' rule is all neat, but it isn't the answer to the universe and everything.

Replies from: JoshuaZ, IlyaShpitser
comment by JoshuaZ · 2012-05-01T14:59:44.902Z · LW(p) · GW(p)

I can see why this comment has been downvoted, but it is in a thread asking what people think of LW. In that context, it doesn't seem like a good idea to downvote such comments. Requests for honest assessment should be treated well.

comment by IlyaShpitser · 2012-05-01T19:25:44.820Z · LW(p) · GW(p)

I agree that LW can be entertaining. However, if you always treat everything here as a big joke, you will miss opportunities to learn a few things (if that sort of thing is important to you).

Replies from: taw
comment by taw · 2012-05-01T19:58:13.485Z · LW(p) · GW(p)

Entertaining doesn't mean it's a joke. Mythbusters is entertainment, documentaries are entertainment, Wikipedia is (usually) entertainment. Humans learn mostly for the fun of it - just like all young mammals (except that adult humans do so as well).

If you claim that LW has some serious practical value, in terms of instrumental rationality - then I agree that its utility is greater than zero, but I could easily point you towards places online where you can learn instrumental rationality far better than here. And I'm always the first to upvote such posts when they occur (like those about GiveWell, for example; they're not terribly common).

If you claim that LW Solves Serious Problems And Is The Only Hope For Humanity, then unfortunately you've fallen for some serious crackpottery.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2012-05-01T20:20:28.253Z · LW(p) · GW(p)

No, I merely mean that some bright/knowledgeable people occasionally post here. Learning is not always fun, or for fun.

Replies from: taw
comment by taw · 2012-05-01T20:41:17.861Z · LW(p) · GW(p)

I don't deny many people here are smart and interesting.

If you could point me to some things on LW I should definitely learn (for reasons more serious than entertainment), I'd definitely like to see that.

comment by Aharon · 2012-04-29T11:38:20.440Z · LW(p) · GW(p)

How many people did you look at to get this general impression of the general impression? Unfortunately, I don't have any comparison, because both the site and the person seem to be largely unknown in Germany. I can only speak for myself - I didn't get this impression. Sure, there are some fringe ideas, but the discussion contains tons of stuff not in the slightest related to Eliezer Yudkowsky or his opinions.

Replies from: JScott
comment by JScott · 2012-04-29T15:38:55.928Z · LW(p) · GW(p)

Not a large number; this is mostly gathered from discussions on internet forums. The sites I hang around on are generally science-fiction related in nature. While there are a few people who know of LW and think it has something valuable, many (relatively high-status) members think of it as being overly "self-important" or "full of hot air"; most don't outright disagree with the overall point (or never pin down their disagreement exactly), but state that the jargon makes LW useless, or that it states obvious things in a pretentious way, or that it's a close-minded community of "true believers" (the places I hang out are largely atheistic, so this is very much an insult).

In some ways, LW has suffered from HPMoR - anyone who doesn't like the story for whatever reason isn't likely to like the site, and I think it's plausible that some of those people would have liked the site if not for the story - though it's also likely that many more people wouldn't have found LW at all without HPMoR.

When introducing people to Less Wrong outside the internet, my much more limited experience is that to most people it's just an interesting blog. It's shiny, but not useful. I've tried to get a family member to read parts of the Sequences in hopes of getting to a point where we could resolve a long-standing disagreement, but they don't show much interest in it.

Replies from: TheOtherDave, Swimmy
comment by TheOtherDave · 2012-04-29T16:07:27.377Z · LW(p) · GW(p)

I've tried to get a family member to read parts of the Sequences in hopes of getting to a point where we could resolve a long-standing disagreement, but they don't show much interest in it.

In my experience, it works far better to internalize the message of a text and then communicate the pieces of that message that are relevant to my discussions with people as they come up, than to point people to the text.

Of course, it's also a lot more work, and not all discussions (or relationships) are worth it.

comment by Swimmy · 2012-04-30T01:51:46.664Z · LW(p) · GW(p)

I've linked LW several times on a (videogame) forum and the reaction has been mostly positive. A few are regular readers now, though I don't believe any participate in discussion. I think two have read most of the sequences. At least one regularly links EY articles on Facebook.

Another small sample, of course. And I haven't really linked articles on FAI/MWI/cryonics.

comment by Vladimir_M · 2012-04-28T19:29:01.125Z · LW(p) · GW(p)

I don't think "parochial" is the right word here -- a more accurate term for what you're describing would be "contrarian."

In any case, insofar as there exists some coherent body of insight that can be named "Less Wrong rationality," one of its main problems is that it lacks any really useful methods for separating truth from nonsense when it comes to the output of the contemporary academia and other high-status intellectual institutions. I find this rather puzzling: on the one hand, I see people here who seem seriously interested in forming a more accurate view of the world -- but at the same time, living in a society that has vast powerful, influential, and super-high-status official intellectual institutions that deal with all imaginable topics, they show little or no interest in the question of what systematic biases and perverse incentives might be influencing their output.

Now, the point of your post seems to be that LW is good because its opinion is in line with that of these high-status institutions. (Presumably thanks to the fact that both sides have accurately converged onto the truth.) But then what exactly makes LW useful or worthwhile in any way? Are the elite universities so marginalized and powerless that they need help from a blog run by amateurs to spread the word about their output? It really seems to me that if a forum like LW is to have any point at all, it can only be in identifying correct contrarian positions. Otherwise you might as well just cut out the middleman and look at the mainstream academic output directly.

Replies from: sufferer, lukeprog, None
comment by sufferer · 2012-04-29T00:47:14.131Z · LW(p) · GW(p)

Are the elite universities so marginalized and powerless that they need help from a blog run by amateurs to spread the word about their output?

Vladimir_M, what makes you think that elite universities have the desire and money/power to proselytize their "output"? I mean, you surely know about the trouble they are having trying to win the propaganda fight against creationism, and against global warming denial. And then there's anti-vaccination and the moon landing conspiracy.

In fact the statement that I quoted seems to so obviously deserve the answer "yes, they are unable to spread the word" that I wonder whether I am missing something.

Perhaps you were thinking that elite universities only need to spread the word amongst, say, the smartest 10% of the country for it to matter. But even in that demographic, I think you will find that few people know about this stuff. Perhaps with the release of books such as Predictably Irrational things have improved. But such books still seem somewhat inadequate, since they don't aim to cleanly teach rationality; rather, they aim to give a few cute rationality-flavoured anecdotes. If someone reads Predictably Irrational, I doubt that they would be able to perform a solid analysis of the Allais Paradox (because Predictably Irrational doesn't teach decision theory in a formal way), and I doubt that their calibration would improve (because Predictably Irrational doesn't make them play calibration games).

There are very, very few people employed at universities whose job description is "make the public understand science". As far as I am aware, there is literally no one in the world whose job title is "make the public understand cognitive-biases-style rationality".

Replies from: None, Barry_Cotter
comment by [deleted] · 2012-04-30T06:54:38.402Z · LW(p) · GW(p)

Vladimir_M, what makes you think that elite universities have the desire and money/power to proselytize their "output"?

Mencius Moldbug has convincingly argued on his blog that intellectual fashion among the ruling class follows intellectual fashion at Harvard by an offset of about one generation. A generation after that, the judicial and journalist class exiles any opposition to such thought from public discourse, and most educated people move significantly towards it. A generation after that, through public schools and by-now-decades-long exposure to media issuing normative statements on the subject, such beliefs are marginalized even among the uneducated, making any populist opposition to society-wide changes in stated values or policy a futile gesture destined to live only one season.

It is indeed a wonderful machine for generating political power through opinion in Western-type societies. While I generally have no qualms about Harvard being an acceptable truth-generating machine when it comes to, say, physics, in areas where it has a conflict of interest, like economics or sociology, let alone political science or ethics, it is not a reliable truth-generating machine. It is funny how these fields get far more energetic promotion than, say, physics or chemistry.

I am fairly certain the reason creationism is still around as a political force in some US states is because creationism is not a serious threat to The Cathedral. However, I do think modern-style rationality is the honest interest of only a small, small part of academia; a far larger fraction is busily engaged in producing anti-knowledge, in its most blatant form in the something-studies departments, and in a more convoluted form in the ugh-field rationalizations found in everything from biology to philosophy.

Academia taken as a whole has no incentive to promote modern rationality. Cargo-cult rationality and worship of science, so that anti-knowledge factories can siphon status from, say, computer scientists or mathematicians - perhaps. But not actual rationality.

So yes, LessWrong should spend effort on promoting that; however, it should not abstain from challenging and criticizing academia in places where it is systematically wrong.

Replies from: sufferer
comment by sufferer · 2012-04-30T16:42:25.487Z · LW(p) · GW(p)

As Moldbug has convincingly argued on his blog, intellectual fashion among the ruling class follows intellectual fashion at Harvard by an offset of about one generation. A generation after that, the judicial and journalist class exiles any opposition to such thought from public discourse

then

creationism is still around

Contradiction much?

because creationism is not a serious threat to The Cathedral

If the "judicial and journalist class" only attacks popular irrational ideas which are "a serious threat to The Cathedral", then what other irrationalities will get through? Maybe very few irrational ideas are a "serious threat to The Cathedral", in which case you just admitted that academia cannot "proselytise it's output". What about antivax? Global warming denial? Theism? Anti-nuclear-power irrationality? Crazy, ill-thought-through and knee jerk anticapitalism of the OWS variety? So many popular irrational beliefs...

Replies from: None, None, None
comment by [deleted] · 2012-05-01T07:54:01.514Z · LW(p) · GW(p)

Maybe very few irrational ideas are a "serious threat to The Cathedral", in which case you just admitted that academia cannot "proselytise its output". What about antivax? Global warming denial? Theism? Anti-nuclear-power irrationality? Crazy, ill-thought-through, knee-jerk anticapitalism of the OWS variety? So many popular irrational beliefs...

At the very least, crazy, ill-thought-through, knee-jerk anticapitalism and anti-nuclear-power irrationality often are "the output of academia". I mean, you can take classes in them and everything. ;)

Sure, one can cherry-pick and say that only this or that part of academia is or isn't trustworthy and does or doesn't deserve our promotion of its output, but hey, that was my position, remember?

What about antivax? Global warming denial? Theism?

Antivaxers' irrationality is just garden-variety health-related craziness, which is regrettable since it costs lives, but it springs up constantly in new forms. Its cost is actually currently pretty low compared to others.

Global warming, or at least talking vaguely about "global warming denial", is considered somewhat mind-killing on LW. Also, much as with MM, I suggest you do a search, read up, and participate in previous debates. My personal position is that it is happening, yet spending additional marginal effort on solving it or getting people to solve it has negative net utility. I suggest you read up on optimal philanthropy and efficient charity to get a better feel for what I mean by this.

Theism? Meh. I used to think this was an especially dangerous kind of crazy, yet now I think it is mostly relatively harmless compared to other craziness, at least in the context of Western cultures. When happy new atheists first stumble upon LW, I sometimes find myself in the awkward position of smiling nervously and then trying to explain that we now have to deal with real problems of irrationality, like society rationalizing death and ageing as something good, or ignoring existential risk.

comment by [deleted] · 2012-05-01T07:02:05.155Z · LW(p) · GW(p)

Contradiction much?

No. I dislike repeating myself:

I am fairly certain the reason creationism is still around as a political force in some US states is because creationism is not a serious threat to The Cathedral.

But the following part of your response amused me and furthermore provoked some thought on the topic of conspiracy theories, so have a warm fuzzy.

Let's at least be consistent about our conspiracy theories ...

I am not quite sure what you mean by that phrase. Can you please clarify?

And finally, it is a convenient tool to paint something clearly and in vivid colours as low status; it is a boo light applied to any explanation that has people acting in anything that can be described as self-interest and is a few inferential jumps away. One could argue this is the primary meaning of calling an argument a conspiracy theory in online debates.

I'm going to be generous and assume that this last meaning wasn't the primary intended one, since you have since edited the line out of your reply.

Tying the content of the linked post back to our topic: I will admit Moldbug shows off his smarts and knowledge with silly, interesting, and probably wrong ideas when he talks about his proposals for a neocameralist state. He can be a bit crankish talking about it, but hey, show me a man who made a new ideology and wasn't a bit crankish about it! But no, I think that when he talks about recent history, politics, and sociology he is a most excellent map-maker and not a "conspiracy nut" (though the pattern match is an understandable one to make in ignorance).

First, there is a reason I talked about a "power machine" and not a sinister cabal. If you have a trusted authority to which people outsource their thinking and from which they download their favoured memeplexes, then, allowing even for some very limited memetic evolution, you will see the thing (all else being equal) try to settle. Those structures that aren't by happenstance built so that the memeplexes they emit increase trust in the source will tend to be out-competed by those that are. Don't we have a working demonstration of this in organized religion? Notice how this does not require a centuries-spanning conspiracy of Christian authorities consistently and consciously working to enhance their own status and nothing else while lying to the masses; no, I'm pretty sure most of them honestly believed in their stated map of reality. Yet the Church did end up working as such a belief pump, and it even told us it was a belief pump that could be derived as true and necessary from pure reason. Funny how that worked out. Also recall the massive pay-offs in a system where the sillies in the brains of the public or of experts directly matter in deciding whom the government allots resources to. Not much coordination is needed for those peddling their particular wares to individually exploit this, or for them to realize which soapbox is the best one to be standing on. If anything like a trusted soapbox exists, there will be great demand to stand on it; are we sure the winner of such a fight is actually someone who will not abuse the soapbox's truth-providing credentials? Maybe the soapbox comes equipped with some mechanisms to make it so; still, they had better be marvellously strong, since they will probably be heavily strained.

Secondly, it is not a model that anthropomorphizes society or groups needlessly; indeed, it might do well to incorporate more of that, since large chunks of our civilization were redecorated by the schemes of hundreds of petty and ambitious historically important figures who wanted to mess with ... eh, I mean optimize, power distribution.

On the story thing: well, I do admit that component plays a part in biasing me and others on LW towards finding it more plausible. MM is a good if verbose writer. Speaking of verbosity, you should consider my current take an incomplete and abridged version, not the full argument; it is also possible I plain misremember some details, so I hope other posters familiar with MM will correct me. I have the impression you simply aren't familiar with his thinking, since you seem to attack a very weak and mangled form of his argument, seemingly gleaned only from an ungenerous reading of the parent posts. I strongly recommend, even if you judge the value of the additional information gained from reading his writings to be low, doing a search on LW for other discussion of these ideas in various comment sections and so on, since a lot has been written on the subject. Browsing the comment history of people who often explicitly talk about such topics also seems like a good idea. Remember, this is just some dude on the internet, but it is a dude on the internet whom Robin Hanson considered worth debating and engaging, and someone whom many LWers read and think about (note I didn't say agree with). Discussions debating his ideas are also often upvoted. You will also see respected and much more formidable rationalists than myself occasionally name-drop or reference him. If you have some trust in the LessWrong rationalist community, you probably need to update on how seriously you should take this particular online hobo distributing photocopied essays.

Note: This reply was written before the parent was edited. I will respond to the added material in a separate post.

Edit: Abridged the text by moving the analysis of the conspiracy-theory failure mode to an open discussion post.

comment by [deleted] · 2012-04-30T17:16:20.748Z · LW(p) · GW(p)

Contradiction much?

No. Note that I hate repeating myself:

I am fairly certain the reason creationism is still around as a political force in some US states is because creationism is not a serious threat to The Cathedral.

But the following part of your response amused me, so also feel free to consider yourself forgiven.

Let's at least be consistent about our conspiracy theories ...

Conspiracy theories are generally used to explain events or trends as the results of plots orchestrated by covert groups or organizations; sometimes people use the term to talk about theories that important political, social, or economic events are the products of secret plots largely unknown to the general public. Ah, poor me; alas, I seem to have been taken in by a crank who ignores the difficulty of coordination, seeks esoteric explanations when plain ones will do, and shows off his smarts by spinning tales.

I will admit Moldbug shows off his smarts with silly and probably wrong ideas when he talks about his hypothetical neocameralist state; he can be a bit crankish talking about it, but hey, show me a man who made a new ideology and wasn't crankish about it! But no, I think that when he talks about recent history and sociology he is a most excellent map-maker.

I have a sneaking feeling that you simply aren't familiar with Moldbug's thinking, the extensive LessWrong discussion of it, or even Robin Hanson's criticism of it.

I fail to see how this applies, since Moldbug's description of political reality needs no wicked men cackling behind the curtain; indeed, he elegantly shows a plausible means by which it arises.

comment by Barry_Cotter · 2012-04-29T09:45:43.949Z · LW(p) · GW(p)

Vladimir_M, what makes you think that elite universities have the desire and money/power to proselytize their "output"?

I reversed a downvote to this because other people should also suffer by seeing a question this stupid. Fifteen members of the 111th Congress earned bachelor's degrees from Harvard, 11 current congressmen called Stanford home during their undergraduate days, and ten members of Congress got their bachelor's from Yale. This includes neither MBAs nor JDs. Source here

Replies from: None, sufferer
comment by [deleted] · 2012-04-29T16:09:11.841Z · LW(p) · GW(p)

So? There are powerful people with degrees from prestigious universities. That doesn't necessarily imply that those people care about spreading scientific knowledge, or that they're willing to use their power to accomplish that goal. Nor does it imply that the universities themselves care about spreading scientific knowledge in an accessible way (just publishing academic papers doesn't count).

Also, don't insult people.

comment by sufferer · 2012-04-29T14:12:17.220Z · LW(p) · GW(p)

But as I said in my comment, there are numerous issues (creationism, moon landing hoax, antivax, global warming denial, and I should add theism) where a large amount of public opinion is highly divergent from the opinions of the vast majority of academics. So clearly the elite universities are not actually that good at proselytizing their output.

Perhaps it has been downvoted because people see elite universities with large endowments and lots of alumni in Congress? But still, that money cannot be spent on proselytizing. And how exactly is a politician who went to Stanford or Harvard supposed to have the means and motive to come out against a popular falsehood? Somehow science is not doing so well against creationism. As an example, Rick Santorum went to Penn State (a Public Ivy), but then expressed the view that humans did not evolve from "monkeys". Newt Gingrich actually was a lecturer, and said intelligent design should be taught in school.

EDIT: Also, yes, I am stupid in an absolute sense. If I were smart, I would be rich & happy ;-0

comment by lukeprog · 2012-04-28T20:57:51.250Z · LW(p) · GW(p)

Presumably thanks to the fact that both sides have accurately converged onto the truth

I think that Eliezer basing 1/4 of The Sequences on articles from the MIT Encyclopedia of the Cognitive Sciences / Judgment under Uncertainty had a lot to do with it.

Replies from: TheOtherDave, fubarobfusco, ChrisHallquist
comment by TheOtherDave · 2012-04-28T21:52:30.678Z · LW(p) · GW(p)

It occurs to me, now that you point this out, that my earlier comment about "mainstream cogsci" may have been misleading.

I was indoctrinated into cogsci as an MIT Course IX major in the 80s, and really that's what I think about when I think about the field. I have no idea if MIT itself is considered "mainstream" or not, though.

Replies from: JoshuaZ
comment by JoshuaZ · 2012-04-29T21:23:22.154Z · LW(p) · GW(p)

As someone who has taken cogsci classes more recently than that, I don't think the timing of the research changes much here. Your earlier comment seems to summarize decently which aspects are not part of mainstream cogsci (or possibly even mainstream thought at all).

comment by fubarobfusco · 2012-04-29T00:28:58.855Z · LW(p) · GW(p)

Between those and Jaynes' Probability Theory, Pearl's Causality, and Drescher's Good and Real you have quite a lot of it.

Replies from: David_Gerard
comment by David_Gerard · 2012-04-29T08:35:57.698Z · LW(p) · GW(p)

AIUI Eliezer didn't actually read Good and Real until the sequences were finished.

comment by ChrisHallquist · 2012-04-29T02:44:10.653Z · LW(p) · GW(p)

Really need to read both of these books.

EDIT: On second thought, which sequences were these?

Replies from: lukeprog
comment by lukeprog · 2012-04-29T05:45:11.470Z · LW(p) · GW(p)

All the parts on heuristics and biases and Bayesianism and evolutionary psychology.

comment by [deleted] · 2012-04-29T16:23:31.315Z · LW(p) · GW(p)

I see people here who seem seriously interested in forming a more accurate view of the world -- but at the same time, living in a society that has vast powerful, influential, and super-high-status official intellectual institutions that deal with all imaginable topics, they show little or no interest in the question of what systematic biases and perverse incentives might be influencing their output.

I don't get that impression. The problems of biases and perverse incentives pervading academia seem to be common knowledge around here. We might not have systematically reliable methods for judging the credibility of any given academic publication, but that doesn't imply total ignorance (merely a lack of rationalist superpowers).

But then what exactly makes LW useful or worthwhile in any way? Are the elite universities so marginalized and powerless that they need help from a blog run by amateurs to spread the word about their output?

That question sounds weird next to the preceding paragraph. Those perverse incentives mentioned earlier aren't exactly incentives to spread well-established scientific knowledge outside academic circles. The universities need help, not due to lack of power and status but due to lack of effort.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-04-29T20:57:10.490Z · LW(p) · GW(p)

The problems of biases and perverse incentives pervading academia seem to be common knowledge around here.

This is common knowledge in the abstract, i.e., as long as one avoids applying this knowledge to adjust one's estimate of any particular "official position".

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2012-05-06T21:36:17.028Z · LW(p) · GW(p)

Here is a counterexample from last Thursday. Here is a related one from last month. Both of these threads deal with reasons why a particular scientifically endorsed position may be wrong, and with how one might find out whether it is.

comment by shminux · 2012-04-28T05:14:11.566Z · LW(p) · GW(p)

I've spent so much time in the cogsci literature that I know the LW approach to rationality is basically the mainstream cogsci approach to rationality.

Is this the opinion of the cogsci experts, as well? If not, then either it is not true, or you have a communication problem.

(My personal feeling, as a complete non-expert, is that, once you shed the FAI/cryonics/MWI fluff (and possibly something about TDT/UDT, though I know next to nothing about that), and align the terminology with the mainstream, there is nothing parochial about "modern rationality". If anything, there is probably enough novel stuff in the sequences for a graduate degree or two.)

Replies from: Thomas
comment by Thomas · 2012-04-28T08:32:13.531Z · LW(p) · GW(p)

once you shed the FAI/cryonics/MWI fluff

I would amputate exactly these. I doubt, however, that the site community as such can survive with those gone.

Replies from: vi21maobk9vp
comment by vi21maobk9vp · 2012-04-28T17:52:29.510Z · LW(p) · GW(p)

There is at least a non-negligible minority (or a silent majority?) of those who would retroactively call it an improvement if your wish were granted by some magic measure.

Even though I do think decoherence-based MWI is a better model than the Copenhagen non-interpretation, it doesn't look like there are any new arguments for or against it on LW anyway.

But given that LW is run mostly by SingInst people, and they do believe in possibility of FOOM, there is no reason for FAI to become offtopic on LW. Most of the time it is easy to recognize by the thread caption, so it is easy for us to participate only in those discussions that are interesting to us.

Replies from: Oscar_Cunningham, loup-vaillant, Alexei
comment by Oscar_Cunningham · 2012-04-28T18:02:16.641Z · LW(p) · GW(p)

Discussing FAI was banned for two months when LW started. Maybe we could do that again?

comment by loup-vaillant · 2012-05-02T10:53:22.156Z · LW(p) · GW(p)

Even though I do think decoherence-based MWI is a better model than the Copenhagen non-interpretation, it doesn't look like there are any new arguments for or against it on LW anyway.

Having read the Quantum Physics Sequence, I think Eliezer himself would agree with that. I think the actual point was to show the crushing power of already existing arguments, to showcase a foolish long-standing rejection of compelling arguments by very bright people.

Replies from: vi21maobk9vp, Normal_Anomaly
comment by vi21maobk9vp · 2012-05-03T06:24:02.644Z · LW(p) · GW(p)

Still, from time to time people start discussing it here without even using those collected arguments and counterarguments well. Ten old posts hurt nobody; the complaint seemed to be about ongoing things.

comment by Normal_Anomaly · 2012-05-06T21:50:14.444Z · LW(p) · GW(p)

The point was also to familiarize the readers with MWI and convince them of it so that it could then be used as a source of evidence and examples for some points about identity and free will.

Replies from: loup-vaillant
comment by loup-vaillant · 2012-05-09T08:00:33.775Z · LW(p) · GW(p)

Yes, I forgot that. I recall being… relieved, to see that mundane physics can solve seemingly hard philosophical questions.

comment by Alexei · 2012-04-29T17:07:42.358Z · LW(p) · GW(p)

Actually, SingInst is very busy right now trying to create a Center for Modern Rationality organization, which will have, as its primary focus, increasing the sanity waterline, and will not be "polluted" by involvement with any FAI research.

comment by Sarokrae · 2012-04-29T20:14:37.800Z · LW(p) · GW(p)

I have no grounding in cogsci/popular rationality, but my initial impression of LW was along the lines of "hmm, this seems interesting, but nothing seems that new to me..." I stuck around for a while and eventually found the parts that interested me (hitting rocky ground around the time I reached the /weird/ parts), but for a long while the impression was that this site had too high a ratio of rhetoric to actual content, and presented itself as more revolutionary than its content justified.

My (better at rationality than me) OH had a more extreme first impression of approximately "These people are telling me nothing new, or vaguely new things that aren't actually useful, in a tone that suggests that it's going to change my life. They sound like a bunch of pompous idiots." He also stuck around though, and enjoyed reading the sequences as consolidating his existing ideas into concrete lumps of usefulness.

From these two limited points of evidence, I timidly suggest that although LW is pitched at generic rational folk, and contains lots of good ideas about rationality, the way things are written over-represents the novelty and importance of some of the ideas here, and may actively put off people who have good ideas about philosophy and rationality but treat them as "nothing big".

Another note - jumping straight into the articles helped neither of us, so it's probably a good idea to simplify navigation, as has already been mentioned, and make the "About" page more prominent, since that gives a good idea to someone new as to what actually happens on this site - something that is quite non-obvious.

Replies from: shrink, fubarobfusco, CuSithBell
comment by shrink · 2012-04-30T07:47:50.748Z · LW(p) · GW(p)

I think you hit the nail on the head. It seems to me that LW represents bracketing by rationality - i.e. there's a lower limit below which you don't find the site interesting, there is a range where you see it as a rationality community, and there's an upper limit above which you would see it as self-important pompous fools being very wrong on a few topics and not interesting on other topics.

Dangerously wrong, even; progress in computing technology leads to new cures for diseases, and misguided advocacy of the great harms of such progress, done by people with no understanding of the limitations of computational processes in general (let alone 'intelligent' processes), is not unlike anti-vaccination campaigning by people with no solid background in biochemistry. Donating for vaccine safety research performed by someone without a solid background in biochemistry is not only stupid, it will kill people. Computer science is no different, now that it is used for biochemical research. No honest, moral individual can go ahead and speak of the great harms of medically relevant technologies without first obtaining a very, very solid background in the boring fundamentals, and without independently testing oneself - to avoid self-delusion - by doing something competitive in the field. Especially so when those concerns are not shared by more educated, knowledgeable, or accomplished individuals. The only way it could be honest is if one honestly believed oneself to be a lot, lot, lot smarter than the smartest people on Earth, and one can't honestly believe such a thing without either accomplishing something impressive that a great number of the smartest people failed to accomplish, or being a fool.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-05-04T16:46:04.740Z · LW(p) · GW(p)

Are you aware of another online community where people more rational than LWers gather? If not, any ideas about how to create such a community?

Also, if someone was worried about the possibility of a bad singularity, but didn't think that supporting SIAI was a good way to address that concern, what should they do instead?

Replies from: XiXiDu, Eugine_Nier
comment by XiXiDu · 2012-05-04T18:47:27.084Z · LW(p) · GW(p)

Are you aware of another online community where people more rational than LWers gather?

Instrumental rationality, i.e. "winning"? Lots...

Epistemic rationality? None...

Also, if someone was worried about the possibility of a bad singularity, but didn't think that supporting SIAI was a good way to address that concern, what should they do instead?

Tell SIAI why they don't support them and thereby provide an incentive to change.

Replies from: shrink
comment by shrink · 2012-05-05T06:41:57.432Z · LW(p) · GW(p)

Instrumental rationality, i.e. "winning"? Lots...

Precisely.

Epistemic rationality? None...

I'm not sure it got that either. It's more like medieval theology / scholasticism. There are questions you think you need answered; you can't answer them now with logical thought, so you use an empty cargo-cult imitation of reasonable thought. How rational is that? Not rational at all. Wei_Dai is here because he was concerned with AI, and he calls this community rational because he sees concern with AI as rational and needs confirmation. It is a neatly circular system - if concern with AI is rational, then every community that is rational must be concerned with AI, and then the communities that are not concerned with AI are less rational.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-05-05T07:38:00.880Z · LW(p) · GW(p)

I notice that you didn't actually answer any of my questions. Earlier you said "there's an upper limit above which you would see it as self-important pompous fools being very wrong on a few topics and not interesting on other topics". It seems to me that if that were actually the case, then there would be communities of such people talking about topics they think are interesting, and in a way that is noticeably more rational than typical discussions on lesswrong. If you are right, why can't you give an example, or why aren't you at least very interested in trying to create such a community?

Note that my question isn't purely rhetorical. If such a community actually exists, then I'd like to join it, or at least eavesdrop on it.

Replies from: Sarokrae, shrink
comment by Sarokrae · 2012-05-18T19:06:45.304Z · LW(p) · GW(p)

Weighing back in here, I will clarify the comment of mine on which the comment you quote was based: my OH had this precise thought ("self-important pompous fools") when he first came across this site. The content of the sequences he found trivial. He generally finds it easy to be rational, and didn't see the point of getting a community together to learn how to be more rational. In fact, it's a large (reverse) inferential distance for him just to understand that some people find figuring these ideas out actually non-trivial (and yet still care about them). He doesn't understand how people can compartmentalise their minds at all.

Very few people sort themselves into bands according to "rationality", and my OH takes part in just regular discussions with regular smart people, except he's better at correcting wrong arguments than most. "Some people being unfixably wrong about things" is just a part of life for him, and without ideas like transhumanism to motivate you, it's quite hard to bring yourself to care about how wrong the rest of the world is - just being right yourself is sufficient.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-05-19T19:21:36.129Z · LW(p) · GW(p)

Thanks for this explanation. Does your OH participate in discussions here? If so, does he enjoy them (more than discussions with "regular smart people")? Do you or he have any suggestions for how we might better attract people like him (i.e., people who are "naturally" rational and find it hard to understand at first why Eliezer is making a big deal out of "rationality")?

Replies from: Sarokrae
comment by Sarokrae · 2012-05-20T10:38:47.405Z · LW(p) · GW(p)

He doesn't participate in discussions here, because he doesn't think he has anything new to add. (This discussion is starting to prove otherwise though!)

I asked him what more could be done to attract people like him. He said: "I think what you need to do to encourage people like me is essentially to pose some interesting problems (e.g. free will) prominently, along with a hint of there being an /interesting/ solution (e.g. suggesting that the free will question is similar to the tree in a forest question in how it can be answered). That would give a stronger incentive to read on."

So basically, what would help is an intro page which says "here's where to start, but if you know the basics already, here are some interesting problems to draw you in".

The other problem for him is that a lot of the content reads like what he calls 'pulp philosophy' - being to philosophy what pulp fiction is to literature. "If you find an average philosophy blog, it is either uninteresting or wrong, but has a really inflated view of itself. There is a lot of philosophy/rationality stuff on the internet, which had primed me to just ignore that kind of website."

If there is a way, then, to distinguish LW from less good websites, without worsening other problems with other audiences, that might be good, though I personally can't think of anything that would help on this front.

Replies from: wallowinmaya
comment by David Althaus (wallowinmaya) · 2012-05-21T19:54:24.981Z · LW(p) · GW(p)

what you need to do to encourage people like me is essentially to pose some interesting problems (e.g. free will) prominently, along with a hint of there being an /interesting/ solution (e.g. suggesting that the free will question is similar to the tree in a forest question in how it can be answered

Did your OH read Yudkowsky's posts on free will? If so, what does he think of them?

Replies from: Sarokrae
comment by Sarokrae · 2012-05-21T22:56:13.532Z · LW(p) · GW(p)

He did; that's what prompted the statement. He found them really interesting, and almost got to the right answer before one of our friends spoilered it, but he enjoyed the challenge and does enjoy thinking about things that way.

comment by shrink · 2012-05-05T09:54:17.568Z · LW(p) · GW(p)

Implied in your so-called 'question' is the claim that you deem any online community you know of (I shouldn't assume you know of zero other communities, right?) less rational than lesswrong. I would say lesswrong is substantially less rational than average, i.e. if you pick a community at random, it is typically more rational than lesswrong. You can choose any place better than average - physicsforums, gamedev.net, stackexchange, arstechnica observatory, and so on; those are all more rational than LW. But of course, implied in your question is that you won't accept this answer. LW is rather interested in AI, and the talk about AI here is significantly less rational than talk of almost any technical topic in almost any community of people with technical interests. You would have to go to some alternative-energy forum or UFO or conspiracy-theorist place to find a match in terms of the irrationality of the discussion of the topics of interest.

You would have no problem whatsoever finding and joining a more rational place, if you were looking for one. That is why your 'question' is in fact almost purely rhetorical (or you are looking for a place that is more 'foo' than lesswrong, where you use the word 'rationality' in place of 'foo').

Replies from: Swimmer963, Wei_Dai
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-05-15T00:20:02.258Z · LW(p) · GW(p)

physicsforums, gamedev.net, stackexchange, arstechnica observatory, and so on; those are all more rational than LW.

Can you list some specific examples of irrational thinking patterns that occur on LessWrong but not in those communities? The one guess I can make is that they're all technical-sounding, in which case they might exist in the context of a discipline that has lots of well-defined rules and methods for testing success, and so less "bullshit" gets through because it obviously violates the rules of X-technical-discipline. Is this what you mean, or is it something else entirely?

comment by Wei Dai (Wei_Dai) · 2012-05-05T10:42:55.172Z · LW(p) · GW(p)

I see; I had taken your earlier comment (the one I originally replied to) as saying that lesswrong was above average but there were even more rational people elsewhere (otherwise I probably wouldn't have bothered to reply). But since we're already talking: if you actually think it's below average, what are you hoping to accomplish by participating here?

Replies from: shrink
comment by shrink · 2012-05-05T11:18:44.100Z · LW(p) · GW(p)

Rationality and intelligence are not precisely the same thing. You can pick, e.g., those anti-vaccination campaigners who have measured IQs above 120, put them in a room, and call that a very intelligent community that can discuss a variety of topics besides vaccines. Then you will get some less insane people who are interested in the safety of vaccines coming in and getting terribly misinformed, which is just not a good thing. You can do that with almost any belief, especially using the internet to draw the cases from a pool of a billion or so.

comment by Eugine_Nier · 2012-05-05T01:35:16.308Z · LW(p) · GW(p)

If not, any ideas about how to create such a community?

How did Eliezer create LW?

comment by fubarobfusco · 2012-04-29T21:09:15.693Z · LW(p) · GW(p)

Popularizing ideas from contemporary cognitive science and naturalized philosophy seems like a pretty worthy goal in and of itself. I wonder to what extent the "Less Wrong" identity helps this (by providing a convenient label and reference point), and to what extent it hurts (by providing an opportunity to dismiss ideas as "that Less Wrong phyg"). I suspect the former dominates, but the latter might be heard from more.

Replies from: shrink
comment by shrink · 2012-04-30T07:44:28.243Z · LW(p) · GW(p)

Popularization is better without novel jargon, though.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-03-17T00:44:28.855Z · LW(p) · GW(p)

Unless there are especially important concepts that lack labels (or lack adequate labels).

comment by CuSithBell · 2012-05-04T21:59:44.219Z · LW(p) · GW(p)

For my part, when I discovered the LW material (via MoR, as it turns out, though I had independently found "Creating Friendly AI" years earlier and thought it "pretty neat"), I was thrilled to see a community that apparently understood and took seriously what I had long felt were uncommonly known but in truth rather intuitive facts about the human condition, and secretly hoped to find a site full of Nietzschean ubermenschen/frauen/etc. who were kind and witty and charming and effortlessly accomplished and vastly intelligent.

It wasn't quite like that but it's still pretty neat.

comment by TheOtherDave · 2012-04-28T06:22:29.351Z · LW(p) · GW(p)

That depends a lot on what "Less Wrong rationality" is understood to denote.

There's a lot of stuff here I recognized as mainstream cogsci when I read it.
There's other stuff that I don't consider mainstream cogsci (e.g. cryonics advocacy, MWI advocacy, confident predictions of FOOMing AI). There's other stuff that drifts in between (e.g., the meta-ethics stuff is embedded in a fairly conventional framework, but comes to conclusions that are not clearly conventional.... though at times this seems more a fact about presentation than content).

I can accept the idea that some of that stuff is central to "LW rationality" and some of it isn't, but it's not at all obvious where one would draw the line.

Replies from: vi21maobk9vp
comment by vi21maobk9vp · 2012-04-28T07:04:05.192Z · LW(p) · GW(p)

The material about local rational behaviour looks and feels OK and can be considered "LW rationality" in the meta sense. The comparison to cogsci seems to refer to these parts.

I am not sure that cogsci ever says anything about large-scale aggregate consequentialism (to beat the undead horse: I doubt that any science can do much with dust specks and torture...).

And some applications of rationality (like trying to describe FOOM) seem to be too prior-dependent.

So no, it's not the methodology of rational decision making that is a problem.

comment by siodine · 2012-04-28T16:56:02.126Z · LW(p) · GW(p)

My first impression of lesswrong was of a community devoted to pop science, sci-fi, and futurism. Also, around that time singularitarianism was getting a bad name for good reasons (but it was the Kurzweil kind, d'oh), and so I closed the tab thinking I wasn't missing anything interesting. It wasn't until years later, when I was getting tired of the ignorance and arrogance of the skeptic community, that I found my way back to lesswrong via some linked post that showed careful, honest thinking.

It would be a good idea to put up a highly visible link on the front page addressing new visitors' immediate criticisms. For example:

  • Is this place a Kurzweil fanclub?
  • Are you guys pulling most of the content on this site out of your ass?
  • Why should I care what you people have to say? The people here seem weird.
  • I think most of what you people believe is bullshit, am I not welcome here?

Another thing: the layout of this site will take people more than ten seconds to grok, which is enough to make most people just leave. For instance, I'd rename 'discussion' to 'forum' and 'main' to 'rationality blog' or just 'blog'.

Replies from: Oscar_Cunningham, Kaj_Sotala, buybuydandavis, shrink
comment by Oscar_Cunningham · 2012-04-28T18:01:24.082Z · LW(p) · GW(p)

I'd rename 'discussion' to 'forum' and 'main' to 'rationality blog' or just 'blog'.

This is a great idea.

comment by Kaj_Sotala · 2012-04-28T22:02:54.659Z · LW(p) · GW(p)

Are you guys pulling most of the content on this site out of your ass?

This is a little hard to answer because we are pulling some of it from there. For instance, while my most upvoted post briefly mentions some mainstream science, most of it is just speculation. Similarly for this, this, and this. They're all promoted and reasonably upvoted posts of mine, and while I hope that they're not very badly wrong, I don't remember drawing on much mainstream science for them.

comment by buybuydandavis · 2012-04-28T19:36:06.144Z · LW(p) · GW(p)

The people here seem weird.

HAMLET. Seems, madam! nay it is; I know not 'seems.'

The people here are weird, even by the standards of other rationalist communities, whose members are of course a bunch of big weirdos themselves by the standards of the general population.

If you're here, you're a big weirdo by conventional standards. Get over it.

But know that there's something a lot worse than being a weirdo - forgetting who you are, and trying to be something you're not. I think I did that. It's hard to be a weirdo alone. Always cutting against the grain. Never quite feeling understood. The worst is feeling that the best of you is not appreciated.

Harry Browne wrote a classic pop egoist book, How I Found Freedom in an Unfree World. One very good bit of advice: be who you are, and advertise who you are. Let your natural market come to you, instead of trying to package yourself for a market that really doesn't value you.

Lots of talk about akrasia here. In my case, I'm sure a big part of it was trying to package myself for the approval of others, instead of being and doing what I wanted. How could I possibly have motivation for a life I didn't actually want?

We are not a Phyg! We are not a Phyg!

Stop worrying about how people with different values see this place, and start worrying about how to connect to people with the same values. Sell to your natural market.

Be who you are, Loudly and Unapologetically.

Replies from: Vladimir_Nesov, None
comment by Vladimir_Nesov · 2012-04-28T20:33:48.619Z · LW(p) · GW(p)

Be who you are

Keep your (status quo) identity small, don't be who you are, strive to be who you should be.

Replies from: buybuydandavis
comment by buybuydandavis · 2012-04-29T20:52:42.546Z · LW(p) · GW(p)

Given the historical usage of "should", I can't endorse this. Instead, I'd go with "become who you want to be".

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-04-29T20:57:35.916Z · LW(p) · GW(p)

"Want" seems insufficiently reflective, it suggests present opinion on the matter rather than an accurate estimate, whatever that turns out to be, and however well it can be known in practice (which is what I meant). To do as you should is to do the right thing (as opposed to the pebble-sorting thing, say).

To unpack a bit: there is this wanting/liking/approving distinction that already places "wanting" in the wrong corner, but even after that there is a level of reflection distinction between what more explicitly drives your behavior (or evaluation of your behavior) and how you would respond given more time and thought.

comment by [deleted] · 2012-04-28T19:51:40.321Z · LW(p) · GW(p)

Be who you are, Loudly and Unapologetically.

Seems like this is mostly suboptimal.

Replies from: magfrump
comment by magfrump · 2012-04-30T05:21:26.777Z · LW(p) · GW(p)

Certainly as an overarching strategy, when one considers that others' opinions and reactions matter, this seems mostly suboptimal.

But in a variety of arenas (for me: amongst colleagues; with romantic partners; when making new friends) this can be extremely effective as an approximation for more ideal strategies such as "have confidence," "be consistent," and "demonstrate that you're having fun."

I'm pretty weird: when I first meet people I talk about the meaning of beauty in mathematics and problems with academic philosophy; I LARP and play pokemon and half of my T-shirts are references to obscure webcomics.

But when I teach, this gives me funny anecdotes to share with my students. When I date, this gives me confidence and happiness to share. When I study it gives me enthusiasm and keeps smart people interested in me.

So I would say, at the very least, try out the whole "be who you are, loudly and unapologetically" thing and see how it works. Maybe I live in a fantastic fantasy land but it often pays to try things out rather than countersignalling.

Replies from: TheOtherDave, None
comment by TheOtherDave · 2012-04-30T13:55:48.034Z · LW(p) · GW(p)

I would say it depends a lot on who I am and where I am.

For example... the "weird" behavior cluster you describe (focusing social interaction on academic topics, a fondness for LARPing and webcomics, clothing as a verbal messaging medium) is pretty standard around higher-end tech universities, for example, along with a few other traits (e.g., science-fiction fandom or a fondness for medieval reenactment). Back when I was a college student, I adopted a number of those mannerisms as a way of marking myself part of the "weird" tribe, and gave a number of them up after I graduated when there was no longer any particular benefit to them.

So someone for whom that "weird" cluster happens to express "who they really are" is in the incredibly fortunate position of happening to "really be" a way that has a lot of advantages within a particular subculture. If you haven't already done so, I recommend becoming involved with that subculture; you will likely find it very affirming. (Science-fiction conventions are a good place to start, if you don't have any appropriate university campuses near you.)

For people in the less fortunate position of being "weird" in a way that doesn't have that kind of community support, different strategies can be optimal.

comment by [deleted] · 2012-04-30T05:30:33.662Z · LW(p) · GW(p)

For context, my current companion is not openly gay at work because he (correctly) predicts that it will have a significantly negative net effect on his career, despite whatever social/legal framework exists to prevent that from happening.

So in short, yes, you do live in a fantastic fantasy land where you aren't heavily punished for being who you are.

Replies from: magfrump
comment by magfrump · 2012-04-30T05:38:42.250Z · LW(p) · GW(p)

I'm sorry to hear that.

I'd say that I would try to enjoy my fantasy land even more due to the people who don't share my circumstances, but I actually already knew all that and already do.

I hope your companion gets to find his own fantasy land some day.

comment by shrink · 2012-04-28T19:23:17.831Z · LW(p) · GW(p)

Is this place a Kurzweil fanclub?

TBH, I'd rather listen to Kurzweil... I mean, he did create OCR reading software, and other cool stuff. Here we have:

http://lesswrong.com/lw/6dr/discussion_yudowskys_actual_accomplishments/

http://lesswrong.com/lw/bvg/a_question_about_eliezer/

Looks like he went straight to the hardest problems in the world (I can't see successful practice on easier problems that are not trivial).

This site has a captcha, a challenge that people easily solve but bots don't, despite the possibility that some blind guy would not post a world-changing insight because of it, the FAI effort would go the wrong way, and we would all die. That is not seen as irrational. Many smart people likewise implement an 'arrogant newbie filter'; a genius can rather easily solve things that other smart people can't...

It is kind of hypocritical (and irrational) to assume a stupid bot if the captcha is not answered, but expect others to assume a genius when no challenges were solved. Of course not everyone is filtering, and via the internet you can reach plenty of people who won't filter for this reason or that, or people who will only look at superficial signals, but to exploit this is not good.

comment by David_Gerard · 2012-04-28T07:53:01.631Z · LW(p) · GW(p)

IME, skeptics seem to like the stuff on cognitive biases and how not to be stupid. The other local tropes, they take or leave, mostly leave. (Based on anecdotal observation of RationalWiki and elsewhere in the skepticsphere.)

Replies from: TimS
comment by TimS · 2012-04-28T16:44:33.193Z · LW(p) · GW(p)

That's certainly what I think is valuable from the sequences.

comment by Grognor · 2012-04-28T04:40:50.448Z · LW(p) · GW(p)

I thought Less Wrong-style rationality was parochial, or basically "Eliezer's standard". I might have done better to apply this quote from the Quantum Physics Sequence elsewhere:

Many of these ideas are surprisingly conventional, and being floated around by other thinkers. I'm a good deal less of a lonely iconoclast than I seem; maybe it's just the way I talk.

Only, how was I to know it was more general?

Replies from: fiddlemath, threelier
comment by fiddlemath · 2012-04-28T08:53:41.751Z · LW(p) · GW(p)

I also thought this, and have thus been leery about how enthusiastically I promote it.

comment by threelier · 2012-04-28T16:51:17.567Z · LW(p) · GW(p)

Real answer: Read a book.

Also, your comment still applies to the LessWrong community. The most interesting part of LukeProg's post is how he avoids realizing this is a problem with LessWrong despite having only appreciated the mainstream recently. I mean, really, horrible way to think. Was he the same dude I saw somewhere say intros were grant-filler? If so: Characteristic. Appreciating the mainstream is what intros are about, give or take. LessWrong should have a one-page answer for the question: What part of the established literature are you building upon and what are you doing that is novel? Motivation seems clear enough, though.

Replies from: threelier, Barry_Cotter
comment by threelier · 2012-04-28T18:34:57.728Z · LW(p) · GW(p)

Incidentally, I approve of downvoting this comment for lack of a specific book recommendation. I have now read back through Grognor's comment history for a while. I lay upon you, Grognor of the "read the sequences or go somewhere else (*) but I didn't realize the sequences weren't unique; how was I to know?", the quest to improve LessWrong by reading Shepherd's "The Synaptic Organization of the Brain". Standard classic.

(*)http://lesswrong.com/lw/bql/our_phyg_is_not_exclusive_enough/6ch1

comment by Barry_Cotter · 2012-04-28T23:33:21.912Z · LW(p) · GW(p)

The most interesting part of LukeProg's post is how he avoids realizing this is a problem with LessWrong despite having only appreciated the mainstream recently. I mean, really, horrible way to think. Was he the same dude I saw somewhere say intros were grant-filler?

That's much more likely to be cousin_it than lukeprog. Luke is the kind of guy who expresses bafflement/contempt when someone suggests mentioning the most popular example of an idea in an article's um, history of an idea section, when he had already mentioned the first example of it. He appreciates the format of academic papers as one would expect of the person who wrote Scholarship: How To Do It Efficiently and How SIAI could publish in mainstream cognitive science journals.

EDIT the abovementioned lukeprog comment

the abovementioned cousin_it comment

Replies from: threelier
comment by threelier · 2012-04-29T00:08:55.785Z · LW(p) · GW(p)

Socially helpful. For the rest, I'll claim you're making an error, although entirely my fault. I'm claiming there's a type of article you should be citing in your intros. Luke reads a summary of those articles and says, "Wow, do people think we're weird, we're so mainstream". I say "Most of you think you're weird, and you probably did too till recently; it's on you to know where you're mainstream". You say "Dummy, he's clearly interested in that since he already mentioned the first example of it."

Fair description? No, of course not, Luke has read more than a summary. He's read stuff. Mainstream stuff?

Anyway, I shouldn't say I have a good knowledge of the precise thing I mean by "mainstream" in your field, but I meant something pretty specific:

- Very recent
- Research article
- Not-self or buddies
- Not too general
- Not just "classic"
- Similar methodology or aims (I mean extremely similar except in a few ways)
- High impact one way or another

What if you're too novel to come up with articles meeting all those criteria? There's an answer for that.

Replies from: threelier
comment by threelier · 2012-04-29T00:31:21.744Z · LW(p) · GW(p)

Probably I should be clearer.

"LessWrong should have a one-page answer for the question: What part of the established literature are you building upon and what are you doing that is novel?"

is not even close to the same as

"suggests mentioning the most popular example of an idea in an article's um, history of an idea section" if you consider "Oxford Handbook of Thinking and Reasoning" to be "the first example of it"

Anyway, probably different in philosophy, so I'll retract my claim. I've never seen any good introduction from LessWrong and this is not a start. It's such a pre-start it implies the opposite ("I'm rich, I own my own car"), with a jab at the opposition ("Why don't they see I'm rich, I own my own car"). Anyway, that's my perception.

Replies from: Barry_Cotter
comment by Barry_Cotter · 2012-04-29T10:39:16.081Z · LW(p) · GW(p)

I was very unclear in what I wrote. I also find what you wrote less than perfectly clear. Efficient Scholarship will not equip one to do research in many fields, but in many fields, given a narrow enough goal, you'd be the equal of an average master's student.

"suggests mentioning the most popular example of an idea in an article's um, history of an idea section" if you consider "Oxford Handbook of Thinking and Reasoning" to be "the first example of it"

(This comment precedes my edit.) The abovementioned lukeprog comment makes the point that Luke is an academic in his weighting of priority over exposure when discussing the genealogy of an idea. Apologies for my previous lack of clarity.

The overwhelming majority of LW lurkers will have barely dipped their toes in the Sequences. The more committed commenters have certainly not universally read them all. Luke has read dozens of books on rationality alone.

Upon reflection we probably disagree on very little seeing as you were originally addressing an average LW reader's view on the origins of the ideas LW talks about, not a specific person's. If you could expand upon your last paragraph I'd be grateful. The article (and as of writing, the comments) do not mention philosophy (by name) at all.

Replies from: fourlier
comment by fourlier · 2012-04-29T14:45:47.087Z · LW(p) · GW(p)

Yes, Lukeprog seems mostly academically likeable, and the text snippet in the comments on this page from the paper he's prepping is more like what I would hope for.

I was using "philosophy" as a byword for far, unlike, and what Luke may be closer to. The specific ref is that it's what Luke used (I believe) before explicitly just replacing with "Cog Sci".

Anyway, I don't disagree with you much (within the range of my having misinterpreted), so I'll skip the meta-talking and just try to say what I am thinking.

I try to imagine myself as a reviewer of a Singularity Institute paper. I'm not an expert in that field, so I'm trying to translate myself, and probably failing. Nonetheless, sometimes I would refuse to review the paper. In the Singularity Institute's case, basically that would happen because I thought the journal wasn't worth my time or the intro was such a turn-off that I decided the paper was not worth deciphering. I'm assuming, in these cases, that I'm a well-meaning but weak reviewer in the sense that this is not my exact area of expertise. In these cases, I really need a good intro, and typically I would skim all the cited papers in the intro once I committed to reviewing. Reading those papers should make me feel like the paper I then review is an incremental improvement over that collection. People talk about "least publishable units" a lot, but there's probably also a most publishable unit, with rare exceptions. If it's one of those exceptions, then it should be published in Science, say (or another high-profile general interest journal).

So, I now imagine myself as a researcher at the Singularity Institute (again, badly translating myself). I have ideas I think are pretty good that are also a little too novel and maybe a little too contrarian. I have a problem in that the points are important enough to be high-profile, but my evidence is maybe not correspondingly fantastic (no definitive experiments, etc); in other words, I'm coming out of left field, a bit. I'd first submit to quite high-profile journals that occasionally publish wacky stuff (PNAS is the classic example). One such publication would be a huge deal, and failure often leads to PLoS ONE (which is why its impact factor is relatively high despite its acceptance rate being very high; not perfectly on topic for you, depending). I would simultaneously put out papers that had lesser but utterly inarguable results in appropriately specialist journals; this would convince people I was sane (but would probably diverge pretty strongly from my true interests). So, this may seem a bit like what the Singularity Institute is doing (publishing more mainstream articles outside FAI), but the bar seems (to me, in my ignorance of this area) set too low. A lot of really low-impact articles do not help. I'd look for weird areas of overlap where I could throw my best idea into a rigorous framework and get published because the framework is sane and the wackiness of the premise is then fine (two Singularity-ish examples I've seen: economics of uploads, computational neuroscience of uploads).

If this is all totally redundant to your thinking already, no worries, and I won't be shocked. Cheers and small apologies to Luke.

Replies from: XiXiDu
comment by XiXiDu · 2012-04-29T16:52:56.759Z · LW(p) · GW(p)

I try to imagine myself as a reviewer of a Singularity Institute paper. I'm not an expert in that field, so I'm trying to translate myself, and probably failing.

To become an expert in the field you will have to at least once build an AI going FOOM or instead read Omohundro's 'The Basic AI Drives' paper and cite it as corroborative evidence for any of your claims suffering from a lack of knowledge of how actual AGI is to come about :-)

comment by ChrisHallquist · 2012-04-28T12:08:01.907Z · LW(p) · GW(p)

When I read this, my thinking goes off in a couple of different directions. On the one hand, my impression is that there's been a bit of a dustup in the literature between the heuristics and biases tradition and the evo psych folks, and that furthermore LW tends to go with the heuristics and biases tradition, whereas I find what Steven Pinker, along with Samuels and Stich, have written about that issue more persuasive.

But that may be more specific than what you have in mind. Because I've also been thinking lately that there's way too little reflection about improving one's own rationality in most of the atheist/skeptic communities. Understanding the value of double-blind randomized studies seems to be as far as a lot of people get.

Replies from: lukeprog
comment by lukeprog · 2012-04-28T20:59:41.506Z · LW(p) · GW(p)

Which articles by Pinker, Samuels, and Stich were you thinking of, and which claims within them? I'm mostly familiar with why I think Gigerenzer's group's approach to normative rationality is wrong; I'm less familiar with Pinker, Samuels, and Stich on this topic.

Replies from: fubarobfusco, ChrisHallquist
comment by fubarobfusco · 2012-04-29T00:31:06.419Z · LW(p) · GW(p)

why I think Gigerenzer's group's approach to normative rationality is wrong

Details, please!

Replies from: lukeprog
comment by lukeprog · 2012-04-29T05:51:45.136Z · LW(p) · GW(p)

Details here: Griffiths et al. (2012:27); Stanovich (2010, ch. 1); Stanovich and West (2003); Stein (1996). As of Todd et al. (2012), Gigerenzer still has no response to these criticisms.

Also see my comments here and here.

Replies from: fubarobfusco
comment by fubarobfusco · 2012-04-29T15:47:16.425Z · LW(p) · GW(p)

Thank you :)

comment by ChrisHallquist · 2012-04-29T02:26:46.202Z · LW(p) · GW(p)

For Samuels and Stich: Everything.

Well actually, a lot of their papers repeat the same points. Perhaps as a result of publish or perish, least publishable units, and all that. If you want to read just one article to save time, go with "Ending the Rationality Wars."

For Pinker, I was mainly thinking of chapter 5 of his book How the Mind Works (an excellent book; I'm somewhat surprised few people around LW seem to have read it). His discussion is based on lots and lots of other people's work. Cosmides and Tooby show up a lot, and, double-checking my copy, so does Gigerenzer, though Pinker says nothing about Gigerenzer's "group approach" or anything like that. (Warning: Pinker shows frequentist sympathies regarding probability.)

Replies from: lukeprog
comment by lukeprog · 2012-04-29T06:44:02.597Z · LW(p) · GW(p)

Samuels et al.'s Ending the Rationality Wars is a good paper and I generally agree with it. Though Samuels et al. mostly show that the dispute between the two groups has been exaggerated, they do acknowledge that Gigerenzer's frequentism leads him to have different normative standards for rationality than what Stein (1996) called the "Standard Picture" in cognitive science. LessWrong follows the Standard Picture. Moreover, some of the criticisms of Gigerenzer & company given here still stand.

I skimmed chapter 5 of How the Mind Works but didn't see immediately the claims you might be referring to — ones that disagree with the Standard Picture and Less Wrong.

Replies from: ChrisHallquist
comment by ChrisHallquist · 2012-04-29T08:12:49.468Z · LW(p) · GW(p)

I don't have access to Stein, so this may be a different issue entirely. But:

What I had in mind from Pinker was the sections "ecological rationality" (a term from Tooby and Cosmides that means "subject-specific intelligence") and "a trivium."

One key point is that general-purpose rules of reasoning tend to be designed for situations where we know very little. Following them mindlessly is often a stupid thing to do in situations where we know more. Unsurprisingly, specialized mental modules often beat general-purpose ones for the specific tasks they're adapted to. That's a reason not to make too much of the fact that humans fail to follow the general-purpose rules.

And in fact, some "mistakes" are only mistakes in particular circumstances. Pinker gives the example of the "gambler's fallacy," which is only a fallacy when the probabilities of the events are independent, which outside of a casino they very often aren't.
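A minimal simulation sketch of that last point (the deck and wheel here are made up for illustration, not taken from Pinker): after a streak of blacks, the chance of red is unchanged for independent spins, but rises when cards are dealt from a small deck without replacement.

```python
import random

def p_red_after_black_streak(trials, streak_len, dependent):
    """Estimate P(next draw is red | the previous `streak_len` draws were black)."""
    streaks = reds_after_streak = 0
    for _ in range(trials):
        if dependent:
            # Small deck dealt without replacement: past draws change what remains.
            deck = ["red"] * 5 + ["black"] * 5
            random.shuffle(deck)
            draws = [deck.pop() for _ in range(streak_len + 1)]
        else:
            # Independent fair spins: past outcomes carry no information.
            draws = [random.choice(["red", "black"]) for _ in range(streak_len + 1)]
        if all(d == "black" for d in draws[:streak_len]):
            streaks += 1
            reds_after_streak += draws[streak_len] == "red"
    return reds_after_streak / streaks if streaks else float("nan")

print(p_red_after_black_streak(100_000, 3, dependent=False))  # ~0.50: the streak tells you nothing
print(p_red_after_black_streak(100_000, 3, dependent=True))   # ~0.71: 5 red vs. 2 black remain
```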

Replies from: lukeprog
comment by lukeprog · 2012-04-29T08:42:40.327Z · LW(p) · GW(p)

Pinker seems to be missing the same major point that Gigerenzer et al. continuously miss, a point made by those in the heuristics and biases tradition from the beginning (e.g. Baron 1985): the distinction between normative, descriptive, and prescriptive rationality. In a paper I'm developing, I explain:

Our view of normative rationality does not imply, however, that humans ought to explicitly use the laws of rational choice theory to make every decision. Neither humans nor machines have the knowledge and resources to do so (Van Rooij 2008; Wang 2011). Thus, in order to approximate normative rationality as best we can, we often (rationally) engage in a "bounded rationality" (Simon 1957) or "ecological rationality" (Gigerenzer and Todd 2012) that employs simple heuristics to imperfectly achieve our goals with the limited knowledge and resources at our disposal (Vul 2010; Vul et al. 2009; Kahneman and Frederick 2005). Thus, the best prescription for human reasoning is not necessarily to always use the normative model to govern one's thinking (Stanovich 1999; Baron 1985).

Or, here is Baron (2008):

In short, normative models tell us how to evaluate judgments and decisions in terms of their departure from an ideal standard. Descriptive models specify what people in a particular culture actually do and how they deviate from the normative models. Prescriptive models are designs or inventions, whose purpose is to bring the results of actual thinking into closer conformity to the normative model. If prescriptive recommendations derived in this way are successful, the study of thinking can help people to become better thinkers.

Replies from: Wei_Dai, ChrisHallquist
comment by Wei Dai (Wei_Dai) · 2012-04-29T12:35:38.633Z · LW(p) · GW(p)

What does mainstream academic prescriptive rationality look like? I get the sense that's where Eliezer invented a lot of his own stuff, because "mainstream cogsci" hasn't done much prescriptive work yet.

Replies from: lukeprog
comment by ChrisHallquist · 2012-04-29T09:11:04.780Z · LW(p) · GW(p)

This is helpful. Will look at Baron later.

comment by byrnema · 2012-04-28T10:08:40.610Z · LW(p) · GW(p)

When I first encountered Less Wrong, two or three years ago, I would have agreed with the Oaksford & Chater quotation and would have found it completely mainstream. The intellectual paradigm of my social circles was that one needed to be self-consistent in one's worldview, and beyond that there was room for variation, especially as people would have a lot of different experiences swaying them one way or another.

I thought Less Wrong was extremely, iconoclastically, over-confident in its assertion that people should or must be atheists to be rational. So for a while (perhaps a week) I held the position that one could be religious and still have a self-consistent worldview, but then I began to see that wasn't the entire criterion for rationality here.

People most often threw the idea of 'Occam's razor' at me, which I never found compelling or well applied to theism, but I eventually identified that the most important secondary criterion for rationality here, which I think is what makes physical materialism different from mainstream rationality, is that belief should only be supported by the positive existence of (physical) evidence.

For example, a theist has lots of 'evidence' for their faith, including affirming emotional and mental states, the example of the religious conviction of their friends and family, and the authority of religious leaders. However, physical materialism would train us to reject all that as evidence of a personal God -- there are other explanations for these observations.

I think that physical materialism is the natural extension of the scientific worldview. Basically, if you can't distinguish between hypotheses you have to reserve judgment.

So in the end it's not enough to have a self-consistent worldview. Your worldview must also be "justified", where the criteria for justification are a little different from the mainstream's (or at least more strongly applied, since we're already such a strongly science-based culture). It's closely connected to the idea, "Absence of evidence is evidence of absence," which is also less mainstream.

This is just to analyze the most immediately striking difference, which is the atheism/theism divide with mainstream views.

The divide over existential risk / AI / cryonics would also be interesting to analyze. My first proposal would be that it isn't so much that these concerns aren't mainstream, but that for the mainstream there is a lot of positive interest in these things in the "far" mode (we love such themes in science fiction movies), whereas the LW point of view is to think of these things in a much more near mode. I attribute this to a personality or value difference; this paradigm will attract some people naturally, while other people like me seem to be immune to considering such things 'near', and I expect many people would be waiting for a social phase shift before they altered their views.

Replies from: JGWeissman
comment by JGWeissman · 2012-04-28T18:06:11.236Z · LW(p) · GW(p)

I eventually identified that the most important secondary criterion for rationality here, which I think is what makes physical materialism different from mainstream rationality, is that belief should only be supported by the positive existence of (physical) evidence.

So, it's true that all the evidence I use is physical, but I don't even know what it would mean for evidence to be non-physical. My brain is physical, evidence that can interact with my brain is physical, mental events that my brain experiences are physical. If I had some sort of soul that was not made of quarks and gluons or anything else that physicists talk about, but it interacted with my physical brain, by that interaction, I would consider that the soul had to be physical, even if it is a different sort of physical stuff than anything else we know about.

For example, a theist has lots of 'evidence' for their faith, including affirming emotional and mental states, the example of the religious conviction of their friends and family, and the authority of religious leaders. However, physical materialism would train us to reject all that as evidence of a personal God

So, we would not actually dismiss the emotional and mental states as non-physical, because they, having been observed by a physical system, clearly are physical.

there are other explanations for these observations.

That points to part of the correct explanation. The emotional and mental states are not considered strong evidence because, although they are likely given that God exists, they are not unlikely given that God does not exist.
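The underlying arithmetic is just the odds form of Bayes' theorem (stated generically; H and E are placeholders, not anything specific from this thread):

```latex
\underbrace{\frac{P(H \mid E)}{P(\neg H \mid E)}}_{\text{posterior odds}}
= \underbrace{\frac{P(E \mid H)}{P(E \mid \neg H)}}_{\text{likelihood ratio}}
\times \underbrace{\frac{P(H)}{P(\neg H)}}_{\text{prior odds}}
```

When P(E | H) and P(E | not-H) are close, the likelihood ratio is near 1 and the evidence barely moves the posterior odds, which is the sense in which such experiences are weak evidence.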

Replies from: byrnema
comment by byrnema · 2012-04-28T21:52:36.475Z · LW(p) · GW(p)

Yes, certainly. I hope you don't think I disagree with any of your points.

Well, actually, if I was required to take issue with any of them, it would be with the importance of probability in deciding that the existence of God is not compelling. I don't think probability has much to do with it, especially in that perhaps in a counterfactual reality there ought to be a high probability that he exists. But what is compelling is that once you are detached from the a priori belief he is present, you notice that he isn't. For me, it isn't so much a question of "existence" but failed promise.

So, it's true that all the evidence I use is physical, but I don't even know what it would mean for evidence to be non-physical. My brain is physical, evidence that can interact with my brain is physical, mental events that my brain experiences are physical. If I had some sort of soul [...] I would consider that the soul had to be physical, even if it is a different sort of physical stuff than anything else we know about.

It seems you might have forgotten, if you were ever familiar with it, the pre-materialist understanding of the concept of 'non-physical'. Personally, I've forgotten. It's hard to hang on to a concept that is rendered inconsistent. I don't think it was as coarse as 'these thoughts make me happy, so they must be true' or that feelings and 'mental states' are considered to be independent somehow of scientific analysis. Though maybe. Maybe it was the idea that a person could figure stuff out about the world by thinking in a certain way about what "ought" to be, where 'ought' is pulled from some Platonic ideal value system.

Yes ... that you can sit in a chair and decide that circles exist, and would exist even if there didn't happen to be any. That there's another source of knowing.

comment by Desrtopa · 2012-04-28T06:06:44.660Z · LW(p) · GW(p)

My experience is that plenty of people view the Less Wrong approach to rationality as parochial, but I suspect that if most of these same people were told that it's largely the same as the mainstream cogsci approach to rationality, they would conclude that the mainstream cogsci approach to rationality is parochial.

How wide an audience are you concerned with here?

comment by shrink · 2012-04-28T07:54:17.039Z · LW(p) · GW(p)

Some of the rationality may to a significant extent be a subset of the standard approach, but it has important omissions - in the areas of game theory, for instance - and, much more importantly, significant misapplication, such as taking the theoretically ideal approaches given infinite computing power as the ideal, and seeing as the best try the approximations to them, which are grossly sub-optimal on limited hardware where different algorithms have to be employed instead. One has to also understand that in practice computations have cost, and any form of fuzzy reasoning (anything other than very well verified mathematical proof) accumulates errors with each step, regardless of whether it is 'biased' or not.

Choosing such a source for self-education is definitely not common. As is the undue focus on what is 'wrong' about thinking (e.g. lists of biases) rather than on more effective alternatives to biases; removing the biases won't in itself give you extra powers of rational thinking; your reasoning will be as sloppy as before and you'll simply be wrong in an unusual way (for instance you'll end up believing in unfalsifiable, unjustified propositions other than God; it seems to me that this has occurred in practice).

edit: Note: he asked a question, and I'm answering why it is seen as fringe; it may sound like unfair critique, but I am just explaining what it looks like from outside. The world is not fair; if you use dense non-standard jargon, that raises the costs and lowers the expected utility of reading what you wrote (because most people using non-standard jargon don't really have anything new to say). Processing has a non-zero utility cost, and that must be understood: if the mainstream rationalists don't instantly see you as worth reading, they won't read you, and that's only rational on their part. You must allow for other agents to act rationally. It is not always rational to even read an argument.

Actually, given that one could only read some small fraction of rationality-related material, it is irrational to read anything but the known best material, where you have some assurance that the authors have a good understanding of the topic, including those parts that are not exciting, or seem too elementary, or go counter to the optimism - the sort of assurance you get when the authors of the material have advanced degrees.

edit: formatting, somewhat expanded.

Replies from: None
comment by [deleted] · 2012-04-28T15:43:58.608Z · LW(p) · GW(p)

I don't have enough knowledge to agree/disagree with points before your "edit: Note."

I do agree with what you said after that. And applying your own advice, please add some paragraph breaks to your post. If nothing else, add a break between "extra powers of rational thinking" and "he asked a question." It should make your post much easier to read and, as a consequence, more people are likely to read it.

comment by EHeller · 2012-04-29T03:58:19.441Z · LW(p) · GW(p)

I would turn this around: what core part of Less Wrong is actually novel? The sequences seem to be popularizations of various people's work. The only thing unique to the site seems to be the eccentricity of its choice in topics/examples (most cog sci people probably don't think many worlds quantum mechanics is pedagogically useful for teaching rationality).

There also appears to be an unspoken contempt for creating novel work. Lots of conjecture that such-and-such behavior may be signaling, and such-and-such belief is a result of such-and-such bias, with little discussion of how to formalize and test the idea.

Replies from: Eugine_Nier, JScott
comment by Eugine_Nier · 2012-04-29T04:28:07.043Z · LW(p) · GW(p)

I sometimes think a quote I've heard in reference to Wolfram's "A New Kind of Science" might apply equally well to the sequences:

Much that is new, much that is true, and very little overlap between the two.

comment by JScott · 2012-04-30T02:48:52.048Z · LW(p) · GW(p)

There also appears to be an unspoken contempt for creating novel work. Lots of conjecture that such-and-such behavior may be signaling, and such-and-such belief is a result of such-and-such bias, with little discussion of how to formalize and test the idea.

Can you think of any ways to formalize and test this idea?

Replies from: EHeller, None
comment by EHeller · 2012-05-01T22:13:01.275Z · LW(p) · GW(p)

Of course: here is a simple test. Go through discussion threads, classifying them as apparently novel or not novel, and as attempts to test ideas or as conjecture. Check the ratios. Compare the karma of the different categories. Obviously, it's not a perfect test, but it's good enough to form the belief I shared in my post.
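A rough sketch of the bookkeeping such a test would involve (the labels and karma numbers below are entirely made up for illustration):

```python
from statistics import mean

# Hypothetical hand-labelled sample: (is_novel, tests_idea, karma) per discussion thread.
threads = [
    (True,  False, 12),
    (False, False, 34),
    (True,  True,   8),
    (False, False, 21),
    (False, True,  15),
]

novel     = [t for t in threads if t[0]]
non_novel = [t for t in threads if not t[0]]
tested    = [t for t in threads if t[1]]

print(f"novel fraction:        {len(novel) / len(threads):.2f}")
print(f"tested fraction:       {len(tested) / len(threads):.2f}")
print(f"mean karma, novel:     {mean(t[2] for t in novel):.1f}")
print(f"mean karma, non-novel: {mean(t[2] for t in non_novel):.1f}")
```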

And of course, I didn't suggest that every idea needs to be formalized or tested, merely that there is not enough present here for Less Wrong to develop from a source of popularization of others' ideas into a forum for the creation of novel ideas about rationality.

comment by [deleted] · 2012-05-01T02:25:21.534Z · LW(p) · GW(p)

Can you think of any ways to formalize and test this idea?

Applause lights. You should really read the sequences.

Replies from: J_Taylor, JScott, TheOtherDave
comment by J_Taylor · 2012-05-01T03:29:18.158Z · LW(p) · GW(p)

This sort of far-mode thinking is usually [1] evidence of an attempt to signal not-"Straw Vulcan Rationality" while simultaneously earning warm fuzzies in those possible worlds in which [DELETED] (ed. Explaining the reason for this edit would either reveal excessive information about the deleted content or require mentioning of true ideas which are considered abhorrent by mainstream society.) and is ultimately the result of having a brain which evolved to have hypocritical akrasia regarding skepticism and to guess the teacher's password [2].

[1] p(parent post is mere signalling | p-zombie Mary in a Chinese room would claim that "semantic stop-signs are red" is a map-territory-map-mapitory confusion) = .7863, but I may have performed an Aumann update with a counterfactual-me who generalized from fictional fictional-evidence.

[2] The password is Y355JE0AT15A0GNPHYG.

Replies from: pedanterrific, None, JoshuaZ, Multiheaded, JoshuaZ
comment by pedanterrific · 2012-05-01T03:39:05.639Z · LW(p) · GW(p)

Y355JE0AT15A0GNPHYG

I think you mean Y355JE0AT15G00NPHYG.

(This comment is a thing of beauty.)

comment by [deleted] · 2012-05-01T03:35:53.258Z · LW(p) · GW(p)

This coupon entitles the bearer to one free internet.

comment by JoshuaZ · 2012-05-01T03:49:31.401Z · LW(p) · GW(p)

Also, a related comic and essay.

comment by Multiheaded · 2012-05-10T11:00:35.191Z · LW(p) · GW(p)

I was attempting to parse that comment seriously at 8AM in the morning before drinking any coffee. Never again.

comment by JoshuaZ · 2012-05-01T03:37:01.344Z · LW(p) · GW(p)

I'm not sure if I should upvote this as an amusing parody of Less Wrong posts or downvote because it seems to actually undermine the implicit claim that LW posts are full of jargon with little content: it doesn't take much effort to see that the post really is nonsense. Overall, a mildly amusing but not terribly well-done parody.

comment by JScott · 2012-05-01T05:21:55.784Z · LW(p) · GW(p)

Applause lights. You should really read the sequences.

It took me a moment to understand that you were creating a parody. I'm not sure if that moment was indicative of EHeller, in fact, being on to something.

Anyway, on the original comment - yes, there was a little bit of tu quoque involved. How could I not? It was just too deliciously ironic. Even when accusing someone else of failing to formalize and test their ideas, it's easy to fail at formalizing and testing ideas. It's not meant (entirely) as a tu quoque - just as a warning that it really is easy to fall for that, that even consciously thinking about testability isn't enough to actually get people to make explicit predictions. So, I decided to spend a few seconds actually trying to dissect the claim, and ask what sort of testable predictions we can derive from "There also appears to be an unspoken contempt for creating novel work."*

The obvious signs would be a significant number of downvotes on posts that deal with original work, or disparaging statements toward anyone presenting work for the first time on LW. Undue or unreasonable skepticism toward novel claims, perhaps, above and beyond what is warranted by their novelty. I have no idea how to formalize this - and, in fact, the more I look at the statement, the more I am convinced it really is vague and untestable. I dunno - anyone else want to take a crack at it? EHeller, do you have something more precise that you were trying to get at?

It was an interesting exercise - even if it turns out to be less meaningful or reducible than I thought, it's good exercising in noticing so.

*Of course, there are good reasons why one might not want to spend time and effort trying to formalize and test an idea. The statement "Lots of conjecture that such-and-such behavior may be signaling, and such-and-such belief is a result of such-and-such bias, with little discussion of how to formalize and test the idea" isn't so interesting, not only because it's imprecise, but also because it does in fact take effort and energy to formalize and test an idea - it's not always worth it to test every idea; the entire point of having general concepts about biases is that you can quickly identify problems without having to spend time and energy trying to do the math by hand.

Replies from: None
comment by [deleted] · 2012-05-01T05:39:18.792Z · LW(p) · GW(p)

The obvious signs would be a significant number of downvotes on posts that deal with original work, or disparaging statements toward anyone presenting work for the first time on LW. Undue or unreasonable skepticism toward novel claims, perhaps, above and beyond what is warranted by their novelty.

I rather doubt that. It's probably just you confusing the map for the territory.

Okay, okay, I'm done. >>

comment by TheOtherDave · 2012-05-01T03:31:24.692Z · LW(p) · GW(p)

Hm.
EHeller says "LW has too much conjecture without enough discussion of how to formalize and test it!"
JScott points out that EHeller's assertion lacks any discussion of how to formalize and test it.

I would call that tu quoque more than applause lights.

Replies from: None
comment by [deleted] · 2012-05-01T03:34:42.361Z · LW(p) · GW(p)

That was a joke. ;p

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-01T03:36:08.327Z · LW(p) · GW(p)

Oh.
Oops.

comment by Bill_McGrath · 2012-04-29T15:23:56.516Z · LW(p) · GW(p)

Before coming across Less Wrong, I wasn't really aware of rationality as a community or a lifestyle - I don't think there's anything like that where I live, though there are small but reasonably strong skeptic and atheist communities - so I don't think I'm necessarily able to answer the question you're asking. I will say that some of the local beliefs - Singularity, transhumanism, cryonics - are certainly a bit alien to some people, and may undermine people's first impression of the site.

comment by [deleted] · 2012-04-28T15:12:12.381Z · LW(p) · GW(p)

Can anyone share a copy of Oxford Handbook of Thinking and Reasoning? I'd like to read more about how "LW rationality" compares to "mainstream cogsci rationality."

comment by RomeoStevens · 2012-04-28T08:01:48.124Z · LW(p) · GW(p)

No one LW position is all that parochial. It's taking them all seriously simultaneously that is considered weird. You aren't supposed to really believe in all these words. Words are for status, what are you a bunch of nerds?

Replies from: threelier, Viliam_Bur
comment by threelier · 2012-04-28T16:41:22.397Z · LW(p) · GW(p)

Self-aggrandizing tribalistic straw-manning. Currently upvoted to +5. If the upvotes are meant to be amusingly ironic, come home, all is forgiven.

Replies from: RomeoStevens
comment by RomeoStevens · 2012-04-28T20:26:19.819Z · LW(p) · GW(p)

I thought my tone sufficiently separated the first half from the second half. Would a line break make you feel better?

Replies from: threelier, AlexSchell
comment by threelier · 2012-04-29T00:15:04.501Z · LW(p) · GW(p)

I have no idea what you mean to say, unless it is simply a way of saying I'm an idiot while muddying the discussion. Clearly there are two parts to your post. I am saying the underlying sentiments I attribute to you from that post are: self-aggrandizing, tribalistic, straw-manning.

If you think that I am criticizing your exaggerated rhetorical expression, you're mistaken. I think the underlying sentiment, stripped of rhetoric, is: self-aggrandizing, tribalistic, straw-manning. I'm repeating for clarity.

Not only is no one accusing you of being nerds, it is not even a reasonable exaggeration of what anybody says about this place. You do know you (Less Wrong) are perceived as crazy, right?

Replies from: RomeoStevens, fubarobfusco
comment by RomeoStevens · 2012-04-29T01:43:29.044Z · LW(p) · GW(p)

Maybe you can help me unpack your original statement then.

Self-aggrandizing: analytic people NEED to be self-aggrandizing; they're calibrated poorly on how much better they are than other people, IME.

Tribalistic: you mean my statement is applause lights for LW beliefs? I didn't think my point was totally obvious. I've been making an effort to say more things I consider obvious due to evidence that I'm poorly calibrated on what is obvious.

Straw-manning: You say yourself that others think we are crazy. To say I am straw-manning implies that I am attributing to others shoddy reasons for believing we are crazy. But evidence points to exactly that: most of the critiques of Less Wrong positions have been absolutely terrible, including the ones coming from scholars/academia*. If you have any material on this I'm always looking for more.

*If someone believes that each of the beliefs held by the average LWer has a low probability of being true, then it is reasonable for them to conclude that the likelihood of them all being true simultaneously is abysmally low.
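As a generic illustration of the footnote's arithmetic (treating the beliefs as independent, which real beliefs are not, and using made-up numbers): even ten claims each judged 80% likely have an improbable conjunction, and at 30% each it is vanishingly small.

```latex
P\left(\bigcap_{i=1}^{10} B_i\right) = \prod_{i=1}^{10} P(B_i) = 0.8^{10} \approx 0.11,
\qquad 0.3^{10} \approx 6 \times 10^{-6}
```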

comment by fubarobfusco · 2012-04-29T00:35:01.545Z · LW(p) · GW(p)

This is excessively meta.

Replies from: threelier
comment by threelier · 2012-04-29T00:53:46.929Z · LW(p) · GW(p)

Our interests diverge (re: excessive). Interesting silliness, from my perspective. Both responses to my comments on this page imply I'm illiterate. I was interesting in probing the examples (just who isn't reading, generalized: LessWrong or?, etc - common). More broadly, I'm seeing if I can have fun.

Replies from: jmmcd, Barry_Cotter
comment by jmmcd · 2012-04-30T09:45:16.900Z · LW(p) · GW(p)

I was interesting in probing the examples (just who isn't reading, generalized: LessWrong or?, etc - common).

Welp, that was impossible to parse.

Replies from: fourlier
comment by fourlier · 2012-04-30T15:22:27.314Z · LW(p) · GW(p)

Something dumb people say an awful lot: If only you read blank, you'd agree with me.

It's also something LessWrongians say a lot. Even the better breed of cat hereabouts has this tendency. The top post is something of an example, without the usual implied normative: If only other people read LessWrong more closely, they'd realize it's mainstream (in parts). Luke, kindly, places the onus on himself, doubtless as an act of (instrumentally rational) noblesse oblige.

That's background. I feel it is background most LessWrongians are somewhat aware of. I feel Luke's attitude in the top post (placing the onus on himself) is an extension of the principle that lightly or easily telling people to read the sequences is dopey.

So, now to the conversation.

Romeo: People are idiots.
Me: You're an idiot.
Romeo: You can't read.
Me: Just who isn't reading, you or me? Or more generally, LessWrong or the rest of the world? Seems like a common problem.

I enjoyed casting myself in the part of "the rest of the world".

comment by Barry_Cotter · 2012-04-29T10:38:57.235Z · LW(p) · GW(p)

Does this page mean the comments originating from here or do my comments read to you as implying you're illiterate?

comment by AlexSchell · 2012-04-28T23:25:49.593Z · LW(p) · GW(p)

For me, your tone did indeed clearly demarcate your point from the rest (before I read the parent).

comment by Viliam_Bur · 2012-05-02T09:46:07.681Z · LW(p) · GW(p)

Individual thoughts from LW can be found elsewhere too. Which is not surprising -- they are supposed to reflect reality.

The added value of LW, in my opinion, is trying to be rational thoroughly. I don't know any other website which does the same thing, and is accessible to a reader like me. If anyone knows such site, please give me a link.

comment by pianoforte611 · 2012-05-01T15:18:51.435Z · LW(p) · GW(p)

Could anyone who is familiar with the modern cogsci literature comment on the current relevance of Bermudez's Introduction to Cognitive Science? I mean this one: http://www.amazon.com/Cognitive-Science-An-Introduction-Mind/dp/0521708370 There was a fascinating mega-post on it at commonsenseatheism.

comment by [deleted] · 2012-05-01T23:52:28.661Z · LW(p) · GW(p)

Why doesn't Yudkowsky publish the sequences--at least as an e-book? One likely reason: it would require extensive editing, not only to eliminate the inconsistencies that have arisen but, more so, to eliminate the prolix prose and filler that might make the postings entertaining to read on a blog (or which he just didn't take the time to cut), but which make for tedious sustained reading. A thorough rewrite would make a real contribution; Yudkowsky has a lot to say--but not that much.

Replies from: beoShaffer, CronoDAS
comment by beoShaffer · 2012-05-02T00:00:59.100Z · LW(p) · GW(p)

The sequences are available as e-books (scroll down to alternative formats), and writing two books based on the sequences is his current main project.

comment by CronoDAS · 2012-05-02T12:26:37.385Z · LW(p) · GW(p)

Eliezer is working on an actual book based on the Sequences.