An Outside View on Less Wrong's Advice

post by Mass_Driver · 2011-07-07T04:46:17.611Z · LW · GW · Legacy · 162 comments

Contents

  Wishful Thinking
  Partisanship
  Bullshit
  Conclusion

Related to: Intellectual Hipsters, X-Rationality: Not So Great, The Importance of Self-Doubt, That Other Kind of Status

This is a scheduled upgrade of a post that I have been working on in the discussion section. Thanks to all the commenters there, and special thanks to atucker, Gabriel, Jonathan_Graehl, kpreid, XiXiDu, and Yvain for helping me express myself more clearly.

-------------------

For the most part, I am excited about growing as a rationalist. I attended the Berkeley minicamp; I play with Anki cards and Wits & Wagers; I use Google Scholar and spreadsheets to try to predict the consequences of my actions.

There is a part of me, though, that bristles at some of the rationalist 'culture' on Less Wrong, for lack of a better word. The advice, the tone, the vibe 'feels' wrong, somehow. If you forced me to use more precise language, I might say that, for several years now, I have kept a variety of procedural heuristics running in the background that help me ferret out bullshit, partisanship, wishful thinking, and other unsound debating tactics -- and important content on this website manages to trigger most of them. Yvain suggests that something about the rapid spread of positive affect not obviously tied to any concrete accomplishments may be stimulating a sort of anti-viral memetic defense system.

Note that I am *not* claiming that Less Wrong is a cult. Nobody who runs a cult has such a good sense of humor about it. And if they do, they're so dangerous that it doesn't matter what I say about it. No, if anything, "cultishness" is a straw man. Eliezer will not make you abandon your friends and family, run away to a far-off mountain retreat and drink poison Kool-Aid. But, he *might* convince you to believe in some very silly things and take some very silly actions.

Therefore, in the spirit of John Stuart Mill, I am writing a one-article attack on much of what we seem to hold dear. If there is anything true about what I'm saying, you will want to read it, so that you can alter your commitments accordingly. Even if, as seems more likely, you don't believe a word I say, reading a semi-intelligent attack on your values and mentally responding to it will probably help you more clearly understand what it is that you do believe.

Wishful Thinking

As far as I can tell, some of the most prominent themes in terms of short-term behavioral advice being given on Less Wrong are:  

1) Sign up for cryonics,

2) Donate to SIAI,

3) Drop out of any religious groups you might belong to, and

4) Take chemical stimulants.

I don't mean to imply that this is the *only* advice given, or even that these are the four most important ones. Rather, I claim that these four topics, taken together, account for a large share of the behavioral advice dispensed here. I predict that you would find it difficult or impossible to construct a list of four other pieces of behavioral advice such that people would reliably say that your list is more fairly representative of the advice on Less Wrong. As XiXiDu was kind enough to put it, there is numerical evidence to suggest that my list is "not entirely unfounded."

The problem with this advice is that, for certain kinds of tech/geek minds, the advice is extremely well-optimized for cheaply supporting pleasurable yet useless beliefs -- a kind of wireheading that works on your prefrontal cortex instead of directly on your pleasure centers.

By cheaply, I mean that the beliefs won't really hurt you...it's relatively safe to believe in them. If you believe that traffic in the U.S. drives on the left-hand side of the street, that's a very expensive belief; no matter how happy you are thinking that you, and only you, know the amazing secret of LeftTrafficIsm, you won't get to experience that happiness for very long, because you'll get into an auto accident by tomorrow at the latest. By contrast, believing that your vote in the presidential primaries makes a big difference to the outcome of the election is a relatively cheap belief. You can go around for several years thinking of yourself as an important, empowered, responsible citizen, and all it costs you is a few hours (tops) of waiting in line at a polling station. In both cases, you are objectively and obviously wrong -- but in one case, you 'purchase' a lot of pleasure with a little bit of wrongness, and in the other case, you purchase a little bit of pleasure with a tremendous amount of wrongness.

Among the general public, one popular cheap belief to 'buy' is that a benevolent, powerful God will take you away to magical happy sunshine-land after you die, if and only if you're a nice person who doesn't commit suicide. As it's stated, indulging in that belief doesn't cost you much in terms of your ability to achieve your other goals, and it gives you something pleasant to think about. This belief is unpopular with the kind of people who are attracted to Less Wrong, even before they get here, because we are much less likely to compartmentalize our beliefs.

If you have a sufficiently separate compartment for religion, you can believe in heaven without much affecting your belief in evolution. God's up there, bacteria are down here, and that's pretty much the end of it. If you have an integrated, physical, reductionist model of the Universe, though, believing in heaven would be very expensive, because it would undermine your hard-won confidence in lots of other practically useful beliefs. If there are spirits floating around in Heaven somewhere, how do you know there aren't spirits in your water making homeopathy work? If there's a benevolent God watching us, how do you know He hasn't magically guided you to the career that best suits you? And so on. For geeks, believing in heaven is a lousy bargain, because it costs way too much in terms of practical navigation ability to be worth the warm fuzzy thoughts.

Enter...cryonics and Friendly AI. Oh, look! Using only physical, reductionist-friendly mechanisms, we can show that a benevolent, powerful entity whose mind is not centered on any particular point in space (let's call it 'Sysop' instead of God) might someday start watching over us. As an added bonus, as long as we don't commit suicide by throwing our bodies into the dirt as soon as our hearts stop beating, we can wake up in the future using the power of cryonics! The future will be kinder, richer, and generally more fun than the present...much like magical happy sunshine-land is better than Earth.

Unlike pre-scientific religion, the "cryonics + Friendly AI" Sysop story is 'cheap' for people who rarely compartmentalize. You can believe in Sysop without needing to believe in anything that can't be explained in terms of charge, momentum, spin, and other fundamental physical properties. Like pre-scientific religion, the Sysop story is a whole lot of fun to think about and believe in. It makes you happy! That, in and of itself, doesn't make you wrong, but it is very important to stay aware of the true causes of your beliefs. If you came to believe a relatively strange and complicated idea because it made you happy, it is very unlikely that this same idea just happens to also be strongly entangled with reality.

Partisanship

As for dropping out of other religious communities, well, they're the quintessential bad guys, right? Not only do they believe in all kinds of unsubstantiated woo, they suck you into a dense network of personal relationships -- which we at Less Wrong want earnestly to re-create, just, you know, without any of the religion stuff. The less emotional attachment you have to your old community, the freer and more available you'll be to help bootstrap ours!

Why should you spend all your time trying to get one of the first rationalist communities up and running (hard) instead of joining a pre-existing, respectable religious community (easy)? Well, to be fair, there are lots of good reasons. Depending on how rationalist you are, you might strongly prefer the company of other rationalists, both as people to be intimate with and as people to try to run committee meetings with. If you're naturally different enough from the mainstream, it could be more fun and less frustrating for you to just join up with a minority group, despite the extra effort needed to build it up.

There is a meme on Less Wrong, though, that rationalist communities are not just better-suited to the unique needs of rationalists, but also better in general. Rationality is the lens that sees its own flaws. We get along better, get fit faster, have more fun, and know how to do more things well. Through rationality, we learn to optimize everything in sight. Rationality should ultimately eat the whole world.

Again, you have to ask yourself: what are the odds that these beliefs are driven by valid evidence, as opposed to ordinary human instincts for supporting their own tribe and denigrating their neighbors? As Eliezer very fairly acknowledges, we don't even have decent metrics for measuring rationality itself, let alone for measuring the real-world effects that rationality supposedly has or will have on people's wealth, health, altruism, reported happiness, etc.

Do we *really* identify our own flaws and then act accordingly, or do we just accept the teachings of professional neuroscientists -- who may or may not be rationalists -- and invent just-so stories 'explaining' how our present or future conduct dovetails with those teachings? Take the "foveal blind spot" that tricks us into perceiving stars in the night sky as disappearing when we look straight at them. Do you (or anyone you know) really have the skill to identify a biological human flaw, connect it with an observed phenomenon, and then deviate from conventional wisdom on the strength of your analysis? If mainstream scientists believed that stars don't give off any light that strikes the Earth directly head-on, would you be able to find, digest, and apply the idea of foveal blind spots in order to prove them wrong? If not, do you still think that rationalist communities are better than other communities for intelligent but otherwise ordinary people? Why?

It would be really convenient if rationality, the meme-cluster that we most enjoy and are best-equipped to participate in, also happened to be the best for winning at life. In case it turns out that life is not quite so convenient, maybe we should be a little humbler about our grand experiment. Even if we have good reason to assert that mainstream religious thinking is flawed, maybe we should be slower to advise people to give up the health benefits (footnote 15) of belonging, emotionally, to one or another religious community.

Bullshit

Finally, suppose you publicly declared yourself to have nigh-on-magical powers -- by virtue of this strange thing that few people in your area understand, "rationality," you can make yourself smarter, more disciplined, more fun to be around, and generally awesome-r. Of course, rationality takes time to blossom -- everyone understands that; you make it clear. You do not presently have big angelic powers; it is just that you will get your hands on them soon.

A few months go by, maybe a year, and while you have *fascinating* insights into cognitive biases, institutional inefficiencies, and quantum physics, your friends either can't understand them or are bored to tears by your omphaloskepsis. You need to come up with something that will actually impress them, and soon, or you'll suffer from cognitive dissonance and might have to back off of your pleasurable belief that rationality is better than other belief systems.

Lo and behold, you discover the amazing benefits of chemical stimulants! Your arcane insights into the flaws of the institutional medical establishment and your amazing ability to try out different experimental approaches and faithfully record which ones work best have allowed you to safely take drugs that the lay world shuns as overly dangerous. These drugs do, in fact, boost your productivity, your apparent energy, and your mood. You appear to be smarter and more fun than those around you who are not on rationally identified stimulants. Chalk one up for rationality.

Unless, of course, the drugs have undesirable long-term or medium-term effects. Maybe you develop tolerance and have to take larger and larger doses. Maybe you wear out your liver or your kidneys, or lower your bone density. Maybe you mis-underestimate your ability to operate heavy machinery on polyphasic sleep cycles, and drive off the side of the road. Less Wrong is too young, as a meme cluster, for most of these hazards to have been triggered. I wouldn't bet on any one of those outcomes for any one drug...but if your answer to the challenges of life is to self-medicate, you're taking on a whole lot more risk than the present maturity of the discipline of rationality would seem to warrant.

Conclusion

I've tried my best not to frontally engage any of the internal techniques or justifications of rationality. On what Robin Hanson would call an inside view, rationality looks very, very attractive, even to me. By design, I have not argued here that, e.g., it is difficult to revive a frozen human brain, or that the FDA is the best judge of which drugs are safe.

What I have tried to do instead is imagine rationality and all its parts as a black box, and ask: what goes into it, and what comes out of it? What goes in is a bunch of smart nonconformists. What comes out, at least so far, is some strange advice. The advice pays off on kind of a bimodal curve: cryonics and SIAI pay off at least a decade in the future, if at all, whereas drugs and quitting religion offer excellent rewards now, but may involve heavy costs down the road.

The relative dearth of sustainable yet immediate behavioral payoffs coming out of the box leads me to suspect that the people who go into the box go there not so much to learn about superior behaviors, but to learn about superior beliefs. The main sense in which the beliefs are superior is their ability to make tech/geek people think happy thoughts without 'paying' too much in bad outcomes.

I hold this suspicion with about 30% confidence. That's not enough to make me want to abandon the rationalist project -- I think, on balance, it still makes sense for us to try to figure this stuff out. It is enough for me to want to proceed more carefully. I would like to see an emphasis on low-hanging fruit. What can we safely accomplish this month? this year? I would like to see warnings and disclaimers. Instead of blithely informing everyone else of how awesome we are, maybe we should give a cheerful yet balanced description of the costs and benefits. It's OK to say that we think the benefits outweigh the costs...but, in a 2-minute conversation, the idea of costs should be acknowledged at least once. Finally, I would like to see more emphasis on testing and measuring rationality. I will work on figuring out ways to do this, and if anyone has any good measurement schemes, I will be happy to donate some of my money and/or time to support them.

162 comments

Comments sorted by top scores.

comment by Normal_Anomaly · 2011-07-03T02:46:43.049Z · LW(p) · GW(p)

I've tried my best not to frontally engage any of the internal techniques or justifications of rationality. On what Robin Hanson would call an inside view, rationality looks very, very attractive, even to me. By design, I have not argued here that, e.g., it is difficult to revive a frozen human brain, or that the FDA is the best judge of which drugs are safe.

To be honest, I think you should have. Meta-arguments for why the causes of our beliefs are suspect are never going to be as convincing as evidence for why the beliefs themselves are wrong.

Also, I think the parts about psychoactive drugs are somewhere between off-topic and a straw man. One of the posts you linked is titled "Coffee: When it helps, when it hurts"--two sides of an argument for a stimulant that probably a supermajority of adults use regularly. In another, 2 of the 18 suggestions offered involve substance use.

Thirdly, while rationality in the presence of akrasia does not have amazing effects on making us more effective, rationality does have one advantage that's been overlooked a lot lately: it results in true beliefs. Some people, myself included, value this for its own sake, and it is a real benefit.

Replies from: Kaj_Sotala, Mass_Driver, XiXiDu
comment by Kaj_Sotala · 2011-07-04T13:45:46.656Z · LW(p) · GW(p)

Meta-arguments for why the causes of our beliefs are suspect are never going to be as convincing as evidence for why the beliefs themselves are wrong.

However, even if the beliefs are correct, many people will still accept them for the wrong reasons. These "meta-arguments" are powerful psychological forces, which affect all people.

I would suspect that LW has a small bunch of people who have arrived at LW-ish beliefs because of purely rational reasoning. There's a larger group that has arrived at the same beliefs ~purely because of human biases (including the factors listed in the post). And then there's a larger yet group that has arrived at them partially because of rational reasoning, and partially because of biases.

comment by Mass_Driver · 2011-07-03T06:17:07.514Z · LW(p) · GW(p)

Maybe I should drop the stimulants! What other advice have you noticed on Less Wrong?

comment by XiXiDu · 2011-07-03T10:18:23.747Z · LW(p) · GW(p)

...rationality does have one advantage that's been overlooked a lot lately: it results in true beliefs. Some people, myself included, value this for its own sake, and it is a real benefit.

Not really. I have been saying the same as you, that a true belief is valuable in and of itself, even if you don't like the consequences. But I don't believe that to be true anymore. As Roko once wrote, "I wish I would have never learnt about existential risks".

Replies from: byrnema
comment by byrnema · 2011-07-07T17:17:09.532Z · LW(p) · GW(p)

I also used to feel very optimistic and excited about 'true beliefs', believing having more of them would represent such incredible progress, but now I only have the memory of valuing them, and continue to pursue them a little out of discipline and habit. Scientific belief is an exception, but regarding anything that I would call 'philosophical' (for lack of a better word), pursuing true belief seems empty after all.

My reason for this is that I thought 'true belief' (how I define it, as a collection of metaphysical/philosophical ideas) would reflect some kind of reality (for example, a framework of objective value), but since such ideas aren't entangled with reality, they don't matter.

By the way, I consider 3^^^^3 years from now to be not entangled with reality. Having just read through your link and the helpful comments people made throughout, could you comment on which advice was most immediately helpful, or have you found any temporary or ameliorating patches since then?

Replies from: XiXiDu
comment by XiXiDu · 2011-07-08T10:30:39.003Z · LW(p) · GW(p)

Having just read through your link and the helpful comments people made throughout, could you comment on which advice was most immediately helpful, or have you found any temporary or ameliorating patches since then?

I would have to read the replies again to give a definite answer, but mostly I now reason along the lines of this comment.

comment by Scott Alexander (Yvain) · 2011-07-03T08:31:44.687Z · LW(p) · GW(p)

The advice, the tone, the vibe 'feels' wrong, somehow. If you forced me to use more precise language, I might say that, for several years now, I have kept a variety of procedural heuristics running in the background that help me ferret out bullshit, partisanship, wishful thinking, and other unsound debating tactics -- and important content on this website manages to trigger most of them.

To come up with a theory on the fly, maybe there are two modes of expansion for a group: by providing some service, and by sheer memetic virulence. One memetic virulence strategy operates by making outlandish promises that subscribing to it will make you smarter, richer, more successful, more attractive to the opposite sex, and just plain superior to other people - and then doing it in a way that can't obviously be proven wrong. This strategy usually involves people with loads of positive affect going around telling people how great their group is and how they need to join.

As a memetic defense strategy, people learn to identify this kind of spread and to shun groups that display its features. From the inside, this strategy manifests as a creepy feeling.

LW members have lots of positive affect around LW and express it all the time, the group seems to be growing without providing any services (eg no one's worried when a drama club grows, because they go and put on dramas, but it's not clear what our group is doing besides talking about how great we are), and we make outrageously bold claims about getting smarter and richer and sexier of the sort which virulent memes trying to propagate would have to make.

I don't think this creepiness detector operates on the conscious level, any more than the chance-of-catching-a-disease detector that tells us people with open sores all over their body are disgusting operates on a conscious level. We don't stop considering the open sores disgusting if we learn they're not really contagious, and we don't stop considering overblown self-improvement claims from an actively recruiting community to be a particularly virulent memetic vector even if we don't consciously believe that's what's going on.

(I'm still agnostic on that point. I'm sure no one intended this to be a meme optimized for self-propagation through outlandish promises, but it's hard to tell if it's started mutating that way.)

Replies from: fubarobfusco, malthrin, Bongo
comment by fubarobfusco · 2011-07-03T20:30:04.195Z · LW(p) · GW(p)

we make outrageously bold claims about getting smarter and richer and sexier

I'd like to know where all this LW-boasting is going on. I don't think I hear it at the meetups in Mountain View, but maybe I've been missing something.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2011-07-03T22:30:57.866Z · LW(p) · GW(p)

Darnit, I don't like being vague and I also don't like pointing to specific people and saying "YOU! YOU SOUND CULTISH!" so I'm going to have a hard time answering this question in a satisfying way, but...

Lots of people are looking into things like nootropics/intelligence amplification, entrepreneurship, and pick-up artistry. And this is great. What gives me the creepy vibe is when they say (more on the site than at meetups) "And of course, we'll succeed at these much faster than other people have, even though they are professionals in this field, because we're Rationalists and they weren't." Anything involving the words "winning", "awesomeness", or gratuitous overuse of community identification terms like "primate" or "utility".

Trying to look for examples, I notice it is a smaller proportion of things than I originally thought and I'm probably biased toward overcounting them, which makes sense since in order to justify my belonging to a slightly creepy group I need to exaggerate my opposition to the group's creepiness.

Replies from: fubarobfusco
comment by fubarobfusco · 2011-07-04T02:10:02.030Z · LW(p) · GW(p)

Nonetheless, perhaps we need to adopt a new anti-cultishness norm against boasting about the predicted success of rationalists; or against ascribing personal victories to one's rationality without having actually done the math to demonstrate the correlation between success and rationality. The cult attractor is pretty damn bad, after all, and ending up in it could easily destroy one hell of a lot of value.

Replies from: Bongo
comment by Bongo · 2011-07-04T14:25:03.937Z · LW(p) · GW(p)

norm against boasting about ... predicted success

This is a great idea!

comment by malthrin · 2011-07-03T12:44:40.034Z · LW(p) · GW(p)

One memetic virulence strategy operates by making outlandish promises that subscribing to it will make you smarter, richer, more successful, more attractive to the opposite sex, and just plain superior to other people - and then doing it in a way that can't obviously be proven wrong.

That similarity is the key to both the perceived creepiness factor and the signal:noise ratio on this site. Groups formed to provide a service have performance standards that their members must achieve and maintain: drama clubs and sports teams have tryouts, jobs have interviews, schools have GPA requirements, etc. By contrast, groups serving as vehicles for contagious memes avoid standards. Every believer, even if personally useless to the stated aims of the group, is a potential transmission vector.

I see two reasons to care which of those classes of groups LW more closely resembles: first, to be aware of how we're coming across to others; and second, as a measure of whether anything is actually being accomplished here.

Personally, I try to avoid packaging LW's community and content into an indivisible bundle. From Resist the Happy Death Spiral:

To summarize, you do avoid a Happy Death Spiral by (1) splitting the Great Idea into parts (2) treating every additional detail as burdensome (3) thinking about the specifics of the causal chain instead of the good or bad feelings (4) not rehearsing evidence (5) not adding happiness from claims that "you can't prove are wrong"; but not by (6) refusing to admire anything too much (7) conducting a biased search for negative points until you feel unhappy again (8) forcibly shoving an idea into a safe box.

There are a great many insightful posts on LW, mostly from Eliezer, Yvain, and a few others. There are other posts that are less specific and of correspondingly smaller insight. There is also a community centered in the discussion section that spends most of its time espousing the beliefs in the main post. Rather than allowing all these ideas to prop each other up, I'm content to wield the supported and useful techniques and discard the rest.

comment by Bongo · 2011-07-03T14:24:50.542Z · LW(p) · GW(p)

LW members have lots of positive affect around LW and express it all the time, the group seems to be growing without providing any services (eg no one's worried when a drama club grows, because they go and put on dramas, but it's not clear what our group is doing besides talking about how great we are), and we make outrageously bold claims about getting smarter and richer and sexier of the sort which virulent memes trying to propagate would have to make.

  • reminder that it wasn't always like this
comment by fubarobfusco · 2011-07-03T07:52:07.634Z · LW(p) · GW(p)

I'm confused by the claim that those four are "the most prominent themes in terms of short-term behavioral advice" around here.

When I think of advice given here, "use drugs!" doesn't seem to be in my top seven. Most of the advice I've heard around here, both from the Sequences and others, seems to be in what might be considered "epistemic hygiene" or good practices with regards to discovering and recognizing truth. This would include being aware of cognitive biases, noticing confusion, and so on. And many of these are indeed "short-term" advice, at least in the sense that some of them can be implemented very quickly.

(They're certainly more "short-term" than dropping out of a religious group would be, for a person who's actually involved in religion.)

What I think you might have in your list of four, there, is not a list of the most prominent themes here, but rather a list of some themes that worry you. And, as you note, they're worth worrying about — to a certain extent, these are themes that, if taken in the wrong direction, might participate in a cult attractor.

Avoiding becoming a cult is a recurring theme here. And as we know, you don't avoid being a cult by merely declaring "we're not a cult", or by declaring your group to be too smart, too moral, or too practical to possibly become a cult. In order to avoid being a cult (if you're at any risk of it), you have to notice what kinds of behavior count as "cultish" and avoid doing them.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2011-07-12T14:58:20.114Z · LW(p) · GW(p)

I agree that "use drugs!" didn't seem to be on the list. For a little while, "be a vegetarian!" seemed to be in play, but it faded.

comment by [deleted] · 2011-07-07T11:40:25.940Z · LW(p) · GW(p)

Honestly, I think the cluster of tech-savvy, young, smart-but-nonconformist types is really winning at the goal of being productive while happy. Not everybody makes it; but I've seen a lot of people have lives more satisfying than their parents ever could. People who've broken the conventional wisdom that you have to put up with a lot of bullshit because "that's life." Mainly, because instead of asking "What is the Thing To Do?" they've got the hang of asking "What is the best thing I could be doing?"

If cryonics is a bust, I'll grant that it's a genuine waste of money. The same is true for SIAI. (Though I'll mention that lots of otherwise fulfilled people donate to demonstrably inefficient charities that spend most of their money on employee salaries. Most middle-class people throw some money down the toilet and don't even notice it.) The other issues are not such a big deal. Leaving religious communities is not a blow to people who have figured out how to optimize life, because they aren't isolated any more. I don't even know if overuse of stimulants is that widespread -- I certainly know they aren't good for me.

As for having self-gratifying beliefs that aren't of much use ... well, everybody does that a little. Guilty as charged. But for me at least, LessWrong's favorite issues led me to interests in similar-but-not-identical issues. General AI is pretty opaque to me, but now I'm interested in narrow AI (and its statistical/mathematical cousins.) The abstract discussion of rationality has led me to take psychology and motivational advice more seriously.

Are LessWrong memes pretty confined to a subset of tech geeks and some young scientists and professionals? Yeah. For the moment, so what? That's the environment I want to be in; those are my friends, collaborators, and role models. Not everybody is suited to be a world-wide evangelist with a big bullhorn; I'm satisfied that there will always be people in the world who disagree with me.

Replies from: Vladimir_M, Mass_Driver
comment by Vladimir_M · 2011-07-07T21:46:25.397Z · LW(p) · GW(p)

Honestly, I think the cluster of tech-savvy, young, smart-but-nonconformist types is really winning at the goal of being productive while happy.

As a general rule, nonconformists aren't happy: they must choose between hiding their nonconformity and living a double life, which is never a happy situation, or being open nonconformists and suffering severe penalties for it. What you have in mind would probably be better described as people who know how to send off fashionable signals of officially approved pseudo-nonconformity, and to recognize and disregard rules that are only paid lip-service (and irrelevant except as a stumbling block for those not smart enough to realize it), but are perfect and enthusiastic conformists when it comes to things that really matter.

Replies from: syllogism, None, Nick_Tarleton, Mycroft65536
comment by syllogism · 2011-07-08T00:05:05.334Z · LW(p) · GW(p)

The key to successful non-conformity is to find your tribe later. If you look at people who've done this now, they seem like conformists, because they do what their peer-group does. But they've fit their peer-group to their personality, rather than trying to fit their personality to their peer-group. They've had to move through local minima of non-conformity.

Here are some examples of where I've made what have at the time been socially brave choices that have paid off big. This is exactly all about asking "what is the best thing I could be doing", not "what is the thing to do".

  • Decided to accept and admit to my bisexuality. This was very uncomfortable at first, and I never did really find a "home" in gay communities, as they conformed around a lot of norms that didn't suit me well. What accepting my sexuality really bought me is a critical stance on masculinity. Rejecting the normal definition of "what it means to be a man" has been hugely liberating. Being queer has a nice signalling perk on this, too. It's much harder to be straight and get away with this. If you're queer people shrug and put you in that "third sex" category of neither masculine nor feminine.

  • Decided not to pursue any of the "typical" careers. I was getting top marks in English and History in high school, and all the other kids with that academic profile were going into law. I chose to just do an arts degree in linguistics, with an eye on academia. This turned out to be a very important decision, as I'm very happy with my academic career in computational linguistics. When I meet people, they're amazed at how "lucky" I am to have found something so niche that fits me so well. Well, it isn't luck at all: I decided what everyone else was doing was not for me, and had to suck it up when people called me a fool for leaving all that near-certain law money on the table.

  • Decided the "school" of linguistics I'd trained in all through my undergraduate was completely wrong, requiring me to abandon my existing professional network and relearn almost everything. It was kind of a scientific crisis of faith. But I think I'm happier now than I would've been if I hadn't.

  • Decided to become vegetarian. This benefited me by reducing my cognitive dissonance between the empirical facts of the meat industry and my need to feel that I was making the world a better place, and wouldn't do something I had believed caused great harm just because it was normal. Now I have a network of vegetarian friends (not that I abandoned my old one, mind), so it doesn't feel like lonely dissent. And I did only "convert" after meeting a rationalist vegetarian friend. But the non-conformity pain was still there when I did it. I had to deal with feeling like a weirdo, which is unpleasant.

  • Hired a domestic cleaner. Domestic help is fairly socially unacceptable in my champagne socialist slice of Australia. How bourgeois! Well, yes --- we are totally bourgeois. Champagne socialists are very uncomfortable about this. This exchange of goods for services is very high utility for me, though.

So I disagree that "non-conformists" are worse off, for this definition of "non-conformist". People willing to make socially brave choices stand to gain a lot; people who are completely craven in the face of any social opprobrium wind up trapped in circumstances that don't suit them well.

Replies from: Vladimir_M, sam0345, Sniffnoy
comment by Vladimir_M · 2011-07-08T18:29:56.859Z · LW(p) · GW(p)

So I disagree that "non-conformists" are worse off, for this definition of "non-conformist". People willing to make socially brave choices stand to gain a lot; people who are completely craven in the face of any social opprobrium wind up trapped in circumstances that don't suit them well.

We clearly disagree on the definition of "nonconformity." If you use this word for any instance of resisting social pressure, then clearly you are right, but it also means that everyone is a nonconformist except people who live their entire lives as silent, frightened, and obedient doormats for others. Any success in life is practically impossible if you don't stand up for yourself when it's smart to do so, and if you don't exploit some opportunities opened by the hypocritical distinctions between the nominal and real rules of social interactions and institutions. But I wouldn't call any of that "nonconformity," a term which I reserve for opposition to truly serious and universally accepted rules and respectable beliefs. Of course, it makes little sense to argue over definitions, so I guess we can leave it at that.

Replies from: syllogism, None
comment by syllogism · 2011-07-09T04:40:52.508Z · LW(p) · GW(p)

Thanks for the clarification. I tend to call what you call non-conformists "sole dissenters". I've never done this.

comment by [deleted] · 2011-07-09T03:15:32.963Z · LW(p) · GW(p)

If it makes any difference to you, my definition of "nonconformist" was someone who exhibits some social courage. For example, someone who decides to leave college to pursue plans of his own. Many people don't stand up for themselves even a little. Or acknowledge to themselves that they don't desire what other people expect for them. I have a hard time with this myself, which is why I don't take this ability for granted. That's all I meant by "nonconformist." Don't take the terminology too seriously.

comment by sam0345 · 2011-07-08T05:21:24.190Z · LW(p) · GW(p)

None of these are nonconformity: all of them are fashionable signals of officially approved affluent pseudo-nonconformity. For example, the vast majority of people who claim to be vegetarians are not, but claim to be vegetarians for the status.

And it is simply absurd to suggest that Australian champagne socialists disapprove of hiring domestic help. They are always one-upping each other on how little housework they do.

Replies from: syllogism, Nornagest, None, syllogism
comment by syllogism · 2011-07-08T06:33:49.178Z · LW(p) · GW(p)

Almost everything's fashionable to someone, somewhere. You can start with a certain in-group and non-conform by deciding to eat meat. You can non-conform out of the gay community by deciding you're actually straight.

The issue of conformity arose in this thread from SarahC's comment:

Honestly, I think the cluster of tech-savvy, young, smart-but-nonconformist types is really winning at the goal of being productive while happy. Not everybody makes it; but I've seen a lot of people have lives more satisfying than their parents ever could. People who've broken the conventional wisdom that you have to put up with a lot of bullshit because "that's life." Mainly, because instead of asking "What is the Thing To Do?" they've got the hang of asking "What is the best thing I could be doing?"

I think this really applies to me. My assessment of my life is that I'm much happier because of these moments where I've exercised even a little bit of courage in the face of social pressure. It wasn't a huge amount of courage, but it was non-zero --- which is more than many people are willing to do. I do believe that being utterly craven in the face of social opprobrium is a common failure mode, and it's an area where rationality pays dividends.

comment by Nornagest · 2011-07-08T05:27:00.040Z · LW(p) · GW(p)

For example, the vast majority of people who claim to be vegetarians are not, but claim to be vegetarians for the status.

Got a cite for that? Vegetarianism might be a questionable indicator of nonconformity, but I'd be much more willing to believe that vegetarianism's become common enough in a broad spectrum of subcultures to be disqualified as such than that a vast, or even a simple, majority of professed vegetarians aren't actual vegetarians. Perhaps modulo some wiggle room for culturally mandated meat-eating, like Thanksgiving turkeys in the US.

Now that I think about it, actually, it's a non sequitur either way. The hypocrisy/sincere profession ratio of a feature doesn't tell us much of anything about how acceptable it is in the mainstream: I'd expect many more people to claim to have Mafia ties than do in fact, but membership in a criminal fraternity is almost by definition nonconformist!

Replies from: sam0345
comment by sam0345 · 2011-07-08T07:51:27.828Z · LW(p) · GW(p)

Got a cite for that?

Merely a personal observation. I do however have a cite for the proposition that veganism is conformity, and omnivory a sinful deviation.

Replies from: Desrtopa
comment by Desrtopa · 2011-07-09T04:22:25.479Z · LW(p) · GW(p)

Citing a source that aims for humor rather than accuracy is a lot more helpful if you're aiming for flippancy rather than credibility.

comment by [deleted] · 2011-07-08T21:59:56.637Z · LW(p) · GW(p)

By the most naive rational appraisal, eating n% less meat than usual is fully n% as good -- say, for suffering animals -- as being a pure vegetarian. However, the social consequences of being a pure vegetarian seem to be entirely different than those of simply eating less meat. (I agree with sam0345 that those social consequences are largely positive.) It's interesting to think about why.
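To spell out that appraisal as a formula, under the linearity assumption the comment is already making (the symbol H below is mine, standing for the total harm attributable to a typical meat-eating diet; it is not something the commenter defines):

\[
\text{harm avoided}(n) \;=\; \frac{n}{100}\, H \;=\; \frac{n}{100}\times \text{harm avoided}(100), \qquad 0 \le n \le 100.
\]

On this admittedly naive model, every skipped serving counts equally whether or not you ever reach 100%, which is what makes the social asymmetry between "eats less meat" and "is a vegetarian" worth puzzling over.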

comment by syllogism · 2011-07-08T06:51:10.599Z · LW(p) · GW(p)

Also, a question about the "not vegetarians" thing. I'll grant you ahead of time that a great many vegetarians/vegans aren't doing it for any particularly rational reason. E.g., they think it's healthier (it's not), they think meat's gross (subjective --- but they're wrong anyway :p), they exaggerate the environmental case, etc. But I have a hard time believing they actually fail to eat little to no meat.

What are you counting as "failing to be vegetarian"? If they eat meat once a month? Once a week? Once a day? I'd say that someone that eats meat once a day is not vegetarian. But I'd also say it's reasonable for someone who eats meat even once a week to call themselves vegetarian. Are you claiming that there are lots of people who call themselves vegetarian but eat almost as much meat as "normal" people?

Even if vegetarianism were entirely status neutral, you need to communicate to people what you want to be eating. If you tell everyone "okay, I eat meat once a week", then chances are high two people per week are going to say "great, here's some meat". So you won't even be able to maintain this very liberal ratio.

I sometimes eat certain types of seafood, such as oysters or prawns, because I don't believe this is actually cruel. An oyster is not a pig. It doesn't have much of a nervous system to speak of. So why should I avoid eating them, just to meet someone's definition?

Similarly, if someone can't live healthily on a strictly vegetarian diet, but needs to eat some meat, why do they need to snap back to "no special diet" status? If they still think the meat industry is largely cruel, they can probably meet their health requirements by eating only a little meat. Why should this person not call themselves a vegetarian?

Replies from: taryneast, sam0345
comment by taryneast · 2011-07-08T16:13:36.257Z · LW(p) · GW(p)

I personally eat very little meat. I don't consider myself to be vegetarian.

I have never met a self-professed vegetarian whom I've seen eat meat. Not that this means there aren't any... but my experience suggests to me that meat-eating vegetarians are not "the majority".

I can, however, conceive that some vegans might say that non-vegan vegetarians are not "really" vegetarian.

I am also aware of a certain movement, sprung from the vegetarian community, to spruik the "eat less meat" philosophy.

One of these may be where sam0345 is hearing about non-vegetarian vegetarians...

comment by sam0345 · 2011-07-08T08:12:30.913Z · LW(p) · GW(p)

Why should this person not call themselves a vegetarian?

Of course this person can call himself a vegetarian. But that he is inclined to do so would indicate that vegetarianism is not nonconformity.

Replies from: syllogism
comment by syllogism · 2011-07-08T09:05:12.734Z · LW(p) · GW(p)

I don't follow? Even if vegetarianism carries highly negative status, the word's useful as a way to communicate your pre-commitments. Again, imagine the person who eats meat once a week attending several events per week where they will be expected to eat meat. If they don't call themselves vegetarian, they won't be able to keep their commitment. This says nothing about how much status they are gaining or losing, or how much they are 'conforming'.

comment by Sniffnoy · 2011-07-11T11:52:42.482Z · LW(p) · GW(p)

Academics are lower-status than lawyers where you are?

Replies from: syllogism
comment by syllogism · 2011-07-11T13:19:52.673Z · LW(p) · GW(p)

It sure seemed that way when I was 17.

comment by [deleted] · 2011-07-08T12:40:47.713Z · LW(p) · GW(p)

Different things can be meant by the word 'nonconformist'. Is it someone who doesn't care about conforming or someone who cares about non-conforming? The first kind of person will act weird as long as it doesn't hurt them too much but they will not engage in any norm-breaking that could put them in actual danger. They will even signal their harmless weirdness if they feel like it. The second kind will set out to prove to themselves that they are truly different and unique.

There are also people who feel strongly about being 'normal' and they also feel strongly about adhering to the cultural ideal of romantic rebelliousness that you talk about in your later comment so they will indeed seek cheap ways to signal nonconformist traits.

I read SarahC's comment as referring to nonconformists of the first kind, while from your comment I got the impression that you divide the space of weird people into 'true nonconformists' who seek weirdness for its own sake and pseudo-nonconformists who really want to fit in but at the same time try to give out a rebel vibe.

comment by Nick_Tarleton · 2011-07-08T00:22:21.450Z · LW(p) · GW(p)

fashionable signals of officially approved pseudo-nonconformity

things that really matter

Examples of these categories would be helpful. (In general, I find your comments interesting and your perspective important, but have a hard time understanding you due to frequent oblique allusions like these. I know that some of the time you're trying not to step on landmines, which seems like a good idea, but this doesn't seem like one of those cases.)

(On-topic, I agree with Mycroft65536.)

Replies from: Vladimir_M, AlexM, sam0345
comment by Vladimir_M · 2011-07-08T05:03:29.264Z · LW(p) · GW(p)

It seems to me that every human society has some romantic notion of heroic rebels and nonconformists, but for reasons that are interesting to speculate on, ours is obsessed with it to a very exceptional degree. (So much that people nowadays typically use the word "nonconformist" with a tone of approval, and rarely for those who fail to conform with norms and views that they themselves actually like.) This opens the way for people to gain status if they are capable of doing things that signal in a way that resonates with this heroic "nonconformist" image, while at the same time avoiding any really dangerous nonconformity.

Take for example all those artists and authors who get praised as "daring," "transgressive", "challenging taboos," etc., even though the things they do have been run-of-the-mill for many decades (or even much longer), the views they express (insofar as they express any) are entirely predictable for anyone familiar with the respectable intellectual mainstream, their high status is acknowledged by the mainstream media and academia, and some of them even get rich off of this "nonconformity." There are many other similar examples of cheap "nonconformist" signaling that is not backed by any serious nonconformity, including most (if not all) of the contemporary "subcultures."

(An even more extreme and farcical phenomenon occurs when the establishment itself includes some sort of fake "opposition" or orchestrates supposedly authentic "protests" or "activism." I won't get into any examples of this to avoid stirring up controversy.)

In contrast, true nonconformity would mean adopting views (and undertaking consequent actions) that seriously lower your status and risk severe loss of reputation, unemployability, criminal penalties, or even violent confrontation with the powers-that-be. Examples would be refusing to recognize the authority of the government over some laws whose enforcement is taken seriously, or becoming an outspoken propagandist for some shockingly extremist fringe group. Clearly this is not a way to a happy life, regardless of whether you have any sympathy for any such sort of people and their views.

Please also see my reply to Mycroft65536 below regarding non-conformist groups.

Replies from: taryneast
comment by taryneast · 2011-07-08T16:20:10.476Z · LW(p) · GW(p)

In contrast, true nonconformity would mean adopting views (and undertaking consequent actions) that seriously lower your status and risk severe loss of reputation, unemployability, ...

Doesn't have to be as serious as bucking the law. It can even be as simple as telling your boss that his idea won't work (because of X, Y and Z). Or deciding to buck the corporate dress requirements because you know you will never be put in front of a real customer and therefore should be allowed to be comfortable at work... etc., etc.

Replies from: Vladimir_M
comment by Vladimir_M · 2011-07-08T18:11:14.635Z · LW(p) · GW(p)

Only if you stretch the definition of "nonconformity" to the point of meaninglessness. If you define it so broadly to include things like these you mention -- polite disagreement with authority figures over technical matters and slight bending of rules to make things easier -- then practically every human being who has ever lived has been a "nonconformist."

Replies from: taryneast
comment by taryneast · 2011-07-09T16:20:01.699Z · LW(p) · GW(p)

Ah... by this I take it that you've never worked in a job where telling the boss what to do will end in your being disciplined for not toeing the company line. We're not talking "polite disagreement over technical matters" here. There are situations of this kind where you definitely suffer social stigma for speaking out. ...mostly when the company has become a cult... and it's much better to avoid this kind of company if you can - but that's very difficult in today's corporate culture.

Replies from: Vladimir_M
comment by Vladimir_M · 2011-07-10T01:18:25.382Z · LW(p) · GW(p)

Now I understand better what you're talking about. I have seen such examples of institutional mendacity, and I certainly agree that in some sorts of institutions it is so widespread that you may be faced with unpleasant trade-offs between your career (or other) interest and your integrity. So yes, I'd certainly count it as real nonconformity if you opt for the latter.

comment by AlexM · 2011-07-08T05:16:02.738Z · LW(p) · GW(p)

Here is a fake nonconformist and here is a real one.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2011-07-12T15:14:44.045Z · LW(p) · GW(p)

Your real non-conformist still has buddies. That's why it's worth his while to get coded tattoos which have online explanations.

comment by sam0345 · 2011-07-08T05:33:46.358Z · LW(p) · GW(p)

For a complete list of examples of officially approved pseudo-nonconformity, see "Stuff White People Like".

Vegetarianism, or the pious pretense of vegetarianism, is pretty high on the list.

Deep fry pork belly in smoking hot pig fat till light brown, then gently simmer on a slow heat. It will cure most people's foodie affectations. The slow simmer will produce some nice meat juices, which you should add to the roast potatoes.

comment by Mycroft65536 · 2011-07-07T23:50:11.583Z · LW(p) · GW(p)

I'm not sure about that. The world is big enough that you can live most of your life mostly in contact with other non-conformists in your particular cluster. I'm doing that right now.

Replies from: Vladimir_M
comment by Vladimir_M · 2011-07-08T05:02:17.490Z · LW(p) · GW(p)

The world is big enough that you can live most of your life mostly in contact with other non-conformists in your particular cluster.

The critical issue here is whether your nonconformist group has a truly independent status hierarchy and mechanisms of social support, i.e. if it really allows you to sever ties with the mainstream society and institutions so that you don't have to care about your status and reputation with them without severe negative consequences. I can hardly think of any such nonconformist groups except for some very insular religious sects -- the modern trend is almost uniformly towards strong consolidation of a single and universal status hierarchy whose rules apply to everyone.

Replies from: taryneast
comment by taryneast · 2011-07-08T16:21:52.312Z · LW(p) · GW(p)

Geeks, sci-fi fans, role-playing gamers...

Replies from: AlexM
comment by AlexM · 2011-07-08T16:50:53.786Z · LW(p) · GW(p)

Maybe in the '80s, when we were only a few Satan-worshipping nuts, but we are now in the 21st century, where science fiction and fantasy have decisively won and are The Mainstream Culture. Of the 10 biggest-grossing movies of the year, all are F/SF, and role-playing games are a worldwide multibillion-dollar industry. Someone who does not know who Harry Potter is and what Warcraft is would be the crazy weirdo :P

Replies from: taryneast
comment by taryneast · 2011-07-09T16:17:01.698Z · LW(p) · GW(p)

Some SF movies have been popular of late - and most mainstream films have become more science-heavy... but people that watch these shows are in no way fans of the genre. I don't know of any Mainstream types who read SF books regularly or who avidly watch more than one or two of the most main of mainstream SF shows/series.

By contrast, even the non-uber-geek SF fans will know a Ferengi from a Centauri by sight, will understand what the odd-even rule is and can probably rattle off the three laws of robotics (plus the extra one) on the spot. These are the people I mean - and I still think there's a difference between them and people who may have just watched The Matrix, LOTR or one of the X-Men movies.

Actually being part of the full on SF fandom culture is definitely non-conformist. Think trekkies (or trekkers if you prefer).

And by role-playing gamers... I don't mean video games... I mean classic dice-rolling "your elven warlock spots three kobolds" kind of role-playing games. WoW is a different kettle of fish... but I know of nobody that thinks classic RPGs are "mainstream".

comment by Mass_Driver · 2011-07-08T05:17:47.795Z · LW(p) · GW(p)

Awesome. I really like this position; it feels right. It was too stingy of me to say that all rationalism buys is a chance to conform, and it's too optimistic of certain LW cheerleaders to claim that rationality will promptly sweep the world or grant us superpowers. Your comment nails the middle path between these two extremes. :-)

Replies from: Multiheaded
comment by Multiheaded · 2011-07-12T00:07:32.944Z · LW(p) · GW(p)

Your comment nails the middle path between these two extremes. :-)

Are you deliberately trying to invoke our Deep Wisdom alarms with an incredibly blatant golden mean fallacy?

Replies from: Mass_Driver
comment by Mass_Driver · 2011-07-12T03:00:45.104Z · LW(p) · GW(p)

[Grin] Guilty as charged. That said, I do really like SarahC's position.

comment by Multiheaded · 2011-07-03T10:09:35.255Z · LW(p) · GW(p)

The bit about drugs is just stupefying. Did you really, really, mean what came out?

"Lots and lots of people on Less Wrong love drugs that are outlawed in the U.S., use them all the time for the explicit purpose of intelligence stimulation, and refuse to hear anything about their harmful effects, because Less Wrongers are extremely quick to explain away evidence they don't want to believe in - especially when it's supported by "uncool" people and groups - and probably can't even contemplate any long-term effects due to their geekishness and hidden immaturity. Here's a wise, fatherly-sounding warning to them, full of ol' good common sense that those naive kids haven't learned to trust yet."

I'm not voting this down because I'm just feeling a flat what.

Replies from: Multiheaded
comment by Multiheaded · 2011-07-07T08:46:48.269Z · LW(p) · GW(p)

Hmm, is the intent behind LW's karma system really OK with me being cleared for main-level posting for a single comment along these lines (exploiting a silly mistake by the community's opponent while citing the community's values, as long as I maintain a solid image of fairness; yes, the sole true reason for not voting down was my hunger for karma)?

Replies from: Barry_Cotter
comment by Barry_Cotter · 2011-07-07T21:45:09.814Z · LW(p) · GW(p)

Yes. The karma barrier for posting is to prevent spammers, and maybe, maaaybe, n00bs who are about to post something very, very stupid. I mean, if you post something unsuitable in Main all it'll take is two downvotes and your posting privileges will disappear.

comment by falenas108 · 2011-07-03T03:12:49.932Z · LW(p) · GW(p)

To be fair to LessWrong, although we do encourage quitting religion, we don't condemn attending. This post got 44 upvotes, and a decent chunk of the post was explaining how she went to church. I personally think the "don't attend church" mentality is more about the path being closed to us than anything against it.

comment by Wei Dai (Wei_Dai) · 2011-07-03T05:39:53.196Z · LW(p) · GW(p)

these are four of the seven most important themes on the site in terms of immediate advice about what to do

What are the other three? And shouldn't there be an explanation why they are excluded from this outside view analysis? (EDIT: See Mass_Driver's explanation here.)

let's call it 'Omega' instead of God

Please call it something else? Using 'Omega' seems unnecessarily confusing given that there's already a convention for using that name to denote a powerful and trustworthy (but not necessarily Friendly) entity in decision theory problems.

Replies from: Mass_Driver
comment by Mass_Driver · 2011-07-03T06:09:03.879Z · LW(p) · GW(p)

An excellent point. Can you suggest a better title? I could call it the "Singularity" story, but that would be a bit unfair as well.

Replies from: kpreid, Morendil
comment by kpreid · 2011-07-03T20:38:53.969Z · LW(p) · GW(p)

"Sysop".

comment by Morendil · 2011-07-03T12:58:11.433Z · LW(p) · GW(p)

What are the other three?

Question seconded. I'd also like to ask how you came to your conclusions as to the "themes of short-term advice" encountered on LW.

What is most salient or most available for you may not be so for everyone else or even most of the readership.

Replies from: Mass_Driver, XiXiDu
comment by Mass_Driver · 2011-07-03T19:24:12.275Z · LW(p) · GW(p)

I seem to have worded the bit about "four out of seven" poorly -- it's just meant to be a sort of confidence interval, but people seem to think I'm jealously hoarding my pile of three extra advice themes which I (for some reason) refuse to share with you all. I don't know what they are. If I had three more themes, I'd have listed them, and then said I had "seven out of twelve" or something like that. It's precisely because what's salient for me may not be so for others that I'm trying to be humble about how many of the top themes I've managed to identify.

comment by XiXiDu · 2011-07-03T13:26:47.526Z · LW(p) · GW(p)

I don't agree with the list of "most prominent themes in terms of short-term behavioral advice" mentioned in the original post, but I also don't think that it is completely unfounded:

  • 55 upvotes on a comment by someone donating the current balance of his bank account.
  • 15 upvotes on a request for more discussions of general cognitive enhancing tools such as Adderall and N-Back.
  • 42 upvotes on a post that claims that you are a lousy parent if you don't sign up your kids for cryonics.
  • Eliezer telling people to put hope into cryonics and advanced nanotechnology rather than "Noble Lies".
Replies from: Morendil, Morendil
comment by Morendil · 2011-07-03T17:30:16.970Z · LW(p) · GW(p)

Let me try to clarify what I mean.

Right now, as I'm writing this, someone coming across the LW home page would have some grounds to conclude that LW is advising:

  • reading interesting books or articles with quotable material
  • attending meetups
  • introspective exercises on why we reject some actions
  • watching your thoughts and words
  • brainstorming exercises
  • measuring your aversions
  • writing (or maybe reading) horoscopes

(I'm omitting posts which appear to be purely informational.)

Over the course of the next few weeks, this list will change until new content has entirely replaced the old; at that point if you asked again the question "what is LW advising" you'd see something different, maybe with substantial overlap with the above list, maybe not.

So that is one procedure to (attempt to) determine what are LW's major themes of short-term advice.

My point is that different procedures may yield different results, for different readerships.

Cryonics comes up every so often, but may or may not be perceived as a major theme - depending on how you read LW.

ETA: if you're going to count all-time upvotes, then it would make more sense for me to do a systematic survey: rank all posts by number of upvotes, possibly normalize by how long ago the material has been posted (more recent material has had less time to accumulate upvotes), extract from each post what advice it gives if applicable. What seems to be going on for both you and the OP is that you rank as "major" the things that have struck you the most. They may have struck you the most precisely because they were most unconventional, in which case you will come to unsound conclusions.
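
Sketched roughly -- purely as an illustration with made-up posts and scores, not anything actually run against the site -- the procedure above might look like:

```python
# Hypothetical sketch of the survey procedure described above: rank posts by
# upvotes normalized by post age, so older material doesn't win simply because
# it has had more time to accumulate votes. All data here is made up.
from datetime import datetime

posts = [
    {"title": "Post A", "upvotes": 120, "posted": datetime(2009, 3, 1), "advice": "sign up for cryonics"},
    {"title": "Post B", "upvotes": 40, "posted": datetime(2011, 6, 1), "advice": "attend meetups"},
]

now = datetime(2011, 7, 3)

def age_normalized_score(post):
    """Upvotes per day since posting -- one crude way to normalize for age."""
    days_up = max((now - post["posted"]).days, 1)
    return post["upvotes"] / days_up

# Rank by normalized score, then read off the advice each post gives.
for post in sorted(posts, key=age_normalized_score, reverse=True):
    print(post["title"], round(age_normalized_score(post), 3), "->", post["advice"])
```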

Replies from: XiXiDu
comment by XiXiDu · 2011-07-03T18:03:02.067Z · LW(p) · GW(p)

This doesn't prove anything, but I thought it was interesting. You can conduct your own searches: what results would you anticipate on a site like Less Wrong if it cared most strongly about rationality and much less about topics like AI and cryonics?

Replies from: Bongo
comment by Bongo · 2011-07-03T18:32:25.055Z · LW(p) · GW(p)

site:lesswrong.com "artificial intelligence" = 30,700 results
site:lesswrong.com "Singularity" = 32,000 results

Thought this was because of the logo at the top of the page, so searched for "Singularity Institute for Artificial Intelligence" and got:

So something's weird. Also, if you move "site:lesswrong.com" to the right side you get 116,000 instead.

Replies from: saturn, XiXiDu, fubarobfusco
comment by saturn · 2011-07-03T22:32:42.736Z · LW(p) · GW(p)

Google's result counter is an estimate, and not a very good one. It's within 2 or 3 orders of magnitude... usually.

comment by XiXiDu · 2011-07-03T18:44:26.296Z · LW(p) · GW(p)

You're right.

comment by fubarobfusco · 2011-07-03T22:35:34.198Z · LW(p) · GW(p)

Or maybe those result counts don't measure what you think they measure.

comment by Morendil · 2011-07-03T15:38:41.554Z · LW(p) · GW(p)

What is most salient or most available for you may not be so for everyone else or even most of the readership.

comment by calcsam · 2011-07-04T13:19:12.846Z · LW(p) · GW(p)

It would be really convenient if rationality, the meme-cluster that we most enjoy and are best-equipped to participate in, also happened to be the best for winning at life.

I think this is the strongest point in the whole argument.

Data point: I brought my parents to a Mountain View LW meetup. My parents aren't religious, and my dad is a biochemist who studies DNA repair mechanisms; they define themselves by their skepticism and emphasis on science. So the perfect target audience. But they seemed unenthusiastic; they felt it was by and for tech-savvy smart young adults, but not really for the population as a whole.

This is the most coherent argument I've seen against memeticizing Less Wrong. Thank you.

comment by Nisan · 2011-07-04T22:37:30.487Z · LW(p) · GW(p)

You seem to be making the point that our[1] recommendation of cryonics facilitates an unfounded belief that one day there will be a benevolent superintelligence that will revive the corpsicle patients. I think that criticism could be appropriately aimed at zealous and silly transhumanists, but not at Less Wrong. Here you will be told that signing up for cryonics gives you only a 5% chance at living forever. You'll be told that there's a pretty good chance of superintelligence existing in the future, but there are at least even odds of it being not benevolent. And Eliezer, who came up with the Sysop scenario in the first place, explicitly warned against wasting time thinking about such things. You won't find that kind of shiny eschatology here.

[1] It's fair to say that Less Wrong advises signing up for cryonics, although there isn't a consensus on this point.

comment by Emile · 2011-07-03T11:47:43.922Z · LW(p) · GW(p)

Drugs aren't a big part of this site; there may be a few members who recommend some chemical stimulants, but it's far from being a consensus. If you asked me to list important and useful ideas and advice from LessWrong, I don't think I would list "use drugs" except maybe in 100th position or so.

As for quitting religion, I don't recall seeing anybody actually recommend that people drop out of religious groups (though there may be some - any links?); it's just that some people have done so as a result of just not believing in religion any more.

comment by Will_Sawin · 2011-07-07T14:49:43.604Z · LW(p) · GW(p)

If you believe we live in a universe where most things are possible, you will focus on the best things and how to achieve them, and the worst things and how to avoid them.

Separately, if you want to construct a highly viral religion meme, you will focus on the best things and how to achieve them, and the worst things and how to avoid them.

Taking the really awesome ideas of religions (God, afterlife) and figuring out the most plausible scientific explanation for them is exactly what we should be doing, since we want to maximize the probability of God and afterlife.

comment by [deleted] · 2011-07-03T17:05:05.763Z · LW(p) · GW(p)

Enter...cryonics and friendly AI. Oh, look! Using only physical, reductionist-friendly mechanisms, we can show that a benevolent, powerful entity whose mind is not centered on any particular point in space and whose existence cannot presently be confirmed (let's call it 'Omega' instead of God) might someday be watching over us.

I see analogies with three religious tropes here: the omnipresence of god, religions' claims to be non-disprovable, and the tendency of religions to give their gods cool-sounding names. The last one is simply confused ('Omega' is the designation of a perfect and trustworthy predictor postulated in philosophical thought-experiments which are quite different from speculations about the possibility of building an artificial intelligence). The middle one is vaguely misleading -- the presence of a powerful and benevolent entity of the AI persuasion in the vicinity of Earth can presently be thoroughly disconfirmed, and I've never seen anyone claiming otherwise or hedging about it. The first one has some loose connection to the way things have been discussed around here, but I still wouldn't call it a good characterization of beliefs common on this site.

Seriously, why is it that people have to get all strawman-y and hyperbolic whenever they talk about the obvious similarities between transhumanist ideas and religious thought?

comment by XiXiDu · 2011-07-03T14:17:47.732Z · LW(p) · GW(p)

I hold this suspicion with about 30% confidence, which is enough to worry me, since I mostly identify as a rationalist. What do you think about all this? How confident are you?

I think the recent surge in meetups shows that people are mainly interested to group with other people who think like them rather than rationality in and of itself. There is too much unjustified agreement here to convince me that people really mostly care about superior beliefs. Sure, the available methods might not allow much disagreement about their conclusions, but what about doubt in the very methods that are used to evaluate what to do?

Most of the posts on LW are not wrong, but many exhibit some sort of extraordinary idea. Those ideas seem mostly sound, but if you take all of them together and arrive at something really weird, I think some skepticism is appropriate (at least more than can currently be found).


Here is an example:

1.) MWI

The many-worlds interpretation seems mostly justified, probably the rational choice of all available interpretations (except maybe Relational Quantum Mechanics). How to arrive at this conclusion is also a good exercise in refining the art of rationality.

2.) Belief in the Implied Invisible

If P(Y|X) ≈ 1, then P(X∧Y) ≈ P(X).

In other words, logical implications do not have to pay rent in future anticipations.

3.) Decision theory

Decision theory is an important field of research. We can learn a lot by studying it.

4.) Intelligence explosion

Arguments in favor of an intelligence explosion, made by people like I.J. Good, are food for thought and superficially sound. This line of reasoning should be taken seriously and further research should be conducted examining that possibility.


Each of those points (#1, 2, 3, 4) is valuable and should be taken seriously. But once you build conjunctive arguments out of those points (1∧2∧3∧4), you should be careful about the overall credence of each point and the probability of their conjunction. Even if all of them seem to provide valuable insights, any extraordinary conclusions implied by their conjunction might outweigh the benefit of each belief if the overall conclusion is just slightly wrong.
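
To make the arithmetic concrete, here is a minimal sketch with purely illustrative credences (numbers assumed for the example, not taken from anywhere), treating the four claims as roughly independent:

```python
# Illustrative only: a conjunction of individually plausible claims can be
# far less probable than any one of them. The credences are made-up
# placeholders, and rough independence is assumed.
credences = {
    "MWI": 0.8,
    "Implied Invisible": 0.9,
    "Decision theory": 0.9,
    "Intelligence explosion": 0.7,
}

joint = 1.0
for claim, p in credences.items():
    joint *= p

print("Joint probability of the conjunction: %.2f" % joint)  # 0.45
```

However plausible each step looks, the conjunction can never be more probable than its least probable conjunct.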

An example of where 1∧2∧3∧4 might lead:

"We have to take over the universe to save it by making the seed of an artificial general intelligence, that is undergoing explosive recursive self-improvement, extrapolate the coherent volition of humanity, while acausally trading with other superhuman intelligences across the multiverse."

or

"We should walk into death camps if it has no effect on the probability of being blackmailed."

Careful! The question is not whether our results are sound but whether the very methods we used to come up with those results are sufficiently trustworthy. This does not happen enough on LW: the methods are not examined, even though they lead to all kinds of problems like Pascal's Mugging or the 'Infinitarian Challenge to Aggregative Ethics'. Neither are the motives and trustworthiness of the people who make those claims examined. Which wouldn't even be necessary if we were dealing with interested researchers rather than people who ask others to take their ideas seriously.

Replies from: Wei_Dai, Bongo, orthonormal
comment by Wei Dai (Wei_Dai) · 2011-07-04T02:49:42.138Z · LW(p) · GW(p)

I sympathize with the overall thrust of this comment, that we should be skeptical of LW methods and results. I see lots of specific problems with the comment itself, but I'm not sure if it's worth pointing them out. Do the upvoters also see these problems, but just think that the overall point should be made?

To give a couple of examples, take the first and last sentences:

I think the recent surge in meetups shows that people are mainly interested to group with other people who think like them rather than rationality in and of itself.

I don't see how this follows. If people were interested in rationality itself, would they be less likely to organize or attend meetups? Why?

Which wouldn't even be necessary if we were dealing with interested researchers rather than people who ask others to take their ideas seriously.

(I guess "interested" should be "disinterested" here.) Given that except for a few hobbyists (like myself), all researchers depend on others taking their ideas seriously for their continued livelihoods, how does this sentence make sense?

Replies from: XiXiDu, XiXiDu, XiXiDu
comment by XiXiDu · 2011-07-04T10:29:31.294Z · LW(p) · GW(p)

I don't see how this follows. If people were interested in rationality itself, would they be less likely to organize or attend meetups?

That is really a weak point I made there. It was not meant to be an argument but just a guess. I also don't want to accuse people of being overly interested in creating a community in and of itself rather than a community with the overall aim of seeking truth. I apologize for hinting at that possibility.

Let me expand on how I came to make that statement in the first place. I have always been more than a bit skeptical about the reputation system employed on Less Wrong. I think that it might unconsciously lead people to agree, because even slight disagreement might accumulate to negative karma over time. And even if, on some level, you don't care about karma, each time you are downvoted it gives you an incentive not to voice that opinion the next time, or to change how you portray it. I noticed that I myself, although I believe I don't care much about my rank within this community, have become increasingly reluctant to say something that I know will lead to negative karma. This works, of course, insofar as it maximizes the content that the collective intelligence of all the people on Less Wrong is interested in. But that content might be biased and, to some extent, dishonest. Are we really good at collectively deciding what we want to see more of, just by clicking two buttons that adjust a reward number? I am skeptical.

Now if you take into account my admittedly speculative opinion above, you might already guess what I think about the strong social incentives that might result from face-to-face meetings between people who are supposed to be interested in refining the art of rationality and learning about the nature of reality rather than in their own subjective opinions and biases.

(I guess "interested" should be "disinterested" here.) Given that except for a few hobbyists (like myself), all researchers depend on others taking their ideas seriously for their continued livelihoods, how does this sentence make sense?

I wasn't clear enough; I didn't expect the comment to get this much attention (which, I hope, disproves some of my points above). What I meant by "interested researchers rather than people who ask others to take their ideas seriously" is the difference between someone who studies a topic out of academic curiosity and someone who writes about a topic to convince people to contribute money to his charity. I don't know how to say that without sounding rude or sneaking in connotations. Yes, Less Wrong was created to support the mitigation of risks from AI (I can expand on this if you like; also see my comment here). Now this obviously sounds like I want to imply that there might be motives involved other than trying to save humanity. I am not saying that, although there might be subconscious motivations those people aren't even aware of themselves. I am just saying that it is another point that adds to the necessary caution that I perceive to be missing.

To be clear, I want SIAI to get enough support to research risks from AI. I am just saying that I would love to see a bit more caution when it comes to some of the overall conclusions. Taking ideas seriously is a good thing, to a reasonable extent. But my perception is that some people here hold unjustifiably strong beliefs that might be logical implications of some well-founded methods; I would be careful not to go too far.

Please let me know if you want me to elaborate on any of the specific problems you mentioned.

Replies from: None
comment by [deleted] · 2011-07-07T14:18:35.981Z · LW(p) · GW(p)

It is the rare researcher who studies a topic solely out of academic curiosity. Grant considerations tend to put heavy pressure on you to produce results, and quick, dammit, so you'd better study something that will let you write a paper or two.

Yes, you should watch out for bias in blog posts written by people you don't know who are potentially trying to sell you their charity. No, you should not relax that watchfulness when the author of whatever you're reading has a Ph.D.

comment by XiXiDu · 2011-07-04T10:53:29.854Z · LW(p) · GW(p)

Given that except for a few hobbyists (like myself), all researchers depend on others taking their ideas seriously for their continued livelihoods, how does this sentence make sense?

Yes, but Less Wrong is missing the ecosystem of dissenting, mutually exclusive opinions and peer review. Here we only have one side that cares strongly about certain issues, while those who care only about other issues tend to keep quiet so as not to offend those who care strongly. That isn't the case in academic circles. And since those who care strongly refuse to enter the academic landscape, this won't change either.

comment by XiXiDu · 2011-07-04T10:49:00.230Z · LW(p) · GW(p)

I don't see how this follows. If people were interested in rationality itself, would they be less likely to organize or attend meetups? Why?

It doesn't follow; I was wrong there. I meant to provoke three questions: 1) Are people joining this community mainly because they are interested in rationality and truth, or in other people who think like them? 2) Are meetups instrumental in refining rationality and seeking truth, or are they mainly done for the purpose of socializing with other people? 3) Are people who attend meetups strong enough to withstand the social pressure when it comes to disagreement about explosive issues like risks from AI?

Replies from: Peterdjones
comment by Peterdjones · 2011-07-04T13:50:13.670Z · LW(p) · GW(p)

You can care about an issue and dissent.

comment by Bongo · 2011-07-04T15:13:11.857Z · LW(p) · GW(p)

I think "we should be skeptical of our very methods" is a fully general counterargument and "the probability of the conjunction of four things is less than the probability of any one of them" is true but weak, since the conjunction of (only!) four things that it's worth taking seriously is still worth taking seriously.

Also,

Neither are the motives and trustworthiness of the people who make those claims examined.

Seems just obviously false. They're examined all the time. (And none of these links are even to your posts!)

Yes, the conclusions seem weird. Yes, maybe we should be alarmed by that. But let's not rationalize the perception of weirdness as arising from technical considerations rather than social intuitions.

Replies from: XiXiDu
comment by XiXiDu · 2011-07-04T16:16:31.962Z · LW(p) · GW(p)

Seems just obviously false. They're examined all the time. (And none of these links are even to your posts!)

You're right, I have to update my view there. When I started posting here I felt it was different. It now seems that it has changed somewhat dramatically. I hope this trend continues without itself becoming unwarranted.

Although I disagree somewhat with the rest of your comment. I feel I am often misinterpreted when I say that we should be more careful about some of the extraordinary conclusions here. What I mean is not their weirdness but the scope of the consequences of being wrong about them. I have a very bad feeling about using the implied scope of the conclusions to outweigh their low probability. I feel we should put more weight on the consequences of our conclusions being wrong than being right. I can't justify this, but an example would be quantum suicide (ignore, for the sake of the argument, that there are other reasons it is stupid besides the possibility that MWI is wrong). I wouldn't commit quantum suicide even given a high confidence in MWI being true. Logical implications don't seem enough in some cases. Maybe I am simply biased, but I have been unable to overcome it yet.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-10T16:03:13.394Z · LW(p) · GW(p)

I think your communication would really benefit from having a clear dichotomy between "beliefs about policy" and "beliefs about the world". All beliefs about optimal policy should be assumed incorrect, e.g. quantum suicide, donating to SIAI, or writing angry letters to physicists who are interested in creating lab universes. Humans go insane when they think about policy, and Less Wrong is not an exception. Your notion of "logical implication" seems to be trying to explain how one might feel justified in deriving political implications, but that totally doesn't work. I think if you really made this dichotomy explicit, and made explicit that you're worried about the infinite number of misguided policies that so naturally seem like they must follow from true weird beliefs, and not so worried about the weird beliefs in and of themselves, then folk would understand your concerns a lot more easily and more progress could be made on setting up a culture that is more resistant to rampant political 'decision theoretic' insanity.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2011-07-12T15:01:35.938Z · LW(p) · GW(p)

Is thinking about policy entirely avoidable, considering that people occasionally need to settle on a policy or need to decide whether a policy is better complied with or avoided?

Replies from: Humbug, Will_Newsome
comment by Humbug · 2011-07-12T15:21:24.876Z · LW(p) · GW(p)

...people occasionally need to settle on a policy or need to decide whether a policy is better complied with or avoided?

One example would be the policy not to talk about politics. Authoritarian regimes usually employ that policy, most just fail to frame it as rationality.

comment by Will_Newsome · 2011-07-13T19:53:39.890Z · LW(p) · GW(p)

No. But it is significantly more avoidable than commonly thought, and should largely be avoided for the first 3 years of hardcore rationality training. Or so the rules go in my should world.

Drawing a map of the territory is disjunctively impossible, coming up with a halfway sane policy based thereon is conjunctively impossible. Metaphorically.

comment by orthonormal · 2011-07-03T22:47:36.259Z · LW(p) · GW(p)

This is an excellent point, and well stated. I don't have anything to add, but an upvote didn't suffice.

comment by katydee · 2011-07-07T11:00:36.265Z · LW(p) · GW(p)

You know, I think a lot of this stuff really misses the mark. I would say that I agree with many of the LW "mainstream" beliefs, generally find my posts being upvoted, have attended meetups before and enjoyed myself, and so on-- but I've never tried nootropics, I think cryonics is an expensive way to buy optimism and signaling, I'm fairly sympathetic to religious groups, and I've said so explicitly several times without any real fear of retaliation or even downvoting.

As long as you express your opinions in a reasonable, self-reflective, and well thought out way, I've found you have nothing to worry about here, and that's really not the case in most other communities I've participated in. What are the "heresies" of the LW/human rationality community? It's hard to say.

comment by XiXiDu · 2011-07-03T09:57:00.120Z · LW(p) · GW(p)

Eliezer will not make you abandon your friends and family, run away to a far-off mountain retreat and drink poison Kool-Aid.

A post by Roko came to my mind (it all went down the road of insanity after that with other people suffering as a result):

I personally have suffered, as have many, from low-level punishment from and worsening of relationships with my family, and social pressure from friends; being perceived as weird. I have also become more weird - spending one's time optimally for social status and personal growth is not at all like spending one's time in a way so as to reduce existential risks. Furthermore, thinking that the world is in grave danger but only you and a select group of people understand makes you feel like you are in a cult due to the huge cognitive dissonance it induces.

Although if all works out well with those 'rationality camps', or whatever they are called, this might not be a problem anymore.

Replies from: orthonormal
comment by orthonormal · 2011-07-13T16:50:34.659Z · LW(p) · GW(p)

Given Less Wrong's size and demographics, it doesn't seem unreasonable to expect at least one flaming wreck like the Roko situation (especially given his prior behavior patterns); can you think of an online community centered on ideas that doesn't occasionally have someone over-commit to an extreme version of one of their ideas?

It's worth asking whether LW is doing a good job of keeping the potential for this low, though.

Replies from: wedrifid
comment by wedrifid · 2011-07-14T06:30:51.482Z · LW(p) · GW(p)

Given Less Wrong's size and demographics, it doesn't seem unreasonable to expect at least one flaming wreck like the Roko situation (especially given his prior behavior patterns); can you think of an online community centered on ideas that doesn't occasionally have someone over-commit to an extreme version of one of their ideas?

I take offence on Roko's behalf (and as someone who wishes to influence norms such that the above would not be accepted if directed at me at some time in the future). Over-committing to an extreme version of one of their ideas is an absurd thing to imply. Roko was not committed to an idea - it was casual speculation. The problem was entirely social. In this instance - and in the general case - it is personal behaviours, particularly of leaders, that make all the difference.

comment by Icelus · 2011-07-03T03:30:43.389Z · LW(p) · GW(p)

I just want to say thank you for posting to /r/discussion.

This kind of posting workflow is something I've tried to encourage through advice on the IRC channel, and I hope more people adopt it because I see a lot of potential in it. Namely, people who might not be totally ready for front-page posting can get good feedback and learn a lot, and then LW winds up with more high-quality articles than it would have otherwise. The more quality writing for LW, the better.

This is what I'd like to see more of!

Replies from: Mass_Driver
comment by Mass_Driver · 2011-07-03T06:12:37.942Z · LW(p) · GW(p)

Thank you for your comment! I respect and admire your demonstrated ability to make a meta-comment without getting into the details of the article. (For me at least), that takes quite a bit of self-restraint.

comment by multifoliaterose · 2011-07-03T21:31:54.412Z · LW(p) · GW(p)

Upvoted.

The relative dearth of sustainable yet immediate behavioral payoffs coming out of the box leads me to suspect that the people who go into the box go there not so much to learn about superior behaviors, but to learn about superior beliefs. The main sense in which the beliefs are superior is in terms of their ability to make tech/geek people think happy thoughts without 'paying' too much in bad outcomes.

Presumably there's at least some of this going on. But there's not an "either/or" dichotomy here. Some of the Less Wrong advice will turn out to fall into the above and other such advice will turn out to be solidly grounded.

For example, I think that more likely than not, focus on x-risk reduction as a philanthropic cause is grounded and that this is something that the LW community has gotten right but that more likely than not, donating to SIAI is not the best x-risk reduction opportunity on the table. I'm bothered by the fact that it appears to me that most SIAI supporters have not carefully considered the collection of all x-risk opportunities on the table with a view toward picking out the best one; a priori it seems that the one that's most salient initially is unlikely to be the best one altogether. (That being said, contingencies may point toward SIAI being the best possible option even after an analysis of all available options.)

Replies from: wallowinmaya
comment by David Althaus (wallowinmaya) · 2011-07-07T12:36:59.622Z · LW(p) · GW(p)

I'm bothered by the fact that it appears to me that most SIAI supporters have not carefully considered the collection of all x-risk opportunities on the table with a view toward picking out the best one

I'm really interested in this issue since I'm considering donating to x-risk organizations. Which organization do you think is best suited for existential risk reduction? Besides SIAI I can only think of FHI. IMO they are both preferable to the Foresight Institute and the Center for Responsible Nanotechnology. I don't know of any other organizations whose main focus is x-risks.

In another thread you said that the best way to contribute to x-risk-reduction is

to increase public interest in and concern for existential risk.

I agree!

You added that

SIAI seems poorly suited to generating interest in and concern for existential risk and may very well be lowering the prestige attached to investigating existential risk rather than raising the prestige attached to investigating existential risk.

Why do you think that this is the case? IMHO e.g. the Singularity Summits have increased public interest in and prestige attached to the Singularity and x-risks.

Replies from: multifoliaterose
comment by multifoliaterose · 2011-07-07T20:33:51.511Z · LW(p) · GW(p)

I like your screen name! (Reference to Buddhism?)

My impressions of SIAI and views on these things have evolved considerably since a year ago, when I commented in the thread that you quoted. I have a considerably more favorable impression of SIAI than I did at the time.

But regarding:

IMHO e.g. the Singularity Summit's have increased public interest in and prestige attached to the Singularity and x-risks.

Increasing public interest in and prestige attached to the Singularity may increase the rush toward advanced technologies which could raise the probability of an unfriendly AI.

I'm really interested in this issue since I'm considering donating to x-risk organizations. Which organization do you think is best suited for existential risk reduction? Besides SIAI I can only think of FHI. IMO they are both preferable to the Foresight Institute and the Center for Responsible Nanotechnology. I don't know of any other organizations whose main focus is x-risks.

I'm still in the process of gathering information on this topic and haven't come to a conclusion (not even a tentative one). Beyond the organizations that you list, there are organizations working against nuclear war, asteroid strike risk, global pandemics, etc. Friendly AI is the most important issue on the table but efficacy of working toward it may be less than that of working against other risks after time discounting for information uncertainty.

Would you like to correspond? If you PM me your email address I'd be happy to talk about this some more.

Replies from: wallowinmaya
comment by David Althaus (wallowinmaya) · 2011-07-08T08:08:49.887Z · LW(p) · GW(p)

I like your screen name! (Reference to Buddhism?)

Yes! Actually it also involves LSD and The Doors;) Googling suggests your screen name has something to do with Dante or T.S. Eliot ?

Anyway, you wrote:

Increasing public interest in and prestige attached to the Singularity may increase the rush toward advanced technologies which could raise the probability of an unfriendly AI.

Good point, but it seems that SIAI has one of the most pessimistic Singularity-concepts (especially if you compare it to the views of, say, Ben Goertzel, Ray Kurzweil or Max More) and therefore advocates strong precautionary measures, which in turn reduce x-risks.

Beyond the organizations that you list, there are organizations working against nuclear war, asteroid strike risk, global pandemics, etc. Friendly AI is the most important issue on the table but efficacy of working toward it may be less than that of working against other risks after time discounting for information uncertainty.

True, in fact thinking about possible nuclear war made me realize how important x-risks are. My main arguments against working for organizations against nuclear war are:

  1. They already have huge budgets (e.g. from Warren Buffett), so my money doesn't make a big difference.

  2. Many people, indeed whole countries, try to address those problems, so my efforts don't weigh much.

  3. The problem has existed for almost 70 years. Folks like Einstein and Russell, whom I greatly admire, have thought about these problems for years and, well, to be frank, I don't know if their efforts actually decreased or increased the risks! Maybe strategies like MAD are better than the ones proposed by Einstein and Russell. So why should I have any confidence in my strategies? Whereas with regards to AI x-risks, SIAI and in particular Yudkowsky seem to be way more competent than the other folks. (Excluding Bostrom, Hanson, Omohundro and probably others that I don't know, but the ones I find competent usually work for or with SIAI.)

  4. The whole issue involves too much politics (for my taste) -> rational argumentation is often frowned upon.

  5. Are nuclear wars really existential risks? I think they are only Global Catastrophic Risks, i.e. they won't lead to human extinction. (Of course, if you are a negative utilitarian this point is an advantage, but I'm not, at least I think so.)

You can apply these arguments, mutatis mutandis, to global pandemics, biotechnology, supervolcanoes, asteroid strikes, global warming and, to a lesser degree, nanotechnology.

I think this greatly outweighs the informational uncertainty of FAI. And, the final knock-down argument, IMHO: If you solve the FAI-problem you solve all the above listed problems, at a single blow!!!

But, I could be wrong! No, hopefully I'm wrong! Because the likely conclusion of my reasoning is the following: Work in finance and earn money in order to donate to SIAI! That sucks soooo much, to put it charitably. Anyway, the comment is already too long. Oh, and I would like to correspond! I'll PM you my email address.

Replies from: multifoliaterose
comment by multifoliaterose · 2011-07-08T09:08:48.850Z · LW(p) · GW(p)

Googling suggests your screen name has something to do with Dante or T.S. Eliot ?

Yes, it's from "The Hollow Men"

Good point, but it seems that SIAI has one of the most pessimistic Singularity-concepts (especially if you compare it to the views of, say, Ben Goertzel, Ray Kurzweil or Max More) and therefore advocates strong precautionary measures, which in turn reduce x-risks.

But Goertzel and Kurzweil are speakers at the Singularity Summit! :-) I agree that the talks by SIAI staff at the Singularity Summit which address AI risk reduce x-risk, but it's not clear to me that the Singularity Summit is positive on balance.

True, in fact thinking about possible nuclear war made me realize how important x-risks are. My main arguments against working for organizations against nuclear war are: 1. They already have HUGE budgets (e.g. from Warren Buffett), so my money doesn't make a big difference

Even if nuclear deproliferation is overfunded on aggregate, there may be particular organizations which are especially effective and in need of room for more funding (the philanthropic world isn't very efficient). I agree that a priori it looks as though SIAI has a stronger case for room for more funding than organizations working against nuclear war, but I also think that the matter warrants further investigation.

The problem has existed for almost 70 years. Folks like Einstein and Russell, whom I greatly admire, have thought about these problems for years and, well, to be frank, I don't know if their efforts actually decreased or increased the risks! Maybe strategies like MAD are better than the ones proposed by Einstein and Russell. So why should I have any confidence in my strategies?

I agree that uncertainty as to which strategies work drives the expected value down, but not to zero.

Whereas with regards to AI x-risks, SIAI and in particular Yudkowsky seem to be way more competent than the other folks. (Excluding Bostrom, Hanson, Omohundro and probably others that I don't know, but the ones I find competent usually work for or with SIAI.)

I agree that the best people thinking about AI x-risk are at SIAI. This doesn't imply that their efforts are strong enough for them to make a meaningful dent in the problem (nature doesn't grade on a curve, etc.).

Are nuclear wars really existential risks? I think they are only Global Catastrophic Risks, i.e. they won't lead to human extinction.

I'm presently inclined to agree that the immediate effect of nuclear war is unlikely to be extinction (although I've heard smart people express views to the contrary). But plausibly nuclear war would leave humanity in a much worse position to address other x-risks (e.g. political & economic instability seem more likely to be conducive to unfriendly AI than political & economic stability). Furthermore, even if nuclear war doesn't cause human extinction it could still cause astronomical waste on account of crippling civilization to the point that it couldn't yield an intelligence explosion.

You can apply these arguments, mutatis mutandis, to global pandemics, biotechnology, supervolcanoes, asteroid strikes, global warming and, to a lesser degree, nanotechnology.

Some of your arguments apply to some of the risks but not all of the arguments apply to all of the risks. In particular, none of the arguments seem to apply to asteroid strike risk.

And, the final knock-down argument, IMHO: If you solve the FAI-problem you solve all the above listed problems, at a single blow!!!

This is definitely a point in favor of focusing on FAI, but it's not clear to me that it's strong enough.

But, I could be wrong! No, hopefully I'm wrong! Because the likely conclusion of my reasoning is the following: Work in finance and earn money in order to donate to SIAI! That sucks soooo much, to put it charitably.

(a) The existence of any x-risk / catastrophic risk charity with room for more funding suggests that donating money is highly cost-effective.

(b) Donating money is not the only way to reduce x-risk. One can work against one of the risks oneself (e.g. work for SIAI as a volunteer, or work for a government agency working on one of the relevant x-risks). One can also try to influence the donations of others.

(c) Regarding your discomfort with the lifestyle that your reasoning seems to lead you to, see paragraphs 2, 3, and 4 of Carl Shulman's comment here.

Replies from: XiXiDu, orthonormal, wallowinmaya
comment by XiXiDu · 2011-07-08T10:24:21.727Z · LW(p) · GW(p)

I agree that the talks by SIAI staff at the Singularity Summit which address AI risk reduce x-risk, but it's not clear to me that the Singularity Summit is positive on balance.

Personally I gave up trying to take such considerations into account. Otherwise I would have to weigh the positive and negative effects of comments similar to yours according to the influence they might have on existential risks. This quickly leads to chaos-theoretic considerations like the butterfly effect, which in turn lead to scenarios resembling Pascal's Mugging, where tiny probabilities are outweighed by vast utilities. As a computationally bounded and psychologically unstable agent, I am unable to cope with that. Consequently I decided to neglect small-probability events.

comment by orthonormal · 2011-07-13T16:54:07.969Z · LW(p) · GW(p)

Googling suggests your screen name has something to do with Dante or T.S. Eliot ?

Yes, it's from "The Hollow Men"

Whoa- I've been parsing it as a chemical name all along (and subconsciously suppressing the second i). Eliot's one of my favorites, but I never made the connection.

comment by David Althaus (wallowinmaya) · 2011-07-08T10:06:49.753Z · LW(p) · GW(p)

Good points, thx for the link to Carl Shulman's comment, I love his reasoning.

Just for the record: The reason why I don't like the conclusion of working in finance to earn money in order to donate is that I guess I can't do it. I simply hate finance too much and I know I'm too selfish. Just wearing a suit is probably more than I could bear ;) I will respond to the rest of your comment in private.

Replies from: SimonF
comment by Simon Fischer (SimonF) · 2011-07-08T13:57:18.235Z · LW(p) · GW(p)

Please consider posting your reply here, I would be interested in reading it!

Replies from: wallowinmaya
comment by David Althaus (wallowinmaya) · 2011-07-09T21:23:02.552Z · LW(p) · GW(p)

I wrote you a PM.

comment by scientism · 2011-07-03T16:06:27.891Z · LW(p) · GW(p)

I think some people expect too much too soon. Here's what I think it's reasonable to expect, short-term: (1) improved problem solving skills; (2) a clearer idea of what it will take to achieve your goals; and (3) worthwhile interaction with a community of peers. A lot of problems are hard. Psychology and sociology are difficult, unsolved subjects. I don't expect rationalists to become wealthy, highly accomplished and socially successful short-term, because systematically achieving those goals would require a high level of knowledge about how the social world operates. I would expect them to have a better idea of how much work would be involved in achieving those goals and to be able to make progress on more modest goals.

comment by [deleted] · 2011-07-08T12:04:08.440Z · LW(p) · GW(p)

What bugs me about your perception of this community is that you seem to conflate goals with beliefs. What I see on LessWrong is the idea that artificial general intelligence, if done properly, would be a powerful invention that could solve the most important problems of humanity and that therefore we should pursue the goal of inventing and building it.

What you seem to see is the idea that because we thought of a way in which the future could be awesome, therefore it will be awesome and we can feel good about it, just like other people feel good about religion. I just don't see it. Maybe it's because I used to have that kind of vague feel-good transhumanist beliefs and then I stumbled upon Eliezer's writings and got convinced that no, I have no reason to relax and believe that powerful, abstract forces of technological progress will make everything work out in the end. So it's surprising to me that anyone could end up with that kind of overly enthusiastic beliefs because of LessWrong.

This discrepancy of perception extends to your depiction of community-building efforts. Once again there's the goal of doing everything better, having the most fun and being awesome, and then there's the belief that we are already there and can feel good about ourselves. But here I'm far less willing to trust my perceptions. I don't really interact with the community beyond reading the website, and I tend to ignore things that don't appeal to me, so I might have filtered out this unfortunate aspect of the local memesphere.

comment by pjeby · 2011-07-06T15:51:18.377Z · LW(p) · GW(p)

The relative dearth of sustainable yet immediate behavioral payoffs coming out of the box leads me to suspect that the people who go into the box go there not so much to learn about superior behaviors, but to learn about superior beliefs.

Bingo.

Excellent analysis throughout, btw, but that bit hits it right on the head.

(In fairness, though, I think it should be pointed out that there's plenty of other good advice to be found on LW. It's only natural to expect that the most popular memes would be ones that have more going for them than mere truth or usefulness.)

Replies from: Will_Sawin
comment by Will_Sawin · 2011-07-07T14:51:12.334Z · LW(p) · GW(p)

It has always seemed like your ideas on how to learn superior behaviors are a pretty significant part of the LW memecluster.

comment by Eugine_Nier · 2011-07-08T06:40:20.888Z · LW(p) · GW(p)

Thanks for the post. Now I can pat myself on the back for reading and upvoting a post critical of my beliefs and then go back to doing what I was doing before. ;)

Replies from: gwern
comment by gwern · 2011-07-11T21:39:09.757Z · LW(p) · GW(p)

You're such a hipster. That's pretty lame; I'm a meta-contrarian, so I didn't upvote just because it was critical of my beliefs.

comment by FiftyTwo · 2022-01-25T23:41:47.077Z · LW(p) · GW(p)

I'd be curious what you think now, after many years of seeing the effects of these things in practice.

Replies from: Mass_Driver
comment by Mass_Driver · 2022-06-06T02:06:36.526Z · LW(p) · GW(p)

I think we're doing a little better than I predicted. Rationalists seem to be somewhat better able than their peers to sift through controversial public health advice, to switch careers (or retire early) when that makes sense, to donate strategically, and to set up physical environments that meet their needs (homes, offices, etc.) even when those environments are a bit unusual. Enough rationalists got into cryptocurrency early enough and heavy enough for that to feel more like successful foresight than a lucky bet. We're doing something at least partly right.

That said, if we really did have a craft of reliably identifying and executing better decisions, and if even a hundred people had been practicing that craft for a decade, I would expect to see a lot more obvious results than the ones I actually see. I don't see a strong correlation between the people who spend the most time and energy engaging with the ideas you see on Less Wrong, and the people who are wealthy, or who are professionally successful, or who have happy families, or who are making great art, or who are doing great things for society (with the possible exception of AI safety, and it's very difficult to measure whether working on AI safety is actually doing any real good).

If anything, I think the correlation might point the other way -- people who are distressed or unsuccessful at life's ordinary occupations are more likely to immerse themselves in rationalist ideas as an alternate source of meaning and status. There is something actually worth learning here, and there are actually good people here; it's not like I would want to warn anybody away. If you're interested in rationality, I think you should learn about it and talk about it and try to practice it. However, I also think some of us are still exaggerating the likely benefits of doing so. Less Wrong isn't objectively the best community; it's just one of many good communities, and it might be well-suited to your needs and quirks in particular.

comment by Nisan · 2011-07-04T22:57:04.563Z · LW(p) · GW(p)

Does Less Wrong really recommend withdrawing from religious groups? I don't see that recommendation in any of the four links you give as support. Less Wrong will tell you that religions' supernatural claims are false wherever they're meaningful, and that a lot of religious beliefs are harmful as well as false. And it will tell you to be an atheist[1]. It's understandable that most of us who realized at some point that God isn't real decided to stop going to church. But some of us are involved with religious groups and it doesn't seem to be problematic.

[1] although you can get away with ignosticism or "I assign low credence to any recognizably theistic hypothesis, but I don't like the word 'atheism'" or "blah blah Tegmark".

comment by atucker · 2011-07-03T04:13:10.262Z · LW(p) · GW(p)

On the outside view, this rationality community is very young, and most young organizations lack sophistication, easily repeatable methods, and proof of whatever they claim. Changing yourself takes lots of time (on the order of years), if it can be done at all (it can be, but it's not particularly easy).

On the outside view, any organization which dissolves for lack of proof of its methods either takes a very, very long time to arise and persist, or never gets off the ground.

I really think that the issue is more one of time and organization, and I'm not super surprised that Less Wrong isn't obviously able to deliver what it wants to over the internet.

Replies from: Mass_Driver
comment by Mass_Driver · 2011-07-03T06:11:48.755Z · LW(p) · GW(p)

Yes, of course. My argument isn't that Less Wrong should dissolve itself. If your inside view suggests that you have a good shot at a massive success in the medium-to-long term, and your outside view puts you in a reference class that includes some potential for big successes but many more failures, the thing to do is pursue the opportunity, but to do so humbly and carefully. Emphasize low-hanging fruit. Hedge. Warn. Some of us do those things, but others quit their jobs to work full-time for SIAI and then urge their friends to do the same. I'll try to make this clearer in the next version.

Replies from: None, atucker
comment by [deleted] · 2011-07-03T15:22:09.635Z · LW(p) · GW(p)

Some of us do those things, but others quit their jobs to work full-time for SIAI and then urge their friends to do the same.

This behaviour of SIAI employees is extremely surprising. Please provide details.

Replies from: Rain
comment by Rain · 2011-07-05T19:21:20.822Z · LW(p) · GW(p)

The US Peace Corps prompted over 200,000 people to do something I consider even more extreme by committing to multiple years of service in foreign countries for very modest goals. I'd be surprised if something like Existential Risk didn't provoke such a reaction.

comment by atucker · 2011-07-03T06:18:12.067Z · LW(p) · GW(p)

Ah.

Yeah, looks like we pretty much agree about what to do then.

I'm considering trying to work for SIAI in the future, but I also don't have an established job or lifestyle, so the costs of doing so are fairly low for me. To the extent that my pre-LW heuristic for deciding what I'd like to do with my life was mostly based on what I'd find interesting, that's not too huge of a disruption for me.

comment by jsalvatier · 2011-07-03T02:59:24.091Z · LW(p) · GW(p)

Did you get the same bristling in live communication (minicamp/meetups)?

Replies from: Mass_Driver
comment by Mass_Driver · 2011-07-03T06:14:37.355Z · LW(p) · GW(p)

Minicamp, no, because it was so skills-focused. There was a real sense that we could apply the skills to any goals that seemed interesting to us. Meetups, yes. Many of the meetups I've been to have involved praise competitions, i.e., let's see who can all suck up to the SIAI more intelligently.

Replies from: orthonormal
comment by orthonormal · 2011-07-13T16:41:37.737Z · LW(p) · GW(p)

Many of the meetups I've been to have involved praise competitions, i.e., let's see who can all suck up to the SIAI more intelligently.

Really? Huh. I've now gone to ~10 meetups in 5 geographic areas, and the only one that might have fit that description (although I don't think it did) was the very first Overcoming Bias meetup in the Bay Area.

It would probably be rude for me to ask which meetups felt that way to you.

comment by lucidfox · 2011-07-03T09:32:28.572Z · LW(p) · GW(p)

As far as I can tell, the most prominent themes in terms of short-term behavioral advice being given on Less Wrong are:

1) Sign up for cryonics,

2) Donate to SIAI,

3) Drop out of any religious groups you might belong to, and

4) Take chemical stimulants.

If that's the case, then I find it worrying - and seeing how unacceptable I personally find points 1, 2 and 4, it may just make me rethink my presence here, and whether I'm trying to fit in with the wrong crowd.

Replies from: XiXiDu
comment by XiXiDu · 2011-07-03T15:32:47.386Z · LW(p) · GW(p)

how unacceptable to myself personally I find points 1, 2 and 4

Could you elaborate on what you find unacceptable about 1 and 4? I personally never bothered about cryonics, but I don't see any fundamental problem with it. Same with chemical stimulants: if someone wants to take the risk, fine; I don't.

Replies from: lucidfox
comment by lucidfox · 2011-07-03T16:17:27.725Z · LW(p) · GW(p)

Well, that's what I said. I don't have a problem with people signing up for cryonics or using stimulants (as long as the latter doesn't deteriorate their minds); it's just that I personally don't have faith in cryonics and refuse to touch anything mind-altering. (I'm sometimes mocked for the latter, which irritates me.)

So I have a problem with it if it's something the majority of LW users is assumed or expected to do. It places me in a minority here.

Replies from: JoshuaZ, Document
comment by JoshuaZ · 2011-07-04T01:07:22.840Z · LW(p) · GW(p)

I'm curious what you mean by mind-altering in this context. While there are actual drugs discussed, most of the substances discussed are both legal and exist in the human diet to start with. It seems pretty clear that many different foods impact cognition. For example, people with higher blood sugar are more trusting.{Citation needed} Most of the substances discussed in the cognitive enhancement threads are substances which occur naturally in most human diets anyway. A society that is not getting enough vitamin B12 will have people feeling a lot more fatigued. Similarly, B6 deficiency is linked to insomnia, irritability, and other negative emotional and cognitive traits. In societies not getting enough of such vitamins, eating more of those foods might be considered taking cognitively enhancing foods. So how one defines these terms seems important. (In that regard, the vast majority of discussion of cognitive enhancement here seems to be very distinct from what would normally be called mind-altering, e.g. marijuana and LSD, which have short-term extreme effects.)

(Disclaimer: I have not experimented with any of the supplements discussed here, primarily due to heuristics similar to those described by Mass Driver in his initial post. I do however think that advocating the use of either drugs or other cognitive modifiers is less common here than Mass Driver describes.)

comment by Document · 2011-07-08T23:36:46.298Z · LW(p) · GW(p)

I'm sometimes mocked for the latter, which irritates me.

That's slightly surprising.

comment by lsparrish · 2011-07-07T17:59:42.898Z · LW(p) · GW(p)

Yvain suggests that something about the rapid spread of positive affect not obviously tied to any concrete accomplishments may be stimulating a sort of anti-viral memetic defense system.

I think there is merit in this suggestion, or at least in something along the lines of "there's a memetic immune reaction going on (one that is instrumentally rational in at least some circumstances)". I've seen a fellow cryonics advocate (who I gather has a substantial amount of business experience) advancing the opinion that Eliezer and SIAI are phony. He's concerned that the whole thing will collapse and make cryonics look bad by association. It's surprising to me because both give me the impression of strong internal consistency and accountability.

There must be some kind of heuristic at work, which I haven't developed (either due to personality or experience), that gives some people the feeling that something is fishy. It occurs to me that such heuristics could be useful even with high false positive rates, given that the costs of doing business with a phony usually (in certain business domains at least) outweigh the costs of failing to do business with someone genuine.

comment by jschulter · 2011-07-06T06:04:38.196Z · LW(p) · GW(p)

It would be really convenient if rationality, the meme-cluster that we most enjoy and are best-equipped to participate in, also happened to be the best for winning at life.

As I've seen it used here, "rationality" most commonly refers to "the best [meme-cluster] for winning at life", whatever that actual meme-cluster may be. If it could be shown that believing in the Christian god uniformly improved or did not affect every aspect of believers' lives, regardless of any other beliefs held, I think a majority of Less Wrongers would make every effort necessary to actually believe in a Christian god. The problem seems to be how rationality and "the meme-cluster that we most enjoy and are best-equipped to participate in" are equated: these two are currently very similar meme-clusters for the current Less Wrong demographic, but they are not necessarily so. "It would be really convenient if the meme-cluster that we most enjoy and are best-equipped to participate in also happened to be the best for winning at life, rationality." makes more sense.

comment by Stuart_Armstrong · 2011-07-07T16:21:20.998Z · LW(p) · GW(p)

Unlike pre-scientific religion, the "cryonics + Friendly AI" Sysop story is 'cheap' for people who rarely compartmentalize. [...] It makes you happy!

AI makes me very, very afraid, and sad.

Replies from: loup-vaillant
comment by loup-vaillant · 2011-07-08T12:19:55.347Z · LW(p) · GW(p)

I understand "afraid" (let be an Unfriendly AI go Foom, and poof, we're all dead, or worse). But I don't get "sad". Could you elaborate a bit?

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2011-07-09T21:55:31.797Z · LW(p) · GW(p)

Well, to use dramatic language, if you thought that everything that was good and fine about humanity - about existence - had a reasonable chance of getting erased forever, wouldn't that make you sad?

Replies from: loup-vaillant
comment by loup-vaillant · 2011-07-09T22:32:33.201Z · LW(p) · GW(p)

Not yet. I'm just afraid for the moment. And certainly not because of AI. AI is an existential risk, but it also has potentially great benefits. Among them is the reduction of other existential risks. So I'm not sure AI is a net increase in existential risk. Yet.

Of course, whatever is a net increase in existential risk might make me sad. But even then, I tend to be sad when I've lost something, not when I think there's still a chance.

Replies from: MrCheeze
comment by MrCheeze · 2011-07-12T14:49:24.699Z · LW(p) · GW(p)

It makes me sad because it means smart people aren't doing things that are actually useful.

comment by lessdazed · 2011-07-07T13:27:29.596Z · LW(p) · GW(p)

As for dropping out of other religious communities, well, they're the quintessential bad guys, right? Not only do they believe in all kinds of unsubstantiated woo, they suck you into a dense network of personal relationships -- which we at Less Wrong want earnestly to re-create, just, you know, without any of the religion stuff.

Churches have art. I like art.

There is a meme on Less Wrong, though, that rationalist communities are not just better-suited to the unique needs of rationalists, but also better in general...back off of your pleasurable belief that rationality is better than other belief systems.

I think "better in general" is a stand in for a lot of specific things that can be better or worse. I shouldn't expect one action to help according to every metric.

...if your answer to the challenges of life is to self-medicate, you're taking on a whole lot more risk than the present maturity of the discipline of rationality would seem to warrant.

Rationality, the gateway drug.

...quitting religion offer excellent rewards now, but may involve heavy costs down the road.

Words fail me.

Replies from: TomM
comment by TomM · 2011-07-08T04:08:11.532Z · LW(p) · GW(p)

I think the "heavy costs" of quitting religion referred to are the documented differences in health outcomes based on membership (or lack thereof) of religious communities:

"Even if we have good reason to assert that mainstream religious thinking is flawed, maybe we should be slower to advise people to give up the health benefits (footnote 15) of belonging, emotionally, to one or another religious community."

comment by timtyler · 2011-07-07T09:06:20.978Z · LW(p) · GW(p)

By cheaply, I mean that the beliefs won't really hurt you...it's relatively safe to believe in them.

They don't seem too "cheap" to me. We are potentially talking about many thousands of dollars.

comment by [deleted] · 2011-07-03T02:45:32.394Z · LW(p) · GW(p)

whereas drugs and quitting religion offer excellent rewards now, but may involve heavy costs down the road.

What long-term costs would quitting religion have?

ETA: The answer is presumably in the post:

maybe we should be slower to advise people to give up the health benefits (footnote 15) of belonging, emotionally, to one or another religious community.

Replies from: Normal_Anomaly, Eugine_Nier
comment by Normal_Anomaly · 2011-07-03T02:50:59.893Z · LW(p) · GW(p)

I think the post was referring to the loss of friends and status that can sometimes result. The benefits may sometimes outweigh the costs, if one is faced with the choice between staying in the closet and living a lie, on the one hand, and leaving the church and losing friends, on the other. I have no direct experience with this, but many atheists unaffiliated with Less Wrong do come out and don't regret it, so I don't think this counts as a "weird belief" that LW pushes on its members. At any rate, it's not all that much weirder than atheism itself.

comment by Eugine_Nier · 2011-07-08T06:05:29.889Z · LW(p) · GW(p)

What long-term costs would quitting religion have?

Another example is losing access to useful intersubjective truths that religions have accumulated over the centuries.

Replies from: Peterdjones, AlexM
comment by Peterdjones · 2011-07-08T17:35:46.990Z · LW(p) · GW(p)

If we are to continue having access to the useful intersubjective truths stemming from the Enlightenment, I don't think we can wholly buy into the rival intersubjective truths of religion.

comment by AlexM · 2011-07-08T16:57:15.833Z · LW(p) · GW(p)

Burkean conservatism translated into modern philosophical jargon. This argument would apply only to religions that are at least centuries old. How many of these remain in unchanged form in the modern Western world?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2011-07-12T14:52:31.530Z · LW(p) · GW(p)

Wouldn't Burkean conservatism recommend being a member of any reasonably stable religion, perhaps preferably the one you were born into?

comment by Nisan · 2011-07-04T22:08:02.555Z · LW(p) · GW(p)

I think your "Partisanship" section is your strongest point. The question of whether our rationality shindig actually helps people is a good question. Similar points have been made before.

comment by Jonathan_Graehl · 2011-07-04T00:26:39.498Z · LW(p) · GW(p)

Stars twinkle because of the atmosphere's slightly fluctuating refractive properties (compare to mirages). I'm sure you can notice dim stars disappearing when you look straight at them, but I'm going to keep the atmosphere story for now - even though the only way I've tested it is to compare with planets (whose images are disclike rather than pointlike).

See any number of Google hits on "why stars twinkle", e.g. http://astroprofspage.com/archives/1168

Replies from: rmmh
comment by rmmh · 2011-07-04T15:00:36.909Z · LW(p) · GW(p)

I take it you haven't spent much time stargazing?

The "foveal blind spot" is how the fovea has a very high density of cones, which gives great acuity with color vision, but unfortunately almost no rods, so very poor performance under low-light conditions.

To view faint stars, you look slightly off to the side while still concentrating on the object. This is called averted vision.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2011-07-04T21:47:48.254Z · LW(p) · GW(p)

I've stargazed, though now only when I go camping (boo Los Angeles). You're right - I'd forgotten about the experience of averted vision.

It's funny that I said "with one eye" - I was assuming that the (optic nerve) blind spot was off-center in each eye. Obviously parallax hardly applies to stars, so relative night blindness to stars in the center few degrees of your vision wouldn't depend on having only one eye open.

Replies from: saturn
comment by saturn · 2011-07-04T22:10:53.128Z · LW(p) · GW(p)

The optic nerve blind spot is off-center, toward the side away from your nose in your field of vision, and it's a total blind spot. The fovea is in the center of your field of vision and isn't really a blind spot; it's just specialized for higher resolution at the expense of sensitivity, so it becomes like a second blind spot in dim conditions.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2011-07-05T06:36:35.663Z · LW(p) · GW(p)

toward the side away from your nose in your field of vision

So the optic nerve blind spot wouldn't really be noticeable when stargazing with both eyes. That's what I was expecting or vaguely recalling.

comment by Dorikka · 2011-07-03T05:44:38.759Z · LW(p) · GW(p)

Note: I'm typing this without looking at other comments because I judge that it would be really easy for one of the better-sounding arguments to hijack my train of thought, leaving my previous thoughts to be crushed under the wheels of the huge locomotive.

I'm going to do my own 'black box' treatment of rationality, trying to figure out what I've actually got from it in list form.

  1. I have a model of the world which allows me to suffer less emotional harm when other people are upset that I (to paraphrase) don't share their preference and/or utility function.

Truth Finding?: I note that this benefit is actually independent of whether rationality is an effective method of finding truth.

  1. I have something to protect, and emotionally benefit from this.

Truth Finding?: See 'Truth Finding?' notes above.

  1. I found out about using Spaced Repetition.

Truth Finding?: I haven't used it long enough to experience significant benefit from it yet. I also note that I started using Anki mostly because a bunch of people that I considered smart said it worked and recommended it.

Interesting. I'm going to hold off on proposing solutions, think a bit more, and look at other comments.

EDIT: Odd. I listed my items as 1, 2, and 3, but they all show as '1'

Replies from: luminosity, loup-vaillant
comment by luminosity · 2011-07-03T23:51:00.734Z · LW(p) · GW(p)

They're HTML lists, and you have paragraphs between each, meaning each becomes its own new list which naturally starts at 1.
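
A minimal sketch of the fix (an assumption on my part about the comment Markdown here, which seems to follow the usual rule that a paragraph must be indented, typically by four spaces, to stay attached to the list item above it):

    1. First item.

        Continuation paragraph, indented so it stays inside item 1.

    2. Second item, which now renders as 2 instead of restarting at 1.

Left unindented, that middle paragraph ends the first list, and the next numbered item starts a new list at 1, which is exactly the behaviour you saw.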

comment by loup-vaillant · 2011-07-08T13:27:35.242Z · LW(p) · GW(p)

Mind the amount of indenting.

  1. One space before the number. One space after. Total amount of indenting: 4.

    Paragraph indented by 4 spaces. In a fixed-width font, it would be aligned with the previous paragraph of the same list item.

  2. Same indentation as 1.

Looks like it works.

comment by knb · 2011-07-09T19:39:36.147Z · LW(p) · GW(p)

This part is totally unfair:

1) Sign up for cryonics,

2) Donate to SIAI,

3) Drop out of any religious groups you might belong to, and

4) Take chemical stimulants.

comment by Armok_GoB · 2011-07-03T13:09:08.008Z · LW(p) · GW(p)

Guess I don't have to worry then.

Everything here has been the opposite of cheap. I don't even have a memory of what it's like to have a goal other than sacrificing everything to make microscopic changes in the probabilities of distant abstract outcomes. I don't even remember what goals this brain used to have before being rewritten.

I disallow myself from thinking that kind of pleasurable thought for exactly that reason, instead thinking of things I don't believe could happen if I feel tempted.

Never had any religious group to drop out of. Don't have any IRL LW group to join, although I'm looking for one. If I took your argument more seriously, my response might have been to give that up.

Cryonics isn't available in my area.

I don't take stimulants and am a bit shocked by that being advised. I write this willingness/unwillingness to medicate everything off as cultural differences between here and the USA.

Those who actually FIT the description you give? Yeah, they should take a look at why they believe what they do; it's likely they believe the right thing but for the wrong reasons.

comment by endoself · 2011-07-03T04:13:34.601Z · LW(p) · GW(p)

I'm an interesting data point in the context of this article. I accept the LW-mainstream cryonics analysis, but I am not signed up and I do not intend to do so. I also do not plan for events after the singularity in order to prevent excessive optimism from causing bias, though I started this because it is very difficult to form such plans and only later noticed this additional benefit.

Replies from: MatthewBaker, Armok_GoB
comment by MatthewBaker · 2011-07-05T21:35:30.544Z · LW(p) · GW(p)

What does redacted mean? I mean in this instance, not in general terms.

Replies from: endoself
comment by endoself · 2011-07-05T22:33:37.497Z · LW(p) · GW(p)

The ability to delete comments was removed (though it is back now). Replacing the text with "redacted" was the only way to remove a comment's content.

Replies from: timtyler
comment by timtyler · 2011-07-07T09:07:57.529Z · LW(p) · GW(p)

Note that you can now delete comments with no children.

comment by Armok_GoB · 2011-07-03T13:23:27.120Z · LW(p) · GW(p)

redacted

Replies from: jimrandomh
comment by jimrandomh · 2011-07-03T14:13:10.276Z · LW(p) · GW(p)

redacted

Replies from: endoself, Armok_GoB
comment by endoself · 2011-07-03T16:48:58.928Z · LW(p) · GW(p)

redacted

comment by Armok_GoB · 2011-07-03T15:36:28.168Z · LW(p) · GW(p)

redacted

comment by btipling · 2011-07-07T15:52:16.403Z · LW(p) · GW(p)

My perception of this advice is that it is general, and that it is the individual's responsibility to determine whether, in the context of their own lives, it will have more benefit than cost.

comment by nazgulnarsil · 2011-07-03T08:50:01.504Z · LW(p) · GW(p)

A stegosaurus that went around talking about the risk of meteors probably would have been looked at sideways too.