What I've learned from Less Wrong

post by Louie · 2010-11-20T12:47:42.727Z · LW · GW · Legacy · 235 comments

Related to: Goals for which Less Wrong does (and doesn’t) help

I've been compiling a list of the top things I've learned from Less Wrong in the past few months. If you're new here or haven't been here since the beginning of this blog, perhaps my personal experience from reading the backlog of articles known as the sequences can introduce you to some of the more useful insights you might get from reading and using Less Wrong.

1. Things can be correct - Seriously, I forgot. For the past ten years or so, I politely agreed with the “deeply wise” convention that truth could never really be determined or that it might not really exist or that if it existed anywhere at all, it was only in the consensus of human opinion. I think I went this route because being sloppy here helped me “fit in” better with society. It’s much easier to be egalitarian and respect everyone when you can always say “Well, I suppose that might be right -- you never know!”

2. Beliefs are for controlling anticipation (Not for being interesting) - I think in the past, I looked to believe surprising, interesting things whenever I could get away with the results not mattering too much. Also, in a desire to be exceptional, I naïvely reasoned that believing similar things to other smart people would probably get me the same boring life outcomes that many of them seemed to be getting... so I mostly tried to have extra random beliefs in order to give myself a better shot at being the most amazingly successful and awesome person I could be.

3. Most people's beliefs aren’t worth considering - Since I’m no longer interested in collecting interesting “beliefs” to show off how fascinating I am or to give myself better odds of outdoing others, it no longer makes sense to be a meme-collecting universal egalitarian the same way I was before. This includes dropping the habit of seriously considering all others’ improper beliefs that don’t tell me what to anticipate and are only there for sounding interesting or smart.

4. Most of science is actually done by induction - Real scientists don’t get their hypotheses by sitting in bathtubs and screaming “Eureka!”. To come up with something worth testing, a scientist needs to do lots of sound induction first or borrow an idea from someone who already used induction. This is because induction is the only way to reliably find candidate hypotheses which deserve attention. Examples of bad ways to find hypotheses include finding something interesting or surprising to believe in and then pinning all your hopes on that thing turning out to be true.

5. I have free will - Not only is the free will problem solved, but it turns out it was easy. I have the kind of free will worth caring about and that’s actually comforting since I had been unconsciously ignoring this out of fear that the evidence appeared to be going against what I wanted to believe. Looking back, I think this was actually kind of depressing me and probably contributing to my attitude that having interesting rather than correct beliefs was fine since it looked like it might not matter what I did or believed anyway. Also, philosophers failing to uniformly mark this as “settled” and move on is not because this is a questionable result... they’re just in a world where most philosophers are still having trouble figuring out if god exists or not. So it’s not really easy to make progress on anything when there is more noise than signal in the “philosophical community”. Come to think of it, the AI community and most other scientific communities have this same problem... which is why I no longer read breaking science news -- it's almost all noise.

6. Probability / Uncertainty isn’t in objects or events - It’s only in minds. Sounds simple after you understand it, but I feel like this one insight often allows me to have longer trains of thought now without going completely wrong.
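
A quick worked example of this point -- my own illustrative sketch, not something from the post or the sequences: two observers can rationally assign different probabilities to the very same coin flip, because probability lives in their differing states of knowledge, not in the coin.

```python
from fractions import Fraction

# A coin is drawn from a bag in which half the coins are fair and
# half are two-headed. It will be flipped once.
p_fair = Fraction(1, 2)
p_heads_if_fair = Fraction(1, 2)
p_heads_if_trick = Fraction(1)

# Observer A only saw the bag. A's probability that the flip lands heads:
p_heads_A = p_fair * p_heads_if_fair + (1 - p_fair) * p_heads_if_trick
print(p_heads_A)  # 3/4

# Observer B examined the coin and knows it is fair:
print(p_heads_if_fair)  # 1/2

# The flip lands heads. A updates by Bayes' rule:
p_fair_given_heads = p_fair * p_heads_if_fair / p_heads_A
print(p_fair_given_heads)  # 1/3

# Same coin, same flip -- different numbers, because the uncertainty
# is in each observer's information, not in the object itself.
```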

7. Cryonics is reasonable - Due to reading and understanding the quantum physics sequence, I ended up contacting Rudi Hoffman for a life insurance quote to fund cryonics. It’s only a few hundred dollars a year for me. It’s well within my budget for caring about myself and others... such as my future selves in forward-branching multiverses.


There are countless other important things that I've learned but haven't documented yet. I find it pretty amazing what this site has taught me in only 8 months of sporadic reading. Although, to be fair, it didn't happen by accident or by reading the recent comments and promoted posts but almost exclusively by reading all the core sequences and then participating more after that.

And as a personal aside (possibly some others can relate): I still love-hate Less Wrong and find reading and participating on this blog to be one of the most frustrating and challenging things I do. And many of the people in this community rub me the wrong way. But in the final analysis, the astounding benefits gained make the annoying bits more than worth it.

So if you've been thinking about reading the sequences but haven't been making the time to do it, I second Anna’s suggestion that you get around to that. And the rationality exercise she linked to was easily the single most effective hour of personal growth I had this year, so I highly recommend that as well if you're game.

 

So, what have you learned from Less Wrong? I'm interested in hearing others' experiences too.

235 comments

Comments sorted by top scores.

comment by cousin_it · 2010-11-20T18:17:16.903Z · LW(p) · GW(p)

LW has helped me a lot. Not in matters of finding the truth; you can be a good researcher without reading LW, as the whole history of science shows. (More disturbingly, you can be a good researcher of QM stuff, read LW, disagree with Eliezer about MWI, have a good chance of being wrong, and not be crippled by that in the least! Huh? Wasn't it supposed to be all-important to have the right betting odds?) No; for me LW is mostly useful for noticing bullshit and cutting it away from my thoughts. When LW says someone's wrong, we may or may not be right; but when LW says someone's saying bullshit, we're probably right.

I believe that Eliezer has succeeded in creating, and communicating through the Sequences, a valuable technique for seeing through words to their meanings and trying to think correctly about those instead. When you do that, you inevitably notice how much of what you considered to be "meanings" is actually yay/boo reactions, or cached conclusions, or just fine mist that dissolves when you look at it closely. Normal folks think that the question about a tree falling in the forest is kinda useless; nerdy folks suppress their flinch reaction and get confused instead; extra nerdy folks know exactly why the question is useless. Normal folks don't let politics overtake their mind; concerned folks get into huge flamewars; but we know exactly why this is counterproductive. I liked reading Moldbug before LW. Now I find him... occasionally entertaining, I guess?

Better people than I are already turning this into a sort of martial art. Look at Yvain cutting down ten guys with one swoop, and then try to tell me LW isn't useful!

Replies from: Vladimir_M, Louie, wedrifid, XiXiDu
comment by Vladimir_M · 2010-11-21T09:21:03.045Z · LW(p) · GW(p)

cousin_it:

Normal folks don't let politics overtake their mind; concerned folks get into huge flamewars; but we know exactly why this is counterproductive.

Trouble is, the question still remains open: how do you understand politics well enough to be reasonably sure that you've grasped its implications for your personal life and destiny? Too often, LW participants seem to take it for granted that throughout the Western world, something resembling the modern U.S. regime will continue into the indefinite future, until a technological singularity kicks in. But this seems to me like a completely unwarranted assumption, and if it turns out to be false, then the ability to understand where the present political system is heading and plan for the consequences will be a highly valuable intellectual asset -- something that a self-proclaimed "rationalist" should definitely take into account.

Now, for full disclosure, there are many reasons why I could be biased about this. I lived through a time and place -- late 1980s and early 1990s in ex-Yugoslavia -- where most people were blissfully unaware of the storm that was just beyond the horizon, even though any cool-headed objective observer should have been able to foresee it. My own life was very negatively affected by my family's inability to understand the situation before all hell broke loose. This has perhaps made me so paranoid that I'm unable to understand why the present political situation in the Western world is guaranteed to be so stable that I can safely forget about it. Yet I have yet to see arguments for this conclusion that would pass the standards that LW people normally apply to other topics.

Replies from: MichaelVassar, None, cousin_it, CBHacking
comment by MichaelVassar · 2010-11-21T17:42:06.486Z · LW(p) · GW(p)

I agree with you on this, but honestly, it's a difficult enough topic that semi-specialists are needed. Trying to figure out, as a non-specialist, how stable your political system is, rather than finding a specialist you can trust, will get you about as far as it would in law, etc.

Replies from: wedrifid, NancyLebovitz
comment by wedrifid · 2010-11-30T01:32:03.907Z · LW(p) · GW(p)

Trickier than the 'how stable' question is that of what is likely to result from a failure. To the extent that such knowledge is missing, the problem of what to do about it takes on faint hints of Pascal's Mugging.

comment by NancyLebovitz · 2010-11-30T00:22:44.335Z · LW(p) · GW(p)

That sounds plausible, but should probably have a time frame added.

comment by [deleted] · 2010-11-21T12:37:18.744Z · LW(p) · GW(p)

Now, for full disclosure, there are many reasons why I could be biased about this.

With emphasis on "could be" as opposed to "am". Different past experiences leading to different conclusions isn't necessarily "bias". This is a bit of a pet peeve of mine. I often see the naive, the inexperienced, quite often the young, dismiss the views of the more experienced as "biased" or by some broad synonym.

The implicit reasoning seems to be as follows: "Here is the evidence. The evidence plus a uniform prior distribution leads to conclusion A. Yet this person sees the evidence and draws conclusion B different from A. Therefore he is letting his biases affect his judgment."

One problem with the reasoning is that "the evidence" is not the (only) evidence. There is, rather, "evidence I'm aware of" and "evidence I'm not aware of but the other person might be aware of". It's entirely possible for that other evidence to be decisive.

comment by cousin_it · 2010-11-21T22:11:05.325Z · LW(p) · GW(p)

Your comment is an instance of the "forcing fallacy" which really deserves a post of its own: claiming that we should spend resources on a problem because a lot of utility depends, or could depend, on the answer. There are many examples of this on LW, but to choose an uncontroversial one from elsewhere: why aren't more physicists working on teleportation? The general counter to the pattern is noting that problems may be difficult, and may or may not have viable attacks right now, so we may be better off ignoring them after all. I don't see a viable attack for applying LW-style rationality to political prediction, do you?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-11-21T22:35:42.357Z · LW(p) · GW(p)

The general counter to the pattern is noting that problems may be difficult, and may or may not have viable attacks right now, so we may be better off ignoring them after all.

This is valid where there are experts who can confidently estimate that there are no attacks. There are lots of expert physicists, so if steps towards teleportation were feasible, someone would've noticed. In cases where there are no experts to produce such confidence, the correct course of action is to create them (perhaps from more general experts, by way of giving a research focus).

The rule "If it's an important problem, and we haven't tried to understand it, we should" holds in any case, it's just that in case of teleportation, we already did try to understand what we presently can, as a side effect of widespread knowledge of physics.

comment by CBHacking · 2014-12-07T12:03:00.169Z · LW(p) · GW(p)

This is one of the reasons I actually rather like the politics in Heinlein's writing; while it occasionally sounds preachy, and I routinely disagree with the implicit statement that the proposed system has higher utility than current ones, it does expose some really interesting ideas. This has led me to wonder, on occasion, about other potential government systems and to attempt to determine their utility compared to what we have.

Of course, I'm not really a student of political science and therefore am ill-equipped for this purpose, and I estimate insufficient utility in undertaking the scholarship needed to correct this (mostly due to opportunity cost; I am active in a field where I can contribute significant utility today, and it's more efficient to update and expand my knowledge there than to branch into a completely different field in any depth). Nonetheless, inefficient though it may be, it's an open question that I find my mind wandering to on occasion.

The conclusion I've reached is that if the US government (as we currently recognize it) continues until the technological singularity, it will be because the singularity comes soon (it would require the singularity within ~50 years at a low-confidence estimate; at 150 years I'm 90% confident the US government either won't exist or won't be recognizable). There are too many problems with the system; it wasn't optimized for the modern world, to the extent it was optimized at all, and of course the "modern world" keeps advancing too. The US has tried to keep up (universal adult suffrage, several major changes to how political parties are organized (nobody today seriously proposes a split ticket), the increasing authority of the federal government over the states, etc.), but such change is reactive and takes time. It will always lag behind the bleeding edge, and if it gets too far behind, the then-current institution will either be overthrown or will lose its significance and become something like the 21st century's serious implementations of the feudal system (rare, somewhat different from how it was a few hundred years back, and nonetheless mostly irrelevant).

comment by Louie · 2010-11-21T00:17:36.497Z · LW(p) · GW(p)

(More disturbingly, you can be a good researcher of QM stuff, read LW, disagree with Eliezer about MWI, have a good chance of being wrong, and not be crippled by that in the least! Huh? Wasn't it supposed to be all-important to have the right betting odds?)

Saying that "Having incorrect views isn't that crippling, look at Scott Aaronson!" is a bit like saying "Having muscular dystrophy isn't that crippling, look at Stephen Hawking!" It's hard to learn much by generalizing from the most brilliant, hardest-working, most diplomatically humble man in the world with a particular disability. I know they're both still human, but it's much harder to measure how much incorrect views hurt the most brilliant minds. Who would you measure them against to show how much they're under-performing their potential?

Incidentally, knowing Scott Aaronson, and watching that Blogging Heads video in particular was how I found out about SIAI and Less Wrong in the first place.

Replies from: cousin_it
comment by cousin_it · 2010-11-21T05:39:53.563Z · LW(p) · GW(p)

How would Aaronson benefit from believing in MWI, over and above knowing that it's a valid interpretation?

Replies from: Louie
comment by Louie · 2010-11-21T13:08:13.534Z · LW(p) · GW(p)

Upvoted. This is definitely the right question to ask here... thanks for reminding me.

I hesitate to speculate on what gaps exist in Scott Aaronson's knowledge. His command of QM and complexity theory greatly exceeds mine.

[...]

OK hesitation over. I will now proceed to impertinently speculate on possible gaps in Scott Aaronson's knowledge and their implications!

Assuming he still believes that collapse-postulate theories of QM are as plausible as Many Worlds, I could say that he might not appreciate the complexity penalty that collapse theories require... except Scott Aaronson is the Head Zookeeper of the Complexity Zoo! So he knows complexity classes and the calculation of algorithmic complexity inside out. Perhaps this knowledge doesn't help him naturally calculate the informational complexity of the parts of scientific theories that are phrased in natural languages like English? I know my mind doesn't automatically do this, and it's not a habit that most people have. Another possibility is that it's not obvious to him that Occam's razor should apply this broadly. Either of these would point to limitations in more fundamental layers of his scientific thinking ability. They could leave him having trouble telling which new theories are worth investigating and which aren't... or make it more difficult for him to form compact representations of his own research findings. He would consequently discover less, more slowly, and describe what he discovers less well.

OK... wild speculation complete!

My actual take has always been that he probably understands things correctly in QM but is just exceedingly well-mannered and diplomatic with his academic colleagues. Even if he felt Many Worlds was now a more sound theory, he would probably avoid being a blowhard about it. He doesn't need to ruffle his buddies' feathers -- he has to work with these guys, go to conferences with them, and have his papers reviewed by them. Also, he may know it's pointless to get others to switch to a new interpretation if they don't see the fundamental reason why it's right to switch. And the arguments needed to convince others have inference chains too long to present in most venues.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2010-11-22T11:31:18.314Z · LW(p) · GW(p)

Scott Aaronson is the Head Zookeeper of the Complexity Zoo! So he knows about complexity classes and calculating complexity of algorithms inside out. Perhaps this knowledge doesn't help him naturally calculate the informational complexity of the parts of scientific theories that are phrased in natural languages like English?

Just to be clear: there are two unrelated notions of "complexity" blurred together in the above comment. The Complexity Zoo discusses computational complexity theory -- it discusses how the run-time of an algorithm scales with the algorithm's inputs (and thereby sorts problems into classes such as P, EXPTIME, etc.).

Kolmogorov Complexity is unrelated: it is the minimum number of bits (in some fixed universal programming language) required to represent a given algorithm. Eliezer's argument for MWI rests on Kolmogorov complexity and has nothing to do with computational complexity theory.
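
To make the distinction concrete, here is a minimal Python sketch (an added illustration, not part of the original comment; compressed size is only a crude, computable upper bound on Kolmogorov complexity, which is uncomputable in general):

```python
import os
import zlib

# Computational complexity: how run-time scales with input size.
# Linear search is O(n): it inspects each element at most once.
def linear_search(xs, target):
    for i, x in enumerate(xs):
        if x == target:
            return i
    return -1

# Kolmogorov complexity: the length of the shortest program that
# outputs a given string. Compressed size is a rough upper bound
# on that description length.
def description_length_bound(s: bytes) -> int:
    return len(zlib.compress(s))

regular = b"0" * 1_000_000          # highly regular: short description
random_ish = os.urandom(1_000_000)  # incompressible: long description

print(description_length_bound(regular))     # roughly 1,000 bytes
print(description_length_bound(random_ish))  # roughly 1,000,000 bytes
```

The two million-character strings have the same length, and searching either takes the same time; they differ enormously in description length. That difference in kind is why the two notions shouldn't be blurred together.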

I'm sure Scott Aaronson is familiar with both, of course; I just want to make sure LWers aren't confused about it.

Replies from: XiXiDu
comment by wedrifid · 2010-11-20T21:16:24.921Z · LW(p) · GW(p)

No; for me LW is mostly useful for noticing bullshit and cutting it away from my thoughts. When LW says someone's wrong, we may or may not be right; but when LW says someone's saying bullshit, we're probably right.

I couldn't agree more. The "extra nerdy folks know exactly why the question is useless" theme is similarly incisive.

comment by XiXiDu · 2010-11-20T19:49:32.695Z · LW(p) · GW(p)

I wonder if the main reason a post like Yvain's is upvoted is not that it is great but that everyone who reads it instantly agrees. Of course it is great in the sense that it sums up the issue in a very clear and concise manner. But has it really changed your mind? It seems natural to me to think that way; the post states what I always thought but was never able to express so clearly; that's why I like it. The problem is, how do we get people who disagree to read it? I've recently introduced a neuroscientist to Less Wrong via that post. He read it and agreed with everything. Then he said it's naive to think that this will be adopted any time soon. What he meant is that all this wit is useless if we don't get the right people to digest it. Not people like us, who agree anyway, probably before ever reading that post in the first place.

Regarding Eliezer's post, I even have my doubts that it is very useful for confused nerdy folks. The gist of that post seems to be that people should pinpoint their disagreements before they end up talking at cross-purposes. But it gives the impression that propositional assertions do not yield sensory experience. Yet human agents are physical systems, just as trees are. If you tell them certain things you can expect certain reactions. I believe that article might be inconsistent with other assertions made in this community, like taking the logical implications of general beliefs seriously. The belief that the decimal expansion of Pi is infinite will never pay rent in future anticipations.

I'm also skeptical about another point in the original post, namely that most people’s beliefs aren’t worth considering. This, I believe, might be counterproductive. Consider that most people express this attitude towards existential risks from artificial intelligence. So if you link people to that one post out of context, and then they hear about the SIAI, what might they conclude if they take that post seriously?

The point about truth is another problematic idea. I really enjoyed The Simple Truth, but in light of all else I've come across I'm not convinced that truth is a useful term to adopt anywhere but in the most informal discussions. If you are like me and grew up in a religious environment, you are told that there exists absolute truth. Then if you have your doubts and start to learn more, you are told that skepticism is an epistemological position, and ‘there is no truth’/‘there is truth’ are metaphysical/linguistic positions. When you learn even more and come across concepts like the uncertainty principle, Gödel's incompleteness theorems, the halting problem, or Tarski’s Truth Theorem, the nature of truth becomes even more uncertain. Digging even deeper won't revive the naive view of truth either. And that is just the tip of the iceberg, as you will see once you learn about Solomonoff induction and Minimum Message Length.

ETA: Fixed the formatting. My last paragraph was eaten before!

Replies from: Vladimir_Nesov, wedrifid
comment by Vladimir_Nesov · 2010-11-20T19:59:10.408Z · LW(p) · GW(p)

I wonder if the main reason a post like Yvain's is upvoted is not that it is great but that everyone who reads it instantly agrees. Of course it is great in the sense that it sums up the issue in a very clear and concise manner. But has it really changed your mind?

That's how great arguments work: you agree with every step (and after a while you start believing things you didn't originally). The progress is made by putting such arguments into words, to be followed by other people faster and more reliably than they were arrived at, even if arriving at them is in some contexts almost inevitable.

Additionally, clarity offered by a carefully thought-through exposition isn't something to expect without a targeted effort. This clarity can well serve as the enabling factor for making the next step.

Replies from: shokwave, patrissimo, None
comment by shokwave · 2010-11-21T09:59:06.849Z · LW(p) · GW(p)

That's how great arguments work: you agree with every step (and after a while you start believing things you didn't originally).

And to avoid people giving in to their motivated cognition, you present the steps in order, and the conclusion at the end. To paraphrase Yudkowsky's explanation of Bayes Theorem:

By this point, the conclusion may seem blatantly obvious or even tautological, rather than exciting and new. If so, this argument has entirely succeeded in its purpose.

This method of presenting great arguments is probably the most important thing I learned from philosophy, incidentally.

comment by patrissimo · 2010-12-15T05:03:44.535Z · LW(p) · GW(p)

"That's how great arguments work: you agree with every step (and after a while you start believing things you didn't originally)."

Also how great propaganda works.

If you are going to describe a "great argument" I think you need to put more emphasis on it being tied to the truth rather than being agreeable. I would say truly great arguments tend not to be agreeable, b/c the real world is so complex that descriptions without lots of nuance and caveats are pretty much always wrong. Whereas simplicity is highly appealing and has a low cognitive processing cost.

Replies from: shokwave
comment by shokwave · 2010-12-15T06:54:04.531Z · LW(p) · GW(p)

put more emphasis on it being tied to the truth rather than being agreeable.

Oh. I only agree with argument steps that are truthful.

comment by [deleted] · 2010-11-21T13:39:03.347Z · LW(p) · GW(p)

That's how great arguments work: you agree with every step (and after a while you start believing things you didn't originally).

There are nevertheless also conclusions that you agreed with all along. Sometimes hindsight bias makes you think you agreed all along when you really didn't. But other times you genuinely agreed all along.

You can skip to the end of Yvain's post (the one referenced here) and read the summary - assuming you haven't read the post already. Specifically, this statement: "We should blame and stigmatize people for conditions where blame and stigma are the most useful methods for curing or preventing the condition, and we should allow patients to seek treatment whenever it is available and effective." If you agree with this statement without first reading Yvain's argument for it, then that's evidence that you already agreed with Yvain's conclusions without needing to be led gradually step by step through his long argument.

comment by wedrifid · 2010-11-20T21:18:54.718Z · LW(p) · GW(p)

It seems natural to me to think that way; the post states what I always thought but was never able to express so clearly; that's why I like it

The best essays will usually leave you with that impression. As will the best teachers.

Replies from: David_Gerard
comment by David_Gerard · 2010-11-20T22:08:06.057Z · LW(p) · GW(p)

Be careful. So will the less-than-best essays and teachers. It's a form of hindsight bias: you think this thing is obvious, but your thoughts were actually quite inchoate before that. A meme - particularly a parasitic meme - can get itself a privileged position in your head by feeding your biases to make itself look good, e.g. your hindsight bias.

When you see a new idea and you feel your eyes light up, that’s the time to put it in a sandbox - yes, thinking a meme is brilliant is a bias to be cautious of. You need to know how to take the thing that gave you that "click!" feeling and evaluate it thoroughly and mercilessly.

(I'm working on a post or two on the subject area of dangerous memes and what to do about them.)

Replies from: wedrifid, bbleeker, Vladimir_Nesov
comment by wedrifid · 2010-11-20T22:57:38.294Z · LW(p) · GW(p)

Be careful. So will the less-than-best essays and teachers.

Less often. Learning bullshit is more likely to come with the impression that you are gaining sophistication. If something is so banal as to be straightforward and reasonable, you gain little status by knowing it.

Yes, people have biases and believe silly things but things seeming obvious is not a bad sign at all. I say evaluate mercilessly those things that feel deep and leave you feeling smug that you 'get it'. 'Clicking' is no guarantee of sanity but it is better than learning without clicking.

Replies from: David_Gerard
comment by David_Gerard · 2010-11-20T23:46:23.135Z · LW(p) · GW(p)

Yes, I suspect I'm being over-cautious, having been thinking about memetic toxic waste quite a lot of late. This suggests that when I'm describing the scary stuff in detail, I'll have to take care not to actually scare people out of both neophilia and decompartmentalisation.

That said, I recall the time I was out trolling the Scientologists and watched someone's face light up that way as she was being sold a copy of Dianetics and a communication course. She certainly seemed to be getting that feeling. Predatory memes - they're rare, but they exist.

Replies from: wedrifid
comment by wedrifid · 2010-11-21T01:32:19.403Z · LW(p) · GW(p)

That said, I recall the time I was out trolling the Scientologists and watched someone's face light up that way as she was being sold a copy of Dianetics and a communication course. She certainly seemed to be getting that feeling. Predatory memes - they're rare, but they exist.

Scary indeed. I suspect what we are each 'vulnerable' to will vary quite a lot from person to person.

Replies from: David_Gerard
comment by David_Gerard · 2010-11-21T01:48:38.948Z · LW(p) · GW(p)

Yes. I do think that a particularly dangerous attitude to memetic infections on the Scientology level is an incredulous "how could they be that stupid?" Because, of course, it contains an implicit "I could never be that stupid" and "poor victim, I am of course far more rational". This just means your mind - in the context of being a general-purpose operating system that runs memes - does not have that particular vulnerability.

I suspect you will have a different vulnerability. It is not possible to completely analyse the safety of an arbitrary incoming meme before running it as root; and there isn't any such thing as a perfect sandbox to test it in. Even for a theoretically immaculate perfectly spherical rationalist of uniform density, this may be equivalent to the halting problem.

My message is: it can happen to you, and thinking it can't is more dangerous than nothing. Here are some defences against the dark arts.

[That's the thing I'm working on. Thankfully, the commonest delusion seems to be "it can't happen to me", so merely scaring people out of that will considerably decrease their vulnerability and remind them to think about their thinking.]

This sort of thing makes me hope that the friendly AI designers are thinking like OpenBSD-level security researchers. And frankly, they need Bruce Schneier and Ed Felten and Dan Bernstein and Theo de Raadt on the job. We can't design a program not to have bugs - just not to have ones that we know about. As a subset of that, we can't design a constructed intelligence not to have cognitive biases - just not to have ones that we know about. And predatory memes evolve, rather than being designed from scratch. I'd just like you to picture a superintelligent AI catching the superintelligent equivalent of Scientology.

Replies from: wedrifid, CronoDAS
comment by wedrifid · 2010-11-21T10:18:20.903Z · LW(p) · GW(p)

My message is: it can happen to you, and thinking it can't is more dangerous than nothing.

With the balancing message: some people are a lot less vulnerable to believing bullshit than others. Many on Less Wrong have brains biased, relative to the population, towards devoting resources to bullshit prevention at the expense of engaging in optimal signalling. For these people, actively focussing on second-guessing themselves is a dangerous waste of time and effort.

Sometimes you are just more rational and pretending that you are not is humble but not rational or practical.

Replies from: David_Gerard
comment by David_Gerard · 2010-11-21T11:02:09.692Z · LW(p) · GW(p)

I can see that I've failed to convince you and I need to do better.

In my experience, the sort of thing you've written is a longer version of "It can't happen to me, I'm far too smart for that" and a quite typical reaction to the notion that you, yes you, might have security holes. I don't expect you to like that, but there it is.

You really aren't running OpenBSD with those less rational people running Windows.

I do think being able to make such statements of confidence in one's immunity takes more detailed domain knowledge. Perhaps you are more immune and have knowledge and experience - but that isn't what you said.

I am curious as to the specific basis you have for considering yourself more immune. Not just "I am more rational", but something that's actually put it to a test?

Put it this way, I have knowledge and experience of this stuff and I bother second-guessing myself.

(I can see that this bit is going to have to address the standard objection more.)

Replies from: wedrifid
comment by wedrifid · 2010-11-27T01:19:18.892Z · LW(p) · GW(p)

I can see that I've failed to convince you and I need to do better.

This is a failure mode common when other-optimising. You assume that I need to be persuaded, put that as the bottom line, and then work from there. There is no room for the possibility that I know more about my relative areas of weakness than you do. This is a rather bizarre position to take given that you don't even have significant familiarity with the wedrifid online persona, let alone me.

In my experience, the sort of thing you've written is a longer version of "It can't happen to me, I'm far too smart for that" and a quite typical reaction to the notion that you, yes you, might have security holes. I don't expect you to like that, but there it is.

It isn't so much that I dislike what you are saying as it is that it seems trivial and poorly calibrated to the context. Are you really telling a lesswrong frequenter that they may have security holes as though you are making some kind of novel suggestion that could trigger insecurity or offence?

I suggest that I understand the entirety of the point you are making and still respond with the grandparent. There is a limit to how much intellectual paranoia is helpful and under-confidence is a failure of epistemic rationality even if it is encouraged socially. This is a point that you either do not understand or have been careful to avoid acknowledging for the purpose of presenting your position.

I am curious as to the specific basis you have for considering yourself more immune. Not just "I am more rational", but something that's actually put it to a test?

I would be more inclined to answer such questions if they didn't come with explicitly declared rhetorical intent.

Replies from: David_Gerard
comment by David_Gerard · 2010-11-27T10:19:09.048Z · LW(p) · GW(p)

I am curious as to the specific basis you have for considering yourself more immune. Not just "I am more rational", but something that's actually put it to a test?

I would be more inclined to answer such questions if they didn't come with explicitly declared rhetorical intent.

No, I'm actually interested in knowing. If "nothing", say that.

comment by CronoDAS · 2010-11-21T05:11:32.597Z · LW(p) · GW(p)

Regarding Scientology, I had the impression that they usually portray themselves to those they're trying to recruit as being like a self-help community ("we're like therapists or Tony Robbins, except that our techniques actually work!") before they start sucking you into the crazy?

Replies from: wedrifid
comment by wedrifid · 2010-11-21T10:12:37.583Z · LW(p) · GW(p)

Wait... did you just use Tony Robbins as the alternative to being sucked into the crazy?

Replies from: CronoDAS, Kaj_Sotala, CronoDAS
comment by CronoDAS · 2010-11-22T01:55:41.230Z · LW(p) · GW(p)

I'm sure that whatever it is that Tony Robbins preaches is less crazy than the Xenu story. (Although Scientology doesn't seem any crazier than the crazier versions of mainstream religions...)

Replies from: pjeby
comment by pjeby · 2010-11-22T02:43:47.333Z · LW(p) · GW(p)

I'm sure that whatever it is that Tony Robbins preaches is less crazy than the Xenu story.

Here's a video in which he lays out what he sees as the critical elements of human motivation and action. Pay extra attention to the slides -- there's more stuff there than he talks about.

(It's a much more up-to-date and compact model than what he wrote in ATGW, by the way.)

Replies from: Craig_Heldreth
comment by Craig_Heldreth · 2010-11-22T13:52:22.367Z · LW(p) · GW(p)

I got through 11:00 of that video. If that giant is inside me I do not want him woken up. I want that sucker in a permanent vegetative state.

Many years ago I had a friend who is a television news anchor. The video camera flattens you from three dimensions to two, and it also filters the amount of non-verbal communication you can project onto the storage medium. To have energy and charisma on the replay, a person has to project something approaching mania at recording time. I shudder to think what it would be like to sit in the front row of the Robbins talk when he was performing for that video. He comes across as manic, and the most probable explanation for that is amphetamines.

The transcript might read rational, but that is video of a maniac.

Replies from: pjeby
comment by pjeby · 2010-11-22T15:18:31.542Z · LW(p) · GW(p)

He comes across as manic

A bit of context: that's not how he normally speaks.

There's another video (not publicly available, it's from a guest speech he did at one of Brendon Burchard's programs) where he gives the backstory on that talk. He was actually extremely nervous about giving that talk, for a couple different reasons. One, he felt it was a big honor and opportunity, two, he wanted to try to cram a lot of dense information into a twenty minute spot, and three, he got a bad introduction.

Specifically, he said the intro was something like, "Oh, and now here's Tony Robbins to motivate us", said in a sneering/dismissive tone... and he immediately felt some pressure to get the audience on his side -- a kind of pressure that he hasn't had to deal with in a public speaking engagement for quite some time. (Since normally he speaks to stadiums full of people who paid to come see him -- vs. an invited talk to a group where a lot of people -- perhaps most of the audience -- sees him as a shallow "motivator".)

IOW, the only drug you're seeing there is him feeling cornered and wanting to prove something -- plus the time pressure of wanting to condense material he usually spends days on into twenty minutes. His normal way of speaking is a lot less fast-paced, if still emotionally intense.

One of his time management programs that I bought over a decade ago had some interesting example schedules in it, that showed what he does to prepare for his time on stage (for programs where he's speaking all day) -- including nutrition, exercise, and renewal activities. It was impressive and well-thought out, but nothing that would require drugs.

comment by Kaj_Sotala · 2010-11-21T23:36:01.167Z · LW(p) · GW(p)

One of Tony Robbins' books has been really helpful to me. Admittedly the effects mostly faded after the beginning, but applying his techniques put me into a rather blissful state for a day or two and also allowed for a period of maybe two weeks to a month during which I did not procrastinate. I also suspect I got a lingering boost to my happiness setpoint even after that. These are much better results than I've had from any previous mind-hacking technique I've used.

Fortunately I think I've been managing to figure out some of the reasons why those techniques stopped working, and have been on an upswing, mood and productivity-wise, again. "Getting sucked into the crazy" is definitely not a term I'd use when referring to his stuff. His stuff is something that's awesome, that works, and which I'd say everyone should read. (I already bought my mom an extra copy, though she didn't get much out of it.)

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-11-21T23:51:44.881Z · LW(p) · GW(p)

What book?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-11-21T23:59:08.337Z · LW(p) · GW(p)

Awaken the Giant Within.

You need to apply some filtering to pick out the actual techniques out of the hype, and possibly consciously suppress instinctive reactions of "the style of this text is so horrible it can't be right", but it's great if you can do that.

I will post a summary of the most useful techniques at LW at some point - I'm still in the process of gathering long-term data, which is why I haven't done so yet. Though I blogged about the mood-improving questions some time back.

Replies from: pjeby
comment by pjeby · 2010-11-22T02:33:59.361Z · LW(p) · GW(p)

You need to apply some filtering to pick out the actual techniques out of the hype

It's not so much hype as lack of precision. Robbins tends to specify procedures in huge "steps" like, "step 1: cultivate a great life". (I exaggerate, but not by that much.) He also seems to think that inspiring anecdotes are the best kind of evidence, which is why I had trouble taking most of ATGW seriously enough to really do much from it when I first bought it (like a decade or more ago).

Recently I re-read it, and noticed that there's actually a lot of good stuff in there, it's just stuff I never paid any attention to until I'd stumbled on similar ideas myself.

It's sort of like that saying commonly (but falsely) attributed to Mark Twain:

"When I was a boy of fourteen, my father was so ignorant I could hardly stand to have the old man around. But when I got to be twenty-one, I was astonished at how much the old man had learned in seven years."

Tony seems to have learned a lot in the years since I started doing this sort of thing. ;-)

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-11-22T09:59:40.009Z · LW(p) · GW(p)

It's not so much hype as lack of precision. Robbins tends to specify procedures in huge "steps" like, "step 1: cultivate a great life". (I exaggerate, but not by that much.)

That's odd - I didn't get that at all, and I found that he had a lot of advice about various concrete techniques. Off the top of my head: pattern interrupts, morning questions, evening questions, setback questions, smiling, re-imagining negative memories, gathering references, changing your mental vocabulary.

Replies from: pjeby
comment by pjeby · 2010-11-22T15:21:49.983Z · LW(p) · GW(p)

I found that he had a lot of advice about various concrete techniques.

He does, but they're mostly in the areas that I ignored on my first few readings of the book. ;-)

comment by CronoDAS · 2010-11-22T01:43:42.429Z · LW(p) · GW(p)

Well, there's crazy, and then there's crazy...

comment by Sabiola (bbleeker) · 2010-11-24T12:53:37.201Z · LW(p) · GW(p)

(I'm working on a post or two on the subject area of dangerous memes and what to do about them.)

I'm very interested in that, I think I need it. I just read this article about Mere Christianity by C. S. Lewis, and I was like "what the hell is wrong with me, that I didn't see at least some of those points myself?" It really scared me, and made me wonder what other nonsense I believe in, that I ought to have seen through right away...

Replies from: NancyLebovitz, wedrifid, David_Gerard
comment by NancyLebovitz · 2010-11-24T16:24:51.780Z · LW(p) · GW(p)

It might be worth doing some analysis on the authoritative voice (the ability to sound right), and I speak as someone who's been a CS Lewis, GK Chesterton, Heinlein, Rand, and Spider Robinson fan. At this point, I suspect it's a pathology.

Replies from: David_Gerard, Blueberry, bbleeker, Eliezer_Yudkowsky
comment by David_Gerard · 2010-11-26T20:26:32.283Z · LW(p) · GW(p)

Dude. AN ASSERTION IS PROVEN BY SOUNDING GOOD. It's a form of the Steve Jobs reality distortion superpower: come up with a viewpoint so compelling it will reshape people's perception of the past as well as the present.

(I must note that I'm not actually advocating this.)

Argument by assertion amusement from my daughter: "I'm running around the kitchen, but I'm not being annoying by running around the kitchen." An argument by assertion of rich depth, particularly from a three-year-old.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-11-27T14:18:02.038Z · LW(p) · GW(p)

Did you ever get around to reading either of the papers I linked you to there btw?

Replies from: David_Gerard
comment by David_Gerard · 2010-11-27T17:00:04.431Z · LW(p) · GW(p)

Nuh. Still in the Pile(tm) with yer talk, which I have watched the first 5 min of ... I hate video so much.

Did you dislike your talk's content or your presentation? So far it looks like something that should be turned into a series of blog posts, complete with diagrams.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-11-27T17:08:17.019Z · LW(p) · GW(p)

Neither really, it's the video itself I dislike. I've put the slides on Scribd, and I'm thinking of re-recording the soundtrack. Only trouble is, I'd have to watch the video first to remember what I said... and I hate video so much.

comment by Blueberry · 2012-03-28T03:39:43.420Z · LW(p) · GW(p)

This was over a year ago but I see that you're still around. I wanted to ask you more about this. How does Spider Robinson fit in with the others? I would also add Orwell, Kipling, and Christopher Hitchens. Maybe even Eliezer a bit.

A big part of it is that these authors talk about truth a lot and the harm of denying that it's there, and rail against and strawman other groups for refusing to accept the truth or even that truth exists.

What do you mean by a pathology? You think there was something wrong with those authors? Are you talking about overconfidence?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-03-28T03:58:49.090Z · LW(p) · GW(p)

Spider Robinson is very definite and explicit about how things ought to be. Unfortunately, he extends this to the idea that people who are worth knowing like good jazz, Irish coffee, and puns.

I meant that there may be a pathology at my end-- being so fond of the authoritative voice that I could be a fan of writers with substantially incompatible ideas, and not exactly notice or care.

Replies from: Blueberry
comment by Blueberry · 2012-03-28T04:58:12.195Z · LW(p) · GW(p)

I suspect you may be reading his exaggerated enthusiasm for these things as a blanket statement about people who aren't worth knowing. For instance, I might, in a burst of excitement, say that people who don't like the song Waterfall aren't worth talking to, but I wouldn't mean it literally. It would be a figure of speech.

For instance, in one of the Callahan books he states (in the voice of the author, not as a character, IIRC) that if he had a large sum of money he'd buy everyone in the US a copy of "Running, Jumping, Standing Still" on CD because it would make the world so much better. I read this as hyperbole for how much he likes that CD, and I don't take it literally.

I may be misremembering or have missed something in his writing, though.

As far as you liking the voice, I doubt it's a pathology. I feel the same way you do and it's not surprising to me that a lot of people would find that kind of objectivity and confidence appealing. It is a bias, if you confuse the pleasure of reading those writers with their actual ideas, but since I vehemently disagree with most of the above writers I'm not too worried about it. (Do you still read or like those writers?)

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-03-28T09:23:50.970Z · LW(p) · GW(p)

I recently started rereading Atlas Shrugged, and was having fun with it -- no matter what else, Rand created a world where interesting things happen. It was also interesting because some things have changed. Her bad guy rich people were bad because they were slack -- they weren't interested in running their businesses; they had barely enough energy to get government favors. The modern type who's energetically taking as much money as possible out of the business with the intent of going somewhere else is barely present.

I can't stand Robinson any more. The tone of "we're cooler than the mundanes" has revolted me to the point where even the milder earlier version gets on my nerves. It's possible that I should give Stardance another chance some time. It's also possible that the effects of Very Bad Dreams have faded. Robinson has a sadistic imagination.

Back when, I bought a copy of Running Jumping Standing Still when I happened to see it, and was annoyed to find that I liked it.

I reread "Magic, Inc." recently, and liked it very much. I haven't read much Lewis or Chesterton lately.

My concern about pathology is a suspicion that what I like is the comfort of being told what to think in a palatable way.

I obviously haven't completely lost my taste for didactic fiction.

Replies from: hairyfigment, Blueberry
comment by hairyfigment · 2012-03-29T20:16:16.497Z · LW(p) · GW(p)

The tone of "we're cooler than the mundanes" has revolted me to the point where even the milder earlier version gets on my nerves.

Second the confusion about this. I don't see what changed in him unless you mean the author's fictionalized daughter.

comment by Blueberry · 2012-03-28T10:34:37.415Z · LW(p) · GW(p)

The modern type who's energetically taking as much money as possible out of the business with the intent of going somewhere else is barely present.

Yeah, that's one of the major criticisms of her book, that the poor honest robber-barons were being exploited by the mean old federal regulations, which has nothing to do with the real world.

I actually liked Anthem best of Rand's books, since it didn't pretend to take place in our world, but was set in a dystopian world instead.

You have to admit Rand can really write a page turner, even though her ideas are shit.

Heh, why were you annoyed that you liked Running Jumping Standing Still? You're opposed to music recommendations from writers?

I haven't read Stardance or Very Bad Dreams: what had the tone of being cooler than the mundanes, and what was the sadistic imagination? Why can't you stand him? I'm really not familiar with the tone you're talking about. The only tone that bothers me about SR is the whole "Let's be hippies and work everything out and it'll all be ok" thing. "Free Lunch" in particular. And his argument in one of the Callahan stories that his AGI character would have to be friendly because it wouldn't have human fear or insecurity. And have you read his "Night of Power"?

My favorite Heinlein are any of his short stories, and the novels Methuselah's Children, Time Enough for Love, To Sail Beyond the Sunset, The Cat Who Walked Through Walls, Number of the Beast, and The Moon is a Harsh Mistress.

As far as Lewis, you have to get past the religious stuff obviously, but I loved The Great Divorce.

I'm guessing you might like Robert Sheckley, who has some of the same "telling you what to think" but it's couched in extremely clever, biting satire. Sheer brilliance. He's SF's Mark Twain.

Replies from: Vaniver, hairyfigment, Incorrect, Eliezer_Yudkowsky, katydee
comment by Vaniver · 2012-03-29T04:00:34.383Z · LW(p) · GW(p)

Yeah, that's one of the major criticisms of her book, that the poor honest robber-barons were being exploited by the mean old federal regulations, which has nothing to do with the real world.

One of the things I find incredibly interesting about Rand and her followers is that Rand is rather good at capturing the spirit of the envious and the bureaucratic, but not very good at making likeable heroes. They tend to be the Steve Jobs sort -- it's nice that he exists somewhere far away from me and will sell me things, and he should be as unregulated as possible, but I'd rather not work for him or be his friend.

And so when I've gone to Objectivist meetings, most people there have the same hatreds and same resentments and feel them pretty strongly, but that seems to be the primary binding factor, rather than interest in rationality or personal kindness or shared goals. (I'm not counting everyone wanting to make a bunch of money for themselves as a shared goal.)

Rand looks like she's talking about production, but her real interest is in envy. And I agree with her that it's a terrible thing we shouldn't reward.

Replies from: Nornagest
comment by Nornagest · 2012-03-29T06:40:12.527Z · LW(p) · GW(p)

I've always thought -- even when I was fourteen and reading it for the first time -- that Atlas Shrugged would have been a better book along every conceivable dimension if Dagny and Rearden had told Galt where to stuff it when they got the chance. Never mind what would have happened later. The guy had all the personality of a wind-up pocketwatch; more importantly, though, (allegedly) charismatic figures brandishing totalizing economic ideologies and apocalyptic predictions tend to get a lot of people killed, and Rand as a child of the Soviets should have known that. Bright engineers and executives that actually struggle and solve problems on-page and appear to feel empathy are a lot more fun to read about.

Of course, then it wouldn't have been a Rand book. You wouldn't be too far wrong if you said -- of any of her books -- that all the economic and political content was window dressing for her depiction of her ideal man, and not the other way around.

comment by hairyfigment · 2012-03-29T20:30:13.655Z · LW(p) · GW(p)

The only tone that bothers me about SR is the whole "Let's be hippies and work everything out and it'll all be ok" thing. "Free Lunch" in particular.

Are we both thinking of the book where vg gnxrf n qrhf rk znpuvan gb cerirag uhznavgl sebz qrfgeblvat vgfrys? Gur obbx va juvpu ng yrnfg bar punenpgre'f rkgencbyngrq ibyvgvba jbhyq cebonoyl qrfgebl uhznavgl, cnvashyyl?

Now the AI does seem absurd. I'm tempted to give SR a pass on that one because he had the characters talk about science fiction so much, they almost break the fourth wall to explain his motives. But the same author went on a rant elsewhere about the dangers of Star Trek science fantasy. His apparent exception for Callahan's seems a little forced.

Replies from: Blueberry
comment by Blueberry · 2012-03-29T21:37:18.305Z · LW(p) · GW(p)

Are we both thinking of the book where vg gnxrf n qrhf rk znpuvan gb cerirag uhznavgl sebz qrfgeblvat vgfrys? Gur obbx va juvpu ng yrnfg bar punenpgre'f rkgencbyngrq ibyvgvba jbhyq cebonoyl qrfgebl uhznavgl, cnvashyyl?

When and which character? I'm not sure where you're getting that.

Now the AI does seem absurd.

Well, it's a pretty common error to think that with enough intelligence, an AI (or person) will be ethical and friendly. Eliezer himself made that mistake back in 2000 before he realized that intelligence is optimizing the world towards goals and those goals can be arbitrary. Spider was right that an AI would probably not have human emotions like greed or revenge, but he missed the idea that we're made of atoms that the AI could use for something else.

Replies from: hairyfigment
comment by hairyfigment · 2012-03-29T22:37:16.636Z · LW(p) · GW(p)

When and which character? I'm not sure where you're getting that.

In the book you're talking about, what do we learn in the big reveal? What happens immediately after the big reveal? Do we both mean this book?

Replies from: Blueberry
comment by Blueberry · 2012-03-30T00:21:11.795Z · LW(p) · GW(p)

Yes, that book. By big reveal, do you mean gung gur vagehqref ner gvzr geniryref? Please elaborate.

Replies from: hairyfigment
comment by hairyfigment · 2012-03-30T00:58:55.615Z · LW(p) · GW(p)

Jura Ubezng gur gvzr geniryre rkcynvaf uvf zbgvirf, ur rkcyvpvgyl fnlf gur uhzna enpr vf "qbbzrq" va uvf gvzr. Gurl pna'g ercebqhpr cebcreyl, naq gurl qba'g frrz gb unir nal cebfcrpgf sbe vzzbegnyvgl. Gurl ubcr gvzr geniry jbexf va rknpgyl gur evtug jnl gb yrg gurz punatr uvfgbel sbe gur orggre, orpnhfr jung qb gurl unir gb ybfr? V pnyyrq guvf n qrhf rk znpuvan (nffhzvat vg jbexf).

Nsgre Ubezng rkcynvaf rirelguvat, gur onq thl'f Qentba be ungpurg-zna erirnyf gung ur urneq vg nf jryy naq cynaf gb xvyy gurz. Ur frrzf snveyl vagryyvtrag, pregnvayl fznegre guna uvf rzcyblre. Ohg ur oryvrirf ur cersref n jbeyq jurer ur trgf gb xrrc gur wbo ur ybirf, naq yngre nyy uhznaf qvr.

comment by Incorrect · 2012-03-29T15:16:49.340Z · LW(p) · GW(p)

And his argument in one of the Callahan stories that his AGI character would have to be friendly because it wouldn't have human fear or insecurity.

I'll get the refrigerator.

Replies from: Blueberry
comment by Blueberry · 2012-03-30T02:04:37.978Z · LW(p) · GW(p)

Hmm?

Replies from: Incorrect
comment by Incorrect · 2012-03-30T02:50:52.313Z · LW(p) · GW(p)

There was this post on LessWrong about thinking an AI could be prevented from being angry by cooling it down using a freezer.

I can't seem to find it now though.

Replies from: Blueberry
comment by Blueberry · 2012-03-30T03:19:51.410Z · LW(p) · GW(p)

Heh, I like to joke about giving my computer cocaine to make it run faster.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-03-29T06:24:33.408Z · LW(p) · GW(p)

I love Sheckley - but when does he tell you what to think? I read him when I was young, so maybe I didn't notice...?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-03-29T16:33:06.750Z · LW(p) · GW(p)

I'd have to reread to have a strong opinion, but Sheckley at least has a lot about how thought works. Example: the short story about being stuck in a spaceship with a replicator which refuses to repeat itself. It's a comic story (iirc, they need six copies of something to get the spaceship to work properly, and the only source of food is the recalcitrant replicator), but it's got something to say about how categories work.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-03-29T20:55:52.582Z · LW(p) · GW(p)

I distantly remember that... I don't suppose you happen to recall the name of the story? If Sheckley has been teaching "how to think" then I really should read to find out how one of my favorite authors does it.

Replies from: NancyLebovitz, thomblake
comment by NancyLebovitz · 2012-03-29T21:15:25.370Z · LW(p) · GW(p)

I'm heading out for the weekend, but if there isn't a definitive answer by the time I'm back, I'll look into it.

There's also a very cool story about a man who tries to get computerized therapy, but the program is optimized for treating aliens.

comment by thomblake · 2012-03-29T20:59:36.093Z · LW(p) · GW(p)

I want to say it's The Necessary Thing.

comment by katydee · 2012-03-28T12:34:26.564Z · LW(p) · GW(p)

Night of Power was the alarm bell that made me realize Robinson was off the rails.

I'm not sure what the two of you mean by "Very Bad Dreams" -- perhaps a misrecollection of "Very Bad Deaths"? If so, Very Bad Deaths is almost certainly the sadistic one.

Replies from: Blueberry, NancyLebovitz
comment by Blueberry · 2012-03-28T19:06:00.663Z · LW(p) · GW(p)

I don't know the book Very Bad Deaths. Why is it sadistic?

And I loved Night of Power. What made you think he was "off the rails"? It's speculative fiction, remember. I don't read him as actually supporting a racial war or saying one is likely.

Replies from: katydee
comment by katydee · 2012-03-29T01:49:54.730Z · LW(p) · GW(p)

Very Bad Deaths has a fair amount of torture-porn and also contains extended descriptions of the effects of an extraordinarily painful medical condition.

Night of Power made me realize Robinson was off the rails because his didactic tone and self-righteous presentation continued even when he was describing horrible and outrageous actions, cold-blooded murders on the parts of the protagonists, etc. Essentially, it was the book that made me stop trusting Robinson as an author, since it demonstrated his ability to justify (and even advocate for) ridiculous excesses, leading me to evaluate the rest of his work much more critically.

Replies from: Blueberry
comment by Blueberry · 2012-03-29T09:45:44.696Z · LW(p) · GW(p)

That makes me want to read Very Bad Deaths very much, which was probably not your intended effect.

his didactic tone and self-righteous presentation continued even when he was describing horrible and outrageous actions, cold-blooded murders on the parts of the protagonists, etc.

Are you sure you're not making the mistake of confusing a character's beliefs with the author's?

As far as the murders, have you ever seen an action movie?

it demonstrated his ability to justify (and even advocate for) ridiculous excesses

Please give me more details on this. I take it you're not a rational anarchist and don't support Michael's revolution? What ridiculous excesses?

I'm just very surprised that you think it's didactic or self-righteous; I didn't see it that way at all.

Just curious, have you read Heinlein's "The Moon is a Harsh Mistress"? Night of Power is full of allusions to it and it may not make as much sense if you haven't.

Replies from: NancyLebovitz, katydee, Risto_Saarelma
comment by NancyLebovitz · 2012-03-29T12:17:22.949Z · LW(p) · GW(p)

Robinson's fiction has a sadistic streak (very bad things happening to the unattractive characters) that Heinlein's doesn't. One of the later Callahan's novels has a plot turn which indicates that Robinson had some idea that this was problematic.

In any case, I hope you read some Robinson and let us know what you think.

Replies from: Blueberry
comment by Blueberry · 2012-03-30T02:05:46.823Z · LW(p) · GW(p)

I love what I have read. I've only read a few of his novels though. Which one has that plot turn and what's the plot turn?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-03-31T12:29:49.823Z · LW(p) · GW(p)

It was a Callahan's novel which came out in the past ten years or so. It might have been Callahan's Key.

Wnxr, gur ivrjcbvag punenpgre, chavfurf na vashevngvat naq culfvpnyyl htyl punenpgre engure frireryl (snvag zrzbel fhttrfgf yvgrenyyl qhzcvat fuvg ba ure). Xnezn rafhrf.

I'm really sorry, but I don't remember the details.

Replies from: Blueberry, NancyLebovitz
comment by Blueberry · 2012-04-01T19:53:12.163Z · LW(p) · GW(p)

Thanks... I'm still going through the most recent Callahan novels. Jake Stonebender does kinda have a temper.

comment by NancyLebovitz · 2012-04-01T01:52:11.322Z · LW(p) · GW(p)

I checked with a friend who's a Robinson fan. It was Callahan's Key, and it was n yvgre bs hevar.

comment by katydee · 2012-03-29T19:23:40.867Z · LW(p) · GW(p)

That makes me want to read Very Bad Deaths very much, which was probably not your intended effect.

Oh no, I thought it was quite good, but it's not really for the weak of stomach. One of the main characters is also basically Spider Robinson himself, so if that's not your cup of tea I would suggest looking elsewhere-- personally, though, I did find it quite entertaining.

Are you sure you're not making the mistake of confusing a character's beliefs with the author's?

No-- in fact I'm nearly positive that I am making that mistake, but I find it comparatively hard not to make given Robinson's general style. The whole thing just squicks me out.

I would also argue that, for much of Robinson's work, the characters' beliefs are those of the author (and indeed the characters themselves are essentially the author)-- though I don't think Night of Power suffers from this.

Just curious, have you read Heinlein's "The Moon is a Harsh Mistress"? Night of Power is full of allusions to it and it may not make as much sense if you haven't.

Certainly I have. Robinson has always struck me as sort of a bargain-basement Heinlein.

Replies from: Blueberry
comment by Blueberry · 2012-03-30T02:07:09.093Z · LW(p) · GW(p)

I'm trying to understand exactly what squicks you, and I'm not doing a very good job... the Revolution in Night of Power was pretty peaceful as revolutions go.

Replies from: katydee
comment by katydee · 2012-03-31T20:53:24.299Z · LW(p) · GW(p)

Gur cneg jurer gur znva punenpgref zheqre n pncgvir (be pncgvirf? Vg'f orra n juvyr) ol fcenlvat tyhr vagb gurve abfr/zbhgu naq pnhfvat gurz gb nfculkvngr vf n tbbq rknzcyr bs jung V sbhaq fdhvpxl nobhg gung obbx.

Replies from: Blueberry
comment by Blueberry · 2012-04-01T02:29:35.452Z · LW(p) · GW(p)

Jura gurl'er nobhg gb encr Wraavsre? Ur qrfreirf gung naq vg'f frys-qrsrafr

Replies from: katydee
comment by katydee · 2012-04-01T15:22:40.102Z · LW(p) · GW(p)

Ertneqyrff bs jurgure fbzrbar "qrfreirf vg," zheqrevat pncgvirf va tehrfbzr naq rkpehpvngvat znaaref vf orlbaq gur cnyr. Gung'f nyfb abg frys-qrsrafr ol nal fgnaqneq gung V xabj bs, fvapr gur crefba va dhrfgvba jnf nyernql haqre gurve pbageby.

Replies from: Blueberry
comment by Blueberry · 2012-04-01T19:35:22.758Z · LW(p) · GW(p)

Ok, but... wouldn't the same objection apply to virtually any action/adventure movie or novel? Kick Ass, all the Die Hard movies, anything Tarantino, James Bond, Robert Ludlum's Bourne Identity novels and movies, et cetera. They all have similar violent scenes.

Replies from: katydee
comment by katydee · 2012-04-01T20:15:13.132Z · LW(p) · GW(p)

I can't think of any point in Die Hard where John McClane kills prisoners in cold blood (in fact, there are two times where he almost dies because he tries to arrest terrorists instead of just shooting them). And I do consider all such scenes objectionable-- for instance, in Serenity, when Zny fubbgf gur fheivivat Nyyvnapr thl sebz gur fuvc gung qrfgeblrq Obbx'f frggyrzrag, or when gur Bcrengvir fnlf ur vf hanezrq, fb Zny whfg chyyf n tha naq fubbgf uvz, I had the same squicky reaction.

Replies from: None, wedrifid
comment by [deleted] · 2012-04-01T21:50:09.483Z · LW(p) · GW(p)

See, I liked that scene. Gur Bcrengvir jnf gelvat gb pngpu crbcyr haqre Zny'f cebgrpgvba fb gurl pbhyq or neerfgrq naq gbegherq be rkrphgrq. Ur jnf jvyyvat gb xvyy Zny naq rirelbar ryfr ur pnerq nobhg va beqre gb qb fb. Pngpuvat uvz bss thneq naq xvyyvat uvz jbhyq unir fnirq n ybg bs yvirf, rira vs vg jnfa'g va nal jnl snve. Squicky? Sure. Actually the wrong thing to do? Not so much.

comment by wedrifid · 2012-04-01T20:30:51.978Z · LW(p) · GW(p)

I can't think of any point in Die Hard where John McClane kills prisoners in cold blood (in fact, there are two times where he almost dies because he tries to arrest terrorists instead of just shooting them). And I do consider all such scenes objectionable

Which scenes are you saying are objectionable? The ones where McClane puts the lives of himself and all those he is trying to protect in danger by not shooting terrorists when he should have? Those squick me out. Utter negligence when so many lives are at stake.

Replies from: katydee
comment by katydee · 2012-04-01T20:48:01.071Z · LW(p) · GW(p)

McClane is probably too far in the other direction, but to be fair he's a cop (so he has extra rules to abide by, not just normal morality) and he definitely doesn't understand the magnitude of the situation at first.

comment by Risto_Saarelma · 2012-03-29T10:17:32.330Z · LW(p) · GW(p)

Are you sure you're not making the mistake of confusing a character's beliefs with the author's?

As far as the murders, have you ever seen an action movie?

Have you ever read a novel and gotten an insistent background vibe from it that says "something isn't quite right with the person who wrote this"? I got this pretty strong from John C. Wright's The Golden Age trilogy, even though I started reading it knowing next to nothing about Wright.

This reaction doesn't seem widely shared, though. Most people I've talked with seem to like The Golden Age a lot.

Replies from: APMason
comment by APMason · 2012-03-29T12:14:52.952Z · LW(p) · GW(p)

Have you ever read a novel and gotten an insistent background vibe from it that says "something isn't quite right with the person who wrote this"?

I get this a lot from A Song of Ice and Fire.

comment by NancyLebovitz · 2012-03-28T13:06:17.557Z · LW(p) · GW(p)

Yes, Very Bad Deaths.

comment by Sabiola (bbleeker) · 2010-11-25T15:11:55.354Z · LW(p) · GW(p)

Hm, I'm a fan of Heinlein too, I guess I'd better not start reading those others. ;p Any idea where I can look for clues about the 'authoritative voice'?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-03-28T05:48:06.236Z · LW(p) · GW(p)

That's odd. I've been a fan of Heinlein and Spider Robinson but never Rand or Lewis. Haven't tried Chesterton.

Replies from: Blueberry
comment by Blueberry · 2012-03-28T06:00:48.740Z · LW(p) · GW(p)

You're actually the reason I started reading Spider Robinson.

comment by wedrifid · 2010-11-27T01:01:41.874Z · LW(p) · GW(p)

I'm very interested in that, I think I need it. I just read this article about Mere Christianity by C. S. Lewis, and I was like "what the hell is wrong with me, that I didn't see at least some of those points myself?"

The strength of C. S. Lewis's works seems to be that they were a whole lot less bad than the alternate sources of the same message.

comment by David_Gerard · 2010-11-24T14:40:35.149Z · LW(p) · GW(p)

The hard part with something like that is not how to question your ideas, but how to notice that you have an idea that needs questioning. It's like reading Michael Behe's books on intelligent design and trying to understand the view inside his head -- how a tenured biology professor could come up with arguments so obviously defective to others, and fail to notice the low quality of his own thinking.

comment by Vladimir_Nesov · 2010-11-23T22:07:37.623Z · LW(p) · GW(p)

Be careful. So will the less-than-best essays and teachers. It's a form of hindsight bias: you think this thing is obvious, but your thoughts were actually quite inchoate before that.

Given a clear explanation, it's more probably correct than secretly wrong. We don't live in a world dominated by true-sounding lies. Incorrect things should be generally more surprising than correct things, even if there are exceptions.

(It's confirmation bias, not hindsight bias. Hindsight bias is overestimation of prior probability upon observing a positive instance of an event.)

comment by Swimmy · 2010-11-22T07:43:25.677Z · LW(p) · GW(p)

Going back and looking at the sequences is funny. Across many posts, comments accuse Eliezer of simplifying and attacking straw-men. But as someone who was religious when he was first reading OB, and who got deconverted specifically because of the arguments therein, I think that Eliezer had it right and the accusers had it wrong: many of the arguments he refutes seem like straw-men to people who associate with other rationalists, but to those steeped in irrationality they are basically the world. Witness, for instance, a former Christian's revisiting of CS Lewis to find that he not only fails to provide a strong defense of Christianity, he's basically a joke to anyone who knows enough history or biology or sociology or psychology. But when you're in an affective death spiral, you often can't notice such things.

While I worry about the self-congratulation of threads like these, I want to nominate a lesson I learned from Robin Hanson (and Daniel Klein, another GMU economist), which will probably affect me professionally as much as my religious deconversion affected me personally:

It is ok to believe things that are obvious, even if they are unpopular.

It seems non-controversial, but when you actually find yourself in a discussion with an intelligent, like-minded person with similar interests, whose arguments have the backing of high-status individuals, the temptation to switch sides is enormous.

comment by David_Gerard · 2010-11-20T18:46:43.775Z · LW(p) · GW(p)

See, this is the way to get people to read the posts in the sequences: give them a reason and speak personally. For example, you've just given me another attack of tab explosion ...

comment by FrankAdamek · 2010-11-21T17:38:32.063Z · LW(p) · GW(p)

My gains from LessWrong have come in several roughly distinct steps, all of which have come as I've been working my way through the Sequences. (Taking notes has really helped me digest and cement the information.)

1) Internalizing that there is a real world out there, and like Louie said, that ideas can be right or wrong. Making beliefs pay rent, referents and references, etc. A perspective on beliefs that they should accurately reflect the world and be used for achieving things I care about; all else is fluff. That every correct map should agree with every other, so that life does not seem such a disconnected jumble of different domains. Overall these kinds of insights really helped to give focus to my thoughts and clear out the clutter of my mind.

2) Having a conception of what beliefs should do, LessWrong helps me be aware of and combat various biases that interfere with the formation of accurate beliefs, and with taking coherent action based on those beliefs. I've made large gains here, but of course I'm not finished.

3) Forming a coherent, productive, happy me. Bootstrapping and snowballing effects. As I learn more, I seek out more good information, better. On this point, see Anna Salamon's posts going back to "Humans are not automatically strategic." The book "The Art Of Learning" by Josh Waitzkin has been immensely helpful. Learning about Cognitive Behavioral Therapy (this book is good) has been very helpful in being empirical and rational about the self. I believe this is basically the material of the Luminosity sequence, though I read those posts some time ago and should probably review them.

There's far too much to go into specifically, but the transformation has been huge, and continues. When conversing with non-rationalists, arguments feel like a match between Bruce Lee and some guy off the street. It's not that I have any more raw intellectual power than I had before, but my set of tools/training has improved tremendously. Unfortunately a non-rationalist doesn't (much) realize the extent to which they're outmatched, and indeed there is seldom a point to "beating someone." Instead you realize that even a poorly-argued position can be correct, look out for points you may have missed, and perhaps try to introduce a few concepts. It feels like I'm working at a level above most people, that the conversation is a different thing to me and them; it's not like I can just tell them all this. I discovered LessWrong through an interest in existential risk and at first it seemed kind of boring, not very useful, this weird academic exercise. I wish I could convey to more people how helpful it's been, and the extent to which I didn't know what I didn't know.

(A note on the community: I think it's great that it's here, and I think that some really great material has been produced, and continues to be produced, beyond the "core" material by Eliezer. That said, I almost never read comments and I only read front-page, promoted posts; the return on time for reading anything else doesn't seem great enough right now, compared to my other work and studies. Just to give an idea on how I'm using LessWrong.)

comment by Meryseshat · 2010-11-30T22:32:31.733Z · LW(p) · GW(p)

I've been looking around the site for awhile, having several people I know who go here. What I've learned is unfortunately that I'm unlikely to be able to learn from this site unless something changes. Which is too bad because I don't think I'm unable to learn in general.

I have no academic background whatsoever, and no expertise in science or philosophy. I am not an intellectual. I am good at noticing jargon, but terrible at picking it up and being able to use and understand it. I have no particular skill in abstract thinking. While tests aren't everything, I score in the range of borderline intellectual functioning on IQ tests and I do so for a reason: I am quite lacking in several standard cognitive abilities.

I also have obvious cognitive strengths, writing among them, but they don't match up with the ones necessary to navigate this site. From my perspective, reading this site is like trying to read a book with several words per sentence chopped out, and the words that remain being used in /ways/ that don't match well with my ability to comprehend.

Normally I would just turn around and walk away. I don't think anyone here has any particular desire to see someone like me shut out. I find it saddening, though, that a site dedicated to helping people think more accurately is mostly dominated by people who have a good deal of intellectual skills already. I would be curious to see how the ideas here could be modified to assist people who are not typical users of this site -- people who can't read mountains of text in order to prepare themselves for the conversations that are taking place, and who need things explained in ways that are understandable even if you're average or even a slow learner.

This isn't meant as an attack, just a suggestion for new directions the site could take in order to benefit people who aren't all that intellectual. You don't have to have all the traditional cognitive abilities to appreciate the importance of thinking clearly about reality. I even bet that the techniques would have to be modified for some of us who can't hold complex ideas in our heads. But modifying them would be a good way to show there's more than a single set of cognitive techniques to get to the same goal of understanding the world as accurately as possible.

Replies from: DSimon, wedrifid
comment by DSimon · 2010-12-01T06:03:43.264Z · LW(p) · GW(p)

Please excuse me if you've already had someone suggest this to you (and you probably have), but: have you looked through the sequences? They're the closest thing to a tutorial this site has, and many of them are (a) written in everyday language and (b) pretty darn useful, and not just for sounding informed while participating in discussions on this website. :-)

Replies from: BenLowell
comment by BenLowell · 2010-12-07T05:55:07.860Z · LW(p) · GW(p)

Many of the sequences are still quite difficult and dense. Some of them lead to relatively simple conclusions, but for me the insights only come after rereading, reflecting, and sometimes discussing them with others.

I think that having more summary pages, such as the one for 37 Ways Words Can Be Wrong, would be helpful. Also, we could make a page in the wiki that catalogs techniques in a very simple way, similar to 5 minute techniques. It would be something good that I could link people to. They could see what we have produced, learn something, and then decide to dig deeper. Linking to the sequences often scares people away, and they don't learn anything during their visit here because all of the insights are hidden.

Replies from: DSimon
comment by DSimon · 2010-12-07T18:15:54.615Z · LW(p) · GW(p)

Agreed, good ideas!

I also think it would be helpful if we had tighter association between the sequences and the site's UI. I like TV Tropes' index system where a trope will have a list at the bottom of what indexes it belongs to, with arrow buttons that make it easy to go through an index's articles in order.

comment by wedrifid · 2010-12-01T05:37:22.014Z · LW(p) · GW(p)

Damn, your writing is absolutely brilliant. If only you could understand all the ideas here in the first place - you would be able to take them to a whole new level of accessibility.

comment by Academian · 2010-11-20T18:17:22.212Z · LW(p) · GW(p)

Very nice post! My personal favorite things I've learned about from reading LessWrong:

  • Causality: Models, Reasoning, and Inference, a book by Judea Pearl written in 2000 which is frequently referenced by the SIAI and on LessWrong.

  • Spaced Repetition Software.

  • Politics as charity: that in terms of expected value, altruism is a reasonable motivator for voting (as opposed to common motivators like "wanting to be heard").

  • That a significant number of people are productively working on philosophical problems relevant to our lives.

  • Lots of little sanity checks to keep in mind, like Conservation of Expected Evidence, i.e. that, before seeing evidence, your expectation of what your confidence will be after seeing it must equal your prior confidence. (But see this comment on things you can expect from your beliefs.)
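
A quick numeric check of that last sanity check may help; this is a minimal sketch with toy numbers of my own, not anything from the post:

```python
# Conservation of Expected Evidence: E[P(H|E)] = P(H).
# All numbers below are illustrative.
p_h = 0.3                    # prior P(H)
p_e_given_h = 0.8            # P(E|H), assumed
p_e_given_not_h = 0.2        # P(E|~H), assumed

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)            # P(E)
post_if_e = p_e_given_h * p_h / p_e                              # P(H|E)
post_if_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)              # P(H|~E)

expected_posterior = p_e * post_if_e + (1 - p_e) * post_if_not_e
assert abs(expected_posterior - p_h) < 1e-12                     # equals the prior
```

Whichever way the evidence comes out, the probability-weighted average of your possible posteriors is exactly your prior: you cannot expect evidence to shift your confidence in a predictable direction.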

I can't claim to be "converted to rationality" or any particular school of thought by LessWrong, because most of the ideas in the sequences were not new to me when I read them, but it was extremely impressive and relieving to see them all written down in one place, and they would have made a huge impact on me if I'd read them growing up!

Replies from: multifoliaterose
comment by multifoliaterose · 2010-11-20T21:37:13.850Z · LW(p) · GW(p)

Politics as charity: that in terms of expected value, altruism is a reasonable motivator for voting (as opposed to common motivators like "wanting to be heard").

Yes, I was impressed by Carl's posting as well - I look forward to seeing his followup postings.

I can't claim to be "converted to rationality" or any particular school of thought by LessWrong, because most of the ideas in the sequences were not new to me when I read them, but it was extremely impressive and relieving to see them all written down in one place, and they would have made a huge impact on me if I'd read them growing up!

Same here :-).

Replies from: CarlShulman
comment by CarlShulman · 2010-11-24T17:03:54.749Z · LW(p) · GW(p)

Here's the followup.

comment by Vive-ut-Vivas · 2010-11-20T13:34:58.784Z · LW(p) · GW(p)

And many of the people in this community rub me the wrong way.

Yes, like you, for stealing my post idea! Kidding, obviously.

At the risk of contributing to this community becoming a bit too self-congratulatory, here are some of the more significant concepts that I've grokked from reading LW:

Most of all, LW has taught me that being the person that I want to be takes work. To actually effect any amount of change in the world requires understanding the way it really is, whether you're doing science or trying to understand your own personality flaws. Refusing to recognize said flaws doesn't make them go away, reality doesn't care about your ego, etc.

And apparently there was this Bayes guy who had a pretty useful theorem...

comment by komponisto · 2010-11-21T17:22:37.882Z · LW(p) · GW(p)

Interestingly, although reading the Sequences and other LW articles significantly affected my thinking style and general outlook over time, I've probably learned as much if not more from participating -- writing posts and comments, and receiving feedback.

...which feels strange to say, because I was skeptical in the beginning of the whole transition of Overcoming Bias into LW. For one thing, I didn't like the idea of having to "move". And I was highly suspicious of the karma system, because I was afraid of having my status numerically measured. I had been perfectly content to sit back and passively read Hanson and Yudkowsky posts, skim the comments, and only rarely chime in with a comment of my own when I thought it was particularly important.

But now, I think the interactive, community aspect of LW is probably its greatest feature.

Replies from: MartinB
comment by MartinB · 2010-11-22T20:13:20.656Z · LW(p) · GW(p)

But now, I think the interactive, community aspect of LW is probably its greatest feature.

It was pointed out by EY how the easier access to posting made some high-quality posters appear from behind their viewscreens.

comment by multifoliaterose · 2010-11-20T17:51:55.345Z · LW(p) · GW(p)

Great post!

My experience on Less Wrong has been that many of the top-voted articles initially have seemed sort of mundane and obvious, if mildly pleasant to read, but that returning to them and having them reverberate in my mind has been very helpful to me in framing the issues that come up in my day-to-day life. Over and over again I've had the experience of being subliminally aware of a given phenomenon discussed on Less Wrong, and found that reading a well-written explanation is very helpful in drawing the key issues at hand into focus.

  1. Eliezer's articles listed under Shut Up and Multiply helped me become more comfortable with expected utility theory. (Disclaimer: I do not fully agree with all points that he makes therein.)

  2. Yvain's The Trouble With Good and Missing the Trees for the Forest have been helpful to me in dispelling halo effects.

  3. I'm continually amazed by how relevant I find Yvain's Generalizing From One Example and Typical Mind and Politics to my own life and to understanding the thinking of others.

  4. I'm a very unusual person and have had little opportunity to meet people who I have a lot in common with in the past. On Less Wrong I've found some people who think in terms similar to the ones that I do and interacting with them has given me the opportunity to trade insights with them, about self-improvement, about the world at large and about interacting with more mainstream people.

  5. I think that people's willingness to engage with those who disagree with them is noticeably higher (on average) on Less Wrong than it is on most online forums. A major benefit that I've reaped from this is that I've learned more about communication with those who have different worldviews here than I would have had the chance to elsewhere. I describe a special case of this in the conclusion to my Reflections on a Personal Public Relations Failure: A Lesson in Communication posting.

comment by peuddO · 2010-11-27T18:55:37.165Z · LW(p) · GW(p)

I've learned that people significantly more knowledgeable and intelligent than me do exist, and not just as some mythical statistical entity at the fringes of what I'll realistically encounter in my everyday life.

The internet - and indeed communications technology in general - is beneficial like that, even if it takes some searching to find a suitable domain.

comment by Kevin · 2010-11-20T16:43:39.340Z · LW(p) · GW(p)

I have learned that philosophy remains a big unsolved problem where no one seems to have really gotten anywhere for a long time, yet concerted effort by determined smart people might lead to us answering some of the most important questions that have always plagued human philosophers. I have learned that solving philosophy (where philosophy includes questions like "what is human value?", "what is the nature of intelligence?", "what are the simple equations that unify the physical laws of our universe/multiverse?") is of importance on a mind-bogglingly cosmological level.

Keep on thinking, friends.

Replies from: ata
comment by ata · 2010-11-20T17:53:35.112Z · LW(p) · GW(p)

I have learned that philosophy remains a big unsolved problem where no one seems to have really gotten anywhere for a long time

I disagree. I think most of what has historically been considered "philosophy" has been solved at this point, it just doesn't seem that way because once we understand a philosophical problem well enough to solve it, it doesn't seem like a philosophical problem anymore. Usually it turns into a scientific problem, or an easy question of inference from scientific knowledge, thus losing its aura of respectable mysteriousness.

Replies from: Kevin
comment by Kevin · 2010-11-20T22:19:42.360Z · LW(p) · GW(p)

The difference between our beliefs is that I see philosophy as a superset of science. Just because "what is human value?" starts mapping to science doesn't mean it stops being philosophy.

I wasn't referring to historical philosophy. I was referring to the specific hard problems I listed, namely "what is human value?" which even though it decomposes to being a problem of science, still has much more of the philosophy problem nature than the science problem nature.

Anyway, this is a disagreement about the meaning of words only.

Whether you call a problem like "what is human value?" a science problem or a philosophy problem, it is still an important unsolved problem that via concerted effort we have a very real chance at solving.

comment by ChristianKl · 2010-11-25T18:26:27.000Z · LW(p) · GW(p)

Even when most people's beliefs are junk, you won't know before you've considered the belief in detail. You probably just increase the effect of confirmation bias when you reject beliefs without examining them.

Thinking outside your own set of beliefs is also good training.

Replies from: Perplexed, wedrifid
comment by Perplexed · 2010-11-30T00:34:18.408Z · LW(p) · GW(p)

You probably just increase the effect of confirmation bias when you reject beliefs without examining them.

I understood the strengthened confirmation bias as being in the person with 'junk' beliefs, and 'reject' to mean public rejection. Wedrifid apparently interpreted the person in danger of having confirmation bias strengthened as being 'you', and 'reject' to mean simply to not accept. Which did you intend?

Replies from: wedrifid
comment by wedrifid · 2010-11-30T00:41:14.993Z · LW(p) · GW(p)

Wedrifid apparently interpreted the person in danger of having confirmation bias strengthened as being 'you', and 'reject' to mean simply to not accept. Which did you intend?

I made the first of those interpretations. With respect to 'reject' I would maintain my comment with either definition. (Although with acknowledgement that public rejections can be mere politics and barely relevant to beliefs and explorations thereof.)

comment by wedrifid · 2010-11-30T00:21:16.997Z · LW(p) · GW(p)

Even when most people's beliefs are junk, you won't know before you've considered the belief in detail. You probably just increase the effect of confirmation bias when you reject beliefs without examining them.

Of course, there is an opportunity cost associated with exploring any given belief. The prior probability of the belief and the potential benefits and costs associated with the topic determine whether or not it is worth investigating further. It is not confirmation bias to ignore ideas that have a low expected value of investigation. You simply leave your level of confidence unchanged.
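
That cost-benefit point can be sketched in a few lines; the function and numbers below are my own toy illustration, not anything wedrifid specified:

```python
# Rough expected-value-of-investigation rule: look closer only when the
# prior-weighted payoff of the belief being true outweighs the cost of looking.
def worth_investigating(prior: float, payoff_if_true: float, cost: float) -> bool:
    return prior * payoff_if_true > cost

print(worth_investigating(0.001, 10.0, 1.0))  # False: ignore, confidence unchanged
print(worth_investigating(0.3, 10.0, 1.0))    # True: worth a closer look
```

A fuller treatment would use the expected value of information rather than this crude threshold, but the shape of the decision is the same.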

Thinking outside your own set of beliefs is also good training.

That is one factor to consider. Another is "thinking outside your own set of beliefs can be fun".

comment by spriteless · 2010-11-23T13:21:41.272Z · LW(p) · GW(p)

I learned that humans are all very alike.

I learned that natural selection uses up diversity.

I learned some more graceful words and arguments for what I wanted than I had before. For instance, previously I explained that I think about religion logically because I used to be Catholic and we do that; now I can say that it is because logic is useful for thinking about everything, and tell people my backstory later if they ask.

I learned that emotion and rationality are not enemies. Vulcans we are not.

I learned that normally rational people will take sides in emotional name-calling once you blame them. Much like everyone else. (See most any mention of gender.)

comment by JamesAndrix · 2010-11-20T22:44:12.626Z · LW(p) · GW(p)

That it is possible to take confusing issues and write clearly about them.

That this may require sequences.

comment by Upset_Nerd · 2010-11-29T07:10:37.793Z · LW(p) · GW(p)

I've found out about PJ Eby's ideas, and even though I just recently managed to use them to make a substantial change, I'm pretty sure it's the largest positive change in my entire life so far.

Replies from: Louie
comment by Louie · 2010-11-29T07:47:07.001Z · LW(p) · GW(p)

Really? Which idea of his helped you make that substantial change? Maybe I should take another look at his stuff. I've tried reading him before but found it to be a mix of obvious life insights + harmfully wrong motivational advice.

Replies from: Upset_Nerd, pjeby
comment by Upset_Nerd · 2010-11-29T09:38:15.205Z · LW(p) · GW(p)

I'm a member of his group, so I've gotten personal assistance, but what I've done is basically first diagnose my problems using his so-called RMI technique, which I'm pretty sure he's mentioned several times here in the comments. It basically consists of sincerely questioning yourself about your problem and passively noticing what comes to mind, without trying to rationalize it away logically.

Through that technique I found out that I've unconsciously judged all my decisions in life for "goodness"; that is, I've constantly feared that I'll not be a good person if I make the wrong decisions. Unfortunately the number of rules for things which make me a bad person has been very large, so I've basically lived a passive lonely life waiting for someone to come and tell me what to do. One particularly frustrating thing has been that I've felt that I'm a bad person if I actually try to take control over my life, and that includes using PJ's methods. So for about six months I'd been completely clear on what my problem was and how to solve it, and believed on a rational level that it would work, while at the same time feeling completely uninterested in actually doing anything about it. The trigger for action was when my girlfriend broke up with me and I temporarily got into an emotional state where I felt that I had nothing to lose, and since I knew PJ's techniques I managed to use the opportunity to break the deadlock.

The specific technique I used is his so-called "rights work", which I also think he's mentioned here. You basically tell yourself that you have the right to feel X even if condition Y is true. The big one for me was when I hit upon the phrase: "I have the right to feel like a good person no matter what I do."

Realising that instantly made me start to cry what can best be described as tears of joy mixed with some anger and indignation. Then after a couple of minutes it was over and now I feel like a completely different person. Or rather, closer to the person I've always wanted to be but never felt I was allowed to be. For example, writing this answer has been trivial, whereas I've previously been a chronic lurker on all forums I frequent due to worrying about what everyone will think of my writings.

Replies from: pjeby, wedrifid, nikson
comment by pjeby · 2010-11-29T17:58:46.663Z · LW(p) · GW(p)

The specific technique I used is his so-called "rights work", which I also think he's mentioned here. You basically tell yourself that you have the right to feel X even if condition Y is true. The big one for me was when I hit upon the phrase: "I have the right to feel like a good person no matter what I do."

I think it's important to clarify here that the "rights" in this method are not directly about morality, but rather about access or ability, the way an ACL in a filesystem grants you the "right" to read a file.

IOW, it's a method used to counteract learned helplessness and restore your ability to control a portion of your mind, rather than a method of moral rationalization. ;-)
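
For readers unfamiliar with the analogy, a minimal sketch of an ACL in code may help; the rule names and data structure here are my own illustration, not part of pjeby's method:

```python
# "Rights" as access-control entries: a learned rule revokes permission to
# access a feeling under a condition; "rights work" restores that permission.
acl = {("feel_good", "took_initiative"): False}   # learned rule: access denied

def has_right(feeling: str, condition: str) -> bool:
    return acl.get((feeling, condition), True)    # default: access granted

acl[("feel_good", "took_initiative")] = True      # right restored
```

The point of the analogy is just that a "right" here is a permission bit, not a moral verdict.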

There are also four general categories of ACL: to desire, acquire, respond, and experience -- the D.A.R.E. rights -- and the one you described here is an E -- the right to experience the feeling of being a good person.

(You of course probably realize all this already from the workshops, but I can imagine what some people here are likely to say about the small bits you've just mentioned, so I'd like to nip that in the bud if possible.)

Unfortunately the number of rules for things which make me a bad person has been very large, so I've basically lived a passive lonely life waiting for someone to come and tell me what to do.

Yeah, that's the essential insight of rights work, which is that the rules we learn for which emotions to have are not symmetrical. That is, a rule that says "X makes you a bad person" does NOT automatically imply to your (emotional/near) brain that the opposite of X makes you a good person. It only tells your brain to rescind your (access) right to feeling good when condition X occurs.

Btw, feeling like a "good person" is normally an Affiliation-category need; it's not about judging yourself good per se, but rather, whether other people will consider you likable, lovable, and a good/worthy ally.

(Again, I know you know this, because you already mentioned it on the Guild forum, but for the benefit of others, I figure I should add the clarifications.)

Affiliation, of course, being the second of the S.A.S.S. need groups - Significance, Affiliation, Stability, and Stimulation. (Based on feedback here, and more recent personal experiences, I've renamed Status and Safety to better cover the true scope of those groups.)

Anyway, if you multiply DARE by SASS, you get a sixteen-element search grid within which the access rights to X can be sought and restored (relative to a given condition Y) -- assuming you have the necessary skill at RMI.
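
The grid itself is easy to render concretely; a minimal sketch, with the category names taken from the comment and the code structure being my own illustration:

```python
# The 4x4 DARE x SASS search grid: sixteen (right, need) cells to check
# during RMI when looking for a revoked access right.
dare = ["desire", "acquire", "respond", "experience"]
sass = ["significance", "affiliation", "stability", "stimulation"]

grid = [(d, s) for d in dare for s in sass]
assert len(grid) == 16
# e.g. ("experience", "affiliation") covers "feeling like a good person"
```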

It is not really a "system", however, in the way that so many gurus claim their acronyms and formulations to be. That is, I do not claim DARE and SASS are natural divisions that actually exist in the world; they are only a convenient mnemonic to create a search grid that can be overlaid on the territory, without claiming that they are an accurate map of that territory.

And if you search using only that grid, then of course you will only find the things that are already within it... and the fact that I've tweaked the names of two of the SASS categories already shows that there may be other things that still lie outside our current search grid. Nonetheless, having some search grid is better than none at all.

(Tony Robbins, for what it's worth, claims that there are two additional categories that should belong on the SASS dimension of this grid; he may be right in a general sense, but I have not really found them to be useful/relevant for fixing learned helplessness.)

One last point, which again is intended for bystanders rather than you, U.N., is that merely saying the words "I have the right" has no particular consequence. It is not a magical incantation like "wingardium leviosa"!

It is merely the expression of a realization that you already have that right, the forehead-slapping epiphany that really, you were wearing the magic shoes this whole time, and could have gone back to Kansas at any moment up till now, and just didn't notice.

And this realization cannot be faked or brought about by a mere ritual; the function of the DARE/SASS search grid is merely to help you find that within yourself that you haven't been noticing you were even capable of. That's why, when it works, as in U.N.'s case here, the result can often be... intense.

But it's also why you should not be fooled, by reading U.N.'s comments or mine, into thinking that this is a simple matter of following a grid and making the appropriate incantations. It is a search process, not a quick fix technique.

And the process of your search is hindered by the nature of your own blind spots: U.N. mentions his meta-akrasia here, but there are subtler forms of complexity that can arise from this basic pattern. For example, one may believe that feeling you're a good person makes you a bad person... and in order to fix that, you have to remove the second rule first.

(Otherwise, what happens is that your attempted right statement sort of fizzles like a mis-cast spell... you say, "I have the right to X..." and your brain goes, "Yeah right," or, "maybe, but I'm not gonna DO that.... 'cause then I'd be bad.")

Anyway, I won't say, "don't try this at home," because really, you should. ;-)

But you should know that it is not a trivial process, and if done correctly it will bring you face to face with your own mental blind spots... by which I mean, things you do not want to know about yourself.

(For example, one thing that often happens is that, in the process of restoring a right, you realize that you are actually going to have to give up your righteous judgment of some group of people who you previously felt yourself superior to, because that judgment depends on one of the SASS rules that you are about to give up... and both the realization that you have been misjudging those people, and the realization that you still don't really want to give up that judgment, can be painful.)

Anyway... it's fun stuff... but not necessarily while you're doing it, if you get my drift. ;-)

Replies from: wedrifid
comment by wedrifid · 2010-11-30T00:12:28.494Z · LW(p) · GW(p)

You of course probably realize all this already from the workshops, but I can imagine what some people here are likely to say about the small bits you've just mentioned, so I'd like to nip that in the bud if possible.

Thank you. If someone had the gall to moralize at someone who had just broken free from the 'goodness' cage, I would have been displeased, to put it mildly.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2010-12-01T06:31:01.914Z · LW(p) · GW(p)

Thank you. If someone had the gall to moralize at someone who had just broken free from the 'goodness' cage, I would have been displeased, to put it mildly.

On the other hand, without further context "I have the right to feel like a good person no matter what I do" is a dangerous thing to internalize. In fact I suspect that rules like "If I want to murder someone, I should feel like a bad person" exist in the brain using the same mechanism. Obviously this is one rule you shouldn't get rid of.

Replies from: Upset_Nerd, pjeby, David_Gerard, wedrifid
comment by Upset_Nerd · 2010-12-01T09:30:32.896Z · LW(p) · GW(p)

This sounds very similar to the argument against atheism where the believer is afraid that he might start to do a whole bunch of horrible things if he no longer fears punishment from God.

What I've noticed in my case is that yes, I now do think I could feel like a good person even if I do bad things to others. However, I now genuinely don't want to hurt other people. In a way it feels like this is the first time in my life where I'm actually able to really care for and empathise with other people since I no longer have to be so preoccupied with myself.

Replies from: pjeby
comment by pjeby · 2010-12-01T14:29:06.534Z · LW(p) · GW(p)

What I've noticed in my case is that yes, I now do think I could feel like a good person even if I do bad things to others. However, I now genuinely don't want to hurt other people.

Yep. Motivation is not symmetric.

What used to boggle my mind about this is how it could be that our brains are built in such a way as to seemingly automatically believe that motivation is symmetric, even though it isn't.

My working hypothesis is that the part of our brain that predicts other minds -- i.e. our built-in Theory Of Mind -- uses a symmetric model for simplicity's sake (i.e., it's easier to evolve, and "good enough" for most purposes), and that we use this model to try to predict our own future behavior when anticipating self-modification.

comment by pjeby · 2010-12-01T14:49:32.668Z · LW(p) · GW(p)

On the other hand, without further context "I have the right to feel like a good person no matter what I do" is a dangerous thing to internalize.

Not really. Our experiences indicate that the brain's ACL system matches rules by specificity. A blanket rule change like this one will only remove the specific generalizations matched during the retrieval process, not any broader or narrower rules. (This is implied by memory reconsolidation theory, btw.)

In fact I suspect that rules like "If I want to murder someone, I should feel like a bad person" exist in the brain using the same mechanism. Obviously this is one rule you shouldn't get rid of.

Actually, funny you should mention, because that's an ill-specified rule right there, and it's precisely the sort I would say you ought to get rid of!

Why? Because you said "if I want to murder someone". Merely wanting something bad doesn't make you a bad person. Who hasn't wanted to murder somebody, at some point in their life?

If the rule you state ("If I want to murder someone, I should feel like a bad person") were a genuine SASS rule that you'd internalized, then every time you got mad enough at somebody, you'd suppress the anger... and keep right on feeling it. Most likely, you'd have people or situations you'd avoid because you'd feel chronically stressed around them -- vaguely angry and disappointed in yourself at the same time.

Usually, though, unless you actually said you wanted to murder somebody when you were a kid, and shocked an adult into shaming you for being bad, you probably don't have an explicit SASS rule against wanting to murder people, and don't actually need one in order to avoid actually murdering people. ;-)

Negative SASS rules are compulsions that override reflective thinking and outcome anticipation; they hijack logical thought processes and direct them into motivated reasoning. Oddly enough, positive SASS rules don't seem to have the same degree of power... although it occurs to me that perhaps my current model is flawed in this description of "positive" and "negative" -- better words might be "surplus" and "deficit".

(That is, if your brain thinks a desired positive SASS quality is scarce, you can be just as compulsive in acquiring it, as you can be compulsive in avoiding things with negative SASS. However, the rules themselves seem to influence what levels are perceived as surplus or deficit, so there's a bit of recursion involved.)

comment by David_Gerard · 2010-12-01T09:00:28.481Z · LW(p) · GW(p)

Yes, it was only with pjeby's explanation that I realised "I have the right to" in this context actually means "I am not denied the right to" -- I am not barred by an access control list -- rather than "I am justified in". Like "pride" meaning "not ashamed".

I have known too many people who do in fact use it to mean "I am automatically justified in feeling great about myself, therefore you should not criticise my behaviour." This suggests the ambiguity in wording may be problematic. (On the other hand, I suspect the process is that the conclusion is assumed and then arguments are found to justify it, so the wording may make little difference.)

Replies from: pjeby
comment by pjeby · 2010-12-01T14:53:42.971Z · LW(p) · GW(p)

Like "pride" meaning "not ashamed".

There's that, but there's also the ability to feel pride. "I have the right to feel proud when I make a mistake" means that you can be proud that you tried.

You will notice, though, that this rights stuff tends to be very controversial, in that everybody on first encountering it will tend to start listing the exceptions they think should be made, i.e., the access rights that should never be granted.

Usually (though not always), that list of exceptions is effectively an excerpt from the list of rules that are keeping them from succeeding at whatever prompted them to seek out my help in the first place. ;-)

comment by wedrifid · 2010-12-01T13:48:49.353Z · LW(p) · GW(p)

You may control another's behaviour but you never have the right to control another person's feelings.

comment by wedrifid · 2010-11-30T00:08:40.074Z · LW(p) · GW(p)

I must congratulate you. Trauma of some kind seems to be required for significant rapid changes to identity (and so behavior). You seem to have harnessed a negative, undesired trauma and executed a positive, considered change. That sort of navigation of human psychological quirks always impresses me.

Replies from: Upset_Nerd
comment by Upset_Nerd · 2010-11-30T06:50:44.872Z · LW(p) · GW(p)

Thanks :-)

And I agree in that I don't think I could have made this change without any kind of dramatic incident; I'm pretty sure that it would never have happened on its own since my behaviour was stuck in a kind of stable equilibrium.

I suspect, though, that another person could have triggered the change in me by kind of forcing me through this process and not relenting even if I tried to make them stop. I imagine that, when I then felt completely exposed, they could give me the basic thing I've always feared I don't have, and finally support me in realizing that I can give it to myself. This probably has to be done in person, though, so you can't easily get away.

The big problem is of course that if you're the person who's trying to help, you have a huge responsibility for actually diagnosing the other person's problems correctly. Since it unavoidably is a traumatic process I can imagine how horrible it must feel if the person who forced you to completely expose yourself turned out to completely misunderstand what you actually feared.

Replies from: wedrifid
comment by wedrifid · 2010-11-30T07:59:04.321Z · LW(p) · GW(p)

Since it unavoidably is a traumatic process I can imagine how horrible it must feel if the person who forced you to completely expose yourself turned out to completely misunderstand what you actually feared.

Being misunderstood is annoying all right, for some more than others. I find that it mostly makes me inclined to disengage -- unless, of course, the misunderstander is maintaining active engagement with new information that I provide.

I'm curious: how long has your newfound identity lasted? Weeks or months? I got the 'months' impression.

Replies from: Upset_Nerd, Upset_Nerd
comment by Upset_Nerd · 2010-11-30T08:33:58.910Z · LW(p) · GW(p)

I actually just started to get my new identity at the end of last week. And the big realization that I'm allowed to feel like a good/likeable/worthwhile person no matter the circumstances was made just about 50 hours ago.

The reason you might get the impression that I've had it for a longer time is that for many months I've been pretty clear on what my new identity would be like on a rational level. I'd been expecting many of my new behaviours to turn out as they now have, for example. The big difference is that now I finally get to know what it feels like to have this new identity, and of course, that I'm able to implement it in practice. :-)

comment by Upset_Nerd · 2010-12-01T05:39:01.266Z · LW(p) · GW(p)

Just wanted to add that I also felt very inclined to disengage with PJ on many occasions, something which I also did for long periods. That feeling was the very thing that kept me stuck and unable to make a change.

Now from my new vantage point I can see what was going on. The crucial part was my rule that in effect said I should start to feel like a bad person as soon as I started thinking about taking a major initiative on my own. It made me feel uncomfortable, and I unconsciously felt an urge to find some kind of authority figure with whom I could check the decision, to find out if it was okay to do.

So when PJ told me to give myself these rights, my brain automatically interpreted it as a major initiative, and therefore as a demand to do something bad. I started dragging my feet and coming up with a whole bunch of bogus rationalizations for why I couldn't follow his request, and when he didn't buy them and simply insisted that I do the technique, I instead started to feel kind of resentful and angry that he wouldn't listen to me or understand me. Sometimes I even started to feel a personal dislike towards him, since my brain automatically jumped to the conclusion that since he was insisting that I do something that would make me feel bad, he obviously didn't care about me and thought I was a bad person who deserved to feel bad.

Now I tried my best to constantly reflect on and rationally analyze these emotions when they came up, but I can tell you that it's extremely hard to do when you're engulfed by them. I remember that often when I started to feel angry and frustrated I tried to ask myself something like: "Is this feeling actually justified? Isn't this just what you'd expect to feel based on your understanding of this process?"

Unfortunately if I'd fallen too deep into the emotion, the answer I often got back was a kind of childish answer that stopped me from going further. "But I'm angry with him! I don't want to let him get away with a bunch of unreasonable and uncaring demands!"

Replies from: pjeby, wedrifid
comment by pjeby · 2010-12-01T14:13:19.283Z · LW(p) · GW(p)

Btw, it'd be awesome if you shared this comment on the Guild forum as well, and I would like to be able to use it in future training materials.

I mean, sure, I tell people that this kind of thing is going to happen, but it's easier to absorb hearing it from somebody else.

comment by wedrifid · 2010-12-01T13:42:16.660Z · LW(p) · GW(p)

Just wanted to add that I also felt very inclined to disengage with PJ on many occasions, something which I also did for long periods.

I've disengaged with PJ from time to time but never when he's been giving advice. I suspect it is a different scenario. :P

comment by nikson · 2010-12-05T22:58:08.705Z · LW(p) · GW(p)

Very strange, Upset_Nerd. I have been living my life more or less the same way as you have. When I read your post it sent chills down my spine. I thought I was the only one. Now we are two of a kind. :)

Replies from: Upset_Nerd
comment by Upset_Nerd · 2010-12-09T02:46:05.004Z · LW(p) · GW(p)

I guess that our situation isn't that uncommon, unfortunately. I hope you'll also be able to improve your mind state similar to what I've done. I recommend reading PJ Eby's comments here on Less Wrong since he's mentioned a large number of his important ideas in them. You can also PM me if you'd like.

Replies from: pjeby
comment by pjeby · 2010-12-09T03:19:52.708Z · LW(p) · GW(p)

I guess that our situation isn't that uncommon unfortunately.

It's ridiculously common, actually. In the next Guild newsletter I've written about the impact of social signaling emotions on our motivation, and the unintended consequences of same in our non-evolutionary environment -- where we're all basically the tribal chieftains or feudal lords of our lives, even though we were mostly raised to be serfs.

(I'll probably do an LW post at some point on this same topic, though with less how-to and personal stories. But first I gotta finish the training CD... which incidentally discusses how to apply the Litanies of Gendlin and Tarski to motivational issues. Fun stuff, having a little Guild in my LW and a little LW in the Guild. ;-) )

comment by pjeby · 2010-11-29T23:46:47.692Z · LW(p) · GW(p)

I've tried reading him before but found it to be a mix of obvious life insights + harmfully wrong motivational advice.

Just out of curiosity, which motivational advice did you consider wrong, and why?

Replies from: Louie, wedrifid
comment by Louie · 2010-11-30T08:03:42.108Z · LW(p) · GW(p)

Everything related to the "don't use willpower" idea.

It's the kind of advice that sounds just reasonable enough for someone desperate to try. But then when it comes time to actually develop a new habit (the way real people avoid needing willpower in the long run), they will be unable to get through the first week.

I agree that being on life-hating auto-pilot and just continuing to push is an awful way to go through life. But if you're not there, waiting until all your internal sub-agents align with your goals is the perfect strategy for high motivation, low productivity, and no success.

Replies from: pjeby, wedrifid
comment by pjeby · 2010-11-30T15:47:37.081Z · LW(p) · GW(p)

I agree that being on life-hating auto-pilot and just continuing to push is an awful way to go through life.

Right. The point is, if whatever you call "willpower" isn't working for you now, doing more of it is not likely to produce any better results. (Definition of insanity, and all that.)

But then when it comes time to actually develop a new habit (the way real people avoid needing willpower in the long run), they will be unable to get through the first week.

The problem with your hypothesis here is that there are two very different ways to build a habit that can be described as using "willpower"... but the one that actually works is really a special kind of pre-commitment, and isn't willpower at all.

In the less-useful way, somebody simply "decides" that they're going to build this habit, and they attempt to deal with conflicts as they come up. So, they haven't, for example, already decided that if they don't feel like exercising, they're still going to do it. Instead, at the point of precommitment, they simply assume they're still going to feel the same way about their decision all week.

And that's what I'm referring to as using willpower: attempting to override conflicts on-the-fly by pushing through them.

The type of precommitment that works, OTOH, (and this is backed by at least one study that I know of) is to identify in advance what kinds of obstacles you're likely to face, imagining them in experiential detail, and preparing for how to handle them.

People who take this approach more-or-less automatically (i.e. without having explicitly been taught or told to do so) are likely to still describe this as "willpower" or "gutting it out" or, "you just have to decide/make up your mind", or any number of other descriptions that sound like they're the same thing as using raw willpower to override conflicts as they come up.

comment by wedrifid · 2010-11-30T08:27:47.895Z · LW(p) · GW(p)

Good answer. I don't agree with it, but it is a good answer all the same. I disagree only inasmuch as I would describe PJ's suggestions somewhat differently: "use willpower wisely" instead of as a tool for self-flagellation, and definitely no waiting.

comment by wedrifid · 2010-11-30T00:01:37.782Z · LW(p) · GW(p)

I'm curious too. "Harmfully wrong motivational advice" seems rather drastic.

comment by Caspian · 2010-11-24T22:34:48.360Z · LW(p) · GW(p)

I learned that meditation can be fun, and there are instructions available.

I learned that trying to get an exact definition of a term can be futile, since the meaning in one's mind is structured more like a simple artificial neural network than like the expected kind of verbal definition. Examples: "what is science fiction", "what is a fish".
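
A toy illustration of that second point (every feature and weight below is invented for this sketch, not taken from anywhere): if "fish" is stored as a weighted feature detector rather than a verbal definition, membership naturally comes in degrees, which is why edge cases feel fuzzy instead of cleanly decidable.

```python
import math

# Illustrative, made-up weights -- a stand-in for whatever the brain learned
weights = {"has_fins": 2.0, "lives_in_water": 1.5, "has_gills": 2.5,
           "is_warm_blooded": -4.0}

def fishiness(features):
    """Graded category membership: weighted evidence squashed into (0, 1)."""
    score = sum(weights.get(f, 0.0) for f in features)
    return 1 / (1 + math.exp(-score))

print(fishiness({"has_fins", "lives_in_water", "has_gills"}))        # ~1.0, a trout
print(fishiness({"has_fins", "lives_in_water", "is_warm_blooded"}))  # ~0.38, a dolphin
```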

comment by XiXiDu · 2010-11-22T11:24:28.905Z · LW(p) · GW(p)

The site states that Less Wrong is a blog devoted to refining the art of rationality. Rationality is about winning, and you and I and the rest of humanity can only win if we are able to solve the problem of provably Friendly AI. What I have learnt is that one should take risks from artificial intelligence seriously, and I still believe that this is the most important message Less Wrong is able to convey.

Why shouldn't the discussion of risks posed by AI be a central part of this community? If risks from artificial intelligence constitute the most dangerous existential risk that we face, how is it not rational to inquire about them and try to improve how this risk is communicated to outsiders?

“The primary thing when you take a sword in your hands is your intention to cut the enemy, whatever the means. Whenever you parry, hit, spring, strike or touch the enemy’s cutting sword, you must cut the enemy in the same movement. It is essential to attain this. If you think only of hitting, springing, striking or touching the enemy, you will not be able actually to cut him. More than anything, you must be thinking of carrying your movement through to cutting him.” via Twelve Virtues of Rationality

This is what I learnt: that the enemy is unfriendly AI.

comment by RobinHanson · 2010-11-22T03:58:13.171Z · LW(p) · GW(p)

"Most peoples' beliefs aren’t worth considering ... dropping the habit of seriously considering all others’ improper beliefs that don’t tell me what to anticipate and are only there for sounding interesting or smart."

Seems you assume that most peoples' beliefs are "improper." Did LW offer you evidence for that conclusion? And don't you also need to assume you have a way to generate beliefs that is substantially better at avoiding the desire to sound interesting or smart?

Replies from: Louie, jsalvatier
comment by Louie · 2010-11-25T04:03:56.964Z · LW(p) · GW(p)

Seems you assume that most peoples' beliefs are "improper." Did LW offer you evidence for that conclusion?

Most of my evidence for this comes from my own observations. It's pretty easy to see just from looking at how people's lives end up that almost no one can make sound decisions over the time-frame of years. My working hypothesis is that most people can make what looks like an approximation to rational decisions on the order of hours or days in situations where there's enough at stake for them in the short term. But the errors compound over time, and people carry the scars of their worst failures (e.g. religion, drug abuse, bigotry, poverty, life-threatening obesity, illiteracy, etc.).

What Less Wrong helped me realize was that the problem was even worse for abstract reasoning. Almost no one can reason through abstract inference chains longer than 2 or 3 steps, so if people don't have concepts big enough (or small enough) to explain everything with 1 or 2 steps of inference, they can never tell truth from falsehood in those domains. I think this is a big reason for the "10 year rule" for becoming an expert in any field. It takes that long to cash out all the mental boxes at the right size, so that experts without natural inference ability (most?) can turn everything into (obvious) 1-step inferences.

The other thing that Less Wrong taught me was that even of those with the ability to reason abstractly, most don't feel bound to accept conclusions that follow from the premises they believe unless they like the conclusions they get. "Everyone is entitled to their own opinion" is an improvement on "Everyone is entitled to be Catholic", but it's a shame that the aftermath of religion accidentally turned being inconsistent into such a cherished personal freedom in our society. So that's a big problem.

And of the few people left who can and do use reason and aren't egregiously inconsistent, most are so unprepared to correct for detectable (and correctable) human biases that they can't reliably reach sound conclusions on an abstract topic anyway even if they do have 10 years of thought put into a field. So where previously, I thought there was something akin to a sizable group of experts who were more or less "above the system" and could look down on problems from a higher level than me and just inevitably get the correct answers for correct reasons, I now accept the less magical (and obvious in retrospect) belief that scientists and other thinkers are inside the system too. And because of the reasons I mentioned above combined with lots of predictably biased behavior, most scientists / thinkers are "worse than noise" in terms of their contribution to the progress of human thought.

And don't you also need to assume you have a way to generate beliefs that is substantially better at avoiding the desire to sound interesting or smart?

Intellectually engage more with people who are at least trying to use reason. You know, instead of 4chan or the people in my real life.

comment by jsalvatier · 2010-11-23T05:31:23.727Z · LW(p) · GW(p)

I am not sure LW offers evidence for that conclusion, and I don't think that conclusion is correct. Caplan's "rational irrationality" (link) gives evidence (in the form of theory) for the narrower idea that beliefs which don't have strong causal influences on people's lives and do offer psychological rewards are suspect.

comment by shokwave · 2010-11-21T10:08:34.150Z · LW(p) · GW(p)

The most important thing I learned from LessWrong is that my brain isn't always right.

This was a huge thing for me.

I already had the reductionist viewpoint, that I was just a brain. But I only had a part of it. I basically presumed that my thought processes were right: they couldn't be wrong, since if they were wrong, correcting them was merely a matter of changing some of the biological structures and firing patterns. But since I was that structure and those patterns, the 'corrected' version wouldn't be me; it would be someone else. The way I was, was the only way I could be, if I wanted to be me. So I had what you might call biological relativism.

The sequences' focus on biases let me realise that wasn't the case.

Replies from: timtyler
comment by timtyler · 2010-11-21T16:37:28.355Z · LW(p) · GW(p)

That sounds pretty strange! Were there adverse social effects, I wonder?

Replies from: shokwave
comment by shokwave · 2010-11-21T16:55:27.252Z · LW(p) · GW(p)

Social effects were actually positive: although I disagreed with people and could sometimes spot why they were wrong, I didn't voice my opinion. I made exceptions in the case of genuine mistakes, things like doing your math wrong, but if someone wanted to believe something, I didn't feel like I should take that away from them if it meant taking away part of their biological identity. So, during that time, I was noticeably easy-going and agreeable.

In terms of adverse effects, there were really only mental ones, in that I wasn't correcting my mistakes. It wasn't as literal a belief as I make it sound - it was more like belief in belief and such. Most of the time I think I rounded it off to the cached wisdom of respecting others' opinions.

comment by Manfred · 2010-11-20T23:56:03.312Z · LW(p) · GW(p)

Not only is the free will problem solved, but it turns out it was easy.

Haha. Ha.

Although it is easy to resolve it to your own satisfaction, it is more difficult to resolve it to other peoples' satisfaction. Which suggests that there is a problem, at least if you want to avoid retreating to fully general counterarguments like "you disagree with me, so you must be irrational." A quote comes to mind here: "The first principle is that you must not fool yourself - and you are the easiest person to fool." - Richard Feynman.

A good resource for getting you to ask the right questions about free will might be Yvain's excellent post.

In general, though, thank you for summarizing what you've learned, it was interesting.

Replies from: nshepperd
comment by nshepperd · 2010-11-22T12:19:55.650Z · LW(p) · GW(p)

I'm confused as to what you mean by this. The link discusses dissolving the question; isn't that what Eliezer's solution did? It feels like the question has been dissolved, anyway.

Replies from: Manfred
comment by Manfred · 2010-11-22T22:16:01.199Z · LW(p) · GW(p)

Eliezer's solution is to say, to give it the strongest interpretation I can, "us being determined by physics doesn't make us not us. Therefore if we seemed to have free will before figuring out physics, we have free will with it too." This is like approaching the heap problem by saying "I know when it's a heap by looking at it, so there's no problem with saying (thing X) is a heap." Approaching the problem from "below" would be an argument like "a deterministic object like a billiard ball doesn't seem to have free will, so we don't either."

Like in the heap problem, there's a fundamental divide that wasn't addressed. Dissolving the problem should involve asking the question "what do we mean when we say 'free will'?", and trying to answer as well as Yvain did about disease.

It might be helpful to give away some of my thoughts (and probably someone else's): one thing free will means is "unpredictable." But there's no problem with having unpredictable objects in the real world, and not just by quantum-mechanical randomness, which doesn't seem much like free will. You can have objects where the quickest way to predict them is to just watch them run. Humans are such objects - there's no way to predict a human with 100% certainty except to watch them. Two pieces of metal can also make such an object, so obviously there are a few other parts of the definition of free will. But I think unpredictability is what a lot of people see missing in the real world (or, more philosophically, in a deterministic universe) that causes them to reject free will, so it's a good one to share.

EDIT: Apparently the unpredictable thing may have been thought of first by Daniel Dennett, though he seems to use it as a thing by itself rather than one part of a definition. Also, I edited the first paragraph slightly to better translate things into the heap problem.

Edit Two: If whoever downvoted simple stuff like this (or someone who wants to express objections in their stead) wants to reply, that would be nice of them.

Replies from: ArisKatsaris, wedrifid, NihilCredo
comment by ArisKatsaris · 2010-11-29T14:51:27.285Z · LW(p) · GW(p)

If unpredictability is part of free will, then I don't want free will.

I want to be governed by my own purposes - I don't want my behaviour to be random and unpredictable.

Replies from: Perplexed, Manfred, Vladimir_Nesov
comment by Perplexed · 2010-11-29T18:13:53.343Z · LW(p) · GW(p)

Even when playing Paper, Stone, Scissors?

I think that when the word 'unpredictable' is used, it is important to specify: unpredictable by whom?

Replies from: ArisKatsaris
comment by ArisKatsaris · 2010-11-30T10:01:29.941Z · LW(p) · GW(p)

In "Paper, Stone, Scissors," like in other contests and conflicts, (and same as in humour), you just need to be unpredicted, not really to be "unpredictable". True complete unpredictability is neither good humour ("Two men walk into a bar, then the moon exploded. Why aren't you laughing?"), nor good gaming ("My rocket-launcher defeats your paper, your stone and your scissors"), nor good storytelling ("The killer was this guy that had never appeared, and you could have never guessed at, and which were were never clued about").

Sure, it would be dull if everyone predicted everything everyone else did; but that's different to being capable of being predicted in the theoretical/philosophical sense that was being discussed -- in the sense of existing inside a deterministic universe, where we could theoretically predict other people's behaviours.

Replies from: Perplexed
comment by Perplexed · 2010-11-30T18:35:33.322Z · LW(p) · GW(p)

A good analysis.

What I am struggling with here is an intuition that the whole idea of unpredictability in "the theoretical/philosophical sense" is a bad, ill-formed idea. I know roughly what it means to have predictability as a two-place predicate: P(E, A) means that person A (a person equipped with the theory and empirical information that A has) is capable of predicting event E. Fine. But how do we turn that into a one-place predicate? Do we define:

  • P1(E) == Forall persons A . P(E,A)

or is it

  • P1(E) == Forall physically possible persons A . P(E,A)

or is it

  • P1(E) == For some hypothetical omniscient person A . P(E,A)

or is it something more complicated, involving light cones and levels of knowledge that are still supernatural.

The thing is, even if you are able to come up with a precise definition, my intuition makes me doubt that anything so contrived could be of any possible use in a philosophical enquiry.

comment by Manfred · 2010-11-29T17:41:17.283Z · LW(p) · GW(p)

You appear to be conflating random and unpredictable. A double pendulum is not random in the typical sense; its course is merely unknown. You can be governed by your own purposes and still be unpredictable to someone else, not in the sense that you go out of your way to defy all predictions, but in the sense that such predictions are never totally accurate - the fastest way to find out what a human will do with 100% accuracy is to watch them.
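
A standard toy example of that distinction (using the logistic map rather than an actual double pendulum, purely because it fits in a few lines -- it's chaotic in the same relevant sense): the rule is completely deterministic, yet a forecaster whose copy of the state is off by one part in a billion loses all predictive power within a few dozen steps, so the cheapest accurate "prediction" is just running the system.

```python
def run(x, steps, r=4.0):
    # the logistic map: deterministic, no randomness anywhere
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

x_true = 0.123456789
x_model = 0.123456788   # the predictor's copy of the state, off by 1e-9

for t in (5, 25, 50):
    print(t, abs(run(x_true, t) - run(x_model, t)))
# the gap grows from ~1e-8 to order 1: fully determined, yet unpredictable
# to anyone who lacks the exact state
```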

comment by Vladimir_Nesov · 2010-11-29T15:04:24.582Z · LW(p) · GW(p)

If unpredictability is part of free will, then I don't want free will.

This is logically rude. You must judge on the whole of the consequences, and accept or reject any argument only based on its validity, without singling out a particular detail.

comment by wedrifid · 2010-11-29T14:43:53.729Z · LW(p) · GW(p)

one thing free will means is "unpredictable."

No it doesn't. Fortunately. Otherwise my solution to Newcomb's problem would be "Forget the damn boxes. I'm hunting down Omega, killing him and freeing the will of every creature in the universe!"

Replies from: Manfred
comment by Manfred · 2010-11-29T17:36:15.262Z · LW(p) · GW(p)

Major depression time:

Omega could find something to say to you that you would disregard even though you knew it was a vitally important truth. Omega could tell Gandhi things that would make him kill someone. To Omega, you are as complicated as a game of billiards. If you asked Omega if you had free will, Omega would say "no," because games of billiards do not have free will. And Omega would be right, because Omega is always right.

Fortunately, Omega is unphysical.

But really, you're free to your definition of free will, so long as we're both just going by intuition. I don't want to commit the typical mind fallacy too hard, here. It's just that my intuition thinks that a creature that can be perfectly predicted and therefore manipulated by Omega doesn't feel free-willed.

Replies from: wedrifid
comment by wedrifid · 2010-11-29T23:40:23.344Z · LW(p) · GW(p)

I am not going by my intuition.

Replies from: Manfred
comment by Manfred · 2010-11-30T01:21:36.872Z · LW(p) · GW(p)

Because your argument from the implications for Newcomb's problem is so empirical :D

Replies from: wedrifid
comment by wedrifid · 2010-11-30T01:27:00.209Z · LW(p) · GW(p)

It is quite clearly deductive, not empirical.

Replies from: Manfred
comment by Manfred · 2010-11-30T01:29:27.262Z · LW(p) · GW(p)

What are your premises, and where did they come from?

Replies from: wedrifid
comment by wedrifid · 2010-11-30T01:48:54.107Z · LW(p) · GW(p)

The comment's parent and descriptions of Newcomb's Problem.

I don't think this line of questioning is serving you. You don't want to challenge the obvious logical implications of your 'unpredictable' partial definition. They are hard to deny but don't technically rule it out. Instead you want to question just where my own definition of 'Free Will' comes from if not my intuition. That, if followed through, would require appeals to authority, etc.

I would actually not argue too hard on the point of what the 'true' definition of Free Will is. The point that I do consider important is the assertion "If the concept Free Will requires unpredictability then it is stupid and pointless and should be discarded entirely". I already avoid the phrase myself by habit - it just confuses people.

Replies from: Manfred
comment by Manfred · 2010-11-30T02:17:24.416Z · LW(p) · GW(p)

I'm not particularly interested in serving myself, so that's alright. I would find it interesting if you followed through to where your definition of free will comes from. By "premises" I meant a more formal list, coming from tracing your logic.

I'm still finding this pretty interesting, in part because it's highlighting that I was prey to the typical mind fallacy. Apparently some other people don't find it at all problematic for free will if their life is written down ahead of time, and some people do! But I still don't know what these other people (yes, you!) do find problematic, or if they just avoid that thought.

A note: I thought this was obvious, but after some thought it may be good to mention anyhow. Killing Omega will not restore free will. Unless Omega is itself responsible for the structure of the universe - which is what my definition cares about.

comment by NihilCredo · 2010-11-29T11:25:48.078Z · LW(p) · GW(p)

Disclaimer 1: I didn't downvote your comment. Disclaimer 2: I have only quickly skimmed Eliezer's take on the free will question, since it includes part of the Quantum Physics sequence which I intend to read as a whole and without hurry. But I didn't spot anything that conflicted with my take on it, and I would be very surprised if that were the case since it's basically a matter of epistemic hygiene.

I think you're falling into the assumption that just because people use a term a lot, that term must have some unique value, even if its borders are fuzzy (hence your comparisons to "heap" and "disease"). But that is not always the case. Free will is supposed to describe an objective property of ourselves - either you have it or you don't, true or false tertium non datur - but is there any concept of how Universe[PeopleHaveFreeWill] and Universe[PeopleHaveNoFreeWill] would look different to us (or to anyone else, full brain scanner included)? No, there isn't. We cannot imagine the experience of a world where our HasFreeWill boolean variable has been flipped (whatever its value used to be!), any more than we can imagine the experience of a world where we are dead. As a predicate, "free will" is a complete and utter failure.

So where does the flatus vocis "free will" come from, then? (That question, which is more historical than philosophical, always has an answer, even if the term is a delusion that pretends to be a reality, e.g. "soul") Here's how I put it: "'Free will' means 'what decisional brain activity looks like from the inside'". That's where I spot the seed of meaningfulness in the term, and the less rigorous usage started when people tried to connect it to the difficulties of cosmology - at first God's puppeteering, and later the alienness of physics (I suppose I could say "Free will is an illusion of the self" if I didn't hate to sound like a street corner preacher). If you try the straight replacement, the usual statements and questions about free will generally appear to be either trivial or nonsensical - and yes, I'm aware that that doesn't prove anything on its own.

Replies from: Manfred
comment by Manfred · 2010-11-29T17:09:31.376Z · LW(p) · GW(p)

Ah, right. The good ol' "the only consistent meaning of 'free will' is 'what humans do'" approach.

However, I think that it IS possible to imagine how it matters if PeopleHaveFreeWill=false (though it's quite difficult to visualize it from inside - I can only imagine "toning down" the free will by eliminating certain desiderata). Imagine that Laplace's demon could exist, and it wrote down the story of your life in a book when you were born. Someone else could read the book and know exactly what you do next year. My intuition doesn't think this sounds like free will.

Or imagine a universe where all your decisions were completely random. That doesn't sound like free will either, right? But all your (note: my definition of "your," i.e. "the measured you") decisions are random, to the extent that a muon could come screaming out of the atmosphere and make your brain misfire at any time.

So if free will is really poorly defined (and it is), then the simple definition that makes sense is "what humans do;" importantly this definition agrees with our intuition that we have free will. However, if our intuition is allowed to speculate a bit more, we can think up scenarios where we might not have free will. But this contradicts the intuition from two sentences ago that we definitely have free will! What I am trying to demonstrate is that there is a problem after all, and it is in the murky way in which our intuition handles the question "does X have free will?" If the problem is really dealt with, we should end up understanding how our intuition works here, at least to a large degree. That's why I think Yvain's post is a good model.

New idea: Laplace's demon slasher movie: I know what you did next summer!

Replies from: NihilCredo
comment by NihilCredo · 2010-11-29T17:39:55.461Z · LW(p) · GW(p)

Someone else could read the book and know exactly what you do next year. My intuition doesn't think this sounds like free will. Or imagine a universe where all your decisions were completely random. That doesn't sound like free will either, right?

So, you suddenly realise you live in either of those universes and go "oh, well, I have no free will".

Does that imply anything for you? Do you start behaving any differently? Is there any practical conclusion that you would reach in both of those universes that you wouldn't in one where you had free will (which shouldn't exist since you ruled out both determinism and non-determinism, but we'll allow it since the lack of a counterfactual would also make free will meaningless)? Emphasis on 'both' - there are interesting consequences to determinism and non-determinism, but you need free will to be the discriminating factor for the concept to be worth existing.

(As a side note, my "intuitive answers" aren't the same as yours, but I won't bring them up since I'm arguing that everyone's "intuitive answers" are just non-answers to a non-question.)

Replies from: Manfred
comment by Manfred · 2010-11-29T18:04:27.024Z · LW(p) · GW(p)

Well, it would certainly shake up my morality a bit, which would then change my actions. My ideas of punishment and reward would become more utilitarian as I held people less "responsible" for doing good or bad, for example.

However, if you're asking "what would be different if you'd been living in that universe all along and never found out," I must admit I can't think of anything. Wait, never mind. "The Bell inequalities wouldn't be violated." Or "fermions wouldn't be identical particles." "Arithmetic would be inconsistent." But it's possible to imagine "just so" theories that would fit observations without having much free will. I wouldn't say a Boltzmann brain has free will in the second before it boils away into the plasma.

Still, I think Occam's razor helps rule that stuff out. I'll have to think about it more.

comment by Laoch · 2013-12-28T17:53:52.970Z · LW(p) · GW(p)

Right, I've read the solution sequence to "free will" and all I've managed to glean from it is that a) I'm physics, whose ontology I'm quite ignorant of, and b) free will is conceptually incoherent and needs dissolving. I certainly don't feel like or believe I have free will, or that I could influence the creation of FAI by desire, for example. Is there something Louis (me) is missing from the sequence that Louie isn't? I find the sequence too long and prosaic to fit in my head or make a visceral impact. Is there a more concise alternative, or even just an alternative, that would make Louis.belief == Louie.belief? I'm struggling, guys; please help.

comment by [deleted] · 2015-02-10T01:23:34.915Z · LW(p) · GW(p)

I will read more of this strange rationalist blog, but so far the blog does seem rather arrogant to me. Every other post claims to "solve" a traditional philosophical problem "easily". This post doesn't even bother to do that; it just states that the problems have been easily solved. What I have seen so far: the problem of induction, the problem of free will, what the correct metaethics is, an exact analysis of belief, a proof of metaphysical realism. I hope the writers of this blog are aware that countless people throughout history, with a wide variety of viewpoints, have presented supposedly "easy" solutions to these diverse problems. We tend to view the ones further back in history as philosophers, and the ones closer (from the 20th century onwards) as bad philosophers. Not always; there have been some serious modern attempts to easily solve philosophical problems. Nevertheless, this blog does come off as arrogant. As an outsider, I could give a lot more of my confidence to this blog if any of it had been published in an academic journal. I don't know if any has; please direct me if it has.

comment by AlephNeil · 2010-11-20T21:16:53.790Z · LW(p) · GW(p)

Number 4 is totally wrong.

"In order to be able to think up a hypothesis which has a significant chance of being correct, I must already possess a sufficient quantity of information" is obvious, following immediately from the mathematics of information. But that's emphatically not the same thing as "I obtain my hypothesis by applying a 'principle of induction' to generalize the data I have so far."

The way induction was supposed to work was that your observation statements served as the premises of a kind of inference. Just as one can use deductive logic to infer "Swan[1] is white" from "All swans are white", so one was supposed to be able to infer "All swans are (probably) white" from ("Swan[1] is white", ..., "Swan[N] is white") for sufficiently large N.

But there is no such thing as a "method of induction" which finds hypotheses for you. Consider those swans: In order to even write down the data we needed to have the concepts 'white' and 'swan', and something must have motivated us to look specifically at swans and note down specifically what colour they are. In other words, by the time we get round to actually applying our "method of induction" we must already have formed the very hypothesis that the method was supposed to return, or something close to it (like "all swans are some colour - perhaps white").

This becomes comical when we turn to GR:

Our raw, unprocessed 'sense data' comes streaming in: ("The precession of Mercury's perihelion looks exactly as if (long description of the mathematics of general relativity)", "The apparent position of this star as the sun moves in front of it changes in a manner that looks exactly as if (long description of the mathematics of general relativity)", "A clock aboard this high-flying plane runs slightly faster than one on earth, exactly as if (long description of general relativity)") ... and then, as if by magic, the Method Of Induction selects for us the appropriate hypothesis: (long description of the mathematics of general relativity).

Replies from: JamesAndrix, wedrifid
comment by JamesAndrix · 2010-11-22T01:51:40.868Z · LW(p) · GW(p)

No.

There must be a process to turn data into hypotheses. It may be that that process in our brains is biased towards dealing with things like animal colors, but even that came from evolution being handed raw data.

The thing to keep in mind is that all the intermediate sensory processing may also be part of the process of induction (or a biased version of it.) If the data is pre-selected, then that just means that much of the inductive work has already been done. The selection could not have happened otherwise.

A less biased, more raw system might systematically look for correlations between variables, including computed variables like "What species is this?", which can themselves be inferred from the raw data.

Doing this efficiently is a trick.
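
As a deliberately crude sketch of such a system (the variable names and the planted regularity below are made up for illustration): scan every pair of recorded variables and rank the correlations as candidate hypotheses, with no built-in interest in swans or colours.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 200
data = {
    "is_swan":  rng.integers(0, 2, n).astype(float),
    "altitude": rng.normal(size=n),
}
# plant one real regularity: colour tracks species, plus noise
data["is_white"] = data["is_swan"] + rng.normal(0, 0.3, n)

# raw induction: surface the strongest pairwise correlations as candidates
candidates = []
for a, b in itertools.combinations(data, 2):
    r = np.corrcoef(data[a], data[b])[0, 1]
    candidates.append((abs(r), f"{a} predicts {b} (r={r:+.2f})"))

for _, hypothesis in sorted(candidates, reverse=True):
    print(hypothesis)   # "is_swan predicts is_white" should come out on top
```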

comment by wedrifid · 2010-11-20T21:22:44.016Z · LW(p) · GW(p)

But there is no such thing as a "method of induction" which finds hypotheses for you.

Yes there is, although one must of course already have some kind of vocabulary within which to represent hypotheses. It is finding a hypothesis out of an infinite number of hypotheses that such a method is useful for.
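
A minimal sketch of what such a method can look like (the vocabulary here -- affine rules y = a*n + b over small integers -- is an arbitrary choice for illustration): enumerate hypotheses from simple to complex and return the first one consistent with the data, even though the hypothesis space is unbounded.

```python
def hypotheses():
    # enumerate rules n -> a*n + b in order of increasing complexity |a| + |b|
    for size in range(20):
        for a in range(-size, size + 1):
            b_mag = size - abs(a)
            for b in {b_mag, -b_mag}:
                yield (a, b), (lambda n, a=a, b=b: a * n + b)

def induce(data):
    # "induction": simplest hypothesis in the vocabulary that fits every datum
    for name, h in hypotheses():
        if all(h(x) == y for x, y in data):
            return name

print(induce([(1, 3), (2, 5), (3, 7)]))   # (2, 1): the rule y = 2n + 1
```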

Replies from: AlephNeil
comment by AlephNeil · 2010-11-20T21:29:34.409Z · LW(p) · GW(p)

No there isn't, because as I have illustrated above, an 'inductive inference' pointing to a hypothesis presupposes a set of data selectively chosen and written down in such a way that the hypothesis is already present.

I think you probably have something else in mind, perhaps "abductive inference" (i.e. "inference to the best explanation").

Replies from: wedrifid, anonym, wedrifid
comment by wedrifid · 2010-11-20T23:12:18.233Z · LW(p) · GW(p)

abductive inference

That's the kind of science aliens use.

comment by anonym · 2010-11-21T11:15:40.837Z · LW(p) · GW(p)

Yes, abductive inference or some form of analogical thinking is how powerful hypotheses are really generated. Neither of the posts linked to in number 4 above even mentions induction, so I'm not sure why the author thought they were evidence for the thesis.

comment by wedrifid · 2010-11-20T23:02:36.595Z · LW(p) · GW(p)

I speak of the sense used in "4", that which you were objecting to. Louie is right and you are wrong.

comment by Vaniver · 2010-11-20T22:25:36.455Z · LW(p) · GW(p)

I enjoyed the post (enough for a vote up!) but I find myself wishing it had stopped at #5.

6 is mostly correct but has significant edge cases (even if you subscribe to MWI, probabilities pop up when dealing with tiny things). Something like "Probabilities exist in minds" is a much more agreeable statement than "Probabilities don't exist elsewhere," and has the same framing benefits.

7 just flat out bothers me. Many Worlds is just an interpretation, a flavor -- it shares the exact same math with all other flavors of quantum mechanics. I agree with Eliezer that it's a far more agreeable flavor than Copenhagen -- but those aren't the only two flavors available. And if you are making predictions based on your flavor preferences, something went wrong somewhere. I cannot see how your tastes when it comes to QM should impact whether or not you sign up for cryonics with the currently existing firms offering cryonic services.

Replies from: JamesAndrix, wnoise, timtyler
comment by JamesAndrix · 2010-11-20T22:42:31.876Z · LW(p) · GW(p)

I didn't get the impression that MWI mattered to cryonics. The connection from the Quantum physics sequence to cryonics that I got was "This atom is essentially the same as that atom; replacing all your atoms wouldn't change 'you'." And related to that, that your atoms could be computer simulated and you'd still be you.

Replies from: Vaniver
comment by Vaniver · 2010-11-20T23:01:51.445Z · LW(p) · GW(p)

That's a very reasonable interpretation, but it's orthogonal to why I'm bothered.

If the argument is "my objection to cryonics was I wasn't convinced a remade me would be me, but as soon as I realized the configuration was important and not the pieces inside the configuration, that toppled my last objection," then I don't have an issue with that.

What it looked like to me was "I am convinced of Eliezer's viewpoint" instead of "I believe Eliezer's arguments are correct in the domain that they are argued." The linked argument that cryonics is reasonable is an argument that cryonics is possible, not an argument that signing up with Alcor or CI actually increases your likelihood of being awoken in the future. The linked argument is necessary but not sufficient for the action stated.

That came to the forefront of my mind because Eliezer's declaration that MWI is "correct" could mean two things -- either MWI is the single truest / best flavor of QM, which I do not think he is qualified to state, or MWI gives the right answers when you ask it relevant questions, just like Copenhagen. Eliezer can rightly say MWI is more satisfactory than Copenhagen, but when you go further and make plans based on multiverses that you would not make if you were just planning for a singular future, that is a giant red flag.

comment by wnoise · 2010-11-21T02:04:23.335Z · LW(p) · GW(p)

MWI agrees with Copenhagen in all currently reasonably accessible experimental regimes. But it is not just a flavor -- it allows for the possibility of "uncollapse" after an observation by delicate recoherence. (Though after such a demonstration the Copenhagenite could just say that the collapse was inferred too soon.)

Replies from: Vaniver, Nisan
comment by Vaniver · 2010-11-21T19:30:51.687Z · LW(p) · GW(p)

I agree that the question "Has this system collapsed?" is a bad question, and people shouldn't be interested in it. (That's the main reason I don't like Copenhagen; it invented that question and still considers it relevant.)

The real question is "if we set up a bunch of these systems in identical conditions, what distribution of results do we expect?". The reason I am not optimistic about MWI 'beating' Copenhagen with such an experiment is that any physical process that "uncollapses" an observation is readily understandable by both MWI and Copenhagen. The Copenhagenite would just say "well, the system collapsed here, and then you uncollapsed it there, and then you recollapsed it in this last place" and come up with the same answer for the final state as the MWI believer.

Replies from: topynate
comment by topynate · 2010-11-21T19:57:43.802Z · LW(p) · GW(p)

Wave function collapse deletes all but one component of the superposition. 'Uncollapse' would have to put the others back, in which case you have to keep track of those components, in which case you never deleted them in the first place, in which case no collapse occurred. So the Copenhagen interpretation can deal with uncollapse, so long as nothing ever collapses.

Replies from: Vaniver
comment by Vaniver · 2010-11-21T20:24:16.124Z · LW(p) · GW(p)

Illustration: if we use the example of the experiment here, the Copenhagenite would just point to the steps between measurement 2 and measurement 3 that reverse measurement 2 and say "look, to do this you need to put your subject in either |+> or |-> according to what's still in your memory, and so the collapse at measurement 2 is entirely separate from what the result of measurement 3 will be."

The only difference between the two physicists will be their vocabulary -- one will have the unfortunate word "collapse" and the other will have the unfortunate word "multiverse" -- but they'll agree on the final result.

Replies from: topynate
comment by topynate · 2010-11-21T23:41:49.054Z · LW(p) · GW(p)

OK, the example linked is defective, in that there are two different operations that get the same result when the machine reverses its x-axis measurement. The first is the time-reversal of the measurement operation; the second is the recreation of the state created by measurement 1. You seem to be saying that the Copenhagenite would assume the latter.

Here is a modification of the experiment that tests the idea of collapse more severely. Instead of preparing an electron in a |+z> or |-z> state, I prepare an entangled pair of electrons with opposite z-axis spin (a spin anti-correlated pair). I now give one electron to the machine intelligence, which measures its spin in the x-axis, and then applies the time-reversal of the measurement, restoring the electron's original state and erasing its memory of the x-axis state. It then passes the electron back to me, and I measure the two electrons' z-axis spins.

If the machine intelligence's measurement had caused a collapse, the anti-correlation would be erased. But in fact everything we know about quantum mechanics says that the electrons should remain anti-correlated.
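
For what it's worth, the no-collapse prediction is easy to check numerically in a toy model (a three-qubit state-vector sketch in numpy; treating the machine's x-basis "measurement" as a unitary record, per the setup above, is the modeling assumption here):

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I2 = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

# qubits: electron1 (mine), electron2 (the machine's), apparatus
singlet = (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2)
state = np.kron(singlet, ket0)

# x-basis "measurement" of electron2: rotate to the x basis, copy the result
# into the apparatus, rotate back -- a unitary record, not a collapse
H2 = np.kron(I2, np.kron(H, I2))          # Hadamard on electron2 only
CNOT2a = np.kron(I2, CNOT)                # electron2 controls the apparatus
U = H2 @ CNOT2a @ H2

restored = U.conj().T @ (U @ state)       # measure, then time-reverse it
print(np.allclose(restored, state))       # True: singlet + blank apparatus

# z-spin anti-correlation of the electrons survives intact
probs = np.abs(restored.reshape(2, 2, 2)) ** 2
print(probs.sum(axis=2))                  # zero probability of agreeing in z
```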

Replies from: Vaniver
comment by Vaniver · 2010-11-22T00:14:35.781Z · LW(p) · GW(p)

I now give one electron to the machine intelligence, which measures its spin in the x-axis, and then applies the time-reversal of the measurement, restoring the electron's original state and erasing its memory of the x-axis state.

I don't see why the Copenhagenite can't make the exact same objection here. Perhaps it would be clearer if you gave an example of how one would perform the time reversal of a measurement? If I have a z-spin-up electron, and I put it through a Stern-Gerlach device and find it is now an x-spin-up electron, how do I go back to a z-spin-up electron?

Replies from: topynate, wnoise
comment by topynate · 2010-11-22T00:44:56.878Z · LW(p) · GW(p)

The link doesn't make it explicit, but a reversible machine intelligence which can actually reverse a measurement is a quantum computer. In this context, a measurement occurs when the AI purposefully entangles its computing elements with the electron. The AI can now choose whether to let the information it gains leak out of it or not. Provided it does not allow the entanglement between the electron and the outside world to increase, it can choose to unentangle its state from that of the electron. In the simplest case, where it does not allow the rest of its mind to become entangled with the part of itself that it is using as a measurement apparatus, all it need do is run the inverse of the unitary transform that it used to entangle the apparatus with the electron. However, it can theoretically do quite a bit more. It can use the information in other computations, and then carefully carry out an operation that restores the original state of the electron and turns the results it obtains into superpositions.

Humans don't have such fine-grained control over where they shuffle quantum information, nor can they keep themselves from becoming entangled with their environment. Using macroscopic devices to register phosphorescence is right out.
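
The simplest version of that is short enough to write down (a two-qubit numpy sketch, with "electron" and "apparatus" each idealized as one qubit -- an assumption made for brevity): a CNOT plays the role of the entangling measurement, and since CNOT is its own inverse, applying it again un-measures.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

electron = (ket0 + ket1) / np.sqrt(2)   # superposition to be "measured"
state = np.kron(electron, ket0)         # apparatus starts blank

measured = CNOT @ state        # unitary "measurement": (|00> + |11>)/sqrt(2)
unmeasured = CNOT @ measured   # run the inverse of the entangling unitary

print(np.allclose(unmeasured, state))   # True: record erased, state restored
```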

Replies from: Vaniver
comment by Vaniver · 2010-11-22T01:08:30.452Z · LW(p) · GW(p)

It seems to me that this makes assumptions about entanglement and disentanglement which I find suspect (but I am not an expert on entanglement, so they may hold). It doesn't appear to be "choosing" to unentangle its state from the electron -- we're assuming that the information it generates through entanglement is not leaked to the outside world, and that the information can be thrown away and the system returned to where it was before. If it's making a choice, it seems that that choice would cause information leak.

If those assumptions hold, I don't see why they hold for just MWI. That is, I believe it may be possible to get to a final situation where you have your initial configuration despite the fact that your apparatus poked the system -- but I don't think that gives you any meaningful information differentiating the flavors of QM.

Replies from: wnoise
comment by wnoise · 2010-11-22T01:25:07.423Z · LW(p) · GW(p)

that the information can be thrown away

Quantum information cannot be thrown away. Nor can it be copied. Information is conserved. (Apart, perhaps, from Copenhagen collapse.) Information can be made difficult to retrieve by e.g. entanglement with the environment, specifically propagating modes that take it beyond your control, but it's still "in principle" there.

Replies from: Vaniver
comment by Vaniver · 2010-11-22T01:31:40.183Z · LW(p) · GW(p)

Is there a meaningful difference between "propagating modes that take it beyond your control" and "throwing it away"? In my mind, the first is a much longer restatement of the second, but I apologize that it was unclear. (Here, you're throwing it back into the electron, not the outside world, but the idea is the same from the computer's point of view.)

Replies from: wnoise
comment by wnoise · 2010-11-22T01:36:26.700Z · LW(p) · GW(p)

Yes, they have very different effects. Throwing it into the electron allows recoherence in principle. Throwing it into the environment makes that impossible.

comment by wnoise · 2010-11-22T01:33:58.956Z · LW(p) · GW(p)

As stated, you can't. In the MWI picture, you are split into two: one who has measured it x+, the other x-. Both must send it back to have it recohere, and they must at the same time erase their measurement of which way it went -- really anything that distinguishes those two branches. They can record the fact that they did measure it, as this is the same in the two branches.

A person obviously can't just "forget", and will at best leak information into the environment, encoded as correlations in the noise of heat. A (reversible) computer, on the other hand, works quite well for doing this.

comment by Nisan · 2010-11-21T04:25:46.711Z · LW(p) · GW(p)

Right, and after several such experiments it would become apparent that the Copenhagenite doesn't know how to predict when collapse happens.

comment by timtyler · 2010-11-21T14:52:19.803Z · LW(p) · GW(p)

Many Worlds is just an interpretation

It isn't, according to The Everett FAQ, Q16: "Is many-worlds (just) an interpretation?"

Replies from: Vaniver
comment by Vaniver · 2010-11-21T19:21:10.419Z · LW(p) · GW(p)

Have you read that and considered it convincing?

They use four supports, all of which collapse under examination (I don't number them the way they do, because they seem confused about what are separate supports):

  1. Though it makes the same predictions about our world as de Broglie-Bohm, they have different philosophical implications. Believe in something because of math, not philosophy.

They list three predictions made by MWI, all of which are already disproved or nonsense:

  1. If memory is reversible, it's not memory, because thermodynamic fluctuations make it unreliable. Beyond that confusion, the crux of this argument is whether or not a spin measurement can be reversed -- if so, it should work for any flavor, and not depend on whether or not you also erase what's in memory.

  2. Their discussion of quantum gravity serves to make MWI not more plausible, as it supposedly requires quantum gravity, while other flavors function whether gravity is quantum or classical.

  3. Their discussion of linearity is flat-out bizarre. Paraphrased: 'We're pretty damn sure that QM is linear, but if it weren't and MWI were true, aliens would have teleported to our dimension, and that hasn't happened yet.' Why they think that is evidence for MWI is beyond me -- using Bayesian logic, it strictly cannot increase the probability of MWI.

Replies from: nshepperd, timtyler
comment by nshepperd · 2010-11-22T01:50:05.979Z · LW(p) · GW(p)

I don't think the intention was to offer these as evidence for MWI. The evidence for MWI is that it has one less postulate (and therefore is "simpler"). They're just showing what MWI rules out. That these predictions are different correctly justifies saying "MWI is not just an interpretation".

comment by timtyler · 2010-11-21T19:40:03.459Z · LW(p) · GW(p)

It is best not to use quotation marks - unless you are actually quoting - or otherwise make it very clear what you are doing. The resulting self-sabotage is too dramatic.

I read that - and your incorrect comments about reversible memory - and concluded that you didn't know what you were talking about.

Replies from: Vaniver
comment by Vaniver · 2010-11-21T19:54:04.393Z · LW(p) · GW(p)

At your suggestion, I've revised my comment to make clear that I'm paraphrasing my interpretation of their comment instead of quoting it directly.

I was least sure about my reversible memory objection, and was considering placing a disclaimer on it; however, I feel I should stand by it unless given evidence that my understanding of information entropy is incorrect. My statement is in accord with Landauer's Principle, which I see is not known to be true (but is very strongly suspected to be). There appears to be a fundamental limit that their trend is bucking up against, and so I feel confident saying the trend will not continue as they need it to.

Even if we shelve the discussion of whether or not memory can be reversible, the other objection -- that any process which reverses a measurement can be understood by both MWI and Copenhagen -- demolishes the usefulness of such an experiment, as none of the testable predictions differ between the two interpretations.

Replies from: timtyler
comment by timtyler · 2010-11-21T21:59:20.253Z · LW(p) · GW(p)

If it helps, this seems relevant: http://en.wikipedia.org/wiki/Reversible_computing

Landauer's Principle doesn't seem particularly relevant - since in reversible computing there is no erasure of information.

Replies from: Vaniver
comment by Vaniver · 2010-11-21T23:26:53.765Z · LW(p) · GW(p)

I don't see the relevance -- the description of the experiment linked purports to hinge on the reversibility of information erasure. It sounds like both of us agree that's impossible.

(It actually hinges on whatever steps they take to 'reverse' the measurement they take, which is why it's not an effective experiment.)

Replies from: timtyler
comment by timtyler · 2010-11-22T18:34:37.283Z · LW(p) · GW(p)

It seems relevant to the comment that "if memory is reversible, it's not memory". Reversible computers have reversible memory.

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2010-11-22T19:33:48.870Z · LW(p) · GW(p)

Reversible computer designs that people actually consider building do a small bit of irreversible computation, copying the end results of the reversible computations into irreversible memory before rolling back the reversible computation. Perfectly reversible computations are a bit useless, since they erase their results when they start rolling backwards.

Replies from: wnoise, timtyler
comment by wnoise · 2010-11-22T20:01:30.312Z · LW(p) · GW(p)

You can erase some of their results without erasing others, of course.

comment by timtyler · 2010-11-22T19:41:30.124Z · LW(p) · GW(p)

Nobody says you have to run a reversible computer backwards.

A big part of the point is to digitise heat sinks and power management. For details about that, see here.

comment by timtyler · 2010-11-20T13:42:53.546Z · LW(p) · GW(p)

How constructive is:

  1. Beliefs are for controlling anticipation (Not for being interesting)

...? ...since beliefs do, in fact, serve all kinds of signalling purposes among humans.

Replies from: Vive-ut-Vivas
comment by Vive-ut-Vivas · 2010-11-20T13:45:52.537Z · LW(p) · GW(p)

It's probably useful at this point to differentiate between actual beliefs and signaled beliefs, particularly because if your beliefs control anticipation (and accurately!), you would know which beliefs you want to signal for social purposes.

Replies from: timtyler
comment by timtyler · 2010-11-20T14:12:31.667Z · LW(p) · GW(p)

...though it is also worth noting that humans are evolved to be reasonable lie-detectors.

If your actual beliefs don't match your signalled beliefs, others may pick up on that, expose you as a liar, and punish you.

Replies from: saturn, Vive-ut-Vivas
comment by saturn · 2010-11-20T20:56:25.984Z · LW(p) · GW(p)

You can choose to think of signaling beliefs as lying, but that's not very helpful to anyone. It's what most people do naturally and therefore not a violation of anyone's expectations in most contexts. Maybe instead it should be called speaking Statusese.

People don't pick up on the literal truth of your statements but on your own belief that you are doing something wrong. For instance, writers of fiction aren't typically considered immoral liars.

Replies from: sark, timtyler, wedrifid
comment by sark · 2010-11-20T22:29:49.434Z · LW(p) · GW(p)

People will agree to fiction not being true, but not to their professed beliefs not being true.

comment by timtyler · 2010-11-20T21:15:00.661Z · LW(p) · GW(p)

Signalling beliefs that don't match your actual beliefs is what I said and meant.

Like claiming to be a vegan, and then eating spam.

Replies from: saturn
comment by saturn · 2010-11-20T21:26:15.024Z · LW(p) · GW(p)

If the whole world claims to be vegan and then eats spam, and moreover sees this as completely normal and expected, and sees people who don't do it as weird and untrustworthy, what exactly are you accomplishing by refusing to go along with it?

Replies from: sark, timtyler
comment by sark · 2010-11-20T22:33:18.955Z · LW(p) · GW(p)

Some of us have trouble keeping near and far modes separate. People like us, if we try professing veganism, will find ourselves ending up not eating spam.

My personal solution is to lie; I'm actually quite good at it!

comment by timtyler · 2010-11-20T21:32:58.815Z · LW(p) · GW(p)

What does that have to do with the topic? That was just an example of signalling beliefs that don't match your actual beliefs.

comment by wedrifid · 2010-11-20T21:03:21.092Z · LW(p) · GW(p)

One could as easily say that it isn't useful to consider lying from the viewpoint of morality.

comment by Vive-ut-Vivas · 2010-11-20T14:16:53.483Z · LW(p) · GW(p)

And ideally, you'd take that fact into account in forming your actual beliefs. I think it's pretty well-established here that having accurate beliefs shouldn't actually hurt you. It's not a good strategy to change your actual beliefs so that you can signal more effectively -- and it probably wouldn't work, anyway.

Replies from: timtyler, wedrifid
comment by timtyler · 2010-11-20T14:22:21.242Z · LW(p) · GW(p)

I think it's pretty well-established here that having accurate beliefs shouldn't actually hurt you.

Hmm: Information Hazards: A Typology of Potential Harms from Knowledge ...?

Replies from: Vive-ut-Vivas
comment by Vive-ut-Vivas · 2010-11-20T14:35:13.097Z · LW(p) · GW(p)

I haven't read that paper -- thanks for the link, I'll definitely do so -- but it seems that that's a separate issue from choosing which beliefs to have based on what it will do for your social status. Still, I would argue that limiting knowledge is only preferable in select cases -- not a good general rule to abide by, partial knowledge of biases and such notwithstanding.

comment by wedrifid · 2010-11-20T21:07:58.214Z · LW(p) · GW(p)

I think it's pretty well-established here that having accurate beliefs shouldn't actually hurt you.

Not at all. It is well established that having accurate beliefs should not hurt a perfect Bayesian intelligence. Believing that it applies to mere humans would be naive in the extreme.

It's not a good strategy to change your actual beliefs so that you can signal more effectively -- and it probably wouldn't work, anyway.

The fact that we are so damn good at it is evidence to the contrary!

Replies from: Vive-ut-Vivas
comment by Vive-ut-Vivas · 2010-11-20T21:36:52.458Z · LW(p) · GW(p)

I'm not understanding the disagreement here. I'll grant that imperfect knowledge can be harmful, but is anybody really going to argue that it isn't useful to try to have the most accurate map of the territory?

Replies from: wedrifid
comment by wedrifid · 2010-11-20T22:48:42.340Z · LW(p) · GW(p)

We are talking about signalling. So for most people, yes.