Rationality Quotes January 2013

post by katydee · 2013-01-02T17:23:36.506Z · LW · GW · Legacy · 604 comments

Happy New Year! Here's the latest and greatest installment of rationality quotes. Remember:

  • Please post all quotes separately, so that they can be voted up/down separately.  (If they are strongly related, reply to your own comments.  If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself
  • Do not quote comments/posts on LessWrong or Overcoming Bias
  • No more than 5 quotes per person per monthly thread, please

604 comments

Comments sorted by top scores.

comment by RolfAndreassen · 2013-01-01T21:25:47.267Z · LW(p) · GW(p)

"Ten thousand years' worth of sophistry doesn't vanish overnight," Margit observed dryly. "Every human culture had expended vast amounts of intellectual effort on the problem of coming to terms with death. Most religions had constructed elaborate lies about it, making it out to be something other than it was—though a few were dishonest about life, instead. But even most secular philosophies were warped by the need to pretend that death was for the best."

"It was the naturalistic fallacy at its most extreme—and its most transparent, but that didn't stop anyone. Since any child could tell you that death was meaningless, contingent, unjust, and abhorrent beyond words, it was a hallmark of sophistication to believe otherwise. Writers had consoled themselves for centuries with smug puritanical fables about immortals who'd long for death—who'd beg for death. It would have been too much to expect all those who were suddenly faced with the reality of its banishment to confess that they'd been whistling in the dark. And would-be moral philosophers—mostly those who'd experienced no greater inconvenience in their lives than a late train or a surly waiter—began wailing about the destruction of the human spirit by this hideous blight. We needed death and suffering, to put steel into our souls! Not horrible, horrible freedom and safety!"

-- Greg Egan, "Border Guards".

comment by NoisyEmpire · 2013-01-02T20:01:12.373Z · LW(p) · GW(p)

What does puzzle people – at least it used to puzzle me – is the fact that Christians regard faith… as a virtue. I used to ask how on Earth it can be a virtue – what is there moral or immoral about believing or not believing a set of statements? Obviously, I used to say, a sane man accepts or rejects any statement, not because he wants or does not want to, but because the evidence seems to him good or bad. If he were mistaken about the goodness or badness of the evidence, that would not mean he was a bad man, but only that he was not very clever. And if he thought the evidence bad but tried to force himself to believe in spite of it, that would be merely stupid…

What I did not see then – and a good many people do not see still – was this. I was assuming that if the human mind once accepts a thing as true it will automatically go on regarding it as true, until some real reason for reconsidering it turns up. In fact, I was assuming that the human mind is completely ruled by reason. But that is not so. For example, my reason is perfectly convinced by good evidence that anesthetics do not smother me and that properly trained surgeons do not start operating until I am unconscious. But that does not alter the fact that when they have me down on the table and clap their horrible mask over my face, a mere childish panic begins inside me. I start thinking I am going to choke, and I am afraid they will start cutting me up before I am properly under. In other words, I lose my faith in anesthetics. It is not reason that is taking away my faith; on the contrary, my faith is based on reason. It is my imagination and emotions. The battle is between faith and reason on one side and emotion and imagination on the other…

Faith, in the sense in which I am here using the word, is the art of holding onto things your reason has once accepted, in spite of your changing moods. For moods will change, whatever view your reason takes. I know that by experience. Now that I am a Christian, I do have moods in which the whole thing looks very improbable; but when I was an atheist, I had moods in which Christianity looked terribly probable... Unless you teach your moods "where they get off" you can never be either a sound Christian or even a sound atheist, but just a creature dithering to and fro, with its beliefs really dependent on the weather and the state of its digestion. Consequently one must train the habit of faith.

C. S. Lewis, Mere Christianity

Caveat: this is not at all how the majority of the religious people that I know would use the word "faith". In fact, this passage turned out to be one of the earliest helps in bringing me to think critically about and ultimately discard my religious worldview.

Replies from: simplicio, SaidAchmiz, Jay_Schweikert
comment by simplicio · 2013-01-02T21:44:48.024Z · LW(p) · GW(p)

Now that I am a Christian, I do have moods in which the whole thing looks very improbable; but when I was an atheist, I had moods in which Christianity looked terribly probable...

Dear LWers: do you have these moods (let us gloss them as "extreme temporary loss of confidence in foundational beliefs"):

[pollid:377]

Replies from: Desrtopa, Oligopsony, Qiaochu_Yuan, duckduckMOO, Jay_Schweikert, MaoShan, FiftyTwo
comment by Desrtopa · 2013-01-03T02:57:38.888Z · LW(p) · GW(p)

I have had "extreme temporary loss of foundational beliefs," where I briefly lost confidence in beliefs such as the nonexistence of fundamentally mental entities (I would describe this experience as "innate but long dormant animist intuitions suddenly start shouting,") but I've never had a mood where Christianity or any other religion looked probable, because even when I had such an experience, I was never enticed to privilege the hypothesis of any particular religion or superstition.

comment by Oligopsony · 2013-01-03T04:41:37.785Z · LW(p) · GW(p)

I answered "sometimes" thinking of this as just Christianity, but I would have answered "very often" if I had read your gloss more carefully.

I'm not quite sure how to explicate this, as it's something I've never really thought much about and had generalized from one example to be universal. But my intuitions about what is probably true are extremely mood- and even fancy-dependent, although my evaluation of particular arguments and such seems to be comparatively stable. I can see positive and negative aspects to this.

Replies from: lavalamp
comment by lavalamp · 2013-01-06T23:34:23.570Z · LW(p) · GW(p)

Oh whoops, I didn't read the parenthetical either. Not sure if it changes my answer.

comment by Qiaochu_Yuan · 2013-01-02T23:00:43.523Z · LW(p) · GW(p)

I am fascinated by all of the answers that are not "never," as this has never happened to me. If any of the answerers were atheists, could any of you briefly describe these experiences and what might have caused them? (I am expecting "psychedelic drugs," so I will be most surprised by experiences that are caused by anything else.)

Replies from: someonewrongonthenet, FeepingCreature, None, GDC3, simplicio, NoisyEmpire, MoreOn, Toddling, Sarokrae, jooyous, MugaSofer
comment by someonewrongonthenet · 2013-01-03T01:57:40.185Z · LW(p) · GW(p)

Erm...when I was a lot younger, when I considered doing something wrong or told a lie I had the vague feeling that someone was keeping tabs. Basically, when weighing utilities I greatly upped the probability that someone would somehow come to know of my wrongdoings, even when it was totally implausible. That "someone" was certainly not God or a dead ancestor or anything supernatural...it wasn't even necessarily an authority figure.

Basically, the superstition was that someone who knew me well would eventually come to find out about my wrongdoing, and one day they would confront me about it. And they'd be greatly disappointed or angry.

I'm ashamed to say that in the past I might have actually done actions which I myself felt were immoral, if it were not for that superstitious feeling that my actions would be discovered by another individual. It's hard to say in retrospect whether the superstitious feeling was the factor that pushed me back over that edge.

Note that I never believed the superstition...it was more of a gut feeling.

I'm older now and am proud to say that I haven't given serious consideration to doing anything which I personally feel is immoral for a very, very long time. So I do not know whether I still carry this superstition. It's not really something I can test empirically.

I think part of it is that as I grew older my mind conceptually merged "selfish desire" and "morality" neatly into one single "what is the sum total of my goals" utility function construct (though I wasn't familiar with the term "utility function" at the time).

This shift occurred sometime in high school, and it happened around the same time that I overcame mind-body dualism at a gut level. Though I've always had generally atheist beliefs, it wasn't until this shift that I really understood the implications of a logical universe.

Once these dichotomies broke down, I no longer felt the temptation to "give in" to selfish desire, nor was I warded off by "guilt" or the superstitious fear. I follow morals because I want to follow them, since they are a huge part of my utility function. Once my brain understood at a gut level that going against my morality was intrinsically against my interests, I stopped feeling any temptation to do immoral actions for selfish reasons. On the flip side, the shift also allows me to be selfish without feeling guilty. It's not that I'm a "better person" thanks to the shift in gut instinct...it's more that my opposing instincts don't fight with each other by using temptation, fear, and guilt anymore.

I think there is something about that "shift" experience I described (anecdote indicates that a lot of smart people go through this at some point in life, but most describe it in less than articulate spiritual terms) which permanently alters your gut feelings about reality, morality, and similar topics in philosophy.

I'm guessing those who answered "never" either did not carry the illusions in question to begin with and therefore did not require a shift in thought, or they did not factor in how they felt pre-shift into their introspection.

comment by FeepingCreature · 2013-01-07T15:22:22.216Z · LW(p) · GW(p)

Occasionally the fundamental fact that all our inferences are provisional creeps me out. The realization that there's no way to actually ground my base belief that, say, I'm not a Boltzmann brain, combined with the fact that it's really quite absurd that anything exists rather than nothing at all (given that any cause we find just moves the problem outwards), is the closest thing I have to "doubting existence".

comment by [deleted] · 2013-01-14T15:23:41.389Z · LW(p) · GW(p)

I have been diagnosed with depression in the past, so it's not terribly surprising to me that, when "My life is worth living" is considered a foundational belief, its confidence fades in and out quite a lot. In this case, the drugs would actually restore me back to a more normal level.

Although, considering the frequency with which it is still happening, I may want to reconsult with my Doctor. Saying "I have been diagnosed with mental health problems, and I'm on pills, but really, I still have some pretty bad mental health problems." pattern matches rather well to "Perhaps I should ask my Doctor about updating those pills."

Replies from: DaFranker
comment by DaFranker · 2013-01-14T15:39:35.413Z · LW(p) · GW(p)

Although, considering the frequency with which it is still happening, I may want to reconsult with my Doctor. Saying "I have been diagnosed with mental health problems, and I'm on pills, but really, I still have some pretty bad mental health problems." pattern matches rather well to "Perhaps I should ask my Doctor about updating those pills."

Yep. Medical professionals often err on the side of lesser dosage anyway, even for life-threatening stuff. After all, "we gave her medication but she died anyway, the disease was too strong" sounds like abstract, chance-and-force-of-nature-and-fate stuff, and like a statistic on a sheet of paper.

"Doctor overdoses patient", on the other hand, is such a tasty scoop I'd immediately expect my grandmother to be gossiping about it and the doctor in question to be banned from medical practice for life, probably with their diplomas revoked.

They also often take their guidelines from organizations like the FDA, which are notorious for explicitly delaying, for five years, medications that have a 1 in 10000 side-effect mortality rate versus an 80% cure-and-survival rate for diseases that kill 10k+ annually (bogus example, but I'm sure someone more conscientious than me can find real numbers).

Anyway, sorry for the possibly undesired tangent. It seems usually-optimal to keep returning to your doctor persistently as much as possible until medication really does take noticeable effect.

comment by GDC3 · 2013-01-06T06:20:36.275Z · LW(p) · GW(p)

I put sometimes.

I believe all kinds of crazy stuff and question everything when I'm lying in bed trying to fall asleep, most commonly that death will be an active and specific nothing that I will exist to experience and be bored, frightened, and upset by forever. Something deep in my brain believes a very specific horrible cosmology, as wacky and specific as any religion but not nearly as cheerful. When my faculties are weakened it feels as if I directly know it to be true, and any attempt to rehearse my reasons for materialism feels like rationalizing.

I'm neither very mentally healthy nor very neurotypical, which may be part of why this happens.

comment by simplicio · 2013-01-02T23:10:07.917Z · LW(p) · GW(p)

could any of you briefly describe these experiences and what might have caused them?

Hasn't happened to me in years. Typically involved desperation about how some aspect of my life (only peripherally related to the beliefs in question, natch) was going very badly. Temptation to pray was involved. These urges really went away when I discovered that they were mainly caused by garden variety frustration + low blood sugar.

I think that in my folly-filled youth, my brain discovered that "conversion" experiences (religious/political) are fun and very energizing. When I am really dejected, a small part of me says "Let's convert to something! Clearly your current beliefs are not inspiring you enough!"

comment by NoisyEmpire · 2013-01-03T01:34:02.049Z · LW(p) · GW(p)

My own response was “rarely”; had I answered when I was a Christian ten years ago, I would probably have said “sometimes”; had I answered as a Christian five years ago I might have said “often” or “very often” (eventually I allowed some of these moments of extreme uncertainty to become actual crises of faith and I changed my mind, though it happened in a very sloppy and roundabout way and had I had LessWrong at the time things could’ve been a lot easier.)

And still, I can think of maybe two times in the past year when I suddenly got a terrifying sinking feeling that I have got everything horribly, totally wrong. Both instances were triggered whilst around family and friends who remain religious, and both had to do with being reminded of old arguments I used to use in defense of the Bible which I couldn’t remember, in the moment, having explicitly refuted.

Neither of these moods was very important and both were combated in a matter of minutes. In retrospect, I’d guess that my brain was conflating fear of rejection-from-the-tribe-for-what-I-believe with fear of actually-being-wrong.

Not psychedelic drugs, but apparently an adequate trigger nonetheless.

comment by MoreOn · 2013-05-08T05:20:39.949Z · LW(p) · GW(p)

I am firmly atheist right now, lounging in my mom's warm living room in a comfy armchair, tipity-typing on my keyboard. But when I go out to sea, alone, and the weather turns, a storm picks up, and I'm caught out after dark, and thanks to a rusty socket only one bow light works... well, then, I pray to every god I know starting with Poseidon, and sell my soul to the devil while at it.

I'm not sure why I do it.

Maybe that's what my brain does to occupy the excess processing time? In high school, when I still remembered it, I used to recite the litany against fear. But that's not quite it. When waves toss my little boat around and I ask myself why I'm praying, the answer invariably comes out, "It's never made things worse. So the Professor God isn't punishing me for my weakness. Who knows... maybe it will work? Even if not, prayer beats panic as a system idle process."

comment by Toddling · 2013-01-04T05:41:50.262Z · LW(p) · GW(p)

I answered Sometimes. For me the 'foundational belief' in question is usually along the lines: "Goal (x) is worth the effort of subgoal/process (y)." These moods usually last less than 6 months, and I have a hunch that they're hormonal in nature. I've yet to systematically gather data on the factors that seem most likely to be causing them, mostly because it doesn't seem worth the effort right now. Hah. Seriously, though, I have in fact been convinced that I need to work out a consistent utility function, but when I think about the work involved, I just... blah.

comment by Sarokrae · 2013-03-12T02:02:33.013Z · LW(p) · GW(p)

I'm a bit late here, but my response seems different enough to the others posted here to warrant replying!

My brain is abysmally bad at storing trains of thought/deduction that lead to conclusions. It's very good at having exceptionally long trains of thoughts/deductions. It's quite good at storing the conclusions of my trains of thoughts, but only as cached thoughts and heuristics. It means that my brain is full of conclusions that I know I assign high probabilities to, but don't know why off the top of my head. My beliefs end up stored as a list of theorems in my head, with proofs left as an exercise to the reader. I occasionally double-check them, but it's a time-consuming process.

If I'm having a not very mentally agile day, I can't off the top of my head re-prove the results I think I know, and a different result seems tempting, I basically get confused for a while until I re-figure out how to prove the result I know I've proven before.

Basically on some days past-me seems like a sufficiently different person that I no longer completely trust her judgement.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-03-12T02:07:39.073Z · LW(p) · GW(p)

Interesting. I've only had this experience in very restricted contexts, e.g. I noticed recently that I shouldn't trust my opinions on movies if the last time I saw them was more than several years ago because my taste in movies has changed substantially in those years.

comment by jooyous · 2013-01-02T23:07:44.276Z · LW(p) · GW(p)

Sometimes, I am extremely unconvinced of the utility of "knowing stuff" or "understanding stuff" when confronted with the inability to explain it to suffering people who seem like they want to stop suffering but refuse to consider the stuff that has potential to help them stop suffering. =/

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-02T23:41:11.221Z · LW(p) · GW(p)

Interesting. My confidence in my beliefs has never been tied to my ability to explain them to anyone, but then again I'm a mathematician-(in-training), so...

Replies from: jooyous
comment by jooyous · 2013-01-03T00:22:44.788Z · LW(p) · GW(p)

Well, it's not that I'm not confident that they're useful to me. They are! They help me make choices that make me happy. I'm just not confident in how useful pursuing them is in comparison to various utilitarian considerations of helping other people be not miserable.

For example, suppose I could learn some more rationality tricks and start saving an extra $100 each month by some means, while in the meantime someone I know is depressed and miserable and seemingly asking for help. Instead of going to learn those rationality tricks to make an extra $100, I am tempted to sit with them and tell them all the ways I learned to manage my thoughts in order to not make myself miserable and depressed. And when this fails spectacularly, eating my time and energy, I am left inclined to do neither because that person is miserable and depressed and I'm powerless to help them so how useful is $100 really? Blah! So, to answer the question, this is the mood in which I question my belief in the usefulness of knowing and doing useful things.

I am also a computer science/math person! high five

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-03T00:47:46.279Z · LW(p) · GW(p)

So, to answer the question, this is the mood in which I question the usefulness of doing useful things.

Aren't useful things kind of useful to do kind of by definition? (I know this argument is often used to sneak in connotations, but I can't imagine that "is useful" is a sneaky connotation of "useful thing.")

What you describe sounds to me like a failure to model your friend correctly. Most people cannot fix themselves given only instructions on how to do so, and what worked for you may not work for your friend. Even if it might, it is hard to motivate yourself to do things when you are miserable and depressed, and when you are miserable and depressed, hearing someone else say "here are all the ways you currently suck, and you should stop sucking in those ways" is not necessarily encouraging.

In other words, "useful" is a two- or even three-place predicate.

comment by MugaSofer · 2013-01-10T08:59:51.088Z · LW(p) · GW(p)

If any of the answerers were atheists

I should think most of them were. Of course, "foundational belief" is a subjective term.

comment by duckduckMOO · 2013-01-08T19:12:29.410Z · LW(p) · GW(p)

I put never, but "not anymore" would be more accurate

Replies from: Endovior
comment by Endovior · 2013-01-10T13:31:31.962Z · LW(p) · GW(p)

This. Took a while to build that foundation, and a lot of contemplation in deciding what needed to be there... but once built, it's solid, and not given to reorganization on a whim. That's not because I'm closed-minded or anything; it's because stuff like a belief that the evidence provided by your own senses is valid really is kind of fundamental to believing anything else, at all. Not believing in that implies not believing in a whole host of other things, and develops into some really strange philosophies. As a philosophical position, this is called 'empiricism', and it's actually more fundamental than belief in only the physical world (i.e. disbelief in spiritual phenomena, 'materialism'), because you need a thing that says what evidence is considered valid before you have a thing that says 'and based on this evidence, I conclude'.

comment by Jay_Schweikert · 2013-01-09T16:22:41.270Z · LW(p) · GW(p)

I answered "rarely," but I should probably qualify that. I've been an atheist for about 5 years, and in the last 2 or 3, I don't recall ever seriously thinking that the basic, factual premises of Christianity were any more likely than Greek myths. But I have had several moments -- usually following some major personal failing of mine, or maybe in others close to me -- where the Christian idea of man-as-fallen living in a fallen world made sense to me, and where I found myself unconsciously groping for something like the Christian concept of grace.

As I recall, in the first few years after my deconversion, this feeling sometimes led me to think more seriously about Christianity, and I even prayed a few times, just in case. In the past couple years that hasn't happened; I understand more fully exactly why I'd have those feelings even without anything like the Christian God, and I've thought more seriously about how to address them without falling on old habits. But certainly that experience has helped me understand what would motivate someone to either seek or hold onto Christianity, especially if they didn't have any training in Bayescraft.

comment by MaoShan · 2013-01-03T18:09:19.697Z · LW(p) · GW(p)

I thought the most truthful answer for me would be "Rarely", given all possible interpretations of the question. I think that it should have been qualified "within the past year", to eliminate the follies of truth-seeking in one's youth. Someone who answers "Never" cannot be considering when they were a five-year-old. I have believed or wanted to believe a lot of crazy things. Even right now, thinking as an atheist, I rarely have those moods, and only rarely due to my recognized (and combated) tendency toward magical thinking. However, right now, thinking as a Christian, I would have doubts constantly, because no matter how much I would like to believe, it is plain to see that most of what I am expected to have faith in as a Christian is complete crap. I am capable of adopting either mode of thinking, as is anyone else here. We're just better at one mode than others.

comment by FiftyTwo · 2013-01-29T13:37:09.131Z · LW(p) · GW(p)

I said very often, but I do have clinical depression, so it's not unexpected.

comment by Said Achmiz (SaidAchmiz) · 2013-01-02T21:50:34.567Z · LW(p) · GW(p)

Sounds like Lewis's confusion would have been substantially cleared up by distinguishing between belief and alief, and then he would not have had to perpetrate such abuses on commonly used words.

Replies from: simplicio
comment by simplicio · 2013-01-02T23:00:14.953Z · LW(p) · GW(p)

To be fair, the philosopher Tamar Gendler only coined the term in 2008.

comment by Jay_Schweikert · 2013-01-09T16:10:02.995Z · LW(p) · GW(p)

Upvoted. I actually had a remarkably similar experience reading Lewis. Throughout college I had been undergoing a gradual transformation from "real" Christian to liberal Protestant to deist, and I ended up reading Lewis because he seemed to be the only person I could find who was firmly committed to Christianity and yet seemed willing to discuss the kind of questions I was having. Reading Mere Christianity was basically the event that let me give Christianity/theism one last look over and say "well said, but that is enough for me to know it is time to move on."

comment by nabeelqu · 2013-01-01T15:33:09.983Z · LW(p) · GW(p)

Not long ago a couple across the aisle from me in a Quiet Car talked all the way from New York City to Boston, after two people had asked them to stop. After each reproach they would lower their voices for a while, but like a grade-school cafeteria after the lunch monitor has yelled for silence, the volume crept inexorably up again. It was soft but incessant, and against the background silence, as maddening as a dripping faucet at 3 a.m. All the way to Boston I debated whether it was bothering me enough to say something. As we approached our destination a professorial-looking man who’d spoken to them twice got up, walked back and stood over them. He turned out to be quite tall. He told them that they’d been extremely inconsiderate, and he’d had a much harder time getting his work done because of them.

“Sir,” the girl said, “I really don’t think we were bothering anyone else.”

“No,” I said, “you were really annoying.”

“Yes,” said the woman behind them.

“See,” the man explained gently, “this is how it works. I’m the one person who says something. But for everyone like me, there’s a whole car full of people who feel the same way.”

-- Tim Kreider, The Quiet Ones

Replies from: roystgnr, Eliezer_Yudkowsky, Jotto999, tut
comment by roystgnr · 2013-01-02T21:16:53.276Z · LW(p) · GW(p)

"This is how it sometimes works", I would have said. Anything more starts to sound uncomfortably close to "the lurkers support me in email."

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-01-02T21:34:32.590Z · LW(p) · GW(p)

...but why wait until they'd almost gotten to Boston?

Replies from: SaidAchmiz, shminux, nabeelqu
comment by Said Achmiz (SaidAchmiz) · 2013-01-02T21:46:16.081Z · LW(p) · GW(p)

Perhaps because at that point, one is not faced with the prospect of spending several hours in close proximity to people with whom one has had an unpleasant social interaction.

comment by shminux · 2013-01-02T21:54:47.252Z · LW(p) · GW(p)

No one wants to appear rude, of course. As this was almost the end of the ride, the person who rebuked them minimized the time he'd have to endure in the company of people who might consider him rude because of his admonishment, whether or not they agree with him. I wonder if this is partly a cultural thing.

comment by nabeelqu · 2013-01-02T21:51:40.917Z · LW(p) · GW(p)

The passage states that he'd already spoken to them twice.

comment by Jotto999 · 2013-01-06T23:12:49.685Z · LW(p) · GW(p)

I don't know the circumstances, but I would have tried to make eye contact and just blatantly stare at them for minutes straight, maybe even hamming it up with a look of slightly unhinged interest. They would have become more uncomfortable and might have started being anxious that a stranger was eavesdropping on them, causing them to want to be more discreet, depending on their disposition. I've actually tried this before, and it seems to sometimes work if they can see you staring at them. Give a subtle, slight grin, like you might be sexually turned on. If you won't see them again then it's worth a try.

comment by tut · 2013-01-02T21:13:05.493Z · LW(p) · GW(p)

Since this has got 22 upvotes I must ask: What makes this a rationality quote?

Replies from: simplicio, nabeelqu, Toddling
comment by simplicio · 2013-01-02T22:02:46.459Z · LW(p) · GW(p)

Every actual criticism of an idea/behaviour is likely to imply a much larger quantity of silent doubt/disapproval.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-02T22:20:38.330Z · LW(p) · GW(p)

Sometimes, but you need to take into account what P(voices criticism | has criticism) is. Otherwise you'll constantly cave to vocal minorities (situations where the above probability is relatively large).
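The point about P(voices criticism | has criticism) can be made concrete with a naive back-of-the-envelope estimate (all numbers here are invented for illustration, and `implied_critics` is a hypothetical helper, not anything from the thread): if each person who minds something speaks up independently with probability p, then k complaints suggest roughly k/p people who mind. The implied silent majority is large only when p is small.

```python
# Naive point estimate of how many people share a criticism, given
# that k people voiced it and each critic speaks up independently
# with probability p = P(voices criticism | has criticism).
# Illustrative sketch only; real crowds are not independent.

def implied_critics(k_vocal: int, p_voice: float) -> float:
    """Estimate total critics as vocal critics divided by P(voice)."""
    if not 0 < p_voice <= 1:
        raise ValueError("p_voice must be in (0, 1]")
    return k_vocal / p_voice

# A timid crowd (p small): one complaint implies many silent critics.
print(implied_critics(1, 0.05))  # -> 20.0

# A vocal minority (p large): one complaint implies few others agree.
print(implied_critics(1, 0.8))   # -> 1.25
```

This is exactly the caveat in the comment above: the same single complaint licenses very different conclusions depending on the assumed value of p, so treating every criticism as the tip of a large iceberg amounts to assuming p is always small.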

comment by nabeelqu · 2013-01-02T21:55:50.842Z · LW(p) · GW(p)

I'd say it comes under the 'instrumental rationality' heading. The chatter was clearly bothering the writer, but - irrationally - neither he nor the others (bar one) actually got up and said anything.

comment by Toddling · 2013-01-03T10:05:28.679Z · LW(p) · GW(p)

You could argue that the silence of the author and the woman behind the couple is an example of the bystander effect.

comment by dspeyer · 2013-01-01T16:37:36.394Z · LW(p) · GW(p)

You're better at talking than I am. When you talk, sometimes I get confused. My ideas of what's right and wrong get mixed up. That's why I'm bringing this. As soon as I start thinking it's all right to steal from our employees, I'm going to start hitting you with the stick.

later

If it makes you feel any better, I agree with your logic completely.

No, what would make me feel better is for you to stop hitting me!

--Freefall

comment by Qiaochu_Yuan · 2013-01-03T08:49:14.968Z · LW(p) · GW(p)

In Japan, it is widely believed that you don't have direct knowledge of what other people are really thinking (and it's very presumptuous to assume otherwise), and so it is uncommon to describe other people's thoughts directly, such as "He likes ice cream" or "She's angry". Instead, it's far more common to see things like "I heard that he likes ice cream" or "It seems like/It appears to be the case that she is angry" or "She is showing signs of wanting to go to the park."

-- TVTropes

Edit (1/7): I have no particular reason to believe that this is literally true, but either way I think it holds an interesting rationality lesson. Feel free to substitute 'Zorblaxia' for 'Japan' above.

Replies from: simplicio, roryokane, abody97, MugaSofer
comment by simplicio · 2013-01-03T15:44:34.419Z · LW(p) · GW(p)

Interesting; is this true?

Replies from: beoShaffer
comment by beoShaffer · 2013-01-04T05:59:36.962Z · LW(p) · GW(p)

Yes, my Japanese teacher was very insistent about it, and IIRC would even take points off for talking about someone's mental state without the proper qualifiers.

Replies from: Vaniver, Toddling
comment by Vaniver · 2013-01-05T20:13:00.404Z · LW(p) · GW(p)

Yes, my Japanese was very insistent about it

I think you're missing a word here :P

Replies from: beoShaffer
comment by beoShaffer · 2013-01-05T20:23:57.421Z · LW(p) · GW(p)

Fixed.

comment by Toddling · 2013-01-04T06:06:42.424Z · LW(p) · GW(p)

This is good to know, and makes me wonder whether there's a way to encourage this kind of thinking in other populations. My only thought so far has been "get yourself involved with the production of the most widely-used primary school language textbooks in your area."

Thoughts?

Replies from: Desrtopa, ChristianKl
comment by Desrtopa · 2013-01-07T15:55:44.973Z · LW(p) · GW(p)

It's not necessarily an advantageous habit. If a person tells you they like ice cream, and you've seen them eating ice cream regularly with every sign of enjoyment, you have as much evidence that they like ice cream as you have about countless other things that nobody bothers hanging qualifiers on even in Japanese. The sciences are full of things we can't experience directly but can still establish with high confidence.

Rather than teaching people to privilege other people's mental states as an unknowable quality, I think it makes more sense to encourage people to be aware of their degrees of certainty.

Replies from: Toddling
comment by Toddling · 2013-01-08T03:15:57.025Z · LW(p) · GW(p)

Rather than teaching people to privilege other people's mental states as an unknowable quality, I think it makes more sense to encourage people to be aware of their degrees of certainty.

Increased awareness of degrees of certainty is more or less what I was thinking of encouraging. It hadn't occurred to me to look for a deeper motive and try to address it directly. This was helpful, thank you.

comment by ChristianKl · 2013-01-07T17:08:34.997Z · LW(p) · GW(p)

You can look at this way of thinking as a social convention. Japanese people often care about signaling respect with language. Someone who speaks directly about the mental state of another can be seen as presumptuous.

High status people in any social circle can influence its social customs. If people get put down for guessing others' mental states wrong without using qualifiers, they are likely to use qualifiers the next time.

If you actually want to do this, E-Prime is an interesting option. E-Prime calls for tabooing the verb "to be".

I've met a few people in NLP circles who valued communicating in E-Prime.

comment by roryokane · 2013-01-05T01:26:52.786Z · LW(p) · GW(p)

Specific source: Useful Notes: Japanese Language on TV Tropes

Replies from: Nornagest
comment by Nornagest · 2013-01-16T20:08:49.166Z · LW(p) · GW(p)

TV Tropes is unreliable on Japanese culture. While it's fond of Japanese media, connection demographics show that Japanese editors are disproportionately rare (even after taking the language barrier into account); almost all the contributors to a page like that are likely to be language students or English-speaking Japanophiles, few of whom have any substantial experience with the language or culture in the wild. This introduces quite a bit of noise; for example, the site's had problems in the past with people reading meanings into Japanese words that don't exist or that are much more specific than they are in the wild.

I don't know myself whether the ancestor is accurate, but it'd be wise to take it with a grain of salt.

comment by abody97 · 2013-01-07T09:24:52.499Z · LW(p) · GW(p)

I have to say that's fairly stupid (I'm talking about the claim which the quote is making and generalizing over a whole population; I am not doing argumentum ad hominem here).

I've seen many sorts of (fascinating) mythical claims on how the Japanese think/communicate/have sex/you name it differently, and they're all ... well, purely mythical. Even if I, for the purposes of this argument, assume that beoShaffer is right about his/her Japanese teacher (and not just imagining things or bending traits into supporting his/her pre-existing belief), it's meaningless and does not validate the above claim. Just for the sake of illustration, the simplest explanation for such usages is some linguistic convention (which actually makes sense, since the page from which the quote is sourced is substantially talking about the Japanese Language).

Unless someone has some solid proof that it's actually related to thinking rather than some other social/linguistic convention, this is meaningless (and stupid).

Replies from: army1987, Qiaochu_Yuan
comment by A1987dM (army1987) · 2013-01-07T17:27:03.327Z · LW(p) · GW(p)

Agreed. Pop-whorfianism is usually silly.

Replies from: Toddling
comment by Toddling · 2013-01-08T03:19:55.436Z · LW(p) · GW(p)

I'm not familiar with this term and your link did not clarify as much as I had hoped. Could you give a clearer definition?

Replies from: arborealhominid, army1987
comment by arborealhominid · 2013-01-08T04:08:19.258Z · LW(p) · GW(p)

Well, the Sapir-Whorf hypothesis is the idea that language shapes thought and/or culture, and Whorfianism is any school of thought based on this hypothesis. I assume pop-Whorfianism is just Whorfian speculation by people who aren't qualified in the field (and who tend to assume that the language/culture relationship is far more deterministic than it actually is).

Replies from: Toddling
comment by Toddling · 2013-01-08T04:26:31.533Z · LW(p) · GW(p)

Thanks.

comment by A1987dM (army1987) · 2013-01-08T09:46:45.821Z · LW(p) · GW(p)

Just-so stories about the relationships between language and culture. (The worst thing is that, while just-so stories about evolutionary psychology are generally immediately identified as sexist/classist/*ist drivel, just-so stories about language tend to be taken seriously no matter how ludicrous they are.)

comment by Qiaochu_Yuan · 2013-01-07T10:07:02.106Z · LW(p) · GW(p)

I don't care whether it's actually true or not; either way it still holds an interesting rationality lesson and that's why I posted it.

Replies from: abody97, army1987
comment by abody97 · 2013-01-07T10:39:20.833Z · LW(p) · GW(p)

With all the respect that I'm generically required to give, I don't care whether you care or not. The argument I made addressed what you posted/quoted, not you as a person or your motives for posting.

comment by A1987dM (army1987) · 2013-01-07T17:21:16.521Z · LW(p) · GW(p)

I don't care whether it's actually true or not; ... I posted it.

I think the technical term for that is “bullshit”.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-07T22:40:51.925Z · LW(p) · GW(p)

That is a hugely unfair assessment of my motives (unlike abody97's comment which claims not to be about my motives, which I also doubt). People say untrue things all the time, e.g. when storytelling. The goal of storytelling is not to directly relate the truth of some particular experience, and I didn't think the goal of posting rationality quotes was either, considering how many quotes these posts get from various works of fiction. I posted this quote for no reason other than to suggest an interesting rationality lesson, and calling that "bullshit" sneaks in unnecessary connotations.

Replies from: army1987
comment by A1987dM (army1987) · 2013-01-07T23:41:51.688Z · LW(p) · GW(p)

Yes, but that quote is written in such a way that most readers¹ would assume it's true (or at least that the writer believes it's true); so it's not like storytelling. And most readers¹ would find it interesting because they'd think it's true; if I pulled some claim about $natural_language having $weird_feature directly out of my ass and concluded with “... Just kidding.”, I doubt many people¹ would find it that interesting.


  1. OK, I admit I'm mostly Generalizing From One Example.
Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-07T23:48:36.493Z · LW(p) · GW(p)

Would you be satisfied if I edited the original post to read something like "note: I have no particular reason to believe that this is literally true, but I think it holds an interesting rationality lesson either way. Feel free to substitute 'Zorblaxians' for 'Japanese'"?

Replies from: army1987
comment by MugaSofer · 2013-01-08T11:17:54.595Z · LW(p) · GW(p)

Feel free to substitute 'Zorblaxia' for 'Japan' above.

Have you considered replacing it with "[country]" or similar, then noting at the bottom what page it came from?

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-08T22:12:34.316Z · LW(p) · GW(p)

I added a link, but I would prefer to suggest a fake name over a generic name.

comment by roystgnr · 2013-01-02T21:24:29.144Z · LW(p) · GW(p)

I think, actually, scientists should kinda look into that whole 'death' thing. Because, they seem to have focused on diseases... and I don't give a #*=& about them. The guys go, "Hey, we fixed your arthritis!" "Am I still gonna die?" "Yeah."

So that, I think, is the biggest problem. That's why I can't get behind politicians! They're always like, "Our biggest problem today is unemployment!" and I'm like "What about getting old and sick and dying?"

  • Norm MacDonald, Me Doing Stand Up

(a few verbal tics were removed by me; the censorship was already present in the version I heard)

Replies from: simplicio, Bugmaster, None
comment by simplicio · 2013-01-02T21:40:20.323Z · LW(p) · GW(p)

Sympathetic, but ultimately, we die OF diseases. And the years we do have are more or less valuable depending on their quality.

Physicians should maximize QALYs, and extending lifespan is only one way to do it.

Replies from: ChristianKl
comment by ChristianKl · 2013-01-07T18:01:54.384Z · LW(p) · GW(p)

Sympathetic, but ultimately, we die OF diseases.

The question is whether that's a useful paradigm. Aubrey de Grey argues that it isn't.

comment by Bugmaster · 2013-01-02T21:26:16.227Z · LW(p) · GW(p)

I'd vote this up, but I can't shake the feeling that the author is setting up a false dichotomy. Living forever would be great, but living forever without arthritis would be even better. There's no reason why we shouldn't solve the easier problem first.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-02T22:33:09.489Z · LW(p) · GW(p)

Sure there is. If you have two problems, one of which is substantially easier than the other, then you still might solve the harder problem first if 1) solving the easier problem won't help you solve the harder problem and 2) the harder problem is substantially more pressing. In other words, you need to take into account the opportunity cost of diverting some of your resources to solving the easier problem.

Replies from: Bugmaster
comment by Bugmaster · 2013-01-02T22:45:07.494Z · LW(p) · GW(p)

In general this is true, but I believe that in this particular case the reasoning doesn't apply. Solving problems like arthritis and cancer is essential for prolonging productive biological life.

Granted, such solutions would cease to be useful once mind uploading is implemented. However, IMO mind uploading is so difficult -- and, therefore, so far in the future -- that, if we did choose to focus exclusively on it, we'd lose too many utilons to biological ailments. For the same reason, prolonging productive biological life now is still quite useful, because it would allow researchers to live longer, thus speeding up the pace of research that will eventually lead to uploading.

comment by [deleted] · 2013-01-05T05:30:05.601Z · LW(p) · GW(p)

Using punctuation that is normally intended to match ({[]}) confused me. Use the !%#$ing other punctuation for that.

Replies from: roystgnr
comment by roystgnr · 2013-01-07T20:11:51.941Z · LW(p) · GW(p)

Edited.

comment by Will_Newsome · 2013-01-01T20:00:39.306Z · LW(p) · GW(p)

For the Greek philosophers, Greek was the language of reason. Aristotle's list of categories is squarely based on the categories of Greek grammar. This did not explicitly entail a claim that the Greek language was primary: it was simply a case of the identification of thought with its natural vehicle. Logos was thought, and Logos was speech. About the speech of barbarians little was known; hence, little was known about what it would be like to think in the language of barbarians. Although the Greeks were willing to admit that the Egyptians, for example, possessed a rich and venerable store of wisdom, they only knew this because someone had explained it to them in Greek.

— Umberto Eco, The Search for the Perfect Language

comment by James_Miller · 2013-01-01T17:58:51.110Z · LW(p) · GW(p)

The women of this country learned long ago, those without swords can still die upon them.

Éowyn explaining to Aragorn why she was skilled with a blade. The Lord of the Rings: The Two Towers, the 2002 movie.

Replies from: fburnaby
comment by fburnaby · 2013-01-02T02:07:53.918Z · LW(p) · GW(p)

It's funny. I've seen that movie five times or so. But I watched it again a few days ago, and that line struck me, too. Never stood out before.

Replies from: James_Miller
comment by James_Miller · 2013-01-02T02:53:06.551Z · LW(p) · GW(p)

If you are an American perhaps it stood out this time because of all the recent discussion of gun control.

comment by Carwajalca · 2013-01-29T11:21:28.207Z · LW(p) · GW(p)

"I've never seen the Icarus story as a lesson about the limitations of humans. I see it as a lesson about the limitations of wax as an adhesive."

-- Randall Munroe, in http://what-if.xkcd.com/30/ (What-if xkcd, Interplanetary Cessna)

comment by [deleted] · 2013-01-01T18:19:58.157Z · LW(p) · GW(p)

.

Replies from: David_Gerard, dspeyer
comment by David_Gerard · 2013-01-02T00:54:26.821Z · LW(p) · GW(p)

I'd have thought from observation that quite a lot of human club is just about discussing the rules of human club, excess meta and all. Philosophy in daily practice being best considered a cultural activity, something humans do to impress other humans.

Replies from: Emile, lavalamp
comment by Emile · 2013-01-02T10:59:53.386Z · LW(p) · GW(p)

I dunno; "philosophy", at least, doesn't seem to be about discussing the rules of the human club, or maybe it's discussing a very specific part of the rules (but then, so is Maths!). Family gossip and stand-up comedy seem much closer to "discussing the rules of the human club".

Replies from: David_Gerard
comment by David_Gerard · 2013-01-02T11:07:13.898Z · LW(p) · GW(p)

Going meta is the quick win (in the social competition) for cultural discourse, though, cf. postmodernism.

comment by lavalamp · 2013-01-06T23:37:46.808Z · LW(p) · GW(p)

I'd have thought from observation that quite a lot of human club is just about discussing the rules of human club...

Most of this is done by people that don't understand the rules of human club...

comment by dspeyer · 2013-01-02T02:17:37.015Z · LW(p) · GW(p)

Do you know what he means by this? We spend a lot of time here discussing the rules of human club, and so far seem glad we have.

Replies from: TimS, army1987
comment by TimS · 2013-01-02T02:26:48.554Z · LW(p) · GW(p)

Imagine the average high school clique. They would be very uncomfortable explicitly discussing the rules of the group - even as they enforced them ruthlessly. Further, the teachers, parents, and other adults who knew the students would be just as uncomfortable describing the rules of the clique.

In short, we are socially weird for being willing to discuss the social rules - that our discussion is an improvement doesn't mean it is statistically ordinary.

Replies from: dspeyer, someonewrongonthenet
comment by dspeyer · 2013-01-02T02:36:44.559Z · LW(p) · GW(p)

Ah, I see.

Human club has many rules. Some can be bent. Others can be broken.

comment by someonewrongonthenet · 2013-01-03T03:07:07.018Z · LW(p) · GW(p)

we are socially weird for being willing to discuss the social rules

Well...only insofar as we discuss the social rules on Lesswrong itself. No one, not even the high school clique, is uncomfortable with discussing social rules as generalities. I've seen school age kids discuss these things as generalities quite enthusiastically when a teacher instigates the discussion.

It's only when specific names and people are mentioned that the discussion can become dangerous for the speakers. But this is no more true for social rules than it is for other conversations of similar type - for instance, when pointing out flaws of people sitting around you. I don't think LW is immune to this (the use of the word "phyg" is a particularly salient, if usually tongue-in-cheek, example), although the anonymity and the fact that very few of us know each other personally do provide some level of protection.

Replies from: TimS
comment by TimS · 2013-01-03T03:42:55.993Z · LW(p) · GW(p)

I've seen school age kids discuss these things as generalities quite enthusiastically when a teacher instigates the discussion.

I suspect this population was not randomly selected. Otherwise, someone might have explained to me why nerds are unpopular at a time in my life that it might have been actually helpful.

Replies from: someonewrongonthenet
comment by someonewrongonthenet · 2013-01-03T04:27:21.556Z · LW(p) · GW(p)

I think the author is needlessly overcomplicating things.

1) People instinctively form tight-knit groups of friends with people they like. People they like usually means people who help them survive and raise offspring. This usually means socially adept, athletic, and attractive.

2) Having friends brings diminishing returns. The more friends a person has, the less they feel the need to make new friends. That's why the first day of school is vital.

3) Ill feelings develop between Sally and Bob. Sally talks to Susanne, and now they both bear ill feelings towards Bob. Thus, Bob has descended a rung in the dominance hierarchy.

4) Bob's vulnerability is a function of how many people Sally can find who will agree with her about him. As an extension of this principle, those with the fewest friends will get the most picked on. The bullies can come from both the popular and unpopular crowds.

5) Factors leading to few friends - lack of social or athletic ability, conspicuous non-conformity via eccentric behavior, dress, or speech, low attractiveness, or misguided use of physical or verbal aggression.

By the power law, approximately 20% of the kids will be friends with 80% of the network. These are the popular kids. As a result of their privileged position, they do not even notice popularity hierarchies...it's sort of like being white, male, upper middle class, etc. These kids will claim that there is no such thing as "popularity".

Any random person is likely to find themselves in the bottom 80%. They will find themselves excluded from the main network because the people in the main network already have enough friends. They will find themselves picked on because they are vulnerable, like Bob.

When we call someone a nerd, we refer to a constellation of traits which include intelligence, obscure interests (non-conformity), lack of social skills, lack of fashion sense, lack of athletic ability, and glasses-wearing. Obviously such folks are less likely to be in the top 20%...not because of the intelligence but because of all that other stuff.

But they aren't the only ones who find themselves unpopular. In fact, a vast segment of the population finds themselves in this position.

comment by A1987dM (army1987) · 2013-01-02T11:05:05.609Z · LW(p) · GW(p)

Something like this?

comment by Vaniver · 2013-01-11T01:32:09.814Z · LW(p) · GW(p)

Some may think these trifling matters not worth minding or relating; but when they consider that tho' dust blown into the eyes of a single person, or into a single shop on a windy day, is but of small importance, yet the great number of the instances in a populous city, and its frequent repetitions give it weight and consequence, perhaps they will not censure very severely those who bestow some attention to affairs of this seemingly low nature. Human felicity is produc'd not so much by great pieces of good fortune that seldom happen, as by little advantages that occur every day.

--Benjamin Franklin

comment by James_Miller · 2013-01-01T17:34:35.616Z · LW(p) · GW(p)

We cannot dismiss conscious analytic thinking by saying that heuristics will get a “close enough” answer 98 percent of the time, because the 2 percent of the instances where heuristics lead us seriously astray may be critical to our lives.

Keith E. Stanovich, What Intelligence Tests Miss: The Psychology of Rational Thought

Replies from: DanArmak
comment by DanArmak · 2013-01-01T19:02:14.312Z · LW(p) · GW(p)

For instance, you need analytical thinking to design your heuristics. Let your heuristics build new heuristics and a 2% failure rate compounded will give you a 50% failure rate in a few tens of generations.
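A quick back-of-the-envelope check of that figure, assuming an independent 2% failure chance at each generation of heuristic-built heuristics:

```python
# Each generation independently succeeds with probability 0.98; track how
# many generations it takes for compounded reliability to fall below 50%.
p_success = 0.98
reliability = 1.0
generations = 0
while reliability > 0.5:
    reliability *= p_success
    generations += 1
print(generations, reliability)  # 35 generations to drop below 50%
```

So "a few tens of generations" is about right: 0.98^35 ≈ 0.49.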

Replies from: James_Miller
comment by James_Miller · 2013-01-01T19:23:05.297Z · LW(p) · GW(p)

Possibly, but Stanovich thinks that most heuristics were basically given to us by evolution and rather than choose among heuristics what we do is decide whether to (use them and spend little energy on thinking) or (not use them and spend a lot of energy on thinking).

Replies from: crap
comment by crap · 2013-01-02T12:01:58.486Z · LW(p) · GW(p)

What is analytical thinking, but a sequence of steps of heuristics well vetted not to lead to contradictions?

Replies from: simplicio
comment by simplicio · 2013-01-02T22:38:57.248Z · LW(p) · GW(p)

A heuristic is a "rule of thumb," used because it is computationally cheap for a human brain and returns the right answer most of the time.

Analytical thinking uses heuristics, but is distinctive in ALSO using propositional logic, probabilistic reasoning, and mathematics - in other words, exceptionless, normatively correct modes of reasoning (insofar as they are done well) that explicitly state their assumptions and "show the work." So there is a real qualitative difference.

Replies from: crap
comment by crap · 2013-01-02T22:42:58.156Z · LW(p) · GW(p)

Propositional logic is made of many very simple steps, though.

Replies from: simplicio
comment by simplicio · 2013-01-02T22:48:32.929Z · LW(p) · GW(p)

Sure. The point is that "A->B; A, therefore B" is necessarily valid.

Unlike, say, "the risk of something happening is proportional to the number of times I've heard it mentioned."

Calling logic a set of heuristics dissolves a useful semantic distinction between normatively correct reasoning and mere rules of thumb, even if you can put the two on a spectrum.

Replies from: crap
comment by crap · 2013-01-02T23:10:26.846Z · LW(p) · GW(p)

Ohh, I agree. I just don't think that there is a corresponding neurological distinction. (Original quote was about evolution).

comment by Oscar_Cunningham · 2013-01-02T14:21:31.517Z · LW(p) · GW(p)

I don't blame them; nor am I saying I wouldn't similarly manipulate the truth if I thought it would save lives, but I don't lie to myself. You keep two books, not no books. [Emphasis mine]

The Last Psychiatrist (http://thelastpsychiatrist.com/2010/10/how_not_to_prevent_military_su.html)

comment by katydee · 2013-01-01T13:37:48.176Z · LW(p) · GW(p)

The dream is damned and dreamer too if dreaming's all that dreamers do.

--Rory Miller

Replies from: Document, MugaSofer
comment by MugaSofer · 2013-01-13T15:05:14.080Z · LW(p) · GW(p)

Amusingly, I read this at first as referring to literal dreams.

comment by [deleted] · 2013-01-02T19:57:26.425Z · LW(p) · GW(p)

Just because someone isn't into finding out The Secrets Of The Universe like me doesn't necessarily mean I can't be friends with them.

-Buttercup Dew (@NationalistPony)

Replies from: arborealhominid
comment by arborealhominid · 2013-01-08T00:15:49.038Z · LW(p) · GW(p)

Never in my life did I expect to find myself upvoting a comment quoting My Nationalist Pony.

comment by Alicorn · 2013-01-11T03:08:53.861Z · LW(p) · GW(p)

He tells her that the earth is flat -
He knows the facts, and that is that.
In altercations fierce and long
She tries her best to prove him wrong.
But he has learned to argue well.
He calls her arguments unsound
And often asks her not to yell.
She cannot win. He stands his ground.
The planet goes on being round.

--Wendy Cope, He Tells Her from the series ‘Differences of Opinion’

comment by Stabilizer · 2013-01-01T18:29:14.226Z · LW(p) · GW(p)

“To succeed in a domain that violates your intuitions, you need to be able to turn them off the way a pilot does when flying through clouds. Without visual cues (e.g. the horizon) you can't distinguish between gravity and acceleration. Which means if you're flying through clouds you can't tell what the attitude of the aircraft is. You could feel like you're flying straight and level while in fact you're descending in a spiral. The solution is to ignore what your body is telling you and listen only to your instruments. But it turns out to be very hard to ignore what your body is telling you. Every pilot knows about this problem and yet it is still a leading cause of accidents. You need to do what you know intellectually to be right, even though it feels wrong.”

-Paul Graham

Replies from: army1987
comment by James_Miller · 2013-01-01T17:46:59.889Z · LW(p) · GW(p)

What You Are Inside Only Matters Because of What It Makes You Do

David Wong, 6 Harsh Truths That Will Make You a Better Person. Published on Cracked.com

Replies from: Multiheaded, brazil84, ChristianKl, army1987, DanArmak
comment by Multiheaded · 2013-01-02T04:21:17.444Z · LW(p) · GW(p)

This article greatly annoyed me because of how it tells people to do the correct practical things (Develop skills! Be persistent and grind! Help people!) yet gives atrocious and shallow reasons for it - and then Wong says how if people criticize him they haven't heard the message. No, David, you can give people correct directions and still be a huge jerk promoting an awful worldview!

He basically shows NO understanding of what makes one attractive to people (especially romantically) and what gives you a feeling of self-worth and self-respect. What you "are" does in fact matter - both to yourself and to others! - outside of your actions; they just reveal and signal your qualities. If you don't do anything good, it's a sign of something being broken about you, but just mechanically bartering some product of your labour for friendship, affection and status cannot work - if your life is in a rut, it's because of some deeper issues and you've got to resolve those first and foremost.

This masochistic imperative to "Work harder and quit whining" might sound all serious and mature, but it does not in fact have the power to make you a "better person"; rather, you'll know you've changed for the better when you can achieve more stuff and don't feel miserable.

I wanted to write a short comment illustrating how this article might be the mirror opposite of some unfortunate ideas in the "Seduction community" - it's "forget all else and GIVE to people, to obtain affection and self-worth" versus "forget all else and TAKE from people, to obtain affection and self-worth" - and how, for a self-actualized person, needs, one's own and others', should dictate the taking and giving, not some primitive framework of barter or conquest - but I predictably got too lazy to extend it :)

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-01-02T06:40:18.151Z · LW(p) · GW(p)

I've taken a crack at what's wrong with that article.

The problem is, there's so much wrong with it from so many different angles that it's rather a large topic.

Replies from: simplicio, Multiheaded
comment by simplicio · 2013-01-02T21:55:29.181Z · LW(p) · GW(p)

Yep.

As with most self-help advice, it is like an eyeglass prescription - only good for one specific pathology. It may correct one person's vision, while making another's massively worse.

Also, I remember what it was like to be (mildly!) depressed, and my oh my would that article not have helped.

comment by Multiheaded · 2013-01-02T08:15:54.658Z · LW(p) · GW(p)

Yep :). I was doing a more charitable reading than the article really deserves, to be honest. It carried over from the method of political debate I am attempting these days - accept the opponent's premises (e.g. far-right ideas that they proudly call "thoughtcrime"), then show how either a modus-tollens inference from them is instrumentally/ethically preferable, or how they just have nothing to do with the opponent being an insufferable jerk.

The basic theme of the article is that you're only well-treated for what you bring to other people's lives. You're worthless otherwise.

This is a half-truth. What you bring to other people's lives matters. However, the reason I'm posting about this is that I believe framing the message that way is actively dangerous for depressed people. The thing is, if you don't believe you're worth something no matter what, you won't do the work of making your life better.

100% true. I often shudder when I think how miserable I could've got if I hadn't watched this at a low point in my life.

Replies from: Omegaile, NancyLebovitz
comment by Omegaile · 2013-01-02T15:02:49.132Z · LW(p) · GW(p)

I think the only problem with the article is that it tries to other-optimize. It seems to address a problem that the author had, as some people do. He seems to overestimate the usefulness of his advice, though (he writes for anyone except those for whom "your career is going great, you're thrilled with your life and you're happy with your relationships"). As mentioned by NancyLebovitz, the article is not for the clinically depressed; in fact it is only for a small (?) set of people who sit around all day whining, who think they deserve better for who they are, without actually trying to improve the situation.

That said, this overgeneralization is a problem that permeates most self-help, and the article is not more guilty than the average.

Replies from: Multiheaded, NancyLebovitz
comment by Multiheaded · 2013-01-02T16:39:28.175Z · LW(p) · GW(p)

I think I'll just quote the entirety of an angry comment on Nancy's blog. I basically can't help agreeing with the below. Although I don't think the article is entirely bad and worthless - there are a few commonplace yet forcefully asserted life instructions there, if that's your cup of tea - its downsides do outweigh its utility.

What especially pisses me off is how Wong hijacks the ostensibly altruistic intent of it as an excuse to throw a load of aggression and condescending superiority in the intended audience's face, then offers an explanation of how feeling repulsed/hurt by that tone further confirms the reader's lower status. This is, like, a textbook example of self-gratification and cruel status play.

6: not all of the world is made up of selfish bastards. Or for some people 'what they can get from you' equals 'hanging out, having a good time doing nothing much' so maybe it's right - but not in the materialistic, selfish way the article implies.

5: I don't quite get what he wants that's different from his #6 'the world expects you to do stuff'. Also, I don't care how right someone is (he's not), if you have to be an asshole about it and if you don't care about hurting people, not only are you doing it wrong, there's a good chance that your message is manipulative rather than insightful. He's trying to make you believe that all that counts is how he wants to see the world.

4 is the same message again, in a different form - the world (here 'women') expects you to deliver. Don't be nice, get results. (If your goal is being a well-rounded individual with good mental health, maybe that's not the best way forward. Just sayin.)

3 has kind of a point if you don't do anything at all (and if you don't do anything, you're probably severely depressed and need far more help than an internet article) - but what people think 'doing' means differs wildly. And the first half of the article seems to discard a lot of stuff that 'people do' (for instance, caring for family members)- you don't have tangible results, but by gods, have you put work into it. As you point out, there can be a severe dissonance between what a depressed you thinks you do (nothing) and what a non-depressed you or a friend might think of it.
And for some people 'go and do something productive' might be good advice, and for others it's even more pressure - and the kind of person who feels guilty eating more than a salad? Needs help, not to be elevated to a role model.

2 Everything bad you've done was because of a bad impulse? Please. Nobody carries black and white around like that, and plenty of things are done out of habit and out of an impulse to do something good (people might think they are helping you not to go to hell by preventing you from being with someone you love) ...

1 just seems to be a self-congratulatory bit that says 'if you don't accept me as a Great Thinker who Knows Better, something is wrong with you.'

Conclusion: a truth that's told with bad intent beats all the lies you can invent. And when you mix in some outright lies...

comment by NancyLebovitz · 2013-01-04T15:04:02.167Z · LW(p) · GW(p)

One of the comments at dreamwidth is by a therapist who said that being extremely vulnerable to shame is a distinct problem-- not everyone who's depressed has it, and not everyone who's shame-prone is depressed.

Also, I didn't say clinically depressed. I'm in the mild-to-moderate category, and that sort of talk is bad for me.

comment by NancyLebovitz · 2013-01-04T15:01:11.225Z · LW(p) · GW(p)

Actually the article says enough different and somewhat contradictory things that it supports multiple readings, or to put it less charitably, it's contradictory in a way that leads people to pick the bits which are most emotionally salient to them and then get angry at each other for misreading the article.

The title is "6 Harsh Truths That Will Improve Your Life"-- by implication, anyone's life. Then Wong says, "this will improve your life unless it's awesome in all respects". Then he pulls back to "this is directed at people with a particular false view of the universe".

comment by brazil84 · 2013-01-02T13:23:05.933Z · LW(p) · GW(p)

My complaint about the article is that it has the same problem as most self-help advice. When you read it, it sounds intelligent, you nod your head, it makes sense. You might even think to yourself "Yeah, I'm going to really change now!"

But as everyone who's tried to improve himself knows, it's difficult to change your behavior (and thoughts) on a consistent enough basis to really make a long-lasting difference.

comment by ChristianKl · 2013-01-01T20:53:54.747Z · LW(p) · GW(p)

It's a misleading claim. Studies of how parents influence their kids generally conclude that the "being" of the parent is more important than what they specifically do with the kids.

From the article:

"But I'm a great listener!" Are you? Because you're willing to sit quietly in exchange for the chance to be in the proximity of a pretty girl?

The author of the article doesn't seem to understand that there is such a thing as good listening. If a girl tells you about some problem in her life, it can be more effective to empathize with her than to go and solve the problem.

If someone says "It's what's on the inside that matters!", a much better response would be to ask: what makes you think that your inside is so much better than the inside of other people?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-01-02T00:41:09.041Z · LW(p) · GW(p)

Studies of how parents influence their kids generally conclude that the "being" of the parent is more important than what they specifically do with the kids.

Could you explain this? Or link to info about such studies? (Or both?)

Replies from: ChristianKl, Paulovsk
comment by ChristianKl · 2013-01-02T15:18:53.988Z · LW(p) · GW(p)

If a parent has low self-esteem, their child is also likely to have low self-esteem. The low-self-esteem parent might do a lot for his child just to prove to himself that he's worthy.

There's a drastic difference between a child observing "Mommy hugs me because she read in a book that good mothers hug their children and she wants to prove to herself that she's a good mother" and "Mommy hugs me because she loves me".

On paper, the woman who spends a lot of energy doing the stuff that good mothers are supposed to do is doing more for her child than a mother who isn't investing that much energy because she's more secure in herself. Being secure in herself increases the chance that she will do the right things at the right time and signal her self-confidence to the child. A child who sees that her mother is self-confident then also has a reason to believe that everything is alright.

As far as studies go, unfortunately I don't keep good records of what information I read from what sources :( (I would add that hugging is an example I use here to illustrate the point, rather than a reference to a specific study about hugging.)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-01-02T18:50:18.442Z · LW(p) · GW(p)

If a parent has low self-esteem, their child is also likely to have low self-esteem.

Yes... and studies show that this is largely due to genetic similarity, much less so to parenting style.

Being secure in herself increases the chance that she will do the right things at the right time and signal her self-confidence to the child.

Which still means that it boils down to what the mother does.

The thing is, no one can see what you "are" except by what you do. Your argument seems to be "doing things for the right reason will lead to doing the actual right thing, instead of implementing some standard recommendation of what the right thing is". Granted. But the thing that matters is still the doing, not the being. "Being" is relevant only to the extent that it makes you do.

Oh, and as for this:

There's a drastic difference between a child observing "Mommy hugs me because she read in a book that good mothers hug their children and she wants to prove to herself that she's a good mother" and "Mommy hugs me because she loves me".

There's a third possibility: "Mommy doesn't hug me, but I know she loves me anyway". Sometimes that's worse than either of the other two.

Replies from: ChristianKl, MugaSofer
comment by ChristianKl · 2013-01-03T16:24:22.573Z · LW(p) · GW(p)

But the thing that matters is still the doing, not the being.

What exactly do you mean by "matter"?

If you want to determine whether A matters for B, then it's central to look at whether identifiable changes in A cause changes in B.

But the thing that matters is still the doing, not the being. "Being" is relevant only to the extent that it makes you do.

When one speaks about doing, one frequently doesn't think of actions like raising one's heart rate by 5 bpm to signal that something made an emotional impact on you. If an attractive woman walks down the street and a guy sees her and is attracted, you could say that the woman is doing something, because she's reflecting light in exactly the right way to attract him.

If you define "doing" that broadly, it's not a useful word anymore. The Cracked article from which I quoted doesn't seem to define "doing" that broadly. On the other hand, it's no problem to define "being" broadly enough to cover all "doing" as well.

comment by MugaSofer · 2013-01-02T19:54:16.497Z · LW(p) · GW(p)

"Being" is relevant only to the extent that it makes you do.

If there is no other method, then advising people to ignore changing what they are in favor of what they do is bad advice.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-01-02T21:44:26.363Z · LW(p) · GW(p)

I am having trouble parsing your comment. Could you elaborate? "no other method" of what?

Also, who is advising people to ignore changing what they are...? And why is advising people to change what they do bad advice?

Please do clarify, as at this point I am not sure whether, and on what, we are disagreeing.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-03T11:16:40.028Z · LW(p) · GW(p)

If "what you are" is the only/most effective way to change "what you do" (eg unconscious signalling) then the advice of the original article to focus on "what you do" is poor advice, even if it is technically correct that only what you do matters.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-01-03T15:50:52.940Z · LW(p) · GW(p)

"We are what we repeatedly do. Excellence, then, is not an act, but a habit." — Aristotle

It goes both ways. And it's meaningless to speak of changing "what you are" if you do not, as a result, do anything different.

I don't think the Cracked article, or I, ever said that the only way to change your actions is by changing some mysterious essence of your being. That's actually a rather silly notion, when it's stated explicitly, because it's self-defeating unless you ignore the observable evidence. That is, we can see that changing your actions by choosing to change your actions IS possible; people do it all the time. The conclusion, then, is that by choosing to change your actions, you have thereby changed this ineffable essence of "what you are", which then proceeds to affect what you do. If that's how it works, then worrying about whether you're changing what you are or only changing what you do is pointless; the two cannot be decoupled.

And that's the point of the article, as I understand it. "What you are" may be a useful notion in your own internal narrative — it's "how the algorithm feels from the inside" (the algorithm in this case being your decisions to do what you do). But outside of your head, it's meaningless. Out in the world, there is no "what you are" beyond what you do.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-03T17:11:35.037Z · LW(p) · GW(p)

As was pointed out elsewhere in these comments, there are situations where changing "what you are" - for example, increasing your confidence levels - is more effective than trying to change your actions directly.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-01-03T21:43:08.735Z · LW(p) · GW(p)

Let's say that you don't do something that you want to do, because you're not confident enough.

What is the difference between doing that thing, and improving your confidence which causes you to do that thing? What does it even mean to distinguish between those two cases?

And if improving your confidence doesn't cause you to do the thing in question, then what's the point?

Edit: On a reread, I might interpret you as saying that one might try (but fail) to change one's actions "directly", or one might attack the root cause, and having done so, succeed at changing one's actions thereby. If that's what you mean, then you're right.

However the advice to "change what you do" should not, I think, be interpreted as saying "ignore the root causes of your inaction"; that is not a charitable reading. The author of the Cracked article isn't railing against people who want to do a thing, but can't (due to e.g. lack of confidence); rather, his targets are people who just don't think that they need to be doing anything, because "what they are" is somehow sufficient.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-04T11:00:05.219Z · LW(p) · GW(p)

his targets are people who just don't think that they need to be doing anything, because "what they are" is somehow sufficient.

Oh, I didn't realize that. You're right, that is a much more charitable reading.

comment by Paulovsk · 2013-01-02T02:05:23.530Z · LW(p) · GW(p)

Yep.

I'm rather curious how parents can "be" something to children without doing anything, since presumably children don't know their parents before their first contact (after birth, I mean).

Replies from: ChristianKl, Omegaile
comment by ChristianKl · 2013-01-02T15:15:24.275Z · LW(p) · GW(p)

I didn't say that they aren't doing anything. I said that identifying specific behaviors doesn't make a good predictor. Characteristics like high emotional intelligence are better predictors.

Working on increased emotional intelligence and higher self esteem would be work that changes "who you are".

Taking steps to raise their own emotional intelligence might have a much greater effect than taking children to the museum to teach them about

comment by Omegaile · 2013-01-02T14:27:45.437Z · LW(p) · GW(p)

I think I have heard of such studies, but the conclusion is different.

Who the parents are matters more than things like which school the kids go to, or which neighborhood they live in, etc.

But in my view, that's only because being something (let's say, a sportsman) will make you do things that influence your kids to pursue a similar path.

comment by A1987dM (army1987) · 2013-01-01T20:29:35.218Z · LW(p) · GW(p)

I wish my 17-year-old self had read that article.

comment by DanArmak · 2013-01-01T18:54:25.962Z · LW(p) · GW(p)

For Instance It Makes You Write With Odd Capitalization.

Replies from: dspeyer
comment by dspeyer · 2013-01-01T19:28:08.814Z · LW(p) · GW(p)

It's probably a section title.

Replies from: Paulovsk
comment by Paulovsk · 2013-01-02T02:09:59.562Z · LW(p) · GW(p)

It's a copywriting technique. It makes it easier for people to read (note how the subject lines of newsletter campaigns usually look like that). Don't ask me for the research, I have no idea where I've read it.

In this case, I guess he just pasted the title here.

Replies from: army1987
comment by A1987dM (army1987) · 2013-01-02T13:49:05.515Z · LW(p) · GW(p)

Don't ask me for the research, I have no idea where I've read it.

So, is there any research on this kind of stuff? All the discussions of this kind of thing I've seen on Wikipedia talk:Manual of Style and places like that appear to be based on people Generalizing From One Example.

comment by NancyLebovitz · 2013-01-17T03:43:04.870Z · LW(p) · GW(p)

I keep coming back to the essential problem that in our increasingly complex society, we are actually required to hold very firm opinions about highly complex matters that require analysis from multiple fields of expertise (economics, law, political science, engineering, others) in hugely complex systems where we must use our imperfect data to choose among possible outcomes that involve significant trade offs. This would be OK if we did not regard everyone who disagreed with us as an ignorant pinhead or vile evildoer whose sole motivation for disagreeing is their intrinsic idiocy, greed, or hatred for our essential freedoms/people not like themselves. Except that there actually are LOTS of ignorant pinheads and vile evildoers whose sole motivation etc., or whose self-interest is obvious to everyone but themselves.

osewalrus

Replies from: Nornagest, simplicio
comment by Nornagest · 2013-01-17T04:01:49.865Z · LW(p) · GW(p)

I try to get around this by assuming that self-interest and malice, outside of a few exceptional cases, are evenly distributed across tribes, organizations, and political entities, and that when I find a particularly self-interested or malicious person that's evidence about their own personality rather than about tribal characteristics. This is almost certainly false and indeed requires not only bad priors but bad Bayesian inference, but I haven't yet found a way to use all but the narrowest and most obvious negative-valence concepts to predict group behavior without inviting more bias than I'd be preventing.
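Nornagest's concession that the heuristic "requires not only bad priors but bad Bayesian inference" can be made concrete with a toy update (all numbers below are invented purely for illustration): observing one malicious member really is evidence about a group's composition, which is precisely the update the heuristic declines to make.

```python
# Toy model: two hypotheses about a group's composition.
# All numbers are invented purely for illustration.
prior = {"mostly_decent": 0.5, "mostly_malicious": 0.5}
# P(a randomly encountered member is malicious | hypothesis)
likelihood = {"mostly_decent": 0.2, "mostly_malicious": 0.5}

# Observe one malicious member and do the proper Bayesian update.
evidence = sum(prior[h] * likelihood[h] for h in prior)            # 0.35
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}

# posterior["mostly_malicious"] is ~0.714: the observation does shift
# belief about the whole group, which is exactly the inference being
# deliberately refused in order to avoid tribal bias.
```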

comment by simplicio · 2013-01-22T16:09:07.111Z · LW(p) · GW(p)

I keep coming back to the essential problem that in our increasingly complex society, we are actually required to hold very firm opinions about highly complex matters that require analysis from multiple fields of expertise (economics, law, political science, engineering, others) in hugely complex systems where we must use our imperfect data to choose among possible outcomes that involve significant trade offs.

A possible partial solution to this problem.

comment by Richard_Kennaway · 2013-01-13T20:11:24.382Z · LW(p) · GW(p)

"Just because you no longer believe a lie, does not mean you now know the truth."

Mark Atwood

comment by Kindly · 2013-01-07T03:42:13.891Z · LW(p) · GW(p)

Then for the first time it dawned on him that classing all drowthers together made no more sense than having a word for all animals that can't stand upright on two legs for more than a minute, or all animals with dry noses. What possible use could there be for such classifications? The word "drowther" didn't say anything about people except that they were not born in a Westil Family. "Drowther" meant "not us," and anything you said about drowthers beyond that was likely to be completely meaningless. They were not a "class" at all. They were just... people.

Orson Scott Card, The Lost Gate

Replies from: MixedNuts
comment by MixedNuts · 2013-01-16T18:33:59.417Z · LW(p) · GW(p)

As my math teacher always said,

The complement of a vector subspace is a repulsive object.

Replies from: None
comment by [deleted] · 2013-01-16T19:29:24.234Z · LW(p) · GW(p)

Why is it repulsive...? I guess I don't get it. I mean sure, it's not a subspace... is that what they mean?

Replies from: MixedNuts
comment by MixedNuts · 2013-01-16T21:09:16.667Z · LW(p) · GW(p)

It's extremely inelegant, and finding yourself using one means you're running into a dead end.

Replies from: DanArmak
comment by DanArmak · 2013-01-19T12:07:16.377Z · LW(p) · GW(p)

I don't see why a complement would be inelegant. It's just one extra bit of specification.

Now, non-definable non-computable numbers (or sets), they are inelegant :-)
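For what it's worth, that "one extra bit of specification" still destroys all the linear structure at once, which may be what the teacher found repulsive. A minimal check in the plane (the example is mine, not from the original exchange):

```latex
Let $V = \mathbb{R}^2$ and $W = \{(x, 0) : x \in \mathbb{R}\}$, the $x$-axis.
Then $u = (0, 1)$ and $v = (0, -1)$ both lie in the complement $V \setminus W$, but
\[
    u + v = (0, 0) \in W,
\]
so $V \setminus W$ is not closed under addition; since $0 \notin V \setminus W$,
it is not even a candidate subspace. Nothing linear-algebraic can be done with it
directly, so one passes back to $W$ or to a complementary \emph{subspace} instead.
```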

comment by Jayson_Virissimo · 2013-01-02T10:06:26.939Z · LW(p) · GW(p)

It is not an epistemological principle that one might as well hang for a sheep as for a lamb.

-Bas van Fraassen, The Scientific Image

Replies from: DanielLC
comment by DanielLC · 2013-01-02T20:17:21.074Z · LW(p) · GW(p)

What does that mean?

Replies from: Eliezer_Yudkowsky, simplicio
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-01-02T21:07:07.530Z · LW(p) · GW(p)

Believing large lies is worse than small lies; basically, it's arguing against the What-The-Hell Effect as applied to rationality. Or so I presume, did not read original.

Replies from: army1987
comment by A1987dM (army1987) · 2013-01-02T23:51:16.210Z · LW(p) · GW(p)

the What-The-Hell Effect

I had noticed that effect myself, but I didn't know it had a name.

Replies from: PDH
comment by PDH · 2013-01-03T15:41:27.492Z · LW(p) · GW(p)

I had noticed it and mistakenly attributed it to the sunk cost fallacy but on reflection it's quite different from sunk costs. However, it was discovering and (as it turns out, incorrectly) generalising the sunk cost fallacy that alerted me to the effect and that genuinely helped me improve myself, so it's a happy mistake.

One thing that helped me was learning to fear the words 'might as well,' as in, 'I've already wasted most of the day so I might as well waste the rest of it,' or 'she'll never go out with me so I might as well not bother asking her,' and countless other examples. My way of dealing it is to mock my own thought processes ('Yeah, things are really bad so let's make them even worse. Nice plan, genius') and switch to a more utilitarian way of thinking ('A small chance of success is better than none,' 'Let's try and squeeze as much utility out of this as possible' etc.).

I hadn't fully grasped the extent to which I was sabotaging my own life with that one, pernicious little error.

comment by simplicio · 2013-01-02T20:59:46.392Z · LW(p) · GW(p)

Lambs are young sheep; they have less meat & less wool.

The punishment for livestock rustling being identical no matter what animal is stolen, you should prefer to steal a sheep rather than a lamb.

Replies from: elspood
comment by elspood · 2013-01-15T07:21:34.072Z · LW(p) · GW(p)

However, the parent says this is NOT an epistemological principle, that one should prefer to get the most benefit when choosing between equally-punished crimes.

So is it saying that epistemology should not allow for equal punishments for unequal crimes? That seems less like epistemology and more like ethics.

Should our epistemology simply not waste time judging which untrue things are more false than others because we shouldn't be believing false things anyway?

It would be great if Jayson would give us more context about this one, since the meaning doesn't seem clear without it.

Replies from: simplicio
comment by simplicio · 2013-01-15T14:43:15.763Z · LW(p) · GW(p)

I think Eliezer has got the meaning more or less right. When Daniel asked "what it meant," I assumed he was merely referring to the idiom, not the entire quote.

So is it saying that epistemology should not allow for equal punishments for unequal crimes? That seems less like epistemology and more like ethics.

As an example of the kind of thing I think the quote is warning against, the theist philosopher Plantinga holds (I'm paraphrasing somewhat uncharitably) that believing in the existence of other minds (i.e., believing that other people are conscious) requires a certain leap of faith which is not justified by empirical evidence. Therefore, theists are not any worse off than everybody else when they make the leap to a god.

comment by NoSignalNoNoise (AspiringRationalist) · 2013-01-01T21:16:14.669Z · LW(p) · GW(p)

The ideas of the Hasids are scientifically and morally wrong; the fashion, food and lifestyle are way stupid; but the community and family make me envious.

-- Penn Jilette

Replies from: MixedNuts
comment by MixedNuts · 2013-01-01T22:29:54.628Z · LW(p) · GW(p)

Disagree about the fashion.

comment by Mass_Driver · 2013-01-25T22:00:41.899Z · LW(p) · GW(p)

I once heard a story about the original writer of the Superman Radio Series. He wanted a pay rise, his employers didn't want to give him one. He decided to end the series with Superman trapped at the bottom of a well, tied down with kryptonite and surrounded by a hundred thousand tanks (or something along these lines). It was a cliffhanger. He then made his salary demands. His employers refused and went round every writer in America, but nobody could work out how the original writer was planning to have Superman escape. Eventually the radio guys had to go back to him and meet his wage demands. The first show of the next series began "Having escaped from the well, Superman hurried to..." There's a lesson in there somewhere, but I've no idea what it is.

-http://writebadlywell.blogspot.com/2010/05/write-yourself-into-corner.html

I would argue that the lesson is that when something valuable is at stake, we should focus on the simplest available solutions to the puzzles we face, rather than on ways to demonstrate our intelligence to ourselves or others.

Replies from: Fronken, CronoDAS, Richard_Kennaway, Eliezer_Yudkowsky
comment by Fronken · 2013-01-29T15:31:45.582Z · LW(p) · GW(p)

Story ... too awesome ... not to upvote ...

Not sure why it's rational, though.

comment by CronoDAS · 2013-01-26T19:16:17.919Z · LW(p) · GW(p)

Speaking of writing yourself into a corner...

According to TV Tropes, there was one show, "Sledge Hammer", which ended its first season with the main character setting off a nuclear bomb while trying to defuse it. They didn't expect to be renewed for a second season, so when they were, they had a problem. This is what they did:

Previously on Sledge Hammer:
[scene of nuclear explosion]
Tonight's episode takes place five years before that fateful explosion.

comment by Richard_Kennaway · 2013-01-26T19:39:06.279Z · LW(p) · GW(p)

I think this is an updating of the cliché from serial adventure stories for boys, where an instalment would end with a cliffhanger, the hero facing certain death. The following instalment would resolve the matter by saying "With one bound, Jack was free." Whether those exact words were ever written is unclear from Google, but it's a well-known form of lazy plotting. If it isn't already on TVTropes, now's your chance.

Replies from: Desrtopa, Kindly, CCC
comment by Desrtopa · 2013-01-29T03:22:34.420Z · LW(p) · GW(p)

Did you just create that redlink? That's not the standard procedure for introducing new tropes, and if someone did do a writeup on it, it would probably end up getting deleted. New tropes are supposed to be introduced as proposals on the YKTTW (You Know That Thing Where) in order to build consensus that they're legitimate tropes that aren't already covered, and gather enough examples for a proper launch. You could add it as a proposal there, but the title is unlikely to fly under the current naming policy.

Pages launched from cold starts occasionally stick around (my first page contribution from back when I was a newcomer and hadn't learned the ropes is still around despite my own attempts to get it cutlisted,) but bypassing the YKTTW is frowned upon if not actually forbidden.

Replies from: Richard_Kennaway, Nornagest
comment by Richard_Kennaway · 2013-01-29T07:30:56.061Z · LW(p) · GW(p)

I didn't make any edits to TVTropes -- the page that it looks like I'm linking to doesn't actually exist. But I wasn't aware of YKTTW.

ETA: Neither is their 404 handler, that turns URLs for nonexistent pages into invitations to create them. As a troper yourself, maybe you could suggest to TVTropes that they change it?

Replies from: Desrtopa
comment by Desrtopa · 2013-01-29T15:05:14.443Z · LW(p) · GW(p)

If you're referring to what I think you are, that's more of a feature than a bug, since works pages don't need to go through the YKTTW. We get a lot more new works pages than new trope pages, so as long as the mechanics for creating either are the same, it helps to keep the process streamlined to avoid too much inconvenience.

comment by Nornagest · 2013-01-29T04:06:38.493Z · LW(p) · GW(p)

To be fair, that kind of flies in the face of standard wiki practice. Not Invented Here isn't defined in the main namespace, but the entire site probably counts as self-demonstration.

comment by Kindly · 2013-01-26T19:58:39.469Z · LW(p) · GW(p)

I believe that Cliffhanger Copout refers to the same thing. The Harlan Ellison example in particular is worth reading.

comment by CCC · 2013-01-29T08:30:59.042Z · LW(p) · GW(p)

Wouldn't that fall under "Cliffhanger Copout"?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-01-26T18:59:02.571Z · LW(p) · GW(p)

There's so many different ways that story couldn't possibly be true...

(EDIT: Ooh, turns out that the Superman Radio program was the one that pulled off the "Clan of the Fiery Cross" punch against the KKK.)

comment by Endovior · 2013-01-03T18:07:47.919Z · LW(p) · GW(p)

If your ends don’t justify the means, you’re working on the wrong project.

-Jobe Wilkins (Whateley Academy)

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2015-05-22T13:04:26.953Z · LW(p) · GW(p)

... or going about it wrong.

comment by ygert · 2013-01-01T17:29:18.416Z · LW(p) · GW(p)

I was rereading HP Lovecraft's The Call of Cthulhu lately, and the quote from the Necronomicon jumped out at me as a very good explanation of exactly why cryonics is such a good idea.

(Full disclosure: I myself have not signed up for cryonics. But I intend to sign up as soon as I can arrange to move to a place where it is available.)

The quote is simply this:

That is not dead which can eternal lie,

And with strange aeons even death may die.

Replies from: None, Document
comment by [deleted] · 2013-01-01T18:18:01.559Z · LW(p) · GW(p)

.

Replies from: Eliezer_Yudkowsky, army1987, Raemon
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-01-02T13:23:32.111Z · LW(p) · GW(p)

Er... logical fallacy of fictional evidence, maybe? I wince every time somebody cites Terminator in a discussion of AI. It doesn't matter if the conclusion is right or wrong, I still wince because it's not a valid argument.

Replies from: MixedNuts, None
comment by MixedNuts · 2013-01-02T14:36:07.194Z · LW(p) · GW(p)

The original quote has nothing to do with life extension/immortality for humans. It just happens to be an argument for cryonics, and it seems to be a valid one: death as failure to preserve rather than cessation of activity, mortality as a problem rather than a fixed rule.

comment by [deleted] · 2013-01-02T18:47:04.867Z · LW(p) · GW(p)

.

comment by A1987dM (army1987) · 2013-01-01T20:32:29.745Z · LW(p) · GW(p)

RationalWiki is extremely sceptical of cryonics and still it has quoted that.

comment by Raemon · 2013-01-02T05:54:12.080Z · LW(p) · GW(p)

It featured prominently in last year's Solstice.

Replies from: None
comment by [deleted] · 2013-01-02T05:59:55.287Z · LW(p) · GW(p)

.

comment by Document · 2013-01-03T04:51:11.444Z · LW(p) · GW(p)

http://lesswrong.com/lw/1pq/rationality_quotes_february_2010/1js5

(Full disclosure: I myself don't intend to sign up for cryonics.)

Replies from: ygert
comment by ygert · 2013-01-03T08:39:46.546Z · LW(p) · GW(p)

Huh... Before posting the quote I did try searching to see if it had already been posted before, but that didn't show up. Oh well.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-01-29T09:29:29.039Z · LW(p) · GW(p)

"A stupid person can make only certain, limited types of errors. The mistakes open to a clever fellow are far broader. But to the one who knows how smart he is compared to everyone else, the possibilities for true idiocy are boundless."

-- Steven Brust, spoken by Vlad, in Iorich

Replies from: shminux
comment by shminux · 2013-01-31T00:11:36.700Z · LW(p) · GW(p)

to the one who knows how smart he is compared to everyone else

Seems to describe well the founder of this forum. I wonder if this quote resonates with a certain personal experience of yours.

comment by Will_Newsome · 2013-01-03T07:07:41.235Z · LW(p) · GW(p)

[O]ne may also focus on a single problem, which can appear in different guises in various disciplines, and vary the methods. An advantage of viewing the same problem through the lens of different models is that we can often begin to identify which features of the problem are enduring and which are artifacts of our particular methods or background assumptions. Because abstraction is a license for us to ignore information, looking at several approaches to modeling a problem can give you insight into what is important to keep and what is noise to ignore. Moreover, discovering robust features of a problem, when it happens, can reshape your intuitions.

— Gregory Wheeler, "Formal Epistemology"

Replies from: RobinZ
comment by RobinZ · 2013-01-03T21:38:44.700Z · LW(p) · GW(p)

Is there a concrete example of a problem approached thus?

Replies from: Sengachi, CCC
comment by Sengachi · 2013-01-04T07:44:41.062Z · LW(p) · GW(p)

Viewing the interactions of photons as both a wave and a billiard ball. Both are wrong, but by seeing which traits remain constant in all models, we can project what traits the true model is likely to have.

Replies from: RobinZ
comment by RobinZ · 2013-01-05T04:58:19.016Z · LW(p) · GW(p)

Does that work? I don't know enough physics to tell if that makes sense.

Replies from: Sengachi
comment by Sengachi · 2013-01-06T10:11:10.996Z · LW(p) · GW(p)

It doesn't give you all the information you need, but that's how the problem was originally tackled. Scientists noticed that they had two contradictory models for light, which had a few overlapping characteristics. Those overlapping areas allowed them to start formulating new theories. Of course it took ridiculous amounts of work after that to figure out a reasonable approximation of reality, but one has to start somewhere.

comment by CCC · 2013-01-14T08:21:20.174Z · LW(p) · GW(p)

I had a thought recently, considering reproduction in animals (for simplicity, let me assume mammals) via a programming metaphor. The DNA contributed by mother and father is the source code; the mother's womb is the compiler; and the baby is the executable code.

The first thing that's noted is that there is a very good chance (around 50%) that the executable code will include its own compiler. And this immediately leads to the possibility that the compiler can slip in any little changes to the executable code that it wants; it can, in fact, elect to entirely ignore the father's input and simply clone itself. (It seems that it doesn't). Or, in other words, the DNA is quite possibly only a partial description of the resulting baby.
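The metaphor above can be rendered as a toy program. Every name and mechanism here is a hypothetical illustration of the analogy, not of genetics: the parity check standing in for the ~50% chance, and the "tweak" standing in for the compiler's own edits, are both made up.

```python
# Toy rendering of the metaphor: DNA = source, womb = compiler,
# baby = executable. Everything here is illustrative, not biology.

def combine_dna(mother_src: str, father_src: str) -> str:
    """'Source code': interleave contributions from both parents."""
    return "".join(m + f for m, f in zip(mother_src, father_src))

def womb_compile(dna: str, compiler_tweaks: str = "") -> dict:
    """The 'compiler' can slip its own changes into the output, so the
    executable is only partially described by the source code."""
    return {
        "phenotype": dna + compiler_tweaks,
        # Roughly half of babies ship with their own compiler (a womb);
        # a parity check stands in for that coin flip here.
        "has_compiler": len(dna) % 2 == 0,
    }

baby = womb_compile(combine_dna("ACGT", "TGCA"), compiler_tweaks="*")
# baby["phenotype"] == "ATCGGCTA*" -- the "*" is the compiler's own edit,
# present in the executable but in neither parent's source.
```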

comment by HalMorris · 2013-01-02T09:58:44.707Z · LW(p) · GW(p)

In the space of one hundred and seventy-six years the Lower Mississippi has shortened itself two hundred and forty-two miles. That is an average of a trifle over one mile and a third per year. Therefore, any calm person, who is not blind or idiotic, can see that in the Old Oolitic Silurian Period, just a million years ago next November, the Lower Mississippi River was upwards of one million three hundred thousand miles long, and stuck out over the Gulf of Mexico like a fishing-rod. And by the same token any person can see that seven hundred and forty-two years from now the Lower Mississippi will be only a mile and three-quarters long, and Cairo and New Orleans will have joined their streets together, and be plodding comfortably along under a single mayor and a mutual board of aldermen. There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.

  • Mark Twain - Life on the Mississippi

(If you wonder where the "two hundred and forty-two miles" of shortening came from: the river's original meandering path was straightened to improve navigation.)
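Twain's "wholesale returns of conjecture" are ordinary linear extrapolation pushed far outside the range of the data. A throwaway Python sketch reproduces the arithmetic; all figures come from the quote itself, and the ~1,022-mile starting length is simply what Twain's own numbers imply, not a surveyed figure:

```python
# Naive linear extrapolation, using only the figures in Twain's quote.
shortened_miles = 242                     # shortening over the observed period
years_observed = 176
rate = shortened_miles / years_observed   # ~1.375 miles lost per year

# A million years into the past, the river was this much longer...
past_extra_length = rate * 1_000_000      # ~1,375,000 miles: "upwards of
                                          # one million three hundred
                                          # thousand miles long"

# ...and 742 years into the future it shrinks to Twain's
# "mile and three-quarters" (starting from the implied ~1,022 miles):
implied_current_length = 1.75 + rate * 742
future_length = implied_current_length - rate * 742

print(f"{past_extra_length:,.0f} extra miles a million years ago")
print(f"{future_length} miles left in 742 years")
```

The joke, of course, is that a rate measured over 176 years says nothing about a million-year span; the model breaks long before the conclusions do.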

comment by BerryPick6 · 2013-01-01T16:15:35.693Z · LW(p) · GW(p)

Many of our most serious conflicts are conflicts within ourselves. Those who suppose their judgements are always consistent are unreflective or dogmatic.

-- John Rawls, Justice as Fairness: A Restatement.

comment by Qiaochu_Yuan · 2013-01-01T22:45:50.501Z · LW(p) · GW(p)

If you ever decide that your life is not too high a price to pay for saving the universe, let me know. We'll be ready.

-- Kyubey (Puella Magi Madoka Magica)

Replies from: Eliezer_Yudkowsky, MarkusRamikin
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-01-02T21:11:53.704Z · LW(p) · GW(p)

For you, I'll walk this endless maze...

Replies from: Sengachi
comment by Sengachi · 2013-01-04T08:08:48.162Z · LW(p) · GW(p)

The only ones to love a martyr's actions are those who did not love them.

Replies from: MixedNuts, wedrifid
comment by MixedNuts · 2013-01-16T18:28:07.692Z · LW(p) · GW(p)

Yeah, if the English language had any words for feelings that aren't hopelessly vague, we wouldn't have those silly arguments about catchy proverbs.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-01-17T01:13:32.916Z · LW(p) · GW(p)

Yeah, if the English language had any words for feelings that aren't hopelessly vague

I suspect this is because of large psychological differences between humans. Specifically, not all humans experience all feelings; thus when a human hears a word referring to a feeling he hasn't experienced he assumes it refers to the closest feeling that he has.

Replies from: MixedNuts, MugaSofer
comment by MixedNuts · 2013-01-17T03:38:28.865Z · LW(p) · GW(p)

Other theories: people just don't introspect much, people like being vague because "I love you" pleases someone who wants to hear "I want to work to make you happy" when you mean "I have lots of fun on dates with you", people before the advent of self-help books had a taboo against discussing (and generally expressing) feelings qua feelings and used preference-revealing actions instead.

comment by MugaSofer · 2013-01-21T16:11:58.167Z · LW(p) · GW(p)

I'm given to understand that English is unusually bad in this regard; that would seem to screen out standard human variance unless English-speakers are unusually varied.

Also, I don't think it's generalizing from one example; humans demonstrate pretty standard emotions (and facial expressions, for that matter) AFAICT.

(Also, there's The Psychological Unity of Mankind. We don't want to overgeneralize, sure, but evidence regarding one human mind is, in fact, evidence regarding all of them. It's far from overwhelming evidence, but still.)

comment by wedrifid · 2013-01-04T14:29:53.479Z · LW(p) · GW(p)

The only ones to love a martyr's actions are those who did not love them.

That isn't true. If I love someone and they martyr themselves (literally or figuratively) in a way that is the unambiguously and overwhelmingly optimal way to fulfill both their volition and my own then I will love the martyr's actions. If you say I do not love the martyr or do not love their actions due to some generalization then you are just wrong.

Replies from: TheOtherDave, Sengachi
comment by TheOtherDave · 2013-01-04T14:50:54.302Z · LW(p) · GW(p)

Agreed... but also, this gets complicated because of the role of external constraints.

I can love someone, "love" what they do in the context of the environment in which they did it (I put love here in scare quotes because I'm not sure I mean the same thing by it when applied to an action, but it's close enough for casual conversation), and hate the fact that they were in such an environment to begin with, and if so my feelings about it can easily get confused.

comment by Sengachi · 2013-01-06T10:15:38.964Z · LW(p) · GW(p)

Ding, rationalist level up!

Unfortunately, most people don't view things this way. I figured that so long as we were discussing a show based on how humans try to rationalize away and fight against the truly rational optimum, I might as well throw out a comment on how such people react to truly rational optimizers (martyrs).

comment by MarkusRamikin · 2013-01-08T20:17:33.964Z · LW(p) · GW(p)

I'm not sure in what way this is about rationality. Can someone please explain? (And yes, I've seen PM and I do remember that line).

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-08T22:10:19.066Z · LW(p) · GW(p)

I had scope insensitivity in mind. The universe is pretty big and one person's life is pretty small.

Replies from: MarkusRamikin
comment by MarkusRamikin · 2013-01-08T22:18:11.431Z · LW(p) · GW(p)

I see, thanks.

Of course Kyubey never reveals how much saving-of-the-universe Madoka's life would pay for exactly. It's not just her life (and suffering) they want, but all the MGs in history, past and future, for an unspecified extention of the Universe's lifespan...

Replies from: earthwormchuck163
comment by earthwormchuck163 · 2013-01-08T22:52:12.231Z · LW(p) · GW(p)

Also, Kyubey clearly has pretty drastically different values from people, and thus his notion of saving the universe is probably not quite right for us.

comment by John_Maxwell (John_Maxwell_IV) · 2013-01-14T05:10:40.405Z · LW(p) · GW(p)

I guess my point here is that part of the reason I stayed in Mormonism so long was that the people arguing against Mormonism were using such ridiculously bad arguments. I tried to find the most rigorous reasoning and the strongest research that opposed LDS theology, but the best they could come up with was stuff like horses in the Book of Mormon. It's so easy for a Latter-Day Saint to simply write the horse references off as either a slight mistranslation or a gap in current scientific knowledge that that kind of "evidence" wasn't worth the time of day to me. And for every horse problem there was something like Hugh Nibley's "Two Shots in the Dark" or Eugene England's work on Lehi's alleged travels across Saudi Arabia, apologetic works that made Mormon historical and theological claims look vaguely plausible. There were bright, thoughtful people on both sides of the Mormon apologetics divide, but the average IQ was definitely a couple of dozen points higher in the Mormon camp.

http://www.exmormon.org/whylft18.htm

Replies from: Bakkot
comment by Bakkot · 2013-01-15T19:55:10.951Z · LW(p) · GW(p)

This is part of why it's important to fight against all bad arguments everywhere, not just bad arguments on the other side.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2013-01-16T12:09:30.530Z · LW(p) · GW(p)

Another interpretation: Try to figure out which side has more intelligent defenders and control for that when evaluating arguments. (On the other hand, the fact that all the smart people seem to believe X should probably be seen as evidence too...)

Yes, argument screens off authority, but that assumes that you're in a universe where it's possible to know everything and think of everything, I suspect. If one side is much more creative about coming up with clever arguments in support of itself (much better than you), who should you believe if the clever side also has all the best arguments?

Replies from: Wei_Dai, peuddO
comment by Wei Dai (Wei_Dai) · 2013-01-16T13:15:21.936Z · LW(p) · GW(p)

Another interpretation: Try to figure out which side has more intelligent defenders and control for that when evaluating arguments.

Isn't the real problem here that the author of the quote was asking the wrong question, namely "Mormonism or non-Mormon Christianity?" when he should have been asking "Theism or atheism?" I don't see how controlling for which side had the more intelligent defenders in the former debate would have helped him better get to the truth. (I mean that may well be the right thing to do in general, but this doesn't seem to be a very good example for illustrating it.)

Replies from: blashimov
comment by blashimov · 2013-01-30T01:41:01.257Z · LW(p) · GW(p)

That may be too much to ask for. Besides, if the horse evidence had worked, you'd be forced to turn around and apply it to Jesus...it may not have worked for her, but it has worked on some theists.

comment by peuddO · 2013-02-01T21:51:06.854Z · LW(p) · GW(p)

That's just not correct. There are no external errors in measuring probability, seeing as the unit and measure come from internal processes. Errors in perceptions of reality and errors in evaluating the strength of an argument will invariably come from oneself, or alternatively from ambiguity in the argument itself (which would make it a worse argument anyway).

Intelligent people do make bad ideas seem more believable and stupid people do make good ideas seem less believable, but you can still expect the intelligent people to be right more often. Otherwise, what you're describing as intelligence... ain't. That doesn't mean you should believe something just because a smart person said it - just that you shouldn't believe it less.

It's going back to the entire reverse stupidity thing. Trying to make yourself unbiased by compensating in the opposite direction doesn't remove the bias - you're still adjusting from the baseline it's established.

On a similar note, I may just have given you an uncharitable reading and assumed you meant something you didn't. Such a misunderstanding won't adjust the truth of what I'm saying about what I'd be reading into your words, and it won't adjust the truth of what you were actually trying to say. Even if there's a bias on my part, it skews perception rather than reality.

comment by aribrill (Particleman) · 2013-01-03T05:35:04.481Z · LW(p) · GW(p)

"How is it possible! How is it possible to produce such a thing!" he repeated, increasing the pressure on my skull, until it grew painful, but I didn't dare object. "These knobs, holes...cauliflowers -" with an iron finger he poked my nose and ears - "and this is supposed to be an intelligent creature? For shame! For shame, I say!! What use is a Nature that after four billion years comes up with THIS?!"

Here he gave my head a shove, so that it wobbled and I saw stars.

"Give me one, just one billion years, and you'll see what I create!"

  • Stanislaw Lem, "The Sanatorium of Dr. Vliperdius" (trans. Michael Kandel)
comment by [deleted] · 2013-01-01T18:21:03.337Z · LW(p) · GW(p)

.

Replies from: Eliezer_Yudkowsky, ArisKatsaris
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-01-02T21:12:57.680Z · LW(p) · GW(p)

"Two roads diverged in a wood. I took the one less traveled by, and had to eat bugs until the park rangers rescued me."

Replies from: MixedNuts, Risto_Saarelma
comment by MixedNuts · 2013-01-02T21:14:56.751Z · LW(p) · GW(p)

Two roads diverged in a wood. I took the one less traveled by, and I got to eat bugs until the park rangers kicked me out.

comment by ArisKatsaris · 2013-01-02T00:01:52.926Z · LW(p) · GW(p)

Wasn't that poem sarcastic anyway? Until the last stanza, the poem says how the roads were really identical in all particulars -- and in the last stanza the narrator admits that he will be describing this choice falsely in the future.

Replies from: gjm
comment by gjm · 2013-01-02T00:46:01.151Z · LW(p) · GW(p)

That's not how I read it. There's no particular difference between the two roads, so far as Frost can tell at the point of divergence, but they're still different roads and lead by different routes to different places, and he expects that years from now he'll look back and see (or guess?) that it did indeed make a big difference which one he took.

Replies from: RobinZ
comment by RobinZ · 2013-01-02T02:12:09.093Z · LW(p) · GW(p)

He expects that years from now he'll look back and claim it made a big difference. He says he will also claim it was the road less traveled by, for all that the passing there had worn them the same and so forth. It may well make a big difference, but of what nature no-one knows.

Replies from: gjm
comment by gjm · 2013-01-02T22:33:37.491Z · LW(p) · GW(p)

Yes, I understand that what the poem says is that he'll say it made a big difference, rather than that it did. But there's a difference between not saying it's a real difference and saying it's not a real difference; it seems to me that the former is what Frost does and the latter is what ArisKatsaris says he does.

(My reading of the to-ing and fro-ing about whether the road is "less traveled by" is that the speaker's initial impression is that one path is somewhat untrodden, and he picks it on those grounds; on reflection he's not sure it's actually any less trodden than the other, but his intention was to take the less-travelled road; and later on he expects to adopt the shorthand of saying that he picked the less-travelled one. I don't see that any actual dishonesty is intended, though perhaps a certain lack of precision.)

comment by GLaDOS · 2013-01-10T19:10:30.808Z · LW(p) · GW(p)

While truths last forever, taboos against them can last for centuries.

--"Sid" a commenter from HalfSigma's blog

comment by [deleted] · 2013-01-30T12:10:07.528Z · LW(p) · GW(p)

Whenever you can, count.

--Sir Francis Galton

comment by pleeppleep · 2013-01-02T02:28:38.124Z · LW(p) · GW(p)

I intend to live forever or die trying

-- Groucho Marx

Replies from: DanielLC
comment by DanielLC · 2013-01-02T20:51:38.067Z · LW(p) · GW(p)

I'm not sure that's great advice. It will result in you trying to try to live forever. The only way to live forever or die trying is to intend to live forever.

Replies from: sketerpot
comment by sketerpot · 2013-01-06T04:27:44.558Z · LW(p) · GW(p)

How to apply that, though? I could try not to try to try to live forever, but that sounds equivalent to trying to merely try to live forever. And now "try" has stopped sounding like a real word, which makes my misfortune even more trying.

Replies from: DanielLC
comment by DanielLC · 2013-01-06T05:40:10.682Z · LW(p) · GW(p)

Live forever. Details are in the linked post.

comment by [deleted] · 2013-01-21T17:20:59.783Z · LW(p) · GW(p)

Person 1: "I don't understand how my brain works. But my brain is what I rely on to understand how things work." Person 2: "Is that a problem?" Person 1: "I'm not sure how to tell."

-Today's xkcd

comment by blashimov · 2013-01-13T07:53:31.764Z · LW(p) · GW(p)

I have always had an animal fear of death, a fate I rank second only to having to sit through a rock concert. My wife tries to be consoling about mortality and assures me that death is a natural part of life, and that we all die sooner or later. Oddly this news, whispered into my ear at 3 a.m., causes me to leap screaming from the bed, snap on every light in the house and play my recording of “The Stars and Stripes Forever” at top volume till the sun comes up.

-Woody Allen EDIT: Fixed formatting.

Replies from: DaFranker, MugaSofer
comment by DaFranker · 2013-01-14T15:13:06.238Z · LW(p) · GW(p)

FWIW, it seems like whatever is parsing the markdown in these comments, whenever it sees a ">" for a quote at the beginning of a paragraph it'll keep reading until the next paragraph break, i.e. two trailing spaces at the end of a line or two line breaks.

comment by MugaSofer · 2013-01-13T13:51:55.072Z · LW(p) · GW(p)

Formatting is broken. Great quote, though.

Replies from: ygert
comment by ygert · 2013-01-14T13:49:47.865Z · LW(p) · GW(p)

The irony of it... (Although your formatting is less broken, as your only mistake was missing out a single period.)

comment by GLaDOS · 2013-01-29T19:34:17.370Z · LW(p) · GW(p)

I notice with some amusement, both in America and English literature, the rise of a new kind of bigotry. Bigotry does not consist in a man being convinced he is right; that is not bigotry, but sanity. Bigotry consists in a man being convinced that another man must be wrong in everything, because he is wrong in a particular belief; that he must be wrong, even in thinking that he honestly believes he is right.

-G. K. Chesterton

comment by Jay_Schweikert · 2013-01-09T16:39:07.096Z · LW(p) · GW(p)

Suppose you've been surreptitiously doing me good deeds for months. If I "thank my lucky stars" when it is really you I should be thanking, it would misrepresent the situation to say that I believe in you and am grateful to you. Maybe I am a fool to say in my heart that it is only my lucky stars that I should thank—saying, in other words, that there is nobody to thank—but that is what I believe; there is no intentional object in this case to be identified as you.

Suppose instead that I was convinced that I did have a secret helper but that it wasn't you—it was Cameron Diaz. As I penned my thank-you notes to her, and thought lovingly about her, and marveled at her generosity to me, it would surely be misleading to say that you were the object of my gratitude, even though you were in fact the one who did the deeds that I am so grateful for. And then suppose I gradually began to suspect that I had been ignorant and mistaken, and eventually came to the correct realization that you were indeed the proper recipient of my gratitude. Wouldn't it be strange for me to put it this way: "Now I understand: you are Cameron Diaz!"

--Daniel Dennett, Breaking the Spell (discussing the differences between the "intentional object" of a belief and the thing-in-the-world inspiring that belief)

Replies from: MugaSofer
comment by MugaSofer · 2013-01-10T08:47:51.026Z · LW(p) · GW(p)

He's talking about God here, right?

Replies from: Jay_Schweikert
comment by Jay_Schweikert · 2013-01-10T15:04:06.358Z · LW(p) · GW(p)

In large part, yes. This passage is in Dennett's chapter on "Belief in Belief," and he has an aside on the next page describing how to "turn an atheist into a theist by just fooling around with words" -- namely, that "if 'God' were just the name of whatever it is that produced all creatures great and small, then God might turn out to be the process of evolution by natural selection."

But I think there's also a more general rationality point about keeping track of the map-territory distinction when it comes to abstract concepts, and about ensuring that we're not confusing ourselves or others by how we use words.

Replies from: DaFranker, MugaSofer
comment by DaFranker · 2013-01-10T15:30:39.138Z · LW(p) · GW(p)

Besides, none of it passes an ideological turing test with an overwhelming majority of God-believers. I tried it.

Replies from: Jay_Schweikert
comment by Jay_Schweikert · 2013-01-10T15:53:33.113Z · LW(p) · GW(p)

Sorry, can you clarify what you mean here? None of what passes an ideological turing test? Are you saying something like "theists erroneously conclude that the proponents of evolution must believe in God because evolutionists believe that evolution is what produced all creatures great and small"? What exactly is the mistake that theists make on this point that would lead them to fail the ideological turing test?

Or, did I misunderstand you, and are you saying that people like Dennett fail the ideological turing test with theists?

Replies from: DaFranker
comment by DaFranker · 2013-01-10T16:44:36.969Z · LW(p) · GW(p)

Oh, sorry.

"if 'God' were just the name of whatever it is that produced all creatures great and small, then God might turn out to be the process of evolution by natural selection."

This, specifically, almost never passes an i-turing test IME. I've been called a "sick scientologist" (I assume they didn't know what "Scientology" really is) on account of the claim that if there is a "God", it's the process by which evolution or physics happens to work in our world.

Likewise, if I understand what Dennett is saying correctly, the things he's saying are not accepted by God-believers, namely that God could be any sort of metaphor or anthropomorphic representation of natural processes, or of the universe and its inner workings, or "fate" in the sense that "fate" and "free will" are generally understood (i.e. the dissolved explanation) by LWers, or some unknown abstract Great Arbiter of Chance and Probability.

(I piled in some of my own attempts in there, but all of the above was rejected time and time again in discussion with untrained theists, down to a single exception who converted to a theology-science hybrid later on and then, last I heard, doesn't really care about theological issues anymore because they seem to have realized that it makes no difference and intuitively dissolved their questions. Discussions with people who have thoroughly studied formal theology usually fare slightly better, but they also have a much larger castle of anti-epistemology to break down.)

Replies from: Jay_Schweikert, ikrase, MugaSofer
comment by Jay_Schweikert · 2013-01-10T17:16:22.317Z · LW(p) · GW(p)

Ah, okay, thanks for clarifying. In case my initial reply to MugaSofer was misleading, Dennett doesn't really seem to be suggesting here that this is really what most theists believe, or that many theists would try to convert atheists with this tactic. It's more just a tongue-in-cheek example of what happens when you lose track of what concept a particular group of syllables is supposed to point at.

But I think there are a great many people who purport to believe in "God," whose concept of God really is quite close to something like the "anthropomorphic representation of natural processes, or of the universe and its inner workings." Probably not for those who identify with a particular religion, but most of the "spiritual but not religious" types seem to have something like this in mind. Indeed, I've had quite a few conversations where it became clear that someone couldn't tell me the difference between a universe where "God exists" and where "God doesn't exist."

Replies from: DaFranker, MugaSofer
comment by DaFranker · 2013-01-10T17:25:42.967Z · LW(p) · GW(p)

Regarding the second paragraph, I agree with your estimate that most "spiritual but not religious" people might think this way. I was, at some point in the past, exactly there in belief-space - I identified as "spiritual but not religious" explicitly, and explicitly held beliefs along those lines (minus the "anthropomorphic" part, keeping only the "mental" or "thinking" part of the anthropomorphism for some reason).

When I later realized that there was no tangible difference and no existing experiment that could tell me whether it was true, I kind of stopped caring, and eventually the questions dissipated on their own, though I couldn't tell exactly why at the time. When I found LW and read the sequences, I figured out what had happened, which was fun, but the real crisis of faith (if you can call it that - I never was religious to begin with, only "spiritual") had happened long before then.

People I see who call themselves "spiritual but not religious" and also know some science seem to behave in very similar manners to how I did back then, so I think it makes sense to assume a significant number of them believe something like this.

comment by MugaSofer · 2013-01-11T13:20:40.227Z · LW(p) · GW(p)

In case my initial reply to MugaSofer was misleading, Dennett doesn't really seem to be suggesting here that this is really what most theists believe, or that many theists would try to convert atheists with this tactic.

For the record, I didn't interpret your comment that way.

comment by ikrase · 2013-01-11T22:47:11.752Z · LW(p) · GW(p)

Before I became a rationalist, I believed that there was no god, but there were souls, and they manifested through making quantum randomness nonrandom.

Replies from: None
comment by [deleted] · 2013-01-11T23:18:51.310Z · LW(p) · GW(p)

Which part(s) of that set of beliefs did becoming a rationalist cause you to change?

Replies from: ikrase
comment by ikrase · 2013-01-12T02:04:14.267Z · LW(p) · GW(p)

This was long before Less Wrong.

I realized that lower-level discussions of free will were kind of pointless. I abandoned the eternal-springing hope that souls and psychic powers (Hey! Look! For some reason, all of the air molecules just happened to be moving upward at the same time! It seems like this guy is a magnet for one-in-10^^^^^^^10 thermodynamic occurrences! And they always help him!) could exist. I fully accepted the physical universe.

Replies from: somervta
comment by somervta · 2013-02-01T05:17:57.385Z · LW(p) · GW(p)

I get the concept of hyperbole, but this:

It seems like this guy is a magnet for one-in-10^^^^^^^10 thermodynamic occurrences!

Is ludicrously too far.

Replies from: ikrase
comment by ikrase · 2013-02-01T16:17:20.280Z · LW(p) · GW(p)

It's two tens with six supers between them! That's twice as much as 10^^^10, right!

I guess it just intuitively seems like there should be a useful not-impossible-just-rare event that has a probability in that range (long-term vacuum fluctuation appearance of a complex and useful machine on the order of 5kg, maybe?)

Replies from: Kindly
comment by Kindly · 2013-02-01T18:13:13.490Z · LW(p) · GW(p)

Not... quite.

Let's say there are 10^^10 particles in the universe, each one of them independently has a 1 in 10^^10 chance of doing what we want over some small unit of time, and we are interested in 10^^10 of those units of time. Then the probability that the event we want to observe happens is much better than 1 in 10^^12, and that was only two up-arrows.

(We can rewrite ((10^^10)^(10^^10))^(10^^10) as 10^(10^^9 x 10^^10 x 10^^10) which is less than 10^((10^^10)^3) which is less than 10^((10^^10)^10). This would be the same as 10^^12 if we took exponents in a different order, and the order used to calculate 10^^12 happens to be the one that gives the largest possible number. Actually if I were more careful I could probably get 10^^11 as a bound as well.)

And although I'm not entirely sure about the time-resolution business, I think the numbers in the calculation I just did are an upper bound for what we'd want in order to compute the probability of any universe-history at an atomic scale.
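Kindly's bound can't be verified numerically (10^^10 is far too large to construct), but the same algebra can be sanity-checked at a smaller tower height. A Python sketch, with the height scaled down from 10 to 2 and the comparison done at the double-logarithm level; this is a toy check, not a verification of the height-10 claim:

```python
from math import log10

def tet10(h):
    """10^^h in Knuth up-arrow notation: a tower of h tens."""
    x = 1
    for _ in range(h):
        x = 10 ** x
    return x

# The inequality ((10^^n)^(10^^n))^(10^^n) < 10^^(n+2),
# checked at n = 2 instead of n = 10 so the numbers are tractable.
n = 2
t = tet10(n)                       # 10^^2 = 10**10

# log10 of the left-hand side: (10^^(n-1)) * (10^^n) * (10^^n)
log10_lhs = tet10(n - 1) * t * t   # 10 * 10**10 * 10**10 = 10**21

# Taking log10 twice collapses the right-hand side:
# log10(log10(10^^(n+2))) = log10(10^^(n+1)) = 10^^n.
loglog_lhs = log10(log10_lhs)      # 21.0
loglog_rhs = t                     # 10**10
assert loglog_lhs < loglog_rhs     # the extra tower levels dominate
```

The double logarithm is what makes the comparison feasible: one side collapses to a small integer while the other is still a full tower level.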

comment by MugaSofer · 2013-01-11T13:21:34.688Z · LW(p) · GW(p)

I've been called a "sick scientologist" (I assume they didn't know what "Scientology" really is) on account of the claim that if there is a "God", it's the process by which evolution or physics happens to work in our world.

Holy cow, you've tried this? Were you dealing with creationists?

Replies from: DaFranker
comment by DaFranker · 2013-01-11T14:39:33.010Z · LW(p) · GW(p)

I assume a significant number of them were. I also tried subtly using God/Fate, God/Freewill, God/Physics, God/Universe, God/ConsciousMultiverse, and God/Chance, as interchangeable "redefinitions" (of course, on different samples each time) and was similarly called on it.

Incidentally, I can't confirm if this suggests a pattern (it probably does), but in one church I tried, for fun, combining all of them and just conflating all the meanings of all the above into "God", and then sometimes using the specific terms and/or God interchangeably when discussing a specific subset or idea. The more confused I made it, the more people I convinced and got to engage in self-reinforcing positive-affect dialogue. So if this was the only evidence available, I'd be forced to tentatively conclude: The more confused your usage of "God" is, the more it matches the religious usage, and the more it passes ideological turing tests!

(Spoiler: It does. The rest of my evidence confirms this.)

Replies from: TheOtherDave, BerryPick6, MugaSofer
comment by TheOtherDave · 2013-01-11T18:02:56.262Z · LW(p) · GW(p)

This ought not be surprising. The more confused a concept is, the more freedom my audience has to understand it to mean whatever suits their purposes. In some audiences, this means it gets criticized more. In others, it gets accepted more uncritically.

comment by BerryPick6 · 2013-01-11T15:17:19.351Z · LW(p) · GW(p)

This reminds me a lot of Spinoza's proof of God in Ethics, although I recognize that is probably partially due to personal biases of mine.

comment by MugaSofer · 2013-01-11T15:07:03.592Z · LW(p) · GW(p)

Hmm. I'm pretty sure that if I renamed some other confused idea "God" it wouldn't work so well. Or do you mean confusing?

I assume a significant number of them were.

On its own, that sounds like your assumption is based on the fact that they were religious, which is on the face of it absurd, so I'm guessing you have some evidence you declined to mention.

Incidentally, where does the term "ideological turing test" come from? I've never heard it before.

Replies from: Watercressed, DaFranker
comment by Watercressed · 2013-01-13T05:29:24.964Z · LW(p) · GW(p)

The term was coined by Bryan Caplan here

Replies from: MugaSofer
comment by MugaSofer · 2013-01-13T13:31:26.478Z · LW(p) · GW(p)

Ah, cool. That's actually a really good idea, someone should set that up. An empirical test of how well you understand a position/ideology.

comment by DaFranker · 2013-01-11T15:22:58.492Z · LW(p) · GW(p)

Hmm. I'm pretty sure that if I renamed some other confused idea "God" it wouldn't work so well. Or do you mean confusing?

Yes, sorry. I was using the term "confused" in a slightly different manner from the one LWers are used to, and "confusing" fits better. Basically, "meaninglessly mysterious and deep-sounding" would be the more LW-friendly description, I think.

On its own, that sounds like your assumption is based on the fact that they were religious, which is on the face of it absurd, so I'm guessing you have some evidence you declined to mention.

Ah, yes. Mostly the conversations and responses I got themselves gave me very strong impressions of creationism, and also some (rather unreliable, but still sufficient Bayesian evidence) small-scale, local, privately-funded survey statistics about religion and beliefs.

To top that, most of the religious places and forums/websites I was visiting were found partially through the help of my at-the-time-girlfriend, whose family was very religious (and dogmatic) and creationist, so I suspect there probably was some effect there. I don't count this, though, because that would be double-counting (it's overridden by the "conversations with people" evidence).

Incidentally, where does the term "ideological turing test" come from? I've never heard it before.

No clue. I first saw it on LessWrong, and I think someone linked me to a wiki page about it when I asked what it meant, but I can't remember or find that instance.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-13T10:41:12.675Z · LW(p) · GW(p)

Thanks for explaining! Shame about the ITT though.

comment by MugaSofer · 2013-01-11T13:18:43.650Z · LW(p) · GW(p)

That's what I thought. Thanks for explaining.

comment by arborealhominid · 2013-01-08T00:38:19.976Z · LW(p) · GW(p)

Tobias adjusted his wings and appeared to tighten his talons on the branch. "Maybe you’re right. I don’t know. Look, Ax, it’s a whole new world. We’re having to make all this up as we go along. There aren’t any rules falling out of the sky telling us what and what not to do." "What exactly do you mean?" "Too hard to explain right now," Tobias said. "I just mean that we don’t really have any time-tested rules for dealing with these issues... So we have to see what works and what doesn’t. We can’t afford to get so locked into one idea that we defend it to the death, without really knowing if that idea works- in the real world."

  • Animorphs, book 52: The Sacrifice
comment by Eugine_Nier · 2013-01-02T03:26:49.878Z · LW(p) · GW(p)

The Harvard Law states: Under controlled conditions of light, temperature, humidity, and nutrition, the organism will do as it damn well pleases.

-- Larry Wall

Replies from: RomeoStevens
comment by RomeoStevens · 2013-01-02T18:48:12.229Z · LW(p) · GW(p)

See the Mouse Universe.

comment by lukeprog · 2013-01-30T22:38:17.842Z · LW(p) · GW(p)

Mendel’s concept of the laws of genetics was lost to the world for a generation because his publication did not reach the few who were capable of grasping and extending it; and this sort of catastrophe is undoubtedly being repeated all about us, as truly significant attainments become lost in the mass of the inconsequential.

Vannevar Bush

comment by taelor · 2013-01-02T15:41:07.076Z · LW(p) · GW(p)

It is startling to realize how much unbelief is necessary to make belief possible. What we know as blind faith is sustained by innumerable unbeliefs: the fanatical Japanese in Brazil refused to believe for years the evidence of Japan's defeat; the fanatical Communist refuses to believe any unfavorable reports or evidence about Russia, nor will he be disillusioned by seeing with his own eyes the cruel misery inside the Soviet promised land.

It is the true believer's ability to "shut his eyes and stop his ears" to facts that do not deserve to be either seen or heard which is the source of his unequaled fortitude and constancy. He can not be frightened by danger, nor disheartened by obstacles nor baffled by contradictions because he denies their existence. Strength of faith, as Bergson pointed out, manifests itself not in moving mountains, but in not seeing mountains to move.

-- Eric Hoffer, The True Believer

Replies from: simplicio
comment by simplicio · 2013-01-02T22:27:29.253Z · LW(p) · GW(p)

A decent quote, except I am minded to nitpick that there is no such thing as unbelief as a separate category from belief. We just have credences.

Many futile conversations have I seen among the muggles, wherein disputants tried to make some Fully General point about unbelief vs belief, or doubt vs certainty.

comment by taelor · 2013-01-01T15:02:53.841Z · LW(p) · GW(p)

As for the hopeful, it does not seem to make any difference who it is that is seized by a wild hope -- whether it be an enthusiastic intellectual, a land-hungry farmer, a get-rich-quick speculator, a sober merchant or industrialist, a plain workingman or a noble lord -- they all proceed recklessly with the present, wreck it if they must, and create a new world. [...] When hopes and dreams are loose on the streets, it is well for the timid to lock doors, shutter windows and lie low until the wrath has passed. For there is often a monstrous incongruity between the hopes, however noble and tender, and the action which follows them. It is as if ivied maidens and garlanded youths were to herald the four horsemen of the apocalypse.

-- Eric Hoffer, The True Believer

comment by A1987dM (army1987) · 2013-01-12T20:52:37.794Z · LW(p) · GW(p)

Unfortunately, this is how the brain works:

-- Sir! We are receiving information that conflicts with the core belief system!

-- Get rid of it.

Beatrice the Biologist

comment by Zubon · 2013-01-06T00:47:42.900Z · LW(p) · GW(p)

Obviously, it was his own view that had been in error. That was quite a realization, that he had been wrong. He wondered if he had ever been wrong about anything important.

-- Sterren with a literal realization that the territory did not match his mental map in The Unwilling Warlord by Lawrence Watt-Evans

comment by A1987dM (army1987) · 2013-01-01T20:20:35.558Z · LW(p) · GW(p)

If you'd have told a 14th-century peasant that there'd be a huge merchant class in the future who would sit in huge metal cylinders eating meals and drinking wine while the cylinders hurtled through the air faster than a speeding arrow across oceans and continents to bring them to far-flung business opportunities, the peasant would have classified you as insane. And he'd have been wrong to the tune of a few gazillion frequent-flyer miles.

-- someone on Usenet replying to someone deriding Kurzweil

Replies from: David_Gerard
comment by David_Gerard · 2013-01-02T00:52:06.092Z · LW(p) · GW(p)

In general, though, that argument is the Galileo gambit and not a very good argument.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-02T09:50:35.602Z · LW(p) · GW(p)

There's a more charitable reading of this comment, which is just "the absurdity heuristic is not all that reliable in some domains."

Replies from: Eliezer_Yudkowsky, army1987
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-01-02T10:17:06.095Z · LW(p) · GW(p)

What makes this the Galileo Gambit is that the absurdity factor is being turned into alleged support (by affective association with the positive benefits of air travel and frequent flier miles) rather than just being neutralized. Contrast to http://lesswrong.com/lw/j1/stranger_than_history/ where absurdity is being pointed out as a fallible heuristic but not being associated with positives.

comment by A1987dM (army1987) · 2013-01-02T23:57:25.625Z · LW(p) · GW(p)

That's the way I interpreted it. (How come I tend to read pretty much anything¹ charitably?)


  1. Well, not really anything. I don't think I would have been capable of this.
comment by foret · 2013-01-14T21:52:52.264Z · LW(p) · GW(p)

In reference to Occam's razor:

"Of course giving an inductive bias a name does not justify it."

--from Machine Learning by Tom M. Mitchell

Interesting how a concept seems more believable if it has a name...

Replies from: robert-miles
comment by Robert Miles (robert-miles) · 2013-04-25T17:09:10.727Z · LW(p) · GW(p)

Or less. Sometimes an assumption is believed implicitly, and it's not until it has a name that you can examine it at all.

comment by SPLH · 2013-01-13T07:34:40.210Z · LW(p) · GW(p)

"De notre naissance à notre mort, nous sommes un cortège d’autres qui sont reliés par un fil ténu."

Jean Cocteau

("From our birth to our death, we are a procession of others whom a fine thread connects.")

Replies from: simplicio
comment by woodside · 2013-01-03T11:01:46.649Z · LW(p) · GW(p)

It's not easy to find rap lyrics that are appropriate to be posted here. Here's an attempt.

Son, remember when you fight to be free

To see things how they are and not how you like em to be

Cause even when the world is falling on top of me

Pessimism is an emotion, not a philosophy

Knowing what's wrong doesn't imply that you right

And its another, when you suffer to apply it in life

But I'm no rookie

And I'm never gonna make the same mistake twice pussy

  • Immortal Technique "Mistakes"
comment by TsviBT · 2013-01-02T16:57:38.810Z · LW(p) · GW(p)

There are four types among those who study with the Sages: the sponge, the funnel, the strainer, the sifter. The sponge absorbs everything; the funnel - in one end and out the other; the strainer passes the wine and retains the dregs; the sifter removes the chaff and retains the edible wheat.

-Pirkei Avot (5:15)

Replies from: MugaSofer, army1987
comment by MugaSofer · 2013-01-02T17:30:24.563Z · LW(p) · GW(p)

Deep wisdom indeed. Some people believe the wrong things, some believe the right things, some believe both, and some believe neither.

Replies from: TsviBT
comment by TsviBT · 2013-01-02T20:26:50.352Z · LW(p) · GW(p)

To me, it expresses the need to pay attention to what you are learning, and decide which things to retain and which to discard. E.g. one student takes a course in Scala and memorizes the code for generics, while the other writes the code but focuses on understanding the notion of polymorphism and what it is good for.
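The contrast drawn here (memorizing the syntax for generics vs. understanding what polymorphism buys you) can be made concrete. A minimal sketch, in Python rather than the Scala the comment mentions; the `Stack` class is a hypothetical example, not anything from the thread:

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class Stack(Generic[T]):
    """One generic definition, reusable for any element type.

    The point of parametric polymorphism: we write the push/pop logic
    once, and the type parameter T lets a checker verify that an
    int-stack only yields ints, a str-stack only strs, and so on.
    """
    def __init__(self) -> None:
        self._items: list[T] = []

    def push(self, item: T) -> None:
        self._items.append(item)

    def pop(self) -> T:
        return self._items.pop()

ints: Stack[int] = Stack()
ints.push(1)
ints.push(2)
print(ints.pop())  # 2

strs: Stack[str] = Stack()
strs.push("a")
print(strs.pop())  # a
```

A student who only memorized the `Generic[T]` incantation can reproduce this; one who grasped the idea can also recognize it in Scala's `Stack[T]`, Java's `Stack<T>`, or an ML signature.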

Replies from: MugaSofer
comment by MugaSofer · 2013-01-03T11:25:38.271Z · LW(p) · GW(p)

I genuinely don't understand this comment.

Replies from: TsviBT
comment by TsviBT · 2013-01-03T21:36:52.136Z · LW(p) · GW(p)

Sorry. Attempt #2:

If I had infinite storage space and computing power, I would store every single piece of information I encountered. I don't, so instead I have to efficiently process and store things that I learn. This generally requires that I throw information out the window. For example, if I take a walk, I barely even process most of the detail in my visual input, and I remember very little of it. I only want to keep track of a very few things, like where I am in relation to my house, where the sidewalk is, and any nearby hazards. When the walk is over, I discard even that information. On the other hand, I often have to take derivatives. Although understanding what a derivative means is very important, it would be silly of me to rederive e.g. the chain rule each time I wanted to use it. That would waste a lot of time, and it does not take a lot of space to store the procedure for applying the chain rule. So I store that logically superfluous information because it is important.

In other words, I have to be picky about what I remember. Some information is particularly useful or deep, some information isn't. Just because this is incredibly obvious doesn't mean we don't need to remind ourselves to consciously decide what to pay attention to.

I thought the quote expressed this idea nicely and compactly. Whoever wrote the quote probably did not mean it in quite the same way I understand it, but I still like it.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-04T11:08:24.431Z · LW(p) · GW(p)

While this comment is true - you can't remember everything - I'm not sure how you could get that from the categorization in the quote. Still, if that's what you got out of it, I can see why you posted it here.

comment by A1987dM (army1987) · 2013-01-02T23:49:36.883Z · LW(p) · GW(p)

To doubt everything or to believe everything are two equally convenient solutions; both dispense with the necessity of reflection.

-- Henri Poincaré

Replies from: TobyBartels
comment by Eugine_Nier · 2013-01-02T03:24:52.430Z · LW(p) · GW(p)

[Physics] has come to see that thinking is merely a form of human activity…with no assurance whatever that an intellectual process has validity outside the range in which its validity has already been checked by experience.

-- P. W. Bridgman, ‘‘The Struggle for Intellectual Integrity’’

comment by Alejandro1 · 2013-01-01T15:46:33.381Z · LW(p) · GW(p)

The universe is not indifferent. How do I know this? I know because I am part of the universe, and I am far from indifferent.

--Scott Derrickson

Replies from: NoisyEmpire, Kindly, MixedNuts, BerryPick6, Jayson_Virissimo, taelor
comment by NoisyEmpire · 2013-01-02T19:26:46.014Z · LW(p) · GW(p)

While affirming the fallacy-of-composition concerns, I think we can take this charitably to mean "The universe is not totally saturated with only indifference throughout, for behold, this part of the universe called Scott Derrickson does indeed care about things."

Replies from: army1987
comment by A1987dM (army1987) · 2013-01-02T23:42:06.200Z · LW(p) · GW(p)

That's the way I interpreted it, too. There's a speech in HP:MOR where Harry makes pretty much the same point.

Replies from: NoisyEmpire
comment by NoisyEmpire · 2013-01-03T01:38:40.957Z · LW(p) · GW(p)

“There is light in the world, and it is us!”

Love that moment.

Replies from: Alejandro1
comment by Alejandro1 · 2013-01-04T21:52:27.143Z · LW(p) · GW(p)

That's exactly the sentiment I was aiming for with the quote.

comment by Kindly · 2013-01-02T14:59:53.878Z · LW(p) · GW(p)

Scott Derrickson is indifferent. How do I know this? I know because Scott Derrickson's skin cells are part of Scott Derrickson, and Scott Derrickson's skin cells are indifferent.

Replies from: army1987
comment by A1987dM (army1987) · 2013-01-02T23:44:59.025Z · LW(p) · GW(p)

If you interpret “X is indifferent” as “no part of X cares”, the original quote is valid and yours isn't.
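The quantifier point can be sketched concretely; a toy model (hypothetical, not from the thread) treating "X is indifferent" as "no part of X cares":

```python
def is_indifferent(parts_care):
    # "X is indifferent" read as: no part of X cares.
    return not any(parts_care)

# The universe contains at least one caring part (Scott Derrickson),
# so under this reading it is not indifferent -- the quote's inference holds.
universe = [False, False, True]
print(is_indifferent(universe))  # False

# The parody fails under the same reading: indifferent skin cells don't
# make the whole indifferent, because some other part still cares.
derrickson = [True, False]  # mind cares, skin cells don't
print(is_indifferent(derrickson))  # False
```

The asymmetry is just quantifier scope: one caring part suffices to defeat "no part cares", while one indifferent part does not establish it.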

comment by MixedNuts · 2013-01-01T16:47:35.589Z · LW(p) · GW(p)

I touched her hand. Her hand touched her boob. By the transitive property, I got some boob. Algebra's awesome!

-- Steve Smith, American Dad!, season 1, episode 7 "Deacon Stan, Jesus Man", on the applicability of this axiom.

comment by BerryPick6 · 2013-01-01T16:54:19.191Z · LW(p) · GW(p)

We are all faced throughout our lives with agonizing decisions. Moral choices. Some are on a grand scale. Most of these choices are on lesser points. But! We define ourselves by the choices we have made. We are in fact the sum total of our choices. Events unfold so unpredictably, so unfairly, human happiness does not seem to have been included, in the design of creation. It is only we, with our capacity to love, that give meaning to the indifferent universe. And yet, most human beings seem to have the ability to keep trying, and even to find joy from simple things like their family, their work, and from the hope that future generations might understand more.

-- Closing lines of Crimes and Misdemeanors, script by Woody Allen.

Replies from: Alejandro1
comment by Alejandro1 · 2013-01-01T21:59:23.619Z · LW(p) · GW(p)

I agree with the sentiment expressed in this quote, and I don't see it as opposed to the one expressed in mine, but judging from the pattern of upvotes and downvotes, people do not agree.

I guess the quote I posted is ambiguous. You could read it as a kind of bad theistic argument ("since there is meaning in my life, there must be Ultimate Meaning in the universe"). Or you could read it as an anti-nihilistic quote ("even if there is no Ultimate Meaning, the fact that there is meaning in my life is enough to make it false that the universe is meaningless"). I was assuming the second reading, but I guess the people who voted assumed the first one. Or perhaps they saw the second one and just judged it a poor way of stating the idea.

comment by Jayson_Virissimo · 2013-01-02T03:13:39.722Z · LW(p) · GW(p)

The fallacy of composition arises when one infers that something is true of the whole from the fact that it is true of some part of the whole (or even of every proper part).

comment by taelor · 2013-01-02T04:57:20.408Z · LW(p) · GW(p)

Scott Derrickson may be a part of the universe, but he is not the universe.

comment by airandfingers · 2013-01-11T21:04:10.940Z · LW(p) · GW(p)

Most things that we and the people around us do constantly... have come to seem so natural and inevitable that merely to pose the question, 'Why are we doing this?' can strike us as perplexing - and also, perhaps, a little unsettling. On general principle, it is a good idea to challenge ourselves in this way about anything we have come to take for granted; the more habitual, the more valuable this line of inquiry.

-Alfie Kohn, "Punished By Rewards"

comment by TimS · 2013-01-07T01:51:05.283Z · LW(p) · GW(p)

Let’s see if I get this right. Fear makes you angry and anger makes you evil, right?

Now I’ll concede at once that fear has been a major motivator of intolerance in human history. I can picture knightly adepts being taught to control fear and anger, as we saw credibly in “The Empire Strikes Back.” Calmness makes you a better warrior and prevents mistakes. Persistent wrath can cloud judgment. That part is completely believable.

But then, in “Return of the Jedi,” Lucas takes this basic wisdom and perverts it, saying — “If you get angry — even at injustice and murder — it will automatically and immediately transform you into an unalloyedly evil person! All of your opinions and political beliefs will suddenly and magically reverse. Every loyalty will be forsaken and your friends won’t be able to draw you back. You will instantly join your sworn enemy as his close pal or apprentice. All because you let yourself get angry at his crimes.”

Uh, say what? Could you repeat that again, slowly?

In other words, getting angry at Adolf Hitler will cause you to rush right out and join the Nazi Party? Excuse me, George. Could you come up with a single example of that happening? Ever?

David Brin

Replies from: gwern, wedrifid, ChristianKl
comment by gwern · 2013-01-07T03:39:44.701Z · LW(p) · GW(p)

Lots of people in Weimar Germany got angry at the emerging fascists - and went out and joined the Communist Party. It was tough to be merely a liberal democrat.

Replies from: TimS
comment by TimS · 2013-01-07T16:54:13.271Z · LW(p) · GW(p)

I suspect you have your causation backwards. People created / joined the Freikorps and other quasi-fascist institutions to fight the threat of Communism. Viable international Communism (~1917) predates the fall of the Kaiser - and the Freikorps had no reason to exist when the existing authorities were already willing and capable of fighting Communism.

More generally, the natural reading of the Jedi moral rules is that the risk of evil from strong emotions was so great that liberal democrats should be prohibited from feeling any (neither anger-at-injustice nor love).

Replies from: gwern
comment by gwern · 2013-01-07T17:04:15.327Z · LW(p) · GW(p)

I suspect you have your causation backwards.

I don't know why you would think the causation would be only in one direction.

Replies from: TimS
comment by TimS · 2013-01-07T17:16:12.330Z · LW(p) · GW(p)

Now I'm confused. What is the topic of discussion? Clarification of Weimar Republic politics is not responsive to the Jedi-moral-philosophy point. Anger causing political action, including extreme political action, is a reasonable point, but I don't actually think anger-at-opponent-unjust-acts was the cause of much Communist or Fascist membership.

You might think anger-at-social-situation vs. anger-at-unjust-acts is excessive hair-splitting. But I interpreted your response as essentially saying "Anger-at-injustice really does lead fairly directly to evil." Your example does not support that assertion. If I've misinterpreted you, please clarify. I often seem to make these interpretative mistakes, and I'd like to do better at avoiding these types of misunderstandings in the future.

Replies from: gwern
comment by gwern · 2013-01-07T17:20:44.742Z · LW(p) · GW(p)

But I interpreted your response as essentially saying "Anger-at-injustice really does lead fairly directly to evil." Your example does not support that assertion.

It certainly does. In reaction to one evil, Nazism, Germans could go and support a second evil, Communism, which to judge by its global body count was many times worse than Nazism, which is exactly the sort of reaction Brin is ridiculing: "oh, how ridiculous, how could getting angry at evil make you evil too?" Well, it could make you support another evil, perhaps even aware of the evil, on the theory that 'the enemy of my enemy is my friend'...

I don't know how you could get a better example of 'fighting fire with fire' than that or 'when fighting monsters, beware lest you become one'.

Replies from: TimS
comment by TimS · 2013-01-07T17:28:32.052Z · LW(p) · GW(p)

Anger can lead to evil vs. Anger must lead to evil.

And ignoring anger for the moment, Jedi moral philosophy says love leads to evil (that's the Anakin-Padme plot of Attack of the Clones - the romance was explicitly forbidden by Jedi rules).

Replies from: gwern
comment by gwern · 2013-01-07T17:31:54.998Z · LW(p) · GW(p)

Anger can lead to evil vs. Anger must lead to evil.

Not what we're discussing.

And ignoring anger for the moment

Let's stay on topic here.

Let me quote Brin:

In other words, getting angry at Adolf Hitler will cause you to rush right out and join the Nazi Party? Excuse me, George. Could you come up with a single example of that happening? Ever?

How is my example - chosen from the very time period and milieu that Brin himself chose - not a 'single example of that happening ever'?

Replies from: TimS
comment by TimS · 2013-01-07T17:58:20.217Z · LW(p) · GW(p)

Anger can lead to evil vs. Anger must lead to evil.

Not what we're discussing.

Exactly what we are discussing. Brin explicitly acknowledges the first point - he's rejecting the second point.

How is my example - chosen from the very time period and milieu that Brin himself chose - not a 'single example of that happening ever'?

That's not a charitable reading of that point. In the real world, there are lots of different ways to be evil. In Jedi-land, evil = Sith.

Anakin opposes the Sith. Then he feels strong emotions (love of Padme). Then he becomes Sith. Not extremist-opponent-who-is-just-as-bad.

Opposing Nazis does not lead one to becoming a Nazi. Of course, in the real world, Nazi isn't the only way to be evil.

Replies from: OrphanWilde, gwern
comment by OrphanWilde · 2013-01-07T18:21:24.338Z · LW(p) · GW(p)

In fairness to Lucas, Anakin's love of Padme isn't what converted him; it was Mace Windu's disregard for the morality the Jedi professed to follow.

I regard the Jedi versus Sith as less "Good versus evil" and more "Principle Ethics versus Pragmatist/Utilitarian Ethics" - Anakin reluctantly embraced Principles until he saw that the Principles were ineffectual; even their adherents would ultimately choose pragmatism. It's kind of implied, in-canon within the movies (the books go further in vindicating the Sith), that Sidious' master might not have been evil, per se; he sought to end death.

comment by gwern · 2013-01-07T18:17:00.475Z · LW(p) · GW(p)

That's not a charitable reading of that point. In the real world, there are lots of different ways to be evil. In Jedi-land, evil = Sith.

Anakin opposes the Sith. Then he feels strong emotions (love of Padme). Then he becomes Sith. Not extremist-opponent-who-is-just-as-bad.

If there is only one way to be evil in Star Wars, then to become an extremist opponent of a different flavor maps back onto becoming a Sith...

Replies from: TimS
comment by TimS · 2013-01-07T18:59:54.760Z · LW(p) · GW(p)

Respectfully, I think we have reached the limit of our ability to have productive conversation.

(1) I don't desire to have the "Who is more evil: Nazis or Communists?" fight - I'm not sure that discussion is anything more than Blue vs. Green tu quoque mindkiller-ness. The important lesson is "beware 'do not debate him or set forth your own evidence; do not perform replicable experiments or examine history; but turn him in at once to the secret police.'"

(2) It is possible to piece together acceptable moral lessons from Jedi philosophy, just like it is possible to have an interesting story of political intrigue in the world of Harry Potter. It just isn't very true to the source material - neither original author would endorse the improvements.

In short, I'm tapping out.

comment by wedrifid · 2013-01-07T14:45:09.935Z · LW(p) · GW(p)

Let’s see if I get this right. Fear makes you angry and anger makes you evil, right?

If the memories of my youth serve me, anger 'leads to the dark side of the force' via the intermediary 'hate'. That is, it leads you to go around frying things with lightning and choking people with a force grip. This is only 'evil' when you do the killing in cases where killing is not an entirely appropriate response. Unfortunately humans (and furry green muppet 'Lannik') are notoriously bad at judging when drastic violation of inhibitions is appropriate. Power---likely including the power to kill people with your brain---will almost always corrupt.

But then, in “Return of the Jedi,” Lucas takes this basic wisdom and perverts it

Not nearly as much as David Brin perverts Lucas's message. I do in fact reject Yoda's instructions, but I reject what he actually says. I don't need to reject a straw caricature thereof.

“If you get angry — even at injustice and murder — it will automatically and immediately transform you into an unalloyedly evil person!

Automatically. Immediately. Where did this come from? Yoda is 900 years old, wizened and gives clear indications that he thinks of long term consequences rather than being caught up in the moment. We also know he's seen at least one such Jedi to Sith transition with his own eyes (after first predicting it). Anakin took years to grow from a whiny little brat into an awesome badass (I mean... "turn evil"). That is the kind of change that Yoda (and Lucas) clearly have in mind.

All of your opinions and political beliefs will suddenly and magically reverse.

That seems unlikely. It also wasn't claimed by the Furry Master. Instead, what can be expected is that opinions and political beliefs will change in predictable ways---most notably in the direction of endorsing the acquisition and use of power in ways that happen to benefit the self. Maybe the corrupted will change from a Blue to a Green, but more likely they'll change into a NavyBlue and consider it Right to kill Greens with their brain, take all their stuff and ravage their womenfolk (or menfolk, or asexual alien humanoids, depending on generalized sexual orientation).

Every loyalty will be forsaken and your friends won’t be able to draw you back.

Except that Lucas in the very same movie has Darth Vader turn back to the Light and throw Palpatine down some shaft due to loyalty to his son. Perhaps Lucas isn't presenting the moral lesson that Brin believes he is presenting.

Replies from: TimS, TheOtherDave
comment by TimS · 2013-01-07T16:46:26.277Z · LW(p) · GW(p)

Drawing from Attack of the Clones:

The proximate emotion that leads to Anakin's fall is love. Even if we ignore the love-of-mother --> Tusken raider massacre, the romance between Anakin and Padme is expressly forbidden because of the risk of Anakin turning evil.

If any strong emotion has such a strong risk of turning evil that the emotion must be forbidden, we aren't really talking about a moral philosophy that bears any resemblance to one worth trying to implement in real humans.

I'm not saying that strong emotions don't have a risk of going overboard - they obviously do. But the risk is maybe in the 10% range. It certainly isn't in the >90% range.


Immediately. Where did this come from?

That's probably an overstatement by Brin. But evil (Sith-ness) is highly likely to follow from feeling strong emotions (in-universe), and that's not representative of the way things work in the real world. It roughly parallels the false idea that we rationalists want to remove emotions from human experience.

comment by TheOtherDave · 2013-01-07T16:04:33.273Z · LW(p) · GW(p)

Agreed generally, but I will quibble about your last paragraph. Vader's redemption is presented as a Heroic Feat; it is no more representative of normal moral or psychological processes in this universe than blowing up the Death Star with a single shot is representative of normal tactics.

comment by ChristianKl · 2013-01-07T16:32:51.116Z · LW(p) · GW(p)

People in Star Wars don't really have political beliefs in any meaningful sense. The Star Wars universe is actually about a struggle between Good and Evil rather than a struggle between two political factions.

Citizens of the US got angry after 2001. The US became a lot more evil in response, torturing people and committing war crimes such as using drones to attack people who try to rescue the injured.

Replies from: TimS
comment by TimS · 2013-01-07T16:56:17.699Z · LW(p) · GW(p)

The problem Brin is criticizing is that Good is entirely prohibited from feeling strong emotions. Brin explicitly acknowledges that strong emotions can lead to evil acts - he's challenging the implicit idea that strong emotions must lead to evil.

Also, not my downvote.

comment by shminux · 2013-01-29T16:51:56.144Z · LW(p) · GW(p)

When they realized they were in a desert, they built a religion to worship thirstiness.

SMBC comics: a metaphor for deathism.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-01-29T18:46:53.616Z · LW(p) · GW(p)

While I am a fan of SMBC, in this case he's not doing existentialism justice (or not understanding existentialism). Existentialism is not the same thing as deathism. Existentialism is about finding meaning and responsibility in an absurd existence. While mortality is certainly absurd, biological immortality will not make existential issues go away. In fact, I suspect it will make them stronger.


edit: on the other hand, "existentialist hokey-pokey" is both funny and right on the mark!

Replies from: shminux, OrphanWilde
comment by shminux · 2013-01-29T19:37:22.909Z · LW(p) · GW(p)

I don't see how this strip can be considered to be about existentialism.

EDIT: Actually, I'm no longer sure what the strip is about. It obviously starts with Camus' absurdism, but then switches from his anti-nihilist argument against suicide in an absurd world to a potential critique of... what? nihilism? absurdism? as a means of resolving the cognitive dissonance of having a finite lifespan while wanting to live forever... Or does it? Zach Weiner can be convoluted at times.

Replies from: IlyaShpitser, DaFranker
comment by IlyaShpitser · 2013-01-29T19:41:32.886Z · LW(p) · GW(p)

It quotes Camus, the father of existentialism. It quotes from "The Myth of Sisyphus," one of the founding texts of existentialism. The invitation to live and create in the desert (e.g. invitation to find your own meaning, responsibility, and personal integrity without a God or without objective meaning in the world) is the existential answer to the desert of nihilism. Frankly, I am not sure how you can think the strip is about anything else. What do you think existentialism is?


A more accurate pithy summary of existentialism is this: "When they realized they were in a desert, they built water condensators out of sand."


"Beyond the reach of God" is existential.

Replies from: BerryPick6
comment by BerryPick6 · 2013-01-29T19:51:24.625Z · LW(p) · GW(p)

SMBC has also featured a bunch of other strips about existentialism, leading me to suspect he has studied it in some capacity. Notably, here, here, here, here and here.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-01-29T19:54:20.929Z · LW(p) · GW(p)

http://www.smbc-comics.com/index.php?db=comics&id=1595#comic

That's relativism, not existentialism. I mean he's trying to entertain, not be a reliable source about anything. Like wikipedia :).

Replies from: BerryPick6
comment by BerryPick6 · 2013-01-29T20:15:19.025Z · LW(p) · GW(p)

Yeah, the third one I linked to isn't really existentialism either, now that I think about it...

comment by DaFranker · 2013-01-29T20:12:10.398Z · LW(p) · GW(p)

[Meta]

I don't see why the parent was downvoted.

Is it seriously being downvoted just because it called to attention an inference that was not obvious, but seemed obvious to some who had studied a certain topic X?

Replies from: TimS
comment by TimS · 2013-01-29T20:29:11.497Z · LW(p) · GW(p)

Not my downvote. But if you don't know enough about existentialism to recognize Camus is a central early figure, then you don't know enough about existentialism to comment about whether a particular philosophical point invokes existentialism accurately.

If we replaced "Camus" with "J.S. Mill" and "existentialism" with "consequentialism," the error might be clearer.

In short, it isn't an error to miss the reference, but it is an error to challenge someone who explains the reference. (And currently, the karma for the two posts by shminux correctly reflect this difference - with the challenge voted much lower)

Replies from: DaFranker
comment by DaFranker · 2013-01-29T21:11:20.367Z · LW(p) · GW(p)

But if you don't know enough about existentialism to recognize Camus is a central early figure, then you don't know enough about existentialism to comment about whether a particular philosophical point invokes existentialism accurately.

Errh... does not follow.

I care about the central early figures of any topic about as much as I care about the size of the computer monitor used by the person who contributed the most to the reddit codebase.

(edit: To throw in an example, I spent several months in the dark a while back doing bayesian inference while completely missing references to / quotes from Thomas Bayes. Yes, literally, that bad. So forgive me if I wouldn't have caught your reference to consequentialism if you hadn't explicitly stated that as what "J.S. Mill" was linked to.)

In short, it isn't an error to miss the reference, but it is an error to challenge someone who explains the reference.

The later explanation (in response to said "challenge") was necessary for me to understand why someone was talking about existentialism at all in the first place, so the first comment definitely did not make the reference any more obvious or explained (to me, two-place) than it was beforehand.

The "challenge" is actually not obvious to me either. When I re-read the comment, I see someone mentioning that they're missing the information that says "This strip is about existentialism".

If any statement of the form "X is not obvious to me" is considered a challenge to those for whom it is obvious, then I would argue that the agents doing this considering have missed the point of the inferential distance articles. To go meta, this previous sentence is what I would consider a challenge.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-01-29T21:17:57.574Z · LW(p) · GW(p)

I care about the central early figures of any topic about as much as I care about the size of the computer monitor used by the person who contributed the most to the reddit codebase.

I think this is a mistake, and a missed chance to practice the virtue of scholarship. Lesswrong could use much more scholarship, not less, in my opinion. The history of the field often gives more to think about than the modern state of the field.

Progress does not obey the Markov property.

Replies from: shminux, DaFranker
comment by shminux · 2013-01-29T21:44:23.995Z · LW(p) · GW(p)

The history of the field often gives more to think about than the modern state of the field.

Maybe more to think about, but less value for mastering the field, at least in the natural sciences (philosophy isn't one). You can safely delay learning about the history of discovery of electromagnetism, or linear algebra, or the periodic table until after you master the concepts. Apparently in philosophy it's somehow the other way around, you have to learn the whole history first. What a drag.

comment by DaFranker · 2013-01-29T21:39:53.540Z · LW(p) · GW(p)

[Obligatory disclaimer: This is not a challenge.]

I think this is a mistake, and a missed chance to practice the virtue of scholarship.

I honestly don't see how or why.

I already have a rather huge list of things I want to do scholarship with, and I don't see any use I could have for knowledge about the persons behind these things I want to study. Knowing a name for the purposes of searching for more articles written under this name is useful, knowing a name to know the rate of accuracy of predictions made by this name is useful, and often the "central early figures" in a field will coincide with at least one of these or some other criteria for scholarly interest.

I hear Galileo is also a central early figure for something related to stars or stellar motion or heliocentrism or something. Something about stellar bodies, probably. This seems entirely screened off (so as to make knowledge about Galileo useless to me) by other knowledge I have from other sources about other things, like Newtonian physics and relativity and other cool things.

Studying history is interesting, studying the history of some things is also interesting, but the central early figures of some field are only nodes in a history, and relevant to me proportionally to their relevance to the parts of said history that carry information useful for me to remember after having already propagated the effects of this through my belief network.

Once I've done updates on my model based on what happened historically, I usually prefer forgetting the specifics of the history, as I tend to remember that I already learned about this history anyway (which means I won't learn it again, count it again, and break my mind even more later on).

So... I don't see where knowledge about the people comes in, or why it's a good opportunity to learn more. Am I cheating by already having a list of things to study and a large collection of papers to read?

To rephrase, if the information gained by knowing the history of something can be screened off by a more compact or abstract model, I prefer the latter.

Replies from: IlyaShpitser, BerryPick6
comment by IlyaShpitser · 2013-01-29T22:32:09.807Z · LW(p) · GW(p)

To rephrase, if the information gained by knowing the history of something can be screened off by a more compact or abstract model, I prefer the latter.

That's fine if you are trying to do economics with your time. But from your comment it sounded like you didn't care, either. Actually the economics is nontrivial here, because different bits of the brain engage with the formal material vs the historic context.

I think an argument for learning a field (even a formal/mathematical field) as a living process evolving through time, rather than the current snapshot really deserves a separate top level post, not a thread reply.

My personal experience trying to learn math the historic way and the snapshot way is that I vastly prefer the former. Perhaps I don't have a young, mathematically inclined brain. History provides context for notational and conceptual choices, good examples, standard motivating problems that propelled the field forward, lessons about dead ends and stubborn old men, and suggests a theory of concepts as organically evolving and dying, rather than static. Knowledge rooted in historic context is much less brittle.

For example, I wrote a paper with someone about what a "confounder" is. * People have been using that word probably for 70 years without a clear idea of what it means, and the concept behind it for maybe 250 more (http://jech.bmj.com/content/65/4/297.full.pdf+html). In the course of writing the paper we went through maybe half a dozen historic definitions people actually put forth (in textbooks and such), all but one of them "wrong." Probably our paper is not the last word on this. Actually "confounder" as a concept is mostly dying, to be replaced by "confounding" (much clearer, oddly). Even if we agree that our paper happens to be the latest on the subject, how much would you gain by reading it, and ignoring the rest? What if you read one of the earlier "wrong" definitions and nothing else?

You can't screen off, because history does not obey the Markov property.

  • This is "analytic philosophy," I suppose, and in danger of running afoul of Luke's wrath!
comment by BerryPick6 · 2013-01-29T21:50:08.659Z · LW(p) · GW(p)

Am I cheating by already having a list of things to study and a large collection of papers to read?

Not really, but only because the example you gave was Astronomy. If we're talking specifically about Existentialism (although I guess the conversation has progressed a bit past that), I'm not entirely sure how one would come up with a list of readings and concepts without turning to the writings of the Central Figures. (I'm not even sure it's legitimate to call Camus an 'early' thinker, since the Golden Age of Existentialism was definitely when he and Sartre were publishing.)

I would very much agree with your assessment for many if not most scientific fields, but in this particular instance, I happen to disagree that disregarding the Central Figures won't hurt your knowledge and understanding of the topic.

comment by OrphanWilde · 2013-01-29T19:02:54.227Z · LW(p) · GW(p)

Existentialism is just one branch of nihilistic philosophy, one which specifically attempts to address the issues inherent in nihilism.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-01-29T19:15:18.527Z · LW(p) · GW(p)

I think it is more accurate to describe existentialism as a reaction to nihilism, not a branch of nihilism. Camus opposed nihilism. It is true that he (and other existentialists) took nihilism very seriously indeed.

Replies from: OrphanWilde
comment by OrphanWilde · 2013-01-29T22:08:06.647Z · LW(p) · GW(p)

Retracted, sorry - figured out where the disconnect was coming from after reading your other comments, which led to confusion, which led me to try to identify the source. I was interpreting nihilism itself to be the theology of the desert, so your comment didn't make any sense; rereading the comic I realized I had missed the connection between the "Take that!" and the "And yet". It felt to me like an Objectivist complaining that a critic of free market philosophy didn't understand Ayn Rand; taking a generalized point and interpreting it very specifically.

I don't think Camus opposed nihilism, though, I think he opposed the commonly-held philosophic ramifications of nihilism. Existentialism isn't a rejection of nihilism, it's a development of it, or at least that's what it looks like to me, as somebody who finds nihilism to be similar to an argument about what angels look like (given that I'm also an atheist). "What's our purpose?" "What's purpose?" - which is to say, I find the philosophy to be an answer, "Nothing!", in searching for a question. Existentialism replaces the answer with "What you make of it" (broadly speaking, as it's hard to actually pin down any concretes in existentialism, which is an umbrella term for a bunch of loosely-related concepts), but never really identifies the question.

Trivially, you could say the question is "What's the meaning of life?", or something deep-sounding like that, but what is the question really asking? The only meaningful question to my mind is "What should I do with my life?", which doesn't really require deep philosophy.

I'm a lifelong atheist. To me the "Purpose of life" question, as it pertains to atheists, is a concept imported from religion - that we can have a purpose - which lacks the referent which made that concept meaningful - a god or gods, being an entity or entities which can assign such purpose. Nihilism just seems confused, to me, and existentialism is an attempt to address a confused question. Which may or may not make me existentialist, depending on exactly which existentialism you call existentialism.

comment by Alicorn · 2013-01-16T18:13:25.294Z · LW(p) · GW(p)

"My baby is dead. Six months old and she's dead."
"Take solace in the knowledge that this is all part of the Corn God's plan."
"Your god's plan involves dead babies?"
"If you're gonna make an omelette, you're gonna have to break a few children."
"I'm not entirely sure I want to eat that omelette."

-- Scenes From A Multiverse

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-01-17T00:55:41.439Z · LW(p) · GW(p)

This works equally well as an argument against utilitarianism, which I'm guessing may be your intent.

Replies from: Qiaochu_Yuan, TsviBT, MugaSofer, Alicorn
comment by Qiaochu_Yuan · 2013-01-18T05:04:03.837Z · LW(p) · GW(p)

I have no idea what people mean when they say they are against utilitarianism. My current interpretation is that they don't think people should be VNM-rational, and I haven't seen a cogent argument supporting this. Why isn't this quote just establishing that the utility of babies is high?

Replies from: None, CarlShulman, None, Eugine_Nier
comment by [deleted] · 2013-01-18T05:16:29.107Z · LW(p) · GW(p)

I aspire to be VNM rational, but not a utilitarian.

It's all very confusing because they both use the word "utility" but they seem to be different concepts. "Utilitarianism" is a particular moral theory that (depending on the speaker) assumes consequentialism, linearish aggregation of "utility" between people, independence and linearity of utility function components, utility is proportional to "happiness" or "well-being" or preference fulfillment, etc. I'm sure any given utilitarian will disagree with something in that list, but I've seen all of them claimed.

VNM utility only assumes that you assign utilities to possibilities consistently, and that your utilities aggregate by expectation. It also assumes consequentialism in some sense, but it's not hard to make utility assignments that aren't really usefully described as consequentialist.
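To make the distinction concrete, here is a minimal sketch (with made-up outcomes and utility numbers, not anything from the thread): a VNM agent just needs *some* consistent utility assignment over outcomes, with lotteries aggregated by expectation. Nothing requires the utilities to track aggregate happiness across people; this hypothetical agent's values are frankly deontological-flavored.

```python
# A VNM agent needs only a consistent utility assignment whose
# lotteries aggregate by expectation. The numbers below are arbitrary
# illustrations, not a claim about anyone's actual values.

def expected_utility(lottery, utility):
    """Aggregate a lottery (mapping outcome -> probability) by expectation."""
    return sum(p * utility[outcome] for outcome, p in lottery.items())

# Arbitrary, non-utilitarian preferences: this agent values keeping a
# promise more than a small gain in total "happiness".
utility = {
    "break_promise_more_happiness": 1.0,
    "keep_promise_less_happiness": 2.0,
    "status_quo": 0.0,
}

safe = {"status_quo": 1.0}
gamble = {"break_promise_more_happiness": 0.5,
          "keep_promise_less_happiness": 0.5}

# The agent ranks options by expected utility, as VNM requires:
# EU(gamble) = 0.5*1.0 + 0.5*2.0 = 1.5, EU(safe) = 0.0.
print(expected_utility(gamble, utility) > expected_utility(safe, utility))  # True
```

The point of the sketch is only that satisfying the VNM axioms constrains the *shape* of preferences (consistency, expectation), not their *content*.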

I reject "utilitarianism" because it is very vague, and because I disagree with many of its interpretations.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-18T06:11:21.812Z · LW(p) · GW(p)

Thanks for the explanation. Reading through the Wikipedia article on utilitarianism, it seems like this is one of those words that has been muddled by the presence of too many authors using it. In the future I guess I should refer to the concept I had in mind as VNM-utilitarianism.

Replies from: Sniffnoy
comment by Sniffnoy · 2013-01-18T06:43:05.091Z · LW(p) · GW(p)

Probably best not to refer to it with the word "utilitarianism", since it isn't a form of that. Calling it "consequentialism" is arguably enough, since (making appropriate assumptions about what a rational agent must do) a rational consequentialist must use a VNM utility function. But I guess not everyone does in fact agree with those assumptions, so perhaps "utility-function based consequentialism". Or perhaps "VNM-consequentialism".

comment by CarlShulman · 2013-01-18T05:56:34.496Z · LW(p) · GW(p)

A bounded utility function that places a lot of value on signaling/being "a good person" and desirable associate, getting some "warm glow" and "mostly doing the (deontologically) right thing" seems like a pretty good approximation.

comment by [deleted] · 2013-01-19T16:11:57.532Z · LW(p) · GW(p)

I have no idea what people mean when they say they are against utilitarianism.

I find these criticisms by Vladimir_M to be really superb.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-19T19:23:14.596Z · LW(p) · GW(p)

Okay. So none of that is an argument against VNM-rationality, it's an argument against a bunch of other ideas that have historically been attached to the label "utilitarian," right? The main thing I got out of that post is that utilitarianism is hard, not that it's wrong.

Replies from: None
comment by [deleted] · 2013-01-19T19:56:07.156Z · LW(p) · GW(p)

I don't know what you have in mind by your allusion to Morgenstern-von Neumann. The theorem is descriptive, right? It says you can model a certain broad class of decision-making entities as maximizing a utility function. What is VNM-rationality, and what does it mean to argue for it or against it?

If your goal is "to do the greatest good for the greatest number," or a similar utilitarian goal, I am not sure how the VNM theorem helps you.

What do you think of the "interpersonal utility comparison" problem? Vladimir_M regards it as something close to a defeater of utilitarianism.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-19T20:35:48.315Z · LW(p) · GW(p)

I don't know what you have in mind by your allusion to Morgenstern-von Neumann. The theorem is descriptive, right? It says you can model a certain broad class of decision-making entities as maximizing a utility function. What is VNM-rationality, and what does it mean to argue for it or against it?

"People should aim to be VNM-rational." I think of this as a weak claim, which is why I didn't understand why people appeared to be arguing against it. I concluded that they probably weren't, and instead meant something else by utilitarianism, which is why I switched to a different term.

If your goal is "to do the greatest good for the greatest number," or a similar utilitarian goal, I am not sure how the VNM theorem helps you.

Yes, that's why I think of "people should aim to be VNM-rational" as a weak claim and didn't understand why people appeared to be against it.

What do you think of the "interpersonal utility comparison" problem? Vladimir_M regards it as something close to a defeater of utilitarianism.

It seems like a very hard problem, but nobody claimed that ethics was easy. What does Vladimir_M think we should be doing instead?

Replies from: Eugine_Nier, None
comment by Eugine_Nier · 2013-01-21T00:35:31.615Z · LW(p) · GW(p)

"People should aim to be VNM-rational."

What definition of "should" are you using here? Do you mean that people deontologically should aim to be VNM-rational? Or do you mean that people should be VNM-rational in order to maximize some (which?) utility function?

comment by [deleted] · 2013-01-19T21:24:23.918Z · LW(p) · GW(p)

"People should aim to be VNM-rational."

Can you spell this out a little more?

What does Vladimir_M think we should be doing instead?

I don't know. I think this comment reveals a lot of respect for what you might call "folk ethics," i.e. the way normal people do it.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-19T21:39:22.227Z · LW(p) · GW(p)

Can you spell this out a little more?

"People should aim for their behavior to satisfy the VNM axioms." I'm not sure how to get more precise than this.

Replies from: None
comment by [deleted] · 2013-01-19T22:10:16.182Z · LW(p) · GW(p)

"People should aim for their behavior to satisfy the VNM axioms."

OK. But this seems funny to me as a moral prescription. In fact a standard premise of economics is that people's behavior does satisfy the VNM axioms, or at least that deviations from them are random and cancel each other out at large scales. That's sort of the point of the VNM theorem: you can model people's behavior as though they were maximizing something, even if that's not the way an individual understands his own behavior.

Even if you don't buy that premise, it's hard for me to see why famous utilitarians like Bentham or Singer would be pleased if people hewed more closely to the VNM axioms. Couldn't they do so, and still make the world worse by valuing bad things?

If your goal is "to do the greatest good for the greatest number," or a similar utilitarian goal, I am not sure how the VNM theorem helps you.

Yes, that's why I think of "people should aim to be VNM-rational" as a weak claim and didn't understand why people appeared to be against it.

Is "people should aim for their behavior to satisfy the VNM axioms" all that you meant originally by utilitarianism? From what you've written elsewhere in this thread it sounds like you might mean something more, but I could be misunderstanding.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-20T01:39:45.676Z · LW(p) · GW(p)

Even if you don't buy that premise, it's hard for me to see why famous utilitarians like Bentham or Singer would be pleased if people hewed more closely to the VNM axioms. Couldn't they do so, and still make the world worse by valuing bad things?

Yes, but if I think that optimal moral behavior means using a specific utility function, somebody who isn't being VNM-rational is incapable of optimal moral behavior.

Is "people should aim for their behavior to satisfy the VNM axioms" all that you meant originally by utilitarianism? From what you've written elsewhere in this thread it sounds like you might mean something more, but I could be misunderstanding.

It's all I originally meant. I gathered from all of the responses that this is not how other people use the term, so I stopped using it that way.

comment by Eugine_Nier · 2013-01-18T05:30:57.556Z · LW(p) · GW(p)

Well, Alicorn is a deontologist.

In any case, as an ultrafinitist you should know the problems with the VNM theorem.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-18T05:58:20.637Z · LW(p) · GW(p)

I also have no idea what people mean when they say they are deontologists. I've read Alicorn's Deontology for Consequentialists and I still really have no idea. My current interpretation is that a deontologist will make a decision that makes everything worse if it upholds some moral principle, which just seems like obviously a bad idea to me. I think it's reasonable to argue that deontology and virtue ethics describe heuristics for carrying out moral decisions in practice, but heuristics are heuristics because they break down, and I don't see a reasonable way to judge which heuristics to use that isn't consequentialist / utilitarian.

Then again, it's quite likely that my understanding of these terms doesn't agree with their colloquial use, in which case I need to find a better word for what I mean by consequentialist / utilitarian. Maybe I should stick to "VNM-rational."

I also didn't claim to be an ultrafinitist, although I have ultrafinitist sympathies. I haven't worked through the proof of the VNM theorem yet in enough detail to understand how infinitary it is (although I intend to).

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-01-18T07:26:51.744Z · LW(p) · GW(p)

My current interpretation is that a deontologist will make a decision that makes everything worse if it upholds some moral principle, which just seems like obviously a bad idea to me.

Taboo "make everything worse".

At the very least I find it interesting how rarely an analogous objection is raised against VNM-utilitarians with different utility functions. It's almost as if many of the "VNM-utilitarians" around here don't care what it means to "make everything worse" as long as one avoids doing it, and avoids doing it by following the mathematically correct decision theory.

I also didn't claim to be an ultrafinitist, although I have ultrafinitist sympathies. I haven't worked through the proof of the VNM theorem yet in enough detail to understand how infinitary it is (although I intend to).

Well, the continuity axiom in the statement certainly seems dubious from an ultrafinitist point of view.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-18T08:08:05.102Z · LW(p) · GW(p)

Taboo "make everything worse".

Have worse consequences for everybody, where "everybody" means present and future agents to which we assign moral value. For example, a sufficiently crazy deontologist might want to kill all such agents in the name of some sacred moral principle.

At the very least I find it interesting how rarely an analogous objection is raised against VNM-utilitarians with different utility functions. It's almost as if many of the "VNM-utilitarians" around here don't care what it means to "make everything worse" as long as one avoids doing it, and avoids doing it by following the mathematically correct decision theory.

Rarely? Isn't this exactly what we're talking about when we talk about paperclip maximizers?

Replies from: Eugine_Nier, None, Kindly
comment by Eugine_Nier · 2013-01-19T09:16:46.158Z · LW(p) · GW(p)

Have worse consequences for everybody, where "everybody" means present and future agents to which we assign moral value.

When I asked you to taboo "makes everything worse", I meant taboo "worse" not taboo "everything".

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-19T09:54:28.993Z · LW(p) · GW(p)

You want me to say something like "worse with respect to some utility function" and you want to respond with something like "a VNM-rational agent with a different utility function has the same property." I didn't claim that I reject deontologists but accept VNM-rational agents even if they have different utility functions from me. I'm just trying to explain that my current understanding of deontology makes it seem like a bad idea to me, which is why I don't think it's accurate. Are you trying to correct my understanding of deontology or are you agreeing with it but disagreeing that it's a bad idea?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-01-21T00:28:41.840Z · LW(p) · GW(p)

You want me to say something like "worse with respect to some utility function" and you want to respond with something like "a VNM-rational agent with a different utility function has the same property."

No, I'm going to respond by asking you "with respect to which utility function?" and "why should I care about that utility function?"

comment by [deleted] · 2013-01-18T19:26:59.034Z · LW(p) · GW(p)

Have worse consequences for everybody, where "everybody" means present and future agents to which we assign moral value.

You've assumed vague-utilitarianism here, which weakens your point. I would taboo "make everything worse" as "less freedom, health, fun, awesomeness, happiness, truth, etc.", where the list refers to all the good things, as argued in the metaethics sequence.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-01-19T09:21:11.600Z · LW(p) · GW(p)

You've assumed vague-utilitarianism here, which weakens your point. I would taboo "make everything worse" as "less freedom, health, fun, awesomeness, happiness, truth, etc."

Nice try. The problem with your definition is that freedom, for example, is fundamentally a deontological concept. If you don't agree, I challenge you to give a non-deontological definition.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-19T09:56:13.707Z · LW(p) · GW(p)

What is a deontological concept and what is a non-deontological concept?

Replies from: Eugine_Nier, Eugine_Nier
comment by Eugine_Nier · 2013-01-21T17:59:16.980Z · LW(p) · GW(p)

After thinking about it some more, I think I have a better way to explain what I mean.

What is freedom? One (not very good but illustrative) definition is the ability to make meaningful choices. Notice that this means respecting someone else's freedom is a constraint on one's decision algorithm not just on one's outcome, thus it doesn't satisfy the VNM axioms.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-21T21:24:54.580Z · LW(p) · GW(p)

It sounds to me like you're implicitly enforcing a Cartesian separation between the physical world and the algorithms that agents in it run. Properties of the algorithms that agents in the world run are still properties of the world.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-01-22T21:44:50.514Z · LW(p) · GW(p)

I don't see why I'm relying on it any more than the VNM-utilitarian is.

comment by Eugine_Nier · 2013-01-21T00:30:03.950Z · LW(p) · GW(p)

I thought I had made that clear in my second sentence:

If you don't agree, I challenge you to give a non-deontological definition [of freedom].

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-21T04:27:43.012Z · LW(p) · GW(p)

Um, no. I can't respond to a challenge to give a non-X definition of Y if I don't know what X means.

comment by Kindly · 2013-01-18T14:07:31.680Z · LW(p) · GW(p)

For example, a sufficiently crazy deontologist might want to kill all such agents in the name of some sacred moral principle.

A sufficiently crazy consequentialist might want to kill all such agents because he's scared of what the voices in his head might otherwise do. Your argument is not an argument at all.

And if the sacred moral principle leads to the deontologist killing everyone, that is a pretty terrible moral principle. Usually they're not like that. Usually the "don't kill people if you can help it" moral principle tends to be ranked pretty high up there to prevent things like this from happening.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-18T19:34:10.160Z · LW(p) · GW(p)

to prevent things like this from happening.

Smells like consequentialist reasoning. Look, if I had a better example I would give it, but I am genuinely not sure what deontologists think they're doing if they don't think they're just using heuristics that approximate consequentialist reasoning.

comment by TsviBT · 2013-01-17T03:03:43.655Z · LW(p) · GW(p)

Huh? How so?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-01-19T09:22:49.885Z · LW(p) · GW(p)

Replace the "corn god" in the quote with a sufficiently rational utilitarian agent.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2013-01-20T03:33:53.683Z · LW(p) · GW(p)

To make sure I understand, do you mean that a sufficiently rational utilitarian agent may decide to kill a 6 month old baby if it decides that would serve its goal of maximizing aggregate utility, and if I'm pretty sure that no 6 month old baby should ever be intentionally killed, I would conclude that utilitarianism is probably wrong?

comment by MugaSofer · 2013-01-20T15:36:56.970Z · LW(p) · GW(p)

Nah, it's just a cheap shot at the theists.

EDIT: not sure about the source, but the way it's edited ...

comment by Alicorn · 2013-01-17T02:39:36.188Z · LW(p) · GW(p)

I hadn't actually thought of that, but that could be part of why I liked the quote.

comment by JQuinton · 2013-01-03T22:13:37.547Z · LW(p) · GW(p)

the decision to base your life on beliefs which not only can you not prove, but which, on the balance of the evidence, seem unlikely to be true, seems incredibly irresponsible. If religious believing had implications only for the individual believer, then it could be easily dismissed as a harmless idiosyncrasy, but since almost all religious beliefs have incredibly serious implications for many people, religious belief cannot be regarded as harmless. Indeed, a glance at the behavior of religious believers worldwide day by day makes it very clear that religion is something to be feared and justly criticized. “Houses built of emotion” is one thing, but beliefs that can lead to mass beheading for mixed-sex dancing, or the marginalization and victimization of gay and lesbian people, and the second-listing of women, is quite another, and it is for the latter that religious belief is justly held to require more justification

Even though this quote is focusing on religion, I think it applies to any beliefs people have that they think are "harmless" but greatly influence how they treat others. In short, since no person is an island, we have a duty to critically examine the beliefs we have that influence how we treat others.

Replies from: ChristianKl
comment by ChristianKl · 2013-01-07T17:52:50.098Z · LW(p) · GW(p)

If you actually care about the influence on how you treat others, why not use that as your test for whether to hold a belief? Instead of focusing on whether the belief is likely to be true, you could focus on whether it's likely to harm other people.

A lot of Christians don't believe in beheading people for mixed-sex dancing or victimizing homosexuals. For them, the fact that other Christians do those things is no good reason to drop their beliefs.

Replies from: Toddling
comment by Toddling · 2013-01-15T00:11:55.210Z · LW(p) · GW(p)

If you actually care about the influence on how you treat others, why don't use that as your test whether to hold a belief? Instead of focusing on whether the belief in likely to be true you could focus on whether it's likely to be harm other people.

It can be difficult to know what will be harmful without knowing whether certain things are true.

Hypothetical example: A person kills their child in order to prevent them from committing some kind of sin and going to hell. If this person's beliefs about the existence of hell and how people get in and stay out of it are true, they have saved their child from a great deal of suffering. If their beliefs are not true, they have killed their child for nothing.

comment by Jayson_Virissimo · 2013-01-02T10:24:31.343Z · LW(p) · GW(p)

He that believes without having any Reason for believing, may be in love with his own Fancies; but neither seeks Truth as he ought, nor pays the Obedience due to his Maker, who would have him use those discerning Faculties he has given him, to keep him out of Mistake and Errour.

John Locke, Essay Concerning Human Understanding

comment by Dorikka · 2013-01-13T05:25:00.348Z · LW(p) · GW(p)

"We are living on borrowed time and abiding by the law of probability, which is the only law we carefully observe. Had we done otherwise, we would now be dead heroes instead of surviving experts." –Devil's Guard

comment by [deleted] · 2013-01-10T20:57:13.001Z · LW(p) · GW(p)

The generation of random numbers is too important to be left to chance.

-- Robert R. Coveyou, Oak Ridge National Laboratory

comment by RolfAndreassen · 2013-01-02T20:12:31.627Z · LW(p) · GW(p)

It is more incumbent on me to declare my opinion on this question, because they have, on further reflection, undergone a considerable change; and although I am not aware that I have ever published any thing respecting machinery which it is necessary for me to retract, yet I have in other ways given my support to doctrines which I now think erroneous; it, therefore, becomes a duty in me to submit my present views to examination, with my reasons for entertaining them.

-- Ricardo, publicly saying "oops" in his restrained Victorian fashion, in his essay "On Machinery".

Replies from: gwern
comment by gwern · 2013-01-02T20:37:35.886Z · LW(p) · GW(p)

I was actually just reading that yesterday because of Cowen linking it in http://marginalrevolution.com/marginalrevolution/2013/01/the-ricardo-effect-in-europe-germany-fact-of-the-day.html

I'm not entirely sure I understand Ricardo's chapter (Victorian economists being hard to read both because of the style and distance), or why, if it's as clear as Ricardo seems to think, no-one ever seems to mention the point in discussions of technological unemployment (and instead, constantly harping on comparative advantage etc). What did you make of it?

Replies from: RolfAndreassen
comment by RolfAndreassen · 2013-01-02T20:57:22.758Z · LW(p) · GW(p)

because of Cowen linking it

That's how I found it, too. But I need the LessWrong karma and you don't. :D

What did you make of it?

If I followed the discussion of circulating versus fixed capital, and gross versus net increase, Ricardo is showing that (in modern jargon as opposed to Victorian jargon) if you set the elasticities correctly, you can make a new machine decrease total wages in spite of substitution effects. He seems to think about this in terms of the "carrying capacity" of the economy, ie the total population size, presumably because Victorian economists worked much closer to true Malthusian conditions than ours do. In other words it's a bit of a model, not necessarily related to any particular economic change that has ever actually happened. Possibly you could get the same result re-published today if you put it in modern jargon with some nice equations, but it would be one of those papers that basically say "If we set variable X to extreme value Y, what happens?" So it's probably not that important when discussing actual machinery, as Ricardo acknowledges; he's exploring the edges of the parameter space.

comment by FiftyTwo · 2013-01-29T13:18:52.501Z · LW(p) · GW(p)

But I've never seen the Icarus story as a lesson about the limitations of humans. I see it as a lesson about the limitations of wax as an adhesive.

Randall Munroe

Replies from: Oscar_Cunningham, Jay_Schweikert
comment by Jay_Schweikert · 2013-01-29T18:30:59.335Z · LW(p) · GW(p)

And to think, I was just getting on to post this quote myself!

comment by shminux · 2013-01-27T22:02:32.489Z · LW(p) · GW(p)

even though you can’t see or hear them at all, a person’s a person, no matter how small.

Dr. Seuss

comment by A1987dM (army1987) · 2013-01-07T19:50:46.303Z · LW(p) · GW(p)

Our tragedy is that in these hyper-partisan times, the mere fact that one side says, ‘Look, there's [a problem],’ means that the other side's going to say, ‘Huh? What? No, I'm not even going to look up.’

-- Jonathan Haidt

Replies from: MixedNuts
comment by MixedNuts · 2013-01-07T20:23:51.996Z · LW(p) · GW(p)

But if either side admits that they care about disaster befalling the US economy, then if the other does not so admit, this second side can blackmail the first side for whatever they want. Therefore, the only reasonable negotiating strategy is to pretend not to care at all about the US economy.

-- Yvain, on why brinkmanship is not stupid

Replies from: army1987
comment by A1987dM (army1987) · 2013-01-08T23:44:58.362Z · LW(p) · GW(p)

Belief propagation fail on my part. I had already read that Yvain post when watching that Haidt talk, but I still interpreted the behaviour he described in terms of ape social dynamics, forgetting that the average politician is probably more cold-blooded (i.e. more resembling the idealized agents of game theory) than the average ape is.

OTOH, the problems Haidt describes (global warming, rising public debt, rising income inequality, rising prevalence of out-of-wedlock births) don't have a hard deadline, they just gradually get worse and worse over the decades; so the dynamics of brinkmanship is probably not quite the same.

comment by Document · 2013-01-03T04:38:42.361Z · LW(p) · GW(p)

The lapse of time during which a given event has not happened, is, in [the] logic of habit, constantly alleged as a reason why the event should never happen, even when the lapse of time is precisely the added condition which makes the event imminent. A man will tell you that he has worked in a mine for forty years unhurt by an accident as a reason why he should apprehend no danger, though the roof is beginning to sink; and it is often observable, that the older a man gets, the more difficult it is to him to retain a believing conception of his own death.

--George Eliot

Apologies to Jayson_Virissimo.

Replies from: simplicio
comment by simplicio · 2013-01-10T20:20:55.385Z · LW(p) · GW(p)

The lapse of time during which a given event has not happened, is, in [the] logic of habit, constantly alleged as a reason why the event should never happen, even when the lapse of time is precisely the added condition which makes the event imminent. A man will tell you that he has worked in a mine for forty years unhurt by an accident...

Not to get too nitpicky, but the mine example doesn't really work here. Working for 40 years in a mine without accident doesn't actually make disaster imminent; I would imagine that a mine disaster is a Poisson process, in which expected duration to the next accident is independent of any previous occurrences.

It seems like there might be some gambler's fallacy stuff happening here.

An actually good example of this would be a bridge whose foundations are slowly eroding, and is now in danger of collapse.
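A quick simulation makes the memorylessness point concrete. This is an illustrative sketch, not from the thread: the accident rate of one expected accident per 80 years is a made-up assumption, chosen only to show that surviving 40 accident-free years leaves the expected remaining wait unchanged.

```python
import random

random.seed(0)
RATE = 1 / 80.0  # assumed rate: one expected accident per 80 years (hypothetical)
N = 100_000

# Exponential waiting times, as in a Poisson process.
waits = [random.expovariate(RATE) for _ in range(N)]

# Unconditional mean waiting time to the next accident.
mean_all = sum(waits) / len(waits)

# Mean *remaining* wait, given 40 accident-free years have already passed.
survivors = [w - 40 for w in waits if w > 40]
mean_remaining = sum(survivors) / len(survivors)

print(mean_all, mean_remaining)  # both close to 80: the 40 safe years bought nothing
```

Neither the gambler's fallacy ("an accident is due") nor the miner's induction ("forty safe years prove safety") survives this model; only an eroding-foundations model, where the hazard rate actually grows, supports Eliot's point.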

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-02-01T00:59:43.763Z · LW(p) · GW(p)

Where there's smoke, there's fire... unless someone has a smoke machine.

-- thedaveoflife

Replies from: hairyfigment
comment by hairyfigment · 2013-02-02T02:41:05.340Z · LW(p) · GW(p)

Where there's smoke, there's a chemical reaction of some kind. Unless it's really someone blowing off steam.

Replies from: Manfred
comment by Manfred · 2013-05-08T05:44:48.685Z · LW(p) · GW(p)

Or someone with an aerosolizer.

comment by GLaDOS · 2013-01-24T21:34:00.312Z · LW(p) · GW(p)

The dissident temperament has been present in all times and places, though only ever among a small minority of citizens. Its characteristic, speaking broadly, is a cast of mind that, presented with a proposition about the world, has little interest in where that proposition originated, or how popular it is, or how many powerful and credentialed persons have assented to it, or what might be lost in the way of property, status, or even life, in denying it. To the dissident, the only thing worth pondering about the proposition is, is it true? If it is, then no king’s command can falsify it; and if it is not, then not even the assent of a hundred million will make it true.

--John Derbyshire

Replies from: ygert
comment by ygert · 2013-01-24T21:50:16.992Z · LW(p) · GW(p)

While this is all very inspiring, is it true? Yes, truth in and of itself is something that many people value, but what this quote is claiming is that there is a class of people (whom he calls "dissidents") who specifically value this above and beyond anything else. It seems a lot more likely to me that truth is something that all or most people value to one extent or another, and as such, sometimes if the conditions are right people will sacrifice stuff to achieve it, just like for any other thing they value.

comment by Mark_Eichenlaub · 2013-01-05T04:48:13.339Z · LW(p) · GW(p)

Cartman: I can try to catch it, but I'm going to need all the resources you've got. If this thing isn't contained, your Easter Egg hunt is going to be a bloodbath.

Mr. Billings: What do you think, Peters? What are the chances that this 'Jewpacabra' is real?

Peters: I'm estimating somewhere around .000000001%.

Mr. Billings: We can't afford to take that chance. Get this kid whatever he needs.

South Park, Se 16 ep 4, "Jewpacabra"

note: edited for concision. script

Replies from: arundelo
comment by arundelo · 2013-02-08T00:45:12.725Z · LW(p) · GW(p)

This is a duplicate. You probably checked and didn't find it because for some reason Google doesn't know about it.

comment by taelor · 2013-01-04T05:06:18.894Z · LW(p) · GW(p)

Hatred is the most accessible and comprehensive of all unifying agents. It pulls and whirls the individual away from his own self, makes him oblivious to his weal and future, frees him of jealousies and self-seeking. He becomes an anonymous particle quivering with a craving to fuse and coalesce with his like into one flaming mass. [...] Mass movements can rise and spread without a belief in God, but never without belief in a devil. Usually the strength of a mass movement is proportionate to the vividness and tangibility of its devil. When Hitler was asked whether he thought that the Jew must be destroyed, he answered: "No... we should then have to invent him. It is essential to have a tangible enemy, not merely an abstract one." F. A. Voigt tells of a Japanese mission that arrived in Berlin in 1932 to study the National Socialist movement. Voigt asked a member what he thought of the movement. He replied: "It is magnificent. I wish we could have something like it in Japan, only we can't, because we haven't got any Jews."

-- Eric Hoffer, The True Believer

Replies from: taelor
comment by taelor · 2013-01-04T05:24:07.547Z · LW(p) · GW(p)

When we renounce the self and become part of a compact whole, we not only renounce personal advantage, but are also rid of personal responsibility. There is no telling what extremes of cruelty and ruthlessness a man will go to when he is freed from the fears, hesitations, doubts and vague stirrings of decency that go with individual judgement. When we lose our individual independence in the corporateness of a mass movement, we find a new freedom -- freedom to hate, bully, lie, torture, murder and betray without shame or remorse. Herein undoubtedly lies part of the attractiveness of mass movements. We find the "right to dishonor", which according to Dostoyevsky has an irresistible fascination. Hitler had a contemptuous opinion of the brutality of an autonomous individual: "Any violence which does not spring from a firm spiritual base will be wavering and uncertain. It lacks the stability which can only rest in a fanatical outlook."

Thus, hatred is not only a means of unification, but also its product. Renan says that we have never, since the world began, heard of a merciful nation. Nor have we heard of a merciful church or a merciful revolutionary party. The hatred and cruelty which have their source in selfishness are ineffectual things compared to the venom and ruthlessness that is born of selflessness.

When we see bloodshed, terror and destruction born from such generous enthusiasms as the love of God, love of Christ, love of nation, compassion for the oppressed and so on, we usually blame this shameful perversion on a cynical, power-hungry leadership. Actually, it is the unification set in motion by these enthusiasms, rather than the manipulation of scheming leaders that transmutes noble impulses into a reality of hatred and violence.

-- Eric Hoffer, The True Believer

comment by simplicio · 2013-01-14T20:45:04.577Z · LW(p) · GW(p)

lacanthropy, n. The transformation, under the influence of the full moon, of a dubious psychological theory into a dubious social theory via a dubious linguistic theory.

(Source: Dennettations)

Replies from: MixedNuts
comment by MixedNuts · 2013-01-14T20:53:25.429Z · LW(p) · GW(p)

Is there a reason you're quoting this, or are you just being humeorous?

Replies from: simplicio
comment by simplicio · 2013-01-14T21:38:04.505Z · LW(p) · GW(p)

I thought it was quite Witty.

comment by Mycroft65536 · 2013-03-05T02:29:55.815Z · LW(p) · GW(p)

If you're committed to rationality, then you're putting your belief system at risk every day. Any day you might acquire more information and be forced to change your belief system, and it could be very unpleasant and very disturbing.

--Michael Huemer

comment by pragmatist · 2013-01-31T10:05:56.476Z · LW(p) · GW(p)

Perhaps the day will come when philosophy can be discussed in terms of investigation rather than controversies, and philosophers, like scientists, be known by the topics they study rather than by the views they hold.

Nelson Goodman

comment by NancyLebovitz · 2013-01-28T14:50:42.308Z · LW(p) · GW(p)

I don't think change can be planned. It can only be recognized.

jad abumrad, a video about the development of Radio Lab and the amount of fear involved in doing original work

comment by Qiaochu_Yuan · 2013-01-30T00:10:51.145Z · LW(p) · GW(p)

-

Replies from: Vaniver
comment by Vaniver · 2013-01-30T00:27:58.402Z · LW(p) · GW(p)

Partial dupe.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-30T00:36:16.497Z · LW(p) · GW(p)

Oy. I did not go down far enough to check whether this had been posted already. Thanks.

comment by [deleted] · 2013-01-13T23:40:58.638Z · LW(p) · GW(p)

If I could offer one piece of advice to young people thinking about their future, it would be this: Don't preconceive. Find out what the opportunities are.

--Thomas Sowell

Replies from: None
comment by [deleted] · 2013-01-18T04:50:43.810Z · LW(p) · GW(p)

That's hopelessly vague. Advice is hard enough to absorb even if you understand it.

comment by deathpigeon · 2013-02-01T02:48:08.116Z · LW(p) · GW(p)

A straight line may be the shortest distance between two points, but it is by no means the most interesting.

The Third Doctor

comment by [deleted] · 2013-01-18T04:56:34.961Z · LW(p) · GW(p)

Always do the right thing.

-The mayor, in "do the right thing"

Replies from: DanArmak, HalMorris
comment by DanArmak · 2013-01-19T11:32:18.184Z · LW(p) · GW(p)

I think the bigger problem is that people mostly disagree on what the right thing to do is.

Replies from: None
comment by [deleted] · 2013-01-19T18:13:06.466Z · LW(p) · GW(p)

I still find it useful to play it back in my head ("nyan, always do the right thing") to remind myself to actually think about whether what I'm doing is right.

I think that we agree on enough that if people "did the right thing" it would be better than the current situation, if not perfect.

Replies from: Eugine_Nier, Qiaochu_Yuan, MugaSofer
comment by Eugine_Nier · 2013-01-25T01:49:12.024Z · LW(p) · GW(p)

I think that we agree on enough that if people "did the right thing" it would be better than the current situation, if not perfect.

That's not at all clear.

comment by Qiaochu_Yuan · 2013-01-24T23:23:53.179Z · LW(p) · GW(p)

I think that we agree on enough that if people "did the right thing" it would be better than the current situation, if not perfect.

Unclear. Some people have very bad ideas about what constitutes the right thing and their impact might not be canceled out.

comment by MugaSofer · 2013-01-25T11:09:43.570Z · LW(p) · GW(p)

In fairness, people aren't great at deciding what the right thing is, but I still agree with you; most people are not wrong about most things. For example, boycotts would work. So well.

OTOH, every abortion clinic would be bombed before the week was out; terrorist attacks would probably go up generally, as would revenge killings. You could argue those would have positive net impacts (since terrorists would presumably stop once their demands are met? I think?) but it's certainly not one-sided.

comment by HalMorris · 2013-01-18T15:56:21.232Z · LW(p) · GW(p)

Funny, I was thinking for the last few days or weeks of "Do the right thing!" as a sort of summary of deontology. It's all very well if you know what the right thing is. Another classic expression is "Let justice be done though the heavens may fall" (see http://en.wikipedia.org/wiki/Fiat_justitia_ruat_caelum), apparently most famously said by the English jurist Lord Mansfield when reversing the conviction of John Wilkes for libel while, it seems, riots and demonstrations were going on in the streets (my very brief research indicates he did not say it in the case that outlawed slavery in the British homeland long before the British abolished it elsewhere -- though a book on that case is titled "Though the Heavens May Fall" -- the fact that he made that remark and that decision just made it too tempting to conflate them).

Some examples in the Bible pointedly illustrate "do the right thing" (in the sense of whatever God says is right -- though in this case, "right" clearly isn't in any conflict with "the Heavens"). E.g. Abraham: "Sacrifice your son to me" (ha ha, just kidding/testing you), or Joshua: "Run around the walls of Jericho blowing horns and the walls will fall down." These are extreme cases of "Right is right, never mind how you'd imagine it would turn out with your tiny human mind."

Personally, since I am not an Objectivist, or a fundamentalist, or one who talks with God, I don't fully trust any set of rules I may currently have as to what "is right", though I trust them enough to get through most days. Nor am I a perfect consequentialist since I don't perfectly trust my ability to predict outcomes.

An awful lot of examples given to justify consequentialism are extremely contrived, like "ticking bomb" scenarios to justify torture. Unfortunately many of us have seen these scenarios all too often in fiction (e.g. "24"), where they are quite common because they furnish such exciting plot points. Then they are on a battlefield in the real world which does not follow scriptwriter logic, and they imagine they are living such a heroic moment, which gets them to do something wrong and stupid.

In my opinion the best course is some of both. If I find myself, say, as a policeman, thinking that by shooting this guy -- though it really isn't self-defence, but I can sell it as such -- I will rid the world of a bad actor who'd probably kill two people, then I suspect the best course is to fall back on the manual, which says I'm not justified in shooting him in this situation. Similarly if I think that by this or that unethical action I'll increase the chance of the right person being elected to some important office. On the other hand, if on some occasion I believe that by lying I will prevent some calamity, then I might lie. There is no guarantee that we'll get it right, and we'll have to face the consequences if we're wrong.

The worst thing, I think, is to think we've figured it all out and know exactly how to get it right all the time.

comment by [deleted] · 2013-01-16T15:57:24.192Z · LW(p) · GW(p)

Well, we have a pretty good test for who was stronger. Who won? In the real story, overdogs win.

--Mencius Moldbug, here

I can't overemphasise how much I agree with this quote as a heuristic.

Replies from: shminux, Oligopsony, None
comment by shminux · 2013-01-16T17:53:32.872Z · LW(p) · GW(p)

As I noted in my other comment, he redefined the terms underdog/overdog to be based on posteriors, not priors, effectively rendering them redundant (and useless as a heuristic).

Replies from: GLaDOS, Kindly
comment by GLaDOS · 2013-01-16T18:36:09.674Z · LW(p) · GW(p)

I consider this an uncharitable reading; I've read the article twice and I still understand him much as Konkvistador and Athrelon do.

comment by Kindly · 2013-01-16T23:11:26.425Z · LW(p) · GW(p)

Most of the time, priors and posteriors match. If you expect the posterior to differ from your prior in a specific direction, then change your prior.

And thus, you should expect 99% of underdogs to lose and 99% of overdogs to win. If all you know is that a dog won, you should be 99% confident the dog was an overdog. If the standard narrative reports the underdog winning, that doesn't make the narrative impossible, but puts a burden of implausibility on it.

Replies from: Qiaochu_Yuan, gwern
comment by Qiaochu_Yuan · 2013-01-16T23:19:48.686Z · LW(p) · GW(p)

And thus, you should expect 99% of underdogs to lose and 99% of overdogs to win. If all you know is that a dog won, you should be 99% confident the dog was an overdog.

Second statement assumes that the base rate of underdogs and overdogs is the same. In practice I would expect there to be far more underdogs than overdogs.

Replies from: Kindly
comment by Kindly · 2013-01-16T23:54:12.083Z · LW(p) · GW(p)

Good point. I was thinking of underdog and overdog as relative, binary terms -- in any contest, one of two dogs is the underdog, and the other is the overdog. If that's not the case, we can expect to see underdogs beating other underdogs, for instance, or an overdog being up against ten underdogs and losing to one of them.
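Qiaochu's base-rate point can be made numerically. A minimal sketch with hypothetical numbers (the 10:1 ratio of underdogs to overdogs is an assumption for illustration, not anything claimed in the thread):

```python
# Suppose 99% of overdogs win and 99% of underdogs lose, but underdogs
# are ten times as common as overdogs. What does "a dog won" tell us?
p_over = 1 / 11           # assumed base rate of overdogs
p_under = 10 / 11         # assumed base rate of underdogs
p_win_given_over = 0.99
p_win_given_under = 0.01

# Bayes' theorem: P(overdog | won) = P(won | overdog) P(overdog) / P(won)
p_win = p_win_given_over * p_over + p_win_given_under * p_under
p_over_given_win = p_win_given_over * p_over / p_win

print(round(p_over_given_win, 3))  # about 0.908, not 0.99
```

With unequal base rates, even a 99%-reliable win signal leaves noticeably less than 99% confidence that the winner was an overdog.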

comment by gwern · 2013-01-17T01:58:35.391Z · LW(p) · GW(p)

If you expect the posterior to differ from your prior in a specific direction, then change your prior.

How should I change my prior if I expect it to change in a specific direction -- either up or down, but not stay the same?

Replies from: khafra
comment by khafra · 2013-01-17T19:09:28.957Z · LW(p) · GW(p)

Fat tailed distributions make the rockin' world go round.

Replies from: gwern
comment by gwern · 2013-01-17T19:21:13.282Z · LW(p) · GW(p)

They don't even have to be fat-tailed; in very simple examples you can know that on the next observation, your posterior will either be greater or lesser but not the same.

Here's an example: flipping a biased coin, with a uniform beta prior over its bias/frequency, and trying to infer that frequency. Obviously, when I flip the coin, I will either get a heads or a tails, so I know after my first flip, my posterior will either favor heads or tails, but not remain unchanged! There is no landing-on-its-edge intermediate 0.5 coin. Indeed, I know in advance I will be able to rule out 1 of 2 hypotheses: 100% heads and 100% tails.

But this isn't just true of the first observation. Suppose I flip twice, and get heads then tails; so the single most likely frequency is 1/2, since that's what I have to date. But now we're back to the same situation as in the beginning: we've managed to accumulate evidence against the most extreme biases like 99% heads, so we have learned something from the 2 flips, but we're back in the same situation where we expect the posterior to differ from the prior in 2 specific directions but cannot update the prior: after the next flip I will have either 2/3 or 1/3 heads. Hence, I can tell you - even before flipping - that 1/2 must be dethroned in favor of 1/3 or 2/3!
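The arithmetic of this example fits in a few lines. A sketch assuming the standard beta-binomial setup: a uniform Beta(1, 1) prior on the heads-frequency, so observing h heads and t tails gives a Beta(1 + h, 1 + t) posterior with mode h / (h + t).

```python
def posterior_mode(heads: int, tails: int) -> float:
    """Mode of the Beta(1 + heads, 1 + tails) posterior under a uniform prior."""
    return heads / (heads + tails)

# After one head and one tail, the most likely frequency is 1/2...
mode_now = posterior_mode(1, 1)

# ...but whatever the third flip shows, the mode cannot stay at 1/2:
mode_if_heads = posterior_mode(2, 1)  # 2/3
mode_if_tails = posterior_mode(1, 2)  # 1/3

print(mode_now, mode_if_heads, mode_if_tails)
```

Either outcome of the next flip dethrones 1/2, even though we can't say in advance which replacement wins.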

Replies from: None, shminux
comment by [deleted] · 2013-01-17T20:30:45.807Z · LW(p) · GW(p)

And yet if you add those two posterior distributions, weighted by your current probability of ending up with each, you get your prior back. Magic!

(Witch burners don't get their prior back when they do this because they expect to update in the direction of "she's a witch" in either case, so when they sum over probable posteriors, they get back their real prior which says "I already know that she's a witch", the implication being "the trial has low value of information, let's just burn her now".)
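This "get your prior back" property (conservation of expected evidence) is easy to verify numerically. A minimal sketch with illustrative numbers not taken from the thread:

```python
# The probability-weighted average of the possible posteriors equals the prior.
prior = 0.3               # P(hypothesis); illustrative value
p_e_given_h = 0.8         # P(evidence | hypothesis); illustrative value
p_e_given_not_h = 0.4     # P(evidence | not hypothesis); illustrative value

p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
post_if_e = p_e_given_h * prior / p_e                       # posterior if evidence seen
post_if_not_e = (1 - p_e_given_h) * prior / (1 - p_e)       # posterior if not

# Weight each posterior by how likely you are to end up with it:
recovered = post_if_e * p_e + post_if_not_e * (1 - p_e)
print(recovered)  # equals the prior, 0.3
```

A "test" whose every outcome raises P(witch) fails this check, which is exactly why summing the witch burners' expected posteriors exposes their real prior.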

Replies from: gwern
comment by gwern · 2013-01-17T20:34:53.665Z · LW(p) · GW(p)

Yup, sure does. Which is a step toward the right idea Kindly was gesturing at.

comment by shminux · 2013-01-17T20:56:19.675Z · LW(p) · GW(p)

For coin bias estimate, as for most other things, the self-consistent updating procedure follows maximum likelihood.

Replies from: None
comment by [deleted] · 2013-01-17T21:10:07.819Z · LW(p) · GW(p)

Maximum likelihood tells you which value is most likely, which is mostly meaningless without further assumptions. For example, if you wanted to bet on what the next flip would be, a maximum-likelihood method won't give you the right probability.

Replies from: army1987
comment by A1987dM (army1987) · 2013-01-18T16:13:38.978Z · LW(p) · GW(p)

Yes.

OTOH, the expected value of the beta distribution with parameters a and b happens to equal the mode of the beta distribution with parameters a + 1 and b + 1, so maximum likelihood does give the right answer (i.e. the expected value of the posterior) if you start from the improper prior B(0, 0).

(IIRC, the same thing happens with other types of distributions, if you pick the ‘right’ improper prior (i.e. the one Jaynes argues for in conditions of total ignorance for totally unrelated reasons) for each. I wonder if this has some particular relevance.)
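The mean/mode relationship can be checked directly: the mean of Beta(a, b) is a / (a + b), while the mode of Beta(a, b) is (a - 1) / (a + b - 2), so shifting both parameters up by one makes the mode coincide with the original mean. A quick sketch (the 7-heads/3-tails data is a made-up example):

```python
def beta_mean(a: float, b: float) -> float:
    """Mean of the Beta(a, b) distribution."""
    return a / (a + b)

def beta_mode(a: float, b: float) -> float:
    """Mode of Beta(a, b); valid for a, b > 1 (interior mode)."""
    return (a - 1) / (a + b - 2)

h, t = 7, 3                # hypothetical data: 7 heads, 3 tails
ml_estimate = h / (h + t)  # maximum-likelihood frequency estimate

# Posterior mean under the improper Beta(0, 0) prior: posterior is Beta(h, t).
assert beta_mean(h, t) == ml_estimate
# Posterior mode under the uniform Beta(1, 1) prior: posterior is Beta(h+1, t+1).
assert beta_mode(h + 1, t + 1) == ml_estimate

print(ml_estimate)  # 0.7 for this data
```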

comment by Oligopsony · 2013-01-16T17:40:46.469Z · LW(p) · GW(p)

I suppose this is a hilariously obvious thing to say, but I wonder how much leftism Marcion Mugwump has actually read. We're completely honest about the whole power-seizing thing. It's not some secret truth.

(Okay, some non-Marxist traditions like anarchism have that whole "people vs. power" thing. But they're confused.)

Replies from: None
comment by [deleted] · 2013-01-16T19:11:47.779Z · LW(p) · GW(p)

but I wonder how much leftism Marcion Mugwump has actually read.

Ehm... what?

I suppose this is a hilariously obvious thing to say

Yes but as a friend reminded me recently, saying obvious things can be necessary.

comment by [deleted] · 2013-01-16T17:35:38.360Z · LW(p) · GW(p)

The heuristic is great, but that article is horrible, even for Moldbug.

Replies from: TimS, shminux
comment by TimS · 2013-01-16T18:47:28.024Z · LW(p) · GW(p)

I agree. For example:

"Civil disobedience" is no more than a way for the overdog to say to the underdog: I am so strong that you cannot enforce your "laws" upon me.

This statement is obviously true. But it sure would be useful to have a theory that predicted (or even explained) when a putative civil disobedience would and wouldn't work that way.

Obviously, willingness to use overwhelming violence usually defeats civil disobedience. But not every protest wins, and it is worth trying to figure out why -- if for no other reason than figuring out whether we could win if we protested something.

Replies from: hairyfigment
comment by hairyfigment · 2013-01-29T18:13:09.221Z · LW(p) · GW(p)

is no more than

This statement is obviously true.

I see no way to interpret it that would make it true. Civil disobedience serves to provoke a response that will - alone among crises that we know about - decrease people's attitudes of obedience or submission to "traditional" authority. In the obvious Just-So Story, leaders who will use violence against people who pose no threat might also kill you.

We would expect this Gandhi trick to fail if the authorities get little-to-none of their power from the attitude in question. The nature of their response must matter as well. (Meanwhile, as you imply, I don't know how Moldbug wants us to detect strength. My first guess would be that he wants his chosen 'enemies' to appear strong so that he can play underdog.)

Replies from: TimS
comment by TimS · 2013-01-29T19:34:49.283Z · LW(p) · GW(p)

I don't think we are disagreeing on substance. "Underdog" and similar labels are narrative labels, not predictive labels. I interpreted Moldbug as saying that treating narrative labels as predictive labels is likely to lead one to make mistaken predictions and / or engage in hindsight bias. This is a true statement, but not a particularly useful one - it's a good first step, but not a complete analysis.

Thus, the extent to which Moldbug treats the statement as complete analysis is error.

comment by shminux · 2013-01-16T17:56:05.520Z · LW(p) · GW(p)

How is it great? How would you use this "heuristic"?

Replies from: None
comment by [deleted] · 2013-01-16T18:02:37.704Z · LW(p) · GW(p)

I hadn't read your comment before I posted this. I assumed it meant what the terms usually mean, and lacked moldbuggerian semantics. In that sense, it would be a warning against rooting for the (usual) underdog, which is certainly a bias I've found myself wandering into in the past.

In retrospect I was somewhat silly for assuming Moldbug would use a word to mean what it actually means.

Replies from: None
comment by [deleted] · 2013-01-16T18:39:48.128Z · LW(p) · GW(p)

I have read his comment and the article. Knowing Moldbug's style I agree with GLaDOS on the interpretation. I may be wrong in which case interpret the quote in line with my interpretation rather than original meaning.

comment by RobertLumley · 2013-01-16T00:53:23.479Z · LW(p) · GW(p)

“Our vision is inevitably contracted, and the whole horizon may contain much which will compose a very different picture.”

Cheney Bros v. Doris Silk Corporation, New York Circuit Court of Appeals, Second Circuit

comment by arborealhominid · 2013-01-16T00:03:35.389Z · LW(p) · GW(p)

Whenever I'm about to do something, I think, "Would an idiot do that?" And if they would, then I do not do that thing.

-Dwight K. Schrute

Replies from: army1987
comment by A1987dM (army1987) · 2013-01-16T00:56:00.256Z · LW(p) · GW(p)

Reversed stupidity is not intelligence. Would an idiot drink when they're thirsty? Yes they would.

Replies from: arborealhominid, Desrtopa
comment by arborealhominid · 2013-01-16T02:47:29.850Z · LW(p) · GW(p)

Extremely good point! I liked this quote because I thought it was a funny way to describe taking the outside view, but you're completely right that it advocates reversed stupidity (at least when taken completely literally).

comment by Desrtopa · 2013-01-16T01:50:42.049Z · LW(p) · GW(p)

Also a duplicate, about which I made roughly the same comment the first time it was posted.

comment by Peterdjones · 2013-01-08T11:38:53.386Z · LW(p) · GW(p)

"Study everything, join nothing"

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2013-01-30T23:34:45.575Z · LW(p) · GW(p)

Attribution?

Replies from: Peterdjones
comment by Peterdjones · 2013-01-31T11:47:03.227Z · LW(p) · GW(p)

Paul Brunton, quoted by The Maverick Philosopher.

comment by [deleted] · 2013-01-04T05:11:03.272Z · LW(p) · GW(p)

If someone tells you they solved the mystery of Amelia Earhart's fate, you might be skeptical at first, but if they have a well documented, thoroughly pondered explanation, you would probably hear them out and who knows, you might even be convinced. But what if, in the next breath, they tell you that they actually have a second explanation as well. You listen patiently and are surprised to find the alternate explanation to be as well documented and thought through as the first. And after finishing the second explanation you are presented with a third, a fourth, and even a fifth explanation--each one different from the others and yet equally convincing. No doubt, by the end of the experience you would feel no closer to Amelia Earhart's true fate than you did at the outset. In the arena of fundamental explanations, more is definitely less.

--Brian Greene, The Elegant Universe

comment by Jay_Schweikert · 2013-01-16T18:25:32.793Z · LW(p) · GW(p)

[After analyzing the hypothetical of an extra, random person dying every second.] All in all, the losses would be dramatic, but not devastating to our species as a whole. And really, in the end, the global death rate is 100%—everyone dies.

. . . or do they? Strictly speaking, the observed death rate for the human condition is something like 93%—that is, around 93% of all humans have died. This means the death rate among humans who were not members of The Beatles is significantly higher than the 50% death rate among humans who were.

--Randall Munroe, "Death Rates"

comment by elspood · 2013-01-06T09:29:08.755Z · LW(p) · GW(p)

BART: It's weird, Lis: I miss him as a friend, but I miss him even more as an enemy.
LISA: I think you need Skinner, Bart. Everybody needs a nemesis. Sherlock Holmes had his Dr. Moriarty, Mountain Dew has its Mellow Yellow, even Maggie has that baby with the one eyebrow.

Everyone may need a nemesis, but while Holmes had a distinct character all his own and thus used Dr. Moriarty simply to test formidable skills, Bart actually seems to create or define himself precisely in opposition to authority, as the other to authority, and not as some identifiable character in his own right.

- Mark T. Conrad, "Thus Spake Bart: On Nietzsche and the Virtues of Being Bad", The Simpsons and Philosophy: The D'Oh of Homer

Replies from: gwern
comment by gwern · 2013-01-26T04:56:14.302Z · LW(p) · GW(p)

That's not a bad essay (BTW, essays should be in quote marks, and the book itself, The Simpsons and Philosophy, in italics), but I don't think the quote is very interesting in isolation without any of the examples or comparisons.

Replies from: elspood
comment by elspood · 2013-01-29T01:02:45.431Z · LW(p) · GW(p)

Edited, thanks for the style correction.

I suspect you're probably right that more examples would make this more interesting, given the lack of upvotes. In fact, I probably found the quote relevant mostly because it more or less summed up the experience of my OWN life at the time I read it years ago.

I spent much of my youth being contrarian for contradiction's sake, and thinking myself to be revolutionary or somehow different from those who just joined the cliques and conformed, or blindly followed their parents, or any other authority.

When I realized that defining myself against social norms, or my parents, or society was really fundamentally no different from blind conformity, only then was I free to figure out who I really was and wanted to be. Probably related: this quote.

comment by [deleted] · 2013-01-13T23:39:37.997Z · LW(p) · GW(p)

Intellectuals may like to think of themselves as people who "speak truth to power" but too often they are people who speak lies to gain power.

--Thomas Sowell

Replies from: hairyfigment, simplicio
comment by hairyfigment · 2013-01-29T18:52:26.437Z · LW(p) · GW(p)

Dupe.

Replies from: None
comment by [deleted] · 2013-01-29T20:03:12.927Z · LW(p) · GW(p)

Thank you!

comment by simplicio · 2013-01-14T19:39:11.124Z · LW(p) · GW(p)

Seems like an implausible view of the motivations of said intellectuals. Otherwise, agreed.

Replies from: None
comment by [deleted] · 2013-01-18T04:54:00.832Z · LW(p) · GW(p)

an implausible view of the motivations

The first clause was a statement about what they think, not really about motivations, and quite plausible anyway.

The second statement was about what they do. Related to "adaptation executors not fitness maximizers".

comment by taelor · 2013-01-05T06:52:53.756Z · LW(p) · GW(p)

It is easy to see how the faultfinding man of words, by persistent ridicule and denunciation, shakes prevailing beliefs and loyalties, and familiarizes the masses with the idea of change. What is not so obvious is the process by which the discrediting of existing beliefs and institutions makes possible the rise of a fanatical new faith. For it is a remarkable fact that the militant man of words who "sounds the established order to its source to mark its want of authority and justice" often prepares the way not for a society of freethinking individuals but for a corporate society that cherishes utmost unity and blind faith. A wide diffusion of doubt and irreverence thus leads to unexpected results. The irreverence of the Renaissance was a prelude to the new fanaticism of Reformation and Counter Reformation. The Frenchmen of the Enlightenment who debunked the church and crown and preached reason and tolerance released a burst of revolutionary and nationalist fanaticism which has not yet abated. Marx and his followers discredited religion, nationalism and the passionate pursuit of business, and brought into being the new fanaticism of socialism, communism, Stalinist nationalism and the passion for world dominion.

When we debunk a fanatical faith or prejudice, we do not strike at the root of fanaticism. We merely prevent its leaking out at a certain point, with the likely result that it will leak out at some other point. Thus, by denigrating prevailing beliefs and loyalties, the militant man of words unwittingly creates in the disillusioned masses a hunger for faith. [...] These fanatical and faith-hungry masses are likely to invest such speculations with the certitude of holy writ, and make them the fountainhead of a new faith. Jesus was not a Christian, nor was Marx a Marxist.

--Eric Hoffer, The True Believer

comment by Baruta07 · 2013-01-15T03:06:25.180Z · LW(p) · GW(p)

He shook his head. "No, for the purposes of this discussion, Asuka... only I have the power to decide humanity's fate. And I refuse that power, to give it back to them. Humanity is made of neither heaven nor hell; that with freedom of choice and honor, as though the maker and molder of itself... that they may fashion themselves in whatever form they shall prefer. People, individuals, are not single things but always tip from order to chaos and back again. Those with order are needed for stability. Those who espouse chaos bring change. Only humanity may balance humanity. If a God should be needed, only that nothing from without should threaten that free choice. The maker should be the first servant, just as the mother would care without hesitation for her child."

-Shinji & Warhammer Xover.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-15T10:17:22.340Z · LW(p) · GW(p)

How is this a rationality quote?

Also, I haven't read the text in question, but I for one would be very wary about letting Warhammer!humanity "fashion themselves in whatever form they shall prefer."

comment by Jayson_Virissimo · 2013-01-02T11:16:50.751Z · LW(p) · GW(p)

...since the 1930s, self-driving cars have been just 20 years away.

-Bryant Walker Smith

Replies from: Luke_A_Somers, MugaSofer
comment by Luke_A_Somers · 2013-01-02T16:39:56.953Z · LW(p) · GW(p)

But we've had self-driving cars for multiple years now...

Replies from: IlyaShpitser, Jayson_Virissimo
comment by IlyaShpitser · 2013-01-05T09:17:48.567Z · LW(p) · GW(p)

By principle of charity (everyone knows about google cars by now), I took the grandparent to mean "past performance is not a guarantee of future returns."

comment by Jayson_Virissimo · 2013-01-06T05:27:14.778Z · LW(p) · GW(p)

Obviously, Smith knows this, since he has published papers on the legality of self-driving cars as recently as 2012. The purpose of the quote (for me) is to draw an analogy between Strong AI and self-driving cars, both of which have had people saying "it is just 20 years away" for many decades now (and one of which is now here, making the people that made such a prediction 20 years ago correct).

comment by MugaSofer · 2013-01-02T16:56:36.020Z · LW(p) · GW(p)

Considering there are working prototypes of such cars driving around right now ...

EDIT: Damn, ninja'd by Luke.

comment by NoSignalNoNoise (AspiringRationalist) · 2013-01-01T22:54:04.862Z · LW(p) · GW(p)

If god (however you perceive him/her/it) told you to kill your child -- would you do it?

If your answer is no, in my booklet you're an atheist. There is doubt in your mind. Love and morality are more important than your faith.

If your answer is yes, please reconsider.

-- Penn Jillette

Replies from: MixedNuts, Endovior, sketerpot, Jayson_Virissimo, CCC, Scottbert, MugaSofer
comment by MixedNuts · 2013-01-01T23:01:23.019Z · LW(p) · GW(p)

It's hardly fair to describe this tiny modicum of doubt as atheism, even in the umbrella sense that covers agnosticism.

comment by Endovior · 2013-01-14T12:14:11.959Z · LW(p) · GW(p)

This argument really isn't very good. It works on precisely none of the religious people I know, because:

A: They don't believe that God would tell them to do anything wrong.

B: They believe in Satan, who they are quite certain would tell them to do something wrong.

C: They also believe that Satan can lie to them and convincingly pretend to be God.

Accordingly, any voice claiming to be God and also telling them to do something they feel is evil must be Satan trying to trick them, and is disregarded. They actually think like that, and can quote relevant scripture to back their position, often from memory. This is probably better than a belief framework that would let them go out and start killing people if the right impulse struck them, but it's also not a worldview that can be moved by this sort of argument.

Replies from: TheOtherDave, Kawoomba, Richard_Kennaway, MugaSofer
comment by TheOtherDave · 2013-01-14T15:30:08.701Z · LW(p) · GW(p)

My experience is that this framework is not consistently applied, though.

For example, I've tried pointing out that it follows from these beliefs that if our moral judgments reject what we've been told is the will of God then we ought to obey our moral judgments and reject what we've been told is the will of God. The same folks who have just used this framework to reject treating something reprehensible as an expression of the will of God will turn around and tell me that it's not my place to judge God's will.

Replies from: Endovior
comment by Endovior · 2013-01-14T17:25:03.614Z · LW(p) · GW(p)

Yeah, that happens too. Best argument I've gotten in support of the position is that they feel that they are able to reasonably interpret the will of God through scripture, and thus instructions 'from God' that run counter to that must be false. So it's not quite the same as their own moral intuition vs a divine command, but their own scriptural learning used as a factor to judge the authenticity of a divine command.

comment by Kawoomba · 2013-01-14T12:46:32.937Z · LW(p) · GW(p)

Penn Jillette is wrong to call someone not following a god's demands an atheist. Theism is defined by existence claims regarding gods (whether personal or more broadly defined); as a classifier it does not hinge on following said gods' mandates.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-14T14:20:38.029Z · LW(p) · GW(p)

Although it seems like an overly-broad definition of "atheist", I think that the quote is only intended to apply to belief in the monotheistic Supreme Being, not polytheistic small-g-gods.

Replies from: Kawoomba
comment by Kawoomba · 2013-01-14T16:07:00.697Z · LW(p) · GW(p)

My comment applies just the same, whether you spell god God, G_d, GOD or in some other manner: You can believe such a being exists (making you a theist) without following its moral codex or whatever commands it levies on you. Doesn't make you an atheist.

Replies from: BerryPick6, MugaSofer
comment by BerryPick6 · 2013-01-15T18:11:30.428Z · LW(p) · GW(p)

Although, if you believe it always tells the truth, then you should follow whatever counterintuitive claim it makes about your own preferences and values, no? So if God were to tell you that sacrificing your son is what CEV_(Kawoomba) would do, would you do it?

Replies from: Kawoomba
comment by Kawoomba · 2013-01-15T18:30:33.474Z · LW(p) · GW(p)

I have a certain probability I ascribe to the belief that god always tells the truth; let's say it is very high.

I also have a certain probability with which I believe that CEV_(Kawoomba) contained such a command. This is negligible because (from the definition) it certainly doesn't fit with "were more the [man] [I] wished [I] were".

However, we can lay that argument (evening out between a high and a very low probability) aside; there's a more important one:

The point is that my values are not CEV_(Kawoomba), which is a concept that may make sense for an AI to feed with, or even to personally aspire to, but is not self-evidently a concept we should unequivocally aspire to. In a conflict between my values and some "optimized" (in whatever way) values that I do not currently have but that may be based on my current values, guess which ones win out? (My current ones.)

That aside, there is no way that the very foundation of my values could be turned topsy turvy and still fit with CEV's mandate of "being the person I want to be".

Replies from: MugaSofer
comment by MugaSofer · 2013-01-16T15:01:49.459Z · LW(p) · GW(p)

The point is that my values are not CEV_(Kawoomba)

You don't mean ... Kawoomba isn't your real name?!!

Seriously, though, humans are not perfect reasoners, nor do we have perfect information. If we find something that does, and it thinks our values are best implemented in a different way than we do, then we are wrong. Trivially so.

Or are you nitpicking the specification of "CEV"?

comment by MugaSofer · 2013-01-14T17:36:25.133Z · LW(p) · GW(p)

Well, if you value your son more than, say, genocide, then sure. If, on the other hand, you're moral in the same way, say, CEV is, then you should do what the Friendly superintelligence says.

[Edited for clarity.]

Replies from: Kawoomba
comment by Kawoomba · 2013-01-14T17:52:35.187Z · LW(p) · GW(p)

Do you mean CEV_(mankind)?

CEV_(mankind) is a compromise utility function (that some doubt even contains anything) that is different from your own utility function.

Why on earth would I ever voluntarily choose a different utility function, out of a mixture of other human utility functions, over my own? I already have one that fits me perfectly by definition - my own.

If you meant CEV_(Kawoomba), then it wouldn't change the outcome of that particular decision. Maybe refer to the definition here?

Replies from: DaFranker, MugaSofer
comment by DaFranker · 2013-01-14T18:01:40.343Z · LW(p) · GW(p)

If you meant CEV_(Kawoomba), then it wouldn't change the outcome of that particular decision. Maybe refer to the definition here?

Ah, but would it really not?

I strongly expect a well-formed fully reflective CEV_(DaFranker) to make different decisions from current_DaFranker. For starters, CEV_(DaFranker) would not have silly scope insensitivity biases, availability bias, and other things that would factor strongly into current_DaFranker's decision, since we assume CEV_(DaFranker) has immense computational power and can brute-force-optimize their decision process if needed, and current_DaFranker would strongly prefer to have those mental flaws fixed and go for the pure, optimal rationality software and hardware as long as their consciousness and identity is preserved continuously.

Replies from: Kawoomba
comment by Kawoomba · 2013-01-14T18:16:13.180Z · LW(p) · GW(p)

Since we're talking about CEV_(individual), the "poetic" definition would be "[my] wish if [I] knew more, thought faster, were more the [man] [I] wished [I] were, (...), where [my] wishes cohere rather than interfere; extrapolated as [I] wish that extrapolated, interpreted as [I] wish that interpreted."

Nothing that would change my top priorities, though I'd do a better job convincing Galactus.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-14T19:38:35.274Z · LW(p) · GW(p)

How sure are you that you haven't made a mistake somewhere?

Replies from: Kawoomba
comment by Kawoomba · 2013-01-15T17:55:34.983Z · LW(p) · GW(p)

Quite sure. I assume you value the life of a sparrow (the bird), all else being equal. Is there a number of sparrows for whose sake you would consign yourself and your loved ones to the flames? Is there a hypothetical number of sparrows for which you would choose them living over all currently living humans?

If not, then you are saying that not all goals reduce to a number on a single metric, that there are tiers of values, similar in principle to Maslow's.

You're my sparrows.

Replies from: JGWeissman, BerryPick6, DaFranker, MugaSofer
comment by JGWeissman · 2013-01-16T14:49:30.334Z · LW(p) · GW(p)

Suppose you had the chance to save the life of one sparrow, but doing so kills you with probability p. For what values of p would you do so?

If the answer is only when p=0, then your value of sparrows should never affect your choices, because it will always be dominated by the greater probability of your own welfare.

Replies from: Kawoomba, ArisKatsaris
comment by Kawoomba · 2013-01-18T12:07:13.623Z · LW(p) · GW(p)

A strong argument, well done.

This indeed puts me in a conundrum: If I answer anything but p=0, I'm giving a kind of weighting factor that destroys the supposedly strict separation between tiers.

However, if I answer p=0, then indeed as long as there is anything even remotely or possibly affecting my top tier terminal values, I should rationally disregard pursuing any other unrelated goal whatsoever.

Obviously, as evident by my writing here, I do not solely focus all my life's efforts on my top tier values, even though I claim they outweigh any combination of other values.

So I am dealing with my value system in an irrational way. However, there are two possible conclusions concerning my confusion:

  • Are my supposed top-tier terminal values in fact outweighable by others, with "just" a very large conversion coefficient?

or

  • Do I in fact rank my terminal values as claimed, and am I just making bad choices, failing to match my behavior to those values and wasting time on things not strictly related to my top values? (Is it just an instrumental rationality failure?) Anything with a terminal value that's valued infinitely higher than all other values should behave strictly isomorphically to a paperclip maximizer with just that one terminal value, at least in our universe.

This could be resolved by Omega offering me a straight out choice, pressing buttons or something. I know what my consciously reflected decision would be, even if my daily routine does not reflect that.

Another case of "do as I say (I'd do in hypothetical scenarios), not as I do (in daily life)" ...

Replies from: wedrifid, MugaSofer
comment by wedrifid · 2013-01-18T14:57:28.845Z · LW(p) · GW(p)

This indeed puts me in a conundrum: If I answer anything but p=0, I'm giving a kind of weighting factor that destroys the supposedly strict separation between tiers.

Well, you could always play with some fun math...

Replies from: army1987
comment by A1987dM (army1987) · 2013-01-18T15:58:41.037Z · LW(p) · GW(p)

Even that would be equivalent to an expected utility maximizer using just real numbers, except that there's a well-defined tie-breaker to be used when two different possible decisions would have the exact same expected utility.
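The equivalence described here, a tiered preference behaving like an ordinary expected-utility maximizer with a tie-breaker, can be sketched concretely. The following is a hypothetical illustration (all names and numbers are invented for the example, not taken from the thread): tuple comparison in Python is lexicographic, so the lower tier is consulted only when the top tier ties exactly.

```python
# Illustrative sketch: a lexicographic ("tiered") chooser acts like an
# expected-utility maximizer over the top tier alone, with the lower
# tier used only to break exact ties on top-tier expected utility.
# All option labels and numbers are invented for the example.

def choose(options):
    """options: list of (label, top_tier_eu, lower_tier_eu) tuples.

    Python compares tuples lexicographically, which is exactly the
    tiered preference being discussed.
    """
    return max(options, key=lambda o: (o[1], o[2]))[0]

options = [
    ("A", 10.0, 0.0),
    ("B", 10.0, 3.0),    # ties with A on the top tier...
    ("C", 9.0, 100.0),   # ...while a huge lower tier never compensates
]
assert choose(options) == "B"  # lower tier only breaks the tie
```

As army1987 notes, in the real world two options almost never have exactly equal top-tier expected utility, so the tie-breaker almost never fires.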

Replies from: MugaSofer
comment by MugaSofer · 2013-01-20T13:34:12.042Z · LW(p) · GW(p)

How often do two options have precisely the same expected utility? Not often, I'm guessing. Especially in the real world.

Replies from: army1987
comment by A1987dM (army1987) · 2013-01-20T17:57:47.772Z · LW(p) · GW(p)

I guess almost never (in the mathematical sense). OTOH, in the real world the difference is often so tiny that it's hard to tell its sign -- but then, the thing to do is gather more information or flip a coin.

comment by MugaSofer · 2013-01-20T13:34:28.773Z · LW(p) · GW(p)

I would like to point out that there is a known bias interfering with said hypothetical scenarios. It's called "taboo tradeoffs" or "sacred values", and it's touched upon here; I don't think there's any post that focuses on explaining what it is and how to avoid it, though. One of the more interesting biases, I think.

Of course, your actual preferences could mirror the bias, in this case; lets not fall prey to the fallacy fallacy ;)

comment by ArisKatsaris · 2013-01-16T15:04:59.014Z · LW(p) · GW(p)

If the answer is only when p=0, then your value of sparrows should never affect your choices, because it will always be dominated by the greater probability of your own welfare.

Not sure that holds. Surely there could be situations where you can't meaningfully calculate whether acting to preserve the life of a sparrow increases or decreases the probability of your death, therefore you act to preserve its life because though you consider it a fundamentally lesser terminal value, it's still a terminal value.

Replies from: JGWeissman
comment by JGWeissman · 2013-01-16T15:18:40.978Z · LW(p) · GW(p)

Surely there could be situations where you can't meaningfully calculate whether acting to preserve the life of a sparrow increases or decreases the probability of your death

In this case you try harder to figure out a way to calculate the impact on your chance of death. The value of information of such an effort is worth infinite sparrow lives. Lower tier utility functions just don't matter.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2013-01-16T15:39:49.080Z · LW(p) · GW(p)

In this case you try harder to figure out a way to calculate the impact on your chance of death. The value of information of such an effort is worth infinite sparrow lives.

What if you've already estimated that calculating excessively (e.g. beyond a minute) on this matter will have near-definite negative impact on your well-being?

Replies from: JGWeissman
comment by JGWeissman · 2013-01-16T15:52:07.978Z · LW(p) · GW(p)

Then you go do something else that's relevant to your top-tier utility function.

You can contrive a situation where the lower tier matters, but it looks like someone holding a gun to your head, and threatening to kill you if you don't choose in the next 5 seconds whether or not they shoot the sparrow. That sort of thing generally doesn't happen.

And even then, if you have the ability to self-modify, the costs of maintaining a physical representation of the lower-tier utility functions are greater than the marginal benefit of choosing to save the sparrow because your lower-tier utility function says so rather than choosing alphabetically.

comment by BerryPick6 · 2013-01-15T18:08:18.450Z · LW(p) · GW(p)

I think this is more due to diminishing marginal returns on the number of sparrows in existence, to be honest...

Jokes aside, you are offering a very persuasive argument; I'd be curious to know how you figure out what tier certain values are and whether you ever have reason to (a) change your mind about said tier or (b) create new tiers altogether?

comment by DaFranker · 2013-01-15T18:35:39.908Z · LW(p) · GW(p)

If not, then you are saying that not all goals reduce to a number on a single metric, (...)

Simply false AFAIK. There is a mathematical way to express as a single-number metric every tiered system I've ever been capable of conceiving, and I suspect those I haven't also have such expressions with more mathematics I might not know.

So, I don't know if the grandparent was saying that, but I assume it wasn't, and if it was implied somewhere and I missed it then it certainly is false.

But then again, I may simply be interpreting your words uncharitably. I assume you're already aware that Maslow's can also be reduced to a single number formula.

A more interesting question than maximizing numbers of sparrows, however, is maximizing other value-factors. Suppose instead of imagining a number of sparrows sufficiently large for which you would trade a human you care about, you imagine a number of different minimum-sufficiency-level values being traded off for that "higher-tiered" life.

One human against a flock of sparrows large enough to guarantee the survival of the species is easy enough. Even the sparrow species doesn't measure up against a human, or at least that's what I expect you'd answer.

Now measure it up against a flock of each species of birds, on pain of extinction of birds (but let's magically handwave away the ecosystem impacts of this - suppose we have advanced technology to compensate for all possible effects).

Now against a flock of each species of birds, a group of each species of nonhuman mammals, a school of each species of fish, a colony of each species of insects, and so forth throughout the entire fauna and flora of the planet - or of all known species of life in the universe. Again we handwave - we can make food using the power of Science and so on.

Now against the same, but without the handwaving. The humans you care about are all fine and healthy, but the world they live in is less interesting, and quite devastated, but Science keeps y'all healthy and stuff.

Now against that, plus having actual human bodies. You're all jarbrains.

Feel free to keep going, value by value, until you reach a sufficient tradeoff or accept that no amount of destructive alteration to the universe will ever compare to the permanent loss of consciousness of your loved ones, and therefore you'd literally do everything and anything, up to and including warping space and time and sacrificing knowledge or memory or thinking-power or various qualia or experience or capacity for happiness or whatever else you can imagine, all traded for this one absolute value.

"Life/consciousness of loved ones" can also be substituted for whichever is your highest-tiered value if different.

Replies from: Kawoomba, MugaSofer
comment by Kawoomba · 2013-01-15T18:47:00.799Z · LW(p) · GW(p)

Simply false AFAIK. There is a mathematical way to express as a single-number metric every tiered system I've ever been capable of conceiving

I don't understand. Do you mean that as in "you can describe/encode arbitrary systems as a single number" or something related to that?

If not, do you mean that there must be some number of sparrows outweighing everything else as it gets sufficiently large?

Please explain.

Replies from: DaFranker
comment by DaFranker · 2013-01-15T19:20:28.108Z · LW(p) · GW(p)

Do you mean that as in "you can describe/encode arbitrary systems as a single number" or something related to that?

Yes.

For my part, I also consider it perfectly plausible (though perhaps less likely than some alternatives) that some humans might actually have tiered systems where certain values really truly never can be traded off in the slightest fraction of opportunity costs against arbitrarily high values of all lower-tiered values at the same time.

For instance, I could imagine an agent that values everything I value but has a hard tier cutoff below the single value that its consciousness must remain continuously aware until the end of the universe if such a time ever arrives (forever otherwise, assuming the simplest alternative). This agent would have no trouble sacrificing the entire solar system if it was proven to raise the expected odds of this survival. Or the agent could also only have to satisfy a soft threshold or some balancing formula where a certain probability of eternal life is desired, but more certainty than that becomes utility-comparable to lower-tier values. Or many other kinds of possible constructs.

So yes, arbitrary systems, for all systems I've ever thought of. I like to think of myself as imaginative and as having thought of a lot of possible arbitrary systems, too, though obviously my search space is limited by my intelligence and by the complexity I can formulate.

Replies from: Kawoomba
comment by Kawoomba · 2013-01-15T19:34:40.027Z · LW(p) · GW(p)

There are actual tiered systems all around us, even if most examples that come to mind are constructed/thought of by humans.

That aside, I am claiming that I would not trade my highest tier values against arbitrary combinations of all lower tiered values. So ... hi!

Re: Just a number; I can encode your previous comments (all of them) in the form of a bitstring, which is a number. Doesn't mean that doing "+1" on that yields any sensible result. Maybe we're talking past each other on the "describe/encode" point, but I don't see how describing a system containing strict tiers as a number somehow makes those tiers go away, unless you were nitpicking about "everything's just a number that's interpreted in a certain way" or somesuch.

Replies from: DaFranker
comment by DaFranker · 2013-01-15T19:56:16.544Z · LW(p) · GW(p)

Ah, on the numbers thing, what I meant was only that AFAIK there always exists some formula for which higher output numbers will correspond to things any arbitrary agent (at least, all the logically valid and sound ones that I've thought of) would prefer.

So even for a hard tier system, there's a way to compute a number linearly representative of how happy the agent is with worldstates, where at the extreme all lower-tier values flatline into arbitrarily large negatives (or other, more creative / leakproof weighing) whenever they incur infinitesimal risk of opportunity cost towards the higher-tier values.

The reason I said this is that it's often disputed and/or my audience isn't aware of it, and I often have to prove even the most basic versions of this claim (such as "you can represent a tiered system where as soon as the higher tier is empty, the lower tier is worthless using a relatively simple mathematical formula") by showing them the actual equations and explaining how it works.

comment by MugaSofer · 2013-01-16T14:38:34.591Z · LW(p) · GW(p)

It might be simpler to compare less sacred values (how many sparrows is a dog worth? How many dogs is a chimp worth?) building up to something greater. Unfortunately, Kawoomba seems to be under the impression that nothing could possibly be worth the life of his family. Not sparrows, not humans, not genocides-prevented.

Replies from: Kawoomba
comment by Kawoomba · 2013-01-17T20:40:55.923Z · LW(p) · GW(p)

That is so. Why unfortunately? Also, why "under the impression"? If you were to tell me some of your terminal values, I'd give you the courtesy of assuming you are telling the truth as you subjectively perceive it (you have privileged access to your values, and at least concerning your conscious values, subjective is objective).

I get it that you hold nothing on Earth more sacred than a hypothetical sufficiently high number of sparrows, we differ on that. It is not a question of epistemic beliefs about the world state, of creating a better match between map and territory. It is a difference about values. If Omega gave me a button choice decision, I'm very sure what I would do. That's where it counts.

For consolidation purposes, this is also meant to answer "How sure? Based on what? What would persuade you otherwise?" - As sure as I can be, based on "what I value above all else", persuade you otherwise: nothing short of a brain reprogram.

Diction impaired by C2H6O.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-18T12:01:33.153Z · LW(p) · GW(p)

Why unfortunately? Also, why "under the impression"? If you were to tell me some of your terminal values, I'd give you the courtesy of assuming you are telling the truth as you subjectively perceive it (you have privileged access to your values, and at least concerning your conscious values, subjective is objective).

  • If your values conflict with those of greater humanity (in aggregate,) then you are roughly equivalent to Clippy - not dangerous unless you actually end up being decisive regarding existential risk, but nevertheless only co-operating based on self-interest and bargaining, not because we have a common cause.
  • Humans are usually operating based on cached thoughts, heuristics which may conflict with their actual terminal values. Picture a Nazi measuring utility in Jews eliminated. He doesn't actually, terminally value killing people - but he was persuaded that Jews are undermining civilization, and his brain cached the thought that Jews=Bad. But he isn't a Paperclipper - if he reexamines this cached thought in light of the truth that Jews are, generally speaking, neurotypical human beings then he will stop killing them.

I get it that you hold nothing on Earth more sacred than a hypothetical sufficiently high number of sparrows, we differ on that.

Well, sacred value is a technical term.

If you genuinely attached infinite utility to your family's lives, then we could remove the finite terms in your utility function without affecting its output. You are not valuing their lives above all else, you are refusing to trade them to gain anything else. There is a difference. Rejecting certain deals because the cost is emotionally charged is suboptimal. Human, but stupid. I (probably) wouldn't kill to save the sparrows, or for that matter steal money for children dying in Africa, but that's not the right choice. That's just bias/akrasia/the sort of thing this site is supposed to fight. If I could press a button and turn into an FAI, then I would. Without question. The fact that I'm not perfectly Friendly is a bad thing.

Anyway.

Considering you're not typing from a bunker, and indeed probably drive a car, I'm guessing you're willing to accept small risks to your family. So my question for you is this: how small?

Incidentally, considering the quote this particular branch of this discussion sprouted from, you do realize that killing your son might be the only way to save the rest of your family? Now, if He was claiming that you terminally value killing your son, that would be another thing ...

Replies from: Kawoomba
comment by Kawoomba · 2013-01-18T12:15:05.472Z · LW(p) · GW(p)

If you genuinely attached infinite utility to your family's lives, then we could remove the finite terms in your utility function without affecting its output. You are not valuing their lives above all else (...)

You do have a point, but there is another explanation to resolve that, see this comment.

We still have a fundamental disagreement on whether rationality is in any way involved when reflecting on your terminal values. I claim that rationality will help the closet murderer who is firm in valuing pain and suffering the same as the altruist, the paperclipper or the FAI. It helps us in pursuing our goals, not in setting the axioms of our value systems (the terminal values).

There is no aspect of Bayes or any reasoning mechanism that tells you whether to value happy humans or dead humans. Reasoning helps you in better achieving your goals, nefarious or angelic as they may be.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-20T13:44:18.621Z · LW(p) · GW(p)

I claim that rationality will help the closet murderer who is firm in valuing pain and suffering the same as the altruist

I see your psychopath and raise you one Nazi.

Replies from: Kawoomba
comment by Kawoomba · 2013-01-20T14:04:33.465Z · LW(p) · GW(p)

I'm sorry, does that label impact our debate whether rationality implies terminal values?

Replies from: MugaSofer
comment by MugaSofer · 2013-01-20T15:13:32.816Z · LW(p) · GW(p)

My point is that, while an agent that is not confused about its values will not change them in response to rationality (obviously,) one that is confused will. For example, a Nazi realizing Jews are people after all.

Sorry if that wasn't clear.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-01-21T00:45:54.419Z · LW(p) · GW(p)

For example, a Nazi realizing Jews are people after all.

Taboo "people".

Replies from: hairyfigment, MugaSofer
comment by hairyfigment · 2013-01-21T01:20:18.560Z · LW(p) · GW(p)

'Share many human characteristics with the Nazi, and in particular suffered in similar ways from the economic conditions that helped produce Nazism.'

comment by MugaSofer · 2013-01-21T10:41:32.752Z · LW(p) · GW(p)

"not Evil Mutants"

Hairyfigment's answer would also work. The point is that they are as worthy of moral consideration as everyone else, and, to a lesser extent, that they aren't congenitally predisposed to undermine civilization and so on and so forth.

comment by MugaSofer · 2013-01-16T14:40:32.014Z · LW(p) · GW(p)

Interesting analogy. If we accept that utilities are additive, then there is presumably a number of sparrows worth killing for. (Of course, there may be a limit on the total number of possible sparrows, or sparrow utilities may be largely due to species preservation or something. As an ethics-based vegetarian, however, I can simply change it to "sparrows tortured.") I would be uncomfortable trying to put a number on it, what with the various sacred value conflicts involved, but I accept that a Friendly AI (even one Friendly only to me) would know and act on it.

Maslow's Pyramid is not intended as some sort of alternative to utilitarianism; it's a description of how we should prioritize the needs of humans. An imperfect one, of course, but better than nothing.

How sure? Based on what? What would persuade you otherwise?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-01-17T01:15:22.786Z · LW(p) · GW(p)

If we accept that utilities are additive

Why?

Replies from: None, MugaSofer
comment by [deleted] · 2013-01-17T01:51:22.200Z · LW(p) · GW(p)

They almost certainly are on the margin (think Taylor series approximation of the utility function). Get to the point where you are talking about killing a significant fraction of the sparrow population, and then there's no reason to think so.
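Spelling out the Taylor point (a standard first-order expansion, nothing specific to sparrows):

```latex
U(n - k) \approx U(n) - k\,U'(n) \qquad (k \ll n)
```

Near the current population $n$, the disutility of losing $k$ sparrows is approximately $k$ times a constant, i.e. additive in $k$; once $k$ is a significant fraction of $n$, the neglected higher-order terms can dominate and additivity is no longer guaranteed.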

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-01-18T04:34:19.647Z · LW(p) · GW(p)

They almost certainly are on the margin

True, but this doesn't apply to MugaSofer's claim that

there is presumably a number of sparrows worth killing for.

comment by MugaSofer · 2013-01-18T10:27:13.922Z · LW(p) · GW(p)

To be clear: are you claiming that utilities are not additive? That there is some level of Bad Things such that two (a thousand, a billion ...) times as much is not worse? I've seen the position advocated, but only by appealing to scope insensitivity.

comment by MugaSofer · 2013-01-14T19:37:40.754Z · LW(p) · GW(p)

I refer to CEV(mankind), which you claim contradicts CEV(Kawoomba). An agent that thinks it should be maximizing CEV(mankind) (such as myself) would have no such difficulty, obviously.

comment by Richard_Kennaway · 2013-01-14T12:40:09.448Z · LW(p) · GW(p)

They actually think like that

Seems a perfectly sensible way to think. Being religious doesn't mean being stupid enough to fall for that argument.

comment by MugaSofer · 2013-01-14T14:29:30.457Z · LW(p) · GW(p)

The quote specifies God, not "a voice claiming to be God". I'm not sure what evidence would be required, but presumably there must be some, or why would you follow any revelation?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-01-14T14:52:12.864Z · LW(p) · GW(p)

The quote specifies God, not "a voice claiming to be God".

In that case, the Christian's obvious and correct response is "that wouldn't happen", and responding to that with "yeah, but what if? huh? huh?" is unlikely to lead to a fruitful conversation. Penn's original thought experiment is simply broken.

Replace "God" by "rationality" and consider the question asked of yourself. How do you respond?

Replies from: DaFranker, MugaSofer
comment by DaFranker · 2013-01-14T15:22:47.699Z · LW(p) · GW(p)

That seems like a misuse of the word "rationality". The "rational" course of action depends directly on whatever your response to the thought experiment will be, according to your utility function (and therefore, mostly, your values) and decision algorithm, so the question is somewhat question-begging.

A better term would be "your decision theory", but that is trivially dismissible as non-rational - if you disagree with the results of the decision theory you use, then it's not optimal, which means you should pick a better one.

If a utility function and decision theory system that are fully reflectively coherent with myself agree with me that for X reasons killing my child is necessarily and strictly more optimal than other courses of action even taking into account my preference for the survival of my child over that of other people, then yes, I definitely would - clearly there's more utility to be gained elsewhere, and therefore the world will predictably be a better one for it. This calculation will (must) include the negative utility from my sadness, prison, the life lost, the opportunity costs, and any other negative impacts of killing my own child.

And as per other theorems, since the value of information and accuracy here would obviously be very high, I'd make really damn sure about these calculations - to a degree of accuracy and formalism much higher than I believe my own mind would currently be capable of with lives involved. So with all that said, in a real situation I would doubt my own calculations and would assign much greater probability to an error in my calculations or a lack / bias in my information, than to my calculations being right and killing my own child being optimal.

Any other specifics I forgot to mention?

comment by MugaSofer · 2013-01-14T15:14:17.918Z · LW(p) · GW(p)

In that case, the Christian's obvious and correct response is "that wouldn't happen", and responding to that with "yeah, but what if? huh? huh?" is unlikely to lead to a fruitful conversation.

Except that most Christians think that people have, in reality, been given orders directly by God. I suspect they would differ on what evidence is required before accepting the voice is God, but once they have accepted it's God talking, then refusing to comply would be ... telling. OTOH, I would totally kill people in that situation (or an analogous one with an FAI,) and I don't think that's irrational.

Replace "God" by "rationality" and consider the question asked of yourself. How do you respond?

[EDIT: I had to replace a little more than that to make it coherent; I hope I preserved your intentions while doing so.]

If it was rational to kill your child -- would you do it?

If your answer is no, in my booklet you're not a rationalist. You are still confused. Love and morality are more important than winning.

If your answer is yes, please reconsider.

My answer is, of course, yes. If someone claims that they would not kill even if it was the rational choice then ... ***** them. It's the right damn choice. Not choosing it is, in fact, the wrong choice.

(I'm ignoring issues regarding running on hostile hardware here, because you should be taking that kind of bias into account before calling something "rational".)

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-01-14T15:48:56.669Z · LW(p) · GW(p)

Except that most Christians think that people have, in reality, been given orders directly by God.

What do the real Christians that you know say about that characterisation? I don't know any well enough to know what they think (personal religious beliefs being little spoken of in the UK), but just from general knowledge of the doctrines I understand that the sources of knowledge of God's will are the Bible, the church, and personal revelation, all of these subject to fallible human interpretation. Different sects differ in what weight they put on these, Protestants being big on sola scriptura and Catholics placing great importance on the Church. Some would add the book of nature, God's word revealed in His creation. None of this bears any more resemblance to "direct orders from God", than evolutionary biology does to "a monkey gave birth to a human".

Replace "God" by "rationality" and consider the question asked of yourself. How do you respond?

My answer is, of course, yes.

Now look at what you had to do to get that answer: reduce the matter to a tautology by ignoring all of the issues that would arise in any real situation in which you faced this decision, and conditioning on them having been perfectly solved. Speculating on what you would do if you won the lottery is more realistic. There is no "rationality" that by definition gives you perfect answers beyond questioning, any more than, even to the Christian, there is such a God.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-14T17:27:19.293Z · LW(p) · GW(p)

What do the real Christians that you know say about that characterisation? I don't know any well enough to know what they think

They think you should try and make sure it's really God (they give conflicting answers as to how, mostly involving your own moral judgments which seems kinda tautological) and then do as He says. Many (I think all, actually) mentioned the Binding of Isaac. Of course, they do not a believe they will actually encounter such a situation.

just from general knowledge of the doctrines I understand that the sources of knowledge of God's will are the Bible, the church, and personal revelation, all of these subject to fallible human interpretation. Different sects differ in what weight they put on these, Protestants being big on sola scriptura and Catholics placing great importance on the Church. Some would add the book of nature, God's word revealed in His creation. None of this bears any more resemblance to "direct orders from God", than evolutionary biology does to "a monkey gave birth to a human".

AFAIK, all denominations of Christianity, and for that matter other Abrahamic religions, claim that there have been direct revelations from God.

Now look at what you had to do to get that answer: reduce the matter to a tautology

As I said, simply replacing the word "God" with "rationality" yields clear nonsense, obviously, so I had to change some other stuff while attempting to preserve the spirit of your request. It seems I failed in this. Could you perform the replacement yourself, so I can answer what you meant to ask?

Replies from: CCC, Richard_Kennaway
comment by CCC · 2013-01-14T18:27:03.706Z · LW(p) · GW(p)

They think you should try and make sure it's really God (they give conflicting answers as to how, mostly involving your own moral judgments which seems kinda tautological) and then do as He says.

There is a biblical description of how to tell if a given instruction is divine or not, found at the start of 1 John chapter four:

1 John 4:1 Dear friends, stop believing every spirit. Instead, test the spirits to see whether they are from God, because many false prophets have gone out into the world. 4:2 This is how you can recognize God’s Spirit: Every spirit who acknowledges that Jesus the Messiah has become human—and remains so—is from God. 4:3 But every spirit who does not acknowledge Jesus is not from God. This is the spirit of the antichrist. You have heard that he is coming, and now he is already in the world.

One can also use the example of Jesus' temptation in the desert to see how to react if one is not sure.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-14T19:30:07.428Z · LW(p) · GW(p)

And yet, I have never had a theist claim that "Every spirit who acknowledges that Jesus the Messiah has become human—and remains so—is from God." That any spirit that agrees with scripture, maybe.

Was Jesus unsure if the temptation in the desert was God talking?

Replies from: CCC
comment by CCC · 2013-01-29T08:28:52.246Z · LW(p) · GW(p)

Was Jesus unsure if the temptation in the desert was God talking?

No, but the temptation was rejected specifically on the grounds that it did not agree with scripture. Therefore, the same grounds can surely be used in other, similar situations, including those where one is unsure of who is talking.

For those unaware of how the story goes:

  • Jesus goes into the desert, and fasts for 40 days. After this, He is somewhat hungry.
  • The devil turns up, and asks Him to turn some stones into bread, for food (thus, symbolically, treating the physical needs of the body as the most important thing).
  • He refuses, citing old testament scripture: "Man shall not live on bread alone, but on every word that comes from the mouth of God."
  • The devil tries again, quoting scripture and basically telling him 'if you throw yourself from this cliff, you will be safe, for God will protect you. If you are the Son of God, why not prove it?'
  • Jesus refuses, again quoting scripture; "Do not put the Lord your God to the test"
  • For a third temptation, the devil shows him all the kingdoms of the world, and offers to give them all to him - "if you will bow down and worship me". A direct appeal to greed.
  • Jesus again quotes scripture, "Worship the Lord your God, and serve him only.", and the devil leaves, unsatisfied.
comment by Richard_Kennaway · 2013-01-14T19:40:04.910Z · LW(p) · GW(p)

They think you should try and make sure it's really God (they give conflicting answers as to how, mostly involving your own moral judgments which seems kinda tautological) and then do as He says.

Your own moral judgements, of course, come from God, the source of all goodness and without whose grace man is utterly corrupt and incapable of anything good of his own will. That is what conscience is (according to Christians). So this is not tautological at all, but simply a matter of taking all the evidence into account and making the best judgement we can in the face of our own fallibility. A theme of this very site, on occasion.

AFAIK, all denominations of Christianity, and for that matter other Abrahamic religions, claim that there have been direct revelations from God.

Yes, I mentioned that ("personal revelation"). But it's only one component of knowledge of the divine, and you still have the problem of deciding when you've received one and what it means.

As I said, simply replacing the word "God" with "rationality" yields clear nonsense, obviously, so I had to change some other stuff while attempting to preserve the spirit of your request. It seems I failed in this.

Not at all. Your formulation of the question is exactly what I had in mind, and your answer to it was exactly what I expected.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-14T19:55:48.600Z · LW(p) · GW(p)

Your own moral judgements, of course, come from God, the source of all goodness and without whose grace man is utterly corrupt and incapable of anything good of his own will. That is what conscience is (according to Christians). So this is not tautological at all, but simply a matter of taking all the evidence into account and making the best judgement we can in the face of our own fallibility. A theme of this very site, on occasion.

Ah, good point. But the specific example was that He had commanded you to do something apparently wrong - kill your son - hence the partial tautology.

Yes, I mentioned that ("personal revelation"). But it's only one component of knowledge of the divine, and you still have the problem of deciding when you've received one and what it means.

Whoops, so you did.

... how is that compatible with "None of this bears any more resemblance to "direct orders from God", than evolutionary biology does to "a monkey gave birth to a human"."?

Not at all. Your formulation of the question is exactly what I had in mind, and your answer to it was exactly what I expected.

Then why complain I had twisted it into a tautology?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-01-16T13:17:14.978Z · LW(p) · GW(p)

You cannot cross a chasm by pointing to the far side and saying, "Suppose there was a bridge to there? Then we could cross!" You have to actually build the bridge, and build it so that it stays up, which Penn completely fails to do. He isn't even trying to. He isn't addressing Christians. He's addressing people who are atheists already, getting in a good dig at those dumb Christians who think that a monkey gave birth to a human, sorry, that anyone should kill their child if God tells them to. Ha ha ha! Is he not witty!

The more I think about that quote, the stupider it seems.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-16T14:58:41.979Z · LW(p) · GW(p)

You cannot cross a chasm by pointing to the far side and saying, "Suppose there was a bridge to there? Then we could cross!"

I'm, ah, not sure what this refers to. Going with my best guess, here:

If you have some problem with my formulation of, and response to, the quote retooled for "rationality", please provide your own.

He isn't addressing Christians. He's addressing people who are atheists already, getting in a good dig at those dumb Christians who think that a monkey gave birth to a human, sorry, that anyone should kill their child if God tells them to. Ha ha ha! Is he not witty!

He is not "getting in a good dig at those dumb Christians who think that a monkey gave birth to a human, sorry, that anyone should kill their child if God tells them to." He is attempting to demonstrate to Christians that they do not alieve that they should do anything God says. I think he is mistaken in this, but it's not inconsistent or trivially wrong, as some commenters here seem to think.

(He also appears to think that this is the wrong position to hold, which is puzzling; I'd like to see his reasoning on that.)

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-01-18T13:52:33.979Z · LW(p) · GW(p)

He is attempting to demonstrate to Christians that they do not alieve that they should do anything God says. I think he is mistaken in this, but it's not inconsistent or trivially wrong, as some commenters here seem to think.

It seems trivially wrong to me, but maybe that's just from having some small familiarity with how intellectually serious Christians actually do things (and the non-intellectual hicks are unlikely to be knocked down by Penn's rhetoric either). It is absolutely standard in Christianity that any apparent divine revelation must be examined for its authenticity. The more momentous the supposed revelation the more closely it must be examined, to the extent that if some Joe Schmoe feels a divine urge to kill his son, there is, practically speaking, nothing that will validate it, and if he consults his local priest, the most important thing for the priest to do is talk him out of it. Abraham -- this is the story Penn is implicitly referring to -- was one of the greatest figures of the past, and the test that God visited upon him does not come to ordinary people. Joe Schmoe from nowhere might as well go to a venture capitalist, claim to be the next Bill Gates, and ask him to invest $100M in him. Not going to happen.

And Penn has the effrontery to say that anyone who weighs the evidence of an apparent revelation with the other sources of knowledge of God's will, as any good Christian should do, is an atheist. No, I stand by my characterisation of his remark.

Having just tracked down something closer to the source, I find it only confirms what I've been saying.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-20T13:33:25.392Z · LW(p) · GW(p)

I was going to point out that your comment misrepresents my point, but reading your link I see that I was misrepresenting Penn's point.

Whoops.

I could argue that my interpretation is better, and the quote should be judged on its own merits ... but I won't. You were right. I was wrong. I shall retract my comments on this topic forthwith.

comment by sketerpot · 2013-01-06T04:35:47.307Z · LW(p) · GW(p)

It is entirely possible for someone to believe in an evil god, and (quite reasonably) decline to do that god's alleged bidding.

Replies from: bbleeker, MugaSofer
comment by Sabiola (bbleeker) · 2013-01-07T15:11:17.478Z · LW(p) · GW(p)

Amen!

comment by MugaSofer · 2013-01-08T11:34:00.260Z · LW(p) · GW(p)

Most theists use the term "God" to refer to a good god. An evil god, by this definition, would not be God, and thus believing in it does not mean you are not an "atheist" (defined as someone who does not believe in God.)

(Whether this definition is more or less useful than one that doesn't mention morality is left as an exercise for the reader.)

comment by Jayson_Virissimo · 2013-01-08T03:18:23.649Z · LW(p) · GW(p)

If Alice told you to kill your child -- would you do it?

If your answer is no, in my booklet you're a person that doesn't believe in the existence of Alice.

?!

Replies from: MugaSofer
comment by MugaSofer · 2013-01-08T11:31:35.800Z · LW(p) · GW(p)

If you believe in an immensely powerful being that isn't moral, then you don't believe in "God". You believe in Cthulhu.

Replies from: Desrtopa, Richard_Kennaway, Jayson_Virissimo, BerryPick6
comment by Desrtopa · 2013-01-14T07:09:05.933Z · LW(p) · GW(p)

Plenty of cultures throughout history have conceived of gods that weren't particularly good. The gods of Mesopotamia were assholes by the standards of their own culture, not just ours; a hero was someone who could stand up to them.

If people who don't even believe in the religion have come to conceive of divinity in such a way that it "doesn't count" if the entity doesn't satisfy the whole omnipotent/omnibenevolent package, it's a sign of just how much modern monotheism has dominated the memetic landscape.

I managed to pass much of my childhood as a real outsider to religion, not just lacking belief, but having almost no awareness of what most people believed. My first exposure to religion and mythology was polytheistic, and I didn't recognize the distinction between "living" and "dead" religion (at the time, I thought they were all fringe beliefs preserved by minorities,) so I still recall the confusion I felt when I started to find that most people saw polytheism as fundamentally different and less plausible.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-14T09:52:53.153Z · LW(p) · GW(p)

I managed to pass much of my childhood as a real outsider to religion, not just lacking belief, but having almost no awareness of what most people believed. My first exposure to religion and mythology was polytheistic, and I didn't recognize the distinction between "living" and "dead" religion (at the time, I thought they were all fringe beliefs preserved by minorities,) so I still recall the confusion I felt when I started to find that most people saw polytheism as fundamentally different and less plausible.

... wow. That sounds like a story worth telling in full.

As a member of this culture, however, I note that when the G is capitalized it's referring to the Supreme Being that the top ... four, I think? ... religions claim exists, which is different enough from the usual squabbling polytheistic gods to be sometimes worth distinguishing, in addition to having rather more traction. Of course, Gods are a specific subtype of gods.

Replies from: Desrtopa
comment by Desrtopa · 2013-01-14T16:02:40.117Z · LW(p) · GW(p)

As a member of this culture, however, I note that when the G is capitalized it's referring to the Supreme Being that the top ... four, I think? ... religions claim exists

Well, top two at least. Christianity and Islam take the top spots, followed by Hinduism, which has a Supreme Existence, but no tenets of it being benevolent (at least as far as I've been able to find, maybe some Hindus believe differently, as it's not a very homogeneous religion.) Here's a table of top religions by adherents. It's not clear how to count down from there since some of the items are aggregations of what might not fairly count as individual religions, but after Islam, the next religion down claiming a benevolent supreme being has close to two orders of magnitude fewer adherents.

Not that this affects the point of what our cultural understanding of "God" means, but it does give a bit of a sense of how much that idea is an outlier in human culture.

... wow. That sounds like a story worth telling in full.

I don't know if this is as interesting as you're hoping, but my father is an atheist offshoot of a very religious family, and my mother is an agnostic/deist who was once a member of the Transcendental Meditation movement. My paternal grandmother is a Born Again Christian, and she decided I was old enough to proselytize to around the age of three. One of my earliest memories is having her ask me "Do you know anything about God?" I told her "I know about a god named Zeus." Since I didn't know how to read at that point, I can only guess how I picked up that information, but my best guess is that it was from my father, since he would occasionally tell me stories from mythology.

My family moved out of the Bible Belt very shortly after that, and I believe it had more than a little to do with getting away from my father's family, who my mother was always somewhat uncomfortable with. When I became a little bit older, I became very interested in mythology (my reading level exploded in first grade, and I remember passing long hours in class reading books of mythology when I was supposed to be doing something else.) At this time, I didn't know anyone who talked to me about an active belief in a living religion (except when my grandmother called.) My only exposure to living religion was basically killed before presenting it to me; we celebrated both Christmas and Hanukkah (my mother is from a Jewish family,) and I was told the stories behind the celebrations, as well as a couple of the major Jewish holidays we would sometimes visit my mother's family for, but they were presented as stories, not as something I ought to believe in. I was explicitly told that some people believed these things, but I interpreted this as "People like my grandma, who're kind of weird and make my parents a bit uncomfortable, but not the sort of people I know around here." I saw religious celebrations as part of a cultural heritage that people were taking the excuse to celebrate over, not a sign that the religions were still a force in the community.

Since my exposure to Christianity and Judaism as well as various dead religions came mostly from stories, rather than dogma (I learned about things like the story of Samson way before anyone tried to convince me that Jesus died for my sins,) my idea of religion was basically "huge collections of stories that people used to make up to explain the world and entertain each other. Some people still believe in them, but you'd have to be kind of weird to believe in those sorts of things today." If I saw someone wearing a cross or something, I assumed that they were using an observation of cultural heritage as an excuse to wear something pretty. I couldn't be surrounded by the sort of weird people who made my parents uncomfortable, or they'd be uncomfortable all the time, and people would be talking about God nonstop (I recognized that Hassidic Jews were religious, because they did make my mother uncomfortable, since she was raised in an extremely reformed family, and was brought up thinking of them as the sort of people who made it hard for Jews to assimilate.)

Since nobody was trying to mold my religious beliefs for me, I experimented myself with various ideas of the divine, but I never got any sort of sign that they were real, so I always dropped them. I had a sense that somewhere out there there were things that worked by different rules than we were used to, but eventually (before I started getting accustomed to the idea of live religion) I realized that I was probably suffering from motivated cognition, and just because I wanted to discover something like that didn't mean it existed.

The idea of a single, absolute, benevolent god was one that I experimented with, but it never had much traction with me because it was like one of those simple, elegant scientific hypotheses you hear fringe scientists expound now and then, which seem to perfectly explain everything they apply it to, only then you look around a bit more and find that there are a zillion experiments that completely fly in the face of its predictions, and the reason other scientists don't believe it is because it simply doesn't agree with the data. Polytheism was a bit less elegant, but a much better fit for the data. How does the supernatural produce a sort-of functional world like ours where some people are really happy and successful and some people are miserable, and there are huge accomplishments and major tragedies and love and war, and people can end up lucky or unlucky whether they're good or bad? By having a whole bunch of gods with goals at complete cross purposes, who sometimes get along and sometimes hate each other, who may cooperate, or may outright sabotage each other's efforts, but who've at least found that they're better off if they don't devolve into all-out war against each other. When I tried to get signs that these gods actually existed, I always turned up nada, so I figured they probably didn't really exist either, but I could see why most religious people, ever, would be polytheists.

It wasn't until I was eleven or so that I started to find that significantly more of my peers professed to actually believe in things like a big man in the sky who rules over everything than I would have imagined, and at first I thought it was just because they were even more immature than I thought, and their parents were just making up stories they could handle that they'd just grow out of later, like Santa Claus. It was when I was twelve that I finally realized that no, lots of people really do still believe in religion, and not just the people you can tell right away that they're kind of weird or crazy.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-14T17:13:51.909Z · LW(p) · GW(p)

I dunno, that's an interesting case study if nothing else.

Regarding monotheism ... it has emerged independently a few times, if nothing else:

  • In Greece and India, philosophers noticed that the gods didn't really seem worthy of worship / multiple omnipotent agents didn't make sense.

  • In both Egypt and Israel, individual cults grew into henotheistic religions which eventually blurred into monotheism - probably for at least partly political reasons, and Happy Death Spirals were probably also involved.

  • Interestingly, Christianity appears to have partly reverted to polytheism during the dark ages - and there are many syncretic minor religions/cults that are polytheistic while retaining a largely Christian framework.

It seems to me that polytheism is easier to grasp, and so tends to be popular by default, while monotheism is easier to defend, and so tends to emerge when it needs to be defended (whether from political opponents or your own knowledge about reality) - and the two most popular religions both started out monotheist (give or take a trinity.)

More on-topic, it's arguable whether using such a common term to mean such a specific concept is privileging the hypothesis, but it's pretty common these days, and most "converts" to atheism came from Christian backgrounds. [citation needed]

Replies from: Desrtopa, ygert
comment by Desrtopa · 2013-01-14T17:40:29.851Z · LW(p) · GW(p)

It seems to me that polytheism is easier to grasp, and so tends to be popular by default, while monotheism is easier to defend, and so tends to emerge when it needs to be defended (whether from political opponents or your own knowledge about reality)

I'm not so sure about that. We have much more exposure to attempts to defend monotheism from polytheism or atheism, so it may appear easier, because there's a glut of arguments coming from that direction. That could just be a historical accident though. Maybe we could have ended up quite easily in a world where the most popular religions were offshoots of Chinese syncretism, and we'd be much more familiar with arguments defending polytheism.

Monotheism has sprung up in polytheistic cultures, but in some cases we've also reinterpreted the work of old philosophers through monotheistic lenses. A lot of classical Greek philosophers framed their arguments in terms of "the gods," who're now interpreted as talking about "god," and the idea of omnipotence wasn't really in popular circulation. The closest I know of any Greek philosopher coming to monotheism was Aristotle with his Prime Mover, but it was Aquinas who reinterpreted this as being about God. To Aristotle, the Prime Mover was more like a basic energy principle behind everything. The gods came from it, but it wasn't a being so much as The Stuff that Makes Stuff Happen.

Replies from: CCC, MugaSofer
comment by CCC · 2013-01-14T18:36:15.327Z · LW(p) · GW(p)

The Science of the Discworld II offers support for the claim that monotheism produces better science than polytheism: when a monotheist wants to know why thunderstorms happen, he has no trouble with the idea that there's a single, consistent set of rules to be applied, if he can but find out what they are (while the polytheist is still trying to work out which gods are having an argument).

Replies from: Desrtopa
comment by Desrtopa · 2013-01-14T19:00:49.456Z · LW(p) · GW(p)

I never found that argument very compelling. The Classical Greeks did a whole lot better than the Christians at developing scientific knowledge, before the Renaissance. Both monotheistic and polytheistic traditions can foster either strong or weak scientific progress. Islam is a good example of a monotheistic tradition moving from high to low scientific productivity by the shifting of ideas within that tradition (see The Incoherence of the Philosophers.)

A polytheist can perfectly easily see the world as functioning according to a single, consistent set of rules, that all the various gods operate within, while a monotheist can just as well see the world as completely tied to the whims of an ontologically basic mental entity which is outside our conception of logic, such that the most basic reason we can ever explain anything with is "because that's what God wants" (which is the idea that essentially led to the atrophy of Islamic science.)

comment by MugaSofer · 2013-01-14T19:41:46.051Z · LW(p) · GW(p)

Well, I can hardly prove I'm not biased by overexposure to such arguments. Still, I think disproving monotheism requires greater, well, skill than disproving polytheism.

comment by ygert · 2013-01-14T17:40:14.034Z · LW(p) · GW(p)

most "converts" to atheism came from Christian backgrounds.

Remember that ~33% of the world is Christian (which is more than any other religion), and so it is not all that surprising that many atheists come from Christian backgrounds, simply because the probability that an arbitrary person came from a Christian background is quite high to start with.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-14T19:45:56.837Z · LW(p) · GW(p)

Well, yes. That would be why most converts to atheism came from Christian backgrounds. Along with greater concentrations of both in the western world and so on. Since most atheists (and most LWers) come from such a background, it seems worth having terminology relating to it, was my point.

comment by Richard_Kennaway · 2013-01-14T13:57:36.988Z · LW(p) · GW(p)

A Gnostic who believes that the God the Christians worship is an evil demiurge who made the flawed creation in which we are imprisoned and from which we may escape by regaining contact with the true Supreme Being, has a belief about the Christian God. A Gnostic and a Christian will agree that they are disagreeing with each other over the nature of that being.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-14T14:36:18.819Z · LW(p) · GW(p)

A Gnostic and a Christian will agree that they are disagreeing with each other over the nature of that being.

... they would? They disagree regarding the source of their beliefs, and various other details (eg the world is evil,) but I wouldn't have thought that the existence (as opposed to identity) of God was one of them.

This discussion shows signs of becoming a dispute over definitions, incidentally.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-01-14T14:54:35.990Z · LW(p) · GW(p)

This discussion shows signs of becoming a dispute over definitions, incidentally.

It's been that since the start. The Penn quote is just broken and deserves no further attention.

comment by Jayson_Virissimo · 2013-01-14T04:47:18.520Z · LW(p) · GW(p)

There are numerous civilizations that believed in immoral and amoral gods. Are you saying they were Cthulhu-worshipers?

Replies from: MugaSofer
comment by MugaSofer · 2013-01-14T10:08:09.985Z · LW(p) · GW(p)

Well ... yeah. "Immoral and amoral god"* sounds a lot like the definition I was using for "Cthulhu", in fact.

*(as opposed to capital-G-God)

Replies from: Nornagest
comment by Nornagest · 2013-01-14T10:28:31.396Z · LW(p) · GW(p)

That seems to cheapen Cthulhu, to be honest. The emotional impact of Lovecraft's stories, and of their descendants such as the Azathoth metaphor, relies not on an immoral or amoral Power (that's well-trod territory in many religions and not a few fantasies) but rather on Powers with motivations fundamentally incompatible with human minds: entities of godlike potency that can neither be mollified nor bargained with nor easily apprehended in native reasoning modes.

That doesn't describe the occupants of any historical mythology I can think of, at least not in its folk form. Humanity's gods are often profoundly unpleasant in a number of ways, but in terms of characterization they're almost always recognizably humanlike if not fully human.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-14T10:37:09.323Z · LW(p) · GW(p)

Hmm. You have a point; most gods are recognizably anthropomorphic. OTOH, many of them are "beyond good and evil" in addition to possessing vast power, which is what I was aiming for. If you can suggest a better term, I'll edit the comment.

Replies from: Nornagest
comment by Nornagest · 2013-01-15T01:19:30.755Z · LW(p) · GW(p)

That's an interesting question. The first thing that comes to mind is the Gnostic Demiurge (named as Yaldabaoth, Saklas, or Samael depending on who you're asking; there are other names), the explicitly unFriendly creator spirit who is nonetheless not an embodiment of evil as per Satan or Angra Mainyu.

I'm not sure if there are any good type specimens for "unFriendly god", though. It's not hard to find spirits of evil in polytheistic or henotheistic religions, but using one of those names would carry unwelcome implications, and while a decent working definition for "god" in a number of polytheistic pantheons might be "a bigger jerk than most everyone else", using someone like Thor as an example would imply polytheism before it implies unFriendliness.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-15T10:02:19.095Z · LW(p) · GW(p)

Worse still, it needs to be something immediately familiar to anyone reading the comment :(

comment by BerryPick6 · 2013-01-14T05:39:02.733Z · LW(p) · GW(p)

Well then, arguably, no-one actually believes in "God" at all.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-14T09:54:09.762Z · LW(p) · GW(p)

Um, no. The fact that people are mistaken about what such a God would instruct them to do does not change the fact that they believe it exists and gave them certain instructions.

comment by CCC · 2013-01-14T08:28:28.891Z · LW(p) · GW(p)

I think that this quote might benefit by tabooing the word "god".

Does it mean "an omniscient, omnipotent being"?

Does it mean "an omniscient, omnibenevolent being that would never ask you to do anything truly evil, but may on occasion ask you to do things that you don't see the sense in, and that in fact appear evil at first glance"?

Does it mean "a being worthy of respect and obedience, even in the most dire circumstances"?

comment by Scottbert · 2013-01-08T01:43:31.530Z · LW(p) · GW(p)

I asked a religious relative something along these lines.

Her response was that God would never ask people to do bad things, and if it seemed that He was that would just be someone else deceiving her.

I explained the atheist view on this sort of thing and then the conversation shifted directions before I thought to point out the example of God asking someone to sacrifice their child in the bible.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-01-08T02:56:28.978Z · LW(p) · GW(p)

An acquaintance of my family said something like that to me years ago. My response at the time was to ask her whether that meant more generally that if something I think is bad is presented to me as Divine instruction, I should reject it, since it is clearly something other than God at work. Her response was that no, there's a difference between something that actually is bad, and something I just think is bad. I asked her how I tell the difference; she suggested I ask God. I tapped out.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-08T11:36:06.745Z · LW(p) · GW(p)

But ... there is a difference between "something that actually is bad, and something I just think is bad." If Omega told me sacrificing a child was the best option by my preferences, I would do so (or at least accept that I should; I would probably experience a lot of akrasia.) Wouldn't you?

Replies from: TheOtherDave
comment by TheOtherDave · 2013-01-08T14:07:51.386Z · LW(p) · GW(p)

There absolutely is a difference, yes.

But if the answer to "how do I tell the difference?" is that I ask the entity who is making the request in the first place, we've now achieved full epistemic closure.

That is: if I don't know whether Omega tells the truth or not, and I don't know whether Omega has my best interests in mind or not, and Omega tells me to sacrifice a child, I probably wouldn't sacrifice the child. Would you?

More generally, there is a big difference between "what ought I do, if X is the case?" and "what decision will the decision procedure that I ought to implement make, given non-zero but uncompelling evidence that X is the case?" Thought experiments often ask the former, but the latter is more relevant to my actual life.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-08T14:19:50.047Z · LW(p) · GW(p)

But if the answer to "how do I tell the difference?" is that I ask the entity who is making the request in the first place, we've now achieved full epistemic closure.

I assume she expected you to ask God, y'know, now, not immediately after something claiming to be Him appeared and ordered you to kill 'em all. (Presumably asking Him "wait, are you sure killing children is a good idea?" would be met with a "yes". Or a thunderbolt.)

Replies from: TheOtherDave
comment by TheOtherDave · 2013-01-08T14:33:49.697Z · LW(p) · GW(p)

Sure, that's probably true. I don't see what difference it makes, though.

I mean, OK, suppose I wait an hour, or a day, or a week, or however long I decide to wait, and I ask again, and a Voice says "Yes, kill 'em all." Do I believe it's God now? Why?

Conversely, I wait however long I decide to wait and I ask again and a Voice says "No, don't kill 'em." Do I believe that's God? Why?

Do I ask a dozen times and take the most common answer?

None of those seem reasonable. It seems to me that on her account, what I ought to do is rely on my judgment of right and wrong rather than obeying the Voice, since the Voice is unreliable.

Which I completely agree with, but it didn't seem to be what she was saying more generally.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-08T15:08:55.120Z · LW(p) · GW(p)

I meant that if you get contradictory answers to your previous question, then you can safely assume that one of the Voices isn't God - and I guess you should go with the one with the best track record? [EDIT: based on your own judgement.] We don't seem to disagree on anything, anyway.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-01-08T15:46:08.492Z · LW(p) · GW(p)

Agreed with that much, certainly.

comment by MugaSofer · 2013-01-14T15:23:50.214Z · LW(p) · GW(p)

If your answer is yes, please reconsider.

Why?

If I encounter a being approximately equivalent to God - (almost) all-knowing, benevolent etc. - and it tells me to do something, why the hell should I refuse? If Omega told you something was the best choice according to your preferences - presumably as part of a larger game - why wouldn't you try and achieve that?

My best guess is that Mr. Jillette is confused regarding morality.

Replies from: MixedNuts
comment by MixedNuts · 2013-01-14T15:35:26.993Z · LW(p) · GW(p)

Because most people who are convinced by their pet moral principle to kill kids are utterly wrong.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-14T17:40:44.091Z · LW(p) · GW(p)

You're saying that if a Friendly superintellligence told you something was the right thing to do - however you define right - then you would trust your own judgement over theirs?

Replies from: None
comment by [deleted] · 2013-01-14T17:54:49.248Z · LW(p) · GW(p)

Acting the other way around would be trusting my judgement that the AI is friendly.

In any case, I would expect a superintelligence, friendly or not, to be able to convince me to kill my child, or do whatever.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-14T19:34:05.612Z · LW(p) · GW(p)

Acting the other way around would be trusting my judgement that the AI is friendly.

Yes. Yes it would. Do you consider it so inconceivable that it might be the best course of action to kill one child that it outweighs any possible evidence of Friendliness?

In any case, I would expect a superintelligence, friendly or not, to be able to convince me to kill my child, or do whatever.

And so, logically, could God. Apparently FAIs don't arbitrarily reprogram people. Who knew?

comment by cousin_it · 2013-01-14T09:12:06.157Z · LW(p) · GW(p)

Treating life as a hackable problem seems to be a recipe for despair.

-- Tloewald on HN, commenting on Aaron Swartz's suicide.

Replies from: simplicio
comment by simplicio · 2013-01-14T18:29:27.393Z · LW(p) · GW(p)

Any reason whatsoever to think that this particular characteristic contributed wholly or partly to Swartz' suicide, other than its being a known & salient fact about Swartz?

Replies from: cousin_it
comment by cousin_it · 2013-01-14T19:43:18.601Z · LW(p) · GW(p)

One of my favorite threads on HN is an analysis of David Foster Wallace's suicide in light of his famous "fish and water" speech. It's hard to summarize, but do read it, especially Cushman's comment. Then juxtapose it with the list at the end of this post from Aaron's "Raw Nerve" series, and you'll understand what I was getting at with the above quote. In short, treating life as a hackable problem seems to make people be deliberately harsh and "realistic" with themselves in order to cause change. That can make you unhappy (it happened to me), or if you're already predisposed to depression, that can make it much worse.

Replies from: simplicio
comment by simplicio · 2013-01-14T20:19:28.791Z · LW(p) · GW(p)

That makes more sense, thanks for the explanation. Still not entirely sure I buy the causal link, in either case.

comment by Endovior · 2013-01-11T21:39:26.320Z · LW(p) · GW(p)

Machines aren't capable of evil. Humans make them that way.

-Lucca, Chrono Trigger

Replies from: Qiaochu_Yuan, earthwormchuck163
comment by Qiaochu_Yuan · 2013-01-11T21:56:03.687Z · LW(p) · GW(p)

Eh. Would you say that "humans aren't capable of evil. Evolution makes them that way"?

Replies from: Eliezer_Yudkowsky, Kawoomba, army1987
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-01-12T16:42:56.138Z · LW(p) · GW(p)

I might, if I was a god talking to other gods. And if I was a gun talking to other guns, I'd tell them to shut up about humans and take responsibility for their own bullets.

Replies from: BerryPick6
comment by BerryPick6 · 2013-01-12T17:06:56.063Z · LW(p) · GW(p)

Would you say that "humans aren't capable of evil. Evolution makes them that way"?

-

I might, if I was a god talking to other gods.

I feel like a strange loop is now formed when humans say things like: "God isn't capable of evil. Our definition makes him that way."

comment by Kawoomba · 2013-01-11T22:10:20.054Z · LW(p) · GW(p)

"Evolution isn't capable of evil. Time made it that way."

Zugzwang: your turn!

Replies from: None, benelliott, army1987, KnaveOfAllTrades
comment by [deleted] · 2013-01-11T22:58:07.493Z · LW(p) · GW(p)

"DO NOT MESS WITH TIME"

comment by benelliott · 2013-01-12T09:36:18.605Z · LW(p) · GW(p)

"Time isn't capable of evil, it's not even an optimization process."

Replies from: army1987
comment by A1987dM (army1987) · 2013-01-12T19:15:45.951Z · LW(p) · GW(p)

[Time i]s not even an optimization process.

Yes it is. Its optimizand is entropy.

Epistemic status: interesting idea I think I've heard somewhere. Don't dare to ask me if I believe it myself or I will ask you to taboo words and you don't want to do that.

comment by A1987dM (army1987) · 2013-01-12T23:34:11.078Z · LW(p) · GW(p)

But evolution isn't sapient and... Or isn't it?

[Edited to remove potentially mind-killing example.]

comment by A1987dM (army1987) · 2013-01-12T23:24:47.751Z · LW(p) · GW(p)

But humans are sapient and have rebelled against evolution, whereas machines aren't and just do what humans tell them.

The problem is that this may change in the future, and a blog sponsored by the organization that's trying to prevent exactly that is probably not the right place to post such a quote.

Replies from: Endovior
comment by Endovior · 2013-01-13T00:52:40.898Z · LW(p) · GW(p)

My point in posting it was that UFAI isn't 'evil', it's badly programmed. If an AI proves itself unfriendly and does something bad, the fault lies with the programmer.

comment by earthwormchuck163 · 2013-01-11T22:03:38.611Z · LW(p) · GW(p)

That line always bugged me, even when I was a little kid. It seems obviously false (especially in the in-game context).

I don't understand why this is a rationality quote at all; am I missing something, or is it just because of the superficial similarity to some of EY's quotes about apathetic uFAIs?

Replies from: army1987
comment by A1987dM (army1987) · 2013-01-12T23:40:50.663Z · LW(p) · GW(p)

I'm not familiar with Chrono Trigger, but when I hear that sentiment in real life I take it to be a rebuttal to an argument against technology based on confusion between terminal and instrumental values. (Guns aren't intrinsically evil (i.e. there's no negative term in our utility function for how many guns exist in the world) even though they can be used to do evil, &c.)

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-12T23:58:19.272Z · LW(p) · GW(p)

In Chrono Trigger this line is about a robot.

Replies from: Desrtopa
comment by Desrtopa · 2013-01-13T00:41:15.571Z · LW(p) · GW(p)

Of course, there are other robots in the game about whom this is a dubious claim.

Replies from: Endovior
comment by Endovior · 2013-01-13T01:26:31.167Z · LW(p) · GW(p)

Well, that gets right to the heart of the Friendliness problem, now doesn't it? Mother Brain is the machine that can program, and she reprogrammed all the machines that 'do evil'. It is likely, then, that the first machine that Mother Brain reprogrammed was herself. If a machine is given the ability to reprogram itself, and uses that ability to make itself decide to do things that are 'evil', is the machine itself evil? Or does the fault lie with the programmer, for failing to take into account the possibility that the machine might change its utility function? It's easy to blame Mother Brain; she's a major antagonist in her timeline. It's less easy to think back to some nameless programmer behind the scenes, considering the problem of coding an intelligent machine, and deciding how much freedom to give it in making its own decisions.

In my view, Lucca is taking personal responsibility with that line. 'Machines aren't capable of evil', (they can't choose to do anything outside their programming). 'Humans make them that way', (so the programmer has the responsibility of ensuring their actions are moral). There are other interpretations, but I'd be wary of any view that shifts moral responsibility to the machine. If you, as a programmer, give up any of your moral responsibility to your program, then you're basically trying to absolve yourself of the consequences if anything goes wrong. "I gave my creation the capacity to choose. Is it my fault if it chose evil?" Yes, yes it is.

comment by HalMorris · 2013-01-18T17:31:14.271Z · LW(p) · GW(p)

"Fashion is Danger" - Flight of the Conchords

Think Intellectual Fashion (if you want to be serious about it).

http://www.youtube.com/watch?v=5i-uDjbP4F8

comment by [deleted] · 2013-01-02T08:03:04.851Z · LW(p) · GW(p)

Science isn't about why, it's about why not. You ask: why is so much of our science dangerous? I say: why not marry safe science if you love it so much? In fact, why not invent a special safety door that won't hit you in the butt on the way out, because you are fired! No, not you, test subject, you're fine. Yes, you! Box! Your stuff! Out the front door! Parking lot! Car! Goodbye!

-- Cave Johnson

Replies from: MugaSofer, None
comment by MugaSofer · 2013-01-02T18:52:11.367Z · LW(p) · GW(p)

Not really a rationality quote, is it...

comment by [deleted] · 2013-01-02T08:03:22.326Z · LW(p) · GW(p)

All right, I've been thinking. When life gives you lemons, don't make lemonade. Make life take the lemons back! Get mad! I don't want your d*mn lemons! What am I supposed to do with these?! Demand to see life's manager! Make life rue the day it thought it could give Cave Johnson lemons! Do you know who I am? I'm the man who's gonna burn your house down! With the lemons! I'm gonna get my engineers to invent a combustible lemon that burns your house down!

-- also Cave Johnson